digitalmars.D - Pay as you go is really going to make a difference

reply aberba <karabutaworld gmail.com> writes:
https://tonsky.me/blog/disenchantment/

Let's kill the bloat!!

Software disenchantment
=============================
I’ve been programming for 15 years now. Recently, our industry’s 
lack of care for efficiency, simplicity, and excellence started 
really getting to me, to the point of me getting depressed by my 
own career and IT in general.
.....
Only in software, it’s fine if a program runs at 1% or even 0.01% 
of the possible performance. Everybody just seems to be ok with 
it. People are often even proud about how inefficient it is, as 
in “why should we worry, computers are fast enough”:
...
Look around: our portable computers are thousands of times more 
powerful than the ones that brought man to the moon. Yet every 
other webpage struggles to maintain a smooth 60fps scroll on the 
latest top-of-the-line MacBook Pro. I can comfortably play games, 
watch 4K videos, but not scroll web pages? How is that ok?
...
Modern text editors have higher latency than 42-year-old Emacs. 
Text editors! What can be simpler? On each keystroke, all you 
have to do is update a tiny rectangular region and modern text 
editors can’t do that in 16ms. It’s a lot of time. A LOT. A 3D 
game can fill the whole screen with hundreds of thousands (!!!) 
of polygons in the same 16ms and also process input, recalculate 
the world and dynamically load/unload resources. How come?
Jan 12 2020
next sibling parent reply Arine <arine123445128843 gmail.com> writes:
On Sunday, 12 January 2020 at 20:29:59 UTC, aberba wrote:
 https://tonsky.me/blog/disenchantment/
Wow, this person is really uninformed. They know just enough about something to make a naive comment but not enough to understand *why* it is the way it is.
 An Android system with no apps takes up almost 6 GB. Just think 
 for a second about how obscenely HUGE that number is. What’s in 
 there, HD movies? I guess it’s basically code: kernel, drivers. 
 Some string and resources too, sure, but those can’t be big. 
 So, how many drivers do you need for a phone?
The onboard memory on an Android device is generally hardwired into the system. That means the system/vendor/boot/dtbo/vbmeta/etc. partitions are going to be a set size: even if my system image is 1.4 GB and my vendor image is 500 MB, it'll still take up 6 GB if that's what was allocated to those partitions. The device I'm working on currently has about 4 GB for the system partition and 1 GB for the vendor partition. Of the roughly 1.4 GB system image, the largest folders are for apps. The largest app is WebView, totaling 108 MB (in app/), so WebView alone can take half the space of the rest of the public apps. This is meant to be a minimal Android build, so I wouldn't doubt a lot of that space ends up being taken by pre-installed apps, plus extra space for future updates.

12K   addon.d
225M  app
27M   bin
4.0K  build.prop
104K  compatibility_matrix.xml
5.9M  etc
20K   fake-libs
16K   fake-libs64
69M   fonts
217M  framework
149M  lib
222M  lib64
21M   media
233M  priv-app
8.0K  product
27M   usr
0     vendor
13M   xbin
 Windows 95 was 30MB. Today we have web pages heavier than that! 
 Windows 10 is 4GB, which is 133 times as big. But is it 133 
 times as superior? I mean, functionally they are basically the 
 same. Yes, we have Cortana, but I doubt it takes 3970 MB. But 
 whatever Windows 10 is, is Android really 150% of that?
Not sure why he thinks things taking up more space means they have to be better somehow. Developers have limited time. I'm sure they could squeeze out 500+ MB or something, but how much developer time would that take? Is it worth spending the time to minimize it that much when people have 4 TB HDDs, and when they can download a 4 GB file in under two minutes? That's what he isn't getting: doing these things isn't free. It takes development time, development time that, thanks to the hardware we have today, can be spent elsewhere, where it is more valuable. I remember using Windows 95; it's garbage in comparison to Windows 10. Comparing OSes based solely on file size is just something someone incompetent would do.
 Modern text editors have higher latency than 42-year-old Emacs. 
 Text editors! What can be simpler? On each keystroke, all you 
 have to do is update a tiny rectangular region and modern text 
 editors can’t do that in 16ms. It’s a lot of time. A LOT. A 3D 
 game can fill the whole screen with hundreds of thousands (!!!) 
 of polygons in the same 16ms and also process input, 
 recalculate the world and dynamically load/unload resources. 
 How come?
I'll just assume he's talking about Electron-based editors here. They are built on top of a web browser, so yeah, they are going to be a bit more resource hungry. But take VS Code as an example: it is extremely easy to customize. There aren't dozens of forks of it that modify little things to get certain features. There are more quality extensions for VS Code that integrate flawlessly, and that aren't hacks, than there are for Emacs, even though VS Code hasn't existed for nearly as long. There's a trade-off for the ease of development and customizability. The latency also isn't that bad; it is pretty bad in Atom, but that just shows the difference between the two.

Then he compares that to games and GPU rendering, just ugh. Maybe he didn't know it runs in a web browser, but he also goes on a rant about how web browsers don't render fast enough for him. Web browsers need to be secure, and achieving performance alongside that is difficult. He seems to be in the mindset that security doesn't matter, or at the very least he probably doesn't think about it, as seems to happen more often than it should. There's a reason there are only so many web browsers. Hell, even Microsoft gave up and uses Chromium's backend. Think about that: Microsoft, with a B.

Then the whole "oh, we went to the moon with these slow computers". Yeah, going to the moon is pretty easy in comparison to some computer problems. As someone put it when a politician tried to make the same argument about going to the moon, it'd be akin to walking on the surface of the sun. I could go on; this article is way too long, and it's filled with misconceptions, terribly awful comparisons, and so much more.
Jan 12 2020
parent reply user5678 <user5678 9012.sd> writes:
On Sunday, 12 January 2020 at 22:59:22 UTC, Arine wrote:
 On Sunday, 12 January 2020 at 20:29:59 UTC, aberba wrote:
 https://tonsky.me/blog/disenchantment/
 [...]
Nah, the author of the article is right. The web and its techs (JS) are completely retarded. I've myself observed, on top of the usual slowness, a regression on several major sites during the latest months.

About the part on keyboard latency: this is based on another blog post I read a few years ago (https://hexus.net/tech/news/peripherals/113648-modern-computer-complexity-heavy-impact-keyboard-latency/) and other stuff too.

The problem might be that you're so much into web services that you don't even realize anymore how much faster software made in the classic way was. But that software failed to adapt to the web, so developers have started to look elsewhere for a better interaction with the web. At some point in the late 2000s, compiled languages lost. The article posted by the OP is fundamentally about that, if you read between the lines.
Jan 13 2020
next sibling parent reply Arine <arine123445128843 gmail.com> writes:
On Monday, 13 January 2020 at 11:54:19 UTC, user5678 wrote:
 About the part on keyboard latency. This is based on another 
 blog post I've read a few years ago. 
 (https://hexus.net/tech/news/peripherals/113648-modern-computer-complexity-heavy-impact-keyboard-latency/) and other stuff too.
He wasn't talking about general latency such as that. Otherwise games would have the same problem, and he was specifically talking about editors.
 The article posted by the OP is fundamentally about that, if 
 you read between the lines.
You're just taking your own meaning from the article. If you have to "read between the lines", you aren't reading what the author actually wrote.
Jan 13 2020
parent reply Basile B. <b2.temp gmx.com> writes:
On Monday, 13 January 2020 at 17:20:05 UTC, Arine wrote:
 On Monday, 13 January 2020 at 11:54:19 UTC, user5678 wrote:
 About the part on keyboard latency. This is based on another 
 blog post I've read a few years ago. 
 (https://hexus.net/tech/news/peripherals/113648-modern-computer-complexity-heavy-impact-keyboard-latency/) and other stuff too.
He wasn't talking about general latency such as that. Otherwise games would have the same problem, and he was specifically talking about editors.
 The article posted by the OP is fundamentally about that, if 
 you read between the lines.
You're just taking your own meaning from the article. If you have to "read between the lines", you aren't reading what the author actually wrote.
Everything is slow because OSes like Windows deprecate their native UI and prefer bloated apps that have to pass through a validation process. But their OS, supposedly made with their "fantastic tech", has to be patched every month, using a remarkably slow process that will monopolize your network. At the same time everybody thinks that VS Code is great. There's a name for that: DIGITAL WASHING
Jan 14 2020
parent aria chris <ariachris56 gmail.com> writes:
I agree with the first comment.
Jan 16 2020
prev sibling next sibling parent reply Arine <arine123445128843 gmail.com> writes:
On Monday, 13 January 2020 at 11:54:19 UTC, user5678 wrote:
 (https://hexus.net/tech/news/peripherals/113648-modern-computer-complexity-heavy-impact-keyboard-latency/) and other stuff too.
He's comparing two different technologies. If you want low input lag, get a TN-panel gaming monitor with a high refresh rate. The thing is, those cost $$$. All the while, most of the devices he's testing are laptops; I'd love to see a CRT display in a laptop. Read between the lines: the author doesn't know what they're doing.
Jan 13 2020
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Jan 13, 2020 at 05:40:08PM +0000, Arine via Digitalmars-d wrote:
 On Monday, 13 January 2020 at 11:54:19 UTC, user5678 wrote:
 (https://hexus.net/tech/news/peripherals/113648-modern-computer-complexity-heavy-impact-keyboard-latency/)
 and other stuff too.
He's comparing two different technologies. If you want low input lag, get a TN panel gaming monitor with a high refresh rate. The thing is those cost $$$. All the while most of the devices he's testing are laptops. I'd love to a see a CRT display in a laptop. Read between the lines, that the author doesn't know what their doing.
You're totally missing the point. The point is to take a step back at the current state of things and evaluate just how much it (doesn't) make sense:

1) Back in the 70's, we had 16 kHz CPUs and only up to 64KB of RAM.

2) Today we're in 2020, with multi-core CPUs running at speeds measured in GHz, and RAM measured in GBs.

3) A word processor in the 70's runs horribly slowly with horrible lag between input keystrokes.

4) Technologically speaking, today we have enough processing power to run AAA games that process hundreds of thousands of objects per frame running at 60 fps. We're talking about things like *real-time raytracing* here, something completely unimaginable in the 70's.

5) Yet a browser app of today, built with said modern technology with modern processing power, still runs just as horribly slowly as a word processor from the 70's running on ancient ultra-slow hardware, with just as horrible a lag between input keystrokes.

Something isn't adding up. Yes, all of this can be explained, and if you lose sight of the forest for the trees, every step in the history of how this came about can be logically explained. But when you step back and look at the forest as a whole, the whole situation looks completely ridiculous. The necessary tech is all there to make things FAR more efficient. The development methodologies are all there, and we have orders of magnitude more manpower than in the 70's. What a word processor has to compute is peanuts compared to an AAA game with real-time raytracing running at 60 fps.

Yet here we are, stuck with a completely insane web design philosophy building horribly slow and unreliable apps that are barely a step above an ancient word processor from the 70's. The browser king wears no clothes, yet its proponents see the invisible clothes.

T

-- 
An elephant: A mouse built to government specifications. -- Robert Heinlein
Jan 13 2020
parent reply Arine <arine123445128843 gmail.com> writes:
On Monday, 13 January 2020 at 18:22:19 UTC, H. S. Teoh wrote:
 On Mon, Jan 13, 2020 at 05:40:08PM +0000, Arine via 
 Digitalmars-d wrote:
 On Monday, 13 January 2020 at 11:54:19 UTC, user5678 wrote:
 (https://hexus.net/tech/news/peripherals/113648-modern-computer-complexity-heavy-impact-keyboard-latency/) and other stuff too.
He's comparing two different technologies. If you want low input lag, get a TN panel gaming monitor with a high refresh rate. The thing is those cost $$$. All the while most of the devices he's testing are laptops. I'd love to a see a CRT display in a laptop. Read between the lines, that the author doesn't know what their doing.
You're totally missing the point. The point is to take a step back at the current state of things and evaluate just how much it (doesn't) make sense:
It does make sense. Software back then wasn't complicated; it didn't have to be. Developer time has remained constant. Software companies failed because they were trying to shoot for perfection. You can't create a perfect piece of software. You have to take the limited developer time you have and allocate it effectively, not spend it reducing file size because some UX designer who doesn't know what he's doing or talking about rants about it on his blog.
 4) Technologically speaking, today we have enough processing 
 power to run AAA games that process hundreds of thousands of 
 objects per frame running at 60 fps.  We're talking about 
 things like *real-time raytracing* here, something completely 
 unimaginable in the 70's.

 Yes, all of this can be explained, and if you lose sight of the 
 forest for the trees, every step in the history of how this 
 came about can be logically explained. But when you step back 
 and look at the forest as a whole, the whole situation looks 
 completely ridiculous.  The necessary tech is all there to make 
 things FAR more efficient. The development methodologies are 
 all there, and we have orders of magnitude more manpower than 
 in the 70's.  What a word processor has to compute is peanuts 
 compared to an AAA game with real-time raytracing running at 60 
 fps.
Raytracing is just a marketing buzzword; it's existed for decades in games, and it's been used in real time for almost as long. That's the problem when you have people like you that don't understand what they are talking about, throwing out things like "oh, we can do raytracing in real time", then comparing that as if it means something. GPUs have been doing operations like that for a long time: doing lots of simple tasks, thousands at a time, in parallel. But there's still a reason you can't run an operating system on a GPU. It's fundamentally different.
 5) Yet a browser app of today, built with said modern 
 technology with modern processing power, still runs just as 
 horribly slowly as a word processor from the 70's running on 
 ancient ultra-slow hardware, with just as horrible a lag 
 between input keystrokes.

 Yet here we are, stuck with a completely insane web design 
 philosophy building horribly slow and unreliable apps that are 
 barely a step above an ancient word processor from the 70's.
I use VS Code and Discord (both made using Electron, btw) all the time; there's no lag. It's probably more responsive than most bloated IDEs that weren't built using Electron. Bad programs are going to be bad.
Jan 16 2020
parent JN <666total wp.pl> writes:
On Thursday, 16 January 2020 at 19:38:21 UTC, Arine wrote:
 Raytracing is just a marketing buzzword, it's exist for decades 
 in games and it's been used in realtime for almost as long. 
 That's the problem when you have people like you that don't 
 understand what they are talking about, throwing things like. 
 Oh we can do "raytracing" in real time then comparing that as 
 if it means something because we can do that. GPUs have been 
 doing operations like that for a long time, doing a lot simple 
 tasks thousands at a time in parallel. But there's still a 
 reason you can't run an operating system using a GPU. It's 
 fundamentally difference.
That's not true. While I believe the current 'raytracing' trend is mostly hype built by Nvidia to sell their RTX GPUs, real-time raytracing wasn't viable in the past. It only worked for simple scenes with a few cubes and spheres, and it was very low-resolution/noisy. Now we have the performance to do it, and we can also use machine learning to denoise the image in a much better way than the previous algorithms did.
Jan 16 2020
prev sibling parent user5678 <user5678 9012.sd> writes:
On Monday, 13 January 2020 at 11:54:19 UTC, user5678 wrote:
 On Sunday, 12 January 2020 at 22:59:22 UTC, Arine wrote:
 [...]
Nah, the author of the article is right. The web and its techs (js) are completely retarded. I've myself observed on top of the usual slowness, a regression, on several major sites, during the latest months. [...]
So instead of having your own system interacting with the web, the situation is that the web tries to interact with your system, which is a completely crazy situation.
Jan 13 2020
prev sibling next sibling parent Ron Tarrant <rontarrant gmail.com> writes:
On Sunday, 12 January 2020 at 20:29:59 UTC, aberba wrote:
 https://tonsky.me/blog/disenchantment/

 Let's kill the bloat!!

 Software disenchantment
 =============================
 I’ve been programming for 15 years now. Recently, our 
 industry’s lack of care for efficiency, simplicity, and 
 excellence started really getting to me, to the point of me 
 getting depressed by my own career and IT in general.
 ...
 “why should we worry, computers are fast enough”:
 ...
 Look around: our portable computers are thousands of times more 
 powerful than the ones that brought man to the moon.
 ...
 Modern text editors have higher latency than 42-year-old Emacs.
This has bugged me for a while, too. Behind you all the way, Aberba.
Jan 13 2020
prev sibling parent reply Martin Tschierschke <mt smartdolphin.de> writes:
On Sunday, 12 January 2020 at 20:29:59 UTC, aberba wrote:
 https://tonsky.me/blog/disenchantment/

 Let's kill the bloat!!
And there is another effect of this ever-growing bloat. I have two old iPads, an iPad 1 and an iPad 2. Both are in perfect hardware condition, but you cannot use them for much anymore: because of their small RAM (256 and 512 MB), the available browsers are not able to render most 'modern' webpages. So the ever-increasing need of memory for the simplest tasks is killing old hardware.

The last computer whose software was optimized to the ultimate was probably the Commodore C64. After that, the availability of more and more resources (CPU speed and RAM) started building an ever-increasing number of additional layers between input and output. Just look at a simple, statically linked "hello world" DMD compilation result: how many C64-era floppy discs (180 KByte) would you need to store it? (With a binary on the order of a megabyte, roughly half a dozen.)

I think this process will not end as long as new storage and bandwidth keep getting cheaper. But maybe I am wrong and the next generation of software engineers will bring the gains of Moore's Law to us. (And the resources needed for computing worldwide will stop increasing.)
Jan 16 2020
next sibling parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Thursday, 16 January 2020 at 14:03:15 UTC, Martin Tschierschke 
wrote:
 But maybe I am wrong and the next generation of software 
 engineers will bring the gain of Moors Law to us. (And the 
 resources needed for computing world wide will stop increasing.)
It is especially the current young generation of coders that gets socialized with HTML+JS "GUIs" (yup, scare quotes!) and "apps" that are just services on somebody else's server. There's just so many incentives pointing the wrong way:

- Cloud providers want to lock their customers in (Google, Amazon, MS)

- Software developers see how they can squeeze juicy subscription fees out of their customers when they don't sell installable software, but run it as a service

- Commercial users see shiny presentations that tell them that not running their software in-house is so much cheaper (and it's likely true until they lose access to their data or a critical 3rd party service falls over)

I only see a single chance to get out of this particular hole: completely new devices that are more desirable than PCs, tablets or smartphones and for which the web as it exists today makes absolutely no sense. I see one chance of this happening if everyday augmented reality matures in about 5 to 10 years - and that's still a pretty big if.
Jan 16 2020
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jan 16, 2020 at 03:08:47PM +0000, Gregor Mückl via Digitalmars-d wrote:
[...]
 There's just so many incentives pointing the wrong way:
 
 - Cloud providers want to lock their customers in (Google, Amazon, MS)
Yep, that's why I'm skeptical of this whole cloud hype. It's just like the days of Java all over again: OO, which does have its uses, being sold far beyond its scope of usefulness, and Java, which actually isn't a *bad* language in certain respects, being sold as the panacea that will solve all your programming problems and relieve you of the necessity of thought. Only today, substitute OO with "cloud" and Java with "webapps".

Cloud vendors want to lock you in, when the correct strategy is multiple redundant systems (cf. Walter's rants about Boeing design). But you can't have multiple redundant systems -- not in Walter's sense of multiple *independent* systems that aren't running upon the same principles that might fail *at the same time* -- if cloud vendors refuse to let you interoperate with their competitors' systems, or only allow arbitrarily restricted interoperation, such that your redundancy is essentially crippled and you might as well not bother.
 - Software developers see how they can squeeze juicy subscription fees
 out of their customers when they don't sell installable software, but
 run it as a service
Yeah, I have a lot of ideological problems with that. The first and foremost being that your ability to use potentially mission-critical functionality is now dependent on the state of some remote server farm that's completely beyond your control. Last year's AWS outage is just the tip of the iceberg of what might happen if everyone becomes dependent on the web (they already are) and the web becomes fragile because of reliance on a small number of points of failure (already happened: cloud providers), and something happens to one of these points of failure.

(Well, you object, cloud providers have multiple redundant distributed servers, so they're not vulnerable to single-point-of-failure problems. Wrong, their *individual* servers can failover transparently, but sometimes the *entire service* goes down for whatever reason -- faulty software that all servers are running copies of, for instance. Or targeted cybercriminal attacks on that service as a whole. Or the company goes bust suddenly, who knows. Centralization of critical services -- esp. on a 3rd party whose interests may not coincide with yours -- is not a wise move.)
 - Commercial users see shiny presentations that tell them that not
 running their software in-house is so much cheaper (and it's likely
 true until they lose access to their data or a critical 3rd party
 service falls over)
[...]

Yeah, this is another major ideological problem I have with this whole cloud hype. Your data doesn't belong to you anymore; it's sitting on the hard drives of some 3rd party whose interests do not necessarily coincide with yours. The accessibility of your mission-critical data is dependent upon the availability of some remote service that isn't under your control. You're in trouble if the service goes down, or becomes unavailable for whatever reason during the critical times when you most need your data. You're in trouble if the 3rd party gets hacked and now your supposedly private data is out in the open. Or there's a serious security flaw that you were never aware of, that has left your data that you thought was securely stored open to the whole world. And worst of all, your data is in the hands of a 3rd party who has the power to do what they want with it, and their interests may not coincide with yours.

How anyone could be comfortable with that idea is beyond me.

T

-- 
Why have vacation when you can work?? -- EC
Jan 16 2020
next sibling parent Rumbu <rumbu rumbu.ro> writes:
On Thursday, 16 January 2020 at 17:59:53 UTC, H. S. Teoh wrote:
 On Thu, Jan 16, 2020 at 03:08:47PM +0000, Gregor Mückl via 
 Digitalmars-d wrote: [...][
 There's just so many incentives pointing the wrong way:
 
 - Cloud providers want to lock their customers in (Google, 
 Amazon, MS)
 [...]
It depends. The business world is more dynamic today. As a startup company you have access to advanced technologies that you could never have dreamed of, in no time at all. Last year I started a new company. In 30 minutes I had a fully fledged e-mail system, a communication platform, a secure environment and a nice pack of development software. I uploaded my databases, opened Visual Studio, loaded the project, changed some settings in the configuration file, hit Build, hit the Publish button. Zbang, my web application is up and running in the wild. As a service, I don't even need a virtual machine for this. The company doesn't even have a physical office; we are three partners, and all we have are three laptops working from home, plus 300 EUR/month in licenses and services.

Now imagine the same scenario years ago: buy some servers, buy storage, buy a firewall, configure, install. Set up e-mail, set up the network, have a server room, put in some cables. 30k EUR at least.

More than that, since I am working in the payroll industry, clients ask for security certifications. We cannot afford to buy the systems and services needed to meet their criteria. Instead I gave them the security certifications of the cloud provider, which are state of the art. I have access to security technologies like data-leak prevention, audit and logging without any supplementary investment.
Jan 17 2020
prev sibling parent Chris <wendlec tcd.ie> writes:
On Thursday, 16 January 2020 at 17:59:53 UTC, H. S. Teoh wrote:
 [...]
All valid points, but what do you suggest as an alternative? Create your own service from scratch? Can you guarantee your customers that your own software is secure and will not be hacked easily? All their personal data and financial transactions? The whole thing is just too big to roll your own. If you buy a car, you're "locked in", but does that mean you should build your own car? The market is about division of labor, else there wouldn't be progress. Gone are the romantic days of yore when people were farmers, thatchers and fishermen at the same time.
Jan 17 2020
prev sibling next sibling parent reply Chris <wendlec tcd.ie> writes:
On Thursday, 16 January 2020 at 14:03:15 UTC, Martin Tschierschke 
wrote:
 On Sunday, 12 January 2020 at 20:29:59 UTC, aberba wrote:
 https://tonsky.me/blog/disenchantment/

 Let's kill the bloat!!
 [...]
This was already known in the '80s. It was called the hardware-software spiral or something like that. It's partly natural and partly by design, to sell hardware and software. The more powerful the hardware, the more the software does (think of image and video editing); the more demanding the software, the slower the existing hardware feels, so you need to buy a new, more powerful machine; rinse and repeat...

As regards your iPads, Apple have always been mean with RAM and storage (unless you spend like $2000+). That's also by design: if Apple gives you 256/512 MB of RAM, they know exactly what they are doing, because they know that your iPad will soon be useless given the way the internet is evolving. They are aware of the hardware-software spiral.
Jan 17 2020
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 17 January 2020 at 12:19:18 UTC, Chris wrote:
 As regards your iPads, Apple have always been mean with RAM and 
 storage (unless you spend like $2000+). That's also by design, 
 if Apple gives you 256/512MB RAM, they know exactly what they 
 are doing, because they know that your iPad will soon be 
 useless given the way the internet is evolving. They are aware 
 of the hardware-software spiral.
Sadly, the iPad 1 is quite capable, but it is stuck on iOS 5 and thus the browser will fail on many sites. In my experience, RAM and CPU are not the main issue. I actually like the ergonomic shape of the iPad 1 more than later models, but planned obsolescence is what you have to live with these days... I still use it for Wikipedia and PDFs, though :-)
Jan 17 2020
parent Chris <wendlec tcd.ie> writes:
On Friday, 17 January 2020 at 13:36:52 UTC, Ola Fosheim Grøstad 
wrote:
 Sadly, the iPad1 is quite capable, but is stuck on iOS5 and 
 thus the browser will fail on many sites. In my experience RAM 
 and CPU is not the main issue. I actually like the ergonomic 
 shape of the iPad1 more than later models, but planned 
 obsoletion is what you have to live with these days...  I still 
 use it for wikipedia and pdfs, though :-)
That's, of course, another trick to render devices useless: make them un-updatable.
Jan 17 2020
prev sibling parent reply aberba <karabutaworld gmail.com> writes:
On Thursday, 16 January 2020 at 14:03:15 UTC, Martin Tschierschke 
wrote:
 On Sunday, 12 January 2020 at 20:29:59 UTC, aberba wrote:
 https://tonsky.me/blog/disenchantment/

 Let's kill the bloat!!
 Just look at a simple - statically linked - "hello world" DMD 
 compilation result,
 how many C64 times floppy discs (180KByte) you would need to 
 store?
That's the issue I wanted to address with the thread. Why does a simple 5 lines of code compile to such a large binary? That's a typical example of bloat.
Jan 22 2020
parent reply IGotD- <nise nise.com> writes:
On Wednesday, 22 January 2020 at 10:38:54 UTC, aberba wrote:
 Just look at a simple - statically linked - "hello world" DMD 
 compilation result,
 how many C64 times floppy discs (180KByte) you would need to 
 store?
That's the issue I wanted to address with the thread. Why is a simple 5 lines of code compiling to such a large binary. That's a typical example of bloat.
You have the same problem with C++ if you statically link the C and C++ libraries; the binary easily grows to over 2 MB. Many don't care, because the default in C++ is to link those libraries dynamically. The problem is that D depends on the C library, which can be statically linked, and then the binary also becomes bigger. Even if you use just a very small portion of it, there is a tendency to include everything anyway. Many languages suffer from the C lib dependency, which is kind of suboptimal. It is time to deprecate that dependency.
Jan 22 2020
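A rough sketch of the size gap under discussion; the invocations and ballpark figures below are indicative only (file name hypothetical; actual sizes vary by compiler version and platform):

// hello.d -- built two ways to compare binary size:
//
//   dmd hello.d
//       druntime and Phobos are statically linked in; the binary
//       lands on the order of a megabyte
//
//   dmd -betterC hello.d
//       no druntime at all, links only against libc; the binary
//       shrinks to the tens-of-kilobytes range
import core.stdc.stdio;

extern (C) int main()
{
    printf("hello world\n");
    return 0;
}

The same five lines land at wildly different sizes purely depending on what gets linked in, which is the point about the runtime-library dependency.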
next sibling parent reply FogD <hosszu outlook.com> writes:
On Wednesday, 22 January 2020 at 10:51:37 UTC, IGotD- wrote:
 Many languages suffer from the C lib dependency which is kind 
 of suboptimal. It is time to depreciate that dependency.
A recent comparison of languages from this perspective. https://drewdevault.com/2020/01/04/Slow.html
Jan 22 2020
next sibling parent reply IGotD- <nise nise.com> writes:
On Wednesday, 22 January 2020 at 23:51:23 UTC, FogD wrote:
 On Wednesday, 22 January 2020 at 10:51:37 UTC, IGotD- wrote:
 Many languages suffer from the C lib dependency which is kind 
 of suboptimal. It is time to depreciate that dependency.
A recent comparison of languages from this perspective. https://drewdevault.com/2020/01/04/Slow.html
It would be interesting to know what that huge number of system calls really do, especially when it comes to D which has around 150.
Jan 22 2020
parent reply Johan Engelen <j j.nl> writes:
On Thursday, 23 January 2020 at 00:20:00 UTC, IGotD- wrote:
 On Wednesday, 22 January 2020 at 23:51:23 UTC, FogD wrote:
 A recent comparison of languages from this perspective.

 https://drewdevault.com/2020/01/04/Slow.html
It would be interesting to know what that huge number of system calls really do, especially when it comes to D which has around 150.
Indeed. Also to figure out why LDC's binary calls 31 more than DMD's binary. Much appreciated if someone could repeat the test and post a list of all syscalls being made. -Johan
Jan 24 2020
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 1/24/20 4:52 AM, Johan Engelen wrote:
 On Thursday, 23 January 2020 at 00:20:00 UTC, IGotD- wrote:
 On Wednesday, 22 January 2020 at 23:51:23 UTC, FogD wrote:
 A recent comparison of languages from this perspective.

 https://drewdevault.com/2020/01/04/Slow.html
It would be interesting to know what that huge number of system calls really do, especially when it comes to D which has around 150.
Indeed. Also to figure out why LDC's binary calls 31 more than DMD's binary. Much appreciated if someone could repeat the test and post a list of all syscalls being made.
Most likely it's the runtime startup. Obviously sbrk quite a bit, but any runtime initialization (thread startup, mutex initialization, etc) are all going to go in there. Think of what the GC has to do!

A good test would be to do a betterC version with printf and see what the difference is (technically it should be the same as the C version).

-Steve
Jan 24 2020
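A minimal sketch of the betterC experiment suggested above, with both variants kept in one hypothetical file behind a version switch (flags are indicative; exact invocations vary by compiler):

// hello.d -- compare druntime startup against a bare start:
//   dmd hello.d                          full druntime init before _Dmain
//   dmd -betterC -version=bare hello.d   no runtime init at all
// then run `strace ./hello` on each binary and diff the call lists
import core.stdc.stdio;

version (bare)
{
    // betterC: a C-style entry point, no runtime initialization
    extern (C) int main()
    {
        printf("hello world\n");
        return 0;
    }
}
else
{
    // normal build: druntime sets up the GC, runs module constructors
    // and registers the main thread before this runs -- that setup is
    // where the extra syscalls should come from
    void main()
    {
        printf("hello world\n");
    }
}

If the reasoning above is right, the bare build's trace should look close to the C version's, while the full build adds the clock_getres, sigaction and /proc/self/maps traffic seen later in the thread.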
next sibling parent IGotD- <nise nise.com> writes:
On Friday, 24 January 2020 at 13:06:33 UTC, Steven Schveighoffer 
wrote:
 Most likely it's the runtime startup. Obviously sbrk quite a 
 bit, but any runtime initialization (thread startup, mutex 
 initialization, etc) are all going to go in there. Think of 
 what the GC has to do!

 A good test would be to do a betterC version with printf and 
 see what the difference is (technically it should be the same 
 as the C version).

 -Steve
That makes perfect sense; I didn't think about sbrk, which needs to bump the heap quite a bit during startup.
Jan 24 2020
prev sibling next sibling parent Petar Kirov [ZombineDev] <petar.p.kirov gmail.com> writes:
On Friday, 24 January 2020 at 13:06:33 UTC, Steven Schveighoffer 
wrote:
 A good test would be to do a betterC version with printf and 
 see what the difference is (technically it should be the same 
 as the C version).
Indeed. I wrote an email to the author about that yesterday, though I haven't heard from him since.
Jan 24 2020
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jan 24, 2020 at 08:06:33AM -0500, Steven Schveighoffer via
Digitalmars-d wrote:
 On 1/24/20 4:52 AM, Johan Engelen wrote:
[...]
 Indeed. Also to figure out why LDC's binary calls 31 more than DMD's
 binary.
[...]
 Most likely it's the runtime startup. Obviously sbrk quite a bit, but
 any runtime initialization (thread startup, mutex initialization, etc)
 are all going to go in there. Think of what the GC has to do!
[...]

It makes me wonder how much we can make all this startup stuff pay-as-you-go. I mean, IIRC, isn't the GC lazily initialized now? I vaguely remember some PR along that direction. Or was it the pool allocations?

I suppose thread startup would be hard to elide, unless there was a way to initialize the thread stuff only on demand. Ditto for mutex inits. But it might not be worth the effort for such minimal benefits in such a marginal test case.

T

-- 
Those who've learned LaTeX swear by it. Those who are learning LaTeX swear at it. -- Pete Bleackley
Jan 24 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 1/24/20 1:24 PM, H. S. Teoh wrote:
 On Fri, Jan 24, 2020 at 08:06:33AM -0500, Steven Schveighoffer via
Digitalmars-d wrote:
 On 1/24/20 4:52 AM, Johan Engelen wrote:
[...]
 Indeed. Also to figure out why LDC's binary calls 31 more than DMD's
 binary.
[...]
 Most likely it's the runtime startup. Obviously sbrk quite a bit, but
 any runtime initialization (thread startup, mutex initialization, etc)
 are all going to go in there. Think of what the GC has to do!
[...] It makes me wonder how much we can make all this startup stuff pay-as-you-go. I mean, IIRC, isn't the GC lazily initialized now? I vaguely remember some PR along that direction. Or was it the pool allocations?
Yes, it is lazily initialized. It's kind of a cool mechanism too -- the "default" GC is a class that, when used in a way where a "real" GC is needed (e.g. allocating some memory), figures out which one to create, creates it, and then replaces itself as the global handler with that new one. But the GC is going to be initialized in a writeln call, I think.

There are a few other things that are going to cause a lot of system calls too -- the static constructors and the cycle detection. At least the cycle detection we could rid ourselves of if we could make a post-compile step that runs the cycle detection algorithm and sets up the final ordering in the binary.
 I suppose thread startup would be hard to elide, unless there was a way
 to initialize the thread stuff only on demand. Ditto for mutex inits.
 But it might not be worth the effort for such minimal benefits in such a
 marginal test case.
I'm not sure why we need to exactly minimize the system calls; we should just be able to explain them. 150 calls isn't horrific, and trying to reduce an "artificial" metric like that really shouldn't be the goal. I know this is exactly what the author is complaining about, but there is a world of difference between a 50MB web site that can't scroll and 150 system calls to do runtime startup + print hello world.

However, there could easily be an obvious candidate for removal if something looks like it's being called way too often. So explanation is still a good goal.

-Steve
Jan 24 2020
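A minimal sketch of the self-replacing mechanism described above; the names (Allocator, LazyGC, RealGC) are hypothetical stand-ins, not druntime's actual types:

// The placeholder sits in the global slot doing nothing. On first
// real use it builds the actual implementation and swaps itself out.
interface Allocator
{
    void* allocate(size_t size);
}

__gshared Allocator gc;  // the "global handler" slot

final class RealGC : Allocator
{
    void* allocate(size_t size)
    {
        import core.stdc.stdlib : malloc;
        return malloc(size);  // stand-in for a real GC allocation
    }
}

final class LazyGC : Allocator
{
    void* allocate(size_t size)
    {
        auto impl = new RealGC;  // first allocation: create the real GC...
        gc = impl;               // ...and replace ourselves in the slot
        return impl.allocate(size);
    }
}

void main()
{
    gc = new LazyGC;           // startup cost: one tiny object, no GC work
    auto p = gc.allocate(16);  // this call triggers the swap
    assert(cast(RealGC) gc !is null);
}

The shape is the point: nothing is paid for the real GC until the first allocation forces the swap.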
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2020-01-24 10:52, Johan Engelen wrote:

 Indeed. Also to figure out why LDC's binary calls 31 more than DMD's 
 binary.
 Much appreciated if someone could repeat the test and post a list of all 
 syscalls being made.
Not exactly the same as the original post, but here's some data I pulled out for macOS 10.14.6 with a Hello World compiled with DMD 2.088.0. This should give somewhat of an idea of what's going on in the application. I've included the stack trace for all syscalls NOT made by the system. As you can see below, 10 calls are made by the application; all of the remaining calls are made by the system itself, most of them by the dynamic loader. Only three calls originate from the D main function. Only one call into the C standard library is made from the D main function, which is the call to `fwrite`. It's not our fault that the system does so many calls :).

total: 120, system: 110, app: 10

stat64, total: 40, system: 40, app: 0
-----------------------------------------------
mach_vm_map_trap, total: 8, system: 6, app: 2
  6 libsystem_malloc.dylib malloc
  7 foobar _D2rt5minfo11ModuleGroup9sortCtorsMFAyaZ6doSortMFmKAPyS6object10ModuleInfoZb
  8 foobar _D2rt5minfo11ModuleGroup9sortCtorsMFAyaZv
  9 foobar _D2rt5minfo11ModuleGroup9sortCtorsMFZv
  10 foobar _D2rt5minfo13rt_moduleCtorUZ14__foreachbody1MFKSQBu19sections_osx_x86_6412SectionGroupZi
  11 foobar rt_moduleCtor
  12 foobar rt_init
  13 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZv
  14 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv
  15 foobar _d_run_main2
  16 foobar _d_run_main
  17 foobar main
  18 libdyld.dylib start

  10 libsystem_c.dylib fwrite
  11 foobar _D3std5stdio__T13trustedFwriteTaZQsFNbNiNePOS4core4stdcQBx7__sFILExAaZm ~/.dvm/compilers/dmd-2.088.0/osx/bin/../../src/phobos/std/stdio.d:4322
  12 foobar _D3std5stdio4File17LockingTextWriter__T3putTAyaZQjMFNfMQlZv ~/.dvm/compilers/dmd-2.088.0/osx/bin/../../src/phobos/std/stdio.d:2930
  13 foobar _D3std5stdio__T7writelnTAyaZQnFNfQjZv ~/.dvm/compilers/dmd-2.088.0/osx/bin/../../src/phobos/std/stdio.d:3855
  14 foobar _Dmain ~/development/d/main.d:15
  15 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZ9__lambda1MFZv
  16 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv
  17 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZv
  18 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv
  19 foobar _d_run_main2
  20 foobar _d_run_main
  21 foobar main
  22 libdyld.dylib start
-----------------------------------------------
mprotect, total: 8, system: 8
mach_port_deallocate_trap, total: 4, system: 4
-----------------------------------------------
sigaction, total: 4, system: 0, app: 4
  0 libsystem_kernel.dylib __sigaction
  1 libsystem_platform.dylib __platform_sigaction
  2 foobar runModuleUnitTests
  3 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZv
  4 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv
  5 foobar _d_run_main2
  6 foobar _d_run_main
  7 foobar main
  8 libdyld.dylib start
-----------------------------------------------
mach_reply_port, total: 4, system: 4, app: 0
host_self_trap, total: 3, system: 3, app: 0
mach_port_mod_refs_trap, total: 3, system: 3, app: 0
ioctl, total: 3, system: 3, app: 0
mach_port_construct_trap, total: 3, system: 3, app: 0
kdebug_typefilter, total: 2, system: 2, app: 0
thread_self_trap, total: 2, system: 2, app: 0
proc_info, total: 2, system: 2, app: 0
mach_port_destruct_trap, total: 2, system: 2, app: 0
csops, total: 2, system: 2, app: 0
getpid, total: 2, system: 2, app: 0
task_self_trap, total: 2, system: 2, app: 0
access, total: 1, system: 1, app: 0
close, total: 1, system: 1, app: 0
shared_region_check_np, total: 1, system: 1, app: 0
-----------------------------------------------
fstat64, total: 1, system: 0, app: 1
  5 libsystem_c.dylib fwrite
  6 foobar _D3std5stdio__T13trustedFwriteTaZQsFNbNiNePOS4core4stdcQBx7__sFILExAaZm ~/.dvm/compilers/dmd-2.088.0/osx/bin/../../src/phobos/std/stdio.d:4322
  7 foobar _D3std5stdio4File17LockingTextWriter__T3putTAyaZQjMFNfMQlZv ~/.dvm/compilers/dmd-2.088.0/osx/bin/../../src/phobos/std/stdio.d:2930
  8 foobar _D3std5stdio__T7writelnTAyaZQnFNfQjZv ~/.dvm/compilers/dmd-2.088.0/osx/bin/../../src/phobos/std/stdio.d:3855
  9 foobar _Dmain ~/development/d/main.d:15
  10 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZ9__lambda1MFZv
  11 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv
  12 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZv
  13 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv
  14 foobar _d_run_main2
  15 foobar _d_run_main
  16 foobar main
  17 libdyld.dylib start
-----------------------------------------------
mac_vm_allocate_trap, total: 1, system: 1, app: 0
open, total: 1, system: 1, app: 0
mac_syscall, total: 1, system: 1, app: 0
-----------------------------------------------
sysctl, total: 1, system: 0, app: 1
  3 libsystem_c.dylib sysconf
  4 foobar _D4core6thread26_sharedStaticCtor_L3685_C1FZv
  5 foobar _D4core6thread15__modsharedctorFZv
  6 foobar _D2rt5minfo__T14runModuleFuncsSQBdQBd11ModuleGroup8runCtorsMFZ9__lambda2ZQChMFAxPyS6object10ModuleInfoZv
  7 foobar _D2rt5minfo11ModuleGroup8runCtorsMFZv
  8 foobar _D2rt5minfo13rt_moduleCtorUZ14__foreachbody1MFKSQBu19sections_osx_x86_6412SectionGroupZi
  9 foobar rt_moduleCtor
  10 foobar rt_init
  11 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZv
  12 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv
  13 foobar _d_run_main2
  14 foobar _d_run_main
  15 foobar main
  16 libdyld.dylib start
-----------------------------------------------
getentropy, total: 1, system: 1, app: 0
issetugid, total: 1, system: 1, app: 0
bsdthread_register, total: 1, system: 1, app: 0
-----------------------------------------------
write_nocancel, total: 1, system: 0, app: 1
  0 libsystem_kernel.dylib __write_nocancel
  1 libsystem_c.dylib _swrite
  2 libsystem_c.dylib __sflush
  3 libsystem_c.dylib fflush
  4 foobar _d_run_main2
  5 foobar _d_run_main
  6 foobar main
  7 libdyld.dylib start
-----------------------------------------------
exit, total: 1, system: 1, app: 0
-----------------------------------------------
getrlimit, total: 1, system: 0, app: 1
  8 libsystem_c.dylib fwrite
  9 foobar _D3std5stdio__T13trustedFwriteTaZQsFNbNiNePOS4core4stdcQBx7__sFILExAaZm ~/.dvm/compilers/dmd-2.088.0/osx/bin/../../src/phobos/std/stdio.d:4322
  10 foobar _D3std5stdio4File17LockingTextWriter__T3putTAyaZQjMFNfMQlZv ~/.dvm/compilers/dmd-2.088.0/osx/bin/../../src/phobos/std/stdio.d:2930
  11 foobar _D3std5stdio__T7writelnTAyaZQnFNfQjZv ~/.dvm/compilers/dmd-2.088.0/osx/bin/../../src/phobos/std/stdio.d:3855
  12 foobar _Dmain ~/development/d/main.d:15
  13 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZ9__lambda1MFZv
  14 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv
  15 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZv
  16 foobar _D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv
  17 foobar _d_run_main2
  18 foobar _d_run_main
  19 foobar main
  20 libdyld.dylib start
-----------------------------------------------
mach_timebase_info, total: 1, system: 1, app: 0
sysctlbyname, total: 1, system: 1, app: 0
csrctl, total: 1, system: 1, app: 0
thread_selfid, total: 1, system: 1, app: 0

-- 
/Jacob Carlborg
Jan 24 2020
parent Jacob Carlborg <doob me.com> writes:
On 2020-01-24 21:29, Jacob Carlborg wrote:

 total: 120, system: 110, app: 10
I would like to add that the number of calls varies between runs. -- /Jacob Carlborg
Jan 24 2020
prev sibling parent reply Gregor Mückl <gregormueckl gmx.de> writes:
On Friday, 24 January 2020 at 09:52:13 UTC, Johan Engelen wrote:
 Indeed. Also to figure out why LDC's binary calls 31 more than 
 DMD's binary.
 Much appreciated if someone could repeat the test and post a 
 list of all syscalls being made.

 -Johan
Here is a run of a statically linked hello world binary on Linux:

execve("./hello", ["./hello"], 0x7fffd9d9d0e0 /* 15 vars */) = 0
arch_prctl(0x3001 /* ARCH_??? */, 0x7ffff1a05c70) = -1 EINVAL (Invalid argument)
brk(NULL) = 0x1364000
brk(0x1365340) = 0x1365340
arch_prctl(ARCH_SET_FS, 0x1364a00) = 0
uname({sysname="Linux", nodename="kangoroo", ...}) = 0
set_tid_address(0x1364cd0) = 111
set_robust_list(0x1364ce0, 24) = 0
rt_sigaction(SIGRTMIN, {sa_handler=0x453cb0, sa_mask=[], sa_flags=SA_RESTORER|SA_SIGINFO, sa_restorer=0x4531b0}, NULL, 8) = 0
rt_sigaction(SIGRT_1, {sa_handler=0x453d50, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART|SA_SIGINFO, sa_restorer=0x4531b0}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=8192*1024}) = 0
readlink("/proc/self/exe", "/root/hello", 4096) = 11
brk(0x1386340) = 0x1386340
brk(0x1387000) = 0x1387000
clock_getres(CLOCK_MONOTONIC, {tv_sec=0, tv_nsec=100}) = 0
clock_getres(CLOCK_BOOTTIME, {tv_sec=0, tv_nsec=100}) = 0
clock_getres(CLOCK_MONOTONIC_COARSE, {tv_sec=0, tv_nsec=100}) = 0
clock_getres(CLOCK_MONOTONIC, {tv_sec=0, tv_nsec=100}) = 0
clock_getres(CLOCK_PROCESS_CPUTIME_ID, {tv_sec=0, tv_nsec=15625000}) = 0
clock_getres(CLOCK_MONOTONIC_RAW, {tv_sec=0, tv_nsec=100}) = 0
clock_getres(CLOCK_THREAD_CPUTIME_ID, {tv_sec=0, tv_nsec=15625000}) = 0
rt_sigaction(SIGUSR1, {sa_handler=0x42ac20, sa_mask=~[RTMIN RT_1], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x4531b0}, NULL, 8) = 0
rt_sigaction(SIGUSR2, {sa_handler=0x42ad20, sa_mask=~[RTMIN RT_1], sa_flags=SA_RESTORER, sa_restorer=0x4531b0}, NULL, 8) = 0
openat(AT_FDCWD, "/proc/self/maps", O_RDONLY|O_CLOEXEC) = 3
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=8192*1024}) = 0
fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
read(3, "00400000-00561000 r-xp 00000000 "..., 4096) = 488
close(3) = 0
brk(0x1386000) = 0x1386000
sched_getaffinity(111, 32, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) = 32
clock_getres(CLOCK_MONOTONIC, {tv_sec=0, tv_nsec=100}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=469, tv_nsec=741732400}) = 0
rt_sigaction(SIGSEGV, {sa_handler=0x4402b0, sa_mask=~[RTMIN RT_1], sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO, sa_restorer=0x4531b0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa7e7f94fb0}, 8) = 0
rt_sigaction(SIGBUS, {sa_handler=0x4402b0, sa_mask=~[RTMIN RT_1], sa_flags=SA_RESTORER|SA_RESETHAND|SA_SIGINFO, sa_restorer=0x4531b0}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa7e7f94fb0}, 8) = 0
rt_sigaction(SIGSEGV, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x4531b0}, NULL, 8) = 0
rt_sigaction(SIGBUS, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x4531b0}, NULL, 8) = 0
fstat(1, {st_mode=S_IFCHR|0660, st_rdev=makedev(0x4, 0x1), ...}) = 0
ioctl(1, TCGETS, {B38400 opost isig icanon echo ...}) = 0
write(1, "hello world\n", 12) = 12
write(1, "\n", 1) = 1
exit_group(0) = ?
+++ exited with 0 +++

I linked it statically to exclude the huge number of system calls made by ld.so. There's little that could be done to improve on those, I think. If I let strace dump stack traces for each call, the first mention of druntime is after the two consecutive brk calls. But I'm not sure if the traces are trustworthy. They seem cut short in a couple of instances.

So there's a couple of calls that should be fairly fast (clock_getres etc.). Why does the runtime need to read /proc/self/maps, though?
Jan 24 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 1/24/20 5:20 PM, Gregor Mückl wrote:

 So there's a couple of calls that should be fairly fast (clock_getres 
 etc.). Why does the runtime need to read /proc/self/maps, though?
This is great stuff, thanks!

Note that /proc/self/maps appears nowhere in phobos or druntime. So I'm assuming that's some other function that's doing it (maybe in libc?).

The clock_getres calls are so you can use core.time.MonoTime. rt_init initializes those so they can be used early in the process. I would assume those are really fast, as they are constants in the kernel.

-Steve
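P.S. A minimal sketch of the MonoTime usage this enables, for anyone curious (nothing here beyond the public core.time API):

===
// minimal sketch: MonoTime works right after rt_init, using the
// clock resolutions cached by those startup clock_getres calls
import core.time : MonoTime;
import std.stdio : writeln;

void main()
{
    immutable start = MonoTime.currTime;
    // ... some work ...
    writeln("elapsed: ", MonoTime.currTime - start);
}
===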
Jan 25 2020
parent reply norm <norm.rowtree gmail.com> writes:
On Saturday, 25 January 2020 at 16:07:24 UTC, Steven 
Schveighoffer wrote:
 On 1/24/20 5:20 PM, Gregor Mückl wrote:

 So there's a couple of calls that should be fairly fast 
 (clock_getres etc.). Why does the runtime need to read 
 /proc/self/maps, though?
 This is great stuff, thanks!

 Note that /proc/self/maps appears nowhere in phobos or druntime. 
 So I'm assuming that's some other function that's doing it (maybe 
 in libc?).

 The clock_getres calls are so you can use core.time.MonoTime. 
 rt_init initializes those so they can be used early in the 
 process. I would assume those are really fast, as they are 
 constants in the kernel.

 -Steve
I was curious so I ran a quick strace hello world experiment using only printf, not writeln, compiled with D (DMD), C++ and C (gcc and clang). Only the D binary opens /proc/self/maps. Running `strace dmd --version` also opens /proc/self/maps, but I guess that makes sense since the compiler itself is now written in D.

===
// D hello world
import core.stdc.stdio;

int main() {
    printf("Hello world\n");
    return 0;
}

// C++ and C hello world
#include <stdio.h>

int main() {
    printf("Hello world\n");
    return 0;
}
===

Cheers,
Norm
Jan 25 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 1/25/20 6:41 PM, norm wrote:
 On Saturday, 25 January 2020 at 16:07:24 UTC, Steven Schveighoffer wrote:
 On 1/24/20 5:20 PM, Gregor Mückl wrote:

 So there's a couple of calls that should be fairly fast (clock_getres 
 etc.). Why does the runtime need to read /proc/self/maps, though?
 This is great stuff, thanks!

 Note that /proc/self/maps appears nowhere in phobos or druntime. 
 So I'm assuming that's some other function that's doing it (maybe 
 in libc?).

 The clock_getres calls are so you can use core.time.MonoTime. 
 rt_init initializes those so they can be used early in the 
 process. I would assume those are really fast, as they are 
 constants in the kernel.

 I was curious so I ran a quick strace hello world experiment 
 using only printf, not writeln, compiled with D (DMD), C++ and C 
 (gcc and clang). Only the D binary opens /proc/self/maps. Running 
 `strace dmd --version` also opens /proc/self/maps, but I guess 
 that makes sense since the compiler itself is now written in D.
Yeah, it's not being opened directly by druntime, but looks like pthread_getattr_np (man strace -k is useful!):

openat(AT_FDCWD, "/proc/self/maps", O_RDONLY|O_CLOEXEC) = 3
 > /lib/x86_64-linux-gnu/libc-2.27.so(__open_nocancel+0x41) [0x10fdb1]
 > /lib/x86_64-linux-gnu/libc-2.27.so(_IO_file_fopen+0x78d) [0x8cc3d]
 > /lib/x86_64-linux-gnu/libc-2.27.so(fopen+0x7a) [0x7eeaa]
 > /lib/x86_64-linux-gnu/libpthread-2.27.so(pthread_getattr_np+0x193) [0x95b3]
 > /mnt/hgfs/Documents/testd/teststrace(thread_init+0x250) [0x50d98]
 > /mnt/hgfs/Documents/testd/teststrace(rt_init+0x4c) [0x3f880]
 > /mnt/hgfs/Documents/testd/teststrace(_D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZv+0x14) [0x3c55c]
 > /mnt/hgfs/Documents/testd/teststrace(_D2rt6dmain212_d_run_main2UAAamPUQgZiZ7tryExecMFMDFZvZv+0x21) [0x3c4f9]
 > /mnt/hgfs/Documents/testd/teststrace(_d_run_main2+0x22e) [0x3c462]
 > /mnt/hgfs/Documents/testd/teststrace(_d_run_main+0xbe) [0x3c21e]
 > /mnt/hgfs/Documents/testd/teststrace(main+0x22) [0x3c136]
 > /lib/x86_64-linux-gnu/libc-2.27.so(__libc_start_main+0xe7) [0x21b97]
 > /mnt/hgfs/Documents/testd/teststrace(_start+0x2a) [0x3c01a]

So it's something to do with thread_init calling pthread_getattr_np. I don't see a direct call in there, so probably it's inlined or tail-calling. And is that really using fopen? My goodness...

I'm not sure we can pay-as-you-go the low-level thread support. And we can't do anything about how pthreads use the OS to implement their mechanics.

-Steve
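P.S. For poking at this in isolation, a hedged standalone sketch of the query thread_init is making (assumptions: glibc, and that pthread_getattr_np needs a manual prototype, since it's a non-portable extension):

===
// standalone sketch of the stack-bounds query behind thread_init
// (assumption: glibc; pthread_getattr_np is declared manually here
// in case the posix bindings don't expose it)
import core.sys.posix.pthread : pthread_t, pthread_attr_t,
    pthread_self, pthread_attr_getstack, pthread_attr_destroy;
import core.stdc.stdio : printf;

extern (C) int pthread_getattr_np(pthread_t, pthread_attr_t*);

void main()
{
    pthread_attr_t attr;
    // for the main thread, this is the call that ends up reading
    // /proc/self/maps
    pthread_getattr_np(pthread_self(), &attr);

    void* addr;
    size_t size;
    pthread_attr_getstack(&attr, &addr, &size);
    printf("stack at %p, size %zu\n", addr, size);
    pthread_attr_destroy(&attr);
}
===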
Jan 25 2020
parent reply Gregor =?UTF-8?B?TcO8Y2ts?= <gregormueckl gmx.de> writes:
On Sunday, 26 January 2020 at 00:18:49 UTC, Steven Schveighoffer 
wrote:
 On 1/25/20 6:41 PM, norm wrote:
 On Saturday, 25 January 2020 at 16:07:24 UTC, Steven 
 Schveighoffer wrote:
 On 1/24/20 5:20 PM, Gregor Mückl wrote:

 So there's a couple of calls that should be fairly fast 
 (clock_getres etc.). Why does the runtime need to read 
 /proc/self/maps, though?
 This is great stuff, thanks!

 Note that /proc/self/maps appears nowhere in phobos or druntime. 
 So I'm assuming that's some other function that's doing it (maybe 
 in libc?).

 The clock_getres calls are so you can use core.time.MonoTime. 
 rt_init initializes those so they can be used early in the 
 process. I would assume those are really fast, as they are 
 constants in the kernel.

 I was curious so I ran a quick strace hello world experiment 
 using only printf, not writeln, compiled with D (DMD), C++ and C 
 (gcc and clang). Only the D binary opens /proc/self/maps. Running 
 `strace dmd --version` also opens /proc/self/maps, but I guess 
 that makes sense since the compiler itself is now written in D.
 Yeah, it's not being opened directly by druntime, but looks like 
 pthread_getattr_np (man strace -k is useful!):

 [...]

 So it's something to do with thread_init calling 
 pthread_getattr_np. I don't see a direct call in there, so 
 probably it's inlined or tail-calling. And is that really using 
 fopen? My goodness...

 I'm not sure we can pay-as-you-go the low-level thread support. 
 And we can't do anything about how pthreads use the OS to 
 implement their mechanics.

 -Steve
You got some actually useful backtraces. Great!

Looks like it is this implementation or very close to it: https://code.woboq.org/userspace/glibc/nptl/pthread_getattr_np.c.html

So this opens the file if it doesn't have stack layout information readily available. If I read the code correctly, this information read from the file isn't even cached for the specific thread. So unless I'm missing something (and I hope I am...), pthread_getattr_np goes through that dance each and every time. Something about this feels odd.

I found two other references to /proc/self/maps, one of them as part of vfprintf when _FORTIFY_SOURCE is enabled (then the format string *must* be in a read-only page) and the other one in fatal error handling. Those shouldn't concern us.
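For anyone who wants to see what that dance involves, a quick hedged sketch that scans /proc/self/maps for the stack mapping (simplified; the linked glibc code matches address ranges rather than the [stack] label):

===
// hedged sketch: locate the main thread's stack by scanning
// /proc/self/maps, a simplified stand-in for the glibc fallback
import std.stdio : File, writeln;
import std.algorithm.searching : canFind;

void main()
{
    foreach (line; File("/proc/self/maps").byLine)
        if (line.canFind("[stack]"))
            writeln("stack mapping: ", line);
}
===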
Jan 25 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 1/25/20 7:39 PM, Gregor Mückl wrote:
 If I read the code correctly, this information read form the file isn't 
 even cached for the specific thread. So unless I'm missing something 
 (and I hope I do...), pthread_getattr_np goes through that dance each 
 and every time.
If you read the notes, it looks like this is avoided for everything but the first thread. But I doubt we are going to be calling pthread_getattr_np more than once per thread. So probably this gets called only once per process.

Still, to use fopen for this seems really heavy. I doubt the maps file is so large that you can't fit it in one page; just read once and get the data you need. But it doesn't truly concern me to the point of worrying about it for D's sake.

-Steve
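P.S. A hedged sketch of that lighter-weight approach, using raw open/read instead of stdio (assuming one page is enough; a robust version would loop):

===
// "read it once, no stdio" alternative to glibc's fopen dance
// (assumption: the maps data of interest fits in 4 KiB)
import core.sys.posix.fcntl : open, O_RDONLY;
import core.sys.posix.unistd : close, read;

void main()
{
    char[4096] buf = void;
    immutable fd = open("/proc/self/maps", O_RDONLY);
    if (fd < 0) return;
    immutable n = read(fd, buf.ptr, buf.length);
    close(fd);
    // parse buf[0 .. n] for the stack bounds here
}
===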
Jan 25 2020
prev sibling parent reply Laurent =?UTF-8?B?VHLDqWd1aWVy?= <laurent.treguier.sink gmail.com> writes:
On Wednesday, 22 January 2020 at 23:51:23 UTC, FogD wrote:
 On Wednesday, 22 January 2020 at 10:51:37 UTC, IGotD- wrote:
 Many languages suffer from the C lib dependency, which is kind 
 of suboptimal. It is time to deprecate that dependency.
A recent comparison of languages from this perspective. https://drewdevault.com/2020/01/04/Slow.html
I don't understand these numbers. When compiling on Linux x86_64 myself with DMD and LDC, I get a 900k executable and a 692k executable, respectively. I also get different results with GCC/Glibc (dynamic: 7.7k, static: 761k), or with the assembly itself (760 bytes).

It's like all the sizes in the blog post are bigger somehow. How am I getting results so wildly different from the post? I'm a bit puzzled.
Jan 23 2020
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On Thursday, 23 January 2020 at 10:01:07 UTC, Laurent Tréguier 
wrote:
 On Wednesday, 22 January 2020 at 23:51:23 UTC, FogD wrote:
 On Wednesday, 22 January 2020 at 10:51:37 UTC, IGotD- wrote:
 Many languages suffer from the C lib dependency, which is kind 
 of suboptimal. It is time to deprecate that dependency.
A recent comparison of languages from this perspective. https://drewdevault.com/2020/01/04/Slow.html
 I don't understand these numbers. When compiling on Linux x86_64 
 myself with DMD and LDC, I get a 900k executable and a 692k 
 executable, respectively. I also get different results with 
 GCC/Glibc (dynamic: 7.7k, static: 761k), or with the assembly 
 itself (760 bytes).

 It's like all the sizes in the blog post are bigger somehow. How 
 am I getting results so wildly different from the post? I'm a bit 
 puzzled.
Did you include the size of the C standard library and other related libraries?

"The size of all files which must be present at runtime (interpreters, stdlib, libraries, loader, etc) are included".

--
/Jacob Carlborg
Jan 23 2020
parent Laurent =?UTF-8?B?VHLDqWd1aWVy?= <laurent.treguier.sink gmail.com> writes:
On Thursday, 23 January 2020 at 12:37:22 UTC, Jacob Carlborg 
wrote:
 Did you include the size of the C standard library and other 
 related libraries?

 "The size of all files which must be present at runtime 
 (interpreters, stdlib, libraries, loader, etc) are included".

 --
 /Jacob Carlborg
Ah, yes. I still have to learn to read things before writing...
Jan 23 2020
prev sibling parent kinke <noone nowhere.com> writes:
On Thursday, 23 January 2020 at 10:01:07 UTC, Laurent Tréguier 
wrote:
 I don't understand these numbers. When compiling on Linux 
 x86_64 myself with DMD and LDC, I get a 900k executable and a 
 692k executable, respectively. I also get different results 
 with GCC/Glibc (dynamic: 7.7k, static: 761k), or with the 
 assembly itself (760 bytes).
 It's like all the sizes in the blog post are bigger somehow. 
 How am I getting results so wildly different from the post? 
 I'm a bit puzzled.
I cannot reproduce them either. My sizes with official LDC v1.18.0 (same version as used by the author), on Ubuntu 18.04 x64 (author: Arch):

Phobos writeln variant:
ldc2 -O:         990K (in the blog: 10305 KiB)
ldc2 -O -static: 3.0M (incl. glibc)

C puts variant:
ldc2 -O:         478K
ldc2 -O -static: 2.3M
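For reference, these are presumably the two sources being measured (my hedged reconstruction, assuming the obvious program for each variant name):

===
// Phobos writeln variant (assumed source)
import std.stdio;
void main() { writeln("hello world"); }

// C puts variant (assumed source)
import core.stdc.stdio;
void main() { puts("hello world"); }
===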
Jan 23 2020
prev sibling parent user1234 <user1234 1234.de> writes:
On Wednesday, 22 January 2020 at 10:51:37 UTC, IGotD- wrote:
 [...]
 Many languages suffer from the C lib dependency, which is kind 
 of suboptimal. It is time to deprecate that dependency.
Write your syscall library in asm and handle everything optimally from there.
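A hedged sketch of what that could look like on x86_64 Linux, using DMD-style inline asm (the syscall number and register convention are ABI assumptions, not portable):

===
// raw write(2) via inline asm on x86_64 Linux, bypassing libc
// entirely (sketch; syscall numbers are ABI-specific)
long sysWrite(int fd, const(void)* buf, size_t len)
{
    long ret;
    asm
    {
        mov RAX, 1;   // SYS_write on x86_64 Linux
        mov EDI, fd;
        mov RSI, buf;
        mov RDX, len;
        syscall;
        mov ret, RAX;
    }
    return ret;
}

void main()
{
    sysWrite(1, "hello\n".ptr, 6);
}
===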
Jan 22 2020