
digitalmars.D - dmd makes D appear slow

reply "weaselcat" <weaselcat gmail.com> writes:
In nearly every benchmark I see D in, the default compiler used 
is dmd, which runs computationally intensive tasks 4-5x+ slower 
than GDC/LDC.

example of a random blog post I found:
http://vaskir.blogspot.com/2015/04/computing-cryptography-hashes-rust-vs-f.html

D is up to 10x(!) slower than Rust.

Well... dmd is. Under LDC:
MD5 is 5x faster,
SHA1 is about the same,
SHA256 is 10x faster,
SHA512 is 10x faster.

The kicker?
_all_ of these were faster than the Rust timings (albeit by 
5-10%) when using LDC.
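For reference, this is roughly the shape of timing loop such hash 
benchmarks boil down to, sketched against Phobos' std.digest (not the 
blog's actual harness; the buffer size and iteration count here are 
made up):

    import std.datetime : AutoStart, StopWatch;
    import std.digest.sha : sha256Of;
    import std.stdio : writefln;

    void main()
    {
        auto data = new ubyte[1024 * 1024];  // 1 MiB of zeroed input
        ubyte sink;                          // consume each digest so the
                                             // optimizer can't drop the calls
        auto sw = StopWatch(AutoStart.yes);
        foreach (i; 0 .. 100)
            sink ^= sha256Of(data)[0];       // hash the same buffer 100 times
        sw.stop();
        writefln("SHA256 x100: %s ms (sink=%s)", sw.peek().msecs, sink);
    }

Build it once as "dmd -O -release -inline bench.d" and once as 
"ldc2 -O3 -release bench.d", and the backend gap is what you're 
measuring.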

This isn't the first time I've seen this; in basically every 
benchmark featuring D I have to submit a patch/make a comment 
that dmd shouldn't be used. Make no mistake, this is damaging to 
D's reputation - how well does D's "native efficiency" go over 
when people are saying it's slow?

LDC and GDC need to be promoted more.

Bye,
May 29 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, May 29, 2015 at 06:13:02PM +0000, weaselcat via Digitalmars-d wrote:
 In nearly every benchmark I see D in, the default compiler used is dmd,
 which runs computationally intensive tasks 4-5x+ slower than GDC/LDC.
As I keep saying, in my own compute-intensive projects I have consistently found that dmd-generated code (dmd -O -inline -release) is about 20-30% slower on average, sometimes even up to 50% slower, compared to gdc-generated code (gdc -O3 -finline -frelease). This is measured by actual running time in an actual application, not benchmark-specific code.

I have looked at the generated assembly before, and it's clear that the gdc optimizer is way ahead of dmd's. The dmd optimizer starts failing to inline inner loop code after about 1-2 levels of function call nesting, not to mention it's unable to factor out a lot of loop boilerplate code. The gdc optimizer, by contrast, not only factors out almost all loop boilerplate code and inlines inner loop function calls several levels deep, it also unrolls loops in a CPU-specific way, does major loop restructuring, compounded with much more linear code optimization than dmd does, instruction reordering and then refactoring after that, etc., in some cases reducing the size of inner loop code (as in, the amount of code that runs per iteration) by up to 90%.

I don't know the internal workings of the dmd optimizer, but it's clear that at present, with almost nobody working on it except Walter, it's never going to catch up. (Maybe this statement will provoke Walter into working his magic? :-P)

[...]
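To make the nesting point concrete, here is a contrived sketch of the shape I mean (made-up code, not from my projects): gdc will typically flatten all three calls into the loop body, while dmd gives up a level or two down and leaves calls inside the hot loop.

    int level3(int x) { return x * 2 + 1; }
    int level2(int x) { return level3(x) + 3; }  // one level of nesting
    int level1(int x) { return level2(x) - 1; }  // two levels of nesting

    int hotSum(int[] data)
    {
        int sum;
        foreach (x; data)
            sum += level1(x);  // three calls deep from the inner loop
        return sum;
    }

    void main()
    {
        import std.stdio : writeln;
        writeln(hotSum([1, 2, 3]));  // prints 21
    }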
 This isn't the first time I've seen this; in basically every benchmark
 featuring D I have to submit a patch/make a comment that dmd shouldn't
 be used. Make no mistake, this is damaging to D's reputation - how
 well does D's "native efficiency" go over when people are saying it's
 slow?

 LDC and GDC need to be promoted more.
[...]

This will probably offend some people, but I think LDC/GDC should be the default download on dlang.org, and dmd should be provided as an alternative for those who want the latest language version and don't mind the speed compromise.


T

-- 
Marketing: the art of convincing people to pay for what they didn't need before which you fail to deliver after.
May 29 2015
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
 This will probably offend some people, but I think LDC/GDC 
 should be the
 default download on dlang.org, and dmd should be provided as an
 alternative for those who want the latest language version and 
 don't
 mind the speed compromise.
I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :)
May 29 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, May 29, 2015 at 06:50:02PM +0000, Dicebot via Digitalmars-d wrote:
 On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
This will probably offend some people, but I think LDC/GDC should be
the default download on dlang.org, and dmd should be provided as an
alternative for those who want the latest language version and don't
mind the speed compromise.
I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :)
Can't please 'em all... According to Walter, many D users want fast compile times, and aren't as concerned about performance of the generated code. But from this thread's OP, it seems there's another group of users who don't care about fast compile times but want the generated code to squeeze every last drop of performance from their CPUs.

So I guess we should be equally recommending all 3 compilers, with a note to help people choose their compiler depending on their needs.


T

-- 
Those who don't understand D are condemned to reinvent it, poorly. -- Daniel N
May 29 2015
next sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Friday, 29 May 2015 at 19:01:18 UTC, H. S. Teoh wrote:
 On Fri, May 29, 2015 at 06:50:02PM +0000, Dicebot via 
 Digitalmars-d wrote:
 On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
This will probably offend some people, but I think LDC/GDC 
should be
the default download on dlang.org, and dmd should be provided 
as an
alternative for those who want the latest language version 
and don't
mind the speed compromise.
I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :)
 Can't please 'em all... According to Walter, many D users want fast
 compile times, and aren't as concerned about performance of the
 generated code. But from this thread's OP, it seems there's another
 group of users who don't care about fast compile times but want the
 generated code to squeeze every last drop of performance from their
 CPUs.

 So I guess we should be equally recommending all 3 compilers, with a
 note to help people choose their compiler depending on their needs.

 T
I think it might be worth investigating why LDC/GDC are slower than dmd when compiling non-optimized builds. This seems like something that would be easier to solve than getting dmd up to the same performance level as LDC/GDC.

Bye,
May 29 2015
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 5/29/15 12:58 PM, H. S. Teoh via Digitalmars-d wrote:
 On Fri, May 29, 2015 at 06:50:02PM +0000, Dicebot via Digitalmars-d wrote:
 On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
 This will probably offend some people, but I think LDC/GDC should be
 the default download on dlang.org, and dmd should be provided as an
 alternative for those who want the latest language version and don't
 mind the speed compromise.
I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :)
Can't please 'em all... According to Walter, many D users want fast compile times, and aren't as concerned about performance of the generated code. But from this thread's OP, it seems there's another group of users who don't care about fast compile times but want the generated code to squeeze every last drop of performance from their CPUs. So I guess we should be equally recommending all 3 compilers, with a note to help people choose their compiler depending on their needs.
myOpinion = (fastCompileTimes * 10000 < fastCode);

-Steve
May 29 2015
parent reply "Idan Arye" <GenericNPC gmail.com> writes:
On Friday, 29 May 2015 at 19:16:45 UTC, Steven Schveighoffer 
wrote:
 On 5/29/15 12:58 PM, H. S. Teoh via Digitalmars-d wrote:
 On Fri, May 29, 2015 at 06:50:02PM +0000, Dicebot via 
 Digitalmars-d wrote:
 On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
 This will probably offend some people, but I think LDC/GDC 
 should be
 the default download on dlang.org, and dmd should be 
 provided as an
 alternative for those who want the latest language version 
 and don't
 mind the speed compromise.
I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :)
Can't please 'em all... According to Walter, many D users want fast compile times, and aren't as concerned about performance of the generated code. But from this thread's OP, it seems there's another group of users who don't care about fast compile times but want the generated code to squeeze every last drop of performance from their CPUs. So I guess we should be equally recommending all 3 compilers, with a note to help people choose their compiler depending on their needs.
myOpinion = (fastCompileTimes * 10000 < fastCode); -Steve
For the development cycle too?
May 29 2015
next sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Friday, 29 May 2015 at 22:05:27 UTC, Idan Arye wrote:
 For the development cycle too?
I've had LDC edge out dmd in compilation times during the development cycle. dmd sees zero boost from separate object compilation.

Using DCD as an example (because it has not-long, not-short build times), the initial non-optimized build took 3.10 seconds elapsed with dmd, 5.11 with ldc. Changing one file and building took the same amount of time for dmd but only 0.43 seconds (430 milliseconds) for ldc.

Bye,
May 29 2015
prev sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 5/29/15 4:05 PM, Idan Arye wrote:
 On Friday, 29 May 2015 at 19:16:45 UTC, Steven Schveighoffer wrote:
 On 5/29/15 12:58 PM, H. S. Teoh via Digitalmars-d wrote:
 On Fri, May 29, 2015 at 06:50:02PM +0000, Dicebot via Digitalmars-d
 wrote:
 On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
 This will probably offend some people, but I think LDC/GDC should be
 the default download on dlang.org, and dmd should be provided as an
 alternative for those who want the latest language version and don't
 mind the speed compromise.
I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :)
Can't please 'em all... According to Walter, many D users want fast compile times, and aren't as concerned about performance of the generated code. But from this thread's OP, it seems there's another group of users who don't care about fast compile times but want the generated code to squeeze every last drop of performance from their CPUs. So I guess we should be equally recommending all 3 compilers, with a note to help people choose their compiler depending on their needs.
myOpinion = (fastCompileTimes * 10000 < fastCode);
For the development cycle too?
I saw the slide from Liran that shows your compiler requirements :) I can see why it's important to you.

But compiled code outlives the compiler execution. It's the wart that persists. But techniques (for most projects) are available to speed compile time, and even in that case, compile time for whole project is quite low.

For very large projects, toolchain performance is definitely important. We need to work on that. But I don't see how speed of compiler should sacrifice runtime performance.

-Steve
May 29 2015
parent reply Shachar Shemesh <shachar weka.io> writes:
On 30/05/15 03:57, Steven Schveighoffer wrote:

 I saw the slide from Liran that shows your compiler requirements :) I
 can see why it's important to you.
Then you misunderstood Liran's slides.

Our compile resources problem isn't with GDC. It's with DMD. Single object compilation requires more RAM than most developers' machines have, resulting in a complicated "rsync to AWS, run script there, compile, fetch results" cycle that adds quite a while to the compilation time.

Our problem with GDC is that it produces code THAT DOES NOT MATCH THE SOURCE. I have not seen LDC myself, but according to Liran, the situation there is even worse. The compiler simply does not finish compilation without crashing.
 But compiled code outlives the compiler execution. It's the wart that
 persists.
So does algorithmic code that, due to a compiler bug, produces assembly that does not implement the correct algorithm. When doing RAID parity calculation, it is imperative that the correct bit gets to the correct location with the correct value. If that doesn't happen, compilation speed is the least of your problems.

Like Liran said in the lecture, we are currently faster than all of our competition. Still, in a correctly functioning storage system, the RAID part needs to take a considerable amount of the total processing time under load (say, 30%). If we're losing 3x speed because we don't have compiler optimizations, the system, as a whole, is losing about half of its performance.
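For those not from the storage world: the heart of RAID-5-style parity is a plain XOR across the data blocks, which is why a single miscompiled bit silently breaks recovery. A minimal illustration of the idea (not our actual code, obviously):

    // The parity block is the XOR of all data blocks; any single lost
    // block can be rebuilt by XOR-ing the parity with the survivors.
    ubyte[] parityOf(const(ubyte[])[] blocks)
    {
        auto parity = new ubyte[blocks[0].length];
        foreach (block; blocks)
            foreach (i, b; block)
                parity[i] ^= b;  // one wrong bit here corrupts recovery
        return parity;
    }

    void main()
    {
        import std.stdio : writeln;
        ubyte[] a = [0x0F, 0xF0];
        ubyte[] b = [0xFF, 0x00];
        writeln(parityOf([a, b]));  // prints [240, 240]
    }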
 But I don't see how speed of compiler should sacrifice runtime performance.
Our plan was to compile with DMD during the development stage, and then switch to GDC for code intended for deployment. This plan simply cannot work if each time we try and make that switch, Liran has to spend two months, each time yanking a different developer from the work said developer needs to be doing, in order to figure out which line of source gets compiled incorrectly.
 -Steve
Shachar
May 30 2015
next sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 30 May 2015 at 20:38, Shachar Shemesh via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On 30/05/15 03:57, Steven Schveighoffer wrote:

  I saw the slide from Liran that shows your compiler requirements :) I
 can see why it's important to you.
Then you misunderstood Liran's slides. Our compile resources problem isn't with GDC. It's with DMD. Single object compilation requires more RAM than most developers' machines have, resulting in a complicated "rsync to AWS, run script there, compile, fetch results" cycle that adds quite a while to the compilation time. Our problem with GDC is that it produces code THAT DOES NOT MATCH THE SOURCE.
Got any bug reports to back that up? I should probably run the testsuite with optimisations turned on sometime.
May 30 2015
parent reply Shachar Shemesh <shachar weka.io> writes:
On 30/05/15 21:44, Iain Buclaw via Digitalmars-d wrote:

 Got any bug reports to back that up?  I should probably run the
 testsuite with optimisations turned on sometime.
The latest one (the one that stung my code) is http://bugzilla.gdcproject.org/show_bug.cgi?id=188. In general, the bugs opened by Liran are usually around that area, as he's the one who does the porting of our code to GDC.

Shachar
Jun 01 2015
parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 1 Jun 2015 09:25, "Shachar Shemesh via Digitalmars-d" <digitalmars-d puremagic.com> wrote:
 On 30/05/15 21:44, Iain Buclaw via Digitalmars-d wrote:

 Got any bug reports to back that up?  I should probably run the
 testsuite with optimisations turned on sometime.
 The latest one (the one that stung my code) is http://bugzilla.gdcproject.org/show_bug.cgi?id=188. In general, the bugs opened by Liran are usually around that area, as he's the one who does the porting of our code to GDC.
 Shachar
OK thanks, I'll try to mentally couple you two together. I'm aware of the bugs Liran has filed. There are just some 'very big things' going on which have me away from bug fixing currently.
Jun 01 2015
prev sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 5/30/15 2:38 PM, Shachar Shemesh wrote:
 On 30/05/15 03:57, Steven Schveighoffer wrote:
 But I don't see how speed of compiler should sacrifice runtime
 performance.
Our plan was to compile with DMD during the development stage, and then switch to GDC for code intended for deployment. This plan simply cannot work if each time we try and make that switch, Liran has to spend two months, each time yanking a different developer from the work said developer needs to be doing, in order to figure out which line of source gets compiled incorrectly.
You're answering a question that was not asked. Obviously, compiler-generated code should match what the source says. That's way more important than speed of compilation or speed of execution.

So given that a compiler actually *works* (i.e. produces valid binaries), is speed of compilation better than speed of execution of the resulting binary? How much is too much?

And there are thresholds for things that really make the difference between works and not works. For instance, a requirement for 30GB of memory is not feasible for most systems. If you have to have 30GB of memory to compile, then the effective result is that compiler doesn't work. Similarly, if a compiler takes 2 weeks to output a binary, even if it's the fastest binary on the planet, that compiler doesn't work.

But if we are talking the difference between a compiler taking 10 minutes to produce a binary that is 20% faster than a compiler that takes 1 minute, what is the threshold of pain you are willing to accept? My preference is for the 10 minute compile time to get the fastest binary. If it's possible to switch the compiler into "fast mode" that gives me a slower binary, I might use that for development.

My original statement was obviously exaggerated, I would not put up with days-long compile times, I'd find another way to do development. But compile time is not as important to me as it is to others.

-Steve
Jun 01 2015
next sibling parent reply Shachar Shemesh <shachar weka.io> writes:
On 01/06/15 18:40, Steven Schveighoffer wrote:
 On 5/30/15 2:38 PM, Shachar Shemesh wrote:

 So given that a compiler actually *works* (i.e. produces valid
 binaries), is speed of compilation better than speed of execution of the
 resulting binary?
There is no answer to that question.

During development stage, there are many steps that have "compile" as a hard start/end barrier (i.e. - you have to finish a task before compile start, and cannot continue it until compile ends). During those stages, the difference between 1 and 10 minute compile is the difference between 1 and 10 bugs solved in a day. It is a huge difference, and one it is worth sacrificing any amount of run time efficiency to pay, assuming this is a tradeoff you can later make.

Then again, when a release build is being prepared, the difference becomes moot. Even your "outrageous" figures become acceptable, so long as you can be sure that no bugs pop up in this build that did not exist in the non-optimized build.

Then again, please bear in mind that our product is somewhat atypical. Most actual products in the market are not CPU bound on algorithmic code. When that's the case, the optimization stage (beyond the most basic inlining stuff) will rarely give you 20% overall speed increase. When your code performs a system call every 40 assembly instructions, there simply isn't enough room for the optimizer to work its magic.

One exception to that above rule is where it hurts. Benchmarks, typically, do rely on algorithmic code to a large extent.

Shachar
Jun 02 2015
parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 2 June 2015 at 19:42, Shachar Shemesh via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 01/06/15 18:40, Steven Schveighoffer wrote:
 On 5/30/15 2:38 PM, Shachar Shemesh wrote:

 So given that a compiler actually *works* (i.e. produces valid
 binaries), is speed of compilation better than speed of execution of the
 resulting binary?
There is no answer to that question. During development stage, there are many steps that have "compile" as a hard start/end barrier (i.e. - you have to finish a task before compile start, and cannot continue it until compile ends). During those stages, the difference between 1 and 10 minute compile is the difference between 1 and 10 bugs solved in a day. It is a huge difference, and one it is worth sacrificing any amount of run time efficiency to pay, assuming this is a tradeoff you can later make. Then again, when a release build is being prepared, the difference becomes moot. Even your "outrageous" figures become acceptable, so long as you can be sure that no bugs pop up in this build that did not exist in the non-optimized build. Then again, please bear in mind that our product is somewhat atypical. Most actual products in the market are not CPU bound on algorithmic code. When that's the case, the optimization stage (beyond the most basic inlining stuff) will rarely give you 20% overall speed increase. When your code performs a system call every 40 assembly instructions, there simply isn't enough room for the optimizer to work its magic. One exception to that above rule is where it hurts. Benchmarks, typically, do rely on algorithmic code to a large extent. Shachar
Quality of optimisation also translates directly into battery consumption. Even if the performance increase for a user isn't significant to them in terms of responsiveness, it has an effect on their battery life, which they do appreciate, even if they are unaware of it.
Jun 02 2015
prev sibling next sibling parent "Dicebot" <public dicebot.lv> writes:
On Monday, 1 June 2015 at 15:40:55 UTC, Steven Schveighoffer 
wrote:
 But if we are talking the difference between a compiler taking 
 10 minutes to produce a binary that is 20% faster than a 
 compiler that takes 1 minute, what is the threshold of pain you 
 are willing to accept? My preference is for the 10 minute 
 compile time to get the fastest binary. If it's possible to 
 switch the compiler into "fast mode" that gives me a slower 
 binary, I might use that for development.
Same here.
Jun 02 2015
prev sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Monday, 1 June 2015 at 15:40:55 UTC, Steven Schveighoffer 
wrote:
 My original statement was obviously exaggerated, I would not 
 put up with days-long compile times, I'd find another way to do 
 development. But compile time is not as important to me as it 
 is to others.

 -Steve
I think if compile time was such a dealbreaker as people make it out to be, C++ would be a lot less popular. I'm honestly in favor of making GDC the default compiler so I can have more coffee breaks ;)
Jun 02 2015
parent reply Shachar Shemesh <shachar weka.io> writes:
On 02/06/15 21:56, weaselcat wrote:
 On Monday, 1 June 2015 at 15:40:55 UTC, Steven Schveighoffer wrote:
 My original statement was obviously exaggerated, I would not put up
 with days-long compile times, I'd find another way to do development.
 But compile time is not as important to me as it is to others.

 -Steve
I think if compile time was such a dealbreaker as people make it out to be, C++ would be a lot less popular.
You know, I keep hearing this criticism. I have no idea where it comes from. I can tell you in no uncertain terms that a project the size we're working on would compile in considerably less time had it been written in C++.

If you are only referring to small projects, then compilation time isn't a major issue one way or the other.

Shachar
Jun 02 2015
parent reply "Dicebot" <public dicebot.lv> writes:
Project size is irrelevant here. I had 500 line C++ project that 
took 10 minutes to compile (hello boost::spirit). It is 
impossible for C++ to compile faster than D by design. Any time 
it seems so you either aren't comparing same thing or get 
misinformed. Or do straightforward separate compilation.
Jun 03 2015
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 3 June 2015 at 07:05:37 UTC, Dicebot wrote:
 Project size is irrelevant here. I had 500 line C++ project 
 that took 10 minutes to compile (hello boost::spirit). It is 
 impossible for C++ to compile faster than D by design. Any time 
 it seems so you either aren't comparing same thing or get 
 misinformed. Or do straightforward separate compilation.
Even C. Our project, back when I was doing C in the early 2000's, a "make clean all" took around one hour.
Jun 03 2015
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 3 June 2015 at 07:50:53 UTC, Paulo  Pinto wrote:
 On Wednesday, 3 June 2015 at 07:05:37 UTC, Dicebot wrote:
 Project size is irrelevant here. I had 500 line C++ project 
 that took 10 minutes to compile (hello boost::spirit). It is 
 impossible for C++ to compile faster than D by design. Any 
 time it seems so you either aren't comparing same thing or get 
 misinformed. Or do straightforward separate compilation.
Even C.
Now really? C was designed at a time where you couldn't even hold the source file in memory, so there is not even a need for an explicit AST.

C can essentially be "streamed" in separate passes: cpp->cc->asm->linking

If compiling C is slow, it is just the compiler or the build system, not the language.
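Spelled out with gcc's driver flags (one way to watch the stages; the driver normally chains them for you):

    gcc -E foo.c -o foo.i    # preprocess (textual, no AST required)
    gcc -S foo.i -o foo.s    # compile one unit to assembly
    gcc -c foo.s -o foo.o    # assemble
    gcc foo.o -o foo         # link

Each stage can forget everything about the previous one, which is what makes the language so cheap to compile in principle.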
Jun 03 2015
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 3 June 2015 at 10:37:24 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 3 June 2015 at 07:50:53 UTC, Paulo  Pinto wrote:
 On Wednesday, 3 June 2015 at 07:05:37 UTC, Dicebot wrote:
 Project size is irrelevant here. I had 500 line C++ project 
 that took 10 minutes to compile (hello boost::spirit). It is 
 impossible for C++ to compile faster than D by design. Any 
 time it seems so you either aren't comparing same thing or 
 get misinformed. Or do straightforward separate compilation.
Even C.
Now really? C was designed at a time where you couldn't even hold the source file in memory, so there is not even a need for an explicit AST. C can essentially be "streamed" in separate passes: cpp->cc->asm->linking If compiling C is slow, it is just the compiler or the build system, not the language.
Yes really, specially when comparing with Turbo Pascal, Delphi, Modula-2, Oberon and a few other languages not tied to UNIX linker model.

Multiply that hour times HP-UX (aCC), Solaris (SunPro), Windows (cl), Aix (xlc), Red-Hat Linux (gcc), which were the systems being used.

As a side note, Visual C++ 2015 will be quite fast.

http://channel9.msdn.com/Events/Build/2015/3-610

They have literally re-done their linker to use a database model and support incremental linking, similarly to what IBM did with Visual C++ Code Store and Lucid's Energize.

All the solutions have in common not relying on the traditional UNIX linker model.

-- 
Paulo
Jun 03 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 3 June 2015 at 12:20:29 UTC, Paulo  Pinto wrote:
 Yes really, specially when comparing with Turbo Pascal, Delphi, 
 Modula-2, Oberon and a few other languages not tied to UNIX 
 linker model.
Yeah, I agree that the implementation for Turbo Pascal was good for the hardware it ran on. But I don't think Pascal as a language is easier to compile fast than C. I think they match up.
 As a side note, Visual C++ 2015 will be quite fast.

 http://channel9.msdn.com/Events/Build/2015/3-610

 They literal have re-done their linker to use a database model 
 and support incremental linking.
Ok, I don't view linking as part of compilation... I don't think C as a language requires a specific linkage model (only the conceptual compilation units).
Jun 03 2015
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Wed, 03 Jun 2015 12:20:28 +0000, Paulo  Pinto wrote:

 Now really? C was designed at a time where you couldn't even hold the
 source file in memory, so there is not even a need for an explicit AST.

 C can essentially be "streamed" in separate passes:
 cpp->cc->asm->linking

 If compiling C is slow, it is just the compiler or the build system,
 not the language.
 Yes really, specially when comparing with Turbo Pascal, Delphi, Modula-2, Oberon and a few other languages not tied to UNIX linker model.
yes, i remember lightning fast compile times with turbo pascal. yet the code it produced was really awful: it was even unable to fold constants sometimes!
Jun 03 2015
next sibling parent Dan Olson <gorox comcast.net> writes:
ketmar <ketmar ketmar.no-ip.org> writes:

 yes, i remember lightning fast compile times with turbo pascal. yet the 
 code it produced was really awful: it was even unable to fold constants 
 sometimes!
I remember it being in a single DOS .COM (was it TURBO.COM?) only about 40k which included the editor, compiler, and libraries. It was the coolest thing for PC's at the time. I might even have a floppy with it on it :-)
Jun 03 2015
prev sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 4 June 2015 at 03:04:31 UTC, ketmar wrote:
 On Wed, 03 Jun 2015 12:20:28 +0000, Paulo  Pinto wrote:

 Now really? C was designed at a time where you couldn't even 
 hold the
 source file in memory, so there is not even a need for an 
 explicit AST.

 C can essentially be "streamed" in separate passes:
 cpp->cc->asm->linking

 If compiling C is slow, it is just the compiler or the build 
 system,
 not the language.
Yes really, specially when comparing with Turbo Pascal, Delphi, Modula-2, Oberon and a few other languages not tied to UNIX linker model.
yes, i remember lightning fast compile times with turbo pascal. yet the code it produced was really awful: it was even unable to fold constants sometimes!
No different from other MS-DOS C compilers. Hence why such languages were the Pythons and Rubys of the day, and anyone who cared about performance was using straight Assembly, in MS-DOS and other home systems, that is.

Michael Abrash's books The Zen of Assembly Language and Zen of Code Optimization were published in 1990 and 1994 respectively.

-- 
Paulo
Jun 04 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-06-03 12:37, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:

 Now really? C was designed at a time where you couldn't even hold the
 source file in memory, so there is not even a need for an explicit AST.

 C can essentially be "streamed" in separate passes: cpp->cc->asm->linking

 If compiling C is slow, it is just the compiler or the build system, not
 the language.
Doesn't a C compiler need to reparse headers in C? Unlike D, where they can be cached.

-- 
/Jacob Carlborg
Jun 04 2015
prev sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 6/3/15 3:50 AM, Paulo Pinto wrote:
 On Wednesday, 3 June 2015 at 07:05:37 UTC, Dicebot wrote:
 Project size is irrelevant here. I had 500 line C++ project that took
 10 minutes to compile (hello boost::spirit). It is impossible for C++
 to compile faster than D by design. Any time it seems so you either
 aren't comparing same thing or get misinformed. Or do straightforward
 separate compilation.
Even C. Our project, back when I was doing C in the early 2000's, a "make clean all" took around one hour.
It might be possible the processor/RAM constraints were different in 2000 than they are now :) -Steve
Jun 03 2015
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 3 June 2015 at 14:08:33 UTC, Steven Schveighoffer 
wrote:
 On 6/3/15 3:50 AM, Paulo Pinto wrote:
 On Wednesday, 3 June 2015 at 07:05:37 UTC, Dicebot wrote:
 Project size is irrelevant here. I had 500 line C++ project 
 that took
 10 minutes to compile (hello boost::spirit). It is impossible 
 for C++
 to compile faster than D by design. Any time it seems so you 
 either
 aren't comparing same thing or get misinformed. Or do 
 straightforward
 separate compilation.
Even C. Our project, back when I was doing C in the early 2000's, a "make clean all" took around one hour.
It might be possible the processor/RAM constraints were different in 2000 than they are now :) -Steve
Yeah, some people take 9 hours instead with C++ using modern hardware. http://www.reddit.com/r/programming/comments/37n39g/john_carmack_shares_his_experiences_with_static/croml2i If you noticed a later post from me, those systems were UNIX servers, not desktop PCs. -- Paulo
Jun 03 2015
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 6/3/15 10:19 AM, Paulo Pinto wrote:
 On Wednesday, 3 June 2015 at 14:08:33 UTC, Steven Schveighoffer wrote:
 On 6/3/15 3:50 AM, Paulo Pinto wrote:
 On Wednesday, 3 June 2015 at 07:05:37 UTC, Dicebot wrote:
 Project size is irrelevant here. I had 500 line C++ project that took
 10 minutes to compile (hello boost::spirit). It is impossible for C++
 to compile faster than D by design. Any time it seems so you either
 aren't comparing same thing or get misinformed. Or do straightforward
 separate compilation.
Even C. Our project, back when I was doing C in the early 2000's, a "make clean all" took around one hour.
It might be possible the processor/RAM constraints were different in 2000 than they are now :)
Yeah, some people take 9 hours instead with C++ using modern hardware. http://www.reddit.com/r/programming/comments/37n39g/john_carmack_shares_his_experiences_with_static/croml2i If you noticed a later post from me, those systems were UNIX servers, not desktop PCs.
Sure, but I still think it's difficult to compare systems. Processors just weren't that fast back then. You could have 256 of them, and lots of RAM, but the RAM architecture was slower too. If your compiler could run in parallel to build, perhaps you could get faster compile times, but it's so difficult to compare these things in an apples-to-apples comparison. Especially when the size/complexity of the program being compiled isn't necessarily analogous. -Steve
Jun 03 2015
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 3 June 2015 at 07:05:37 UTC, Dicebot wrote:
 It is impossible for C++ to compile faster than D by design. 
 Any time it seems so you either aren't comparing same thing or 
 get misinformed. Or do straightforward separate compilation.
There are lots of features in D, that C++ does not have, that will make separate compilation and partial evaluation/incomplete types difficult. So C++ is faster than D by design, even when the compiler isn't. The same features that many think are great about D are also the ones that makes formal reasoning about isolated parts of a D program difficult or impossible. You surely don't need a list?
Jun 03 2015
next sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
 There are lots of features in D, that C++ does not have, that 
 will make separate compilation and partial 
 evaluation/incomplete types difficult. So C++ is faster than D 
 by design, even when the compiler isn't.
I've tried to parse that last sentence a few times and I'm not sure what you mean. A theoretical compiler doesn't matter; what actual compilers do does.

The empirical fact is that C++ is slower to compile than D (AFAIK C++ compiles slower than everything that isn't Scala). If you have a benchmark that shows C++ compiling faster than D, please share it.

Atila
Jun 03 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
 sure what you mean. A theoretical compiler doesn't matter; what 
 actual compilers do does.
Of course it does: it defines how far you can go in a concurrent build process before hitting an unsurpassable bottleneck. (Not that I personally care, as I find both C++ and D compilers to be fast enough.)
Jun 03 2015
prev sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Wednesday, 3 June 2015 at 09:21:55 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 3 June 2015 at 07:05:37 UTC, Dicebot wrote:
 It is impossible for C++ to compile faster than D by design. 
 Any time it seems so you either aren't comparing same thing or 
 get misinformed. Or do straightforward separate compilation.
There are lots of features in D, that C++ does not have, that will make separate compilation and partial evaluation/incomplete types difficult. So C++ is faster than D by design, even when the compiler isn't.
LDC seems to manage separate compilation just fine; I use it for my projects at least. In my tests I find it to be 110-150% faster than all-at-once compilation. it can get even better if you properly modularize your projects instead of having 1-2 files that build slow, which causes a lot of waiting.
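Concretely, the workflow looks something like this (file names are hypothetical; -c and -of are the standard ldc2 flags):

    ldc2 -c a.d              # compile each module to its own object file
    ldc2 -c b.d
    ldc2 a.o b.o -of=app     # link

    # after editing only b.d, recompile one object and relink:
    ldc2 -c b.d
    ldc2 a.o b.o -of=app

dmd has the same -c mode, it just doesn't get faster from it.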
Jun 03 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 3 June 2015 at 11:06:39 UTC, weaselcat wrote:
 it can get even better if you properly modularize your projects 
 instead of having 1-2 files that build slow, which causes a lot 
 of waiting.
Yes, sure. You can probably get the same build speeds as with C if you organize your code in a particular way or shy away from certain patterns. What I was talking about was the language, meaning that you don't write your code to give a boost in compilation speed. Clearly possible, but if you use third party frameworks… then you're out of luck.

An analogy: SQL isn't particularly expressive, but the limitations of the language make it possible to execute it bottom-up. NOSQL engines are even less expressive, but can be even more easily distributed.
Jun 03 2015
parent reply "weaselcat" <weaselcat gmail.com> writes:
On Wednesday, 3 June 2015 at 11:25:50 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 3 June 2015 at 11:06:39 UTC, weaselcat wrote:
 it can get even better if you properly modularize your 
 projects instead of having 1-2 files that build slow, which 
 causes a lot of waiting.
Yes, sure. You can probably get the same build speeds as with C if you organize your code in a particular way or shy away from certain patterns. What I was talking about was the language, meaning that you don't write your code to give a boost in compilation speed. Clearly possible, but if you use third party frameworks… then you're out of luck.

An analogy: SQL isn't particularly expressive, but the limitations of the language make it possible to execute it bottom-up. NOSQL engines are even less expressive, but can be even more easily distributed.
ah yes, those famous fast C build times. Excuse me while I go take half an hour to build GDB.
Jun 03 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 3 June 2015 at 11:35:43 UTC, weaselcat wrote:
 ah yes, those famous fast C build times.
 Excuse me while I go take half an hour to build GDB.
Heh... It is possible to write very fast C compilers with high concurrency in builds, if there is a market for it, but most people want some optimizations too. So that's what people evaluate a compiler by: typical integration builds. Squeaky wheel gets most oil.
Jun 03 2015
prev sibling next sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 3 June 2015 at 13:35, weaselcat via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Wednesday, 3 June 2015 at 11:25:50 UTC, Ola Fosheim Grøstad wrote:
 On Wednesday, 3 June 2015 at 11:06:39 UTC, weaselcat wrote:

 it can get even better if you properly modularize your projects instead
 of having 1-2 files that build slow, which causes a lot of waiting.
Yes, sure. You can probably get the same build speeds as with C if you organize your code in a particular way or shy away from certain patterns. What I was talking about was the language, meaning that you don't write your code to give a boost in compilation speed. Clearly possible, but if you use third party frameworks… then you're out of luck.

An analogy: SQL isn't particularly expressive, but the limitations of the language make it possible to execute it bottom-up. NOSQL engines are even less expressive, but can be even more easily distributed.
ah yes, those famous fast C build times. Excuse me while I go take half an hour to build GDB.
You're probably doing it wrong and accidentally building all of binutils instead. Use 'make all-gdb' and enjoy faster builds :-p
Jun 03 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 3 June 2015 at 15:14:47 UTC, Iain Buclaw wrote:
 You're probably doing it wrong and accidentally building all of 
 binutils
 instead.  Use 'make all-gdb' and enjoy faster builds :-p
Or use TCC. :^P
Jun 03 2015
prev sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Wed, 03 Jun 2015 11:35:41 +0000, weaselcat wrote:

 On Wednesday, 3 June 2015 at 11:25:50 UTC, Ola Fosheim Grøstad wrote:
 On Wednesday, 3 June 2015 at 11:06:39 UTC, weaselcat wrote:
 it can get even better if you properly modularize your projects
 instead of having 1-2 files that build slow, which causes a lot of
 waiting.

Yes, sure. You can probably get the same build speeds as with C if you organize your code in a particular way or shy away from certain patterns. What I was talking about was the language, meaning that you don't write your code to give a boost in compilation speed. Clearly possible, but if you use third party frameworks… then you're out of luck.

An analogy: SQL isn't particularly expressive, but the limitations of the language make it possible to execute it bottom-up. NOSQL engines are even less expressive, but can be even more easily distributed.

ah yes, those famous fast C build times. Excuse me while I go take half an hour to build GDB.
yet gcc (the C compiler part) is significantly faster than gdc on my box (both gcc and gdc are built from sources, tuned to my arch). i mean separate compilation, of course. i believe it's due to phobos.
Jun 03 2015
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Fri, 29 May 2015 11:58:09 -0700, H. S. Teoh via Digitalmars-d wrote:

 On Fri, May 29, 2015 at 06:50:02PM +0000, Dicebot via Digitalmars-d
 wrote:
 On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
This will probably offend some people, but I think LDC/GDC should be
the default download on dlang.org, and dmd should be provided as an
alternative for those who want the latest language version and don't
mind the speed compromise.
 I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :)
 Can't please 'em all... According to Walter, many D users want fast compile times, and aren't as concerned about performance of the generated code. But from this thread's OP, it seems there's another group of users who don't care about fast compile times but want the generated code to squeeze every last drop of performance from their CPUs.

 So I guess we should be equally recommending all 3 compilers, with a note to help people choose their compiler depending on their needs.
the thing is that benchmarks are measuring execution time, not compiling time. that's why D is failing benchmarks. making LDC or GDC a "reference" compiler, and stating that if someone is ready to trade codegen quality for compilation speed, he can use DMD instead, is the way to start being "benchmark friendly".

people doing benchmarks usually downloading what official site gives 'em. so they taking DMD and assuming that it's the best *execution* speed D can offer.

i.e. developers can continue using DMD as their base, but offering it as "experimental compiler not recommended to use in production" on the offsite, replacing "download D compiler" links with LDC/GDC. this way people will not get Hot New Features right away, but "D is sloooow" rants will go down. ;-)
May 29 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 30 May 2015 at 09:14, ketmar via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Fri, 29 May 2015 11:58:09 -0700, H. S. Teoh via Digitalmars-d wrote:

 On Fri, May 29, 2015 at 06:50:02PM +0000, Dicebot via Digitalmars-d
 wrote:
 On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
This will probably offend some people, but I think LDC/GDC should be
the default download on dlang.org, and dmd should be provided as an
alternative for those who want the latest language version and don't
mind the speed compromise.
I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :)
Can't please 'em all... According to Walter, many D users want fast compile times, and aren't as concerned about performance of the generated code. But from this thread's OP, it seems there's another group of users who don't care about fast compile times but want the generated code to squeeze every last drop of performance from their CPUs. So I guess we should be equally recommending all 3 compilers, with a note to help people choose their compiler depending on their needs.
the thing is that benchmarks are measuring execution time, not compiling time. that's why D is failing benchmarks. making LDC or GDC a "reference" compiler, and stating that if someone is ready to trade codegen quality for compilation speed, he can use DMD instead, is the way to start being "benchmark friendly". people doing benchmarks usually downloading what official site gives 'em. so they taking DMD and assuming that it's the best *execution* speed D can offer. i.e. developers can continue using DMD as their base, but offering it as "experimental compiler not recommended to use in production" on the offsite, replacing "download D compiler" links with LDC/GDC. this way people will not get Hot New Features right away, but "D is sloooow" rants will go down. ;-)
I actually think this is a really good idea. I don't think it's right that random people should encounter DMD as a first impression, they should encounter GDC or LDC, since those are the toolsets they will be making direct comparisons against during their evaluation. At the point that they're not yet a D enthusiast, access to cutting edge language features should mean practically nothing to them.

That said, it would be nice if the DMD developer community at large were able to work closer with GDC/LDC. Is there some changes in workflow that may keep GDC/LDC up to date beside DMD as PR's are added? Possible to produce 'nightlies' for those compilers, so that developers following mainline DMD can easily update their respective compilers to reflect? Perhaps DMD developers could even develop language features against LDC instead of DMD, and backport to DMD?

For my work, and from what I noticed in my other thread, LDC is central to expansion of the D ecosystem, and I think it needs to be taken more seriously by the entire DMD community; it can't be a little thing off to the side.

LDC gives us portability; iOS, Android, Windows, Emscripten, NativeClient, and plenty of other platforms. It's 2015; the fact that we still don't support Android and iOS is just unacceptable. Most computers in the world run those operating systems.

LDC is also the only performant way to target Windows, the overwhelmingly largest desktop platform... but we lose the debugger! >_< How can we release products created with D if we still don't have a way to build and run on modern computers?

So, LDC: Windows, Android, iOS... this must be 99.9999% of computers on the planet! LDC needs to be first-class. Ideally, even more polished than DMD, and it should probably be the first contact people have with D.

* I don't mean to down-play GDC, but it can't give us Windows or iOS, which are critical targets.

I want to use D in my work, right now. I could... if I could actually target the computers we run code on.
May 29 2015
next sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Sat, 30 May 2015 11:43:07 +1000, Manu via Digitalmars-d wrote:

 * I don't mean to down-play GDC, but it can't give us Windows or iOS,
 which are critical targets.
just to note: ARM is supported in GDC (althru i never tested that support myself), and there are semi-official windows builds of GDC.

so GDC can give us windows support (it's simply not required by many GDC users for the time), but this is relatively easy to fix.

dunno about iOS specifics, but if LDC has some druntime fixes for that, such fixes can be integrated in mainline, and GDC should be able to build ARM binaries for that apple thingy.
parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 30 May 2015 05:25, "ketmar via Digitalmars-d" <digitalmars-d puremagic.com> wrote:
 On Sat, 30 May 2015 11:43:07 +1000, Manu via Digitalmars-d wrote:

 * I don't mean to down-play GDC, but it can't give us Windows or iOS,
 which are critical targets.
just to note: ARM is supported in GDC (althru i never tested that support myself), and there are semi-official windows builds of GDC. so GDC can give us windows support (it's simply not required by many GDC users for the time), but this is relatively easy to fix.
When he says Windows, he means MSVC, gcc backend will never support interfacing that ABI (at least I see no motivation as of writing).
May 30 2015
parent reply Shachar Shemesh <shachar weka.io> writes:
On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
 When he says Windows, he means MSVC, gcc backend will never support
 interfacing that ABI (at least I see no motivation as of writing).
I thought that's what MINGW was. A gcc backend that interfaces with the Windows ABI. Isn't it?

Shachar
May 30 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 31 May 2015 at 04:39, Shachar Shemesh via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
 When he says Windows, he means MSVC, gcc backend will never support
 interfacing that ABI (at least I see no motivation as of writing).
I thought that's what MINGW was. A gcc backend that interfaces with the Windows ABI. Isn't it?
If your program is isolated, MinGW is fine. Great even!

But the Windows ecosystem is built around Microsoft's COFF formatted libraries (as produced by Visual Studio), and most Windows libs that I find myself working with are closed-source, or distributed as pre-built binaries. You can't do very large scale work in the Windows ecosystem without interacting with the MS ecosystem, that is, COFF libs, and CV8/PDB debuginfo. Even if we could use MinGW, we ship an SDK ourselves, and customers would demand COFF libs from us.

LLVM is (finally!) addressing this Microsoft/VisualC-centric nature of the Windows dev environment... I just wish they'd hurry up! It's about 10 years overdue.
May 30 2015
parent reply Shachar Shemesh <shachar weka.io> writes:
On 31/05/15 02:08, Manu via Digitalmars-d wrote:
 On 31 May 2015 at 04:39, Shachar Shemesh via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
 When he says Windows, he means MSVC, gcc backend will never support
 interfacing that ABI (at least I see no motivation as of writing).
I thought that's what MINGW was. A gcc backend that interfaces with the Windows ABI. Isn't it?
If your program is isolated, MinGW is fine. Great even! But the Windows ecosystem is built around Microsoft's COFF formatted libraries (as produced by Visual Studio), and most Windows libs that I find myself working with are closed-source, or distributed as pre-built binaries.
Again, sorry for my ignorance. I just always assumed that the main difference between mingw and cygwin is precisely that: that mingw executables are PE formatted, and can import PE DLLs (such as the Win32 DLLs themselves).

If that is not the case, what is the mingw format? How does it allow you to link in the Win32 DLLs if it does not support COFF?

Shachar
May 31 2015
next sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 31 May 2015 at 17:59, Shachar Shemesh via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 31/05/15 02:08, Manu via Digitalmars-d wrote:
 On 31 May 2015 at 04:39, Shachar Shemesh via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
 When he says Windows, he means MSVC, gcc backend will never support
 interfacing that ABI (at least I see no motivation as of writing).
I thought that's what MINGW was. A gcc backend that interfaces with the Windows ABI. Isn't it?
If your program is isolated, MinGW is fine. Great even! But the Windows ecosystem is built around Microsoft's COFF formatted libraries (as produced by Visual Studio), and most Windows libs that I find myself working with are closed-source, or distributed as pre-built binaries.
Again, sorry for my ignorance. I just always assumed that the main difference between mingw and cygwin is precisely that: that mingw executables are PE formatted, and can import PE DLLs (such as the Win32 DLLs themselves). If that is not the case, what is the mingw format? How does it allow you to link in the Win32 DLLs if it does not support COFF? Shachar
I did once play with a coff mingw build, but I think the key issue I had there was the C runtime. GCC built code seems to produce intrinsic calls to glibc, and it is incompatible with MSVCRT. I'm pretty certain that GCC can't emit code to match the Win32 exception model, and there's still the debuginfo data to worry about too.
May 31 2015
prev sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 31 May 2015 at 10:45, Manu via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 On 31 May 2015 at 17:59, Shachar Shemesh via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 31/05/15 02:08, Manu via Digitalmars-d wrote:
 On 31 May 2015 at 04:39, Shachar Shemesh via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
 When he says Windows, he means MSVC, gcc backend will never support
 interfacing that ABI (at least I see no motivation as of writing).
I thought that's what MINGW was. A gcc backend that interfaces with the Windows ABI. Isn't it?
If your program is isolated, MinGW is fine. Great even! But the Windows ecosystem is built around Microsoft's COFF formatted libraries (as produced by Visual Studio), and most Windows libs that I find myself working with are closed-source, or distributed as pre-built binaries.
 Again, sorry for my ignorance. I just always assumed that the main difference between mingw and cygwin is precisely that: that mingw executables are PE formatted, and can import PE DLLs (such as the Win32 DLLs themselves).

 If that is not the case, what is the mingw format? How does it allow you to link in the Win32 DLLs if it does not support COFF?

 Shachar
I did once play with a coff mingw build, but I think the key issue I had there was the C runtime. GCC built code seems to produce intrinsic calls to glibc, and it is incompatible with MSVCRT. I'm pretty certain that GCC can't emit code to match the Win32 exception model, and there's still the debuginfo data to worry about too.
Pretty much correct as far as I understand it.

- GCC uses DWARF to embed debug information into the program, rather than storing it in a separate PDB.
- GCC uses SJLJ exceptions in C++ that work against its own libunwind model.
- GCC uses Itanium C++ mangling, so mixed MSVC/G++ is a no-go.
- GCC uses cdecl as the default calling convention (need to double check this is correct though).

That said, GCC does produce a COFF binary that is understood by the Windows platform (otherwise you wouldn't be able to run programs). But interacting with Windows libraries is restricted to the lowest API, that being anything that was marked with stdcall, fastcall or cdecl.

MinGW is an entirely isolated runtime environment that fills the missing/incompatible gaps between Windows and GNU/Posix runtime to allow GCC-built programs to run.
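In D terms, that lowest-common-denominator boundary is what extern (Windows) declarations bind to. A minimal sketch (MessageBoxA really is a stdcall Win32 entry point, though in practice you'd import it from core.sys.windows rather than declaring it by hand):

    // extern (Windows) selects the stdcall convention, which MSVC-,
    // MinGW- and D-built code all agree on at the Win32 API boundary.
    extern (Windows) int MessageBoxA(void* hWnd, const(char)* text,
                                     const(char)* caption, uint type);

    void main()
    {
        // string literals are zero-terminated, so they convert to const(char)*
        MessageBoxA(null, "hello from D", "demo", 0);
    }

(Windows only, and you need to link against user32.)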
May 31 2015
prev sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 30/05/2015 1:43 p.m., Manu via Digitalmars-d wrote:
 On 30 May 2015 at 09:14, ketmar via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Fri, 29 May 2015 11:58:09 -0700, H. S. Teoh via Digitalmars-d wrote:

 On Fri, May 29, 2015 at 06:50:02PM +0000, Dicebot via Digitalmars-d
 wrote:
 On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
 This will probably offend some people, but I think LDC/GDC should be
 the default download on dlang.org, and dmd should be provided as an
 alternative for those who want the latest language version and don't
 mind the speed compromise.
I did make LDC default compiler used in Arch but now people are unhappy with increased compile times so I may need to revert it back :)
Can't please 'em all... According to Walter, many D users want fast compile times, and aren't as concerned about performance of the generated code. But from this thread's OP, it seems there's another group of users who don't care about fast compile times but want the generated code to squeeze every last drop of performance from their CPUs. So I guess we should be equally recommending all 3 compilers, with a note to help people choose their compiler depending on their needs.
the thing is that benchmarks are measuring execution time, not compiling time. that's why D is failing benchmarks. making LDC or GDC a "reference" compiler, and stating that if someone is ready to trade codegen quality for compilation speed, he can use DMD instead, is the way to start being "benchmark friendly". people doing benchmarks usually downloading what official site gives 'em. so they taking DMD and assuming that it's the best *execution* speed D can offer. i.e. developers can continue using DMD as their base, but offering it as "experimental compiler not recommended to use in production" on the offsite, replacing "download D compiler" links with LDC/GDC. this way people will not get Hot New Features right away, but "D is sloooow" rants will go down. ;-)
I actually think this is a really good idea. I don't think it's right that random people should encounter DMD as a first impression; they should encounter GDC or LDC, since those are the toolsets they will be making direct comparisons against during their evaluation. At the point that they're not yet a D enthusiast, access to cutting-edge language features should mean practically nothing to them.

That said, it would be nice if the DMD developer community at large were able to work closer with GDC/LDC. Are there some changes in workflow that may keep GDC/LDC up to date beside DMD as PRs are added? Possible to produce 'nightlies' for those compilers, so that developers following mainline DMD can easily update their respective compilers to match? Perhaps DMD developers could even develop language features against LDC instead of DMD, and backport to DMD?

For my work, and what I noticed in my other thread, LDC is central to expansion of the D ecosystem, and I think it needs to be taken more seriously by the entire DMD community; it can't be a little thing off to the side.

LDC gives us portability: iOS, Android, Windows, Emscripten, NativeClient, and plenty of other platforms. It's 2015; the fact that we still don't support Android and iOS is just not acceptable. Most computers in the world run those operating systems. LDC is also the only performant way to target Windows, the overwhelmingly largest desktop platform... but we lose the debugger! >_< How can we release products created with D if we still don't have a way to build and run on modern computers?

So, LDC: Windows, Android, iOS... this must be 99.9999% of computers on the planet! LDC needs to be first-class. Ideally, even more polished than DMD, and it should probably be the first contact people have with D.

* I don't mean to down-play GDC, but it can't give us Windows or iOS, which are critical targets. I want to use D in my work, right now. I could... if I could actually target the computers we run code on.
Both you and ketmar are evil. I'm liking these ideas...

Now we just need some pretty and nice packages for e.g. Windows for LDC with full debugger support and we will be good. Last time I looked LLVM still needed a lot of work on Windows, unfortunately. It may be time to direct some people to help them out ;)
May 29 2015
next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Saturday, 30 May 2015 at 03:24:45 UTC, Rikki Cattermole wrote:
 Both you and ketmar are evil.
 I'm liking these ideas...
 
 Now we just need some pretty and nice packages for e.g. Windows 
 for LDC with full debugger support and we will be good.
 Last time I looked LLVM still needed a lot of work on Windows, 
 unfortunately. It may be time to direct some people to help 
 them out ;)
LDC seemed to work for the author of the blog on Windows after fixing a path issue. After a quick look in the LDC NG, it seems to be mostly(?) working.

It feels like GDC/LDC are fragmented from the main D community, which isn't good for an already not-so-large community. Outside of manually checking their issue trackers, it's hard to know what's going on with them.

Bye,
May 29 2015
next sibling parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 30/05/2015 3:36 p.m., weaselcat wrote:
 On Saturday, 30 May 2015 at 03:24:45 UTC, Rikki Cattermole wrote:
 Both you and ketmar are evil.
 I'm liking these ideas...
 
 Now we just need some pretty and nice packages for e.g. Windows for
 LDC with full debugger support and we will be good.
 Last time I looked LLVM still needed a lot of work on Windows,
 unfortunately. It may be time to direct some people to help them out ;)
LDC seemed to work for the author of the blog on Windows after fixing a path issue. After a quick look in the LDC NG, it seems to be mostly(?) working. It feels like GDC/LDC are fragmented from the main D community, which isn't good for an already not-so-large community. Outside of manually checking their issue trackers, it's hard to know what's going on with them. Bye,
Even as of a few months ago, LLVM wasn't fully working on 64-bit Windows. The LLVM debugger had a long way to go. So hopefully it's come a long way since then :)
May 29 2015
prev sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 30 May 2015 1:41 pm, "weaselcat via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On Saturday, 30 May 2015 at 03:24:45 UTC, Rikki Cattermole wrote:
 Both you and ketmar are evil.
 I'm liking these ideas...
 
 Now we just need some pretty and nice packages for e.g. Windows for LDC
 with full debugger support and we will be good.
 Last time I looked LLVM still needed a lot of work on Windows,
 unfortunately. It may be time to direct some people to help them out ;)
 LDC seemed to work for the author of the blog on Windows after fixing a
 path issue. After a quick look in the LDC NG, it seems to be mostly(?)
 working.
There's a big difference between compiling a few lines of code and building a project with particular requirements, dependencies on various foreign libs, cross-language linkage, etc. LDC makes a valiant effort, but there are still quite a lot of gaps. I can't hold that against them; the whole dmd community needs to take gdc/ldc as first-class considerations.
May 30 2015
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
I'm having a pretty good experience with win64 LDC lately. Obviously the
fact that there's no debug info is a gigantic hole.

I have a hack environment where I use dmd for debug builds and ldc for
release builds, but it's really not ideal. And you're limited to code that
doesn't expose bugs in either compiler.

The biggest problem I have with ldc is that lots of normal compiler errors
pop up as an ICE instead of a normal error message.
On 30 May 2015 1:26 pm, "Rikki Cattermole via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:

 On 30/05/2015 1:43 p.m., Manu via Digitalmars-d wrote:

 On 30 May 2015 at 09:14, ketmar via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:

 On Fri, 29 May 2015 11:58:09 -0700, H. S. Teoh via Digitalmars-d wrote:

  On Fri, May 29, 2015 at 06:50:02PM +0000, Dicebot via Digitalmars-d
 wrote:

 On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:

 This will probably offend some people, but I think LDC/GDC should be
 the default download on dlang.org, and dmd should be provided as an
 alternative for those who want the latest language version and don't
 mind the speed compromise.
I did make LDC the default compiler in Arch, but now people are unhappy with increased compile times, so I may need to revert it :)
Can't please 'em all... According to Walter, many D users want fast compile times, and aren't as concerned about performance of the generated code. But from this thread's OP, it seems there's another group of users who don't care about fast compile times but want the generated code to squeeze every last drop of performance from their CPUs. So I guess we should be equally recommending all 3 compilers, with a note to help people choose their compiler depending on their needs.
the thing is that benchmarks are measuring execution time, not compiling time. that's why D is failing benchmarks. making LDC or GDC a "reference" compiler, and stating that if someone is ready to trade codegen quality for compilation speed, he can use DMD instead, is the way to start being "benchmark friendly".

people doing benchmarks usually download what the official site gives 'em. so they take DMD and assume that it's the best *execution* speed D can offer.

i.e. developers can continue using DMD as their base, but the site would offer it as an "experimental compiler not recommended for production use", replacing "download D compiler" links with LDC/GDC. this way people will not get Hot New Features right away, but "D is sloooow" rants will go down. ;-)
I actually think this is a really good idea. I don't think it's right that random people should encounter DMD as a first impression; they should encounter GDC or LDC, since those are the toolsets they will be making direct comparisons against during their evaluation. At the point that they're not yet a D enthusiast, access to cutting-edge language features should mean practically nothing to them.

That said, it would be nice if the DMD developer community at large were able to work closer with GDC/LDC. Are there some changes in workflow that may keep GDC/LDC up to date beside DMD as PRs are added? Possible to produce 'nightlies' for those compilers, so that developers following mainline DMD can easily update their respective compilers to match? Perhaps DMD developers could even develop language features against LDC instead of DMD, and backport to DMD?

For my work, and what I noticed in my other thread, LDC is central to expansion of the D ecosystem, and I think it needs to be taken more seriously by the entire DMD community; it can't be a little thing off to the side.

LDC gives us portability: iOS, Android, Windows, Emscripten, NativeClient, and plenty of other platforms. It's 2015; the fact that we still don't support Android and iOS is just not acceptable. Most computers in the world run those operating systems. LDC is also the only performant way to target Windows, the overwhelmingly largest desktop platform... but we lose the debugger! >_< How can we release products created with D if we still don't have a way to build and run on modern computers?

So, LDC: Windows, Android, iOS... this must be 99.9999% of computers on the planet! LDC needs to be first-class. Ideally, even more polished than DMD, and it should probably be the first contact people have with D.

* I don't mean to down-play GDC, but it can't give us Windows or iOS, which are critical targets. I want to use D in my work, right now. I could... if I could actually target the computers we run code on.
Both you and ketmar are evil. I'm liking these ideas...

Now we just need some pretty and nice packages for e.g. Windows for LDC with full debugger support and we will be good. Last time I looked LLVM still needed a lot of work on Windows, unfortunately. It may be time to direct some people to help them out ;)
May 29 2015
parent "weaselcat" <weaselcat gmail.com> writes:
On Saturday, 30 May 2015 at 04:01:00 UTC, Manu wrote:
 I'm having a pretty good experience with win64 ldc lately. 
 Obviously the
 fact that there's no debug info is a gigantic hole.

 I have a hack environment where I dmd in debug and ldc for 
 release builds,
 but it's really not ideal. And you're limited to code that 
 doesn't expose
 bugs in both compilers.
Does it generate any debug info at all, or does it just lack PDB debug info? If the former, have you tried VisualGDB with it (assuming you're using VS)? Just a guess, I don't use Windows. : )
May 29 2015
prev sibling next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
 On Fri, May 29, 2015 at 06:13:02PM +0000, weaselcat via 
 Digitalmars-d wrote:
 In nearly every benchmark I see D in, the default compiler 
 used is dmd
 which runs computationally intense tasks 4-5x+ slower than 
 GDC/LDC
As I keep saying, in my own compute-intensive projects I have consistently found that dmd-generated code (dmd -O -inline -release) is about 20-30% slower on average, sometimes even up to 50% slower, compared to gdc-generated code (gdc -O3 -finline -frelease). This is measured by actual running time in an actual application, not benchmark-specific code. I have looked at the generated assembly before, and it's clear that the gdc optimizer is way ahead of dmd's. The dmd optimizer starts failing to inline inner loop code after about 1-2 levels of function call nesting, not to mention it's unable to factor out a lot of loop boilerplate code. The gdc optimizer, by contrast, not only factors out almost all loop boilerplate code and inlines inner loop function calls several levels deep, it also unrolls loops in a CPU-specific way, does major loop restructuring, compounded with much more linear code optimization than dmd does, instruction reordering and then refactoring after that, etc., in some cases reducing the size of inner loop code (as in, the amount of code that runs per iteration) by up to 90%. I don't know the internal workings of the dmd optimizer, but it's clear that at present, with almost nobody working on it except Walter, it's never going to catch up. (Maybe this statement will provoke Walter into working his magic? :-P)
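The call nesting being described would look something like this (a sketch of mine, not code from either compiler's test suite):

    // Two levels of calls per iteration: roughly where dmd's inliner
    // reportedly gives up, while gdc flattens the whole chain.
    ulong h(ulong x) { return x ^ (x >> 3); }
    ulong g(ulong x) { return h(x) + 1; }

    ulong f(ulong n)
    {
        ulong s;
        foreach (i; 0 .. n)
            s += g(i); // hot inner loop with nested calls
        return s;
    }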
dmd's backend is also under a proprietary license, reducing the number of people willing to contribute. Not to mention that GDC and LDC benefit heavily from GCC and LLVM respectively; these aren't exactly one-man projects (e.g. Google, Red Hat, Intel, AMD etc. contribute heavily to GCC, and LLVM is basically Apple's baby.)
 [...]
 This isn't the first time I've seen this, in basically every 
 benchmark
 featuring D I have to submit a patch/make a comment that dmd 
 shouldn't
 be used. Make no mistake, this is damaging to D's reputation - 
 how
 well does D's "native efficiency" go over when people are 
 saying it's

 
 LDC and GDC need promoted more.
[...] This will probably offend some people, but I think LDC/GDC should be the default download on dlang.org, and dmd should be provided as an alternative for those who want the latest language version and don't mind the speed compromise. T
I think they probably should be, if only for the licensing issues; dmd can't even be redistributed - AFAIK it's in very, very few D repositories on Linux. re Dicebot:
I did make LDC the default compiler in Arch, but now people are 
unhappy with increased compile times, so I may need to revert it :)
Maybe this should be brought up on LDC's issue tracker (that is, slower compilation times compared to dmd). Although it might have already been discussed.
May 29 2015
next sibling parent "Kai Nacke" <kai redstar.de> writes:
On Friday, 29 May 2015 at 19:04:05 UTC, weaselcat wrote:
 Maybe this should be brought up on LDC's issue tracker (that is, 
 slower compilation times compared to dmd).
 Although it might have already been discussed.
We are aware of this: https://github.com/ldc-developers/ldc/issues/830 Regards, Kai
May 30 2015
prev sibling parent "Kai Nacke" <kai redstar.de> writes:
On Friday, 29 May 2015 at 19:04:05 UTC, weaselcat wrote:
 Not to mention that GDC and LDC benefit heavily from GCC and 
 LLVM respectively; these aren't exactly one-man projects (e.g. 
 Google, Red Hat, Intel, AMD etc. contribute heavily to GCC, and 
 LLVM is basically Apple's baby.)
Google, Intel, AMD, Imagination, ... also contribute to LLVM. I think most companies contributing to GCC contribute to LLVM, too. Regards, Kai
May 30 2015
prev sibling next sibling parent "Chris" <wendlec tcd.ie> writes:
On Friday, 29 May 2015 at 18:38:20 UTC, H. S. Teoh wrote:
 On Fri, May 29, 2015 at 06:13:02PM +0000, weaselcat via 
 Digitalmars-d wrote:
 In nearly every benchmark I see D in, the default compiler 
 used is dmd
 which runs computationally intense tasks 4-5x+ slower than 
 GDC/LDC
As I keep saying, in my own compute-intensive projects I have consistently found that dmd-generated code (dmd -O -inline -release) is about 20-30% slower on average, sometimes even up to 50% slower, compared to gdc-generated code (gdc -O3 -finline -frelease). This is measured by actual running time in an actual application, not benchmark-specific code. I have looked at the generated assembly before, and it's clear that the gdc optimizer is way ahead of dmd's. The dmd optimizer starts failing to inline inner loop code after about 1-2 levels of function call nesting, not to mention it's unable to factor out a lot of loop boilerplate code. The gdc optimizer, by contrast, not only factors out almost all loop boilerplate code and inlines inner loop function calls several levels deep, it also unrolls loops in a CPU-specific way, does major loop restructuring, compounded with much more linear code optimization than dmd does, instruction reordering and then refactoring after that, etc., in some cases reducing the size of inner loop code (as in, the amount of code that runs per iteration) by up to 90%. I don't know the internal workings of the dmd optimizer, but it's clear that at present, with almost nobody working on it except Walter, it's never going to catch up. (Maybe this statement will provoke Walter into working his magic? :-P) [...]
 This isn't the first time I've seen this, in basically every 
 benchmark
 featuring D I have to submit a patch/make a comment that dmd 
 shouldn't
 be used. Make no mistake, this is damaging to D's reputation - 
 how
 well does D's "native efficiency" go over when people are 
 saying it's

 
 LDC and GDC need promoted more.
[...] This will probably offend some people, but I think LDC/GDC should be the default download on dlang.org, and dmd should be provided as an alternative for those who want the latest language version and don't mind the speed compromise. T
LDC can work wonders, indeed. I've seen it. Drawback: GDC and LDC lag behind. D doesn't like legacy code, so I always update my code. Maybe we could synchronize dmd, ldc and gdc faster? Right now dmd is the only way to keep your code up to date.
May 29 2015
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 29/05/2015 19:35, H. S. Teoh via Digitalmars-d wrote:
 [...]
This isn't the first time I've seen this, in basically every benchmark
featuring D I have to submit a patch/make a comment that dmd shouldn't
be used. Make no mistake, this is damaging to D's reputation - how
well does D's "native efficiency" go over when people are saying it's


LDC and GDC need promoted more.
[...] This will probably offend some people, but I think LDC/GDC should be the default download on dlang.org, and dmd should be provided as an alternative for those who want the latest language version and don't mind the speed compromise.
It should be more than just LDC/GDC being the default download on dlang.org; the DM backend and related toolchain should be phased out altogether in favor of LLVM.

Walter might have written great compiler tools in the 90s or so, but in today's internet and FOSS online-collaborative era, how can the Digital Mars toolchain hope to compete with toolchains that have teams of multiple full-time developers working on them (plus a plethora of occasional volunteer contributors)? The difference in manpower and resources is astonishing! And it's only gonna get bigger, since LLVM has more and more people and companies supporting it. At this rate, it may well one day make even GCC old and obsolete, left to be used by FSF zealots only.

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
Jun 05 2015
parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 5 Jun 2015 20:55, "Bruno Medeiros via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On 29/05/2015 19:35, H. S. Teoh via Digitalmars-d wrote:
 [...]

This isn't the first time I've seen this, in basically every benchmark
featuring D I have to submit a patch/make a comment that dmd shouldn't
be used. Make no mistake, this is damaging to D's reputation - how
well does D's "native efficiency" go over when people are saying it's


LDC and GDC need promoted more.
[...] This will probably offend some people, but I think LDC/GDC should be the default download on dlang.org, and dmd should be provided as an alternative for those who want the latest language version and don't mind the speed compromise.
It should be more than just LDC/GDC being the default download on
dlang.org, the DM backend and related toolchain should be phased out altogether in favor of LLVM.
 Walter might have written great compiler tools in the 90s or so, but in
today's internet and FOSS online-collaborative era, how can the Digital Mars toolchain hope to compete with toolchains having teams of multiple full-time developers working on it? (plus a plethora of occasional volunteer contributors). The difference in manpower and resources is astonishing! And it's only gonna get bigger since LLVM is having more and more people and companies supporting it. At this rate, it may well one day make even GCC old and obsolete, left to be used by FSF zealots only.

At the risk of speaking with a lack of foresight, are you on the gcc mailing
list too? If not, get on it. Otherwise you will end up with this kind of
polarised view that X will dominate all.
Jun 05 2015
next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Friday, 5 June 2015 at 22:07:48 UTC, Iain Buclaw wrote:
 On 5 Jun 2015 20:55, "Bruno Medeiros via Digitalmars-d" <
 digitalmars-d puremagic.com> wrote:
 On 29/05/2015 19:35, H. S. Teoh via Digitalmars-d wrote:
 [...]

This isn't the first time I've seen this, in basically 
every benchmark
featuring D I have to submit a patch/make a comment that 
dmd shouldn't
be used. Make no mistake, this is damaging to D's 
reputation - how
well does D's "native efficiency" go over when people are 
saying it's


LDC and GDC need promoted more.
[...] This will probably offend some people, but I think LDC/GDC should be the default download on dlang.org, and dmd should be provided as an alternative for those who want the latest language version and don't mind the speed compromise.
It should be more than just LDC/GDC being the default download on
dlang.org, the DM backend and related toolchain should be phased out altogether in favor of LLVM.
 Walter might have written great compiler tools in the 90s or 
 so, but in
today's internet and FOSS online-collaborative era, how can the Digital Mars toolchain hope to compete with toolchains having teams of multiple full-time developers working on it? (plus a plethora of occasional volunteer contributors). The difference in manpower and resources is astonishing! And it's only gonna get bigger since LLVM is having more and more people and companies supporting it. At this rate, it may well one day make even GCC old and obsolete, left to be used by FSF zealots only.

 At the risk of speaking with a lack of foresight, are you on the 
 gcc mailing list too? If not, get on it. Otherwise you will end 
 up with this kind of polarised view that X will dominate all.
Slightly off topic, but I recently started digging into GDC (on your personal fork, anyway). I find the code pleasantly easy to navigate and understand. I don't think I've given gdc its due credit in this thread. Bye,
Jun 05 2015
parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 6 June 2015 at 01:20, weaselcat via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Friday, 5 June 2015 at 22:07:48 UTC, Iain Buclaw wrote:

 On 5 Jun 2015 20:55, "Bruno Medeiros via Digitalmars-d" <
 digitalmars-d puremagic.com> wrote:

 On 29/05/2015 19:35, H. S. Teoh via Digitalmars-d wrote:

 [...]

  >This isn't the first time I've seen this, in basically >every
 benchmark
featuring D I have to submit a patch/make a comment that >dmd
shouldn't
be used. Make no mistake, this is damaging to D's >reputation - how
well does D's "native efficiency" go over when people are >saying it's


LDC and GDC need promoted more.
[...] This will probably offend some people, but I think LDC/GDC should be the default download on dlang.org, and dmd should be provided as an alternative for those who want the latest language version and don't mind the speed compromise.
It should be more than just LDC/GDC being the default download on
dlang.org, the DM backend and related toolchain should be phased out altogether in favor of LLVM.
 Walter might have written great compiler tools in the 90s or so, but in
today's internet and FOSS online-collaborative era, how can the Digital Mars toolchain hope to compete with toolchains having teams of multiple full-time developers working on it? (plus a plethora of occasional volunteer contributors). The difference in manpower and resources is astonishing! And it's only gonna get bigger since LLVM is having more and more people and companies supporting it. At this rate, it may well one day make even GCC old and obsolete, left to be used by FSF zealots only.

 At the risk of speaking with a lack of foresight, are you on the gcc mailing
 list too? If not, get on it. Otherwise you will end up with this kind of
 polarised view that X will dominate all.
Slightly off topic, but I recently started digging into GDC (on your personal fork, anyway). I find the code pleasantly easy to navigate and understand. I don't think I've given gdc its due credit in this thread. Bye,
If you've been following the 2.067 re-work, that is really the way things are going right now. More encapsulation and less of the flat hierarchical structure is the key to securing future interest. My hope is that GDC will fall into the "what a good frontend should do" category after I'm done.
Jun 05 2015
prev sibling parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 05/06/2015 23:07, Iain Buclaw via Digitalmars-d wrote:
 On 5 Jun 2015 20:55, "Bruno Medeiros via Digitalmars-d"
 <digitalmars-d puremagic.com> wrote:
  >
  > On 29/05/2015 19:35, H. S. Teoh via Digitalmars-d wrote:
  >>
  >> [...]
  >>
  >>> >This isn't the first time I've seen this, in basically every benchmark
  >>> >featuring D I have to submit a patch/make a comment that dmd shouldn't
  >>> >be used. Make no mistake, this is damaging to D's reputation - how
  >>> >well does D's "native efficiency" go over when people are saying it's

  >>> >
  >>> >LDC and GDC need promoted more.
  >>
  >> [...]
  >>
  >>
  >> This will probably offend some people, but I think LDC/GDC should be the
  >> default download on dlang.org, and dmd should be
 provided as an
  >> alternative for those who want the latest language version and don't
  >> mind the speed compromise.
  >
  >
  > It should be more than just LDC/GDC being the default download on
 dlang.org, the DM backend and related toolchain
 should be phased out altogether in favor of LLVM.
  >
  > Walter might have written great compiler tools in the 90s or so, but
 in today's internet and FOSS online-collaborative era, how can the
 Digital Mars toolchain hope to compete with toolchains having teams of
 multiple full-time developers working on it? (plus a plethora of
 occasional volunteer contributors). The difference in manpower and
 resources is astonishing! And it's only gonna get bigger since LLVM is
 having more and more people and companies supporting it. At this rate,
 it may well one day make even GCC old and obsolete, left to be used by
 FSF zealots only.

  At the risk of speaking with a lack of foresight, are you on the gcc
  mailing list too? If not, get on it. Otherwise you will end up with this
  kind of polarised view that X will dominate all.
I'm not on any LLVM mailing list or forum either. It's too much volume for what I need to care/know about. I'm only on the LLVM Weekly newsletter. If GCC has a similar newsletter I might sign up to that. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Jun 09 2015
prev sibling next sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 29 May 2015 20:15, "weaselcat via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 In nearly every benchmark I see D in, the default compiler used is dmd
which runs computationally intense tasks 4-5x+ slower than GDC/LDC
 example of a random blog post I found:
http://vaskir.blogspot.com/2015/04/computing-cryptography-hashes-rust-vs-f.html
 D is up to 10x(!) slower than Rust.

 Well... dmd is. Under LDC:
 MD5 is 5x faster,
 SHA1 is about the same,
 SHA256 is 10x faster,
 SHA512 is 10x faster.

 The kicker?
 _all_ of these were faster than the Rust timings(albeit by 5-10%) when
using LDC.
 This isn't the first time I've seen this, in basically every benchmark
featuring D I have to submit a patch/make a comment that dmd shouldn't be used. Make no mistake, this is damaging to D's reputation - how well does D's "native efficiency" go over when people are saying it's slower than
 LDC and GDC need promoted more.

 Bye,
It's also hurting in a lot of recent pull requests I've been seeing: people go out of their way to micro-optimise code for DMD, but ultimately the change ends up being rejected because GDC/LDC provide said optimisations for free. It's not just PR damage, but also a waste/drain on resources for people who could be better focusing their limited free time.
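As a hedged illustration of the kind of change meant here (invented for this reply, not lifted from a real PR), compare a helper-based loop with its hand-expanded twin:

    // Natural version: trusts the compiler to inline the helper.
    ulong mulAcc(ulong s, uint a, uint b) { return s + cast(ulong) a * b; }

    ulong dot(const(uint)[] a, const(uint)[] b)
    {
        ulong s;
        foreach (i; 0 .. a.length)
            s = mulAcc(s, a[i], b[i]);
        return s;
    }

    // "Optimised for dmd" version a PR might hand-write instead.
    ulong dotManual(const(uint)[] a, const(uint)[] b)
    {
        ulong s;
        foreach (i; 0 .. a.length)
            s += cast(ulong) a[i] * b[i]; // helper expanded by hand
        return s;
    }

GDC/LDC typically emit identical code for both versions, which is exactly why such rewrites get turned away as noise.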
May 29 2015
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/29/15 12:13 PM, weaselcat wrote:
 In nearly every benchmark I see D in, the default compiler used is dmd
 which runs computationally intense tasks 4-5x+ slower than GDC/LDC

 example of a random blog post I found:
 http://vaskir.blogspot.com/2015/04/computing-cryptography-hashes-rust-vs-f.html
One problem here is pointed out by the blogger: "I tried to compile with LDC2 and failed (windows 7 x64): ..." Can he be helped? Andrei
May 29 2015
parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Friday, 29 May 2015 at 19:17:14 UTC, Andrei Alexandrescu wrote:
 On 5/29/15 12:13 PM, weaselcat wrote:
 In nearly every benchmark I see D in, the default compiler
 used is dmd
 which runs computationally intense tasks 4-5x+ slower than
 GDC/LDC

 example of a random blog post I found: 
 http://vaskir.blogspot.com/2015/04/computing-cryptography-hashes-rust-vs-f.html
One problem here is pointed out by the blogger: "I tried to compile with LDC2 and failed (windows 7 x64): ..." Can he be helped?
http://vaskir.blogspot.com/2015/04/computing-cryptography-hashes-rust-vs-f.html?showComment=1432925658409#c6228383399074019471
May 29 2015
parent "weaselcat" <weaselcat gmail.com> writes:
On Friday, 29 May 2015 at 20:23:13 UTC, Vladimir Panteleev wrote:
 On Friday, 29 May 2015 at 19:17:14 UTC, Andrei Alexandrescu 
 wrote:
 On 5/29/15 12:13 PM, weaselcat wrote:
 In nearly every benchmark I see D in, the default compiler
 used is dmd
 which runs computationally intense tasks 4-5x+ slower than
 GDC/LDC

 example of a random blog post I found: 
 http://vaskir.blogspot.com/2015/04/computing-cryptography-hashes-rust-vs-f.html
One problem here is pointed out by the blogger: "I tried to compile with LDC2 and failed (windows 7 x64): ..." Can he be helped?
http://vaskir.blogspot.com/2015/04/computing-cryptography-hashes-rust-vs-f.html?showComment=1432925658409#c6228383399074019471
Thanks, he updated the results.

DMD:
  MD5    - 16.05s (470% slower)
  SHA1   - 2.35s  (19% faster)
  SHA256 - 47.96s (690% slower (!))
  SHA512 - 61.47s (1375% slower (!))

LDC2:
  MD5    - 2.18s (55% faster)
  SHA1   - 2.88s (same)
  SHA256 - 6.79s (3% faster)
  SHA512 - 4.6s  (3% slower)

Percentages are relative to Rust.
May 29 2015
prev sibling next sibling parent reply "Martin Krejcirik" <mk-junk i-line.cz> writes:
IMHO all that is needed is to update the download page with some 
description of the differences between the compilers, like:

dmd
   - reference compiler, Digital Mars backend
   - best for latest dlang features, fast compile times

gdc
   - GNU gcc backend based compiler
   - best for portability and compatibility with gnu tools

ldc
   - LLVM backend based compiler
   - best for optimized builds, best runtime speed

Note to benchmark users: please use ldc compiler with -inline -O 
-boundscheck=off (or whatever is correct for LDC) options for 
best results
May 29 2015
next sibling parent "Dennis Ritchie" <dennis.ritchie mail.ru> writes:
On Friday, 29 May 2015 at 20:02:49 UTC, Martin Krejcirik wrote:
 IMHO all that is needed is to update the download page with 
 some description of the differences between the compilers, like:
+1
May 29 2015
prev sibling next sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Friday, 29 May 2015 at 20:02:49 UTC, Martin Krejcirik wrote:
 Note to benchmark users: please use ldc compiler with -inline 
 -O -boundscheck=off (or whatever is correct for LDC) options 
 for best results
AFAIK you shouldn't use the -inline flag with LDC, as it tells LDC to run the LLVM inlining pass directly. LDC's inlining is controlled by -enable-inlining and is on by default at -O2 and higher. I believe these are similar, except LDC's -enable-inlining has better cost analysis configured for the pass(?) -inline should probably be renamed, because this is confusing due to dmd's usage of it. But yes, a simple blurb on which compiler flags to use for optimization would probably help, as there seems to be some confusion about this due to differing compiler flags. I imagine Iain, Kai, Nadlinger, etc. would know which ones to use.
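Something like the following could anchor such a blurb. It's a sketch of mine, with the flag sets taken from this thread rather than from any official page, so verify them against each compiler's help output:

    // bench.d - trivial harness for comparing the three compilers:
    //
    //   dmd  -O -inline -release bench.d
    //   gdc  -O3 -finline -frelease -o bench bench.d
    //   ldc2 -O3 -release bench.d      (inlining is on at -O2 and up)
    import std.datetime : StopWatch;
    import std.stdio : writefln;

    ulong work(ulong n)
    {
        ulong sum;
        foreach (i; 0 .. n)
            sum += i * i % 7; // cheap, optimizer-sensitive inner loop
        return sum;
    }

    void main()
    {
        StopWatch sw;
        sw.start();
        immutable r = work(200_000_000);
        sw.stop();
        writefln("result=%s elapsed=%s ms", r, sw.peek().msecs);
    }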
May 29 2015
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Fri, 29 May 2015 20:02:47 +0000, Martin Krejcirik wrote:

 dmd
    - reference compiler, Digital Mars backend - best for latest dlang
      features, fast compile times

 gdc
    - GNU gcc backend based compiler - best for portability and
      compatibility with gnu tools

 ldc
    - LLVM backend based compiler - best for optimized builds, best
      runtime speed

does LDC really surpass GDC in generated code speed?
May 29 2015
parent reply "weaselcat" <weaselcat gmail.com> writes:
On Friday, 29 May 2015 at 23:19:36 UTC, ketmar wrote:
 On Fri, 29 May 2015 20:02:47 +0000, Martin Krejcirik wrote:

 dmd
    - reference compiler, Digital Mars backend - best for latest dlang
      features, fast compile times
 
 gdc
    - GNU gcc backend based compiler - best for portability and
      compatibility with gnu tools
 
 ldc
    - LLVM backend based compiler - best for optimized builds, best
      runtime speed
 does LDC really surpass GDC in generated code speed?
Yes, GDC is 2-3x slower than LDC on this bench, for example. I think it's because of a lack of cross-module inlining.
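A sketch of the situation being suspected here (hypothetical two-file layout, mine): when modules are compiled separately, a tiny helper from another module stays an out-of-line call in the hot loop unless the compiler can inline across module boundaries.

    // ---- util.d (hypothetical) ----
    module util;
    ulong scale(ulong x) { return x * 3 + 1; }

    // ---- app.d (hypothetical) ----
    module app;
    import util : scale;

    ulong hot(ulong n)
    {
        ulong acc;
        foreach (i; 0 .. n)
            acc += scale(i); // a real call per iteration unless the
        return acc;          // compiler can see util.scale's body
    }

Passing all modules in a single compiler invocation, or using link-time optimisation, is the commonly suggested workaround.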
May 29 2015
parent ketmar <ketmar ketmar.no-ip.org> writes:
On Fri, 29 May 2015 23:29:39 +0000, weaselcat wrote:

 On Friday, 29 May 2015 at 23:19:36 UTC, ketmar wrote:
 On Fri, 29 May 2015 20:02:47 +0000, Martin Krejcirik wrote:

 dmd
    - reference compiler, Digital Mars backend - best for latest dlang
      features, fast compile times

 gdc
    - GNU gcc backend based compiler - best for portability and
      compatibility with gnu tools

 ldc
    - LLVM backend based compiler - best for optimized builds, best
      runtime speed

 does LDC really surpass GDC in generated code speed?
Yes, GDC is 2-3x slower than LDC on this bench, for example. I think it's because of a lack of cross-module inlining.
thanks for the info.
May 29 2015
prev sibling parent reply "Kyoji Klyden" <kyojiklyden yahoo.com> writes:
Honestly I've never taken DMD to be "the production compiler". 
I've always left that to the GNU compilers. GDC has all the magic 
and years of work in its backend, so I'm not sure how dmd can 
compare. As others have said, it's really the frontend that DMD 
provides that matters; once you have that, you can more or less 
just stick it onto whichever backend works for you. Though DMD 
is definitely not entirely useless; I use it all the time, mainly 
for prototypes, quick builds, and testing libraries.

Also, if someone does speed tests to see how powerful D is, 
how clueless would they have to be to check only dmd? You don't 
just compile C++ with MSVC and then say "Welp, it looks like C++ 
is just slow and shitty". :P
You can probably safely dismiss any speed test that shows you 
only one compiler.


So personally I vote that speed optimizations on DMD are a waste 
of time at the moment.
May 30 2015
parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Sat, 30 May 2015 12:00:57 +0000, Kyoji Klyden wrote:

 So personally I vote that speed optimizations on DMD are a waste of time
 at the moment.
it's not only a waste of time, it's unrealistic to make DMD backend's quality comparable to GDC/LDC. it would require a complete rewrite of the backend and many man-years of work. and GDC/LDC will not simply sit frozen all this time.
May 30 2015
parent reply =?UTF-8?B?Ik3DoXJjaW8=?= Martins" <marcioapm gmail.com> writes:
On Saturday, 30 May 2015 at 14:29:56 UTC, ketmar wrote:
 On Sat, 30 May 2015 12:00:57 +0000, Kyoji Klyden wrote:

 So personally I vote that speed optimizations on DMD are a 
 waste of time
 at the moment.
it's not only a waste of time, it's unrealistic to make DMD backend's quality comparable to GDC/LDC. it would require a complete rewrite of the backend and many man-years of work. and GDC/LDC will not simply sit frozen all this time.
+1 for LDC as first class!

D would become a lot more appealing if it could take advantage of the LLVM tooling already available!

Regarding the speed problem - one could always give LDC a nitro switch, where it simply runs fewer of the expensive passes, reducing codegen quality but improving speed. Would that work? I'm assuming the "slowness" in LLVM comes from the optimization passes.

Would clang's thread-sanitizer and address-sanitizer be adaptable and usable with D as well?
May 30 2015
next sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Saturday, 30 May 2015 at 17:00:18 UTC, Márcio Martins wrote:
 Would clang's thread-sanitizer and address-sanitizer be 
 adaptable and usable with D as well?
These are already usable from LDC. Make sure you use the -gcc=clang flag.
May 30 2015
prev sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 30 May 2015 19:05, "via Digitalmars-d" <digitalmars-d puremagic.com>
wrote:
 On Saturday, 30 May 2015 at 14:29:56 UTC, ketmar wrote:
 On Sat, 30 May 2015 12:00:57 +0000, Kyoji Klyden wrote:

 So personally I vote that speed optimizations on DMD are a waste of time
 at the moment.
it's not only a waste of time, it's unrealistic to make DMD backend's quality
comparable to GDC/LDC. it would require a complete rewrite of the backend
and many man-years of work. and GDC/LDC will not simply sit frozen all
this time.
 +1 for LDC as first class!
 D would become a lot more appealing if it could take advantage of the
 LLVM tooling already available!
 Regarding the speed problem - one could always give LDC a nitro switch,
 where it simply runs fewer of the expensive passes, reducing codegen
 quality but improving speed. Would that work? I'm assuming the
 "slowness" in LLVM comes from the optimization passes.

I'd imagine the situation is similar with GDC. For large compilations,
it's the optimizer; for small compilations, it's the linker. The small
case is at least solved by switching to shared libraries. For larger
compilations, using only -O1 optimisations should be fine for most
programs that aren't trying to beat some sort of benchmark.
May 30 2015