digitalmars.D - Study: build times for D programs
- Andrei Alexandrescu (17/17) Jul 24 2012 Hello,
- Dmitry Olshansky (6/14) Jul 24 2012 Well, I'd rather pass this to someone else but I have DMDScript repo
- Andrei Alexandrescu (3/17) Jul 24 2012 Excellent, thanks!
- Dmitry Olshansky (8/26) Jul 24 2012 Done:
- Roman D. Boiko (5/7) Jul 24 2012 That would provide performance (compilation and run-time) for D1
- Paulo Pinto (1/7) Jul 24 2012 Still, is a good starting point.
- Walter Bright (8/15) Jul 24 2012 The reality is, no matter what such benchmark is chosen, it will be atta...
- Dmitry Olshansky (5/10) Jul 24 2012 In fact it's rather D2-ified. But yeah, no template heavy code in sight....
- Andrei Alexandrescu (15/20) Jul 24 2012 Ehm. There's any number of arguments that can be made to question the
- Roman D. Boiko (8/12) Jul 24 2012 OK. And it could serve as a basis for further variations:
- Walter Bright (4/7) Jul 24 2012 The translation is also just that, a line-by-line translation that start...
- Andrej Mitrovic (14/15) Jul 24 2012 I've got a codebase where it takes DMD 15 seconds to output an error
- Nick Sabalausky (10/28) Jul 24 2012 Yea. Programs using Goldie ( semitwist.com/goldie ) take a long time to
- Jonathan M Davis (9/17) Jul 24 2012 I don't have any hard evidence for it, but I've always gotten the impres...
- Peter Alexander (3/33) Jul 24 2012 There's also the nasty O(n^2) optimiser issue.
- Walter Bright (4/7) Jul 24 2012 I wouldn't be a bit surprised to find that there are some O(n*n) or wors...
- Jacob Carlborg (7/14) Jul 24 2012 We did some profiling on derelict in the process of adding support for
- Guillaume Chatelet (28/35) Jul 24 2012 Well I kind of did exactly that.
- Walter Bright (12/15) Jul 24 2012 Small programs are completely inadequate for getting any reasonable meas...
- Andrei Alexandrescu (5/11) Jul 24 2012 Nevertheless there's value in the shootout. Yes, if someone is up for it...
- Isaac Gouy (5/9) Jul 25 2012 The Python measurement scripts are here -- http://shootout.alioth.debian...
- Isaac Gouy (5/10) Jul 25 2012 -snip-
- Joseph Rushton Wakeling (21/24) Jul 24 2012 Suggest that this gets done with all 3 of the main D compilers, not just...
- Ali Çehreli (5/10) Jul 24 2012 Those C++ builds have very few C++ source files, right? In my experience...
- David Nadlinger (6/7) Jul 25 2012 Even for a rough comparison of compile times, you need to include
- Joseph Rushton Wakeling (8/13) Jul 25 2012 C++ compiler and library flags: -ansi -pedantic -Wall -O3 -march=native
- Andrei Alexandrescu (3/8) Jul 25 2012 Yes, and both debug and release build times are important.
- Walter Bright (5/6) Jul 25 2012 Optimized build time comparisons are less relevant - are you really will...
- Andrei Alexandrescu (8/14) Jul 25 2012 There are systems that only work in release mode (e.g. performance is
- Walter Bright (3/16) Jul 25 2012 The easy way to improve optimized build times is to do less optimization...
- Rainer Schuetze (8/14) Jul 25 2012 The "edit-compile-debug loop" is a use case where the D module system
- Andrei Alexandrescu (5/23) Jul 25 2012 The same dependency management techniques can be applied to large D
- Nick Sabalausky (8/22) Jul 25 2012 Aren't there still issues with what object files DMD chooses to store
- Jacob Carlborg (6/12) Jul 26 2012 I'm pretty sure nothing has changed. But Walter said if you use the -lib...
- Rainer Schuetze (17/42) Jul 25 2012 Incremental compilation does not work so well because
- Jonathan M Davis (14/31) Jul 25 2012 D should actually compile _faster_ if you compile everything at once -
- Jacob Carlborg (8/20) Jul 26 2012 Incremental builds don't have to mean "pass a single file to the
- Ali Çehreli (6/11) Jul 26 2012 GNU make has the special $? prerequisite that may help with the above:
- Jacob Carlborg (4/9) Jul 26 2012 I'm trying to avoid "make" as much as possible.
- SomeDude (2/15) Jul 28 2012 +1
- Nick Sabalausky (5/19) Jul 26 2012 944
- Andrej Mitrovic (5/10) Jul 25 2012 That's assuming that the lexing/parsing is the bottleneck for DMD. For
- Walter Bright (9/15) Jul 25 2012 I suspect that's one of two possibilities:
- Rainer Schuetze (4/23) Jul 25 2012 I think working with di-files is too painful. A lot of the analysis in
- Joseph Rushton Wakeling (3/4) Jul 26 2012 If you can advise some flag combinations (for D and C++) you'd like to s...
- Andrei Alexandrescu (5/9) Jul 26 2012 The classic to ones are: (a) no flags at all, (b) -O -release -inline,
- Joseph Rushton Wakeling (17/21) Jul 26 2012 Here's a little table of DMD to GDC comparisons for the Dregs codebase:
- Jonathan M Davis (12/36) Jul 26 2012 Clearly -O is where the big runtime speed difference is at between dmd a...
- David Nadlinger (12/21) Jul 26 2012 GDC probably performs inlining by default on -O2/-O3, just like
- Joseph Rushton Wakeling (23/24) Jul 26 2012 I was surprised that using -inline alone (without any optimization optio...
- Iain Buclaw (11/18) Jul 26 2012 -inline is mapped to -finline-functions in GDC. Inlining is possibly
- Joseph Rushton Wakeling (17/22) Jul 27 2012 Good to know. In this case it's all compiled together in one go:
- Iain Buclaw (8/33) Jul 27 2012 My best assumption would be it may say something more about the way
- David Nadlinger (6/10) Jul 26 2012 Oh, and I don't know what exactly you are referring to here, but
- Jonathan M Davis (9/19) Jul 26 2012 That was my point. -inline seems to be pretty much identical between th=
- David Nadlinger (6/13) Jul 26 2012 Ah, okay, I see what you meant. But no, as far as I'm aware, GDC
- ixid (1/2) Jul 25 2012 Where would one find these ideas?
- Ali Çehreli (10/12) Jul 25 2012 There are some papers at Andrei's site:
- Simen Kjaeraas (4/6) Jul 25 2012 http://www.amazon.com/Modern-Design-Generic-Programming-Patterns/dp/0201...
- Jonathan M Davis (11/22) Jul 25 2012 Not necessarily. The point is that there's extra work that has to be don...
- Jacob Carlborg (5/7) Jul 26 2012 Why? Just pass all the files to the compiler at once. Nothing says an
- Andrej Mitrovic (4/8) Jul 25 2012 That's exactly my point, you can take advantage of parallelism
- Andrej Mitrovic (3/4) Jul 25 2012 Well that would probably only be done once. With full builds you do it
- Jonathan M Davis (7/17) Jul 25 2012 Well, regardless, my and Andrei's point was that C++ has nothing on us h...
- Steven Schveighoffer (13/17) Aug 24 2012 Might I draw attention again to this bug:
- d_follower (6/24) Aug 24 2012 You can try testing DMD (written in C++) against DDMD (written in
Hello, I was talking to Walter on how to define a good study of D's compilation speed. We figured that we clearly need a good baseline, otherwise numbers have little meaning. One idea would be to take a real, non-trivial application, written in both D and another compiled language. We then can measure build times for both applications, and also measure the relative speeds of the generated executables. Although it sounds daunting to write the same nontrivial program twice, it turns out such an application does exist: dmdscript, a Javascript engine written by Walter in both C++ and D. It has over 40KLOC so it's of a good size to play with. What we need is a volunteer who dusts off the codebase (e.g. the D source is in D1 and should be adjusted to compile with D2), runs careful measurements, and shows the results. Is anyone interested? Thanks, Andrei
Jul 24 2012
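Andrei's request for "careful measurements" is worth pinning down before anyone runs the comparison. Below is a minimal sketch of a timing harness; the helper name and the commented-out dmd/g++ invocations are illustrative assumptions, not taken from the thread. It keeps the fastest of several runs to filter out disk-cache and scheduler noise:

```python
import subprocess
import time

def best_build_time(cmd, runs=5):
    """Return the minimum wall-clock time in seconds over several runs.

    The minimum, rather than the mean, discards outliers caused by cold
    disk caches and other processes competing for the CPU.
    """
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - start)
    return min(times)

# Hypothetical usage -- assumes dmd and g++ on PATH and single-file
# entry points, which the real dmdscript build may not have:
# d_secs   = best_build_time(["dmd", "-O", "-release", "dmdscript.d"])
# cpp_secs = best_build_time(["g++", "-O3", "-o", "ds", "dmdscript.cpp"])
```

The same harness would also time the generated executables on a Javascript workload, covering both halves of the proposed study.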
On 24-Jul-12 18:34, Andrei Alexandrescu wrote:Hello, Although it sounds daunting to write the same nontrivial program twice, it turns out such an application does exist: dmdscript, a Javascript engine written by Walter in both C++ and D. It has over 40KLOC so it's of a good size to play with. What we need is a volunteer who dusts off the codebase (e.g. the D source is in D1 and should be adjusted to compile with D2), run careful measurements, and show the results. Is anyone interested?Well, I'd rather pass this to someone else but I have DMDScript repo that could be built with DMD ~ 2.056. I'll upload it to github, been meaning to do this for ages. -- Dmitry Olshansky
Jul 24 2012
On 7/24/12 10:53 AM, Dmitry Olshansky wrote:On 24-Jul-12 18:34, Andrei Alexandrescu wrote:Excellent, thanks! AndreiHello, Although it sounds daunting to write the same nontrivial program twice, it turns out such an application does exist: dmdscript, a Javascript engine written by Walter in both C++ and D. It has over 40KLOC so it's of a good size to play with. What we need is a volunteer who dusts off the codebase (e.g. the D source is in D1 and should be adjusted to compile with D2), run careful measurements, and show the results. Is anyone interested?Well, I'd rather pass this to someone else but I have DMDScript repo that could be built with DMD ~ 2.056. I'll upload it to github, been meaning to do this for ages.
Jul 24 2012
On 24-Jul-12 18:54, Andrei Alexandrescu wrote:On 7/24/12 10:53 AM, Dmitry Olshansky wrote:Done: https://github.com/blackwhale/DMDScript An awful lot of stuff got deprecated, e.g. it still uses class A: public B{...} syntax. To those taking this task - ready your shovels ;) -- Dmitry OlshanskyOn 24-Jul-12 18:34, Andrei Alexandrescu wrote:Excellent, thanks! AndreiHello, Although it sounds daunting to write the same nontrivial program twice, it turns out such an application does exist: dmdscript, a Javascript engine written by Walter in both C++ and D. It has over 40KLOC so it's of a good size to play with. What we need is a volunteer who dusts off the codebase (e.g. the D source is in D1 and should be adjusted to compile with D2), run careful measurements, and show the results. Is anyone interested?Well, I'd rather pass this to someone else but I have DMDScript repo that could be built with DMD ~ 2.056. I'll upload it to github, been meaning to do this for ages.
Jul 24 2012
On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:the D source is in D1 and should be adjusted to compile with D2),That would provide performance (compilation and run-time) for D1 only (with D2 compiler). Performance of a typical D2 app would likely be different.
Jul 24 2012
"Roman D. Boiko" wrote in message news:hpibxcqsmlpmgyngjzwp forum.dlang.org... On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:Still, it is a good starting point.the D source is in D1 and should be adjusted to compile with D2),That would provide performance (compilation and run-time) for D1 only (with D2 compiler). Performance of a typical D2 app would likely be different.
Jul 24 2012
On 7/24/2012 7:58 AM, Paulo Pinto wrote:The reality is, no matter what such benchmark is chosen, it will be attacked as being biased. There is no such thing as a perfect apples-apples comparison between languages, and there'll be no shortage of criticism of any shortcomings, valid and invalid. That doesn't mean we shouldn't do it. Heck, I've even been accused of "sabotaging" the Digital Mars C++ compiler in order to make D look good!"Roman D. Boiko" wrote in message news:hpibxcqsmlpmgyngjzwp forum.dlang.org... On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:Still, is a good starting point.the D source is in D1 and should be adjusted to compile with D2),That would provide performance (compilation and run-time) for D1 only (with D2 compiler). Performance of a typical D2 app would likely be different.
Jul 24 2012
On 24-Jul-12 18:54, Roman D. Boiko wrote:On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:In fact it's rather D2-ified. But yeah, no template-heavy code in sight. It doesn't even use std.range/std.algorithm IIRC. -- Dmitry Olshanskythe D source is in D1 and should be adjusted to compile with D2),That would provide performance (compilation and run-time) for D1 only (with D2 compiler). Performance of a typical D2 app would likely be different.
Jul 24 2012
On 7/24/12 10:54 AM, Roman D. Boiko wrote:On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:Ehm. There's any number of arguments that can be made to question the validity of the study:

* the coding style does not use the entire language in either or both implementations
* the application domain favors one language or the other
* the application's use of libraries is too low/too high
* the translation is too literal
* the translation changes the size of the code (which is the case here, as the D version is actually shorter)

Nevertheless, I think there is value in the study. We're looking at a real nontrivial application that wasn't written for a study, but for actual use, and that implements the same design and same functionality in both languages. Andreithe D source is in D1 and should be adjusted to compile with D2),That would provide performance (compilation and run-time) for D1 only (with D2 compiler). Performance of a typical D2 app would likely be different.
Jul 24 2012
On Tuesday, 24 July 2012 at 15:06:58 UTC, Andrei Alexandrescu wrote:Nevertheless, I think there is value in the study. We're looking at a real nontrivial application that wasn't written for a study, but for actual use, and that implements the same design and same functionality in both languages.OK. And it could serve as a basis for further variations:

* introduce some feature (e.g., ranges), measure impact
* measure impact of multiple features alone and in combination

Of course, trivial changes would be unlikely to yield anything useful, but I believe there is a way to produce valuable data through controlled research.
Jul 24 2012
On 7/24/2012 8:06 AM, Andrei Alexandrescu wrote:Nevertheless, I think there is value in the study. We're looking at a real nontrivial application that wasn't written for a study, but for actual use, and that implements the same design and same functionality in both languages.The translation is also just that, a line-by-line translation that started by copying the .c source files to .d. It's probably as good as you're going to get in comparing compile speed.
Jul 24 2012
On 7/24/12, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:snipI've got a codebase where it takes DMD 15 seconds to output an error message to stdout. The error message is 3000 lines long. (and people thought C++ errors were bad!). It's all thanks to this bug: http://d.puremagic.com/issues/show_bug.cgi?id=8082 The codebase isn't public yet so I can't help you with comparisons. Non-release full builds take 16 seconds for a template-heavy ~12k codebase (without counting lines of external dependencies). I use a lot of static foreach loops btw. Personally I think full builds are very fast compared to C++, although the transition from a small codebase which takes less than a second to compile to a bigger codebase which takes over a dozen seconds to compile is an unpleasant experience. I'd love to see DMD speed up its compile-time features like templates, mixins, static foreach, etc.
Jul 24 2012
On Tue, 24 Jul 2012 18:53:25 +0200 Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:On 7/24/12, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:Yea. Programs using Goldie ( semitwist.com/goldie ) take a long time to compile (by D standards, not by C++ standards). I tried to benchmark it a while back, and was never really confident in the results I was getting or my understanding of the DMD source, so I never brought it up before. But it *seemed* to be template matching that was the big bottleneck (ie, IIUC, determining which template to instantiate, and I think the function was actually called "match" or something like that). Goldie does make use of a *lot* of that sort of thing.snipI've got a codebase where it takes DMD 15 seconds to output an error message to stdout. The error message is 3000 lines long. (and people thought C++ errors were bad!). It's all thanks to this bug: http://d.puremagic.com/issues/show_bug.cgi?id=8082 The codebase isn't public yet so I can't help you with comparisons. Non-release full builds take 16 seconds for a template-heavy ~12k codebase (without counting lines of external dependencies). I use a lot of static foreach loops btw. Personally I think full builds are very fast compared to C++, although the transition from a small codebase which takes less than a second to compile to a bigger codebase which takes over a dozen seconds to compile is an unpleasant experience. I'd love to see DMD speed up its compile-time features like templates, mixins, static foreach, etc.
Jul 24 2012
On Tuesday, July 24, 2012 15:49:38 Nick Sabalausky wrote:Yea. Programs using Goldie ( semitwist.com/goldie ) take a long time to compile (by D standards, not by C++ standards). I tried to benchmark it a while back, and was never really confident in the results I was getting or my understanding of the DMD source, so I never brought it up before. But it *seemed* to be template matching that was the big bottleneck (ie, IIUC, determining which template to instantiate, and I think the function was actually called "match" or something like that). Goldie does make use of a *lot* of that sort of thing.I don't have any hard evidence for it, but I've always gotten the impression that it was templates, mixins, and CTFE which really slowed down compilation. Certainly, they increase the memory consumption of the compiler by quite a bit. My guess would be that if we were looking to improve the compiler's performance, that's where we'd need to focus. But we'd have to actually profile the compiler on a variety of projects to be sure of that (which is at least partially related to what Andrei is suggesting). - Jonathan M Davis
Jul 24 2012
On Tuesday, 24 July 2012 at 22:19:07 UTC, Jonathan M Davis wrote:On Tuesday, July 24, 2012 15:49:38 Nick Sabalausky wrote:There's also the nasty O(n^2) optimiser issue. http://d.puremagic.com/issues/show_bug.cgi?id=7157Yea. Programs using Goldie ( semitwist.com/goldie ) take a long time to compile (by D standards, not by C++ standards). I tried to benchmark it a while back, and was never really confident in the results I was getting or my understanding of the DMD source, so I never brought it up before. But it *seemed* to be template matching that was the big bottleneck (ie, IIUC, determining which template to instantiate, and I think the function was actually called "match" or something like that). Goldie does make use of a *lot* of that sort of thing.I don't have any hard evidence for it, but I've always gotten the impression that it was templates, mixins, and CTFE which really slowed down compilation. Certainly, they increase the memory consumption of the compiler by quite a bit. My guess would be that if we were looking to improve the compiler's performance, that's where we'd need to focus. But we'd have to actually profile the compiler on a variety of projects to be sure of that (which is at least partially related to what Andrei is suggesting). - Jonathan M Davis
Jul 24 2012
On 7/24/2012 3:18 PM, Jonathan M Davis wrote:But we'd have to actually profile the compiler on a variety of projects to be sure of that (which is at least partially related to what Andrei is suggesting).I wouldn't be a bit surprised to find that there are some O(n*n) or worse algorithms embedded in the compiler that can be triggered by some types of code. Profiling is the way to root them out.
Jul 24 2012
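One way to hunt for the O(n*n)-or-worse algorithms Walter suspects is to time the compiler on generated inputs of growing size and estimate the growth exponent from a log-log fit. A sketch under that assumption (the function is mine, not from the thread; real measurements would feed it source sizes and the corresponding compile times):

```python
import math

def scaling_exponent(sizes, times):
    """Least-squares slope of log(time) against log(size).

    A slope near 1 suggests roughly linear compile times; a slope near
    2 or higher points at a superlinear algorithm worth profiling.
    """
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    count = len(xs)
    mean_x = sum(xs) / count
    mean_y = sum(ys) / count
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den
```

The exponent only flags where to look; an actual profiler run is still needed to find the offending function.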
On 2012-07-25 00:18, Jonathan M Davis wrote:I don't have any hard evidence for it, but I've always gotten the impression that it was templates, mixins, and CTFE which really slowed down compilation. Certainly, they increase the memory consumption of the compiler by quite a bit. My guess would be that if we were looking to improve the compiler's performance, that's where we'd need to focus. But we'd have to actually profile the compiler on a variety of projects to be sure of that (which is at least partially related to what Andrei is suggesting).We did some profiling on derelict in the process of adding support for D2. This was mostly testing string mixins, the result was: It's a lot faster to use a few string mixins containing a lot of code than many string mixins containing very little code. -- /Jacob Carlborg
Jul 24 2012
On 07/24/12 16:34, Andrei Alexandrescu wrote:I was talking to Walter on how to define a good study of D's compilation speed. We figured that we clearly need a good baseline, otherwise numbers have little meaning.I agree.One idea would be to take a real, non-trivial application, written in both D and another compiled language. We then can measure build times for both applications, and also measure the relative speeds of the generated executables.Well I kind of did exactly that. I was planning to start a Blog ("you know the blog you should really really start but can't find time to do so") with such a comparison. I started it a few months ago and can't finish the post so it's still there, lying half finished. But as the subject pops out of the NG it would be stupid not to talk about it. I intended to add relevant numbers and go from deterministic measurable facts to more subjective remarks ( was it fun ? is it more maintainable ? ) but I really just did a bit of the first part :( Anyway, so for people interested in my "findings" here is the half finished post : http://goo.gl/16Yrb This could serve as a basis of do's and don'ts for a more relevant comparison as Andrei proposed. For instance it could be interesting to compare the performance of several C++ and D compilers to get a measure of the performance standard deviation expected within the language. Also I think the D code could have been more idiomatic and optimized further: it was just a quick test (yet quite time-consuming). Both projects are opensource, one is endorsed by the company I'm working for (https://github.com/mikrosimage/sequenceparser), the other one is just a personal project for the purpose of the comparison ( https://github.com/gchatelet/d_sequence_parser ) By the way, it reminds me of the 'Computer Language Benchmarks Game' (http://shootout.alioth.debian.org/). I know D is not welcome aboard but couldn't we try to run the game for ourselves so as to have some more data? -- Guillaume
Jul 24 2012
On 7/24/2012 11:02 AM, Guillaume Chatelet wrote:By the way, it reminds me of the 'Computer Language Benchmarks Game' (http://shootout.alioth.debian.org/). I know D is not welcome aboard but couldn't we try do run the game for ourself so to have some more data ?Small programs are completely inadequate for getting any reasonable measure of compiler speed. Even worse, they can be terribly wrong. (Back in the olden days, when men were men and the sun revolved about the earth, everyone raved about Borland's compilation speed. In tests I ran myself, I found that it was fast, right up until you hit a certain size of source code, maybe about 5000 lines. Then, it fell off a cliff, and compile speed was terrible. But hey, it looked great in those tiny benchmarks.) The people who care about compile speed are compiling gigantic programs, and smallish ones can and do exhibit a very different performance profile. DMDScript is a medium sized program, not a gigantic one, but it's the best we've got for comparison.
Jul 24 2012
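A cliff like the Borland one can be probed for directly: generate synthetic modules of increasing size and time a compile of each. A minimal generator sketch (illustrative only; trivial functions won't exercise templates or CTFE, which other posts in this thread identify as the expensive features):

```python
def synthetic_d_module(n_funcs):
    """Build the text of a trivial D module with n_funcs functions.

    Writing modules of, say, 1000..50000 functions to disk and timing
    `dmd -c` on each would show whether compile speed degrades smoothly
    or falls off a cliff past some source size.
    """
    lines = ["module synthetic;"]
    for i in range(n_funcs):
        lines.append(f"int f{i}(int x) {{ return x + {i}; }}")
    return "\n".join(lines) + "\n"
```

Synthetic probes complement rather than replace the DMDScript comparison: they isolate raw scaling behaviour, while a real codebase measures the feature mix users actually write.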
On 7/24/12 8:20 PM, Walter Bright wrote:On 7/24/2012 11:02 AM, Guillaume Chatelet wrote:Nevertheless there's value in the shootout. Yes, if someone is up for it that would be great. I also think if we have the setup ready we could convince the site maintainer to integrate D into the suite. AndreiBy the way, it reminds me of the 'Computer Language Benchmarks Game' (http://shootout.alioth.debian.org/). I know D is not welcome aboard but couldn't we try do run the game for ourself so to have some more data ?Small programs are completely inadequate for getting any reasonable measure of compiler speed. Even worse, they can be terribly wrong.
Jul 24 2012
Andrei Alexandrescu Wrote: -snip-Nevertheless there's value in the shootout. Yes, if someone is up for it that would be great.The Python measurement scripts are here -- http://shootout.alioth.debian.org/download/bencher.zip The whole ball of wax is available for download as a nightly snapshot, but if it was me I'd take the time to select particular programs from public CVS folders -- http://anonscm.debian.org/viewvc/shootout/shootout/bench/I also think if we have the setup ready we could convince the site maintainer to integrate D into the suite.I don't think so ;-)
Jul 25 2012
Walter Bright Wrote: -snip-(Back in the olden days, when men were men and the sun revolved about the earth, everyone raved about Borland's compilation speed. In tests I ran myself, I found that it was fast, right up until you hit a certain size of source code, maybe about 5000 lines. Then, it fell off a cliff, and compile speed was terrible. But hey, it looked great in those tiny benchmarks.)-snip- Back in the olden days... "[Wirth] used the compiler's self-compilation speed as a measure of the compiler's quality." http://shootout.alioth.debian.org/dont-jump-to-conclusions.php#app
Jul 25 2012
On 24/07/12 15:34, Andrei Alexandrescu wrote:One idea would be to take a real, non-trivial application, written in both D and another compiled language. We then can measure build times for both applications, and also measure the relative speeds of the generated executables.Suggest that this gets done with all 3 of the main D compilers, not just DMD. I'd like to see the tradeoff between compilation speed and executable speed that one gets between them. I do have some pretty much equivalent simulation code written in both D and C++. For a rough comparison:

Language   Compiler   Compile time (s)   Runtime (s)
D          GDC        1.5                25.3
D          DMD        0.4                52.1
C++        g++        2.3                21.8
C++        Clang++    1.8                27.6

DMD used is a fairly recent pull from GitHub; GDC is the 4.6.3 package found in Ubuntu 12.04. I don't have a working LDC2 compiler on my system. :-( The C++ has a template-based policy class design, while the D code uses template mixins to similar effect. The D code can be found here: https://github.com/WebDrake/Dregs While I'm happy to also share the C++ code, I confess I'm shy to do so given that it probably represents a travesty of the beautiful ideas Andrei developed on policy class design ... :-) Best wishes, -- Joe
Jul 24 2012
On 07/24/2012 04:54 PM, Joseph Rushton Wakeling wrote:

Language   Compiler   Compile time (s)   Runtime (s)
D          GDC        1.5                25.3
D          DMD        0.4                52.1
C++        g++        2.3                21.8
C++        Clang++    1.8                27.6

Those C++ builds have very few C++ source files, right? In my experience each source file takes a few seconds, except the most trivial ones, because the standard library headers are compiled over and over again. :/ Ali
Jul 24 2012
On Tuesday, 24 July 2012 at 23:55:03 UTC, Joseph Rushton Wakeling wrote:For a rough comparison: […]Even for a rough comparison of compile times, you need to include compiler switches used. For example, the difference between Clang -O0 vs. Clang -O3 is usually huge. David
Jul 25 2012
On 25/07/12 09:37, David Nadlinger wrote:On Tuesday, 24 July 2012 at 23:55:03 UTC, Joseph Rushton Wakeling wrote:C++ compiler and library flags: -ansi -pedantic -Wall -O3 -march=native -mtune=native -I. -DHAVE_INLINE -lm -lgsl -lgslcblas -lgrsl dmd and gdmd flags: -O -release -inline (which for gdmd corresponds to -O3 -fweb -frelease -finline-functions -I /usr/local/include/d2/). And yes, as Ali observed, this is a very small codebase (D is 3 files, 374 lines total; C++ is 18 files, 1266 lines -- so the comparison isn't 100% fair; but on the other hand, that's testament to how D can be used for more elegant code....).For a rough comparison: […]Even for a rough comparison of compile times, you need to include compiler switches used. For example, the difference between Clang -O0 vs. Clang -O3 is usually huge.
Jul 25 2012
On 7/25/12 4:37 AM, David Nadlinger wrote:On Tuesday, 24 July 2012 at 23:55:03 UTC, Joseph Rushton Wakeling wrote:Yes, and both debug and release build times are important. AndreiFor a rough comparison: […]Even for a rough comparison of compile times, you need to include compiler switches used. For example, the difference between Clang -O0 vs. Clang -O3 is usually huge.
Jul 25 2012
On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:Yes, and both debug and release build times are important.Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.
Jul 25 2012
On 7/25/12 1:24 PM, Walter Bright wrote:On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:There are systems that only work in release mode (e.g. performance is part of the acceptability criteria) and for which debugging means watching logs. So the problem is not faster optimization times for less optimization (though that's possible, too), but instead build times for a given level of optimization. AndreiYes, and both debug and release build times are important.Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.
Jul 25 2012
On 7/25/2012 10:50 AM, Andrei Alexandrescu wrote:On 7/25/12 1:24 PM, Walter Bright wrote:The easy way to improve optimized build times is to do less optimization. I'm saying be careful what you ask for - you might get it!On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:There are systems that only work in release mode (e.g. performance is part of the acceptability criteria) and for which debugging means watching logs. So the problem is not faster optimization times for less optimization (though that's possible, too), but instead build times for a given level of optimization.Yes, and both debug and release build times are important.Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.
Jul 25 2012
On 25.07.2012 19:24, Walter Bright wrote:On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when only editing a single source file: With the help of incremental linking, building a large C++ project only takes seconds. In contrast, the D project usually recompiles everything from scratch with every little change.Yes, and both debug and release build times are important.Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.
Jul 25 2012
On 7/25/12 4:53 PM, Rainer Schuetze wrote:On 25.07.2012 19:24, Walter Bright wrote:The same dependency management techniques can be applied to large D projects, as to large C++ projects. (And of course there are a few new ones.) What am I missing? AndreiOn 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when only editing a single source file: With the help of incremental linking, building a large C++ project only takes seconds. In contrast, the D project usually recompiles everything from scratch with every little change.Yes, and both debug and release build times are important.Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.
Jul 25 2012
On Wed, 25 Jul 2012 17:31:10 -0400 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:On 7/25/12 4:53 PM, Rainer Schuetze wrote:Aren't there still issues with what object files DMD chooses to store instantiated templates into? Or has that all been fixed? The xfbuild developers wrestled a lot with this and AIUI eventually gave up. The symptoms are that you'll eventually start getting linker errors related to template instantiations, which will be fixed when you then do a complete rebuild.The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when only editing a single source file: With the help of incremental linking, building a large C++ project only takes seconds. In contrast, the D project usually recompiles everything from scratch with every little change.The same dependency management techniques can be applied to large D projects, as to large C++ projects. (And of course there are a few new ones.) What am I missing?
Jul 25 2012
On 2012-07-26 00:17, Nick Sabalausky wrote:
> Aren't there still issues with what object files DMD chooses to store instantiated templates into? Or has that all been fixed? The xfbuild developers wrestled a lot with this and AIUI eventually gave up.

I'm pretty sure nothing has changed. But Walter said if you use the -lib flag it will output the templates to all object files. That will complicate things a bit, but it's still possible to make it work.

-- 
/Jacob Carlborg
Jul 26 2012
On 25.07.2012 23:31, Andrei Alexandrescu wrote:
> The same dependency management techniques can be applied to large D projects, as to large C++ projects. (And of course there are a few new ones.) What am I missing?

Incremental compilation does not work so well because:

- with combined declaration and implementation in the source, you also get the full dependencies if you just need a short declaration
- even with di-files imports are viral: you must be very careful if you try to remove them from di-files because you might break runtime initialization order
- di-file generation has other known problems (e.g. missing implementation for CTFE)

I thought about implementing incremental builds for Visual D, but soon gave up when I noticed that a single file compilation in a medium sized project (Visual D itself) almost takes as long as recompiling the whole thing. I suspect the problem is that dmd fully analyzes all the imported files and only skips the code generation for these. It could be much faster if it would do the analysis lazily (though this might slightly change evaluation order and skip error messages in unused code blocks).
Jul 25 2012
On Wednesday, July 25, 2012 22:53:08 Rainer Schuetze wrote:
> The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when only editing a single source file: With the help of incremental linking, building a large C++ project only takes seconds. In contrast, the D project usually recompiles everything from scratch with every little change.

D should actually compile _faster_ if you compile everything at once - certainly for smaller projects - since it then only has to lex and parse each module once. Incremental builds avoid having to fully compile each module every time, but there's still plenty of extra lexing and parsing which goes on. I don't know how much it shifts with large projects (maybe incremental builds actually end up being better then, because you have enough files which aren't related to one another that the amount of code which needs to be relexed and reparsed is minimal in comparison to the number of files), but you can do incremental building with dmd if you want to. It's just more typical to do it all at once, because for most projects, that's faster. So, I don't see how there's a complaint against D here.

- Jonathan M Davis
Jul 25 2012
On 2012-07-25 23:56, Jonathan M Davis wrote:
> D should actually compile _faster_ if you compile everything at once - certainly for smaller projects - since it then only has to lex and parse each module once. [...] It's just more typical to do it all at once, because for most projects, that's faster. So, I don't see how there's a complaint against D here.

Incremental builds don't have to mean "pass a single file to the compiler". You can start by passing all the files at once to the compiler and then later you just pass all the files that have changed, at once. But I don't know how much difference it will make compared to recompiling the whole project.

-- 
/Jacob Carlborg
Jul 26 2012
On 07/26/2012 02:28 AM, Jacob Carlborg wrote:
> Incremental builds don't have to mean "pass a single file to the compiler". You can start by passing all the files at once to the compiler and then later you just pass all the files that have changed, at once.

GNU make has the special $? prerequisite that may help with the above: "The names of all the prerequisites that are newer than the target, with spaces between them."

http://www.gnu.org/software/make/manual/make.html#index-g_t_0024_003f-944

Ali
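A minimal sketch of that $? idea (hypothetical file and target names; requires GNU make 3.82+ since .RECIPEPREFIX is used here only to avoid tab-indented recipes): a stamp target whose recipe sees, via $?, exactly the source files that changed since the last build.

```shell
# Hypothetical demo of GNU make's $? automatic variable: only the
# prerequisites newer than the target end up in $?.
mkdir -p /tmp/incdemo && cd /tmp/incdemo
printf 'module a;\n' > a.d
printf 'module b;\n' > b.d
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
app.stamp: a.d b.d
> echo "recompiling: $?" > build.log
> touch app.stamp
EOF
make            # first run: both a.d and b.d are newer than the stamp
sleep 1
touch b.d       # simulate editing one module
make            # second run: $? contains only b.d
cat build.log
```

In a real build the recipe would be something like `dmd -c $?` instead of the `echo`, which is exactly the "pass only the changed files, at once" scheme described above.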
Jul 26 2012
On 2012-07-26 19:54, Ali Çehreli wrote:
> GNU make has the special $? prerequisite that may help with the above.

I'm trying to avoid "make" as much as possible.

-- 
/Jacob Carlborg
Jul 26 2012
On Thursday, 26 July 2012 at 18:57:30 UTC, Jacob Carlborg wrote:
> I'm trying to avoid "make" as much as possible.

+1
Jul 28 2012
On Thu, 26 Jul 2012 10:54:07 -0700, Ali Çehreli <acehreli yahoo.com> wrote:
> GNU make has the special $? prerequisite that may help with the above: "The names of all the prerequisites that are newer than the target, with spaces between them."
>
> http://www.gnu.org/software/make/manual/make.html#index-g_t_0024_003f-944

So in other words, it'll completely crap out when a path contains spaces? (What is this, 1994?)
Jul 26 2012
On 7/25/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
> D should actually compile _faster_ if you compile everything at once - certainly for smaller projects - since it then only has to lex and parse each module once. Incremental builds avoid having to fully compile each module every time, but there's still plenty of extra lexing and parsing which goes on.

That's assuming that the lexing/parsing is the bottleneck for DMD. For example: a full build of WindowsAPI takes 14.6 seconds on my machine. But when compiling one module at a time and using parallelism it takes 7 seconds instead. And all it takes is a simple parallel loop.
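The external-parallelism approach can be sketched with a shell one-liner. This is a dry run: the leading `echo` prints each per-module dmd invocation instead of executing it (drop it to actually compile), and the module names are placeholders, not the real WindowsAPI file list.

```shell
# Dry run: print the one-compiler-process-per-module commands that
# xargs would execute up to four at a time. Remove 'echo' to compile.
printf '%s\n' win32/core.d win32/winbase.d win32/winuser.d \
  | xargs -n 1 -P 4 echo dmd -c
```

The `-P 4` flag caps the number of concurrent dmd processes; the resulting object files still need a final link step.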
Jul 25 2012
On 7/25/2012 1:53 PM, Rainer Schuetze wrote:
> The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when only editing a single source file: With the help of incremental linking, building a large C++ project only takes seconds. In contrast, the D project usually recompiles everything from scratch with every little change.

I suspect that's one of two possibilities:

1. everything is passed on one command line to dmd. This, of course, requires dmd to recompile everything.

2. modules are not separated into .d and .di files. Hence every module that imports a .d file has to, at least, parse and semantically analyze the whole thing, although it won't optimize or generate code for it.

As for incremental linking, optlink has always been faster at doing a full link than the Microsoft linker does for an incremental link.
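For reference, dmd can generate the .di interface file mentioned in point 2 with its -H switch; a small sketch (hypothetical one-function module, guarded so it is a no-op on machines without dmd on PATH):

```shell
# Write a tiny module, then ask dmd for its .di interface file.
# -H emits demo.di; -o- suppresses object file output.
printf 'module demo;\nint answer() { return 42; }\n' > demo.d
if command -v dmd >/dev/null 2>&1; then
    dmd -H -o- demo.d
    cat demo.di      # importers can be compiled against this instead
else
    echo 'dmd not installed; skipping'
fi
```

Modules that import `demo` then only need to parse the (usually much smaller) interface file rather than the full implementation.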
Jul 25 2012
On 26.07.2012 03:48, Walter Bright wrote:
> I suspect that's one of two possibilities:
>
> 1. everything is passed on one command line to dmd. This, of course, requires dmd to recompile everything.
>
> 2. modules are not separated into .d and .di files. Hence every module that imports a .d file has to, at least, parse and semantically analyze the whole thing, although it won't optimize or generate code for it.

I think working with di-files is too painful. A lot of the analysis in imported files could be skipped.

> As for incremental linking, optlink has always been faster at doing a full link than the Microsoft linker does for an incremental link.

Agreed, incremental linking is just a work-around for the linker's slowness.
Jul 25 2012
On 25/07/12 16:13, Andrei Alexandrescu wrote:
> Yes, and both debug and release build times are important.

If you can advise some flag combinations (for D and C++) you'd like to see tested, I'll happily do them.
Jul 26 2012
On 7/26/12 4:15 AM, Joseph Rushton Wakeling wrote:
> If you can advise some flag combinations (for D and C++) you'd like to see tested, I'll happily do them.

The classic ones are: (a) no flags at all, (b) -O -release -inline, (c) -O -release -inline -noboundscheck. You can skip the latter as it won't impact build time.

Andrei
Jul 26 2012
On 26/07/12 15:42, Andrei Alexandrescu wrote:
> The classic ones are: (a) no flags at all, (b) -O -release -inline, (c) -O -release -inline -noboundscheck.

Here's a little table of DMD to GDC comparisons for the Dregs codebase:

                           ----------DMD----------   ----------GDC----------
    compiler flags         compile time   runtime    compile time   runtime
    -O -release -inline       0.43s        52s          1.51s        25s
    -O -release               0.35s        47s          1.50s        25s
    -O -noboundscheck         0.35s        56s          1.66s        25s
    -O -inline                0.47s        1m 5s        1.94s        45s
    -O                        0.36s        1m 5s        1.98s        45s
    -release -inline          0.31s        1m 3s        0.63s        1m 3s
    -release                  0.29s        1m 3s        0.63s        1m 3s
    -inline                   0.32s        1m 24s       0.70s        1m 26s
    -noboundscheck            0.29s        1m 10s       0.666s       1m 9s
    [none]                    0.29s        1m 24s       0.72s        1m 26s
    -debug                    0.30s        1m 24s       0.70s        1m 26s
    -unittest                 0.42s        1m 25s       0.75s        1m 26s
    -debug -unittest          0.42s        1m 25s       0.78s        1m 26s
Jul 26 2012
On Thursday, July 26, 2012 18:00:21 Joseph Rushton Wakeling wrote:
> Here's a little table of DMD to GDC comparisons for the Dregs codebase: [...]

Clearly -O is where the big runtime speed difference is at between dmd and gdc, which _is_ a bit obvious, but I'm surprised that -inline had no differences, since dmd is generally accused of being poor at inlining. That probably just indicates that it's a frontend issue (which I suppose makes sense when I think about it). I guess that the way to go if you want to maximize both your efficiency and the code's efficiency is to do most of the coding with dmd but generate the final program with gdc (though obviously building and testing with both the whole way is probably necessary to ensure stability; still, much of that could be automated and not affect programmer efficiency).

- Jonathan M Davis
Jul 26 2012
On Thursday, 26 July 2012 at 18:59:14 UTC, Jonathan M Davis wrote:
> Clearly -O is where the big runtime speed difference is at between dmd and gdc, which _is_ a bit obvious, but I'm surprised that -inline had no differences, since dmd is generally accused of being poor at inlining. That probably just indicates that it's a frontend issue (which I suppose makes sense when I think about it).

GDC probably performs inlining by default on -O2/-O3, just like LDC does.

Also note that for the -release case (any performance measurements without it are most probably not worth it due to all the extra _d_invariant, etc. calls), -inline seems to increase the runtime of the DMD-produced executable by 10%. For inlining, you inevitably have to rely on heuristics, and there will always be cases where it slows down execution (worst case: the slight improvement in code size causes cache thrashing in a hot path), but 10% in a fairly standard application seems to be quite a lot.

David
Jul 26 2012
On 26/07/12 20:27, David Nadlinger wrote:
> GDC probably performs inlining by default on -O2/-O3, just like LDC does.

I was surprised that using -inline alone (without any optimization option) doesn't produce any meaningful improvement. It cuts maybe 1s off the DMD-compiled runtime, but it's not clear to me that actually corresponds to a reliable difference. Perhaps GDC just ignores the -inline flag ... ?

I suppose it's possible that this is code that does not respond well to inlining, although I'd have thought the obvious optimization would be to inline many of the object methods that are only called internally and that are called in a tight loop:

    do
    {
        userDivergence(ratings);
        userReputation(ratings);
        reputationObjectOld_[] = reputationObject_[];
        objectReputation(ratings);

        diff = 0;
        foreach(size_t o, Reputation rep; reputationObject_)
        {
            auto aux = rep - reputationObjectOld_[o];
            diff += aux*aux;
        }

        ++iterations;
    } while (diff > convergence_);

I might tweak it manually so that userDivergence(), userReputation() and objectReputation() are inlined, and see if it makes any difference.
Jul 26 2012
On 26 July 2012 23:07, Joseph Rushton Wakeling <joseph.wakeling webdrake.net> wrote:
> I was surprised that using -inline alone (without any optimization option) doesn't produce any meaningful improvement. [...] Perhaps GDC just ignores the -inline flag ... ?

-inline is mapped to -finline-functions in GDC. Inlining is possibly done, but only in the backend.

Some extra notes to bear in mind about GDC:
1) All methods and function literals are marked as 'inline' by default.
2) Cross module inlining does not occur if you are compiling one-at-a-time.

Regards
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jul 26 2012
On 27/07/12 07:29, Iain Buclaw wrote:
> -inline is mapped to -finline-functions in GDC. Inlining is possibly done, but only in the backend.
>
> Some extra notes to bear in mind about GDC:
> 1) All methods and function literals are marked as 'inline' by default.
> 2) Cross module inlining does not occur if you are compiling one-at-a-time.

Good to know. In this case it's all compiled together in one go:

    DC = gdmd
    DFLAGS = -O -release -inline
    DREGSRC = dregs/core.d dregs/codetermine.d

    all: test

    test: test.d $(DREGSRC)
            $(DC) $(DFLAGS) -of$@ $^

    .PHONY: clean
    clean:
            rm -f test *.o

I'm just surprised that using -inline produces no measurable difference at all in performance for GDC, whether or not any other optimization flags are used. As I said, maybe I'll test some manual inlining and see what difference it might make ...
Jul 27 2012
On 27 July 2012 09:09, Joseph Rushton Wakeling <joseph.wakeling webdrake.net> wrote:
> I'm just surprised that using -inline produces no measurable difference at all in performance for GDC, whether or not any other optimization flags are used. As I said, maybe I'll test some manual inlining and see what difference it might make ...

My best assumption would be that it says something more about the way the program was written itself rather than the compiler.

Regards
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jul 27 2012
On Thursday, 26 July 2012 at 18:59:14 UTC, Jonathan M Davis wrote:[…] That probably just indicates that it's a frontend issue (which I suppose makes sense when I think about it).Oh, and I don't know what exactly you are referring to here, but any difference between DMD and GDC is likely not a frontend issue, as GDC uses the DMD frontend, with only minor modifications. David
Jul 26 2012
On Thursday, July 26, 2012 21:29:57 David Nadlinger wrote:
> Oh, and I don't know what exactly you are referring to here, but any difference between DMD and GDC is likely not a frontend issue, as GDC uses the DMD frontend, with only minor modifications.

That was my point. -inline seems to be pretty much identical between the two compilers, and if the inlining is done in the frontend, then that makes sense. Thinking on it, it makes sense to me that it would be in the frontend, but I don't know where it actually is.

- Jonathan M Davis
Jul 26 2012
On Thursday, 26 July 2012 at 19:36:51 UTC, Jonathan M Davis wrote:
> That was my point. -inline seems to be pretty much identical between the two compilers, and if the inlining is done in the frontend, then that makes sense. Thinking on it, it makes sense to me that it would be in the frontend, but I don't know where it actually is.

Ah, okay, I see what you meant. But no, as far as I'm aware, GDC doesn't use DMD's inliner, but rather relies on the GCC one. LDC does the same; we entirely disable DMD's inlining code, it turned out to just be not worth it.

David
Jul 26 2012
> beautiful ideas Andrei developed on policy class design

Where would one find these ideas?
Jul 25 2012
On 07/25/2012 09:46 AM, ixid wrote:
> Where would one find these ideas?

There are some papers at Andrei's site:

http://erdani.com/index.php/articles/

Search for "policy" there.

Policy based design is the main topic in Andrei's book, "Modern C++ Design":

http://www.amazon.com/Modern-Design-Generic-Programming-Patterns/dp/0201704315

That book covers a publicly-available library, Loki, again by Andrei:

http://loki-lib.sourceforge.net/

Ali
Jul 25 2012
On Wed, 25 Jul 2012 18:46:58 +0200, ixid <nuaccount gmail.com> wrote:
> Where would one find these ideas?

http://www.amazon.com/Modern-Design-Generic-Programming-Patterns/dp/0201704315

--
Simen
Jul 25 2012
On Thursday, July 26, 2012 00:34:07 Andrej Mitrovic wrote:
> That's assuming that the lexing/parsing is the bottleneck for DMD.

Not necessarily. The point is that there's extra work that has to be done when compiling separately. So, whether it takes more or less time depends on how much other work you're avoiding by doing an incremental build. Certainly, I'd expect a full incremental build from scratch to take longer than one which was not incremental.

> For example: a full build of WindowsAPI takes 14.6 seconds on my machine. But when compiling one module at a time and using parallelism it takes 7 seconds instead. And all it takes is a simple parallel loop.

Parallelism? How on earth do you manage that? dmd has no support for running on multiple threads AFAIK. Do you run multiple copies of dmd at once? Certainly, compiling files in parallel changes things. You've got multiple cores working on it at that point, so the equation is completely different.

- Jonathan M Davis
Jul 25 2012
On 2012-07-26 00:42, Jonathan M Davis wrote:
> I'd expect a full incremental build from scratch to take longer than one which was not incremental.

Why? Just pass all the files to the compiler at once. Nothing says an incremental build needs to pass a single file to the compiler.

-- 
/Jacob Carlborg
Jul 26 2012
On 7/26/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
> Parallelism? How on earth do you manage that? dmd has no support for running on multiple threads AFAIK. You've got multiple cores working on it at that point, so the equation is completely different.

That's exactly my point: you can take advantage of parallelism externally if you compile module-by-module simply by invoking multiple DMD processes. And who doesn't own a multicore machine these days?
Jul 25 2012
On 7/26/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
> Certainly, I'd expect a full incremental build from scratch to take longer than one which was not incremental.

Well, that would probably only be done once. With full builds you do it every time.
Jul 25 2012
On Thursday, July 26, 2012 00:44:14 Andrej Mitrovic wrote:
> That's exactly my point: you can take advantage of parallelism externally if you compile module-by-module simply by invoking multiple DMD processes. And who doesn't own a multicore machine these days?

Well, regardless, my and Andrei's point was that C++ has nothing on us here. We can do incremental just fine. The fact that most people just build the whole program from scratch every time is irrelevant. That just means that the build times are fast enough for most people not to care about doing incremental builds, not that they can't do them.

- Jonathan M Davis
Jul 25 2012
On Tue, 24 Jul 2012 10:34:57 -0400, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
> Hello,
>
> I was talking to Walter on how to define a good study of D's compilation speed. We figured that we clearly need a good baseline, otherwise numbers have little meaning.

Might I draw attention again to this bug:

http://d.puremagic.com/issues/show_bug.cgi?id=4900

Granted, this is really freaking old. A re-application of profiling should be done. But in general, what I have observed from DMD compiling is that the number and size (string size) of symbols is what really bogs it down. Most of the time, it's lightning fast.

The reason the dcollections unit test taxes it so much is because I'm instantiating 15 objects for each container type, and each one has humongous symbols, a consequence of so many template arguments.

-Steve
Aug 24 2012
On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:
> Hello,
>
> I was talking to Walter on how to define a good study of D's compilation speed. We figured that we clearly need a good baseline, otherwise numbers have little meaning.
>
> One idea would be to take a real, non-trivial application, written in both D and another compiled language. We then can measure build times for both applications, and also measure the relative speeds of the generated executables.
>
> Although it sounds daunting to write the same nontrivial program twice, it turns out such an application does exist: dmdscript, a Javascript engine written by Walter in both C++ and D. It has over 40KLOC so it's of a good size to play with.
>
> What we need is a volunteer who dusts off the codebase (e.g. the D source is in D1 and should be adjusted to compile with D2), run careful measurements, and show the results. Is anyone interested?
>
> Thanks,
>
> Andrei

You can try testing DMD (written in C++) against DDMD (written in D). I don't think you can find a fairer comparison (both projects are in sync - though dated - and the project size is fairly large).
Aug 24 2012