
digitalmars.D.announce - Comparing compilation time of random code in C++, D, Go, Pascal and Rust

reply Gary Willoughby <dev nomad.so> writes:
This was posted on twitter a while ago:

Comparing compilation time of random code in C++, D, Go, Pascal 
and Rust

http://imgur.com/a/jQUav

D was doing well but in the larger examples the D compiler 
crashed: "Error: more than 32767 symbols in object file".
Oct 19 2016
next sibling parent Nordlöw <per.nordlow gmail.com> writes:
On Wednesday, 19 October 2016 at 17:05:18 UTC, Gary Willoughby 
wrote:
 crashed: "Error: more than 32767 symbols in object file".
Will that many symbols ever happen in real applications? Anyway, nice!
Oct 19 2016
prev sibling next sibling parent Dennis Ritchie <dennis.ritchie mail.ru> writes:
On Wednesday, 19 October 2016 at 17:05:18 UTC, Gary Willoughby 
wrote:
 D was doing well but in the larger examples the D compiler 
 crashed: "Error: more than 32767 symbols in object file".
A bug of this series: https://issues.dlang.org/show_bug.cgi?id=14315
Oct 19 2016
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/19/2016 10:05 AM, Gary Willoughby wrote:
 D was doing well but in the larger examples the D compiler crashed: 
 "Error: more than 32767 symbols in object file".
The article didn't say it crashed. That message only occurs for Win32 object files - it's a limitation of the OMF file format. We could change the object file format, but:

1. that means changing optlink, too, which is a more formidable task

2. the source file was a machine-generated, contrived one with 100,000 functions in it - not terribly likely to happen in a real case

3. I don't think Win32 has much of a future and is unlikely to be worth the investment
Oct 20 2016
parent reply eugene <egordeev18 gmail.com> writes:
On Thursday, 20 October 2016 at 08:19:21 UTC, Walter Bright wrote:

Could you give evidence that it is OK on Linux?
Oct 20 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/20/2016 9:20 AM, eugene wrote:
 Could you give evidence that it is OK on Linux?
You can find out by writing a program to generate 100,000 functions and compiling the result on Linux.
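The experiment Walter describes can be sketched with a short generator script. This is a hypothetical illustration: the module name, function bodies, and output file name are made up, and the generated file would then be fed to dmd/ldc2/gdc by hand.

```python
# Hypothetical sketch: emit a D source file containing 100,000 trivial
# functions, to reproduce the symbol-count experiment from the thread.
N_FUNCS = 100_000

def generate_d_source(n: int) -> str:
    """Return D source text defining n distinct top-level functions."""
    lines = ["module many;", ""]
    for i in range(n):
        lines.append(f"int f{i}(int x) {{ return x + {i}; }}")
    return "\n".join(lines) + "\n"

with open("many.d", "w") as f:
    f.write(generate_d_source(N_FUNCS))

# Then try, e.g.:  dmd -c many.d   (or ldc2 / gdc on Linux)
```

On Win32 with OMF output this should trip the 32767-symbol limit; on Linux (ELF) it should compile.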
Oct 20 2016
prev sibling parent reply Sebastien Alaiwan <ace17 free.fr> writes:
On Wednesday, 19 October 2016 at 17:05:18 UTC, Gary Willoughby 
wrote:
 This was posted on twitter a while ago:

 Comparing compilation time of random code in C++, D, Go, Pascal 
 and Rust

 http://imgur.com/a/jQUav
Very interesting, thanks for sharing! From the article:
 "Surprise: C++ without optimizations is the fastest! A few other 
 surprises: Rust also seems quite competitive here. D starts out 
 comparatively slow."
These benchmarks seem to support the idea that it's not the parsing which is slow, but the code generation phase. If code generation/optimization is the bottleneck, a "ccache-for-D" ("dcache"?) tool might be very beneficial.

(However, then why do C++ standard committee members believe that replacing text-based #includes with C++ modules ("import") will speed up compilation by an order of magnitude?)

Working simultaneously on equally sized C++ and D projects, I believe that a "dcache" (using hashes of the AST?) might be useful. The average project build time in my company is lower for C++ projects than for D projects (we're using "ccache g++ -O3" and "gdc -O3").
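The ccache-style idea can be sketched in a few lines. This is a minimal, hypothetical illustration, not any real tool: it assumes that the same compiler, flags, and source text always produce the same object file, and it ignores imported modules (a real "dcache" would have to hash the whole import graph, or the AST as suggested above). The dmd-style "-of" output flag and cache directory are assumptions.

```python
# Minimal sketch of a ccache-style wrapper for a D compiler ("dcache").
# Cache key = hash of (compiler, flags, source text). Hypothetical;
# a real tool must also account for imported modules.
import hashlib
import os
import shutil
import subprocess

CACHE_DIR = os.path.expanduser("~/.dcache")

def cache_key(compiler: str, flags: list, source_path: str) -> str:
    """Hash compiler name, flags, and source bytes into a cache key."""
    h = hashlib.sha256()
    h.update(compiler.encode())
    h.update("\0".join(flags).encode())
    with open(source_path, "rb") as f:
        h.update(f.read())
    return h.hexdigest()

def compile_cached(compiler, flags, source_path, obj_path):
    """Compile source_path to obj_path, reusing a cached object if possible.

    Returns True on a cache hit, False when the compiler actually ran."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    cached = os.path.join(CACHE_DIR,
                          cache_key(compiler, flags, source_path) + ".o")
    if os.path.exists(cached):
        shutil.copy(cached, obj_path)   # cache hit: skip the compiler
        return True
    subprocess.run([compiler, *flags, "-c", source_path, "-of" + obj_path],
                   check=True)          # dmd-style output flag (assumption)
    shutil.copy(obj_path, cached)       # populate the cache for next time
    return False
```

Note that hashing the raw source (as ccache effectively does after preprocessing) is weaker than hashing the AST or the generated IR, since whitespace-only edits would miss the cache.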
Oct 26 2016
next sibling parent reply Johan Engelen <j j.nl> writes:
On Thursday, 27 October 2016 at 06:43:15 UTC, Sebastien Alaiwan 
wrote:
 From the article:
 Surprise: C++ without optimizations is the fastest! A few 
 other surprises: Rust also seems quite competitive here. D 
 starts out comparatively slow."
 These benchmarks seem to support the idea that it's not the 
 parsing which is slow, but the code generation phase. If code 
 generation/optimization is the bottleneck, a "ccache-for-D" 
 ("dcache"?) tool might be very beneficial.
See https://johanengelen.github.io/ldc/2016/09/17/LDC-object-file-caching.html

I also have a working dcache implementation in LDC but it still needs some polishing.

-Johan
Oct 27 2016
parent reply Sebastien Alaiwan <ace17 free.fr> writes:
On Thursday, 27 October 2016 at 12:11:09 UTC, Johan Engelen wrote:
 On Thursday, 27 October 2016 at 06:43:15 UTC, Sebastien Alaiwan 
 wrote:
 If code generation/optimization is the bottleneck, a 
 "ccache-for-D" ("dcache"?) tool might be very beneficial.
 See 
 https://johanengelen.github.io/ldc/2016/09/17/LDC-object-file-caching.html
 I also have a working dcache implementation in LDC but it still 
 needs some polishing.
Hashing the LLVM bitcode ... how come I didn't think of this before! Unless someone manages to do the same thing with gdc + GIMPLE, this could very well be the "killer" feature of LDC ...

Having the fastest compiler on earth still doesn't provide scalability; interestingly, when I build a full LLVM+LDC toolchain, the longest step is the compilation of the dmd frontend. It's the only part that is:
1) not cached: all the other source files from LLVM are ccache'd.
2) sequential: my CPU load drops to 12.5%, although it's near 100% for LLVM.
Oct 27 2016
parent Johan Engelen <j j.nl> writes:
On Friday, 28 October 2016 at 06:10:52 UTC, Sebastien Alaiwan 
wrote:
 
 Having a the fastest compiler on earth still doesn't provide 
 scalability ; interestingly, when I build a full LLVM+LDC 
 toolchain, the longest step is the compilation of the dmd 
 frontend. It's the only part that is:
 1) not cached: all the other source files from LLVM are 
 ccache'd.
 2) sequential: my CPU load drops to 12.5%, although it's near 
 100% for LLVM.
This is caused by how we set up the LDC build, and it needs fixing (the ARM buildbot times out due to the long build step of the D source!).

The reason all D source is compiled at once is full inlining capability. Currently LDC does not cross-module inline for separate-compilation builds, so to get a fast compiler all D source must be compiled at once. :-(

-Johan
Oct 27 2016
prev sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/27/2016 02:43 AM, Sebastien Alaiwan wrote:
  From the article:
 Surprise: C++ without optimizations is the fastest! A few other
 surprises: Rust also seems quite competitive here. D starts out
 comparatively slow."
 These benchmarks seem to support the idea that it's not the 
 parsing which is slow, but the code generation phase. If code 
 generation/optimization is the bottleneck, a "ccache-for-D" 
 ("dcache"?) tool might be very beneficial. (However, then why do 
 C++ standard committee members believe that the replacement of 
 text-based #includes with C++ modules ("import") will speed up 
 the compilation by one order of magnitude?)
How many source files are used? If all the functions are always packed into one large source file, or just a small handful, then that would mean the tests are accidentally working around C++'s infamous #include slowdowns.
Oct 27 2016