digitalmars.D - D compiler benchmarks
- Robert Clipsham (16/16) Mar 07 2009 Hi all,
- The Anh Tran (4/24) Mar 07 2009 1. Could you add "some" bench that Alioth currently run on q6600 ubuntu?
- Robert Clipsham (9/12) Mar 07 2009 1. All the benchmarks currently up are just tango ports of tests from
- The Anh Tran (2/2) Mar 07 2009 Could you provide D compiler version that you're using?
- Robert Clipsham (3/5) Mar 07 2009 All compiler versions are given on the page. Gdc is a tango package, dmd...
- The Anh Tran (1/1) Mar 07 2009 3. Do you use multithread or single thread?
- Robert Clipsham (6/7) Mar 07 2009 I'm not sure what you mean here. All the current benchmarks are single
- The Anh Tran (8/16) Mar 07 2009 Sorry, my english is bad.
- Robert Clipsham (2/20) Mar 07 2009 No, I don't do that, tests just run as they come.
- Daniel Keep (11/31) Mar 07 2009 I went hunting through Tango, and it looks like it has the same method;
- Robert Clipsham (7/51) Mar 07 2009 I was thinking more a way of getting the memory usage using run.d (the
- Daniel Keep (6/16) Mar 07 2009 Ah.
- Robert Clipsham (4/17) Mar 07 2009 It should be running in an entirely different process, but that depends
- Daniel Keep (3/3) Mar 07 2009 Incidentally, this might be of assistance:
- Robert Clipsham (5/10) Mar 07 2009 Thanks! I've actually already downloaded these, but being me completely
- Georg Wrede (2/22) Mar 08 2009 The first run should not be included in the average.
- Robert Clipsham (3/27) Mar 08 2009 Could you explain your reasoning for this? I can't see why it shouldn't
- Frank Benoit (3/31) Mar 08 2009 fill up of disk and memory caches. That is why the first run has a
- Georg Wrede (8/35) Mar 08 2009 Suppose you have run the same program very recently before the test.
- Bill Baxter (15/52) Mar 08 2009 Also I think standard practice for benchmarks is not to average but to
- Robert Clipsham (2/53) Mar 08 2009 By minimum time, do you mean the fastest time or the slowest time?
- Robert Clipsham (4/13) Mar 08 2009 Ok, I will rerun the tests later today and disregard the first test. I
- Isaac Gouy (7/21) Mar 08 2009 As you're re-inventing functionality that's in the benchmarks game measu...
- Jason House (4/24) Mar 08 2009 I don't think it's proper to limit solutions to either Phobos or Tango, ...
- Robert Clipsham (17/21) Mar 08 2009 These benchmarks are designed purely to test the compilers, not the
- Isaac Gouy (6/26) Mar 08 2009 You could just use the benchmarks game measurement scripts -
- bearophile (12/12) Mar 09 2009 - Having a C or C++ (or something better, where necessary) baseline refe...
- Robert Clipsham (17/25) Mar 09 2009 How would you like them improved? I just copied and pasted some CSS to
- Andrei Alexandrescu (3/5) Mar 09 2009 Fastest result.
- bearophile (6/8) Mar 09 2009 Where do the shorter and longer timings come from? Think a bit about tha...
Hi all,

I have set up some benchmarks for dmd, ldc and gdc at http://dbench.octarineparrot.com/. There are currently only 6 tests, all from http://shootout.alioth.debian.org/gp4/d.php. My knowledge of phobos is not great enough to port the others to tango (I've chosen tango as ldc does not currently support phobos, so it makes sense to choose tango as all compilers support it). If you would like to contribute new tests or improve on the current ones, let me know and I'll include them next time I run them. All source code can be found at http://hg.octarineparrot.com/dbench/file/tip.

Let me know if you have any ideas for how I can improve the benchmarks. I currently plan to add compile times, size of the final executable and memory usage (if anyone knows an easy way to get the memory usage of a process in D, let me know :D).
Mar 07 2009
1. Could you add "some" bench that Alioth currently run on q6600 ubuntu?
2. A gnu c++ for a reference would be great. I'm very eager to port C++ entries to D :)

Robert Clipsham wrote:
> Hi all, I have set up some benchmarks for dmd, ldc and gdc at
> http://dbench.octarineparrot.com/. [...]
Mar 07 2009
The Anh Tran wrote:
> 1. Could you add "some" bench that Alioth currently run on q6600 ubuntu?
> 2. A gnu c++ for a reference would be great. I'm very eager to port C++ entries to D :)

1. All the benchmarks currently up are just tango ports of tests from alioth, if that's what you mean?
2. I wasn't planning on adding C, C++ etc. benchmarks, as then it would just become a clone of the shootout. I don't mind adding a reference for each test from C/C++ if there is enough demand, but I would rather avoid it and keep it purely for D benchmarks if possible. Please feel free to port benchmarks to D/tango; I'll be more than happy to incorporate them into the suite (which is currently fairly minimal).
Mar 07 2009
Could you provide the D compiler versions that you're using? Download-able compiler packages, if you don't mind :D. I'm spoiled by EasyD.
Mar 07 2009
The Anh Tran wrote:
> Could you provide D compiler version that you're using? Download-able compiler packages if you don't mind :D. I'm spoiled by EasyD.

All compiler versions are given on the page. Gdc is a tango package, dmd is dmd 1.041 plus a tango package, and ldc is from hg, with tango from svn.
Mar 07 2009
The Anh Tran wrote:
> 3. Do you use multithread or single thread?

I'm not sure what you mean here. All the current benchmarks are single threaded, as the multi threaded benchmarks use std.thread and my knowledge of phobos is not good enough to port them. If you mean the machine itself, it does support multithreading, so tests could benefit from that.
Mar 07 2009
Robert Clipsham wrote:
> The Anh Tran wrote:
>> 3. Do you use multithread or single thread?
> I'm not sure what you mean here. All the current benchmarks are single threaded [...]

Sorry, my english is bad. Does your bench split into single/multi thread categories like Alioth's:
http://shootout.alioth.debian.org/u64q/
http://shootout.alioth.debian.org/u64/

They use affinity to emulate a single core bench. But I think we can add a number to the command line for that. I.e.:

    fannkuch 10 4 // run the fannkuch bench with 4 threads, array size is 10
Mar 07 2009
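[Editor's note: the affinity trick mentioned above can be sketched in a few lines. This is a hedged illustration in Python rather than the thread's D code, and it assumes Linux, where the `os.sched_setaffinity` call exists; the benchmark binary would be launched after pinning.]

```python
import os

# Pin the current process (and any threads it starts) to CPU 0.
# This emulates a single-core run on a multi-core machine, which is
# roughly what alioth does with processor affinity. Linux-only.
os.sched_setaffinity(0, {0})

print(sorted(os.sched_getaffinity(0)))  # allowed-CPU set is now just CPU 0
```

The same effect is available from the shell via `taskset`, without touching the benchmark's source.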
The Anh Tran wrote:
> Sorry, my english is bad. Does your bench split into single/multi thread categories like Alioth's:
> http://shootout.alioth.debian.org/u64q/
> http://shootout.alioth.debian.org/u64/
> They use affinity to emulate single core bench. But i think we can add a number to command line for that. [...]

No, I don't do that, tests just run as they come.
Mar 07 2009
Robert Clipsham wrote:
> ... (if anyone knows an easy way to get the memory usage of a process in D, let me know :D).

There's a way to do it in Phobos:

    import gc = std.gc;
    import gcstats;

    void main()
    {
        GCStats gcst;
        gc.getStats(gcst);
    }

I went hunting through Tango, and it looks like it has the same method; it just isn't exposed. Probably because of this comment:

    // NOTE: This routine is experimental. The stats or function name may
    //       change before it is made officially available.

None the less, if you want it now, you could try adding this to your code somewhere:

    struct GCStats
    {
        size_t poolsize;     // total size of pool
        size_t usedsize;     // bytes allocated
        size_t freeblocks;   // number of blocks marked FREE
        size_t freelistsize; // total of memory on free lists
        size_t pageblocks;   // number of blocks marked PAGE
    }

    extern(C) GCStats gc_stats();

You should probably whack this in a module so you can replace it easily if and when it changes.

-- Daniel
Mar 07 2009
Daniel Keep wrote:
> There's a way to do it in Phobos:
> [...]
> You should probably whack this in a module so you can replace it easily if and when it changes.

I was thinking more of a way of getting the memory usage using run.d (the app I'm using for benchmarking; it's in the repository if you're interested). It's rather difficult for run.d to get at the memory usage, so I can only put it straight into the stats page if it's being gathered from within the benchmark, and gathering it there would probably affect the benchmark by some amount, which I would ideally like to avoid.
Mar 07 2009
Robert Clipsham wrote:
> I was thinking more of a way of getting the memory usage using run.d [...]

Ah. From the alioth FAQ:

> How did you measure memory use?
> By sampling GTop proc_mem for the program and its child processes every 0.2 seconds. Obviously those measurements are unlikely to be reliable for programs that run for less than 0.2 seconds.

Probably best to ensure this sampling thread is running on a different hardware thread to the tested program...

-- Daniel
Mar 07 2009
Daniel Keep wrote:
> From the alioth FAQ:
>> How did you measure memory use?
>> By sampling GTop proc_mem for the program and its child processes every 0.2 seconds. [...]

I had read this, but that's as far as I got with it!

> Probably best to ensure this sampling thread is running on a different hardware thread to the tested program...

It should be running in an entirely different process, but that depends on how tango.sys.Process deals with processes.
Mar 07 2009
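[Editor's note: the 0.2-second sampling scheme described above is easy to prototype. The sketch below is in Python (like the alioth measurement scripts), assumes Linux's /proc filesystem rather than GTop, and ignores child processes; `sample_peak_rss` is a name invented here for illustration.]

```python
import os
import subprocess
import sys
import time

def sample_peak_rss(cmd, interval=0.2):
    """Spawn cmd and poll /proc/<pid>/statm every `interval` seconds,
    returning the peak resident set size observed, in bytes."""
    page = os.sysconf("SC_PAGESIZE")
    proc = subprocess.Popen(cmd)
    peak = 0
    while proc.poll() is None:
        try:
            with open("/proc/%d/statm" % proc.pid) as f:
                resident_pages = int(f.read().split()[1])  # 2nd field: resident pages
            peak = max(peak, resident_pages * page)
        except (FileNotFoundError, ValueError, IndexError):
            break  # the process exited between poll() and the read
        time.sleep(interval)
    return peak

if __name__ == "__main__":
    # allocate ~8 MB in a child interpreter and watch it from the outside
    child = [sys.executable, "-c",
             "import time; x = bytearray(8 << 20); time.sleep(0.5)"]
    print(sample_peak_rss(child, interval=0.05))
```

Because the sampler runs in a separate process, it does not perturb the benchmark's own allocations, which was the concern raised above; programs shorter than one sampling interval still go unmeasured, exactly as the FAQ warns.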
Incidentally, this might be of assistance: http://shootout.alioth.debian.org/u32q/faq.php#measurementscripts -- Daniel
Mar 07 2009
Daniel Keep wrote:
> Incidentally, this might be of assistance:
> http://shootout.alioth.debian.org/u32q/faq.php#measurementscripts

Thanks! I've actually already downloaded these but, being me, completely overlooked them. If I remember correctly they were python scripts, and my current testing app is in D - http://hg.octarineparrot.com/dbench/file/tip/run.d
Mar 07 2009
Robert Clipsham wrote:
> Hi all, I have set up some benchmarks for dmd, ldc and gdc at
> http://dbench.octarineparrot.com/. [...]

The first run should not be included in the average.
Mar 08 2009
Georg Wrede wrote:
> The first run should not be included in the average.

Could you explain your reasoning for this? I can't see why it shouldn't be included personally.
Mar 08 2009
Robert Clipsham schrieb:
> Could you explain your reasoning for this? I can't see why it shouldn't be included personally.

The first run fills up the disk and memory caches. That is why it has a different timing to the other runs.
Mar 08 2009
Robert Clipsham wrote:
> Could you explain your reasoning for this? I can't see why it shouldn't be included personally.

Suppose you have run the same program very recently before the test. Then the executable will be in memory already, and any other files it may want to access are in memory too. This makes execution much faster than if it were the first time ever this program is run.

If things were deterministic, then you wouldn't run several times and average the results, right?
Mar 08 2009
On Mon, Mar 9, 2009 at 3:15 AM, Georg Wrede <georg.wrede iki.fi> wrote:
> Suppose you have run the same program very recently before the test. Then the executable will be in memory already [...]

Also I think standard practice for benchmarks is not to average but to take the minimum time. To the extent that things are not deterministic, it is generally because of factors outside of your program's control -- a virtual memory page fault kicking in, some other process stealing cycles, etc. Or put another way, there is no way for the measured run time of your program to come out artificially too low, but there are lots of ways it could come out too high.

The reason you average measurements in other scenarios is because of an expectation that the measurements form a normal distribution around the true value. That is not the case for measurements of computer program running times. Measurements will basically always be higher than the true intrinsic run-time for your program.

--bb
Mar 08 2009
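[Editor's note: Bill's minimum-time argument translates directly into a tiny timing harness. A minimal Python sketch, with `best_of` being a name invented here; Python's own `timeit` module reports the minimum for the same reason.]

```python
import time

def best_of(runs, fn, *args):
    """Time fn(*args) `runs` times and keep the minimum, on the
    reasoning that noise only ever inflates a measurement."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

print(best_of(3, sum, range(100000)))
```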
Bill Baxter wrote:
> Also I think standard practice for benchmarks is not to average but to
> take the minimum time. [...]

By minimum time, do you mean the fastest time or the slowest time?
Mar 08 2009
Georg Wrede wrote:
> Suppose you have run the same program very recently before the test. Then the executable will be in memory already [...]

Ok, I will rerun the tests later today and disregard the first test. I may also take the minimum value rather than taking an average (thanks to Bill Baxter for this idea).
Mar 08 2009
Robert Clipsham Wrote:
> Ok, I will rerun the tests later today and disregard the first test. I may also take the minimum value rather than taking an average (thanks to Bill Baxter for this idea).

As you're re-inventing functionality that's in the benchmarks game measurement scripts, let me suggest that there are 2 phases involved:

1) record measurements
2) analyze measurements

As long as you keep the measurements in the order they were made, and keep the measurements for each different configuration in their own file, you can decide to do different selections from those measurements at some later date. You can throw away the first measurement or not, you can take the fastest or the median, you can ... all without doing new measurements.

As you are only trying to measure a couple of language implementations, measure them across a dozen different input values rather than one or two - leaving the computer churning overnight will help keep your home warm :-)
Mar 08 2009
Robert Clipsham Wrote:
> Hi all, I have set up some benchmarks for dmd, ldc and gdc at
> http://dbench.octarineparrot.com/. [...]

I don't think it's proper to limit solutions to either Phobos or Tango, or either D1 or D2. Why not include all mixes of standard libraries, compilers, and major D versions? I've always heard Tango is faster... Let's see proof! Similarly, D2 aims to do multithreading better. I'd love to see performance and code differences between D1 and D2.
Mar 08 2009
Jason House wrote:
> I don't think it's proper to limit solutions to either Phobos or Tango, or either D1 or D2. [...]

These benchmarks are designed purely to test the compilers, not the libraries. I agree that it might be interesting to see benchmarks between tango and phobos; I might set some up at some point. I know there are already some benchmarks up for the XML performance of tango/phobos/other xml libraries at http://dotnot.org/, as well as some tests showing performance of the GC at http://www.dsource.org/projects/tango/wiki/GCBenchmark. Neither of these is up to date or tests the full extent of the libraries, but they do show some difference in performance.

As I stated in my post, I chose tango purely because ldc does not currently support phobos. The choice of library should not affect performance, as all benchmarks use stdc for any external functions.

I will not be setting up benchmarks for D2 yet, as there is currently only one D2 compiler and it is in alpha. When there are multiple D2 compilers, I will set up some more benchmarks for them. Similarly, when D2 moves out of alpha I will happily put it against D1 if there is demand.
Mar 08 2009
Robert Clipsham Wrote:
> Hi all, I have set up some benchmarks for dmd, ldc and gdc at
> http://dbench.octarineparrot.com/. [...]

You could just use the benchmarks game measurement scripts -
http://shootout.alioth.debian.org/u32q/faq.php#measurementscripts

They already report the time to complete all the MAKE actions - it wouldn't be difficult to break that out to show compile time. They already sample memory use, and record cpu time, elapsed time and source size - it wouldn't be difficult to get the executable size.

But if you're having fun writing a measurement program... ;-)
Mar 08 2009
- Having a C or C++ (or something better, where necessary) baseline reference can be very useful to know how far the D code is from the fastest non-ASM versions.
- You can improve the graphs on the dbench.octarineparrot.com page so they can be read better.
- Tune all your tests so they run for 6-15 seconds or more. If they run for less than 3 seconds there's too much measurement noise.
- Taking the average of three runs isn't that good, but this is a tricky topic... Take the minimum for now.
- With my browser the label "binarytrees2" is misplaced.
- What's the difference between nsievebits2 and nsievebits? And nbody and nbody2? Reading the source is good, but a small note too is good.
- For both GDC and LDC it's positive to add what backends they use (for example: ldc version: r1050 using LLVM 2.5).
- Note that currently LDC goes up only to -O3. Regarding the backend of LDC: http://leonardo-m.livejournal.com/77877.html

Bye,
bearophile
Mar 09 2009
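[Editor's note: bearophile's "tune tests to run 6-15 seconds" suggestion can be automated by scaling the input until a run is long enough. A hedged Python sketch; `tune_n` is an invented name, and it assumes the benchmark's runtime grows monotonically with its input size n.]

```python
import time

def tune_n(bench, target=6.0, n=1):
    """Double the input size until a single run of bench(n) takes at
    least `target` seconds, so timing noise stays small relative to
    the measurement. Returns the chosen n."""
    while True:
        start = time.perf_counter()
        bench(n)
        if time.perf_counter() - start >= target:
            return n
        n *= 2
```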
bearophile wrote:
> - Having a C or C++ (or something better, where necessary) baseline reference can be very useful to know how much far is the D code from the fastest non-ASM versions.

This seems to be quite a popular request, I'll do this at some point.

> - you can improve the graphs in the dbench.octarineparrot.com page so they can be read better.

How would you like them improved? I just copied and pasted some CSS to generate them; it can easily be tweaked to be easier to read.

> - Tune all your tests so they run for 6-15 seconds or more. If they run for less than 3 seconds there's too much measure noise.

Sounds like a good plan, I'll do that next time I run them.

> - Taking the average of three runs isn't that good, but this is a tricky topic... Take the minimum for now.

Someone has already pointed this out, and I plan to do it next time. By minimum do you mean the fastest or slowest result?

> - With my browser the label "binarytrees2" is misplaced.

All the benchmarks on the right are slightly misplaced; I can't figure it out. I'll try and tweak it so it fits better once I get your input on how to improve the graphs.

> - what's the differece between nsievebits2 and nsievebits? And nbody and nbody2? Reading the source is good, but a small note too is good.

As you probably know, the tests are just from the shootout. The number is the version number of the test; I picked the tests that performed the best when there was more than one.

> - Both in GDC and LDC it's positive to add what backends they use (for example: ldc version: r1050 using LLVM 2.5).

I'll do that with the next update.

> - Note that currently LDC goes up only to -O3.

I thought 4/5 introduced linker optimisations? Either way it doesn't matter; -O5 will perform all the optimisations available with the current version of ldc.
Mar 09 2009
Robert Clipsham wrote:
> Someone has already pointed this out, and I plan to do it next time. By minimum do you mean the fastest or slowest result?

Fastest result.

Andrei
Mar 09 2009
Robert Clipsham:
> How would you like them improved?

In any way that lets me see them well and does not make Tufte cry.

> By minimum do you mean the fastest or slowest result?

Where do the shorter and longer timings come from? Think a bit about that. (The answer is minimum, but you have to know why.)

Bye,
bearophile
Mar 09 2009