digitalmars.D - DMD compilation speed
- Martin Krejcirik (5/5) Mar 29 2015 It seems like every DMD release makes compilation slower. This
- Walter Bright (5/9) Mar 29 2015 Sigh. Two things happen constantly:
- weaselcat (3/16) Mar 29 2015 would having benchmarks help keep this under control/make
- Walter Bright (2/4) Mar 29 2015 benchmarks would help.
- Gary Willoughby (4/17) Apr 09 2015 Are there any plans to fix this up in a point release? The
- Martin Nowak (3/6) Apr 09 2015 Filed a bug report, we'll figure something out.
- Gary Willoughby (2/10) Apr 10 2015 Cheers.
- John Colvin (4/17) Apr 09 2015 I just did some profiling of building phobos. I noticed ~20% of
- Martin Nowak (3/8) Mar 30 2015 25% slowdown is severe, can you share the project and perhaps file a
- Martin Krejcirik (16/16) Mar 30 2015 Here is one example:
- Jacob Carlborg (24/40) Mar 30 2015 These are the timings for compiling the unit tests without linking. It
- Mathias Lang via Digitalmars-d (3/7) Mar 30 2015 Is it only DMD compile time or DMD + ld ? ld can be very slow sometimes.
- lobo (6/11) Mar 30 2015 I'm finding memory usage the biggest problem for me. A 3s increase
- lobo (11/24) Mar 30 2015 I should add that I am on a 32-bit machine with 4GB RAM. I just
- Andrei Alexandrescu (5/28) Mar 30 2015 Part of our acceptance tests should be peak memory, object file size,
- Vladimir Panteleev (4/8) Mar 30 2015 I have filed this issue today:
- Andrei Alexandrescu (3/10) Mar 30 2015 The current situation is a shame. I appreciate the free service we're
- Jake The Baker (5/32) Mar 31 2015 As far as memory is concerned. How hard would it be to simply
- Adam D. Ruppe (4/6) Mar 31 2015 That'd hit the same walls as the operating system trying to use a
- Jake The Baker (12/18) Mar 31 2015 I doubt it. If most modules are sparsely used it would improve
- lobo (12/33) Mar 31 2015 I have no idea what you're talking about here, sorry.
- Daniel Murphy (5/10) Apr 01 2015 Yeah, the big problem is that dmd's interpreter sort of evolved out of t...
- lobo (8/46) Mar 31 2015 It's so incredibly slow and unproductive that it's not really an option.
- Daniel Murphy (4/8) Apr 01 2015 It seems unlikely that having dmd use its own swap file would perform be...
- H. S. Teoh via Digitalmars-d (15/25) Mar 30 2015 [...]
- w0rp (9/48) Mar 30 2015 I sometimes think DMD's memory should be... garbage collected. I
- deadalnix (4/6) Mar 30 2015 Yes, set an initial heap size of 100Mb or something and the GC
- Andrei Alexandrescu (2/4) Mar 30 2015 Compiler workloads are a good candidate for GC. -- Andrei
- deadalnix (6/11) Mar 30 2015 Yes, compilers tend to perform significantly better with GC than with
- Martin Nowak (4/7) Mar 31 2015 Why? Compilers use a lot of long-lived data structures (AST, metadata)
- deadalnix (3/13) Mar 31 2015 The graph is not acyclic, which makes it even worse for anything
- weaselcat (3/9) Mar 30 2015 has anyone tried using boehm with dmd? I'm pretty sure it has a
- Daniel Murphy (9/16) Mar 30 2015 I've used D's GC with DDMD. It works*, but you're trading better memory...
- deadalnix (4/7) Mar 30 2015 That is not accurate. For small programs, yes. For anything non
- Daniel Murphy (6/14) Mar 30 2015 I don't see how it's inaccurate. Many projects fit into the range where...
- ketmar (6/15) Mar 30 2015 i think that DDMD can start with GC turned off, and automatically turn i...
- Vladimir Panteleev (6/15) Mar 30 2015 Recording the information necessary to free memory costs
- ketmar (5/18) Mar 31 2015 TANSTAAFL. alas. yet without `free()` there aren't free lists to scan an...
- Daniel Murphy (8/13) Mar 31 2015 It's possible that we could use a hybrid approach, where a GB or so is
- Temtaime (6/6) Mar 31 2015 Has anyone looked at how msvc, for example, compiles really big
- Temtaime (1/1) Mar 31 2015 *use pools...
- ketmar (3/6) Mar 31 2015 and it has no CTFE, so...
- deadalnix (6/14) Mar 31 2015 I'm going to propose again the same thing as in the past:
- ketmar (9/26) Mar 31 2015 this won't really help long CTFE calls (like building a parser based on...
- John Colvin (4/21) Mar 31 2015 Wait, you mean DMD doesn't already do something like that? Yikes.
- Martin Nowak (5/10) Mar 31 2015 No, it's trivial enough to implement a full AST interpreter.
- deadalnix (2/14) Mar 31 2015 This is why I introduced a deep copy step in there.
- Random D-user (10/13) Mar 31 2015 As a random d-user (who cares about perf/speed and just happened
- Jacob Carlborg (4/6) Mar 31 2015 Doesn't DMD already have a GC that is disabled?
- Daniel Murphy (2/3) Mar 31 2015 It did once, but it's been gone for a while now.
- Temtaime (2/2) Mar 31 2015 I don't use CTFE in my game engine and DMD uses about 600 MB
- Mathias Lang via Digitalmars-d (6/19) Mar 30 2015 I can relate. DMD compilation speed was nothing but a myth to me until I
It seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?
Mar 29 2015
On 3/29/2015 4:14 PM, Martin Krejcirik wrote:It seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?Sigh. Two things happen constantly: 1. object file sizes creep up 2. compilation speed slows down It's like rust on your car. Fixing it requires constant vigilance.
Mar 29 2015
On Monday, 30 March 2015 at 00:12:09 UTC, Walter Bright wrote:On 3/29/2015 4:14 PM, Martin Krejcirik wrote:would having benchmarks help keep this under control/make regressions easier to find?It seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?Sigh. Two things happen constantly: 1. object file sizes creep up 2. compilation speed slows down It's like rust on your car. Fixing it requires constant vigilance.
Mar 29 2015
On 3/29/2015 5:14 PM, weaselcat wrote:would having benchmarks help keep this under control/make regressions easier to find?benchmarks would help.
Mar 29 2015
On Monday, 30 March 2015 at 00:12:09 UTC, Walter Bright wrote:On 3/29/2015 4:14 PM, Martin Krejcirik wrote:Are there any plans to fix this up in a point release? The compile times have really taken a nose dive in v2.067. It's really taken the fun out of the language.It seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?Sigh. Two things happen constantly: 1. object file sizes creep up 2. compilation speed slows down It's like rust on your car. Fixing it requires constant vigilance.
Apr 09 2015
On 04/09/2015 03:41 PM, Gary Willoughby wrote:Are there any plans to fix this up in a point release? The compile times have really taken a nose dive in v2.067. It's really taken the fun out of the language.Filed a bug report, we'll figure something out. https://issues.dlang.org/show_bug.cgi?id=14431
Apr 09 2015
On Friday, 10 April 2015 at 02:02:17 UTC, Martin Nowak wrote:On 04/09/2015 03:41 PM, Gary Willoughby wrote:Cheers.Are there any plans to fix this up in a point release? The compile times have really taken a nose dive in v2.067. It's really taken the fun out of the language.Filed a bug report, we'll figure something out. https://issues.dlang.org/show_bug.cgi?id=14431
Apr 10 2015
On Monday, 30 March 2015 at 00:12:09 UTC, Walter Bright wrote:On 3/29/2015 4:14 PM, Martin Krejcirik wrote:I just did some profiling of building phobos. I noticed ~20% of the runtime and ~40% of the L2 cache misses were in slist_reset. Is this expected?It seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?Sigh. Two things happen constantly: 1. object file sizes creep up 2. compilation speed slows down It's like rust on your car. Fixing it requires constant vigilance.
Apr 09 2015
On 03/30/2015 01:14 AM, Martin Krejcirik wrote:It seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?25% slowdown is severe, can you share the project and perhaps file a bug report?
Mar 30 2015
Here is one example:

Orange d5b2e0127c67f50bd885ee43a7dd61dd418b1661
https://github.com/jacob-carlborg/orange.git

make

2.065.0
real 0m9.028s
user 0m7.972s
sys 0m0.940s

2.066.1
real 0m10.796s
user 0m9.629s
sys 0m1.056s

2.067.0
real 0m13.543s
user 0m12.097s
sys 0m1.348s
Mar 30 2015
On 2015-03-30 18:09, Martin Krejcirik wrote:Here is one example: Orange d5b2e0127c67f50bd885ee43a7dd61dd418b1661 https://github.com/jacob-carlborg/orange.git make 2.065.0 real 0m9.028s user 0m7.972s sys 0m0.940s 2.066.1 real 0m10.796s user 0m9.629s sys 0m1.056s 2.067.0 real 0m13.543s user 0m12.097s sys 0m1.348sThese are the timings for compiling the unit tests without linking. It passes all the files to DMD in one command. The make file invokes DMD once per file.

1.076
real 0m0.212s
user 0m0.187s
sys 0m0.022s

2.065.0
real 0m0.426s
user 0m0.357s
sys 0m0.065s

2.066.1
real 0m0.470s
user 0m0.397s
sys 0m0.064s

2.067.0
real 0m0.510s
user 0m0.435s
sys 0m0.074s

It might not be fair to compare with D1 since it's not exactly the same code. -- /Jacob Carlborg
Mar 30 2015
Is it only DMD compile time or DMD + ld? ld can be very slow sometimes. 2015-03-30 1:14 GMT+02:00 Martin Krejcirik via Digitalmars-d < digitalmars-d puremagic.com>:It seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?
Mar 30 2015
On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:It seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?I'm finding memory usage the biggest problem for me. A 3s increase in build time is not nice, but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects. bye, lobo
Mar 30 2015
On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:I should add that I am on a 32-bit machine with 4GB RAM. I just ran some tests measuring RAM usage:
DMD 2.067 ~4.2GB (fails here so not sure of the full amount required)
DMD 2.066 ~3.7GB (maximum)
DMD 2.065 ~3.1GB (maximum)
It was right on the edge with 2.066 anyway but this trend of more RAM usage seems to also be occurring with each DMD release. bye, loboIt seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?I'm finding memory usage the biggest problem for me. A 3s increase in build time is not nice, but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects. bye, lobo
Mar 30 2015
On 3/30/15 3:47 PM, lobo wrote:On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:Part of our acceptance tests should be peak memory, object file size, executable file size, and run time for building a few test programs (starting with "hello, world"). Any change in these must be investigated, justified, and documented. -- AndreiOn Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:I should add that I am on a 32-bit machine with 4GB RAM. I just ran some tests measuring RAM usage: DMD 2.067 ~4.2GB (fails here so not sure of the full amount required) DMD 2.066 ~3.7GB (maximum) DMD 2.065 ~3.1GB (maximum) It was right on the edge with 2.066 anyway but this trend of more RAM usage seems to also be occurring with each DMD release. bye, loboIt seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?I'm finding memory usage the biggest problem for me. A 3s increase in build time is not nice, but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects. bye, lobo
Mar 30 2015
On Monday, 30 March 2015 at 22:51:43 UTC, Andrei Alexandrescu wrote:Part of our acceptance tests should be peak memory, object file size, executable file size, and run time for building a few test programs (starting with "hello, world"). Any change in these must be investigated, justified, and documented. -- AndreiI have filed this issue today: https://issues.dlang.org/show_bug.cgi?id=14381
Mar 30 2015
On 3/30/15 7:41 PM, Vladimir Panteleev wrote:On Monday, 30 March 2015 at 22:51:43 UTC, Andrei Alexandrescu wrote:The current situation is a shame. I appreciate the free service we're getting, but sometimes you just can't afford the free stuff. -- AndreiPart of our acceptance tests should be peak memory, object file size, executable file size, and run time for building a few test programs (starting with "hello, world"). Any change in these must be investigated, justified, and documented. -- AndreiI have filed this issue today: https://issues.dlang.org/show_bug.cgi?id=14381
Mar 30 2015
On Monday, 30 March 2015 at 22:47:51 UTC, lobo wrote:On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:As far as memory is concerned. How hard would it be to simply have DMD use a swap file? This would fix the out of memory issues and provide some safety (at least you can get your project to compile). Seems like it would be a relatively simple thing to add?On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:I should add that I am on a 32-bit machine with 4GB RAM. I just ran some tests measuring RAM usage: DMD 2.067 ~4.2GB (fails here so not sure of the full amount required) DMD 2.066 ~3.7GB (maximum) DMD 2.065 ~3.1GB (maximum) It was right on the edge with 2.066 anyway but this trend of more RAM usage seems to also be occurring with each DMD release. bye, loboIt seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?I'm finding memory usage the biggest problem for me. A 3s increase in build time is not nice, but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects. bye, lobo
Mar 31 2015
On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:As far as memory is concerned. How hard would it be to simply have DMD use a swap file?That'd hit the same walls as the operating system trying to use a swap file at least - running out of address space, and being brutally slow even if it does keep running.
Mar 31 2015
On Tuesday, 31 March 2015 at 19:27:35 UTC, Adam D. Ruppe wrote:On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:I doubt it. If most modules are sparsely used, it would improve memory usage in proportion to that. Basically, if D would monitor file/module usage and compile areas that are relatively independent, it should minimize disk usage. Basically page out stuff you know won't be needed. If it was smart enough, it could order the data through module usage and compile the independent ones first, then only the ones that are simple dependencies, etc. The benefit of such a system is that larger projects get the biggest boost (there are more independent modules floating around); hence at some point it becomes a non-issue.As far as memory is concerned. How hard would it be to simply have DMD use a swap file?That'd hit the same walls as the operating system trying to use a swap file at least - running out of address space, and being brutally slow even if it does keep running.
Mar 31 2015
On Wednesday, 1 April 2015 at 02:54:48 UTC, Jake The Baker wrote:On Tuesday, 31 March 2015 at 19:27:35 UTC, Adam D. Ruppe wrote:I have no idea what you're talking about here, sorry. I'm already compiling modules separately to object files. I think that helps reduce memory usage but I could be mistaken. I think the main culprit now is my attempts to (ab)use CTFE. After switching to DMD 2.066 I started adding `enum val=f()` where I could. After reading the discussions here I went about reverting most of these back to `auto val=<blah>` and I'm building again :-) DMD 2.067 is now maxing out at ~3.8GB and stable. bye, loboOn Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:I doubt it. If most modules are sparsely used, it would improve memory usage in proportion to that. Basically, if D would monitor file/module usage and compile areas that are relatively independent, it should minimize disk usage. Basically page out stuff you know won't be needed. If it was smart enough, it could order the data through module usage and compile the independent ones first, then only the ones that are simple dependencies, etc. The benefit of such a system is that larger projects get the biggest boost (there are more independent modules floating around); hence at some point it becomes a non-issue.As far as memory is concerned. How hard would it be to simply have DMD use a swap file?That'd hit the same walls as the operating system trying to use a swap file at least - running out of address space, and being brutally slow even if it does keep running.
Mar 31 2015
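The `enum` versus `auto` distinction lobo describes is what decides whether a call runs through DMD's never-freeing CTFE interpreter or at run time. A minimal sketch of the pattern, with f() as a hypothetical stand-in for any function that builds a large value:

int[] f()
{
    // Hypothetical example function producing a big result.
    int[] result;
    foreach (i; 0 .. 100_000)
        result ~= i * i;
    return result;
}

void main()
{
    // enum forces CTFE: f() executes inside the compiler, and because the
    // interpreter never frees, every intermediate array it builds stays
    // live until dmd exits.
    enum atCompileTime = f();

    // auto defers the call to run time: the compiler only emits code for
    // the call, so compile-time memory stays flat.
    auto atRunTime = f();
}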
"lobo" wrote in message news:vydmnbzapttzjnnctizm forum.dlang.org...I think the main culprit now is my attempts to (ab)use CTFE. After switching to DMD 2.066 I started adding `enum val=f()` where I could. After reading the discussions here I went about reverting most of these back to `auto val=<blah>` and I'm building again :-) DMD 2.067 is now maxing out at ~3.8GB and stable.Yeah, the big problem is that dmd's interpreter sort of evolved out of the constant folder, and wasn't designed for ctfe. A new interpreter for dmd is one of the projects I hope to get to after DDMD is complete, unless somebody beats me to it.
Apr 01 2015
On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:On Monday, 30 March 2015 at 22:47:51 UTC, lobo wrote:It's so incredibly slow and unproductive that it's not really an option. My primary reason for using D is that I can be as productive as I am in Python but retain the same raw native power of C++. Anyway, it sounds like the D devs have a few good ideas on how to resolve this. bye, loboOn Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:As far as memory is concerned. How hard would it be to simply have DMD use a swap file? This would fix the out of memory issues and provide some safety (at least you can get your project to compile). Seems like it would be a relatively simple thing to add?On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:I should add that I am on a 32-bit machine with 4GB RAM. I just ran some tests measuring RAM usage: DMD 2.067 ~4.2GB (fails here so not sure of the full amount required) DMD 2.066 ~3.7GB (maximum) DMD 2.065 ~3.1GB (maximum) It was right on the edge with 2.066 anyway but this trend of more RAM usage seems to also be occurring with each DMD release. bye, loboIt seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?I'm finding memory usage the biggest problem for me. A 3s increase in build time is not nice, but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects. bye, lobo
Mar 31 2015
"Jake The Baker" wrote in message news:bmwxxjmcoszhbexotufx forum.dlang.org...As far as memory is concerned. How hard would it be to simply have DMD use a swap file? This would fix the out of memory issues and provide some safety(at least you can get your project to compile. Seems like it would be a relatively simple thing to add?It seems unlikely that having dmd use its own swap file would perform better than the operating system's implementation.
Apr 01 2015
On Mon, Mar 30, 2015 at 10:39:50PM +0000, lobo via Digitalmars-d wrote:On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:[...] Yeah, dmd memory consumption is way off the charts, because under the pretext of compile speed it never frees allocated memory. Unfortunately, the assumption that not managing memory == faster quickly becomes untrue once dmd runs out of RAM and the OS starts thrashing. Compile times quickly skyrocket exponentially as everything gets stuck on I/O. This is one of the big reasons I can't use D on my work PC, because it's an older machine with limited RAM, and when DMD is running the whole box slows down to an unusable crawl. This is not the first time this issue was brought up, but it seems nobody in the compiler team cares enough to do anything about it. :-( T -- Lottery: tax on the stupid. -- SlashdotterIt seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?I'm finding memory usage the biggest problem for me. A 3s increase in build time is not nice, but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects.
Mar 30 2015
On Monday, 30 March 2015 at 22:55:50 UTC, H. S. Teoh wrote:On Mon, Mar 30, 2015 at 10:39:50PM +0000, lobo via Digitalmars-d wrote:I sometimes think DMD's memory should be... garbage collected. I used the forbidden phrase! Seriously though, allocating a bunch of memory until you hit some maximum threshold, possibly configured, and freeing unreferenced memory at that point, pausing compilation while that happens? This is GC. I wonder if someone enterprising enough would be willing to try it out with DDMD by swapping malloc calls with calls to D's GC or something.On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:[...] Yeah, dmd memory consumption is way off the charts, because under the pretext of compile speed it never frees allocated memory. Unfortunately, the assumption that not managing memory == faster quickly becomes untrue once dmd runs out of RAM and the OS starts thrashing. Compile times quickly skyrocket exponentially as everything gets stuck on I/O. This is one of the big reasons I can't use D on my work PC, because it's an older machine with limited RAM, and when DMD is running the whole box slows down to an unusable crawl. This is not the first time this issue was brought up, but it seems nobody in the compiler team cares enough to do anything about it. :-( TIt seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally least of concern, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?I'm finding memory usage the biggest problem for me. A 3s increase in build time is not nice, but an increase of 500MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects.
Mar 30 2015
On Monday, 30 March 2015 at 23:28:50 UTC, w0rp wrote:I sometimes think DMD's memory should be... garbage collected. I used the forbidden phrase!Yes, set an initial heap size of 100Mb or something and the GC won't kick in for scripts. Also, free after CTFE!
Mar 30 2015
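druntime already exposes a knob close to what deadalnix describes; a minimal sketch, assuming the DDMD driver could touch its own GC before doing real work:

import core.memory : GC;

void main(string[] args)
{
    // Pre-allocate ~100 MB of pool space. GC.reserve obtains the memory
    // from the OS and marks it free, so small compilations can finish
    // before the first collection is ever triggered.
    GC.reserve(100 * 1024 * 1024);

    // ... the normal compiler driver work would go here ...
}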
On 3/30/15 4:28 PM, w0rp wrote:I sometimes think DMD's memory should be... garbage collected. I used the forbidden phrase!Compiler workloads are a good candidate for GC. -- Andrei
Mar 30 2015
On Tuesday, 31 March 2015 at 00:54:08 UTC, Andrei Alexandrescu wrote:On 3/30/15 4:28 PM, w0rp wrote:Yes, compilers tend to perform significantly better with GC than with other memory management strategies. Ironically, I think that has weighted language design a bit too much in favor of GC in the general case.I sometimes think DMD's memory should be... garbage collected. I used the forbidden phrase!Compiler workloads are a good candidate for GC. -- Andrei
Mar 30 2015
On 03/31/2015 05:51 AM, deadalnix wrote:Yes, compilers tend to perform significantly better with GC than with other memory management strategies. Ironically, I think that has weighted language design a bit too much in favor of GC in the general case.Why? Compilers use a lot of long-lived data structures (AST, metadata) which is particularly bad for a conservative GC. Any evidence to the contrary?
Mar 31 2015
On Tuesday, 31 March 2015 at 19:19:23 UTC, Martin Nowak wrote:On 03/31/2015 05:51 AM, deadalnix wrote:The graph is not acyclic, which makes it even worse for anything else.Yes, compilers tend to perform significantly better with GC than with other memory management strategies. Ironically, I think that has weighted language design a bit too much in favor of GC in the general case.Why? Compilers use a lot of long-lived data structures (AST, metadata) which is particularly bad for a conservative GC. Any evidence to the contrary?
Mar 31 2015
On Monday, 30 March 2015 at 23:28:50 UTC, w0rp wrote:Seriously though, allocating a bunch of memory until you hit some maximum threshold, possibly configured, and freeing unreferenced memory at that point, pausing compilation while that happens? This is GC. I wonder if someone enterprising enough would be willing to try it out with DDMD by swapping malloc calls with calls to D's GC or something.has anyone tried using boehm with dmd? I'm pretty sure it has a way of being LD_PRELOADed to override malloc IIRC.
Mar 30 2015
"w0rp" wrote in message news:leajtjgremulowqoxqpc forum.dlang.org...I sometimes think DMD's memory should be... garbage collected. I used the forbidden phrase! Seriously though, allocating a bunch of memory until you hit some maximum threshold, possibly configured, and freeing unreferenced memory at that point, pausing compilation while that happens? This is GC. I wonder if someone enterprising enough would be willing to try it out with DDMD by swapping malloc calls with calls to D's GC or something.I've used D's GC with DDMD. It works*, but you're trading better memory usage for worse allocation speed. It's quite possible we could add a switch to ddmd to enable the GC. * Well actually it currently segfaults, but not because of anything fundamentally wrong with the approach. After switching to DDMD we will have a HUGE number of options readily available for reducing memory usage, such as using allocation-free range code and enabling the GC.
Mar 30 2015
On Tuesday, 31 March 2015 at 04:56:13 UTC, Daniel Murphy wrote:I've used D's GC with DDMD. It works*, but you're trading better memory usage for worse allocation speed. It's quite possible we could add a switch to ddmd to enable the GC.That is not accurate. For small programs, yes. For anything non-trivial, the amount of memory in the working set becomes so big that I doubt there is any advantage in doing so.
Mar 30 2015
"deadalnix" wrote in message news:uwajsjgcjtzfeqtqoyjt forum.dlang.org...On Tuesday, 31 March 2015 at 04:56:13 UTC, Daniel Murphy wrote:I don't see how it's inaccurate. Many projects fit into the range where they do not exhaust physical memory, and the slower allocation speed can really hurt. It's worth noting that 'small' doesn't mean low number of lines of code, but low number of instantiated templates and ctfe calls.I've used D's GC with DDMD. It works*, but you're trading better memory usage for worse allocation speed. It's quite possible we could add a switch to ddmd to enable the GC.That is not accurate. For small programs, yes. For anything non trivial, the amount of memory in the working is set become so big that I doubt there is any advantage of doing so.
Mar 30 2015
On Tue, 31 Mar 2015 05:21:13 +0000, deadalnix wrote:On Tuesday, 31 March 2015 at 04:56:13 UTC, Daniel Murphy wrote:i think that DDMD can start with GC turned off, and automatically turn it on when RAM consumption goes over 1GB, for example. this way small-sized (and even middle-sized) projects without heavy CTFE will still enjoy "nofree is fast" strategy, and big projects will not eat the whole box' RAM. I've used D's GC with DDMD. It works*, but you're trading better memory usage for worse allocation speed. It's quite possible we could add a switch to ddmd to enable the GC.That is not accurate. For small programs, yes. For anything non-trivial, the amount of memory in the working set becomes so big that I doubt there is any advantage in doing so.
Mar 30 2015
On Tuesday, 31 March 2015 at 05:42:02 UTC, ketmar wrote:i think that DDMD can start with GC turned off, and automatically turn it on when RAM consumption goes over 1GB, for example. this way small-sized (and even middle-sized) projects without heavy CTFE will still enjoy "nofree is fast" strategy, and big projects will not eat the whole box' RAM.Recording the information necessary to free memory costs performance (and more memory) itself. With a basic bump-the-pointer scheme, you don't need to worry about page sizes or free lists or heap fragmentation - all allocated data is contiguous, there is no metadata, and you can't back out of that.
Mar 30 2015
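A minimal sketch of the bump-the-pointer scheme Vladimir is describing; this is illustrative only, not dmd's actual allocator:

struct BumpAllocator
{
    private ubyte* cursor; // next free byte in the region
    private ubyte* end;    // one past the end of the region

    this(ubyte[] region)
    {
        cursor = region.ptr;
        end = region.ptr + region.length;
    }

    // Allocation is just an alignment round-up plus a pointer increment:
    // no free lists, no per-allocation metadata.
    void* allocate(size_t size)
    {
        immutable aligned = (size + 15) & ~cast(size_t) 15;
        if (cursor + aligned > end)
            return null; // region exhausted; a compiler would grab a fresh block
        auto p = cursor;
        cursor += aligned;
        return p;
    }

    // Deliberately no free(): being able to release individual
    // allocations is exactly the bookkeeping this scheme trades away.
}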
On Tue, 31 Mar 2015 05:57:45 +0000, Vladimir Panteleev wrote:On Tuesday, 31 March 2015 at 05:42:02 UTC, ketmar wrote:TANSTAAFL. alas. yet without `free()` there aren't free lists to scan and so on, so it can be almost as fast as bump-the-pointer. the good thing is that user doesn't have to do the work that machine can do for him, i.e. thinking about how to invoke the compiler -- with GC or without GC. i think that DDMD can start with GC turned off, and automatically turn it on when RAM consumption goes over 1GB, for example. this way small-sized (and even middle-sized) projects without heavy CTFE will still enjoy "nofree is fast" strategy, and big projects will not eat the whole box' RAM. Recording the information necessary to free memory costs performance (and more memory) itself. With a basic bump-the-pointer scheme, you don't need to worry about page sizes or free lists or heap fragmentation - all allocated data is contiguous, there is no metadata, and you can't back out of that.
Mar 31 2015
"Vladimir Panteleev" wrote in message news:remgknxogqlfwfnsubce forum.dlang.org...Recording the information necessary to free memory costs performance (and more memory) itself. With a basic bump-the-pointer scheme, you don't need to worry about page sizes or free lists or heap fragmentation - all allocated data is contiguous, there is no metadata, and you can't back out of that.It's possible that we could use a hybrid approach, where a GB or so is allocated from the GC in one chunk, then filled up using a bump-pointer allocator. When that's exhausted, the GC can start being used as normal for the rest of the compilation. The big chunk will obviously never be freed, but the GC still has a good chance to keep memory usage under control. (on 64-bit at least)
Mar 31 2015
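A sketch of how that hybrid could look, under the assumptions Daniel states (one large chunk carved out of the GC heap, bump allocation inside it, ordinary GC allocation afterwards); the names and chunk size are illustrative:

import core.memory : GC;

struct HybridAllocator
{
    private ubyte* cursor, end;

    this(size_t chunkSize)
    {
        // One big GC allocation that is never individually freed; the GC
        // keeps it alive through the interior cursor pointer.
        auto chunk = cast(ubyte*) GC.malloc(chunkSize);
        cursor = chunk;
        end = chunk + chunkSize;
    }

    void* allocate(size_t size)
    {
        immutable aligned = (size + 15) & ~cast(size_t) 15;
        if (cursor + aligned <= end) // fast path: bump inside the chunk
        {
            auto p = cursor;
            cursor += aligned;
            return p;
        }
        return GC.malloc(size); // chunk exhausted: fall back to the GC
    }
}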
Has anyone looked at how msvc, for example, compiles really big files? I've never seen it go over 200 MB. And it is written in C++, so no GC. And it compiles very quickly. I think DMD should be refactored to free memory and use pools and other techniques.
Mar 31 2015
On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:Has anyone looked at how msvc, for example, compiles really big files? I've never seen it go over 200 MB. And it is written in C++, so no GC. And it compiles very quickly.and it has no CTFE, so... CTFE is a big black hole that eats memory like crazy.
Mar 31 2015
On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:I'm going to propose again the same thing as in the past:
- before CTFE switch pool.
- CTFE in the new pool.
- deep copy result from ctfe pool to main pool.
- ditch ctfe pool.
Has anyone looked at how msvc, for example, compiles really big files? I've never seen it go over 200 MB. And it is written in C++, so no GC. And it compiles very quickly.and it has no CTFE, so... CTFE is a big black hole that eats memory like crazy.
Mar 31 2015
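The four steps read naturally as an arena that owns all interpreter scratch memory and survives only long enough for the result to be copied out. A self-contained toy of the idea; Pool, Value, and the rest are hypothetical stand-ins, not dmd's API:

import std.stdio;

// Stand-in for dmd's Expression: the kind of value CTFE produces.
class Value
{
    int payload;
    this(int p) { payload = p; }
}

// Hypothetical arena; a real one would hand out raw memory in bulk.
class Pool
{
    Value[] owned;
    Value make(int p) { auto v = new Value(p); owned ~= v; return v; }
    void release() { owned = null; } // step 4: ditch the pool wholesale
}

Value runCtfe(Pool mainPool)
{
    auto ctfePool = new Pool;                  // step 1: switch to a fresh pool
    auto scratch = ctfePool.make(41);          // step 2: interpret inside it
    auto result = ctfePool.make(scratch.payload + 1);
    auto kept = mainPool.make(result.payload); // step 3: deep-copy the final
                                               // value into the main pool
    ctfePool.release();                        // every intermediate is dropped
    return kept;
}

void main()
{
    auto mainPool = new Pool;
    writeln(runCtfe(mainPool).payload); // prints 42
}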
On Tue, 31 Mar 2015 18:24:48 +0000, deadalnix wrote:On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:this won't really help long CTFE calls (like building a parser based on a grammar, for example, as this is one very long call). it will slow down simple CTFE calls though. it *may* help, but i'm looking at my "life" sample, for example, and see that it eats all my RAM while parsing a big .lif file. it has to do that in one call, as there is no way to enumerate existing files in a directory and process them sequentially -- as there is no way to store state between CTFE calls, so i can't even create numbered arrays with data.On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:I'm going to propose again the same thing as in the past: - before CTFE switch pool. - CTFE in the new pool. - deep copy result from ctfe pool to main pool. - ditch ctfe pool.Has anyone looked at how msvc, for example, compiles really big files? I've never seen it go over 200 MB. And it is written in C++, so no GC. And it compiles very quickly.and it has no CTFE, so... CTFE is a big black hole that eats memory like crazy.
Mar 31 2015
On Tuesday, 31 March 2015 at 18:24:49 UTC, deadalnix wrote:On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:Wait, you mean DMD doesn't already do something like that? Yikes. I had always assumed (without looking) that ctfe used some separate heap that was chucked after each call.On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:I'm going to propose again the same thing as in the past: - before CTFE switch pool. - CTFE in the new pool. - deep copy result from ctfe pool to main pool. - ditch ctfe pool.Has anyone looked at how msvc, for example, compiles really big files? I've never seen it go over 200 MB. And it is written in C++, so no GC. And it compiles very quickly.and it has no CTFE, so... CTFE is a big black hole that eats memory like crazy.
Mar 31 2015
On 03/31/2015 08:24 PM, deadalnix wrote:I'm going to propose again the same thing as in the past: - before CTFE switch pool. - CTFE in the new pool. - deep copy result from ctfe pool to main pool. - ditch ctfe pool.No, it's trivial enough to implement a full AST interpreter. The way it's done currently (using AST nodes as CTFE interpreter values) makes it very hard to use a distinct allocator, because ownership can move from CTFE to compiler and vice versa.
Mar 31 2015
On Tuesday, 31 March 2015 at 21:53:29 UTC, Martin Nowak wrote:On 03/31/2015 08:24 PM, deadalnix wrote:This is why I introduced a deep copy step in there.I'm going to propose again the same thing as in the past: - before CTFE switch pool. - CTFE in the new pool. - deep copy result from ctfe pool to main pool. - ditch ctfe pool.No, it's trivial enough to implement a full AST interpreter. The way it's done currently (using AST nodes as CTFE interpreter values) makes it very hard to use a distinct allocator, because ownership can move from CTFE to compiler and vice versa.
Mar 31 2015
I've used D's GC with DDMD. It works*, but you're trading better memory usage for worse allocation speed. It's quite possible we could add a switch to ddmd to enable the GC.As a random d-user (who cares about perf/speed and just happened to read this) a switch sounds VERY good to me. I don't want to pay the price of GC because of some low-end machines. Memory is really cheap these days and pretty much every machine is 64-bits (even phones are transitioning fast to 64-bits). Also, I wanted to add that freeing (at least to the OS (does this apply to GC?)) isn't exactly free either. In fact it can be more costly than mallocing. Here's an enlightening article: https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
Mar 31 2015
On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:GC because of some low-end machines. Memory is really cheap these days and pretty much every machine is 64-bits (even phones are transitioning fast to 64-bits).this is the essence of "modern computing", btw. "hey, we have this resource! hey, we have the only program user will ever want to run, so assume that all that resource is ours! what? just buy a better box!"
Mar 31 2015
On Wednesday, 1 April 2015 at 04:49:55 UTC, ketmar wrote:On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:google/mozilla's developer mantra regarding web browsers.GC because of some low-end machines. Memory is really cheap these days and pretty much every machine is 64-bits (even phones are transitioning fast to 64-bits).this is the essence of "modern computing", btw. "hey, we have this resource! hey, we have the only program user will ever want to run, so assume that all that resource is ours! what? just buy a better box!"
Mar 31 2015
On Wednesday, 1 April 2015 at 04:51:26 UTC, weaselcat wrote:On Wednesday, 1 April 2015 at 04:49:55 UTC, ketmar wrote:They must have an agreement with a DRAM vendor, I see no other explanation...On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:google/mozilla's developer mantra regarding web browsers.GC because of some low-end machines. Memory is really cheap these days and pretty much every machine is 64-bits (even phones are transitioning fast to 64-bits).this is the essence of "modern computing", btw. "hey, we have this resource! hey, we have the only program user will ever want to run, so assume that all that resource is ours! what? just buy a better box!"
Mar 31 2015
On Wed, 01 Apr 2015 06:21:58 +0000, deadalnix wrote:On Wednesday, 1 April 2015 at 04:51:26 UTC, weaselcat wrote:maybe vendors are just giving 'em free DRAM chips...On Wednesday, 1 April 2015 at 04:49:55 UTC, ketmar wrote:They must have an agreement with a DRAM vendor, I see no other explanation...On Wed, 01 Apr 2015 02:25:43 +0000, Random D-user wrote:google/mozilla's developer mantra regarding web browsers.GC because of some low-end machines. Memory is really cheap these days and pretty much every machine is 64-bits (even phones are transitioning fast to 64-bits).this is the essence of "modern computing", btw. "hey, we have this resource! hey, we have the only program user will ever want to run, so assume that all that resource is ours! what? just buy a better box!"
Apr 01 2015
On Wednesday, 1 April 2015 at 02:25:44 UTC, Random D-user wrote:I think a switch would be good. My main reason for asking for such a thing isn't for performance (not directly), it's for being able to compile some D programs on computers with less memory. I've had machines with 1 or 2 GB of memory on them, wanted to compile a D program, DMD ran out of memory, and the compiler crashed. You can maybe start swapping on disk, but that won't be too great.I've used D's GC with DDMD. It works*, but you're trading better memory usage for worse allocation speed. It's quite possible we could add a switch to ddmd to enable the GC.As a random d-user (who cares about perf/speed and just happened to read this) a switch sounds VERY good to me. I don't want to pay the price of GC because of some low-end machines. Memory is really cheap these days and pretty much every machine is 64-bits (even phones are transitioning fast to 64-bits). Also, I wanted to add that freeing (at least to the OS (does this apply to GC?)) isn't exactly free either. In fact it can be more costly than mallocing. Here's an enlightening article: https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/
Apr 09 2015
On 2015-03-31 01:28, w0rp wrote:I sometimes think DMD's memory should be... garbage collected. I used the forbidden phrase!Doesn't DMD already have a GC that is disabled? -- /Jacob Carlborg
Mar 31 2015
"Jacob Carlborg" wrote in message news:mfe0dm$2i6l$1 digitalmars.com...Doesn't DMD already have a GC that is disabled?It did once, but it's been gone for a while now.
Mar 31 2015
I don't use CTFE in my game engine and DMD uses about 600 MB of memory per file, for instance.
Mar 31 2015
2015-03-31 0:53 GMT+02:00 H. S. Teoh via Digitalmars-d < digitalmars-d puremagic.com>:Yeah, dmd memory consumption is way off the charts, because under the pretext of compile speed it never frees allocated memory. Unfortunately, the assumption that not managing memory == faster quickly becomes untrue once dmd runs out of RAM and the OS starts thrashing. Compile times quickly skyrocket exponentially as everything gets stuck on I/O. This is one of the big reasons I can't use D on my work PC, because it's an older machine with limited RAM, and when DMD is running the whole box slows down to an unusable crawl. This is not the first time this issue was brought up, but it seems nobody in the compiler team cares enough to do anything about it. :-( T -- Lottery: tax on the stupid. -- SlashdotterI can relate. DMD compilation speed was nothing but a myth to me until I migrated from 4GB to 8GB. And every time I compiled something, my computer froze for a few seconds (or a few minutes, depending on the project).
Mar 30 2015