digitalmars.D - DMD memory management
- bitwise (10/10) Jun 14 2015 I'm trying to mod dmd, and I'm totally confused about what's goin on.
- weaselcat (3/15) Jun 14 2015 to answer all your questions at once:
- ketmar (7/8) Jun 14 2015 yes. by OS when process terminates.
- bitwise (3/11) Jun 14 2015 Ok, makes sense ;)
- Shachar Shemesh (9/13) Jun 15 2015 Well, sortof.
- Dmitry Olshansky (10/25) Jun 15 2015 Truth be told it never made any sense - it only suitable for immutables
- ketmar (7/21) Jun 15 2015 that is, this approach to reduce compilation times is wrong. storing
- Rikki Cattermole (6/12) Jun 15 2015 I'm personally very interested in a D based linker. Preferably using ran...
- bitwise (8/24) Jun 15 2015 I just had a thought as well. On Linux/OSX/etc, dmd uses fork() and then...
- Tofu Ninja (4/12) Jun 15 2015 I think fork just does copy on write, so all the garbage that is
- burjui (12/15) Jun 17 2015 You are correct. fork() guarantees separate address spaces for
- ketmar (4/6) Jun 17 2015 on any decent OS fork(2) does CoW (copy-on-write), so forking is
- deadalnix (2/9) Jun 17 2015 "lightning fast" for some value of "lightning fast".
I'm trying to mod dmd, and I'm totally confused about what's going on.

- some things are allocated with 'new' and some are allocated with 'mem.malloc'
- most things don't ever seem to be freed
- no RAII is used for cleanup
- no clear ownership of pointers

How does memory get cleaned up? Is it just assumed that the process is short-lived, and that memory can just be left dangling until it terminates?

Thanks,
Bit
Jun 14 2015
On Sunday, 14 June 2015 at 16:37:19 UTC, bitwise wrote:
> I'm trying to mod dmd, and I'm totally confused about what's going on.
> - some things are allocated with 'new' and some are allocated with 'mem.malloc'
> - most things don't ever seem to be freed
> - no RAII is used for cleanup
> - no clear ownership of pointers
> How does memory get cleaned up? Is it just assumed that the process is short-lived, and that memory can just be left dangling until it terminates?

to answer all your questions at once: it doesn't
Jun 14 2015
On Sun, 14 Jun 2015 12:37:18 -0400, bitwise wrote:
> How does memory get cleaned up?

yes. by OS when process terminates.

http://www.drdobbs.com/cpp/increasing-compiler-speed-by-over-75/240158941

a small quote: "DMD does memory allocation in a bit of a sneaky way. Since compilers are short-lived programs, and speed is of the essence, DMD just mallocs away, and never frees."

so it's by design.
Jun 14 2015
On Sun, 14 Jun 2015 12:52:47 -0400, ketmar <ketmar ketmar.no-ip.org> wrote:
> On Sun, 14 Jun 2015 12:37:18 -0400, bitwise wrote:
>> How does memory get cleaned up?
>
> yes. by OS when process terminates.
>
> http://www.drdobbs.com/cpp/increasing-compiler-speed-by-over-75/240158941
>
> a small quote: "DMD does memory allocation in a bit of a sneaky way. Since compilers are short-lived programs, and speed is of the essence, DMD just mallocs away, and never frees."
>
> so it's by design.

Ok, makes sense ;)

Bit
Jun 14 2015
On 14/06/15 20:01, bitwise wrote:
> Ok, makes sense ;)
>
> Bit

Well, sort of.

It makes sense, until you try to compile a program that needs more memory than your computer has. Then, all of a sudden, it completely and utterly stops making sense.

Hint: when you need to swap out over 2GB of memory (with 16GB of physical RAM installed), this strategy completely and utterly stops making sense.

Shachar
Jun 15 2015
On 15-Jun-2015 10:20, Shachar Shemesh wrote:
> Well, sort of.
>
> It makes sense, until you try to compile a program that needs more memory than your computer has. Then, all of a sudden, it completely and utterly stops making sense.

Truth be told, it never made any sense - it's only suitable for immutables: the AST, the ID pool and a few others. For instance, lots and lots of AAs are short-lived, per analyzed scope.

Even for immutables, using a region-style allocator with "releaseAll" would be a much safer strategy with the same gains. Also, never deallocating means we can't use tooling such as valgrind to pin down real memory leaks.

> Hint: when you need to swap out over 2GB of memory (with 16GB of physical RAM installed), this strategy completely and utterly stops making sense.

Agreed.

-- 
Dmitry Olshansky
Jun 15 2015
On Mon, 15 Jun 2015 13:07:46 +0300, Dmitry Olshansky wrote:
> Truth be told, it never made any sense - it's only suitable for immutables: the AST, the ID pool and a few others. For instance, lots and lots of AAs are short-lived, per analyzed scope.
>
> Even for immutables, using a region-style allocator with "releaseAll" would be a much safer strategy with the same gains. Also, never deallocating means we can't use tooling such as valgrind to pin down real memory leaks.

that is, this approach to reducing compilation times is wrong. storing partially analyzed ASTs on disk as easily parsable binary representations (preferably ones that can be mmaped and used as-is) is right. updating the caches when more templates are semanticed is right. even moving off the system linker in favor of a much simpler and faster homegrown linker is right for some cases. too much work, though...
Jun 15 2015
On 15/06/2015 10:54 p.m., ketmar wrote:
> that is, this approach to reducing compilation times is wrong. storing partially analyzed ASTs on disk as easily parsable binary representations (preferably ones that can be mmaped and used as-is) is right. updating the caches when more templates are semanticed is right. even moving off the system linker in favor of a much simpler and faster homegrown linker is right for some cases. too much work, though...

I'm personally very interested in a D based linker. Preferably using ranges.

Unfortunately mine is going to take quite a while to get anywhere, and that is just for PE-COFF support.

I theorize that for a language like C it could be quite a fast compile + link when using ranges.
Jun 15 2015
On Mon, 15 Jun 2015 23:03:35 +1200, Rikki Cattermole wrote:
> I'm personally very interested in a D based linker. Preferably using ranges.
>
> Unfortunately mine is going to take quite a while to get anywhere, and that is just for PE-COFF support.
>
> I theorize that for a language like C it could be quite a fast compile + link when using ranges.

i have too many projects that i want to write. elf linker is one of them. ;-)
Jun 15 2015
On Monday, 15 June 2015 at 11:03:42 UTC, Rikki Cattermole wrote:
> I'm personally very interested in a D based linker. Preferably using ranges.
>
> Unfortunately mine is going to take quite a while to get anywhere, and that is just for PE-COFF support.

What about https://github.com/yebblies/ylink?
Jun 15 2015
On Mon, 15 Jun 2015 03:20:47 -0400, Shachar Shemesh <shachar weka.io> wrote:
> It makes sense, until you try to compile a program that needs more memory than your computer has. Then, all of a sudden, it completely and utterly stops making sense.

I just had a thought as well. On Linux/OSX/etc, dmd uses fork() and then calls gcc to do linking. When memory is never cleaned up, can't that make fork() really slow? Doesn't fork copy all the memory of the entire process? Don't some benchmarks measure the total time including compiler invocation?

Bit
Jun 15 2015
On Monday, 15 June 2015 at 22:19:05 UTC, bitwise wrote:
> I just had a thought as well. On Linux/OSX/etc, dmd uses fork() and then calls gcc to do linking. When memory is never cleaned up, can't that make fork() really slow? Doesn't fork copy all the memory of the entire process?

I think fork just does copy on write, so all the garbage that is no longer being referenced off in random pages shouldn't get copied. Only the pages that get written are actually copied.
Jun 15 2015
On Monday, 15 June 2015 at 22:25:27 UTC, Tofu Ninja wrote:
> I think fork just does copy on write, so all the garbage that is no longer being referenced off in random pages shouldn't get copied. Only the pages that get written are actually copied.

You are correct. fork() guarantees separate address spaces for the parent and the child processes, but there's a note in its man page:

    NOTES
        Under Linux, fork() is implemented using copy-on-write pages, so the
        only penalty that it incurs is the time and memory required to
        duplicate the parent's page tables, and to create a unique task
        structure for the child.
Jun 17 2015
On Mon, 15 Jun 2015 18:19:06 -0400, bitwise wrote:
> When memory is never cleaned up, can't that make fork() really slow? Doesn't fork copy all the memory of the entire process?

on any decent OS fork(2) does CoW (copy-on-write), so forking is lightning fast. also, any decent OS knows about the "fork-and-replace" pattern, so it throws away old process pages on replacing.
Jun 17 2015
On Wednesday, 17 June 2015 at 17:10:40 UTC, ketmar wrote:
> on any decent OS fork(2) does CoW (copy-on-write), so forking is lightning fast. also, any decent OS knows about the "fork-and-replace" pattern, so it throws away old process pages on replacing.

"lightning fast" for some value of "lightning fast".
Jun 17 2015