digitalmars.D - Honey, I shrunk the build times
- Andrei Alexandrescu (3/3) Jun 06 2015 https://github.com/D-Programming-Language/phobos/pull/3379
- Jonathan M Davis (6/9) Jun 06 2015 Reading makefiles always gives me a headache. For those of us who
- Andrei Alexandrescu (32/39) Jun 06 2015 Thanks for asking. The situation before went like this: to build
- ketmar (4/4) Jun 06 2015 On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:
- Rikki Cattermole (4/8) Jun 06 2015 Nobody is always right.
- Andrei Alexandrescu (2/11) Jun 06 2015 There might be a bit of misunderstanding on what that change does. -- An...
- Rikki Cattermole (3/17) Jun 06 2015 I probably should have removed your original post from that. Really it
- ketmar (5/20) Jun 06 2015 it utilizes "partial separate compilation" to earn speed using parallel...
- weaselcat (4/32) Jun 07 2015 you'd think with dmd's module system achieving compiler-level
- Temtaime (4/4) Jun 07 2015 It's really bad solution.
- weaselcat (2/3) Jun 07 2015 only bad compilers
- Iain Buclaw via Digitalmars-d (5/9) Jun 07 2015 The way dmd does it, it's almost the same as compiling all object files ...
- Dicebot (9/13) Jun 07 2015 All existing compilers AFAIK. There is no point in discussing
- weaselcat (4/13) Jun 07 2015 right off the top of my head, I know ghc and rustc have zero
- ketmar (4/9) Jun 07 2015 how is that? even if we left lto aside, compiler needs module source...
- Iain Buclaw via Digitalmars-d (3/12) Jun 07 2015 Semantic analysis is done lazily. No AST, no inline.
- ketmar (6/23) Jun 07 2015 but everything one need to do semantic is already there. it's just calls...
- Andrei Alexandrescu (4/7) Jun 07 2015 Yes.
- Jonathan M Davis (16/18) Jun 07 2015 IIRC, Walter stated that he wanted to add it but decided that it
- Iain Buclaw via Digitalmars-d (7/23) Jun 07 2015 I wouldn't have thought that not moving to 2.067 would be a hold-up (the...
- Jonathan M Davis (14/23) Jun 07 2015 The biggest problem is that releasing a ddmd which is compiled
- weaselcat (6/16) Jun 14 2015 after playing around with ddmd built with ldc, it's still a solid
- Temtaime (2/2) Jun 14 2015 I think the way is fix all memory operations which cause UB and
- David Nadlinger (9/12) Jun 14 2015 How did you build it? This is especially important given that
- Iain Buclaw via Digitalmars-d (3/13) Jun 27 2015 Because 64GiB is such a commodity nowadays. :-)
- H. S. Teoh via Digitalmars-d (7/15) Jun 07 2015 [...]
- weaselcat (4/10) Jun 06 2015 a broken clock is right twice a day ;)
- Dicebot (4/4) Jun 06 2015 "C style per-module separate compilation sux" != "splitting the
- Jonathan M Davis (6/11) Jun 06 2015 Ah, okay. So, you essentially did what you were talking about
- Jacob Carlborg (5/13) Jun 07 2015 I'm wondering if the improvements would have been larger if Phobos had a
- Andrei Alexandrescu (3/16) Jun 07 2015 Affirmative. Currently the duration of the build is determined by the
- Nick Sabalausky (5/8) Jun 07 2015 It just means you're taking more system resources, which yea, can
- Iain Buclaw via Digitalmars-d (6/15) Jun 27 2015 By the way, what's happening with the eventual packaging of
- Atila Neves (11/15) Jun 09 2015 Are the inter-package dependencies handled correctly? It's hard
- Andrei Alexandrescu (4/19) Jun 09 2015 Last one's right. From the diff:
- Atila Neves (6/33) Jun 09 2015 Ah right, sorry, I missed that. reggae already calculates
https://github.com/D-Programming-Language/phobos/pull/3379

Punchline: major reduction of both total run time and memory consumed.

Andrei
Jun 06 2015
On Saturday, 6 June 2015 at 21:42:47 UTC, Andrei Alexandrescu wrote:
> https://github.com/D-Programming-Language/phobos/pull/3379
> Punchline: major reduction of both total run time and memory consumed.

Reading makefiles always gives me a headache. For those of us who can't just glance through the changes and quickly decipher them, what did you actually do?

- Jonathan M Davis
Jun 06 2015
On 6/6/15 5:45 PM, Jonathan M Davis wrote:
> On Saturday, 6 June 2015 at 21:42:47 UTC, Andrei Alexandrescu wrote:
>> https://github.com/D-Programming-Language/phobos/pull/3379
>> Punchline: major reduction of both total run time and memory consumed.
>
> Reading makefiles always gives me a headache. For those of us who
> can't just glance through the changes and quickly decipher them, what
> did you actually do?

Thanks for asking. The situation before went like this: to build libphobos2.a, the command would go like this (simplified to just a few files and flags):

dmd -oflibphobos2.a std/datetime.d std/conv.d std/algorithm/comparison.d std/algorithm/iteration.d

So all modules would go together in one command to build the library. With the package-at-a-time approach, we build one directory at a time, like this:

dmd -oflibphobos2_std.a std/datetime.d std/conv.d
dmd -oflibphobos2_std_algorithm.a std/algorithm/comparison.d std/algorithm/iteration.d

So now we have two libraries that need to be combined together, which is easy:

dmd -oflibphobos2.a libphobos2_std.a libphobos2_std_algorithm.a

and voila, the library is built. This is, strictly speaking, more work:

* Everything in Phobos imports everything else, so effectively we're parsing the entire Phobos twice as much
* There are temporary files being created
* There's an extra final step - files that have just been written need to be read again

However, the key advantage here is that the first two steps can be performed in parallel, and that turns out to be key. Time and again I see this: parallel processing almost always ends up doing more work - some of which is wasteful - but in the end it wins. It's counterintuitive sometimes. This is key to scalability, too.

Now, the baseline numbers were without std.experimental.allocator. Recall the baseline time on my laptop was 4.93s. I added allocator, boom, 5.08s - sensible degradation. However, after I merged the per-package builder I got the same 4.01 seconds.

Andrei
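[For reference, the two-step scheme described above can be sketched from the shell. This is an illustrative sketch only, not the actual makefile rules from the pull request: the package lists are abbreviated and the dmd invocations are stubbed with echo so it runs without a D toolchain (a real build would use roughly `dmd -lib -of<out> <inputs>`).]

```shell
#!/bin/sh
# Illustrative sketch of the per-package build scheme described above.
# 'dmd' is stubbed with echo so the sketch runs anywhere.
build_pkg() {
    out="$1"; shift
    # real invocation would be roughly: dmd -lib -of"$out" "$@"
    echo "built $out from $*"
}

# Step 1: build each package's partial library. The two commands are
# independent, so they can run concurrently.
build_pkg libphobos2_std.a std/datetime.d std/conv.d &
build_pkg libphobos2_std_algorithm.a std/algorithm/comparison.d std/algorithm/iteration.d &
wait   # join both package builds before the combine step

# Step 2: the extra final step - combine the partial libraries
# (this rereads the archives that were just written).
build_pkg libphobos2.a libphobos2_std.a libphobos2_std_algorithm.a
```

The win comes entirely from step 1 running in parallel; step 2 is the serialized "extra work" the post mentions.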
Jun 06 2015
On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:

so in the end, after endless talking how separate compilation sux and everyone should do one-step combined compilation, separate compilation wins. it's funny how i'm always right in the end.
Jun 06 2015
On 7/06/2015 4:55 p.m., ketmar wrote:
> On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:
>
> so in the end, after endless talking how separate compilation sux and
> everyone should do one-step combined compilation, separate compilation
> wins. it's funny how i'm always right in the end.

Nobody is always right. But your way of thinking can be attractive if you like being evil. I'm evil :)
Jun 06 2015
On 6/6/15 10:00 PM, Rikki Cattermole wrote:
> On 7/06/2015 4:55 p.m., ketmar wrote:
>> so in the end, after endless talking how separate compilation sux and
>> everyone should do one-step combined compilation, separate compilation
>> wins. it's funny how i'm always right in the end.
>
> Nobody is always right. But your way of thinking can be attractive if
> you like being evil. I'm evil :)

There might be a bit of misunderstanding on what that change does. -- Andrei
Jun 06 2015
On 7/06/2015 5:08 p.m., Andrei Alexandrescu wrote:
[...]
> There might be a bit of misunderstanding on what that change does. -- Andrei

I probably should have removed your original post from that. Really it was meant for ketmar.
Jun 06 2015
On Sat, 06 Jun 2015 22:08:47 -0700, Andrei Alexandrescu wrote:
[...]
> There might be a bit of misunderstanding on what that change does. -- Andrei

it utilizes "partial separate compilation" to earn speed using parallel builds. the thing a lot of people were talking of before: separate compilation can use multicores with ease, while one-step-all compilation can't without significant changes in compiler internals.
Jun 06 2015
On Sunday, 7 June 2015 at 05:25:21 UTC, ketmar wrote:
> it utilizes "partial separate compilation" to earn speed using
> parallel builds. the thing alot of people talking of before: separate
> compilation can use multicores with ease, while one-step-all
> compilation can't without significant changes in compiler internals.

you'd think with dmd's module system achieving compiler-level parallelism wouldn't be so difficult. I guess it stems from dmd being before the free lunch ended.
Jun 07 2015
It's a really bad solution. Are you building Phobos 1000 times a day, so that 5 seconds is really long for you?

Separate compilation prevents the compiler from inlining everything.
Jun 07 2015
On Sunday, 7 June 2015 at 08:24:24 UTC, Temtaime wrote:
> Separate compilation prevents compiler from inlining everything.

only bad compilers
Jun 07 2015
On 7 June 2015 at 10:34, weaselcat via Digitalmars-d <digitalmars-d puremagic.com> wrote:
> On Sunday, 7 June 2015 at 08:24:24 UTC, Temtaime wrote:
>> Separate compilation prevents compiler from inlining everything.
>
> only bad compilers

The way dmd does it, it's almost the same as compiling all object files at once, but only emitting code for one. Then multiply that by 134 modules and you understand why dmd uses a "better together" strategy for compilation.
Jun 07 2015
On Sunday, 7 June 2015 at 08:34:50 UTC, weaselcat wrote:
> On Sunday, 7 June 2015 at 08:24:24 UTC, Temtaime wrote:
>> Separate compilation prevents compiler from inlining everything.
>
> only bad compilers

All existing compilers, AFAIK. There is no point in discussing a theoretical advanced-enough compiler when considering actions done right now. A good compiler should be able to work as a caching daemon and never need separate object files at all, so we should completely ban them by that logic. In practice, creating a library per package is a decent compromise that works well right now, even if it is conceptually imperfect.
Jun 07 2015
On Sunday, 7 June 2015 at 10:11:26 UTC, Dicebot wrote:
> All existing compilers AFAIK. There is no point in discussing
> theoretical advanced enough compiler when considering actions done
> right now.

right off the top of my head, I know ghc and rustc have zero issues with this. or are we only referring to D compilers?
Jun 07 2015
On Sun, 07 Jun 2015 08:24:23 +0000, Temtaime wrote:
> It's really bad solution.
>
> Are you building phobos 1000 times a day so 5 seconds is really long
> for you ? Separate compilation prevents compiler from inlining
> everything.

how is that? even if we left lto aside, compiler needs module source anyway. if one will use full .d files instead of .di, nothing can prevent good compiler from inlining.
Jun 07 2015
On 7 June 2015 at 10:51, ketmar via Digitalmars-d <digitalmars-d puremagic.com> wrote:
> how is that? even if we left lto aside, compiler needs module source
> anyway. if one will use full .d files instead of .di, nothing can
> prevent good compiler from inlining.

Semantic analysis is done lazily. No AST, no inline.
Jun 07 2015
On Sun, 07 Jun 2015 11:01:19 +0200, Iain Buclaw via Digitalmars-d wrote:
> Semantic analysis is done lazily. No AST, no inline.

but everything one needs to do semantic is already there. it's just the calls to `semantic` that are absent. with some imaginary "--aggressive-inline" option the compiler can do more semantic calls and inline things properly. sure, that will slow down compilation, but that's why it should be done as an opt-in feature.
Jun 07 2015
On 6/7/15 1:24 AM, Temtaime wrote:
> It's really bad solution.

No.

> Are you building phobos 1000 times a day so 5 seconds is really long
> for you ?

Yes.

Andrei
Jun 07 2015
On Sunday, 7 June 2015 at 08:12:11 UTC, weaselcat wrote:
> you'd think with dmd's module system achieving compiler-level
> parallelism wouldn't be so difficult.

IIRC, Walter stated that he wanted to add it but decided that it would be too much of a pain to do in C++ and is waiting for us to fully switch to ddmd before tackling that problem. Similarly, Daniel Murphy has ideas on how to improve CTFE (which would vastly help compilation speeds), but it would be so much easier to do in D that he put it off until we switch to ddmd. It wouldn't surprise me if there are other speed improvements that have been put off, simply because they'd be easier to implement in D than C++.

So, I expect that there's a decent chance that we'll be able to better leverage the design of the language to improve its compilation speed once we've officially switched the reference compiler to D (and we'll probably get there within a release or two; the main hold-up is how long it'll take gdc and ldc to catch up with 2.067).

- Jonathan M Davis
Jun 07 2015
On 7 June 2015 at 10:49, Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> wrote:
[...]
> So, I expect that there's a decent chance that we'll be able to better
> leverage the design of the language to improve its compilation speed
> once we've officially switched the reference compiler to D (and we'll
> probably get there within a release or two; the main hold-up is how
> long it'll take gdc and ldc to catch up with 2.067).

I wouldn't have thought that not moving to 2.067 would be a hold-up (there is nothing in that release that blocks building DDMD as it is *now*). But I have been promised time and again that there will be more effort (infrastructure?) put in to help get LDC and GDC integrated into the testing process for all new PRs.
Jun 07 2015
On Sunday, 7 June 2015 at 08:59:46 UTC, Iain Buclaw wrote:
> I wouldn't have thought that not moving to 2.067 would be a hold-up
> (there is nothing in that release that blocks building DDMD as it is
> *now*).

The biggest problem is that releasing a ddmd which is compiled with dmd is unacceptable, because it incurs too large a performance hit (~20% IIRC), so we need either ldc or gdc to be at 2.067 so that we can use that to compile the release build of ddmd.

> But I have been promised time and again that there will be more effort
> (infrastructure?) put in to help get LDC and GDC integrated into the
> testing process for all new PRs.

That would be good, though I don't know what the situation with that is. However, I think that Daniel's top priority at this point is getting the frontend to the point that it's backend-agnostic and thus identical for all three backends, which should greatly help in having gdc and ldc keep up with dmd. That obviously wouldn't obviate the need for testing gdc and ldc, but it would reduce the effort to update them and maintain them.

- Jonathan M Davis
Jun 07 2015
On Sunday, 7 June 2015 at 10:03:06 UTC, Jonathan M Davis wrote:
> The biggest problem is that releasing a ddmd which is compiled with
> dmd is unacceptable, because it incurs too large a performance hit
> (~20% IIRC), so we need either ldc or gdc to be at 2.067 so that we
> can use that to compile the release build of ddmd.

after playing around with ddmd built with ldc, it's still a solid 30-40% slower than current dmd (with optimization flags, obv.)

after profiling, it spends most of its time swapping and handling page faults. Enabling the GC seems to crash it, oh well. Maybe 20-30% of the actual time is doing non-allocation related things.
Jun 14 2015
I think the way forward is to fix all memory operations which cause UB and enable the GC.
Jun 14 2015
On Sunday, 14 June 2015 at 19:02:59 UTC, weaselcat wrote:
> after playing around with ddmd built with ldc, it's still a solid
> 30-40% slower than current dmd (with optimization flags, obv.)

How did you build it? This is especially important given that DDMD straight from the repo does not build with LDC right now, as it tries to override the druntime memory allocation functions, which works only due to the way DMD's -lib is implemented.

On a system with 64 GiB RAM, Daniel and I could not measure any performance difference to the C++ version when building the Phobos unittests.

- David
Jun 14 2015
On 15 June 2015 at 02:55, David Nadlinger via Digitalmars-d <digitalmars-d puremagic.com> wrote:
> On a system with 64 GiB RAM, Daniel and I could not measure any
> performance difference to the C++ version when building the Phobos
> unittests.

Because 64GiB is such a commodity nowadays. :-)
Jun 27 2015
On Sun, Jun 07, 2015 at 05:00:58PM +1200, Rikki Cattermole via Digitalmars-d wrote:
> On 7/06/2015 4:55 p.m., ketmar wrote:
>> so in the end, after endless talking how separate compilation sux and
>> everyone should do one-step combined compilation, separate compilation
>> wins. it's funny how i'm always right in the end.
>
> Nobody is always right.
[...]

"Nobody is always right. I am Nobody." :-P

T

-- 
Nobody is perfect. I am Nobody. -- pepoluan, GKC forum
Jun 07 2015
On Sunday, 7 June 2015 at 04:55:52 UTC, ketmar wrote:
> so in the end, after endless talking how separate compilation sux and
> everyone should do one-step combined compilation, separate compilation
> wins. it's funny how i'm always right in the end.

a broken clock is right twice a day ;)

also, LDC stomps all over dmd when you enter separate compilation territory.
Jun 06 2015
"C style per-module separate compilation sux" != "splitting the library into smaller meaningful static libraries sux" It was all discussed and nailed down so many times but old habits never die easy.
Jun 06 2015
On Sunday, 7 June 2015 at 04:30:02 UTC, Andrei Alexandrescu wrote:
[...]
> This is key to scalability, too.
>
> Now, the baseline numbers were without std.experimental.allocator.
> Recall the baseline time on my laptop was 4.93s. I added allocator,
> boom, 5.08s - sensible degradation. However, after I merged the
> per-package builder I got the same 4.01 seconds.

Ah, okay. So, you essentially did what you were talking about doing for rdmd. I don't think that it's an approach that would have occurred to me, but I'm certainly in favor of a faster build.

- Jonathan M Davis
Jun 06 2015
On 2015-06-07 06:30, Andrei Alexandrescu wrote:
> Thanks for asking. The situation before went like this: to build
> libphobos2.a, the command would go like this (simplified to just a few
> files and flags):
>
> dmd -oflibphobos2.a std/datetime.d std/conv.d
> std/algorithm/comparison.d std/algorithm/iteration.d
>
> So all modules would go together in one command to build the library.
> With the package-at-a-time approach, we build one directory at a time
> like this:

I'm wondering if the improvements would have been larger if Phobos had a more tree-like structure for the modules rather than a fairly flat structure.

-- 
/Jacob Carlborg
Jun 07 2015
On 6/7/15 2:36 AM, Jacob Carlborg wrote:
> I'm wondering if the improvements would have been larger if Phobos had
> a more tree-like structure for the modules rather than a fairly flat
> structure.

Affirmative. Currently the duration of the build is determined by the critical path, which mainly consists of building std/*.d. -- Andrei
Jun 07 2015
On 06/07/2015 12:30 AM, Andrei Alexandrescu wrote:
> parallel processing almost always ends up doing more work - some of
> which is wasteful, but in the end it wins. It's counterintuitive
> sometimes.

It just means you're taking more system resources, which, yeah, can naturally be faster as long as those resources aren't already in use. Get more people assembling gizmos and you'll reach your quota faster, even with a little bit of coordination overhead.
Jun 07 2015
On 7 June 2015 at 02:45, Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> wrote:
> Reading makefiles always gives me a headache. For those of us who
> can't just glance through the changes and quickly decipher them, what
> did you actually do?
>
> - Jonathan M Davis

By the way, what's happening with the eventual packaging of std.datetime? I'd like to have memory consumption of running unittests down down down.

Iain.
Jun 27 2015
On Saturday, 6 June 2015 at 21:42:47 UTC, Andrei Alexandrescu wrote:
> https://github.com/D-Programming-Language/phobos/pull/3379
> Punchline: major reduction of both total run time and memory consumed.
>
> Andrei

Are the inter-package dependencies handled correctly? It's hard to say looking at the diff, but I don't see where it's done.

With the "compile everything at once" model it's not an issue; everything is getting recompiled anyway. With per-package... if foo/toto.d gets changed and bar/tata.d has an "import foo.toto;" in it, then both the foo and bar packages need to get recompiled.

Or is this change recompiling everything all of the time, but just happens to do it a package at a time?

Atila
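[What per-package dependency tracking would have to check can be sketched as a plain timestamp test. This is a hypothetical illustration, not what the Phobos makefile does: a package archive is stale if any of its sources, or any cross-package file it imports (e.g. foo/toto.d for the bar package above), is newer than the archive.]

```shell
#!/bin/sh
# Hypothetical sketch: decide whether a per-package archive needs
# rebuilding. Callers list the package's own sources AND its
# cross-package dependencies as inputs.
needs_rebuild() {
    lib="$1"; shift
    [ ! -e "$lib" ] && return 0            # never built
    for f in "$@"; do
        [ "$f" -nt "$lib" ] && return 0    # some input is newer
    done
    return 1                               # up to date
}
```

Under this scheme, bar's rebuild rule would list foo/toto.d among its inputs, which is exactly the case raised above.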
Jun 09 2015
On 6/9/15 4:06 AM, Atila Neves wrote:
> Are the inter-package dependencies handled correctly? It's hard to say
> looking at the diff, but I don't see where it's done.
>
> Or is this change recompiling everything all of the time but just
> happens to do it a package at a time?

Last one's right. From the diff:

Andrei
Jun 09 2015
On Tuesday, 9 June 2015 at 16:20:35 UTC, Andrei Alexandrescu wrote:
> Last one's right. From the diff: the future.

Ah right, sorry, I missed that. reggae already calculates dependencies by asking the compiler and (modulo bugs) only recompiles packages that need to be recompiled.

Atila
Jun 09 2015