
digitalmars.D - Modern C++ Lamentations

reply Walter Bright <newshound2 digitalmars.com> writes:
http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/

Time to show off your leet D skilz and see how good we can do it in D!
Dec 29 2018
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 29/12/2018 10:29 PM, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
 
 Time to show off your leet D skilz and see how good we can do it in D!
Reddit thread: https://www.reddit.com/r/programming/comments/aac4hg/modern_c_lamentations/
Dec 29 2018
prev sibling next sibling parent reply JN <666total wp.pl> writes:
On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright 
wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/

 Time to show off your leet D skilz and see how good we can do 
 it in D!
I don't know if D is a role model here, considering just importing std.regex adds three seconds to compile time - https://issues.dlang.org/show_bug.cgi?id=18378 . Here's another take on language complexity, related to Rust - https://graydon2.dreamwidth.org/263429.html
Dec 29 2018
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Dec 29, 2018 at 10:35:05AM +0000, JN via Digitalmars-d wrote:
 On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
 
 Time to show off your leet D skilz and see how good we can do it in
 D!
I don't know if D is a role model here, considering just importing std.regex adds three seconds to compile time - https://issues.dlang.org/show_bug.cgi?id=18378 .
[...]

Yeah no kidding, recently I rewrote a whole bunch of code to get *rid* of the dependency on std.regex because it was too slow, and project compilation time improved from about 7+ seconds to 2+ seconds.

Let me say that again. Removing the dependency on std.regex (by writing equivalent functionality by hand) improved compilation times from more than SEVEN seconds to just over TWO seconds. That's almost TRIPLE the compilation speed. The mere act of using std.regex causes compilation times to TRIPLE. Let that sink in for a moment.

These days, it has been really hard for me to boast about D compilation times with a straight face. True, if you write C-style D code, then compilation *is* lightning fast. But modern D has moved away from that style, and the style that modern D code takes on these days is template- and CTFE-heavy, both of which are in the "extremely slow" category of D compilation. This seriously needs to improve if we're going to continue boasting about compilation times; otherwise it becomes more and more like false advertising when we talk about D being fast to compile. I've been saying this ever since we adopted that cringe-worthy fast-fast-fast slogan on dlang.org, and it seems not much has improved since then. These days I'm almost ashamed to talk about D compilation times. :-/

And don't get me started on dmd's ridiculous memory usage, which makes it basically unusable on low-memory systems. There is not even an option to trade off compilation speed for less memory usage. We're paying a mandatory memory tax, yet we're still unable to improve the compilation speed of std.regex to within acceptable limits. And modern C++ compilers have improved enough that on said low-memory systems, compiling C++ can actually be *faster* than D, because g++ stays within available RAM and therefore isn't thrashing on swap, whereas dmd thrashes on swap like crazy because it's such a memory hog. And that's if dmd even finishes compiling at all before it gets hit by the OOM killer; otherwise we're looking at finite compilation time vs. infinite compilation time (because it never finishes).

Don't get me wrong, I still love D for having the best power-to-ease-of-writing ratio, and I can't see myself going back to C++ in the foreseeable future (or ever again). But we seriously need to improve on these two areas before we proclaim how good D compilation times are, because that's no longer true.

T -- A bend in the road is not the end of the road unless you fail to make the turn. -- Brian White
Dec 29 2018
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 29 December 2018 at 15:34:19 UTC, H. S. Teoh wrote:
 Yeah no kidding, recently I rewrote a whole bunch of code to 
 get *rid* of dependency on std.regex because it was too slow, 
 and project compilation time improved from about 7+ seconds
Ditto. (Basically). Rewriting the uri parser in cgi.d brought its build time from about 3 seconds down to 0.6. Which still feels slow, especially next to my minigui.d, which can do 0.3 since it broke off from Phobos entirely! (It was 2.5 seconds before).
 But we seriously need to improve on these two areas before we 
 start proclaiming how good D compilation times are, because 
 that's no longer true.
Amen. I have boxes that have infinite D build times because it is impossible to run the sloppy compiler!
Dec 29 2018
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Dec 29, 2018 at 03:57:57PM +0000, Adam D. Ruppe via Digitalmars-d wrote:
 On Saturday, 29 December 2018 at 15:34:19 UTC, H. S. Teoh wrote:
 Yeah no kidding, recently I rewrote a whole bunch of code to get
 *rid* of dependency on std.regex because it was too slow, and
 project compilation time improved from about 7+ seconds
Ditto. (Basically). Rewriting the uri parser in cgi.d brought its build time from about 3 seconds down to 0.6. Which still feels slow, especially next to my minigui.d, which can do 0.3 since it broke off from Phobos entirely! (It was 2.5 seconds before).
IME, Phobos isn't all bad. Some Phobos modules are pretty useful, esp. std.algorithm and std.range, and despite std.algorithm being pretty huge (its submodules are pretty huge even after I split up the original std/algorithm.d -- which was so bad I couldn't run its unittests locally anymore because it ran out of memory, on a machine with 4GB RAM), it actually compiles very fast, proportional to how much of it you use.

The bad apples are std.regex, std.format, and maybe a few others that rely a little too heavily on recursive templates and CTFE.
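For readers wondering what "recursive templates" look like in practice, here is a generic illustration (not actual Phobos code, just a minimal sketch of the pattern): each instantiation forces another one, so the compiler materializes a symbol per level, and the tuple concatenation makes the total work roughly quadratic in the depth.

    import std.meta : AliasSeq;

    // StaticIota!n instantiates StaticIota!(n-1), which instantiates
    // StaticIota!(n-2), and so on: n template instantiations for one use.
    template StaticIota(size_t n)
    {
        static if (n == 0)
            alias StaticIota = AliasSeq!();
        else
            alias StaticIota = AliasSeq!(StaticIota!(n - 1), n - 1);
    }

    static assert([StaticIota!4] == [0, 1, 2, 3]);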
 But we seriously need to improve on these two areas before we start
 proclaiming how good D compilation times are, because that's no
 longer true.
Amen. I have boxes that have infinite D build times because it is impossible to run the sloppy compiler!
Yeah, this is one reason I haven't dared propose using D at my day job. I'd be laughed out of the CTO's office if dmd ran out of memory on a trivial regex test. And my coworkers will hate me so much for making the already RAM-heavy build system soak up even more memory. Memory may be cheap these days, but it's still not free, and it's still a finite resource. T -- Customer support: the art of getting your clients to pay for your own incompetence.
Dec 29 2018
prev sibling parent Manu <turkeyman gmail.com> writes:
On Sat, Dec 29, 2018 at 8:00 AM Adam D. Ruppe via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Saturday, 29 December 2018 at 15:34:19 UTC, H. S. Teoh wrote:
 Yeah no kidding, recently I rewrote a whole bunch of code to
 get *rid* of dependency on std.regex because it was too slow,
 and project compilation time improved from about 7+ seconds
Ditto. (Basically). Rewriting the uri parser in cgi.d brought its build time from about 3 seconds down to 0.6. Which still feels slow, especially next to my minigui.d, which can do 0.3 since it broke off from Phobos entirely! (It was 2.5 seconds before).
I ran into this the other day:

https://github.com/dlang/druntime/pull/2398#issuecomment-445690050

That is `max` - can you imagine a theoretically simpler function? Phobos is chaos!
Jan 02 2019
prev sibling parent reply Ivan Kazmenko <gassa mail.ru> writes:
On Saturday, 29 December 2018 at 15:34:19 UTC, H. S. Teoh wrote:
 On Sat, Dec 29, 2018 at 10:35:05AM +0000, JN via Digitalmars-d 
 wrote:
 On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright 
 wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
 
 Time to show off your leet D skilz and see how good we can 
 do it in D!
I don't know if D is a role model here, considering just importing std.regex adds three seconds to compile time - https://issues.dlang.org/show_bug.cgi?id=18378 .
[...] Yeah no kidding, recently I rewrote a whole bunch of code to get *rid* of dependency on std.regex because it was too slow, and project compilation time improved from about 7+ seconds to 2+ seconds. Let me say that again. Removing dependency on std.regex (by writing equivalent functionality by hand) improved compilation times from more than SEVEN seconds to just over TWO seconds. That's almost TRIPLE the compilation speed. The mere act of using std.regex causes compilation times to TRIPLE. Let that sink in for a moment.
I thought I'd chime in with a similar experience. I have this small library, testlib.d, used to check text outputs for programming contest problems (used internally, no public repository right now). As one can imagine, a regular expression is sometimes handy to parse simple alphanumeric constructs and check their format. But after using std.regex a bit, I put the import down into one function and templated it, and mostly stopped using the function in new checkers' code, developing some not-so-nice workarounds instead. A typical checker is very short, a couple dozen to a couple hundred lines, and this change brings compile times from 5+ seconds to 2+ seconds.

Sorry, I just can't stand having to wait 5+ seconds to compile a 50-liner, repeatedly, when the development time for a single checker is on the scale of minutes, not hours. The C++ analog library (https://github.com/MikeMirzayanov/testlib) is currently 4500+ lines long, and so also takes 5+ seconds to compile a trivial checker. This was actually one of the incentives for me to switch to D with a homebrew library for writing checkers.

I know there was a move to make Phobos compilation faster a few months ago, and things seemed to improve a little then, but std.regex still adds seconds of compilation time.

Ivan Kazmenko.
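A sketch of the workaround Ivan describes (function and pattern names here are hypothetical): making the function a template means its body, including the local import of std.regex, is only compiled when some checker actually instantiates it.

    import std.stdio : writeln;

    // The empty ()() makes this a function template: the std.regex import
    // below is only pulled in if this function is actually called somewhere.
    bool matchesFormat()(string s, string pattern)
    {
        import std.regex : matchFirst, regex;
        return !matchFirst(s, regex(pattern)).empty;
    }

    void main()
    {
        writeln(matchesFormat("abc123", `^[a-z]+[0-9]+$`)); // true
    }

Checkers that never call matchesFormat never pay the std.regex compile-time cost.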
Dec 29 2018
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/29/18 7:18 PM, Ivan Kazmenko wrote:
 On Saturday, 29 December 2018 at 15:34:19 UTC, H. S. Teoh wrote:
 On Sat, Dec 29, 2018 at 10:35:05AM +0000, JN via Digitalmars-d wrote:
 On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
 Time to show off your leet D skilz and see how good we can do it in D!
I don't know if D is a role model here, considering just importing std.regex adds three seconds to compile time - https://issues.dlang.org/show_bug.cgi?id=18378 .
[...] Yeah no kidding, recently I rewrote a whole bunch of code to get *rid* of dependency on std.regex because it was too slow, and project compilation time improved from about 7+ seconds to 2+ seconds. Let me say that again.  Removing dependency on std.regex (by writing equivalent functionality by hand) improved compilation times from more than SEVEN seconds to just over TWO seconds. That's almost TRIPLE the compilation speed.  The mere act of using std.regex causes compilation times to TRIPLE. Let that sink in for a moment.
I thought I'd chime in with a similar experience.  I have this small library, testlib.d, used to check text outputs for programming contest problems (used internally, no public repository right now).  As one can imagine, a regular expression is sometimes handy to parse simple alphanumeric constructs and check their format.  But after using std.regex a bit, I put the import down into one function and templated it, and mostly stopped using the function in new checkers' code, developing some not-so-nice workarounds instead.  A typical checker is very short, a couple dozen to a couple hundred lines, and this change brings compile times from 5+ seconds to 2+ seconds. Sorry, I just can't stand having to wait 5+ seconds to compile a 50-liner, repeatedly, when the development time for a single checker is on the scale of minutes, not hours.  The C++ analog library (https://github.com/MikeMirzayanov/testlib) is currently 4500+ lines long, and so also takes 5+ seconds to compile a trivial checker.  This was actually one of the incentives for me to switch to D with a homebrew library for writing checkers. I know there was a move to make Phobos compilation faster a few months ago, and things seemed to improve a little then, but std.regex still adds seconds of compilation time.
Hmmm, I thought Dmitry was successful at eliminating most overheads of importing and not using std.regex. Perhaps a second pass would be needed?
Dec 30 2018
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Dec 30, 2018 at 09:45:11AM -0500, Andrei Alexandrescu via Digitalmars-d
wrote:
[...]
 Hmmm, I thought Dmitry was successful at eliminating most overheads of
 importing and not using std.regex. Perhaps a second pass would be
 needed?
I think the cost of importing without actually using std.regex should have been fixed by now, but the complaint here is importing and *using* std.regex, which would be the important use case. :-D

Understandably, some amount of cost would have to be paid to actually use the module, but given our fast-fast-fast slogan, should it really add *5 seconds* to compilation time just to use a couple of near-trivial regexes?

T -- Why can't you just be a nonconformist like everyone else? -- YHL
Dec 30 2018
prev sibling next sibling parent Rubn <where is.this> writes:
 Issues with “Everything is a library” C++
That kind of mentality sounds familiar; can't put my finger on it, though.

    var triples = from z in Enumerable.Range(1, int.MaxValue)
                  from x in Enumerable.Range(1, z)
                  from y in Enumerable.Range(x, z)
                  where x*x + y*y == z*z
                  select (x: x, y: y, z: z);

    foreach (var t in triples.Take(100))
    {
        Console.WriteLine($"({t.x},{t.y},{t.z})");
    }

At least it's not creating an "object" that represents a triple; in that sense it is readable and doesn't involve mixins.
Dec 29 2018
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/29/18 4:29 AM, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
 
 Time to show off your leet D skilz and see how good we can do it in D!
Ugh, ranges really aren't a good fit for emulating nested loops, unless you write a specialized one. I tried my best, but it kind of sucks:

    foreach(z, x, y;
        iota(size_t.max)
            .map!(a =>
                zip(StoppingPolicy.shortest, a.repeat, iota(1, a)))
            .joiner
            .map!(t =>
                zip(StoppingPolicy.shortest, t[0].repeat, t[1].repeat, iota(t[1], t[0])))
            .joiner
            .filter!(t => t[0]*t[0] == t[1]*t[1] + t[2]*t[2])
            .take(100))
    {
        writeln(x, " ", y, " ", z);
    }

Now, a specialized range looks much better and more efficient to me. This is essentially what the author wrote. And as is typical for D, we can split the tasks into more generic pieces. For instance:

    struct PossiblePythags(T)
    {
        T z = 2;
        T x = 1;
        T y = 1;

        auto front() { return tuple!(T, "x", T, "y", T, "z")(x, y, z); }

        void popFront()
        {
            if(++y == z)
            {
                if(++x == z)
                {
                    ++z;
                    x = 1;
                }
                y = x;
            }
        }

        enum empty = false;
    }

    auto pythagrange(T = size_t)()
    {
        return PossiblePythags!T()
            .filter!(p => p.x * p.x + p.y * p.y == p.z * p.z);
    }

No measuring of either compilation or runtime, but I would expect the specialized range version to be pretty close to the author's measurements.

I'm wondering if some generic "emulate N nested loops" with given stopping and starting conditions might be a useful addition for std.range or std.algorithm. I'm thinking of other looping algorithms like Floyd-Warshall that might benefit from such building blocks.

-Steve
Dec 29 2018
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 29.12.18 23:01, Steven Schveighoffer wrote:
 
 I'm wondering if some generic "emulate N nested loops" with given 
 stopping and starting conditions might be a useful addition for 
 std.range or std.algorithm. I'm thinking of other looping algorithms 
 like Floyd Warshall that might benefit from such building blocks.
 
 -Steve
cartesianProduct suffices for Floyd-Warshall:

    cartesianProduct(iota(n), iota(n), iota(n))
        .each!((k, i, j){ d[i][j] = min(d[i][j], d[i][k] + d[k][j]); });

For loops where nested ranges depend on outer indices, 'then' goes a long way. It is also easy to do something like:

    mixin(comp!q{(i,j,k) | i in iota(n), j in iota(i), k in iota(j)})

This would expand to:

    iota(n).then!(i=>iota(i).then!(j=>iota(j).map!(k=>tuple(i,j,k))))

(Of course, right now, tuples are somewhat inconvenient to use.)
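To make that concrete, a self-contained sketch of the cartesianProduct version (the matrix values are made up for illustration; 99 stands in for "no edge"):

    import std.algorithm : cartesianProduct, each, min;
    import std.range : iota;
    import std.stdio : writeln;

    void main()
    {
        enum n = 3;
        auto d = [[0, 4, 99], [99, 0, 1], [2, 99, 0]];
        // Floyd-Warshall needs k as the outermost loop; cartesianProduct
        // varies its first argument slowest, which gives exactly that order.
        cartesianProduct(iota(n), iota(n), iota(n))
            .each!((k, i, j) { d[i][j] = min(d[i][j], d[i][k] + d[k][j]); });
        writeln(d); // [[0, 4, 5], [3, 0, 1], [2, 6, 0]]
    }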
Dec 29 2018
prev sibling next sibling parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Saturday, 29 December 2018 at 22:01:58 UTC, Steven 
Schveighoffer wrote:
 On 12/29/18 4:29 AM, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
 
 Time to show off your leet D skilz and see how good we can do 
 it in D!
Ugh, ranges really aren't a good fit for emulating nested loops, unless you write a specialized one. I tried my best, but it kind of sucks:

    foreach(z, x, y;
        iota(size_t.max)
            .map!(a =>
                zip(StoppingPolicy.shortest, a.repeat, iota(1, a)))
            .joiner
            .map!(t =>
                zip(StoppingPolicy.shortest, t[0].repeat, t[1].repeat, iota(t[1], t[0])))
            .joiner
            .filter!(t => t[0]*t[0] == t[1]*t[1] + t[2]*t[2])
            .take(100))
    {
        writeln(x, " ", y, " ", z);
    }
Or if you can bear the closures:

    iota(1, size_t.max)
        .map!(z => iota(1, z + 1)
            .map!(x => iota(x, z + 1)
                .map!(y => tuple!("x", "y", "z")(x, y, z)))
            .joiner)
        .joiner
        .filter!(t => t.x^^2 + t.y^^2 == t.z^^2);
Dec 30 2018
parent Guillaume Piolat <first.last gmail.com> writes:
On Sunday, 30 December 2018 at 12:22:01 UTC, John Colvin wrote:
 Or if you can bear the closures

     iota(1, size_t.max)
         .map!(z => iota(1, z + 1)
               .map!(x => iota(x, z + 1)
                     .map!(y => tuple!("x", "y", "z")(x, y, z)))
               .joiner)
         .joiner
         .filter!(t => t.x^^2 + t.y^^2 == t.z^^2);
But this is still unreadable if you are not an anointed D range ninja.
Dec 30 2018
prev sibling parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Saturday, 29 December 2018 at 22:01:58 UTC, Steven 
Schveighoffer wrote:
 On 12/29/18 4:29 AM, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
 
 Time to show off your leet D skilz and see how good we can do 
 it in D!
Ugh, ranges really aren't a good fit for emulating nested loops, unless you write a specialized one. I tried my best, but it kind of sucks:

    foreach(z, x, y;
        iota(size_t.max)
            .map!(a =>
                zip(StoppingPolicy.shortest, a.repeat, iota(1, a)))
            .joiner
            .map!(t =>
                zip(StoppingPolicy.shortest, t[0].repeat, t[1].repeat, iota(t[1], t[0])))
            .joiner
            .filter!(t => t[0]*t[0] == t[1]*t[1] + t[2]*t[2])
            .take(100))
    {
        writeln(x, " ", y, " ", z);
    }
Isn't "StoppingPolicy.shortest" the default?
Dec 30 2018
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/30/18 7:27 AM, John Colvin wrote:
 On Saturday, 29 December 2018 at 22:01:58 UTC, Steven Schveighoffer wrote:
 On 12/29/18 4:29 AM, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/

 Time to show off your leet D skilz and see how good we can do it in D!
Ugh, ranges really aren't a good fit for emulating nested loops, unless you write a specialized one. I tried my best, but it kind of sucks:

    foreach(z, x, y;
        iota(size_t.max)
            .map!(a =>
                zip(StoppingPolicy.shortest, a.repeat, iota(1, a)))
            .joiner
            .map!(t =>
                zip(StoppingPolicy.shortest, t[0].repeat, t[1].repeat, iota(t[1], t[0])))
            .joiner
            .filter!(t => t[0]*t[0] == t[1]*t[1] + t[2]*t[2])
            .take(100))
    {
        writeln(x, " ", y, " ", z);
    }
Isn't "StoppingPolicy.shortest" the default?
Maybe :) I didn't spend a lot of time examining the details.

I also like your way, it's much more readable, but I still don't like the joiners. There has to be a way to just store the three ranges and iterate them properly, something like:

    struct Triples(ZRange, alias r2func, alias r3func)
    {
        ZRange z;
        typeof(r2func(z.front)) x;
        typeof(r3func(z.front, x.front)) y;

        auto front() { return tuple(z.front, x.front, y.front); }

        void popFront()
        {
            y.popFront;
            if(y.empty)
            {
                scope(exit) if(!x.empty) y = r3func(z.front, x.front);
                x.popFront;
                if(x.empty)
                {
                    scope(exit) if(!z.empty) x = r2func(z.front);
                    z.popFront;
                }
            }
        }

        bool empty() { return y.empty; }
    }

    auto pythags = iota(size_t.max)
        .triples!(z => iota(1, z), (z, x) => iota(x, z))
        .filter!(t => t[1]*t[1] + t[2]*t[2] == t[0]*t[0]);

Maybe split it out into something like withLoop; maybe we can make this kind of thing work:

    auto pythags = iota(size_t.max)
        .withLoop!(z => iota(1, z))
        .withLoop!((z, x) => iota(x, z))
        .filter!(t => t[1]*t[1] + t[2]*t[2] == t[0]*t[0]);

Which is really similar to the loop design.

-Steve
Dec 30 2018
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/30/18 1:13 PM, Steven Schveighoffer wrote:
 Maybe split it out into something like withloop, maybe we can make this 
 kind of thing work:
 
There you go, nested looping with ranges, with no need for closures or joiners: https://run.dlang.io/is/hkqhvZ :) OK, spent enough time on this... I have to stop before it consumes my whole weekend. -Steve
Dec 30 2018
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/30/18 1:47 PM, Steven Schveighoffer wrote:
 On 12/30/18 1:13 PM, Steven Schveighoffer wrote:
 Maybe split it out into something like withloop, maybe we can make 
 this kind of thing work:
There you go, nested looping with ranges, with no need for closures or joiners: https://run.dlang.io/is/hkqhvZ
BTW, this would be much nicer with first-class tuples... -Steve
Dec 30 2018
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 29.12.18 10:29, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
 
 Time to show off your leet D skilz and see how good we can do it in D!
Current D:

---
import std.range, std.algorithm, std.typecons;
import std.bigint, std.stdio;
alias then(alias a)=(r)=>map!a(r).joiner;
void main(){
    auto triples=recurrence!"a[n-1]+1"(1.BigInt)
        .then!(z=>iota(1,z+1).then!(x=>iota(x,z+1).map!(y=>tuple(x,y,z))))
        .filter!((t)=>t[0]^^2+t[1]^^2==t[2]^^2);
    triples.each!((x,y,z){ writeln(x," ",y," ",z); });
}
---

tuple-syntax branch at https://github.com/tgehr/dmd/tree/tuple-syntax:

---
import std.range, std.algorithm, std.typecons;
import std.bigint, std.stdio;
alias then(alias a)=(r)=>map!a(r).joiner;
void main(){
    auto triples=recurrence!"a[n-1]+1"(1.BigInt)
        .then!(z=>iota(1,z+1).then!(x=>iota(x,z+1).map!(y=>(x,y,z))))
        .filter!((x,y,z)=>x^^2+y^^2==z^^2);
    triples.each!((x,y,z){ writeln(x," ",y," ",z); });
}
---
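For readers new to the pattern: `then` here is map-then-flatten, i.e. flatMap. A tiny sketch of its behavior in isolation:

    import std.algorithm : map, joiner, equal;
    import std.range : iota;

    alias then(alias a) = (r) => map!a(r).joiner;

    void main()
    {
        // Each element maps to a sub-range; joiner flattens the result.
        assert([1, 2, 3].then!(x => iota(0, x)).equal([0, 0, 1, 0, 1, 2]));
    }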
Dec 29 2018
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/29/18 9:03 PM, Timon Gehr wrote:
 On 29.12.18 10:29, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/

 Time to show off your leet D skilz and see how good we can do it in D!
Current D:

---
import std.range, std.algorithm, std.typecons;
import std.bigint, std.stdio;
alias then(alias a)=(r)=>map!a(r).joiner;
void main(){
    auto triples=recurrence!"a[n-1]+1"(1.BigInt)
        .then!(z=>iota(1,z+1).then!(x=>iota(x,z+1).map!(y=>tuple(x,y,z))))
        .filter!((t)=>t[0]^^2+t[1]^^2==t[2]^^2);
    triples.each!((x,y,z){ writeln(x," ",y," ",z); });
}
---

tuple-syntax branch at https://github.com/tgehr/dmd/tree/tuple-syntax:

---
import std.range, std.algorithm, std.typecons;
import std.bigint, std.stdio;
alias then(alias a)=(r)=>map!a(r).joiner;
void main(){
    auto triples=recurrence!"a[n-1]+1"(1.BigInt)
        .then!(z=>iota(1,z+1).then!(x=>iota(x,z+1).map!(y=>(x,y,z))))
        .filter!((x,y,z)=>x^^2+y^^2==z^^2);
    triples.each!((x,y,z){ writeln(x," ",y," ",z); });
}
---
The "then" abstraction is pretty awesome. Thanks!
Dec 30 2018
parent reply Jacob Carlborg <doob me.com> writes:
On 2018-12-30 15:46, Andrei Alexandrescu wrote:
 On 12/29/18 9:03 PM, Timon Gehr wrote:
 alias then(alias a)=(r)=>map!a(r).joiner;
The "then" abstraction is pretty awesome. Thanks!
Isn't that usually called "flatMap"? -- /Jacob Carlborg
Dec 30 2018
parent reply sarn <sarn theartofmachinery.com> writes:
On Sunday, 30 December 2018 at 16:51:07 UTC, Jacob Carlborg wrote:
 On 2018-12-30 15:46, Andrei Alexandrescu wrote:
 On 12/29/18 9:03 PM, Timon Gehr wrote:
 alias then(alias a)=(r)=>map!a(r).joiner;
The "then" abstraction is pretty awesome. Thanks!
Isn't that usually called "flatMap"?
Yeah, in Haskell monad terminology, "then" means >>, but flatMap is >>= ("bind"). So flatMap is a less confusing name for some people.
Dec 30 2018
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 30.12.18 22:12, sarn wrote:
 On Sunday, 30 December 2018 at 16:51:07 UTC, Jacob Carlborg wrote:
 On 2018-12-30 15:46, Andrei Alexandrescu wrote:
 On 12/29/18 9:03 PM, Timon Gehr wrote:
 alias then(alias a)=(r)=>map!a(r).joiner;
The "then" abstraction is pretty awesome. Thanks!
Isn't that usually called "flatMap"?
Yeah, in Haskell monad terminology, "then" means >>, but flatMap is >>= ("bind").  So flatMap is a less confusing name for some people.
Pun intended. Welcome to D, where 'enum' means 'const', 'const' means 'readonly', 'lazy' means 'by name', 'assert' means 'assume' and 'real' does not mean 'real' (in fact, I really like the 'ireal' and 'creal' keywords, pity they are being phased out). :)

The wider context is that I have argued many times that it makes sense to put 'trivial' one-liners like this one into Phobos, even if for no other reason than to standardize their names.
Dec 31 2018
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/31/2018 2:28 PM, Timon Gehr wrote:
 Welcome to D, where 'enum' means 'const', 'const' means 
 'readonly', 'lazy' means 'by name', 'assert' means 'assume' and 'real' does
not 
 mean 'real' (in fact, I really like the 'ireal' and 'creal' keywords, pity
they 
 are being phased out). :)
D's "by name" are the template alias parameters.
Dec 31 2018
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01.01.19 07:21, Walter Bright wrote:
 On 12/31/2018 2:28 PM, Timon Gehr wrote:
 Welcome to D, where 'enum' means 'const', 'const' means 'readonly', 
 'lazy' means 'by name', 'assert' means 'assume' and 'real' does not 
 mean 'real' (in fact, I really like the 'ireal' and 'creal' keywords, 
 pity they are being phased out). :)
D's "by name" are the template alias parameters.
I think alias parameters are not "by name"; they are something like "by symbol". ("By name" is a bit confusing of a name; the argument expression need not really have a "name".)

The standard PL jargon is this:
https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_name
https://en.wikipedia.org/wiki/Lazy_evaluation
Jan 03 2019
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/3/2019 11:26 AM, Timon Gehr wrote:
 On 01.01.19 07:21, Walter Bright wrote:
 On 12/31/2018 2:28 PM, Timon Gehr wrote:
 Welcome to D, where 'enum' means 'const', 'const' means 'readonly', 'lazy' 
 means 'by name', 'assert' means 'assume' and 'real' does not mean 'real' (in 
 fact, I really like the 'ireal' and 'creal' keywords, pity they are being 
 phased out). :)
D's "by name" are the template alias parameters.
I think alias parameters are not "by name" they are something like "by symbol". ("by name" is a bit confusing of a name, the argument expression need not really have a "name".) The standard PL jargon is this: https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_name https://en.wikipedia.org/wiki/Lazy_evaluation
Thanks for the info. It seems the explanations are a little slippery, and D's usage doesn't precisely fit. For example, D's lazy parameters can be used to implement control structures, as it says lazy evaluation is for, but then seems a bit ambiguous about whether the lazy evaluation is done once or many times.
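For concreteness, the classic lazy-parameter control structure (essentially the D spec's dotimes example): the argument expression is re-evaluated on every use, i.e. call by name rather than memoized call by need.

    import std.stdio : writeln;

    // `exp` is evaluated each time it is referenced, not once at the call site.
    void dotimes(int n, lazy void exp)
    {
        while (n--)
            exp();
    }

    void main()
    {
        int x;
        dotimes(3, writeln(x++)); // prints 0, 1, 2: three evaluations, not one
    }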
Jan 03 2019
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2018 6:03 PM, Timon Gehr wrote:
 Current D:
 ---
 import std.range, std.algorithm, std.typecons;
 import std.bigint, std.stdio;
 alias then(alias a)=(r)=>map!a(r).joiner;
 void main(){
      auto triples=recurrence!"a[n-1]+1"(1.BigInt)
          .then!(z=>iota(1,z+1).then!(x=>iota(x,z+1).map!(y=>tuple(x,y,z))))
          .filter!((t)=>t[0]^^2+t[1]^^2==t[2]^^2);
      triples.each!((x,y,z){ writeln(x," ",y," ",z); });
 }
 ---
I never would have thought of that. Thank you!
Dec 30 2018
prev sibling next sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright 
wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/

 Time to show off your leet D skilz and see how good we can do 
 it in D!
I'm on it. As I write, I'm timing (compile and run time) several C++, D, and Rust implementations and writing a blog post. I'm only on the 2nd implementation but D is winning... :) I'm going to shamelessly steal Timon's range code in this thread and the generator one posted on reddit.
Dec 30 2018
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2018 5:18 AM, Atila Neves wrote:
 On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/

 Time to show off your leet D skilz and see how good we can do it in D!
I'm on it. As I write, I'm timing (compile and run time) several C++, D, and Rust implementations and writing a blog post. I'm only on the 2nd implementation but D is winning... :) I'm going to shamelessly steal Timon's range code in this thread and the generator one posted on reddit.
Wow! I'm looking forward to it.
Dec 30 2018
parent reply Atila Neves <atila.neves gmail.com> writes:
On Sunday, 30 December 2018 at 22:31:27 UTC, Walter Bright wrote:
 On 12/30/2018 5:18 AM, Atila Neves wrote:
 On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright 
 wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/

 Time to show off your leet D skilz and see how good we can do 
 it in D!
I'm on it. As I write, I'm timing (compile and run time) several C++, D, and Rust implementations and writing a blog post. I'm only on the 2nd implementation but D is winning... :) I'm going to shamelessly steal Timon's range code in this thread and the generator one posted on reddit.
Wow! I'm looking forward to it.
Blog:
https://atilanevesoncode.wordpress.com/2018/12/31/comparing-pythagorean-triples-in-c-d-and-rust/

Reddit:
https://www.reddit.com/r/programming/comments/ab71ag/comparing_pythagorean_triples_in_c_d_and_rust/

Hacker news something something dark side.
Dec 31 2018
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/31/18 8:20 AM, Atila Neves wrote:
 On Sunday, 30 December 2018 at 22:31:27 UTC, Walter Bright wrote:
 On 12/30/2018 5:18 AM, Atila Neves wrote:
 On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/

 Time to show off your leet D skilz and see how good we can do it in D!
I'm on it. As I write, I'm timing (compile and run time) several C++, D, and Rust implementations and writing a blog post. I'm only on the 2nd implementation but D is winning... :) I'm going to shamelessly steal Timon's range code in this thread and the generator one posted on reddit.
Wow! I'm looking forward to it.
Blog:
https://atilanevesoncode.wordpress.com/2018/12/31/comparing-pythagorean-triples-in-c-d-and-rust/

Reddit:
https://www.reddit.com/r/programming/comments/ab71ag/comparing_pythagorean_triples_in_c_d_and_rust/

Hacker news something something dark side.
I tried the version I came up with with a dedicated looping range (https://run.dlang.io/is/hkqhvZ), cuts the range version down to 550ms (my timings are on my macbook, but my baseline was literally exactly what yours was, 154ms). In any case, it does seem like the optimizer/compiler is much better at dealing with nested loops than 3 ranges. This is probably a case where the low level code is so fast (doing 3 multiplications, an addition and an equality check) that the loop processing becomes significant. -Steve
Dec 31 2018
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/31/18 10:15 AM, Steven Schveighoffer wrote:
 On 12/31/18 8:20 AM, Atila Neves wrote:
 On Sunday, 30 December 2018 at 22:31:27 UTC, Walter Bright wrote:
 On 12/30/2018 5:18 AM, Atila Neves wrote:
 On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/

 Time to show off your leet D skilz and see how good we can do it in D!
I'm on it. As I write, I'm timing (compile and run time) several C++, D, and Rust implementations and writing a blog post. I'm only on the 2nd implementation but D is winning... :) I'm going to shamelessly steal Timon's range code in this thread and the generator one posted on reddit.
Wow! I'm looking forward to it.
Blog:
https://atilanevesoncode.wordpress.com/2018/12/31/comparing-pythagorean-triples-in-c-d-and-rust/

Reddit:
https://www.reddit.com/r/programming/comments/ab71ag/comparing_pythagorean_triples_in_c_d_and_rust/

Hacker news something something dark side.
I tried the version I came up with with a dedicated looping range (https://run.dlang.io/is/hkqhvZ), cuts the range version down to 550ms (my timings are on my macbook, but my baseline was literally exactly what yours was, 154ms). In any case, it does seem like the optimizer/compiler is much better at dealing with nested loops than 3 ranges. This is probably a case where the low level code is so fast (doing 3 multiplications, an addition and an equality check) that the loop processing becomes significant.
And the answer was provided by someone who examined the compiler output: https://www.reddit.com/r/programming/comments/ab71ag/comparing_pythagorean_triples_in_c_d_and_rust/ecy6rqn/ And it fixes the problem with Timon's D version as well, new time down to 172ms (see my reply to that post). -Steve
Dec 31 2018
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/31/2018 8:05 AM, Steven Schveighoffer wrote:
 And the answer was provided by someone who examined the compiler output:
 
 https://www.reddit.com/r/programming/comments/ab71ag/comparing_pythagorean_triples_in_c_d_and_rust/ecy6rqn/
For reference, the reddit comment is: "Looking at the generated code in compiler explorer, it looks like the Rust compiler is not hoisting the multiplications out of the loops, while the C++ compiler (I used clang) does. Furthermore, it seems like using `for x in y..=z` etc results in quite convoluted conditions. This code seems to perform the same as the C++: https://godbolt.org/z/-nzALh It looks like there's some things to fix in the rust compiler.."
 And it fixes the problem with Timon's D version as well, new time down to 172ms
One thing to consider is optimizers are designed by looking at the code generated by popular programming languages and popular programming methods. Hence the traditional loop structure gets a heluva lot of attention. The range version is fairly new, and so it will likely do less well.
 (see my reply to that post).
Your reply is:

"Ooh, that's interesting. Same issue with the D version. I had to work a bit on it, but this does work and is 172ms vs the ~1000ms:

    return
      recurrence!"a[n-1]+1"(1)
      .then!((z) {
         auto ztotal = z * z;
         return iota(1, z + 1).then!((x) {
             auto xtotal = x * x;
             return iota(x, z + 1)
                .filter!(y => y * y + xtotal == ztotal)
                .map!(y => tuple(x,y,z));
         });
      });
"

Atila, can you please update the blog post?
Dec 31 2018
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/31/18 1:43 PM, Walter Bright wrote:
 On 12/31/2018 8:05 AM, Steven Schveighoffer wrote:
  > (see my reply to that post).
 
 Your reply is:
 
 "Ooh, that's interesting. Same issue with the D version.
 I had to work a bit on it, but this does work and is 172ms vs the ~1000ms:
 
    return
      recurrence!"a[n-1]+1"(1)
      .then!((z) {
         auto ztotal = z * z;
         return iota(1, z + 1).then!((x) {
             auto xtotal = x * x;
             return iota(x, z + 1)
               .filter!(y => y * y + xtotal == ztotal)
                .map!(y => tuple(x,y,z));
                });
         });
 "
 
 Atila, can you please update the blog post?
This isn't a fair comparison though -- I'm doing work here that in the other languages the compiler is doing (hoisting the multiplications outside the inner loops). It's not a straight port. I just wanted to point out that this accounts for how the rust and C++ versions are faster than the D range versions. It would be good to look into why the D compilers are not seeing that optimization possibility. -Steve
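A sketch of the hoist in plain loop form (helper names are hypothetical), showing what clang does automatically and what the range version above had to spell out by hand:

    // Hypothetical illustration; `sink` just consumes each triple.
    void naive(int n, void delegate(int, int, int) sink)
    {
        foreach (z; 2 .. n)
            foreach (x; 1 .. z + 1)
                foreach (y; x .. z + 1)
                    if (x*x + y*y == z*z) // x*x and z*z recomputed per y
                        sink(x, y, z);
    }

    void hoisted(int n, void delegate(int, int, int) sink)
    {
        foreach (z; 2 .. n)
        {
            immutable zz = z * z;     // invariant across both inner loops
            foreach (x; 1 .. z + 1)
            {
                immutable xx = x * x; // invariant across the y loop
                foreach (y; x .. z + 1)
                    if (xx + y*y == zz)
                        sink(x, y, z);
            }
        }
    }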
Dec 31 2018
prev sibling next sibling parent Jon Degenhardt <jond noreply.com> writes:
On Monday, 31 December 2018 at 13:20:35 UTC, Atila Neves wrote:
 Blog:

 https://atilanevesoncode.wordpress.com/2018/12/31/comparing-pythagorean-triples-in-c-d-and-rust/

 Reddit:

 https://www.reddit.com/r/programming/comments/ab71ag/comparing_pythagorean_triples_in_c_d_and_rust/


 Hacker news something something dark side.
LTO often helps optimize range code (and Phobos code generally), so I tried it out on the variants in your repo. Indeed, the range variants improve, but not the others. It still doesn't bring the range version into line with the other three variants.

Compile lines:

* No LTO: ldc2 -O2
* LTO: ldc2 -O2 -flto=thin -defaultlib=phobos2-ldc-lto,druntime-ldc-lto

LDC 1.13.0; Macbook Pro, 16GB RAM.

The runtimes for the simple, lambda, and generator variants came in at 120ms each, both with LTO off and LTO on. The range version was 1080ms with no LTO, and 780ms with LTO.

I bumped N up to 3000 to see if more differentiation would show up. There was some, but overall the results were still very consistent with N at 1000.

--Jon
Dec 31 2018
prev sibling parent reply Dukc <ajieskola gmail.com> writes:
On Monday, 31 December 2018 at 13:20:35 UTC, Atila Neves wrote:
 Blog:

 https://atilanevesoncode.wordpress.com/2018/12/31/comparing-pythagorean-triples-in-c-d-and-rust/
Isn't the main problem with the performance of Timon's range loop that it uses arbitrary-sized integers (BigInts)? I took his example and modified it to this:

    import std.experimental.all;
    import std.datetime.stopwatch : AutoStart, StopWatch;

    alias then(alias a)=(r)=>map!a(r).joiner;

    void main(){
        auto sw = StopWatch(AutoStart.no);

        if (true)
        {
            sw.start;
            scope (success) sw.stop;
            auto triples=recurrence!"a[n-1]+1"(1.BigInt)
                .then!(z=>iota(1,z+1).then!(x=>iota(x,z+1).map!(y=>tuple(x,y,z))))
                .filter!((t)=>t[0]^^2+t[1]^^2==t[2]^^2)
                .until!(t=>t[2] >= 500);
            triples.each!((x,y,z){ writeln(x," ",y," ",z); });
        }
        writefln("Big int time is %s microseconds", sw.peek.total!"usecs");
        sw.reset;

        if (true)
        {
            sw.start;
            scope (success) sw.stop;
            auto triples=recurrence!"a[n-1]+1"(1L)
                .then!(z=>iota(1,z+1).then!(x=>iota(x,z+1).map!(y=>tuple(x,y,z))))
                .filter!((t)=>t[0]^^2+t[1]^^2==t[2]^^2)
                .until!(t=>t[2] >= 500);
            triples.each!((x,y,z){ writeln(x," ",y," ",z); });
        }
        writefln("Long int time is %s microseconds", sw.peek.total!"usecs");
        return;
    }

The output, with LDC version 1.11.0-beta2 and

    dub --compiler=ldc2 --build=release

was:

    3 4 5
    6 8 10
    5 12 13
    9 12 15
    8 15 17
    [...snip...]
    155 468 493
    232 435 493
    340 357 493
    190 456 494
    297 396 495
    Big int time is 4667925 microseconds
    3 4 5
    6 8 10
    5 12 13
    9 12 15
    8 15 17
    [...snip...]
    155 468 493
    232 435 493
    340 357 493
    190 456 494
    297 396 495
    Long int time is 951821 microseconds

That is almost five times as fast. Assuming the factor would be the same in your blog, doesn't that account for most of the difference in performance between the D range version and the other versions? The remaining difference might be explained by bounds checking.
Jan 04 2019
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 1/4/19 9:00 AM, Dukc wrote:
 On Monday, 31 December 2018 at 13:20:35 UTC, Atila Neves wrote:
 Blog:

https://atilanevesoncode.wordpress.com/2018/12/31/comparing-pythagorean-triples-in-c-d-and-rust/
Isn't the main problem with the performance of Timon's range loop that it uses arbitrary-sized integers (BigInts)?
Atila's version of that code doesn't use bigints:
https://github.com/atilaneves/pythagoras/blob/master/range.d#L24

The major problem with the D range implementation is that the compiler isn't able to find the optimization of hoisting the multiplication of the outer indexes out of the inner loop. See my responses to Atila in this thread.

-Steve
Jan 04 2019
parent Dukc <ajieskola gmail.com> writes:
On Friday, 4 January 2019 at 16:21:40 UTC, Steven Schveighoffer 
wrote:
 On 1/4/19 9:00 AM, Dukc wrote:

 
 Isn't the main problem with the performance of Timon's range 
 loop that it uses arbitrary-sized integers (BigInts)?
Atila's version of that code doesn't use bigints: https://github.com/atilaneves/pythagoras/blob/master/range.d#L24 The major problem with the D range implementation is that the compiler isn't able to find the optimization of hoisting the multiplication of the outer indexes out of the inner loop. See my responses to Atila in this thread. -Steve
Now I'm replying to an old thread; I hope you're still interested enough to warrant necrobumping. The thing is, I did some additional testing of the range version, and I think I found a way to make the compiler find the quoted optimization without doing it manually. You just have to move the sqr calculations to where the data is still nested:

    import std.experimental.all;
    import std.datetime.stopwatch : AutoStart, StopWatch;

    alias then(alias a)=(r)=>map!a(r).joiner;

    void main(){
        auto sw = StopWatch(AutoStart.no);
        int total;

        if (true)
        {
            sw.start;
            scope (success) sw.stop;
            auto triples=recurrence!"a[n-1]+1"(1L)
                .then!(z=>iota(1,z+1).then!(x=>iota(x,z+1).map!(y=>tuple(x,y,z))))
                .filter!((t)=>t[0]^^2+t[1]^^2==t[2]^^2)
                .until!(t=>t[2] >= 500);
            triples.each!((x,y,z){ total += x+y+z; });
        }
        writefln("Old loop time is %s microseconds", sw.peek.total!"usecs"); // 118_614
        sw.reset;

        if (true)
        {
            sw.start;
            scope (success) sw.stop;
            auto triples=recurrence!"a[n-1]+1"(1L)
                .then!(z=>iota(1,z+1).then!
                (   x=>iota(x,z+1)
                    .map!(y=>tuple(x,y,z))
                    .filter!((t)=>t[0]^^2+t[1]^^2==t[2]^^2))
                )
                .until!(t=>t[2] >= 500);
            triples.each!((x,y,z){ total += x+y+z; });
        }
        writefln("New loop time is %s microseconds", sw.peek.total!"usecs"); // 21_936

        writeln(total); // to force the compiler to do the calculations
        return;
    }

See, no manual caching of the squares. And the improvement is over 5X (dub --build=release --compiler=ldc2), which should bring it close to the other versions in the blog entry.
Mar 05 2019
prev sibling next sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright 
wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
It says:
 C++ compilation times have been a source of pain in every 
 non-trivial-size codebase I’ve worked on. Don’t believe me? Try 
 building one of the widely available big codebases (any of: 
 Chromium, Clang/LLVM, UE4 etc will do). Among the things I [...] 
 the list, and has been since forever.
[...] getting worse. There is the theory (D builds fast) and the application (DUB often negates the advantage; you need to avoid templatitis).
Dec 30 2018
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Dec 30, 2018 at 01:25:33PM +0000, Guillaume Piolat via Digitalmars-d
wrote:
 On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
It says:
 C++ compilation times have been a source of pain in every
 non-trivial-size codebase I’ve worked on. Don’t believe me? Try
 building one of the widely available big codebases (any of:
 Chromium, Clang/LLVM, UE4 etc will do). Among the things I really
 [...] and has been since forever.
[...] getting worse. There is the theory (D builds fast) and the application (DUB often negates the advantage; you need to avoid templatitis).
D theory sounds all good and all, but in practice you have warts like dub (one big reason I stay away from it -- though based on what Sonke said recently, performance may have improved since I last checked), std.regex (after the last big refactor, something Really Bad happened to its compile times -- it didn't used to be this bad!), std.format (a big hairball I haven't dared to look too deeply into), and a couple of others, like various recursive templates elsewhere in Phobos. And also std.uni's large templated internal tables, which may be (part of?) the common cause of the compile-time slowdowns in std.format and std.regex.

There's also dmd's ridiculous memory usage policy, which is supposed to help compile times when you have ridiculous amounts of free RAM, but which causes anything from swap-thrashing slowdowns to outright unusability on medium- to low-memory systems.

T -- That's not a bug; that's a feature!
Dec 30 2018
next sibling parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Sunday, 30 December 2018 at 13:46:46 UTC, H. S. Teoh wrote:
 There's also dmd's ridiculous memory usage policy, which is 
 supposed to help compile times when you have ridiculous amounts 
 of free RAM, but which causes anything from swap thrashing 
 slowdowns to outright unusability on medium- to low-memory 
 systems.
Rainer and Martin Kinkelin have been working on that for LDC; they might upstream it eventually.
Dec 30 2018
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Dec 30, 2018 at 01:58:56PM +0000, Nicholas Wilson via Digitalmars-d
wrote:
 On Sunday, 30 December 2018 at 13:46:46 UTC, H. S. Teoh wrote:
 There's also dmd's ridiculous memory usage policy, which is supposed
 to help compile times when you have ridiculous amounts of free RAM,
 but which causes anything from swap thrashing slowdowns to outright
 unusability on medium- to low-memory systems.
Rainer an Martin Kinkelin have been working on that for LDC, they might upstream eventually.
Recently I noticed that LDC now compiles every function into its own section and runs LTO, including GC of unreferenced sections, by default. As a result, executable sizes are back down to where equivalent C/C++ code would be, as opposed to being a MB or so larger when compiled with DMD. It more-or-less nullifies most of the ill effects of template bloat.

Furthermore, LDC now tracks dmd releases very closely, almost on par, produces better code, has a far wider range of target archs, like Android/ARM, that I doubt DMD will ever support in the foreseeable future, and recently shows compile times pretty close to DMD's (with things like std.regex or std.format making DMD dog-slow anyway, the extra time the LDC backend takes for codegen basically becomes roundoff error).

Right now, I'm very tempted to drop dmd as my primary D compiler and use LDC instead. If this memory usage thing is fixed in LDC, I probably WILL drop dmd for good. I've been itching to get D to compile on some of my low-memory systems but have been hampered until now -- it's been extremely frustrating.

T -- "You are a very disagreeable person." "NO."
Dec 30 2018
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2018 6:27 AM, H. S. Teoh wrote:
 Recently I noticed that LDC now compiles every function into its own
 section
DMD has always done that.
 and runs LTO, including GC of unreferenced sections, by default.
I've tried using the GC feature of ld, but it would produce executables that crashed. But that was years ago, perhaps ld has improved. If someone would like to turn that on with dmd, it would be a worthwhile experiment.
Dec 30 2018
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2018-12-30 23:37, Walter Bright wrote:

 I've tried using the GC feature of ld, but it would produce executables 
 that crashed.
I think there might be some sections (perhaps the module info and similar) that need to be tagged (somehow) otherwise they will be removed. -- /Jacob Carlborg
Dec 31 2018
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Dec 30, 2018 at 02:37:59PM -0800, Walter Bright via Digitalmars-d wrote:
 On 12/30/2018 6:27 AM, H. S. Teoh wrote:
 Recently I noticed that LDC now compiles every function into their
 own section
DMD has always done that.
Then it must be doing something wrong, since running dmd with -L--gc-sections produces a 2 MB executable, but running ldc2 (without any special options) produces a 456 KB executable. Exactly the same set of source files. No dependencies on compiler-specific features in the code.

To put this more in perspective, I re-tested this with a trivial Hello World program:

    import std.stdio;
    void main() {
        writeln("Hello world");
    }

Compile this with dmd:

    $ dmd -L--gc-sections test.d
    $ ls -l test
    -rwxrwxr-x 1 hsteoh hsteoh 967416 Dec 31 07:43 test

Compile this with ldc2:

    $ ldc2 test.d
    $ \ls -l test
    -rwxrwxr-x 1 hsteoh hsteoh 24632 Dec 31 07:44 test

Note the order-of-magnitude difference in size, and that ldc2 achieves this by default, with no additional options needed. How do you make dmd produce the same (or comparable) output?
 and runs LTO, including GC of unreferenced sections, by default.
I've tried using the GC feature of ld, but it would produce executables that crashed. But that was years ago, perhaps ld has improved. If someone would like to turn that on with dmd, it would be a worthwhile experiment.
Maybe it's because certain required sections need to be marked in some way so that the linker doesn't discard them?

T -- The irony is that Bill Gates claims to be making a stable operating system and Linus Torvalds claims to be trying to take over the world. -- Anonymous
Dec 31 2018
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/31/2018 7:51 AM, H. S. Teoh wrote:
 How do you make dmd produce the same (or comparable) output?
I don't know. To find out would take some time comparing the .o files.
Dec 31 2018
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2018-12-31 16:51, H. S. Teoh wrote:

 Then it must be doing something wrong, since running dmd with
 -L-gc-sections produces a 2 MB executable, but running ldc2 (without any
 special options) produces a 456 KB executable. Exactly the same set of
 source files. No dependencies on compiler-specific features in the code.
 
 To put this more in perspective, I re-tested this with a trivial Hello
 World program:
 
 	import std.stdio;
 	void main() {
 		writeln("Hello world");
 	}
 
 Compile this with dmd:
 
 	$ dmd -L--gc-sections test.d
 	$ ls -l test
 	-rwxrwxr-x 1 hsteoh hsteoh 967416 Dec 31 07:43 test
 
 Compile this with ldc2:
 
 	$ ldc2 test.d
 	hsteoh crystal:/tmp$ \ls -l test
 	-rwxrwxr-x 1 hsteoh hsteoh 24632 Dec 31 07:44 test
 
 Note the order of magnitude difference in size, and that ldc2 achieves
 this by default, with no additional options needed.
 
 How do you make dmd produce the same (or comparable) output?
For me I get comparable output with DMD and LDC:

LDC: 932 KB
DMD (--gc-sections): 957 KB
DMD: 988 KB

This is running using Docker containers (which are running Ubuntu). Funny thing, on macOS it's the opposite of your experience:

LDC: 5 MB
LDC (-dead_strip): 957 KB
DMD: 882 KB
DMD (-dead_strip): 361 KB

-- /Jacob Carlborg
Jan 01 2019
parent reply kinke <noone nowhere.com> writes:
On Tuesday, 1 January 2019 at 09:44:40 UTC, Jacob Carlborg wrote:
 On 2018-12-31 16:51, H. S. Teoh wrote:
 Note the order of magnitude difference in size, and that ldc2 
 achieves
 this by default, with no additional options needed.
 
 How do you make dmd produce the same (or comparable) output?
For me I get comparable output with DMD and LDC
You guys are most likely comparing apples to oranges - H. S. using some distro-LDC preconfigured to link against shared druntime/Phobos, while LDC usually defaults to the static libs.
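One way to check which configuration is actually being measured; a sketch, where `app.d` is a placeholder and the flags are the documented Linux ones for dmd and LDC:

    # static default libs (the usual out-of-the-box behavior)
    dmd app.d
    ldc2 app.d

    # explicitly link the shared druntime/Phobos instead
    dmd -defaultlib=libphobos2.so app.d
    ldc2 -link-defaultlib-shared app.d

    # inspect what the resulting binary links against
    ldd ./app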
Jan 01 2019
next sibling parent reply Rubn <where is.this> writes:
On Tuesday, 1 January 2019 at 11:57:26 UTC, kinke wrote:
 On Tuesday, 1 January 2019 at 09:44:40 UTC, Jacob Carlborg 
 wrote:
 On 2018-12-31 16:51, H. S. Teoh wrote:
 Note the order of magnitude difference in size, and that ldc2 
 achieves
 this by default, with no additional options needed.
 
 How do you make dmd produce the same (or comparable) output?
For me I get comparable output with DMD and LDC
You guys are most likely comparing apples to oranges - H. S. using some distro-LDC preconfigured to link against shared druntime/Phobos, while LDC usually defaults to the static libs.
Druntime and Phobos can be used as shared libraries? What is this magical feature? -Windows User
Jan 01 2019
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, January 1, 2019 7:04:42 AM MST Rubn via Digitalmars-d wrote:
 On Tuesday, 1 January 2019 at 11:57:26 UTC, kinke wrote:
 On Tuesday, 1 January 2019 at 09:44:40 UTC, Jacob Carlborg

 wrote:
 On 2018-12-31 16:51, H. S. Teoh wrote:
 Note the order of magnitude difference in size, and that ldc2
 achieves
 this by default, with no additional options needed.

 How do you make dmd produce the same (or comparable) output?
For me I get comparable output with DMD and LDC
You guys are most likely comparing apples to oranges - H. S. using some distro-LDC preconfigured to link against shared druntime/Phobos, while LDC usually defaults to the static libs.
Druntime and Phobos can be used as shared libraries? What is this magical feature? -Windows User
LOL. Personally, it's a feature that I avoid like the plague, because I hate it when my programs stop working just because I updated dmd (which would be bad enough for a normal user, but it's particularly nasty when you're frequently following master and making local changes, because you're working on Phobos, druntime, and/or dmd).

But yeah, when I saw the difference in size that H. S. Teoh was seeing, my first thought was that one was probably using Phobos as a shared library while the other was using it as a static one.

- Jonathan M Davis
Jan 01 2019
next sibling parent reply Rubn <where is.this> writes:
On Tuesday, 1 January 2019 at 19:28:36 UTC, Jonathan M Davis 
wrote:
 On Tuesday, January 1, 2019 7:04:42 AM MST Rubn via 
 Digitalmars-d wrote:
 On Tuesday, 1 January 2019 at 11:57:26 UTC, kinke wrote:
 On Tuesday, 1 January 2019 at 09:44:40 UTC, Jacob Carlborg

 wrote:
 On 2018-12-31 16:51, H. S. Teoh wrote:
 Note the order of magnitude difference in size, and that 
 ldc2
 achieves
 this by default, with no additional options needed.

 How do you make dmd produce the same (or comparable) 
 output?
For me I get comparable output with DMD and LDC
You guys are most likely comparing apples to oranges - H. S. using some distro-LDC preconfigured to link against shared druntime/Phobos, while LDC usually defaults to the static libs.
Druntime and Phobos can be used as shared libraries? What is this magical feature? -Windows User
LOL. Personally, it's a feature that I avoid like the plague, because I hate it when my programs stop working just because I updated dmd (which would be bad enough for a normal user, but it's particularly nasty when you're frequently following master and making local changes, because you're working on Phobos, druntime, and/or dmd). But yeah, when I saw the difference in size that H.S. Teoh was seeing, my first thought was that one was probably using Phobos as a shared library while the other was using it as a static one. - Jonathan M Davis
On Linux the version should be part of the filename, and IIRC it'll only try the shared library without a version if it can't find the one it needs. Not too familiar with how it is done on Windows, as it isn't really standardized. Could just look at how it is done for the C++ runtime, as I usually have 20 different versions of it installed even for the same MSVC year revision. I just stick the DLL in the same folder as the executable.

I'd rather have a single shared library than having 20 instances of it loaded in my 20 DLL files. I can dream, though, that one day this fundamental feature will be added to D.
Jan 01 2019
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, January 1, 2019 1:59:25 PM MST Rubn via Digitalmars-d wrote:
 On Tuesday, 1 January 2019 at 19:28:36 UTC, Jonathan M Davis
 wrote:
 On Tuesday, January 1, 2019 7:04:42 AM MST Rubn via
 Druntime and Phobos can be used as shared libraries? What is
 this magical feature?

 -Windows User
LOL. Personally, it's a feature that I avoid like the plague, because I hate it when my programs stop working just because I updated dmd (which would be bad enough for a normal user, but it's particularly nasty when you're frequently following master and making local changes, because you're working on Phobos, druntime, and/or dmd). But yeah, when I saw the difference in size that H.S. Teoh was seeing, my first thought was that one was probably using Phobos as a shared library while the other was using it as a static one. - Jonathan M Davis
On Linux the version should be part of the filename, and IIRC it'll only try the shared library without a version if it can't find the one it needs. I'm not too familiar with how it is done on Windows, as it isn't really standardized. You could just look at how it is done for the C++ runtime, as I usually have 20 different versions of it installed, even for the same MSVC year revision. I just stick the DLL in the same folder as the executable. I'd rather have a single shared library than have 20 instances of it loaded in my 20 DLL files. I can dream, though, that one day this fundamental feature will be added to D.
The version number is in the file name for Phobos' shared library, but if you're building the development version, that really doesn't help you, because every time you rebuild it, you get a file with the same name but potentially different contents (at least until the next release gets tagged, and the version number gets bumped). If you're only using releases, you don't have that problem. However, unless you keep older versions of the shared library around, as soon as you update, all of your D programs break, because the version they need isn't there anymore. All in all, it's just way simpler to use Phobos as a static library. Sure, when you have a bunch of D programs statically linked, it takes up a few extra megabytes that way, but when systems have terabytes, that doesn't really matter. Anyone who wants to use the shared library version is free to do so, but I really don't think that it makes sense in most cases. I do agree that shared library support in general is something that we should have, but that doesn't mean that it really makes sense as the default for the standard library - especially when so much of it is templated anyway. I really have no clue what the situation is with dll support, because I pretty much only program on Windows when I'm forced to, but given how dlls work on Windows, I've never been a fan of using them except in cases where you have to. The whole nonsense where you have to rebuild your program, because _anything_ changed in the dll is just ridiculous IMHO. At least on *nix systems, shared libraries can be changed so long as the ABIs of the existing symbols aren't changed, making it so that you can actually update shared libraries without rebuilding everything. We just get screwed with D, because the way it's designed makes it very hard to maintain ABI compatibility if templates are involved, and no attempt is made to maintain ABI compatibility across versions of Phobos. So, for Phobos, using a shared library can be very problematic (whereas a shared library with minimal templates that was designed to maintain a fixed API and ABI would work just fine, just like it does with C). - Jonathan M Davis
Jan 01 2019
parent reply Rubn <where is.this> writes:
On Tuesday, 1 January 2019 at 22:34:24 UTC, Jonathan M Davis 
wrote:
 The version number is in the file name for Phobos' shared 
 library, but if you're building the development version, that 
 really doesn't help you, because every time you rebuild it, you 
 get a file with the same name but potentially different 
 contents (at least until the next release gets tagged, and the 
 version number gets bumped). If you're only using releases, you 
 don't have that problem. However, unless you keep older 
 versions of the shared library around, as soon as you update, 
 all of your D programs break, because the version they need 
 isn't there anymore. All in all, it's just way simpler to use 
 Phobos as a static library. Sure, when you have a bunch of D 
 programs statically linked, it takes up a few extra megabytes 
 that way, but when systems have terabytes, that doesn't really 
 matter. Anyone who wants to use the shared library version is 
 free to do so, but I really don't think that it makes sense in 
 most cases. I do agree that shared library support in general 
 is something that we should have, but that doesn't mean that it 
 really makes sense as the default for the standard library - 
 especially when so much of it is templated anyway.
Why wouldn't you keep older versions of Phobos? Why are you deleting the old ones when you install a new version of DMD? The versioned filenames are designed to keep them around for this very reason. I don't really understand your argument here; to me it just seems DMD is doing something backwards in comparison to the rest of the platform (as usual). But we aren't free to do so: there's no shared library of Phobos for Windows and, by the looks of it, macOS. It's not about size, it's about ease of use. The way the garbage collector works, it might not be as bad as with the C runtime, where allocating memory from a shared library and then unloading it would effectively free all the memory that was allocated by that shared library. But at the same time, since allocating memory from the GC
 The whole nonsense where you have to rebuild your program, 
 because _anything_ changed in the dll is just ridiculous IMHO.
What?! Where did you hear this nonsense from? I'm not surprised at the state of shared library support on Windows anymore. Optlink and DMC are the greatest compilers and linkers on Windows; they don't cause ANY problems AT ALL, they are very well supported, and they shouldn't be removed. Sigh.
Jan 01 2019
next sibling parent Neia Neutuladh <neia ikeran.org> writes:
On Wed, 02 Jan 2019 00:40:21 +0000, Rubn wrote:
 Why wouldn't you keep older versions of Phobos? Why are you deleting the
 old ones when you install a new version of DMD?
Distributing binaries requires either static linking, shipping multiple files that must be kept together (which is strictly inferior to static linking), or a compatible library already installed on the client machine. Phobos doesn't have a stable ABI, so that means the client would need your specific version of Phobos. That's not super kind. You can't tell which versions of libphobos.so you need to keep around without running ldd on every executable on your computer. This would be moderately annoying with only official releases; for instance, I've got one tiny project that's old enough that I built it using dsss, and either that would be broken, or I'd have a few dozen versions of phobos running around. More than one per binary that I care about. Specific to Jonathan M Davis, if you're building dmd/druntime/phobos from git head, the previous problems are hugely compounded, plus you need to assign unique numbers to each built version and update the relevant dmd.conf to match.
Jan 01 2019
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, January 1, 2019 5:40:21 PM MST Rubn via Digitalmars-d wrote:
 On Tuesday, 1 January 2019 at 22:34:24 UTC, Jonathan M Davis

 wrote:
 The version number is in the file name for Phobos' shared
 library, but if you're building the development version, that
 really doesn't help you, because every time you rebuild it, you
 get a file with the same name but potentially different
 contents (at least until the next release gets tagged, and the
 version number gets bumped). If you're only using releases, you
 don't have that problem. However, unless you keep older
 versions of the shared library around, as soon as you update,
 all of your D programs break, because the version they need
 isn't there anymore. All in all, it's just way simpler to use
 Phobos as a static library. Sure, when you have a bunch of D
 programs statically linked, it takes up a few extra megabytes
 that way, but when systems have terabytes, that doesn't really
 matter. Anyone who wants to use the shared library version is
 free to do so, but I really don't think that it makes sense in
 most cases. I do agree that shared library support in general
 is something that we should have, but that doesn't mean that it
 really makes sense as the default for the standard library -
 especially when so much of it is templated anyway.
Why wouldn't you keep older versions of Phobos? Why are you deleting the old ones when you install a new version of DMD? The versioned filenames are designed to keep them around for this very reason. I don't really understand your argument here; to me it just seems DMD is doing something backwards in comparison to the rest of the platform (as usual).
Anyone using a package manager to install dmd is only ever going to end up with the version of Phobos that goes with that dmd. It would be highly abnormal for multiple versions of Phobos to be installed on a system. The only folks who would ever end up with multiple versions of Phobos on the same system are folks actively going to the effort of making it happen instead of using any of the normal install mechanisms. And ultimately, it's far simpler to just always use Phobos as a static library rather than trying to do anything with trying to keep older versions of the shared library around - especially if you're actually building development versions of Phobos. In general, the only downsides to statically linking are that the executables are slightly larger, and you don't get the benefit of security updates to libraries without rebuilding the binary. And with how easy it is to break ABI compatibility with D libraries (something as simple as a function attribute changing breaks it, and that's trivial if the code is inferring attributes), updating D shared libraries in a way that doesn't require rebuilding the program is quite problematic in general. In some situations, it's worth it, but in general, it really isn't. The place where shared libraries offer far more benefit is with plugins and the like which are loaded while the program is running rather than in trying to actually share code. The ABI problems still exist there, but plugin APIs are generally far more limited than full-on libraries (which reduces the problem), and you simply can't use static libraries in those situations, whereas if you're just linking your program against a library, static libraries work just fine. I'm by no means against having shared libraries in general work, and I think that full dll support should exist for D on Windows, but aside from plugins, in the vast majority of cases, I think that using shared libraries is far more trouble than it's worth (especially with how difficult it is to maintain ABI compatibility with D libraries).
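To make the attribute-inference point concrete, here is a minimal sketch (a hypothetical function, not Phobos code) of how the inferred attributes end up in the symbol name:

// Attributes are inferred for templated functions, and the inferred
// set is encoded in the mangled name of each instantiation.
int twice()(int x) { return x * 2; } // inferred pure nothrow @nogc @safe

void main()
{
    // Prints a mangle whose type suffix looks something like FNaNbNiNfZi;
    // the Na/Nb/Ni/Nf run encodes pure/nothrow/@nogc/@safe. If a later
    // version of the body allocates or throws, the inferred set shrinks,
    // the mangled name changes, and binaries linked against the old
    // shared library can no longer find the symbol.
    pragma(msg, twice!().mangleof);
}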
 The whole nonsense where you have to rebuild your program,
 because _anything_ changed in the dll is just ridiculous IMHO.
What?! Where did you hear this nonsense from? I'm not surprised at the state of shared library support on Windows anymore.
From working with dlls with C++. With dlls on Windows, your program links
against a static library associated with the dynamic library, and if any of the symbols are changed, the addresses change, and your program will be unable to load the newer version of the library without being rebuilt against the new version of the static library. This is in stark contrast to *nix where the linking works in such a way that as long as the symbols still exist with the same ABI in the newly built library, they're found when the program loads, and it's not a problem. The addresses aren't hard-coded in the way that happens with dlls on Windows. dlls on Windows allow you to share code so long as the programs are all built against exactly the same version of the dll (and if they're not, then you need separate copies of the dll, and you get into dll hell), whereas with *nix, you can keep updating the shared library as much as you like without changing the executable as long as the API and ABI of the existing symbols don't change. - Jonathan M Davis
Jan 01 2019
next sibling parent reply Rubn <where is.this> writes:
On Wednesday, 2 January 2019 at 02:04:24 UTC, Jonathan M Davis 
wrote:
 I'm by no means against having shared libraries in general 
 work, and I think that full dll support should exist for D on 
 Windows, but aside from plugins, in the vast majority of cases, 
 I think that using shared libraries is far more trouble than 
 it's worth (especially with how difficult it is to maintain ABI 
 compatibility with D libraries).
You obviously haven't used shared libraries that much then -- that is, shared libraries that link statically to the runtime. Having multiple instances of Phobos/druntime loaded at the same time in one process has its own can of worms. I'm not surprised at all that in general people don't even use shared libraries.
 The whole nonsense where you have to rebuild your program, 
 because _anything_ changed in the dll is just ridiculous 
 IMHO.
What?! Where did you hear this nonsense from? I'm not surprised at the state of shared library support on Windows anymore.
From working with dlls with C++. With dlls on Windows, your 
program links
against a static library associated with the dynamic library, and if any of the symbols are changed, the addresses change, and your program will be unable to load the newer version of the library without being rebuilt against the new version of the static library. This is in stark contrast to *nix where the linking works in such a way that as long as the symbols still exist with the same ABI in the newly built library, they're found when the program loads, and it's not a problem. The addresses aren't hard-coded in the way that happens with dlls on Windows. dlls on Windows allow you to share code so long as the programs are all built against exactly the same version of the dll (and if they're not, then you need separate copies of the dll, and you get into dll hell), whereas with *nix, you can keep updating the shared library as much as you like without changing the executable as long as the API and ABI of the existing symbols don't change. - Jonathan M Davis
On Wednesday, 2 January 2019 at 04:09:29 UTC, H. S. Teoh wrote:
 On Tue, Jan 01, 2019 at 07:04:24PM -0700, Jonathan M Davis via 
 Digitalmars-d wrote: [...]
 From working with dlls with C++. With dlls on Windows, your 
 program links against a static library associated with the 
 dynamic library, and if any of the symbols are changed, the 
 addresses change, and your program will be unable to load the 
 newer version of the library without being rebuilt against the 
 new version of the static library.
Wow. That makes me glad I'm not programming on Windows...
Mother of god, these two comments. RIP D on Windows; there's clearly only one competent Windows user on the D development team. It's too bad most of his work is focused on maintaining the VS plugin, but from the looks of it, if he didn't, no one would. Everyone else is obviously incompetent. If it worked the way you think it does (it doesn't), then every Windows update would literally break EVERY SINGLE executable file on the face of the earth. They would all need to be recompiled as the system DLL libraries are updated. You can only link to the system libraries dynamically, and virtually every executable uses them. Yet somehow Windows is able to maintain backwards compatibility with old executable files better than Linux does. The more worrying thing is that even though you said it yourself that you barely use Windows, rather than spending the 5 minutes it would take to search this yourself, you continue to go on with misinformation that might have been true back in the DOS days 40 years ago. God damn.
Jan 02 2019
parent 12345swordy <alexanderheistermann gmail.com> writes:
On Wednesday, 2 January 2019 at 21:04:25 UTC, Rubn wrote:
 Mother of god, these two comments. RIP D on Windows; there's 
 clearly only one competent Windows user on the D development 
 team. It's too bad most of his work is focused on maintaining 
 the VS plugin, but from the looks of it, if he didn't, no one 
 would.
Some of us would (Manu most notably). VS is simply that good. Alex
Jan 02 2019
prev sibling parent reply rjframe <dlang ryanjframe.com> writes:
On Tue, 01 Jan 2019 19:04:24 -0700, Jonathan M Davis wrote:

 From working with dlls with C++. With dlls on Windows, your program
 links
 against a static library associated with the dynamic library, and if any
 of the symbols are changed, the addresses change, and your program will
 be unable to load the newer version of the library without being rebuilt
 against the new version of the static library.
That's not necessarily true; Windows supports "implicit linking" and "explicit linking"; for implicit linking you do need to statically link against an import library, but for explicit linking you don't even need to know the DLL's name until runtime. With explicit linking you load the library by calling LoadLibrary/LoadLibraryEx, then call GetProcAddress with the name of your desired function to get the function pointer. If you watch the filesystem for the DLL to change, you could live-update by reloading the DLL (which you typically wouldn't do outside debugging or maybe if offering plugin support). Most people just do implicit linking because it's less work. Any DLL can be loaded in both ways, though if there's a DllMain there may be problems if the library author doesn't support both methods; for implicit linking, DllMain is run before the program entry point, but for explicit linking it's called by LoadLibrary in the context of the thread that calls it.
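For instance, a minimal sketch of explicit linking from D, where example.dll and its add function are hypothetical names:

version (Windows)
{
    import core.sys.windows.windows;
    import std.stdio : writeln;

    alias AddFn = extern (C) int function(int, int);

    void main()
    {
        // Resolve the DLL by name at runtime; no import library involved.
        HMODULE dll = LoadLibraryA("example.dll");
        if (dll is null) { writeln("could not load example.dll"); return; }
        scope (exit) FreeLibrary(dll);

        // Look up the exported function and call through the pointer.
        auto add = cast(AddFn) GetProcAddress(dll, "add");
        if (add is null) { writeln("symbol 'add' not found"); return; }
        writeln(add(1, 2)); // prints 3
    }
}

--Ryan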
Jan 03 2019
next sibling parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, January 3, 2019 4:52:56 AM MST rjframe via Digitalmars-d wrote:
 On Tue, 01 Jan 2019 19:04:24 -0700, Jonathan M Davis wrote:
 From working with dlls with C++. With dlls on Windows, your program
 links
 against a static library associated with the dynamic library, and if any
 of the symbols are changed, the addresses change, and your program will
 be unable to load the newer version of the library without being rebuilt
 against the new version of the static library.
That's not necessarily true; Windows supports "implicit linking" and "explicit linking"; for implicit linking you do need to statically link against an import library, but for explicit linking you don't even need to know the DLL's name until runtime. With explicit linking you load the library by calling LoadLibrary/LoadLibraryEx, then call GetProcAddress with the name of your desired function to get the function pointer. If you watch the filesystem for the DLL to change, you could live-update by reloading the DLL (which you typically wouldn't do outside debugging or maybe if offering plugin support). Most people just do implicit linking because it's less work. Any DLL can be loaded in both ways, though if there's a DllMain there may be problems if the library author doesn't support both methods; for implicit linking, DllMain is run before the program entry point, but for explicit linking it's called by LoadLibrary in the context of the thread that calls it.
*nix has the same distinction. It's a fundamentally different situation from linking your executable against the library. You're really dynamically loading rather than dynamically linking (though unfortunately, the terminology for the two is not particularly distinct, and they're often referred to the same way even though they're completely different). Loading libraries that way is what you do when you do stuff like plugins, because those aren't known when you build your program. But it makes a lot less sense as an alternative to linking your program against the library if you don't actually need to load the library like that. The COFF vs OMF mess on Windows makes it make slightly more sense on Windows (at least with D, where dmd uses OMF by default, unlike most of the C/C++ world at this point), because then it doesn't matter whether COFF or OMF was used (e.g. IIRC, Derelict is designed to be loaded that way for that reason), but in general, it's an unnecessarily complicated way to use a library. And if Windows' eccentricities make it more desirable than it is on *nix systems, then that's just yet another black mark against how Windows does dynamic linking IMHO. - Jonathan M Davis
Jan 03 2019
prev sibling parent Manu <turkeyman gmail.com> writes:
On Thu, Jan 3, 2019 at 5:50 AM Jonathan M Davis via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Thursday, January 3, 2019 4:52:56 AM MST rjframe via Digitalmars-d wrote:
 On Tue, 01 Jan 2019 19:04:24 -0700, Jonathan M Davis wrote:
 From working with dlls with C++. With dlls on Windows, your program
 links
 against a static library associated with the dynamic library, and if any
 of the symbols are changed, the addresses change, and your program will
 be unable to load the newer version of the library without being rebuilt
 against the new version of the static library.
That's not necessarily true; Windows supports "implicit linking" and "explicit linking"; for implicit linking you do need to statically link against an import library, but for explicit linking you don't even need to know the DLL's name until runtime. With explicit linking you load the library by calling LoadLibrary/LoadLibraryEx, then call GetProcAddress with the name of your desired function to get the function pointer. If you watch the filesystem for the DLL to change, you could live-update by reloading the DLL (which you typically wouldn't do outside debugging or maybe if offering plugin support). Most people just do implicit linking because it's less work. Any DLL can be loaded in both ways, though if there's a DllMain there may be problems if the library author doesn't support both methods; for implicit linking, DllMain is run before the program entry point, but for explicit linking it's called by LoadLibrary in the context of the thread that calls it.
*nix has the same distinction. It's a fundamentally different situation from linking your executable against the library. You're really dynamically loading rather than dynamically linking (though unfortunately, the terminology for the two is not particularly distinct, and they're often referred to the same way even though they're completely different). Loading libraries that way is what you do when you do stuff like plugins, because those aren't known when you build your program. But it makes a lot less sense as an alternative to linking your program against the library if you don't actually need to load the library like that. The COFF vs OMF mess on Windows makes it make slightly more sense on Windows (at least with D, where dmd uses OMF by default, unlike most of the C/C++ world at this point), because then it doesn't matter whether COFF or OMF was used (e.g. IIRC, Derelict is designed to be loaded that way for that reason), but in general, it's an unnecessarily complicated way to use a library. And if Windows' eccentricities make it more desirable than it is on *nix systems, then that's just yet another black mark against how Windows does dynamic linking IMHO.
Sorry, I don't think you know what you're talking about WRT Windows DLLs and import libs. Linking a Windows import lib is the same as `-lSharedLib.so`; it links (or generates) a small stub at entry that loads the DLL and resolves the symbols in the import table to local function pointers. You certainly do NOT need to rebuild your exe if the DLL is updated, assuming no breaking changes to the ABI. The import lib includes little stubs for the import functions that call through the resolved pointer into the DLL. It's nothing more than a convenience, and it's also possible to *generate* an import lib from a .dll, which is effectively identical to linking against a .so.
Jan 03 2019
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Jan 01, 2019 at 07:04:24PM -0700, Jonathan M Davis via Digitalmars-d
wrote:
[...]
 Anyone using a package manager to install dmd is only ever going to
 end up with the version of Phobos that goes with that dmd. It would be
 highly abnormal for multiple versions of Phobos to be installed on a
 system.
Wait, what? Where did you get that idea from? Since about a half decade ago, Linux distros like Debian have had the possibility of multiple versions of the same shared library installed at the same time. It's pretty much impossible to manage a distro otherwise, since being unable to do so would mean that you cannot upgrade a shared library until ALL upstream code has been updated to use the new version. Now granted, most of the time new library versions are ABI compatible with older versions, so you don't actually need to keep *every* version of the library around just because some executable somewhere needs them. And granted, splitting a library package into multiple simultaneous versions is only done when necessary. But the mechanisms for doing so have been in place since a long time ago, and any decent distro management would include setting up the proper mechanisms for multiple versions of Phobos to be installable simultaneously. Otherwise you have the untenable situation that Phobos cannot be upgraded without breaking every D program currently installed on the system. Of course, for this to work, the soname needs to be set properly and a sane versioning system (encoded in the soname) needs to be in place. Basically, *every* ABI incompatibility (and I do mean *every*, even those with no equivalent change in the source code) needs to be reflected by a soname change. Which is likely not being done with the current makefiles in git. Which would explain your observations. But it is certainly *possible*, and often *necessary*, to install multiple versions of the same shared library simultaneously.
 The only folks who would ever end up with multiple versions of Phobos
 on the same system are folks actively going to the effort of making it
 happen instead of using any of the normal install mechanisms.
I have no idea what you mean by "normal install mechanisms", because it makes no sense to me to use any system-wide installation mechanism that *doesn't* support multiple versions per shared library. I mean, how do you even get a sane operating environment at all? It would be DLL hell compounded with .so hell, episode II. OTOH, if you're installing stuff by hand or by 3rd party installers (which is generally a bad idea on Linux distros with a distro-specific packaging system -- since colliding assumptions made by either side means endless headaches on the user end), then you're on your own. Maybe that's where you're coming from.
 And ultimately, it's far simpler to just always use Phobos as a static
 library rather than trying to do anything with trying to keep older
 versions of the shared library around - especially if you're actually
 building development versions of Phobos.
Yes, for development builds, I'd tend to agree. [...]
 From working with dlls with C++. With dlls on Windows, your program
 links against a static library associated with the dynamic library,
 and if any of the symbols are changed, the addresses change, and your
 program will be unable to load the newer version of the library
 without being rebuilt against the new version of the static library.
Wow. That makes me glad I'm not programming on Windows...
 This is in stark contrast to *nix where the linking works in such a
 way that as long as the symbols still exist with the same ABI in the
 newly built library, they're found when the program loads, and it's
 not a problem. The addresses aren't hard-coded in the way that happens
 with dlls on Windows. dlls on Windows allow you to share code so long
 as the programs are all built against exactly the same version of the
 dll (and if they're not, then you need separate copies of the dll, and
 you get into dll hell), whereas with *nix, you can keep updating the
 shared library as much as you like without changing the executable as
 long as the API and ABI of the existing symbols don't change.
[...] Yes, though the library authors / distributors will have to be extremely careful to bump the soname for every ABI incompatibility. All too often people forget to do so, esp. when the ABI incompatibility is not something directly caused by a source code change (the fallacious assumption being "I didn't touch the source code, the ABI can't have changed, right?"). And more often than people would like to admit, sonames fail to get bumped even in the face of source code changes, and that's where things start getting ugly. T -- The government pretends to pay us a salary, and we pretend to work.
Jan 01 2019
prev sibling parent Rubn <where is.this> writes:
Wouldn't be the least bit surprised if DMD didn't follow Linux 
conventions though.
Jan 01 2019
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Jan 01, 2019 at 11:57:26AM +0000, kinke via Digitalmars-d wrote:
 On Tuesday, 1 January 2019 at 09:44:40 UTC, Jacob Carlborg wrote:
 On 2018-12-31 16:51, H. S. Teoh wrote:
 Note the order of magnitude difference in size, and that ldc2
 achieves this by default, with no additional options needed.
 
 How do you make dmd produce the same (or comparable) output?
For me I get comparable output with DMD and LDC
You guys are most likely comparing apples to oranges - H. S. using some distro-LDC preconfigured to link against shared druntime/Phobos, while LDC usually defaults to the static libs.
Aha! I think you hit the nail on the head. `ldd` revealed that the LDC binary has dynamic dependencies on libphobos2-ldc-shared.so and libdruntime-ldc-shared.so. So that explains that. So that's one less reason to ditch dmd... but still, I'm tempted, because of overall better codegen with ldc, esp. with compute-intensive code. T -- What are you when you run out of Monet? Baroque.
Jan 01 2019
prev sibling parent Jon Degenhardt <jond noreply.com> writes:
On Sunday, 30 December 2018 at 14:27:56 UTC, H. S. Teoh wrote:
 Recently I noticed that LDC now compiles every function into 
 their own section and runs LTO, including GC of unreferenced 
 sections, by default. As a result, executable sizes are back 
 down to where equivalent C/C++ code would be, as opposed to 
 being a MB or so larger when compiled with DMD. It more-or-less 
 nullifies most of the ill-effects of template bloat.
At DConf I showed executable size reductions on my apps resulting from applying LDC's LTO. They are on slide 14 here: https://github.com/eBay/tsv-utils/blob/master/docs/dconf2018.pdf. I didn't have a basis for comparison to equivalent C/C++ though. --Jon
Dec 30 2018
prev sibling next sibling parent reply Petar Kirov [ZombineDev] <petar.p.kirov gmail.com> writes:
On Sunday, 30 December 2018 at 13:46:46 UTC, H. S. Teoh wrote:
 D theory sounds all good and all, but in practice you have 
 warts like dub (one big reason I stay away from it -- though 
 based on what Sonke said recently, performance may have 
 improved since I last checked), std.regex (after the last big 
 refactor, something Really Bad happened to its compile times -- 
 it didn't used to be this bad!), std.format (a big hairball I 
 haven't dared to look too deeply into), and a couple of others, 
 like various recursive templates elsewhere in Phobos. And also 
 std.uni's large templated internal tables, which may be (part 
 of?) the common cause of compile-time slowdowns in std.format 
 and std.regex.

 There's also dmd's ridiculous memory usage policy, which is 
 supposed to help compile times when you have ridiculous amounts 
 of free RAM, but which causes anything from swap thrashing 
 slowdowns to outright unusability on medium- to low-memory 
 systems.


 T
Phobos has a few modules like this, but I believe that all of them should be fixable without help from the compiler, given enough effort. On the other hand, hopefully soon we'll have the option to turn on the GC for the frontend. See: https://github.com/ldc-developers/ldc/pull/2916 As for Dub, we really ought to add a lower-level dependency graph interface for describing non-trivial builds. There are already a couple of (meta-)build systems written in D, so we have to come up with a good design and integrate one of them.
Dec 30 2018
parent welkam <wwwelkam gmail.com> writes:
On Sunday, 30 December 2018 at 13:59:57 UTC, Petar Kirov 
[ZombineDev] wrote:
 On the other hand, hopefully soon we'll have the option to turn 
 on the GC for the frontend. See: 
 https://github.com/ldc-developers/ldc/pull/2916
GC might not be needed here. The compiler compiles your program in steps, and memory that was used in a previous step could be deallocated later in one place. We could just use a few different allocators for different parts of the compiler and free them when we are sure that nothing will need that data anymore. For example, we can deallocate file buffers after we've parsed those files. A person more familiar with the compiler could find more cases where this kind of strategy is applicable.
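A sketch of that strategy using Phobos's allocator building blocks (illustrative only; the compiler doesn't actually work this way today):

import std.experimental.allocator.building_blocks.region : Region;
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    // One arena per compiler phase: everything the phase allocates
    // is released together once no later phase needs the data.
    auto parseArena = Region!Mallocator(1024 * 1024); // 1 MiB backing block

    auto fileBuffer = cast(ubyte[]) parseArena.allocate(64 * 1024);
    assert(fileBuffer.length == 64 * 1024);
    // ... lex and parse out of fileBuffer ...

    parseArena.deallocateAll(); // frees the whole phase at once, no GC involved
}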
Dec 30 2018
prev sibling next sibling parent Laeeth Isharc <laeeth laeeth.com> writes:
On Sunday, 30 December 2018 at 13:46:46 UTC, H. S. Teoh wrote:
 On Sun, Dec 30, 2018 at 01:25:33PM +0000, Guillaume Piolat via 
 Digitalmars-d wrote:
 On Saturday, 29 December 2018 at 09:29:30 UTC, Walter Bright 
 wrote:
 http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
It says:

 C++ compilation times have been a source of pain in every non-trivial-size codebase I've worked on. Don't believe me? Try building one of the widely available big codebases (any of: Chromium, Clang/LLVM, UE4 etc will do). Among the things I really [...] list, and has been since forever.

[...] getting worse. There is the theory (D builds fast) and the practice (DUB often negates the advantage; you need to avoid templatitis).
D theory sounds all good and all, but in practice you have warts like dub (one big reason I stay away from it -- though based on what Sonke said recently, performance may have improved since I last checked), std.regex (after the last big refactor, something Really Bad happened to its compile times -- it didn't used to be this bad!), std.format (a big hairball I haven't dared to look too deeply into), and a couple of others, like various recursive templates elsewhere in Phobos. And also std.uni's large templated internal tables, which may be (part of?) the common cause of compile-time slowdowns in std.format and std.regex. T
Perhaps we should build CyberShadow's idea into the build infrastructure. It works quite nicely with Etsy's statsd - see the library from Burner. https://blog.thecybershadow.net/2015/05/05/is-d-slim-yet/ Laeeth
Dec 30 2018
prev sibling next sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Sunday, 30 December 2018 at 13:46:46 UTC, H. S. Teoh wrote:
 There's also dmd's ridiculous memory usage policy, which is 
 supposed to help compile times when you have ridiculous amounts 
 of free RAM, but which causes anything from swap thrashing 
 slowdowns to outright unusability on medium- to low-memory 
 systems.
Don't get me started :) My VPS host has 512 MB RAM, and it's stressful when a D compiler can't build a few files. Upping this memory limit costs significantly more money.
Dec 30 2018
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Dec 30, 2018 at 03:10:17PM +0000, Guillaume Piolat via Digitalmars-d
wrote:
 On Sunday, 30 December 2018 at 13:46:46 UTC, H. S. Teoh wrote:
 There's also dmd's ridiculous memory usage policy, which is supposed
 to help compile times when you have ridiculous amounts of free RAM,
 but which causes anything from swap thrashing slowdowns to outright
 unusability on medium- to low-memory systems.
 
Don't get me started :) My VPS host has 512 MB RAM, and it's stressful when a D compiler can't build a few files. Upping this memory limit costs significantly more money.
Exactly the same problem I've been facing. Not to mention some workstations at my day job that don't have the luxury of having gobs of free RAM available. Which is why I haven't dared to mention D at work yet... it would become the laughing stock of my coworkers if dmd were to die with an OOM error on the simplest of programs. T -- What's an anagram of "BANACH-TARSKI"? BANACH-TARSKI BANACH-TARSKI.
Dec 30 2018
prev sibling parent Mengu <mengukagan gmail.com> writes:
On Sunday, 30 December 2018 at 15:10:17 UTC, Guillaume Piolat 
wrote:
 On Sunday, 30 December 2018 at 13:46:46 UTC, H. S. Teoh wrote:
 There's also dmd's ridiculous memory usage policy, which is 
 supposed to help compile times when you have ridiculous 
 amounts of free RAM, but which causes anything from swap 
 thrashing slowdowns to outright unusability on medium- to 
 low-memory systems.
Don't get me started :) My VPS host has 512 MB RAM, and it's stressful when a D compiler can't build a few files. Upping this memory limit costs significantly more money.
I was shocked when I encountered the exact same thing on an EC2 micro instance. It was a bit upsetting and disappointing.
Dec 30 2018
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2018 5:46 AM, H. S. Teoh wrote:
 There's also dmd's ridiculous memory usage policy,
Now that D is written in D, one can compile it with -profile=gc and see where the memory usage is coming from.
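For instance, building a toy allocation-heavy program with dmd -profile=gc and running it makes the runtime write a profilegc.log that ranks GC allocations by call site:

void main()
{
    int[] a;
    foreach (i; 0 .. 1_000)
        a ~= i; // each reallocation of the array is recorded in profilegc.log
}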
Dec 30 2018
prev sibling next sibling parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, January 1, 2019 9:09:29 PM MST H. S. Teoh via Digitalmars-d 
wrote:
 On Tue, Jan 01, 2019 at 07:04:24PM -0700, Jonathan M Davis via
 Digitalmars-d wrote: [...]

 Anyone using a package manager to install dmd is only ever going to
 end up with the version of Phobos that goes with that dmd. It would be
 highly abnormal for multiple versions of Phobos to be installed on a
 system.
Wait, what? Where did you get that idea from? Since about a half decade ago, Linux distros like Debian have had the possibility of multiple versions of the same shared library installed at the same time. It's pretty much impossible to manage a distro otherwise, since being unable to do so would mean that you cannot upgrade a shared library until ALL upstream code has been updated to use the new version. Now granted, most of the time new library versions are ABI compatible with older versions, so you don't actually need to keep *every* version of the library around just because some executable somewhere needs them. And granted, splitting a library package into multiple simultaneous versions is only done when necessary. But the mechanisms for doing so have been in place since a long time ago, and any decent distro management would include setting up the proper mechanisms for multiple versions of Phobos to be installable simultaneously. Otherwise you have the untenable situation that Phobos cannot be upgraded without breaking every D program currently installed on the system. Of course, for this to work, the soname needs to be set properly and a sane versioning system (encoded in the soname) needs to be in place. Basically, *every* ABI incompatibility (and I do mean *every*, even those with no equivalent change in the source code) needs to be reflected by a soname change. Which is likely not being done with the current makefiles in git. Which would explain your observations. But it is certainly *possible*, and often *necessary*, to install multiple versions of the same shared library simultaneously.
Sure, you could have separate packages for separate versions of Phobos, but from what I've seen, there's always one version of dmd with one version of Phobos, and distros don't provide multiple versions. Now, I haven't studied every distro there is, so maybe there is a distro out there that provides separate packages for old versions of Phobos, but in my experience, you're lucky if the distro has any package for dmd, let alone for it to be trying to provide a way to have multiple versions of Phobos installed. And while yes, distros are set up in a way that you can have multiple versions of a library if multiple packages exist for them, and sometimes they do that, in the vast majority of cases, the solution is that all packages for the distro are built for a specific version of a library, and when that library is upgraded, all the packages that depend on it get rebuilt and need to be reinstalled. That's part of why it usually works so poorly to distribute closed source programs for Linux. Distros tend to be put together with the idea that all of the packages on the system are built for that system with whatever version of the libraries they're currently using, and having multiple versions of a library is the exception rather than the rule. But regardless of what is typically done with libraries for stuff like C++ or python, unless distros are specifically providing packages for older versions of Phobos, then no one installing dmd and Phobos via a package manager is going to end up with multiple versions of Phobos installed. So, while it may be theoretically possible to have multiple versions of Phobos installed via a package manager, from what I've seen, that simply doesn't happen in practice, because the packages for dmd and Phobos aren't set up that way. - Jonathan M Davis
Jan 01 2019
prev sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, January 3, 2019 12:19:59 PM MST Manu via Digitalmars-d wrote:
 On Thu, Jan 3, 2019 at 5:50 AM Jonathan M Davis via Digitalmars-d

 <digitalmars-d puremagic.com> wrote:
 On Thursday, January 3, 2019 4:52:56 AM MST rjframe via Digitalmars-d 
wrote:
 On Tue, 01 Jan 2019 19:04:24 -0700, Jonathan M Davis wrote:
 From working with dlls with C++. With dlls on Windows, your program
 links
 against a static library associated with the dynamic library, and if
 any
 of the symbols are changed, the addresses change, and your program
 will
 be unable to load the newer version of the library without being
 rebuilt
 against the new version of the static library.
That's not necessarily true; Windows supports "implicit linking" and "explicit linking"; for implicit linking you do need to statically link against an import library, but for explicit linking you don't even need to know the DLL's name until runtime. With explicit linking you load the library by calling LoadLibrary/LoadLibraryEx, then call GetProcAddress with the name of your desired function to get the function pointer. If you watch the filesystem for the DLL to change, you could live-update by reloading the DLL (which you typically wouldn't do outside debugging or maybe if offering plugin support). Most people just do implicit linking because it's less work. Any DLL can be loaded in both ways, though if there's a DllMain there may be problems if the library author doesn't support both methods; for implicit linking, DllMain is run before the program entry point, but for explicit linking it's called by LoadLibrary in the context of the thread that calls it.
*nix has the same distinction. It's a fundamentally different situation from linking your executable against the library. You're really dynamically loading rather than dynamically linking (though unfortunately, the terminology for the two is not particularly distinct, and they're often referred to the same way even though they're completely different). Loading libraries that way is what you do when you do stuff like plugins, because those aren't known when you build your program. But it makes a lot less sense as an alternative to linking your program against the library if you don't actually need to load the library like that. The COFF vs OMF mess on Windows makes it make slightly more sense on Windows (at least with D, where dmd uses OMF by default, unlike most of the C/C++ world at this point), because then it doesn't matter whether COFF or OMF was used (e.g. IIRC, Derelict is designed to be loaded that way for that reason), but in general, it's an unnecessarily complicated way to use a library. And if Windows' eccentricities make it more desirable than it is on *nix systems, then that's just yet another black mark against how Windows does dynamic linking IMHO.
Sorry, I don't think you know what you're talking about WRT Windows DLLs and import libs. Linking a Windows import lib is the same as `-lSharedLib.so`; it links (or generates) a small stub at entry that loads the DLL and resolves the symbols in the import table to local function pointers. You certainly do NOT need to rebuild your exe if the DLL is updated, assuming no breaking changes to the ABI. The import lib includes little stubs for the import functions that call through the resolved pointer into the DLL. It's nothing more than a convenience, and it's also possible to *generate* an import lib from a .dll, which is effectively identical to linking against a .so.
From the last time I worked with Windows dlls, I remember quite distinctly
that doing anything like adding a symbol to the library meant that it was incompatible with executables previously built with it (which is not true for shared libraries on *nix - they only break if the ABI for the existing symbols changes). So, if that's not the case, I don't know what we were doing wrong, but I have an extremely sour taste in my mouth from dealing with Windows dlls. It was my experience that Linux stuff generally just worked, whereas we kept having to deal with junk on Windows to make them work (e.g. adding special attributes to functions just so that they would be exported), and I absolutely hated it. I have nothing good to say about dlls on Windows. It's quite possible that some of it isn't as bad if you know more about it than the team I was working on did, but it was one part of why making our libraries cross-platform was not at all fun. - Jonathan M Davis
Jan 03 2019
next sibling parent Rubn <where is.this> writes:
On Thursday, 3 January 2019 at 19:36:52 UTC, Jonathan M Davis 
wrote:
 *nix has the same distinction. It's a fundamentally different 
 situation from linking your executable against the library. 
 You're really dynamically loading rather than dynamically 
 linking (though unfortunately, the terminology for the two is 
 not particularly distinct, and they're often referred to the 
 same way even though they're completely different). Loading 
 libraries that way is what you do when you do stuff like 
 plugins, because those aren't known when you build your 
 program. But it makes a lot less sense as an alternative to 
 linking your program against the library if you don't actually 
 need to load the library like that. The COFF vs OMF mess on 
 Windows makes it make slightly more sense on Windows (at least 
 with D, where dmd uses OMF by default, unlike most of the C/C++ 
 world at this point), because then it doesn't matter whether 
 COFF or OMF was used (e.g. IIRC, Derelict is designed to be 
 loaded that way for that reason), but in general, it's an 
 unnecessarily complicated way to use a library. And if Windows' 
 eccentricities make it more desirable than it is on *nix 
 systems, then that's just yet another black mark against how 
 Windows does dynamic linking IMHO.

 - Jonathan M Davis

 From the last time I worked with Windows dlls, I remember quite 
 distinctly
 that doing anything like adding a symbol to the library meant 
 that it was incompatible with executables previously built with 
 it (which is not true for shared libraries on *nix - they only 
 break if the ABI for the existing symbols changes). So, if 
 that's not the case, I don't know what we were doing wrong, but 
 I have an extremely sour taste in my mouth from dealing with 
 Windows dlls. It was my experience that Linux stuff generally 
 just worked, whereas we kept having to deal with junk on 
 Windows to make them work (e.g. adding special attributes to 
 functions just so that they would be exported), and I 
 absolutely hated it. I have nothing good to say about dlls on 
 Windows. It's quite possible that some of it isn't as bad if 
 you know more about it than the team I was working on did, but 
 it was one part of why making our libraries cross-platform was 
 not at all fun.

 - Jonathan M Davis
Yeah, you should just stop and never talk about how Windows DLLs work again until you've actually learned how they actually work. "I used some piece of software that I have no idea how it works; I condemn it because it works this way, even though I'm not sure if it even works that way." I already know the D team has no idea what they are doing on Windows, but just stop.
Jan 03 2019
prev sibling parent reply Ethan <gooberman gmail.com> writes:
On Thursday, 3 January 2019 at 19:36:52 UTC, Jonathan M Davis 
wrote:
From the last time I worked with Windows dlls, I remember quite 
distinctly
 that doing anything like adding a symbol to the library meant 
 that it was incompatible with executables previously built with 
 it (which is not true for shared libraries on *nix - they only 
 break if the ABI for the existing symbols changes).
That would surely break if you were looking up symbols by ordinal instead of by name. I didn't even know you could do that until I read the documentation for GetProcAddress closely - turns out the name parameter can be used as an ordinal. All exportable symbols are exported by their mangled name into the DLL's export table. Resolving by mangle will give you a similar experience to *nix there.
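A sketch of that by-mangle lookup from D, where lib.dll and answer are hypothetical names:

version (Windows)
{
    import core.sys.windows.windows;
    import std.stdio : writeln;
    import std.string : toStringz;

    extern (D) int answer(); // assume lib.dll exports this D function

    void main()
    {
        HMODULE dll = LoadLibraryA("lib.dll");
        if (dll is null) return;
        scope (exit) FreeLibrary(dll);

        // Resolve by the D mangle rather than by ordinal, *nix-style.
        auto fn = cast(int function()) GetProcAddress(dll, toStringz(answer.mangleof));
        if (fn !is null)
            writeln(fn());
    }
}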
Jan 03 2019
next sibling parent Rubn <where is.this> writes:
On Thursday, 3 January 2019 at 21:22:03 UTC, Ethan wrote:
 On Thursday, 3 January 2019 at 19:36:52 UTC, Jonathan M Davis 
 wrote:
From the last time I worked with Windows dlls, I remember quite 
distinctly
 that doing anything like adding a symbol to the library meant 
 that it was incompatible with executables previously built 
 with it (which is not true for shared libraries on *nix - they 
 only break if the ABI for the existing symbols changes).
That would surely break if you were looking up symbols by ordinal instead of by name. I didn't even know you could do that until I read the documentation for GetProcAddress closely - turns out the name parameter can be used as an ordinal. All exportable symbols are exported by their mangled name into the DLL's export table. Resolving by mangle will give you a similar experience to *nix there.
Even if you used ordinal values instead, you need to assign the symbol a specific ordinal value. The only reason you would have to relink the executable is if you changed the ordinal values to be incompatible with the previous layout, e.g. assigning a new function an ordinal value that was previously assigned to a different function. That's the equivalent of removing a symbol that's resolved by name on *nix/Windows.
Jan 03 2019
prev sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, January 3, 2019 2:22:03 PM MST Ethan via Digitalmars-d wrote:
 On Thursday, 3 January 2019 at 19:36:52 UTC, Jonathan M Davis

 wrote:
From the last time I worked with Windows dlls, I remember quite
distinctly

 that doing anything like adding a symbol to the library meant
 that it was incompatible with executables previously built with
 it (which is not true for shared libraries on *nix - they only
 break if the ABI for the existing symbols changes).
That would surely break if you were looking up symbols by ordinal instead of by name. I didn't even know you could do that until I read the documentation for GetProcAddress closely - turns out the name parameter can be used as an ordinal. All exportable symbols are exported by their mangled name into the DLL's export table. Resolving by mangle will give you a similar experience to *nix there.
That would be a significant improvement. Thanks for pointing that out. If/when I have to deal with Windows dlls again, I'll have to look into that - though honestly, in general, I don't see any reason to use shared libraries for anything other than plugins (whether you're talking about *nix or Windows). If you're dealing with system libraries that are shared by a ton of programs on the system (e.g. openssl on *nix systems), then using shared libraries can definitely be worth it (especially when the library is likely to need security updates), but if you're distributing a program, static libraries are far easier, and if a library isn't used by many programs, then I think that the simplicity of static libraries is worth far more than the small space gain of using a shared library. It's just with plugins that you really don't have any choice. Regardless, D should have full shared library support on all of its supported platforms, and for those who care particularly about Windows, hopefully, Benjamin Thaut's work can be finished in a timely manner. IIRC, he did make a post about it a few months back looking for folks to try out his branch of dmd so that any issues could be worked out. I have no clue how long it will be before that work gets merged, but I expect that it will happen at some point. - Jonathan M Davis
Jan 03 2019
parent reply Rubn <where is.this> writes:
On Friday, 4 January 2019 at 03:48:49 UTC, Jonathan M Davis wrote:
 though honestly, in general, I don't see any reason to use 
 shared libraries for anything other than plugins (whether 
 you're talking about *nix or Windows). If you're dealing with 
 system libraries that are shared by a ton of programs on the 
 system (e.g. openssl on *nix systems), then using shared 
 libraries can definitely be worth it (especially when the 
 library is likely to need security updates), but if you're 
 distributing a program, static libraries are far easier, and if 
 a library isn't used by many programs, then I think that the 
 simplicity of static libraries is worth far more than the small 
 space gain of using a shared library. It's just with plugins 
 that you really don't have any choice.
Here we go again. Really? You don't think there is any reason to use shared libraries for anything other than plugins? You then go on to list two other valid reasons in your next run-on sentence. I'll add some more to your list: treating code as data is another reason. Hot-loading code so you don't have to restart your program, which can be time-consuming, would not work without a shared library. Modifying applications, though I guess from your perspective this might be niche, would not be as elegant without shared libraries. You keep bringing up this size-saving rationale; no one has brought it up other than you.
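To flesh out the hot-loading case, a minimal *nix sketch (libhot.so and its tick function are hypothetical names; the same shape works on Windows with LoadLibrary/FreeLibrary):

version (Posix)
{
    import core.sys.posix.dlfcn;
    import std.stdio : writeln;

    alias TickFn = extern (C) int function();

    void runOnce()
    {
        // Load, use, and fully unload the library each cycle, so a
        // rebuilt libhot.so is picked up on the next call.
        void* lib = dlopen("./libhot.so", RTLD_NOW);
        if (lib is null) { writeln("dlopen failed"); return; }
        scope (exit) dlclose(lib);

        auto tick = cast(TickFn) dlsym(lib, "tick");
        if (tick !is null)
            writeln(tick());
    }

    void main()
    {
        runOnce();
        // rebuild libhot.so here and the next call runs the new code,
        // without restarting this process
        runOnce();
    }
}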
Jan 04 2019
parent reply Neia Neutuladh <neia ikeran.org> writes:
On Fri, 04 Jan 2019 12:54:12 +0000, Rubn wrote:
 Here we go again. Really? You don't think there is any reason to use
 shared libraries for anything other than plugins? You then go on to list
 two other valid reasons in your next run-on sentence. I'll add some more
 to your list
I'll give another: linking times. With GTKD, static linking costs something like 6-10 seconds more than dynamic linking, which is huge.
Jan 04 2019
parent reply Ethan <gooberman gmail.com> writes:
On Friday, 4 January 2019 at 16:28:32 UTC, Neia Neutuladh wrote:
 On Fri, 04 Jan 2019 12:54:12 +0000, Rubn wrote:
 Here we go again. Really? You don't think there is any reason 
 to use shared libraries for anything other than plugins? You 
 then go on to list two other valid reasons in your next run-on 
 sentence. I'll add some more to your list
I'll give another: linking times. With GTKD, static linking costs something like 6-10 seconds more than dynamic linking, which is huge.
And to add to this. *ANY* large project benefits from reducing to smaller DLLs and a corresponding static .lib to load them. The MSVC team has come around in the last five years to love game developers specifically because we create massive codebases that tax the compiler. Remember how I said our binary sizes were something up near 100MB on Quantum Break during one of my DConf talks? Good luck linking that in one hit.
Jan 04 2019
parent Ethan <gooberman gmail.com> writes:
On Friday, 4 January 2019 at 22:33:57 UTC, Ethan wrote:
 And to add to this. *ANY* large project benefits from reducing 
 to smaller DLLs and a corresponding static .lib to load them.

 The MSVC team has come around in the last five years to love 
 game developers specifically because we create massive 
 codebases that tax the compiler. Remember how I said our binary 
 sizes were something up near 100MB on Quantum Break during one 
 of my DConf talks? Good luck linking that in one hit.
Adding to myself. Pretty sure the default for the Windows UCRT (Universal C Runtime) is to be dynamically linked these days.
Jan 04 2019