
digitalmars.D - Official compiler

reply Márcio Martins <marcioapm gmail.com> writes:
I was reading the other thread "Speed kills" and was wondering if 
there is any practical reason why DMD is the official compiler?

Currently, newcomers come expecting their algorithm from Rosetta 
Code to run faster in D than in their current language, but then it 
seems like it's actually slower. What gives?

The typical answer from this community is generally 
"did you use LDC/GDC?".

Wouldn't it be a better newcomer experience if the official 
compiler was either LDC or GDC?
For us current users it really doesn't matter what is labelled 
official, we pick what serves us best, but for a newcomer, the 
word official surely carries a lot of weight, doesn't it?

From a marketing point of view, is it better for D as a language 
that first-timers try the bleeding-edge, latest language features 
with DMD, or that their expectations of efficient native code are 
not broken?

Apologies if this has been discussed before...
Feb 17 2016
next sibling parent reply Xinok <xinok live.com> writes:
On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
wrote:
 I was reading the other thread "Speed kills" and was wondering 
 if there is any practical reason why DMD is the official 
 compiler?

 ...
I pretty much asked this same question a little over a year ago. Thread is here: http://forum.dlang.org/thread/mjwitvqmaqlwvoudjoae forum.dlang.org
Feb 17 2016
parent reply Márcio Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 00:06:10 UTC, Xinok wrote:
 On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
 wrote:
 I was reading the other thread "Speed kills" and was wondering 
 if there is any practical reason why DMD is the official 
 compiler?

 ...
I pretty much asked this same question a little over a year ago. Thread is here: http://forum.dlang.org/thread/mjwitvqmaqlwvoudjoae forum.dlang.org
I am not proposing a new backend for DMD; that discussion is going nowhere. I am considering changing the compiler that is tagged as "official", instead.
Feb 17 2016
parent rsw0x <anonymous anonymous.com> writes:
On Thursday, 18 February 2016 at 02:29:52 UTC, Márcio Martins 
wrote:
 On Thursday, 18 February 2016 at 00:06:10 UTC, Xinok wrote:
 On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
 wrote:
 I was reading the other thread "Speed kills" and was 
 wondering if there is any practical reason why DMD is the 
 official compiler?

 ...
I pretty much asked this same question a little over a year ago. Thread is here: http://forum.dlang.org/thread/mjwitvqmaqlwvoudjoae forum.dlang.org
I am not proposing a new backend for DMD, that discussion is going nowhere. I am considering changing the compiler that is tagged as "official", instead.
It would probably be for the best if there was some push to make LDC 'the' D compiler over a 6-month or 1-year period that included getting LDC and DMD perfectly in sync (AFAIK there's currently heavy refactoring going on in the frontend to help with this?).

LDC has the downside that it's a lot slower to compile than DMD. I've never worked on the LDC codebase, so I'm not even sure if this could be fixed.
Feb 17 2016
prev sibling next sibling parent reply Chris Wright <dhasenan gmail.com> writes:
On Wed, 17 Feb 2016 22:57:20 +0000, Márcio Martins wrote:

 I was reading the other thread "Speed kills" and was wondering if there
 is any practical reason why DMD is the official compiler?
Walter Bright is the lead developer, and for legal reasons he will never touch source code from a compiler he didn't write. And since DMD is something like twice as fast as LDC, there's at least some argument in favor of keeping it around.

Should Walter retire, there's a reasonable chance that LDC will become the primary compiler. However, compilation speed is important.

I'm not sure how different LDC and DMD are, but perhaps you could use DMD for development and LDC for production builds?
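As a rough sketch of that split workflow (hypothetical commands for a single-file app.d; the flags are the standard dmd/ldc2 ones):

    # inner loop: fast compile, run the unit tests
    dmd -g -unittest -run app.d

    # release build: slower to compile, faster binary
    ldc2 -O3 -release app.d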
Feb 17 2016
next sibling parent Márcio Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 00:35:01 UTC, Chris Wright wrote:
 On Wed, 17 Feb 2016 22:57:20 +0000, Márcio Martins wrote:

 I was reading the other thread "Speed kills" and was wondering 
 if there is any practical reason why DMD is the official 
 compiler?
Walter Bright is the lead developer, and for legal reasons he will never touch source code from a compiler he didn't write. And since DMD is something like twice as fast as LDC, there's at least some argument in favor of keeping it around. Should Walter retire, there's a reasonable chance that LDC will become the primary compiler. However, compilation speed is important. I'm not sure how different LDC and DMD are, but perhaps you could use DMD for development and LDC for production builds?
Hmm, I think most of us involved with D for a while know which compiler to use and why, right? But would it be beneficial to D as a language if we could make it so that the first compiler a newcomer interacts with is the one that produces code as fast as C/C++?

Walter and everyone else could continue working on DMD as the reference compiler and pushing language features. Everyone that uses DMD could continue using it. Nothing would have to change, except for the website, and perhaps some work on building installers for LDC and GDC so they too get an "Easy installation" description in the downloads section :)

Purely a marketing move, if that makes sense, I don't know...
Feb 17 2016
prev sibling next sibling parent reply Luis <luis.panadero gmail.com> writes:
On Thursday, 18 February 2016 at 00:35:01 UTC, Chris Wright wrote:
 On Wed, 17 Feb 2016 22:57:20 +0000, Márcio Martins wrote:

 I was reading the other thread "Speed kills" and was wondering 
 if there is any practical reason why DMD is the official 
 compiler?
Walter Bright is the lead developer, and for legal reasons he will never touch source code from a compiler he didn't write. And since DMD is something like twice as fast as LDC, there's at least some argument in favor of keeping it around. Should Walter retire, there's a reasonable chance that LDC will become the primary compiler. However, compilation speed is important. I'm not sure how different LDC and DMD are, but perhaps you could use DMD for development and LDC for production builds?
Correct me if I'm wrong, but if I understand correctly what I read in other threads, LDC & GDC use the same frontend as DMD, just a few versions older, because they need to update the glue layer between the frontend and the LLVM/GNU backend?
Feb 18 2016
parent Daniel Kozak via Digitalmars-d <digitalmars-d puremagic.com> writes:
Yes, all three compilers share the same frontend (generally at different versions).

On 18.2.2016 at 09:57, Luis via Digitalmars-d wrote:
 On Thursday, 18 February 2016 at 00:35:01 UTC, Chris Wright wrote:
 On Wed, 17 Feb 2016 22:57:20 +0000, Márcio Martins wrote:

 I was reading the other thread "Speed kills" and was wondering if 
 there is any practical reason why DMD is the official compiler?
Walter Bright is the lead developer, and for legal reasons he will never touch source code from a compiler he didn't write. And since DMD is something like twice as fast as LDC, there's at least some argument in favor of keeping it around. Should Walter retire, there's a reasonable chance that LDC will become the primary compiler. However, compilation speed is important. I'm not sure how different LDC and DMD are, but perhaps you could use DMD for development and LDC for production builds?
Correct me, but If I understand correctly what I read on other threads, LDC & GDC uses the same frontend that DMD, but a few versions more older because they need to update the glue layer the front end with lvm/gnu backend ?
Feb 18 2016
prev sibling next sibling parent reply Radu <radu void.null> writes:
On Thursday, 18 February 2016 at 00:35:01 UTC, Chris Wright wrote:
 On Wed, 17 Feb 2016 22:57:20 +0000, Márcio Martins wrote:

 I was reading the other thread "Speed kills" and was wondering 
 if there is any practical reason why DMD is the official 
 compiler?
Walter Bright is the lead developer, and for legal reasons he will never touch source code from a compiler he didn't write. And since DMD is something like twice as fast as LDC, there's at least some argument in favor of keeping it around. Should Walter retire, there's a reasonable chance that LDC will become the primary compiler. However, compilation speed is important. I'm not sure how different LDC and DMD are, but perhaps you could use DMD for development and LDC for production builds?
Walter should not need to ever work on D compiler back-ends; there are *a lot* of issues to be dealt with in the language implementation that are front-end only or at least not backend related. There are others that can work/already work with the LLVM backend, and they seem to know what they are doing.

The warts and issues in the language/runtime/phobos are well known; spending time fixing them is more valuable for the community than having Walter (maybe others) working on any dmd backend stuff.

As rsw0x suggested, a push to get LDC in sync with mainline, and switching to it after, would make more sense in the long run. Probably focusing on LDC and investing more man power will also help fix any perf issues re. compile time; there shouldn't be much to lose here, at least for debug compile times.

All this of course depends on Walter's willingness to give up working on DMD, whatever this means for him.
Feb 18 2016
parent reply Márcio Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 10:16:40 UTC, Radu wrote:
 On Thursday, 18 February 2016 at 00:35:01 UTC, Chris Wright 
 wrote:
 On Wed, 17 Feb 2016 22:57:20 +0000, Márcio Martins wrote:

 I was reading the other thread "Speed kills" and was 
 wondering if there is any practical reason why DMD is the 
 official compiler?
Walter Bright is the lead developer, and for legal reasons he will never touch source code from a compiler he didn't write. And since DMD is something like twice as fast as LDC, there's at least some argument in favor of keeping it around. Should Walter retire, there's a reasonable chance that LDC will become the primary compiler. However, compilation speed is important. I'm not sure how different LDC and DMD are, but perhaps you could use DMD for development and LDC for production builds?
Walter should not need to ever work on D compiler back-ends, there are *a lot* of issues to be dealt with in the language implementation that are front-end only or at least not backend related. There are others that can work/already work with the LLVM backend and they seam to know what they are doing. There warts and issues in the language/runtime/phobos are well know, spending time fixing them is more valuable for the community rather than having Walter (maybe others) working on any dmd backend stuff. As rsw0x suggested, a push to get LDC on sync with mainline, and switching to it after it would make more sense in the long run. Probably focusing on LDC and investing more man power will also help fix any perf issues re. compile time, there should bot be much to loose here at least for debug compile times. All this of course depends on Walter's willing to give up working on DMD, whatever this means for him.
Walter doesn't have to give up working on DMD, right? Everyone could continue working on DMD, perhaps a few people could help on all three, I don't know... It's important that more people work on DMD and focus on polishing the frontend and language features, it being the reference compiler whose frontend is used by all three compilers. What could potentially be important would be to backport key fixes/features from the current frontend to LDC/GDC as well.
Feb 18 2016
next sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 18 February 2016 at 11:42, Márcio Martins via Digitalmars-d 
<digitalmars-d puremagic.com> wrote:

 On Thursday, 18 February 2016 at 10:16:40 UTC, Radu wrote:

 On Thursday, 18 February 2016 at 00:35:01 UTC, Chris Wright wrote:

 On Wed, 17 Feb 2016 22:57:20 +0000, Márcio Martins wrote:

 I was reading the other thread "Speed kills" and was wondering if there
 is any practical reason why DMD is the official compiler?

 Walter Bright is the lead developer, and for legal reasons he will never
 touch source code from a compiler he didn't write. And since DMD is
 something like twice as fast as LDC, there's at least some argument in
 favor of keeping it around.

 Should Walter retire, there's a reasonable chance that LDC will become
 the primary compiler. However, compilation speed is important.

 I'm not sure how different LDC and DMD are, but perhaps you could use
 DMD for development and LDC for production builds?

 Walter should not need to ever work on D compiler back-ends; there are *a
 lot* of issues to be dealt with in the language implementation that are
 front-end only or at least not backend related. There are others that can
 work/already work with the LLVM backend, and they seem to know what they
 are doing.

 The warts and issues in the language/runtime/phobos are well known;
 spending time fixing them is more valuable for the community than
 having Walter (maybe others) working on any dmd backend stuff.

 As rsw0x suggested, a push to get LDC in sync with mainline, and
 switching to it after, would make more sense in the long run. Probably
 focusing on LDC and investing more man power will also help fix any perf
 issues re. compile time; there shouldn't be much to lose here, at least
 for debug compile times.

 All this of course depends on Walter's willingness to give up working on
 DMD, whatever this means for him.

 Walter doesn't have to give up working on DMD, right? Everyone could
 continue working on DMD, perhaps a few people could help on all three, I
 don't know... It's important that more people work on DMD and focus on
 polishing the frontend and language features, it being the reference
 compiler whose frontend is used by all three compilers. What could
 potentially be important would be to backport key fixes/features from the
 current frontend to LDC/GDC as well.
There seems to be a deterrence against backporting, i.e. 2.068 fixes to 2.066, for LDC/GDC. I have no idea why; I do it all the time. :-)
Feb 18 2016
parent reply tsbockman <thomas.bockman gmail.com> writes:
On Thursday, 18 February 2016 at 10:48:46 UTC, Iain Buclaw wrote:
 There seems to be a deterrence against backporting ie: 2.068 
 fixes to 2.066 for LDC/GDC.  I have no idea why, I do it all 
 the time. :-)
Part of the problem is just that no one else knows *which* fixes have been back-ported - there doesn't seem to be a list prominently displayed anywhere on the GDC home page. This leaves people like myself to default to the assumption that the GDC/LDC front-end basically matches the DMD one of the same version.
Feb 18 2016
parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 18 February 2016 at 11:53, tsbockman via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Thursday, 18 February 2016 at 10:48:46 UTC, Iain Buclaw wrote:

 There seems to be a deterrence against backporting ie: 2.068 fixes to
 2.066 for LDC/GDC.  I have no idea why, I do it all the time. :-)
Part of the problem is just that no one else knows *which* fixes have been back-ported - there doesn't seem to be a list prominently displayed anywhere on the GDC home page. This leaves people like myself to default to the assumption that the GDC/LDC front-end basically matches the DMD one of the same version.
Typically things that no one will ever notice, nor care to. Anything that causes an ICE is a candidate for backporting. Features or changes in behaviour are not in that list of approved things to backport.

For example, I typically raise (and Kai probably does too) about half a dozen patches to DMD that fix bad or nonsensical frontend "lowering" in almost *every* release.

Saying that, I have in the past:
- Backported vector support from master when it first got accepted.
- The current 2.066 frontend uses C++ support from 2.068.

But these are, again, nothing that end users would ever be aware of.
Feb 18 2016
prev sibling parent Radu <radu void.null> writes:
On Thursday, 18 February 2016 at 10:42:33 UTC, Márcio Martins 
wrote:
 On Thursday, 18 February 2016 at 10:16:40 UTC, Radu wrote:
 [...]
Walter doesn't have to give up working on DMD, right? Everyone could continue working on DMD, perhaps a few people could help on all three, I don't know... It's important if more people work on DMD and focus on polishing the frontend and language features, being the reference compiler, and used by all three compilers as well. What could potentially be important would be to backport key fixes/features from current frontend to LDC/GDC as well.
As history tells, everything has to do with him moving D to more open waters. DMD is just a piece of that history and an excuse to keep working in a comfort zone. It adds nothing strategically, and it's just the last vestige of the old D world.
Feb 18 2016
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2016-02-18 01:35, Chris Wright wrote:

 Should Walter retire, there's a reasonable chance that LDC will become
 the primary compiler. However, compilation speed is important.
Walter working on D is his way of retiring ;)

-- 
/Jacob Carlborg
Feb 18 2016
parent Radu <radu void.null> writes:
On Thursday, 18 February 2016 at 12:48:23 UTC, Jacob Carlborg 
wrote:
 On 2016-02-18 01:35, Chris Wright wrote:

 Should Walter retire, there's a reasonable chance that LDC 
 will become
 the primary compiler. However, compilation speed is important.
Walter working on D is his way of retiring ;)
Well said :)
Feb 18 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/17/2016 4:35 PM, Chris Wright wrote:
 And since DMD is
 something like twice as fast as LDC, there's at least some argument in
 favor of keeping it around.
When I meet someone new who says they settled on D in their company for development, I casually ask why they selected D.

  "Because it compiles so fast."

It's not a minor issue.
Feb 24 2016
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 25 February 2016 at 01:53:51 UTC, Walter Bright 
wrote:
 When I meet someone new who says they settled on D in their 
 company for development, I casually ask why they selected D?

   "Because it compiles so fast."
I actually agree this is a big issue and one of the killer features to me. But I also need to point out that there's a selection bias going on here: of course D's users today like D's strengths today. If they didn't, they wouldn't be using it.

I've also heard from big users who want the performance more than compile time and hit difficulty in build scaling...
Feb 24 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/24/2016 6:05 PM, Adam D. Ruppe wrote:
 I've also heard from big users who want the performance more than compile time
 and hit difficulty in build scaling..
I know that performance trumps all for many users. But we can have both - dmd and ldc/gdc.

My point is that compile speed is a valuable and distinguishing feature of D. It's one that I have to constantly maintain, or it bit rots away. It's one that people regularly dismiss as unimportant. Sometimes it seems I'm the only one working on the compiler who cares about it.

For comparison, C++ compiles like a pig, I've read that Rust compiles like a pig, and Go makes a lot of hay for compiling fast.
Feb 24 2016
next sibling parent reply Puming <zhaopuming gmail.com> writes:
On Thursday, 25 February 2016 at 02:48:24 UTC, Walter Bright 
wrote:
 On 2/24/2016 6:05 PM, Adam D. Ruppe wrote:
 I've also heard from big users who want the performance more 
 than compile time
 and hit difficulty in build scaling..
I know that performance trumps all for many users. But we can have both - dmd and ldc/gdc. My point is that compile speed is a valuable and distinguishing feature of D. It's one that I have to constantly maintain, or it bit rots away. It's one that people regularly dismiss as unimportant. Sometimes it seems I'm the only one working on the compiler who cares about it. For comparison, C++ compiles like a pig, I've read that Rust compiles like a pig, and Go makes a lot of hay for compiling fast.
Maybe in the future, when ldc/gdc catch up with dmd's frontend version, we can combine them into a bundle for downloads? Then new people can just download the compiler bundle and run dmd or ldc/gdc as they like.
Feb 24 2016
parent reply rsw0x <anonymous anonymous.com> writes:
On Thursday, 25 February 2016 at 03:07:20 UTC, Puming wrote:
 On Thursday, 25 February 2016 at 02:48:24 UTC, Walter Bright 
 wrote:
 [...]
Maybe in the future, when ldc/gdc catches up versions with dmd, we can combine them into a bundle for downloads? Then new people can just download the compiler bundle and run dmd or ldc/gdc as they like.
licensing issues
Feb 24 2016
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 25 February 2016 at 03:16:57 UTC, rsw0x wrote:
 licensing issues
I can't see any... Walter would be licensed to distribute all three.
Feb 24 2016
parent reply rsw0x <anonymous anonymous.com> writes:
On Thursday, 25 February 2016 at 03:26:54 UTC, Adam D. Ruppe 
wrote:
 On Thursday, 25 February 2016 at 03:16:57 UTC, rsw0x wrote:
 licensing issues
I can't see any... Walter would be licensed to distribute all three.
GDC is under GPL
Feb 24 2016
parent rsw0x <anonymous anonymous.com> writes:
On Thursday, 25 February 2016 at 03:47:33 UTC, rsw0x wrote:
 On Thursday, 25 February 2016 at 03:26:54 UTC, Adam D. Ruppe 
 wrote:
 On Thursday, 25 February 2016 at 03:16:57 UTC, rsw0x wrote:
 licensing issues
I can't see any... Walter would be licensed to distribute all three.
GDC is under GPL
Oh, my bad, I reread the post. I thought he meant combining them as in a single frontend/three backends in a single executable. Never mind.
Feb 24 2016
prev sibling next sibling parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 2016-02-24 at 18:48 -0800, Walter Bright via Digitalmars-d
wrote:
 […]

 For comparison, C++ compiles like a pig, I've read that Rust compiles
 like a pig, and Go makes a lot of hay for compiling fast.

I wonder if anyone has noticed, or appreciated, that all the trendy, 
hipster cloud based CI servers support Go, sometimes C++ and C (sort 
of), but not Rust or D.

Public CI and deployment support are increasingly an issue for FOSS 
projects, not just for goodness, but also for marketing.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Feb 25 2016
parent reply David Nadlinger <code klickverbot.at> writes:
On Thursday, 25 February 2016 at 09:04:17 UTC, Russel Winder 
wrote:
 I wonder if anyone has noticed, or appreciated that, all the 
 trendy, hipster cloud based CI servers support Go, sometimes 
 C++ and C (sort of), but not Rust, or D.
Travis CI, which is probably the one "trendy, hipster" service most would think of, has been supporting D for quite a while now due to Martin Nowak's great work. (He put Iain's name and mine on it too, but we didn't really contribute at all.) Of course, there is always room for improving the integration with this and similar services.

When I'm saying that dividing the attention between three compilers is a strategic mistake, it's not because I doubt that having multiple compilers is a nice thing to have. It certainly is. But I'm convinced that expending the same amount of effort on the wider ecosystem would get us much farther.

— David
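For reference, a minimal .travis.yml for a D project is only a few lines (a sketch based on the documented Travis D support; the available compiler names and the dub test step are assumptions that may have changed since):

    language: d
    d:
      - dmd
      - ldc
      - gdc
    script: dub test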
Feb 25 2016
parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 2016-02-25 at 16:51 +0000, David Nadlinger via Digitalmars-d
wrote:
 […]

 Travis CI, which is probably the one "trendy, hipster" service
 most would think of, has been supporting D for quite a while
 now due to Martin Nowak's great work. (He put Iain's name and
 mine on it too, but we didn't really contribute at all.)

Indeed, Travis-CI advertises its D capability. Apologies for implying 
it didn't. Other cloud CI services are definitely lacking though, at 
least in their advertising of supported languages.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Feb 26 2016
prev sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
Le 25/02/2016 03:48, Walter Bright a écrit :
 On 2/24/2016 6:05 PM, Adam D. Ruppe wrote:
 I've also heard from big users who want the performance more than
 compile time
 and hit difficulty in build scaling..
I know that performance trumps all for many users. But we can have both - dmd and ldc/gdc. My point is that compile speed is a valuable and distinguishing feature of D. It's one that I have to constantly maintain, or it bit rots away. It's one that people regularly dismiss as unimportant. Sometimes it seems I'm the only one working on the compiler who cares about it. For comparison, C++ compiles like a pig, I've read that Rust compiles like a pig, and Go makes a lot of hay for compiling fast.
I think you are being very gentle with C++. It can be hell; when you are working on multiple platforms with different compilers and build systems, it takes a lot of effort and time to keep compilation time at a decent level. I recently optimized our build configurations after adding some Boost modules at my day job; our build time had doubled instantly.

All of these optimizations come at a significant cost on at least one other front:
- PIMPL: increases code complexity, decreases performance
- Precompiled headers: not standard; MinGW is limited to a 130 MB generated file
- Unity builds: can be hard to add to many build systems if auto-generated; the compiler can crash with an out-of-memory error (MinGW will be the first)
- Cleaning up our includes: how do you do that without tools?
- Multi-threaded compilation: not standard; sometimes it has to be configured per computer

So thank you for having created a fast compiler, even if I can only dream of being able to use it professionally one day. IMO, if Go is a fast compiler, it's because dmd showed the way.

Is dmd multi-threaded?

PS: I don't understand why import modules aren't already in C++; the clang team has been working on them for years.
Feb 25 2016
next sibling parent Chris Wright <dhasenan gmail.com> writes:
On Fri, 26 Feb 2016 00:48:15 +0100, Xavier Bigand wrote:

 Is dmd multi-threaded?
Not at present.

It should be relatively easy to parallelize IO and parsing, at least in theory. I think IO parallelism was removed with the ddmd switch, maybe? But you'd have to identify the files you need to read in advance, so that's not as straightforward.

D's metaprogramming is too complex for a 100% solution for parallelizing semantic analysis on a module level. But you could create a partial solution:

* After parsing, look for unconditional imports. Skip static if/else blocks, skip template bodies, but grab everything else.
* Make a module dependency graph from that.
* Map each module to a task.
* Merge dependency cycles into single tasks. You now have a DAG.
* While there are any tasks in the graph:
  - Find all leaf tasks in the graph.
  - Run semantic analysis on them in parallel.

When you encounter a conditional or mixed in import, you can insert it into the DAG if it's not already there, but it would be simpler just to run analysis right then and there.

Alternatively, you can find regular and conditional imports and try to use them all. But this requires you to hold errors until you're certain that the module is used, and you end up doing more work overall. And that could be *tons* more work. Consider:

    module a;
    enum data = import("ten_million_records.csv");
    mixin(createClassesFromData(data));

    module b;
    enum shouldUseModuleA = false;

    module c;
    import b;
    static if (shouldUseModuleA) import a;

And even if you ignored that, you'd still have to deal with mixed-in imports, which can be the result of arbitrarily complex CTFE expressions.

While all of this is straightforward in theory, it probably isn't so simple in practice.
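To make the leaf-scheduling loop concrete, here is a minimal, self-contained D sketch (all names - Module, runSemantic, analyzeAll - are hypothetical, not compiler code, and it assumes dependency cycles have already been merged into single tasks):

    import std.algorithm : all, filter;
    import std.array : array;
    import std.parallelism : parallel;

    struct Module
    {
        string name;
        Module*[] deps;   // unconditional imports; cycles already merged
        bool analyzed;
    }

    // Hypothetical stand-in for the real per-module semantic pass.
    void runSemantic(ref Module m)
    {
        m.analyzed = true;
    }

    // Repeatedly analyze every current leaf (a module whose deps are
    // all done) in parallel until no modules remain.
    void analyzeAll(Module*[] mods)
    {
        while (mods.length)
        {
            auto leaves = mods.filter!(m => m.deps.all!(d => d.analyzed)).array;
            foreach (m; parallel(leaves))
                runSemantic(*m);
            mods = mods.filter!(m => !m.analyzed).array;
        }
    }

    void main()
    {
        auto a = new Module("a");
        auto b = new Module("b", [a]);
        analyzeAll([b, a]);   // a runs in the first wave; b in the second
    }

Conditional and mixed-in imports would have to be spliced into the graph (or analyzed eagerly) as described above, which is where this simple picture breaks down.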
Feb 25 2016
prev sibling parent Ola Fosheim Grøstad writes:
On Thursday, 25 February 2016 at 23:48:15 UTC, Xavier Bigand 
wrote:
 IMO if Go is a fast compiler is just because dmd shows the way.
Go was designed to compile fast because Google was looking for something faster than C++ for largish projects. The authors were also involved with Unix/Plan9 and have experience with creating languages and compilers for building operating systems...

Anyway, compilation speed isn't the primary concern these days when you look at how people pick their platform. People tend to go for languages/compilers that are convenient, generate good code, support many platforms, and resort to parallel builds when the project grows.

You can build a very fast compiler for a stable language with a simple type system like C that doesn't even build an AST (using an implicit AST) and does code-gen on the fly. But it turns out people prefer sticking to GCC even when other C compilers have been 10-20x faster.
Feb 26 2016
prev sibling next sibling parent karabuta <karabutaworld gmail.com> writes:
On Thursday, 25 February 2016 at 01:53:51 UTC, Walter Bright 
wrote:
 On 2/17/2016 4:35 PM, Chris Wright wrote:
 And since DMD is
 something like twice as fast as LDC, there's at least some 
 argument in
 favor of keeping it around.
When I meet someone new who says they settled on D in their company for development, I casually ask why they selected D? "Because it compiles so fast." It's not a minor issue.
+1 Well spoken
Feb 25 2016
prev sibling parent reply Márcio Martins <marcioapm gmail.com> writes:
On Thursday, 25 February 2016 at 01:53:51 UTC, Walter Bright 
wrote:
 On 2/17/2016 4:35 PM, Chris Wright wrote:
 And since DMD is
 something like twice as fast as LDC, there's at least some 
 argument in
 favor of keeping it around.
When I meet someone new who says they settled on D in their company for development, I casually ask why they selected D? "Because it compiles so fast." It's not a minor issue.
Could we maybe create a quick informative survey (SurveyMonkey?), so we can get a glimpse of why people like D and what they believe would improve their experience with the language? Perhaps also why they have chosen to or not to adopt D more seriously or professionally?

Given that there is such a wide diversity of people currently using it, I think it would be nice for the project leadership and all of us in the community to get a more realistic view on this matter, to better understand what's important, choose the future direction, and see what the real selling points are. Right now it seems like there are a lot of mixed signals even among long-time users and contributors.
Feb 28 2016
parent reply Mike Parker <aldacron gmail.com> writes:
On Sunday, 28 February 2016 at 13:31:17 UTC, Márcio Martins wrote:

 Could we maybe create a quick informative survey, 
 (surveymonkey?), so we can get a glimpse of why people like D 
 and what they believe would improve their experience with the 
 language? Perhaps also why they have chosen to or not to adopt 
 D more seriously or professionally?

 Given that there is such a wide diversity of people currently 
 using it, I think it would be nice for the project leadership 
 and all of us in the community to get a more realistic view on 
 this matter, to better understand what's important, chose the 
 future direction and what are the real selling points. Right 
 now it seems like there are a lot of mixed signals even among 
 long-time users and contributors.
Such a survey wouldn't be anywhere near "realistic." The number and types of users who regularly keep up with the forums are highly unlikely to be a representative sample of D users.
Feb 28 2016
next sibling parent Mike Parker <aldacron gmail.com> writes:
On Sunday, 28 February 2016 at 15:02:24 UTC, Mike Parker wrote:

 Such a survey wouldn't be anywhere near "realistic." The number 
 and types of users who regularly keep up with the forums are 
 highly unlikely to be a representative sample of D users.
Not to mention that only a fraction of people who view the forums would actually take the survey.
Feb 28 2016
prev sibling parent reply Márcio Martins <marcioapm gmail.com> writes:
On Sunday, 28 February 2016 at 15:02:24 UTC, Mike Parker wrote:
 On Sunday, 28 February 2016 at 13:31:17 UTC, Márcio Martins 
 wrote:

 Could we maybe create a quick informative survey, 
 (surveymonkey?), so we can get a glimpse of why people like D 
 and what they believe would improve their experience with the 
 language? Perhaps also why they have chosen to or not to adopt 
 D more seriously or professionally?

 Given that there is such a wide diversity of people currently 
 using it, I think it would be nice for the project leadership 
 and all of us in the community to get a more realistic view on 
 this matter, to better understand what's important, chose the 
 future direction and what are the real selling points. Right 
 now it seems like there are a lot of mixed signals even among 
 long-time users and contributors.
Such a survey wouldn't be anywhere near "realistic." The number and types of users who regularly keep up with the forums are highly unlikely to be a representative sample of D users.
There is no reason why it should be limited to these forums, is there? Such a survey should be considerably more "realistic" and "representative" than feelings, emotions, and anecdotal evidence.

I think it would be interesting and useful to know what is important for:
- users just starting to use D
- users already heavily invested in the language
- users in each distinct usage (gamedev, web, scripts, real-time, ...)
- users proficient in alternative languages
- companies of different sizes
- size of D codebase
Feb 28 2016
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 02/28/2016 11:15 AM, Márcio Martins wrote:
 There is no reason why it should be limited to these forums, is there?
 Such a survey should be fairly more "realistic" and "representative"
 than feelings, emotions and anecdotal evidence.

 I think it would be interesting and useful to know what is important for:
 -users just starting to use D
 -users already heavily invested in the language
 -users in each distinct usage (gamedev, web, scripts, real-time, ...)
 -users proficient in alternative languages
 -companies of different sizes
 -size of D codebase
Putting the horses on the proper end of the cart is to first make sure we make it easy to align the three compiler versions together. Only then, choosing which compiler is more promoted, default etc. becomes a simple matter of branding. Márcio, we are a small enough community that we can't enact things by fiat. We've tried before, invariably with bad results. Of course you are free to speculate that this time may be different, but it's just that - speculation. Andrei
Feb 28 2016
prev sibling next sibling parent reply rsw0x <anonymous anonymous.com> writes:
On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
wrote:
 I was reading the other thread "Speed kills" and was wondering 
 if there is any practical reason why DMD is the official 
 compiler?

 [...]
Developer politics, I believe. I'm curious where Andrei stands on this issue; IIRC he was upset at one point that dmd could not be redistributed properly on Linux distros.
Feb 17 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 02/17/2016 09:28 PM, rsw0x wrote:
 I'm curious where Andrei stands on this issue, IIRC he was upset at one
 point that dmd could not be redistributed properly on linux distros.
We'd love dmd's backend to have a more permissive license, and we have tried to make it so. Regardless, the dmd compiler is an important part of the D ecosystem and is here to stay. -- Andrei
Feb 18 2016
parent Márcio Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 12:22:20 UTC, Andrei 
Alexandrescu wrote:
 On 02/17/2016 09:28 PM, rsw0x wrote:
 I'm curious where Andrei stands on this issue, IIRC he was 
 upset at one
 point that dmd could not be redistributed properly on linux 
 distros.
We'd love dmd's backend to have a more permissive license, and we have tried to make it so. Regardless, the dmd compiler is an important part of the D ecosystem and is here to stay. -- Andrei
Andrei, I agree with that. DMD is important and should be where most of the exploratory action happens. But does it really have to be the "face" of the D language?
Feb 18 2016
prev sibling next sibling parent reply Kai Nacke <kai redstar.de> writes:
On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
wrote:
 I was reading the other thread "Speed kills" and was wondering 
 if there is any practical reason why DMD is the official 
 compiler?

 Currently, newcomers come expecting their algorithm from 
 rosetta code to run faster in D than their current language, 
 but then it seems like it's actually slower. What gives?

 Very often the typical answer from this community is generally 
 "did you use LDC/GDC?".

 Wouldn't it be a better newcomer experience if the official 
 compiler was either LDC or GDC?
 For us current users it really doesn't matter what is labelled 
 official, we pick what serves us best, but for a newcomer, the 
 word official surely carries a lot of weight, doesn't it?

 From a marketing point of view, is it better for D as a 
 language that first-timers try the bleeding-edge, latest 
 language features with DMD, or that their expectations of 
 efficient native code are not broken?

 Apologies if this has been discussed before...
Hi, even if DMD is the official reference compiler, the download page http://dlang.org/download.html already mentions "strong optimization" as a pro of GDC/LDC vs. "very fast compilation speeds" as a pro of DMD. If we made GDC or LDC the official compiler, then the next question to pop up would be about compilation speed...

Regards,
Kai
Feb 17 2016
next sibling parent rsw0x <anonymous anonymous.com> writes:
On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
 On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
 wrote:
 [...]
Hi, even if DMD is the official reference compiler, the download page http://dlang.org/download.html already mentions "strong optimization" as pro of GDC/LDC vs. "very fast compilation speeds" as pro of DMD. If we would make GDC or LDC the official compiler then the next question which pops up is about compilation speed.... Regards, Kai
An additional major pro of ldc/gdc is that they're under a free license and can be freely redistributed; dmd is not.
Feb 17 2016
prev sibling next sibling parent reply =?UTF-8?B?TcOhcmNpbw==?= Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
 On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
 wrote:
 I was reading the other thread "Speed kills" and was wondering 
 if there is any practical reason why DMD is the official 
 compiler?

 Currently, newcomers come expecting their algorithm from 
 rosetta code to run faster in D than their current language, 
 but then it seems like it's actually slower. What gives?

 Very often the typical answer from this community is generally 
 "did you use LDC/GDC?".

 Wouldn't it be a better newcomer experience if the official 
 compiler was either LDC or GDC?
 For us current users it really doesn't matter what is labelled 
 official, we pick what serves us best, but for a newcomer, the 
 word official surely carries a lot of weight, doesn't it?

 From a marketing point of view, is it better for D as a 
 language that first-timers try the bleeding-edge, latest 
 language features with DMD, or that their expectations of 
 efficient native code are not broken?

 Apologies if this has been discussed before...
Hi, even if DMD is the official reference compiler, the download page http://dlang.org/download.html already mentions "strong optimization" as pro of GDC/LDC vs. "very fast compilation speeds" as pro of DMD. If we would make GDC or LDC the official compiler then the next question which pops up is about compilation speed.... Regards, Kai
I agree that there is potential for compilation speed to become the new question, but newcomers are unlikely to have codebases large enough for compilation speed to matter.

I suppose it's a lot easier to address the compilation speed issue in LDC/GDC than to improve and maintain DMD's backend to the expected levels, right?
Feb 18 2016
parent reply Kai Nacke <kai redstar.de> writes:
On Thursday, 18 February 2016 at 10:45:54 UTC, Márcio Martins 
wrote:
 I suppose it's a lot easier to address the compilation speed 
 issue in LDC/GDC, than to improve and maintain DMD's backend to 
 the expected levels, right?
LLVM has about 2.5 million lines of code. I am anything but sure that it is easy to improve compilation speed.

Regards,
Kai
Feb 18 2016
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 11:41:26 UTC, Kai Nacke wrote:
 On Thursday, 18 February 2016 at 10:45:54 UTC, Márcio Martins 
 wrote:
 I suppose it's a lot easier to address the compilation speed 
 issue in LDC/GDC, than to improve and maintain DMD's backend 
 to the expected levels, right?
LLVM has about 2.5 million code lines. I am anything than sure if it is easy to improve compilation speed.
On some level, I would expect compilation speed and generating well-optimized binaries to be mutually exclusive. To get those extra optimizations, you usually have to do more work, and that takes more time. I'm sure that some optimizations can be added to dmd without particularly compromising compilation speed, and gdc and ldc can probably be made to compile faster without losing out on optimizations, but you can only go so far without either losing out on compilation speed or on optimizations. And obviously, it's not necessarily easy to make improvements to either, regardless of whether it comes at the cost of the other. - Jonathan M Davis
Feb 18 2016
parent reply Márcio Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 12:05:12 UTC, Jonathan M Davis 
wrote:
 On Thursday, 18 February 2016 at 11:41:26 UTC, Kai Nacke wrote:
 On Thursday, 18 February 2016 at 10:45:54 UTC, Márcio Martins 
 wrote:
 I suppose it's a lot easier to address the compilation speed 
 issue in LDC/GDC, than to improve and maintain DMD's backend 
 to the expected levels, right?
LLVM has about 2.5 million code lines. I am anything than sure if it is easy to improve compilation speed.
On some level, I would expect compilation speed and generating well-optimized binaries to be mutually exclusive. To get those extra optimizations, you usually have to do more work, and that takes more time. I'm sure that some optimizations can be added to dmd without particularly compromising compilation speed, and gdc and ldc can probably be made to compile faster without losing out on optimizations, but you can only go so far without either losing out on compilation speed or on optimizations. And obviously, it's not necessarily easy to make improvements to either, regardless of whether it comes at the cost of the other. - Jonathan M Davis
I agree with that. It also means that it would be considerably easier to have a setting in LDC/GDC that generates slightly worse code and compiles slightly faster... perhaps never reaching the speed of DMD, but compilation speed is not the only factor, is it?

GCC/LLVM have many more supported platforms and architectures, produce faster code, and have large communities behind them, constantly optimizing and modernizing, backed by giants like Google, Apple, ... I cannot speak for GCC, but LDC also has considerably better tooling with the sanitizers. LDC seems to be the closest to supporting all major platforms and architectures, including iOS and Android, which are huge markets. It supports Win64/Win32 (experimental) out of the box. Both LDC and GDC have no weird legal strings attached. Both can be distributed with major Linux distros.

All that DMD has going for it is its compilation speed.

These are all big points towards having more users experience and enjoy D as we do! To get more contributors, more people have to use and believe in the language. DMD has a lot of clear barriers to this.

Really, not a lot has to change to start with: just fix the installers and slap the official tag on either LDC or GDC.
Feb 18 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/18/16 8:11 AM, Márcio Martins wrote:
 On Thursday, 18 February 2016 at 12:05:12 UTC, Jonathan M Davis wrote:
 On Thursday, 18 February 2016 at 11:41:26 UTC, Kai Nacke wrote:
 On Thursday, 18 February 2016 at 10:45:54 UTC, Márcio Martins wrote:
 I suppose it's a lot easier to address the compilation speed issue
 in LDC/GDC, than to improve and maintain DMD's backend to the
 expected levels, right?
LLVM has about 2.5 million code lines. I am anything than sure if it is easy to improve compilation speed.
On some level, I would expect compilation speed and generating well-optimized binaries to be mutually exclusive. To get those extra optimizations, you usually have to do more work, and that takes more time. I'm sure that some optimizations can be added to dmd without particularly compromising compilation speed, and gdc and ldc can probably be made to compile faster without losing out on optimizations, but you can only go so far without either losing out on compilation speed or on optimizations. And obviously, it's not necessarily easy to make improvements to either, regardless of whether it comes at the cost of the other. - Jonathan M Davis
I agree with that. It also means that it would be considerably easier to have a setting in LDC/GDC that generates slightly worst code, and compiles slightly faster... perhaps never reaching the speed of DMD, but compilation speed is not the only factor, is it? GCC/LLVM have many more supported platforms and architectures, produce faster code, and have large communities behind them, constantly optimizing and modernizing, backed by it giants like Google, Apple, ... I cannot say for GCC but LDC also has considerably better tooling with the sanitizers. LDC seems to also be the closest to support all major platforms and architectures, including iOS and Android which are huge markets. It supports Win64/Win32 (experimental) out-of-the-box. Both LDC and GDC have no weird legal strings attached. Both can be distributed with major Linux distros.
Which of these advantages cannot be taken advantage of today?
 All that DMD has going for it is it's compilation speed.
Walter does most of the feature implementation work. Having a familiar back-to-back codebase is a big asset. Compilation speed is a big asset, too, probably not as big.
 These are all big points towards having more users experience and enjoy
 D as we do!

 To get more contributors, more people have to use and believe in the
 language. DMD has a lot of clear barriers for this.

 Really, not a lot has to change to start with, just fix the installers
 and slap the official tag in either LDC or GDC.
A step everybody would agree is good would be to make it easy for the three compilers to stay in sync. Thanks for your work on the GC! Andrei
Feb 18 2016
parent reply Márcio Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 13:23:34 UTC, Andrei 
Alexandrescu wrote:

 Which of these advantages cannot be taken advantage of today?
I suppose if you combine the feature sets of all compilers, you will to some degree be able to get the best of all worlds. But shouldn't the compiler *representing* the language in the wild, in benchmarks, be the one whose offering fits the largest number of potential users, with the least possible friction towards adoption? Is it optimal that the compiler labelled *official* offers the fewest "advantages" of all?

There is "Strong optimization" under LDC and GDC on the downloads page; however, we still see people downloading DMD and benchmarking with it, don't we? Yes, people don't read a lot on the web; as soon as they see "official", most people pick that and stop reading.
 Walter does most of the feature implementation work. Having a 
 familiar back-to-back codebase is a big asset. Compilation 
 speed is a big asset, too, probably not as big.
I agree, but I don't see why this would have to change. It shouldn't change. Frontend development could happen on DMD as the *reference* compiler.
 A step everybody would agree is good would be to make it easy 
 for the three compilers to stay in sync.
That would be the cherry on top.
Feb 18 2016
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 14:23:12 UTC, Márcio Martins 
wrote:
 I agree, but I don't see why this would have to change. It 
 shouldn't change. Frontend development could happen on DMD as 
 the *reference* compiler.
And what exactly is the difference between the "official" compiler and the "reference" compiler supposed to be? - Jonathan M Davis
Feb 18 2016
next sibling parent reply Márcio Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 15:36:42 UTC, Jonathan M Davis 
wrote:
 On Thursday, 18 February 2016 at 14:23:12 UTC, Márcio Martins 
 wrote:
 I agree, but I don't see why this would have to change. It 
 shouldn't change. Frontend development could happen on DMD as 
 the *reference* compiler.
And what exactly is the difference between the "official" compiler and the "reference" compiler supposed to be? - Jonathan M Davis
"official" carries a connotation of endorsement doesn't it? In other words, if you are given a choice of 3 and you know very little about each, which would you predict would give you a better user experience? Reference in this case is the one that most closely follows the bleeding edge of the language spec, which new users don't benefit a lot from. In this case it's also where all the frontend development would happen. But what we call it this doesn't really matter to end users. What I have been defending this far is that we could entertain the possibility that end users could be better off if we "suggested" they try out one of the other compilers before they try DMD. The easiest way to suggest that is to stamp "official" on one of the stronger alternatives. Once installers for LDC and GDC are on par with DMD, is there still a pragmatic reason to suggest DMD to new users? Given that all that DMD has going for it from the perspective of a new user is the compilation speed? For everyone else nothing would change, we'd go about our daily lives, using our favorite compiler as always. But meanwhile, people exploring and looking to try D could try out it's amazing features and get proof in first-hand, that these awesome features come at no efficiency cost, as advertised.
Feb 18 2016
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 16:47:16 UTC, Márcio Martins 
wrote:
 On Thursday, 18 February 2016 at 15:36:42 UTC, Jonathan M Davis 
 wrote:
 On Thursday, 18 February 2016 at 14:23:12 UTC, Márcio Martins 
 wrote:
 I agree, but I don't see why this would have to change. It 
 shouldn't change. Frontend development could happen on DMD as 
 the *reference* compiler.
And what exactly is the difference between the "official" compiler and the "reference" compiler supposed to be? - Jonathan M Davis
"official" carries a connotation of endorsement doesn't it? In other words, if you are given a choice of 3 and you know very little about each, which would you predict would give you a better user experience? Reference in this case is the one that most closely follows the bleeding edge of the language spec, which new users don't benefit a lot from. In this case it's also where all the frontend development would happen. But what we call it this doesn't really matter to end users. What I have been defending this far is that we could entertain the possibility that end users could be better off if we "suggested" they try out one of the other compilers before they try DMD. The easiest way to suggest that is to stamp "official" on one of the stronger alternatives. Once installers for LDC and GDC are on par with DMD, is there still a pragmatic reason to suggest DMD to new users? Given that all that DMD has going for it from the perspective of a new user is the compilation speed? For everyone else nothing would change, we'd go about our daily lives, using our favorite compiler as always. But meanwhile, people exploring and looking to try D could try out it's amazing features and get proof in first-hand, that these awesome features come at no efficiency cost, as advertised.
Honestly, I think that dmd _should_ be the go-to compiler. It's the fast one. It's the most up-to-date - especially right now, as gdc and ldc have been trying to get to the point where they're using the new D frontend instead of the C++ one. gdc and ldc are great if you want to make sure that your code is faster in production, but they're slower for actually getting the code written, and AFAIK, if you want to be writing scripts in D (which is really useful), you need rdmd, which means using dmd (and I sure wouldn't want those to be compiled with gdc or ldc anyway - compilation speed matters way more in that case than it even does during development).

New users are frequently impressed by how fast dmd compiles code, and it's a big selling point for us. It's only later that benchmarking comes into play, and if you want to do that, then use gdc or ldc. The download page already says to use gdc or ldc if you want better optimization.

I wouldn't want to use gdc or ldc for normal development unless I had to, and I wouldn't want to encourage others to either. dmd's speed is worth way too much when it comes to getting actual work done. And it's not like it generates slow code. It just doesn't generate code that's as fast as gdc or ldc does, and when you get to the point that you need the fast binary, then use gdc or ldc.

But use them as the default? Why? dmd is a clear winner as far as development goes. It's both faster and more up-to-date. It's just that gdc or ldc is better for production code if you really need all the speed that you can get. We need to work towards getting and keeping gdc and ldc in sync with dmd so that they stop being behind like they typically are, and we do need to make sure that it's clear that gdc and ldc generate faster binaries. But I think that it would be a huge mistake to push either of them as the one that everyone should be using by default.

- Jonathan M Davis
Feb 18 2016
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 18 February 2016 at 17:56:32 UTC, Jonathan M Davis 
wrote:
 Honestly, I think that dmd _should_ be the goto compiler. [snip]
I agree with your response. That being said, it can't hurt to make things a bit clearer for new users. If you go to the download page, there is a "more information" button that takes you to the wiki. But the wiki page just looks like raw HTML; it doesn't look nearly as good as the page we are coming from. Assuming that is fixed, I would recommend two other small changes: 1) put the first line from the wiki, where it says "If you're a beginner DMD is the recommended choice, ...", at the top of the compiler page; 2) replace the GDC and LDC "Strong optimization" lines on the download page with something a little clearer. Even "Stronger optimization than DMD" would be clearer.
Feb 18 2016
prev sibling parent reply David Nadlinger <code klickverbot.at> writes:
On Thursday, 18 February 2016 at 17:56:32 UTC, Jonathan M Davis 
wrote:
 […] if you want to be writing scripts in D (which is really 
 useful), you need rdmd, which means using dmd
You can use rdmd with ldmd2 just as well (and presumably gdmd too).
 New users are frequently impressed by how fast dmd compiles 
 code, and it's a big selling point for us. It's only later that 
 benchmarking comes into play, and if you want to do that, then use 
 gdc or ldc. The download page already says to use gdc or ldc if 
 you want better optimization.
I'd claim that an equal number of people are put off by the sometimes abysmal performance of optimized DMD output in their initial tests and first toy projects.
 dmd is a clear winner as far as development goes.
Clear only to somebody with x86-centric vision. I'm not claiming that the somewhat lower compile times aren't good for productivity. But being able to easily tap into the rich LLVM ecosystem or even just targeting the most widely used CPU architecture (in terms of units) is also something not to be forgotten when considering the development process. — David
Feb 18 2016
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 20:28:41 UTC, David Nadlinger 
wrote:
 On Thursday, 18 February 2016 at 17:56:32 UTC, Jonathan M Davis 
 wrote:
 […] if you want to be writing scripts in D (which is really 
 useful), you need rdmd, which means using dmd
You can use rdmd with ldmd2 just as well (and presumably gdmd too).
Good to know.
 Clear only to somebody with x86-centric vision. I'm not 
 claiming that the somewhat lower compile times aren't good for 
 productivity. But being able to easily tap into the rich LLVM 
 ecosystem or even just targeting the most widely used CPU 
 architecture (in terms of units) is also something not to be 
 forgotten when considering the development process.
Having ldc is huge, but as long as you're targeting x86(_64) as one of your platforms, developing with dmd is going to be faster thanks to the fast compilation times. And if we can get dmd and ldc to be fully compatible like they should be, then as long as your code is cross-platform, it should be possible to develop it with dmd and then target whatever you want with ldc - though obviously some stuff will have to be done with ldc when it's something that dmd can't do (like a version block targeting ARM), and anything that's going to ultimately be released using ldc should be tested on it.

But that fast compilation time is so tremendous in the edit-test-edit cycle that I just can't see using ldc as the main compiler for development unless what you're doing isn't targeting x86(_64) at all, or ldc isn't compatible enough with dmd to do most of the development with dmd.

But assuming that dmd and gdc/ldc are compatible, I would definitely argue that the best way to do D development is to do most of the development with dmd and then switch to gdc or ldc for production. That way, you get the fast compilation times when you need it, and your final binary is better optimized.
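To be concrete about the version block: something along these lines (just a sketch; the function names and bodies are hypothetical) is what only a compiler that actually targets ARM (i.e. gdc or ldc) can compile and test:

    version (ARM)
    {
        // ARM-only code path: needs gdc/ldc to even compile
        void platformSetup() { /* ARM-specific setup would go here */ }
    }
    else version (X86_64)
    {
        // this path can be developed and tested quickly with dmd
        void platformSetup() { /* x86_64 setup would go here */ }
    }
    else
        static assert(0, "unsupported architecture");

- Jonathan M Davis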
Feb 18 2016
parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 18 February 2016 at 22:23, Jonathan M Davis via Digitalmars-d 
<digitalmars-d puremagic.com> wrote:

 [...]

 But assuming that dmd and gdc/ldc are compatible, I would definitely 
 argue that the best way to do D development is to do most of the 
 development with dmd and then switch to gdc or ldc for production. That 
 way, you get the fast compilation times when you need it, and your 
 final binary is better optimized.

 - Jonathan M Davis
Actually, I'm sure this is a great way to let bugs in. There's no telling what could happen if you switch compilers and turn the optimisation throttle to full. In 99% of cases, one would hope all is good. But the bigger the codebase you're dealing with, the more you should really use both side by side when testing to ensure that no heisenbugs creep in.
Feb 18 2016
parent Márcio Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 22:33:15 UTC, Iain Buclaw wrote:
 On 18 February 2016 at 22:23, Jonathan M Davis via 
 Digitalmars-d < digitalmars-d puremagic.com> wrote:

[...]
Actually, I'm sure this is a great way to let bugs in. There's no telling what could happen if you switch compilers and turn the optimisation throttle to full. In 99% of cases, one would hope all is good. But the bigger the codebase you're dealing with, the more you should really use both side by side when testing to ensure that no heisenbugs creep in.
Yep, that issue I reported a while ago with floating-point casts comes to mind.
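That class of bug is easy to hit with floating point in particular, since D allows intermediate results to be kept at higher precision than the static types suggest. As an illustration of the general class of problem (not necessarily the issue I reported; the values here are made up):

    import std.stdio;

    void main()
    {
        float a = 16_777_216.0f; // 2^^24, the edge of float precision
        float b = 1.000_000_1f;
        // one compiler may keep a * b in an 80-bit x87 register while
        // another rounds it to 32 bits, so the cast below can yield
        // different integers across compilers and -O levels
        writeln(cast(int)(a * b));
    }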
Feb 18 2016
prev sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 18 February 2016 at 20:28:41 UTC, David Nadlinger 
wrote:
 You can use rdmd with ldmd2 just as well (and presumably gdmd 
 too).
First I'm hearing of it.
Feb 18 2016
prev sibling parent Ola Fosheim Grøstad writes:
On Thursday, 18 February 2016 at 15:36:42 UTC, Jonathan M Davis 
wrote:
 On Thursday, 18 February 2016 at 14:23:12 UTC, Márcio Martins 
 wrote:
 I agree, but I don't see why this would have to change. It 
 shouldn't change. Frontend development could happen on DMD as 
 the *reference* compiler.
And what exactly is the difference between the "official" compiler and the "reference" compiler supposed to be?
A reference implementation is written to the spec in the simplest and clearest possible way so that it is bug free... It is not for production...
Feb 18 2016
prev sibling next sibling parent reply rsw0x <anonymous anonymous.com> writes:
On Thursday, 18 February 2016 at 11:41:26 UTC, Kai Nacke wrote:
 On Thursday, 18 February 2016 at 10:45:54 UTC, Márcio Martins 
 wrote:
 I suppose it's a lot easier to address the compilation speed 
 issue in LDC/GDC, than to improve and maintain DMD's backend 
 to the expected levels, right?
LLVM has about 2.5 million lines of code. I am anything but sure that it is easy to improve compilation speed.

Regards,
Kai
Sorry for being off topic, but rustc (which uses LLVM) has a parallel codegen compilation mode that trades some optimization for a (major, AFAIK) decrease in compilation time when compiling multiple files. Would it be possible for LDC to offer the same thing without a major rewrite? I'm unfamiliar with the LDC codebase, which is why I ask. Probably worth noting that even with parallel codegen, rustc is still far slower than ldc. reference: https://internals.rust-lang.org/t/default-settings-for-parallel-codegen/519
Feb 18 2016
parent Kai Nacke <kai redstar.de> writes:
On Thursday, 18 February 2016 at 17:23:09 UTC, rsw0x wrote:
 On Thursday, 18 February 2016 at 11:41:26 UTC, Kai Nacke wrote:
 On Thursday, 18 February 2016 at 10:45:54 UTC, Márcio Martins 
 wrote:
 I suppose it's a lot easier to address the compilation speed 
 issue in LDC/GDC, than to improve and maintain DMD's backend 
 to the expected levels, right?
 LLVM has about 2.5 million lines of code. I am anything but sure that it is easy to improve compilation speed. Regards, Kai
Sorry for being off topic, but rustc (which uses LLVM) has a parallel codegen compilation mode that trades some optimization for a (major, AFAIK) decrease in compilation time when compiling multiple files. Would it be possible for LDC to offer the same thing without a major rewrite? I'm unfamiliar with the LDC codebase, which is why I ask. Probably worth noting that even with parallel codegen, rustc is still far slower than ldc. reference: https://internals.rust-lang.org/t/default-settings-for-parallel-codegen/519
From time to time I dream about compiling modules in parallel. :-) This needs some investigation, but I think it could be possible to spawn a thread per module you are compiling (after the frontend passes). Never dug deeper into this...
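That is not how LDC is structured internally today, but just to sketch the idea at the driver level: one could run per-module codegen jobs in parallel and link the results in a final, serial step (the module names and flags below are made up):

    import std.parallelism : parallel;
    import std.process : execute;
    import std.stdio : writeln;

    void main()
    {
        auto modules = ["a.d", "b.d", "c.d"];
        // -c: emit one object file per module, no linking yet
        foreach (m; parallel(modules))
        {
            auto r = execute(["ldc2", "-c", m]);
            if (r.status != 0)
                writeln("failed to compile ", m, ":\n", r.output);
        }
        // link the object files serially at the end
        execute(["ldc2", "a.o", "b.o", "c.o", "-of=app"]);
    }

Regards,
Kai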
Feb 18 2016
prev sibling parent reply David Nadlinger <code klickverbot.at> writes:
On Thursday, 18 February 2016 at 11:41:26 UTC, Kai Nacke wrote:
 LLVM has about 2.5 million lines of code. I am anything but sure 
 that it is easy to improve compilation speed.
I think you are a tad too pessimistic here. First, don't forget that there are some big LLVM customers for which low compile times are important too (remember all the buzz from when Clang first hit the scene?). Second, when was the last time you focussed on optimizing LDC -O0 compiler performance? There is currently a lot of low-hanging fruit, and then there are still the more involved options (such as making sure we use FastISel as much as possible).

It might not end up quite as fast as DMD is right now. But imagine that Walter would have invested all the time he spent e.g. on implementing DWARF EH into optimizing the LDC frontend/glue layer/backend pass structure instead. Who knows, we might have an LDC-based compiler today that is faster than the DMD we currently have.

— David
Feb 18 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/18/2016 11:54 AM, David Nadlinger wrote:
 But imagine that Walter
 would have invested all the time he spent e.g. on implementing DWARF EH into
 optimizing the LDC frontend/glue layer/backend pass structure instead. Who
 knows, we might have an LDC-based compiler today that is faster than the DMD we
 currently have.
A big chunk of that was getting D to catch C++ exceptions. And before I did this work, neither GDC nor LDC did, either. It's not a simple matter of just turning it on given Dwarf EH. The point being, a lot of things are not going to happen for D unless I do them. Many of these require changing the front end, back end, and the runtime library in concert. It's a lot easier to make these work when the person working on it understands how all three work. Once they're done, they provide a good guide on how to get it to work with a monumental code base like the gdc and ldc backends are.
Feb 24 2016
next sibling parent Joakim <dlang joakim.fea.st> writes:
On Thursday, 25 February 2016 at 02:58:08 UTC, Walter Bright 
wrote:
 On 2/18/2016 11:54 AM, David Nadlinger wrote:
 But imagine that Walter
 would have invested all the time he spent e.g. on implementing 
 DWARF EH into
 optimizing the LDC frontend/glue layer/backend pass structure 
 instead. Who
 knows, we might have an LDC-based compiler today that is 
 faster than the DMD we
 currently have.
A big chunk of that was getting D to catch C++ exceptions. And before I did this work, neither GDC nor LDC did, either. It's not a simple matter of just turning it on given Dwarf EH. The point being, a lot of things are not going to happen for D unless I do them. Many of these require changing the front end, back end, and the runtime library in concert. It's a lot easier to make these work when the person working on it understands how all three work. Once they're done, they provide a good guide on how to get it to work with a monumental code base like the gdc and ldc backends are.
That's a good argument for keeping your backend. I also like that it will be in D one day, meaning a completely bootstrapped D compiler. :) It would help if you weren't doing other stuff that others could also do, as you've complained about. You should keep a list of tasks online, ones you consider important but that others could reasonably do. That would give them an avenue to take stuff off your plate, freeing you up to work on what you do best.
Feb 25 2016
prev sibling parent reply David Nadlinger <code klickverbot.at> writes:
On Thursday, 25 February 2016 at 02:58:08 UTC, Walter Bright 
wrote:
 A big chunk of that was getting D to catch C++ exceptions. And 
 before I did this work, neither GDC nor LDC did, either. It's 
 not a simple matter of just turning it on given Dwarf EH.
That's beside the point; the C++ interop needed to be worked out either way and is not specific to the DMD backend. In that stupid example I gave, I was referring to the DWARF EH implementation itself, which will have taken you a non-negligible amount of time due to all the barely documented details, unless you are even more of a super-human compiler implementation expert than I already know you are. ;)

Don't get me wrong, I couldn't care less about the details of how long it took whom to implement C++ EH interop (or the fact that it did exist before in LDC/Calypso, and in the form of prototypes for vanilla GDC/LDC, etc.). I'm only playing devil's advocate because many people here make it seem as if there was no cost to supporting multiple compilers, while there most definitely is. This ranges from the blatant duplication of work over PR issues to the fact that big language/compiler features are all but impossible to implement for anybody but you, since you are the only one who knows how to implement them on DMD (and in the current situation, not having them available in DMD would be a deal-breaker).

Sure, the fact that you know all the nitty-gritty details of one backend might make implementing certain changes easier for you, as you pointed out. But the fact that this one backend is obscure compared to the usual suspects, poorly documented and license-encumbered pretty much ensures that you will remain the only person to tackle such projects in the future.

— David
Feb 25 2016
parent rsw0x <anonymous anonymous.com> writes:
On Thursday, 25 February 2016 at 17:57:49 UTC, David Nadlinger 
wrote:
 I'm only playing devil's advocate because many people here make 
 it seem as if there was no cost to supporting multiple 
 compilers, while there most definitely is. This ranges from the 
 blatant duplication of work over PR issues to the fact that big 
 language/compiler features are all but impossible to implement 
 for anybody but you, since you are the only one who knows how 
 to implement them on DMD (and in the current situation, not 
 having them available in DMD would be a deal-breaker).
It would be nice if the DMD frontend was completely uprooted from the DMD backend and put into separate git projects. The frontend should be completely agnostic about which backend it's using, or else it's just more trouble the LDC/GDC developers have to deal with.
Feb 25 2016
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
 even if DMD is the official reference compiler, the download 
 page http://dlang.org/download.html already mentions "strong 
 optimization" as pro of GDC/LDC vs. "very fast compilation 
 speeds" as pro of DMD.

 If we would make GDC or LDC the official compiler then the next 
 question which pops up is about compilation speed....
Yeah. dmd's compilation speed has been a huge win for us and tends to make a very good first impression. And as far as development goes, fast compilation speed matters a lot more than fast binaries. So, assuming that they're compatible enough (which ideally they are but aren't always), I would argue that the best approach would be to use dmd to develop your code and then use gdc or ldc to build the production binary. We benefit by having all of these compilers, and I seriously question whether changing which one is the "official" one is going to help any. It just shifts which set of complaints we get.

Regardless, dmd's backend was written by Walter and is the one he's worked on for something like 25 years. I would be shocked if he were to switch to something else now. And actually, he'd risk legal problems if he did, because he doesn't want anyone to be able to accuse him of taking code from gcc or llvm.

Yes, dmc/dmd has failed to keep up with gcc/gdc and llvm/ldc in terms of optimizations, because there are far fewer people working on it, but it compiles way faster than they do. There are advantages to each, and as long as that's clear, and we treat gdc and ldc as at least semi-official, I think that we're fine. If anything, the problem is probably that the gdc and ldc folks could use more help, but dmd and Phobos suffer from that problem on some level as well, albeit probably not as acutely.

- Jonathan M Davis
Feb 18 2016
next sibling parent reply Kai Nacke <kai redstar.de> writes:
On Thursday, 18 February 2016 at 11:12:57 UTC, Jonathan M Davis 
wrote:
 If anything, the problem is probably that the gdc and ldc folks 
 could use more help, but dmd and Phobos suffer from that 
 problem on some level as well, albeit probably not as acutely.

 - Jonathan M Davis
Yes, participation is a key issue for all compilers and the libraries. It is easy to say that compilation speed of ldc may be fixed. But turning on the profiler and looking for potential improvements is a totally different action. As always I welcome every contribution to ldc. :-) Regards, Kai
Feb 18 2016
parent reply Radu <radu void.null> writes:
On Thursday, 18 February 2016 at 11:47:48 UTC, Kai Nacke wrote:
 On Thursday, 18 February 2016 at 11:12:57 UTC, Jonathan M Davis 
 wrote:
 If anything, the problem is probably that the gdc and ldc 
 folks could use more help, but dmd and Phobos suffer from that 
 problem on some level as well, albeit probably not as acutely.

 - Jonathan M Davis
Yes, participation is a key issue for all compilers and the libraries. It is easy to say that compilation speed of ldc may be fixed. But turning on the profiler and looking for potential improvements is a totally different action. As always I welcome every contribution to ldc. :-) Regards, Kai
As a casual user of the language I see that there is a fragmentation of resources and a waste in this regard with people developing in mainline, then some of you LDC guys catching up.

My simple assumption is that if presumably the dmd backend is not maintained anymore, a lot of the core dmd people can focus on improving whatever problems the frontend or glue layers have. This could only mean that you core LDC guys could focus on llvm backend optimizations (both code gen and performance related). I'm going to assume that those kind of performance optimizations are also constantly done by upstream llvm, so more win here.

Users will not magically turn to contributors if their perception is that there is always going to be a catch-up game to play somewhere. Not to mention that if one wants to get something in LDC, one has to commit it in mainline, which is DMD; you just multiplied the know-how someone needs to have to do some useful work...

And finally, just pointing people to ldc/gdc (always a version or 2 behind, another grief) each time dmd performance is poor looks awfully wrong.
Feb 18 2016
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 12:16:49 UTC, Radu wrote:
 My simple assumption is that if presumably the dmd backend is 
 not maintained anymore, a lot of the core dmd people can focus 
 on improving whatever problems the frontend or glue layers have.
That's what they're already doing. Very little work is done on the backend (which is part of why it doesn't optimize as well as gcc or llvm). Occasionally, work has to be done on the backend, but by and large, the dmd devs are working on the frontend, which benefits gdc and ldc just as much as it does dmd.

Now, that doesn't change the fact that the gdc and ldc guys could use more help, and if the dmd backend were dropped, then presumably some of the work being done on the frontend would be going to the glue layer for either gdc or ldc, but that would further slow down the development of the frontend and not necessarily improve things overall.

Regardless, losing dmd's backend would be a _huge_ loss. Yes, the binaries that it generates are slower, but its faster compilation times are a huge win for developers and can significantly improve the development time of a project. It also has served us very well in impressing and attracting programmers to D. Ultimately, we want all three compilers with all three backends to be well-maintained and usable, because they each have their pros and cons.

- Jonathan M Davis
Feb 18 2016
next sibling parent Radu <radu void.null> writes:
On Thursday, 18 February 2016 at 12:23:18 UTC, Jonathan M Davis 
wrote:
 [...] Very little work is done on the backend (which is part of why it 
 doesn't optimize as well as gcc or llvm). Occasionally, work has to be 
 done on the backend, but by and large, the dmd devs are working on the 
 frontend [...]
Not to contradict what you said, but there is still sizable work eating up time, like Dwarf EH. Probably making LDC part of the release process could make things better for both sides. I can see that, after a while of synced releases, someone will retire DMD for practical reasons. Anecdotal evidence shows that each time D moved to a more open license or a wider participation model, things worked out for the better.
Feb 18 2016
prev sibling parent David Nadlinger <code klickverbot.at> writes:
On Thursday, 18 February 2016 at 12:23:18 UTC, Jonathan M Davis 
wrote:
 if the dmd backend were dropped, […] that would further slow 
 down the development of the frontend and not necessarily 
 improve things overall.
How would that be? — David
Feb 18 2016
prev sibling parent reply Kai Nacke <kai redstar.de> writes:
On Thursday, 18 February 2016 at 12:16:49 UTC, Radu wrote:
 As a casual user of the language I see that there is a 
 fragmentation of resources and a waste in this regard with 
 people developing in mainline, then some of you LDC guys 
 catching up.
As Iain already pointed out, the main problem is (undocumented or weird) AST changes. This makes a merge sometimes painful. This can (and will) get better. This is IMHO the only "waste". Nobody on the LDC team does frontend development; we are all focused on the glue layer.
 My simple assumption is that if presumably the dmd backend is 
 not maintained anymore, a lot of the core dmd people can focus 
 on improving whatever problems the frontend or glue layers have.
As far as I know only Walter (and Daniel I think) work on the backend. This is not "a lot of the core dmd people".
 This could only mean that you core LDC guys could focus on llvm 
 backend optimizations (both code gen and performance related). 
 I'm going to assume that those kind of performance 
 optimizations are also constantly done by upstream llvm, so 
 more win here.
By chance I am an LLVM committer, too. But the LDC team only focuses on getting the glue library and the runtime library right. Adding new useful optimizations is hard work. The people working on it are either researchers or backed by a big company.
 Users will not magically turn to contributors if their 
 perception is that there is always going to be a catch-up game 
 to play somewhere. Not to mention that if one wants to get 
 something in LDC, one has to commit it in mainline, which is 
 DMD, you just multiplied the know-how someone needs to have to 
 do some useful work...
It depends on the feature you want. If you want a new language feature then yes. But then you do not change LDC, you change the language specification and therefore the reference compiler. You can add a lot of features without ever touching DMD frontend code. The sanitizers, for example. Or the not-yet-merged PR for profile-guided optimizations.
 And finally, just pointing people to ldc/gdc (always a version 
 or 2 behind, another grief) each time dmd performance is poor, 
 looks awfully wrong.
I really find this "speed" argument doubtful. My experience is that if you really need performance you must *know* what you are doing. Just picking some code from a web site, compiling it and then complaining that the resulting binary is slower than that of language xy is not a serious approach. For a novice user, LDC can be discouraging: just type ldc2 -help-hidden. But you may need to know about these options to e.g. enable the right auto-vectorizer for your problem. I once wrote an MD5 implementation in pure Java which was substantially faster than the reference implementation in C from RFC 1321 (gcc -O3 compiled). C is not faster than Java if you know Java but not C. The same is true for D. I really like the compiler diversity. What I miss (hint!) is a program to verify the compiler/backend correctness. Just generate a random D program, compile with all 3 compilers and compare the output. IMHO we could find a lot of backend bugs this way. This would help all D compilers. Regards, Kai
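A minimal sketch of such a differential-testing driver could look like this (assuming dmd, ldc2 and gdc are all on PATH; "random_test.d" stands for a file emitted by some hypothetical program generator):

    import std.process : execute;
    import std.stdio : writeln;

    void main()
    {
        auto src = "random_test.d";
        // build the same source with all three compilers
        auto builds = [
            ["dmd", "-oftest_dmd", src],
            ["ldc2", "-of=test_ldc", src],
            ["gdc", "-o", "test_gdc", src],
        ];
        foreach (cmd; builds)
        {
            auto r = execute(cmd);
            if (r.status != 0)
            {
                writeln(cmd[0], " failed to compile:\n", r.output);
                return;
            }
        }
        // run the three binaries and flag any output mismatch
        auto exes = ["./test_dmd", "./test_ldc", "./test_gdc"];
        string expected;
        foreach (i, exe; exes)
        {
            auto r = execute([exe]);
            if (i == 0)
                expected = r.output;
            else if (r.output != expected)
                writeln("mismatch between ", exes[0], " and ", exe);
        }
    }

Regards,
Kai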
Feb 18 2016
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 17:52:10 UTC, Kai Nacke wrote:
 I really like the compiler diversity. What I miss (hint!) is a 
 program to verify the compiler/backend correctness. Just 
 generate a random D program, compile with all 3 compilers and 
 compare the output. IMHO we could find a lot of backend bugs 
 this way. This would help all D compilers.
That would really be cool. - Jonathan M Davis
Feb 18 2016
prev sibling next sibling parent reply rsw0x <anonymous anonymous.com> writes:
On Thursday, 18 February 2016 at 17:52:10 UTC, Kai Nacke wrote:
 I really like the compiler diversity. What I miss (hint!) is a 
 program to verify the compiler/backend correctness. Just 
 generate a random D program, compile with all 3 compilers and 
 compare the output. IMHO we could find a lot of backend bugs 
 this way. This would help all D compilers.

 Regards,
 Kai
Reminds me of csmith: https://embed.cs.utah.edu/csmith/

I believe Brian Schott had worked on something like this for D... Did that ever go anywhere?
Feb 18 2016
parent reply Paul O'Neil <redballoon36 gmail.com> writes:
On 02/18/2016 02:06 PM, rsw0x wrote:
 On Thursday, 18 February 2016 at 17:52:10 UTC, Kai Nacke wrote:
 I really like the compiler diversity. What I miss (hint!) is a program
 to verify the compiler/backend correctness. Just generate a random D
 program, compile with all 3 compilers and compare the output. IMHO we
 could find a lot of backend bugs this way. This would help all D
 compilers.

 Regards,
 Kai
Reminds me of csmith: https://embed.cs.utah.edu/csmith/

I believe Brian Schott had worked on something like this for D... Did that ever go anywhere?
Brian's project is at https://github.com/Hackerpilot/generated . I can't speak to the state of the project, but it hasn't been touched in about a year. -- Paul O'Neil Github / IRC: todayman
Feb 24 2016
parent Brian Schott <briancschott gmail.com> writes:
On Thursday, 25 February 2016 at 02:08:32 UTC, Paul O'Neil wrote:
 On 02/18/2016 02:06 PM, rsw0x wrote:
 I believe Brian Schott had worked on something like this for 
 D... Did that ever go anywhere?
Brian's project is at https://github.com/Hackerpilot/generated . I can't speak to the state of the project, but it hasn't been touched in about a year.
I built that to fuzz test parsers, not code generation or anything else. I can pretty much guarantee that its output should not compile.
Feb 24 2016
prev sibling next sibling parent Radu <radu void.null> writes:
On Thursday, 18 February 2016 at 17:52:10 UTC, Kai Nacke wrote:
 [...]

 As far as I know only Walter (and Daniel I think) work on the 
 backend. This is not "a lot of the core dmd people".

 [...]
I think there are more people involved in DMD in general; you need to count reviewers and all the infrastructure deployed. But even if only 2 of them are involved, having them 100% focused on core D stuff would be a boon.

I see a trend in this discussion:

1. Compiler speed. This is a clear win for DMD, but at the same time LDC doesn't benefit from consistent investment in performance tuning. This obviously is just speculation, but I think the performance gap can be substantially closed with more resources invested here, at least for un-optimized builds.

2. Speed of compiled code. People often suggest that DMD could close the gap here, but I see this as wishful thinking; just listing all the optimizations LLVM does is depressing for anyone wanting to move DMD closer to that. It is just game over in this regard. Plus, who is going to work on them except Walter? Does anyone want to invest in a dubiously licensed backend?

But the story is more complicated :) We are talking here about perception: LLVM is a well-known and respectable backend, and this is a win for people using or wanting to contribute to the language. Also, people forget that DMD is limited in the number of architectures it supports.

My hope is that LDC will be on the same announcement page when a new DMD version is launched. When that happens, common sense will just kill DMD.

Appreciate all the hard work you guys do!
Feb 18 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/18/2016 9:52 AM, Kai Nacke wrote:
 I really like the compiler diversity.
Me too. Having 3 major implementations is a great source of strength for D.
Feb 24 2016
next sibling parent Radu <radu void.null> writes:
On Thursday, 25 February 2016 at 03:05:21 UTC, Walter Bright 
wrote:
 On 2/18/2016 9:52 AM, Kai Nacke wrote:
 I really like the compiler diversity.
Me too. Having 3 major implementations is a great source of strength for D.
This needs to go further; currently there is no up-to-date, high-performance, cross-architecture compiler.

The way I see it, one of the compilers (the best candidate being LDC) should be integrated into the release cycle. I know that LDC is really close to getting to the 2.070 level; if mainline will only be for regressions and bug fixes for a while, there is a good chance LDC could catch up and be part of the daily merge-auto-tester loop.

This would lower the pressure of constantly merging and allow the team to focus on other parts of the LDC compiler.

Is this attainable?
Feb 25 2016
prev sibling parent David Nadlinger <code klickverbot.at> writes:
On Thursday, 25 February 2016 at 03:05:21 UTC, Walter Bright 
wrote:
 On 2/18/2016 9:52 AM, Kai Nacke wrote:
 I really like the compiler diversity.
Me too. Having 3 major implementations is a great source of strength for D.
I like it too. I just think that we can't afford it at this point, and that this is a major impediment to improving the quality of the D ecosystem.

— David
Feb 25 2016
prev sibling next sibling parent reply David Nadlinger <code klickverbot.at> writes:
On Thursday, 18 February 2016 at 11:12:57 UTC, Jonathan M Davis 
wrote:
 And actually, he'd risk legal problems if he did, because he 
 doesn't want anyone to be able to accuse him of taking code 
 from gcc or llvm.
That's a silly strawman, and you should know better than putting that forward as an argument by now. Walter is of course free to do whatever he pleases, and I would totally understand if his reason was just that it's hard to give up something you've worked on for a long time.

But please don't make up arguments trying to rationalize whatever personal decision somebody else made. You could literally copy LLVM source code into your application and sell it as a closed-source product without risking any copyright problems (if you comply with the very modest attribution clause of the license).
 If anything, the problem is probably that the gdc and ldc folks 
 could use more help, but dmd and Phobos suffer from that 
 problem on some level as well, albeit probably not as acutely.
The problem that many of us are seeing is that D development is unnecessarily defocussed by spreading out the effort between three different compilers. Of course, ideally we would have infinite manpower. A "special-case" compiler that boasts lower compile times for x86 development would definitely be nice to have then. But our resources aren't limitless, and as such the question of whether we can afford to maintain such a "nice to have" compiler is very relevant.

Don't get me wrong, I understand that there is an argument to be made for the current situation. And, by the way, let me make very clear that even if I argue that sticking to DMD is a strategic mistake, this is not about personal things. I highly respect Walter as a compiler developer and like him as a person. But perpetuating ill-informed arguments really doesn't do this debate any good.

— David
Feb 18 2016
next sibling parent anonymous <anonymous example.com> writes:
On 18.02.2016 21:24, David Nadlinger wrote:
 But please don't make up arguments trying to rationalize whatever
 personal decision somebody else made. You could literally copy LLVM
 source code into your application and sell it as a closed-source product
 without risking any copyright problems (if you comply with the very
 modest attribution clause of the license).
LLVM's license isn't the supposed problem. DMD's license is. You cannot copy DMD backend code to LLVM. By not contributing to other compilers, Walter stays in the clear in that regard. At least, that's how I understand the argument.
Feb 18 2016
prev sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 20:24:31 UTC, David Nadlinger 
wrote:
 On Thursday, 18 February 2016 at 11:12:57 UTC, Jonathan M Davis 
 wrote:
 And actually, he'd risk legal problems if he did, because he 
 doesn't want anyone to be able to accuse him of taking code 
 from gcc or llvm.
That's a silly strawman, and you should know better than putting that forward as an argument by now. Walter is of course free to do whatever he pleases, and I would totally understand if his reason was just that it's hard to give something up you've worked on for a long time. But please don't make up argument trying to rationalize whatever personal decision somebody else made. You could literally copy LLVM source code into your application and sell it as a closed-source product without risking any copyright problems (if you comply with the very modest attribution clause of the license).
It's not a strawman. Walter has stated previously that he's explicitly avoided looking at the source code for other compilers like gcc, because he doesn't want anyone to be able to accuse him of stealing code, copyright infringement, etc. Now, that's obviously much more of a risk with gcc than llvm given their respective licenses, but it is a position that Walter has taken when the issue has come up, and it's not something that I'm making up.

Now, if Walter were willing to give up on the dmd backend entirely, then presumably, that wouldn't be a problem anymore regardless of license issues, but he still has dmc, which uses the same backend, so I very much doubt that that's going to happen.

- Jonathan M Davis
Feb 18 2016
next sibling parent reply Ola Fosheim Grøstad writes:
On Thursday, 18 February 2016 at 21:30:29 UTC, Jonathan M Davis 
wrote:
 It's not a strawman. Walter has stated previously that he's 
 explicitly avoided looking at the source code for other 
 compilers like gcc, because he doesn't want anyone to be able 
 to accuse him of stealing code, copyright infringement, etc.
Isn't this much more likely to happen if you don't look at the codebase for other compilers? How do you know if someone submitting code isn't just translating from GCC if you haven't looked at GCC? If you have looked at GCC, then you can just choose a different implementation. :-) Anyway, the clean-virgin thing in programming is related to reverse engineering very small codebases where the implementation most likely is going to be very similar (like BIOS). So you have one team writing the spec and another team implementing the spec (with no communication between them).
Feb 18 2016
next sibling parent reply Chris Wright <dhasenan gmail.com> writes:
On Thu, 18 Feb 2016 21:39:45 +0000, Ola Fosheim Grøstad wrote:

 On Thursday, 18 February 2016 at 21:30:29 UTC, Jonathan M Davis wrote:
 It's not a strawman. Walter has stated previously that he's explicitly
 avoided looking at the source code for other compilers like gcc,
 because he doesn't want anyone to be able to accuse him of stealing
 code, copyright infringement, etc.
Isn't this much more likely to happen if you don't look at the codebase for other compilers? How do you know if someone submitting code isn't just translating from GCC if you haven't looked at GCC?
That's the exact opposite of true. With copyright, the fact that you created yours on your own is sufficient defense, assuming the courts agree. If by sheer coincidence you come up with code identical to what's in GCC, but you can show that you didn't take the code from GCC, you're in the clear. Patents, well, you're infringing even if you didn't refer to any other source. But if you did look at another source, especially if you looked in the patent database, you open yourself up to increased damages.
Feb 18 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 18 February 2016 at 22:22:57 UTC, Chris Wright wrote:
 With copyright, the fact that you created yours on your own is 
 sufficient defense, assuming the courts agree. If by sheer 
 coincidence you come up with code identical to what's in GCC, 
 but you can show that you didn't take the code from GCC, you're 
 in the clear.
And how are you going to show that? You can't, because it is widespread.
 Patents, well, you're infringing even if you didn't refer to 
 any other source. But if you did look at another source, 
 especially if you looked in the patent database, you open 
 yourself up to increased damages.
There are no damages for GCC.
Feb 18 2016
parent reply Chris Wright <dhasenan gmail.com> writes:
On Thu, 18 Feb 2016 22:41:46 +0000, Ola Fosheim Grøstad wrote:

 On Thursday, 18 February 2016 at 22:22:57 UTC, Chris Wright wrote:
 With copyright, the fact that you created yours on your own is
 sufficient defense, assuming the courts agree. If by sheer coincidence
 you come up with code identical to what's in GCC, but you can show that
 you didn't take the code from GCC, you're in the clear.
And how are you going to show that? You can't, because it is widespread.
You testify it under oath, and you hope you look honest. You can show a lack of GCC source code on your home computer, possibly.
 Patents, well, you're infringing even if you didn't refer to any other
 source. But if you did look at another source, especially if you looked
 in the patent database, you open yourself up to increased damages.
There are no damages for GCC.
There are damages for patent infringement. There are higher damages for willful infringement. The patent doesn't have to be held by the FSF or a contributor to GCC. There might be a patent troll that sued a third party regarding GCC. And thanks to how software patents generally are, it'd probably be regarding something that most C/C++ compilers need to implement and the most obvious implementation for that feature. If Walter had read the GCC source code from an infringing version after that case came to light, that's the sort of thing that can bring on triple damages. It depends on relative lawyer quality, of course, but it's much harder for the plaintiffs if there's no indication that you've accessed the GCC source code.
Feb 18 2016
parent reply Ola Fosheim Grøstad writes:
On Thursday, 18 February 2016 at 23:42:11 UTC, Chris Wright wrote:
 You testify it under oath, and you hope you look honest. You 
 can show a lack of GCC source code on your home computer, 
 possibly.
If they actually have a strong case it will be highly unlikely that you have arrived at it independently. Of course, all you have to do is to remove the code and FSF will be happy. So if you let it go all the way to the court you can only blame yourself for being pedantic. FSF will only sue over a strong case that carries political weight. A loss in court is a PR disaster for FSF.
 There are damages for patent infringement. There are higher 
 damages for willful infringement.
Iff you use it as a means for production. There is nothing illegal about implementing patented techniques in source code (i.e. describing them) and distributing it.
 regarding GCC. And thanks to how software patents generally 
 are, it'd probably be regarding something that most C/C++ 
 compilers need to implement and the most obvious implementation 
 for that feature.
If that is the case then there will be prior art that predates the patent.
 If Walter had read the GCC source code from an infringing 
 version after that case came to light, that's the sort of thing 
 that can bring on triple damages. It depends on relative lawyer 
 quality, of course, but it's much harder for the plaintiffs if 
 there's no indication that you've accessed the GCC source code.
It should help you, not hurt you, if you learnt about a technique from a widespread codebase from an organization that is known for avoiding patents. If anything that proves that you didn't pick it up from the filed patent and was in good faith? If the case came to light (e.g. you knew about it) and you didn't vet your own codebase then you will be to blame no matter where you got it from? But FSF would make sure they remove patented techniques from GCC so that scenario would be very unlikely. In other words, you are more likely to be hit by a bus when crossing the street. I find this kind of anxiety hysterical to be honest. The only thing I get out of this is that companies shouldn't admit to using open source codebases. Of course, one reason for avoiding reading other people's source code is that you have a client that makes it a requirement.
Feb 18 2016
parent Chris Wright <dhasenan gmail.com> writes:
On Fri, 19 Feb 2016 05:29:20 +0000, Ola Fosheim Grøstad wrote:

 On Thursday, 18 February 2016 at 23:42:11 UTC, Chris Wright wrote:
 There are damages for patent infringement. There are higher damages for
 willful infringement.
Iff you use it as a means for production. There is nothing illegal about implementing patented techniques in source code (i.e. describing them) and distributing it.
That depends on where the patent was filed and where the lawsuit is being executed.
 regarding GCC. And thanks to how software patents generally are, it'd
 probably be regarding something that most C/C++ compilers need to
 implement and the most obvious implementation for that feature.
If that is the case then there will be prior art that predates the patent.
Not if it's for a feature added to a C++ standard after the patent was filed. Not if it's for a feature that modern compilers consider standard but wasn't standard before the patent was created. Not if it's in a jurisdiction that uses first-to-file rather than first-to-invent.
Feb 18 2016
prev sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 21:39:45 UTC, Ola Fosheim 
Grøstad wrote:
 On Thursday, 18 February 2016 at 21:30:29 UTC, Jonathan M Davis 
 wrote:
 It's not a strawman. Walter has stated previously that he's 
 explicitly avoided looking at the source code for other 
 compilers like gcc, because he doesn't want anyone to be able 
 to accuse him of stealing code, copyright infringement, etc.
Isn't this much more likely to happen if you don't look at the codebase for other compilers? How do you know if someone submitting code isn't just translating from GCC if you haven't looked at GCC? If you have looked at GCC, then you can just choose a different implementation. :-) Anyway, the clean-virgin thing in programming is related to reverse engineering very small codebases where the implementation most likely is going to be very similar (like BIOS). So you have one team writing the spec and another team implementing the spec (with no communication between them).
Walter has stated previously that there have been cases of lawyers coming to him about him possibly violating someone else's copyright, and when he tells them that he's never even looked at the source code, that satisfies them. And when the GPL is involved, that paranoia is probably a very good idea.

With the BSD license, and the license that LLVM uses (which is very close to the BSD license), it's nowhere near the same level of issue, since it really only comes down to giving attribution. But we had problems with that at one point with Tango code (which is BSD-licensed), so the dmd and Phobos devs as a whole have avoided even looking at Tango so that we could always, legitimately say that we hadn't looked at it and therefore could not possibly have copied from it. So, this can be a real problem, even if it's just an issue with someone thinking that you should be giving attribution when you're not. And while the LLVM license would definitely allow LLVM code to be mixed into dmd's backend as long as the appropriate attribution was given, I don't know if Symantec would be okay with that or not. The fact that Symantec owns the dmd backend just makes things weird all around.

Regardless, whether Walter is willing to look at LLVM/LDC or work on it at all is up to him. I doubt that he'll choose to based on what he's said previously, but he might. However, I think that it's quite safe to say that GCC/GDC are completely off the table for him, because that's GPL-licensed, and folks definitely get unhappy when they think that you might have copied GPL code, and it's _not_ as simple as giving attribution to be able to take GPL-licensed code and mix it into your own, since it's a copyleft license (and one of the most extreme of them at that).

- Jonathan M Davis
Feb 19 2016
next sibling parent Radu <radu void.null> writes:
On Friday, 19 February 2016 at 09:06:28 UTC, Jonathan M Davis 
wrote:
 [...]
For a change, he might enjoy being the one that doesn't look at/work on the backend, if those legal issues are such a major worry for him.

I remember back when DMD was even more closed-licensed than today, it sat on a different repo, so people would not look at its code by accident.

DMD had its use at the beginning, when only Walter was running the show and things needed to be coded fast: he knew his turf, and that allowed him to implement stuff with ease. But today that argument has little value; see the whole Dwarf EH stuff he had to do... There are plenty of things to do in the language design, frontend, and runtime; his work would greatly improve those parts.
Feb 19 2016
prev sibling parent Ola Fosheim Grøstad writes:
On Friday, 19 February 2016 at 09:06:28 UTC, Jonathan M Davis 
wrote:
 Walter has stated previously that there have been cases of 
 lawyers coming to him about him possibly violating someone 
 else's copyright, and when he tells them that he's never even 
 looked at the source code, that satisfies them. And when the 
 GPL is involved, that paranoia is probably a very good idea.
If FSF lawyers contact you without solid reason then it is newsworthy and should make headlines. So I sincerely doubt that anyone from FSF has done so. Some lawyers are trying to make a living out of acting like manipulative bastards, randomly fishing for a case, hoping you will put something in writing that they can twist. Does not mean they have a leg to stand on, just don't admit anything to them in writing.
 you're not. And while the LLVM license would definitely allow 
 LLVM code to be mixed into dmd's backend as long as the
Well, that would be silly anyway. IMO the better approach would be to create a high-level typed IR and have a clean non-optimizing backend (or JIT), and leave the optimizing backend to LLVM. Basically, clean up the source code and introduce a clean, separate layer between the templating system and codegen. That way more people could work on it. Make it easy to work on one aspect of the compiler without understanding the whole.
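As a very rough sketch of what such a layer could look like (all names here are hypothetical): the frontend lowers everything to a small typed IR, and every backend consumes only that interface, so a simple non-optimizing backend and an LLVM-based one become interchangeable:

    // a hypothetical minimal typed IR boundary between frontend and backends
    enum IRType { i32, i64, f64, ptr }

    struct IRValue { IRType type; size_t id; }

    interface IRBuilder
    {
        IRValue constI32(int v);
        IRValue add(IRValue lhs, IRValue rhs);
        void ret(IRValue v);
    }

    // the frontend would emit code against IRBuilder only; a trivial
    // non-optimizing backend and an LLVM-based backend would each
    // implement it, and the frontend never needs to know which one it got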
 Regardless, whether Walter is willing to look at LLVM/LDC or 
 work on it at all is up to him.
Sure. It is better to have a very simple backend for an experimental/reference compiler. But DMD's backend isn't simpler to understand than LLVM. If it were dead simple and made the compiler easier to understand, then it would be good to have it in.
Feb 19 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/18/2016 1:30 PM, Jonathan M Davis wrote:
 It's not a strawman. Walter has stated previously that he's explicitly avoided
 looking at the source code for other compilers like gcc, because he doesn't
 want anyone to be able to accuse him of stealing code, copyright infringement,
 etc. Now, that's obviously much more of a risk with gcc than llvm given their
 respective licenses, but it is a position that Walter has taken when the issue
 has come up, and it's not something that I'm making up.

 Now, if Walter were willing to give up on the dmd backend entirely, then
 presumably, that wouldn't be a problem anymore regardless of license issues,
 but he still has dmc, which uses the same backend, so I very much doubt that
 that's going to happen.
It's still an issue I worry about. I've been (falsely) accused of stealing code in the past, even once accused of having stolen the old Datalight C compiler from some BYU students. Once a game company stole Empire, and then had the astonishing nerve to sic their lawyers on me accusing me of stealing it from them! (Showing them my registered copyright of the source code that predated their claim by 10 years was entertaining.) More recently this came up in the Tango/Phobos rift, as some of the long term members here will recall. So it is not an issue to be taken too lightly. I have the scars to prove it :-/

One thing I adore about github is it provides a legal audit trail of where the code came from. While that proves nothing about whether contributions are stolen or not, it provides a date stamp (like my registered copyright did), and if stolen code does make its way into the code base, it can be precisely excised. Github takes a great load off my mind.

There are other reasons to have dmd's back end. One obvious one is we wouldn't have had a Win64 port without it. And anytime we wish to experiment with something new in code generation, it's a helluva lot easier to do that with dmd than with the monumental code bases of gcc and llvm.

One thing that has changed a lot in my attitudes is I no longer worry about people stealing my code. If someone can make good use of my stuff, have at it. Boost license FTW!

I wish LLVM would switch to the Boost license, in particular removing this clause:

"Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimers in the documentation and/or other materials provided with the distribution."

Reading it adversely means if I write a simple utility and include a few lines from LLVM, I have to include that license in the binary and a means to print it out. If I include a bit of code from several places, each with their own version of that license, there's just a bunch of crap to deal with to be in compliance.
Feb 25 2016
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright wrote:
 I wish LLVM would switch to the Boost license, in particular 
 removing this clause:

 "Redistributions in binary form must reproduce the above 
 copyright notice, this list of conditions and the following 
 disclaimers in the documentation and/or other materials 
 provided with the distribution."

 Reading it adversely means if I write a simple utility and 
 include a few lines from LLVM, I have to include that license 
 in the binary and a means to print it out. If I include a bit 
 of code from several places, each with their own version of 
 that license, there's just a bunch of crap to deal with to be 
 in compliance.
That's why I tend to encourage folks to use the Boost license rather than the BSD license when it comes up (LLVM isn't BSD-licensed, but its license is very similar). While source attribution makes sense, I just don't want to deal with binary attribution in anything I write. It does make some sense when you don't want someone to be able to claim that they didn't use your code (even if you're not looking to require that they open everything up like the GPL does), but for the most part, I just don't think that that's worth it - though it is kind of cool that some commercial stuff (like the PS4) is using BSD-licensed code, and we know it, because they're forced to give attribution with their binaries. - Jonathan M Davis
Feb 25 2016
prev sibling next sibling parent reply Radu <radu void.null> writes:
On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright wrote:
 [...]
Please don't get me wrong, we all appreciate what you offered to the D community, but all these legal arguments are strongly tied to you, and less so to the community.

Your LLVM license nitpick is hilarious; you can't do that when the "official" D compiler has a non-liberally licensed backend, you just can't.

Speaking of which, I think realistically DMD's backend will generally have ~1 major contributor; I think you can guess who that is.

But setting things aside, we all need to acknowledge that the current setup is not fair to motivated and proven third party compilers, their contributors, and their users.

The D ecosystem must create and foster a friendly environment for anyone wanting to have a good compiler that is current with the language/runtime/phobos developments. I'm not seeing you, or Andrei, exploring and encouraging this actively; what I see is a defensive approach centered on DMD's merits.
Feb 26 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2016 1:47 AM, Radu wrote:
 Please don't get me wrong, we all appreciate what you offered to the D
community,
 but all these legal arguments are strongly tied to you, and less so to the
 community.
Didn't Google get hung out to dry over 6 lines of Java code or something like that? And I don't know how long you've been around here, but we DID have precisely these sorts of problems during the Phobos/Tango rift. Ignoring licensing issues can have ugly consequences.
 Your LLVM license nitpick is hilarious; you can't do that when the "official" D
 compiler has a non-liberally licensed backend, you just can't.
That's not under my control, and is one of the reasons why D gravitated towards the Boost license for everything we could.
 But setting things aside, we all need to acknowledge that the current setup is
 not fair to motivated and proven third party compilers, their contributors, and
 their users.
I don't see anything unfair. gdc, ldc, and dmd are each as good as their respective teams make them.
 The D ecosystem must create and foster a friendly environment for anyone wanting
 to have a good compiler that is current with the language/runtime/phobos
 developments.
And that's what we do. It's why we have 3 major compilers.
Feb 26 2016
parent reply Radu <radu void.null> writes:
On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright wrote:
 On 2/26/2016 1:47 AM, Radu wrote:
 Please don't get me wrong, we all appreciate what you offered
 to the D community,
 but all these legal arguments are strongly tied to you, and 
 less so to the
 community.
Didn't Google get hung out to dry over 6 lines of Java code or something like that? And I don't know how long you've been around here, but we DID have precisely these sorts of problems during the Phobos/Tango rift. Ignoring licensing issues can have ugly consequences.
I've been around here since 2004, not as vocal as I am now, but yes, I remember those ugly times. Due diligence is mandatory when dealing with software licenses, agreed, but we can't extrapolate from your experience with the backend to whatever is used in LDC or any other compiler. I'm sure that in this regard LDC is not in peril.
 Your LLVM license nitpick is hilarious; you can't do that
 when the "official" D
 compiler has a non-liberally licensed backend, you just can't.
That's not under my control, and is one of the reasons why D gravitated towards the Boost license for everything we could.
Yes, agreed, Boost FTW, but that still doesn't solve the backend issue.
 But setting things aside, we all need to acknowledge that the 
 current setup is
 not fair to motivated and proven third party compilers, their 
 contributors, and
 their users.
I don't see anything unfair. gdc, ldc, and dmd are each as good as their respective teams make them.
The lack of fairness comes from the way the ecosystem is set up: you have the reference compiler released, then everybody needs to catch up with it. Why not have the others be part of the official release? This would undoubtedly increase the quality of the frontend and the glue layer, and probably the runtime, just because they would be tested on more architectures each release.

No matter how you put it, both LDC and GDC are limited in manpower, and also caught in the merge game with mainline. This is a bottleneck if they need to attract more talent. Right off the bat you need to do a lot of grunt work handling different repos, each at their own revision, plus all the knowledge about the build and testing environments.
 The D ecosystem must create and foster a friendly environment
 for anyone wanting
 to have a good compiler that is current with the 
 language/runtime/phobos
 developments.
And that's what we do. It's why we have 3 major compilers.
See above; just having 3 compilers (could be 5 for that matter) is not enough. We would be better off with just one that works great, but if that is not possible, at least give me the option to use the latest and greatest D on my Linux embedded ARM boards.
Feb 26 2016
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 2/26/16 7:02 AM, Radu wrote:
 On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright wrote:
 I don't see anything unfair. gdc, ldc, and dmd are each as good as
 their respective teams make them.
The lack of fairness comes from the way the ecosystem is set up: you have the reference compiler released, then everybody needs to catch up with it. Why not have the others be part of the official release? [...]
The issue here is the front-end not the back end. Daniel has already stated this was a goal (to make the front end shared code). So it will happen (I think Daniel has a pretty good record of following through, we do have a D-based front end now after all). Any effort to make both LDC and GDC part of the "official" release would be artificial -- instead of LDC and GDC getting released "faster", they would simply hold up dmd's release until they caught up. And this is probably more pressure than their developers need. When the front end is shared, then the releases will be quicker, and you can be happier with it. -Steve
Feb 26 2016
parent reply Radu <radu void.null> writes:
On Friday, 26 February 2016 at 13:11:11 UTC, Steven Schveighoffer 
wrote:
 On 2/26/16 7:02 AM, Radu wrote:
 [...]
The issue here is the front-end not the back end. Daniel has already stated this was a goal (to make the front end shared code). So it will happen (I think Daniel has a pretty good record of following through, we do have a D-based front end now after all). Any effort to make both LDC and GDC part of the "official" release would be artificial -- instead of LDC and GDC getting released "faster", they would simply hold up dmd's release until they caught up. And this is probably more pressure than their developers need. When the front end is shared, then the releases will be quicker, and you can be happier with it. -Steve
OK, a shared front end will be great!

My main concern is that if they are not integrated within the daily pull-merge-auto-test loop, they will always tend to drift and get out of sync while trying to fix stuff that breaks. If the author of a pull request gets automatic feedback from DMD and LDC on his changes' test results, then he will be aware of potential problems he might create.

The integration doesn't necessarily need to be tightly coupled, i.e. LDC can keep its infrastructure and auto-sync/run any merges from mainline. The issue is what to do with breaking changes. Ideally, no breakage should be allowed when fixing regressions or bugs, and any breaking change to the front-end or glue layers should at least be discussed with the LDC/GDC guys.

All of the above needs steering from the leadership to follow through. And BTW, I'm happy with what D has become :), always room for improvements, thank you!
Feb 26 2016
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 2/26/16 9:26 AM, Radu wrote:
 On Friday, 26 February 2016 at 13:11:11 UTC, Steven Schveighoffer wrote:
 [...]
OK, a shared front end will be great! My main concern is that if they are not integrated within the daily pull-merge-auto-test loop, they will always tend to drift and get out of sync while trying to fix stuff that breaks.
I think the intention is to make all of the compilers supported with some reasonable form of CI (not sure if all PRs would be tested this way, because that may be too much of a burden on the test servers). The idea is that ldc and gdc will get plenty of warning if something breaks. -Steve
Feb 26 2016
parent David Nadlinger <code klickverbot.at> writes:
On Friday, 26 February 2016 at 18:19:57 UTC, Steven Schveighoffer 
wrote:
 The idea is that ldc and gdc will get plenty of warning if 
 something breaks.
As stated, this in itself would be utterly useless. Right now, you can be absolutely certain that the AST semantics will change in between each DMD release. Sometimes in obvious ways because fields are removed and so on, but much more often silently and in a hard-to-track-down fashion because the structure of the AST or the interpretation of certain node properties changes. In other words, we don't need any warning that something breaks, because we already know it will. The people that need the warning are the authors of the breaking front-end commits, so that they can properly document the changes and make sure they are acceptable for the other backends (right now, you typically have to reverse-engineer that from the DMD glue layer changes). Ideally, of course, no such changes would be merged without making sure that all the backends have already been adapted for them first. — David
Feb 26 2016
prev sibling next sibling parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 2016-02-25 at 22:19 -0800, Walter Bright via Digitalmars-d
wrote:
[…]
 One thing I adore about github is it provides a legal audit trail of
 where the code came from. While that proves nothing about whether
 contributions are stolen or not, it provides a date stamp (like my
 registered copyright did), and if stolen code does make its way into
 the code base, it can be precisely excised. Github takes a great load
 off my mind.
[…]

Has there been case law in the USA that gives a Git log official status
as a record of history? I haven't done a detailed search here, but I am
not aware of any case law in the UK on this. Other jurisdictions will
have their own rules obviously.

-- 
Russel.
Feb 26 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2016 2:41 AM, Russel Winder via Digitalmars-d wrote:
 Has there been case law in the USA that gives a Git log official status
 as a record of history? I haven't done a detailed search here, but I am
 not aware of any case law in the UK on this. Other jurisdictions will
 have their own rules obviously.
I'm not aware of any, either, that is specific to github. But given how digital records in general (such as email, social media posts, etc.) are routinely accepted as evidence, I'd be very surprised if github wasn't.
Feb 26 2016
parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 2016-02-26 at 02:52 -0800, Walter Bright via Digitalmars-d
wrote:
 […]
 I'm not aware of any, either, that is specific to github. But given
 how digital records in general (such as email, social media posts,
 etc.) are routinely accepted as evidence, I'd be very surprised if
 github wasn't.

Be careful about making assumptions of admissibility as evidence. I have
been an expert witness in three cases regarding email logs and it is not
always so simple to have them treated as a matter of record. Of course
the USA is not the UK; rules and history are different in every
jurisdiction, and the USA has more than one!

-- 
Russel.
Feb 26 2016
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 2/26/16 6:04 AM, Russel Winder via Digitalmars-d wrote:
 On Fri, 2016-02-26 at 02:52 -0800, Walter Bright via Digitalmars-d
 wrote:
 […]
 I'm not aware of any, either, that is specific to github. But given
 how digital
 records in general (such as email, social media posts, etc.) are
 routinely
 accepted as evidence, I'd be very surprised if github wasn't.
Be careful about making assumptions of admissibility as evidence. I have been an expert witness in three cases regarding email logs and it is not always so simple to have them treated as a matter of record. Of course the USA is not the UK; rules and history are different in every jurisdiction, and the USA has more than one!
I think it's much stronger when the email/logs are maintained by a disinterested third party. For example, I'd say emails that were maintained on a private server by one of the parties in the case would be less reliable than logs stored on yahoo's servers that neither party has access to. There would also be no shortage of witnesses "Yes, I remember the day Walter added feature x, and github's logs are correct". I think Walter is on solid ground there. -Steve
Feb 26 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2016 5:15 AM, Steven Schveighoffer wrote:
 I think it's much stronger when the email/logs are maintained by a
disinterested
 third party.

 For example, I'd say emails that were maintained on a private server by one of
 the parties in the case would be less reliable than logs stored on yahoo's
 servers that neither party has access to.

 There would also be no shortage of witnesses "Yes, I remember the day Walter
 added feature x, and github's logs are correct".

 I think Walter is on solid ground there.

 -Steve
Not only that, everyone who has cloned the github repository has their own copy of it. It's a distributed repository, not a single-sourced one. I also keep the email logs I get from it. It's a thousand times better than producing a date stamp from a file on my backup hard disk.
Feb 26 2016
prev sibling parent reply BBasile <b2.temp gmx.com> writes:
On Friday, 26 February 2016 at 10:41:31 UTC, Russel Winder wrote:
 [...]

 Has there been case law in the USA that gives a Git log official
 status as a record of history? I haven't done a detailed search here,
 but I am not aware of any case law in the UK on this. Other
 jurisdictions will have their own rules obviously.
BTW, malicious people can cheat and commit in the past; see https://github.com/gelstudios/gitfiti. The commit date is not reliable.
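For example, git records whatever dates it is told; illustrative commands with a made-up date:

git commit --date="2005-04-07T12:00:00" -m "looks ten years old"
# the committer date can be forged just as easily:
GIT_COMMITTER_DATE="2005-04-07T12:00:00" git commit --amend --no-edit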
Feb 26 2016
parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 2016-02-26 at 11:12 +0000, BBasile via Digitalmars-d wrote:
 […]
 BTW, malicious people can cheat and commit in the past; see
 https://github.com/gelstudios/gitfiti. The commit date is not
 reliable.

Indeed, which is why Mercurial is a much better system, though it is far
from perfect.

-- 
Russel.
Feb 26 2016
parent reply David Nadlinger <code klickverbot.at> writes:
On Friday, 26 February 2016 at 11:50:27 UTC, Russel Winder wrote:
 [...]
Indeed, which is why Mercurial is a much better system, though it is far from perfect.
"hg commit" knows the "--date" option just as well. Can we please keep this out of here? — David
Feb 26 2016
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 02/26/2016 09:50 AM, David Nadlinger wrote:
 Can we please keep this out of here?
Thank you!! -- Andrei
Feb 26 2016
prev sibling next sibling parent reply Dibyendu Majumdar <d.majumdar gmail.com> writes:
On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright wrote:
 I wish LLVM would switch to the Boost license, in particular 
 removing this clause:

 "Redistributions in binary form must reproduce the above 
 copyright notice, this list of conditions and the following 
 disclaimers in the documentation and/or other materials 
 provided with the distribution."

 Reading it adversely means if I write a simple utility and 
 include a few lines from LLVM, I have to include that license 
 in the binary and a means to print it out. If I include a bit 
 of code from several places, each with their own version of 
 that license, there's just a bunch of crap to deal with to be 
 in compliance.
Hi Walter,

I recall there was a thread in the LLVM mailing list last year about moving to a different license. So maybe that is on the cards, and the D community could chip in on that conversation.

I feel that by moving to an LLVM backend, D would gain the help and expertise of the large number of companies that are working on LLVM, including Microsoft and Google.

Isn't Clang's claim that it is much faster than gcc when it comes to compiling? Presumably a lot of the cost in C++ compilation is in the front-end, and the same issues won't arise with D, so the speed of compilation using LLVM may not be such an issue.

In any case, with scarce resources it seems wasteful to have people working on multiple backends; it would make more sense to converge on one backend, and LLVM, being non-GPL and having a lot of momentum, may be the best option.

I also feel that a lot of the C++ interfacing could be done by using the Clang libraries, again for similar reasons: you gain from work already being done.

Regards
Dibyendu
Feb 26 2016
parent reply Dibyendu Majumdar <d.majumdar gmail.com> writes:
On Friday, 26 February 2016 at 11:35:04 UTC, Dibyendu Majumdar 
wrote:
 On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright 
 wrote:
 [...]
I recall there was a thread in the LLVM mailing list last year about moving to a different license. So maybe that is on the cards, and the D community could chip in on that conversation.
I am referring to this thread: http://lists.llvm.org/pipermail/llvm-dev/2015-October/091536.html
Feb 26 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2016 3:45 AM, Dibyendu Majumdar wrote:
 On Friday, 26 February 2016 at 11:35:04 UTC, Dibyendu Majumdar wrote:
 On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright wrote:
 [...]
I recall there was a thread in the LLVM mailing list last year about moving to a different license. So maybe that is on the cards, and the D community could chip in on that conversation.
I am referring to this thread: http://lists.llvm.org/pipermail/llvm-dev/2015-October/091536.html
Thanks for the pointer. If anyone wants to chip in on that thread, feel free!
Feb 26 2016
parent reply Dibyendu Majumdar <d.majumdar gmail.com> writes:
On Friday, 26 February 2016 at 22:20:09 UTC, Walter Bright wrote:
 I am referring to this thread:

 http://lists.llvm.org/pipermail/llvm-dev/2015-October/091536.html
Thanks for the pointer. If anyone wants to chip in on that thread, feel free!
Hi Walter,

Should LLVM move to an Apache license, would that help in migrating to an LLVM backend as the standard backend?

Regards
Dibyendu
Feb 28 2016
parent asdf <a b.c> writes:
On Sunday, 28 February 2016 at 12:59:01 UTC, Dibyendu Majumdar 
wrote:
 Should LLVM move to an Apache license, would that help in
 migrating to an LLVM backend as the standard backend?

 Regards
 Dibyendu
LLVM is great, but you wouldn't want to be locked down to only one backend, probably. LLVM does have good support for a variety of architectures, though... A bytecode generator might be good for bootstrapping (after the nuclear apocalypse), but everyone just cross-compiles.
Feb 28 2016
prev sibling parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 26/02/2016 06:19, Walter Bright wrote:
 I wish LLVM would switch to the Boost license, in particular removing
 this clause:

 "Redistributions in binary form must reproduce the above copyright
 notice, this list of conditions and the following disclaimers in the
 documentation and/or other materials provided with the distribution."

 Reading it adversely means if I write a simple utility and include a few
 lines from LLVM, I have to include that license in the binary and a
 means to print it out. If I include a bit of code from several places,
 each with their own version of that license, there's just a bunch of
 crap to deal with to be in compliance.
Then add the license info to a "readme" or "copyright" file. Is that really such a hassle? It seems a trivial task to me. For example: https://github.com/rust-lang/rust/blob/master/COPYRIGHT (that file is included in the binary distributions) -- Bruno Medeiros https://twitter.com/brunodomedeiros
Mar 01 2016
prev sibling parent karabuta <karabutaworld gmail.com> writes:
On Thursday, 18 February 2016 at 11:12:57 UTC, Jonathan M Davis 
wrote:
 On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
 even if DMD is the official reference compiler, the download 
 page http://dlang.org/download.html already mentions "strong 
 optimization" as pro of GDC/LDC vs. "very fast compilation 
 speeds" as pro of DMD.

 If we would make GDC or LDC the official compiler then the 
 next question which pops up is about compilation speed....
Yeah. dmd's compilation speed has been a huge win for us and tends to make a very good first impression. And as far as development goes, fast compilation speed matters a lot more than fast binaries. So, assuming that they're compatible enough (which ideally they are but aren't always), I would argue that the best approach would be to use dmd to develop your code and then use gdc or ldc to build the production binary. We benefit by having all of these compilers, and I seriously question that changing which one is the "official" one is going to help any. It just shifts which set of complaints we get. - Jonathan M Davis
Yep, fast compilation during development must not be sacrificed for fast binaries; why would you need fast binaries during development anyway? However, I strongly agree with cleaning up the language instead of adding more features.
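In practice that develop-with-dmd, ship-with-ldc workflow is one flag away; a sketch assuming a dub-managed project:

dub build                              # day-to-day: dmd, fast edit-compile cycle
dub build -b release --compiler=ldc2   # shipping: optimized LDC binary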
Feb 25 2016
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
 If we would make GDC or LDC the official compiler then the next 
 question which pops up is about compilation speed....
ldc is still significantly faster than clang, or gdc than gcc. I don't think this is that much of a valid concern, especially for smaller programs.
Feb 25 2016
parent reply rsw0x <anonymous anonymous.com> writes:
On Thursday, 25 February 2016 at 19:25:38 UTC, deadalnix wrote:
 On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
 If we would make GDC or LDC the official compiler then the 
 next question which pops up is about compilation speed....
ldc is still significantly faster than clang, or gdc than gcc. I don't think this is that much of a valid concern, especially for smaller programs.
For larger programs, LDC with single-file compilation outdoes DMD by a large factor on any recent multi-core CPU for both debug and release builds in my tests. DMD did not scale across cores anywhere near as well as LDC. OTOH, it does not benefit from singleobj this way when doing release builds.
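A sketch of the two modes being compared, with hypothetical module names:

# one object per module; the independent commands parallelize across cores:
ldc2 -c -O a.d &
ldc2 -c -O b.d &
wait
ldc2 a.o b.o -of=app

# one object for everything; enables cross-module optimization but
# serializes the backend work:
ldc2 -O -singleobj a.d b.d -of=app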
Feb 25 2016
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 02/25/2016 02:55 PM, rsw0x wrote:
 On Thursday, 25 February 2016 at 19:25:38 UTC, deadalnix wrote:
 On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
 If we would make GDC or LDC the official compiler then the next
 question which pops up is about compilation speed....
ldc is still significantly faster than clang, or gdc than gcc. I don't think this is that much of a valid concern, especially for smaller programs.
For larger programs, LDC with single-file compilation outdoes DMD by a large factor on any recent multi-core CPU for both debug and release builds in my tests. DMD did not scale across cores anywhere near as well as LDC. OTOH, it does not benefit from singleobj this way when doing release builds.
Good to know, thanks! -- Andrei
Feb 25 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2016 1:50 PM, Andrei Alexandrescu wrote:
 Good to know, thanks! -- Andrei
DMD did slow down because it was now being compiled by DMD instead of g++. Also, dmd was doing multithreaded file I/O, but that was removed because speed didn't matter <grrrr>.

As I said, keeping the compiler speed up is a constant battle. Currently, dmd makes zero use of multicore. I didn't know that ldc did.
Feb 25 2016
next sibling parent reply David Nadlinger <code klickverbot.at> writes:
On Thursday, 25 February 2016 at 22:03:47 UTC, Walter Bright 
wrote:
 DMD did slow down because it was now being compiled by DMD 
 instead of g++.
You can compile it using LDC just fine now. ;)
 Also, dmd was doing multithreaded file I/O, but that was 
 removed because speed didn't matter <grrrr>.
Did we ever have any numbers showing that this in particular produced a tangible performance benefit (even a single barnacle)?
 As I said, keeping the compiler speed up is a constant battle.
And this leaves me wondering why nobody ever wrote a comprehensive compiler performance tracking tool. There is Dash, my half-finished CI-style project (that a couple of people were interested in picking up after DConf, but which never really happened), and Vladimir's quite limited TrenD adaptation of Mozilla's areweslimyet, but nobody really came up with something that would be part of our day-to-day development workflow.
 Currently, dmd makes zero use of multicore. I didn't know that
 ldc did.
LDC doesn't do so either. I think what rsw0x referred to is doing a normal "C-style" parallel compilation of several compilation units. I'm not sure why this couldn't also be done with DMD, though. — David
Feb 25 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2016 3:06 PM, David Nadlinger wrote:
 On Thursday, 25 February 2016 at 22:03:47 UTC, Walter Bright wrote:
 DMD did slow down because it was now being compiled by DMD instead of g++.
You can compile it using LDC just fine now. ;)
I think we should ask Martin to set that up for the release builds.
 Also, dmd was doing multithreaded file I/O, but that was removed because speed
 didn't matter <grrrr>.
Did we ever have any numbers showing that this in particular produced a tangible performance benefit (even a single barnacle)?
On a machine with local disk and running nothing else, no speedup. With a slow filesystem, like an external, network, or cloud (!) drive, yes. I would also expect it to speed up when the machine is running a lot of other stuff.
 LDC doesn't do so either. I think what rsw0x referred to is doing a normal
 "C-style" parallel compilation of several compilation unit. I'm not sure why
 this couldn't also be done with DMD, though.
-j should work just fine with dmd. There's a lot internal to the compiler that can be parallelized - just about everything but the semantic analysis.
Feb 25 2016
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Feb 25, 2016 at 02:03:47PM -0800, Walter Bright via Digitalmars-d wrote:
 On 2/25/2016 1:50 PM, Andrei Alexandrescu wrote:
Good to know, thanks! -- Andrei
DMD did slow down because it was now being compiled by DMD instead of g++. Also, dmd was doing multithreaded file I/O, but that was removed because speed didn't matter <grrrr>.
[...] I remember you did a bunch of stuff to the optimizer after the switchover to self-hosting; how much of a difference did that make? Are there any low-hanging fruit left that could make dmd faster?

On a related note, I discovered an O(n^2) algorithm in the front-end... it's unlikely to be an actual bottleneck in practice (basically it's quadratic in the number of fields in an aggregate), though you never know. It actually does a full n^2 iterations, and seemed like it could be at least pared down to n(n+1)/2, even without doing better than O(n^2).

T

-- 
What is Matter, what is Mind? Never Mind, it doesn't Matter.
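The shape of that fix, as a hypothetical illustration of the loop structure rather than the actual front-end code:

// Full n^2 pass over all ordered pairs of fields:
void checkAllPairs(Field)(Field[] fields, void delegate(Field, Field) check)
{
    foreach (i; 0 .. fields.length)
        foreach (j; 0 .. fields.length)
            check(fields[i], fields[j]);
}

// If check() is symmetric, unordered pairs (including i == j) suffice:
// n(n+1)/2 iterations instead of n^2.
void checkUnorderedPairs(Field)(Field[] fields, void delegate(Field, Field) check)
{
    foreach (i; 0 .. fields.length)
        foreach (j; i .. fields.length)
            check(fields[i], fields[j]);
}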
Feb 25 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2016 3:06 PM, H. S. Teoh via Digitalmars-d wrote:
 I remember you did a bunch of stuff to the optimizer after the
 switchover to self-hosting; how much of a difference did that make? Are
 there any low-hanging fruit left that could make dmd faster?
There's a lot of low hanging fruit in dmd. In particular, far too many templates are instantiated over and over. The data structures need to be looked at, and the excessive memory consumption also slows things down.
 On a related note, I discovered an O(n^2) algorithm in the front-end...
 it's unlikely to be an actual bottleneck in practice (basically it's
 quadratic in the number of fields in an aggregate), though you never
 know. It actually does a full n^2 iterations, and seemed like it could
 be at least pared down to n(n+1)/2, even without doing better than
 O(n^2).
Please add a comment to the source code about this and put it in a PR.
Feb 25 2016
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, 26 February 2016 at 00:56:22 UTC, Walter Bright wrote:
 On 2/25/2016 3:06 PM, H. S. Teoh via Digitalmars-d wrote:
 I remember you did a bunch of stuff to the optimizer after the
 switchover to self-hosting; how much of a difference did that 
 make? Are
 there any low-hanging fruit left that could make dmd faster?
There's a lot of low hanging fruit in dmd. In particular, far too many templates are instantiated over and over.
LOL. That would be an understatement. IIRC, at one point, Don figured out that we were instantiating _millions_ of templates for the std.algorithm unit tests. The number of templates used in template constraints alone is likely through the roof. Imagine how many times something like isInputRange!string gets compiled in your typical program. With how template-heavy range-based code is, almost anything we can do to speed up the compiler with regards to templates is likely to pay off.

- Jonathan M Davis
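For a sense of scale, a small illustration (the function is hypothetical; the constraint is the real one from std.range.primitives): every constrained range function of this very common shape instantiates its constraint once per distinct argument type, and large programs contain thousands of such call sites.

import std.range.primitives;

// Each distinct R reaching this constraint instantiates isInputRange!R.
// Multiply by the number of constrained functions across all modules
// and the instantiation counts climb very quickly.
auto firstElement(R)(R r) if (isInputRange!R)
{
    return r.front;
}

unittest
{
    assert(firstElement("hello") == 'h'); // instantiates isInputRange!string
    assert(firstElement([1, 2, 3]) == 1); // ...and isInputRange!(int[])
}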
Feb 25 2016
prev sibling parent reply David Nadlinger <code klickverbot.at> writes:
On Thursday, 25 February 2016 at 23:06:43 UTC, H. S. Teoh wrote:
 Are there any low-hanging fruit left that could make dmd faster?
A big one would be overhauling the template mangling scheme so it does not generate mangled names a few hundred kilobytes (!) in size anymore for code that uses templates and voldemort types. For an example, see http://forum.dlang.org/post/n96k3g$ka5$1 digitalmars.com, although the problem can get much worse in big code bases. I've seen just the handling of the mangle strings (generation, ...) making up a significant part of the time profile. — David
Feb 26 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 02/26/2016 10:38 AM, David Nadlinger wrote:
 On Thursday, 25 February 2016 at 23:06:43 UTC, H. S. Teoh wrote:
 Are there any low-hanging fruit left that could make dmd faster?
A big one would be overhauling the template mangling scheme so it does not generate mangled names a few hundred kilo (!) bytes in size anymore for code that uses templates and voldemort types.
My understanding is the main problem is the _same_ templates are repeatedly instantiated with the same exact parameters - the epitome of redundant work. -- Andrei
Feb 26 2016
next sibling parent reply David Nadlinger <code klickverbot.at> writes:
On Friday, 26 February 2016 at 18:53:21 UTC, Andrei Alexandrescu 
wrote:
 My understanding is the main problem is the _same_ templates 
 are repeatedly instantiated with the same exact parameters - 
 the epitome of redundant work. -- Andrei
Within one compiler execution, there might be some optimization potential in the way semantically equivalent template instantiations are merged, yes – it's been a while since I have looked at the related code (see e.g. TemplateDeclaration.findExistingInstance). Another area matching your description would be that of the same template being instantiated from multiple compilation units, where it can be omitted from some of the compilation units (i.e. object files). Our current logic for that is broken anyway, see e.g. https://issues.dlang.org/show_bug.cgi?id=15318. I was referring to something different in my post, though, as the question concerned "low-hanging fruit". The problem there is really just that template names sometimes grow unreasonably long pretty quickly. As an example, without wanting to divulge any internals, some of the mangled symbols (!) in the Weka codebase are several hundred kilobytes in size. core.demangle gives up on them anyway, and they appear to be extremely repetitive. Note that just like in Steven's post which I linked earlier, the code in question does not involve any crazy recursive meta-templates, but IIRC makes use of Voldemort types. Tracking down and fixing this – one would almost be tempted to just use standard data compression – would lead to a noticeable decrease in compile and link times for affected code. — David
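A minimal sketch of the effect, assuming nothing beyond std.range and std.algorithm: each stage's voldemort-style return type embeds the previous stage's, so the mangle grows with every layer of wrapping.

import std.algorithm : filter, map;
import std.range : iota;

auto stage1() { return iota(0, 100).map!(a => a + 1); }
auto stage2() { return stage1().filter!(a => a % 2 == 0); }
auto stage3() { return stage2().map!(a => a * a); }

// The mangle of stage3's return type already spells out stage2 and
// stage1 in full; keep layering calls like this and symbol sizes
// balloon, which is the repetitiveness described above.
pragma(msg, typeof(stage3()).mangleof.length);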
Feb 26 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2016 11:17 AM, David Nadlinger wrote:
 I was referring to something different in my post, though, as the question
 concerned "low-hanging fruit". The problem there is really just that template
 names sometimes grow unreasonably long pretty quickly. As an example, without
 wanting to divulge any internals, some of the mangled symbols (!) in the Weka
 codebase are several hundred kilobytes in size. core.demangle gives up on them
 anyway, and they appear to be extremely repetitive. Note that just like in
 Steven's post which I linked earlier, the code in question does not involve any
 crazy recursive meta-templates, but IIRC makes use of Voldemort types. Tracking
 down and fixing this – one would almost be tempted to just use standard data
 compression – would lead to a noticeable decrease in compile and link times
for
 affected code.
A simple solution is to just use lz77 compression on the strings. This is used for Win32 and works well. (I had a PR to put that in Phobos, but it was rejected.) https://www.digitalmars.com/sargon/lz77.html As a snide aside, the mangling schemes used by Microsoft and g++ have a built-in compression scheme, but they are overly complex and produce lousy results. Lz77 is simpler and far more effective :-) An alternative is to generate an SHA hash of the name, which will be unique, but the downside is it is not reversible and so cannot be demangled.
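A sketch of the hash variant using Phobos; the symbol prefix below is made up, and this only illustrates the trade-off, not a finished mangling scheme:

import std.digest : toHexString;
import std.digest.sha : sha256Of;

// Collapse an arbitrarily long mangle into a fixed-size symbol.
// Collisions are astronomically unlikely, but the mapping is one-way:
// core.demangle cannot recover the original name from the hash.
string hashMangle(const(char)[] mangle)
{
    return "_D_hashed_" ~ toHexString(sha256Of(mangle))[].idup;
}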
Feb 26 2016
prev sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Feb 26, 2016 at 01:53:21PM -0500, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 02/26/2016 10:38 AM, David Nadlinger wrote:
On Thursday, 25 February 2016 at 23:06:43 UTC, H. S. Teoh wrote:
Are there any low-hanging fruit left that could make dmd faster?
A big one would be overhauling the template mangling scheme so it does not generate mangled names a few hundred kilo (!) bytes in size anymore for code that uses templates and voldemort types.
My understanding is the main problem is the _same_ templates are repeatedly instantiated with the same exact parameters - the epitome of redundant work.
[...] I must be missing something, but why can't we use the obvious solution of a hash table to track previous instantiations?

T

-- 
There are two ways to write error-free programs; only the third one works.
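That is roughly what TemplateDeclaration.findExistingInstance (mentioned elsewhere in the thread) is for; a toy sketch of the idea, with made-up type names:

// Memoize instantiations keyed by the mangled template arguments.
struct TemplateDecl {}
struct Instance {}

Instance*[string] cache;

Instance* instantiate(TemplateDecl* decl, string mangledArgs)
{
    if (auto hit = mangledArgs in cache)
        return *hit;              // reuse the earlier instantiation
    auto inst = new Instance;     // stands in for the expensive semantic
                                  // analysis of the instantiated body
    cache[mangledArgs] = inst;
    return inst;
}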
Feb 26 2016
prev sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 25 Feb 2016 11:05 pm, "Walter Bright via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On 2/25/2016 1:50 PM, Andrei Alexandrescu wrote:
 Good to know, thanks! -- Andrei
DMD did slow down because it was now being compiled by DMD instead of
g++. Also, dmd was doing multithreaded file I/O, but that was removed because speed didn't matter <grrrr>.

I thought that multithreaded I/O did not change anything, or slowed
compilation down in some cases?

Or I recall seeing a slight slowdown when I first tested it in gdc all
those years ago.  So left it disabled - probably for the best too.
Feb 26 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2016 12:20 AM, Iain Buclaw via Digitalmars-d wrote:
 I thought that multithreaded I/O did not change anything, or slowed compilation
 down in some cases?

 Or I recall seeing a slight slowdown when I first tested it in gdc all those
 years ago.  So left it disabled - probably for the best too.
Running one test won't really give much useful information. I also wrote: "On a machine with local disk and running nothing else, no speedup. With a slow filesystem, like an external, network, or cloud (!) drive, yes. I would also expect it to speed up when the machine is running a lot of other stuff."
Feb 26 2016
parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 26 Feb 2016 9:45 am, "Walter Bright via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On 2/26/2016 12:20 AM, Iain Buclaw via Digitalmars-d wrote:
 I thought that multithreaded I/O did not change anything, or slowed
compilation
 down in some cases?

 Or I recall seeing a slight slowdown when I first tested it in gdc all
those
 years ago.  So left it disabled - probably for the best too.
Running one test won't really give much useful information. I also wrote: "On a machine with local disk and running nothing else, no speedup. With
a slow filesystem, like an external, network, or cloud (!) drive, yes. I would also expect it to speed up when the machine is running a lot of other stuff."

Ah ha. Yes, I can sort of remember that comment.

One interesting line of development (though it would be difficult to implement) would be to do all three semantic passes asynchronously using fibers.

If I understand correctly, sdc already does this, with many cases that need ironing out.
Feb 26 2016
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 26.02.2016 19:34, Iain Buclaw via Digitalmars-d wrote:
 On 26 Feb 2016 9:45 am, "Walter Bright via Digitalmars-d"
 <digitalmars-d puremagic.com> wrote:
 [...]

 Ah ha. Yes, I can sort of remember that comment.

 One interesting line of development (though it would be difficult to
 implement) would be to do all three semantic passes asynchronously
 using fibers.

 If I understand correctly, sdc already does this, with many cases that
 need ironing out.
Different passes are not really required once semantic analysis becomes asynchronous. Just keep track of semantic analysis dependencies, with strong and weak dependencies and different means to resolve cycles of weak dependencies. Then write the semantic analysis of each component in a linear fashion and pause it whenever it depends on information that has not yet been obtained, until that information is computed.
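A toy model of that pause-and-resume flow using core.thread.Fiber; everything below is hypothetical scheduler code, not SDC's or DMD's:

import core.thread : Fiber;

bool[string] resolved;      // facts computed so far

void need(string fact)      // called from inside an analysis fiber
{
    while (fact !in resolved)
        Fiber.yield();      // pause until another analysis provides it
}

void runAll(Fiber[] pending)
{
    while (pending.length)
    {
        Fiber[] still;
        foreach (f; pending)
        {
            f.call();       // resume; returns at the next yield or at the end
            if (f.state != Fiber.State.TERM)
                still ~= f;
        }
        // a real scheduler must detect rounds without progress
        // (cycles of weak dependencies) instead of spinning forever
        pending = still;
    }
}

unittest
{
    auto a = new Fiber({ need("B.type"); resolved["A.done"] = true; });
    auto b = new Fiber({ resolved["B.type"] = true; });
    runAll([a, b]);
    assert("A.done" in resolved);
}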
Feb 26 2016
next sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 26 Feb 2016 10:16 pm, "Timon Gehr via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 [...]
 Different passes are not really required once semantic analysis
 becomes asynchronous. Just keep track of semantic analysis
 dependencies, with strong and weak dependencies and different means
 to resolve cycles of weak dependencies. Then write the semantic
 analysis of each component in a linear fashion and pause it whenever
 it depends on information that has not yet been obtained, until that
 information is computed.

Yes. In our case, it may be best to go for small steps. First remove the 'deferred' semantic pass, then merge semantic 1+2, then finally do as you describe above. Easier said than done I guess though.
Feb 26 2016
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2016 1:10 PM, Timon Gehr wrote:
 Different passes are not really required once semantic analysis
 becomes asynchronous. Just keep track of semantic analysis
 dependencies, with strong and weak dependencies and different means
 to resolve cycles of weak dependencies. Then write the semantic
 analysis of each component in a linear fashion and pause it whenever
 it depends on information that has not yet been obtained, until that
 information is computed.
I'll put you in charge of debugging that :-)
Feb 26 2016
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 26.02.2016 23:41, Walter Bright wrote:
 On 2/26/2016 1:10 PM, Timon Gehr wrote:
 Different passes are not really required once semantic analysis becomes
 asynchronous. Just keep track of semantic analysis dependencies, with
 strong and
 weak dependencies and different means to resolve cycles of weak
 dependencies.
 Then write the semantic analysis of each component in a linear fashion
 and pause
 it whenever it depends on information that has not yet been obtained,
 until that
 information is computed.
I'll put you in charge of debugging that :-)
I am/was (I have not worked on it a lot lately). I haven't found it to be particularly hard to debug. It is likely the best way to fix the "forward reference error" situation. My code which does this does not compile on DMD versions after 2.060 due to forward reference issues. I have just reduced one of them: https://issues.dlang.org/show_bug.cgi?id=15733
Feb 27 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2016 12:05 PM, Timon Gehr wrote:
 On 2/26/2016 1:10 PM, Timon Gehr wrote:
 [...]

 I'll put you in charge of debugging that :-)
I am/was (I have not worked on it a lot lately). I haven't found it to be particularly hard to debug.
It'll get 100 times harder if it's a heisenbug due to synchronization issues.
 It is likely the best way to fix the "forward reference error" situation.
 My code which does this does not compile on DMD versions after 2.060 due to
 forward reference issues. I have just reduced one of them:
 https://issues.dlang.org/show_bug.cgi?id=15733
Thanks for preparing a bug report.
Feb 27 2016
parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 27 February 2016 at 23:30, Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 [...]
It'll get 100 times harder if it's a heisenbug due to synchronization issues.
Surely with Fibers everything would be deterministic though?
Feb 28 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/28/2016 1:35 AM, Iain Buclaw via Digitalmars-d wrote:
 Surely with Fibers everything would be deterministic though?
I don't see the point of fibers if:

1. they are running on the same core
2. none of them do any waiting, such as waiting on I/O requests

The only I/O a compiler does is reading the source files and writing the object file. At one time, dmd did have an async thread to read the source files, but that was removed as discussed in this thread.

To speed up dmd, using multicores is necessary, and that requires synchronization.
Feb 28 2016
parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 29 February 2016 at 00:43, Walter Bright via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 2/28/2016 1:35 AM, Iain Buclaw via Digitalmars-d wrote:

 Surely with Fibers everything would be deterministic though?
 I don't see the point of fibers if:

 1. they are running on the same core
That's a reasonable stance to have. I was only considering the speed-up of using yield/continue on a declaration's semantic pass versus the double round-robin we currently do for a couple of passes just because of forward-reference issues.
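To illustrate the round-robin style for comparison (a toy sketch; Decl and trySemantic are invented, and it sweeps until a fixed point rather than exactly twice, but the retry-on-forward-reference idea is the same):

import std.stdio : writeln;

// Hypothetical declaration; trySemantic succeeds only once its
// dependency (if any) has already completed.
class Decl
{
    string name;
    Decl dep;
    bool done;

    this(string name, Decl dep = null) { this.name = name; this.dep = dep; }

    bool trySemantic()
    {
        if (dep !is null && !dep.done)
            return false; // forward reference; retry on the next sweep
        writeln(name, " analyzed");
        return true;
    }
}

void main()
{
    auto a = new Decl("a");
    auto b = new Decl("b", a); // b comes first and forward-references a
    Decl[] decls = [b, a];

    bool progress = true;
    while (progress) // round-robin sweeps over all declarations
    {
        progress = false;
        foreach (d; decls)
            if (!d.done && d.trySemantic())
            {
                d.done = true;
                progress = true;
            }
    }
    // Anything still not done here is a genuine forward-reference
    // error (or an unhandled cycle).
}

Each failed attempt is thrown away and redone from scratch on the next sweep, which is exactly the work that yield/continue would avoid.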
Feb 29 2016
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2016 10:34 AM, Iain Buclaw via Digitalmars-d wrote:
 One interesting line of development (though would be difficult to implement)
 would be to do all three semantic passes asynchronously using fibers.
I'd be terrified of all the synchronizing that would be necessary there. The lexing and code generation would be far easier to parallelize.
 If I understand correctly, sdc already does this with many cases that need
 ironing out.
The "many cases that need ironing out" is always the problem :-)
Feb 26 2016
prev sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Thursday, 25 February 2016 at 19:55:20 UTC, rsw0x wrote:
 On Thursday, 25 February 2016 at 19:25:38 UTC, deadalnix wrote:
 On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
 If we made GDC or LDC the official compiler, then the next
 question to pop up would be about compilation speed....
ldc is still significantly faster than clang, or gdc than gcc. I don't think this is that much of a valid concern, especially for smaller programs.
For larger programs, LDC with single-file compilation outdoes DMD by a large factor on any recent multi-core CPU, for both debug and release builds, in my tests. DMD did not scale across cores anywhere near as well as LDC. OTOH, LDC does not benefit from singleobj this way when doing release builds.
Would it be possible to point me in the direction of projects where
you saw ldc being faster? I didn't try per-module, but on the projects
I tried, dmd is still considerably faster when compiling per-package.
And given that per-package is nearly always faster than per-module...
(http://forum.dlang.org/post/yfykbayodugukemvoedf forum.dlang.org)

Atila
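P.S. For clarity, the distinction I'm drawing, with hypothetical file
names:

# per-module: one compiler invocation per source file; a build tool
# can run these in parallel across cores
dmd -c src/pkg/a.d
dmd -c src/pkg/b.d

# per-package: all of a package's modules compiled in one invocation;
# front-end work for shared imports happens only once
dmd -c src/pkg/a.d src/pkg/b.d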
Feb 25 2016
parent Atila Neves <atila.neves gmail.com> writes:
On Thursday, 25 February 2016 at 22:38:45 UTC, Atila Neves wrote:
 On Thursday, 25 February 2016 at 19:55:20 UTC, rsw0x wrote:
 [...]
Would it be possible to point me in the direction of projects where
you saw ldc being faster? I didn't try per-module, but on the projects
I tried, dmd is still considerably faster when compiling per-package.
And given that per-package is nearly always faster than per-module...
(http://forum.dlang.org/post/yfykbayodugukemvoedf forum.dlang.org)

Atila
Forgot to say: I measured just now, on the most recent dmd and ldc,
with an 8-core CPU with hyperthreading (so 16 threads).

Atila
Feb 25 2016
prev sibling next sibling parent reply bachmeier <no spam.net> writes:
On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
wrote:
 Currently, newcomers come expecting their algorithm from 
 rosetta code to run faster in D than their current language, 
 but then it seems like it's actually slower. What gives?
Squeezing that last ounce of performance out of the processor is only
one reason to use D, and, to be perfectly honest, it's not likely to
ever be the selling point, because you're not going to consistently
beat C++ by 30% or 40% anyway. D's selling point is the design of the
language. DMD offers another selling point, fast compilation, and for
many tasks the less efficient generated code is irrelevant. "If you're
careful, as fast as C++" isn't by itself the most compelling sales
pitch.
Feb 18 2016
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, 18 February 2016 at 13:05:53 UTC, bachmeier wrote:
 "If you're careful, as fast as C++" isn't by itself the most 
 compelling sales pitch.
That's never going to be a good sales pitch, even if it were true 100%
of the time. If that's all that someone cares about, they're just going
to continue to use C++. Why bother switching? It may matter that their
performance isn't going to take a huge hit by switching to D, but it's
all of the other stuff about D that's going to actually get someone
interested - the stuff that it does better than other languages, not
the stuff that it's "as good at" as other languages. And as long as we
have alternative compilers which produce code on par with the big C++
compilers, I really don't think that we have an issue here.

- Jonathan M Davis
Feb 18 2016
prev sibling next sibling parent reply Dejan Lekic <dejan.lekic gmail.com> writes:
Lots of programmers out there use and love languages that are far
slower than any code DMD produces (think JavaScript, Python, Ruby). So
I see no point here. If someone is learning D and knows there are
different compilers available, they will find out what the differences
are. OpenJDK's JVM is not the best JVM in the world, yet millions of
people use it.

What I find useful in having DMD as the *reference compiler* is having
a compiler with the latest language changes.
Feb 18 2016
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 02/18/2016 09:22 AM, Dejan Lekic wrote:
 Lots of programmers out there use and love languages that are far slower
 than any code DMD produces (think JavaScript, Python, Ruby). So I see no
 point here.
While that's true, my impression is that most of the users and fans of
those languages use them *because* they're fundamentally dynamic
(unlike D), deliberately lack every feature they possibly CAN lack
(unlike D), and lack all the compile-time safety that D promotes as
features.

So regarding those languages' reduced speed: while many of their users
don't care one bit ("why aren't you a consumer whore like me? go buy a
new machine, you dinosaur!"), there also seems to be a large population
that merely *tolerates* the lower speed for the sake of the dynamicness
and the lack-of-compile-time-anything that they love.

The first group will likely never be tempted by D regardless, but for
the second group, a language that's fast like C/C++ but not nearly as
unproductive IS appealing, and even seems to be something they're often
on the lookout for.
Feb 18 2016
parent bachmeier <no spam.com> writes:
On Thursday, 18 February 2016 at 15:58:14 UTC, Nick Sabalausky 
wrote:
  but for the second group, a language that's fast like C/C++ 
 but not nearly as unproductive IS appealing, and even seems to 
 be something they're often on the lookout for.
I would agree with you if you could write D code using the most convenient style possible, compile using LDC, and get the same speed as C or C++. My experience suggests that is not going to happen. You're going to need some knowledge of the language to get the best performance.
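As one toy illustration of the kind of knowledge I mean (my own
made-up example, not a benchmark; both versions are correct D):

import std.array : appender;

// Convenient style: each ~= goes through the runtime and may
// reallocate and copy the array.
string joinNaive(string[] words)
{
    string s;
    foreach (w; words)
        s ~= w;
    return s;
}

// Performance-conscious style: Appender amortizes reallocations.
string joinFast(string[] words)
{
    auto app = appender!string();
    foreach (w; words)
        app.put(w);
    return app.data;
}

No compiler switch will rewrite the first into the second for you.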
Feb 18 2016
prev sibling next sibling parent reply David Nadlinger <code klickverbot.at> writes:
On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
wrote:
 […]
On a completely unrelated note, you aren't by any chance the Márcio Martins who is giving a talk at ETH in a couple of days, are you? — David
Feb 18 2016
parent =?UTF-8?B?TcOhcmNpbw==?= Martins <marcioapm gmail.com> writes:
On Thursday, 18 February 2016 at 20:18:14 UTC, David Nadlinger 
wrote:
 On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
 wrote:
 […]
On a completely unrelated note, you aren't by any chance the Márcio Martins who is giving a talk at ETH in a couple of days, are you? — David
No, I'm not.
Feb 18 2016
prev sibling parent reply Xavier Bigand <flamaros.xavier gmail.com> writes:
On 17/02/2016 23:57, Márcio Martins wrote:
 I was reading the other thread "Speed kills" and was wondering if there
 is any practical reason why DMD is the official compiler?

 Currently, newcomers come expecting their algorithm from rosetta code to
 run faster in D than their current language, but then it seems like it's
 actually slower. What gives?

 Very often the typical answer from this community is generally "did you
 use LDC/GDC?".

 Wouldn't it be a better newcomer experience if the official compiler was
 either LDC or GDC?
 For us current users it really doesn't matter what is labelled official,
 we pick what serves us best, but for a newcomer, the word official
 surely carries a lot of weight, doesn't it?

  From a marketing point of view, is it better for D as a language that
 first-timers try the bleeding-edge, latest language features with DMD,
 or that their expectations of efficient native code are not broken?

 Apologies if this has been discussed before...
Like you said, it's mostly a marketing issue. DMD can and will stay
the reference, but IMO, as you also point out, the most important thing
is to keep LDC/GDC in sync with the DMD front-end version, because
newcomers will feel comfortable using another up-to-date compiler that
can also target more platforms. I am exactly in this case: I would
prefer a compiler that makes my code run on all platforms (Android,
iOS, ...), but I don't want to suffer from staying on an older
front-end that limits me with missing features or bugs. So I am using
DMD with some frustration, lol.

LDC/GDC also need more love:

1. A direct link to download the latest version (instead of a link to
the project page)
2. An installer like DMD's that can optionally download Visuald
3. Auto-detection by build tools (dub, Visuald, ...)

I know Visuald supports LDC, but for dub I didn't find anything on how
it picks which compiler to use.
Feb 24 2016
parent jmh530 <john.michael.hall gmail.com> writes:
On Wednesday, 24 February 2016 at 22:43:07 UTC, Xavier Bigand 
wrote:
 I know Visuald support ldc, but for dub I didn't find anything 
 on how it find which compiler to use.
I agree the docs could be better. If you type dub build --help, it shows that --compiler is an option. So you would just pass --compiler=ldc2.
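For example, to build the current project with LDC (assuming ldc2 is
on your PATH; the option should also accept a path to the compiler
binary):

dub build --compiler=ldc2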
Feb 24 2016