
digitalmars.D - Migrating D front end to D - post Dconf

reply "Iain Buclaw" <ibuclaw gdcproject.org> writes:
Daniel and/or David,

We should list down in writing the issues preventing DMD, GDC, 
and LDC having a shared code base.  From what David has shown me, 
LDC will need the most work for this, but I'll list down what I 
can remember.

1. Support extern(C++) classes so we can have a split C++/D 
implementation of eg: Expression and others.

2. Support representing integers and floats to a greater 
precision than what the host can natively support. In D there's 
BigInt for integral types, and there's a possibility of using 
std.numeric for floats.  For me, painless conversion between eg: 
BigInt <-> GCC's double_int is a requirement, but that is more of 
an afterthought at this point in time.

3. Array ops should be moved out of the front end. The back end 
can deal with emitting the correct Libcall if required.

4. Continue building upon Target to hide target-specific things 
from the front end.  Off the top of my head I've got two to raise 
pulls for: __VENDOR__ and retrieving memalignsize for fields.

5. DMD sends messages to stdout, GDC sends to stderr.  Just a 
small implementation detail, but worth noting where 
'printf' appears, it's almost always rewritten as fprintf(stderr) 
for GDC.

6. LDC does not implement toObjFile, toCtype, toDt, toIR, 
possibly others...

7. BUILTINxxx could be moved into Target, as there is no reason 
why each back end can't support their own builtins for the 
purpose of CTFE.

8. D front end's port.h can't be used by GDC because of 
dependency on mars.h, this could perhaps be replaced by 
std.numeric post conversion.

9. Opaque declarations of back end types defined in front end 
differ for each compiler implementation.  Eg: elem is a typedef 
to union tree_node.

10. The main function in mars.c is not used by GDC, possibly LDC 
also.  Another implementation detail but also a note to maybe 
split out errorSupplemental and others from that file.

11. The function genCfunc does not generate the arguments of the 
extern(C) symbol.

12. LDC adds extra reserved version identifiers that are not 
allowed to be declared in D code.  This could and probably should 
be merged into D front end. Don't think it would be wise to let 
back ends have the ability to add their own.  Also this list 
needs updating regardless to reflect the documented spec.

13. LDC makes some more arbitrary changes for which the reason 
has been forgotten. Get on it David!  :o)

14. Reading sources asynchronously, GDC ifdefs this out.  Do we 
really need this?  I seem to recall that the speed increase is 
either negligible or offers no benefit to compilation speed.

15. Deal with all C++ -> D conversion
May 05 2013
next sibling parent "Iain Buclaw" <ibuclaw gdcproject.org> writes:
On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 15. Deal with all C++ -> D conversion
15. Deal with all C++ -> D conversion issues (see all DDMD-marked pull requests).

16. Testing the C++ -> D front end conversion on Linux.  Daniel, you can send me the sources to test that if getting a Linux box is a problem for you.

Anything else I missed?  Oh, perhaps licensing issues.  I know the C++ sources for the D front end have been assigned to the FSF by Walter; I think the conversion to D is enough change to warrant reassignment.

1, 2, 3, get destroying...

Regards
Iain.
May 05 2013
prev sibling next sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 13. LDC makes some more arbitrary changes to which the reason 
 for the change has been forgotten. Get on it David!  :o)
This applies only to a small part of the changes. The larger share of them will actually need adaptation of the upstream frontend sources for a very good reason if we want to have a truly shared codebase.

As for the size of the diff, don't forget that LDC doesn't enjoy the luxury of having IN_LLVM sections in the upstream source – the difference in amount of changes actually isn't that large:

---
$ fgrep -rI IN_GCC dmd/src | wc -l
49

$ fgrep -rI IN_LLVM ldc/dmd2 | wc -l
57
---

David
May 05 2013
Iain Buclaw <ibuclaw ubuntu.com> writes:
On May 5, 2013 3:30 PM, "David Nadlinger" <see klickverbot.at> wrote:
 On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 13. LDC makes some more arbitrary changes to which the reason for the
change has been forgotten. Get on it David! :o)
 This applies only to a small part of the changes. The larger share of
them will actually need adaption of the upstream frontend sources for a very good reason if we want to have a truly shared codebase.
 As for the size of the diff, don't forget that LDC doesn't enjoy the
luxury of having IN_LLVM sections in the upstream source – the difference in amount of changes actually isn't that large:
 ---
 $ fgrep -rI IN_GCC dmd/src | wc -l
 49

 $ fgrep -rI IN_LLVM ldc/dmd2 | wc -l
 57
 ---

 David
Indeed, but I was thinking of changes that aren't ifdef'd. I'm sure I saw a few...

Regards
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 05 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 12. LDC adds extra reserved version identifiers that are not 
 allowed to be declared in D code.  This could and probably 
 should be merged into D front end. Don't think it would be wise 
 to let back end's have the ability to add their own.  Also this 
 list needs updating regardless to reflect the documented spec.
I think we should just add the full list from http://dlang.org/version.html. This would also resolve the issue for LDC. David
May 05 2013
"Luís Marques" <luismarques gmail.com> writes:
On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 1. Support extern(C++) classes so can have a split C++/D 
 implementation of eg: Expression and others.
I don't know if this will be in the videos, so I'll ask here. I thought extern(C++) only supported interfaces because everything else fell into the "we'd need to pretty much include a C++ compiler into D to support that" camp. Is that not quite true for classes? Did you find some compromise between usefulness and complexity that wasn't obvious before, or did the D compiler transition just motivate adding some additional complexity that previously wasn't deemed acceptable?
May 05 2013
Iain Buclaw <ibuclaw ubuntu.com> writes:
On May 5, 2013 5:20 PM, "&lt;luismarques gmail.com&gt;&quot; puremagic.com"
<&quot;\&quot;Lu=EDs&quot;.Marques&quot;> wrote:
 On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 1. Support extern(C++) classes so can have a split C++/D implementation
of eg: Expression and others.
 I don't know if this will be in the videos, so I'll ask here. I thought
extern(C++) only supported interfaces because everything else fell into the "we'd need to pretty much include a C++ compiler into D to support that" camp. Is that not quite true for classes? Did you find some compromise between usefulness and complexity that wasn't obvious before, or did the D compiler transition just motivate adding some additional complexity that previously wasn't deemed acceptable?

It was mentioned, however I do believe there are a few more complicated things than that.  Many would be in a position to educate you on that.

Regards
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 05 2013
Walter Bright <newshound2 digitalmars.com> writes:
On 5/5/2013 9:17 AM, "Luís Marques" <luismarques gmail.com> wrote:
 On Sunday, 5 May 2013 at 13:33:25 UTC, Iain Buclaw wrote:
 1. Support extern(C++) classes so can have a split C++/D implementation of eg:
 Expression and others.
I don't know if this will be in the videos, so I'll ask here. I thought extern(C++) only supported interfaces because everything else fell into the "we'd need to pretty much include a C++ compiler into D to support that" camp. Is that not quite true for classes? Did you find some compromise between usefulness and complexity that wasn't obvious before, or did the D compiler transition just motivate adding some additional complexity that previously wasn't deemed acceptable?
extern(C++) interfaces are ABI compatible with C++ "COM" classes - i.e. single inheritance, no constructors or destructors.
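
To illustrate what that allows, here is a minimal sketch of the D side of a split C++/D class under those COM-style rules - the names are made up for the example, not the actual front-end headers:

---
// Sketch, not the real front end: a C++-implemented AST class made
// callable from D (single inheritance, no constructors/destructors).
extern (C++) interface Expression
{
    int kind();              // implemented on the C++ side
    Expression semantic();   // returns another C++-side object
}

// D code calls straight through the shared vtable:
int classify(Expression e) { return e.kind(); }
---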
May 05 2013
"Luís Marques" <luismarques gmail.com> writes:
On Sunday, 5 May 2013 at 20:33:15 UTC, Walter Bright wrote:
 extern(C++) interfaces are ABI compatible with C++ "com" 
 classes - i.e. single inheritance, no constructors or 
 destructors.
That I know, thanks, I just understood that point one meant some additional extern(C++) support:
 1. Support extern(C++) classes so can have a split C++/D 
 implementation of eg: Expression and others.
May 05 2013
prev sibling next sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Iain Buclaw" <ibuclaw gdcproject.org> wrote in message 
news:qtcogcbrhfzjvuoayyjr forum.dlang.org...
 Daniel and/or David,

 We should list down in writing the issues preventing DMD, GDC, and LDC 
 having a shared code base.  From what David has shown me, LDC will need 
 the most work for this, but I'll list down what I can remember.
oooook here we go:

We have three goals:
A: D frontend ported to D
B: Identical frontend code shared between all three backends
C: Fixing the layering violations in the glue layer (in some cases this probably blocks B)
 1. Support extern(C++) classes so can have a split C++/D implementation of 
 eg: Expression and others.
s/others/all ast classes/

Required for A only
 2. Support representing integers and floats to a greater precision than 
 what the host can natively support.
This should be 'Support representing integers and floats to the EXACT precision that the TARGET supports at runtime'. The old arguments about how you can't rely on floating point exactness do not hold up when cross compiling - all compilers that differ only in host compiler/machine must produce identical binaries. This is really a separate issue.
 In D there's BigInt for integral types, and there's a possibility of using 
 std.numeric for floats.  For me, painless conversion between eg: BigInt 
 <-> GCC's double_int is a requirement, but that is more of an 
 afterthought at this point in time.
Because this does not block anything it _can_ wait until the port is complete; we can live with some weirdness in floating point at compile time. I completely agree it should be fixed eventually.
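
To illustrate the idea with a sketch (using Phobos' std.bigint here, which is not what the front end does today): folding target-sized integer constants stops depending on the host's native word size.

---
import std.bigint, std.stdio;

void main()
{
    // Fold ulong.max + 1 for a 64-bit target without overflowing
    // the host's native integers.
    auto a = BigInt("18446744073709551615"); // ulong.max
    writeln(a + 1);                          // 18446744073709551616
}
---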
 3. Array ops should be moved out of the front end. The back end can deal 
 with emitting the correct Libcall if required.
Only blocks C...
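
For reference, the kind of lowering meant here looks roughly like this (the helper named in the comment is illustrative, not the exact druntime symbol):

---
void axpy(int[] r, const int[] x, const int[] y)
{
    // The front end currently rewrites this array op into a druntime
    // library call; the back end could instead emit that libcall (or
    // inline vector code) itself.
    r[] = x[] + y[] * 2;
}
---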
 4. Continue building upon Target to hide target-specific things from the 
 front end.  Off the top of my head I've got two to raise pulls for: 
 __VENDOR__ and retrieving memalignsize for fields.
Only blocks B (and fixing it helps C)
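
A rough sketch of the direction - the hook names below are assumptions for illustration, not the actual target.h interface:

---
class Type {}  // stand-in for the front end's Type class

struct Target
{
    static int ptrsize;                 // pointer size in bytes
    // each back end answers alignment queries (would serve memalignsize)
    static uint fieldAlignSize(Type t) { return 4; }
    // and identifies itself (would back __VENDOR__)
    static string vendor() { return "GNU"; }
}
---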
 5. DMD sends messages to stdout, GDC sends to stderr.  Just a small 
 implementation detail, but worth noting where 'printf' appears, it's almost 
 always rewritten as fprintf(stderr) for GDC.
Similar.
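
One possible way to reconcile this without scattering #ifdefs, sketched as an assumption rather than an actual DMD/GDC patch - route all diagnostics through a single switchable sink:

---
import core.stdc.stdio;

// DMD's driver would point this at stdout, GDC's at stderr.
__gshared FILE* diagSink;

void diag(const(char)* msg)
{
    if (diagSink is null)
        diagSink = stderr;          // default to GDC's behaviour
    fprintf(diagSink, "%s\n", msg);
}
---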
 6. LDC does not implement toObjFile, toCtype, toDt, toIR, possibly 
 others...
This is another layering violation, and eventually I believe we should migrate to an _actual_ visitor pattern, so ast classes do not need to know anything about the glue layer. I think we should work around this for now. (With #ifdef, or adding _all_ virtuals to the frontend and stubbing the unused ones)
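
A minimal sketch of the visitor shape being suggested (class names are stand-ins for the real AST hierarchy):

---
class Expression
{
    void accept(Visitor v) { v.visit(this); }
}

class DeclarationExp : Expression
{
    override void accept(Visitor v) { v.visit(this); }
}

class Visitor
{
    void visit(Expression e) {}
    void visit(DeclarationExp e) {}
}

// The glue layer then lives entirely outside the front end, e.g.:
class GccCodeGen : Visitor
{
    override void visit(DeclarationExp e) { /* emit GENERIC here */ }
}
---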
 7. BUILTINxxx could be moved into Target, as there is no reason why each 
 back end can't support their own builtins for the purpose of CTFE.
Makes sense. I guess if Target detects a builtin it gets Port to evaluate it. Maybe we should rename Port to Host?
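
Sketched as an assumption (the front end's existing BUILTINxxx code works differently today): the back end recognizes the functions it can fold, and CTFE asks it to fold them.

---
class Expression {}
class FuncDeclaration {}

enum BUILTIN { unimp, sin, sqrt }

struct Target
{
    static BUILTIN isBuiltin(FuncDeclaration fd)
    {
        // e.g. match on the function's mangled name (details omitted)
        return BUILTIN.unimp;
    }

    static Expression evalBuiltin(BUILTIN b, Expression[] args)
    {
        return null; // stub; would constant-fold the call
    }
}
---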
 8. D front end's port.h can't be used by GDC because of dependency  on 
 mars.h, this could perhaps be replaced by std.numeric post conversion.
Didn't we find it doesn't rely on anything substantial? This can certainly be cleaned up.
 9. Opaque declarations of back end types defined in front end differ for 
 each compiler implementation.  Eg: elem is a typedef to union tree_node.
Same problem as 6, except opaque types can be safely ignored/used as they are opaque.
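
For illustration, the divergence looks roughly like this on the front end's side (a sketch; the actual declarations vary per compiler):

---
// The front end declares only opaque handles; each glue layer
// defines what they really are.
struct elem;    // DMD: back-end expression node; GDC: union tree_node
struct Symbol;  // likewise back-end specific

// The front end can pass these around but never look inside:
extern (C++) elem* toElem(void* irs);  // parameter types simplified
---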
 10. The main function in mars.c is not used by GDC, possibly LDC also. 
 Another implementation detail but also a note to maybe split out 
 errorSupplemental and others from that file.
I'm happy with each compiler having their own 'main' file. Yes we need to move the common stuff into another file.
 11. The function genCfunc does not generate the arguments of the extern(C) 
 symbol.
I think this only blocks C.
 12. LDC adds extra reserved version identifiers that are not allowed to be 
 declared in D code.  This could and probably should be merged into D front 
 end. Don't think it would be wise to let back end's have the ability to 
 add their own.  Also this list needs updating regardless to reflect the 
 documented spec.
Makes sense.
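
For illustration, a sketch of such a shared table, abbreviated - the authoritative set is the list David pointed to at dlang.org/version.html:

---
immutable string[] reservedVersions = [
    "DigitalMars", "GNU", "LDC",
    "Windows", "linux", "OSX", "Posix",
    "X86", "X86_64", "D_LP64", "unittest",
];

// The front end would reject user code such as "version = GNU;".
bool isReservedVersion(string ident)
{
    import std.algorithm : canFind;
    return reservedVersions.canFind(ident);
}
---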
 13. LDC makes some more arbitrary changes to which the reason for the 
 change has been forgotten. Get on it David!  :o)
I know very little about this but hopefully most of it can go into Target/get merged upstream.
 14. Reading sources asynchronously, GDC ifdefs this out.  Do we really 
 need this?  I seem to recall that the speed increase is either negligible 
 or offers no benefit to compilation speed.
I think #ifdefed or dropped are both fine.
 15. Deal with all C++ -> D conversion
Yeah.
 16. Testing the C++ -> D front end conversion on Linux.   Daniel you can 
 send me the sources to test that if getting a Linux box is a problem for 
 you.
It's not a problem, just not my primary platform and therefore not my first focus. At the moment you would need a modified porting tool to compile for anything except win32. To get here we need to fix the #ifdef-cutting-expressions-and-statements-etc mess. I'm not sure how bad this is because last time I tried I was going for the backend as well. I'll have a go on my flight until my laptop battery runs out.

There is more, it's just more of the same.
May 05 2013
next sibling parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
I'm expecting lots of positive comments when I get off my flight in 14 
hours.

"Daniel Murphy" <yebblies nospamgmail.com> wrote in message 
news:km7aqo$2kv4$1 digitalmars.com...
 "Iain Buclaw" <ibuclaw gdcproject.org> wrote in message 
 news:qtcogcbrhfzjvuoayyjr forum.dlang.org...
 Daniel and/or David,

 We should list down in writing the issues preventing DMD, GDC, and LDC 
 having a shared code base.  From what David has shown me, LDC will need 
 the most work for this, but I'll list down what I can remember.
oooook here we go: We have three goals: A: D frontend ported to D B: Identical frontend code shared between all three backends C: Fixing the layering violations in the glue layer (in some cases this probably blocks B)
 1. Support extern(C++) classes so can have a split C++/D implementation 
 of eg: Expression and others.
s/others/all ast classes/ Requred for A only
 2. Support representing integers and floats to a greater precision than 
 what the host can natively support.
This should be 'Support representing integers and floats to the EXACT precisison that the TARGET supports at runtime'. The old arguments about how you can't rely on floating point exactness do not hold up when cross compiling - all compilers that differ only in host compiler/machine must produce identical binaries. This is really a seperate issue.
 In D there's BigInt for integral types, and there's a possibility of 
 using std.numeric for floats.  For me, painless conversion between eg: 
 BigInt <-> GCC's double_int is a requirement, but that is more of an 
 after thought at this point in time.
Because this does not block anything it _can_ wait until the port is complete, we can live with some weirdness in floating point at compile time. I completely agree it should be fixed eventually.
 3. Array ops should be moved out of the front end. The back end can deal 
 with emitting the correct Libcall if required.
Only blocks C...
 4. Continue building upon Target to hide target-specific things from the 
 front end.  Off the top of my head I've got two to raise pulls for: 
 __VENDOR__ and retrieving memalignsize for fields.
Only blocks B (and fixing it helps C)
 5. DMD sends messages to stdout, GDC sends to stderr.  Just a small 
 implementation detail, but worth noting where 'printf'appears, it's 
 almost always rewritten as fprintf(stderr) for GDC.
Similar.
 6. LDC does not implement toObjFile, toCtype, toDt, toIR, possibly 
 others...
This is another layering violation, and eventually I believe we should migrate to an _actual_ visitor pattern, so ast classes do not need to know anything about the glue layer. I think we should work around this for now. (With #ifdef, or adding _all_ virtuals to the frontend and stubbing the unused ones)
 7. BUILTINxxx could be moved into Target, as there is no reason why each 
 back end can't support their own builtins for the purpose of CTFE.
Makes sense. I guess if Target detects a builtin it gets Port to evaluate it. Maybe we should rename Port to Host?
 8. D front end's port.h can't be used by GDC because of dependency  on 
 mars.h, this could perhaps be replaced by std.numeric post conversion.
Didn't we find it doesn't rely on anything substantial? This can certainly be cleaned up.
 9. Opaque declarations of back end types defined in front end differ for 
 each compiler implementation.  Eg: elem is a typedef to union tree_node.
Same problem as 6, except opaque types can be safely ignored/used as they are opaque.
 10. The main function in mars.c is not used by GDC, possibly LDC also. 
 Another implementation detail but also a note to maybe split out 
 errorSuplimental and others from that file.
I'm happy with each compiler having their own 'main' file. Yes we need to move the common stuff into another file.
 11. The function genCfunc does not generate the arguments of the 
 extern(C) symbol.
I think this only blocks C.
 12. LDC adds extra reserved version identifiers that are not allowed to 
 be declared in D code.  This could and probably should be merged into D 
 front end. Don't think it would be wise to let back end's have the 
 ability to add their own.  Also this list needs updating regardless to 
 reflect the documented spec.
Makes sense.
 13. LDC makes some more arbitrary changes to which the reason for the 
 change has been forgotten. Get on it David!  :o)
I know very little about this but hopefully most of it can go into Target/get merged upstream.
 14. Reading sources asynchronously, GDC ifdefs this out.  Do we really 
 need this?  I seem to recall that the speed increase is either 
 negliegable or offers no benefit to compilation speed.
I think #ifdefed or dropped are both fine.
 15. Deal with all C++ -> D conversion
Yeah.
 16. Testing the C++ -> D front end conversion on Linux.   Daniel you can 
 send me the sources to test that if getting a Linux box is a problem for 
 you.
It's not a problem, just not my primary platform and therefore not my first focus. At the moment you would need a modified porting tool to compile for anything except win32. To get here we need to fix the #ifdef-cutting-expressions-and-statements-etc mess. I'm not sure how bad this is because last time I tried I was going for the backend as well. I'll have a go on my flight until my laptop battery runs out. There is more, it's just more of the same.
May 05 2013
Iain Buclaw <ibuclaw ubuntu.com> writes:
On 6 May 2013 05:16, Daniel Murphy <yebblies nospamgmail.com> wrote:

 "Iain Buclaw" <ibuclaw gdcproject.org> wrote in message
 news:qtcogcbrhfzjvuoayyjr forum.dlang.org...
 Daniel and/or David,

 We should list down in writing the issues preventing DMD, GDC, and LDC
 having a shared code base.  From what David has shown me, LDC will need
 the most work for this, but I'll list down what I can remember.
oooook here we go:

We have three goals:
A: D frontend ported to D
B: Identical frontend code shared between all three backends
C: Fixing the layering violations in the glue layer (in some cases this probably blocks B)
 1. Support extern(C++) classes so can have a split C++/D implementation
of
 eg: Expression and others.
s/others/all ast classes/ Required for A only
 2. Support representing integers and floats to a greater precision than
 what the host can natively support.
This should be 'Support representing integers and floats to the EXACT precision that the TARGET supports at runtime'. The old arguments about how you can't rely on floating point exactness do not hold up when cross compiling - all compilers that differ only in host compiler/machine must produce identical binaries. This is really a separate issue.
Probably yes, but I cannot consider switching without it.
 In D there's BigInt for integral types, and there's a possibility of
using
 std.numeric for floats.  For me, painless conversion between eg: BigInt
 <-> GCC's double_int is a requirement, but that is more of an
 afterthought at this point in time.
Because this does not block anything it _can_ wait until the port is complete, we can live with some weirdness in floating point at compile time. I completely agree it should be fixed eventually.
Indeed, and I can deal without BigInt.
 8. D front end's port.h can't be used by GDC because of dependency  on
 mars.h, this could perhaps be replaced by std.numeric post conversion.
Didn't we find it doesn't rely on anything substantial? This can certainly be cleaned up.
Nothing substantial, no. And cleaned up, it should be. I just haven't spent more than 5 minutes looking at it.
 9. Opaque declarations of back end types defined in front end differ for
 each compiler implementation.  Eg: elem is a typedef to union tree_node.
Same problem as 6, except opaque types can be safely ignored/used as they are opaque.
 10. The main function in mars.c is not used by GDC, possibly LDC also.
 Another implementation detail but also a note to maybe split out
 errorSupplemental and others from that file.
I'm happy with each compiler having their own 'main' file. Yes we need to move the common stuff into another file.
Have any suggestions for where to move this? (other than a new file)
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 06 2013
Walter Bright <newshound2 digitalmars.com> writes:
When devising solutions, I want to prefer solutions that do not rely on 
#ifdef/#endif. I've tried to scrub those out of the dmd front end source code.
May 05 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:km7fml$2rka$1 digitalmars.com...
 When devising solutions, I want to prefer solutions that do not rely on 
 #ifdef/#endif. I've tried to scrub those out of the dmd front end source 
 code.
I completely agree.  But - refactoring the glue layer interface to use a proper visitor interface (what I suspect is the best solution) is a rather large change and will be much easier _after_ the conversion.

While ifdefs are a pain in general, the big problem is this pattern:

if (a && b &&
#if SOMETHING
    c && d &&
#else
    e && f &&
#endif
    g && h)
{
    ...
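
For comparison, a sketch of how the same split falls out in D, where version blocks cannot cut an expression mid-way and the varying part has to be hoisted into a helper (names are placeholders):

---
bool backendChecks(bool c, bool d, bool e, bool f)
{
    version (SOMETHING)
        return c && d;
    else
        return e && f;
}

// if (a && b && backendChecks(c, d, e, f) && g && h) { ... }
---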
May 06 2013
Iain Buclaw <ibuclaw ubuntu.com> writes:
On 6 May 2013 08:19, Daniel Murphy <yebblies nospamgmail.com> wrote:

 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:km7fml$2rka$1 digitalmars.com...
 When devising solutions, I want to prefer solutions that do not rely on
 #ifdef/#endif. I've tried to scrub those out of the dmd front end source
 code.
I completely agree.  But - refactoring the glue layer interface to use a proper visitor interface (what I suspect is the best solution) is a rather large change and will be much easier _after_ the conversion.

While ifdefs are a pain in general, the big problem is this pattern:

if (a && b &&
#if SOMETHING
    c && d &&
#else
    e && f &&
#endif
    g && h)
{
    ...
^^ One thing I won't miss about removing all DMDV1 macros from GDC glue. ;)

--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 06 2013
prev sibling parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Daniel Murphy" <yebblies nospamgmail.com> wrote in message 
news:km7lir$48g$1 digitalmars.com...
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:km7fml$2rka$1 digitalmars.com...
 When devising solutions, I want to prefer solutions that do not rely on 
 #ifdef/#endif. I've tried to scrub those out of the dmd front end source 
 code.
I completely agree.  But - refactoring the glue layer interface to use a proper visitor interface (what I suspect is the best solution) is a rather large change and will be much easier _after_ the conversion.

While ifdefs are a pain in general, the big problem is this pattern:

if (a && b &&
#if SOMETHING
    c && d &&
#else
    e && f &&
#endif
    g && h)
{
    ...
It turns out these are actually not that big a problem in the frontend - around 30 cases, all DMDV2 or 0/1. The backend is another story...
May 07 2013
Thomas Koch <thomas koch.ro> writes:
Do you plan to support a build path that has no circular dependencies? 
This would be a very strong nice-to-have for porting D to new architectures.

So it should be possible to build a subset of D (stage 1) with gcc without 
relying on a D compiler and then using the stage 1 binary to build a 
complete D compiler.

There are languages in Debian that rely on themselves to be built and it's a 
headache to support those languages on all architectures.

Regards, Thomas Koch
May 09 2013
Iain Buclaw <ibuclaw ubuntu.com> writes:
On 9 May 2013 10:11, Thomas Koch <thomas koch.ro> wrote:

 Do you plan to support a build path that has no circular dependencies?
 This would be a very strong nice to have for porting D to new
 architectures.

 So it should be possible to build a subset of D (stage 1) with gcc without
 relying on a D compiler and than using the stage 1 binary to build a
 complete D compiler.

 There are languages in Debian that rely on themselves to be build and it's
 a
 headache to support those languages on all architectures.

 Regards, Thomas Koch
I will very likely keep a branch with the C++-implemented front end for these purposes. But ideally we should get porting as soon as possible ahead of this move so that there are already D compilers available for said targets.

Though it would be nice for the D implementation to be kept to a subset that is backwards compatible with 2.062 (or whatever version we decide to make the switch at), that is something I cannot guarantee.

Regards
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 09 2013
parent "QAston" <qaston gmail.com> writes:
On Thursday, 9 May 2013 at 10:15:42 UTC, Iain Buclaw wrote:
 I'll will very likely keep a branch with the C++ implemented 
 front end for
 these purposes. But ideally we should get porting as soon as 
 possible ahead
 of this move so that there are already D compilers available 
 for said
 targets.

 Though it would be nice for the D implementation to be kept to 
 a subset
 that is backwards compatible with 2.062 (or whatever version we 
 decide to
 make the switch at), that is something I cannot guarantee.


 Regards
Could compiling the D compiler in D to LLVM bitcode on a working platform and then compiling the bitcode on the target platform solve the issue (at least a part of it)?
May 20 2013
prev sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Thursday, 9 May 2013 at 09:11:05 UTC, Thomas Koch wrote:
 There are languages in Debian that rely on themselves to be 
 build and it's a
 headache to support those languages on all architectures.
Wouldn't the "normal" workflow for porting to a new platform be to start out with a cross-compiler anyway? David
May 09 2013
Iain Buclaw <ibuclaw ubuntu.com> writes:
On 9 May 2013 12:50, David Nadlinger <see klickverbot.at> wrote:

 On Thursday, 9 May 2013 at 09:11:05 UTC, Thomas Koch wrote:

 There are languages in Debian that rely on themselves to be build and
 it's a
 headache to support those languages on all architectures.
Wouldn't the "normal" workflow for porting to a new platform be to start out with a cross-compiler anyway? David
Currently... only if the target platform does not have a native C++ compiler.

--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 09 2013
Iain Buclaw <ibuclaw ubuntu.com> writes:
On 9 May 2013 13:06, Iain Buclaw <ibuclaw ubuntu.com> wrote:

 On 9 May 2013 12:50, David Nadlinger <see klickverbot.at> wrote:

 On Thursday, 9 May 2013 at 09:11:05 UTC, Thomas Koch wrote:

 There are languages in Debian that rely on themselves to be build and
 it's a
 headache to support those languages on all architectures.
Wouldn't the "normal" workflow for porting to a new platform be to start out with a cross-compiler anyway? David
Currently... only if the target platform does not have a native c++ compiler.
Though that assumes the target platform has a C compiler already... :)

--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 09 2013
Iain Buclaw <ibuclaw ubuntu.com> writes:
On May 5, 2013 2:36 PM, "Iain Buclaw" <ibuclaw gdcproject.org> wrote:
 Daniel and/or David,

 We should list down in writing the issues preventing DMD, GDC, and LDC
having a shared code base. From what David has shown me, LDC will need the most work for this, but I'll list down what I can remember.
 1. Support extern(C++) classes so can have a split C++/D implementation
of eg: Expression and others.
 2. Support representing integers and floats to a greater precision than
what the host can natively support. In D there's BigInt for integral types, and there's a possibility of using std.numeric for floats. For me, painless conversion between eg: BigInt <-> GCC's double_int is a requirement, but that is more of an after thought at this point in time.

Actually, the more I sit down and think about it, the more I question
whether or not it is a good idea for the D D front end to have a dependency
on phobos.   Maybe I should stop thinking in general.  :)

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 11 2013
next sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Iain Buclaw" <ibuclaw ubuntu.com> wrote in message 
news:mailman.1201.1368284962.4724.digitalmars-d puremagic.com...
 Actually, the more I sit down and think about it, the more I question
 whether or not it is a good idea for the D D front end to have a 
 dependency
 on phobos.   Maybe I should stop thinking in general.  :)
Yeah, the compiler can't depend on phobos. But if we really need to, we can clone a chunk of phobos and add it to the compiler. Just so long as there isn't a loop. BigInt is a pretty good candidate.
May 11 2013
parent reply "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 15:16:29 UTC, Daniel Murphy wrote:
 "Iain Buclaw" <ibuclaw ubuntu.com> wrote in message
 news:mailman.1201.1368284962.4724.digitalmars-d puremagic.com...
 Actually, the more I sit down and think about it, the more I 
 question
 whether or not it is a good idea for the D D front end to have 
 a dependency
 on phobos.   Maybe I should stop thinking in general.  :)
Yeah, the compiler can't depend on phobos.
Why? If we keep a "must compile with several past versions" policy anyway, what would make Phobos special? David
May 11 2013
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger wrote:
 On Saturday, 11 May 2013 at 15:16:29 UTC, Daniel Murphy wrote:
 "Iain Buclaw" <ibuclaw ubuntu.com> wrote in message
 news:mailman.1201.1368284962.4724.digitalmars-d puremagic.com...
 Actually, the more I sit down and think about it, the more I 
 question
 whether or not it is a good idea for the D D front end to 
 have a dependency
 on phobos.   Maybe I should stop thinking in general.  :)
Yeah, the compiler can't depend on phobos.
Why? If we keep a "must compile with several past versions" policy anyway, what would make Phobos special? David
It prevents the use of newer features of D in phobos.
May 11 2013
parent reply "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 16:08:02 UTC, deadalnix wrote:
 On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger wrote:
 If we keep a "must compile with several past versions" policy 
 anyway, what would make Phobos special?

 David
It prevent the use of newer feature of D in phobos.
?! It prevents the use of newer Phobos features in the compiler, but we would obviously use the Phobos version that comes with the host D compiler to compile the frontend, not the version shipping with the frontend. Maybe I'm missing something obvious, but I really can't see the issue here. David
May 11 2013
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 11 May 2013 at 16:15:13 UTC, David Nadlinger wrote:
 On Saturday, 11 May 2013 at 16:08:02 UTC, deadalnix wrote:
 On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger 
 wrote:
 If we keep a "must compile with several past versions" policy 
 anyway, what would make Phobos special?

 David
It prevent the use of newer feature of D in phobos.
?! It prevents the use of newer Phobos features in the compiler, but we would obviously use the Phobos version that comes with the host D compiler to compile the frontend, not the version shipping with the frontend. Maybe I'm missing something obvious, but I really can't see the issue here. David
No, that is what has been said: you've got to fork phobos and ship your own with the compiler.
May 11 2013
parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 16:27:37 UTC, deadalnix wrote:
 On Saturday, 11 May 2013 at 16:15:13 UTC, David Nadlinger wrote:
 On Saturday, 11 May 2013 at 16:08:02 UTC, deadalnix wrote:
 On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger 
 wrote:
 If we keep a "must compile with several past versions" 
 policy anyway, what would make Phobos special?

 David
It prevent the use of newer feature of D in phobos.
?! It prevents the use of newer Phobos features in the compiler, but we would obviously use the Phobos version that comes with the host D compiler to compile the frontend, not the version shipping with the frontend. Maybe I'm missing something obvious, but I really can't see the issue here. David
No, that is what have been said : you got to fork phobos and ship your own with the compiler.
I still don't get what your point is. To build any D application (which might be a D compiler or not), you need a D compiler on your host system. This D compiler will come with druntime, Phobos and any number of other libraries installed.

Now, if the application you are building using that host compiler is DMD, you will likely use that new DMD to build a (newer) version of druntime and Phobos later on. But this doesn't have anything to do with what libraries of the host system the application can or can't use. No fork in sight anywhere.

David
May 11 2013
prev sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:llovknbpvcnksinsnpfk forum.dlang.org...
 On Saturday, 11 May 2013 at 15:16:29 UTC, Daniel Murphy wrote:
 "Iain Buclaw" <ibuclaw ubuntu.com> wrote in message
 news:mailman.1201.1368284962.4724.digitalmars-d puremagic.com...
 Actually, the more I sit down and think about it, the more I question
 whether or not it is a good idea for the D D front end to have a 
 dependency
 on phobos.   Maybe I should stop thinking in general.  :)
Yeah, the compiler can't depend on phobos.
Why? If we keep a "must compile with several past versions" policy anyway, what would make Phobos special? David
Yes it's possible, but it seems like a really bad idea because:
- Phobos is huge
- Changes in phobos now have the potential to break the compiler

If you decide that all later versions of the compiler must compile with all earlier versions of phobos, then those phobos modules are unable to change. If you do it the other way and say old versions of the compiler must be able to compile the newer compilers and their versions of phobos, you've locked phobos to an old subset of D. (And effectively made the compiler source base enormous.)

The nice middle ground is you take the chunk of phobos you need, add it to the compiler source, and say 'this must always compile with earlier versions of the compiler'.
May 11 2013
next sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 17:10:51 UTC, Daniel Murphy wrote:
 If you decide that all later versions of the compiler must 
 compile with all
 earlier versions of phobos, then those phobos modules are 
 unable to change.
In (the rare) case of breaking changes, we could always work around them in the compiler source (depending on __VERSION__), rather than duplicating everything up-front. I believe *this* is the nice middle ground. David
May 11 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:mwkwqttkbdpmzvyviymq forum.dlang.org...
 On Saturday, 11 May 2013 at 17:10:51 UTC, Daniel Murphy wrote:
 If you decide that all later versions of the compiler must compile with 
 all
 earlier versions of phobos, then those phobos modules are unable to 
 change.
In (the rare) case of breaking changes, we could always work around them in the compiler source (depending on __VERSION__), rather than duplicating everything up-front. I believe *this* is the nice middle ground. David
That... doesn't sound very nice to me. How much of phobos are we realistically going to need?
May 11 2013
parent reply "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 17:23:53 UTC, Daniel Murphy wrote:
 That... doesn't sound very nice to me.  How much of phobos are 
 we
 realistically going to need?
All of it? Well, not quite, but large parts at least. If we are going to stick to the C subset of the language, there is little point in translating it to D in the first place.

Of course, there will be some restrictions arising from the fact that the code base needs to work with D versions from a year back or so. But to me duplicating the whole standard library inside the compiler source seems like maintenance hell.

David
May 11 2013
next sibling parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:bwkwvbjdykrnsdezprls forum.dlang.org...
 On Saturday, 11 May 2013 at 17:23:53 UTC, Daniel Murphy wrote:
 That... doesn't sound very nice to me.  How much of phobos are we
 realistically going to need?
All of it? Well, not quite, but large parts at least. If we are going to stick to the C subset of the language, there is little point in translating it to D in the first place.
I disagree. Phobos is great, but there are thousands of things in the language itself that make it much more pleasant and effective than C++.
 Of course, there will be some restrictions arising from the fact that the 
 code base needs to work with D versions from a year back or so. But to me 
 duplicating the whole standard library inside the compiler source seems 
 like maintenance hell.

 David
I agree. But I was thinking much longer term compatibility, and a much smaller chunk of phobos.
May 11 2013
Iain Buclaw <ibuclaw ubuntu.com> writes:
On May 11, 2013 6:35 PM, "David Nadlinger" <see klickverbot.at> wrote:
 On Saturday, 11 May 2013 at 17:23:53 UTC, Daniel Murphy wrote:
 That... doesn't sound very nice to me.  How much of phobos are we
 realistically going to need?
All of it? Well, not quite, but large parts at least. If we are going to stick to the C subset of the language, there is little
point in translating it to D in the first place.
 Of course, there will be some restrictions arising from the fact that the
code base needs to work with D versions from a year back or so. But to me duplicating the whole standard library inside the compiler source seems like maintenance hell.
 David
I don't think it would be maintenance hell in the slightest. For instance, the BigInt implementation is big, BIG. :) What would be ported to the compiler may be influenced by BigInt, but would be a limited subset of its functionality tweaked for the purpose of use in the front end.

I am more concerned from GDC's perspective of things.  Especially when it comes to building on hosts that may have phobos disabled (this is a configure switch).

Regards
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 11 2013
Johannes Pfau <nospam example.com> writes:
On Sat, 11 May 2013 23:51:36 +0100, Iain Buclaw <ibuclaw ubuntu.com> wrote:

 
 I am more concerned from GDC's perspective of things.  Especially
 when it comes to building from hosts that may have phobos disabled
 (this is a configure switch).
 
Indeed. Right now we can compile and run GDC on every system which has a C++ compiler. We can compile D code on all those platforms even if we don't have druntime or phobos support there.

Using phobos means that we would always need a complete & working phobos port (at least some GC work, platform specific headers, TLS, ...) on the host machine, even if we:

* Only want to compile D code which doesn't use phobos / druntime at all.

* Create a compiler which runs on A but generates code for B. Now we also need a working phobos port on A. (Think of a sh4 -> x86 cross compiler. This works now; it won't work when the frontend has been ported to D / phobos.)

(I do understand why it would be nice to use phobos though. Hacking some include path code right now I wish I could use std.path...)
May 12 2013
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/11/13 1:10 PM, Daniel Murphy wrote:
 Yes it's possible, but it seems like a really bad idea because:
 - Phobos is huge
 - Changes in phobos now have the potential to break the compiler
The flipside is:

- Phobos offers many amenities and opportunities for reuse
- Breakages in Phobos will be experienced early on a large system using them

I've talked about this with Simon Peyton-Jones, who was unequivocal in asserting that writing the Haskell compiler in Haskell has had enormous benefits in improving its quality.

Andrei
May 11 2013
next sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu 
wrote:
 - Breakages in Phobos will be experienced early on a large 
 system using them

 I've talked about this with Simon Peyton-Jones who was 
 unequivocal to assert that writing the Haskell compiler in 
 Haskell has had enormous benefits in improving its quality.
This. If we aren't confident that we can write and maintain a large real-world application in D just yet, we must pull the emergency brakes on the whole DDDMD effort, right now. David
May 11 2013
next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 17:48:27 UTC, David Nadlinger wrote:
 […] the whole DDDMD effort […]
Whoops, must be a Freudian slip, revealing how much I'd like to see the D compiler being written in idiomatic D. ;) David
May 11 2013
prev sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:wynfxitcgpiggwemrmkx forum.dlang.org...
 On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu wrote:
 - Breakages in Phobos will be experienced early on a large system using 
 them

 I've talked about this with Simon Peyton-Jones who was unequivocal to 
 assert that writing the Haskell compiler in Haskell has had enormous 
 benefits in improving its quality.
This. If we aren't confident that we can write and maintain a large real-world application in D just yet, we must pull the emergency brakes on the whole DDDMD effort, right now. David
I'm confident in D, just not in phobos. Even if phobos didn't exist, we'd still be in better shape using D than C++.

What exactly are we going to need from phobos? sockets? std.datetime? std.regex? std.container?

If we use them in the compiler, we effectively freeze them. We can't use the new modules, because the old toolchains don't have them. We can't fix old broken modules because the compiler depends on them. If you add code to work around old modules being gone in later versions, you pretty much end up moving the source into the compiler after all.

If we only need to be able to compile with a version from 6 months ago, this is not a problem. A year and it's still workable. But two years? Three? We can get something right here that gcc got so horribly wrong.
May 11 2013
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/11/13 2:15 PM, Daniel Murphy wrote:
 "David Nadlinger"<see klickverbot.at>  wrote in message
 news:wynfxitcgpiggwemrmkx forum.dlang.org...
 On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu wrote:
 - Breakages in Phobos will be experienced early on a large system using
 them

 I've talked about this with Simon Peyton-Jones who was unequivocal to
 assert that writing the Haskell compiler in Haskell has had enormous
 benefits in improving its quality.
This. If we aren't confident that we can write and maintain a large real-world application in D just yet, we must pull the emergency brakes on the whole DDDMD effort, right now. David
I'm confident in D, just not in phobos. Even if phobos didn't exist, we'd still be in better shape using D than C++. What exactly are we going to need from phobos? sockets? std.datetime? std.regex? std.container? If we use them in the compiler, we effectively freeze them. We can't use the new modules, because the old toolchains don't have them. We can't fix old broken modules because the compiler depends on them. If you add code to work around old modules being gone in later versions, you pretty much end up moving the source into the compiler after all. If we only need to be able to compile with a version from 6 months ago, this is not a problem. A year and it's still workable. But two years? Three? We can get something right here that gcc got so horribly wrong.
But you're exactly enumerating the problems any D user would face when we make breaking changes to Phobos. Andrei
May 11 2013
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 11 May 2013 at 18:15:22 UTC, Daniel Murphy wrote:
 If we use them in the compiler, we effectively freeze them.  We 
 can't use
 the new modules, because the old toolchains don't have them.
Fair enough, but in such a case we could always add the parts of them we really need to the compiler source until the module is present in the last supported version. The critical difference of this scenario to your approach is that the extra maintenance burden is limited in time: The code is guaranteed to be removed again after (say) a year, and as Phobos stabilizes more and more, the total amount of such "compatibility" code will go down as well.
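
For instance, the __VERSION__ guard suggested above makes such a temporary shim cheap; the version number and API difference below are hypothetical, purely to show the mechanism:

---
static if (__VERSION__ >= 2100)
    import std.algorithm.searching : canFind;
else
    import std.algorithm : canFind;
---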
 We can't fix
 old broken modules because the compiler depends on them.
I don't see your point here:

1) The same is true for any client code out there. The only difference is that we now directly experience what any D library writer out there has to go through anyway, if they want their code to work with multiple compiler releases.

2) If a module is so broken that any "fix" would break all client code, we probably are not going to use it in the compiler anyway.
 If you add code to
 work around old modules being gone in later versions, you 
 pretty much end up
 moving the source into the compiler after all.
Yes, but how often do you think this will happen? At the current point, the barrier for such changes should be quite high anyway. The amount of D2 code in the wild is already non-negligible and growing steadily.
 If we only need to be able to compile with a version from 6 
 months ago, this
 is not a problem.  A year and it's still workable.  But two 
 years?  Three?
 We can get something right here that gcc got so horribly wrong.
Care to elaborate on that? David
May 11 2013
Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 11-May-2013 22:15, Daniel Murphy wrote:
 If we aren't confident that we can write and maintain a large real-world
 application in D just yet, we must pull the emergency brakes on the whole
 DDDMD effort, right now.

 David
I'm confident in D, just not in phobos. Even if phobos didn't exist, we'd still be in better shape using D than C++. What exactly are we going to need from phobos? sockets? std.datetime? std.regex? std.container?
Sockets may come in handy one day. Caching compiler daemon etc. std.container well ... mm ... eventually.
 If we use them in the compiler, we effectively freeze them.  We can't use
 the new modules, because the old toolchains don't have them.  We can't fix
 old broken modules because the compiler depends on them.  If you add code to
 work around old modules being gone in later versions, you pretty much end up
 moving the source into the compiler after all.
I propose a different middle ground: define a minimal subset of phobos, compilable and usable separately. Then full phobos will depend on it in turn (or rather contain it).

Related to my recent thread on limiting inter-dependencies - we will have to face that problem while making a subset of phobos. It has some operational costs but will limit the frozen surface.

--
Dmitry Olshansky
May 11 2013
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu 
wrote:
 On 5/11/13 1:10 PM, Daniel Murphy wrote:
 Yes it's possible, but it seems like a really bad idea because:
 - Phobos is huge
 - Changes in phobos now have the potential to break the 
 compiler
The flipside is: - Phobos offers many amenities and opportunities for reuse - Breakages in Phobos will be experienced early on a large system using them I've talked about this with Simon Peyton-Jones who was unequivocal to assert that writing the Haskell compiler in Haskell has had enormous benefits in improving its quality.
Except that now, it is a pain to migrate old Haskell stuff to newer Haskell stuff if you missed several compiler releases. You end up building recursively from the native version to the version you want.

We have an implementation in C++ that works; we have to ensure that whatever port of DMD is made in D, it works with the C++ version.
May 11 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 20:40:46 deadalnix wrote:
 Except that now, it is a pain to migrate old haskell stuff to
 newer Haskell stuff if you missed several compiler releases.
 
 You ends up building recursively from the native version to the
 version you want.
Yeah. And I'm stuck with the opposite problem at the moment. I have to be able to build old Haskell code without updating it, but I don't have an older version of ghc built currently, and getting a version old enough to compile my code has turned out to be a royal pain, because the old compiler won't compile with the new compiler. I don't even know if I'm going to be able to do it.

If you're always moving forward, you're okay, but if you have to deal with older code, then you quickly run into trouble if the compiler is written in an up-to-date version of the language that it's compiling. At least at this point, if you needed something like 2.059 for some reason, you can just grab 2.059, compile it, and use it with your code. But if the compiler were written in D, and the version of D with 2.059 was not fully compatible with the current version, then compiling 2.059 would become a nightmare.

The situation between a normal program and the compiler is quite different. With a normal program, if your code isn't going to work with the current compiler due to language or library changes, then you just grab an older version of the compiler and use that (possibly upgrading your code later if you intend to maintain it long term). But if it's the compiler that you're trying to compile, then you're screwed by any language or library changes that affect the compiler, because it could very well become impossible to compile older versions of the compiler.

Yes, keeping language and library changes to a minimum reduces the problem, but unless they're absolutely frozen, you risk problems. Even changes with high ROI (like making implicit fall-through on switch statements illegal) could make building older compilers impossible.

So, whatever we do with porting dmd to D, we need to be very careful. We don't want to lock ourselves in so that we can't make changes to the language or libraries even when we really need to, but we don't want to make it too difficult to build older versions of the compiler for people who have to either.

At the extreme, we could end up in a situation where you have to grab the oldest version of the compiler which was written in C++, and then build each newer version of the compiler in turn until you get to the one that you want.

- Jonathan M Davis
May 11 2013
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 11 May 2013 at 21:09:57 UTC, Jonathan M Davis wrote:
 On Saturday, May 11, 2013 20:40:46 deadalnix wrote:
 Except that now, it is a pain to migrate old haskell stuff to
 newer Haskell stuff if you missed several compiler releases.
 
 You ends up building recursively from the native version to the
 version you want.
Yeah. And I'm stuck with the opposite problem at the moment. I have to be able to build old haskell code without updating it, but I don't have an older version of ghc built currently, and getting a version old enough to compile my code has turned out to be a royal pain, because the old compiler won't compile with the new compiler. I don't even know if I'm going to be able to do it. If you're always moving forward, you're okay, but if you have to deal with older code, then you quickly run into trouble if the compiler is written in an up-to-date version of the language that it's compiling. At least at this point, if you needed something like 2.059 for some reason, you can just grab 2.059, compile it, and use it with your code. But if the compiler were written in D, and the version of D with 2.059 was not fully compatible with the current version, then compiling 2.059 would become a nightmare. The situation between a normal program and the compiler is quite different. With a normal program, if your code isn't going to work with the current compiler due to language or library changes, then you just grab an older version of the compiler and use that (possibly upgrading your code later if you intend to maintain it long term). But if it's the compiler that you're trying to compile, then you're screwed by any language or library changes that affect the compiler, because it could very well become impossible to compile older versions of the compiler. Yes, keeping language and library changes to a minimum reduces the problem, but unless they're absolutely frozen, you risk problems. Even changes with high ROI (like making implicit fall-through on switch statements illegal) could make building older compilers impossible. So, whatever we do with porting dmd to D, we need to be very careful. We don't want to lock ourselves in so that we can't make changes to the language or libraries even when we really need to, but we don't want to make it too difficult to build older versions of the compiler for people who have to either. At the extreme, we could end up in a situation where you have to grab the oldest version of the compiler which was written in C++, and then build each newer version of the compiler in turn until you get to the one that you want. - Jonathan M Davis
Can't this be eased with readily available binaries and cross compilation?

E.g. We drop the C++ version in 2.7. You want DMD version 2.8.2. The minimum needed to compile 2.8.2 is 2.7.5:

You can download a binary of 2.7.5 for any common system, cross compile 2.8.2 for your development system, voilà! If there are binaries available for your development system, then it becomes almost trivial.

Even if this wasn't possible for some reason, recursively building successive versions of the compiler is a completely automatable process (a sketch follows below). dmd+druntime+phobos compiles quickly enough that it's not a big problem.
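A minimal sketch of what that automation could look like, in D. The version list, directory layout, and HOST_DC make variable are made-up placeholders, not the actual dmd build interface:

import std.format : format;
import std.process : spawnShell, wait;
import std.stdio : writefln;

void main()
{
    // Hypothetical chain of releases; each one is built by its
    // predecessor, starting from the last C++-buildable compiler,
    // here assumed to be installed as "dmd-bootstrap".
    auto versions = ["2.7.5", "2.8.0", "2.8.2"];
    string host = "dmd-bootstrap";

    foreach (v; versions)
    {
        writefln("building dmd %s with %s", v, host);
        // Placeholder build command; a real setup would check out the
        // matching source tree and hand the host compiler to its makefile.
        auto status = wait(spawnShell(
            format("make -C dmd-%s HOST_DC=%s", v, host)));
        if (status != 0)
            throw new Exception("bootstrap failed at " ~ v);
        host = "dmd-" ~ v;  // the fresh compiler builds the next one
    }
}

With prebuilt binaries available for the host system, the chain collapses to the single cross-compile step described above.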
May 11 2013
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On 11.05.2013 23:43, John Colvin wrote:
 On Saturday, 11 May 2013 at 21:09:57 UTC, Jonathan M Davis wrote:
 [...]
Can't this be eased with readily available binaries and cross compilation? [...] Even if this wasn't possible for some reason, recursively building successive versions of the compiler is a completely automatable process. dmd+druntime+phobos compiles quickly enough that it's not a big problem.
I also don't understand the problem. This is how compilers get bootstrapped all the time. You just use toolchain X to build toolchain X+1.

--
Paulo
May 11 2013
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 23:43:19 John Colvin wrote:
 Can't this be eased with readily available binaries and cross
 compilation?
 
 E.g. We drop the C++ version in 2.7. You want DMD version 2.8.2.
 The minimum needed to compile 2.8.2 is 2.7.5:
 
 You can download a binary of 2.7.5 for any common system, cross compile 2.8.2 for your development system, voilà! If there are binaries available for your development system, then it becomes almost trivial.
Sure, but that assumes that you have access to a compatible binary. That's not always easy, and it can be particularly nasty in *nix. A binary built a few years ago stands a good chance of being completely incompatible with current systems even if all it depends on is glibc, let alone every other dependency that might have changed. It's even harder when your language is not one included by default in distros. For Windows, this probably wouldn't be an issue, but it could be a big one for *nix systems.
 Even if this wasn't possible for some reason, recursively
 building successive versions of the compiler is a completely
 automatable process. dmd+druntime+phobos compiles quickly enough
 that it's not a big problem.
Sure, assuming that you can get an old enough version of the compiler which you can actually compile. It's by no means an insurmountable problem, but you _do_ very much risk being in a situation where you literally have to compile the last C++ version of D's compiler and then compile every version of the compiler since then until you get to the one you want. And anyone who doesn't know that they could fall back to an older compiler which was written in C++ (let alone which version it was) is going to have a lot of trouble.

I don't know how much we want to worry about this, but it's very much a real-world problem when you don't have a binary for an older version of the compiler that you need, and the current compiler can't build it. It's been costing me a lot of time trying to sort that out in Haskell, thanks to the shift from the 98 standard to 2010.

- Jonathan M Davis
May 11 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/11/2013 2:09 PM, Jonathan M Davis wrote:
 I have to be able
 to build old haskell code without updating it,
I guess this is the crux of the matter. Why can't you update the source?
May 11 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 17:51:24 Walter Bright wrote:
 On 5/11/2013 2:09 PM, Jonathan M Davis wrote:
 I have to be able
 to build old haskell code without updating it,
I guess this is the crux of the matter. Why can't you update the source?
Well, in this particular case, it has to do with work on my master's thesis. I have the code in various stages of completion and need to be able to look at exactly what it was doing at each of those stages for writing the actual paper. Messing with the code risks changing what it does, and it wasn't necessarily in a great state anyway, given that I'm basically dealing with snapshots of the code over time, and not all of the snapshots are necessarily fully functional.

In the normal case, I'd definitely want to update my code, but I still might need to get the old code working before doing that, so that I can be sure of how it works before changing it. Obviously, things like solid unit testing help with that, but if you're dealing with code that hasn't been updated in a while, it's not necessarily a straightforward task to update it, especially when it's in a language that you're less familiar with. It's even worse if it's code written by someone else entirely, and you're just trying to get it working (which isn't my current situation, but that's often the case when building old code).

Ultimately, I don't know how much we need to care about situations where people need to compile an old version of the compiler and all they have is the new compiler. Much as it's been causing me quite a bit of grief in haskell, for the vast majority of people, it's not likely to come up. But I think that it at least needs to be brought up so that it can be considered when deciding what we're doing with regards to porting the front-end to D.

I think that the main reason that C++ avoids the problem is that it's so rarely updated (which causes a whole different set of problems). And while we obviously want to minimize breakage caused by changes to the library, language, or just due to bugs, they _are_ going to have an effect with regards to building older compilers if the compiler itself is affected by them. So, we might be better off restricting how much the compiler depends on - or we may decide that the workaround is to simply build the last C++ version of the compiler and then move forward from there. But I think that the issue should at least be raised.

- Jonathan M Davis
May 11 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/11/2013 6:09 PM, Jonathan M Davis wrote:
 So, we might be better off restricting how much the compiler depends on - or we
 may decide that the workaround is to simply build the last C++ version of the
 compiler and then move forward from there. But I think that the issue should
 at least be raised.
Last month I tried compiling an older 15-line D utility, and 10 of those lines broke due to phobos changes. I discussed this a bit with Andrei, and proposed that we keep around aliases for the old names, and put them inside a:

version (OldNames)
{
    alias newname oldname;
    ....
}

or something like that.
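To sketch it concretely (the module name and the particular renames here are illustrative guesses, not an actual Phobos change list):

module oldnames;  // hypothetical compatibility shim

import std.string : toLower, toUpper;

version (OldNames)
{
    // Keep the pre-rename Phobos names alive as aliases to the new ones.
    alias toLower tolower;
    alias toUpper toupper;
}

version (OldNames) unittest
{
    // Old-style code keeps compiling when built with -version=OldNames.
    assert(tolower("DMD") == "dmd");
}

Code built with dmd -version=OldNames keeps compiling unchanged; drop the flag, and the old names disappear again, restoring the usual pressure to migrate.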
May 11 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 18:18:27 Walter Bright wrote:
 On 5/11/2013 6:09 PM, Jonathan M Davis wrote:
 So, we might be better off restricting how much the compiler depends on -
 or we may decide that the workaround is to simply build the last C++
 version of the compiler and then move forward from there. But I think
 that the issue should at least be raised.
Last month I tried compiling an older 15-line D utility, and 10 of those lines broke due to phobos changes. I discussed this a bit with Andrei, and proposed that we keep around aliases for the old names, and put them inside a:

version (OldNames)
{
    alias newname oldname;
    ....
}

or something like that.
Well, that particular problem should be less of an issue in the long run. We renamed a lot of stuff in an effort to make the naming more consistent, but we haven't been doing much of that for a while now. And fortunately, those changes are obvious and quick.

But in theory, the way to solve the problem of your program not compiling with the new compiler is to compile with the compiler it was developed with in the first place, and then, if you want to upgrade your code, you upgrade your code and use it with the new compiler. The big problem is when you need to compile the compiler. You have a circular dependency due to the compiler depending on itself, and have to break it somehow. As long as newer compilers can compile older ones, you're fine, but that's bound to fall apart at some point unless you freeze everything.

But even bug fixes could make the old compiler not compile anymore, so unless the language and compiler (and anything they depend on) are extremely stable, you risk not being able to compile older compilers, and it's hard to guarantee that level of stability, especially if the compiler is not restricted in what features it uses or in what it uses from the standard library.

- Jonathan M Davis
May 11 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/11/2013 7:30 PM, Jonathan M Davis wrote:
 But in theory, the way to solve the problem of your program not compiling with the new compiler is to compile with the compiler it was developed with in the first place [...] The big problem is when you need to compile the compiler. You have a circular dependency due to the compiler depending on itself, and have to break it somehow. [...]
It isn't just compiling the older compiler; it is compiling it and verifying that it works. At least for dmd, we keep all the old binaries up and downloadable for that reason.
May 11 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, May 11, 2013 19:56:00 Walter Bright wrote:
 At least for dmd, we keep all the old binaries up and downloadable for that
 reason.
That helps considerably, though if the compiler is old enough, that won't work for Linux due to glibc changes and whatnot.

I expect that my particular situation is quite abnormal, but I thought that it was worth raising the point that if your compiler has to compile itself, then changes to the language (and anything else the compiler depends on) can be that much more costly, so it may be worth minimizing what the compiler depends on (as Daniel is suggesting). As we increase our stability, the likelihood of problems will shrink, but we'll probably never eliminate them.

Haskell's case is as bad as it is because they released a new standard for it and did it in a way that building the old one doesn't necessarily work anymore (and when it does, it tends to be a pain). It would be akin to dmd building itself when we went from D1 to D2, where the new compiler could only compile D1 when certain flags were used, and those flags were overly complicated to boot. So, it's much worse than simply going from one version of the compiler to the next.

- Jonathan M Davis
May 11 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-05-12 05:50, Jonathan M Davis wrote:

 That helps considerably, though if the compiler is old enough, that won't work
 for Linux due to glibc changes and whatnot.
My experience is the other way around. Binaries built on a newer version of Linux don't work on older ones, but binaries built on older versions usually work on newer versions.

--
/Jacob Carlborg
May 12 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 12 May 2013 10:39, Jacob Carlborg <doob me.com> wrote:

 On 2013-05-12 05:50, Jonathan M Davis wrote:

  That helps considerably, though if the compiler is old enough, that won't work for Linux due to glibc changes and whatnot.
My experience is the other way around. Binaries built on a newer version of Linux don't work on older ones, but binaries built on older versions usually work on newer versions.

--
/Jacob Carlborg
Depends... statically linked binaries will probably always work on the latest version; dynamically link and then you've got yourself a 'this libstdc++v5 doesn't exist anymore' problem.

--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 12 2013
next sibling parent reply "w0rp" <devw0rp gmail.com> writes:
On Sunday, 12 May 2013 at 09:48:58 UTC, Iain Buclaw wrote:
 Depends... statically linked binaries will probably always work on the latest version; dynamically link and then you've got yourself a 'this libstdc++v5 doesn't exist anymore' problem.
I am picturing a Linux workstation with the Post-It note "DO NOT UPDATE" stuck to it.
May 12 2013
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 12 May 2013 11:08, w0rp <devw0rp gmail.com> wrote:

 [...]
 I am picturing a Linux workstation with the Post-It note "DO NOT UPDATE" stuck to it.
:D The only reason you'd have for that post-it note is if you were running some application that you built yourself, obtained from a third party vendor, or otherwise got from outside the distribution's repository. For instance, I've had some linux ports of games break on me once after an upgrade. And I've even got a company gcc that does not work on Debian/Ubuntu. There's nothing wrong with binary compatibility there, just that they implemented a multi-arch directory structure, so everything is in a different place to what the vanilla gcc expects. ;)

--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 12 2013
prev sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Sunday, 12 May 2013 at 09:48:58 UTC, Iain Buclaw wrote:
 [...]
Depends... statically linked binaries will probably always work on the latest version; dynamically link and then you've got yourself a 'this libstdc++v5 doesn't exist anymore' problem.
So surely we can just offer a full history of statically linked binaries, problem solved?
May 12 2013
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 12 May 2013 11:39, John Colvin <john.loughran.colvin gmail.com> wrote:

 [...]
So surely we can just offer a full history of statically linked binaries, problem solved?
The historical quirk of binary compatibility on Linux is OT to the problem I questioned, so no.

--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
May 12 2013
prev sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Jonathan M Davis" <jmdavisProg gmx.com> wrote in message 
news:mailman.1222.1368325870.4724.digitalmars-d puremagic.com...
 The big problem is when you need to compile the compiler. You have a circular dependency due to the compiler depending on itself, and have to break it somehow. As long as newer compilers can compile older ones, you're fine, but that's bound to fall apart at some point unless you freeze everything. [...]

 - Jonathan M Davis
My thought was that you ensure (for the foreseeable future) that all D versions of the compiler compile with the most recent C++ version of the compiler.
May 11 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/11/2013 10:25 PM, Daniel Murphy wrote:
 My thought was that you ensure (for the foreseeable future) that all D
 versions of the compiler compile with the most recent C++ version of the
 compiler.
That would likely mean that the D compiler sources must be compilable with 2.063.
May 12 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:kmnk08$3qr$1 digitalmars.com...
 On 5/11/2013 10:25 PM, Daniel Murphy wrote:
 My thought was that you ensure (for the foreseeable future) that all D
 versions of the compiler compile with the most recent C++ version of the
 compiler.
That would likely mean that the D compiler sources must be compilable with 2.063.
Yes. And anybody with a C++ compiler can build the latest release.
May 12 2013
prev sibling parent "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Saturday, 11 May 2013 at 15:09:24 UTC, Iain Buclaw wrote:
 Actually, the more I sit down and think about it, the more I question whether or not it is a good idea for the D D front end to have a dependency on phobos. Maybe I should stop thinking in general. :)

 Regards
Let me restate the issues, to be clear on what I think is being said, and then give my opinion.

== On GDC:

There is a flag to have the compiler built without dependencies on druntime/phobos. Someone interested in a Phobos-free compiler would then be required to have Phobos to build their compiler.

- While this is the same person, I don't see that they will require the same restriction when building the compiler. My guess is that the environment used to build the compiler has fewer restrictions, such as having gcc/ubuntu available. Thus it is reasonable to expect them to have the needed libraries to build their compiler.

- Similarly, even if we restrict ourselves to just using druntime, the one interested in a druntime-free compiler still runs into the issue.

== On Compiling older Compilers:

Check out the source for an older compiler and gcc will build it. By switching to D, not only must we locate the source for the compiler we are building, we must also have the version of D used to build that compiler (or one within some window).

- I think it would be positive to say that each dmd version compiles with the previous release and itself (possibly with -d). This gives a feel for what changes are happening, and the more Phobos used the better.

- We can't eliminate the problem; if we rely only on druntime, everything still applies there. Instead we just need a consistent and/or well-documented statement of which compiler versions compile which compiler versions (a sketch of such a table follows below).

In conclusion, it is a real problem, but it is nothing we can eliminate. We should look at reducing the impact not through reducing the dependency, but through improving our processes for introducing breaking changes. Such concentration will not be limited to benefiting DMD; it will help every project which must deal with older code in some fashion.
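As a sketch of that documented statement, the compatibility information could live as checkable data rather than folklore. Everything here is invented for illustration; the struct, field names, and version pairs are not from any real release process:

import std.array : join;
import std.stdio : writefln;

struct BuildRule
{
    string sources;       // the compiler version being built
    string[] knownHosts;  // host compilers known to build it
}

void main()
{
    // Invented entries; a real table would be maintained per release.
    auto matrix = [
        BuildRule("2.064", ["last C++ build", "2.063"]),
        BuildRule("2.065", ["2.063", "2.064"]),
    ];

    foreach (rule; matrix)
        writefln("dmd %s builds with: %s", rule.sources,
                 rule.knownHosts.join(", "));
}

Whether the table lives in code, a wiki, or the release notes matters less than that it exists and gets checked against each release.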
May 13 2013