
digitalmars.D - Migrating dmd to D?

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Hello,


Walter and I have had a long conversation about the next radical thing 
to do to improve D's standing. Like others in this community, we believe 
it's a good time to consider bootstrapping the compiler. Having the D 
compiler written in D has quite a few advantages, among them taking 
advantage of D's features and having a large codebase that would be its 
own test harness.

With this post we'd like to open a dialog about how this large project 
can be initiated and driven through to completion. Our initial basic ideas are:

1. Implement the dtoh standalone program that takes a D module and 
generates its corresponding C++ header.

2. Use dtoh to initiate and conduct an incremental port of the compiler. 
At given points throughout the code D code will coexist and link with 
C++ code.

3. At a point in the future the last C++ module will be replaced with a 
D module. Going forward there will be no more need for a C++ compiler to 
build the compiler (except as a bootstrapping test).

It is essential that we get support from the larger community for this. 
This is a large project that should enjoy strong leadership apart from 
Walter himself (as he is busy with dynamic library support which is 
strategic) and robust participation from many of us.

Please chime in with ideas on how to make this happen.


Thanks,

Andrei
Feb 27 2013
next sibling parent reply "timotheecour" <thelastmammoth gmail.com> writes:
A related post from a month back that didn't get any response...
[compiler bootstrapping]
http://forum.dlang.org/thread/qhqgqsmgrmdustoiauzu forum.dlang.org
Feb 27 2013
parent reply "timotheecour" <thelastmammoth gmail.com> writes:
 Use dtoh to initiate and conduct an incremental port of the 
 compiler.
How about going the other way around? Use SWIG to make the existing dmd C++ code available to D, so we can advance with a mix of D and C++ code until all of the C++ code is converted. I was able to successfully convert large code bases from C++ to D using SWIG (e.g. OpenCV, SFML, etc.). It's the most hands-off way, with a very minimal interface file that can recursively make things accessible with fine-grained control (for OpenCV the interface file was < 200 LOC).
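A minimal SWIG interface file of the kind described might look like this; the module and header names are purely illustrative, not dmd's actual layout:

```swig
// dmd.i - hypothetical SWIG interface file (illustrative names)
%module dmd

%{
/* headers compiled into the generated wrapper */
#include "expression.h"
#include "statement.h"
%}

/* recursively expose everything these headers declare */
%include "expression.h"
%include "statement.h"
```

SWIG generates the glue code from these few lines, which is why the interface file can stay under a couple hundred lines even for a large code base.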
Feb 27 2013
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 28 February 2013 at 00:55:44 UTC, timotheecour wrote:
 Use dtoh to initiate and conduct an incremental port of the 
 compiler.
 How about going the other way around? Using swig to make existing dmd C++ code available to D, so we can advance with a mix of D and C++ code until all of C++ code is converted. I was able to successfully convert large code bases from C++ to D using swig (eg: opencv, sfml, etc). It's the most hands-off way, with very minimal interface file that can recursively make things accessible with fine grained control (for opencv the interface file was < 200 loc).
OT: Could you perhaps detail the process you went through to get OpenCV to D? It would be a big help to me, as I'm currently staring down the barrel of having to re-implement a chunk of OpenCV in D for a data processing app.
Feb 27 2013
parent FG <home fgda.pl> writes:
On 2013-02-28 02:03, John Colvin wrote:
 Could you perhaps detail the process you went through to get opencv to D? It
 would be a big help to me as I'm currently staring down the barrel of having to
 re-implement a chunk of OpenCV in D for a data processing app.
I second that request! :)
Feb 28 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/27/13 7:55 PM, timotheecour wrote:
 Use dtoh to initiate and conduct an incremental port of the compiler.
 How about going the other way around? Using swig to make existing dmd C++ code available to D, so we can advance with a mix of D and C++ code until all of C++ code is converted. I was able to successfully convert large code bases from C++ to D using swig (eg: opencv, sfml, etc). It's the most hands-off way, with very minimal interface file that can recursively make things accessible with fine grained control (for opencv the interface file was < 200 loc).
I think that's a fine idea but I also believe dtoh would be a mightily powerful program in and by itself. Once available, it would make migration of C++ projects to D possible and easy.

Andrei
Feb 27 2013
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 28 February 2013 at 01:05:08 UTC, Andrei 
Alexandrescu wrote:
 I think that's a fine idea but I also believe dtoh would be a 
 mightily powerful program in and by itself.
I think the one I started a while ago is still sitting up on github.
Feb 27 2013
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/27/13 8:12 PM, Adam D. Ruppe wrote:
 On Thursday, 28 February 2013 at 01:05:08 UTC, Andrei Alexandrescu wrote:
 I think that's a fine idea but I also believe dtoh would be a mightily
 powerful program in and by itself.
I think the one I started a while ago is still sitting up on github.
Did you finish it?

Andrei
Feb 27 2013
prev sibling parent "Brad Anderson" <eco gnuk.net> writes:
On Thursday, 28 February 2013 at 01:12:47 UTC, Adam D. Ruppe 
wrote:
 On Thursday, 28 February 2013 at 01:05:08 UTC, Andrei 
 Alexandrescu wrote:
 I think that's a fine idea but I also believe dtoh would be a 
 mightily powerful program in and by itself.
I think the one I started a while ago is still sitting up on github.
Yep. https://github.com/adamdruppe/tools/blob/dtoh/dtoh.d
Feb 27 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-28 02:05, Andrei Alexandrescu wrote:

 I think that's a fine idea but I also believe dtoh would be a mightily
 powerful program in and by itself. Once available, it would make
 migration of C++ projects to D possible and easy.
It depends on where we want to put our time.

-- 
/Jacob Carlborg
Feb 27 2013
prev sibling parent Chad Joan <chadjoan gmail.com> writes:
On 02/27/2013 08:05 PM, Andrei Alexandrescu wrote:
 On 2/27/13 7:55 PM, timotheecour wrote:
 Use dtoh to initiate and conduct an incremental port of the compiler.
 How about going the other way around? Using swig to make existing dmd C++ code available to D, so we can advance with a mix of D and C++ code until all of C++ code is converted. I was able to successfully convert large code bases from C++ to D using swig (eg: opencv, sfml, etc). It's the most hands-off way, with very minimal interface file that can recursively make things accessible with fine grained control (for opencv the interface file was < 200 loc).
 I think that's a fine idea but I also believe dtoh would be a mightily powerful program in and by itself. Once available, it would make migration of C++ projects to D possible and easy. Andrei
Isn't this what swig /does/ though? What is lacking?
Mar 09 2013
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 Walter and I have had a long conversation about the next 
 radical thing to do to improve D's standing. Like others in 
 this community, we believe it's a good time to consider 
 bootstrapping the compiler. Having the D compiler written in D 
 has quite a few advantages, among which taking advantages of 
 D's features and having a large codebase that would be its own 
 test harness.
I think having the D front-end in D is a nice idea, but maybe it's better to use a back-end written by someone else, so most development work can be spent on D itself.

Bye,
bearophile
Feb 27 2013
parent "eles" <eles eles.com> writes:
On Thursday, 28 February 2013 at 00:55:33 UTC, bearophile wrote:
 I think having the D front-end in D is a nice idea, but maybe 
 it's better to use a back-end written by someone else, so most 
 developing work will be spent on D itself.
Won't that hinder the gdc/gcc work?
Feb 28 2013
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Feb 27, 2013 at 07:37:50PM -0500, Andrei Alexandrescu wrote:
 Hello,
 
 Walter and I have had a long conversation about the next radical
 thing to do to improve D's standing. Like others in this community,
 we believe it's a good time to consider bootstrapping the compiler.
 Having the D compiler written in D has quite a few advantages, among
 which taking advantages of D's features and having a large codebase
 that would be its own test harness.
Aren't there already parts of a D compiler written by various community members? IIRC, Timon has a prototype D compiler that supports quite a good subset of D already. And I believe there are various D lexers and parsers lying around, some of which may serve as the basis of a bootstrapping D compiler.

Shouldn't we make use of these existing efforts instead of starting from ground zero?
 By this we'd like to initiate a dialog about how this large project
 can be initiated and driven through completion. Our initial basic
 ideas are:
 
 1. Implement the dtoh standalone program that takes a D module and
 generates its corresponding C++ header.
 
 2. Use dtoh to initiate and conduct an incremental port of the
 compiler. At given points throughout the code D code will coexist
 and link with C++ code.
 
 3. At a point in the future the last C++ module will be replaced
 with a D module. Going forward there will be no more need for a C++
 compiler to build the compiler (except as a bootstrapping test).
 
 It is essential that we get support from the larger community for
 this. This is a large project that should enjoy strong leadership
 apart from Walter himself (as he is busy with dynamic library
 support which is strategic) and robust participation from many of
 us.
[...]

How will this work with the continual stream of fixes that the current C++-based compiler is getting? I assume we're not just going to put DMD development on hold.

Also, wouldn't this be a good time to review some of the current designs in DMD that may be hampering the full implementation of features that we'd like, such as discrepancies with TDPL, etc.? Would it make sense to redesign some of the code currently causing hard-to-fix issues as we're porting that part of DMD into D? It seems a bit counterproductive to simply transcribe the current buggy code into D, only to rewrite it later when (if) we finally get round to fixing it.

Finally, I think somebody has brought up the idea of "freezing" a particular subset of D that the D compiler can use in its own code, preferably a reasonably simple subset that is safe from breaking changes down the road (it would be pathetic if a breaking change caused the compiler to be unable to compile itself, because the source code uses a language construct that was later deemed to need redesign). As DMD is ported over to D, it should be restricted to using only this subset of the language, so that it does not hamper future developments of the language unnecessarily.

T

-- 
Without geometry, life would be pointless. -- VS
Feb 27 2013
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/27/13 7:57 PM, H. S. Teoh wrote:
 On Wed, Feb 27, 2013 at 07:37:50PM -0500, Andrei Alexandrescu wrote:
 Hello,

 Walter and I have had a long conversation about the next radical
 thing to do to improve D's standing. Like others in this community,
 we believe it's a good time to consider bootstrapping the compiler.
 Having the D compiler written in D has quite a few advantages, among
 which taking advantages of D's features and having a large codebase
 that would be its own test harness.
Aren't there already parts of a D compiler written by various community members? IIRC, Timon has a prototype D compiler that supports quite a good subset of D already. And I believe there are various D lexers and parsers lying around, some of which may serve as the basis of a bootstrapping D compiler. Shouldn't we make use of these existing efforts instead of starting from ground zero?
Of course. This is the purpose of this entire discussion - to leverage existing and new ideas, talent, and code.
 By this we'd like to initiate a dialog about how this large project
 can be initiated and driven through completion. Our initial basic
 ideas are:

 1. Implement the dtoh standalone program that takes a D module and
 generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the
 compiler. At given points throughout the code D code will coexist
 and link with C++ code.

 3. At a point in the future the last C++ module will be replaced
 with a D module. Going forward there will be no more need for a C++
 compiler to build the compiler (except as a bootstrapping test).

 It is essential that we get support from the larger community for
 this. This is a large project that should enjoy strong leadership
 apart from Walter himself (as he is busy with dynamic library
 support which is strategic) and robust participation from many of
 us.
[...] How will this work with the continual stream of fixes that the current C++-based compiler is getting? I assume we're not just going to put DMD development on hold.
I don't have all the answers. I do have some ideas, though. I'm thinking we need a wedge first: dtoh in place, and one seed D file in the middle of the C++ project - for example, the module containing the main function.

Then I imagine there will be pull requests that delete entire modules and replace them with .d modules. We need to have a form of protocol that "freezes" modules that are under translation.
 Also, wouldn't this be a good time to review some of the current designs
 in DMD that may be hampering the full implementation of features that
 we'd like, such as discrepancies with TDPL, etc.? Would it make sense to
 redesign some of the code currently causing hard-to-fix issues as we're
 porting that part of DMD into D? It seems a bit counterproductive to
 simply transcribe the current buggy code into D, only to rewrite it
 later when (if) we finally get round to fixing it.
I think fixing while translating is difficult and should be approached on a case basis.
 Finally, I think somebody has brought up the idea of "freezing" a
 particular subset of D that the D compiler can use in its own code,
 preferably a reasonably simple subset that is safe from breaking changes
 down the road (it would be pathetic if a breaking change causes the
 compiler to be unable to compile itself, because the source code uses a
 language construct that was later deemed to need redesign). As DMD is
 ported over to D, it should be restricted to using only this subset of
 the language, so that it does not hamper future developments of the
 language unnecessarily.
As far as I can tell, the freeze is "last non-bootstrapped version is D 2.xxx" and go from there.

Andrei
Feb 27 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 4:57 PM, H. S. Teoh wrote:
 How will this work with the continual stream of fixes that the current
 C++-based compiler is getting? I assume we're not just going to put DMD
 development on hold.
I've done many projects consisting of converting a medium-sized code base from one language to another. The way that works is to do it incrementally. Incrementally means:

1. at each step (i.e. pull request) we will have a fully functioning D compiler that passes its test suite;

2. there is no divergence in code bases, because there is no divergent code base.
 Also, wouldn't this be a good time to review some of the current designs
 in DMD that may be hampering the full implementation of features that
 we'd like, such as discrepancies with TDPL, etc.? Would it make sense to
 redesign some of the code currently causing hard-to-fix issues as we're
 porting that part of DMD into D? It seems a bit counterproductive to
 simply transcribe the current buggy code into D, only to rewrite it
 later when (if) we finally get round to fixing it.
My experience chiming in - never ever ever attempt to refactor while translating. What always happens is you wind up with a mess that just doesn't work.
 Finally, I think somebody has brought up the idea of "freezing" a
 particular subset of D that the D compiler can use in its own code,
 preferably a reasonably simple subset that is safe from breaking changes
 down the road (it would be pathetic if a breaking change causes the
 compiler to be unable to compile itself, because the source code uses a
 language construct that was later deemed to need redesign). As DMD is
 ported over to D, it should be restricted to using only this subset of
 the language, so that it does not hamper future developments of the
 language unnecessarily.
Experience chiming in - a successful model is that the HEAD is compiled by the previous official release of D.
Feb 27 2013
next sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On Feb 28, 2013 1:31 AM, "Walter Bright" <newshound2 digitalmars.com> wrote:
 On 2/27/2013 4:57 PM, H. S. Teoh wrote:
 How will this work with the continual stream of fixes that the current
 C++-based compiler is getting? I assume we're not just going to put DMD
 development on hold.
 I've done many projects consisting of converting a medium sized code base
 from one language to another. The way that works is to do it incrementally. Incrementally means:
 1. at each step (i.e. pull request) we will have a fully functioning D
 compiler that passes its test suite
 2. there is no divergence in code bases because there is not a divergent
 code base.
 Also, wouldn't this be a good time to review some of the current designs
 in DMD that may be hampering the full implementation of features that
 we'd like, such as discrepancies with TDPL, etc.? Would it make sense to
 redesign some of the code currently causing hard-to-fix issues as we're
 porting that part of DMD into D? It seems a bit counterproductive to
 simply transcribe the current buggy code into D, only to rewrite it
 later when (if) we finally get round to fixing it.
 My experience chiming in - never ever ever attempt to refactor while
 translating. What always happens is you wind up with a mess that just doesn't work.
 Finally, I think somebody has brought up the idea of "freezing" a
 particular subset of D that the D compiler can use in its own code,
 preferably a reasonably simple subset that is safe from breaking changes
 down the road (it would be pathetic if a breaking change causes the
 compiler to be unable to compile itself, because the source code uses a
 language construct that was later deemed to need redesign). As DMD is
 ported over to D, it should be restricted to using only this subset of
 the language, so that it does not hamper future developments of the
 language unnecessarily.
 Experience chiming in - a successful model is that the HEAD is compiled
 by the previous official release of D.

Once HEAD is compiled by the previous release (or system D compiler), it
might be a good practice for HEAD to compile itself too. Then this compiler
built by HEAD will then build the library.

Regards
Iain
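The chain described above (previous release builds HEAD, HEAD rebuilds itself, the self-built compiler builds the library) is the classic multi-stage bootstrap. Sketched with purely illustrative tool names and flags:

```
# Hypothetical three-stage bootstrap; names and flags are illustrative.
dmd-prev -of=dmd.stage1 src/*.d       # 1. previous official release compiles HEAD
./dmd.stage1 -of=dmd.stage2 src/*.d   # 2. HEAD compiles itself
make -C druntime DMD=../dmd.stage2    # 3. the self-built compiler builds the library
```

Any divergence between the stage1 and stage2 compilers is then itself a useful smoke test of the new compiler.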
Feb 27 2013
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-28 07:45, Iain Buclaw wrote:

 Once HEAD is compiled by the previous release (or system D compiler), it
 might be a good practice for HEAD to compile itself too. Then this
 compiler built by HEAD will then build the library.
This is a good idea.

-- 
/Jacob Carlborg
Feb 27 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/28/13 1:45 AM, Iain Buclaw wrote:
 Once HEAD is compiled by the previous release (or system D compiler), it
 might be a good practice for HEAD to compile itself too. Then this
 compiler built by HEAD will then build the library.
Do you think there's a risk that bootstrapping causes trouble for gdc? Andrei
Feb 28 2013
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On Feb 28, 2013 3:02 PM, "Andrei Alexandrescu" <
SeeWebsiteForEmail erdani.org> wrote:
 On 2/28/13 1:45 AM, Iain Buclaw wrote:
 Once HEAD is compiled by the previous release (or system D compiler), it
 might be a good practice for HEAD to compile itself too. Then this
 compiler built by HEAD will then build the library.
Do you think there's a risk that bootstrapping causes trouble for gdc? Andrei
No more of a risk than bootstrapping for dmd. However, my main concern is that I'd rather see this happen at a time when we have ported to more architectures than just x86 and x86_64, leaving the cross-compiler step as a non-issue, as there is already a suitable D compiler on the targeted system.

Regards

-- 
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
Feb 28 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-28 02:22, Walter Bright wrote:

 I've done many projects consisting of converting a medium sized code
 base from one language to another. The way that works is to do it
 incrementally. Incrementally means:

 1. at each step (i.e. pull request) we will have a fully functioning D
 compiler that passes its test suite

 2. there is no divergence in code bases because there is not a divergent
 code base.

 Also, wouldn't this be a good time to review some of the current designs
 in DMD that may be hampering the full implementation of features that
 we'd like, such as discrepancies with TDPL, etc.? Would it make sense to
 redesign some of the code currently causing hard-to-fix issues as we're
 porting that part of DMD into D? It seems a bit counterproductive to
 simply transcribe the current buggy code into D, only to rewrite it
 later when (if) we finally get round to fixing it.
 My experience chiming in - never ever ever attempt to refactor while translating. What always happens is you wind up with a mess that just doesn't work.
 Finally, I think somebody has brought up the idea of "freezing" a
 particular subset of D that the D compiler can use in its own code,
 preferably a reasonably simple subset that is safe from breaking changes
 down the road (it would be pathetic if a breaking change causes the
 compiler to be unable to compile itself, because the source code uses a
 language construct that was later deemed to need redesign). As DMD is
 ported over to D, it should be restricted to using only this subset of
 the language, so that it does not hamper future developments of the
 language unnecessarily.
 Experience chiming in - a successful model is that the HEAD is compiled by the previous official release of D.
I agree with Walter here.

-- 
/Jacob Carlborg
Feb 27 2013
prev sibling next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 28 February 2013 at 00:37:50 UTC, Andrei 
Alexandrescu wrote:
 Hello,


 Walter and I have had a long conversation about the next 
 radical thing to do to improve D's standing. Like others in 
 this community, we believe it's a good time to consider 
 bootstrapping the compiler. Having the D compiler written in D 
 has quite a few advantages, among which taking advantages of 
 D's features and having a large codebase that would be its own 
 test harness.

 By this we'd like to initiate a dialog about how this large 
 project can be initiated and driven through completion. Our 
 initial basic ideas are:

 1. Implement the dtoh standalone program that takes a D module 
 and generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the 
 compiler. At given points throughout the code D code will 
 coexist and link with C++ code.

 3. At a point in the future the last C++ module will be 
 replaced with a D module. Going forward there will be no more 
 need for a C++ compiler to build the compiler (except as a 
 bootstrapping test).

 It is essential that we get support from the larger community 
 for this. This is a large project that should enjoy strong 
 leadership apart from Walter himself (as he is busy with 
 dynamic library support which is strategic) and robust 
 participation from many of us.

 Please chime in with ideas on how to make this happen.


 Thanks,

 Andrei
This is good news :) What will this mean for licensing? Will we be able to fully free the backend?
Feb 27 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/27/13 8:07 PM, John Colvin wrote:
 What will this mean for licensing? Will we be able to fully free the
 backend?
The backend will stay as is.

Andrei
Feb 27 2013
parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Thursday, 28 February 2013 at 01:12:06 UTC, Andrei 
Alexandrescu wrote:
 On 2/27/13 8:07 PM, John Colvin wrote:
 What will this mean for licensing? Will we be able to fully 
 free the
 backend?
The backend will stay as is. Andrei
Ah, so we're just bootstrapping the frontend, not the whole compiler. This presents a good opportunity to make the frontend completely backend-agnostic (I don't know how close it is currently).
Feb 27 2013
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 5:07 PM, John Colvin wrote:
 What will this mean for licensing? Will we be able to fully free the backend?
The backend will not be part of this conversion project in the foreseeable future. And besides, a translation of the backend into D will not void its license.
Feb 27 2013
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 2/27/13 9:37 PM, Andrei Alexandrescu wrote:
 Hello,


 Walter and I have had a long conversation about the next radical thing
 to do to improve D's standing. Like others in this community, we believe
 it's a good time to consider bootstrapping the compiler.
If you do it, I think it's an excellent opportunity to rewrite the compiler *from scratch*, using features in D, and probably using a better design. It's probably easier to design the compiler now that all the features are more or less known. I also remember that DMD didn't have a visitor of sorts for the semantic analysis.
Feb 27 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 5:11 PM, Ary Borenszweig wrote:
 If you do it, I think it's an excellent opportunity to rewrite the compiler
 *from scratch*, using features in D, and probably using a better design. It's
 probably easier to design the compiler now that all the features are more or
 less known. I also remember that DMD didn't have a visitor of sort for the
 semantic analysis.
My experience with such things is that, while tempting, it has a large probability of destroying the project entirely.
Feb 27 2013
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 2/27/13 10:58 PM, Walter Bright wrote:
 ures in D, and probably using a better design. It's
 probably easier to design the compiler now that all the features are
 more or
 less known. I also remember that DMD didn't have a visitor of sort for the
 semantic analysis.
Why? What happened?

If you don't use all of D's features and plan for the compiler to be written in D, I don't think the compiler will be a good candidate for stress-testing the language. It's also more fun to do it from scratch. What's the hurry?
Feb 27 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 6:12 PM, Ary Borenszweig wrote:
 Why? What happened?
There's a lot of lore in the original code involving arcana about how things really work. If you refactor and translate at the same time, you don't have an incremental conversion you can run through the test suite at each step. You wind up with one very large step change, and it doesn't work, and you're lost.

Another reason is that the people doing the translation/refactoring have an inadequate grasp of why the code is the way it is, so they just wind up breaking it. The resulting frustration and finger-pointing ruins everything.

Translate. Test. Verify. *THEN* refactor. Because when the verify step fails, and you have a one-to-one correspondence to the original code that does work, you can quickly find out what went wrong. And believe me, things go wrong in the translation at every step of the process.
 If you don't use all D features and plan the compiler for being written in D I
 don't think the compiler will be a good candidate for stress-testing the
language.
The point is not to use the compiler to stress test the language. NOT AT ALL. The point is to improve the compiler by taking advantage of what D offers.
 It's also more fun to do it from scratch. What's the hurry?
We have limited resources, and we shouldn't squander them on something that a lot of experience tells me will fail.

Hey, anyone can ignore me and go ahead and do it that way. I wish you the best of luck - sometimes us old coots are dead wrong - but forgive me if I'm not going to be terribly sympathetic if you ignore my advice and things go badly!
Feb 27 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-28 04:49, Walter Bright wrote:

 There's a lot of lore in the original code involving arcana about how
 things really work. If you refactor and translate at the same time, you
 don't have an incremental conversion you can run through the test suite
 at each step.
I interpreted Ary's post as basically proposing a clean-room implementation, not a translation of the existing code.
 Hey, anyone can ignore me and go ahead and do it that way. I wish you
 the best of luck - sometimes us old coots are dead wrong - but forgive
 me if I'm not going to be terribly sympathetic if you ignore my advice
 and things go badly!
There are already several people doing clean-room implementations, at least of the front end:

* Dil
* SDC
* A couple of lexers/parsers

-- 
/Jacob Carlborg
Feb 27 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 11:49 PM, Jacob Carlborg wrote:
 There are already several people doing clean room implementations. At least of
 the front end.
Why? The only point would be to change the license of the front end.
Feb 28 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-28 11:25, Walter Bright wrote:

 Why? The only point would be to change the license of the front end.
I don't know, I'm not doing it. Possible reasons:

* Fun
* Learning
* Changing the license
* DMD is not written in D
* DMD is not built/usable as a library
* DMD contains a lot of bugs

Although I don't know for sure if they're clean-room implementations or not, they are at least not direct translations.

-- 
/Jacob Carlborg
Feb 28 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/28/2013 03:02 PM, Jacob Carlborg wrote:
 On 2013-02-28 11:25, Walter Bright wrote:

 Why? The only point would be to change the license of the front end.
I don't know, I'm not doing it. Possibly reasons: * Fun
Yup.
 ...
 * Change the license
Actually I dislike the whole licensing issue.
 * DMD is not written in D
Yup.
 * DMD is not built/usable as a library
Yup. There should be a sufficiently simple frontend on which libraries for source code analysis and manipulation tools can be built.
 * DMD contains a lot of bugs
Yup, even the intention on the compiler developer side is buggy sometimes. DMD will keep breaking my code because the hand-wavy notion of a "forward reference error" somehow appears to be accepted.

* Having only one implementation harms the language quality, because people are more likely to be willing to accept arguably buggy or stupid behaviour, and there is no pressure to clarify details of the spec. (E.g. UDAs in DMD introduce an awkward and underpowered AST macro system; value range propagation is too conservative for bitwise operations; DMD contains undocumented features, like types as function arguments in a typeof, or invoking opCall via an assignment, etc.)

* wc -l reveals that the DMD front end source code is roughly 30 to 40 times larger than it should be for what it does, in my opinion. Refactoring it so that it shrinks by that factor in C++ is a lot harder than building it from scratch in D.

(* Knowing the guts of a front end means you can decide to add type system and syntax extensions for private use. :P)
 Although I don't know for sure if they're clean room implementations or
 not. They are at least not direct translations.
Mine is clean room.
Feb 28 2013
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Thu, 28 Feb 2013 18:44:31 +0100
schrieb Timon Gehr <timon.gehr gmx.ch>:

 * wc -l reveals that the DMD front end source code is roughly 30 to 40 
 times larger than it should be for what it does in my opinion.
That can only mean that you don't really know what it does, in my opinion. Sure, such a large code base accumulates duplicates, since not everybody knows about all the helper functions or copy and paste was the least intrusive bug fix somewhere, but you don't really believe that 94% of the front end is unnecessary, do you?
 (* Knowing the guts of a front end means you can decide to add type 
 system and syntax extensions for private use. :P)
Ah, I see where the wind blows ;) -- Marco
Feb 28 2013
next sibling parent Philippe Sigaud <philippe.sigaud gmail.com> writes:
 (* Knowing the guts of a front end means you can decide to add type
 system and syntax extensions for private use. :P)
Ah, I see where the wind blows ;)
Or, even better, an extensible FE with commonly distributed extensions like the Haskell compiler. And, of course, an AST macro system and a pony ;) No, scratch that, with a good macro system, we can have the pony.
Feb 28 2013
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/01/2013 07:48 AM, Marco Leise wrote:
 Am Thu, 28 Feb 2013 18:44:31 +0100
 schrieb Timon Gehr <timon.gehr gmx.ch>:

 * wc -l reveals that the DMD front end source code is roughly 30 to 40
 times larger than it should be for what it does in my opinion.
That can only mean that you don't really know what it does in my opinion.
I guess it (almost) implements the language. :)
 Sure such a large code base accumulates duplicates
 since not everybody knows about all helper functions or copy
 and paste was the least intrusive bug fix somewhere, but you
 don't really believe that 94% of the front end are
 unnecessary, do you?
I believe it is bloated. Maybe it's partly because it is written in C++.
 ...
Mar 01 2013
parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Friday, 1 March 2013 at 08:55:22 UTC, Timon Gehr wrote:
 On 03/01/2013 07:48 AM, Marco Leise wrote:
 Am Thu, 28 Feb 2013 18:44:31 +0100
 schrieb Timon Gehr <timon.gehr gmx.ch>:

 * wc -l reveals that the DMD front end source code is roughly 
 30 to 40
 times larger than it should be for what it does in my opinion.
That can only mean that you don't really know what it does in my opinion.
I guess it (almost) implements the language. :)
 Sure such a large code base accumulates duplicates
 since not everybody knows about all helper functions or copy
 and paste was the least intrusive bug fix somewhere, but you
 don't really believe that 94% of the front end are
 unnecessary, do you?
I believe it is bloated. Maybe it's partly because it is written in C++.
 ...
I have to go with Marco. What is the usual bloat factor between C++ and D? 2x at most? Unless you found some super-efficient way of writing a complex grammar, I don't see a reason there could be such a large difference. In fact, for this kind of program, I am not even sure the D code will be much smaller than the C++ one overall.
Mar 01 2013
prev sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
On 2/28/13 4:49 AM, Jacob Carlborg wrote:
 On 2013-02-28 04:49, Walter Bright wrote:

 There's a lot of lore in the original code involving arcana about how
 things really work. If you refactor and translate at the same time, you
 don't have an incremental conversion you can run through the test suite
 at each step.
I interpreted Ary's post as basically doing a clean room implementation. Not translate the existing code.
Yes, exactly.
Feb 28 2013
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 My experience with such things is that it, while tempting, has a 
 large probability of destroying the project entirely.
I agree. And translating code must be done in an extremely methodical way, if you want one chance to see a working result :-) In such work, taking short-cuts gives a high probability of producing trash. You have to go slowly, and double-test every intermediate step.

Bye,
bearophile
Feb 27 2013
parent reply Marco Leise <Marco.Leise gmx.de> writes:
There are > 1000 open bugs and well known, expected language
features not implemented. In my opinion, the compiler should
be ported after all important language features are finalized.
I don't mean syntax, but stuff that only bearophile has the
complete list of: shared, allocators, ...
Also DMD leaks memory -> it is tempting to use the GC -> DMD
will often be a whole lot slower in the end. :D
Also Phobos is designed for safety and generic programming,
not for raw speed like many old C functions (at least that's
my experience). E.g. I have seen or written Unicode and
float/string conversion routines that perform 7x to 13x faster
than the 'obvious' way in D.

-- 
Marco
Feb 27 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 8:03 PM, Marco Leise wrote:
 There are > 1000 open bugs and well known, expected language
 features not implemented. In my opinion, the compiler should
 be ported after all important language features are finalized.
 I don't mean syntax, but stuff that only bearophile has the
 complete list of: shared, allocators, ...
 Also DMD leaks memory -> it is tempting to use the GC -> DMD
 will often be a whole lot slower in the end. :D
 Also Phobos is designed for safety and generic programming,
 not for raw speed like many old C functions (at least that's
 my experience). E.g. I have seen or written Unicode and
 float/string conversion routines that perform 7x to 13x faster
 than the 'obvious' way in D.
The motivation for the migration is not for fun, it's not even to "eat our own dogfood". The idea is to make the front end more reliable and more flexible by using D features that help. This should make us more productive and able to fix problems faster and presumably have fewer problems in the first place.

There is a long list of D things that will help.
Feb 27 2013
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Feb 27, 2013 at 09:32:08PM -0800, Walter Bright wrote:
[...]
 The motivation for the migration is not for fun, it's not even to
 "eat our own dogfood". The idea is to make the front end more
 reliable and more flexible by using D features that help. This
 should make us more productive and able to fix problems faster and
 presumably have fewer problems in the first place.
 
 There are a long list of D things that will help.
How does this affect GDC/LDC? AFAIK, the GCC build scripts do not (yet?) support bootstrapping D code. T -- If you compete with slaves, you become a slave. -- Norbert Wiener
Feb 27 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 9:35 PM, H. S. Teoh wrote:
 How does this affect GDC/LDC? AFAIK, the GCC build scripts do not (yet?)
 support bootstrapping D code.
I don't know. I presume other gcc language tools are not written in C.
Feb 27 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, February 27, 2013 23:44:02 Walter Bright wrote:
 On 2/27/2013 9:35 PM, H. S. Teoh wrote:
 How does this affect GDC/LDC? AFAIK, the GCC build scripts do not (yet?)
 support bootstrapping D code.
I don't know. I presume other gcc language tools are not written in C.
Wasn't all of that stuff written in pure C until fairly recently when they finally started letting C++ in? Maybe that was only the core stuff though and some of the language extensions aren't that strict. I don't know. - Jonathan M Davis
Feb 27 2013
parent reply "pjmlp" <pjmlp progtools.org> writes:
On Thursday, 28 February 2013 at 07:58:31 UTC, Jonathan M Davis 
wrote:
 On Wednesday, February 27, 2013 23:44:02 Walter Bright wrote:
 On 2/27/2013 9:35 PM, H. S. Teoh wrote:
 How does this affect GDC/LDC? AFAIK, the GCC build scripts 
 do not (yet?)
 support bootstrapping D code.
I don't know. I presume other gcc language tools are not written in C.
Wasn't all of that stuff written in pure C until fairly recently when they finally started letting C++ in? Maybe that was only the core stuff though and some of the language extensions aren't that strict. I don't know. - Jonathan M Davis
GNAT is written in Ada.

If I am not mistaken, many non-standard frontends for Modula-2, Modula-3 and Pascal also use their own languages.

http://gcc.gnu.org/frontends.html

--
Paulo
Feb 28 2013
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On Feb 28, 2013 9:36 PM, "pjmlp" <pjmlp progtools.org> wrote:
 On Thursday, 28 February 2013 at 07:58:31 UTC, Jonathan M Davis wrote:
 On Wednesday, February 27, 2013 23:44:02 Walter Bright wrote:
 On 2/27/2013 9:35 PM, H. S. Teoh wrote:
 How does this affect GDC/LDC? AFAIK, the GCC build scripts > do not
(yet?)
 support bootstrapping D code.
I don't know. I presume other gcc language tools are not written in C.
Wasn't all of that stuff written in pure C until fairly recently when
they
 finally started letting C++ in? Maybe that was only the core stuff
though and
 some of the language extensions aren't that strict. I don't know.

 - Jonathan M Davis
GNAT is written in Ada. If I am not mistaken, many non standard frontends for Modula-2, Modula-3
and Pascal also use their own languages.
 http://gcc.gnu.org/frontends.html

 --
 Paulo
See my message above. The problem is not what language the frontend is written in; the problem is not requiring the parts that are written in D to interface with the gcc backend.

Regards
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
Feb 28 2013
prev sibling next sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Wed, 27 Feb 2013 21:32:08 -0800
schrieb Walter Bright <newshound2 digitalmars.com>:
 
 The motivation for the migration is not for fun, it's not even to "eat our own 
 dogfood". The idea is to make the front end more reliable and more flexible by 
 using D features that help. This should make us more productive and able to
fix 
 problems faster and presumably have fewer problems in the first place.
 
 There are a long list of D things that will help.
In a way it means "eat your own dogfood" if you compare C++ to D. C++ may be lacking, but you can emulate a few things, and it has good code analysis tools. Maybe I'm too pessimistic in thinking this will take a year, stop bug fixes, stall language design issues from being resolved, and slow the compiler down notably, since you'll be writing easy-to-maintain code using Phobos and a GC, and that is always slower than ASM, right? :p

-- 
Marco
Feb 28 2013
parent "Rob T" <alanb ucora.com> writes:
On Thursday, 28 February 2013 at 17:04:24 UTC, Marco Leise wrote:
 Am Wed, 27 Feb 2013 21:32:08 -0800
 schrieb Walter Bright <newshound2 digitalmars.com>:
 
 The motivation for the migration is not for fun, it's not even 
 to "eat our own dogfood". The idea is to make the front end 
 more reliable and more flexible by using D features that help. 
 This should make us more productive and able to fix problems 
 faster and presumably have fewer problems in the first place.
 
 There are a long list of D things that will help.
In a way it means "eat your own dogfood" if you compare C++ to D. C++ may be lacking, but you can emulate a few things and it has good code analysis tools. Maybe I'm too pessimistic in thinking this will take a year, stop bug fixes and stall language design issues from being resolved as well as slow the compiler down notably, since you'll be writing easy to maintain code using Phobos and a GC and that is always slower than ASM, right? :p
The biggest benefit I predict that will come from an effort like this, is the productive change that comes about when you "eat your own dog food". --rt
Feb 28 2013
prev sibling parent reply "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Thursday, 28 February 2013 at 05:32:37 UTC, Walter Bright 
wrote:
 On 2/27/2013 8:03 PM, Marco Leise wrote:
 There are > 1000 open bugs and well known, expected language
 features not implemented. In my opinion, the compiler should
 be ported after all important language features are finalized.
 I don't mean syntax, but stuff that only bearophile has the
 complete list of: shared, allocators, ...
 Also DMD leaks memory -> it is tempting to use the GC -> DMD
 will often be a whole lot slower in the end. :D
 Also Phobos is designed for safety and generic programming,
 not for raw speed like many old C functions (at least that's
 my experience). E.g. I have seen or written Unicode and
 float/string conversion routines that perform 7x to 13x faster
 than the 'obvious' way in D.
The motivation for the migration is not for fun, it's not even to "eat our own dogfood". The idea is to make the front end more reliable and more flexible by using D features that help. This should make us more productive and able to fix problems faster and presumably have fewer problems in the first place. There are a long list of D things that will help.
So you're saying some of our dogfood is actually caviar then... I would divide the caviar into two groups, manifest and hidden. The manifest caviar is the easiest to sell. Hidden caviar is the benefits which are unexpected by at least a portion of the D community. Each piece of hidden caviar therefore needs one or more champions. Not that this is a perfect example, but the lexer being assembled by Brian and Dmitri seems to have a spark of the hidden caviar about it, lending weight to the "clean room" camp. The politics of "existing" versus "clean room" must be mastered because there's a lot of room for resentment there if the wrong choices are made, it seems to me. One thing both "clean room" and "existing" have, or should have, in common is the test suite, which is probably a better spec than the spec is. Perhaps a method can be devised which makes it easy to divide and conquer the test suite.
Feb 28 2013
parent "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Friday, 1 March 2013 at 06:57:31 UTC, Zach the Mystic wrote:
 So you're saying some of our dogfood is actually caviar then...

 I would divide the caviar into two groups, manifest and hidden. 
 The manifest caviar is the easiest to sell. Hidden caviar is 
 the benefits which are unexpected by at least a portion of the 
 D community. Each piece of hidden caviar therefore needs one or 
 more champions.

 Not that this is a perfect example, but the lexer being 
 assembled by Brian and Dmitri seems to have a spark of the 
 hidden caviar about it, lending weight to the "clean room" 
 camp. The politics of "existing" versus "clean room" must be 
 mastered because there's a lot of room for resentment there if 
 the wrong choices are made, it seems to me.

 One thing both "clean room" and "existing" have, or should 
 have, in common is the test suite, which is probably a better 
 spec than the spec is. Perhaps a method can be devised which 
 makes it easy to divide and conquer the test suite.
By "clean room" I really meant starting from scratch, regardless of license.
Mar 01 2013
prev sibling parent Shahid <govellius gmail.com> writes:
On Wed, 27 Feb 2013 17:58:27 -0800, Walter Bright wrote:

 On 2/27/2013 5:11 PM, Ary Borenszweig wrote:
 If you do it, I think it's an excellent opportunity to rewrite the
 compiler *from scratch*, using features in D, and probably using a
 better design. It's probably easier to design the compiler now that all
 the features are more or less known. I also remember that DMD didn't
 have a visitor of sort for the semantic analysis.
My experience with such things is that it, while tempting, has a large probability of destroying the project entirely.
I wholeheartedly agree with Walter on this. I'd like to see as much 1:1 translation as possible first, then refactoring can begin.
Feb 28 2013
prev sibling parent Robert burner Schadek <realburner gmx.de> writes:
On 02/28/2013 02:11 AM, Ary Borenszweig wrote:
 On 2/27/13 9:37 PM, Andrei Alexandrescu wrote:
 Hello,


 Walter and I have had a long conversation about the next radical thing
 to do to improve D's standing. Like others in this community, we believe
 it's a good time to consider bootstrapping the compiler.
If you do it, I think it's an excellent opportunity to rewrite the compiler *from scratch*, using features in D, and probably using a better design. It's probably easier to design the compiler now that all the features are more or less known. I also remember that DMD didn't have a visitor of sort for the semantic analysis.
+1

I think translating the frontend from C++ to D will yield very bad code. The AST node types in dmd use structs with inheritance. This leads to the question whether to use structs or classes in the D frontend. If classes are used the GC will hit you from time to time, so classes are not a good idea. Sure, the GC could be disabled, but imho that will lead to the same problems the frontend has now. Using structs will also require using the heap and pointers, also bad, but imho there are great benefits from structs, and there is a GC workaround for structs. Structs will make it easier to turn the frontend into a library and use it from anything that knows C. This is a huge +1 imo.

To get around the GC, the AST could be built using shared_ptr pointing to structs. Sure, this will have an overhead, but I would rather see the compiler taking longer than telling me "no more memory available".

I would also argue that a clean room D impl. with tons of unittests, testing all parse methods, the lexer, the semantic analyzer etc., will yield a much more maintainable and bug-free compiler.

Pro clean room with shared_ptr!(ASTNode):
- Bindings from C
- fixes memory problem
- more maintainable code
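A minimal C++ sketch of the shared_ptr-to-struct idea (node and function names here are hypothetical, not dmd's actual hierarchy), showing how reference counting gives deterministic lifetimes without a GC:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for a frontend AST node; DMD's real hierarchy
// (Expression, Statement, Dsymbol, ...) is far richer than this.
struct AstNode {
    std::string kind;                                // e.g. "Module", "FuncDecl"
    std::vector<std::shared_ptr<AstNode>> children;  // shared ownership, no GC
};

// A subtree stays alive for as long as any parent node (or any analysis
// pass holding a reference) still points at it, and is freed
// deterministically when the last reference goes away.
std::shared_ptr<AstNode> makeNode(std::string kind) {
    auto n = std::make_shared<AstNode>();
    n->kind = std::move(kind);
    return n;
}
```

The D analogue would be something like std.typecons.RefCounted, or a hand-rolled count in the struct itself: deterministic reclamation, at the price of a count bump on every copy.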
Feb 28 2013
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 4:37 PM, Andrei Alexandrescu wrote:
 3. At a point in the future the last C++ module will be replaced with a D
 module. Going forward there will be no more need for a C++ compiler to build
the
 compiler (except as a bootstrapping test).
Not exactly - the back end will not realistically be converted to D. This is a front end only conversion.
Feb 27 2013
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Thursday, 28 February 2013 at 00:37:50 UTC, Andrei 
Alexandrescu wrote:
 Please chime in with ideas on how to make this happen.
Not an expert on the topic, but what does this mean for maintainers of integration with other backends, like GDC and LDC? For example: are there any other frontends in GCC not written in C/C++?
Feb 27 2013
next sibling parent "sclytrack" <sclytrack hotmail.com> writes:
On Thursday, 28 February 2013 at 01:32:58 UTC, Vladimir Panteleev
wrote:
 On Thursday, 28 February 2013 at 00:37:50 UTC, Andrei 
 Alexandrescu wrote:
 Please chime in with ideas on how to make this happen.
Not an expert on the topic, but what does this mean for maintainers of integration with other backends, like GDC and LDC? For example: are there any other frontends in GCC not written in C/C++?
GCC only started accepting C++ in 2010. So it will be really tough to get them to accept D.
Feb 27 2013
prev sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On Feb 28, 2013 1:40 AM, "Vladimir Panteleev" <vladimir thecybershadow.net>
wrote:
 On Thursday, 28 February 2013 at 00:37:50 UTC, Andrei Alexandrescu wrote:
 Please chime in with ideas on how to make this happen.
Not an expert on the topic, but what does this mean for maintainers of
integration with other backends, like GDC and LDC? For example: are there any other frontends in GCC not written in C/C++?

Before gdc, there was no such thing as a frontend written in C++ in gcc. As far as history goes, this was a huge drawback to making any progress in the development of a gcc D compiler.

Regards
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
Feb 27 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 Like others in this community, we believe it's a good time to 
 consider bootstrapping the compiler. Having the D compiler 
 written in D has quite a few advantages, among which taking 
 advantages of D's features and having a large codebase that 
 would be its own test harness.
If just the front-end is written in D, then I think it can't be called bootstrapping, because you still need a C++ compiler to compile a complete D compiler.

Bye,
bearophile
Feb 27 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-28 01:37, Andrei Alexandrescu wrote:
 Hello,


 Walter and I have had a long conversation about the next radical thing
 to do to improve D's standing. Like others in this community, we believe
 it's a good time to consider bootstrapping the compiler. Having the D
 compiler written in D has quite a few advantages, among which taking
 advantages of D's features and having a large codebase that would be its
 own test harness.
Now this is some great news to wake up to :)
 By this we'd like to initiate a dialog about how this large project can
 be initiated and driven through completion. Our initial basic ideas are:

 1. Implement the dtoh standalone program that takes a D module and
 generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the compiler.
 At given points throughout the code D code will coexist and link with
 C++ code.

 3. At a point in the future the last C++ module will be replaced with a
 D module. Going forward there will be no more need for a C++ compiler to
 build the compiler (except as a bootstrapping test).

 It is essential that we get support from the larger community for this.
 This is a large project that should enjoy strong leadership apart from
 Walter himself (as he is busy with dynamic library support which is
 strategic) and robust participation from many of us.
Short term goal: I agree with what Walter has said in other posts, that we need to make as direct a translation as possible to minimize translation bugs.

Long term goal: When the translation is done we should refactor the compiler/front end to be a library, usable by other tools.

It would be nice to hear some comments from the GDC and LDC developers.

BTW, there's already a translation of DMD available, DDMD:

http://www.dsource.org/projects/ddmd

But this is a bit outdated. I also don't know how direct a translation this is. I can at least tell that it has a more one-to-one mapping of files and classes than DMD does.

-- 
/Jacob Carlborg
Feb 27 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 11:34 PM, Jacob Carlborg wrote:
 BTW, there's already a translation of DMD available, DDMD:

 http://www.dsource.org/projects/ddmd

 But this is a bit outdated. I also don't know how a direct translation this is.
 I can at least tell that it has a more one-to-one mapping of files and classes
 than DMD does.
Curiously, there appears to be no copyright/license information.
Feb 27 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-28 08:48, Walter Bright wrote:

 Curiously, there appears to be no copyright/license information.
Should be the same as DMD uses. -- /Jacob Carlborg
Feb 28 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/28/2013 5:15 AM, Jacob Carlborg wrote:
 On 2013-02-28 08:48, Walter Bright wrote:

 Curiously, there appears to be no copyright/license information.
Should be the same as DMD uses.
Well, it should say. In fact, with the current copyright law, which specifies the default as "copyrighted, and no license at all", it needs to or nobody can use it.

I know I probably come off as a ninny about this, but professional users will run screaming from any open source code unless it contains:

1. a copyright notice
2. a license
3. who owns the above
Feb 28 2013
parent reply Russel Winder <russel winder.org.uk> writes:
Walter,

On Thu, 2013-02-28 at 14:46 -0800, Walter Bright wrote:
[…]
 I know I probably come off as a ninny about this, but professional users will
 run screaming from any open source code unless it contains:

 1. a copyright notice
 2. a license
 3. who owns the above

Not ninni-ish at all, very sensible. Of course it is not the professional users that worry about these things, it is their lawyers. Worse, there is a whole collection of misunderstandings and misapprehensions, not to mention FUD, about the various well known licences.

-- 
Russel.
============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Feb 28 2013
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 01.03.2013 07:10, Russel Winder wrote:
 Walter,

 On Thu, 2013-02-28 at 14:46 -0800, Walter Bright wrote:
 […]
 I know I probably come off as a ninny about this, but professional users will
 run screaming from any open source code unless it contains:

 1. a copyright notice
 2. a license
 3. who owns the above
Not ninni-ish at all, very sensible. Of course it is not the professional users that worry about these things, it is their lawyers. Worse there is a whole collection of misunderstanding and misapprehensions, not to mention FUD, about the various well known licences.
I lost count of the number of times I had to fill in Excel sheets with information for lawyers before we could use open source software.

Some of those sheets are pretty exhaustive. :\

- License
- Owner
- Web site
- Code repository location for the given release number
- In which product it is going to be used
- Why we are using open source in the first place
- Examples of known software that also make use of the related software
- ...

This for each single version being used. A new version requires going through the process again.

--
Paulo
Mar 01 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 01, 2013 at 04:28:50PM +0100, Paulo Pinto wrote:
 On 01.03.2013 07:10, Russel Winder wrote:
Walter,

On Thu, 2013-02-28 at 14:46 -0800, Walter Bright wrote:
[…]
I know I probably come off as a ninny about this, but professional
users will run screaming from any open source code unless it
contains:

1. a copyright notice
2. a license
3. who owns the above
Not ninni-ish at all, very sensible. Of course it is not the professional users that worry about these things, it is their lawyers. Worse there is a whole collection of misunderstanding and misapprehensions, not to mention FUD, about the various well known licences.
I lost count the amount of times I had to fulfill Excel sheets with information for Lawyers before we could use open source software.

Some of those sheets are pretty exhaustive. :\

- License
- Owner
- Web site
- Code repository location for the given release number
- In which product it is going to be used
- Why we are using open source in first place
- Examples of known software that also make use of the related software
- ...

This for each single version being used. A new version requires going again through the process.
[...] Wow. You make me feel really lucky that at my day job, I once made a request to use a particular piece of open source software, and the legal department actually replied with "the license is MIT, it should be OK, approved." OTOH, though, anything to do with the GPL or its ilk will probably require truckloads of red tape to approve. T -- They say that "guns don't kill people, people kill people." Well I think the gun helps. If you just stood there and yelled BANG, I don't think you'd kill too many people. -- Eddie Izzard, Dressed to Kill
Mar 01 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/1/2013 7:43 AM, H. S. Teoh wrote:
 Wow. You make me feel really lucky that at my day job, I once made a
 request to use a particular piece of open source software, and the legal
 department actually replied with "the license is MIT, it should be OK,
 approved."
This is exactly why we are using a well-known license, rather than rolling our own.
Mar 01 2013
parent reply "Andrej Mitrovic" <andrej.mitrovich gmail.com> writes:
On Saturday, 2 March 2013 at 02:35:34 UTC, Walter Bright wrote:
 On 3/1/2013 7:43 AM, H. S. Teoh wrote:
 Wow. You make me feel really lucky that at my day job, I once 
 made a
 request to use a particular piece of open source software, and 
 the legal
 department actually replied with "the license is MIT, it 
 should be OK,
 approved."
This is exactly why we are using a well-known license, rather than rolling our own.
Anyway, the DDMD maintainer was asked about the license in 2010; he never picked one, but it seemed like he was open to anything:
Mar 01 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/1/2013 6:55 PM, Andrej Mitrovic wrote:
 On Saturday, 2 March 2013 at 02:35:34 UTC, Walter Bright wrote:
 On 3/1/2013 7:43 AM, H. S. Teoh wrote:
 Wow. You make me feel really lucky that at my day job, I once made a
 request to use a particular piece of open source software, and the legal
 department actually replied with "the license is MIT, it should be OK,
 approved."
This is exactly why we are using a well-known license, rather than rolling our own.
Anyway the DDMD maintainer was asked about the license in 2010, he never picked one but it seemed like he was open to anything:
I'd recommend Boost or GPL. Anyhow, it's a pity to see his work go to waste because of no license.
Mar 01 2013
next sibling parent "Andrej Mitrovic" <andrej.mitrovich gmail.com> writes:
On Saturday, 2 March 2013 at 03:01:20 UTC, Walter Bright wrote:
 On 3/1/2013 6:55 PM, Andrej Mitrovic wrote:
 On Saturday, 2 March 2013 at 02:35:34 UTC, Walter Bright wrote:
 On 3/1/2013 7:43 AM, H. S. Teoh wrote:
 Wow. You make me feel really lucky that at my day job, I 
 once made a
 request to use a particular piece of open source software, 
 and the legal
 department actually replied with "the license is MIT, it 
 should be OK,
 approved."
This is exactly why we are using a well-known license, rather than rolling our own.
Anyway the DDMD maintainer was asked about the license in 2010, he never picked one but it seemed like he was open to anything:
I'd recommend Boost or GPL. Anyhow, it's a pity to see his work go to waste because of no license.
I'll send him an e-mail to see if he's still around, maybe he'd be interested in this again.
Mar 01 2013
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 01, 2013 at 07:01:20PM -0800, Walter Bright wrote:
 On 3/1/2013 6:55 PM, Andrej Mitrovic wrote:
On Saturday, 2 March 2013 at 02:35:34 UTC, Walter Bright wrote:
On 3/1/2013 7:43 AM, H. S. Teoh wrote:
Wow. You make me feel really lucky that at my day job, I once made
a request to use a particular piece of open source software, and
the legal department actually replied with "the license is MIT, it
should be OK, approved."
This is exactly why we are using a well-known license, rather than rolling our own.
Anyway the DDMD maintainer was asked about the license in 2010, he never picked one but it seemed like he was open to anything:
I'd recommend Boost or GPL. Anyhow, it's a pity to see his work go to waste because of no license.
I would personally go for GPL, but it does scare certain companies off -- I've personally witnessed that. Just FWIW. T -- Computers aren't intelligent; they only think they are.
Mar 01 2013
parent "Rob T" <alanb ucora.com> writes:
On Saturday, 2 March 2013 at 03:16:59 UTC, H. S. Teoh wrote:
 On Fri, Mar 01, 2013 at 07:01:20PM -0800, Walter Bright wrote:
 On 3/1/2013 6:55 PM, Andrej Mitrovic wrote:
On Saturday, 2 March 2013 at 02:35:34 UTC, Walter Bright 
wrote:
On 3/1/2013 7:43 AM, H. S. Teoh wrote:
Wow. You make me feel really lucky that at my day job, I 
once made
a request to use a particular piece of open source 
software, and
the legal department actually replied with "the license is 
MIT, it
should be OK, approved."
This is exactly why we are using a well-known license, rather than rolling our own.
Anyway the DDMD maintainer was asked about the license in 2010, he never picked one but it seemed like he was open to anything:
I'd recommend Boost or GPL. Anyhow, it's a pity to see his work go to waste because of no license.
I would personally go for GPL, but it does scare certain companies off -- I've personally witnessed that. Just FWIW. T
Yeah, GPL can have that effect. One of the friendliest licenses I've seen is sqlite's http://www.sqlite.org/copyright.html --rt
Mar 01 2013
prev sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Saturday, 2 March 2013 at 03:01:20 UTC, Walter Bright wrote:
 On 3/1/2013 6:55 PM, Andrej Mitrovic wrote:
 On Saturday, 2 March 2013 at 02:35:34 UTC, Walter Bright wrote:
 On 3/1/2013 7:43 AM, H. S. Teoh wrote:
 Wow. You make me feel really lucky that at my day job, I 
 once made a
 request to use a particular piece of open source software, 
 and the legal
 department actually replied with "the license is MIT, it 
 should be OK,
 approved."
This is exactly why we are using a well-known license, rather than rolling our own.
Anyway the DDMD maintainer was asked about the license in 2010, he never picked one but it seemed like he was open to anything:
I'd recommend Boost or GPL. Anyhow, it's a pity to see his work go to waste because of no license.
I'm no copyright lawyer, but I think ddmd, being a derivative work of dmd, should probably inherit its license (although I prefer Boost, as in my opinion it is more liberal than GPL).

If someone is willing to bring the project back from its stale state, I'm more than willing to help (by both writing patches and explaining how the existing code works). They must also understand that ddmd is a couple of years behind dmd, and updating it is monkey-work that requires little thinking but lots of time.

It usually took me a couple of hours to generate a code diff between two subsequent dmd releases, apply it to ddmd, and run tests (normally I would compile druntime.lib and compare it against the one produced by dmd; they should match byte for byte), that is, if everything went smoothly. Sometimes they wouldn't match, so I would compile druntime one file at a time, find which files didn't match, and reduce the test case further until I found the source of the problem and fixed it. For me (someone with only a very basic understanding of the codebase) it could take another extra couple of hours to fix bugs.

Recent releases have also started containing more features and a lot more bugfixes (= larger diffs, more difficult to merge). Let's make it 5 hours per release. 20 versions behind x 5 hours of work ~ 100 man-hours to bring it up to date. A little optimistic probably, but it doesn't sound too bad :)

Anyway, feel free to take the sources and attach whatever license you want to it.
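A minimal sketch (in Python, with hypothetical file names) of the byte-for-byte comparison step described above: compare the druntime.lib produced by each compiler and, when the files differ, report the offset of the first mismatch so the offending module can be hunted down.

```python
def first_difference(path_a, path_b):
    """Compare two build artifacts byte-for-byte.

    Returns None if the files are identical, otherwise the offset
    of the first differing byte (or the length of the shorter file
    if one is a prefix of the other).
    """
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    if a == b:
        return None
    n = min(len(a), len(b))
    for i in range(n):
        if a[i] != b[i]:
            return i
    return n  # one file is a strict prefix of the other
```

Running this per object file, rather than on the whole library, narrows a mismatch down to a single source file, which is the reduction step described above.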
Mar 01 2013
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 02, 2013 at 04:46:57AM +0100, Denis Koroskin wrote:
[...]
 Anyway, feel free to take the sources and attach whatever license
 you want to it.
Unfortunately, I don't think copyright works like that. You (the author) must be the one who licenses it. Any license applied by others will most probably be invalid, since they are not the real copyright owner, and, in the case of actual legal disputes, such a license will be indefensible. T -- Take care of your clothes while they're new, and your health while you're young. (Russian proverb)
Mar 01 2013
next sibling parent "Era Scarecrow" <rtcvb32 yahoo.com> writes:
On Saturday, 2 March 2013 at 03:56:56 UTC, H. S. Teoh wrote:
 On Sat, Mar 02, 2013 at 04:46:57AM +0100, Denis Koroskin wrote:
 [...]
 Anyway, feel free to take the sources and attach whatever 
 license you want to it.
Unfortunately, I don't think copyright works like that. You (the author) must be the one who licenses it. Any license applied by others will most probably be invalid, since they are not the real copyright owner, and, in the case of actual legal disputes, such a license will be indefensible.
You can transfer ownership via a written letter (not electronic) to a company/person, I believe; various utilities of the GNU suite are donated that way.

To my understanding, anything published on the internet without a license is basically informational: you can read it to understand it, but you can't compile or use it without a license/permission. The license can grant specific permissions (copy, modify, redistribute, etc.) and terms of use. When in doubt, consider GPL/LGPL, which will protect you in case of malfunction/damages (for whatever reason): the software is marked 'as is', with no warranty, support, or refunds. Not sure about other licenses though, haven't read into them too much.
Mar 01 2013
prev sibling parent reply "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Saturday, 2 March 2013 at 03:56:56 UTC, H. S. Teoh wrote:
 On Sat, Mar 02, 2013 at 04:46:57AM +0100, Denis Koroskin wrote:
 [...]
 Anyway, feel free to take the sources and attach whatever 
 license
 you want to it.
Unfortunately, I don't think copyright works like that. You (the author) must be the one who licenses it. Any license applied by others will most probably be invalid, since they are not the real copyright owner, and, in the case of actual legal disputes, such a license will be indefensible. T
Last year I boiled down existing ddmd to:

https://github.com/zachthemystic/ddmd-clean/

I did it because I needed to teach myself programming, and maybe there was a long shot that someone would want to use it. I rather angrily attached the GPL license because I thought I had to. But flexibility in this area comes as a welcome surprise.

I want to make another point. Dmd is perhaps written in a way that makes it easier to create an automatic program translating it to D than other C++ programs would be. I guess that approach is worth at least a small amount of investigation. The question is whether the number of special cases required to automate it will take a greater toll than the grunt work of direct translation.

ddmd currently does some really awkward C++ contortions to emulate stuff that's built into D. No one would write it that way in D. While I stripped down the compiler to make my lexer/parser version of it, which is very mangled at the moment, at least you can look at the code to see how dmd would look in D.
Mar 01 2013
parent reply "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Saturday, 2 March 2013 at 04:28:40 UTC, Zach the Mystic wrote:
 Last year I boiled down existing ddmd to:

 https://github.com/zachthemystic/ddmd-clean/

 I did it because I needed to teach myself programming, and 
 maybe there was a long shot that someone would want to use it. 
 I rather angrily attached the GPL license because I thought I 
 had to. But flexibility in this area comes as a welcome 
 surprise.

 I want to make another point. Dmd is perhaps written in a way 
 that might be easier to create an automatic program translating 
 it to D than other C++ programs. I guess that approach is worth 
 at least a small amount of investigation. The question is 
 whether the number of special cases required to automate will 
 take more toll than the grunt work of direct translation.
You would definitely need an identifier translation table:

"Dsymbols *" -> "Dsymbol[]"
"NULL" -> "null"
`//printf("...%d...", s)` -> `writef("...%s...", s)`
"#ifdef XIFDEFVERSION" + nested ifdefs + "#endif" -> "version(XIFDEFVERSION) {" + nested {}'s + "}"
"#if 0" -> "version(none)"

To assemble a class, you'd need a list of methods to look up, and hints where to look up each method. It would be good to develop a small domain-specific language just for translating this. The better the language, the easier it would be to add all the special cases I'm sure would be necessary.
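As a rough illustration only (the entries below come from the table above, not from any real tool), a purely textual pass over such a table might look like the sketch below. As the follow-up replies point out, real dmd source needs context-sensitive handling, so treat this as a sketch of the idea rather than a workable converter.

```python
# Naive textual translation table, C++ fragment -> D fragment.
# A real converter must tokenize first; plain substitution will
# mangle identifiers that merely contain these patterns.
TOKEN_MAP = {
    "Dsymbols *": "Dsymbol[] ",
    "NULL": "null",
    "#if 0": "version(none)",
}

def translate_line(line):
    # Apply longer patterns first so "Dsymbols *" is rewritten
    # before the shorter entries get a chance to interfere.
    for cpp, d in sorted(TOKEN_MAP.items(), key=lambda kv: -len(kv[0])):
        line = line.replace(cpp, d)
    return line
```

For example, `translate_line("Dsymbols *members = NULL;")` yields `"Dsymbol[] members = null;"` under this table.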
Mar 01 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message 
news:bueceuemxqmflixkqbuz forum.dlang.org...
 On Saturday, 2 March 2013 at 04:28:40 UTC, Zach the Mystic wrote:
 You would definitely need an identifier translation table:

 "Dsymbols *" -> "Dsymbol[]"
Might as well just define ArrayBase etc
 "NULL" -> "null"
Sure, but what about all the places 0 is used to mean NULL?
 `//printf("...%d...", s)` -> `writef("...%s...", s)`
Why not just keep it as printf?
 "#ifdef XIFDEFVERSION" + nested ifdefs + "#endif"
 -> "version(XIFDEFVERSION) {" + nested {}'s + "}"

 "#ifdef 0" -> "version(none)"
No luck, dmd source uses #ifdefs mid-declaration, mid-statement, and mid-expression (even mid-string-literal). It also uses #ifs with complex conditions.

And don't forget no-args ctors, implicit calling of ctors, stack-allocated classes, new keywords, narrowing integer conversions, 'virtual', pure virtual function syntax, macros as expression aliases, string literal constness, the EXP_CANT_INTERPRET cast hack, macros, namespaces, macros, structs using inheritance, and of course more macros.
Mar 02 2013
parent reply "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Saturday, 2 March 2013 at 10:05:08 UTC, Daniel Murphy wrote:
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote 
 in message
 news:bueceuemxqmflixkqbuz forum.dlang.org...
 On Saturday, 2 March 2013 at 04:28:40 UTC, Zach the Mystic 
 wrote:
 You would definitely need an identifier translation table:

 "Dsymbols *" -> "Dsymbol[]"
Might as well just define ArrayBase etc
 "NULL" -> "null"
Sure, but what about all the places 0 is used to mean NULL?
 `//printf("...%d...", s)` -> `writef("...%s...", s)`
Why not just keep it as printf?
 "#ifdef XIFDEFVERSION" + nested ifdefs + "#endif"
 -> "version(XIFDEFVERSION) {" + nested {}'s + "}"

 "#ifdef 0" -> "version(none)"
No luck, dmd source uses #ifdefs mid-declaration, mid-statement, and mid-expression (even mid-string-literal) It also uses #ifs with complex conditions. And don't forget no-args ctors, implicit calling of ctors, stack allocated classes, new keywords, narrowing integer conversions, 'virtual', pure virtual function syntax, macros as expression aliases, string literal constness, the EXP_CANT_INTERPRET cast hack, macros, namespaces, macros, structs using inheritance, and of course more macros.
Every single one of these would have to be special-cased. If you had a domain-specific language you could keep track of whether you were mid-declaration, mid-statement, or mid-string-literal. Half the stuff you special-case could probably be applied to other C++ projects as well. If this works, the benefits are just enormous.

In fact, I would actually like to "waste" my time trying to make this work, but I'm going to need to ask a lot of questions because my current programming skills are nowhere near the average level of posters at this forum. I would like a C++ lexer (with whitespace) to start with. Then a discussion of parsers and emitters. Then a ton of questions just on learning github and other basics. I would also like the sanction of some of the more experienced people here, saying it's at least worth a go, even if other strategies are simultaneously pursued.
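The kind of context tracking being proposed can be sketched in a few lines. This hypothetical helper (not from any existing tool) walks C++ source one character at a time and remembers whether it is inside a string literal, so that "//" inside a string is not mistaken for a comment:

```python
def strip_line_comments(src):
    """Remove // line comments from C++ source while preserving
    string literals, i.e. "//" inside a string is kept.

    Minimal sketch: ignores /* */ block comments and char literals.
    """
    out = []
    in_string = False
    i = 0
    while i < len(src):
        c = src[i]
        if in_string:
            out.append(c)
            if c == "\\" and i + 1 < len(src):
                # keep the escaped character, e.g. \" inside a string
                out.append(src[i + 1])
                i += 2
                continue
            if c == '"':
                in_string = False
        elif c == '"':
            in_string = True
            out.append(c)
        elif src[i:i + 2] == "//":
            # comment runs to end of line; keep the newline itself
            j = src.find("\n", i)
            i = len(src) if j == -1 else j
            continue
        else:
            out.append(c)
        i += 1
    return "".join(out)
```

The same one-character-at-a-time state machine extends naturally to tracking "mid-declaration" or "mid-statement" once a real parser is layered on top.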
Mar 02 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message 
news:pwwrifebdwzctioujuwm forum.dlang.org...
 On Saturday, 2 March 2013 at 10:05:08 UTC, Daniel Murphy wrote:
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message
 news:bueceuemxqmflixkqbuz forum.dlang.org...
 On Saturday, 2 March 2013 at 04:28:40 UTC, Zach the Mystic wrote:
 You would definitely need an identifier translation table:

 "Dsymbols *" -> "Dsymbol[]"
Might as well just define ArrayBase etc
 "NULL" -> "null"
Sure, but what about all the places 0 is used to mean NULL?
 `//printf("...%d...", s)` -> `writef("...%s...", s)`
Why not just keep it as printf?
 "#ifdef XIFDEFVERSION" + nested ifdefs + "#endif"
 -> "version(XIFDEFVERSION) {" + nested {}'s + "}"

 "#ifdef 0" -> "version(none)"
No luck, dmd source uses #ifdefs mid-declaration, mid-statement, and mid-expression (even mid-string-literal) It also uses #ifs with complex conditions. And don't forget no-args ctors, implicit calling of ctors, stack allocated classes, new keywords, narrowing integer conversions, 'virtual', pure virtual function syntax, macros as expression aliases, string literal constness, the EXP_CANT_INTERPRET cast hack, macros, namespaces, macros, structs using inheritance, and of course more macros.
Every single one of these would have to be special-cased. If you had a domain-specific language you could keep track of whether you were mid-declaration, mid-statement, or mid-string-literal. Half the stuff you special-case could probably be applied to other C++ projects as well. If this works, the benefits are just enormous. In fact, I would actually like to "waste" my time trying to make this work, but I'm going to need to ask a lot of questions because my current programming skills are nowhere near the average level of posters at this forum. I would like a c++ lexer (with whitespace) to start with. Then a discussion of parsers and emitters. Then a ton of questions just on learning github and other basics. I would also like the sanction of some of the more experienced people here, saying it's at least worth a go, even if other strategies are simultaneously pursued.
Something like this https://github.com/yebblies/magicport2 ?
Mar 02 2013
next sibling parent reply "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Sunday, 3 March 2013 at 03:06:15 UTC, Daniel Murphy wrote:
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote 
 in message
 news:pwwrifebdwzctioujuwm forum.dlang.org...
 On Saturday, 2 March 2013 at 10:05:08 UTC, Daniel Murphy wrote:
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> 
 wrote in message
 news:bueceuemxqmflixkqbuz forum.dlang.org...
 On Saturday, 2 March 2013 at 04:28:40 UTC, Zach the Mystic 
 wrote:
 You would definitely need an identifier translation table:

 "Dsymbols *" -> "Dsymbol[]"
Might as well just define ArrayBase etc
 "NULL" -> "null"
Sure, but what about all the places 0 is used to mean NULL?
 `//printf("...%d...", s)` -> `writef("...%s...", s)`
Why not just keep it as printf?
 "#ifdef XIFDEFVERSION" + nested ifdefs + "#endif"
 -> "version(XIFDEFVERSION) {" + nested {}'s + "}"

 "#ifdef 0" -> "version(none)"
No luck, dmd source uses #ifdefs mid-declaration, mid-statement, and mid-expression (even mid-string-literal) It also uses #ifs with complex conditions. And don't forget no-args ctors, implicit calling of ctors, stack allocated classes, new keywords, narrowing integer conversions, 'virtual', pure virtual function syntax, macros as expression aliases, string literal constness, the EXP_CANT_INTERPRET cast hack, macros, namespaces, macros, structs using inheritance, and of course more macros.
Every single one of these would have to be special-cased. If you had a domain-specific language you could keep track of whether you were mid-declaration, mid-statement, or mid-string-literal. Half the stuff you special-case could probably be applied to other C++ projects as well. If this works, the benefits are just enormous. In fact, I would actually like to "waste" my time trying to make this work, but I'm going to need to ask a lot of questions because my current programming skills are nowhere near the average level of posters at this forum. I would like a c++ lexer (with whitespace) to start with. Then a discussion of parsers and emitters. Then a ton of questions just on learning github and other basics. I would also like the sanction of some of the more experienced people here, saying it's at least worth a go, even if other strategies are simultaneously pursued.
Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
Mar 02 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message 
news:pabfuaorrjbljxzrglbv forum.dlang.org...
 Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
More like a full-blown attempt than a beginning. I started this a long time ago. There are three parts to it:
- c++ parser/d printer, with lots of cheating and special cases
- patches to the c++ source
- patched version of dmd to build the result (no error on variable shadowing etc)

It produces a 70000 line d file which appears to get through 3/7ths of semantic1. Root needs to be ported, and a cleaner interface to the backend is needed to compile the glue layer.
Mar 02 2013
next sibling parent "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Sunday, 3 March 2013 at 05:18:13 UTC, Daniel Murphy wrote:
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote 
 in message
 news:pabfuaorrjbljxzrglbv forum.dlang.org...
 Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
More like a full-blown attempt than a beginning. I started this a long time ago. There are three parts to it: - c++ parser/d printer, with lots of cheating and special cases - patches to the c++ source - patched version of dmd to build the result (no error on variable shadowing etc) It produces a 70000 line d file which appears to get through 3/7ths of semantic1. Root needs to be ported, and a cleaner interface to the backend is needed to compile the glue layer.
Yes, more than a beginning. This is a higher-level approach than mine, clearly. In your estimation, how high-level is your approach compared to my lower-level one? How far would each of us have to travel to meet in the middle, in other words? :)
Mar 02 2013
prev sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Daniel Murphy" <yebblies nospamgmail.com> wrote in message 
news:kgumek$2tp4$1 digitalmars.com...
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message 
 news:pabfuaorrjbljxzrglbv forum.dlang.org...
 Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
More like a full-blown attempt than a beginning. I started this a long time ago. There are three parts to it: - c++ parser/d printer, with lots of cheating and special cases - patches to the c++ source - patched version of dmd to build the result (no error on variable shadowing etc) It produces a 70000 line d file which appears to get through 3/7ths of semantic1. Root needs to be ported, and a cleaner interface to the backend is needed to compile the glue layer.
Update: With the bulk of root converted or ported, and the glue layer stubbed out, I can build dmd from the converted source, then lex and parse the source (+druntime headers) again.

The highlight was the dynamically resized struct in root/stringtable. Something went horribly wrong there.
Mar 09 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Daniel Murphy" <yebblies nospamgmail.com> wrote in message 
news:khfoa6$fm7$1 digitalmars.com...
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message 
 news:kgumek$2tp4$1 digitalmars.com...
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message 
 news:pabfuaorrjbljxzrglbv forum.dlang.org...
 Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
More like a full-blown attempt than a beginning. I started this a long time ago. There are three parts to it: - c++ parser/d printer, with lots of cheating and special cases - patches to the c++ source - patched version of dmd to build the result (no error on variable shadowing etc) It produces a 70000 line d file which appears to get through 3/7ths of semantic1. Root needs to be ported, and a cleaner interface to the backend is needed to compile the glue layer.
Update: With the bulk of root converting or ported, and the glue layer stubbed out, I can build dmd from the converted source then lex and parse the source (+druntime headers) again. The highlight was the dynamically resized struct in root/stringtable. Something went horribly wrong there.
Update: I can now generate the source, then build a frontend from that, then process the frontend's source again with the built compiler. It also works on the conversion tool, and pulls in a sizeable chunk of druntime and phobos.
Mar 11 2013
parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 11.03.2013 15:20, schrieb Daniel Murphy:
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:khfoa6$fm7$1 digitalmars.com...
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:kgumek$2tp4$1 digitalmars.com...
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message
 news:pabfuaorrjbljxzrglbv forum.dlang.org...
 Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
More like a full-blown attempt than a beginning. I started this a long time ago. There are three parts to it: - c++ parser/d printer, with lots of cheating and special cases - patches to the c++ source - patched version of dmd to build the result (no error on variable shadowing etc) It produces a 70000 line d file which appears to get through 3/7ths of semantic1. Root needs to be ported, and a cleaner interface to the backend is needed to compile the glue layer.
Update: With the bulk of root converting or ported, and the glue layer stubbed out, I can build dmd from the converted source then lex and parse the source (+druntime headers) again. The highlight was the dynamically resized struct in root/stringtable. Something went horribly wrong there.
Update: I can now generate the source, then build a frontend from that, then process the frontend's source again with the built compiler. It also works on the conversion tool, and pulls in a sizeable chunk of druntime and phobos.
do i get it right - you've converted the dmd C++ code with it?
Mar 11 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"dennis luehring" <dl.soluz gmx.net> wrote in message 
news:khkqug$v57$1 digitalmars.com...
 Am 11.03.2013 15:20, schrieb Daniel Murphy:
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:khfoa6$fm7$1 digitalmars.com...
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:kgumek$2tp4$1 digitalmars.com...
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in 
 message
 news:pabfuaorrjbljxzrglbv forum.dlang.org...
 Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
More like a full-blown attempt than a beginning. I started this a long time ago. There are three parts to it: - c++ parser/d printer, with lots of cheating and special cases - patches to the c++ source - patched version of dmd to build the result (no error on variable shadowing etc) It produces a 70000 line d file which appears to get through 3/7ths of semantic1. Root needs to be ported, and a cleaner interface to the backend is needed to compile the glue layer.
Update: With the bulk of root converting or ported, and the glue layer stubbed out, I can build dmd from the converted source then lex and parse the source (+druntime headers) again. The highlight was the dynamically resized struct in root/stringtable. Something went horribly wrong there.
Update: I can now generate the source, then build a frontend from that, then process the frontend's source again with the built compiler. It also works on the conversion tool, and pulls in a sizeable chunk of druntime and phobos.
do i get it right - you've converted the dmd C++ code with it?
Umm...

C++ compiler source -> my tool -> D source
D source -> normal dmd -> self-host dmd
D source -> self-host dmd -> no problems, but only the frontend so no code generation
tool source -> self-host dmd -> same thing
Mar 11 2013
next sibling parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 11.03.2013 16:23, schrieb Daniel Murphy:
 "dennis luehring" <dl.soluz gmx.net> wrote in message
 news:khkqug$v57$1 digitalmars.com...
 Am 11.03.2013 15:20, schrieb Daniel Murphy:
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:khfoa6$fm7$1 digitalmars.com...
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:kgumek$2tp4$1 digitalmars.com...
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in
 message
 news:pabfuaorrjbljxzrglbv forum.dlang.org...
 Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
More like a full-blown attempt than a beginning. I started this a long time ago. There are three parts to it: - c++ parser/d printer, with lots of cheating and special cases - patches to the c++ source - patched version of dmd to build the result (no error on variable shadowing etc) It produces a 70000 line d file which appears to get through 3/7ths of semantic1. Root needs to be ported, and a cleaner interface to the backend is needed to compile the glue layer.
Update: With the bulk of root converting or ported, and the glue layer stubbed out, I can build dmd from the converted source then lex and parse the source (+druntime headers) again. The highlight was the dynamically resized struct in root/stringtable. Something went horribly wrong there.
Update: I can now generate the source, then build a frontend from that, then process the frontend's source again with the built compiler. It also works on the conversion tool, and pulls in a sizeable chunk of druntime and phobos.
do i get it right - you've converted the dmd C++ code with it?
Umm... C++ compiler source -> my tool -> D source D source -> normal dmd -> self-host dmd D source -> self-host dmd -> no problems, but only the frontend so no code generation tool source -> self-host dmd -> same thing
But interesting enough to get its own root newsgroup post, I think - or is the "quality" (converted source etc., whatever) too bad?
Mar 11 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"dennis luehring" <dl.soluz gmx.net> wrote in message 
news:khku3t$15ja$1 digitalmars.com...
 Am 11.03.2013 16:23, schrieb Daniel Murphy:
 "dennis luehring" <dl.soluz gmx.net> wrote in message
 news:khkqug$v57$1 digitalmars.com...
 Am 11.03.2013 15:20, schrieb Daniel Murphy:
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:khfoa6$fm7$1 digitalmars.com...
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:kgumek$2tp4$1 digitalmars.com...
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in
 message
 news:pabfuaorrjbljxzrglbv forum.dlang.org...
 Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
More like a full-blown attempt than a beginning. I started this a long time ago. There are three parts to it: - c++ parser/d printer, with lots of cheating and special cases - patches to the c++ source - patched version of dmd to build the result (no error on variable shadowing etc) It produces a 70000 line d file which appears to get through 3/7ths of semantic1. Root needs to be ported, and a cleaner interface to the backend is needed to compile the glue layer.
Update: With the bulk of root converting or ported, and the glue layer stubbed out, I can build dmd from the converted source then lex and parse the source (+druntime headers) again. The highlight was the dynamically resized struct in root/stringtable. Something went horribly wrong there.
Update: I can now generate the source, then build a frontend from that, then process the frontend's source again with the built compiler. It also works on the conversion tool, and pulls in a sizeable chunk of druntime and phobos.
do i get it right - you've converted the dmd C++ code with it?
Umm... C++ compiler source -> my tool -> D source D source -> normal dmd -> self-host dmd D source -> self-host dmd -> no problems, but only the frontend so no code generation tool source -> self-host dmd -> same thing
but interesting enough to get its own root newsgroup post i think - or it the "quality"(converted source etc. whatever) too bad
I'm planning to when it can do the entire test suite, and all of phobos and druntime.

The code generated is very close to what you would get running it through a (bad) formatter, with comments removed. I will eventually preserve the comments and improve the formatting.

Performance wise the code is pretty nasty because I'm allocating all OutBuffers on the heap and inserting tracing code. This will need to be fixed eventually but is fine for checking correctness.

Here's an example conversion (from PrettyFuncInitExp). Almost all of the differences are from the primitive pretty-printer.

--------------------------------------------------------
C++ version
--------------------------------------------------------

Expression *PrettyFuncInitExp::resolveLoc(Loc loc, Scope *sc)
{
    FuncDeclaration *fd;
    if (sc->callsc && sc->callsc->func)
        fd = sc->callsc->func;
    else
        fd = sc->func;

    const char *s;
    if (fd)
    {
        const char *funcStr = fd->Dsymbol::toPrettyChars();
        HdrGenState hgs;
        OutBuffer buf;
        functionToCBuffer2((TypeFunction *)fd->type, &buf, &hgs, 0, funcStr);
        buf.writebyte(0);
        s = (const char *)buf.extractData();
    }
    else
    {
        s = "";
    }

    Expression *e = new StringExp(loc, (char *)s);
    e = e->semantic(sc);
    e = e->castTo(sc, type);
    return e;
}

--------------------------------------------------------
D version
--------------------------------------------------------

Expression resolveLoc(Loc loc, Scope sc)
{
    tracein("resolveLoc");
    scope(success) traceout("resolveLoc");
    scope(failure) traceerr("resolveLoc");
    {
        FuncDeclaration fd;
        if ((sc.callsc && sc.callsc.func))
            (fd = sc.callsc.func);
        else
            (fd = sc.func);
        const(char)* s;
        if (fd)
        {
            const(char)* funcStr = fd.Dsymbol.toPrettyChars();
            HdrGenState hgs;
            OutBuffer buf = new OutBuffer();
            functionToCBuffer2((cast(TypeFunction)fd.type), buf, (&hgs), 0, funcStr);
            buf.writebyte(0);
            (s = (cast(const(char)*)buf.extractData()));
        }
        else
        {
            (s = "");
        }
        Expression e = (new StringExp(loc, (cast(char*)s)));
        (e = e.semantic(sc));
        (e = e.castTo(sc, type));
        return e;
    }
}
Mar 11 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 11 March 2013 16:01, Daniel Murphy <yebblies nospamgmail.com> wrote:

 "dennis luehring" <dl.soluz gmx.net> wrote in message
 news:khku3t$15ja$1 digitalmars.com...
 Am 11.03.2013 16:23, schrieb Daniel Murphy:
 "dennis luehring" <dl.soluz gmx.net> wrote in message
 news:khkqug$v57$1 digitalmars.com...
 Am 11.03.2013 15:20, schrieb Daniel Murphy:
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:khfoa6$fm7$1 digitalmars.com...
 "Daniel Murphy" <yebblies nospamgmail.com> wrote in message
 news:kgumek$2tp4$1 digitalmars.com...
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in
 message
 news:pabfuaorrjbljxzrglbv forum.dlang.org...
 Something like this https://github.com/yebblies/magicport2 ?
Yes! I need to look it over more thoroughly, but I couldn't ask for a better beginning. Can I trust that you'll be a willing part of future discussions on this matter, even if only to play Devil's Advocate?
More like a full-blown attempt than a beginning.  I started this a long
time ago.  There are three parts to it:

- c++ parser/d printer, with lots of cheating and special cases
- patches to the c++ source
- patched version of dmd to build the result (no error on variable
  shadowing etc)

It produces a 70000 line d file which appears to get through 3/7ths of
semantic1.  Root needs to be ported, and a cleaner interface to the
backend is needed to compile the glue layer.
Update: With the bulk of root converting or ported, and the glue
layer
 stubbed out, I can build dmd from the converted source then lex and
 parse
 the source (+druntime headers) again.

 The highlight was the dynamically resized struct in root/stringtable.
 Something went horribly wrong there.
Update: I can now generate the source, then build a frontend from
that,
 then
 process the frontend's source again with the built compiler.  It also
 works
 on the conversion tool, and pulls in a sizeable chunk of druntime and
 phobos.
do i get it right - you've converted the dmd C++ code with it?
Umm...

C++ compiler source -> my tool -> D source
D source -> normal dmd -> self-host dmd
D source -> self-host dmd -> no problems, but only the frontend so no
code generation
tool source -> self-host dmd -> same thing
but interesting enough to get its own root newsgroup post i think - or is
the "quality" (converted source etc. whatever) too bad?
I'm planning to when it can do the entire test suite, and all of phobos and
druntime.

The code generated is very close to what you would get running it through a
(bad) formatter, with comments removed.  I will eventually preserve the
comments and improve the formatting.

Performance wise the code is pretty nasty because I'm allocating all
OutBuffers on the heap and inserting tracing code.  This will need to be
fixed eventually but is fine for checking correctness.

Here's an example conversion: (from PrettyFuncInitExp)
Almost all of the differences are from the primitive pretty-printer.

--------------------------------------------------------
C++ version
--------------------------------------------------------

Expression *PrettyFuncInitExp::resolveLoc(Loc loc, Scope *sc)
{
    FuncDeclaration *fd;

    if (sc->callsc && sc->callsc->func)
        fd = sc->callsc->func;
    else
        fd = sc->func;

    const char *s;
    if (fd)
    {
        const char *funcStr = fd->Dsymbol::toPrettyChars();
        HdrGenState hgs;
        OutBuffer buf;
        functionToCBuffer2((TypeFunction *)fd->type, &buf, &hgs, 0, funcStr);
        buf.writebyte(0);
        s = (const char *)buf.extractData();
    }
    else
        s = "";

    Expression *e = new StringExp(loc, (char *)s);
    e = e->semantic(sc);
    e = e->castTo(sc, type);
    return e;
}

--------------------------------------------------------
D version
--------------------------------------------------------

Expression resolveLoc(Loc loc, Scope sc)
{
    tracein("resolveLoc");
    scope(success) traceout("resolveLoc");
    scope(failure) traceerr("resolveLoc");
    {
        FuncDeclaration fd;
        if ((sc.callsc && sc.callsc.func))
            (fd = sc.callsc.func);
        else
            (fd = sc.func);
        const(char)* s;
        if (fd)
        {
            const(char)* funcStr = fd.Dsymbol.toPrettyChars();
            HdrGenState hgs;
            OutBuffer buf = new OutBuffer();
            functionToCBuffer2((cast(TypeFunction)fd.type), buf, (&hgs), 0, funcStr);
            buf.writebyte(0);
            (s = (cast(const(char)*)buf.extractData()));
        }
        else
        {
            (s = "");
        }
        Expression e = (new StringExp(loc, (cast(char*)s)));
        (e = e.semantic(sc));
        (e = e.castTo(sc, type));
        return e;
    }
}
(The D conversion seems to think it's lisp).

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 11 2013
next sibling parent FG <home fgda.pl> writes:
On 2013-03-11 18:36, Iain Buclaw wrote:
 (The D conversion seems to think it's lisp).
(LOL)
Mar 11 2013
prev sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Iain Buclaw" <ibuclaw ubuntu.com> wrote in message 
news:mailman.357.1363023395.14496.digitalmars-d puremagic.com...
 (The D conversion seems to think it's lisp).
Expression resolveLoc(Loc loc, Scope sc)
{
    tracein("resolveLoc");
    scope(success) traceout("resolveLoc");
    scope(failure) traceerr("resolveLoc");
    {
        FuncDeclaration fd;
        if (sc.callsc && sc.callsc.func)
            fd = sc.callsc.func;
        else
            fd = sc.func;
        const(char)* s;
        if (fd)
        {
            const(char)* funcStr = fd.Dsymbol.toPrettyChars();
            HdrGenState hgs;
            OutBuffer buf = new OutBuffer();
            functionToCBuffer2(cast(TypeFunction)fd.type, buf, &hgs, 0, funcStr);
            buf.writebyte(0);
            s = cast(const(char)*)buf.extractData();
        }
        else
        {
            s = "";
        }
        Expression e = new StringExp(loc, cast(char*)s);
        e = e.semantic(sc);
        e = e.castTo(sc, type);
        return e;
    }
}
Mar 11 2013
next sibling parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 12.03.2013 03:25, schrieb Daniel Murphy:
 "Iain Buclaw" <ibuclaw ubuntu.com> wrote in message
 news:mailman.357.1363023395.14496.digitalmars-d puremagic.com...
 (The D conversion seems to think it's lisp).
Expression resolveLoc(Loc loc, Scope sc)
{
    tracein("resolveLoc");
    scope(success) traceout("resolveLoc");
    scope(failure) traceerr("resolveLoc");
    {
        FuncDeclaration fd;
        if (sc.callsc && sc.callsc.func)
            fd = sc.callsc.func;
        else
            fd = sc.func;
        const(char)* s;
        if (fd)
        {
            const(char)* funcStr = fd.Dsymbol.toPrettyChars();
            HdrGenState hgs;
            OutBuffer buf = new OutBuffer();
            functionToCBuffer2(cast(TypeFunction)fd.type, buf, &hgs, 0, funcStr);
            buf.writebyte(0);
            s = cast(const(char)*)buf.extractData();
        }
        else
        {
            s = "";
        }
        Expression e = new StringExp(loc, cast(char*)s);
        e = e.semantic(sc);
        e = e.castTo(sc, type);
        return e;
    }
}
looks not that bad - the big question for me is - do you think that this could be the way to do the port (or do you just "test" how far you can get with automated conversion)
Mar 11 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"dennis luehring" <dl.soluz gmx.net> wrote in message 
news:khmgep$1a6s$1 digitalmars.com...
 looks not that bad - the big question for me is - do you think that this 
 could be the way to do the port (or do you just "test" how far you can get 
 with automated conversion)
This is the way. With automatic conversion, development can continue on the C++ frontend until the D version is ready to become _the_ frontend. The C++ code needs a lot of cleanup.
Mar 12 2013
parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 12.03.2013 10:59, schrieb Daniel Murphy:
 "dennis luehring" <dl.soluz gmx.net> wrote in message
 news:khmgep$1a6s$1 digitalmars.com...
 looks not that bad - the big question for me is - do you think that this
 could be the way to do the port (or do you just "test" how far you can get
 with automated conversion)
This is the way. With automatic conversion, development can continue on the C++ frontend until the D version is ready to become _the_ frontend. The C++ code needs a lot of cleanup.
maybe it will get near perfection :) and become an standard tool like htod
Mar 12 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"dennis luehring" <dl.soluz gmx.net> wrote in message 
news:khnbps$31c5$1 digitalmars.com...
 Am 12.03.2013 10:59, schrieb Daniel Murphy:
 "dennis luehring" <dl.soluz gmx.net> wrote in message
 news:khmgep$1a6s$1 digitalmars.com...
 looks not that bad - the big question for me is - do you think that this
 could be the way to do the port (or do you just "test" how far you can 
 get
 with automated conversion)
This is the way. With automatic conversion, development can continue on the C++ frontend until the D version is ready to become _the_ frontend. The C++ code needs a lot of cleanup.
maybe it will get near perfection :) and become an standard tool like htod
Unfortunately it only works because it can make a lot of assumptions about the dmd source, and the subset of C++ it uses. The same approach can be used with other C++ projects, but not the same tool.
Mar 12 2013
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tuesday, 12 March 2013 at 14:32:05 UTC, Daniel Murphy wrote:
 Unfortunately it only works because it can make a lot of 
 assumptions about
 the dmd source, and the subset of C++ it uses.  The same 
 approach can be
 used with other C++ projects, but not the same tool.
Visual D comes with a "C++ to D" conversion wizard (also usable as a stand-alone command-line tool): http://www.dsource.org/projects/visuald/wiki/Tour/CppConversion
Mar 12 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message 
news:zhdnnukpqrpbydlyvnci forum.dlang.org...
 On Tuesday, 12 March 2013 at 14:32:05 UTC, Daniel Murphy wrote:
 Unfortunately it only works because it can make a lot of assumptions 
 about
 the dmd source, and the subset of C++ it uses.  The same approach can be
 used with other C++ projects, but not the same tool.
Visual D comes with a "C++ to D" conversion wizard (also usable as a stand-alone command-line tool): http://www.dsource.org/projects/visuald/wiki/Tour/CppConversion
Why didn't I know this existed...
Mar 12 2013
prev sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 12 March 2013 02:25, Daniel Murphy <yebblies nospamgmail.com> wrote:

 "Iain Buclaw" <ibuclaw ubuntu.com> wrote in message
 news:mailman.357.1363023395.14496.digitalmars-d puremagic.com...
 (The D conversion seems to think it's lisp).
Expression resolveLoc(Loc loc, Scope sc)
{
    tracein("resolveLoc");
    scope(success) traceout("resolveLoc");
    scope(failure) traceerr("resolveLoc");
    {
        FuncDeclaration fd;
        if (sc.callsc && sc.callsc.func)
            fd = sc.callsc.func;
        else
            fd = sc.func;
        const(char)* s;
        if (fd)
        {
            const(char)* funcStr = fd.Dsymbol.toPrettyChars();
            HdrGenState hgs;
            OutBuffer buf = new OutBuffer();
            functionToCBuffer2(cast(TypeFunction)fd.type, buf, &hgs, 0, funcStr);
            buf.writebyte(0);
            s = cast(const(char)*)buf.extractData();
        }
        else
        {
            s = "";
        }
        Expression e = new StringExp(loc, cast(char*)s);
        e = e.semantic(sc);
        e = e.castTo(sc, type);
        return e;
    }
}
Yes, I know it can be cleaned up.  Just thought I might chime in on a
point I thought was amusing.

Regards
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 12 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Iain Buclaw" <ibuclaw ubuntu.com> wrote in message 
news:mailman.373.1363091214.14496.digitalmars-d puremagic.com...
 On 12 March 2013 02:25, Daniel Murphy <yebblies nospamgmail.com> wrote:


 Yes, I know it can be cleaned up.  Just thought I might chime in on a 
 point
 I thought was amusing.
I know. Your remark inspired me to clean it up!
Mar 12 2013
prev sibling parent reply "Zach the Mystic" <reachzach gggggmail.com> writes:
On Monday, 11 March 2013 at 15:23:17 UTC, Daniel Murphy wrote:
 Umm...

 C++ compiler source -> my tool -> D source
 D source -> normal dmd -> self-host dmd
 D source -> self-host dmd -> no problems, but only the frontend 
 so no code
 generation
 tool source -> self-host dmd -> same thing
This is great. I'm still trying to invent the wheel, and you're driving a
sports car. I've been trying to develop a general purpose text-to-text
translation program, but it may be just something I need to do for
myself - I need to learn this stuff, but your progress is really
encouraging.

Are there any blind spots to your approach right now? It looks like you're
destined to get this whole thing covered. I was wondering your thoughts on
some of the more sophisticated operations, like converting all of dmd/root
to built-in D, OutBuffer -> Appender!string, standard C/C++ libraries
(malloc, strcmp, const char* -> string, etc.)? Are these things within
grasp, given the incentive and enough time, or farther away than what
you've got right now?
Mar 13 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Zach the Mystic" <reachzach gggggmail.com> wrote in message 
news:ybbtwnbvxhjcpgbnaoaq forum.dlang.org...
 On Monday, 11 March 2013 at 15:23:17 UTC, Daniel Murphy wrote:
 Umm...

 C++ compiler source -> my tool -> D source
 D source -> normal dmd -> self-host dmd
 D source -> self-host dmd -> no problems, but only the frontend so no 
 code
 generation
 tool source -> self-host dmd -> same thing
This is great. I'm still trying to invent the wheel, and you're driving a sports car. I've been trying to develop a general purpose text-to-text translation program, but it may be just something I need to do for myself - I need to learn this stuff, but your progress is really encouraging.
I did have a couple years head start.
 Are there any blind spots to your approach right now? It looks like you're 
 destined to get this whole thing covered. I was wondering your thoughts on 
 some of the more sophisticated operations, like converted all of dmd/root 
 to built-in D, OutBuffer -> Appender!string, standard C/C++ libraries 
 (malloc, strcmp, const char* -> string, etc.)? Are these things within 
 grasp, given the incentive and enough time, or farther away than what 
 you've got right now?
Most of these are possible, some harder than others.  I'd rather do as
little refactoring as possible now, and leave that until after it's all
in D.

What I'm up to now: the glue layer needs to be linked into the D code, and
for this extern(C++) needs to be upgraded a bit.
Mar 16 2013
prev sibling parent reply "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Sunday, 3 March 2013 at 03:06:15 UTC, Daniel Murphy wrote:
 Every single one of these would have to be special-cased. If 
 you had a domain-specific language you could keep track of 
 whether you were mid-declaration, mid-statement, or 
 mid-string-literal. Half the stuff you special-case could 
 probably be applied to other C++ projects as well.

 If this works, the benefits are just enormous. In fact, I 
 would actually like to "waste" my time trying to make this 
 work, but I'm going to need to ask a lot of questions because 
 my current programming skills are nowhere near the average 
 level of posters at this forum.

 I would like a c++ lexer (with whitespace) to start with. Then 
 a discussion of parsers and emitters. Then a ton of questions 
 just on learning github and other basics.

 I would also like the sanction of some of the more experienced 
 people here, saying it's at least worth a go, even if other 
 strategies are simultaneously pursued.
Something like this https://github.com/yebblies/magicport2 ?
Since you're obviously way ahead of me on this, I'm going to go ahead and
say everything I've been thinking about this issue.

My approach to translating the source would be more-or-less naive. That
is, I would be trying to do simple pattern-matching and replacement as
much as possible. I would try to go as far as I could without the scanner
knowing any context-sensitive information. When I added a piece of
context-sensitive information, I would do so by observing the failures of
the naive output, and adding pieces one by one, searching for the most
bang for my context-sensitive buck. It would be nice to see upwards of 50
percent or more of the code conquered by just a few such carefully
selected context-sensitive bucks.

Eventually the point of diminishing returns would be met with these simple
additions. It would be of utility to have a language at that point, which,
instead of seeking direct gains in its ability to transform dmd code, saw
its gains in the ease and flexibility with which one could add the
increasingly obscure and detailed special cases to it. I don't know how to
set up that language or its data structures, but I can tell you what I'd
like to be able to do with it.

I would like to be able to query which function I am in, which class I am
assembling, etc. I would like to be able to take a given piece of text and
say exactly what text should replace it, so that complex macros could be
rewritten to their equivalent static pure D functions. In other words,
when push comes to shove, I want to be able to brute-force a particularly
hard substitution with direct access to the context-sensitive data
structure. For example, suppose I know that some strange macro
peculiarities of a function add an extra '}' brace which is not read by
C++ but is picked up by the naive nesting '{}' tracker, which botches up
its 'nestedBraceLevel' variable. It would be necessary to be able to say:

if (currentFunction == "oneIKnowToBeMessedUp" &&
    currentLine >= funcList.oneIKnowToBeMessedUp.startingLine + 50)
    { --nestedBraceLevel; }

My founding principle is Keep It Simple Stupid. I don't know if it's the
best way to start, but barring expert advice steering me away from it, it
would be the best for someone like me who had no experience and needed to
learn from the ground up what worked and what didn't.

Another advantage of the domain-specific language as described above would
be its reusability for whatever transformations are common in C++, say
transforming 'strcmp(a,b)' -> 'a == b', and its possible use for adding
special cases when translating from one language to another generally
speaking. I don't know the difference between what I'm describing and a
basic macro text processing language - they might be the same.

My last thought is probably well-tread ground, but the translation program
should have import dependency charts for its target program, and automate
imports on a per-symbol basis, so it lays out the total file in two steps.

import std.array : front, array;

One thing I'm specifically avoiding in this proposal is a sophisticated
awareness of the C++ grammar. I'm hoping special cases cover whatever
ground might be more perfectly trod by a totally grammar-aware conversion
mechanism.

Now you're as up-to-date as I am on what I'm thinking.
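[Editorial note] The naive pattern-and-replace pass described above can be sketched in a few lines of C++. This is an illustration only, under stated assumptions: the `replaceAll` and `naiveConvert` names and the rule table are invented here and are not from any tool discussed in the thread.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Replace every occurrence of `from` with `to` in `s`.
static std::string replaceAll(std::string s, const std::string &from,
                              const std::string &to)
{
    for (size_t pos = 0; (pos = s.find(from, pos)) != std::string::npos;
         pos += to.size())
        s.replace(pos, from.size(), to);
    return s;
}

// A tiny table of context-free rewrite rules in the spirit of the naive
// approach: each C++ spelling maps to its D spelling.  Being blind to
// context, this will also rewrite '->' and '::' inside string literals
// and comments -- exactly the failure mode discussed later in the thread.
static std::string naiveConvert(std::string line)
{
    static const std::vector<std::pair<std::string, std::string>> rules = {
        { "Expression *", "Expression " },  // reference semantics: drop '*'
        { "->", "." },                      // member access
        { "::", "." },                      // scope resolution
    };
    for (const auto &r : rules)
        line = replaceAll(line, r.first, r.second);
    return line;
}
```

For example, `naiveConvert("Expression *e = sc->func;")` yields `"Expression e = sc.func;"` — useful on clean input, but every exception to a rule becomes a special case.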
Mar 02 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message 
news:ewtgqpcvhmlaaibiaezc forum.dlang.org...
 Since you're obviously way ahead of me on this, I'm going to go ahead and 
 say everything I've been thinking about this issue.

 My approach to translating the source would be more-or-less naive. That 
 is, I would be trying to do simple pattern-matching and replacement as 
 much as possible. I would try to go as far as I could without the scanner 
 knowing any context-sensitive information. When I added a piece of 
 context-sensitive information, I would do so by observing the failures of 
 the naive output, and adding pieces one by one, searching for the most 
 bang for my context-sensitive buck. It would be nice to see upwards of 50 
 percent or more of the code conquered by just a few such carefully 
 selected context-sensitive bucks.

 Eventually the point of diminishing returns would be met with these simple 
 additions. It would be of utility to have a language at that point, which, 
 instead of seeking direct gains in its ability to transform dmd code, saw 
 its gains in the ease and flexibility with which one could add the 
 increasingly obscure and detailed special cases to it. I don't know how to 
 set up that language or its data structures, but I can tell you what I'd 
 like to be able to do with it.

 I would like to be able to query which function I am in, which class I am 
 assembling, etc. I would like to be able to take a given piece of text and 
 say exactly what text should replace it, so that complex macros could be 
 rewritten to their equivalent static pure D functions. In other words, 
 when push comes to shove, I want to be able to brute-force a particularly 
 hard substitution direct access to the context-sensitive data structure. 
 For example, suppose I know that some strange macro peculiarities of a 
 function add an extra '}' brace which is not read by C++ but is picked up 
 by the naive nesting '{}' tracker, which botches up its 'nestedBraceLevel' 
 variable. It would be necessary to be able to say:

 if (currentFunction == "oneIKnowToBeMessedUp" &&
    currentLine >= funcList.oneIKnowToBeMessedUp.startingLine +50)
    { --nestedBraceLevel; }

 My founding principle is Keep It Simple Stupid. I don't know if it's the 
 best way to start, but barring expert advice steering me away from it, it 
 would be the best for someone like me who had no experience and needed to 
 learn from the ground up what worked and what didn't.

 Another advantage of the domain-specific language as described above would 
 its reusability of whatever transformations are common in C++, say 
 transforming 'strcmp(a,b)' -> 'a == b', and it's possible use for adding 
 special cases to translating from one language to another generally 
 speaking . I don't know the difference between what I'm describing and a 
 basic macro text processing language - they might be the same.

 My last thought is probably well-tread ground, but the translation program 
 should have import dependency charts for its target program, and automate 
 imports on a per-symbol basis, so it lays out the total file in two steps.

 import std.array : front, array;

 One thing I'm specifically avoiding in this proposal is a sophisticated 
 awareness of the C++ grammar. I'm hoping special cases cover whatever 
 ground might be more perfectly trod by a totally grammar-aware conversion 
 mechanism.

 Now you're as up-to-date as I am on what I'm thinking.
I did something like that before (token-level pattern matching) and found
the number of special cases to be much much too high.  You need so much
context information you're better off just building an ast and operating
on that.

For the nastier special cases, I'm modifying the compiler source to
eliminate them.  This mostly means expanding macros and adding casts.

Many of the same ideas apply, although I'm not trying to eg use native
arrays and strings, just a direct port.
Mar 02 2013
parent reply "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Sunday, 3 March 2013 at 07:27:51 UTC, Daniel Murphy wrote:
 Now you're as up-to-date as I am on what I'm thinking.
I did something like that before (token-level pattern matching) and found the number of special cases to be much much too high. You need so much context information you're better off just building an ast and operating on that.
What were the biggest and most common reasons you needed context information?
Mar 03 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message 
news:kidboshnjpowpyqrtwjl forum.dlang.org...
 On Sunday, 3 March 2013 at 07:27:51 UTC, Daniel Murphy wrote:
 Now you're as up-to-date as I am on what I'm thinking.
I did something like that before (token-level pattern matching) and found the number of special cases to be much much too high. You need so much context information you're better off just building an ast and operating on that.
What were the biggest and most common reasons you needed context information?
Turning implicit into explicit conversions.  A big one is 0 -> Loc(0).
dinteger_t -> size_t.
void* -> char*.
string literal to char*.
string literal to unsigned char*.
unsigned -> unsigned char.
int -> bool.
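[Editorial note] A contrived C++ sketch of the kind of conversions listed here (all names invented for illustration): the first function leans on C++'s implicit conversions, while the second spells out the casts the way the D translation must.

```cpp
#include <cassert>
#include <cstddef>

typedef unsigned long long dinteger_t;  // stand-in for dmd's widest host integer

// C++ happily narrows and converts implicitly...
static bool implicitStyle(dinteger_t n)
{
    int flag = 1;
    bool b = flag;        // int -> bool, implicit in C++, error in D
    std::size_t idx = n;  // dinteger_t -> size_t, implicit in C++
    return b && idx == 42;
}

// ...while the converted D code must insert each cast explicitly, which is
// exactly the context information a conversion tool has to reconstruct.
static bool explicitStyle(dinteger_t n)
{
    int flag = 1;
    bool b = static_cast<bool>(flag);         // cast(bool)flag in D
    std::size_t idx = static_cast<std::size_t>(n);  // cast(size_t)n in D
    return b && idx == 42;
}
```

Both functions compute the same result; the point is only that the second form makes every conversion visible at the call site.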
Mar 03 2013
next sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On Mar 4, 2013 2:41 AM, "Daniel Murphy" <yebblies nospamgmail.com> wrote:
 "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message
 news:kidboshnjpowpyqrtwjl forum.dlang.org...
 On Sunday, 3 March 2013 at 07:27:51 UTC, Daniel Murphy wrote:
 Now you're as up-to-date as I am on what I'm thinking.
I did something like that before (token-level pattern matching) and
found
 the number of special cases to be much much too high.  You need so much
 context information you're better off just building an ast and
operating
 on
 that.
What were the biggest and most common reasons you needed context information?
Turning implicit into explicit conversions.  A big one is 0 -> Loc(0).
dinteger_t -> size_t.
void* -> char*.
string literal to char*.
string literal to unsigned char*.
unsigned -> unsigned char.
int -> bool.
All look fine except for dinteger_t, which should be -> long (it should
always be the widest integer type supported by the host, e.g. long long).

Regards
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 04 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Iain Buclaw" <ibuclaw ubuntu.com> wrote in message 
news:mailman.215.1362390328.14496.digitalmars-d puremagic.com...
 All look fine except for dinteger_t, which should be -> long (it should
 always be the widest integer type supported by the host eg: longlong.

 Regards
 -- 
 Iain Buclaw

 *(p < e ? p++ : p) = (c & 0x0f) + '0';
I know, it's nasty, but dmd does this _everywhere_.  Expression::toInteger
returns dinteger_t, then it is used to index arrays, set offsets, etc.

I'm now using a modified compiler that accepts all these conversions,
doesn't error on variable shadowing, and lets you compare pointers with
null without 'is'.

I've managed to process, compile and link the frontend.  Next root, then
glue.
Mar 04 2013
prev sibling parent reply "Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> writes:
On Monday, 4 March 2013 at 02:36:23 UTC, Daniel Murphy wrote:
 What were the biggest and most common reasons you needed 
 context information?
Turning implicit into explicit conversions.  A big one is 0 -> Loc(0).
dinteger_t -> size_t.
void* -> char*.
string literal to char*.
string literal to unsigned char*.
unsigned -> unsigned char.
int -> bool.
I would like to play devil's advocate myself, at least on 0 -> Loc(0).

I found that in the source, the vast, vast majority of Loc instances were
named, of course, 'loc'. Of the few other ones, only 'endloc' was ever
assigned to 0. The token matcher could substitute:

'loc = 0' -> 'loc = Loc(0)'
'endloc = 0' -> 'endloc = Loc(0)'

As long as it had a list of D's AST classes, a pretty conservative attempt
to knock out a huge number of additional cases is:

'new DmdClassName(0' -> 'new DmdClassName(Loc(0)'

The core principle with the naive approach is to take advantage of
specific per-project conventions such as always giving the Loc first. The
more uniformity with which the project has been implemented, the more
likely this approach will work.

A lot of those other implicit conversions I do agree seem daunting. The
naive approach would require two features: one, a basic way of tracking a
variable's type. For example, it could have a list of known 'killer' types
which cause problems. When it sees one it records the next identifier it
finds and associates it with that type for the rest of the function. It
may then be slightly better able to know patterns where conversion is
desirable. The second feature would be a brute force way of saying, "You
meet pattern ZZZ: if in function XXX::YYY, replace it with WWW, else
replace with UUU." This is clearly the point of diminishing returns for
the naive approach, at which point I could only hope that a good
abstraction could make up a lot of ground when found necessary.

The point of diminishing returns for the whole naive approach is reached
when for every abstraction you add, you end up breaking as much code as
you fix. Then you're stuck with the grunt work of adding special case
after special case, and you might as well try something else at that
point...

My current situation is that my coding skills will lag behind my ability
to have ideas, so I don't have anything regarding my approach up and
running for comparison, but I want the conversation to be productive, so
I'll give you the ideas I've had since yesterday.

I would start by creating a program which converts the source by class,
one class at a time, and one file for each. It has a list of classes to
convert, and a list of data, methods, and overrides for each class - it
will only include what's on the list, so you can add classes and functions
one step at a time. For each method or override, a file to find it in, and
maybe a hint as to about where the function begins in said file.

You may have already thought of these, but just to say them out loud, some
more token replacements I was thinking of:

'SameName::SameName(...ABC...) : DifferentName(...XYZ...) {'
->
'this(...ABC...)
{
    super(...XYZ...);'

Standard reference semantics:
'DTreeClass *' -> 'DTreeClass'

Combined, they look like this:
'OrOrExp::OrOrExp(Loc loc, Expression *e1, Expression *e2)
        : BinExp(loc, TOKoror, sizeof(OrOrExp), e1, e2)
{'
->
'this(Loc loc, Expression e1, Expression e2)
{
    super(loc, TOKoror, sizeof(OrOrExp), e1, e2);'
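[Editorial note] The constructor rewrite sketched above can be approximated with a single regular expression. This is a deliberately naive illustration (the `rewriteCtor` name and the pattern are invented here, not from any tool in the thread), and its limits show why token-level approaches accumulate special cases.

```cpp
#include <cassert>
#include <regex>
#include <string>

// Rewrite a one-line C++ 'Name::Name(args) : Base(args)' constructor
// header into the D 'this(args) { super(args);' form.
// Limitations (on purpose): nested parentheses such as sizeof(...) in the
// base-class argument list defeat the [^)]* groups, as does any multi-line
// or whitespace-creative formatting.
static std::string rewriteCtor(const std::string &line)
{
    static const std::regex pat(
        R"((\w+)::\1\(([^)]*)\)\s*:\s*\w+\(([^)]*)\))");
    return std::regex_replace(line, pat, "this($2) { super($3);");
}
```

On simple input this works: `rewriteCtor("OrOrExp::OrOrExp(Loc loc) : BinExp(loc, TOKoror)")` gives `"this(Loc loc) { super(loc, TOKoror);"`. Real dmd constructors with `sizeof(...)` in the initializer already need a smarter matcher.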
Mar 04 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Zach the Mystic" <reachBUTMINUSTHISzach gOOGLYmail.com> wrote in message 
news:oxcqgprnwnsuzngfijyg forum.dlang.org...
 I would like to play devil's advocate myself, at least on 0 -> Loc(0).

 I found that in the source, the vast, vast majority of Loc instances were 
 named, of course, 'loc'. Of the few other ones, only 'endloc' was ever 
 assigned to 0. The token matcher could substitute:

 'loc = 0' -> 'loc = Loc(0)'
 'endloc = 0' -> 'endloc = Loc(0)'
This is fairly rare.
 As long as it had a list of the D's AST classes, a pretty conservative 
 attempt to knock out a huge number of additional cases is:
 'new DmdClassName(0' -> 'new DmdClassName(Loc(0)'
Yes, this mostly works, and is exactly what I did in a previous attempt.
 The core principle with the naive approach is to take advantage of 
 specific per-project conventions such as always giving the Loc first. The 
 more uniformity with which the project has been implemented, the more 
 likely this approach will work.

 A lot of those other implicit conversions I do agree seem daunting. The 
 naive approach would require two features, one, a basic way of tracking a 
 variable's type. For example, it could have a list of known 'killer' types 
 which cause problems. When it sees one it records the next identifier it 
 finds and associates it to that type for the rest of the function. It may 
 then be slightly better able to known patterns where conversion is 
 desirable. The second feature would be a brute force way of saying, "You 
 meet pattern ZZZ: if in function XXX::YYY, replace it with WWW, else 
 replace with UUU." This is clearly the point of diminishing returns for 
 the naive approach, at which point I could only hope that a good 
 abstraction could make up a lot of  ground when found necessary.
My experience was that you don't need to explicitly track which function
you are in, just keeping track of the file and matching a longer pattern
is enough.  Here is one of the files of patterns I made:
http://dpaste.dzfl.pl/3c9be703

Obviously this could be shorter with a dsl, and towards the end I started
using a less verbose SM + DumpOut approach.
 The point of diminishing returns for the whole naive approach is reached 
 when for every abstraction you add, you end up breaking as much code as 
 you fix. Then you're stuck with the grunt work of adding special case 
 after special case, and you might as well try something else at that 
 point...
Yeah...
 My current situation is that my coding skills will lag behind my ability 
 to have ideas, so I don't have anything regarding my approach up and 
 running for comparison, but I want the conversation to be productive, so 
 I'll give you the ideas I've had since yesterday.

 I would start by creating a program which converts the source by class, 
 one class at a time, and one file for each. It has a list of classes to 
 convert, and a list of data, methods, and overrides for each class - it 
 will only include what's on the list, so you can add classes and functions 
 one step at a time. For each method or override, a file to find it in, and 
 maybe a hint as to about where the function begins in said file.
That is waaaay too much information to gather manually. There are a LOT of classes and functions in dmd.
 You may have already thought of these, but just to say them out loud, some 
 more token replacements I was thinking of:

 'SameName::SameName(...ABC...) : DifferentName(...XYZ...) {'
 ->
 'this(...ABC...)
 {
     super(...XYZ...);'

 Standard reference semantics:
 'DTreeClass *' -> 'DTreeClass'

 Combined, they look like this:
 'OrOrExp::OrOrExp(Loc loc, Expression *e1, Expression *e2)
         : BinExp(loc, TOKoror, sizeof(OrOrExp), e1, e2)
 {'
 ->
 'this(Loc loc, Expression e1, Expression e2)
 {
     super(loc, TOKoror, sizeof(OrOrExp), e1, e2);'
Like I said, I went down this path before, and made some progress. It resulted in a huge list of cases. My second attempt was to 'parse' c++, recognising preprocessor constructs as regular ones. The frequent use of #ifdef cutting expressions makes this very, very difficult. So my current approach is to filter out the preprocessor conditionals first, before parsing. #defines and #pragmas survive to parsing. In short, doing this at the token level works, but because you're transforming syntax, not text, it's better to work on a syntax tree.
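For concreteness, here is a minimal Python sketch of the text-level constructor rewrite quoted above, including the reference-semantics rule. It is purely illustrative, not any of the actual converter code discussed in this thread, and it only handles one level of paren nesting — which is exactly why the syntax-tree approach wins in the end:

```python
import re

# 'Name::Name(args) : Base(args2) {' pattern; group 4 tolerates one
# level of nested parens (e.g. sizeof(OrOrExp)).
CTOR = re.compile(
    r'(\w+)::\1\(([^)]*)\)\s*:\s*(\w+)\(((?:[^()]|\([^()]*\))*)\)\s*\{')

def cxx_ctor_to_d(src):
    """Rewrite a C++ delegating constructor into D's this/super form,
    then drop '*' from class references (naive text-level passes)."""
    src = CTOR.sub(
        lambda m: 'this(%s)\n{\n    super(%s);' % (m.group(2), m.group(4)),
        src)
    # Standard reference semantics: 'Expression *e1' -> 'Expression e1'
    return re.sub(r'\b([A-Z]\w*)\s*\*\s*(\w+)', r'\1 \2', src)

cxx = '''OrOrExp::OrOrExp(Loc loc, Expression *e1, Expression *e2)
        : BinExp(loc, TOKoror, sizeof(OrOrExp), e1, e2)
{'''
print(cxx_ctor_to_d(cxx))
```

Running this on the OrOrExp example reproduces the D form shown above; anything with deeper nesting or preprocessor interference breaks the regex, illustrating the diminishing returns of the naive approach.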
Mar 04 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/1/2013 7:46 PM, Denis Koroskin wrote:
 I'm no copyright lawyer, but I think ddmd being a derivative work from dmd
 should probably inherit the license from it
It does indeed. But the derived part of the work can be any license the author chooses.
 If someone is willing to bring the project back from it's stale state - I'm
 more than willing to help (by both writing patches and explaining how the
 existing code works).
Sadly, your efforts will be wasted without getting a license from the author. If you want anybody else to use your code, you cannot ignore this issue.
Mar 01 2013
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Saturday, 2 March 2013 at 04:20:27 UTC, Walter Bright wrote:
 On 3/1/2013 7:46 PM, Denis Koroskin wrote:
 I'm no copyright lawyer, but I think ddmd being a derivative 
 work from dmd
 should probably inherit the license from it
It does indeed. But the derived part of the work can be any license the author chooses.
 If someone is willing to bring the project back from it's
stale state - I'm
 more than willing to help (by both writing patches and
explaining how the
 existing code works).
Sadly, your efforts will be wasted without getting a license from the author. If you want anybody else to use your code, you cannot ignore this issue.
I should have mentioned I'm the author...
Mar 01 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/1/2013 8:23 PM, Denis Koroskin wrote:
 I should have mentioned I'm the author...
That changes everything!
Mar 01 2013
prev sibling next sibling parent reply "Andrea Fontana" <nospam example.com> writes:
On Thursday, 28 February 2013 at 07:34:11 UTC, Jacob Carlborg 
wrote:
 Long term goal:

 When the translation is done we should refactor the 
 compiler/front end to be a library, usable by other tools.
Something like Fabrice Bellard's tcc/libtcc? (http://bellard.org/tcc/) If you can call dmd through an API, you can write a whole new range of applications. Every program could be a custom compiler, and could compile and patch itself.
Feb 28 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-02-28 11:45, Andrea Fontana wrote:

 Something like Fabrice Bellard's tcc/libtcc? (http://bellard.org/tcc/)
 If you can call dmd through an API, you can write a whole new range of
 applications. Every program could be a custom compiler, and could compile
 and patch itself.
I have no idea about tcc; I was thinking more of something like Clang and LLVM. -- /Jacob Carlborg
Feb 28 2013
prev sibling parent "Rob T" <alanb ucora.com> writes:
On Thursday, 28 February 2013 at 07:34:11 UTC, Jacob Carlborg 
wrote:
[...]
 Long term goal:

 When the translation is done we should refactor the 
 compiler/front end to be a library, usable by other tools.
Yes that would be awesome. It can make the exact same tool at least 10x more useful and versatile. I'm a big fan of tools that allow plugins, so I would also like to see parts of the compiler be written as loadable plugins that can be swapped in/out with different versions, and also to allow the tool set to be extensible by anyone. --rt
Feb 28 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-28 01:37, Andrei Alexandrescu wrote:

 Walter and I have had a long conversation about the next radical thing
 to do to improve D's standing. Like others in this community, we believe
 it's a good time to consider bootstrapping the compiler. Having the D
 compiler written in D has quite a few advantages, among which taking
 advantages of D's features and having a large codebase that would be its
 own test harness.
BTW, how are we going to bootstrap the compiler on any possible new platforms? Cross compiling? -- /Jacob Carlborg
Feb 27 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 11:35 PM, Jacob Carlborg wrote:
 BTW, how are we going to bootstrap the compiler on any possibly new platforms?
 Cross compiling?
Cross compiling.
Feb 28 2013
prev sibling next sibling parent dennis luehring <dl.soluz gmx.net> writes:
Am 28.02.2013 01:37, schrieb Andrei Alexandrescu:
 Hello,


 Walter and I have had a long conversation about the next radical thing
 to do to improve D's standing. Like others in this community, we believe
 it's a good time to consider bootstrapping the compiler. Having the D
 compiler written in D has quite a few advantages, among which taking
 advantages of D's features and having a large codebase that would be its
 own test harness.

 By this we'd like to initiate a dialog about how this large project can
 be initiated and driven through completion. Our initial basic ideas are:

 1. Implement the dtoh standalone program that takes a D module and
 generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the compiler.
 At given points throughout the code D code will coexist and link with
 C++ code.

 3. At a point in the future the last C++ module will be replaced with a
 D module. Going forward there will be no more need for a C++ compiler to
 build the compiler (except as a bootstrapping test).
Sounds like a very good idea: incremental, "stupid" ports first - refactoring comes later, when the port is as bug-free as the original. This will also shape dtoh into an even better state. Next question: is there any way of semi-automatically converting the code itself?
Feb 28 2013
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 28 February 2013 at 00:37:50 UTC, Andrei 
Alexandrescu wrote:
 Hello,


 Walter and I have had a long conversation about the next 
 radical thing to do to improve D's standing. Like others in 
 this community, we believe it's a good time to consider 
 bootstrapping the compiler. Having the D compiler written in D 
 has quite a few advantages, among which taking advantages of 
 D's features and having a large codebase that would be its own 
 test harness.

 By this we'd like to initiate a dialog about how this large 
 project can be initiated and driven through completion. Our 
 initial basic ideas are:

 1. Implement the dtoh standalone program that takes a D module 
 and generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the 
 compiler. At given points throughout the code D code will 
 coexist and link with C++ code.

 3. At a point in the future the last C++ module will be 
 replaced with a D module. Going forward there will be no more 
 need for a C++ compiler to build the compiler (except as a 
 bootstrapping test).

 It is essential that we get support from the larger community 
 for this. This is a large project that should enjoy strong 
 leadership apart from Walter himself (as he is busy with 
 dynamic library support which is strategic) and robust 
 participation from many of us.

 Please chime in with ideas on how to make this happen.


 Thanks,

 Andrei
I don't think that is a good idea. It will impair GDC and LDC quite a lot, especially GDC, as the GCC team accepted C++ only recently. This means no inclusion in the official GCC collection. It also means that porting DMD to D and making it the default implementation implies sticking with the DMD backend (which isn't open source, even if it is almost there). Not being open source can really impair D's popularity. It also prevents making actual progress on D during the translation process. Finally, Denis's ddmd hasn't succeeded.
Feb 28 2013
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/28/2013 2:03 AM, deadalnix wrote:
 That will impair GDC and LDC quite a lot. Especially GDC as GCC team accept C++
 only recently. This mean no inclusion in the official GCC collection.
Hmm. I had thought gccgo was written in go, but it is written in C++: http://golang.org/doc/gccgo_contribute.html
Feb 28 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-28 11:03, deadalnix wrote:

 I don't think that is a good idea.

 It will impair GDC and LDC quite a lot, especially GDC, as the GCC team
 accepted C++ only recently. This means no inclusion in the official GCC
 collection.

 It also means that porting DMD to D and making it the default implementation
 implies sticking with the DMD backend (which isn't open source, even if it
 is almost there). Not being open source can really impair D's popularity.

 It also prevents making actual progress on D during the translation
 process. Finally, Denis's ddmd hasn't succeeded.
They could stick with the C++ front end and fold in changes as needed. They can't do direct merges though. -- /Jacob Carlborg
Feb 28 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 28, 2013 at 03:05:31PM +0100, Jacob Carlborg wrote:
 On 2013-02-28 11:03, deadalnix wrote:
 
I don't think that is a good idea.

It will impair GDC and LDC quite a lot, especially GDC, as the GCC team
accepted C++ only recently. This means no inclusion in the official GCC
collection.

It also means that porting DMD to D and making it the default
implementation implies sticking with the DMD backend (which isn't open
source, even if it is almost there). Not being open source can really
impair D's popularity.

It also prevents making actual progress on D during the translation
process. Finally, Denis's ddmd hasn't succeeded.
They could stick with the C++ front end and fold in changes as needed. They can't do direct merges though.
[...] This will be a big problem once the front end is completely written in D. The GDC maintainers will have to translate bugfixes to the D code back to the C++ code. In some cases this may not be possible (D-specific features may be used in the fix, which requires non-trivial translation to C++, which is prone to bugs not in the D code). This will be a lot of maintenance work. This is one of the reasons I suggested using a frozen version of D to write the front end with. That way, we can include the C++ source for that frozen version in GDC, and then bundle the newer D source code with it, so during the bootstrapping process, the GCC scripts first build a working (but older) D compiler from the C++ sources, then use that to compile the newer D source code to produce the final compiler. T -- The trouble with TCP jokes is that it's like hearing the same joke over and over.
Feb 28 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/28/13 5:03 AM, deadalnix wrote:
 That will impair GDC and LDC quite a lot.
Let's see what the respective project leaders say. Andrei
Feb 28 2013
next sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 28 February 2013 15:24, Andrei Alexandrescu <
SeeWebsiteForEmail erdani.org> wrote:

 On 2/28/13 5:03 AM, deadalnix wrote:

 That will impair GDC and LDC quite a lot.
Let's see what the respective project leaders say. Andrei
I'll provide facts, but I'll reserve any opinion to myself. So, feel free to send me a list of questions you want me to answer. :o) -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Feb 28 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/28/13 10:53 AM, Iain Buclaw wrote:
 On 28 February 2013 15:24, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org <mailto:SeeWebsiteForEmail erdani.org>>
 wrote:

     On 2/28/13 5:03 AM, deadalnix wrote:

         That will impair GDC and LDC quite a lot.


     Let's see what the respective project leaders say.

     Andrei



 I'll provide facts, but I'll reserve any opinion to myself.

 So, feel free to send me a list of questions you want me to answer. :o)
"Would an initiative of porting dmd to D create difficulties for gdc?" Andrei
Feb 28 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 28 February 2013 16:01, Andrei Alexandrescu <
SeeWebsiteForEmail erdani.org> wrote:

 On 2/28/13 10:53 AM, Iain Buclaw wrote:

 On 28 February 2013 15:24, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org
<mailto:SeeWebsiteForEmail **erdani.org<SeeWebsiteForEmail erdani.org>

wrote: On 2/28/13 5:03 AM, deadalnix wrote: That will impair GDC and LDC quite a lot. Let's see what the respective project leaders say. Andrei I'll provide facts, but I'll reserve any opinion to myself. So, feel free to send me a list of questions you want me to answer. :o)
"Would an initiative of porting dmd to D create difficulties for gdc?" Andrei
Gnat's frontend is written in Ada; however, it does not depend on having to call anything from the gcc backend. We still do not know what portions of the frontend are being ported over to D. The way gdc is written, it takes the D frontend, removes all C++ parts that interface with dmd's backend - toElem; toIR; toObjFile; toSymbol; toCtype; toDt (this latter one I am in the middle of removing from gdc) - and implements them to instead build GCC trees, the code itself also being in C++. These are all methods defined in the D front end, and they rely on calling and interfacing with the gcc backend. Re-writing these in D is not an option, as they require access to GCC macros. See tree.h for the majority of that list: http://gcc.gnu.org/git/?p=gcc.git;a=blob_plain;f=gcc/tree.h;hb=refs/heads/master -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Feb 28 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-28 18:18, Iain Buclaw wrote:

 We still do not know what portions of the frontend are being ported over
 to D.  The way gdc is written, it takes the D Frontend, removes all C++
 parts that interface with dmd's backend - toElem; toIR; toObjFile;
 toSymbol; toCtype; toDt (this latter one I am in the middle of removing
 from gdc) and implements them to instead build GCC trees, the code
 itself also being in C++.

 These are all methods defined in D Front-End, and rely on calling and
 interfacing with the gcc backend.  Re-writing these in D is not an
 option, as they require access to GCC macros.
If you're removing these functions, does it matter which language they're written in? -- /Jacob Carlborg
Feb 28 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 1 March 2013 07:26, Jacob Carlborg <doob me.com> wrote:

 On 2013-02-28 18:18, Iain Buclaw wrote:

  We still do not know what portions of the frontend are being ported over
 to D.  The way gdc is written, it takes the D Frontend, removes all C++
 parts that interface with dmd's backend - toElem; toIR; toObjFile;
 toSymbol; toCtype; toDt (this latter one I am in the middle of removing
 from gdc) and implements them to instead build GCC trees, the code
 itself also being in C++.

 These are all methods defined in D Front-End, and rely on calling and
 interfacing with the gcc backend.  Re-writing these in D is not an
 option, as they require access to GCC macros.
If you're removing these functions, does it matter which language they're written in? -- /Jacob Carlborg
Not removed, re-written. For example, VectorExp::toElem(). For gdc: https://github.com/D-Programming-GDC/GDC/blob/master/gcc/d/d-elem.cc#L2375 For dmd: https://github.com/D-Programming-Language/dmd/blob/master/src/e2ir.c#L3820 Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 01 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-03-01 11:23, Iain Buclaw wrote:

 Not removed, re-written.  For example, VectorExp::toElem().

 For gdc:
 https://github.com/D-Programming-GDC/GDC/blob/master/gcc/d/d-elem.cc#L2375
 For dmd:
 https://github.com/D-Programming-Language/dmd/blob/master/src/e2ir.c#L3820
Aha, I see. -- /Jacob Carlborg
Mar 01 2013
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/28/2013 7:53 AM, Iain Buclaw wrote:
 So, feel free to send me a list of questions you want me to answer. :o)
Would it impair having it accepted as part of gcc?
Feb 28 2013
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 28 February 2013 22:50, Walter Bright <newshound2 digitalmars.com> wrote:

 On 2/28/2013 7:53 AM, Iain Buclaw wrote:

 So, feel free to send me a list of questions you want me to answer. :o)
Would it impair having it accepted as part of gcc?
Not if we follow by way of example, e.g. the Ada or Go model, where they have separately maintained code for their front end that may be used verbatim in multiple compilers, with the code outside the front end doing everything related to interfacing with GCC, and only what's related to interfacing with GCC. The only part where this may be problematic for D is that there are still parts of the front end that require patching for use with GCC. This is being worked on, but it would require co-operation from the gdc, ldc and dmd maintainers to align their copies of the front end. Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 01 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-28 16:53, Iain Buclaw wrote:

 So, feel free to send me a list of questions you want me to answer. :o)
Could the GDC front end remain in C++ and changes be folded in anyway? These changes do not need to be direct translation of the D code. -- /Jacob Carlborg
Feb 28 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 1 March 2013 07:28, Jacob Carlborg <doob me.com> wrote:

 On 2013-02-28 16:53, Iain Buclaw wrote:

  So, feel free to send me a list of questions you want me to answer. :o)

 Could the GDC front end remain in C++ and changes be folded in anyway?
 These changes do not need to be direct translation of the D code.

 --
 /Jacob Carlborg
The code that interfaces with gcc needs to be in either C or C++. There are C++ structs/classes defined in the D frontend that, while they include all the methods required for parsing/semantic analysis of D code, also include methods that are used to generate the codegen for the backend (toElem, toIR, toSymbol, etc). In gdc, these are gcc-interfacing methods that can't be converted to D. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 01 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-03-01 11:29, Iain Buclaw wrote:

 The code that interfaces with gcc needs to be in either C or C++.  There
 are C++ structs/classes defined in the D frontend that, while they
 include all the methods required for parsing/semantic analysis of D code,
 also include methods that are used to generate the codegen for the
 backend (toElem, toIR, toSymbol, etc).  In gdc, these are gcc
 interfacing methods that can't be converted to D.
Can you use the current toElem, toIR and toSymbol written in C++. Then port in changes from the version written in D as needed? -- /Jacob Carlborg
Mar 01 2013
next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 1 March 2013 10:43, Jacob Carlborg <doob me.com> wrote:

 On 2013-03-01 11:29, Iain Buclaw wrote:

  The code that interfaces with gcc needs to be in either C or C++.  There
 are C++ structs/classes defined in the D frontend that, while they
 include all the methods required for parsing/semantic analysis of D code,
 also include methods that are used to generate the codegen for the
 backend (toElem, toIR, toSymbol, etc).  In gdc, these are gcc
 interfacing methods that can't be converted to D.
Can you use the current toElem, toIR and toSymbol written in C++. Then port in changes from the version written in D as needed? -- /Jacob Carlborg
It's much more complex than that. Think about compatibility between calling D structs/classes from C++, and that dmd and gdc don't share the same representation of types in the back-end that are common to the front-end - elem, type, IRState, real_t, to name a few. Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 01 2013
prev sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 1 March 2013 10:43, Jacob Carlborg <doob me.com> wrote:

 On 2013-03-01 11:29, Iain Buclaw wrote:

  The code that interfaces with gcc needs to be in either C or C++.  There
 are C++ structs/classes defined in the D frontend that, while they
 include all the methods required for parsing/semantic analysis of D code,
 also include methods that are used to generate the codegen for the
 backend (toElem, toIR, toSymbol, etc).  In gdc, these are gcc
 interfacing methods that can't be converted to D.
Can you use the current toElem, toIR and toSymbol written in C++. Then port in changes from the version written in D as needed? -- /Jacob Carlborg
Also, what changes dmd makes to its back-end facing functions do not necessarily affect gdc. So there has never really been a direct conversion from one to the other; however, as they (should) do effectively the same code generation, one can draw comparisons between them. Regards, -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 01 2013
prev sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Thursday, 28 February 2013 at 15:24:07 UTC, Andrei 
Alexandrescu wrote:
 On 2/28/13 5:03 AM, deadalnix wrote:
 That will impair GDC and LDC quite a lot.
Let's see what the respective project leaders say.
Well, let me first emphasize that I agree that having the D reference implementation written in D is a desirable goal for a number of reasons, such as outlined by Andrei in his initial post. I am not sure whether using DMD as a basis is the ideal approach as far as the ultimate outcome is concerned, but it certainly has its merits considering the limited time budget. That being said, moving parts of the front-end source to D will in any case cause quite a bit of minor work all over the place for LDC (porting LDC-specific changes, adapting the build system, ...), and I would be glad if somebody new could take this as an opportunity to join LDC development, as the time that Kai and I (the current main contributors) can spend on LDC right now is unfortunately rather limited anyway. Apart from such minor effects, I only really see two possible issues to be aware of: First, requiring a D compiler to build LDC will make life harder for people preparing distribution packages, at least for packages in the actual upstream repositories where the packages usually have to be buildable from source (with dependencies also being met out of the distro's repositories). This is not at all an unsolvable issue, but the migration should be coordinated with the packaging crowd to ensure a smooth transition. In this regard, we should also make sure that the front-end (and thus GDC and LDC) can be bootstrapped off a Free/OSS D compiler, otherwise integration of GDC/LDC into Debian, Fedora, ... might become a problem. Not that this should be a huge issue with GDC and LDC being around, but I thought I would mention it. Second, rewriting all of *LDC's* code in D would be a huge task, as the use of C++ templates is pervasive through the LLVM C++ API (even if they are used pretty judiciously), and the LLVM C API is a lot less powerful in some aspects. Thus, care should be taken that the D frontend can actually be used with some of the virtual method implementations still in C++ (e.g. 
toElem/toElemDtor and similar LDC-specific ones). Your (Andrei's) initial post sounded like this would be the case. But if I interpreted some of the posts correctly, Daniel Murphy has an automatic translator in the works for porting over the whole compiler (except for the backend) at once, which might be a problem for LDC. David
Mar 04 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"David Nadlinger" <see klickverbot.at> wrote in message 
news:tbfgbhevqojgnawnxgns forum.dlang.org...
 Second, rewriting all of *LDC's* code in D would be a huge task, as the 
 use of C++ templates is pervasive through the LLVM C++ API (even if they 
 are used pretty judiciously), and the LLVM C API is a lot less powerful in 
 some aspects. Thus, care should be taken that the D frontend can actually 
 be used with some of the virtual method implementations still in C++ (e.g. 
 toElem/toElemDtor and similar LDC-specific ones).

 Your (Andrei's) initial post sounded like this would be the case. But if I 
 interpreted some of the posts correctly, Daniel Murphy has an automatic 
 translator in the works for porting over the whole compiler (except for 
 the backend) at once, which might be a problem for LDC.

 David
I think we can solve this, but it's a lot of work. 1. Refactor the glue layer to use a proper visitor pattern 2. Implement extern(C++) classes (where https://github.com/D-Programming-Language/dmd/pull/644 was supposed to be headed) This should allow us to have the dmd glue layer written in D, with the ldc/gdc glue layers written in c++. It would require all three glue layers to be refactored together, but I don't see a way to avoid this. Hopefully we can get rid of most of the gdc/ldc specific frontend patches along the way. What do you and Iain think about this approach?
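A rough sketch of what step 1 could look like, using Python as neutral pseudocode (class and method names are illustrative, not dmd's actual API): the toElem logic moves off the AST nodes into a per-backend visitor, so the AST classes can be ported to D while each compiler keeps its own codegen visitor in C++.

```python
class Expression:
    def accept(self, v):
        # Double dispatch: each node hands itself to the visitor.
        return v.visit(self)

class IntegerExp(Expression):
    def __init__(self, value):
        self.value = value

class OrOrExp(Expression):
    def __init__(self, e1, e2):
        self.e1, self.e2 = e1, e2

class ToElemVisitor:
    """Backend-specific codegen lives here, not on the AST nodes,
    so dmd, gdc and ldc can each supply their own visitor."""
    def visit(self, e):
        # Dispatch on the concrete node type.
        return getattr(self, 'visit_' + type(e).__name__)(e)

    def visit_IntegerExp(self, e):
        return ('const', e.value)

    def visit_OrOrExp(self, e):
        return ('oror', e.e1.accept(self), e.e2.accept(self))

tree = OrOrExp(IntegerExp(1), IntegerExp(0))
print(tree.accept(ToElemVisitor()))  # ('oror', ('const', 1), ('const', 0))
```

The point of the refactor is that the AST no longer knows anything about any particular backend's intermediate representation; only the visitor does.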
Mar 04 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On Mar 5, 2013 6:56 AM, "Daniel Murphy" <yebblies nospamgmail.com> wrote:
 "David Nadlinger" <see klickverbot.at> wrote in message
 news:tbfgbhevqojgnawnxgns forum.dlang.org...
 Second, rewriting all of *LDC's* code in D would be a huge task, as the
 use of C++ templates is pervasive through the LLVM C++ API (even if they
 are used pretty judiciously), and the LLVM C API is a lot less powerful
in
 some aspects. Thus, care should be taken that the D frontend can
actually
 be used with some of the virtual method implementations still in C++
(e.g.
 toElem/toElemDtor and similar LDC-specific ones).

 Your (Andrei's) initial post sounded like this would be the case. But
if I
 interpreted some of the posts correctly, Daniel Murphy has an automatic
 translator in the works for porting over the whole compiler (except for
 the backend) at once, which might be a problem for LDC.

 David
I think we can solve this, but it's a lot of work. 1. Refactor the glue layer to use a proper visitor pattern 2. Implement extern(C++) classes (where https://github.com/D-Programming-Language/dmd/pull/644 was supposed to be headed) This should allow us to have the dmd glue layer written in D, with the ldc/gdc glue layers written in c++. It would require all three glue layers to be refactored together, but I don't see a way to avoid this. Hopefully we can get rid of most of the gdc/ldc specific frontend patches along the way. What do you and Iain think about this approach?
I think C++ classes would be harder to implement than what I see initially in that link. Mangling is another problem as well; I've seen differing C++ compilers have subtle differences. I'll try to find one discrepancy between D and g++. Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 05 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Iain Buclaw" <ibuclaw ubuntu.com> wrote in message 
news:mailman.235.1362482490.14496.digitalmars-d puremagic.com...
 I think C++ classes would be harder to implement than what I see
 initially in that link.
All that pull request implements is some extended C++ mangling, for windows, and not very well.
 Mangling is another problem as well.  I've seen differing C++ compilers
 have subtle differences.  I'll try to find one discrepancy between D and
 g++.
Mangling is a large part of the work, but is really not that hard. I do think we need to move the Windows C++ mangling into the frontend; most of the g++ mangling code is already there. The list is pretty short: - global functions - global variables - static member functions - virtual member functions - normal member functions - static member variables - normal member variables If we have the mangling and ABI working for all of those, I think that should be enough to implement the glue layer in C++ with all the AST classes written in D. No need for the messy stuff like stack allocation semantics and constructors/destructors.
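To illustrate why this is mechanical but fiddly, here is a toy Python sketch of Itanium-style (g++) mangling for non-template member functions only; the real scheme also covers substitutions, cv-qualifiers, templates, and the entirely different MSVC format, which is the part that would move into the frontend:

```python
def mangle_member(scopes, name, params):
    """Very simplified Itanium C++ ABI mangling for a plain member
    function: _ZN <len+part>... E <param codes>. Only 'int' and an
    empty parameter list (encoded as 'v') are handled here."""
    codes = {'int': 'i'}
    m = '_ZN'
    # Nested name: each scope component and the function name,
    # each prefixed with its length.
    for part in scopes + [name]:
        m += str(len(part)) + part
    m += 'E'
    # Parameter encoding; an empty list mangles as 'v' (void).
    m += ''.join(codes[p] for p in params) if params else 'v'
    return m

# Foo::bar() and Foo::bar(int) under the Itanium ABI:
print(mangle_member(['Foo'], 'bar', []))       # _ZN3Foo3barEv
print(mangle_member(['Foo'], 'bar', ['int']))  # _ZN3Foo3barEi
```

Each of the cases in the list above (static vs. virtual vs. normal, functions vs. variables) is just a different production in this grammar, which is why the work is long but tractable.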
Mar 05 2013
prev sibling next sibling parent reply Martin Nowak <code dawg.eu> writes:
On 02/28/2013 01:37 AM, Andrei Alexandrescu wrote:
 Please chime in with ideas on how to make this happen.
Splitting off druntime's GC to also use it for dmd could be a great project.
Feb 28 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-02-28 11:27, Martin Nowak wrote:

 Splitting off druntime's GC to also use it for dmd could be a great
 project.
How do you mean? If DMD is written in D it would be using druntime, including its GC? -- /Jacob Carlborg
Feb 28 2013
prev sibling next sibling parent reply "Maxim Fomin" <maxim maxim-fomin.ru> writes:
On Thursday, 28 February 2013 at 00:37:50 UTC, Andrei
Alexandrescu wrote:
 Hello,


 Walter and I have had a long conversation about the next 
 radical thing to do to improve D's standing. Like others in 
 this community, we believe it's a good time to consider 
 bootstrapping the compiler. Having the D compiler written in D 
 has quite a few advantages, among which taking advantages of 
 D's features and having a large codebase that would be its own 
 test harness.
dmd contains hundreds of files; a "little" file can contain 5K lines of good old buggy C++ code, and a big "file" can contain up to 10K lines. I didn't collect statistics. How do you plan to convert it?
 By this we'd like to initiate a dialog about how this large 
 project can be initiated and driven through completion. Our 
 initial basic ideas are:

 1. Implement the dtoh standalone program that takes a D module 
 and generates its corresponding C++ header.
With some kind of magical tool called 'dtoh'? Ok, it can translate declarations. But what (maybe who) would rewrite code?
 2. Use dtoh to initiate and conduct an incremental port of the 
 compiler. At given points throughout the code D code will 
 coexist and link with C++ code.
The fact that dmd outputs the old object format on Win32 would come in handy, wouldn't it? And how can such code coexist? From a straightforward POV, it is clear how a D function can call a C function (forgetting about the 64-bit C struct ABI problem), but how, for example, can D code use a C++ class and vice versa? And what about the runtime? Two runtimes, C++ and D? And how would the GC treat C++ stuff?
 3. At a point in the future the last C++ module will be 
 replaced with a D module. Going forward there will be no more 
 need for a C++ compiler to build the compiler (except as a 
 bootstrapping test).

 It is essential that we get support from the larger community 
 for this. This is a large project that should enjoy strong 
 leadership apart from Walter himself (as he is busy with 
 dynamic library support which is strategic) and robust 
 participation from many of us.
So, you both are asking for community help? It is nice to hear, but I feel the community has faced some kind of discrimination from you in the past, except in trivial cases like fixing bugs and asking for something that was badly needed. The single example of you both agreeing that you were wrong (after long insisting that you are right because you are right) is the bugzilla issue on class inheritance and preconditions - whether the base class invariant should be respected or not.

So, I see this idea (and I can be rude and biased here) as "we haven't treated you seriously in the past, please rewrite 100K lines from C++ to D for us, we are too high to do the dirty job ourselves".
 Please chime in with ideas on how to make this happen.


 Thanks,

 Andrei
P.S. The last passage is the sum of many small disappointments with how the D project is governed.
Feb 28 2013
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-28 11:35, Maxim Fomin wrote:

 With some kind of magical tool called 'dtoh'? Ok, it can
 translate declarations. But what (maybe who) would rewrite code?
One would translate a single file to D. Then run "dtoh" over that file to get a C/C++ interface to the D file. You can then link the D object file with the rest of the C++ code. -- /Jacob Carlborg
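A hypothetical sketch of that workflow (module name, function, and logic invented for illustration): the translated module keeps its API extern(C++), so a dtoh-style tool could emit a matching header for the still-untranslated C++ code.

```d
// Hypothetical incremental-port step: one file translated to D, with the
// C++-visible API marked extern(C++) so a header can be generated for it.
module lexer;  // invented module name

extern (C++) int countTokens(const(char)* src, size_t len)
{
    // Trivial stand-in for real lexer logic: count whitespace-separated words.
    int n = 0;
    bool inTok = false;
    foreach (i; 0 .. len)
    {
        const c = src[i];
        const ws = c == ' ' || c == '\t' || c == '\n';
        if (ws)
            inTok = false;
        else if (!inTok)
        {
            ++n;
            inTok = true;
        }
    }
    return n;
}

/* A dtoh-style tool would then emit roughly this C++ header, which the
 * remaining C++ code includes to link against the D object file:
 *
 *     int countTokens(const char *src, size_t len);
 */
```

The D object file and the C++ object files are then linked into one binary, exactly as Jacob describes.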
Feb 28 2013
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/28/13 5:35 AM, Maxim Fomin wrote:
 So, you both are asking community help? It is nice to hear, but I
 consider that community was in some kind of discrimination
 against you in the past except in trivial cases like fixing bugs
 and asking something which was badly needed. The very single
 example of when you both agreed that you are wrong (after long
 insisting that you are right because you are right) is bugzilla
 issue on class inheritance and preconditions - whether base class
 invariant should be respected or not.

 So, I see this idea (and I can be rude and biased here) as "we
 haven't treated you seriously in the past, please rewtite 100K
 from C++ to D for us, we are to high to do the dirty job
 ourselves".
Now that's some grudge. What happened here? Were you wronged somehow in the past? Thanks, Andrei
Feb 28 2013
prev sibling next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
I support the intention and agree that it needs to be done 
part-by-part with no re-factoring allowed to minimize 
regressions. Probably could have even added myself to volunteers, 
but not sure, it looks like a very time-intensive project.

But issues with LDC and GDC need to be settled first. If D 
front-end in D considerably harms any of those, it is a complete 
no-no, even if porting will be perfect. Just not worth the loss.
Feb 28 2013
parent reply FG <home fgda.pl> writes:
On 2013-02-28 11:58, Dicebot wrote:
 But issues with LDC and GDC need to be settled first. If D front-end in D
 considerably harms any of those, it is a complete no-no, even if porting will
be
 perfect. Just not worth the loss.
Indeed, but even if LDC and GDC don't stop this from happening, I'm more worried (as someone willing to write more of his programs in D instead of picking C++) about stretching resources too thin on this one project, while there are tons of more important things to do first (from my POV). Let's see:

1) shared libraries (loading and being loaded),
2) GC, const refs, manual MM, containers managing their memory,
3) stop hiding AA's implementation,
4) improve libraries: bigint, xml, you name it,
...
n) rewrite the compiler's frontend.

I'm sure you can find a lot more to fit into the [5..n]. Even the infamous properties could rank higher than this migration, because, frankly, I don't care what language the compiler is in, as long as I don't have to install a JVM to use it. :)
Feb 28 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-28 16:02, FG wrote:

 frankly, I don't care what language the compiler is in, as long
 as I don't have to install a JVM to use it. :)
Then .Net it is :) /irony -- /Jacob Carlborg
Feb 28 2013
parent reply FG <home fgda.pl> writes:
On 2013-02-28 16:07, Jacob Carlborg wrote:
 On 2013-02-28 16:02, FG wrote:

 frankly, I don't care what language the compiler is in, as long
 as I don't have to install a JVM to use it. :)
Then .Net it is :) /irony
I was wondering if I should have also mentioned .Net. Now I know the answer. :)
Feb 28 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-02-28 16:25, FG wrote:

 I was wondering if I should have also mentioned .Net.
 Now I know the answer. :)
I'm sure we can find some other environment you would need to install to be able to run D :) -- /Jacob Carlborg
Feb 28 2013
prev sibling next sibling parent reply Arlen <arlen.ng gmx.com> writes:
On Wed, Feb 27, 2013 at 6:37 PM, Andrei Alexandrescu <
SeeWebsiteForEmail erdani.org> wrote:

 Hello,


 Walter and I have had a long conversation about the next radical thing to
 do to improve D's standing. Like others in this community, we believe it's
 a good time to consider bootstrapping the compiler. Having the D compiler
 written in D has quite a few advantages, among which taking advantages of
 D's features and having a large codebase that would be its own test harness.

 By this we'd like to initiate a dialog about how this large project can be
 initiated and driven through completion. Our initial basic ideas are:

 1. Implement the dtoh standalone program that takes a D module and
 generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the compiler.
 At given points throughout the code D code will coexist and link with C++
 code.

 3. At a point in the future the last C++ module will be replaced with a D
 module. Going forward there will be no more need for a C++ compiler to
 build the compiler (except as a bootstrapping test).

 It is essential that we get support from the larger community for this.
 This is a large project that should enjoy strong leadership apart from
 Walter himself (as he is busy with dynamic library support which is
 strategic) and robust participation from many of us.

 Please chime in with ideas on how to make this happen.


 Thanks,

 Andrei
Having ported Boost.units to D, I can attest to this being a lot of work. I did try translating first and then refactoring the code, but that did not go well, mainly because of all the tricks and hacks employed when doing template meta-programming in C++ that did not translate well at all to D. With my first attempt I pretty much ended up with C++ code that was written in D, and that's not what I wanted. So I had to start over, refactoring and writing D code in D as I went. The problem with refactoring is that once you refactor a piece, chances are that you will need to refactor everything that depends on the code that was refactored, and that starts a domino effect.

Of course, things are different with DMD, so translating first and then refactoring is probably the right way to do it. But I don't see how we could use D's nice features without refactoring. So, I presume this is going to be done in two phases:

Phase 1: direct translation to make sure everything works.
Phase 2: refactoring to use D's nice features.

And your three steps would be describing Phase 1.

Arlen
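One concrete instance of the mismatch described above (a generic sketch, not taken from Boost.units): recursive template metaprogramming in C++ usually collapses into a plain compile-time-evaluated function in D, which is why a line-by-line translation produces "C++ written in D".

```d
// C++ would compute a factorial at compile time with template recursion:
//     template<int N> struct Fact { enum { value = N * Fact<N - 1>::value }; };
//     template<>      struct Fact<0> { enum { value = 1 }; };
// The idiomatic D rewrite is an ordinary function evaluated via CTFE:
int fact(int n)
{
    return n <= 1 ? 1 : n * fact(n - 1);
}

enum f5 = fact(5);      // an enum initializer forces compile-time evaluation
static assert(f5 == 120);
```

The same function also works at run time, so nothing has to be duplicated between the compile-time and run-time worlds.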
Feb 28 2013
next sibling parent reply "js.mdnq" <js_adddot+mdng gmail.com> writes:
I believe a complete rewrite from the ground up, using a fixed stable dmd, is needed. The reason is twofold: first, many things have been learned from the evolution of the D language over time, and much of the trouble with D has been stabilizing the D implementation and spec. Second, trying to port the C++ code to D to make a D compiler will only multiply the bugs in DMD (i.e., it will introduce new bugs from the conversion and retain the old ones).

Instead, I believe proper project management is needed, along with a SOLID language specification and a clear delineation of goals. If the language spec itself is flawed, then the same things will occur as with DMD.

To cut the dependence on C/C++, the D compiler would need to be written in the language subset that is the intersection of DMD and the new language spec. This should not be hard to do, but it must be strictly maintained; otherwise one will always require dmd to compile the compiler.

Hence, a solid language spec for the new D compiler is needed. 
The language spec must overlap with the old spec and the D 
compiler must only be written in this overlap. (It should be 
obvious but this allows the D compiler to be compiled in DMD or 
itself, after the bootstrap one can gradually evolve the subset 
to include the newer features a few versions behind since the old 
dmd is not needed)

Whether such an undertaking succeeds is all down to project management. I would say we need a solid language spec and the subset between it and the current spec (frozen at some point). I imagine they would be almost identical, so actually little real work would be needed.

So, who's up for writing the D 3.0 spec?
Feb 28 2013
parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Thursday, 28 February 2013 at 21:11:20 UTC, js.mdnq wrote:
 I believe a complete rewrite from the ground up using a fixed 
 stable dmd is needed. The reason is two fold: Many things have 
 been learned about the evolution of the D language over time. 
 Much of the trouble of D has been stabling the D implementation 
 and spec. Second, Trying to port the C++ code to D to make a D 
 compiler will only multiply the bugs in DMD. (i.e., it will 
 introduce new bugs from the conversion and retain the old bugs)

 Instead, I believe proper project management is needed along 
 with a SOLID language specification and clear delineation of 
 goals. If the language spec itself is flawed then the same 
 things will occur as is with DMD.

 To tie the dependence on C/C++ the D compiler would need to be 
 written in the language subset of the intersection between DMD 
 and the new language spec. This should not be hard to do but 
 must be strictly maintained. Else one will always require dmd 
 to compile the compiler.

 Hence, a solid language spec for the new D compiler is needed. 
 The language spec must overlap with the old spec and the D 
 compiler must only be written in this overlap. (It should be 
 obvious but this allows the D compiler to be compiled in DMD or 
 itself, after the bootstrap one can gradually evolve the subset 
 to include the newer features a few versions behind since the 
 old dmd is not needed)

 The problem with such an undertaking behind successful is all 
 in the project management. I would say we need a solid language 
 spec and the subset between it and the current spec(frozen at 
 some point). I imagine they would be almost identical so 
 actually little real work would be needed.

 So, who's up for writing the D 3.0 spec?
I believe this is the better strategy. At least we have a reference compiler, so it's easy to know at each point in time whether the new compiler is at least as good as the old one.

However, it's such a long road that all these efforts would mean development of the current compilers nearly grinds to a halt, which is problematic given that important features are still missing. So the question now is: exactly which features would a rewrite in D make easier to add?
Mar 01 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-28 20:28, Arlen wrote:

 Having ported Boost.units to D, I can attest to this being a lot of
 work.  I did try translating first and then refactoring the code, but
 that did not go well, mainly because of all the tricks and hacks
 employed when doing template meta-programming in C++ that did not
 translate well at all to D.  With my first attempt I pretty much ended
 up with C++ code that was written in D, and that's not what I wanted.
 So I had to start over, refactoring and writing D code in D as I went.
 The problem with refactoring is that once you refactor a piece, chances
 are that you will need to refactor everything that depends on the code
 that was refactored, and that starts a domino effect.
That sounds more like one needs to figure out the intent of the code and not just look at the exact syntax. An easy example: C++ supports multiple inheritance, D does not. Trying to emulate that will most likely cause a lot of problems, but the use case in C++ could just be interfaces. -- /Jacob Carlborg
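The multiple-inheritance point above can be sketched as follows (class names invented): a C++ class inheriting several abstract bases usually maps onto D interfaces rather than emulated multiple inheritance.

```d
// C++: class Lexer : public ErrorSink, public FileReader { ... };
// D expresses the same intent with interfaces:
interface ErrorSink  { void error(string msg); }
interface FileReader { string read(string path); }

class Lexer : ErrorSink, FileReader
{
    string lastError;
    void error(string msg) { lastError = msg; }   // record the last error
    string read(string path) { return ""; }       // stub body
}
```

An object of the class can then be passed around through either interface, covering the usual abstract-base use case.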
Feb 28 2013
prev sibling next sibling parent reply Thomas Koch <thomas koch.ro> writes:
 Walter and I have had a long conversation about the next radical thing
 to do to improve D's standing. Like others in this community, we believe
 it's a good time to consider bootstrapping the compiler. Having the D
 compiler written in D has quite a few advantages, among which taking
 advantages of D's features and having a large codebase that would be its
 own test harness.
Two points from the viewpoint of the Debian distribution:

Debian is ported to many different platforms, and on average one new platform port starts every year. A huge pain point for porters is circular (or self) dependencies; a lot of effort goes into breaking such circles. So at the moment the D language is great in that it does not introduce a new circular dependency. It would be a pity to lose this.

The second important thing for Debian (and Fedora and others) is licensing. It's a pity that DMD isn't free software, and I believe DMD not being in distros is one reason for the low popularity of D. It's hard to learn D with gdc while all tutorials are based on DMD.

So instead of a rewrite of D, it would rather be important (from my humble point of view) to replace the non-free parts of DMD.

Thank you, Thomas Koch
Mar 01 2013
next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 1 March 2013 08:50, Thomas Koch <thomas koch.ro> wrote:

 Walter and I have had a long conversation about the next radical thing
 to do to improve D's standing. Like others in this community, we believe
 it's a good time to consider bootstrapping the compiler. Having the D
 compiler written in D has quite a few advantages, among which taking
 advantages of D's features and having a large codebase that would be its
 own test harness.
Two points from the viewpoint of the Debian distribution: Debian is ported to many different platforms and in average one new platform port started every year. A huge pain point for porters are circular (or self) dependencies. A lot of effort goes into breaking such circles.
As I understand it, the biggest pain is getting an initial system compiler on the ported target in the first place. Once that package is in place, it gets easier to manage the circular dependency. Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 01 2013
prev sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, March 01, 2013 09:50:13 Thomas Koch wrote:
 Walter and I have had a long conversation about the next radical thing
 to do to improve D's standing. Like others in this community, we believe
 it's a good time to consider bootstrapping the compiler. Having the D
 compiler written in D has quite a few advantages, among which taking
 advantages of D's features and having a large codebase that would be its
 own test harness.
Two points from the viewpoint of the Debian distribution: Debian is ported to many different platforms and in average one new platform port started every year. A huge pain point for porters are circular (or self) dependencies. A lot of effort goes into breaking such circles. So in the moment the D language is great in that it does not introduce a new circular dependency. It would be a pity to lose this. The second important thing for Debian (and Fedora and others) is licensing. It's a pity that DMD isn't free software and I believe DMD not being in distros is one reason for the low popularity of D. It's hard to learn D with gdc while all tutorials are based on DMD. So instead of a rewrite of D, it would rather be important (from my humble point of view) to replace non-free parts of DMD.
I don't really care about the license, but I can definitely say that part of me finds the idea of having a compiler compile itself to be a bad idea, much as compiler folks love to do that.

Recently, I had some older Haskell code that I needed to compile, but it followed the previous Haskell standard, and I couldn't get the current compiler to compile it even in compatibility mode. So I tried to compile an older version of the compiler from before the new standard, and it had exactly the same problems that my code did, because it was written in Haskell using the older standard. So I had to give up on being able to compile my code, because I couldn't get my hands on an old enough version of the compiler. If they'd just written it in C/C++, then I wouldn't have had that problem.

I know that it's generally touted as a great idea for a language to compile itself, and I'm sure that it would be great to be able to use D's features in the compiler, but the circular dependency that it causes is a definite negative IMHO.

- Jonathan M Davis
Mar 01 2013
parent reply "js.mdnq" <js_adddot+mdng gmail.com> writes:
On Friday, 1 March 2013 at 10:36:04 UTC, Jonathan M Davis wrote:
 On Friday, March 01, 2013 09:50:13 Thomas Koch wrote:
 Walter and I have had a long conversation about the next 
 radical thing
 to do to improve D's standing. Like others in this 
 community, we believe
 it's a good time to consider bootstrapping the compiler. 
 Having the D
 compiler written in D has quite a few advantages, among 
 which taking
 advantages of D's features and having a large codebase that 
 would be its
 own test harness.
Two points from the viewpoint of the Debian distribution: Debian is ported to many different platforms and in average one new platform port started every year. A huge pain point for porters are circular (or self) dependencies. A lot of effort goes into breaking such circles. So in the moment the D language is great in that it does not introduce a new circular dependency. It would be a pity to lose this. The second important thing for Debian (and Fedora and others) is licensing. It's a pity that DMD isn't free software and I believe DMD not being in distros is one reason for the low popularity of D. It's hard to learn D with gdc while all tutorials are based on DMD. So instead of a rewrite of D, it would rather be important (from my humble point of view) to replace non-free parts of DMD.
I don't really care about the license, but I can definitely say that part of me finds the idea of having a compiler compiling itself to be a bad idea, much is compiler folks love to do that. Recently, I had some older haskell code that I needed to compile, but it followed the previous haskell standard, and I couldn't get the current compiler to compile it even in compatability mode. So, I tried to compile an older version of the compiler from before the new standard, and it had exactly the same problems that my code did, because it was written in haskell using the older standard. So, I had to give up on being able to compile my code, because I couldn't get my hands on an old enough version of the compiler. If they'd just written it in C/C++, then I wouldn't have had that problem. I know that it's generally touted as a great idea for a language to compile itself, and I'm sure that it would be great to be able to use D's features in the compiler, but the circular dependency that that causes is a definite negative IMHO. - Jonathan M Davis
There is no problem with circular dependencies as long as the language spec has a fixed subset that the compiler is written in. The reason is that any future version can then compile the compiler source, because all future versions support the subset.

This is why it is so important to pin down the fixed language subset: it will define the core language features and can't be changed without causing regressive dependencies.

Any future version of the D compiler will compile its own compiler source as long as it properly implements the D language subset. This subset also has to be a subset of the current dmd language implementation, to bootstrap from.
Mar 01 2013
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 01, 2013 at 05:44:52PM +0100, js.mdnq wrote:
 On Friday, 1 March 2013 at 10:36:04 UTC, Jonathan M Davis wrote:
[...]
I don't really care about the license, but I can definitely say that
part of me finds the idea of having a compiler compiling itself to be
a bad idea, much is compiler folks love to do that. Recently, I had
some older haskell code that I needed to compile, but it followed the
previous haskell standard, and I couldn't get the current compiler to
compile it even in compatability mode.  So, I tried to compile an
older version of the compiler from before the new standard, and it
had exactly the same problems that my code did, because it was
written in haskell using the older standard. So, I had to give up on
being able to compile my code, because I couldn't get my hands on an
old enough version of the compiler. If they'd just written it in
C/C++, then I wouldn't have had that problem.

I know that it's generally touted as a great idea for a language to
compile itself, and I'm sure that it would be great to be able to use
D's features in the compiler, but the circular dependency that that
causes is a definite negative IMHO.

- Jonathan M Davis
There is no problem with circular dependencies as long as the language spec has a fixed subset that the compiler is written in. The reason is that any future version then can compile the compiler source because the future versions all support the subset. This is why it is so important to get the fixed language subset down because it will the core language features and can't be changed without causing regressive dependencies. Any evolution of the D compiler will compile it's own compiler source as long as it properly implements the D language subset. This subset also has to be a subset of the current dmd language implementation to bootstrap from.
+1. Another reason I keep saying that we need to write the D compiler in a fixed subset of D. There are many advantages to this.

For one, it will solve the GDC situation -- we can just ship the last C++ version of DMD with GDC (outdated, but it will correctly compile the newest D compiler), and then it can be used to compile the D version of the compiler.

Then there's the above point: if the last C++ version of DMD is capable of compiling the latest D version of DMD, then we won't run into the situation where you can't compile anything unless you check out successive versions of DMD just to be able to compile the next one.

Restricting DMD to a fixed subset of D also means that it will not be vulnerable to subtle bugs caused by later breaking changes to the language. Let's just face it: the current version of D is far from free of breaking changes, even if we're trying our best to stabilize it. Using a fixed subset of D to write the compiler ensures that all versions of the compiler are compilable by all other versions of it, which comes in very useful when you end up in Jonathan's situation above.

T

--
One Word to write them all,
One Access to find them,
One Excel to count them all,
And thus to Windows bind them.
-- Mike Champion
Mar 01 2013
prev sibling parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Friday, 1 March 2013 at 16:44:53 UTC, js.mdnq wrote:
 There is no problem with circular dependencies as long as the 
 language spec has a fixed subset that the compiler is written 
 in. The reason is that any future version then can compile the 
 compiler source because the future versions all support the 
 subset.

 This is why it is so important to get the fixed language subset 
 down because it will the core language features and can't be 
 changed without causing regressive dependencies.

 Any evolution of the D compiler will compile it's own compiler 
 source as long as it properly implements the D language subset. 
 This subset also has to be a subset of the current dmd language 
 implementation to bootstrap from.
Exactly. This fixed subset would be very limited in comparison to the full language (I can imagine something looking a bit like a smaller Go, there would probably be no templates at all, no CTFE, maybe even no exceptions, for instance), but would be orthogonal, completely stable in terms of spec, and known to work. It could be defined for other real world usages as well, like embedding in small appliances.
Mar 01 2013
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Saturday, 2 March 2013 at 06:50:32 UTC, SomeDude wrote:
 Exactly. This fixed subset would be very limited in comparison 
 to the full language (I can imagine something looking a bit 
 like a smaller Go, there would probably be no templates at all, 
 no CTFE, maybe even no exceptions, for instance), but would be 
 orthogonal, completely stable in terms of spec, and known to 
 work. It could be defined for other real world usages as well, 
 like embedding in small appliances.
It would also make it easy to bootstrap the compiler on new platforms.
Mar 01 2013
parent reply "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 2 March 2013 at 07:16:04 UTC, SomeDude wrote:
 On Saturday, 2 March 2013 at 06:50:32 UTC, SomeDude wrote:
 Exactly. This fixed subset would be very limited in comparison 
 to the full language (I can imagine something looking a bit 
 like a smaller Go, there would probably be no templates at 
 all, no CTFE, maybe even no exceptions, for instance), but 
 would be orthogonal, completely stable in terms of spec, and 
 known to work. It could be defined for other real world usages 
 as well, like embedding in small appliances.
It would also make it easy to bootstrap the compiler on new platforms.
I don't see how this would help with porting to different platforms at all if you have a cross-compiler.

Yes, the DMD frontend currently isn't really built with cross-compilation in mind (e.g. using the host's floating point arithmetic for constant folding/CTFE), but once this has been changed, I don't see how the implementation language would make any difference in re-targeting at all. You simply use another host system (e.g. Windows/Linux x86) until the new backend/runtime is stable enough for the compiler to self-host.

David
Mar 02 2013
next sibling parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Saturday, 2 March 2013 at 14:47:55 UTC, David Nadlinger wrote:
 On Saturday, 2 March 2013 at 07:16:04 UTC, SomeDude wrote:
 On Saturday, 2 March 2013 at 06:50:32 UTC, SomeDude wrote:
 Exactly. This fixed subset would be very limited in 
 comparison to the full language (I can imagine something 
 looking a bit like a smaller Go, there would probably be no 
 templates at all, no CTFE, maybe even no exceptions, for 
 instance), but would be orthogonal, completely stable in 
 terms of spec, and known to work. It could be defined for 
 other real world usages as well, like embedding in small 
 appliances.
It would also make it easy to bootstrap the compiler on new platforms.
I don't see how this would help with proting to different platofrms at all if you have a cross-compiler. Yes, the DMD frontend currently isn't really built with cross-compilation in mind (e.g. using the host's floating point arithmetic for constant folding/CTFE), but once this has been changed, I don't see how the language used would make any difference in re-targetting at all. You simply use another host system (e.g. Windows/Linux x86) until the new backend/runtime is stable enough for the compiler to self-host. David
And what if you *don't* have a cross-compiler? You write a bootstrap compiler for the D subset in C and off you go (provided you have a reasonable C compiler on that platform).
Mar 02 2013
next sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On Mar 2, 2013 3:01 PM, "SomeDude" <lovelydear mailmetrash.com> wrote:
 On Saturday, 2 March 2013 at 14:47:55 UTC, David Nadlinger wrote:
 On Saturday, 2 March 2013 at 07:16:04 UTC, SomeDude wrote:
 On Saturday, 2 March 2013 at 06:50:32 UTC, SomeDude wrote:
 Exactly. This fixed subset would be very limited in comparison to the
full language (I can imagine something looking a bit like a smaller Go, there would probably be no templates at all, no CTFE, maybe even no exceptions, for instance), but would be orthogonal, completely stable in terms of spec, and known to work. It could be defined for other real world usages as well, like embedding in small appliances.
 It would also make it easy to bootstrap the compiler on new platforms.
I don't see how this would help with proting to different platofrms at
all if you have a cross-compiler.
 Yes, the DMD frontend currently isn't really built with
cross-compilation in mind (e.g. using the host's floating point arithmetic for constant folding/CTFE), but once this has been changed, I don't see how the language used would make any difference in re-targetting at all.
 You simply use another host system (e.g. Windows/Linux x86) until the
new backend/runtime is stable enough for the compiler to self-host.
 David
And what if you *don't* have a cross compiler ? You compile the D subset
(bootstrapper) in C and off you go (provided you have a reasonable C compiler on that platform). I don't see how using only a subset of the language would have an effect on cross-compiling or porting a compiler self-hosted in D. Your argument is lost on me, some dude... Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 02 2013
parent reply "js.mdnq" <js_adddot+mdng gmail.com> writes:
On Saturday, 2 March 2013 at 15:12:40 UTC, Iain Buclaw wrote:
 On Mar 2, 2013 3:01 PM, "SomeDude" <lovelydear mailmetrash.com> 
 wrote:
 On Saturday, 2 March 2013 at 14:47:55 UTC, David Nadlinger 
 wrote:
 On Saturday, 2 March 2013 at 07:16:04 UTC, SomeDude wrote:
 On Saturday, 2 March 2013 at 06:50:32 UTC, SomeDude wrote:
 Exactly. This fixed subset would be very limited in 
 comparison to the
full language (I can imagine something looking a bit like a smaller Go, there would probably be no templates at all, no CTFE, maybe even no exceptions, for instance), but would be orthogonal, completely stable in terms of spec, and known to work. It could be defined for other real world usages as well, like embedding in small appliances.
 It would also make it easy to bootstrap the compiler on new 
 platforms.
I don't see how this would help with proting to different platofrms at
all if you have a cross-compiler.
 Yes, the DMD frontend currently isn't really built with
cross-compilation in mind (e.g. using the host's floating point arithmetic for constant folding/CTFE), but once this has been changed, I don't see how the language used would make any difference in re-targetting at all.
 You simply use another host system (e.g. Windows/Linux x86) 
 until the
new backend/runtime is stable enough for the compiler to self-host.
 David
And what if you *don't* have a cross compiler ? You compile the D subset
(bootstrapper) in C and off you go (provided you have a reasonable C compiler on that platform). I don't see how using only a subset of the language would have an effect on cross compiling or porting of a compiler self hosted in D. Your argument is lost on me some dude... Regards
For the same reason that most embedded languages use C and not C++: obviously it is easier to implement a subset of something than the full set (at the very least, less work). Most embedded applications don't have the resources to deal with higher-level constructs (since these generally come at a real cost). For example, a GC is generally an issue in small embedded apps. The D core language spec would have to be GC-agnostic (in fact, I think the full spec should be). I actually prefer to use C++ in embedded apps, but with static classes; it just looks better than traditional C because of the logical separation it creates.

For the core language spec, things like templates, mixins, and other useful logical language constructs (these are more like macros than objects) should be included. One could propose the following: the core language spec is the specification of the core language elements of D that can run on any modern processing unit, compiled as-is and without "issue". That is, say you have a module with functions in it. These functions are just mathematical calculations and should have no issue running on any CPU. Hence, mark the module as core (core language spec, no GC, etc.) and porting is not an issue. You know, because it uses the core spec, that no advanced language features are being used that will break the bank. Every major revision, the core spec can be updated to include new logical language constructs that were added in the previous major version.

By being able to mark modules, one can gain some benefit:

module mymodule : core, gc, ...;

states that mymodule uses the core language spec, the garbage collector, and whatever else. You can think of the "core" language spec as being analogous to purity: it offers similar benefits because it restricts what can happen to a smaller universe.
Mar 02 2013
next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Saturday, 2 March 2013 at 17:26:52 UTC, js.mdnq wrote:
 For the same reason that most embedded languages use C and not 
 C++. Obviously it is easier to implement a subset of something 
 than the full set(at the very least, less work). Most embedded 
 applications don't have the resources to deal with higher level 
 constructs(since these generally come at a real cost). For 
 example, a GC is generally an issue on small embedded apps. The 
 D core language spec would have to be GC agnostic(in fact, I 
 think the full spec should be).
As an embedded guy I dream of the direct @safe opposite, somewhat similar to the @nogc proposal but even more restrictive, one that could work with a minimal run-time. I have tried to interest someone in experiments with D at work, but the lack of a compiler-verified subset that is embedded-ready was a big issue.
Mar 02 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/2/2013 10:48 AM, Dicebot wrote:
 As an embedded guy I dream of direct  safe opposite, somewhat similar to  nogc
 proposal but even more restrictive, one that could work with minimal run-time.
I
 have tried to interest someone in experiments with D at work but lack of
 compiler verified subset that is embedded-ready was a big issue.
You can do that now. Use the badly named and rather undocumented "betterC" switch and you can build D apps that don't need phobos at all - they can be linked with the C runtime library only. I use it to bring D up on a new target.
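For reference, a minimal sketch of the shape such a program takes (the flags and the extern(C) entry point are my assumptions about how -betterC is used, not taken from documentation; whether it links cleanly depends on the compiler version, as discussed downthread):

```d
// hello.d -- sketch: D linked against the C runtime only, no Phobos.
// Assumed build line: dmd -betterC -defaultlib= hello.d
import core.stdc.stdio;

extern(C) int main()   // C entry point, bypassing druntime's _Dmain startup
{
    printf("hello without phobos\n");
    return 0;
}
```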
Mar 02 2013
parent reply "Dicebot" <m.strashun gmail.com> writes:
On Saturday, 2 March 2013 at 19:39:15 UTC, Walter Bright wrote:
 On 3/2/2013 10:48 AM, Dicebot wrote:
 As an embedded guy I dream of direct  safe opposite, somewhat 
 similar to  nogc
 proposal but even more restrictive, one that could work with 
 minimal run-time. I
 have tried to interest someone in experiments with D at work 
 but lack of
 compiler verified subset that is embedded-ready was a big 
 issue.
You can do that now. Use the badly named and rather undocumented "betterC" switch and you can build D apps that don't need phobos at all - they can be linked with the C runtime library only. I use it to bring D up on a new target.
Wow, I never knew something like that exists! Is there a description of what it actually does, or is the source code the only possible reference? Depending on the actual limitations, it may be a game changer.
Mar 02 2013
next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Saturday, 2 March 2013 at 22:52:49 UTC, Dicebot wrote:
 ...
Missed a quotation mark there :)
Mar 02 2013
prev sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/2/13, Dicebot <m.strashun gmail.com> wrote:
 Wow, I have never known something like that exists! Is there
 description of what it actually does or source code is only
 possible reference. Depending on actual limitations, it may be a
 game changer.
I had a look; all it does is avoid generating ModuleInfo. I'm not sure how that helps, the GC still works with this switch.
Mar 02 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/2/2013 3:00 PM, Andrej Mitrovic wrote:
 On 3/2/13, Dicebot <m.strashun gmail.com> wrote:
 Wow, I have never known something like that exists! Is there
 description of what it actually does or source code is only
 possible reference. Depending on actual limitations, it may be a
 game changer.
I had alook, all it does is avoids generating moduleinfo. I'm not sure how that helps, the GC still works with this switch.
By not generating moduleinfo, which needs phobos to work, it can link with C only.
Mar 02 2013
parent reply "js.mdnq" <js_adddot+mdng gmail.com> writes:
On Saturday, 2 March 2013 at 23:09:57 UTC, Walter Bright wrote:
 On 3/2/2013 3:00 PM, Andrej Mitrovic wrote:
 On 3/2/13, Dicebot <m.strashun gmail.com> wrote:
 Wow, I have never known something like that exists! Is there
 description of what it actually does or source code is only
 possible reference. Depending on actual limitations, it may 
 be a
 game changer.
I had alook, all it does is avoids generating moduleinfo. I'm not sure how that helps, the GC still works with this switch.
By not generating moduleinfo, which needs phobos to work, it can link with C only.
But aren't there a few language constructs that specifically rely on the GC? I thought this was the whole issue of not being able to disable the GC completely in dmd? If so, the core language spec would have to be designed to be GC agnostic. I think the ability to mark a module with attributes would help; possibly extend UDAs to work on the module keyword also.
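For reference, a sketch of the sort of constructs that lower to GC/druntime calls (illustrative examples from memory; the exact set of runtime hooks varies between compiler versions):

```d
void examples()
{
    int[] a = [1, 2, 3];   // array literal: allocates via the GC
    a ~= 4;                // appending: calls a druntime append hook
    int[] b = a ~ a;       // concatenation: allocates a new array
    auto o = new Object;   // 'new': GC allocation for class instances
    // each of these requires druntime support
}
```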
Mar 02 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/2/2013 8:36 PM, js.mdnq wrote:
 On Saturday, 2 March 2013 at 23:09:57 UTC, Walter Bright wrote:
 On 3/2/2013 3:00 PM, Andrej Mitrovic wrote:
 On 3/2/13, Dicebot <m.strashun gmail.com> wrote:
 Wow, I have never known something like that exists! Is there
 description of what it actually does or source code is only
 possible reference. Depending on actual limitations, it may be a
 game changer.
I had alook, all it does is avoids generating moduleinfo. I'm not sure how that helps, the GC still works with this switch.
By not generating moduleinfo, which needs phobos to work, it can link with C only.
But isn't there a few language constructs the specifically rely on the GC?
Yes, and those will fail to link.
Mar 02 2013
next sibling parent reply "js.mdnq" <js_adddot+mdng gmail.com> writes:
On Sunday, 3 March 2013 at 05:48:30 UTC, Walter Bright wrote:
 On 3/2/2013 8:36 PM, js.mdnq wrote:
 On Saturday, 2 March 2013 at 23:09:57 UTC, Walter Bright wrote:
 On 3/2/2013 3:00 PM, Andrej Mitrovic wrote:
 On 3/2/13, Dicebot <m.strashun gmail.com> wrote:
 Wow, I have never known something like that exists! Is there
 description of what it actually does or source code is only
 possible reference. Depending on actual limitations, it may 
 be a
 game changer.
I had alook, all it does is avoids generating moduleinfo. I'm not sure how that helps, the GC still works with this switch.
By not generating moduleinfo, which needs phobos to work, it can link with C only.
But isn't there a few language constructs the specifically rely on the GC?
Yes, and those will fail to link.
I think originally we started with dmd being GC dependent, and that this causes problems for some platforms. To move dmd to D source and be more platform independent, one would need the core spec to be GC independent. IIRC arrays depend on the GC for cleanup, and therefore this would need to be changed to allow arrays in the core. Really what is needed is gc arrays and ngc arrays, as well as other essential features; e.g., gc arrays would not be part of the core spec while ngc arrays would.
Mar 02 2013
parent reply "jerro" <a a.com> writes:
 Really what is needed is gc arrays and ngc arrays as well as 
 other essential features. e.g., gc arrays would not be part of 
 the core spec while ngc arrays would.
You can already use slices without a GC, like this (using malloc/free from core.stdc.stdlib):

T[] allocate(T)(int n)
{
    return (cast(T*) malloc(T.sizeof * n))[0 .. n];
}

void deallocate(T)(ref T[] a)
{
    free(a.ptr);
    a = null;
}

Of course, you can not append to such slices or expand them without a GC. It would be useful to have a nogc flag which would result in an error if a feature that needs a GC was used. I think adding nogc would be better than defining a "core spec", because most D code does not need that feature. If we add a nogc flag, the people that don't need it can just ignore its existence and do not need to learn about it, but if we call the subset of D that doesn't use a GC a "core spec", people will feel that's something they need to learn, which will make the language seem more complex.
Mar 03 2013
parent "js.mdnq" <js_adddot+mdng gmail.com> writes:
On Sunday, 3 March 2013 at 12:05:00 UTC, jerro wrote:
 Really what is needed is gc arrays and ngc arrays as well as 
 other essential features. e.g., gc arrays would not be part of 
 the core spec while ngc arrays would.
You can already use slices without a gc, like this: T[] allocate(T)(int n) { return (cast(T*) malloc(T.sizeof * n))[0 .. n]; } void deallocate(T)(ref T[] a) { free(a.ptr) a = null; } Of course, you can not append to such slices or expand them without a GC. It would be useful to have a nogc flag which would result in an error if a feature that needs a GC was used. I think adding nogc would be better than defining a "core spec", because most D code does not need that feature. If we add a a nogc flag, the people that don't need it can just ignore its existence and do not need to learn about it, but if we call the subset of D that doesn't use a GC a "core spec", people will feel that's something they need to learn, which will make the language seem more complex.
A core spec is not just about GC features; it was also about migrating the compiler from C++ to D. The core spec can provide many useful benefits, but the GC shouldn't be one of them. What happens when you append to a slice using manual allocation? Does the compiler throw an error, or just crash and burn? A core language spec for a self-compiler is required, and because it must be GC agnostic to work across a multitude of platforms (mostly to work well with embedded apps or for performance reasons), one needs a way to signify this (hence marking modules as being GC-free and having ngc constructs). For example, by marking a module as core it can only use other core modules. Since a core module can't use the GC, all arrays are ngc, and GC operations on them would be in error. This also helps when migrating a module from non-core to core: once you get the module compiled, you know it is GC free (as well as other things).
Mar 03 2013
prev sibling next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Sunday, 3 March 2013 at 05:48:30 UTC, Walter Bright wrote:
 On 3/2/2013 8:36 PM, js.mdnq wrote:
 On Saturday, 2 March 2013 at 23:09:57 UTC, Walter Bright wrote:
 On 3/2/2013 3:00 PM, Andrej Mitrovic wrote:
 On 3/2/13, Dicebot <m.strashun gmail.com> wrote:
 Wow, I have never known something like that exists! Is there
 description of what it actually does or source code is only
 possible reference. Depending on actual limitations, it may 
 be a
 game changer.
I had alook, all it does is avoids generating moduleinfo. I'm not sure how that helps, the GC still works with this switch.
By not generating moduleinfo, which needs phobos to work, it can link with C only.
But isn't there a few language constructs the specifically rely on the GC?
Yes, and those will fail to link.
Thank you. I need to check out how it works in practice to evaluate applicability, but that definitely looks like a step in the needed direction.
Mar 03 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On Mar 3, 2013 10:56 AM, "Dicebot" <m.strashun gmail.com> wrote:
 On Sunday, 3 March 2013 at 05:48:30 UTC, Walter Bright wrote:
 On 3/2/2013 8:36 PM, js.mdnq wrote:
 On Saturday, 2 March 2013 at 23:09:57 UTC, Walter Bright wrote:
 On 3/2/2013 3:00 PM, Andrej Mitrovic wrote:
 On 3/2/13, Dicebot <m.strashun gmail.com> wrote:
 Wow, I have never known something like that exists! Is there
 description of what it actually does or source code is only
 possible reference. Depending on actual limitations, it may be a
 game changer.
I had alook, all it does is avoids generating moduleinfo. I'm not sure how that helps, the GC still works with this switch.
By not generating moduleinfo, which needs phobos to work, it can link
with C
 only.
But isn't there a few language constructs the specifically rely on the
GC?
 Yes, and those will fail to link.
Thank you. I need to check out how it works in practice to evaluate
applicability but that definitely looks like a step in needed direction. I intend to fix this problem in gdc at least by removing _tlsstart and _tlsend from the library. Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 03 2013
parent reply "Dicebot" <m.strashun gmail.com> writes:
On Sunday, 3 March 2013 at 13:56:57 UTC, Iain Buclaw wrote:
 I intend to fix this problem in gdc at least by removing 
 _tlsstart and
 _tlsend from the library.

 Regards
Ugh, which problem are you speaking about? Is the "betterC" flag not working properly in gdc, or what?
Mar 03 2013
parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On Mar 3, 2013 2:21 PM, "Dicebot" <m.strashun gmail.com> wrote:
 On Sunday, 3 March 2013 at 13:56:57 UTC, Iain Buclaw wrote:
 I intend to fix this problem in gdc at least by removing _tlsstart and
 _tlsend from the library.

 Regards
Ugh, which problem are you speaking about? "betterC" flag is not working
properly in gdc or what?
This "betterC" is not currently implemented at all. I would rather have, say, an -ffreestanding switch which would go a little further and not emit implicit library calls. What I'm referring to is a potential link issue when compiling C/C++ that interfaces with D. Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 03 2013
prev sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Sunday, 3 March 2013 at 05:48:30 UTC, Walter Bright wrote:
 ...
 Yes, and those will fail to link.
Ok, checked this out. While it is cool that you can get all the fat stuff out and get your hello world to the same binary size as a plain C one, the resulting language is actually less usable than C (array literals) and lacks my main reason to use D (templates and friends). Maybe it can be used in pair with a custom run-time re-written from scratch to create something usable; I'd argue that at least templates should not require run-time stuff at all.
Mar 03 2013
parent reply Martin Nowak <code dawg.eu> writes:
On 03/03/2013 08:34 PM, Dicebot wrote:
I'd argue that at least templates should not require
 run-time stuff at all.
Templates do not require any runtime support.
Mar 11 2013
parent "Dicebot" <m.strashun gmail.com> writes:
(copied from an e-mail)
Probably I have misunderstood linker error then. This simple 
snippet fails:

---
import core.stdc.stdio;

template Value(int val)
{
     enum Value = val;
}
// Plain enum Value = 42 works

extern(C)
int main()
{
     printf("%d", Value!42);
     return 0;
}
---
$ dmd -betterC -defaultlib= tmp.d
tmp.o: In function `_D3tmp7__arrayZ':
tmp.d:(.text._D3tmp7__arrayZ+0xd): undefined reference to 
`_D3tmp12__ModuleInfoZ'
tmp.d:(.text._D3tmp7__arrayZ+0x16): undefined reference to 
`_d_array_bounds'
tmp.o: In function `_D3tmp8__assertFiZv':
tmp.d:(.text._D3tmp8__assertFiZv+0xd): undefined reference to 
`_D3tmp12__ModuleInfoZ'
tmp.d:(.text._D3tmp8__assertFiZv+0x16): undefined reference to 
`_d_assertm'
tmp.o: In function `_D3tmp15__unittest_failFiZv':
tmp.d:(.text._D3tmp15__unittest_failFiZv+0xd): undefined 
reference to `_D3tmp12__ModuleInfoZ'
tmp.d:(.text._D3tmp15__unittest_failFiZv+0x16): undefined 
reference to `_d_unittestm'
collect2: error: ld returned 1 exit status
---

What is the issue then?
Mar 12 2013
prev sibling parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Saturday, 2 March 2013 at 18:48:37 UTC, Dicebot wrote:
 On Saturday, 2 March 2013 at 17:26:52 UTC, js.mdnq wrote:
 For the same reason that most embedded languages use C and not 
 C++. Obviously it is easier to implement a subset of something 
 than the full set(at the very least, less work). Most embedded 
 applications don't have the resources to deal with higher 
 level constructs(since these generally come at a real cost). 
 For example, a GC is generally an issue on small embedded 
 apps. The D core language spec would have to be GC agnostic(in 
 fact, I think the full spec should be).
As an embedded guy I dream of direct safe opposite, somewhat similar to nogc proposal but even more restrictive, one that could work with minimal run-time. I have tried to interest someone in experiments with D at work but lack of compiler verified subset that is embedded-ready was a big issue.
I believe a subset of D could prove interesting to C programmers the same way the full D language looks interesting to C++ programmers. With the added benefit that one could fairly easily learn the full language from the subset language.
Mar 03 2013
prev sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On Mar 2, 2013 5:31 PM, "js.mdnq" <js_adddot+mdng gmail.com> wrote:
 On Saturday, 2 March 2013 at 15:12:40 UTC, Iain Buclaw wrote:
 On Mar 2, 2013 3:01 PM, "SomeDude" <lovelydear mailmetrash.com> wrote:
 On Saturday, 2 March 2013 at 14:47:55 UTC, David Nadlinger wrote:
 On Saturday, 2 March 2013 at 07:16:04 UTC, SomeDude wrote:
 On Saturday, 2 March 2013 at 06:50:32 UTC, SomeDude wrote:
 Exactly. This fixed subset would be very limited in comparison to the
full language (I can imagine something looking a bit like a smaller Go, there would probably be no templates at all, no CTFE, maybe even no exceptions, for instance), but would be orthogonal, completely stable in terms of spec, and known to work. It could be defined for other real
world
 usages as well, like embedding in small appliances.
 It would also make it easy to bootstrap the compiler on new platforms.
I don't see how this would help with proting to different platofrms at
all if you have a cross-compiler.
 Yes, the DMD frontend currently isn't really built with
cross-compilation in mind (e.g. using the host's floating point
arithmetic
 for constant folding/CTFE), but once this has been changed, I don't see
how
 the language used would make any difference in re-targetting at all.
 You simply use another host system (e.g. Windows/Linux x86) until the
new backend/runtime is stable enough for the compiler to self-host.
 David
And what if you *don't* have a cross compiler ? You compile the D subset
(bootstrapper) in C and off you go (provided you have a reasonable C compiler on that platform). I don't see how using only a subset of the language would have an effect
on
 cross compiling or porting of a compiler self hosted in D.  Your argument
 is lost on me some dude...

 Regards
For the same reason that most embedded languages use C and not C++.
These aren't self hosting if they are written in another language. Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 02 2013
prev sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Saturday, 2 March 2013 at 14:55:19 UTC, SomeDude wrote:
 On Saturday, 2 March 2013 at 14:47:55 UTC, David Nadlinger 
 wrote:
 You simply use another host system (e.g. Windows/Linux x86) 
 until the new backend/runtime is stable enough for the 
 compiler to self-host.

 David
And what if you *don't* have a cross compiler ? You compile the D subset (bootstrapper) in C and off you go (provided you have a reasonable C compiler on that platform).
I think you are misunderstanding something here. You need a backend for the new platform anyway for a D compiler on it to be of any use. Or do you envision building x86 D binaries on <fancy_new_architecture> to be an important use case? David
Mar 04 2013
parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Monday, 4 March 2013 at 13:40:27 UTC, David Nadlinger wrote:
 I think you are misunderstanding something here.

 You need a backend for the new platform anyway for a D compiler 
 on it to be of any use. Or do you envision building x86 D 
 binaries on <fancy_new_architecture> to be an important use 
 case?

 David
Oh ok. Maybe I was implying the gcc backend, which has been ported to several platforms.
Mar 04 2013
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/2/2013 6:47 AM, David Nadlinger wrote:
 You simply use another host system (e.g. Windows/Linux x86) until the new
 backend/runtime is stable enough for the compiler to self-host.
In fact, if the new system supports sshfs, it is fairly easy to do.
Mar 02 2013
prev sibling next sibling parent reply "Vincent" <thornik gmail.com> writes:
Andrei, I cannot catch these steps:

 1. Implement the dtoh standalone program that takes a D module 
 and generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the 
 compiler. At given points throughout the code D code will 
 coexist and link with C++ code.
Since D is written in C++, what exactly are you going to convert to a "C++ header"? Anyway, I suggest keeping away from the C++ code - it's written with "C++ in mind", while D offers a brand new world. So I support rewriting D in D from scratch: initially the frontend only, plus the LLVM backend. And later (if we find LLVM not so good) we can write our own backend.
Mar 01 2013
parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 01.03.2013 10:54, schrieb Vincent:
 Andrei, I cannot catch these steps:

 1. Implement the dtoh standalone program that takes a D module
 and generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the
 compiler. At given points throughout the code D code will
 coexist and link with C++ code.
Since D is written on C++, what exactly you gonna convert to "C++ header"??
it will be an incremental port using both C++ and D code so the DMD C++ base needs headers from the D based module replacements
Anyway, I offer to keep away from C++ code - it's written with
"C++ in mind", while D offers brand new world. So I support
rewritting D on D from scratch.
that will just not work for a project of that size - or better: there are several from-scratch attempts without success out there. An incremental port followed by refactoring is the best/fastest and most error-free thing we can get.

benefits:
- will help to find more bugs/missing features/tools in the C/C++ <-> D conversion/adaption area (which is still a big plus for D)
- keeps both the pure D and the semi-pure :) D guys (Walter, gdc, ldc frontend developers) in the same boat
- the dmd frontend will become the very first community-driven BIG project, which can be a much better orientation for bug-prevention designs/future D ideas etc. than everything else
etc...
Mar 01 2013
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 03/01/2013 11:08 AM, dennis luehring wrote:
 Am 01.03.2013 10:54, schrieb Vincent:
 ...
  >Anyway, I offer to keep away from C++ code - it's written with
  >"C++ in mind", while D offers brand new world. So I support
  >rewritting D on D from scratch.

 that will just not work for an project of that size - or better - there
 are several tries of from-scratch without success out there
 ...
As well as some that will likely become successful. Anyway, imo both should be attempted.
Mar 01 2013
prev sibling next sibling parent reply "Don" <turnyourkidsintocash nospam.com> writes:
On Thursday, 28 February 2013 at 00:37:50 UTC, Andrei 
Alexandrescu wrote:
 Hello,


 Walter and I have had a long conversation about the next 
 radical thing to do to improve D's standing. Like others in 
 this community, we believe it's a good time to consider 
 bootstrapping the compiler. Having the D compiler written in D 
 has quite a few advantages, among which taking advantages of 
 D's features and having a large codebase that would be its own 
 test harness.

 By this we'd like to initiate a dialog about how this large 
 project can be initiated and driven through completion. Our 
 initial basic ideas are:

 1. Implement the dtoh standalone program that takes a D module 
 and generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the 
 compiler. At given points throughout the code D code will 
 coexist and link with C++ code.

 3. At a point in the future the last C++ module will be 
 replaced with a D module. Going forward there will be no more 
 need for a C++ compiler to build the compiler (except as a 
 bootstrapping test).

 It is essential that we get support from the larger community 
 for this. This is a large project that should enjoy strong 
 leadership apart from Walter himself (as he is busy with 
 dynamic library support which is strategic) and robust 
 participation from many of us.

 Please chime in with ideas on how to make this happen.
This would be a huge step forward; I'm sure all of us who have made significant contributions to the compiler are frustrated by the many things that are difficult in C++ but would be easy in D. But in my view, before step 2 can happen, we need to clean up the glue layer. Once we have an isolated, clearly defined front-end that is shared between dmd, gdc and ldc, we can start converting it.
Mar 01 2013
parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Friday, 1 March 2013 at 11:45:42 UTC, Don wrote:
 On Thursday, 28 February 2013 at 00:37:50 UTC, Andrei 
 Alexandrescu wrote:
 Hello,


 Walter and I have had a long conversation about the next 
 radical thing to do to improve D's standing. Like others in 
 this community, we believe it's a good time to consider 
 bootstrapping the compiler. Having the D compiler written in D 
 has quite a few advantages, among which taking advantages of 
 D's features and having a large codebase that would be its own 
 test harness.

 By this we'd like to initiate a dialog about how this large 
 project can be initiated and driven through completion. Our 
 initial basic ideas are:

 1. Implement the dtoh standalone program that takes a D module 
 and generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the 
 compiler. At given points throughout the code D code will 
 coexist and link with C++ code.

 3. At a point in the future the last C++ module will be 
 replaced with a D module. Going forward there will be no more 
 need for a C++ compiler to build the compiler (except as a 
 bootstrapping test).

 It is essential that we get support from the larger community 
 for this. This is a large project that should enjoy strong 
 leadership apart from Walter himself (as he is busy with 
 dynamic library support which is strategic) and robust 
 participation from many of us.

 Please chime in with ideas on how to make this happen.
This would be a huge step forward, I'm sure all of us who have made significant contributions to the compiler are frustrated by the many things that are difficult in C++ but would be easy in D. But in my view, before step 2 can happen, we need to clean up the glue layer. Once we have an isolated, clearly defined front-end that is shared between dmd, gdc and ldc, we can start converting it.
Like js.mdnq wrote, I'm pretty sure this will fail because of the circular dependency problem and because of memory problems if the compiler isn't written in a strict subset of the language. AFAIK, OCaml is compiled in a strict subset of Caml, for instance, and I would believe many bootstrapping compilers do the same.
Mar 01 2013
prev sibling next sibling parent reply "Oleg Kuporosov" <Oleg.Kuporosov gmail.com> writes:
On Thursday, 28 February 2013 at 00:37:50 UTC, Andrei 
Alexandrescu wrote:
 Hello,


 Walter and I have had a long conversation about the next 
 radical thing to do to improve D's standing. Like others in 
 this community, we believe it's a good time to consider 
 bootstrapping the compiler. Having the D compiler written in D 
 has quite a few advantages, among which taking advantages of 
 D's features and having a large codebase that would be its own 
 test harness.
Strategically it is a great idea, but tactically there are probably more attractive (for new users also) areas in which to improve the toolchain:

- GC. The current solution is just like Java's from the '90s. It hits all OSes and mostly everybody, and makes it very hard to develop solutions with soft-RT requirements, like games and multimedia processing.

- Linker for Windows. optlink is far from current industry requirements/standards. Using COFF for Win64 is good, but we now have a dependency on an external toolchain. Unfortunately C++/CLI was excluded from SDK'12, which probably shows a new trend. How long will SDK'10 be available for download and compatible with the next Windows? Big risk.

Good article on what can replace C from Damien, including D - http://damienkatz.net/2013/01/follow_up_to_the_unreasonable.html
Mar 01 2013
parent "jerro" <a a.com> writes:
 - GC. Current solution is just like java's from 90th. Hit all 
 OS and mostly everybody. Makes so hard to develop solutions 
 with soft-RT requirements, like
 games and multimedia processing.
For those use cases, it may be more productive to make avoiding the GC easier than to try to improve GC's performance (not that making the GC faster is a bad thing).
Mar 01 2013
prev sibling next sibling parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2013-02-28 00:37:50 +0000, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 Hello,
 
 
 Walter and I have had a long conversation about the next radical thing 
 to do to improve D's standing. Like others in this community, we 
 believe it's a good time to consider bootstrapping the compiler. Having 
 the D compiler written in D has quite a few advantages, among which 
 taking advantages of D's features and having a large codebase that 
 would be its own test harness.
 
 By this we'd like to initiate a dialog about how this large project can 
 be initiated and driven through completion. Our initial basic ideas are:
 
 1. Implement the dtoh standalone program that takes a D module and 
 generates its corresponding C++ header.
 
 2. Use dtoh to initiate and conduct an incremental port of the 
 compiler. At given points throughout the code D code will coexist and 
 link with C++ code.
 
 3. At a point in the future the last C++ module will be replaced with a 
 D module. Going forward there will be no more need for a C++ compiler 
 to build the compiler (except as a bootstrapping test).
 
 It is essential that we get support from the larger community for this. 
 This is a large project that should enjoy strong leadership apart from 
 Walter himself (as he is busy with dynamic library support which is 
 strategic) and robust participation from many of us.
 
 Please chime in with ideas on how to make this happen.
Actually, I think it'd be easier and faster to convert it all in one chunk. Perhaps I'm a little too optimistic, but I did successfully port a game from D to C++ once and it was not that difficult. I never bothered with having a half-translated version that'd work. My impression is that trying to add some layer to make the intermediary state compile is more likely to introduce bugs than to help. The current architecture isn't modular enough to do that without many complications. -- Michel Fortin michel.fortin michelf.ca http://michelf.ca/
Mar 01 2013
prev sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Wed, 27 Feb 2013 16:37:50 -0800, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Hello,


 Walter and I have had a long conversation about the next radical thing  
 to do to improve D's standing. Like others in this community, we believe  
 it's a good time to consider bootstrapping the compiler. Having the D  
 compiler written in D has quite a few advantages, among which taking  
 advantages of D's features and having a large codebase that would be its  
 own test harness.

 By this we'd like to initiate a dialog about how this large project can  
 be initiated and driven through completion. Our initial basic ideas are:

 1. Implement the dtoh standalone program that takes a D module and  
 generates its corresponding C++ header.

 2. Use dtoh to initiate and conduct an incremental port of the compiler.  
 At given points throughout the code D code will coexist and link with  
 C++ code.

 3. At a point in the future the last C++ module will be replaced with a  
 D module. Going forward there will be no more need for a C++ compiler to  
 build the compiler (except as a bootstrapping test).

 It is essential that we get support from the larger community for this.  
 This is a large project that should enjoy strong leadership apart from  
 Walter himself (as he is busy with dynamic library support which is  
 strategic) and robust participation from many of us.

 Please chime in with ideas on how to make this happen.


 Thanks,

 Andrei
First off, I am totally in favor of rewriting D in D. However, we should move carefully, as there is a minefield of potential issues here. The most important benefit of this project that I see is that it would force the codebase to rapidly stabilize by forcing the developers to make the language and library work for the compiler.

Andrei, specifically: having worked in the DMD code, it is, at this time, somewhat unrealistic to expect to be able to port the code one file at a time. As has been previously discussed, DMD makes use of C++ features that D can't link with, and in many cases the current code is not well modularized. The DMD C++ code would have to be very carefully refactored with D porting in mind PRIOR to beginning the actual porting project. That will incur an additional time penalty. I don't see this happening without a project freeze. We absolutely cannot be porting code and fixing bugs at the same time. Even if we did one file/module at a time, many of the files are multiple thousands of lines of code. Just doing a straight conversion of a single file will take non-trivial amounts of time.

The legal issues surrounding the back-end have been the cause of great concern among many of the Linux distro packagers. This is a VERY grave issue that MUST be addressed. One of the prime reasons driving the ubiquity of GCC is that its license is acceptable to virtually every distro available. DMD will NEVER be available on distros like Debian. THAT is a PROBLEM.

My preference would be to completely replace the back-end with LLVM. Why LLVM? Well, as opposed to GCC, it was designed from the ground up to support many languages. The benefit here is that it is possible to create a standalone compiler that generates LLVM bitcode that can then be run through LLVM. My understanding (and I am happy to be corrected here) is that LLVM does not need the front-end to be compiled into the back-end.
I would ask the community and particularly Walter to consider the following plan:

1. Freeze the DMD repo.

2. Set up a new repo for the DMD-in-D codebase. Walter creates the basic folder/file structure.

3. The community can then begin porting the compiler by submitting pull requests; this allows multiple people to potentially port the same code, and the core team can select the best conversion.

4. Switch the backend to LLVM.

This would reap the following benefits:

1. A highly optimized code generator. They have hundreds of people working on theirs. We don't.

2. Reduction in the specialized knowledge DMD requires. Very few people understand the DMDBE; the bus factor is uncomfortably low.

3. Reduction in workload for the core team. By removing the need to support the backend, the team will be able to focus on the front-end.

4. Portability. Simply put, the amount of work required to make DMD work on ARM is beyond reasonable, and ARM support is absolutely required in the future of computing. If we used LLVM this becomes almost trivially easy: just rework druntime/phobos for ARM.

Once the port is complete and working, we unfreeze DMD for bug fixes and get back to it. I suspect that by allowing the many people skilled in D to port the code in a simultaneous fashion, we could complete the porting in a matter of months. It would be longer than the desired release cycle of two months, but I would think that four months is a reasonable estimate.

Ok D community, destroy me!

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Mar 05 2013
next sibling parent "J" <not_listed not.not.listed> writes:
On Wednesday, 6 March 2013 at 00:25:30 UTC, Adam Wilson wrote:
 My preference would be to completely replace the back-end with 
 LLVM.
+1

In addition to getting ARM and JavaScript (via Emscripten) output, going to LLVM brings the possibility of using its JIT to make a REPL possible, and nice dynamic code loading possibilities as well.

-J
Mar 05 2013
prev sibling next sibling parent reply "Rob T" <alanb ucora.com> writes:
On Wednesday, 6 March 2013 at 00:25:30 UTC, Adam Wilson wrote:
[...]
 My preference would be to completely replace the back-end with 
 LLVM. Why LLVM? Well as opposed to GCC it was designed from the 
 ground up to support many languages. The benefit here is that 
 it is possible to create standalone compiler the generates LLVM 
 bytecode that can then be run through LLVM. My understanding 
 (and I am happy to be corrected here) is that LLVM does not 
 need the front-end to be compiled into the back-end.
That seems like the most obvious direction to take. Is there any valid reason not to? --rt
Mar 05 2013
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, March 06, 2013 02:44:07 Rob T wrote:
 On Wednesday, 6 March 2013 at 00:25:30 UTC, Adam Wilson wrote:
 [...]
 
 My preference would be to completely replace the back-end with
 LLVM. Why LLVM? Well as opposed to GCC it was designed from the
 ground up to support many languages. The benefit here is that
 it is possible to create standalone compiler the generates LLVM
 bytecode that can then be run through LLVM. My understanding
 (and I am happy to be corrected here) is that LLVM does not
 need the front-end to be compiled into the back-end.
That seems like the most obvious direction to take. Is there any valid reason not to?
Because LDC already does that, there are potential legal issues with Walter working on other backends, and it's completely unnecessary. It's a shame that the stance of Debian and some other distros makes it so that dmd can't be on them, but both gdc and ldc already exist and are both completely FOSS. The picky distros can just stick with those, and if anyone using them really wants the reference compiler, they can just install it themselves.

I agree that it sucks that dmd's backend is not fully open source, but the code is available to read and provide fixes for, and no code compiled by it is affected by the license. All it really affects is whether it can go on some Linux distros, and given that we have two other perfectly good compilers which _can_ go on such distros, I don't think that it's at all worth worrying about dmd's license. There are much, much more important things to worry about (like bug fixing). - Jonathan M Davis
Mar 05 2013
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Wednesday, 6 March 2013 at 03:19:23 UTC, Jonathan M Davis 
wrote:
 On Wednesday, March 06, 2013 02:44:07 Rob T wrote:
 On Wednesday, 6 March 2013 at 00:25:30 UTC, Adam Wilson wrote:
 [...]
 
 My preference would be to completely replace the back-end 
 with
 LLVM. Why LLVM? Well as opposed to GCC it was designed from 
 the
 ground up to support many languages. The benefit here is that
 it is possible to create standalone compiler the generates 
 LLVM
 bytecode that can then be run through LLVM. My understanding
 (and I am happy to be corrected here) is that LLVM does not
 need the front-end to be compiled into the back-end.
That seems like the most obvious direction to take. Is there any valid reason not to?
Because LDC already does that, there are potential legal issues with Walter working on other backends, and it's completely unnecessary. It's a shame that the stance of debian and some other distros makes it so that dmd can't be on them, but both gdc and ldc already exist and are both completely FOSS. The picky distros can just stick with those, and if anyone using them really wants the reference compiler, they can just install it themselves. I agree that it sucks that dmd's backend is not fully open source, but the code is available to read and provide fixes for, and no code compiled by it is affected by the license. All it really affects is whether it can go on some Linux distros, and given that we have two other perfectly good compilers which _can_ go on such distros, I don't think that it's at all worth worrying about dmd's license. There are much, much more important things to worry about (like bug fixing). - Jonathan M Davis
Is it realistic to consider making the frontend completely portable across backends? I'm imagining a situation where there is no gdc/ldc frontend, just glue to the backend. The advantages seem significant.
Mar 06 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 6 March 2013 09:28, John Colvin <john.loughran.colvin gmail.com> wrote:

 On Wednesday, 6 March 2013 at 03:19:23 UTC, Jonathan M Davis wrote:

 On Wednesday, March 06, 2013 02:44:07 Rob T wrote:

 On Wednesday, 6 March 2013 at 00:25:30 UTC, Adam Wilson wrote:
 [...]

 My preference would be to completely replace the back-end > with
 LLVM. Why LLVM? Well as opposed to GCC it was designed from > the
 ground up to support many languages. The benefit here is that
 it is possible to create standalone compiler the generates > LLVM
 bytecode that can then be run through LLVM. My understanding
 (and I am happy to be corrected here) is that LLVM does not
 need the front-end to be compiled into the back-end.
That seems like the most obvious direction to take. Is there any valid reason not to?
Because LDC already does that, there are potential legal issues with Walter working on other backends, and it's completely unnecessary. It's a shame that the stance of debian and some other distros makes it so that dmd can't be on them, but both gdc and ldc already exist and are both completely FOSS. The picky distros can just stick with those, and if anyone using them really wants the reference compiler, they can just install it themselves. I agree that it sucks that dmd's backend is not fully open source, but the code is available to read and provide fixes for, and no code compiled by it is affected by the license. All it really affects is whether it can go on some Linux distros, and given that we have two other perfectly good compilers which _can_ go on such distros, I don't think that it's at all worth worrying about dmd's license. There are much, much more important things to worry about (like bug fixing). - Jonathan M Davis
Is it realistic to consider making the frontend completely portable across backends? I'm imagining a situation where there is no gdc/ldc frontend, just glue to the backend. The advantages seem significant.
This is not new. Though people seem to only just be speculating about the idea in the NG, the truth is that this started happening around 2 months ago. However, this is a slow process that will take time. Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 06 2013
parent reply "Minas Mina" <minas_mina1990 hotmail.co.uk> writes:
A big problem is that GDC and LDC in the distros are not up to 
date. GDC was at 2.058, I think, which is ancient. This has forced me 
to use dmd even for my final code (I don't want to get into the 
trouble of building them from source).
Mar 06 2013
next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On Mar 6, 2013 12:51 PM, "Minas Mina" <minas_mina1990 hotmail.co.uk> wrote:
 A big problem is that GDC and LDC in the distros are not up to date. GDC
was 2.058 I think. This has forced me to use dmd even for my final code (I don't want to get in the trouble of building them by source, this is ancient).

One of the benefits of the merger would be that this would be partly a non-issue, as all distros will (or should) ship gdc. However, what won't happen is frontend updates following a release of gdc. So eg: one release will come with the 2.062 frontend, and 10 months later the next release will come with 2.068, or whatever happens to be current at the time.

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 06 2013
prev sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On Mar 6, 2013 1:18 PM, "Iain Buclaw" <ibuclaw ubuntu.com> wrote:
 On Mar 6, 2013 12:51 PM, "Minas Mina" <minas_mina1990 hotmail.co.uk>
wrote:
 A big problem is that GDC and LDC in the distros are not up to date.
GDC was 2.058 I think. This has forced me to use dmd even for my final code (I don't want to get in the trouble of building them by source, this is ancient).
 One of the benefits of the merger would be that this would be a part
non-issue anymore, as all distros will (or should) ship gdc.
 However what won't happen is frontend updates following a release of gdc.
So eg: one release will come with 2.062 frontend, and 10 months later the next release will come with 2.068, or whatever happens to be the current at the time.

This also might be a time to re-address the minor release scheme that has
been discussed in the past (eg: 2.062.1, 2.062.2, etc).   Rather than focus
on maintaining a tree for each release of the D frontend implementation, we
pick a common release that gdc/ldc is shipped with, and pull in bug fixes
from main development and later releases.  When the shipped versions of
gdc/ldc get updated, we then start the process over from that new common
release (again, eg: 2.068) and start maintaining minor releases for that
(2.068.1, etc).

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 06 2013
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Wednesday, 6 March 2013 at 03:19:23 UTC, Jonathan M Davis 
wrote:
 It's a shame that
 the stance of debian and some other distros makes it so that 
 dmd can't be on
 them, but both gdc and ldc already exist and are both 
 completely FOSS. The
 picky distros can just stick with those, and if anyone using 
 them really wants
 the reference compiler, they can just install it themselves.
According to distrowatch Ubuntu and Mint are more popular than Debian, and Ubuntu allows proprietary software like Opera browser and Nvidia drivers, so dmd won't be a problem too. Why Debian policies should be an issue?
Mar 07 2013
next sibling parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 03/07/2013 01:03 PM, Kagamin wrote:
 According to distrowatch Ubuntu and Mint are more popular than Debian,
 and Ubuntu allows proprietary software like Opera browser and Nvidia
 drivers, so dmd won't be a problem too. Why Debian policies should be an
 issue?
Both Ubuntu and Mint are based off of Debian, so if you get on Debian you get on those and many Debian-based others as well. Besides that, Debian is more popular with hardcore developer types that will help push adoption. And finally, while I think it's a shame that the reference compiler is proprietary, Debian has a non-free repository that DMD can be placed on as long as the binaries are redistributable.
Mar 07 2013
parent reply "Kagamin" <spam here.lot> writes:
On Thursday, 7 March 2013 at 18:56:05 UTC, Jeff Nowakowski wrote:
 And finally, while I think it's a shame that the reference 
 compiler is proprietary, Debian has a non-free repository that 
 DMD can be placed on as long as the binaries are 
 redistributable.
Are they? I have a vague memory of dmd being non-redistributable.
Mar 11 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, March 11, 2013 10:10:54 Kagamin wrote:
 On Thursday, 7 March 2013 at 18:56:05 UTC, Jeff Nowakowski wrote:
 And finally, while I think it's a shame that the reference
 compiler is proprietary, Debian has a non-free repository that
 DMD can be placed on as long as the binaries are
 redistributable.
Are they? I have a vague memory of dmd being non-redistributable.
It requires Walter's permission to redistribute it, but he's likely to give permission if you ask. - Jonathan M Davis
Mar 11 2013
parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 11 March 2013 at 11:54:29 UTC, Jonathan M Davis wrote:
 On Monday, March 11, 2013 10:10:54 Kagamin wrote:
 On Thursday, 7 March 2013 at 18:56:05 UTC, Jeff Nowakowski 
 wrote:
 And finally, while I think it's a shame that the reference
 compiler is proprietary, Debian has a non-free repository 
 that
 DMD can be placed on as long as the binaries are
 redistributable.
Are they? I have a vague memory of dmd being non-redistributable.
It requires Walter's permission to redstribute it, but he's likely to give permission if you ask. - Jonathan M Davis
That is so problematic for repositories.
Mar 11 2013
prev sibling parent reply Russel Winder <russel winder.org.uk> writes:
On Thu, 2013-03-07 at 19:03 +0100, Kagamin wrote:
[…]
 According to distrowatch Ubuntu and Mint are more popular than 
 Debian, and Ubuntu allows proprietary software like Opera browser 
 and Nvidia drivers, so dmd won't be a problem too. Why Debian 
 policies should be an issue?
As Jeff pointed out, Debian is the base for Ubuntu, Mint, and others so if you get in Debian you are in Ubuntu, Mint, etc. It is possible to get into Ubuntu, Mint, etc. separately but then you have many channels instead of just the one.

Debian also allows proprietary software such as NVIDIA drivers, it is just that they are in the non-free repository instead of the free repository. Non-free is not available by default in Debian but it is there. I use it all the time for NVIDIA drivers and some other stuff.

-- 
Russel.
===========================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Mar 09 2013
next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 9 March 2013 at 09:08:09 UTC, Russel Winder wrote:
 On Thu, 2013-03-07 at 19:03 +0100, Kagamin wrote:
 […]
 According to distrowatch Ubuntu and Mint are more popular than 
 Debian, and Ubuntu allows proprietary software like Opera 
 browser and Nvidia drivers, so dmd won't be a problem too. Why 
 Debian policies should be an issue?
As Jeff pointed out, Debian is the base for Ubuntu, Mint, and others so if you get in Debian you are in Ubuntu, Mint, etc. It is possible to get into Ubuntu, Mint, etc. separately but then you have many channels instead of just the one. Debian also allows proprietary software such as NVIDIA drivers, it is just that they are in the non-free repository instead of the free repository. Non-free is not available by default in Debian but it is there. I use it all the time for NVIDIA drivers and some other stuff.
A similar thing exists in fedora and redhat country: rpmfusion. It has free and non-free branches containing a handful of packages fed and rh won't include in the main repos for licence reasons. Many people use it for nvidia drivers
Mar 09 2013
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 9 March 2013 at 09:08:09 UTC, Russel Winder wrote:
 Debian also allows proprietary software such as NVIDIA drivers, 
 it is
 just that they are in the non-free repository instead of the 
 free
 repository. Non-free is not available by default in Debian but 
 it is
 there. I use it all the time for NVIDIA drivers and some other 
 stuff.
The support is clearly not as good. I got into an argument a few months ago with some Debian maintainers. The topic was that it was impossible to compile wine on a machine with the nvidia drivers. I came up with the issue, and even a fix, and received basically a GTFO: we don't care about making software run non-free software using a non-free driver. It took several months (maybe more than a year) for the issue to be solved, in a very similar manner to what I proposed in the first place. I'm not the only one that has had similar issues. Debian is not per se against non-free, but some maintainers are, and it does matter.
Mar 09 2013
next sibling parent Russel Winder <russel winder.org.uk> writes:
On Sat, 2013-03-09 at 11:14 +0100, deadalnix wrote:
[…]
 Debian is not per se against non free, but some maintainers are, 
 and it does matter.
This latter point is Debian's biggest problem, and actually it is worse than that in some cases. This is why having Debian maintainers maintaining D packages who are positively associated with D is important.

-- 
Russel.
===========================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Mar 09 2013
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 09, 2013 at 10:38:02AM +0000, Russel Winder wrote:
 On Sat, 2013-03-09 at 11:14 +0100, deadalnix wrote:
 […]
 Debian is not per se against non free, but some maintainers are, 
 and it does matter.
This latter point is Debian's biggest problem, and actually it is worse than that in some cases. This is why having Debian maintainers maintaining D packages who are positively associated with D is important.
[...] +1. I have upload privileges, and I'm willing to help with D packages. T -- Nearly all men can stand adversity, but if you want to test a man's character, give him power. -- Abraham Lincoln
Mar 09 2013
prev sibling next sibling parent reply "Chris Cain" <clcain uncg.edu> writes:
On Wednesday, 6 March 2013 at 00:25:30 UTC, Adam Wilson wrote:
 My preference would be to completely replace the back-end with 
 LLVM. Why LLVM?
I would _really_ like to see this, personally. I kind of doubt it would happen, but I can dream... Not just for the reasons you listed, but because it would potentially enable the compiler to use LLVM's JIT/interpreter to perform CTFE at much higher speeds. There have been several things I've wanted to do at compile time that I simply could not, because CTFE is rather expensive, especially memory-wise, with DMD.

Furthermore, it would also allow some other pretty unique features... For instance, Emscripten (https://github.com/kripken/emscripten) could be used to enable people to write JS code in D (which might be a pretty cool bonus for the vibe.d project).

There are a few problems with LLVM. Specifically, the last I heard, it doesn't do exceptions very well on Windows (maybe not at all?). However, some of the expertise from this community could be leveraged to provide patches to LLVM to support this better. This probably wouldn't be that big of a deal, and it would also help out everyone using LLVM currently.
Mar 05 2013
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Chris Cain:

 I would _really_ like to see this, personally. I kind of doubt 
 it would happen, but I can dream...
I think Walter will keep using D to keep developing his own back-end.
 Not just for the reasons you listed, but because it would
 potentially enable the compiler to use LLVM's JIT/interpreter to
 perform CTFE at much higher speeds.
The LLVM JIT is also slow to compile, so it's better to use it only for the longer-running CT functions, and keep using an interpreter for all the other CT calls. This means D devs will have to manage an interpreter, a JIT, and a compiler for the same language :-) I think that even LISP machines had much less duplication inside than this. Bye, bearophile
Mar 05 2013
prev sibling next sibling parent "Kagamin" <spam here.lot> writes:
On Wednesday, 6 March 2013 at 02:26:43 UTC, Chris Cain wrote:
 There's a few problems with LLVM. Specifically, the last I 
 heard,
 it doesn't do exceptions very well on Windows (maybe not at
 all?). However, some of the expertise from this community could
 be leveraged to provide patches to LLVM to support this better.
 This probably wouldn't be that big of a deal and it would also
 help out everyone using LLVM currently.
clang supports DWARF exceptions in 32-bit. 32-bit SEH is not planned due to patent issues. 64-bit SEH is under development.
Mar 07 2013
prev sibling parent reply "Brad Anderson" <eco gnuk.net> writes:
On Wednesday, 6 March 2013 at 02:26:43 UTC, Chris Cain wrote:
 On Wednesday, 6 March 2013 at 00:25:30 UTC, Adam Wilson wrote:
 My preference would be to completely replace the back-end with 
 LLVM. Why LLVM?
I would _really_ like to see this, personally. I kind of doubt it would happen, but I can dream... Not just for the reasons you listed, but because it would potentially enable the compiler to use LLVM's JIT/interpreter to perform CTFE at much higher speeds. There's been several things I've wanted to do at compile time that I simply could not because CTFE is rather expensive, especially memory-wise, with DMD. Furthermore, it would also allow some other pretty unique features... For instance, Emscripten (https://github.com/kripken/emscripten) could be used to enable people to write JS code in D (which might be a pretty cool bonus for the vibe.d project).
It hasn't been updated in a while, but Adam D. Ruppe made a D to JavaScript compiler: https://github.com/adamdruppe/dtojs
Mar 12 2013
parent reply "Suliman" <evermind live.ru> writes:
So, what the final decision about porting D to D?
Mar 31 2013
next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 31 March 2013 19:31, Suliman <evermind live.ru> wrote:

 So, what the final decision about porting D to D?
https://www.youtube.com/watch?v=fpaQpyU_QiM -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Mar 31 2013
prev sibling parent reply "Zach the Mystic" <reachzach gggggmail.com> writes:
On Sunday, 31 March 2013 at 18:31:33 UTC, Suliman wrote:
 So, what the final decision about porting D to D?
It's not a "final decision", but Daniel Murphy/yebblies has already made so much progress with his automatic conversion program, https://github.com/yebblies/magicport2 that I feel like he carries the torch right now. Please refer to this discussion: http://forum.dlang.org/thread/kgn24n$5u8$1 digitalmars.com#post-kgumek:242tp4:241:40digitalmars.com

Basically:
1) Daniel seems to have this project under control, and he's way ahead of anyone else on it.
2) The current hurdle is the glue layer.
3) The project is mostly being kept private, presumably because he wants to come out with a finished product.
4) All I know is, my gut says YES!
Mar 31 2013
parent reply "Nick B" <nick.barbalich gmail.com> writes:
On Sunday, 31 March 2013 at 23:48:31 UTC, Zach the Mystic wrote:
 On Sunday, 31 March 2013 at 18:31:33 UTC, Suliman wrote:
 So, what the final decision about porting D to D?
It's not a "final decision", but Daniel Murphy/yebblies has already made so much progress with his automatic conversion program, https://github.com/yebblies/magicport2 that I feel like he carries the torch right now. Please refer to this discussion: http://forum.dlang.org/thread/kgn24n$5u8$1 digitalmars.com#post-kgumek:242tp4:241:40digitalmars.com Basically: 1) Daniel seems to have this project under control, and he's way ahead of anyone else on it. 2) The current hurdle is the glue layer. 3) The project is mostly being kept private, presumably because he wants to come out with a finished product. 4) All I know is, my gut says YES!
Question. Does this imply that once Daniel has finished this task, the code will be frozen and a new major release i.e. D 3.0 announced ? Nick
Apr 01 2013
parent reply "Zach the Mystic" <reachzach gggggmail.com> writes:
On Tuesday, 2 April 2013 at 01:09:59 UTC, Nick B wrote:
 On Sunday, 31 March 2013 at 23:48:31 UTC, Zach the Mystic wrote:
 On Sunday, 31 March 2013 at 18:31:33 UTC, Suliman wrote:
 So, what the final decision about porting D to D?
It's not a "final decision", but Daniel Murphy/yebblies has already made so much progress with his automatic conversion program, https://github.com/yebblies/magicport2 that I feel like he carries the torch right now. Please refer to this discussion: http://forum.dlang.org/thread/kgn24n$5u8$1 digitalmars.com#post-kgumek:242tp4:241:40digitalmars.com Basically: 1) Daniel seems to have this project under control, and he's way ahead of anyone else on it. 2) The current hurdle is the glue layer. 3) The project is mostly being kept private, presumably because he wants to come out with a finished product. 4) All I know is, my gut says YES!
Question. Does this imply that once Daniel has finished this task, the code will be frozen and a new major release i.e. D 3.0 announced ? Nick
I'm no expert on that, but I seriously doubt it. D2 is the flagship and will be for a long time, so far as I understand it. Also, Daniel's is an automatic dmd C++ to D conversion program, designed precisely so that the C++ will not need to be frozen, allowing a period where there are both C++ and D frontends. And a new frontend doesn't mean a new language. A "D 3.0" would imply additions and modifications to the language, whereas the topic of this post is changing the compiler.

At the same time, perhaps the fact that the leaders have decided now would be a good time to convert the frontend means the language is reaching an important point in its maturity. Still, there's so much known work to do, plus actually extremely fertile ground for new possibilities within D2, that D3 is probably considered both unnecessary and a bit of a distraction at this time. And yet major versions do exist, and there must be some reason they advance, and to have a frontend written in its own language is in some way a milestone, so maybe you're right!
Apr 01 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Zach the Mystic" <reachzach gggggmail.com> wrote in message 
news:rgotiizywwfzkzrdqwkf forum.dlang.org...
 On Tuesday, 2 April 2013 at 01:09:59 UTC, Nick B wrote:
 On Sunday, 31 March 2013 at 23:48:31 UTC, Zach the Mystic wrote:
 On Sunday, 31 March 2013 at 18:31:33 UTC, Suliman wrote:
 So, what the final decision about porting D to D?
It's not a "final decision", but Daniel Murphy/yebblies has already made so much progress with his automatic conversion program, https://github.com/yebblies/magicport2 that I feel like he carries the torch right now. Please refer to this discussion: http://forum.dlang.org/thread/kgn24n$5u8$1 digitalmars.com#post-kgumek:242tp4:241:40digitalmars.com Basically: 1) Daniel seems to have this project under control, and he's way ahead of anyone else on it. 2) The current hurdle is the glue layer. 3) The project is mostly being kept private, presumably because he wants to come out with a finished product. 4) All I know is, my gut says YES!
Question. Does this imply that once Daniel has finished this task, the code will be frozen and a new major release i.e. D 3.0 announced ? Nick
I'm no expert on that, but I seriously doubt it. D2 is the flagship and will be for a long time, so far as I understand it. Also, Daniel's is an automatic dmd C++ to D conversion program, designed precisely so that the C++ will not need to be frozen, allowing a period where there are both C++ and D frontends. And a new frontend doesn't mean a new language. A "D 3.0" would imply additions and modifications to the language, whereas the topic of this post is changing the compiler.
This is what I'm hoping for. An automatic converter means we never have to freeze development, and the pull requests are never invalidated. We can even automatically convert the pull requests to D by applying, converting, and diffing. Because it is automatically kept up to date, the D version and the C++ version can coexist with minimal disruption while the D version is perfected. At some point we abandon the C++ version and switch all development to the D version. I would guess it will be several months of having both until we reach this kind of trust in the D version.

Right now I'm up to 'get glue layer working', which needs 'allow C++ static variables, member variables, static functions etc', which (for me) needs 'move win32 C++ mangling into the frontend', which needs 'more free time'. If anyone wants to have a go, the plan is to just copy what the linux version does. (cppmangle.c)
 At the same time, perhaps the fact that the leaders have decided now would 
 be a good time to convert the frontend means the language is reaching an 
 important point in its maturity. Still, there's so much known work to do, 
 plus actually extremely fertile ground for new possibilities within D2, 
 that D3 is probably considered both unnecessary and a bit of a distraction 
 at this time. And yet major versions do exist, and there must be some 
 reason they advance, and to have a frontend written in its own language is 
 in some way a milestone, so maybe you're right!
Yeah, D3 is not on the table and may never be. There is no reason we need to change the numbering when switching to D.
Apr 02 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/2/13 9:34 AM, Daniel Murphy wrote:
 Right now I'm up to 'get glue layer working' which needs 'allow C++ static
 variables, member variables, static functions etc' which (for me) needs
 'move win32 C++ mangling into the frontend' which needs 'more free time'.
 If anyone wants to have a go the plan is to just copy what the linux version
 does. (cppmangle.c)
How did you solve the problem that virtual functions for a given class are spread out in several implementation files? Andrei
Apr 02 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:kjenl8$1h4h$1 digitalmars.com...
 On 4/2/13 9:34 AM, Daniel Murphy wrote:
 Right now I'm up to 'get glue layer working' which needs 'allow C++ 
 static
 variables, member variables, static functions etc' which (for me) needs
 'move win32 C++ mangling into the frontend' which needs 'more free time'.
 If anyone wants to have a go the plan is to just copy what the linux 
 version
 does. (cppmangle.c)
How did you solve the problem that virtual functions for a given class are spread out in several implementation files? Andrei
I'm not currently preserving file layout, so they are all merged into the class definitions. It could also be done by generating forwarder functions or changing the language to allow out-of-class function bodies, if keeping the current organization is required. I'm not a fan of out-of-class bodies, but whatever is easiest. Note that C++ code can still define the function bodies for extern(C++) classes, so there is no problem for the various glue layers.

In the future I would prefer to introduce real visitor objects. The current approach of virtual functions + state structs leads to a lot of duplication for each pass that needs to walk the ast. (semantic, toObj, cppMangle, toJson, apply, toChars, toCBuffer, interpret, toMangleBuffer, nothrow, safe, toDelegate etc)
Apr 02 2013
parent reply "Suliman" <bubnenkoff gmail.com> writes:
Does anybody work on port D to D?
Aug 15 2013
next sibling parent "Dicebot" <public dicebot.lv> writes:
https://www.google.com/search?q=site%3Ahttps%3A%2F%2Fgithub.com%2FD-Programming-Language%2Fdmd+%5BDDMD%5D&oq=site%3Ahttps%3A%2F%2Fgithub.com%2FD-Programming-Language%2Fdmd+%5BDDMD%5D
Aug 15 2013
prev sibling next sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 15 August 2013 14:02, Suliman <bubnenkoff gmail.com> wrote:
 Does anybody work on port D to D?
Daniel is the driving force, with myself falling second behind. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Aug 15 2013
parent "David Nadlinger" <code klickverbot.at> writes:
On Thursday, 15 August 2013 at 13:20:15 UTC, Iain Buclaw wrote:
 On 15 August 2013 14:02, Suliman <bubnenkoff gmail.com> wrote:
 Does anybody work on port D to D?
Daniel is the driving force, with myself falling second behind.
And FWIW, we at the LDC front are also working on minimizing the diff of our frontend to the upstream DMD source so as to make a possible future transition easier. David
Aug 15 2013
prev sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Suliman" <bubnenkoff gmail.com> wrote in message 
news:htihsekthjkyhqazuvpc forum.dlang.org...
 Does anybody work on port D to D?
I've done quite a lot of work on it since dconf. The progress of making the C++ source 'conversion compatible' is shown here: https://github.com/D-Programming-Language/dmd/pull/1980

The porting program is here: https://github.com/yebblies/magicport2

I am currently able to convert the C++ source to D, then build that into a compiler capable of building itself, druntime, phobos (with unittests), and passing the test suite on win32.

The next step now is to clear the list of patches by integrating them into the compiler or improving the converter. The large parts of this list are:
- Cleaning up macro uses
- Integrating new extern(C++) support
- Splitting up root.c
- Fixing all narrowing integer conversions
- Removing all variable shadowing
- Correctly mangling templated types
- Finding a clean way to shallow copy classes

Once that is done:
- Enhancing the GC so it can destruct extern(C++) classes (no typeinfo available)
- Porting to other platforms
- Making the generated source more presentable (eg preserving comments)
- Integration with gdc/ldc
Aug 15 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/15/13 7:10 AM, Daniel Murphy wrote:
 "Suliman" <bubnenkoff gmail.com> wrote in message
 news:htihsekthjkyhqazuvpc forum.dlang.org...
 Does anybody work on port D to D?
I've done quite a lot of work on it since dconf. The progress of making the C++ source 'conversion compatible' is shown here: https://github.com/D-Programming-Language/dmd/pull/1980 The porting program is here: https://github.com/yebblies/magicport2 I am currently able to convert the C++ source to D, then build that into a compiler capable of building itself, druntime, phobos (with unittests), and passing the test suite on win32.
Did you have a chance to measure the speed of the Double-D compiler? Andrei
Aug 15 2013
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:kuj10l$194l$1 digitalmars.com...
 On 8/15/13 7:10 AM, Daniel Murphy wrote:
 "Suliman" <bubnenkoff gmail.com> wrote in message
 news:htihsekthjkyhqazuvpc forum.dlang.org...
 Does anybody work on port D to D?
I've done quite a lot of work on it since dconf. The progress of making the C++ source 'conversion compatible' is shown here: https://github.com/D-Programming-Language/dmd/pull/1980 The porting program is here: https://github.com/yebblies/magicport2 I am currently able to convert the C++ source to D, then build that into a compiler capable of building itself, druntime, phobos (with unittests), and passing the test suite on win32.
Did you have a chance to measure the speed of the Double-D compiler? Andrei
Last time I measured there was a ~20% performance hit. The D version throws away all the recent work done on tuning the internal allocator, and uses the GC for all allocations (with collections turned off). I suspect a large chunk of the extra time comes from that.
Aug 16 2013
prev sibling parent reply "Brad Anderson" <eco gnuk.net> writes:
On Thursday, 15 August 2013 at 14:11:02 UTC, Daniel Murphy wrote:
 "Suliman" <bubnenkoff gmail.com> wrote in message
 news:htihsekthjkyhqazuvpc forum.dlang.org...
 Does anybody work on port D to D?
I've done quite a lot of work on it since dconf. The progress of making the C++ source 'conversion compatible' is shown here: https://github.com/D-Programming-Language/dmd/pull/1980
So all of those changes were just done by hand, right? Have the other DDMD labeled pull requests just been you cherry-picking from that branch?
 The porting program is here: 
 https://github.com/yebblies/magicport2

 I am currently able to convert the C++ source to D, then build 
 that into a
 compiler capable of building itself, druntime, phobos (with 
 unittests), and
 passing the test suite on win32.
So what's left is basically cleaning it all up and fine tuning it so it's good enough for the actual transition?
Aug 15 2013
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Aug 15, 2013 at 08:19:06PM +0200, Brad Anderson wrote:
 On Thursday, 15 August 2013 at 14:11:02 UTC, Daniel Murphy wrote:
[...]
I am currently able to convert the C++ source to D, then build that
into a compiler capable of building itself, druntime, phobos (with
unittests), and passing the test suite on win32.
So what's left is basically cleaning it all up and fine tuning it so it's good enough for the actual transition?
Whoa. This is good news! So you're saying we already have a working D compiler written in D (albeit autoconverted from C++), and all that remains is for some cleanup + performance tuning? T -- Windows 95 was a joke, and Windows 98 was the punchline.
Aug 15 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message 
news:mailman.91.1376592874.1719.digitalmars-d puremagic.com...
 On Thu, Aug 15, 2013 at 08:19:06PM +0200, Brad Anderson wrote:
 On Thursday, 15 August 2013 at 14:11:02 UTC, Daniel Murphy wrote:
[...]
I am currently able to convert the C++ source to D, then build that
into a compiler capable of building itself, druntime, phobos (with
unittests), and passing the test suite on win32.
So what's left is basically cleaning it all up and fine tuning it so it's good enough for the actual transition?
Whoa. This is good news! So you're saying we already have a working D compiler written in D (albeit autoconverted from C++), and all that remains is for some cleanup + performance tuning?
Yep, all the fun parts are done and the rest should be fairly tedious.
Aug 16 2013
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, August 16, 2013 18:54:41 Daniel Murphy wrote:
 Yep, all the fun parts are done and the rest should be fairly tedious.
LOL. I guess that that's kind of where I am with splitting std.datetime. It's basically done code-wise, but now I have to fix all of the documentation, which is no fun at all. :) - Jonathan M Davis
Aug 16 2013
prev sibling next sibling parent reply "Kagamin" <spam here.lot> writes:
Isn't the resulting D code still one 70k-line file?
Aug 15 2013
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Kagamin" <spam here.lot> wrote in message 
news:iceiqyqtdsippewgetmp forum.dlang.org...
 Isn't the resulting D code is still one 70k-line file?
~92k.

Putting the code into multiple files is trivial, but until we've done some major refactoring it won't start to resemble the original source organisation. (See https://github.com/D-Programming-Language/dmd/pull/2356 for some discussion.)
Aug 16 2013
parent reply "BS" <slackovsky gmail.com> writes:
On Friday, 16 August 2013 at 09:28:13 UTC, Daniel Murphy wrote:
 "Kagamin" <spam here.lot> wrote in message
 news:iceiqyqtdsippewgetmp forum.dlang.org...
 Isn't the resulting D code is still one 70k-line file?
~92k Putting the code into multiple files is trivial, but until we've done some major refactoring it won't start to resemble the original source organisation. ( see https://github.com/D-Programming-Language/dmd/pull/2356 for some discussion )
Looking forward to DMD!D, the D compiler that compiles itself at compile time :-)
Aug 16 2013
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On Aug 16, 2013 10:55 AM, "BS" <slackovsky gmail.com> wrote:
 On Friday, 16 August 2013 at 09:28:13 UTC, Daniel Murphy wrote:
 "Kagamin" <spam here.lot> wrote in message
 news:iceiqyqtdsippewgetmp forum.dlang.org...
 Isn't the resulting D code is still one 70k-line file?
~92k Putting the code into multiple files is trivial, but until we've done
some
 major refactoring it won't start to resemble the original source
 organisation.  ( see
https://github.com/D-Programming-Language/dmd/pull/2356
 for some discussion )
Looking forward DMD!D The D compiler that compiles itself at compile time :-)
I suspect we won't be able to do that efficiently until Don starts speeding up CTFE. ;-) Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Aug 16 2013
next sibling parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Iain Buclaw" <ibuclaw ubuntu.com> wrote in message 
news:mailman.100.1376649733.1719.digitalmars-d puremagic.com...
 On Aug 16, 2013 10:55 AM, "BS" <slackovsky gmail.com> wrote:
 On Friday, 16 August 2013 at 09:28:13 UTC, Daniel Murphy wrote:
 "Kagamin" <spam here.lot> wrote in message
 news:iceiqyqtdsippewgetmp forum.dlang.org...
 Isn't the resulting D code is still one 70k-line file?
~92k Putting the code into multiple files is trivial, but until we've done
some
 major refactoring it won't start to resemble the original source
 organisation.  ( see
https://github.com/D-Programming-Language/dmd/pull/2356
 for some discussion )
Looking forward DMD!D The D compiler that compiles itself at compile time :-)
I suspect we won't be able to do that efficiently until Don starts speeding up CTFE. ;-)
Well, the compiler could always invoke the compiler (while compiling the compiler) to compile the compiler, thus vastly improving the speed of ctfe.
Aug 16 2013
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08/16/2013 12:42 PM, Iain Buclaw wrote:
 I suspect we won't be able to do that efficiently until Don starts
 speeding up CTFE. ;-)
Using, of course, only CTFE-able language constructs.
Aug 16 2013
prev sibling parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Brad Anderson" <eco gnuk.net> wrote in message 
news:vdiwuykbulxauiabwams forum.dlang.org...
 On Thursday, 15 August 2013 at 14:11:02 UTC, Daniel Murphy wrote:
 "Suliman" <bubnenkoff gmail.com> wrote in message
 news:htihsekthjkyhqazuvpc forum.dlang.org...
 Does anybody work on port D to D?
I've done quite a lot of work on it since dconf. The progress of making the C++ source 'conversion compatible' is shown here: https://github.com/D-Programming-Language/dmd/pull/1980
So all of those changes were just done by hand, right? Have the other DDMD labeled pull requests just been you cherry-picking from that branch?
Yes.
 The porting program is here: https://github.com/yebblies/magicport2

 I am currently able to convert the C++ source to D, then build that into 
 a
 compiler capable of building itself, druntime, phobos (with unittests), 
 and
 passing the test suite on win32.
So what's left is basically cleaning it all up and fine tuning it so it's good enough for the actual transition?
Mostly, yes.
Aug 16 2013
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Wednesday, 6 March 2013 at 00:25:30 UTC, Adam Wilson wrote:
 My preference would be to completely replace the back-end with 
 LLVM. Why LLVM? Well as opposed to GCC it was designed from the 
 ground up to support many languages.
I heard LLVM was written for C and x86; C++, exceptions, and ARM pushed it beyond its limits and created a lot of kludges and redesigns.
Mar 07 2013
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 7 March 2013 at 14:55:18 UTC, Kagamin wrote:
 On Wednesday, 6 March 2013 at 00:25:30 UTC, Adam Wilson wrote:
 My preference would be to completely replace the back-end with 
 LLVM. Why LLVM? Well as opposed to GCC it was designed from 
 the ground up to support many languages.
I heard, llvm was written for C and x86. C++, exceptions and ARM pushed it beyond its limits and created a lot of kludge and redesigns.
And so what? LLVM is more awesome with each version and is clearly evolving much faster than other compilers. That is a clear success story.
Mar 07 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-03-07 15:55, Kagamin wrote:

 I heard, llvm was written for C and x86. C++, exceptions and ARM pushed
 it beyond its limits and created a lot of kludge and redesigns.
Apple is betting everything on Clang/LLVM, and they really need ARM for iOS. They have basically given up on GCC. The last time GCC got an update was with Xcode 3.2.6; the latest Xcode is 4.6, according to this: http://en.wikipedia.org/wiki/Xcode#Toolchain_Versions -- /Jacob Carlborg
Mar 07 2013
next sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2013-03-07 18:31:34 +0000, Jacob Carlborg <doob me.com> said:

 On 2013-03-07 15:55, Kagamin wrote:
 
 I heard, llvm was written for C and x86. C++, exceptions and ARM pushed
 it beyond its limits and created a lot of kludge and redesigns.
Apple is betting everything on Clang/LLVM and they really need ARM for iOS. They have basically given up on GCC. Last time GCC got update was with Xcode 3.2.6, latest Xcode is 4.6, according to this: http://en.wikipedia.org/wiki/Xcode#Toolchain_Versions
In other words, Apple stopped using newer versions of GCC when the licence changed to GPLv3. I wonder where Clang/LLVM would be today if GCC was still available under GPLv2. -- Michel Fortin michel.fortin michelf.ca http://michelf.ca/
Mar 07 2013
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 8 March 2013 at 03:37:41 UTC, Michel Fortin wrote:
 On 2013-03-07 18:31:34 +0000, Jacob Carlborg <doob me.com> said:

 On 2013-03-07 15:55, Kagamin wrote:
 
 I heard, llvm was written for C and x86. C++, exceptions and 
 ARM pushed
 it beyond its limits and created a lot of kludge and 
 redesigns.
Apple is betting everything on Clang/LLVM and they really need ARM for iOS. They have basically given up on GCC. Last time GCC got update was with Xcode 3.2.6, latest Xcode is 4.6, according to this: http://en.wikipedia.org/wiki/Xcode#Toolchain_Versions
In other words, Apple stopped using newer versions of GCC when the licence changed to GPLv3. I wonder where Clang/LLVM would be today if GCC was still available under GPLv2.
BSD people are also switching to LLVM. This is a very high-quality tool in general, and even if you don't consider license issues, you'd find good reasons to use it.
Mar 07 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-03-08 04:37, Michel Fortin wrote:

 In other words, Apple stopped using newer versions of GCC when the
 licence changed to GPLv3. I wonder where Clang/LLVM would be today if
 GCC was still available under GPLv2.
Aha, I didn't know that. Interesting ... -- /Jacob Carlborg
Mar 07 2013
prev sibling parent "Kagamin" <spam here.lot> writes:
On Thursday, 7 March 2013 at 18:31:35 UTC, Jacob Carlborg wrote:
 On 2013-03-07 15:55, Kagamin wrote:

 I heard, llvm was written for C and x86. C++, exceptions and 
 ARM pushed
 it beyond its limits and created a lot of kludge and redesigns.
Apple is betting everything on Clang/LLVM and they really need ARM for iOS.
This means it's designed for an Apple/POSIX environment, i.e. LLVM assumes exceptions can only be thrown manually and that the only source of an exception is a function call; processor traps are assumed to result in POSIX signals.
Mar 08 2013