D - [Helmets on]: "Language independence ..."?
Am ready to be fired at, so take your best shots.

My question is: has anyone seriously considered the notion of DMD producing C code? A kind of D-front, if you like.

Now I realise straight up that DMD can probably produce code straight to binary that will likely be faster than a compiled intermediate C form in a non-trivial number of cases. So I'm not proposing that compiling to machine code be abandoned. However, since there is currently a single compiler/linker that is usable with DMD, the bottleneck could be alleviated by allowing any C compiler to provide the back-end, and this could also help with porting to other platforms (not just Linux, but Mac, VAX, Solaris, etc.). It would also allow debugging within one's accustomed compiler/environment, though of course this would be C debugging, not D. (However, since a lot of the bugs being reported are in the generated code, this may not be a bad thing.)

Now I know the answer's going to be "LOTS", but how much work would it be to give the compiler this extra mode, perhaps with a -ic (intermediate C) switch? Does anyone think the benefits significant and, if so, worth the effort for Walter to do so?

Matthew
Jul 23 2003
There is currently ongoing work to get a D frontend for gcc for exactly this reason. I don't know the status, but things seem very quiet over there - perhaps the work is stalled?
Jul 23 2003
I've thought many times about doing just that. But I think it's a lot more work to generate C than it would be to integrate with gcc. There are also some things that won't work - such as converting the inline assembler to C! -Walter

"Matthew Wilson" <matthew stlsoft.org> wrote in message news:bfnnvo$305l$1 digitaldaemon.com...
Jul 23 2003
I'd be more than happy to accept a version that didn't work with inline assembler.

"Walter" <walter digitalmars.com> wrote in message news:bfnuu7$4cp$1 digitaldaemon.com...
Jul 24 2003
All the C compilers I know of have an inline assembler. ;) Just pass asm blocks straight thru to the C code and let the programmer deal with them. Sean

"Matthew Wilson" <matthew stlsoft.org> wrote in message news:bfo0bm$69k$1 digitaldaemon.com...
Jul 24 2003
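Passing an asm block "straight thru" still means emitting whatever inline-assembly dialect the backend C compiler understands, and that differs between gcc (AT&T syntax) and MSVC (Intel syntax). A minimal sketch of what the emitted C might look like for a trivial block, assuming an x86 target (the function name and the nop body are made up purely for illustration):

=========================
void emitted_asm_block(void)
{
#if defined(__GNUC__)
    __asm__ __volatile__("nop");   /* gcc and friends: AT&T/gas syntax */
#elif defined(_MSC_VER) && defined(_M_IX86)
    __asm { nop }                  /* 32-bit MSVC: Intel syntax */
#else
    /* no known inline assembler for this compiler/target; a D asm
       block simply could not be translated here */
#endif
}
=========================

Anything beyond trivial instructions would also need register and operand translation between dialects, which is why the inline assembler keeps coming up as a real obstacle rather than a pass-through.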
Walter wrote: "I've thought many times about doing just that. But I think it's a lot more work to generate C than it would be to integrate with gcc. There are also some things that won't work - such as converting the inline assembler to C!"

You can't say "it won't work"!!! It may, just not portably. No sense sticking to ANSI C where it doesn't yield what you need. There would be a need to:

* accommodate compiler-dependent extended syntaxes, e.g. SEH with most Win32 compilers and Java-like exception handling with GCC. Preferable solution - an INI file. Or even simply an #include of a file #defining a number of macros which would sort these things away. The problem of the #include approach is with multi-line macros in the case one has to #define something in them, and the lack of flexibility.

* accommodate different inline assembly notations. Like, a converter to AT&T-style assembly and somesuch. Maybe through plug-in scripts? May also be in the main code.

Besides, I believe there are quite a few people who know C and may want to help with that :) , while GCC internals are not something one wants to mess with. And another point: GCC is so slow that going through C would be faster on most systems where other compilers exist.

-i.
Jul 24 2003
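One way to read the "#include of a file #defining a number of macros" idea is a small per-compiler header that the generated C would include. The D_TRY/D_FINALLY/D_END_FINALLY names below are invented for this sketch, and the portable fallback deliberately punts on unwinding - it only shows the shape of such a configuration header, not a complete exception-handling scheme:

=========================
/* d_config.h - hypothetical per-compiler configuration header */
#if defined(_MSC_VER)
/* map onto MSVC's structured exception handling keywords */
#define D_TRY          __try {
#define D_FINALLY      } __finally {
#define D_END_FINALLY  }
#else
/* portable fallback: the "finally" body simply follows the guarded
   block, so it only runs on the normal exit path */
#define D_TRY          {
#define D_FINALLY      } {
#define D_END_FINALLY  }
#endif

/* generated code would then read:

       D_TRY
           do_work();
       D_FINALLY
           clean_up();
       D_END_FINALLY
*/
=========================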
"Ilya Minkov" <midiclub 8ung.at> wrote in message news:bfon1k$q2o$1 digitaldaemon.com...Besides, i believe there are quite a few people who know C and may want to help with that :) , while GCC internals are not something one wants to mess with.I'm such a person. I have always believed that it was the way to go - for several reasons: - It does not require expertice with GCC. - Dependence of GCC could make the compiler break if GCC changes. dfront would probably be a relative small free standing project. - dfront could be ported to platforms where GCC is not available. Cross-compilation (generation of C files, that is) could easily be done. - dfront could be implemented in D using DMD for its initial development. It would be fun to do, and also be a great proof-of-concept for D. Regards, Martin M. Pedersen
Jul 24 2003
"Martin M. Pedersen" <martin moeller-pedersen.dk> wrote in message news:bforpc$upb$1 digitaldaemon.com..."Ilya Minkov" <midiclub 8ung.at> wrote in message news:bfon1k$q2o$1 digitaldaemon.com...ItBesides, i believe there are quite a few people who know C and may want to help with that :) , while GCC internals are not something one wants to mess with.I'm such a person. I have always believed that it was the way to go - for several reasons: - It does not require expertice with GCC. - Dependence of GCC could make the compiler break if GCC changes. dfront would probably be a relative small free standing project. - dfront could be ported to platforms where GCC is not available. Cross-compilation (generation of C files, that is) could easily be done. - dfront could be implemented in D using DMD for its initial development.would be fun to do, and also be a great proof-of-concept for D.If someone wants to start up a dfront project, I'll help with advice and ideas.
Jul 24 2003
In article <bfpfga$1hi2$1 digitaldaemon.com>, Walter says... "If someone wants to start up a dfront project, I'll help with advice and ideas."

I have some questions, before asking myself whether I'll have enough time to do it or not:

- does the D frontend build a syntax tree?
- are the currently distributed sources complete enough to do a dfront project?
- what are, in your opinion, the main obstacles one can find in trying to generate a C source from a D source? I guess there's not only the inline assembler.

Thanks in advance. Ciao.
Jul 25 2003
"Roberto Mariottini" <Roberto_member pathlink.com> wrote in message news:bfr5ic$1kr$1 digitaldaemon.com...In article <bfpfga$1hi2$1 digitaldaemon.com>, Walter says... [...]to doIf someone wants to start up a dfront project, I'll help with advice and ideas.I have some questions, before asking myself whether I'll have enough timeit or not: - does the D frontend build a syntax tree?Yes.- are the currently distributed sources complete enough to do a dfrontproject? Yes.- what are, in your opinion, the main obstacles one can find in trying to generate a c source from a D source? I guess there's not only the inline assembler.o Exception handling - probably have to make this work with setjmp/longjmp. o Nested functions - can be done by making a struct of all the locals, and then passing a pointer to that. o Inline assembly - probably not worth it o Arbitrary initialized data - a nuisance to make this work with CThanks in advace. Ciao.
Jul 26 2003
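To give a feel for the setjmp/longjmp point above, here is a bare-bones sketch of the kind of C such a lowering might produce: a thread-unsafe, fixed-size stack of jump buffers, where entering a try pushes a buffer and throwing jumps to the innermost one. None of these names come from DMD; they are invented for illustration only.

=========================
#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf *d_handlers[32];   /* innermost handler on top */
static int      d_handler_top;

static void d_throw(void)
{
    if (d_handler_top == 0) {     /* no handler left: terminate */
        fprintf(stderr, "unhandled exception\n");
        abort();
    }
    longjmp(*d_handlers[--d_handler_top], 1);
}

static void risky(void)
{
    d_throw();                    /* the "throw" inside the try body */
}

int main(void)
{
    jmp_buf frame;
    d_handlers[d_handler_top++] = &frame;
    if (setjmp(frame) == 0) {
        risky();                  /* the "try" body */
        d_handler_top--;          /* left normally: pop the handler */
    } else {
        printf("caught\n");       /* the "catch" body */
    }
    return 0;
}
=========================

Making finally blocks and nested try statements work on top of this is where the real effort would be.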
Since the objective is not raw performance, C++ might be a better target than C. While probably not quite as wide-spread on a multitude of platforms, D->C++ is probably going to result in nicer generated code than D->C; hence easier debugging. Targeting C++ would also make it easier to do D.NET since you would "just" generate Managed C++.

Dan

"Matthew Wilson" <matthew stlsoft.org> wrote in message news:bfnnvo$305l$1 digitaldaemon.com...
Jul 25 2003
There's still one problem that I see: C++ doesn't have the "finally" construct. This, inline assembly, and other things would force the use of configuration files and compiler extensions. The code generated through C++ wouldn't be slower than the generated C code. But you are definitely right: namespaces and classes in C++ are very flexible to represent any data types of D, and at the same time the respective debuggers are able to demangle names.

One could go through multiple stages:
- Create D AST
- Transform into C++ AST, then either
  - output C++ as text
  - output to a G++-based backend
  - possibly other back-ends? Like, Java bytecode, .NET
- Transform into C AST, then either
  - output C as text
  - possibly output as bytecode for our own interpreter?

The transformation stage into C would not need to take care of all C++ constructs, so it probably won't become too complex. :)

What that leads me to think about again: GCJ reads in Java files or JVM bytecode, while generating code equivalent to C++ and linkable against C++. It is probably a hacked version of G++. If we adhere to their conventions, we could possibly achieve C++ - Java - D triple cross-language compatibility! Maybe the GCJ people can give us some clue how they did it?

-i.
Jul 25 2003
"Ilya Minkov" <midiclub 8ung.at> wrote in message news:bfrefo$a1m$1 digitaldaemon.com...There's still one problem that i see: C++ doesn't have the "finally" construct. This and inline assembly and other things would force to use configuration files and compiler extensions.The "finally" can be faked by making it a "destructor" for a dummy object. But then you've got the problems of accessing the local variables in the stack frame - the way to do that is make all the locals a member of a struct, and the finally will be the destructor for that struct. And that leads to the problem of what to do with multiple finally's, each with different code and each needing to access the stack frame.
Jul 26 2003
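For a single try/finally, the lowering Walter describes could look roughly like the C++ below: the locals move into a struct and the finally body becomes its destructor, so it runs on normal and exceptional exit alike. The D fragment in the comment and all the helper names are made up for this sketch; DMD does not actually emit this.

=========================
// D, conceptually:
//   void f()
//   {
//       int fd = open_file();
//       try     { work(fd); }
//       finally { close_file(fd); }
//   }

#include <cstdio>

static int  open_file()        { return 42; }
static void work(int)          { std::puts("working"); }
static void close_file(int fd) { std::printf("closed %d\n", fd); }

void f()
{
    struct f_frame {
        int fd;
        ~f_frame() { close_file(fd); }   // the "finally" body
    } frame;

    frame.fd = open_file();
    work(frame.fd);                      // the "try" body
}                                        // destructor runs here, even if work() throws
=========================

The sticking point Walter raises - several finally blocks with different code in one function - is what the rest of this subthread turns on.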
Walter wrote: "But then you've got the problems of accessing the local variables in the stack frame - the way to do that is make all the locals a member of a struct, and the finally will be the destructor for that struct. And that leads to the problem of what to do with multiple finally's, each with different code and each needing to access the stack frame."

Since the struct trick is the same as for nested functions, I believe one can factor out "try" blocks which have a "finally" clause into nested functions.

BTW, does this struct trick work with functions nested in nested functions?

-i.
Jul 26 2003
"Ilya Minkov" <midiclub 8ung.at> wrote in message news:bfuq25$i5e$1 digitaldaemon.com...Walter wrote:eachBut then you've got the problems of accessing the local variables in the stack frame - the way to do that is make all the locals a member of a struct, and the finally will be the destructor for that struct. And that leads to the problem of what to do with multiple finally's,It doesn't work because there can be only one destructor, and it cannot take arguments.with different code and each needing to access the stack frame.Since the struct trick is the same for the nested functions, i believe, one can factor out "try" blocks which have "finally" clause into nested functions.BTW, does this struct trick work with functions nested in nestedfunctions? Yes (!). You'll also have to do some ugly things like copy the parameters into the struct: ========================= void foo(int a) { int b; int bar(int c) { return a + b + c; } return b + bar(6); } ======================= struct foo_tmp { int a; int b; }; void foo(int a) { struct foo_tmp tmp; tmp.a = a; tmp.b = 0; return tmp.b + bar(&tmp, 6); } void foo_bar(struct foo_tmp *this, int c) { return this->a + this->b + c; } ========================== Painfully ugly, but workable.
Jul 26 2003
Walter wrote:"Ilya Minkov" <midiclub 8ung.at> wrote in messageSince the struct trick is the same for the nested functions, i believe, one can factor out "try" blocks which have "finally" clause into nested functions.It doesn't work because there can be only one destructor, and it cannot take arguments.But the constructor can. ;) BTW, i'm not sure i can follow about what you would need multiple destructors and/or arguments for, after you'd turn a whole block in question into a nested function?Ah, i get it. just make an extra struct for each new level, so that the usual nested functions get 1 stack struct, the ones nested in them 2 and so on... Or use one struct, but have a field pointing to a previous-level struct in all but the top-level one.BTW, does this struct trick work with functions nested in nested functions?Yes (!).You'll also have to do some ugly things like copy the parameters into the struct:This is not much different from locals, which also have to be initialised first. -i.
Jul 26 2003
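The chained-struct idea above, sketched for two levels of nesting: each inner frame carries an "up" pointer to the enclosing frame, so the innermost function can still reach every outer local. All names are illustrative; this only shows the shape of the transformation, not what a real dfront would generate.

=========================
struct outer_frame { int a; };
struct inner_frame { struct outer_frame *up; int b; };

/* the innermost nested function: reaches locals of both enclosing
   functions through the frame chain */
static int innermost(struct inner_frame *frame, int c)
{
    return frame->up->a + frame->b + c;
}

/* the first-level nested function */
static int inner(struct outer_frame *up, int b)
{
    struct inner_frame frame;
    frame.up = up;
    frame.b  = b;
    return innermost(&frame, 3);
}

int outer(int a)
{
    struct outer_frame frame;
    frame.a = a;
    return inner(&frame, 2);   /* outer(1) == 1 + 2 + 3 == 6 */
}
=========================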
"Ilya Minkov" <midiclub 8ung.at> wrote in message news:bfv1q2$pee$1 digitaldaemon.com...Walter wrote:take"Ilya Minkov" <midiclub 8ung.at> wrote in messageSince the struct trick is the same for the nested functions, i believe, one can factor out "try" blocks which have "finally" clause into nested functions.It doesn't work because there can be only one destructor, and it cannotIf you have multiple finally blocks, with different code in each, how do you translate that into *one* destructor? I think that translating into C++ rather than C doesn't really get us where we want to go.arguments.But the constructor can. ;) BTW, i'm not sure i can follow about what you would need multiple destructors and/or arguments for, after you'd turn a whole block in question into a nested function?
Jul 26 2003
Walter wrote: "If you have multiple finally blocks, with different code in each, how do you translate that into *one* destructor?"

If I understand correctly, the try...finally blocks have to be nested in each other. So why not translate them into nested functions? I know, overhead. Not that the overhead would matter much, though - it is a rather rare case to nest too many of these.

Walter wrote: "I think that translating into C++ rather than C doesn't really get us where we want to go."

Maybe. Just evaluating. OTOH, C++ compatibility may not be that bad.

-i.
Jul 26 2003
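Putting the two halves of this subthread together - one frame struct per nesting level, so each finally gets its own destructor, and the protected block factored into a helper function - a two-level try/finally might lower to something like this. It is only a sketch of the combination being discussed, with invented names, not a claim about what a real dfront would emit.

=========================
#include <cstdio>

// D, conceptually:
//   void f()
//   {
//       int a = 1;
//       try {
//           int b = 2;
//           try     { std::printf("%d\n", a + b); }
//           finally { std::puts("inner finally"); }
//       }
//       finally { std::puts("outer finally"); }
//   }

struct f_outer_frame {
    int a;
    ~f_outer_frame() { std::puts("outer finally"); }   // outer "finally"
};

struct f_inner_frame {
    f_outer_frame *up;                                  // link to the enclosing frame
    int b;
    ~f_inner_frame() { std::puts("inner finally"); }    // inner "finally"
};

// the inner try body, factored out into its own function
static void f_inner_body(f_inner_frame *frame)
{
    std::printf("%d\n", frame->up->a + frame->b);
}

void f()
{
    f_outer_frame outer;
    outer.a = 1;
    {
        f_inner_frame inner;
        inner.up = &outer;
        inner.b  = 2;
        f_inner_body(&inner);
    }   // inner destructor runs here, even if f_inner_body throws
}       // outer destructor runs here
=========================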
In article <bfnnvo$305l$1 digitaldaemon.com>, Matthew Wilson says... "has anyone seriously considered the notion of DMD producing C code? A kind of D-front, if you like."

See http://smarteiffel.loria.fr/ for an example of an OO language (Eiffel) to C "translator". It is GPL licensed, and maybe the backend could be reused or adapted.

Personally, I much prefer C to C++ as a target.
Jul 26 2003
Mark T wrote: "Personally, I much prefer C to C++ as a target."

It seems to me that C would be better in the end, since it compiles faster, and on more platforms. C++ would be a lot easier to implement, though, since most of what's in D is directly available in C++ (most notably exceptions; doing them in C might be a bit of a pain).
Jul 26 2003