
digitalmars.D - tooling quality and some random rant

golgeliyele <usuldan gmail.com> writes:
I am relatively new to D. As a long time C++ coder, I love D. Recently, I have
started doing some coding with D. One of the things that
bothered me was the 'perceived' quality of the tooling. There are some
relatively minor things that make the tooling look bad.

For instance: I am using the DMD compiler on a Mac, and it seems the compiler
has a horrendous command line usage. Just type 'dmd
-h' and observe:

- Some option names are single characters and some are full words. Quality
tooling will support a short and a long option for most of
these.
- Some options have values separated from the option names via '=', some via
*nothing*. The latter is just unacceptable. Look at this:
-offilename. First of all, you had better list this as -of<file-name> so that
one can understand where the option name ends. Second, not allowing a space
or an '=' after the option name is messy and looks unprofessional (see the
sketch after this list).
- All options start with '-', yet the help option starts with '--'.
- Option description text seems to be left aligned, yet there are 3 exceptions

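To illustrate the fused syntax the second bullet complains about (the file
and directory names here are made up):
====
dmd -ofbin/app src/app.d    # '-of' plus its value; no space or '=' allowed
dmd -odbuild -c src/app.d   # same fused pattern with '-od'
====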
The error reporting has issues as well. I noticed that the compiler leaks low
level errors to the user. If you forget to add a main to your
app or misspell it, you get errors like:
====
Undefined symbols:
  "__Dmain", referenced from:
      _D2rt6dmain24mainUiPPaZi7runMainMFZv in libphobos2.a(dmain2_513_1a5.o)
====
I mean, wow, this should really be handled better.
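For context, a minimal way to reproduce this (a made-up file with no main()
on purpose):
====
// greet.d -- a library-style module, no entry point
import std.stdio;
void greet() { writeln("hi"); }
====
Running 'dmd greet.d' asks the linker to build an executable, so the link
fails looking for the D main symbol; 'dmd -c greet.d' avoids it by skipping
the link step.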

Another annoyance, for me anyway, is that the DMD compiler outputs the .o files
without the package directory hierarchy. I like to
organize my code as 'src/my/package/module.d'. And I want to set my output
directory to 'lib' and get 'lib/my/package/module.o'.
But DMD generates 'lib/module.o'. I set up my project to build each .d file into
a .o file as a separate step. I don't even know if this is
the desired setup. But that seems to be the way to make it incremental. I
couldn't find any definitive information on this in the DMD
compiler web page. It says:
"dmd can build an executable much faster if as many of the source files as
possible are put on the command line.

Another advantage to putting multiple source files on the same invocation of
dmd is that dmd will be able to do some level of cross-
module optimizations, such as function inlining across modules."

Yes, but what happens when I have a project with a million lines of code? Is
the suggestion to recompile it every time a file changes?
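For what it's worth, dmd's switch list (quoted in full later in this thread)
suggests a way to get per-module object files that mirror the package layout;
a sketch, with hypothetical paths:
====
# -c: compile only, -od: output directory, -op: keep the source path
dmd -c -odlib -op src/my/pkg/util.d
# produces lib/src/my/pkg/util.o (note: -op keeps the leading 'src/' too)
====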

I am sure there are various other warts about tooling and I know Walter and co.
are working on more important stuff like 64-bit
support, etc. However, if D wants to be successful it needs to excel in all
dimensions. I am sure there are people who are willing to
improve little things like these that make a difference.

IMO, despite all the innovations the D project brings, the lack of pretty
packaging and presentation is hurting it. I have observed
changes for the better lately. Such as the TDPL book, the github move, the new
web page (honestly, the digitalmars page was and still
is a liability for D), and maybe a new web forum interface(?).

I apologize for sounding critical at times. I do appreciate all the great work
that is going into D. I want to see it succeed.
Feb 12 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"golgeliyele" <usuldan gmail.com> wrote in message 
news:ij7s2p$1ld0$1 digitalmars.com...
I am relatively new to D. As a long time C++ coder, I love D. Recently, I 
have started doing some coding with D.
Welcome. I'm another C++ -> D refugee :)
 - Some option names are single characters and some are full words. Quality 
 tooling will support a short and a long option for most of
 these.
That's really not a matter of "quality", it's just a common Unix convention.
 - Some options have values separated from the option names via '=', some
 via *nothing*. The latter is just unacceptable. Look at this:
 -offilename. First of all, you had better list this as -of<file-name> so
 that one can understand where the option name ends. Second, not allowing a
 space or an '=' after the option name is messy and looks unprofessional.
Yea, that is kinda ugly. I wouldn't go so far as to call it "unacceptable" though. I've always gotten by fine with it. An improvement on that would certainly be welcomed though.
 - All options start with '-', yet the help option starts with '--'.
I'm sure that's just to be consistent with the option to get help from pretty much every other command-line app out there. There's a lot of different switches that are sometimes accepted by certain programs for help, like /? or -h, but the one that you can always count on nearly anywhere is --help.
 - Option description text seems to be left aligned, yet there are 3 
 exceptions
It all looks left-aligned to me, but I'm using the Win version. Maybe it's different for OSX. Seems weird that it would be though.
 The error reporting has issues as well. I noticed that the compiler leaks 
 low level errors to the user. If you forget to add a main to your
 app or misspell it, you get errors like:
 ====
 Undefined symbols:
  "__Dmain", referenced from:
      _D2rt6dmain24mainUiPPaZi7runMainMFZv in 
 libphobos2.a(dmain2_513_1a5.o)
 ====
 I mean, wow, this should really be handled better.
That's not the compiler, that's the linker. I don't know what linker DMD uses on OSX, but on Windows it uses OPTLINK which is written in hand-optimized Asm so it's really hard to change. But Walter's been converting it to C (and maybe then to D once that's done) bit-by-bit (so to speak), so linker improvements are at least on the horizon. AIUI, on Linux, DMD just uses the GCC linker, and GCC unfortunately doesn't know anything about D name mangling, just C/C++. Might be true of OSX as well, I don't know though.
 Another annoyance, for me anyway, is that the DMD compiler outputs the .o 
 files without the package directory hierarchy. I like to
 organize my code as 'src/my/package/module.d'. And I want to set my output 
 directory to 'lib' and get 'lib/my/package/module.o'.
 But DMD generates 'lib/module.o'. I set up my project to build each .d file 
 into a .o file as a separate step. I don't even know if this is
 the desired setup. But that seems to be the way to make it incremental. I 
 couldn't find any definitive information on this in the DMD
 compiler web page. It says:
 "dmd can build an executable much faster if as many of the source files as 
 possible are put on the command line.

 Another advantage to putting multiple source files on the same invocation 
 of dmd is that dmd will be able to do some level of cross-
 module optimizations, such as function inlining across modules."

 Yes, but what happens when I have a project with a million lines of code? Is 
 the suggestion to recompile it every time a file changes?
D compiles a few orders of magnitude faster than C++ does. Better handling of incremental building might be nice for really large projects, but it's really not a big issue for D, not like it is for C++. Not long ago, the Google Go people were bragging about their super-fast compile times, but D turned out to be even faster.
 I am sure there are various other warts about tooling and I know Walter 
 and co. are working on more important stuff like 64-bit
 support, etc. However, if D wants to be successful it needs to excel in 
 all dimensions. I am sure there are people who are willing to
 improve little things like these that make a difference.

 IMO, despite all the innovations the D project brings, the lack of pretty 
 packaging and presentation is hurting it. I have observed
 changes for the better lately. Such as the TDPL book, the github move, the 
 new web page (honestly, the digitalmars page was and still
 is a liability for D), and maybe a new web forum interface(?).
Additional volunteers to help out are always welcome!
Feb 12 2011
spir <denis.spir gmail.com> writes:
On 02/13/2011 07:52 AM, Nick Sabalausky wrote:
 - Option description text seems to be left aligned, yet there are 3
  exceptions
It all looks left-aligned to me, but I'm using the Win version. Maybe it's different for OSX. Seems weird that it would be though.
Maybe you did not watch properly, or indeed the win version output
differently. On linux, I get:

spir@d:~$ dmd --help
Digital Mars D Compiler v2.051
Copyright (c) 1999-2010 by Digital Mars written by Walter Bright
Documentation: http://www.digitalmars.com/d/2.0/index.html
Usage:
  dmd files.d ... { -switch }

  files.d        D source files
  @cmdfile       read arguments from cmdfile
  -c             do not link
  -cov           do code coverage analysis
  ...
  -debug=ident   compile in debug code identified by ident
  -debuglib=name set symbolic debug library to name
  -defaultlib=name set default library to name
  -deps=filename write module dependencies to filename
  ...
  -release       compile release version
  -run srcfile args... run resulting program, passing args
  -unittest      compile in unit tests
  ...

Sure, very minor bug. But still... presentation counts.

Denis
-- 
_________________
vita es estrany
spir.wikidot.com
Feb 12 2011
Peter Alexander <peter.alexander.au gmail.com> writes:
On 13/02/11 6:52 AM, Nick Sabalausky wrote:
 D compiles a few orders of magnitude faster than C++ does. Better handling
 of incremental building might be nice for really large projects, but it's
 really not a big issue for D, not like it is for C++.
The only person I know that's worked on large D projects is Tomasz, and he
claimed that he was getting faster compile times in C++ due to being able to
do incremental builds.

"Walter might claim that DMD is fast, but it’s not exactly blazing when you
confront it with a few hundred thousand lines of code. With C/C++, you’d
split your source into .c and .h files, which mean that a localized change
of a .c file only requires the compilation of a single unit. Take an
incremental linker as well, and C++ compiles faster than D. With D you often
have the situation of having to recompile everything upon the slightest
change." (http://h3.gd/devlog/?p=22)
Feb 13 2011
Peter Alexander <peter.alexander.au gmail.com> writes:
On 13/02/11 10:10 AM, Peter Alexander wrote:
 On 13/02/11 6:52 AM, Nick Sabalausky wrote:
 D compiles a few orders of magnitude faster than C++ does. Better
 handling
 of incremental building might be nice for really large projects, but it's
 really not a big issue for D, not like it is for C++.
 The only person I know that's worked on large D projects is Tomasz, and he
 claimed that he was getting faster compile times in C++ due to being able
 to do incremental builds.

 "Walter might claim that DMD is fast, but it’s not exactly blazing when you
 confront it with a few hundred thousand lines of code. With C/C++, you’d
 split your source into .c and .h files, which mean that a localized change
 of a .c file only requires the compilation of a single unit. Take an
 incremental linker as well, and C++ compiles faster than D. With D you
 often have the situation of having to recompile everything upon the
 slightest change." (http://h3.gd/devlog/?p=22)
Turns out this may have been solved: https://bitbucket.org/h3r3tic/xfbuild/wiki/Home
Feb 13 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Peter Alexander" <peter.alexander.au gmail.com> wrote in message 
news:ij8a8p$2gqv$1 digitalmars.com...
 On 13/02/11 10:10 AM, Peter Alexander wrote:
 On 13/02/11 6:52 AM, Nick Sabalausky wrote:
 D compiles a few orders of magnitude faster than C++ does. Better
 handling
 of incremental building might be nice for really large projects, but 
 it's
 really not a big issue for D, not like it is for C++.
 The only person I know that's worked on large D projects is Tomasz, and he
 claimed that he was getting faster compile times in C++ due to being able
 to do incremental builds.

 "Walter might claim that DMD is fast, but it’s not exactly blazing when you
 confront it with a few hundred thousand lines of code. With C/C++, you’d
 split your source into .c and .h files, which mean that a localized change
 of a .c file only requires the compilation of a single unit. Take an
 incremental linker as well, and C++ compiles faster than D. With D you
 often have the situation of having to recompile everything upon the
 slightest change." (http://h3.gd/devlog/?p=22)
Turns out this may have been solved: https://bitbucket.org/h3r3tic/xfbuild/wiki/Home
The problem that xfbuild ended up running into is that DMD puts the generated
code for instantiated templates into an unpredictable object file. This leads
to situations where certain functions end up being lost from the object files
unless you do a full rebuild. Essentially it breaks incremental compilation.
There's a detailed explanation of it somewhere on the xfbuild site.
Feb 13 2011
Jacob Carlborg <doob me.com> writes:
On 2011-02-13 13:24, Nick Sabalausky wrote:
 "Peter Alexander"<peter.alexander.au gmail.com>  wrote in message
 news:ij8a8p$2gqv$1 digitalmars.com...
 On 13/02/11 10:10 AM, Peter Alexander wrote:
 On 13/02/11 6:52 AM, Nick Sabalausky wrote:
 D compiles a few orders of magnitude faster than C++ does. Better
 handling
 of incremental building might be nice for really large projects, but
 it's
 really not a big issue for D, not like it is for C++.
 The only person I know that's worked on large D projects is Tomasz, and he
 claimed that he was getting faster compile times in C++ due to being able
 to do incremental builds.

 "Walter might claim that DMD is fast, but it’s not exactly blazing when you
 confront it with a few hundred thousand lines of code. With C/C++, you’d
 split your source into .c and .h files, which mean that a localized change
 of a .c file only requires the compilation of a single unit. Take an
 incremental linker as well, and C++ compiles faster than D. With D you
 often have the situation of having to recompile everything upon the
 slightest change." (http://h3.gd/devlog/?p=22)
Turns out this may have been solved: https://bitbucket.org/h3r3tic/xfbuild/wiki/Home
 The problem that xfbuild ended up running into is that DMD puts the
 generated code for instantiated templates into an unpredictable object
 file. This leads to situations where certain functions end up being lost
 from the object files unless you do a full rebuild. Essentially it breaks
 incremental compilation. There's a detailed explanation of it somewhere on
 the xfbuild site.
Walter has said in a thread here that if you build with the -lib option it
will output all templates into all object files.

-- 
/Jacob Carlborg
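A sketch of what that looks like on the command line (the file names are
hypothetical):
====
dmd -lib -ofmylib.a src/a.d src/b.d   # one library; template instances are
                                      # emitted into every member object
====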
Feb 14 2011
Walter Bright <newshound2 digitalmars.com> writes:
Peter Alexander wrote:
 On 13/02/11 6:52 AM, Nick Sabalausky wrote:
 D compiles a few orders of magnitude faster than C++ does. Better 
 handling
 of incremental building might be nice for really large projects, but it's
 really not a big issue for D, not like it is for C++.
 The only person I know that's worked on large D projects is Tomasz, and he
 claimed that he was getting faster compile times in C++ due to being able
 to do incremental builds.

 "Walter might claim that DMD is fast, but it’s not exactly blazing when you
 confront it with a few hundred thousand lines of code. With C/C++, you’d
 split your source into .c and .h files, which mean that a localized change
 of a .c file only requires the compilation of a single unit. Take an
 incremental linker as well, and C++ compiles faster than D. With D you
 often have the situation of having to recompile everything upon the
 slightest change." (http://h3.gd/devlog/?p=22)
You can do the same in D using .di files.
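A minimal sketch of that route, assuming the -H family of switches behaves as
the help text quoted later in this thread describes (paths hypothetical):
====
dmd -c -H -Hdimports src/my/pkg/util.d   # emits util.o plus imports/util.di
====
Clients then compile against the .di file and link against the object file or
library.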
Feb 13 2011
Alan Smithee <email example.com> writes:
 You can do the same in D using .di files.
Except no one really does that because such an approach is insanely error
prone. E.g. with classes, you need to copy entire definitions. Change any
ordering, forget a field, change a type, and you're having undefined
behavior.

How about eating your own dog food before making unfounded statements like
that? Trivial transliterations of DMDScript or Empire don't count. So far,
you've only written silly Bash-like scripts in D.
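The layout hazard being described, as a sketch of a hand-maintained pair
(nothing checks that the two files stay in sync):
====
// util.d (implementation)
class C { int x; long y; void f() { /* ... */ } }

// util.di (hand-written interface) -- the fields and their order must match
// util.d exactly, or code compiled against this .di computes wrong member
// offsets:
class C { int x; long y; void f(); }
====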
Feb 13 2011
Gary Whatmore <no spam.sp> writes:
Alan Smithee Wrote:

 You can do the same in D using .di files.
 Except no one really does that because such an approach is insanely error
 prone. E.g. with classes, you need to copy entire definitions. Change any
 ordering, forget a field, change a type, and you're having undefined
 behavior.

 How about eating your own dog food before making unfounded statements like
 that? Trivial transliterations of DMDScript or Empire don't count. So far,
 you've only written silly Bash-like scripts in D.
Let's try to act civil here. Walter bashing is already getting old and mostly
favored by our famous Reddit trolls, that is retard = uriel = eternium =
lurker. I wouldn't be shocked to hear this Alan Smithee is another sockpuppet
of yours, dear "retard". I already did mention eating your own dog food.

On the other hand it's crystal clear that such a task as writing a language
and its compiler without any support from anyone is something only a handful
of developers can and are willing to pursue on this planet. As a result D is
one of the best languages ever built. I honestly wish we wouldn't question
Walter's competence. He only has so much time. All this hate talk here pushes
release dates farther away. We would already have a 64-bit compiler if you
didn't rant so much.

- G.W.
Feb 13 2011
Alan Smithee <email example.com> writes:
Gary Whatmore Wrote (fixed that for you):

Let's try to act reasonable here. Walter fanboyism is already
getting old and sadly favored by our famous NG trolls, that is
pretty much everyone here. I wouldn't be shocked to hear this Gary
Whatmore will be bashing D in about 2 years' time when he realizes
how naive he has been.

The creators haven't even attempted eating their own dog food. On
the other hand it's crystal clear that such a task as writing a
language and its compiler without any support from anyone is the
very definition of "Not Invented Here" that only a handful of
developers are willing to pursue on this planet. As a result D is
one of the most broken languages ever built. I honestly wish we
would sometimes question Walter's competence. He only has so much
time. All this love talk here blinds even more potential users. We
would already have a working compiler if they didn't want to
reinvent everything.
Feb 13 2011
so <so so.so> writes:
On Sun, 13 Feb 2011 19:47:30 +0200, Alan Smithee <email example.com> wrote:

 Gary Whatmore Wrote (fixed that for you):

 Let's try to act reasonable here. Walter fanboyism is already
 getting old and sadly favored by our famous NG trolls, that is
 pretty much everyone here. I wouldn't be shocked to hear this Gary
 Whatmore will be bashing D in about 2 years' time when he realizes
 how naive he has been.

 The creators haven't even attempted eating their own dog food. On
 the other hand it's crystal clear that such a task as writing a
 language and its compiler without any support from anyone is the
 very definition of "Not Invented Here" that only a handful of
 developers are willing to pursue on this planet. As a result D is
 one of the most broken languages ever built. I honestly wish we
 would sometimes question Walter's competence. He only has so much
 time. All this love talk here blinds even more potential users. We
 would already have a working compiler if they didn't want to
 reinvent everything.
This love talk exists just because some people occasionally insult others
(especially Walter) with no basis whatsoever. You might come here, state your
problems and opinions, and propose solutions if you have any in mind. But no,
they prefer bitching and insulting.

People might respect Walter and this might verge on "fanboyism". On the other
hand, insulting him is disgusting and baseless. Is he forcing anyone else to
use D? He is just minding his own business as far as I can see. If you think
something is broken, prove it and try to find a solution. If the community
doesn't help you, leave them to their misery; there are other languages after
all.

One thing you are right about is that languages are designed by "designers";
it has been like this for a long time.
Feb 13 2011
Kevin Bealer <kevindangerbealer mail.com> writes:
 our famous Reddit trolls, that is retard = uriel = eternium = lurker
In case anyone doubts gay's guess... for those who don't follow entertainment
trivia, Alan Smithee is a pseudonym used by directors disowning a film
(google it). So anyone using this name is actually effectively *claiming* to
be an imposter.

K
Feb 13 2011
Kevin Bealer <kevindangerbealer gmail.com> writes:
Sorry this was a completely unintentional error --- I meant to say "in case
anyone doubts Gary's post". Blame the lateness of the night and/or my
annoyingly lossy wireless keyboard.

Kevin
Feb 13 2011
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/13/11, Alan Smithee <email example.com> wrote:
 You can do the same in D using .di files.
 Except no one really does that because such an approach is insanely error
 prone. E.g. with classes, you need to copy entire definitions. Change any
 ordering, forget a field, change a type, and you're having undefined
 behavior.
Could you elaborate on that? Aren't .di files supposed to be auto-generated by the compiler, and not hand-written?
Feb 13 2011
Alan Smithee <email example.com> writes:
Andrej Mitrovic Wrote:

 Could you elaborate on that? Aren't .di files supposed to be auto-
 generated by the compiler, and not hand-written?

Yea, aren't they? How come no one uses that feature? Perhaps it's
intrinsically broken? *hint hint*

This NG assumes a curious stance. Sprouting claims and standing by them until
they're shown invalid, and then some. This is not the way to go for a new
language. It's YOUR job (not yours in particular, Andrej) to demonstrate the
feasibility of a certain feature, ONLY THEN can you claim how it may solve
any issues. And it needs to be more than a 10-line Hello World. Because you
can concatenate Hello World 1,000,000 times, D can work for multi million
line projects, right?

"But it takes time!" ... uh, yea, how's for 11 years? Or at least 4 which D
has been past the 1.0 version. How many people gave up on their med/large
projects and moved to "lesser" languages in this span?
Feb 13 2011
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/13/11, Alan Smithee <email example.com> wrote:
 Andrej Mitrovic Wrote:

 Could you elaborate on that? Aren't .di files supposed to be auto-
 generated by the compiler, and not hand-written?

 Yea, aren't they? How come no one uses that feature? Perhaps it's
 intrinsically broken? *hint hint*

 This NG assumes a curious stance. Sprouting claims and standing by them
 until they're shown invalid, and then some. This is not the way to go for a
 new language. It's YOUR job (not yours in particular, Andrej) to
 demonstrate the feasibility of a certain feature, ONLY THEN can you claim
 how it may solve any issues. And it needs to be more than a 10-line Hello
 World. Because you can concatenate Hello World 1,000,000 times, D can work
 for multi million line projects, right?

 "But it takes time!" ... uh, yea, how's for 11 years? Or at least 4 which D
 has been past the 1.0 version. How many people gave up on their med/large
 projects and moved to "lesser" languages in this span?
Heh. :) I'm not claiming that I know that everything works, I only know as
much as I've tried. When I've hit a bug in a multi-thousand line project I'll
report it to bugzilla.

So what's broken about generating import modules, is it already in bugzilla?
I've only heard about problems with templates so far, so I don't know. If
they're really broken we can push Walter & Co. to fix them.

I know of a technique, too. I've heard posting a random comment on a D reddit
thread about a D bug usually gets Andrei to talk with Walter in private ASAP
and fix it right away.
Feb 13 2011
retard <re tard.com.invalid> writes:
Sun, 13 Feb 2011 19:10:01 +0100, Andrej Mitrovic wrote:

 On 2/13/11, Alan Smithee <email example.com> wrote:
 Andrej Mitrovic Wrote:

 Could you elaborate on that? Aren't .di files supposed to be auto-
 generated by the compiler, and not hand-written?

 Yea, aren't they? How come no one uses that feature? Perhaps it's
 intrinsically broken? *hint hint*

 This NG assumes a curious stance. Sprouting claims and standing by them
 until they're shown invalid, and then some. This is not the way to go for a
 new language. It's YOUR job (not yours in particular, Andrej) to
 demonstrate the feasibility of a certain feature, ONLY THEN can you claim
 how it may solve any issues. And it needs to be more than a 10-line Hello
 World. Because you can concatenate Hello World 1,000,000 times, D can work
 for multi million line projects, right?

 "But it takes time!" ... uh, yea, how's for 11 years? Or at least 4 which D
 has been past the 1.0 version. How many people gave up on their med/large
 projects and moved to "lesser" languages in this span?
 Heh. :) I'm not claiming that I know that everything works, I only know as
 much as I've tried. When I've hit a bug in a multi-thousand line project
 I'll report it to bugzilla.

 So what's broken about generating import modules, is it already in
 bugzilla? I've only heard about problems with templates so far, so I don't
 know. If they're really broken we can push Walter & Co. to fix them.

 I know of a technique, too. I've heard posting a random comment on a D
 reddit thread about a D bug usually gets Andrei to talk with Walter in
 private ASAP and fix it right away.
I wish there were more news about D. This would bring us more reddit threads and thus more bug fixes.
Feb 13 2011
Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 I wish there were more news about D. This would bring us more reddit 
 threads and thus more bug fixes.
You can write articles about D and post them to reddit. What's holding you back?
Feb 13 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Alan Smithee" <email example.com> wrote in message 
news:ij967s$12rb$1 digitalmars.com...
 Andrej Mitrovic Wrote:

 Could you elaborate on that? Aren't .di files supposed to be auto-
 generated by the compiler, and not hand-written?

 Yea, aren't they? How come no one uses that feature? Perhaps it's
 intrinsically broken? *hint hint*
"Perhaps"? Well, is it or isn't it? Are we supposed to just assume that lack of use means it's actually broken and not just unpopular?
 This NG assumes a curious stance. Sprouting claims and standing by
 them until they're shown invalid, and then some.
Just like you're doing? If you're sure that .di files are broken, then *show us* how.
 "But it takes time!" ... uh, yea, how's for 11 years? Or at least 4
 which D has been past the 1.0 version. How many people gave up on
 their med/large projects and moved to "lesser" languages in this
 span?
Then contribute instead of just flaming.
Feb 13 2011
Alan Smithee <email example.com> writes:
Nick Sabalausky Wrote:

 "Perhaps"? Well, is it or isn't it? Are we supposed to just assume
 that lack of use means it's actually broken and not just unpopular?

Assume it's broken or demonstrate large projects written in D to show that it
CAN be unpopular because something else makes up for it.
 Just like you're doing?  If you're sure that .di files are broken,
 then *show us* how.

People did - go figure. A swing of Walter's magical wand saying "everything
is OK!" seems to suffice for fanboys. Until they disappear realizing the
miasma surrounding D. Like this bloke:
http://www.jfbillingsley.com/blog/?p=53
 Then contribute instead of just flaming.
I'm 12 years old and what is this? Your language is flawed, you don't see it - do not want.
Feb 13 2011
Andrew Wiley <debio264 gmail.com> writes:
On Sun, Feb 13, 2011 at 4:35 PM, Alan Smithee <email example.com> wrote:
 Nick Sabalausky Wrote:

 "Perhaps"? Well, is it or isn't it? Are we supposed to just assume
 that lack of use means it's actually broken and not just unpopular?

 Assume it's broken or demonstrate large projects written in D to show that
 it CAN be unpopular because something else makes up for it.
How about all of druntime (at least on my self-built Linux DMD).
 Then contribute instead of just flaming.
 I'm 12 years old and what is this? Your language is flawed, you don't see
 it - do not want.
Honestly, I agree with Nick here (which is somewhat rare, actually): You're in the D mailing lists and you don't want to use D. Why, then, are you here?
Feb 13 2011
Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 Could you elaborate on that? Aren't .di files supposed to be
 auto-generated by the compiler, and not hand-written?
You can do it either way. In Phobos, you can find examples of both. In no instance are you worse off than with C++ .h/.cpp files.
Feb 13 2011
Jacob Carlborg <doob me.com> writes:
On 2011-02-13 18:36, Andrej Mitrovic wrote:
 On 2/13/11, Alan Smithee<email example.com>  wrote:
 You can do the same in D using .di files.
 Except no one really does that because such an approach is insanely error
 prone. E.g. with classes, you need to copy entire definitions. Change any
 ordering, forget a field, change a type, and you're having undefined
 behavior.
Could you elaborate on that? Aren't .di files supposed to be auto-generated by the compiler, and not hand-written?
Yes, but they don't always work.

-- 
/Jacob Carlborg
Feb 14 2011
Walter Bright <newshound2 digitalmars.com> writes:
Jacob Carlborg wrote:
 On 2011-02-13 18:36, Andrej Mitrovic wrote:
 Could you elaborate on that? Aren't .di files supposed to be
 auto-generated by the compiler, and not hand-written?
Yes, but they don't always work.
Where they don't work, please file bug reports to bugzilla.
Feb 14 2011
Jacob Carlborg <doob me.com> writes:
On 2011-02-14 18:55, Walter Bright wrote:
 Jacob Carlborg wrote:
 On 2011-02-13 18:36, Andrej Mitrovic wrote:
 Could you elaborate on that? Aren't .di files supposed to be
 auto-generated by the compiler, and not hand-written?
Yes, but they don't always work.
Where they don't work, please file bug reports to bugzilla.
Done: http://d.puremagic.com/issues/show_bug.cgi?id=5577

-- 
/Jacob Carlborg
Feb 14 2011
Walter Bright <newshound2 digitalmars.com> writes:
Jacob Carlborg wrote:
 Done: http://d.puremagic.com/issues/show_bug.cgi?id=5577
Thank you.
Feb 14 2011
Walter Bright <newshound2 digitalmars.com> writes:
"golgeliyele" <usuldan gmail.com> wrote:
 - Option description text seems to be left aligned, yet there are 3 exceptions
It's easier to visually associate the command with the description if they
are fairly close. Some (i.e. 3) are too long for that. It's a compromise, but
I don't see a horrendous problem:

H:\cbx\mars>dmd
Digital Mars D Compiler v2.052
Copyright (c) 1999-2010 by Digital Mars written by Walter Bright
Documentation: http://www.digitalmars.com/d/2.0/index.html
Usage:
  dmd files.d ... { -switch }

  files.d        D source files
  @cmdfile       read arguments from cmdfile
  -c             do not link
  -cov           do code coverage analysis
  -D             generate documentation
  -Dddocdir      write documentation file to docdir directory
  -Dffilename    write documentation file to filename
  -d             allow deprecated features
  -debug         compile in debug code
  -debug=level   compile in debug code <= level
  -debug=ident   compile in debug code identified by ident
  -debuglib=name set symbolic debug library to name
  -defaultlib=name set default library to name
  -deps=filename write module dependencies to filename
  -g             add symbolic debug info
  -gc            add symbolic debug info, pretend to be C
  -H             generate 'header' file
  -Hddirectory   write 'header' file to directory
  -Hffilename    write 'header' file to filename
  --help         print help
  -Ipath         where to look for imports
  -ignore        ignore unsupported pragmas
  -inline        do function inlining
  -Jpath         where to look for string imports
  -Llinkerflag   pass linkerflag to link
  -lib           generate library rather than object files
  -man           open web browser on manual page
  -map           generate linker .map file
  -noboundscheck turns off array bounds checking for all functions
  -nofloat       do not emit reference to floating point
  -O             optimize
  -o-            do not write object file
  -odobjdir      write object & library files to directory objdir
  -offilename    name output file to filename
  -op            do not strip paths from source file
  -profile       profile runtime performance of generated code
  -quiet         suppress unnecessary messages
  -release       compile release version
  -run srcfile args... run resulting program, passing args
  -unittest      compile in unit tests
  -v             verbose
  -version=level compile in version code >= level
  -version=ident compile in version code identified by ident
  -vtls          list all variables going into thread local storage
  -w             enable warnings
  -wi            enable informational warnings
  -X             generate JSON file
  -Xffilename    write JSON file to filename
Feb 13 2011
Jacob Carlborg <doob me.com> writes:
On 2011-02-13 07:52, Nick Sabalausky wrote:
 "golgeliyele"<usuldan gmail.com>  wrote in message
 The error reporting has issues as well. I noticed that the compiler leaks
 low level errors to the user. If you forget to add a main to your
 app or misspell it, you get errors like:
 ====
 Undefined symbols:
   "__Dmain", referenced from:
       _D2rt6dmain24mainUiPPaZi7runMainMFZv in
 libphobos2.a(dmain2_513_1a5.o)
 ====
 I mean, wow, this should really be handled better.
That's not the compiler, that's the linker. I don't know what linker DMD uses on OSX, but on Windows it uses OPTLINK which is written in hand-optimized Asm so it's really hard to change. But Walter's been converting it to C (and maybe then to D once that's done) bit-by-bit (so to speak), so linker improvements are at least on the horizon. AIUI, on Linux, DMD just uses the GCC linker, and GCC unfortunately doesn't know anything about D name mangling, just C/C++. Might be true of OSX as well, I don't know though.
As you know, on Windows DMD uses OPTLINK and on all other platforms GCC is
used.

-- 
/Jacob Carlborg
Feb 13 2011
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:ij7v76$1q4t$1 digitalmars.com...
 ... (cut) ...

 That's not the compiler, that's the linker. I don't know what linker DMD 
 uses on OSX, but on Windows it uses OPTLINK which is written in 
 hand-optimized Asm so it's really hard to change. But Walter's been 
 converting it to C (and maybe then to D once that's done) bit-by-bit (so 
 to speak), so linker improvements are at least on the horizon.

 ...
Why C and not directly D?

It is really bad advertising for D to know that when its creator came around
to rewrite the linker, Walter decided to use C instead of D.

--
Paulo
Feb 13 2011
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Paulo Pinto" <pjmlp progtools.org> wrote in message 
news:ij8he9$2v0o$1 digitalmars.com...
 "Nick Sabalausky" <a a.a> wrote in message 
 news:ij7v76$1q4t$1 digitalmars.com...
 ... (cut) ...

 That's not the compiler, that's the linker. I don't know what linker DMD 
 uses on OSX, but on Windows it uses OPTLINK which is written in 
 hand-optimized Asm so it's really hard to change. But Walter's been 
 converting it to C (and maybe then to D once that's done) bit-by-bit (so 
 to speak), so linker improvements are at least on the horizon.

 ...
 Why C and not directly D?

 It is really bad advertising for D to know that when its creator came
 around to rewrite the linker, Walter decided to use C instead of D.
That's jumping to conclusions. C is little more than a high-level assembler.
That's why it's a reasonable first step up from Asm. Once it's in C and
cleaned up, that's the time for it to move on to D.
Feb 13 2011
bearophile <bearophileHUGS lycos.com> writes:
Nick Sabalausky:

 Paulo Pinto:
 Why C and not directly D?
 That's jumping to conclusions. C is little more than a high-level
 assembler. That's why it's a reasonable first step up from Asm. Once it's
 in C and cleaned up, that's the time for it to move on to D.
Paulo Pinto has asked a fair question. The answer is that D is not a perfect
system language. If you need to write code for an Arduino (a small CPU), or
for 16 bit CPUs in general, if you want to convert something from asm like
that linker (Walter has said that later it's easy to convert the C linker to
D), and in several other situations, the C language is better.

Programs compiled with a C compiler are generally smaller than the ones
compiled with DMD; there are simple ways to produce binaries of 4000 bytes
with C. This is not a failure of D: D is designed for larger 32 bit CPUs, for
systems that have heap memory. (In D there are ways to avoid heap allocations
and to remove the GC, but doing it in C is more natural. I don't know how to
produce binaries with DMD that don't use the GC and don't include it; this
problem is missing in C.)

Bye,
bearophile
Feb 13 2011
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
Hi,

I am sorry, but I don't believe it.

Many other systems programming languages that attempted to displace C and
C++ have their toolchains built in the languages themselves, after the
compilers were bootstrapped, as anyone with enough compiler knowledge will
surely tell you.

And D's linker must first be written in C, to make it easy to rewrite in D?!

A linker is not science fiction, it is just a program that binds object
files and libraries together to produce an executable. Any programming
language able to manipulate files and binary data can be used to create a
linker.

--
Paulo


"Nick Sabalausky" <a a.a> wrote in message 
news:ij8iau$30jr$1 digitalmars.com...
 "Paulo Pinto" <pjmlp progtools.org> wrote in message 
 news:ij8he9$2v0o$1 digitalmars.com...
 "Nick Sabalausky" <a a.a> wrote in message 
 news:ij7v76$1q4t$1 digitalmars.com...
 ... (cut) ...

 That's not the compiler, that's the linker. I don't know what linker DMD 
 uses on OSX, but on Windows it uses OPTLINK which is written in 
 hand-optimized Asm so it's really hard to change. But Walter's been 
 converting it to C (and maybe then to D once that's done) bit-by-bit (so 
 to speak), so linker improvements are at least on the horizon.

 ...
 Why C and not directly D?

 It is really bad advertising for D to know that when its creator came
 around to rewrite the linker, Walter decided to use C instead of D.
 That's jumping to conclusions. C is little more than a high-level
 assembler. That's why it's a reasonable first step up from Asm. Once it's
 in C and cleaned up, that's the time for it to move on to D.
Feb 13 2011
Robert Clipsham <robert octarineparrot.com> writes:
On 13/02/11 13:36, Paulo Pinto wrote:
 Hi,

 I am sorry, but I don't believe it.

 Many other systems programming languages that attempted to displace C and
 C++ have their toolchains built in the languages themselves, after the
 compilers were bootstrapped, as anyone with enough compiler knowledge will
 surely tell you.

 And D's linker must first be written in C, to make it easy to rewrite in D?!

 A linker is not science fiction, it is just a program that binds object
 files and libraries together to produce an executable. Any programming
 language able to manipulate files and binary data can be used to create a
 linker.

 --
 Paulo
I believe the issue is that OPTLINK is written in highly optimised
hand-written assembly, and as such a direct port to D is impossible. As the
linker is such a delicate tool (even a minor change can have major
repercussions), the port needs to be as direct as possible - sure, it could
be ported directly to D, but it will more than likely break in the process.

See also http://www.drdobbs.com/blog/archives/2009/11/assembler_to_c.html

-- 
Robert
http://octarineparrot.com/
Feb 13 2011
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I guess if you're not writing new templates in your code then
incremental compilation is possible?

I did collect some information on using DMD here (it's a bit
Windows-specific but my guess is it works similarly on OSX):
http://prowiki.org/wiki4d/wiki.cgi?D__Tutorial/CompilingLinkingD

If anyone has additions or found erroneous statements I'd invite you
to add or fix those, please. It's a wiki, after all. :)
Feb 13 2011
Gary Whatmore <no spam.sp> writes:
Andrej Mitrovic Wrote:

 I guess if you're not writing new templates in your code then
 incremental compilation is possible?
Exactly. What I did is a simple wrapper module for Phobos with
preinstantiated non-templated functions for typical use cases. For example
there are a few wrappers for the templated collection functions. It's easy to
grep for '!' in your code and rewrite it using these wrappers. Problem
solved.

- G.W.
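To make the idea concrete, a minimal sketch of such a wrapper module (the
module and function names are made up):
====
module wrappers;

import std.algorithm;

// Non-templated wrappers: each template is instantiated exactly once, here,
// so user code calling these never triggers a new instantiation.
void sortInts(int[] a) { sort(a); }
int[] findInt(int[] haystack, int needle) { return find(haystack, needle); }
====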
Feb 13 2011
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/13/11, Gary Whatmore <no spam.sp> wrote:
 Andrej Mitrovic Wrote:

 I guess if you're not writing new templates in your code then
 incremental compilation is possible?
 Exactly. What I did is a simple wrapper module for Phobos with
 preinstantiated non-templated functions for typical use cases. For example
 there are a few wrappers for the templated collection functions. It's easy
 to grep for '!' in your code and rewrite it using these wrappers. Problem
 solved.

 - G.W.
Or one-up DMD and use a templated function in your user code that
automatically finds a pre-instantiated template by using the
import("wrappermodule.d") trick. I wonder if it would be possible to wrap an
entire module as a string (with the q{} trick) to a template that does
exactly that...
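For reference, the import() trick mentioned here, in isolation (a sketch;
wrappermodule.d is hypothetical and the compiler needs -Jpath to locate it):
====
// pastes the full source of wrappermodule.d into the current module;
// import() reads the file at compile time via the -J search path
mixin(import("wrappermodule.d"));
====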
Feb 13 2011
golgeliyele <usuldan yahoo.com> writes:
I wonder if we can get something positive out of this discussion. I would like
to enumerate a few possibilities for the several
things we discussed:

1. Difficult to understand linker errors due to missing main():
  Fixing this would be useful for newbies. If there is not already a defect on
this, I suggest we file a defect and it gets fixed
sometime. I am assuming that this can be caught before going to the linker.
Does dmd support creating an executable
whose main() comes from a library? If not, the compiler would know if there is
a main() or not before doing the linking. I
can see how this is a problem with incremental builds though. However, with the
compilation model that is being advocated
by the documentation (i.e, feed dmd all the .d files at once), the compiler
should know if main() is there or not. Yet another
reason to clarify the compilation model, IMO.

2. dmd compiler's command line options:
  This is mostly an esthetic issue. However, it is like the entrance to your
house. People who are not sure about entering care about what it looks like
from the outside. If Walter is willing, I can
work on a command line options interface proposal
that would keep backwards compatibility with the existing options. This would
enable a staged transition. Would there be
an interest in this?

3. Incremental compilation (or lack of it)
  First of all there is a documentation problem here. There needs to be clarity
about whether incremental compilation is
possible or not. I won't count approaches that work partially as anything more
than a stopgap solution. IMO, it is acceptable
if we can state that dmd compilations are blazingly fast, and as a result,
there is no reason to do incremental compilation.
The problem is that I get mixed signals on this point:
  - If this claim is true, then I think it should be asserted strongly and
should be backed up by numbers (100K library
compilation takes X seconds, etc.)
  - If this claim is false, then we should look at enhancing the tooling with
things like xfBuild. Perhaps that kind of
functionality can be built into the compiler itself. Whatever is needed, the
following needs to be clearly documented: What is
the best way to organize the build of large projects?

It is a mistake to consider the language without the tooling that goes along
with it. I think there is still time to recover from
this error. Large projects are often built as a series of libraries. When the
shared library problem is to be attacked, I think
the tooling needs to be part of that design. Solving the tooling problem will
raise D to one level up and I hope the
community will step up to the challenge.

One last thing: Personally, I don't like this business with .di files. They are
optional, but then they are needed for certain use
cases. I believe the information that is contained in .di files should be
packed alongside the shared library and I should be
able to build/link against a single library package. I haven't used Java for a
long time, but I recall you get a .jar file and
javadoc documentation when you are handed a library. I like that.

p.s.: Does anyone know what the best way to use this newsgroup is? Is there a
better web interface? If not, is there a free newsgroup reader (on a Mac)
that is easy to use?
Feb 13 2011
charlie <charl7n yahoo.com> writes:
golgeliyele Wrote:

 It is a mistake to consider the language without the tooling that goes along
with it. I think there is still time to recover from
 this error. Large projects are often built as a series of libraries. When the
shared library problem is to be attacked, I think
 the tooling needs to be part of that design. Solving the tooling problem will
raise D to one level up and I hope the
 community will step up to the challenge.
So far D 1.0 development has forced me to study the compiler and library
internals much more than I could ever imagine. I had 10 years of Pascal,
Delphi, and Java programming under my belt, but never really knew what the
difference between a compiler frontend and a compiler is. I knew the linker
though, but couldn't imagine there could be so many incompatibilities. For
example the Delphi community has a large set of commonly used libraries for
the casual user. I also ended up learning a great deal of regexps because my
editor didn't support D, and I don't feel awkward reading dmd internals such
as cod2.c or mtype.c now. This was all necessary to use D in a simple GUI
project and to sidestep common bugs.

I really like D. The elegance of the language can be blamed for the most
part. In retrospect, I ended up running into more bugs than ever before and
spent more time than with any other SDK. However it was so fun that it really
wasn't a problem. Basically if you're using D at work, I recommend studying
the libraries and finding workarounds for bugs at home. This way you won't be
spending too much time fighting the tool chain in a professional context and
you get extra points for the voluntary open source hobby. It also helps our
community.

This newsgroup's a valuable source of information. Read about tuning of the
JVM, race cars, rocket science, CRT monitors, and DVCS here. We don't always
have to discuss grave business matters.
Feb 13 2011
parent "Paulo Pinto" <pjmlp progtools.org> writes:
Hi,

this is what I miss in D and Go.

Most developers that only used C and C++ aren't aware how easy it is to 
compile applications in more
modern languages.

It is funny that both D and Go advertise their compilation speed, when I was 
used to fast compilation since
the MS-DOS days with Turbo Pascal.

JVM and .Net based languages have editors that do compile on save.


Advocates of using them as the main development language always cite the
productivity gain in the compile-test-debug cycle.

I was a bit disappointed to find out that both Go and D still propose a 
compiler/linker model.

--
Paulo

"charlie" <charl7n yahoo.com> wrote in message 
news:ij95ge$119o$1 digitalmars.com...
 golgeliyele Wrote:

 It is a mistake to consider the language without the tooling that goes 
 along with it. I think there is still time to recover from
 this error. Large projects are often built as a series of libraries. When 
 the shared library problem is to be attacked, I think
 the tooling needs to be part of that design. Solving the tooling problem 
 will raise D to one level up and I hope the
 community will step up to the challenge.
 So far D 1.0 development has forced me to study the compiler and library
 internals much more than I could ever imagine. I had 10 years of Pascal,
 Delphi, and Java programming under my belt, but never really knew what the
 difference between a compiler frontend and a compiler is. I knew the linker
 though, but couldn't imagine there could be so many incompatibilities. For
 example the Delphi community has a large set of commonly used libraries for
 the casual user. I also ended up learning a great deal of regexps because
 my editor didn't support D, and I don't feel awkward reading dmd internals
 such as cod2.c or mtype.c now. This was all necessary to use D in a simple
 GUI project and to sidestep common bugs.

 I really like D. The elegance of the language can be blamed for the most
 part. In retrospect, I ended up running into more bugs than ever before and
 spent more time than with any other SDK. However it was so fun that it
 really wasn't a problem. Basically if you're using D at work, I recommend
 studying the libraries and finding workarounds for bugs at home. This way
 you won't be spending too much time fighting the tool chain in a
 professional context and you get extra points for the voluntary open source
 hobby. It also helps our community.

 This newsgroup's a valuable source of information. Read about tuning of the
 JVM, race cars, rocket science, CRT monitors, and DVCS here. We don't
 always have to discuss grave business matters.
Feb 13 2011
Walter Bright <newshound2 digitalmars.com> writes:
golgeliyele wrote:
 1. Difficult to understand linker errors due to missing main():
   Fixing this would be useful for newbies. If there is not already a defect on
this, I suggest we file a defect and it gets fixed
 sometime. I am assuming that this can be caught before going to the linker.
Does dmd support creating an executable
 whose main() comes from a library? If not, the compiler would know if there is
a main() or not before doing the linking. I
 can see how this is a problem with incremental builds though. However, with
the compilation model that is being advocated
 by the documentation (i.e, feed dmd all the .d files at once), the compiler
should know if main() is there or not. Yet another
 reason to clarify the compilation model, IMO.
The problem is the main() can come from a library, or some other .obj file handed to the compiler that the compiler doesn't look inside. It's a very flexible way to build things, and trying to impose more order on that will surely wind up with complaints from some developers.
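A sketch of that flexibility, with hypothetical file names (main() lives in a
separately compiled object the compiler never parses):
====
dmd -c runner.d      # runner.d defines main(); produces runner.o
dmd app.d runner.o   # only app.d is parsed; the linker finds main in runner.o
====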
 2. dmd compiler's command line options:
   This is mostly an esthetic issue. However, it is like the entrance to your
house. People who are not sure about entering in
 care about what it looks like from the outside. If Walter is willing, I can
work on a command line options interface proposal
 that would keep backwards compatibility with the existing options. This would
enable a staged transition. Would there be
 an interest in this?
A proposal would be nice. But please keep in mind that people often view their build systems / makefiles as black boxes, and breaking them with incompatible changes can be extremely annoying.
 3. Incremental compilation (or lack of it)
   First of all there is a documentation problem here. There needs to be
clarity about whether incremental compilation is
 possible or not. I won't count approaches that work partially as anything more
than a stopgap solution. IMO, it is acceptable
 if we can state that dmd compilations are blazingly fast, and as a result,
there is no reason to do incremental compilation.
 The problem is that I get mixed signals on this point:
   - If this claim is true, then I think it should be asserted strongly and
should be backed up by numbers (100K library
 compilation takes X seconds, etc.)
I stopped bothering posting numbers because nobody believed them. I was even once accused of "sabotaging" my own C++ compiler to make dmd look better. dmc++ is, by far, the fastest C++ compiler available. The people who use it know that and like it a lot. The people who don't use it just assume I'm lying about the speed, and I get tired of being accused of such.
   - If this claim is false, then we should look at enhancing the tooling with
things like xfBuild. Perhaps that kind of
 functionality can be built into the compiler itself. Whatever is needed, the
following needs to be clearly documented: What is
 the best way to organize the build of large projects?
 
 It is a mistake to consider the language without the tooling that goes along
with it. I think there is still time to recover from
 this error. Large projects are often build as a series of libraries. When the
shared library problem is to be attacked, I think
 the tooling needs to be part of that design. Solving the tooling problem will
raise D to one level up and I hope the
 community will step up to the challenge.
 
 One last thing: Personally, I don't like this business with .di files. They
are optional, but then they are needed for certain use
 cases. I believe the information that is contained in .di files should be
packed alongside the shared library and I should be
 able to build/link against a single library package. I haven't used Java for a
long time, but I recall you get a .jar file and
 javadoc documentation when you are handed a library. I like that.
In the worst case, you are no worse off with .di files than with C++ .h files.
Feb 13 2011
gölgeliyele <usuldan gmail.com> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 
 golgeliyele wrote:
 1. Difficult to understand linker errors due to missing main():
 ...
The problem is the main() can come from a library, or some other .obj file handed to the compiler that the compiler doesn't look inside. It's a very flexible way to build things, and trying to impose more order on that will surely wind up with complaints from some developers.
I would like to question this. Is there a D project where the technique of
putting main() into a library has proved useful? I used this in a C++ project
of mine, but I have regretted that already.

I can imagine having a compiler option to avoid the pre-link check for
main(), but I would suggest not even having that. Of course unless we get to
know what those complaints you mentioned are :)
 2. dmd compiler's command line options:
   This is mostly an esthetic issue. However, it is like the entrance to your 
house. People who are not sure about entering in
 care about what it looks like from the outside. If Walter is willing, I can 
work on a command line options interface proposal
 that would keep backwards compatibility with the existing options. This would 
enable a staged transition. Would there be
 an interest in this?
A proposal would be nice. But please keep in mind that people often view their build systems / makefiles as black boxes, and breaking them with incompatible changes can be extremely annoying.
Thanks for being open. I'll work on this.
 3. Incremental compilation (or lack of it)
 ...
I stopped bothering posting numbers because nobody believed them. I was even once accused of "sabotaging" my own C++ compiler to make dmd look better. dmc++ is, by far, the fastest C++ compiler available. The people who use it know that and like it a lot. The people who don't use it just assume I'm lying about the speed, and I get tired of being accused of such.
I think what we need here is numbers from a project that everyone has access to. What is the largest D project right now? Can we get numbers on that? How much time does it take to compile that project after a change (assuming we are feeding all .d files at once)?
   - If this claim is false, then we should look at enhancing the tooling with 
things like xfBuild. Perhaps that kind of
 ...
In the worst case, you are no worse off with .di files than with C++ .h files.
:) .h files are something I want to forget forever.
Feb 13 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
gölgeliyele wrote:
 Walter Bright <newshound2 digitalmars.com> wrote:
  
 golgeliyele wrote:
 1. Difficult to understand linker errors due to missing main():
 ...
The problem is the main() can come from a library, or some other .obj file handed to the compiler that the compiler doesn't look inside. It's a very flexible way to build things, and trying to impose more order on that will surely wind up with complaints from some developers.
I would like to question this. Is there a D project where the technique of putting main() into a library has proved useful? I used this in a C++ project of mine, but I have regretted that already. I can imagine having a compiler option to avoid the pre-link check for main(), but I would suggest not even having that. Of course unless we get to know what those complaints you mentioned are :)
I find that people have all kinds of ways they wish to use a compiler. Is it worth restricting all that just for the case of one error message?

I also have tried to avoid adding endless command line switches as the solution to every variation people want. These cause:

1. People just check out when they see pages and pages of wacky switches. Has anyone ever actually read all of man gcc?

2. Different compiler switches can have unexpected interactions and complications when used together. This is impossible to test for, as the combinations increase as the factorial of the number of switches.

3. People tend to copy/paste makefiles from one project to the next. They copy/paste the switches, too, usually with no idea what those switches do. I.e. they treat those switches as some sort of sacred incantation that they dare not change.
Feb 13 2011
parent spir <denis.spir gmail.com> writes:
On 02/13/2011 08:30 PM, Walter Bright wrote:
 1. people just check out when they see pages and pages of wacky switches. Has
 anyone ever actually read all of man gcc?
+ 12_000 /lines/ in my version

Denis
--
_________________
vita es estrany
spir.wikidot.com
Feb 13 2011
prev sibling parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
gölgeliyele wrote:
...
 
 I think what we need here is numbers from a project that everyone has
 access to. What is the largest D project right now? Can we get numbers on
 that? How much time does it take to compile that project after a change
 (assuming we are feeding all .d files at once)?
Well you can take phobos, I believe Andrei used it once to compare against Go. With std.datetime it is now also much bigger :) Tango is another large project, I remember someone posted a compilation speed of a couple of seconds (Tango is huge, perhaps 300KLoC). But projects and settings may vary a lot. For sure, optlink is one hell of a speed monster and you might not get similar speeds with ld on a large project.
Feb 13 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-02-13 20:49, Lutger Blijdestijn wrote:
 gölgeliyele wrote:
 ...
 I think what we need here is numbers from a project that everyone has
 access to. What is the largest D project right now? Can we get numbers on
 that? How much time does it take to compile that project after a change
 (assuming we are feeding all .d files at once)?
Well you can take phobos, I believe Andrei used it once to compare against Go. With std.datetime it is now also much bigger :) Tango is another large project, I remember someone posted a compilation speed of a couple of seconds (Tango is huge, perhaps 300KLoC). But projects and settings may vary a lot. For sure, optlink is one hell of a speed monster and you might not get similar speeds with ld on a large project.
It takes around 12.5 seconds for my machine to build Tango using the bob executable.

2.4 GHz Intel Core 2 Duo
2 GB RAM
Mac OS X 10.6.6

--
/Jacob Carlborg
Feb 14 2011
prev sibling parent gölgeliyele <usuldan gmail.com> writes:
On 2/13/11 2:05 PM, Walter Bright wrote:
 golgeliyele wrote:
 2. dmd compiler's command line options:
 This is mostly an esthetic issue. However, it is like the entrance to
 your house. People who are not sure about entering in
 care about what it looks like from the outside. If Walter is willing,
 I can work on a command line options interface proposal
 that would keep backwards compatibility with the existing options.
 This would enable a staged transition. Would there be
 an interest in this?
A proposal would be nice. But please keep in mind that people often view their build systems / makefiles as black boxes, and breaking them with incompatible changes can be extremely annoying.
Here is one proposal:

Digital Mars D Compiler v2.051
Copyright (c) 1999-2010 by Digital Mars
written by Walter Bright
Documentation: http://www.digitalmars.com/d/2.0/index.html

Usage: dmd [options] <files>

  <files>                      D source files

Options:
  --commands <file>            read arguments from a command file
  -c, --compile                only compile, do not link
  --coverage                   do code coverage analysis
  -D, --ddoc                   generate documentation
  --ddoc-dir <dir>             write documentation file to a directory
  --ddoc-file <file>           write documentation file to a file
  -d, --deprecated             allow deprecated features
  --debug                      compile in debug code
  --debug-level <level>        compile in debug code <= level
  --debug-ident <ident>        compile in debug code identified by ident
  --debug-lib <name>           set symbolic debug library to name
  --default-lib <name>         set default library to name
  --dependencies <file>        write module dependencies to a file
  --dylib                      generate dylib
  -g, --sym-debug              add symbolic debug info
  --sym-debug-c                add symbolic debug info, pretend to be C
  -H, --header                 generate 'header' file
  --header-dir <dir>           write 'header' file to a directory
  --header-file <file>         write 'header' file to a file
  --help                       print this help
  -I, --imports <path>         where to look for imports
  --ignore-bad-pragmas         ignore unsupported pragmas
  --inline                     do function inlining
  -J, --string-imports <path>  where to look for string imports
  -L, --linker-flags <flags>   pass flags to the linker
  --lib                        generate library rather than object files
  --man                        open web browser on manual page
  --linker-map                 generate linker .map file
  --no-bounds-check            turns off array bounds checking
  --no-float                   do not emit reference to floating point
  -O, --optimize               optimize
  -n, --no-object-file         do not write object file
  --object-dir <dir>           write object, library files to a directory
  --output <file>              name output file to a file name
  --no-path-strip              do not strip paths from source file
  --profile                    profile runtime performance of code
  --quiet                      suppress unnecessary messages
  --release                    compile release version
  --run <prog> <args...>       run resulting program file, passing args
  --unittest                   compile in unit tests
  -v, --verbose                verbose
  --version <level>            compile in version >= level
  --version <ident>            compile in version identified by ident
  --tls-vars                   list all variables going into thread local storage
  -w, --warnings               enable warnings
  -W, --info-warnings          enable informational warnings
  -X, --json                   generate JSON file
  --json-file <file>           write JSON file to a given file
Feb 13 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-02-13 18:19, golgeliyele wrote:
 p.s.: Does anyone know what the best way to use this newsgroup is? Is there a
better web interface? If not, is there a free newsgroup reader (on a Mac) that
is easy to use?
I'm using Thunderbird.

--
/Jacob Carlborg
Feb 14 2011
prev sibling parent reply Gary Whatmore <no spam.sp> writes:
Paulo Pinto Wrote:

 Hi,
 
 I am sorry, but I don't believe it.
 
 Many other systems programming languages that attempted to displace C and C++
 have the toolchain built in the language itself, after the compilers were
 bootstrapped, as anyone with enough compiler knowledge will surely tell you.
 
 And D's linker must first be written in C, to make it easy to rewrite in D?!
 
 A linker is not science fiction, it is just a program that binds object 
 files and libraries together
 to produce an executable. Any programming language able to manipulate files 
 and binary
 data, can be used to create a linker.
If you want, you can prove this by starting a competitive linker project. Probably both Digital Mars and Microsoft have done everything they can to make competition as hard as possible by leaving the object file format undocumented and filling the implementation with weird corner cases to make reverse engineering extremely hard. Microsoft even makes minor changes in every version to break compatibility.

Even if a 10-man team uses some open source linker as a base and writes the linker in D, you can't beat Walter. The productivity of hardcore domain experts is nearly two orders of magnitude better than that of novices. The toolchain issues will be history by the end of this year.

 - G.W.
Feb 13 2011
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
Hi,

you still don't convince me.

So what language features does C have that are missing from D and prevent a
linker from being written in D?

The issue is not if I can beat Walter, the issue is that we have a language 
which on its official
home page states lots of reasons for using it instead of C and C++, and its 
creator decides
to use C when porting the linker to a high level language.

So doesn't Walter believe in his own language?

As for your challenge, actually I am looking for a job currently, how much 
can I ask for?

--
Paulo

"Gary Whatmore" <no spam.sp> wrote in message 
news:ij8spi$ic0$1 digitalmars.com...
 Paulo Pinto Wrote:

 Hi,

 I am sorry, but I don't belive it.

 Many other systems programming languages that atempted to displace C and
 C++, have
 the toolchain built in its languages, after the compilers were 
 bootstrapped,
 as anyone
 with enough compiler knowledge will surely tell you.

 And D's linker must first be written in C, to make it easy to rewrite in 
 D?!

 A linker is not science fiction, it is just a program that binds object
 files and libraries together
 to produce an executable. Any programming language able to manipulate 
 files
 and binary
 data, can be used to create a linker.
If you want, you can prove this by starting a competive linker project. Probably both Digitalmars and Microsoft have done everything they can to make competition as hard as possible by leaving the object file format undocumented and filled the implementation with weird corner cases to make reverse engineering extremely hard. Microsoft even does minor changes in every version to break compatibility. Even if a 10 man team uses some open source linker as a base and writes the linker in D, you can't beat Walter. The productivity of hardcore domain experts is nearly two orders of magnitude better than that of novices. The toolchain issues will be history by the end of this year. - G.W.
Feb 13 2011
parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Paulo Pinto wrote:

 Hi,
 
 still you don't convice me.
 
 So what language features has C that are missing from D and prevent a
 linker to be written in
 D?
 
 The issue is not if I can beat Walter, the issue is that we have a
 language which on its official
 home page states lots of reasons for using it instead of C and C++, and
 its creator decides
 to use C when porting the linker to an high level language.
 
 So doesn't Walter belive in its own language?
From Walter himself:

"Why use C instead of the D programming language? Certainly, D is usable for such low level coding and, when programming at this level, there isn't a practical difference between the two. The problem is that the system to build Optlink uses some old tools that only work with an old version of the object file format. The D compiler uses newer obj format features, the C compiler still uses the old ones. It was just easier to use the C compiler rather than modify the D one. Once the source is all in C, it will be trivial to shift it over to D and the modern tools."

http://www.drdobbs.com/blog/archives/2009/11/assembler_to_c.html
Feb 13 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Lutger Blijdestijn wrote:
 "Why use C instead of the D programming language? Certainly, D is usable for 
 such low level coding and, when programming at this level, there isn't a 
 practical difference between the two. The problem is that the system to 
 build Optlink uses some old tools that only work with an old version of the 
 object file format. The D compiler uses newer obj format features, the C 
 compiler still uses the old ones. It was just easier to use the C compiler 
 rather than modify the D one. Once the source is all in C, it will be 
 trivial to shift it over to D and the modern tools." 
 
 http://www.drdobbs.com/blog/archives/2009/11/assembler_to_c.html
Yeah, I forgot to mention that Optlink relies on some tools that work only with an obsolete version of the omf.
Feb 13 2011
prev sibling next sibling parent reply Gary Whatmore <no spam.sp> writes:
Paulo Pinto Wrote:

 "Nick Sabalausky" <a a.a> wrote in message 
 news:ij7v76$1q4t$1 digitalmars.com...
 ... (cutted) ...

 That's not the compiler, that's the linker. I don't know what linker DMD 
 uses on OSX, but on Windows it uses OPTLINK which is written in 
 hand-optimized Asm so it's really hard to change. But Walter's been 
 converting it to C (and maybe then to D once that's done) bit-by-bit (so 
 to speak), so linker improvements are at least on the horizon.

 ...
Why C and not directly D? It is really bad advertising for D to know that when its creator came around to rewrite the linker, Walter decided to use C instead of D.
I'm guessing that Walter feels more familiar and comfortable developing C/C++ instead of D. He's the creator of D, but has written very small amounts of D and probably cannot write idiomatic D very fluently. Another issue is the immature toolchain.

This might sound like blasphemy, but I believe the skills and knowledge for developing large scale applications in language XYZ cannot be extrapolated from small code snippets or from experience with projects in other languages. You just need to eat your own dogfood and get your feet wet by doing.

People like Tango's 'kris' and this 'h3r3tic' are the real world D experts. Sadly they've all left D. We need a new generation of experts, because these old guys ranting about every issue are more harmful than good to the community.
Feb 13 2011
next sibling parent reply spir <denis.spir gmail.com> writes:
On 02/13/2011 04:07 PM, Gary Whatmore wrote:
 This might sound like blasphemy, but I believe the skills and knowledge for
developing large scale applications in language XYZ cannot be extrapolated from
small code snippets or from experience with projects in other languages. You
just need to eat your own dogfood and get your feet wet by doing.
Precisely. A common route for the development of a static and compiled language (even more so for one intended as a system programming language) is to "eat its own dogfood" by becoming its own compiler. From what I've heard, this is a great boost for the language's evolution, precisely because from then on the creators use their language every day --instead of becoming more & more expert in another one.

Also, I really miss a D-for-D lexical, syntactic, and semantic analyser that would produce D data structures. This would open the door to hordes of projects, including tool chain elements, meta-studies on D, improvements of these basic tools (efficiency, semantic analysis), development of back-ends (including studies on compiler optimisation specific to D's semantics), etc. Even more important, the whole community, which is imo rather high-level, would be able to take part in such challenges, in their favorite language.

Isn't it ironic that D depends so much on C++, while many programmers come to D fed up with that very language?

Denis
--
_________________
vita es estrany
spir.wikidot.com
Feb 13 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"spir" <denis.spir gmail.com> wrote in message 
news:mailman.1602.1297626622.4748.digitalmars-d puremagic.com...
 Also, I really miss a D for D lexical- syntactic- semantic- analyser that 
 would produce D data structures. This would open the door hoards of 
 projects, including tool chain elements, meta-studies on D, improvements 
 of these basic tools (efficiency, semantis analysis), decelopment of 
 back-ends (including studies on compiler optimisation specific to D's 
 semantics), etc.
 Even more important, the whole cummunity, which is imo rather high-level, 
 would be able to take part to such challenges, in their favorite language. 
 Isn't is ironic D depends so much on C++, while many programmers come to D 
 fed up with this language, presicely?
DDMD: http://www.dsource.org/projects/ddmd
Feb 13 2011
parent reply spir <denis.spir gmail.com> writes:
On 02/13/2011 10:35 PM, Nick Sabalausky wrote:
 "spir"<denis.spir gmail.com>  wrote in message
 news:mailman.1602.1297626622.4748.digitalmars-d puremagic.com...
 Also, I really miss a D for D lexical- syntactic- semantic- analyser that
 would produce D data structures. This would open the door hoards of
 projects, including tool chain elements, meta-studies on D, improvements
 of these basic tools (efficiency, semantis analysis), decelopment of
 back-ends (including studies on compiler optimisation specific to D's
 semantics), etc.
 Even more important, the whole cummunity, which is imo rather high-level,
 would be able to take part to such challenges, in their favorite language.
 Isn't is ironic D depends so much on C++, while many programmers come to D
 fed up with this language, presicely?
DDMD: http://www.dsource.org/projects/ddmd
Definitely a good thing, and more! :-) Thank you for the pointer, Nick. I will skim across the project as soon as I have some hours free, and see if --with my very limited competence in the domain-- I can contribute in any way.

I have an idea for a side-feature if I can understand the produced AST: generate Types as D data structures on request (--meta), and write them into a plain D module to be imported on need. A major aspect, I guess, of the 'meta' namespace discussed on this list.

Denis
--
_________________
vita es estrany
spir.wikidot.com
Feb 13 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-02-13 23:38, spir wrote:
 On 02/13/2011 10:35 PM, Nick Sabalausky wrote:
 "spir"<denis.spir gmail.com> wrote in message
 news:mailman.1602.1297626622.4748.digitalmars-d puremagic.com...
 Also, I really miss a D for D lexical- syntactic- semantic- analyser
 that
 would produce D data structures. This would open the door hoards of
 projects, including tool chain elements, meta-studies on D, improvements
 of these basic tools (efficiency, semantis analysis), decelopment of
 back-ends (including studies on compiler optimisation specific to D's
 semantics), etc.
 Even more important, the whole cummunity, which is imo rather
 high-level,
 would be able to take part to such challenges, in their favorite
 language.
 Isn't is ironic D depends so much on C++, while many programmers come
 to D
 fed up with this language, presicely?
DDMD: http://www.dsource.org/projects/ddmd
Definitely a good thing, and more! :-) Thank your for the pointer, Nick. I will skim across the project as soon as I have some hours free. And see if --with my very limited competence in the domain-- I can contribute in any way. I have an idea for a side-feature if I can understand the produced AST: generate Types as D data structures on request (--meta), write them into a plain D module to be imported on need. A major aspect, I guess, of the 'meta' namespace discussed on this list. Denis
Currently it doesn't compile on Posix, and never has as far as I know. That's one thing you can help with if you want to. Don't know the status on Windows.

--
/Jacob Carlborg
Feb 14 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:ijbtpv$61a$1 digitalmars.com...
 On 2011-02-13 23:38, spir wrote:
 On 02/13/2011 10:35 PM, Nick Sabalausky wrote:
 "spir"<denis.spir gmail.com> wrote in message
 news:mailman.1602.1297626622.4748.digitalmars-d puremagic.com...
 Also, I really miss a D for D lexical- syntactic- semantic- analyser
 that
 would produce D data structures. This would open the door hoards of
 projects, including tool chain elements, meta-studies on D, 
 improvements
 of these basic tools (efficiency, semantis analysis), decelopment of
 back-ends (including studies on compiler optimisation specific to D's
 semantics), etc.
 Even more important, the whole cummunity, which is imo rather
 high-level,
 would be able to take part to such challenges, in their favorite
 language.
 Isn't is ironic D depends so much on C++, while many programmers come
 to D
 fed up with this language, presicely?
DDMD: http://www.dsource.org/projects/ddmd
Definitely a good thing, and more! :-) Thank your for the pointer, Nick. I will skim across the project as soon as I have some hours free. And see if --with my very limited competence in the domain-- I can contribute in any way. I have an idea for a side-feature if I can understand the produced AST: generate Types as D data structures on request (--meta), write them into a plain D module to be imported on need. A major aspect, I guess, of the 'meta' namespace discussed on this list. Denis
Currently it doesn't compile on Posix, and never has as far as I know. That's one thing you can help with if you want to. Don't know the status on Windows
It compiles fine on Windows. Some of the last few commits were related to compiling on Linux and OSX. Does the latest version still not work?
Feb 14 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-02-14 21:43, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:ijbtpv$61a$1 digitalmars.com...
 On 2011-02-13 23:38, spir wrote:
 On 02/13/2011 10:35 PM, Nick Sabalausky wrote:
 "spir"<denis.spir gmail.com>  wrote in message
 news:mailman.1602.1297626622.4748.digitalmars-d puremagic.com...
 Also, I really miss a D for D lexical- syntactic- semantic- analyser
 that
 would produce D data structures. This would open the door hoards of
 projects, including tool chain elements, meta-studies on D,
 improvements
 of these basic tools (efficiency, semantis analysis), decelopment of
 back-ends (including studies on compiler optimisation specific to D's
 semantics), etc.
 Even more important, the whole cummunity, which is imo rather
 high-level,
 would be able to take part to such challenges, in their favorite
 language.
 Isn't is ironic D depends so much on C++, while many programmers come
 to D
 fed up with this language, presicely?
DDMD: http://www.dsource.org/projects/ddmd
Definitely a good thing, and more! :-) Thank your for the pointer, Nick. I will skim across the project as soon as I have some hours free. And see if --with my very limited competence in the domain-- I can contribute in any way. I have an idea for a side-feature if I can understand the produced AST: generate Types as D data structures on request (--meta), write them into a plain D module to be imported on need. A major aspect, I guess, of the 'meta' namespace discussed on this list. Denis
Currently it doesn't compile on Posix, and never has as far as I know. That's one thing you can help with if you want to. Don't know the status on Windows
It compiles fine on Windows. Some of the last few commits were related to compling on Linux and OSX. Does the latest version still not work?
No, if I was the last one who did those commits. Since then, though, a few necessary bugs have been fixed in DMD.

--
/Jacob Carlborg
Feb 14 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-02-13 16:07, Gary Whatmore wrote:
 Paulo Pinto Wrote:

 "Nick Sabalausky"<a a.a>  wrote in message
 news:ij7v76$1q4t$1 digitalmars.com...
 ... (cutted) ...

 That's not the compiler, that's the linker. I don't know what linker DMD
 uses on OSX, but on Windows it uses OPTLINK which is written in
 hand-optimized Asm so it's really hard to change. But Walter's been
 converting it to C (and maybe then to D once that's done) bit-by-bit (so
 to speak), so linker improvements are at least on the horizon.

 ...
Why C and not directly D? It is really bad adversting for D to know that when its creator came around to rewrite the linker, Walter decided to use C instead of D.
I'm guessing that Walter feels more familiar and comfortable developing C/C++ instead of D. He's the creator of D, but has written very small amounts of D and probably cannot write idiomatic D very fluently. Another issue is the immature toolchain. This might sound like blasphemy, but I believe the skills and knowledge for developing large scale applications in language XYZ cannot be extrapolated from small code snippets or from experience with projects in other languages. You just need to eat your own dogfood and get your feet wet by doing. People like the Tango's 'kris' and this 'h3r3tic' are the real world D experts. Sadly they've all left D. We need a new generation of experts, because these old guys ranting about every issue are more harmful than good to the community.
Kris is still around.

--
/Jacob Carlborg
Feb 14 2011
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 14/02/2011 12:37, Jacob Carlborg wrote:
 On 2011-02-13 16:07, Gary Whatmore wrote:
 Paulo Pinto Wrote:

 "Nick Sabalausky"<a a.a> wrote in message
 news:ij7v76$1q4t$1 digitalmars.com...
 ... (cutted) ...

 That's not the compiler, that's the linker. I don't know what linker
 DMD
 uses on OSX, but on Windows it uses OPTLINK which is written in
 hand-optimized Asm so it's really hard to change. But Walter's been
 converting it to C (and maybe then to D once that's done) bit-by-bit
 (so
 to speak), so linker improvements are at least on the horizon.

 ...
Why C and not directly D? It is really bad adversting for D to know that when its creator came around to rewrite the linker, Walter decided to use C instead of D.
I'm guessing that Walter feels more familiar and comfortable developing C/C++ instead of D. He's the creator of D, but has written very small amounts of D and probably cannot write idiomatic D very fluently. Another issue is the immature toolchain. This might sound like blasphemy, but I believe the skills and knowledge for developing large scale applications in language XYZ cannot be extrapolated from small code snippets or from experience with projects in other languages. You just need to eat your own dogfood and get your feet wet by doing. People like the Tango's 'kris' and this 'h3r3tic' are the real world D experts. Sadly they've all left D. We need a new generation of experts, because these old guys ranting about every issue are more harmful than good to the community.
Kris is still around.
Out of curiosity, what do you mean by "still around"? Still working with D?

--
Bruno Medeiros - Software Engineer
Feb 23 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Paulo Pinto wrote:
 Why C and not directly D?
 
 It is really bad adversting for D to know that when its creator came around 
 to rewrite the linker, Walter decided to use C instead of D.
That's a very good question. The answer is in the technical details of transitioning Optlink from an all-assembler project to a higher level language. I do it function by function, meaning there will be hundreds of "hybrid" versions that are partly in the high level language, partly in asm. Currently, it's around 5% in C.

1. Optlink has its own "runtime" system and startup code. With C, and a little knowledge about how things work under the hood, it's easier to create "headless" functions that require zero runtime and startup support. With D, the D compiler will create ModuleInfo and TypeInfo objects, which more or less rely on some sort of D runtime existing.

2. The group/segment names emitted by the C compiler match what Optlink uses. It matches what dmd does, too, except that dmd emits more such names, requiring more of an understanding of Optlink to get them in the right places.

3. The hybrid intermediate versions require that the asm portions of Optlink be able to call the high level language functions. In order to avoid an error-prone editing of scores of files, it is very convenient to have the function names used by the asm code exactly match the names emitted by the compiler. I accomplished this by "tweaking" the dmc C compiler. I didn't really want to mess with the D compiler to do the same.

4. Translating asm to a high level language starts with a rote translation, i.e. using goto's, raw pointers, etc., which match 1:1 with the assembler logic. No attempt is made to infer higher level logic. This makes mistakes in the translation easier to find. But it's not the way anyone in their right mind would develop C code. The higher level abstractions in C are not useful here, and neither are the higher level abstractions in D.

Once the entire Optlink code base has been converted, then it becomes a simple process to:

1. Dump the Optlink runtime, and switch to the C runtime.
2. Translate the C code to D.

And then:

3. Refactor the D code into higher level abstractions.

I've converted a massive code base from asm to C++ before (DASH for Data I/O) and I discovered that attempting to refactor the code while translating it is fraught with disaster. Doing the hybrid approach is much faster and more likely to be successful.

TL;DR: The C version is there only as a transitional step, as it's somewhat easier to create a hybrid asm/C code base than a hybrid asm/D one. The goal is to create a D version.
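(To illustrate what point 4's "rote translation" looks like, a tiny invented example, written in D syntax for concreteness: the variable names mirror the registers and the gotos mirror the asm's jumps one-for-one, with no attempt at structured control flow.)

// Rote translation of an invented asm fragment that sums ecx bytes
// starting at esi; the asm's registers and jumps are kept 1:1.
uint sumBytes(ubyte* esi, uint ecx)
{
    uint eax = 0;              // xor eax, eax
L1:
    if (ecx == 0) goto done;   // cmp ecx, 0 / je done
    eax += *esi;               // add the byte at [esi]
    ++esi;                     // inc esi
    --ecx;                     // dec ecx
    goto L1;                   // jmp L1
done:
    return eax;                // result in EAX
}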
Feb 13 2011
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 With D, the D compiler will create ModuleInfo and TypeInfo objects,
 which more or less rely on some sort of D runtime existing.
In LDC there are no_typeinfo (and maybe no_moduleinfo) pragmas to disable the generation of those for specific types/modules:
http://www.dsource.org/projects/ldc/wiki/Docs#no_typeinfo

pragma(no_typeinfo)
{
    struct Opaque {}
}

If it's useful then something similar may be added to DMD too.

Bye,
bearophile
Feb 13 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 Walter:
 
 With D, the D compiler will create ModuleInfo and TypeInfo objects, which
 more or less rely on some sort of D runtime existing.
In LDC there are no_typeinfo (and in maybe no_moduleinfo) pragmas to disable the generation of those for specific types/modules: http://www.dsource.org/projects/ldc/wiki/Docs#no_typeinfo pragma(no_typeinfo) { struct Opaque {} } If it's useful then something similar may be added to DMD too.
I think it's best to avoid such things.
Feb 13 2011
parent Alan Smithee <email example.com> writes:
Agreed. These things might make D appear like less of a joke, thus
attracting more hapless users to their subsequent dismay.
Feb 13 2011
prev sibling next sibling parent spir <denis.spir gmail.com> writes:
On 02/13/2011 07:53 PM, Walter Bright wrote:
 Paulo Pinto wrote:
 Why C and not directly D?

 It is really bad adversting for D to know that when its creator came around
 to rewrite the linker, Walter decided to use C instead of D.
That's a very good question. The answer is in the technical details of transitioning optlink from an all assembler project to a higher level language. I do it function by function, meaning there will be hundreds of "hybrid" versions that are partly in the high level language, partly in asm. Currently, it's around 5% in C. 1. Optlink has its own "runtime" system and startup code. With C, and a little knowledge about how things work under the hood, it's easier to create "headless" functions that require zero runtime and startup support. With D, the D compiler will create ModuleInfo and TypeInfo objects, which more or less rely on some sort of D runtime existing. 2. The group/segment names emitted by the C compiler match what Optlink uses. It matches what dmd does, too, except that dmd emits more such names, requiring more of an understanding of Optlink to get them in the right places. 3. The hybrid intermediate versions require that the asm portions of Optlink be able to call the high level language functions. In order to avoid an error-prone editting of scores of files, it is very convenient to have the function names used by the asm code exactly match the names emitted by the compiler. I accomplished this by "tweaking" the dmc C compiler. I didn't really want to mess with the D compiler to do the same. 4. Translating asm to a high level language starts with a rote translation, i.e. using goto's, raw pointers, etc., which match 1:1 with the assembler logic. No attempt is made to infer higher level logic. This makes mistakes in the translation easier to find. But it's not the way anyone in their right mind would develop C code. The higher level abstractions in C are not useful here, and neither are the higher level abstractions in D. Once the entire Optlink code base has been converted, then it becomes a simple process to: 1. Dump the Optlink runtime, and switch to the C runtime. 2. Translate the C code to D. And then: 3. Refactor the D code into higher level abstractions. I've converted a massive code base from asm to C++ before (DASH for Data I/O) and I discovered that attempting to refactor the code while translating it is fraught with disaster. Doing the hybrid approach is much faster and more likely to be successful. TL,DR: The C version is there only as a transitional step, as it's somewhat easier to create a hybrid asm/C code base than a hybrid asm/D one. The goal is to create a D version.
Great! Thank you very much for this clear & comprehensive explanation of the process, Walter. (*)

Denis

(*) I can understand what you mean with this 2-stage translation --being easier, safer, and finally far more efficient-- having done something similar, but at a smaller scale, probably, in the field of automation; where languages are often even closer to asm than C, 'cause much "memory" is in fact binary IO cards, directly accessed as is.
--
_________________
vita es estrany
spir.wikidot.com
Feb 13 2011
prev sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
Hi,

now I am convinced. Thanks for the explanation.

--
Paulo

"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:ij99gb$18fm$1 digitalmars.com...
 Paulo Pinto wrote:
 Why C and not directly D?

 It is really bad adversting for D to know that when its creator came 
 around to rewrite the linker, Walter decided to use C instead of D.
That's a very good question. The answer is in the technical details of transitioning optlink from an all assembler project to a higher level language. I do it function by function, meaning there will be hundreds of "hybrid" versions that are partly in the high level language, partly in asm. Currently, it's around 5% in C. 1. Optlink has its own "runtime" system and startup code. With C, and a little knowledge about how things work under the hood, it's easier to create "headless" functions that require zero runtime and startup support. With D, the D compiler will create ModuleInfo and TypeInfo objects, which more or less rely on some sort of D runtime existing. 2. The group/segment names emitted by the C compiler match what Optlink uses. It matches what dmd does, too, except that dmd emits more such names, requiring more of an understanding of Optlink to get them in the right places. 3. The hybrid intermediate versions require that the asm portions of Optlink be able to call the high level language functions. In order to avoid an error-prone editting of scores of files, it is very convenient to have the function names used by the asm code exactly match the names emitted by the compiler. I accomplished this by "tweaking" the dmc C compiler. I didn't really want to mess with the D compiler to do the same. 4. Translating asm to a high level language starts with a rote translation, i.e. using goto's, raw pointers, etc., which match 1:1 with the assembler logic. No attempt is made to infer higher level logic. This makes mistakes in the translation easier to find. But it's not the way anyone in their right mind would develop C code. The higher level abstractions in C are not useful here, and neither are the higher level abstractions in D. Once the entire Optlink code base has been converted, then it becomes a simple process to: 1. Dump the Optlink runtime, and switch to the C runtime. 2. Translate the C code to D. And then: 3. Refactor the D code into higher level abstractions. I've converted a massive code base from asm to C++ before (DASH for Data I/O) and I discovered that attempting to refactor the code while translating it is fraught with disaster. Doing the hybrid approach is much faster and more likely to be successful. TL,DR: The C version is there only as a transitional step, as it's somewhat easier to create a hybrid asm/C code base than a hybrid asm/D one. The goal is to create a D version.
Feb 13 2011
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sun, 13 Feb 2011 08:00:57 +0200, golgeliyele <usuldan gmail.com> wrote:

 The error reporting has issues as well. I noticed that the compiler  
 leaks low level errors to the user. If you forget to add a main to your
 app or misspell it, you get errors like:
 ====
 Undefined symbols:
   "__Dmain", referenced from:
       _D2rt6dmain24mainUiPPaZi7runMainMFZv in  
 libphobos2.a(dmain2_513_1a5.o)
 ====
 I mean, wow, this should really be handled better.
This has been brought up before. Walter insists it's not a problem.
 Another annoyance, for me anyway, is that the DMD compiler outputs the  
 .o files without the package directory hierarchy. I like to
 organize my code as 'src/my/package/module.d'. And I want to set my  
 output directory to 'lib' and get 'lib/my/package/module.o'.
 But DMD generates 'lib/module.o'. I setup my project to build each .d  
 file into a .o file as a separate step. I don't even know if this is
 the desired setup. But that seems to be the way to make it incremental.  
 I couldn't find any definitive information on this in the DMD
 compiler web page. It says:
 "dmd can build an executable much faster if as many of the source files  
 as possible are put on the command line.
Correctly-working incremental builds are not possible with DMD. This is an old problem that isn't easy to fix, due to the way the compiler was designed and written.
 Another advantage to putting multiple source files on the same  
 invocation of dmd is that dmd will be able to do some level of cross-
 module optimizations, such as function inlining across modules."
I've been told that DMD will actually do cross-module optimizations even if you don't specify the other modules on its command line. The point is that, unlike C++, the compiler has access to the full source code of all imported modules (not just a header file) unless you use .di files, so it can do inlining and whatnot.

--
Best regards,
Vladimir
mailto:vladimir thecybershadow.net
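(A minimal sketch of that point, with invented modules a and b: because dmd reads b.d's full source when compiling a.d, a small function like twice() is a candidate for inlining across the module boundary when building with dmd -O -inline a.d b.d -- something a C++ compiler cannot do from a declaration alone.)

// b.d
module b;

int twice(int x)
{
    return x * 2;  // full body is visible to importers
}

// a.d
module a;
import b;

int main()
{
    return twice(21);  // candidate for cross-module inlining
}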
Feb 13 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 08:00:57 +0200, golgeliyele <usuldan gmail.com> wrote:
 
 The error reporting has issues as well. I noticed that the compiler 
 leaks low level errors to the user. If you forget to add a main to your
 app or misspell it, you get errors like:
 ====
 Undefined symbols:
   "__Dmain", referenced from:
       _D2rt6dmain24mainUiPPaZi7runMainMFZv in 
 libphobos2.a(dmain2_513_1a5.o)
 ====
 I mean, wow, this should really be handled better.
This has been brought up before. Walter insists it's not a problem.
In C++, you get essentially the same thing from g++:

/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: ld returned 1 exit status
Feb 13 2011
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 In C++, you get essentially the same thing from g++:
 
 /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crt1.o: In function
`_start':
 (.text+0x20): undefined reference to `main'
 collect2: ld returned 1 exit status
Lots of people come here because they want a compiler+language better than C++ :-)

If you compile this:

void main() {
    writeln("Hello world");
}

dmd has for some time shown an error fit for D newbies:

test.d(2): Error: 'writeln' is not defined, perhaps you need to import std.stdio; ?

Probably many Python/JS/Perl/PHP/etc programmers that may want to try D don't know what a linker is. When they want to develop a large multi-module D program they must know something about how a linker works. But D has to scale down to smaller programs too, where there are only one or very few modules, written by non-experts of C-class languages. In this situation more readable error messages, produced by dmd catching a basic error before the linker, are probably useful.

Bye,
bearophile
Feb 13 2011
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/13/11 6:59 AM, bearophile wrote:
 Walter:

 In C++, you get essentially the same thing from g++:

 /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crt1.o: In function
`_start':
 (.text+0x20): undefined reference to `main'
 collect2: ld returned 1 exit status
Lot of people come here because they want a compiler+language better than C++ :-)
In many ways D looks and feels like a newer language, so I agree that we probably shouldn't use C++ as a yardstick here. It would be a cop-out to be relaxed about something because C++ has it too.

Andrei
Feb 13 2011
prev sibling parent spir <denis.spir gmail.com> writes:
On 02/13/2011 01:59 PM, bearophile wrote:
 Walter:

 In C++, you get essentially the same thing from g++:

 /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crt1.o: In function
`_start':
 (.text+0x20): undefined reference to `main'
 collect2: ld returned 1 exit status
Lot of people come here because they want a compiler+language better than C++ :-) If you compile this: void main() { writeln("Hello world"); } Since some time dmd shows an error fit for D newbies: test.d(2): Error: 'writeln' is not defined, perhaps you need to import std.stdio; ? Probably many Python/JS/Perl/PHP/etc programmers that may want to try D don't know what a linker is. When they want to develop a large multi-module D program they must know something about how a linker works. But D has to scale down to smaller programs too, where there are only one or very few modules, written by not experts of C-class languages. In this situation more readable error messages, produced by dmd that catches a basic error before the linker, is probably useful.
Couldn't have written this one better ;-)

Denis
--
_________________
vita es estrany
spir.wikidot.com
Feb 13 2011
prev sibling parent reply golgeliyele <bgedik gmail.com> writes:
I don't think C++ and gcc set a good bar here.
Feb 13 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
Feb 13 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.

--
Best regards,
Vladimir
mailto:vladimir thecybershadow.net
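(A minimal sketch of the idea -- a hypothetical wrapper around the dmd invocation, not a description of how dmd itself works: run the build, then translate the well-known _Dmain linker error into a friendlier hint. The wrapper name and hint text are invented; std.process.execute and canFind are real Phobos calls.)

import std.algorithm.searching : canFind;
import std.process : execute;
import std.stdio : stderr, write;

int main(string[] args)
{
    // Run the real compile+link step with the arguments we were given.
    auto result = execute(["dmd"] ~ args[1 .. $]);
    write(result.output);  // pass the raw output through unchanged

    // One well-known low-level error, rephrased for newcomers.
    if (result.status != 0 && result.output.canFind("_Dmain"))
        stderr.writeln("hint: no main() found -- did you forget to define it, or misspell it?");

    return result.status;
}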
Feb 13 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.
Not without reading the .o files passed to the linker, and the libraries, and figuring out what would be pulled in from those libraries. In essence, the compiler would have to become a linker. It's not impossible, but it is a tremendous amount of work in order to improve one error message, and one error message that generations of C and C++ programmers are comfortable dealing with.
Feb 13 2011
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Sun, 13 Feb 2011 22:12:02 +0300, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.
Not without reading the .o files passed to the linker, and the libraries, and figuring out what would be pulled in from those libraries. In essence, the compiler would have to become a linker. It's not impossible, but is a tremendous amount of work in order to improve one error message, and one error message that generations of C and C++ programmers are comfortable dealing with.
What's wrong with parsing low-level linker error messages and outputting them in human-readable form? E.g. demangling missing symbols.
Feb 13 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Denis Koroskin wrote:
 It's not impossible, but is a tremendous amount of work in order to 
 improve one error message, and one error message that generations of C 
 and C++ programmers are comfortable dealing with.
What's wrong with parsing low-level linker error messages and output them in human-readable form? E.g. demangle missing symbols.
Yes, that can be done. The downside is that, since dmd does not control what linker the user has, this becomes a constant source of problems: it constantly breaks with linker changes, across an arbitrarily long list of linkers on various distributions.
Feb 13 2011
parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-02-13 14:38:20 -0500, Walter Bright <newshound2 digitalmars.com> said:

 Denis Koroskin wrote:
 It's not impossible, but is a tremendous amount of work in order to 
 improve one error message, and one error message that generations of C 
 and C++ programmers are comfortable dealing with.
What's wrong with parsing low-level linker error messages and output them in human-readable form? E.g. demangle missing symbols.
Yes, that can be done. The downside is since dmd does not control what linker the user has, it becomes a constant source of problems trying to keep it working as it constantly breaks with linker changes and an arbitrarily long list of linkers on various distributions.
Parsing error messages is a problem indeed. But demangling symbol names is easy. Try this:

dmd ... 2>&1 | ddemangle

With ddemangle being a compiled version of this program:

import std.stdio;
import core.demangle;

void main()
{
    foreach (line; stdin.byLine())
    {
        size_t beginIdx, endIdx;
        enum State { searching_, searchingD, searchingEnd, done }
        State state;

        // Scan for a substring that looks like a mangled D symbol: it
        // starts with one or more '_' followed by 'D', and ends at a
        // space or quote character.
        foreach (i, char c; line)
        {
            switch (state)
            {
                case State.searching_:
                    if (c == '_')
                    {
                        beginIdx = i;
                        state = State.searchingD;
                    }
                    break;
                case State.searchingD:
                    if (c == 'D')
                        state = State.searchingEnd;
                    else if (c != '_')
                        state = State.searching_;
                    break;
                case State.searchingEnd:
                    if (c == ' ' || c == '"' || c == '\'')
                    {
                        endIdx = i;
                        state = State.done;
                    }
                    break;
                default: // State.done never reaches the switch
                    break;
            }
            if (state == State.done)
                break;
        }

        if (endIdx > beginIdx)
            writeln(line[0..beginIdx], demangle(line[beginIdx..endIdx]), line[endIdx..$]);
        else
            writeln(line);
    }
}

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Feb 13 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Michel Fortin wrote:
 Parsing error messages is a problem indeed. But demangling symbol names 
 is easy.
Demangling doesn't get us where golgeliyele wants to go.
Feb 13 2011
parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2011-02-13 16:37:19 -0500, Walter Bright <newshound2 digitalmars.com> said:

 Michel Fortin wrote:
 Parsing error messages is a problem indeed. But demangling symbol names 
 is easy.
Demangling doesn't get us where golgeliyele wants to go.
Correct. But note I was replying to your reply to Denis, who asked specifically for demangled names for missing symbols. This by itself would be a useful improvement.

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Feb 13 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically 
 for demangled names for missing symbols. This by itself would be a 
 useful improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
Feb 13 2011
next sibling parent reply Brad Roberts <braddr puremagic.com> writes:
On 2/13/2011 3:01 PM, Walter Bright wrote:
 Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically for
demangled names for missing symbols. This by
 itself would be a useful improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
No offense, but this argument gets kinda old and it's incredibly weak. Today's tooling expectations are higher. The audience isn't the same. And clearly people are asking for it.

Even with the past version, I highly doubt that no one cared; you just didn't hear from those who liked it. After all, few people go out of their way to talk about what they like, just what they don't.

Later,
Brad
Feb 13 2011
parent reply retard <re tard.com.invalid> writes:
Sun, 13 Feb 2011 15:06:46 -0800, Brad Roberts wrote:

 On 2/13/2011 3:01 PM, Walter Bright wrote:
 Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically
 for demangled names for missing symbols. This by itself would be a
 useful improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
No offense, but this argument gets kinda old and it's incredibly weak. Today's tooling expectations are higher. The audience isn't the same. And clearly people are asking for it. Even the past version of it I highly doubt no one cared, you just didn't hear from those that liked it. After all, few people go out of their way to talk about what they like, just what they don't.
Half of the readers have already added me to their killfile, but here goes some on-topic humor:

http://www.winandmac.com/wp-content/uploads/2010/03/ipad-hp-fail.jpg

Sometimes people don't yet know what they want. For example, the reason we write portable C++ in some projects is that it's easier to switch between VC++, ICC, GCC, and LLVM, whichever produces the best performing code. Unfortunately DMC is always out of the question because the performance is 10-20 years behind the competition; fast compilation won't help it.
Feb 13 2011
next sibling parent so <so so.so> writes:
 Half of the readers have already added me to their killfile
This is how communities work. They like to "fuck" in a small circle, everyone agrees with everything, and they still expect a positive outcome :D
Feb 13 2011
prev sibling next sibling parent reply so <so so.so> writes:
 Unfortunately DMC is always out of the
 question because the performance is 10-20 behind competition, fast
 compilation won't help it.
Can you please give a few links on this?
Feb 13 2011
parent reply retard <re tard.com.invalid> writes:
Mon, 14 Feb 2011 04:44:43 +0200, so wrote:

 Unfortunately DMC is always out of the question because the performance
 is 10-20 (years) behind competition, fast compilation won't help it.
Can you please give a few links on this?
What kind of proof do you need then? Just take some existing piece of code with high performance requirements and compile it with dmc. You lose.

http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
http://lists.boost.org/boost-testing/2005/06/1520.php
http://www.digitalmars.com/d/archives/c++/chat/66.html
http://www.drdobbs.com/cpp/184405450

Many of those are already old. GCC 4.6, LLVM 2.9, and ICC 12 are much faster, especially on multicore hardware. A quick look at the DMC changelog doesn't reveal any significant new optimizations during the past 10 years except some Pentium 4 opcodes and fixes on the library level.

I rarely see a benchmark where DMC produces the fastest code. In addition, most open source projects are not compatible with DMC's toolchain out of the box. If execution performance of the generated code is your top priority, I wouldn't recommend using Digital Mars products.
Feb 14 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
 
 Unfortunately DMC is always out of the question because the performance
 is 10-20 (years) behind competition, fast compilation won't help it.
Can you please give a few links on this?
What kind of proof you need then? Just take some existing piece of code with high performance requirements and compile it with dmc. You lose. http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
That link shows dmc winning.
 http://lists.boost.org/boost-testing/2005/06/1520.php
 http://www.digitalmars.com/d/archives/c++/chat/66.html
 http://www.drdobbs.com/cpp/184405450
 
 Many of those are already old. GCC 4.6, LLVM 2.9, and ICC 12 are much 
 faster, especially on multicore hardware. A quick look at DMC changelog 
 doesn't reveal any significant new optimizations durin the past 10 years 
 except some Pentium 4 opcodes and fixes on library level.
 
 I rarely see a benchmark where DMC produces fastest code. In addition, 
 most open source projects are not compatible with DMC's toolchain out of 
 the box. If execution performance of the generated code is your top 
 priority, I wouldn't recommend using DigitalMars products.
Feb 14 2011
parent reply retard <re tard.com.invalid> writes:
Mon, 14 Feb 2011 10:01:53 -0800, Walter Bright wrote:

 retard wrote:
 Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
 
 Unfortunately DMC is always out of the question because the
 performance is 10-20 (years) behind competition, fast compilation
 won't help it.
Can you please give a few links on this?
What kind of proof you need then? Just take some existing piece of code with high performance requirements and compile it with dmc. You lose. http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
That link shows dmc winning.
No, it doesn't. In the Fib-50000 test, where the optimizations bring the largest improvements in wall clock time, g++ 3.3.1, vc++7, bc++ 5.5.1, and icc are all faster with optimized settings. This test is a joke anyway. I wouldn't pick a compiler for video transcoding based on some Fib-10000 results, seriously.
Feb 14 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Mon, 14 Feb 2011 10:01:53 -0800, Walter Bright wrote:
 
 retard wrote:
 Mon, 14 Feb 2011 04:44:43 +0200, so wrote:

 Unfortunately DMC is always out of the question because the
 performance is 10-20 (years) behind competition, fast compilation
 won't help it.
Can you please give a few links on this?
What kind of proof you need then? Just take some existing piece of code with high performance requirements and compile it with dmc. You lose. http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
That link shows dmc winning.
No, it doesn't. In the Fib-50000 test, where the optimizations bring the largest improvements in wall clock time, g++ 3.3.1, vc++7, bc++ 5.5.1, and icc are all faster with optimized settings.
And dmc is faster with Fib-25000.
 This test is a joke anyway.
You picked these benchmarks, not me.
Feb 14 2011
prev sibling parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
retard wrote:

 Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
 
 Unfortunately DMC is always out of the question because the performance
 is 10-20 (years) behind competition, fast compilation won't help it.
Can you please give a few links on this?
What kind of proof you need then? Just take some existing piece of code with high performance requirements and compile it with dmc. You lose. http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37 http://lists.boost.org/boost-testing/2005/06/1520.php http://www.digitalmars.com/d/archives/c++/chat/66.html http://www.drdobbs.com/cpp/184405450
That is ridiculous, have you even bothered to read your own links? In some of them dmc wins, in others the differences are minimal, and for all of them dmc is king in compilation times.
Feb 14 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Lutger Blijdestijn wrote:
 retard wrote:
 
 Mon, 14 Feb 2011 04:44:43 +0200, so wrote:

 Unfortunately DMC is always out of the question because the performance
 is 10-20 (years) behind competition, fast compilation won't help it.
Can you please give a few links on this?
What kind of proof you need then? Just take some existing piece of code with high performance requirements and compile it with dmc. You lose. http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37 http://lists.boost.org/boost-testing/2005/06/1520.php http://www.digitalmars.com/d/archives/c++/chat/66.html http://www.drdobbs.com/cpp/184405450
That is ridiculous, have you even bothered to read your own links? In some of them dmc wins, others the differences are minimal and for all of them dmc is king in compilation times.
People tend to see what they want to see. There was a computer magazine roundup in the late 1980's where they benchmarked a dozen or so compilers. The text enthusiastically declared Borland to be the fastest compiler, while their own benchmark tables clearly showed Zortech as winning across the board.

The ironic thing about retard not recommending dmc for fast code is that dmc is built using dmc, and dmc is *far* faster at compiling than any of the others.
Feb 14 2011
parent reply retard <re tard.com.invalid> writes:
Mon, 14 Feb 2011 11:38:50 -0800, Walter Bright wrote:

 Lutger Blijdestijn wrote:
 retard wrote:
 
 Mon, 14 Feb 2011 04:44:43 +0200, so wrote:

 Unfortunately DMC is always out of the question because the
 performance is 10-20 (years) behind competition, fast compilation
 won't help it.
Can you please give a few links on this?
What kind of proof you need then? Just take some existing piece of code with high performance requirements and compile it with dmc. You lose. http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37 http://lists.boost.org/boost-testing/2005/06/1520.php http://www.digitalmars.com/d/archives/c++/chat/66.html http://www.drdobbs.com/cpp/184405450
That is ridiculous, have you even bothered to read your own links? In some of them dmc wins, others the differences are minimal and for all of them dmc is king in compilation times.
People tend to see what they want to see. There was a computer magazine roundup in the late 1980's where they benchmarked a dozen or so compilers. The text enthusiastically declared Borland to be the fastest compiler, while their own benchmark tables clearly showed Zortech as winning across the board. The ironic thing about retard not recommending dmc for fast code is dmc is built using dmc, and dmc is *far* faster at compiling than any of the others.
Your obsession with fast compile times is incomprehensible. It doesn't have any relevance in the projects I'm talking about. On multicore, 'make -jN', distcc & low-cost clusters, and incremental compilation already mitigate most of the issues. LLVM is also supposed to compile large projects faster than the 'legacy' gcc. There are also faster linkers than GNU ld. If you're really obsessed with compile times, there are far better languages such as D. The extensive optimizations and fast compile times have an inverse correlation. Of course your compiler compiles faster if it optimizes less. What's the point here? All your examples and stories are from the 1980's and 1990's. Any idea how well dmc fares against the latest Intel / Microsoft / GNU compilers?
Feb 14 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Your obsession with fast compile times is incomprehensible.
Yet people complain about excessive compile times with C++ all the time, such as overnight builds. Quite a few dmc++ customers stick with it because of compile times.
 It doesn't have any relevance in the projects I'm talking about.
It's relevant when you make claims that you cannot create fast code with dmc, since dmc is itself built with dmc.
 The extensive optimizations and fast compile times have an inverse 
 correlation. Of course your compiler compiles faster if it optimizes 
 less. What's the point here?
It compiles far faster for debug builds, too. That is directly relevant to productivity in the edit/compile/debug loop. It also makes a big difference to me that I can run the test suite in half an hour rather than an hour. It means I'll be less tempted to skip running the suite.
 All your examples and stories are from 1980's and 1990's. Any idea how 
 well dmc fares against latest Intel / Microsoft / GNU compilers?
Bearophile posted a benchmark last year where he concluded that modern compilers like LLVM beat the pants off of primitive, obsolete compilers like dmc for integer arithmetic. A little investigation showed it had nothing whatsoever to do with the compiler - it was the runtime library implementation of long divide that was the culprit. I corrected that, and the runtimes became indistinguishable.

I hear stuff about how dmc should catch up with LLVM and do modern things like data flow analysis, yet dmc has done data flow analysis since 1985. I also hear that dmc should do named return value optimization, not realizing that dmc *invented* named return value optimization and has done it since 1991. These claims are clearly made simply based on assumptions and reading the marketing literature of other compilers.

The point is, compiler optimizers hit a wall around 15 years ago. Only tiny improvements have happened since then. (Not considering vectorization, which is a big improvement.) Where dmc needs improvement is in floating point code, particularly in using XMM registers and doing vectorization. dmc does an excellent and competitive job with optimization rewrites, register assignment, scheduling and detail code generation. There's only so much juice you can get out of those grapes.
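For readers who haven't met it, here is a minimal D sketch of the pattern named return value optimization targets; the struct and function are made up for illustration:

====
// `result` is the only value returned, and it is returned by name,
// so the compiler may construct it directly in the caller's storage
// and elide the copy on return.
struct Matrix
{
    double[16] data = 0;
}

Matrix makeIdentity()
{
    Matrix result;                    // candidate for NRVO
    foreach (i; 0 .. 4)
        result.data[i * 4 + i] = 1.0;
    return result;                    // no 128-byte copy with NRVO
}

void main()
{
    auto m = makeIdentity();          // built in place in `m`
}
====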
Feb 14 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:ijc4fk$iv3$1 digitalmars.com...
 I hear stuff about how dmc should catch up with LLVM and do modern things 
 like data flow analysis, yet dmc has done data flow analysis since 1985. I 
 also hear that dmc should do named return value optimization, not 
 realizing that dmc *invented* named return value optimization and has done 
 it since 1991. These claims are clearly made simply based on assumptions 
 and reading the marketing literature of other compilers.
If it isn't already, maybe all this should be mentioned on the D site.
Feb 14 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote:
 If it isn't already, maybe all this should be mentioned on the D site. 
Maybe you're right.
Feb 14 2011
prev sibling parent =?UTF-8?B?Z8O2bGdlbGl5ZWxl?= <usuldan gmail.com> writes:
On 2/14/11 3:22 PM, retard wrote:
 Your obsession with fast compile times is incomprehensible. It doesn't
 have any relevance in the projects I'm talking about. On multicore, 'make -jN',
 distcc & low-cost clusters, and incremental compilation already
 mitigate most of the issues. LLVM is also supposed to compile large
 projects faster than the 'legacy' gcc. There are also faster linkers than
 GNU ld. If you're really obsessed with compile times, there are far
 better languages such as D.

 The extensive optimizations and fast compile times have an inverse
 correlation. Of course your compiler compiles faster if it optimizes
 less. What's the point here?

 All your examples and stories are from 1980's and 1990's. Any idea how
 well dmc fares against latest Intel / Microsoft / GNU compilers?
I work on a >1M LOC C++ project using distcc with 4 nodes and ccache. Unfortunately, it is not enough. Yes, there are various cases where runtime performance matters a lot. But compile time performance of C++ is a huge problem. I am glad that Walter cares about this. The point about optimizations vs compile time seems to be a valid one. However, even without optimizations turned on, gcc sucks big time w.r.t. compilation time. And most of the time is spent parsing a gazillion headers. I did not have a chance to work with Intel's or MS's compilers.
Feb 14 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Mon, 14 Feb 2011 20:10:47 +0100, Lutger Blijdestijn wrote:

 retard wrote:
 
 Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
 
 Unfortunately DMC is always out of the question because the
 performance is 10-20 (years) behind competition, fast compilation
 won't help it.
Can you please give a few links on this?
What kind of proof you need then? Just take some existing piece of code with high performance requirements and compile it with dmc. You lose. http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37 http://lists.boost.org/boost-testing/2005/06/1520.php http://www.digitalmars.com/d/archives/c++/chat/66.html http://www.drdobbs.com/cpp/184405450
That is ridiculous, have you even bothered to read your own links? In some of them dmc wins, others the differences are minimal and for all of them dmc is king in compilation times.
DMC doesn't clearly win in any of the tests, and these are merely some naive examples I found by doing 5 minutes of googling. Seriously, take a closer look - the gcc version is over 5 years old. Nobody even bothers doing dmc benchmarks anymore, dmc is so out of the league. I repeat, this was about performance of the generated binaries, not compile times. Like I said: take some existing piece of code with high performance requirements and compile it with dmc. You lose. I honestly don't get what I need to prove here. Since you have no clue, presumably you aren't even using dmc and won't be considering it. Just take a look at the command line parameters:

-[0|2|3|4|5|6] 8088/286/386/486/Pentium/P6 code

There are no arch-specific optimizations for PIII, Pentium 4, Pentium D, Core, Core 2, Core i7, Core i7 2600K, and similar kinds of products from AMD. No mention of auto-vectorization, or of the whole program and instruction level optimizations the very latest GCC and LLVM are now slowly adopting.
Feb 14 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 There are no arch-specific optimizations for PIII, Pentium 4, Pentium D,
 Core, Core 2, Core i7, Core i7 2600K, and similar kinds of products from AMD.

The optimal instruction sequences varied dramatically on those earlier processors, but not so much at all on the later ones. Even the latest Intel/AMD instruction set references no longer provide that information.

In particular, instruction scheduling no longer seems to matter, except for the Intel Atom, which benefits very much from Pentium-style instruction scheduling. Ironically, dmc++ is the only available current compiler which supports that.
 No mention of auto-vectorization 
dmc doesn't do auto-vectorization. I agree that's an issue.
 or whole program
I looked into that, there's not a lot of oil in that well.
 and instruction level optimizations the very latest GCC and LLVM are now 
slowly adopting.

Huh? Every compiler in existence has done, and always has done, instruction level optimizations.

Note: a lot of modern compilers expend tremendous effort optimizing access to global variables (often screwing up multithreaded code in the process). I've always viewed this as a crock, since modern programming style eschews globals as much as possible.
Feb 14 2011
next sibling parent reply retard <re tard.com.invalid> writes:
Mon, 14 Feb 2011 13:00:00 -0800, Walter Bright wrote:

 In particular, instruction scheduling no longer seems to matter, except
 for the Intel Atom, which benefits very much from Pentium style
 instruction scheduling. Ironically, dmc++ is the only available current
 compiler which supports that.
I can't see how dmc++ is the only available current compiler which supports that. For example, this article (April 15, 2010) [1] says:

"The GCC 4.5 announcement was made at GNU.org. Changes from GCC 4.4, which was released almost one year ago, include the
 * use of the MPC library to evaluate complex arithmetic at compile time
 * C++0x improvements
 * automatic parallelization as part of Graphite
 * support for new ARM processors
 * Intel Atom optimizations and tuning support, and
 * AMD Orochi optimizations too"

GCC has supported i586 scheduling for as long as I can remember.
  > or whole program
 
 I looked into that, there's not a lot of oil in that well.
How about [2]:

"LTO is quite promising. Actually it is in line or even better with improvement got from other compilers (pathscale is the most convenient compiler to check lto separately: lto gave there upto 5% improvement on SPECFP2000 and 3.5% for SPECInt2000 making compiler about 50% slower and generated code size upto 30% bigger). LTO in GCC actually results in significant code reduction which is quite different from pathscale. That is one of rare cases on my mind when a specific optimization works actually better in gcc than in other optimizing compilers."

[2] http://gcc.gnu.org/ml/gcc/2009-10/msg00155.html

In my opinion the up to 5% improvement is pretty good compared to the advances in typical minor compiler version upgrades. For example [3]:

"The Fortran-written NAS Parallel Benchmarks from NASA with the LU.A test is running significantly faster with GCC 4.5. This new compiler is causing NAS LU.A to run 15% better than the other tested GCC releases."

[3] http://www.phoronix.com/scan.php?page=article&item=gcc_45_benchmarks&num=6
  > and instruction level optimizations the very latest GCC and LLVM are
  > now
 slowly adopting.
 
 Huh? Every compiler in existence has done, and always has done,
 instruction level optimizations.
I don't know this area well enough, but here is a list of the optimizations LLVM does: http://llvm.org/docs/Passes.html - from what I've read, GNU GCC doesn't implement all of these.
 Note: a lot of modern compilers expend tremendous effort optimizing
 access to global variables (often screwing up multithreaded code in the
 process). I've always viewed this as a crock, since modern programming
 style eschews globals as much as possible.
I only know that modern C/C++ compilers are doing more and more things automatically. And that might soon include automatic vectorization + multithreading of some computationally intensive code via OpenMP.
Feb 14 2011
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
retard wrote:
 Mon, 14 Feb 2011 13:00:00 -0800, Walter Bright wrote:
 
 In particular, instruction scheduling no longer seems to matter, except
 for the Intel Atom, which benefits very much from Pentium style
 instruction scheduling. Ironically, dmc++ is the only available current
 compiler which supports that.
I can't see how dmc++ is the only available current compiler which supports that. For example this article (April 15, 2010) [1] tells: "The GCC 4.5 announcement was made at GNU.org. Changes from GCC 4.4, which was released almost one year ago, include the * use of the MPC library to evaluate complex arithmetic at compile time * C++0x improvements * automatic parallelization as part of Graphite * support for new ARM processors * Intel Atom optimizations and tuning support, and * AMD Orochi optimizations too" GCC has supported i586 scheduling as long as I can remember.
"Optimizations and tuning support" is not necessarily scheduling. dmc specifically does scheduling for the U and V pipes on the Pentium, and does a near perfect job of it (better than any other compiler of the time that I checked, most of which didn't even attempt it). The only way to tell if a compiler does it is by trying it and examining the emitted instructions. Reading the marketing literature isn't good enough.
 [1] http://www.phoronix.com/scan.php?page=news_item&px=ODE1Ng
 
  > or whole program

 I looked into that, there's not a lot of oil in that well.
How about [2]: "LTO is quite promising. Actually it is in line or even better with improvement got from other compilers (pathscale is the most convenient compiler to check lto separately: lto gave there upto 5% improvement on SPECFP2000 and 3.5% for SPECInt2000 making compiler about 50% slower and generated code size upto 30% bigger). LTO in GCC actually results in significant code reduction which is quite different from pathscale. That is one of rare cases on my mind when a specific optimization works actually better in gcc than in other optimizing compilers." [2] http://gcc.gnu.org/ml/gcc/2009-10/msg00155.html
LTO is different from whole program analysis. BTW, you can sometimes get dramatic speedups by running the dmc profiler, and then feeding the .def file it generates back into the linker. This will reorder the code for optimum speed. That is LTO, but is not whole program optimization. C++'s compilation model thwarts true whole program analysis at every step. D, on the other hand, is designed to support it. dmd has some initial support for that, as it will inline code from across any modules you hand it the source for.
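A small sketch of the cross-module inlining described above; the module names and the trivial function are made up:

====
// a.d
module a;
int twice(int x) { return x * 2; }

// app.d
module app;
import a;
void main()
{
    auto y = twice(21);   // inlinable, since a.d's source is at hand
}
====

Because both sources appear on one command line, dmd can inline twice() into main():

    dmd -O -inline app.d a.d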
 In my opinion the up to 5% improvement is pretty good compared to 
 advances in typical minor compiler version upgades. For example [3]:
 
 "The Fortran-written NAS Parallel Benchmarks from NASA with the LU.A test 
 is running significantly faster with GCC 4.5. This new compiler is 
 causing NAS LU.A to run 15% better than the other tested GCC releases."
Yes, 5% is a decent improvement. You'd have to look closer to see where the improvement is coming from, though, to draw any useful conclusions. It could be (and this happens) one single tweak of one expression node that was crappily written in the first place.
 [3] http://www.phoronix.com/scan.php?page=article&item=gcc_45_benchmarks&num=6
 
  > and instruction level optimizations the very latest GCC and LLVM are
  > now
 slowly adopting.

 Huh? Every compiler in existence has done, and always has done,
 instruction level optimizations.
I don't know this area well enough, but here is a list of optimizations it does http://llvm.org/docs/Passes.html - from what I've read, GNU GCC doesn't implement all of these.
Every compiler implements a list of those, and those lists vary a lot from compiler to compiler. dmc probably has a thousand of those patterns embedded in it that it specifically recognizes.
 Note: a lot of modern compilers expend tremendous effort optimizing
 access to global variables (often screwing up multithreaded code in the
 process). I've always viewed this as a crock, since modern programming
 style eschews globals as much as possible.
I only know that modern C/C++ compilers are doing more and more things automatically. And that might soon include automatic vectorization + multithreading of some computationally intensive code via OpenMP.
D is actually far friendlier to vectorization than C/C++ are.
Feb 14 2011
prev sibling parent ./C <cbergstrom pathscale.com> writes:
 Mon, 14 Feb 2011 13:00:00 -0800, Walter Bright wrote:
 
 
 How about [2]:
 
 "LTO is quite promising.  Actually it is in line or even better with
 improvement got from other compilers (pathscale is the most convenient
 compiler to check lto separately: lto gave there upto 5% improvement
 on SPECFP2000 and 3.5% for SPECInt2000 making compiler about 50%
 slower and generated code size upto 30% bigger).  LTO in GCC actually
 results in significant code reduction which is quite different from
 pathscale.  That is one of rare cases on my mind when a specific
 optimization works actually better in gcc than in other optimizing
 compilers."
 
 [2] http://gcc.gnu.org/ml/gcc/2009-10/msg00155.html
PathScale is in the process of making significant improvements to our IPA optimization, and we welcome feedback and more testers in March. Please email me directly, whether you're a current customer or not. Thanks! Christopher
Feb 14 2011
prev sibling parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 retard wrote:
  > There are no arch specific optimizations for PIII, Pentium 4, Pentium D,
 Core, Core 2, Core i7, Core i7 2600K, and similar kinds of products from
 AMD.
 
 The optimal instruction sequences varied dramatically on those earlier 
 processors, but not so much at all on the later ones. Reading the latest 
 Intel/AMD instruction set references doesn't even provide that 
 information anymore.
 
 In particular, instruction scheduling no longer seems to matter, except 
 for the Intel Atom, which benefits very much from Pentium style 
 instruction scheduling. Ironically, dmc++ is the only available current 
 compiler which supports that.
In hand-coded asm, instruction scheduling still gives more than half of the benefit it used to. But it's become ten times more difficult. You have to use Agner Fog's manuals, not Intel/AMD's. For example:

(1) a common bottleneck on all Intel processors is that you can only read from three registers per cycle; however, registers modified in the last three cycles don't count against that limit.

(2) it's important to break dependency chains.

On the BigInt code, instruction scheduling gave a speedup of ~40%. But still, cache effects are more important than instruction scheduling in 99% of cases.
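To make point (2) concrete, here is a minimal D sketch of breaking a dependency chain by summing into two accumulators; note that for floating point this reorders the additions, so the result can differ in the last bits:

====
// One accumulator: each add must wait for the previous one.
double sumChained(const(double)[] a)
{
    double s = 0;
    foreach (x; a)
        s += x;              // a single serial dependency chain
    return s;
}

// Two accumulators: the chains are independent, so an out-of-order
// core can run them in parallel, roughly halving the latency cost.
double sumSplit(const(double)[] a)
{
    double s0 = 0, s1 = 0;
    size_t i = 0;
    for (; i + 1 < a.length; i += 2)
    {
        s0 += a[i];          // chain 0
        s1 += a[i + 1];      // chain 1, independent of chain 0
    }
    if (i < a.length)
        s0 += a[i];          // odd leftover element
    return s0 + s1;
}
====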
 No mention of auto-vectorization 
dmc doesn't do auto-vectorization. I agree that's an issue.
 
 
  > or whole program
 
 I looked into that, there's not a lot of oil in that well.
 
 
  > and instruction level optimizations the very latest GCC and LLVM are 
 now slowly adopting.
 
 Huh? Every compiler in existence has done, and always has done, 
 instruction level optimizations.
 
 
 Note: a lot of modern compilers expend tremendous effort optimizing 
 access to global variables (often screwing up multithreaded code in the 
 process). I've always viewed this as a crock, since modern programming 
 style eschews globals as much as possible.
Feb 14 2011
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Don:

 But still, cache effects are more important than instruction scheduling 
 in 99% of cases.
I agree. CPUs have prefetching instructions, but D doesn't expose them as intrinsics. A bit more high-level visibility for those instructions could be a plus today.

D being a system language, another possible idea is to partially unveil what's under the "array as a random access memory" illusion. The memory hierarchy makes array access times quite variable according to what level of the memory pyramid your data is stored in (http://dotnetperls.com/memory-hierarchy ). This is why numeric algorithms that work on large arrays now benefit a lot from tiling. The Chapel language has language-level support for a high level specification of tilings, while Fortran compilers perform some limited forms of tiling by themselves.

Bye,
bearophile
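As a concrete example of the tiling mentioned above, here is a sketch of a tiled matrix transpose in D; the tile size is a made-up placeholder, since the right value depends on the cache being targeted:

====
enum tileSize = 64;   // hypothetical; tune to the target's cache

// Transpose an n-by-n row-major matrix one tile at a time, so the
// working set of each tile stays resident in cache.
void transposeTiled(double[] dst, const(double)[] src, size_t n)
{
    for (size_t ii = 0; ii < n; ii += tileSize)
        for (size_t jj = 0; jj < n; jj += tileSize)
            for (size_t i = ii; i < ii + tileSize && i < n; ++i)
                for (size_t j = jj; j < jj + tileSize && j < n; ++j)
                    dst[j * n + i] = src[i * n + j];
}
====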
Feb 14 2011
next sibling parent reply Don <nospam nospam.com> writes:
bearophile wrote:
 Don:
 
 But still, cache effects are more important than instruction scheduling 
 in 99% of cases.
I agree. CPUs have prefetching instructions, but D doesn't expose them as intrinsics. A bit more higher level visibility for those instructions may be positive today.
A problem with that is that the prefetching instructions are vendor-specific. Also, it's quite difficult to use them correctly. If you put them in the wrong place, or use them too much, they slow your code down.
 
 Being D a system language, another possible idea is to partially unveil what's
under the "array as a random access memory" illusion. Memory hierarchy makes
array access times quite variable according to what level of the memory pyramid
your data is stored into (http://dotnetperls.com/memory-hierarchy ). This is
why numeric algorithms that work on large arrays enjoy tiling a lot now. The
Chapel language has language-level support for a high level specification of
tilings, while Fortran compilers perform some limited forms of tiling by
themselves.
I think it is impossible to be a modern systems language without some support for the memory hierarchy. I think we'll be able to take advantage of D's awesome metaprogramming to support cache-aware algorithms. As a first step, I added cache size determination to core.cpuid some time ago. We have a long way to go, still.
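For reference, a minimal sketch of reading those cache sizes back out of core.cpuid; the field names below are from my reading of druntime and should be double-checked:

====
import core.cpuid;
import std.stdio;

void main()
{
    // dataCaches describes each data cache level the CPU reports;
    // size is in KB and lineSize in bytes.
    foreach (level, cache; dataCaches)
        writefln("L%s data cache: %s KB, %s-byte lines, %s-way",
                 level + 1, cache.size, cache.lineSize,
                 cache.associativity);
}
====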
Feb 14 2011
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

Huh, I simply could never find a document about how to use those which gave me
any comfortable sense that the author knew what he was talking about.<
http://www.agner.org/optimize/

------------------

Don:
A problem with that, is that the prefetching instructions are vendor-specific.<
Right. Then I suggest some higher-level annotations (pragmas?) that the programmer uses to better state the temporal semantics of memory accesses in a performance-critical part of D code.
Also, it's quite difficult to use them correctly. If you put them in the wrong
place, or use them too much, they slow your code down.<
CPU caches have a simple purpose. The speed of light is finite (how much distance does light travel in vacuum/doped silicon during a clock cycle of a 5 GHz POWER6 CPU? http://en.wikipedia.org/wiki/POWER6 ), and finding one thing among many things is slower than finding it among a few. So you speed up your memory accesses if you read information from a smaller group of data located closer to you.

Most CPUs don't have a little faster memory that you manage yourself (http://en.wikipedia.org/wiki/Scratchpad_RAM ); the CPUs copy data from/to cache levels by themselves, so on such CPUs the illusion of a flat memory is at the hardware level, not just at the C language level. Caches manage their memory in a few different ways, and bigger CPUs often offer ways to alter those behaviors a little, using special instructions.
Feb 15 2011
parent Don <nospam nospam.com> writes:
bearophile wrote:
 Walter:
 
 Huh, I simply could never find a document about how to use those which gave me
any comfortable sense that the author knew what he was talking about.<
http://www.agner.org/optimize/ ------------------ Don:
 A problem with that, is that the prefetching instructions are vendor-specific.<
Right. Then I suggest some higher-level annotations (pragmas?) that the programmer uses to better state the temporal semantics of memory accesses in a performance-critical part of D code.
 Also, it's quite difficult to use them correctly. If you put them in the wrong
place, or use them too much, they slow your code down.<
CPU caches have a simple purpose. Light speed is finite (how much distance does light travel in vacuum/doped silicon during a clock cycle of a 5 GHz POWER6 CPU? http://en.wikipedia.org/wiki/POWER6 ), and finding one thing among many things is slower than finding among few ones. So you speed up your memory accesses if you read information from a smaller group of data located closer to you. Most CPUs don't have a little faster memory that you manage yourself (http://en.wikipedia.org/wiki/Scratchpad_RAM ), the CPUs copy data from/to cache levels by themselves, so on such CPUs the illusion of a flat memory is at the hardware level, not just at C language level. Cache manage their memory in few different ways, often bigger CPUs offer ways to alter such ways a little, using special instructions.
The main difference is how they keep coherence across different core caches and in what situations they store back data from the cache to RAM.

I think you may be confusing prefetch instructions with non-temporal stores. The problem with prefetch instructions is that they interfere with the hardware prefetch mechanism. The hardware prefetch is actually very good, and it's only under specific circumstances that a manual prefetch can beat it. I think it's unlikely that you can use prefetching beneficially, unless you've looked at the generated asm code.
 In some cases in your program you want to read from an array, and store data
inside it again and another one too, but you never want to store far away data
in the first one. There are few other common patterns of memory usage. In
theory a normal language like Fortran is enough to specify what memory you want
to read or write and when you want to do it. In practice today compilers are
not so good at inferring such semantics, so some high level annotations
probably help. In future maybe compilers will get better, so they will ignore
those annotations, just like they often ignore "register" annotations. Being
system-level programming languages practical things, adding annotations is not
bad, even if 5-10 years later those annotations become less useful.
Here you're definitely talking about non-temporal stores. Yes, there is some chance that an annotation for non-temporal stores could be beneficial.
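For the curious, a hedged sketch of a non-temporal store as it can be written today, assuming DMD's inline assembler accepts the SSE2 MOVNTI opcode; on hardware the store bypasses the cache, so filling a large write-only buffer doesn't evict useful data:

====
// Store `value` through `p` without pulling the line into the cache.
void streamStore(int* p, int value)
{
    asm
    {
        mov ECX, p;          // destination address
        mov EAX, value;      // data to store
        movnti [ECX], EAX;   // non-temporal (cache-bypassing) store
    }
    // a store fence (sfence) is needed before other threads may read
}
====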
Feb 15 2011
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
bearophile wrote:
 I agree. CPUs have prefetching instructions, but D doesn't expose them as
 intrinsics. A bit more higher level visibility for those instructions may be
 positive today.
Huh, I simply could never find a document about how to use those which gave me any comfortable sense that the author knew what he was talking about. The same goes for the memory fence instructions. Talk to 3 experts about them, and you get 3 wildly different answers. The Intel docs are zero help.
Feb 14 2011
prev sibling parent spir <denis.spir gmail.com> writes:
On 02/15/2011 03:47 AM, bearophile wrote:
 Don:

 But still, cache effects are more important than instruction scheduling
 in 99% of cases.
I agree. CPUs have prefetching instructions, but D doesn't expose them as intrinsics. A bit more higher level visibility for those instructions may be positive today. Being D a system language, another possible idea is to partially unveil what's under the "array as a random access memory" illusion.
By the way, what does D rewrite:

foreach (e; array) { f(e); }

to? I would guess something along the lines of:

auto p = array.ptr;
auto end = array.ptr + array.length;
while (p != end) { f(*p); ++p; }

?

Denis
--
_________________
vita es estrany
spir.wikidot.com
Feb 15 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Don wrote:
 In hand-coded asm, instruction scheduling still gives more than half of 
 the same benefit that it used to do. But, it's become ten times more 
 difficult. You have to use Agner Fog's manuals, not Intel/AMD.
 
 For example:
 (1) a common bottleneck on all Intel processors, is that you can only 
 read from three registers per cycle, but you can also read from any 
 register which has been modified in the last three cycles.
 (2) it's important to break dependency chains.
 
 On the BigInt code, instruction scheduling gave a speedup of ~40%.
Wow. I didn't know that. Do any compilers currently schedule this stuff? Any chance you want to take a look at cgsched.c? I had great success using the same algorithm for the quite different Pentium and P6 scheduling minutiae.
Feb 14 2011
parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 Don wrote:
 In hand-coded asm, instruction scheduling still gives more than half 
 of the same benefit that it used to do. But, it's become ten times 
 more difficult. You have to use Agner Fog's manuals, not Intel/AMD.

 For example:
 (1) a common bottleneck on all Intel processors, is that you can only 
 read from three registers per cycle, but you can also read from any 
 register which has been modified in the last three cycles.
 (2) it's important to break dependency chains.

 On the BigInt code, instruction scheduling gave a speedup of ~40%.
Wow. I didn't know that. Do any compilers currently schedule this stuff?
Intel probably does. I don't think any others do a very good job. Agner told me that he had had no success in getting compiler vendors to be interested in his work.
 Any chance you want to take a look at cgsched.c? I had great success 
 using the same algorithm for the quite different Pentium and P6 
 scheduling minutia.
That would really be fun. BTW, the current Intel processors are basically the same as the Pentium Pro, with a few improvements. The strange thing is, because of all of the reordering that happens, swapping the order of two (non-dependent) instructions makes no difference at all. So you always need to look at every instruction in a loop before you can do any scheduling.
Feb 15 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Don wrote:
 Walter Bright wrote:
 Don wrote:
 In hand-coded asm, instruction scheduling still gives more than half 
 of the same benefit that it used to do. But, it's become ten times 
 more difficult. You have to use Agner Fog's manuals, not Intel/AMD.

 For example:
 (1) a common bottleneck on all Intel processors, is that you can only 
 read from three registers per cycle, but you can also read from any 
 register which has been modified in the last three cycles.
 (2) it's important to break dependency chains.

 On the BigInt code, instruction scheduling gave a speedup of ~40%.
Wow. I didn't know that. Do any compilers currently schedule this stuff?
Intel probably does. I don't think any others do a very good job. Agner told me that he had had no success in getting compiler vendors to be interested in his work.
Well, this one is. In fact, could we get Agner to actively help us out with this?
 Any chance you want to take a look at cgsched.c? I had great success 
 using the same algorithm for the quite different Pentium and P6 
 scheduling minutia.
That would really be fun. BTW, the current Intel processors are basically the same as Pentium Pro, with a few improvements. The strange thing is, because of all of the reordering that happens, swapping the order of two (non-dependent) instructions makes no difference at all. So you always need to look at every instruction in the a loop, before you can do any scheduling.
I was looking at Agner's document, and it looks like ordering the instructions in the 4-1-1 or 4-1-1-1 pattern for optimal decoding could work. This would fit right in with the way the scheduler works. I had thought that with the CPU automatically reordering instructions, scheduling them was obsolete.
Feb 15 2011
parent reply "nedbrek" <nedbrek yahoo.com> writes:
Hello all,

"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:ijeih9$2aso$2 digitalmars.com...
 Don wrote:
 That would really be fun.
 BTW, the current Intel processors are basically the same as Pentium Pro, 
 with a few improvements. The strange thing is, because of all of the 
 reordering that happens, swapping the order of two (non-dependent) 
 instructions makes no difference at all. So you always need to look at 
 every instruction in a loop, before you can do any scheduling.
I was looking at Agner's document, and it looks like ordering the instructions in the 4-1-1 or 4-1-1-1 for optimal decoding could work. This would fit right in with the way the scheduler works. I had thought that with the CPU automatically reordering instructions, that scheduling them was obsolete.
Reordering happens in the scheduler. A simple model is "Fetch", "Schedule", "Retire". Fetch and retire are done in program order. For code that is hitting well in the cache, the biggest bottleneck is that "4" decoder (the complex instruction decoder). Reducing the number of complex instructions will be a big win here (and settling them into the 4-1-1(-1) pattern).

Of course, on anything after Core 2, the "1" decoders can handle pushes, pops, and load-ops (r+=m) (although not load-op-store (m+=r)).

Also, "macro op fusion" allows you to get a branch along with the last instruction in decode, potentially giving you 5 macroinstructions per cycle from decode. Make sure it is the flags-producing instruction (cmp-br).

(I used to work for Intel :)

Ned
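A hedged sketch of that cmp-br pairing in DMD inline-assembler form; the loop itself is a made-up example, and the fusion comments describe the hardware, not anything the assembler verifies:

====
// Sum n ints. Keeping the flags-producing cmp immediately before
// the conditional branch lets the decoder macro-fuse the pair.
int sumInts(int* p, int n)
{
    asm
    {
        push ESI;                  // ESI is callee-saved
        mov ECX, p;
        mov EDX, n;
        xor EAX, EAX;
        xor ESI, ESI;
        test EDX, EDX;             // skip the loop entirely if n <= 0
        jle Ldone;
    Lloop:
        add EAX, [ECX + ESI * 4];
        inc ESI;
        cmp ESI, EDX;              // flags producer...
        jl  Lloop;                 // ...macro-fused with this branch
    Ldone:
        pop ESI;
    }
    // by DMD inline-asm convention, EAX holds the int return value
}
====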
Feb 18 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
nedbrek wrote:
 Reordering happens in the scheduler. A simple model is "Fetch", "Schedule", 
 "Retire".  Fetch and retire are done in program order.  For code that is 
 hitting well in the cache, the biggest bottleneck is that "4" decoder (the 
 complex instruction decoder).  Reducing the number of complex instructions 
 will be a big win here (and settling them into the 4-1-1(-1) pattern).
 
 Of course, on anything after Core 2, the "1" decoders can handle pushes, 
 pops, and load-ops (r+=m) (although not load-op-store (m+=r)).
 
 Also, "macro op fusion" allows you can get a branch along with the last 
 instruction in decode, potentially giving you 5 macroinstructions per cycle 
 from decode.  Make sure it is the flags producing instruction (cmp-br).
 
 (I used to work for Intel :)
I can't find any Intel documentation on this. Can you point me to some?
Feb 18 2011
parent reply "nedbrek" <nedbrek yahoo.com> writes:
Hello,

"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:ijnt3o$22dm$1 digitalmars.com...
 nedbrek wrote:
 Reordering happens in the scheduler. A simple model is "Fetch", 
 "Schedule", "Retire".  Fetch and retire are done in program order.  For 
 code that is hitting well in the cache, the biggest bottleneck is that 
 "4" decoder (the complex instruction decoder).  Reducing the number of 
 complex instructions will be a big win here (and settling them into the 
 4-1-1(-1) pattern).

 Of course, on anything after Core 2, the "1" decoders can handle pushes, 
 pops, and load-ops (r+=m) (although not load-op-store (m+=r)).

 Also, "macro op fusion" allows you can get a branch along with the last 
 instruction in decode, potentially giving you 5 macroinstructions per 
 cycle from decode.  Make sure it is the flags producing instruction 
 (cmp-br).
I can't find any Intel documentation on this. Can you point me to some?
The best available source is the optimization reference manual (http://www.intel.com/products/processor/manuals/). The latest version is 248966.pdf, which mentions "Decodes up to four instructions, or up to five with macro-fusion" (page 33). Also, page 36: "Macro-fusion merges two instructions into a single µop. Intel Core microarchitecture is capable of one macro-fusion per cycle in 32-bit operation". It's unclear if macro fusion is off entirely in 64-bit mode, and whether this has changed in more recent processors...

They recommend against aligning code in general to 4-1-1-1 (also page 36), but I'd assume this is for a very targeted application. As always, it is best to run things both ways and measure.

The next section (2.1.2.5) talks about stack pointer tracking - which allows macro operations which used to be 2 uops (pop r -> load r = [esp]; inc esp) to become one (just the load). Pushes, which used to be 3 uops (store_address esp, store_data r, dec esp) should also be one fused uop (via sta/std fusion and store point tracking).

----

Another good resource is "Real World Tech", particularly:
http://www.realworldtech.com/page.cfm?ArticleID=RWT030906143144

Page 4 covers the front end: "Macro-op fusion lets the decoders combine two macro instructions into a single uop. Specifically, x86 compare or test instructions are fused with x86 jumps to produce a single uop and any decoder can perform this optimization."

----

Finally, the Intel Technology Journal has some really good details (when you can find them! :) For example:
http://download.intel.com/technology/itj/2003/volume07issue02/art03_pentiumm/vol7iss2_art03.pdf
details the original processor to use micro-op fusion (Pentium M or Banias - which was the base design for Dothan and Yonah). See page 26 (epage 7/18), which starts the section "MICRO-OPS FUSION". It gives a lot of detail of the store address / store data fusion.

Hope that helps,
Ned
Feb 19 2011
next sibling parent reply distcc <c p.p> writes:
nedbrek Wrote:

 Hello,
 
 "Walter Bright" <newshound2 digitalmars.com> wrote in message 
 news:ijnt3o$22dm$1 digitalmars.com...
 nedbrek wrote:
 Reordering happens in the scheduler. A simple model is "Fetch", 
 "Schedule", "Retire".  Fetch and retire are done in program order.  For 
 code that is hitting well in the cache, the biggest bottleneck is that 
 "4" decoder (the complex instruction decoder).  Reducing the number of 
 complex instructions will be a big win here (and settling them into the 
 4-1-1(-1) pattern).

 Of course, on anything after Core 2, the "1" decoders can handle pushes, 
 pops, and load-ops (r+=m) (although not load-op-store (m+=r)).

 Also, "macro op fusion" allows you can get a branch along with the last 
 instruction in decode, potentially giving you 5 macroinstructions per 
 cycle from decode.  Make sure it is the flags producing instruction 
 (cmp-br).
I can't find any Intel documentation on this. Can you point me to some?
The best available source is the optimization reference manual (http://www.intel.com/products/processor/manuals/). The latest version is 248966.pdf, which mentions "Decodes up to four instructions, or up to five with macro-fusion" (page 33). Also, page 36: "Macro-fusion merges two instructions into a single µop. Intel Core microarchitecture is capable of one macro-fusion per cycle in 32-bit operation". It's unclear if macro fusion is off entirely in 64-bit mode, and whether this has changed in more recent processors...
I remember reading that macro fusion is entirely off in 64-bit mode in Nehalem and earlier generations, and supported in Sandy Bridge. When generating code for loops, the compiler could also make use of the Loop Stream Detector to avoid i-cache misses.
Feb 19 2011
parent "nedbrek" <nedbrek yahoo.com> writes:
"distcc" <c p.p> wrote in message news:ijp9ji$1hvd$1 digitalmars.com...
 nedbrek Wrote:
 "Walter Bright" <newshound2 digitalmars.com> wrote in message
 news:ijnt3o$22dm$1 digitalmars.com...
 nedbrek wrote:
 Also, "macro op fusion" allows you can get a branch along with the last
 instruction in decode, potentially giving you 5 macroinstructions per
 cycle from decode.  Make sure it is the flags producing instruction
 (cmp-br).
I can't find any Intel documentation on this. Can you point me to some?
The best available source is the optimization reference manual (http://www.intel.com/products/processor/manuals/). The latest version is 248966.pdf, which mentions "Decodes up to four instructions, or up to five with macro-fusion" (page 33). Also, page 36: "Macro-fusion merges two instructions into a single µop. Intel Core microarchitecture is capable of one macro-fusion per cycle in 32-bit operation". It's unclear if macro fusion is off entirely in 64-bit mode, and whether this has changed in more recent processors...
I remember reading that macro fusion is entirely off in 64 bit mode in Nehalem and earlier generations, and supported in Sandy Bridge. When generating code for loops, the compiler could also make use of Loop Stream Coder to avoid i-cache misses.
Serves me right, it is a little further in, page 52: "In Intel microarchitecture (Nehalem), macro-fusion is supported in 64-bit mode, and the following instruction sequences are supported: (big list)". That would leave it off of the 65nm (Merom) and 45nm (Penryn) parts. These are identifiable through CPUID. The guide is broken up into sections based on the particular chip, so you end up having to read them all to get a general feel for things... Ned
Feb 19 2011
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
nedbrek wrote:
 Hope that helps,
Thanks, this is great info!
Feb 20 2011
prev sibling parent Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
retard wrote:

 Mon, 14 Feb 2011 20:10:47 +0100, Lutger Blijdestijn wrote:
 
 retard wrote:
 
 Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
 
 Unfortunately DMC is always out of the question because the
 performance is 10-20 (years) behind competition, fast compilation
 won't help it.
Can you please give a few links on this?
What kind of proof you need then? Just take some existing piece of code with high performance requirements and compile it with dmc. You lose. http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37 http://lists.boost.org/boost-testing/2005/06/1520.php http://www.digitalmars.com/d/archives/c++/chat/66.html http://www.drdobbs.com/cpp/184405450
That is ridiculous, have you even bothered to read your own links? In some of them dmc wins, others the differences are minimal and for all of them dmc is king in compilation times.
DMC doesn't clearly win in any of the tests and these are merely some naive examples I found by doing 5 minutes of googling. Seriously, take a closer look - the gcc version is over 5 years old. Nobody even bothers doing dmc benchmarks anymore, dmc is so out of the league. I repeat, this was about performance of the generated binaries, not compile times. Like I said: take some existing piece of code with high performance requirements and compile it with dmc. You lose. I honestly don't get what I need to prove here. Since you have no clue, presumably you aren't even using dmc and won't be considering it.
You go on ranting about dmc as if it is dwarfed by other compilers (which it might very well be), then provide 'proof' that doesn't prove this at all, and now I'm supposed to be convinced because the compilers in those benchmarks are so old? You lose. You don't have to prove anything, but when you do, don't do it with dubious and inconclusive benchmarks. That's all.
Feb 15 2011
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-02-14 00:28, retard wrote:
 Sun, 13 Feb 2011 15:06:46 -0800, Brad Roberts wrote:

 On 2/13/2011 3:01 PM, Walter Bright wrote:
 Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically
 for demangled names for missing symbols. This by itself would be a
 useful improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
No offense, but this argument gets kinda old and it's incredibly weak. Today's tooling expectations are higher. The audience isn't the same. And clearly people are asking for it. Even the past version of it I highly doubt no one cared, you just didn't hear from those that liked it. After all, few people go out of their way to talk about what they like, just what they don't.
Half of the readers have already added me to their killfile, but here goes some on-topic humor: http://www.winandmac.com/wp-content/uploads/2010/03/ipad-hp-fail.jpg
I had something similar with an attachable keyboard.
 Sometimes people don't yet know what they want.

 For example the reason we write portable C++ in some projects is that
 it's easier to switch between VC++, ICC, GCC, and LLVM. Whichever
 produces best performing code. Unfortunately DMC is always out of the
 question because the performance is 10-20 (years) behind competition, fast
 compilation won't help it.
-- /Jacob Carlborg
Feb 14 2011
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 13/02/2011 23:28, retard wrote:
 Sun, 13 Feb 2011 15:06:46 -0800, Brad Roberts wrote:

 On 2/13/2011 3:01 PM, Walter Bright wrote:
 Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically
 for demangled names for missing symbols. This by itself would be a
 useful improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
No offense, but this argument gets kinda old and it's incredibly weak. Today's tooling expectations are higher. The audience isn't the same. And clearly people are asking for it. Even the past version of it I highly doubt no one cared, you just didn't hear from those that liked it. After all, few people go out of their way to talk about what they like, just what they don't.
Half of the readers have already added me to their killfile, but here goes some on-topic humor: http://www.winandmac.com/wp-content/uploads/2010/03/ipad-hp-fail.jpg
The only fail here is that comparison.

--
Bruno Medeiros - Software Engineer
Feb 23 2011
prev sibling next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 14 Feb 2011 02:01:53 +0300, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically  
 for demangled names for missing symbols. This by itself would be a  
 useful improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
Many people are unthankful by nature. They complain about missing features while taking existing ones for granted. That doesn't mean no one cares about them. If no one cared, why would we even discuss those features?
Feb 13 2011
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
Denis Koroskin wrote:
 On Mon, 14 Feb 2011 02:01:53 +0300, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically 
 for demangled names for missing symbols. This by itself would be a 
 useful improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
Many people are unthankful by nature. They tell about missing features while taking existing ones as granted. It doesn't mean no one cares about them. If no one would care, why would we even discuss those features?
Tellingly, I accidentally broke that feature, and nobody complained about that, either.
Feb 13 2011
prev sibling parent spir <denis.spir gmail.com> writes:
On 02/14/2011 02:29 AM, Denis Koroskin wrote:
 On Mon, 14 Feb 2011 02:01:53 +0300, Walter Bright <newshound2 digitalmars.com>
 wrote:

 Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically for
 demangled names for missing symbols. This by itself would be a useful
 improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
Many people are unthankful by nature. They tell about missing features while taking existing ones as granted. It doesn't mean no one cares about them. If no one would care, why would we even discuss those features?
Very often, heavily discussed designs are somewhat good. When they are truly bad, one does not even know where or how to start criticizing... We just feel their wrongness, but expressing it is hard, let alone proposing improvements; so we wish for a blank page. Good designs show their bugs much more obviously, so everyone can join the critic dance ;-)

Denis
--
_________________
vita es estrany
spir.wikidot.com
Feb 14 2011
prev sibling next sibling parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Walter Bright wrote:

 Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically
 for demangled names for missing symbols. This by itself would be a
 useful improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
Let me take the opportunity to say I care about an unrelated usability feature: the spelling suggestion. However small, it's pretty nice, so thanks for doing that.
Feb 14 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Lutger Blijdestijn wrote:
 Let me take the opportunity to say I care about an unrelated usability 
 feature: the spelling suggestion. However small it's pretty nice so thanks 
 for doing that.
I like that one too, I liked it so much I wired it into dmc++ as well!
Feb 14 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-02-14 00:01, Walter Bright wrote:
 Michel Fortin wrote:
 But note I was replying to your reply to Denis who asked specifically
 for demangled names for missing symbols. This by itself would be a
 useful improvement.
I agree with that, but there's a caveat. I did such a thing years ago for C++ and Optlink. Nobody cared, including the people who asked for that feature. It's a bit demotivating to bother doing that again.
Maybe you can give it another try; there's a completely new community here now (I assume). On the other hand, that's unfortunately how people behave. They complain loudly when there's something they don't like and sit silently when they're happy.
Feb 14 2011
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sun, 13 Feb 2011 21:12:02 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.
Not without reading the .o files passed to the linker, and the libraries, and figuring out what would be pulled in from those libraries. In essence, the compiler would have to become a linker.
You are trying to solve a much bigger problem, which indeed sounds like a lot of effort for something so insignificant. What I'm talking about is much simpler. Let's take two cases which together cover over 99% of such failures when using DMD. In both cases the user passes only .d files to DMD, no extra .obj or .lib files, as is the case most of the time:

1) The user forgot to declare main(). If you don't pass the -c or -lib switches to the compiler, it's reasonable to expect that the user wants to compile and link an executable. But DMD knows that there is no D main() symbol in the files passed to it! So it can print a nice error message without having to run the linker to print its ugly one. (See the sketch after this message.)

2) The user didn't pass all of his program's modules to the compiler. By far the most common cause; we've discussed this one before. It only requires knowing whether a given module is part of the standard library or not. Even simply doing it for modules present in the current directory would help. I know it's not consistent, but neither is import hinting for certain standard library functions, and both are great ideas.

-- 
Best regards,
 Vladimir                            mailto:vladimir thecybershadow.net
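A minimal sketch in D of the first check described above. All names here (ModuleRec, checkForMain, compileOnly, makeLib) are hypothetical illustrations, not actual dmd internals:
====
import std.algorithm : any;
import std.stdio : stderr;

// What the front end already knows about each module it compiled.
struct ModuleRec
{
    string name;          // module name, e.g. "my.pkg.mod"
    bool declaresMain;    // set during semantic analysis
}

// Returns true if linking can proceed; prints a friendly error otherwise.
// compileOnly and makeLib stand in for dmd's -c and -lib switches.
bool checkForMain(ModuleRec[] modules, bool compileOnly, bool makeLib)
{
    if (compileOnly || makeLib)
        return true;                        // no executable wanted, nothing to check
    if (modules.any!(m => m.declaresMain))
        return true;                        // a main() exists; let the linker run
    stderr.writeln("Error: none of the modules passed to the compiler ",
                   "declares a main() function; cannot link an executable.");
    return false;
}

void main()
{
    // Demo: two modules, neither declares main, so the friendly error prints.
    auto mods = [ModuleRec("app", false), ModuleRec("util", false)];
    checkForMain(mods, false, false);
}
====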
Feb 14 2011
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Don't forget DLLs.

But why not just change the linker error message from:
OPTLINK : Warning 134: No Start Address

to:
OPTLINK : Warning 134: No Start Address
"Are you missing a main() function?"
Feb 14 2011
parent reply Don <nospam nospam.com> writes:
Andrej Mitrovic wrote:
 Don't forget DLLs.
 
 But why not just change the linker error message from:
 OPTLINK : Warning 134: No Start Address
 
 to:
 OPTLINK : Warning 134: No Start Address
 "Are you missing a main() function?"
Why is that a "warning"? Why on earth does it create a corrupt exe file, instead of reporting an error???
Feb 14 2011
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/14/11, Don <nospam nospam.com> wrote:
 Why is that a "warning"?
 Why on earth does it create a corrupt exe file, instead of reporting an
 error???
I've no idea. But Optlink actually has a switch you can use to disable outputting corrupt executables. I've no idea what the use case for this is.
Feb 14 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 I've no idea. But Optlink actually has a switch you can use to disable
 outputting corrupt executables. I've no idea what the use case for
 this is.
It's from the olden days where you could use optlink to create all sorts of specialized binary files, such as ones you'll be blowing into EEPROMs. Those did not have normal start addresses.
Feb 14 2011
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-02-13 20:12, Walter Bright wrote:
 Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.
Not without reading the .o files passed to the linker, and the libraries, and figuring out what would be pulled in from those libraries. In essence, the compiler would have to become a linker. It's not impossible, but is a tremendous amount of work in order to improve one error message, and one error message that generations of C and C++ programmers are comfortable dealing with.
I agree with you here except for the last sentence. Please stop saying it's ok just because it's ok in C/C++. Isn't that why we use D, because we're not satisfied with C/C++?

-- 
/Jacob Carlborg
Feb 14 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Jacob Carlborg wrote:
 I agree with you here except for the last sentence. Please stop saying 
 it's ok just because it's ok in C/C++.
I bring that up because the thread started with the implication that D was worse than C/C++ in this regard.
Feb 14 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-02-14 19:07, Walter Bright wrote:
 Jacob Carlborg wrote:
 I agree with you here except for the last sentence. Please stop saying
 it's ok just because it's ok in C/C++.
I bring that up because the thread started with the implication that D was worse than C/C++ in this regard.
Fair enough.

-- 
/Jacob Carlborg
Feb 14 2011
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sun, 13 Feb 2011 14:12:02 -0500, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.
Not without reading the .o files passed to the linker, and the libraries, and figuring out what would be pulled in from those libraries. In essence, the compiler would have to become a linker. It's not impossible, but is a tremendous amount of work in order to improve one error message, and one error message that generations of C and C++ programmers are comfortable dealing with.
I'm not saying that this should be done and is worth the tremendous effort. However, when linking a C++ app without a main, here is what I get:

/usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/crt1.o: In function `_start':
(.text+0x18): undefined reference to `main'

When linking a D app without a main, we get:

/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(dmain2_517_1a5.o): In function `_D2rt6dmain24mainUiPPaZi7runMainMFZv':
src/rt/dmain2.d:(.text._D2rt6dmain24mainUiPPaZi7runMainMFZv+0x16): undefined reference to `_Dmain'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(deh2_4e7_525.o): In function `_D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable':
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x4): undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0xc): undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x13): undefined reference to `_deh_end'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x37): undefined reference to `_deh_end'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_eb_258.o): In function `_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x1d): undefined reference to `_tlsend'
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x24): undefined reference to `_tlsstart'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_ee_6e4.o): In function `thread_attachThis':
src/core/thread.d:(.text.thread_attachThis+0x53): undefined reference to `_tlsstart'
src/core/thread.d:(.text.thread_attachThis+0x5c): undefined reference to `_tlsend'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_e8_713.o): In function `thread_entryPoint':
src/core/thread.d:(.text.thread_entryPoint+0x29): undefined reference to `_tlsend'
src/core/thread.d:(.text.thread_entryPoint+0x2f): undefined reference to `_tlsstart'
collect2: ld returned 1 exit status
--- errorlevel 1

Let's not pretend that generations of C/C++ coders are going to attribute this slew of errors to a missing main function. The first time I see this, I'm going to think I missed something else.

I understand that to fix this, we need the linker to be more helpful, or we need to make dmd more helpful. I don't know how much effort it is, or how much it's worth it, I just wanted to point out that your statement about equivalence to C++ is stretching it.

I personally think we need to get the linker to demangle symbols better. That would go a long way...

-Steve
Feb 14 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Steven Schveighoffer wrote:
 On Sun, 13 Feb 2011 14:12:02 -0500, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:

 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.
Not without reading the .o files passed to the linker, and the libraries, and figuring out what would be pulled in from those libraries. In essence, the compiler would have to become a linker. It's not impossible, but is a tremendous amount of work in order to improve one error message, and one error message that generations of C and C++ programmers are comfortable dealing with.
I'm not saying that this should be done and is worth the tremendous effort. However, when linking a C++ app without a main, here is what I get:

/usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/crt1.o: In function `_start':
(.text+0x18): undefined reference to `main'

When linking a D app without a main, we get:

[... same linker output as quoted above, snipped ...]

Let's not pretend that generations of C/C++ coders are going to attribute this slew of errors to a missing main function.
I understand what you're saying, but experienced C/C++ programmers are used to paying attention only to the first error message :-)
 I personally think we need to get the linker to demangle symbols 
 better.  That would go a long way...
Not for the above messages.
Feb 14 2011
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 14 Feb 2011 13:24:26 -0500, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 On Sun, 13 Feb 2011 14:12:02 -0500, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.
Not without reading the .o files passed to the linker, and the libraries, and figuring out what would be pulled in from those libraries. In essence, the compiler would have to become a linker. It's not impossible, but is a tremendous amount of work in order to improve one error message, and one error message that generations of C and C++ programmers are comfortable dealing with.
I'm not saying that this should be done and is worth the tremendous effort. However, when linking a C++ app without a main, here is what I get:

/usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/crt1.o: In function `_start':
(.text+0x18): undefined reference to `main'

When linking a D app without a main, we get:

[... same linker output as quoted above, snipped ...]

Let's not pretend that generations of C/C++ coders are going to attribute this slew of errors to a missing main function.
I understand what you're saying, but experienced C/C++ programmers are used to paying attention only to the first error message :-)
Really? I find that in a mess of linker errors, the relevant error isn't always on the first line. It doesn't help that the function reported missing is not called main (as it is in the D source file). But like I said, it's not critical -- the error is listed, it's just not as user-friendly as the C++ error.
 I personally think we need to get the linker to demangle symbols  
 better.  That would go a long way...
Not for the above messages.
I meant demangling things like _D2rt6dmain24mainUiPPaZi7runMainMFZv. Note how the _Dmain is buried between some of these large symbols. Those seemingly random nonsense symbols make the whole error listing seem unreadable.

-Steve
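For what it's worth, druntime already ships the demangler as a library (core.demangle), so a small filter can clean up linker output today. A sketch only: the tool name and the regex below are made up for illustration, and this is not a dmd or Optlink feature:
====
// ddemangle_filter.d -- hypothetical helper, not shipped with dmd.
// Pipe linker output through it to demangle D symbols:
//   dmd foo.d 2>&1 | ./ddemangle_filter
import core.demangle : demangle;
import std.regex : regex, replaceAll;
import std.stdio : stdin, writeln;

void main()
{
    // D mangled names start with "_D" followed by digits and identifiers.
    auto re = regex(`_D[0-9a-zA-Z_]+`);
    foreach (line; stdin.byLine)
    {
        // demangle() returns its input unchanged when it can't decode a
        // symbol, so non-D names simply pass through untouched.
        writeln(replaceAll!(m => demangle(m.hit).idup)(line.idup, re));
    }
}
====
Run over a dump like the one above, the _D... names would come out as readable D signatures while everything else stays put.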
Feb 14 2011
prev sibling next sibling parent spir <denis.spir gmail.com> writes:
On 02/14/2011 06:54 PM, Steven Schveighoffer wrote:
 On Sun, 13 Feb 2011 14:12:02 -0500, Walter Bright <newshound2 digitalmars.com>
 wrote:

 Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.
Not without reading the .o files passed to the linker, and the libraries, and figuring out what would be pulled in from those libraries. In essence, the compiler would have to become a linker. It's not impossible, but is a tremendous amount of work in order to improve one error message, and one error message that generations of C and C++ programmers are comfortable dealing with.
I'm not saying that this should be done and is worth the tremendous effort. However, when linking a C++ app without a main, here is what I get:

/usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/crt1.o: In function `_start':
(.text+0x18): undefined reference to `main'

When linking a D app without a main, we get:

[... same linker output as quoted earlier in the thread, snipped ...]

Let's not pretend that generations of C/C++ coders are going to attribute this slew of errors to a missing main function. The first time I see this, I'm going to think I missed something else.

I understand that to fix this, we need the linker to be more helpful, or we need to make dmd more helpful. I don't know how much effort it is, or how much it's worth it, I just wanted to point out that your statement about equivalence to C++ is stretching it.

I personally think we need to get the linker to demangle symbols better. That would go a long way...
The "public" problem is not with the (admittedly very bad) error message in iself. The problem imo is that newcomers have high chances to stumble on this merroges (or points of similar friendliness) at the very start of their adventures with D, and thus think D tools just treat programmers that way, and the D community finds this just normal. Oops! I would be happy dmd to assume the main func is supposed to be located in the very first module passed on the command-line, if this can help. What do you think? "Error: cannot find main() function in module 'app.d'." (But this would not solve the case of /multiple/ mains, which happens to me several times a day, namely each time I have run an imported module's test suite separately ;-) Denis -- _________________ vita es estrany spir.wikidot.com
Feb 14 2011
prev sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I think this void main() issue is blown out of proportion. They'll see
the error message once, and they won't know what it means. Ok.

But the second time, they'll know. No start address == no main. Maybe
the linker should just add another line saying that you might be
missing main, and that's it.

You guys want to rewrite the compiler for this one silly issue, come on!
Feb 14 2011
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 14 Feb 2011 14:24:05 -0500, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 I think this void main() issue is blown out of proportion. They'll see
 the error message once, and they won't know what it means. Ok.

 But the second time, they'll know. No start address == no main. Maybe
 the linker should just add another line saying that you might be
 missing main, and that's it.

 You guys want to rewrite the compiler for this one silly issue, come on!
No, not at all (at least for me). I'm just pointing out that the error that occurs when main is missing (probably one of the more common linker errors) is far more confusing in D than it is in C++. That doesn't mean D is unusable, or that Walter should drop everything and fix this problem, or that C++ is better. It's just an observation.

I think linker errors in general are one of those things that few people understand, and most cope with just pattern recognition: "Oh, I see _deh_start, probably forgot main()", with no regard to logic. :)

"Fixing" the linker so it suggests the right thing is likely impossible, because the linker doesn't know where everything is or what one must include in order to satisfy it. That being said, fixing the linker so it demangles symbols would make the errors 10x easier to understand.

-Steve
Feb 14 2011
parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
On Mon, 14 Feb 2011 15:03:01 -0500, Steven Schveighoffer wrote:

 I think linker errors in general are one of those things that few people
 understand, and most cope with just pattern recognition "Oh, I see
 _deh_start, probably forgot main()" with no regards to logic. :) 
Please get out of my head. :)

-Lars
Feb 15 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-02-13 19:42, Vladimir Panteleev wrote:
 On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
That's not true. The compiler has knowledge of what symbols will be passed to the linker, and can display its own, much nicer error messages. I've mentioned this in our previous discussion on this topic.
Would the compiler be able to figure out whether you're building a library or an executable?

-- 
/Jacob Carlborg
Feb 14 2011
prev sibling parent reply gölgeliyele <usuldan gmail.com> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 
 golgeliyele wrote:
 I don't think C++ and gcc set a good bar here.
Short of writing our own linker, we're a bit stuck with what ld does.
I am not necessarily questioning the use of ld (or a different linker on a different platform). What intrigues me is: is it possible to avoid leaking ld errors to the end user? For instance, with the compilation model of feeding all the .d files to dmd, we should be able to check whether main() is missing before going to the linker. I don't think supporting multiple compilation models is a good thing. I really hope you guys can visit this issue sometime soon.
Feb 13 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
On 13.02.2011 20:01, gölgeliyele wrote:
 I don't think supporting multiple compilation models is a good thing.
I think incremental compilation is a very useful feature for large projects, so it should be available. Also, the possibility to link in .o files that were generated from C code with D programs is a must, so only supporting the model of feeding all .d files to dmd is not an option.

But the model of feeding all .d files to dmd is also very useful and should remain possible.

So *I* /do/ think that supporting multiple compilation models is a good thing :-)

I think we can live with having the linker output something like
"Undefined symbols:
  "__Dmain", referenced from:
      _D2rt6dmain24mainUiPPaZi7runMainMFZv in libphobos2.a(dmain2_513_1a5.o)"

It would make sense to have a "Troubleshooting" section on the homepage that mentions this and other common problems, though.

Cheers,
- Daniel
Feb 13 2011
parent gölgeliyele <usuldan gmail.com> writes:
Daniel Gibson <metalcaedes gmail.com> wrote:
 
 On 13.02.2011 20:01, gölgeliyele wrote:
 I don't think supporting multiple compilation models is a good thing.
I think incremental compilation is a very useful feature for large projects, so it should be available. Also, the possibility to link in .o files that were generated from C code with D programs is a must, so only supporting the model of feeding all .d files to dmd is not an option.

But the model of feeding all .d files to dmd is also very useful and should remain possible.

So *I* /do/ think that supporting multiple compilation models is a good thing :-)

Ok, I might have misspoken there. I am not against incremental compilation. What the heck, the lack of it is the reason I started the thread. However, I would like to see a coherent compilation model. Feeding all .d files to the compiler does not necessarily mean that it needs to be a from-scratch compilation. Isn't the need for tools like xfBuild an indication that something is wrong here? If you can point me to a write-up that describes how to set up incremental compilation for a large project, without using advanced tools like xfBuild, that would be very helpful.
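One way to approximate such a setup with plain dmd is to compile each changed module separately with -c and an explicit -of (so the package hierarchy is preserved in the output tree), then link the objects in one final dmd invocation. The paths below are only an example layout, not something prescribed by the dmd docs, and the output directories must already exist:
====
dmd -c src/my/pkg/mod.d   -oflib/my/pkg/mod.o
dmd -c src/my/pkg/other.d -oflib/my/pkg/other.o
dmd lib/my/pkg/mod.o lib/my/pkg/other.o -ofbin/app
====
A make rule can then rebuild only the .o files whose sources have changed.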
Feb 13 2011
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
gölgeliyele wrote:
 I don't think 
 supporting multiple compilation models is a good thing.
I think it's necessary if one is to support both small and large projects, and all the different ways one could use a D compiler as a tool.
Feb 13 2011
prev sibling next sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2011-02-13 01:00:57 -0500, golgeliyele <usuldan gmail.com> said:

 IMO, despite all the innovations the D project brings, the lack of 
 pretty packaging and presentation is hurting it. I have observed 
 changes for the better lately. Such as the TDPL book, the github move, 
 the new web page (honestly, the digitalmars page was and still is a 
 liability for D), and may be a new web forum interface(?).
Since you're on a Mac, perhaps you'd be interested in D for Xcode (which I maintain). It abstracts away many of these complexities. <http://michelf.com/projects/d-for-xcode/>

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Feb 13 2011
prev sibling parent Don <nospam nospam.com> writes:
golgeliyele wrote:
 I am relatively new to D. As a long time C++ coder, I love D. Recently, I have
started doing some coding with D. One of the things that
 bothered me was the 'perceived' quality of the tooling. There are some
relatively minor things that make the tooling look bad.
 The error reporting has issues as well. I noticed that the compiler leaks low
level errors to the user. If you forget to add a main to your
 app or misspell it, you get errors like:
 ====
 Undefined symbols:
   "__Dmain", referenced from:
       _D2rt6dmain24mainUiPPaZi7runMainMFZv in libphobos2.a(dmain2_513_1a5.o)
 ====
 I mean, wow, this should really be handled better.
Not solvable in general, but still solvable in the cases that matter. Created a bug report: http://d.puremagic.com/issues/show_bug.cgi?id=5573
Feb 14 2011