
digitalmars.D - GSOC Linker project

reply "Pierre LeMoine" <yarr.luben+dlang gmail.com> writes:
Hi!

I'm interested in starting a project to make a linker besides 
optlink for dmd on windows. If possible it'd be cool to run it as 
a gsoc-project, but if that's not an option I'll try to get it 
admitted as a soc-project at my university.

Anyway, the project would aim to be a replacement or alternative 
to optlink on windows. I've personally encountered quite a few 
seemingly random problems with optlink, and the error messages 
are not exactly friendly. My vision is to create a linker in a 
relatively modern language (D) and to release the project as open 
source.

So, I'm curious about some things: Is it too late to get this 
accepted as a summer of code project? Are there any current 
alternative linkers for dmd on windows, or any current projects 
aiming to create one? And do any of you know of an "Everything you 
need to know to write the best linker ever" resource center? ;]

/Pierre
May 03 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 04-05-2012 00:47, Pierre LeMoine wrote:
 Hi!

 I'm interested in starting a project to make a linker besides optlink
 for dmd on windows. If possible it'd be cool to run it as a
 gsoc-project, but if that's not an option I'll try to get it admitted as
 a soc-project at my university.
Absolutely possible, though too late for this year's GSoC. If you're still interested in working on it for GSoC 2013 (assuming Google runs another GSoC, which they most likely will), then be sure to submit a proposal!
 Anyway, the project would aim to be a replacement or alternative to
 optlink on windows. I've personally encountered quite a few seemingly
 random problems with optlink, and the error messages are not exactly
 friendly. My vision is to create a linker in a relatively modern
 language (D) and to release the project as open source.
Sounds like a good idea to me. Though in my personal opinion, you should try to make the linker as platform-agnostic as possible, so it's easy to adapt for new platforms / file formats.
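For instance, the format-specific parts could sit behind one small interface, so adding a new format means adding one implementation. A hypothetical sketch in D (invented names, not a real design):

    // Hypothetical: everything format-specific lives behind this interface.
    interface ObjectFormat
    {
        string name();                      // e.g. "OMF", "COFF", "ELF"
        bool matches(const(ubyte)[] data);  // sniff the header bytes
        ObjectFile parse(const(ubyte)[] data);
    }

    class ObjectFile
    {
        string[] definedSymbols;
        string[] undefinedSymbols;
    }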
 So, I'm curious about some things: Is it too late to get this accepted
 as a summer of code project? Are there any current alternative linkers
 for dmd on windows, or any current projects aiming to create one? And do
 any of you know of an "Everything you need to know to write the best
 linker ever" resource center? ;]
Too late for this year's GSoC. For another linker option, see Unilink. As for resources on linkers, I think your best bet is reading the LLVM and GCC source code. I think someone also started an LLVM (machine code) linker project recently, but I don't know where to find it.
 /Pierre
-- - Alex
May 03 2012
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, May 04, 2012 at 12:53:02AM +0200, Alex Rønne Petersen wrote:
 On 04-05-2012 00:47, Pierre LeMoine wrote:
[...]
Anyway, the project would aim to be a replacement or alternative to
optlink on windows. I've personally encountered quite a few seemingly
random problems with optlink, and the error messages are not exactly
friendly. My vision is to create a linker in a relatively modern
language (D) and to release the project as open source.
Sounds like a good idea to me. Though in my personal opinion, you should try to make the linker as platform-agnostic as possible, so it's easy to adapt for new platforms / file formats.
[...] The problem with writing linkers is that they are usually closely tied to implementation details on the host OS. At the very least, they must play nice with the OS's runtime dynamic linker (or *be* the dynamic linker themselves, like ld on the *nixes). They must also play nice with object files produced by other compilers on that platform, since otherwise it sorta defeats the purpose of replacing optlink in the first place. This means that they must understand all the intimate details of every common object file and executable format on that OS. The basic concept behind a linker is very simple, really; it's the implementation where the details get ugly.

To be frank, I question the wisdom of not just using ld on Posix systems... but OTOH, the world *needs* better linker technology than we currently have, so projects like this one are a good thing.

Linkers date from several decades ago, when programs could be broken up into separate, self-contained source files in a simple way. Things have changed a lot since then. Nowadays, we have template functions, virtual functions, dynamic libraries, etc., which require hacks like weak symbols to work properly. And we're *still* missing a sound conceptual framework for things like cross-module dead code elimination, cross-module template instantiation, duplicate code merging (like overlapping immutable arrays), etc. These things _sorta_ work right now, but they're sorta hacked on top of basic 30-year-old linker technology, rather than being part of a sound, conceptual linker paradigm.
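To show how simple the simple part is, here's a toy sketch in D of the core bookkeeping (invented names, no real object-format parsing):

    // Toy sketch: track defined and referenced symbols across object files.
    string[string] definedIn;      // symbol name -> file that defines it
    string[][string] referencedBy; // symbol name -> files that reference it

    void addObject(string file, string[] defs, string[] refs)
    {
        foreach (d; defs)
            definedIn[d] = file;   // a real linker must diagnose duplicates
        foreach (r; refs)
            referencedBy[r] ~= file;
    }

    // Symbols referenced somewhere but defined nowhere are your
    // "undefined symbol" errors.
    string[] unresolved()
    {
        import std.algorithm : filter;
        import std.array : array;
        return referencedBy.keys.filter!(s => s !in definedIn).array;
    }

Everything beyond that -- relocations, sections, weak symbols, the actual file formats -- is where the ugliness lives.

T

--
There are 10 kinds of people in the world: those who can count in binary, and those who can't.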
May 03 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 04-05-2012 01:57, H. S. Teoh wrote:
 On Fri, May 04, 2012 at 12:53:02AM +0200, Alex Rønne Petersen wrote:
 On 04-05-2012 00:47, Pierre LeMoine wrote:
[...]
 Anyway, the project would aim to be a replacement or alternative to
 optlink on windows. I've personally encountered quite a few seemingly
 random problems with optlink, and the error messages are not exactly
 friendly. My vision is to create a linker in a relatively modern
 language (D) and to release the project as open source.
Sounds like a good idea to me. Though in my personal opinion, you should try to make the linker as platform-agnostic as possible, so it's easy to adapt for new platforms / file formats.
[...] The problem with writing linkers is that they are usually closely tied to implementation details on the host OS. At the very least, they must play nice with the OS's runtime dynamic linker (or *be* the dynamic linker themselves, like ld on the *nixes). They must also play nice with object files produced by other compilers on that platform, since otherwise it sorta defeats the purpose of rewriting optlink in the first place. This means that they must understand all the intimate details of every common object file and executable format on that OS.
I know, but that doesn't mean you can't write the linker so that it actually *is* portable (unlike a certain other linker, which isn't portable at *all* ;).
 The basic concept behind a linker is very simple, really, but it's the
 implementation where details get ugly.

 To be frank, I question the wisdom of not just using ld on Posix
 systems... but OTOH, the world *needs* better linker technology than we
 currently have, so projects like this one are a good thing.
Well, there's currently an LLVM linker in the works. If anything, that's probably the way forward. But seeing as DMD is not using LLVM...
 Linkers date from several decades ago, where programs can be broken up
 into separate, self-contained source files in a simple way. Things have
 changed a lot since then.  Nowadays, we have template functions, virtual
 functions, dynamic libraries, etc., which require hacks like weak
 symbols to work properly. And we're *still* missing a sound conceptual
 framework for things like cross-module dead code elimination,
 cross-module template instantiation, duplicate code merging (like
 overlapping immutable arrays), etc.. These things _sorta_ work right
 now, but they're sorta hacked on top of basic 30-year-old linker
 technology, rather than being part of a sound, conceptual linker
 paradigm.


 T
-- - Alex
May 03 2012
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, May 04, 2012 at 02:43:34AM +0200, Alex Rønne Petersen wrote:
 On 04-05-2012 01:57, H. S. Teoh wrote:
[...]
The problem with writing linkers is that they are usually closely
tied to implementation details on the host OS.
[...]
 I know, but that doesn't mean you can't write the linker so that it
 actually *is* portable (unlike a certain other linker, which isn't portable at *all* ;).
True, you could have a properly designed generic framework that makes plugging in new OS-dependent code very easy. I believe something like this is done by GNU BFD (binutils & family, probably subsuming ld as well). But then, you might as well just use binutils in the first place. :-) The only catch is that Windows has its own conventions for these things, and binutils is (AFAIK) tied to Posix. [...]
To be frank, I question the wisdom of not just using ld on Posix
systems... but OTOH, the world *needs* better linker technology than
we currently have, so projects like this one are a good thing.
Well, there's currently an LLVM linker in the works. If anything, that's probably the way forward. But seeing as DMD is not using LLVM...
[...] As long as LDC is an option, I think all is well. :-) T -- Only boring people get bored. -- JM
May 03 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-05-04 01:57, H. S. Teoh wrote:

 To be frank, I question the wisdom of not just using ld on Posix
 systems... but OTOH, the world *needs* better linker technology than we
 currently have, so projects like this one are a good thing.
He can start with a version for Windows. If as much of the code as possible is generic and modularly designed, it should be easy to add support for new formats and platforms.
 Linkers date from several decades ago, where programs can be broken up
 into separate, self-contained source files in a simple way. Things have
 changed a lot since then.  Nowadays, we have template functions, virtual
 functions, dynamic libraries, etc., which require hacks like weak
 symbols to work properly. And we're *still* missing a sound conceptual
 framework for things like cross-module dead code elimination,
 cross-module template instantiation, duplicate code merging (like
 overlapping immutable arrays), etc.. These things _sorta_ work right
 now, but they're sorta hacked on top of basic 30-year-old linker
 technology, rather than being part of a sound, conceptual linker
 paradigm.
That would be really nice. -- /Jacob Carlborg
May 04 2012
prev sibling parent "Pierre LeMoine" <yarr.luben+dlang gmail.com> writes:
On Thursday, 3 May 2012 at 22:53:03 UTC, Alex Rønne Petersen
wrote:
 Absolutely possible, though too late for this year's GSoC. If 
 you're still interested in working on it for GSoC 2013 (if 
 Google decides to do another GSoC (which they most likely 
 will)), then be sure to submit a proposal!
Too bad for me i guess, but i'll still try to get into my university's SoC-program. And it'd be better to start the project now than to wait a year before starting ;p
 Sounds like a good idea to me. Though in my personal opinion, 
 you should try to make the linker as platform-agnostic as 
 possible, so it's easy to adapt for new platforms / file 
 formats.
Thanks! I'll try to make it modular and awesome in the end, but for a start i'll just aim to make a linker that's usable with dmd on windows. It's easier to make a good design after getting some more hands-on experience, i think.
 As for resources on linkers, I think your best bet is reading 
 the LLVM and GCC source code. I think someone also started an 
 LLVM (machine code) linker project recently, but I don't know 
 where to find it.
Guess i've got some interesting reading to do.. =) I've come across http://www.iecc.com/linker/ which is quite interesting to read. It seems to be "quite old", but i don't know how much linker infrastructure has progressed over the last ten years, so it's probably still reasonably up to date, i hope ;p
May 06 2012
prev sibling next sibling parent reply Trass3r <un known.com> writes:
 I'm interested in starting a project to make a linker besides optlink  
 for dmd on windows.
Imho changing dmd to use COFF (incl. 64-bit support) instead of that crappy OMF would be more beneficial than yet another linker.
 My vision is to create a linker in a relatively modern language (D) and  
 to release the project as open source.
If you do write a linker, then make it cross-platform right from the start, and modular so it can support all object file formats.
May 03 2012
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 03 May 2012 19:47:24 -0400, Trass3r <un known.com> wrote:

 I'm interested in starting a project to make a linker besides optlink  
 for dmd on windows.
Imho changing dmd to use COFF (incl. 64-bit support) instead of that crappy OMF would be more beneficial than yet another linker.
+1 -Steve
May 04 2012
prev sibling next sibling parent reply "foobar" <foo bar.com> writes:
On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:
 I'm interested in starting a project to make a linker besides 
 optlink for dmd on windows.
Imho changing dmd to use COFF (incl. 64-bit support) instead of that crappy OMF would be more beneficial than yet another linker.
 My vision is to create a linker in a relatively modern 
 language (D) and to release the project as open source.
If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
How about augmenting the object format so that libraries would be self-contained and would not require additional .di files? Is this possible with optlink, e.g. by adding special sections that would otherwise be ignored? I think that's what Go did in their linker, but I don't know what format they use, or whether it's something specific to Go or general.
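Something like this, say (a purely hypothetical layout with an invented tag, just to make the idea concrete):

    // Hypothetical: an extra section carrying the serialized interface,
    // which a traditional linker would simply skip over.
    struct DInterfaceSection
    {
        char[8] magic = "DIFACE\0\0"; // made-up tag, not any real format
        uint ver;                     // layout version
        uint payloadLength;           // byte length of the payload
        // payload: the serialized declarations a .di file holds today
    }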
May 04 2012
next sibling parent reply simendsjo <simendsjo gmail.com> writes:
On Fri, 04 May 2012 18:57:44 +0200, foobar <foo bar.com> wrote:

 On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:
 I'm interested in starting a project to make a linker besides optlink  
 for dmd on windows.
Imho changing dmd to use COFF (incl. 64-bit support) instead of that crappy OMF would be more beneficial than yet another linker.
 My vision is to create a linker in a relatively modern language (D)  
 and to release the project as open source.
If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
How about augmenting the object format so that libraries would be self-contained and would not require additional .di files? Is this possible with optlink, e.g. by adding special sections that would otherwise be ignored? I think that's what Go did in their linker, but I don't know what format they use, or whether it's something specific to Go or general.
http://dsource.org/projects/ddl
May 04 2012
parent "foobar" <foo bar.com> writes:
On Friday, 4 May 2012 at 17:52:54 UTC, simendsjo wrote:
 On Fri, 04 May 2012 18:57:44 +0200, foobar <foo bar.com> wrote:

 On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:
 I'm interested in starting a project to make a linker 
 besides optlink for dmd on windows.
Imho changing dmd to use COFF (incl. 64-bit support) instead of that crappy OMF would be more beneficial than yet another linker.
 My vision is to create a linker in a relatively modern 
 language (D) and to release the project as open source.
If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
How about augmenting the object format so that libraries would be self-contained and would not require additional .di files? Is this possible with optlink, e.g. by adding special sections that would otherwise be ignored? I think that's what Go did in their linker, but I don't know what format they use, or whether it's something specific to Go or general.
http://dsource.org/projects/ddl
This is D1 only and AFAIK was abandoned long ago. It was a very good idea though, and should be adopted by the "official" D tool chain.
May 04 2012
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 5/4/12, foobar <foo bar.com> wrote:
 How about augmenting the object format so that libraries would be
 self contained and would not require additional .di files? Is
 this possible optlink by e.g. adding special sections that would
 be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 04 2012
next sibling parent reply "foobar" <foo bar.com> writes:
On Friday, 4 May 2012 at 17:54:47 UTC, Andrej Mitrovic wrote:
 On 5/4/12, foobar <foo bar.com> wrote:
 How about augmenting the object format so that libraries would 
 be
 self contained and would not require additional .di files? Is
 this possible optlink by e.g. adding special sections that 
 would
 be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
How about using the documentation? It's meant to be consumed by humans and comes (or should, if it doesn't yet) with nicely formatted explanations. The di files are mostly meant to be machine read (e.g. by the compiler), and this belongs as part of the library file in order to provide ease of use and maintain the relationship between the binary code and its interface. Maintaining two sets of files that could easily get out of sync and *not* using the docs is way more insane.
May 04 2012
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 5/4/12, foobar <foo bar.com> wrote:
 The di files are mostly meant to be machine read (e.g. the
 compiler) and this belongs as part of the library file in order
 to provide ease of use and maintain the relationship between the
 binary code and its interface.

 maintaining two sets of files that could easily get out of sync
 and *not* using the docs is way more insane.
I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented. And then what about existing tools like IDEs and editors? E.g. autocomplete wouldn't work anymore.
May 04 2012
next sibling parent reply "foobar" <foo bar.com> writes:
On Friday, 4 May 2012 at 18:30:32 UTC, Andrej Mitrovic wrote:
 On 5/4/12, foobar <foo bar.com> wrote:
 The di files are mostly meant to be machine read (e.g. the
 compiler) and this belongs as part of the library file in order
 to provide ease of use and maintain the relationship between 
 the
 binary code and it's interface.

 maintaining two sets of files that could easily get out of sync
 and *not* using the docs is way more insane.
I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented. And then what about existing tools like IDEs and editors. E.g. autocomplete wouldn't work anymore.
I'd say you'd be wrong. Both di and docs are auto-generated from the same source. As I said, docs are designed for human consumption. This includes all sorts of features such as a table of contents, a symbol index, links between symbols, usage examples, etc. Docs can be put online, thus ensuring they're always up-to-date. Tools should either read the data from the lib file or retrieve it from the web. Keeping separate local di files is simply insane. And really, have you never heard of Java? How about Pascal? Should I go back in history through all the languages that implemented this feature decades ago? C/C++ is a huge PITA with their nonsense compilation model, which we shouldn't have copied verbatim in D.
May 04 2012
parent reply Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Fri, May 4, 2012 at 1:46 PM, foobar <foo bar.com> wrote:

 On Friday, 4 May 2012 at 18:30:32 UTC, Andrej Mitrovic wrote:

 On 5/4/12, foobar <foo bar.com> wrote:

 The di files are mostly meant to be machine read (e.g. the
 compiler) and this belongs as part of the library file in order
 to provide ease of use and maintain the relationship between the
  binary code and its interface.

 maintaining two sets of files that could easily get out of sync
 and *not* using the docs is way more insane.
I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented. And then what about existing tools like IDEs and editors. E.g. autocomplete wouldn't work anymore.
I'd say you'd be wrong. Both di and docs are auto-generated from the same source. As I said docs are designed for human consumption. This includes all sorts of features such as a table of contents, a symbol index, the symbols should have links, the docs provide usage examples, etc, etc. Docs can be put online thus ensuring they're always up-to-date. Tools should either read the data from the lib file or retrieve it from the web. Keeping separate local di files is simply insane. And really, have you never heard of Java? How about Pascal? Should I continue back in history to all the languages that implemented this feature decades ago? C/C++ is a huge PITA with their nonsense compilation model which we shouldn't have copied verbatim in D.
I like the idea, but what about templates? For them, you'd basically be stuffing source code into the object files (unless you came up with a way to store the AST, but the effort/benefit ratio of that doesn't seem worth it since we currently have no way to preserve an AST between compiler runs). Otherwise, I find this idea very compelling. I'm sure there are probably other issues, though.
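To make the sticking point concrete (a trivial illustration in D):

    // A template's body must be visible at every instantiation site,
    // so an interface-only description of it is not enough:
    T twice(T)(T x) { return x + x; } // compiled anew for each T used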
May 04 2012
next sibling parent "foobar" <foo bar.com> writes:
On Friday, 4 May 2012 at 19:21:02 UTC, Andrew Wiley wrote:
 On Fri, May 4, 2012 at 1:46 PM, foobar <foo bar.com> wrote:

 On Friday, 4 May 2012 at 18:30:32 UTC, Andrej Mitrovic wrote:

 On 5/4/12, foobar <foo bar.com> wrote:

 The di files are mostly meant to be machine read (e.g. the
 compiler) and this belongs as part of the library file in 
 order
 to provide ease of use and maintain the relationship between 
 the
  binary code and its interface.

 maintaining two sets of files that could easily get out of 
 sync
 and *not* using the docs is way more insane.
I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented. And then what about existing tools like IDEs and editors. E.g. autocomplete wouldn't work anymore.
I'd say you'd be wrong. Both di and docs are auto-generated from the same source. As I said docs are designed for human consumption. This includes all sorts of features such as a table of contents, a symbol index, the symbols should have links, the docs provide usage examples, etc, etc. Docs can be put online thus ensuring they're always up-to-date. Tools should either read the data from the lib file or retrieve it from the web. Keeping separate local di files is simply insane. And really, have you never heard of Java? How about Pascal? Should I continue back in history to all the languages that implemented this feature decades ago? C/C++ is a huge PITA with their nonsense compilation model which we shouldn't have copied verbatim in D.
I like the idea, but what about templates? For them, you'd basically be stuffing source code into the object files (unless you came up with a way to store the AST, but that seems like the effort/benefit ratio wouldn't be worth it since we currently have no way to preserve an AST tree between compiler runs). Otherwise, I find this idea very compelling. I'm sure there are probably other issues, though.
C++ has pre-compiled header files (.pch) which speed up compilation time for projects with lots'o'templates. The same kind of info could be stored inside the object files, for example by serializing the AST as you said yourself. There are many uses for this kind of technology. We could store additional info that currently isn't available, enabling all sorts of link-time optimizations.
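A minimal sketch of what serializing could mean here (a hypothetical length-prefixed encoding, nothing standard):

    import std.bitmanip : nativeToLittleEndian;

    // Hypothetical: flatten one declaration into a blob that could live in
    // an object-file section and be loaded back instead of reparsing source.
    ubyte[] serializeDecl(string name, string type)
    {
        ubyte[] blob;
        blob ~= nativeToLittleEndian(cast(uint) name.length); // length prefix
        blob ~= cast(const(ubyte)[]) name;
        blob ~= nativeToLittleEndian(cast(uint) type.length);
        blob ~= cast(const(ubyte)[]) type;
        return blob;
    }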
May 04 2012
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 04 May 2012 14:56:56 -0400, Andrew Wiley  
<wiley.andrew.j gmail.com> wrote:

 I like the idea, but what about templates? For them, you'd basically be
 stuffing source code into the object files
Nothing wrong with this. There is still a gain here -- an object file that carries the original source is tightly coupled with the template it was compiled from. You can be sure that this object file will link against one that you build based on the template contained in it. -Steve
May 04 2012
prev sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On Friday 04 May 2012 08:56 PM, Andrew Wiley wrote:
 On Fri, May 4, 2012 at 1:46 PM, foobar <foo bar.com> wrote:

     On Friday, 4 May 2012 at 18:30:32 UTC, Andrej Mitrovic wrote:

         On 5/4/12, foobar <foo bar.com> wrote:

             The di files are mostly meant to be machine read (e.g. the
             compiler) and this belongs as part of the library file in order
             to provide ease of use and maintain the relationship between the
              binary code and its interface.

             maintaining two sets of files that could easily get out of sync
             and *not* using the docs is way more insane.


         I'd say the docs are more likely to be out of sync than .di code. If
         the .di code is really out of sync you'll likely even get linker
         errors. And not everything ends up being documented.

         And then what about existing tools like IDEs and editors. E.g.
         autocomplete wouldn't work anymore.


     I'd say you'd be wrong.
     Both di and docs are auto-generated from the same source.
     As I said docs are designed for human consumption. This includes all
     sorts of features such as a table of contents, a symbol index, the
     symbols should have links, the docs provide usage examples, etc, etc.
     Docs can be put online thus ensuring they're always up-to-date.

     Tools should either read the data from the lib file or retrieve it
     from the web. Keeping separate local di files is simply insane.

     And really, have you never heard of Java? How about Pascal?
     Should I continue back in history to all the languages that
     implemented this feature decades ago?
     C/C++ is a huge PITA with their nonsense compilation model which we
     shouldn't have copied verbatim in D.


 I like the idea, but what about templates? For them, you'd basically be
 stuffing source code into the object files (unless you came up with a
 way to store the AST, but that seems like the effort/benefit ratio
 wouldn't be worth it since we currently have no way to preserve an AST
 tree between compiler runs).
 Otherwise, I find this idea very compelling. I'm sure there are probably
 other issues, though.
Storing the AST would basically equal storing the source code, except for 'trivia' like white space and unneeded tokens. At that point, you may as well ship the source. -- - Alex
May 04 2012
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
AST/symbol table manipulation is way faster than reparsing code.

People keep talking about D and Go compilation speed, while I was already
enjoying such compile times back in 1990 with Turbo Pascal, on computers much
less powerful than my laptop.

But C and C++, with their 70's compiler technology, somehow won the market 
share,
and then people started complaining about compilation speeds.

Adele Goldberg once wrote a paper describing how C made compiler technology
regress several decades.

--
Paulo

"Alex Rønne Petersen"  wrote in message 
news:jo1s2b$2bie$1 digitalmars.com...

On Friday 04 May 2012 08:56 PM, Andrew Wiley wrote:
 On Fri, May 4, 2012 at 1:46 PM, foobar <foo bar.com> wrote:

     On Friday, 4 May 2012 at 18:30:32 UTC, Andrej Mitrovic wrote:

         On 5/4/12, foobar <foo bar.com> wrote:

             The di files are mostly meant to be machine read (e.g. the
             compiler) and this belongs as part of the library file in 
 order
             to provide ease of use and maintain the relationship between 
 the
              binary code and its interface.

             maintaining two sets of files that could easily get out of 
 sync
             and *not* using the docs is way more insane.


         I'd say the docs are more likely to be out of sync than .di code. 
 If
         the .di code is really out of sync you'll likely even get linker
         errors. And not everything ends up being documented.

         And then what about existing tools like IDEs and editors. E.g.
         autocomplete wouldn't work anymore.


     I'd say you'd be wrong.
     Both di and docs are auto-generated from the same source.
     As I said docs are designed for human consumption. This includes all
     sorts of features such as a table of contents, a symbol index, the
     symbols should have links, the docs provide usage examples, etc, etc.
     Docs can be put online thus ensuring they're always up-to-date.

     Tools should either read the data from the lib file or retrieve it
     from the web. Keeping separate local di files is simply insane.

     And really, have you never heard of Java? How about Pascal?
     Should I continue back in history to all the languages that
     implemented this feature decades ago?
     C/C++ is a huge PITA with their nonsense compilation model which we
     shouldn't have copied verbatim in D.


 I like the idea, but what about templates? For them, you'd basically be
 stuffing source code into the object files (unless you came up with a
 way to store the AST, but that seems like the effort/benefit ratio
 wouldn't be worth it since we currently have no way to preserve an AST
 tree between compiler runs).
 Otherwise, I find this idea very compelling. I'm sure there are probably
 other issues, though.
Storing the AST would basically equal storing the source code except 'trivia' like white space and unneeded tokens. At that point, you may as well ship the source. -- - Alex
May 06 2012
parent reply Jens Mueller <jens.k.mueller gmx.de> writes:
Paulo Pinto wrote:
 AST/symbol table manipulation is way faster than reparsing code.
 
 People keep talking about D and Go compilation speed, while I was already
 enjoying such compile times back in 1990 with Turbo Pascal, on computers much
 less powerful than my laptop.
 
 But C and C++, with their 70's compiler technology, somehow won the
 market share,
 and then people started complaining about compilation speeds.
 
 Adele Goldberg once wrote a paper describing how C made compiler technology
 regress several decades.
Do you happen to remember the exact title of that paper? Thanks. Jens
May 07 2012
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 7 May 2012 at 07:26:44 UTC, Jens Mueller wrote:
 Paulo Pinto wrote:
 AST/symbol table manipulation is way faster than reparsing 
 code.
 
 People keep talking about D and Go compilation speed, while I
 was already
 enjoying such compile times back in 1990 with Turbo Pascal, on
 computers much
 less powerful than my laptop.
 
 But C and C++, with their 70's compiler technology, somehow won
 the
 market share,
 and then people started complaining about compilation speeds.
 
 Adele Goldberg once wrote a paper describing how C made
 compiler technology
 regress several decades.
Do you happen to remember the exact title of that paper? Thanks. Jens
I'll try to find it, as I don't recall the title. I just remember that it made some remarks about how primitive C was compared to Algol toolchains.
May 07 2012
parent reply Jens Mueller <jens.k.mueller gmx.de> writes:
Paulo Pinto wrote:
 On Monday, 7 May 2012 at 07:26:44 UTC, Jens Mueller wrote:
Paulo Pinto wrote:
AST/symbol table manipulation is way faster than reparsing code.

People keep talking about D and Go compilation speed, while I
was already
enjoying such compile times back in 1990 with Turbo Pascal, on
computers much
less powerful than my laptop.

But C and C++, with their 70's compiler technology, somehow won
the
market share,
and then people started complaining about compilation speeds.

Adele Goldberg once wrote a paper describing how C made
compiler technology
regress several decades.
Do you happen to remember the exact title of that paper? Thanks. Jens
I'll try to find it, as I don't recall the title. I just remember that it made some remarks about how primitive C was compared to Algol toolchains.
Many thanks. I couldn't find it myself and I'm interested because Fran Allen said something similar in Coders at Work. I didn't understand what she meant. Andrei suggested that it is mostly (only?) about overlapping pointers to memory. I'm just curious. Jens
May 07 2012
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
I think that's where I read about it.

I'll update you if I have any success, otherwise I need to retract my 
statement. :(

--
Paulo

"Jens Mueller"  wrote in message 
news:mailman.380.1336380192.24740.digitalmars-d puremagic.com...

Paulo Pinto wrote:
 On Monday, 7 May 2012 at 07:26:44 UTC, Jens Mueller wrote:
Paulo Pinto wrote:
AST/symbol table manipulation is way faster than reparsing code.

People keep talking about D and Go compilation speed, while I
was already
enjoying such compile times back in 1990 with Turbo Pascal, on
computers much
less powerful than my laptop.

But C and C++, with their 70's compiler technology, somehow won
the
market share,
and then people started complaining about compilation speeds.

Adele Goldberg once wrote a paper describing how C made
compiler technology
regress several decades.
Do you happen to remember the exact title of that paper? Thanks. Jens
I'll try to find it, as I don't recall the title. I just remember that it made some remarks about how primitive C was compared to Algol toolchains.
Many thanks. I couldn't find it myself and I'm interested because Fran Allen said something similar in Coders at Work. I didn't understand what she meant. Andrei suggested that it is mostly (only?) about overlapping pointers to memory. I'm just curious. Jens
May 07 2012
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Hi,

it seems I have to apologize. I could not find anything
from Adele Goldberg.

So my statement is false. Most likely I ended up confusing
Fran Allen's interview in Coders at Work with some nonsense
in my head.

Still, I leave here a few links I managed to find from Fran Allen.

Some remarks about bad languages on page 27
http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf

Complaint about C on slide 23
http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf

Another remark about C
http://www.windley.com/archives/2008/02/fran_allen_compilers_and_parallel_computing_systems.shtml

A video recorded at Purdue University, where she also talks about C at minute 51
http://www.youtube.com/watch?v=Si3ZW3nI6oA

--
Paulo

On 07.05.2012 10:41, Jens Mueller wrote:
 Paulo Pinto wrote:
 On Monday, 7 May 2012 at 07:26:44 UTC, Jens Mueller wrote:
 Paulo Pinto wrote:
 AST/symbol table manipulation is way faster than reparsing code.

 People keep talking about D and Go compilation speed, while I
 was already
 enjoying such compile times back in 1990 with Turbo Pascal, on
 computers much
 less powerful than my laptop.
 
 But C and C++, with their 70's compiler technology, somehow won
 the
 market share,
 and then people started complaining about compilation speeds.
 
 Adele Goldberg once wrote a paper describing how C made
 compiler technology
 regress several decades.
Do you happen to remember the exact title of that paper? Thanks. Jens
I'll try to find it, as I don't recall the title. I just remember that it made some remarks about how primitive C was compared to Algol toolchains.
Many thanks. I couldn't find it myself and I'm interested because Fran Allen said something similar in Coders at Work. I didn't understand what she meant. Andrei suggested that it is mostly (only?) about overlapping pointers to memory. I'm just curious. Jens
May 07 2012
parent reply Andre Tampubolon <andre lc.vlsm.org> writes:
Interesting reading.
I took a look at page 23, and didn't find any mention of C.
Maybe I didn't read carefully?

On 5/8/2012 3:34 AM, Paulo Pinto wrote:
 Hi,
 
 it seems I have to excuse myself. I could not find anything
 from Adele Goldberg.
 
 So my statement is false. Most likely I ended up confusing
 Fran Allen's interview in Coders at Work, with some nonsense
 in my head.
 
 Still, I leave here a few links I managed to find from Fran Allen.
 
 Some remarks about bad languages on page 27
 http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf
 
 Complaint about C on slide 23
 http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf
 
 Another remark about C
 http://www.windley.com/archives/2008/02/fran_allen_compilers_and_parallel_computing_systems.shtml
 
 
 A video recorded at Purdue University, she also talks about C on minute 51
 http://www.youtube.com/watch?v=Si3ZW3nI6oA
 
 -- 
 Paulo
 
 On 07.05.2012 10:41, Jens Mueller wrote:
 Paulo Pinto wrote:
 On Monday, 7 May 2012 at 07:26:44 UTC, Jens Mueller wrote:
 Paulo Pinto wrote:
 AST/symbol table manipulation is way faster than reparsing code.

 People keep talking about D and Go compilation speed, while I
 was already
 enjoying such compile times back in 1990 with Turbo Pascal, on
 computers much
 less powerful than my laptop.
 
 But C and C++, with their 70's compiler technology, somehow won
 the
 market share,
 and then people started complaining about compilation speeds.
 
 Adele Goldberg once wrote a paper describing how C made
 compiler technology
 regress several decades.
Do you happen to remember the exact title of that paper? Thanks. Jens
I'll try to find it, as I don't recall the title. I just remember that it made some remarks about how primitive C was compared to Algol toolchains.
Many thanks. I couldn't find it myself and I'm interested because Fran Allen said something similar in Coders at Work. I didn't understand what she meant. Andrei suggested that it is mostly (only?) about overlapping pointers to memory. I'm just curious. Jens
May 07 2012
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
Oops, copy/paste error. :(

I'll check it, when I get back home.

--
Paulo

"Andre Tampubolon"  wrote in message news:joa0lq$1t2k$1 digitalmars.com...

Interesting reading.
I took a look at page 23, and didn't find any mention of C.
Maybe I didn't read carefully?

On 5/8/2012 3:34 AM, Paulo Pinto wrote:
 Hi,

 it seems I have to excuse myself. I could not find anything
 from Adele Goldberg.

 So my statement is false. Most likely I ended up confusing
 Fran Allen's interview in Coders at Work, with some nonsense
 in my head.

 Still, I leave here a few links I managed to find from Fran Allen.
 
 Some remarks about bad languages on page 27
 http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf

 Complaint about C on slide 23
 http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf

 Another remark about C
 http://www.windley.com/archives/2008/02/fran_allen_compilers_and_parallel_computing_systems.shtml


 A video recorded at Purdue University, she also talks about C on minute 51
 http://www.youtube.com/watch?v=Si3ZW3nI6oA

 -- 
 Paulo

 On 07.05.2012 10:41, Jens Mueller wrote:
 Paulo Pinto wrote:
 On Monday, 7 May 2012 at 07:26:44 UTC, Jens Mueller wrote:
 Paulo Pinto wrote:
 AST/symbol table manipulation is way faster than reparsing code.

 People keep talking about D and Go compilation speed, while I
 was already
 enjoying such compile times back in 1990 with Turbo Pascal, on
 computers much
 less powerful than my laptop.
 
 But C and C++, with their 70's compiler technology, somehow won
 the
 market share,
 and then people started complaining about compilation speeds.
 
 Adele Goldberg once wrote a paper describing how C made
 compiler technology
 regress several decades.
Do you happen to remember the exact title of that paper? Thanks. Jens
I'll try to find it, as I don't recall the title. I just remember that it made some remarks about how primitive C was compared to Algol toolchains.
Many thanks. I couldn't find it myself and I'm interested because Fran Allen said something similar in Coders at Work. I didn't understand what she meant. Andrei suggested that it is mostly (only?) about overlapping pointers to memory. I'm just curious. Jens
May 07 2012
prev sibling parent Paulo Pinto <pjmlp progtools.org> writes:
The correct link should have been
http://uhaweb.hartford.edu/ccscne/Allen.pdf

On 08.05.2012 04:33, Andre Tampubolon wrote:
 Interesting reading.
 I took a look at page 23, and didn't find any mention of C.
 Maybe I didn't read carefully?

 On 5/8/2012 3:34 AM, Paulo Pinto wrote:
 Hi,

 it seems I have to excuse myself. I could not find anything
 from Adele Goldberg.

 So my statement is false. Most likely I ended up confusing
 Fran Allen's interview in Coders at Work, with some nonsense
 in my head.

  Still, I leave here a few links I managed to find from Fran Allen.
 
  Some remarks about bad languages on page 27
 http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf

 Complaint about C on slide 23
 http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf

 Another remark about C
 http://www.windley.com/archives/2008/02/fran_allen_compilers_and_parallel_computing_systems.shtml


 A video recorded at Purdue University, she also talks about C on minute 51
 http://www.youtube.com/watch?v=Si3ZW3nI6oA

 --
 Paulo

 On 07.05.2012 10:41, Jens Mueller wrote:
 Paulo Pinto wrote:
 On Monday, 7 May 2012 at 07:26:44 UTC, Jens Mueller wrote:
 Paulo Pinto wrote:
 AST/symbol table manipulation is way faster than reparsing code.

  People keep talking about D and Go compilation speed, while I
  was already
  enjoying such compile times back in 1990 with Turbo Pascal, on
  computers much
  less powerful than my laptop.
 
  But C and C++, with their 70's compiler technology, somehow won
  the
  market share,
  and then people started complaining about compilation speeds.
 
  Adele Goldberg once wrote a paper describing how C made
  compiler technology
  regress several decades.
Do you happen to remember the exact title of that paper? Thanks. Jens
I'll try to find it, as I don't recall the title. I just remember that it made some remarks about how primitive C was compared to Algol toolchains.
Many thanks. I couldn't find it myself and I'm interested because Fran Allen said something similar in Coders at Work. I didn't understand what she meant. Andrei suggested that it is mostly (only?) about overlapping pointers to memory. I'm just curious. Jens
May 08 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-05-04 20:30, Andrej Mitrovic wrote:

 I'd say the docs are more likely to be out of sync than .di code. If
 the .di code is really out of sync you'll likely even get linker
 errors. And not everything ends up being documented.
Then you need to manage your docs better.
 And then what about existing tools like IDEs and editors. E.g.
 autocomplete wouldn't work anymore.
They would need to be able to read the library and extract the .di files. Isn't this basically just how Java works? -- /Jacob Carlborg
May 04 2012
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 On 5/4/12, foobar <foo bar.com> wrote:
 How about augmenting the object format so that libraries would be
 self contained and would not require additional .di files? Is
 this possible optlink by e.g. adding special sections that would
 be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
Ever heard of Java? -Steve
May 04 2012
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Ever heard of Java?
Ever heard of not requiring a bring-your-quadcore-to-its-knees IDE?
May 04 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 04 May 2012 14:31:24 -0400, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Ever heard of Java?
Ever heard of not requiring a bring-your-quadcore-to-its-knees IDE?
This is a totally false comparison :) Java's storage of its interface in its object files has nothing to do with its IDE's performance. What I'm saying is, it's completely possible to store the API in binary format *in* the object files, and use documentation generators to document the API. You do not have to read the interface files to understand the API, and Java is a good example of a language that successfully does that. -Steve
May 04 2012
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 What I'm saying is, it's completely possible to store the API in binary
 format *in* the object files, and use documentation generators to document
Yes but then you need to *modify* existing tools in order to add a new feature that extracts information from object files. Either that, or you'd have to somehow extract the .di files back from the object files. How else can you see the interface in your text editor without the source files? :)
May 04 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 04 May 2012 14:48:04 -0400, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 What I'm saying is, it's completely possible to store the API in binary
 format *in* the object files, and use documentation generators to  
 document
Yes but then you need to *modify* existing tools in order to add a new feature that extracts information from object files. Either that, or you'd have to somehow extract the .di files back from the object files. How else can you see the interface in your text editor without the source files? :)
Current tools: read .di files and extract API.
New tools: read .dobj files and extract API.

I'm not really seeing the difficulty here... -Steve
May 04 2012
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Current tools:  read .di files and extract API
 new tools: read .dobj files and extract API.

 I'm not really seeing the difficulty here...
I thought he meant libraries that are only distributed in binary form. So no .di files anywhere. Maybe I misunderstood..
May 04 2012
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 04 May 2012 15:07:43 -0400, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Current tools:  read .di files and extract API
 new tools: read .dobj files and extract API.

 I'm not really seeing the difficulty here...
I thought he meant libraries that are only distributed in binary form. So no .di files anywhere. Maybe I misunderstood..
No reason for .di files if the object file already serves as the interface file. I think he meant that object (and library) binary files would be augmented by API segments that provide what di files provide now -- an interface-only version of the code. It doesn't have to be text; it can be binary (maybe even partially compiled).

The really nice thing you get from this is that the compiler would now use this object file instead of .d files for importing. So not only do you eliminate errors from having two possibly separately maintained files, but the compiler can build *extra* details into the .dobj file. For example, it could put in metadata that would allow for full escape analysis. Or tag that a function is implied pure (without actually having to tag the function with the pure attribute).
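As a rough picture, such a file could look like this (all names invented; nothing like it exists today):

    // Hypothetical "dobj" container: ordinary object code plus
    // tool-only segments that a traditional linker can ignore or strip.
    struct DObj
    {
        ubyte[] objectCode; // what the linker consumes, unchanged
        ubyte[] apiSegment; // serialized interface; replaces the .di file
        ubyte[] analysis;   // compiler-deduced facts: implied purity,
                            // escape info, etc. (hints, not declared API)
    }

-Steve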
May 04 2012
next sibling parent "foobar" <foo bar.com> writes:
On Friday, 4 May 2012 at 19:13:21 UTC, Steven Schveighoffer wrote:
 On Fri, 04 May 2012 15:07:43 -0400, Andrej Mitrovic 
 <andrej.mitrovich gmail.com> wrote:

 On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Current tools:  read .di files and extract API
 new tools: read .dobj files and extract API.

 I'm not really seeing the difficulty here...
I thought he meant libraries that are only distributed in binary form. So no .di files anywhere. Maybe I misunderstood..
No reason for .di files if the object file already serves as the interface file. I think he meant that object (and library) binary files would be augmented by API segments that provide what di files provide now -- an interface-only version of the code. It doesn't have to be text, it can be binary (maybe even partially compiled). The really nice thing you get from this is, the compiler now would use this object file instead of .d files for importing. So not only do you eliminate errors from having two possibly separately maintained files, but the compiler can build *extra* details into the .dobj file. For example, it could put in metadata that would allow for full escape analysis. Or tag that a function is implied pure (without actually having to tag the function with the pure attribute). -Steve
Exactly :)
May 04 2012
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, May 04, 2012 at 03:13:21PM -0400, Steven Schveighoffer wrote:
[...]
 I think he meant that object (and library) binary files would be
 augmented by API segments that provide what di files provide now --
 an interface-only version of the code.  It doesn't have to be text,
 it can be binary (maybe even partially compiled).
 
 The really nice thing you get from this is, the compiler now would
 use this object file instead of .d files for importing.  So not only
 do you eliminate errors from having two possibly separately
 maintained files, but the compiler can build *extra* details into
 the .dobj file.  For example, it could put in metadata that would
 allow for full escape analysis.  Or tag that a function is implied
 pure (without actually having to tag the function with the pure
 attribute).
[...] +1. It's about time we moved on from 30+ year-old, outdated linker technology to something more powerful. Full escape analysis, compiler-deduced function attributes like pureness -- all the stuff that's impractical to implement in the current system -- can be done in a reasonable way if we stick this information into the object files. The linker doesn't have to care what's in those extra sections; the compiler reads the info and does what it needs to do. The linker can omit the extra info from the final executable. (Or make use of it, if we implement a smarter linker -- cross-module string optimization, or something.)

T

--
He who is everywhere is nowhere.
May 04 2012
parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On Friday 04 May 2012 11:17 PM, H. S. Teoh wrote:
 On Fri, May 04, 2012 at 03:13:21PM -0400, Steven Schveighoffer wrote:
 [...]
 I think he meant that object (and library) binary files would be
 augmented by API segments that provide what di files provide now --
 an interface-only version of the code.  It doesn't have to be text,
 it can be binary (maybe even partially compiled).

 The really nice thing you get from this is, the compiler now would
 use this object file instead of .d files for importing.  So not only
 do you eliminate errors from having two possibly separately
 maintained files, but the compiler can build *extra* details into
 the .dobj file.  For example, it could put in metadata that would
 allow for full escape analysis.  Or tag that a function is implied
 pure (without actually having to tag the function with the pure
 attribute).
[...] +1. It's about time we moved on from 30+ year old outdated linker technology, to something more powerful. Full escape analysis, compiler deduced function attributes like pureness, all the stuff that's impractical to implement in the current system, can all be done in a reasonable way if we stuck this information into the object files. The linker doesn't have to care what's in those extra sections; the compiler reads the info and does what it needs to do. The linker can omit the extra info from the final executable. (Or make use of it, if we implement a smarter linker. Like do cross-module string optimization, or something.) T
Purity inference won't happen either way. Purity is part of your API and also meant to help you reason about your code. If the compiler just infers purity in a function and you later change the implementation so it's no longer pure, you break your users' code. Also, purity would no longer help you reason about your code if it's not explicit. -- - Alex
May 04 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:

 Purity inference won't happen either way. Purity is part of your API and
 also meant to help you reason about your code. If the compiler just
 infers purity in a function and you later change the implementation so
 it's no longer pure, you break your users' code. Also, purity would no
 longer help you reason about your code if it's not explicit.
It can be pure for the purposes of optimization without affecting code whatsoever. Inferred purity can be marked separately from explicit purity, and explicitly pure functions would not be allowed to call implicitly pure functions.
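For instance (a hypothetical illustration of the distinction, not current compiler behavior):

    pure int cube(int x) { return x * x * x; } // explicit: declared, checked,
                                               // and part of the API
    int square(int x) { return x * x; }        // unmarked: purity can be
                                               // inferred and used for
                                               // optimization, but pure code
                                               // may not depend on it

-Steve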
May 07 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 07-05-2012 13:21, Steven Schveighoffer wrote:
 On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen
 <xtzgzorex gmail.com> wrote:

 Purity inference won't happen either way. Purity is part of your API
 and also meant to help you reason about your code. If the compiler
 just infers purity in a function and you later change the
 implementation so it's no longer pure, you break your users' code.
 Also, purity would no longer help you reason about your code if it's
 not explicit.
It can be pure for the purposes of optimization without affecting code whatsoever. Inferred purity can be marked separately from explicit purity, and explicitly pure functions would not be allowed to call implicitly pure functions. -Steve
But that kind of inferred purity is something a compiler back end cares about, not something the language should have to care about at all. In practice, most compilers *do* analyze all functions for possible side-effects and use that information where applicable. (Note that what you described is thus the current situation; it's just that inferred purity is not part of the language, and there's no reason it has to be.) -- - Alex
May 07 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 07 May 2012 07:41:43 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:

 On 07-05-2012 13:21, Steven Schveighoffer wrote:
 On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen
 <xtzgzorex gmail.com> wrote:

 Purity inference won't happen either way. Purity is part of your API
 and also meant to help you reason about your code. If the compiler
 just infers purity in a function and you later change the
 implementation so it's no longer pure, you break your users' code.
 Also, purity would no longer help you reason about your code if it's
 not explicit.
It can be pure for the purposes of optimization without affecting code whatsoever. Inferred purity can be marked separately from explicit purity, and explicitly pure functions would not be allowed to call implicitly pure functions. -Steve
 But that kind of inferred purity is something a compiler back end cares
 about, not something the language should have to care about at all. In
 practice, most compilers *do* analyze all functions for possible
 side-effects and use that information where applicable.
It affects how callers' code will be generated. If I have a function int foo(int x); and I have another function which calls foo like: int y = foo(x) + foo(x); Then the optimization is applied to whatever function this exists in. If the source isn't available for foo, the compiler cannot make this optimization. I have no idea if this is a back-end or front-end issue. I'm not a compiler writer. But I do understand that the compiler needs extra information in the signature to determine if it can make this optimization.
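In other words, with a hypothetical foo whose purity is recorded in the object file's metadata:

    int foo(int x);  // body not visible to this compilation unit

    int twice(int x)
    {
        // if foo is recorded as pure, the compiler may evaluate foo(x)
        // once here and reuse the result
        return foo(x) + foo(x);
    }

-Steve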
May 07 2012
parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 07-05-2012 14:50, Steven Schveighoffer wrote:
 On Mon, 07 May 2012 07:41:43 -0400, Alex Rønne Petersen
 <xtzgzorex gmail.com> wrote:

 On 07-05-2012 13:21, Steven Schveighoffer wrote:
 On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen
 <xtzgzorex gmail.com> wrote:

 Purity inference won't happen either way. Purity is part of your API
 and also meant to help you reason about your code. If the compiler
 just infers purity in a function and you later change the
 implementation so it's no longer pure, you break your users' code.
 Also, purity would no longer help you reason about your code if it's
 not explicit.
It can be pure for the purposes of optimization without affecting code whatsoever. Inferred purity can be marked separately from explicit purity, and explicitly pure functions would not be allowed to call implicitly pure functions. -Steve
But that kind of inferred purity is something a compiler back end cares about, not something the language should have to care about at all. In practice, most compilers *do* analyze all functions for possible side-effects and use that information where applicable.
It affects how callers' code will be generated. If I have a function int foo(int x); and I have another function which calls foo like: int y = foo(x) + foo(x); Then the optimization is applied to whatever function this exists in. If the source isn't available for foo, the compiler cannot make this optimization. I have no idea if this is a back-end or front-end issue. I'm not a compiler writer. But I do understand that the compiler needs extra information in the signature to determine if it can make this optimization. -Steve
OK, point taken; didn't consider that. But in the first place, for inference of purity to work, the source would have to be available. Then, that inferred property has to be propagated somehow so that the compiler can make use of it when linking to the code as a library... -- - Alex
May 07 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 07 May 2012 09:27:32 -0400, Alex Rønne Petersen
<xtzgzorex gmail.com> wrote:

 On 07-05-2012 14:50, Steven Schveighoffer wrote:
 On Mon, 07 May 2012 07:41:43 -0400, Alex Rønne Petersen
 <xtzgzorex gmail.com> wrote:

 On 07-05-2012 13:21, Steven Schveighoffer wrote:
 On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen
 <xtzgzorex gmail.com> wrote:

 Purity inference won't happen either way. Purity is part of your API
 and also meant to help you reason about your code. If the compiler
 just infers purity in a function and you later change the
 implementation so it's no longer pure, you break your users' code.
 Also, purity would no longer help you reason about your code if it's
 not explicit.
It can be pure for the purposes of optimization without affecting code
 whatsoever. Inferred purity can be marked separately from explicit
 purity, and explicitly pure functions would not be allowed to call
 implicitly pure functions.

 -Steve
But that kind of inferred purity is something a compiler back end cares about, not something the language should have to care about at
 all. In practice, most compilers *do* analyze all functions for
 possible side-effects and use that information where applicable.
 It affects how callers' code will be generated. If I have a function
 int foo(int x); and I have another function which calls foo like:
 int y = foo(x) + foo(x); Then the optimization is applied to whatever
 function this exists in. If the source isn't available for foo, the
 compiler cannot make this optimization.

 I have no idea if this is a back-end or front-end issue. I'm not a
 compiler writer. But I do understand that the compiler needs extra
 information in the signature to determine if it can make this
 optimization.
 OK, point taken; didn't consider that. But in the first place, for
 inference of purity to work, the source would have to be available.
 Then, that inferred property has to be propagated somehow so that the
 compiler can make use of it when linking to the code as a library...
That's exactly what storing the interface in the object file does. You
don't need the source because the object file contains the compiler's
interpretation of the source, and any inferred properties it has
discovered.

-Steve
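As an aside, D already performs exactly this kind of inference for
templates, whose source is necessarily available at each instantiation;
a minimal sketch with made-up names:

// The compiler deduces attributes such as 'pure' for template functions
// from their bodies, because the body must be on hand to instantiate.
T twice(T)(T x)
{
    return x + x;
}

// The deduced purity is what lets an explicitly pure function call it:
pure int usesTwice(int x)
{
    return twice(x);
}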
May 07 2012
parent reply Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Mon, May 7, 2012 at 8:42 AM, Steven Schveighoffer
<schveiguy yahoo.com> wrote:

 On Mon, 07 May 2012 09:27:32 -0400, Alex Rønne Petersen <
 xtzgzorex gmail.com> wrote:

  On 07-05-2012 14:50, Steven Schveighoffer wrote:
 On Mon, 07 May 2012 07:41:43 -0400, Alex Rønne Petersen
 <xtzgzorex gmail.com> wrote:

  On 07-05-2012 13:21, Steven Schveighoffer wrote:
 On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen
 <xtzgzorex gmail.com> wrote:

  Purity inference won't happen either way. Purity is part of your API
 and also meant to help you reason about your code. If the compiler
 just infers purity in a function and you later change the
 implementation so it's no longer pure, you break your users' code.
 Also, purity would no longer help you reason about your code if it's
 not explicit.
It can be pure for the purposes of optimization without affecting code
 whatsoever. Inferred purity can be marked separately from explicit
 purity, and explicitly pure functions would not be allowed to call
 implicitly pure functions.

 -Steve
But that kind of inferred purity is something a compiler back end cares about, not something the language should have to care about at all. In practice, most compilers *do* analyze all functions for possible side-effects and use that information where applicable.
It affects how callers' code will be generated. If I have a function
int foo(int x); and I have another function which calls foo like:
int y = foo(x) + foo(x); Then the optimization is applied to whatever
function this exists in. If
 the source isn't available for foo, the compiler cannot make this
 optimization.

 I have no idea if this is a back-end or front-end issue. I'm not a
 compiler writer. But I do understand that the compiler needs extra
 information in the signature to determine if it can make this
 optimization.
OK, point taken; didn't consider that. But in the first place, for
inference of purity to work, the source would have to be available. Then,
 that inferred property has to be propagated somehow so that the compiler
 can make use of it when linking to the code as a library...
That's exactly what storing the interface in the object file does. You
don't need the source because the object file contains the compiler's
interpretation of the source, and any inferred properties it has
discovered.

Putting inferred purity into an object file sounds like a bad idea. It's
not hard to imagine this scenario:
-function foo in libSomething is inferred as pure (but not declared pure by
the author)
-exeSomethingElse is compiled to use libSomething, and the compiler takes
advantage of purity optimizations when calling foo
-libSomething is recompiled and foo is no longer pure, and exeSomethingElse
silently breaks

Purity inference is fine for templates (because recompiling the library
won't change the generated template code in an executable that depends on
it), but in all other cases, the API needs to be exactly what the author
declared it to be, or strange things will happen.
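A minimal sketch of that scenario in D, faking the two library versions
with a version switch (all names made up):

// Two snapshots of the same library function; build with -version=V2 to
// get the second one. The declared signature never stops matching, but
// the property a compiler could infer from the body does change.
version (V2)
{
    int counter;
    int foo(int x) { return ++counter + x; } // no longer inferable as pure
}
else
{
    int foo(int x) { return x + 1; }         // inferable as pure
}

void main()
{
    int x = 1;
    // A caller optimized against the V1 inference could legally rewrite
    // this as 2 * foo(x); run against V2, that rewrite changes behavior.
    int y = foo(x) + foo(x);
}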
May 07 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 07 May 2012 12:59:24 -0400, Andrew Wiley
<wiley.andrew.j gmail.com> wrote:

 On Mon, May 7, 2012 at 8:42 AM, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:
 On Mon, 07 May 2012 09:27:32 -0400, Alex Rønne Petersen <

 That's exactly what storing the interface in the object file does. You
 don't need the source because the object file contains the compiler's
 interpretation of the source, and any inferred properties it has
 discovered.
 Putting inferred purity into an object file sounds like a bad idea. It's
 not hard to imagine this scenario:
 -function foo in libSomething is inferred as pure (but not declared pure
 by the author)
 -exeSomethingElse is compiled to use libSomething, and the compiler takes
 advantage of purity optimizations when calling foo
 -libSomething is recompiled and foo is no longer pure, and exeSomethingElse
 silently breaks
no, it just doesn't link.
 Purity inference is fine for templates (because recompiling the library
 won't change the generated template code in an executable that depends on
 it), but in all other cases, the API needs to be exactly what the author
 declared it to be, or strange things will happen.
I agree that's the case with the current object/linker model. Something
that puts inferred properties into the object file needs a new model, one
which does not blindly link code that wasn't compiled from the same
sources.

-Steve
May 07 2012
parent reply Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Mon, May 7, 2012 at 12:21 PM, Steven Schveighoffer
<schveiguy yahoo.com> wrote:

 On Mon, 07 May 2012 12:59:24 -0400, Andrew Wiley <wiley.andrew.j gmail.com>
 wrote:

  On Mon, May 7, 2012 at 8:42 AM, Steven Schveighoffer <schveiguy yahoo.com>
 wrote:
 On Mon, 07 May 2012 09:27:32 -0400, Alex Rønne Petersen <

 That's exactly what storing the interface in the object file does.  You
 don't need the source because the object file contains the compiler's
 interpretation of the source, and any inferred properties it has
 discovered.
Putting inferred purity into an object file sounds like a bad idea. It's
not hard to imagine this scenario:
-function foo in libSomething is inferred as pure (but not declared pure
by the author)
-exeSomethingElse is compiled to use libSomething, and the compiler takes
 advantage of purity optimizations when calling foo
 -libSomething is recompiled and foo is no longer pure, and
 exeSomethingElse
 silently breaks
 no, it just doesn't link.

 Purity inference is fine for templates (because recompiling the library
 won't change the generated template code in an executable that depends on
 it), but in all other cases, the API needs to be exactly what the author
 declared it to be, or strange things will happen.
I agree that's the case with the current object/linker model. Something
that puts inferred properties into the object file needs a new model, one
which does not blindly link code that wasn't compiled from the same
sources.

Then all you've done is to make attributes the author can't control part of
the API, which will force library users to recompile their code more often
for non-obvious reasons. Avoiding that is one of the points of shared
libraries.

I think we're actually talking about different contexts. I'm speaking in
the context of shared libraries, where I think the API needs to be exactly
what the author requests and nothing more. With object files, static
libraries, and static linking, I agree that this sort of thing could work
and wouldn't cause the same problems because it's impossible to swap the
library code without recompiling/relinking the entire program.
May 07 2012
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 07 May 2012 13:34:49 -0400, Andrew Wiley  
<wiley.andrew.j gmail.com> wrote:

 On Mon, May 7, 2012 at 12:21 PM, Steven Schveighoffer
 <schveiguy yahoo.com>wrote:

 I agree that's the case with the current object/linker model.  Something
 that puts inferred properties into the object file needs a new model,  
 one
 which does not blindly link code that wasn't compiled from the same  
 sources.
Then all you've done is to make attributes the author can't control part of the API, which will force library users to recompile their code more often for non-obvious reasons. Avoiding that is one of the points of shared libraries.
Shared library entry points have to have *no* inference. Otherwise you could inadvertently change the public API without explicitly tagging it. I believe in D, shared library entry points have to be tagged with export. Not to mention, shared libraries on a certain platform usually have to be linked by the platform's linker. So we can't exactly overtake that aspect with a new model.
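For illustration, a minimal sketch of such an explicitly tagged entry
point in D; the name and body are made up:

// Everything about this shared-library entry point is explicit: it is
// tagged with export, and its purity is declared rather than inferred,
// so recompiling the body cannot silently change the binary interface.
export pure int api_square(int x)
{
    return x * x;
}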
 I think we're actually talking about different contexts. I'm speaking in
 the context of shared libraries, where I think the API needs to be  
 exactly
 what the author requests and nothing more. With object files, static
 libraries, and static linking, I agree that this sort of thing could work
 and wouldn't cause the same problems because it's impossible to swap the
 library code without recompiling/relinking the entire program.
OK, that makes sense, I think you are right, we were talking about two different pieces of the model. -Steve
May 07 2012
prev sibling parent Artur Skawina <art.08.09 gmail.com> writes:
On 05/07/12 13:21, Steven Schveighoffer wrote:
 On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen <xtzgzorex gmail.com>
wrote:
 
 Purity inference won't happen either way. Purity is part of your API and also
meant to help you reason about your code. If the compiler just infers purity in
a function and you later change the implementation so it's no longer pure, you
break your users' code. Also, purity would no longer help you reason about your
code if it's not explicit.
It can be pure for the purposes of optimization without affecting code whatsoever. Inferred purity can be marked separately from explicit purity, and explicitly pure functions would not be allowed to call implicitly pure functions.
In WPO mode - it doesn't matter - it's just another internal compiler optimization. Otherwise in general it can't be done - a change to the function definition would change its signature - which means that all callers need to be recompiled. So at best only the intra-module calls can be affected, when the compiler knows that the caller will always be generated together with the callee. And the latter has to be assumed impure if it's not private, for the same reasons. artur
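A minimal sketch of the intra-module case described here, with made-up
names:

module m;

// Private, so invisible outside this module: every call site is always
// generated together with the body, and the compiler could infer purity
// here without it ever leaking into a public signature.
private int helper(int x) { return x * 2; }

// Public: outside callers see only this signature, so without an
// explicit 'pure' they must assume the worst.
int api(int x) { return helper(x) + helper(x); }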
May 07 2012
prev sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
I also don't see the issue.

This is already a long tradition in the languages that don't have to carry C 
linker baggage.

- Turbo Pascal 4.0, 1987
- Oberon 1.0, 1986

So I also don't see why a 2012 language can't have a similar mechanism.

--
Paulo

"Andrej Mitrovic"  wrote in message 
news:mailman.324.1336158548.24740.digitalmars-d puremagic.com...

On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Current tools:  read .di files and extract API
 new tools: read .dobj files and extract API.

 I'm not really seeing the difficulty here...
I thought he meant libraries that are only distributed in binary form. So no .di files anywhere. Maybe I misunderstood..
May 06 2012
prev sibling parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 04.05.2012 20:26, schrieb Steven Schveighoffer:
 On Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic
 <andrej.mitrovich gmail.com>  wrote:

  On 5/4/12, foobar<foo bar.com>  wrote:
  How about augmenting the object format so that libraries would be
  self contained and would not require additional .di files? Is
  this possible optlink by e.g. adding special sections that would
  be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
Ever heard of Java? -Steve
Ever heard of Turbo Pascal (and Delphi)? It has had this feature since
Turbo Pascal 4, around 1987 - and Turbo Pascal and Delphi are extremely
fast native compilers without any Java or .NET magic.
May 05 2012
next sibling parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 05.05.2012 09:06, schrieb dennis luehring:
 Am 04.05.2012 20:26, schrieb Steven Schveighoffer:
  On Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic
  <andrej.mitrovich gmail.com>   wrote:

   On 5/4/12, foobar<foo bar.com>   wrote:
   How about augmenting the object format so that libraries would be
   self contained and would not require additional .di files? Is
   this possible optlink by e.g. adding special sections that would
   be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
Ever heard of Java? -Steve
ever heard about Turbo Pascal (and delphi) got this feature since turbo pascal 4 around 1987 and turbo pascal and delphi are extremely fast native compilers without any Java, .Net magic
A more up-to-date example can be seen using the Free Pascal compiler and
its ppudump tool: http://www.freepascal.org/tools/ppudump.htm

And Turbo Pascal has had, ever since 1987, a very good package system,
much like a Java jar file - you can just integrate compiled Pascal
sources (.pas -> .tpu) into something called a .tpl file (Turbo Pascal
library); the Free Pascal compiler has something similar called .ppl.

These "technologies" are damn good and were invented long ago - but they
are sometimes totally unknown to all the obj-file-linker guys.
May 05 2012
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
I really, really think that mankind took a wrong turn when C won over
Pascal in the 80's.

And that Wirth somehow lost interest in the industry and did not try to push
Modula-* or Oberon. There are some papers where he states this.

Now we suffer from

- dangling pointers
- buffer overflows
- pre-historic compiler toolchains



--
Paulo

"dennis luehring"  wrote in message news:jo2kb8$htd$1 digitalmars.com...

Am 05.05.2012 09:06, schrieb dennis luehring:
 Am 04.05.2012 20:26, schrieb Steven Schveighoffer:
  On Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic
  <andrej.mitrovich gmail.com>   wrote:

   On 5/4/12, foobar<foo bar.com>   wrote:
   How about augmenting the object format so that libraries would be
   self contained and would not require additional .di files? Is
   this possible optlink by e.g. adding special sections that would
   be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
Ever heard of Java? -Steve
ever heard about Turbo Pascal (and delphi) got this feature since turbo pascal 4 around 1987 and turbo pascal and delphi are extremely fast native compilers without any Java, .Net magic
an more up-to-date example can be seen using the freepascal compiler and its ppdump tool: http://www.freepascal.org/tools/ppudump.var and turbo pascal gots even since 1987 a very good package system like a Java Jar file - you can just integrate compiled pascal sources (.pas -> .tpu) into something called .tpl file (turbo pascal library) the freepascal compiler got something similar called .ppl these "technologies" are damn good and invented so long before - but sometimes totaly unknown to all the obj-file-linker-guys
May 06 2012
parent reply dennis luehring <dl.soluz gmx.net> writes:
Am 07.05.2012 07:53, schrieb Paulo Pinto:
 I really really think that mankind did a wrong turn when C won over Pascal
 in the 80's.

 And that Wirth somehow lost interest in the industry and did not try to push
 Modula-* or Oberon. There are some papers where he states this.

 Now we suffer from

 - dangling pointers
 - buffer overflows
 - pre-historic compiler toolchains


We should collect all the advantages of the Turbo Pascal/Delphi object
file formats and make a small description post, to show others in a
clear, understandable way how good and long-lived these techniques are:

the unit system (Turbo Pascal: .pas -> .tpu, Delphi: .pas -> .dcu, Free
Pascal: .pas -> .ppu), the tpumover/ppumover tools for .tpl or .ppl
libraries, the Delphi DLL solution (.bpl), and the advantage of
controlling the kind of output from inside the source: program -> exe,
unit -> object, library -> dynamic library, etc.

Any ideas how to start?
May 07 2012
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
I like the idea, need to check what information I could provide.

Wirth's books about Oberon also provide similar information.

--
Paulo

"dennis luehring"  wrote in message news:jo85t1$1n9b$1 digitalmars.com...

Am 07.05.2012 07:53, schrieb Paulo Pinto:
 I really really think that mankind did a wrong turn when C won over Pascal
 in the 80's.

 And that Wirth somehow lost interest in the industry and did not try to 
 push
 Modula-* or Oberon. There are some papers where he states this.

 Now we suffer from

 - dangling pointers
 - buffer overflows
 - pre-historic compiler toolchains


we should collect all the advantages of turbo pascal/delphi object-file-formats and make a small description post to show others in a clear understandable way how good/and longlife these technics are so the unit-system (turbo pascal: .pas -> .tpu, delphi: .pas->.dcu, free pascal: .pas -> ppu), the tpumover, ppumover for tpl or ppl libraries, the dll delphi solution .bpl and the advantage of controling the output of source inside the source program -> exe, unit -> object, library -> dynamic libray etc. any ideas how to start?
May 07 2012
parent Paulo Pinto <pjmlp progtools.org> writes:
Am 07.05.2012 15:27, schrieb Paulo Pinto:
 I like the idea, need to check what information I could provide.

 Wirth's books about Oberon also provide similar information.

 --
 Paulo

 "dennis luehring" wrote in message news:jo85t1$1n9b$1 digitalmars.com...

 Am 07.05.2012 07:53, schrieb Paulo Pinto:
 I really really think that mankind did a wrong turn when C won over
 Pascal
 in the 80's.

 And that Wirth somehow lost interest in the industry and did not try
 to push
 Modula-* or Oberon. There are some papers where he states this.

 Now we suffer from

 - dangling pointers
 - buffer overflows
 - pre-historic compiler toolchains


we should collect all the advantages of turbo pascal/delphi object-file-formats and make a small description post to show others in a clear understandable way how good/and longlife these technics are so the unit-system (turbo pascal: .pas -> .tpu, delphi: .pas->.dcu, free pascal: .pas -> ppu), the tpumover, ppumover for tpl or ppl libraries, the dll delphi solution .bpl and the advantage of controling the output of source inside the source program -> exe, unit -> object, library -> dynamic libray etc. any ideas how to start?
Description of the Free Pascal unit format
http://www.freepascal.org/docs-html/prog/progap1.html#progse67.html

How the dump command works
http://www.freepascal.org/tools/ppudump.htm

The source code of the ppudump utility
http://svn.freepascal.org/cgi-bin/viewvc.cgi/trunk/compiler/utils/ppudump.pp?view=markup

--
Paulo
May 07 2012
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 05 May 2012 03:06:52 -0400, dennis luehring <dl.soluz gmx.net>  
wrote:

 Am 04.05.2012 20:26, schrieb Steven Schveighoffer:
 On Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic
 <andrej.mitrovich gmail.com>  wrote:

  On 5/4/12, foobar<foo bar.com>  wrote:
  How about augmenting the object format so that libraries would be
  self contained and would not require additional .di files? Is
  this possible optlink by e.g. adding special sections that would
  be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
Ever heard of Java? -Steve
ever heard about Turbo Pascal (and delphi) got this feature since turbo pascal 4 around 1987
Honestly? No. I've heard of those languages, I don't know anyone who uses them, and I've never used them. I don't mean this as a slight or rebuttal. Java is just more recognizable. Using either language (Java or TurboPascal) is still a good way to prove the point that it is possible and works well. -Steve
May 07 2012
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
This just confirms what I saw yesterday on a presentation.

Many developers re-invent the wheel, or jump to the fad technology of the
year, because they don't have the knowledge of old, already-proven
technologies that, for whatever reason, are no longer common.

We need better ways to preserve knowledge in our industry.

--
Paulo

"Steven Schveighoffer"  wrote in message 
news:op.wdxra01ceav7ka steves-laptop...

On Sat, 05 May 2012 03:06:52 -0400, dennis luehring <dl.soluz gmx.net>
wrote:

 Am 04.05.2012 20:26, schrieb Steven Schveighoffer:
 On Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic
 <andrej.mitrovich gmail.com>  wrote:

  On 5/4/12, foobar<foo bar.com>  wrote:
  How about augmenting the object format so that libraries would be
  self contained and would not require additional .di files? Is
  this possible optlink by e.g. adding special sections that would
  be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
Ever heard of Java? -Steve
ever heard about Turbo Pascal (and delphi) got this feature since turbo pascal 4 around 1987
Honestly? No. I've heard of those languages, I don't know anyone who uses them, and I've never used them. I don't mean this as a slight or rebuttal. Java is just more recognizable. Using either language (Java or TurboPascal) is still a good way to prove the point that it is possible and works well. -Steve
May 07 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 07 May 2012 09:22:05 -0400, Paulo Pinto <pjmlp progtools.org>  
wrote:

 This just confirms what I saw yesterday on a presentation.

 Many developers re-invent the wheel, or jump to the fad technology of the
 year, because they don't have the knowledge of old already proven  
 technologies,
 that for whatever reason, are no longer common.

 We need better ways to preserve knowledge in our industry.
Again, don't take offense. I never suggested Java's use of an already existing technology was in some way a "new" thing, just that it proves it can work. I'm sure back in the day, TurboPascal had to walk uphill through the snow to school both ways too. :) -Steve
May 07 2012
parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 07.05.2012 15:30, schrieb Steven Schveighoffer:
 On Mon, 07 May 2012 09:22:05 -0400, Paulo Pinto <pjmlp progtools.org>
 wrote:

 This just confirms what I saw yesterday on a presentation.

 Many developers re-invent the wheel, or jump to the fad technology of the
 year, because they don't have the knowledge of old already proven
 technologies,
 that for whatever reason, are no longer common.

 We need better ways to preserve knowledge in our industry.
Again, don't take offense. I never suggested Java's use of an already existing technology was in some way a "new" thing, just that it proves it can work. I'm sure back in the day, TurboPascal had to walk uphill through the snow to school both ways too. :) -Steve
No offense taken. My reply was just a small rant, based on your answer
about the lack of contact with Turbo Pascal and the other languages I
mentioned.

Yesterday I watched a presentation where the speaker complains about
knowledge being lost due to the lack of proper mentors in the industry:

http://www.infoq.com/presentations/The-Frustrated-Architect

I have spent a huge amount of time at university learning about compiler
development, reading old books and papers from the early computing days.

So in a general way, and not directed at you, it saddens me that a great
part of that knowledge is lost to most youth nowadays.

Developers get amazed by JavaScript JIT compilation, and yet it already
existed in Smalltalk systems.

Go advertises fast compilation speeds, and they were already available in
some language systems in the late 70's and early 80's.

We are discussing storing module interfaces directly in the library
files, and most seem to never have heard of it.

And the list goes on.

Sometimes I wonder what students learn in modern CS courses.

--
Paulo
May 07 2012
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, May 07, 2012 at 07:21:54PM +0200, Paulo Pinto wrote:
[...]
 I have spent a huge amount of time at university learning about compiler
 development, reading old books and papers from the early computing
 days.
 
 So in a general way, and not directed to you now, it saddens me that a
 great part of that knowledge is lost to most youth nowadays.
 
 Developers get amazed with JavaScript JIT compilation, and yet it
 already existed in Smalltalk systems.
 
 Go advertises fast compilation speeds, and they were already available
 to some language systems in the late 70's, early 80's.
 
 We are discussing storing module interfaces directly in the library
 files, and most seem to never have heard of it.
 
 And the list goes on.
 
 Sometimes I wonder what do students learn in modern CS courses.
[...] Way too much theory and almost no practical applications. At least, that was my experience when I was in college. It gets worse the more prestigious the college is, apparently. I'm glad I spent much of my free time working on my own projects, and doing _real_ coding, like actually use C/C++ outside of the trivial assignments they hand out in class. About 90% of what I do at my job is what I learned during those free-time projects. Only 10% or maybe even less is what I got from CS courses. T -- The two rules of success: 1. Don't tell everything you know. -- YHL
May 07 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-05-07 20:13, H. S. Teoh wrote:
 On Mon, May 07, 2012 at 07:21:54PM +0200, Paulo Pinto wrote:
 Sometimes I wonder what students learn in modern CS courses.
[...] Way too much theory and almost no practical applications. At least, that was my experience when I was in college. It gets worse the more prestigious the college is, apparently. I'm glad I spent much of my free time working on my own projects, and doing _real_ coding, like actually using C/C++ outside of the trivial assignments they hand out in class. About 90% of what I do at my job is what I learned during those free-time projects. Only 10% or maybe even less is what I got from CS courses.
So true, so true. I feel exactly the same. -- /Jacob Carlborg
May 07 2012
prev sibling parent Adrian <adrian.remove-nospam veith-system.de> writes:
Am 04.05.2012 19:54, schrieb Andrej Mitrovic:
 On 5/4/12, foobar<foo bar.com>  wrote:
 How about augmenting the object format so that libraries would be
 self contained and would not require additional .di files? Is
 this possible optlink by e.g. adding special sections that would
 be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
Delphi has done this for ages!
May 05 2012
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-05-04 18:57, foobar wrote:

 How about augmenting the object format so that libraries would be self
 contained and would not require additional .di files? Is this possible
 optlink by e.g. adding special sections that would be otherwise ignored?
That would be nice. I guess that would mean the compiler needs to be
changed as well, to be able to read the .di files from the library.

--
/Jacob Carlborg
May 04 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, May 04, 2012 at 07:54:38PM +0200, Andrej Mitrovic wrote:
 On 5/4/12, foobar <foo bar.com> wrote:
 How about augmenting the object format so that libraries would be
 self contained and would not require additional .di files? Is
 this possible optlink by e.g. adding special sections that would
 be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
Exactly. And while we're at it, *really* strip unnecessary stuff from .di
files, like function bodies, template bodies, etc. That stuff is required
by the compiler, not the user, so stick that in the object files and let
the compiler deal with it. The .di file should be ONLY what's needed for
the user to understand how to use the library.


T

--
There are three kinds of people in the world: those who can count, and
those who can't.
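As a minimal illustration with a made-up module, the stripped-down .di
argued for here would carry signatures only, not bodies:

// mylib.d, the full module source:
module mylib;

int add(int a, int b)
{
    return a + b;
}

// What the stripped interface file (mylib.di) would then contain;
// nothing beyond what a user needs in order to call the library:
//
//     module mylib;
//     int add(int a, int b);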
May 04 2012
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Fri, 04 May 2012 14:12:16 -0700, H. S. Teoh <hsteoh quickfur.ath.cx>  
wrote:

 On Fri, May 04, 2012 at 07:54:38PM +0200, Andrej Mitrovic wrote:
 On 5/4/12, foobar <foo bar.com> wrote:
 How about augmenting the object format so that libraries would be
 self contained and would not require additional .di files? Is
 this possible optlink by e.g. adding special sections that would
 be otherwise ignored?
How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
Exactly. And while we're at it, *really* strip unnecessary stuff from .di files, like function bodies, template bodies, etc.. That stuff is required by the compiler, not the user, so stick that in the object files and let the compiler deal with it. The .di file should be ONLY what's needed for the user to understand how to use the library. T
I've written code to do this, but apparently it breaks Phobos in the
autotester. I can't get it to break Phobos on my local machine, so I'm at
a loss as to how to fix it. Maybe you can help? The code is here:
https://github.com/LightBender/dmd.git

--
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
May 04 2012
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, May 04, 2012 at 02:39:00PM -0700, Adam Wilson wrote:
 On Fri, 04 May 2012 14:12:16 -0700, H. S. Teoh
 <hsteoh quickfur.ath.cx> wrote:
[...]
Exactly. And while we're at it, *really* strip unnecessary stuff from
.di files, like function bodies, template bodies, etc.. That stuff is
required by the compiler, not the user, so stick that in the object
files and let the compiler deal with it. The .di file should be ONLY
what's needed for the user to understand how to use the library.
[...]
 I've written code to do this, but apparently it breaks Phobos in the
 autotester. I can't get it to break Phobos on my local machine so I'm
 at a loss as how to fix it. Maybe you can help? The code is here:
 https://github.com/LightBender/dmd.git
[...]

Sorry for taking so long to respond, been busy. I got some time this
morning to clone your repo and build dmd, then rebuilt druntime and
phobos, and got this error from phobos:

../druntime/import/core/sys/posix/sys/select.di(25): function declaration without return type. (Note that constructors are always named 'this')
../druntime/import/core/sys/posix/sys/select.di(25): no identifier for declarator __FDELT(int d)
../druntime/import/core/sys/posix/sys/select.di(27): function declaration without return type. (Note that constructors are always named 'this')
../druntime/import/core/sys/posix/sys/select.di(27): no identifier for declarator __FDMASK(int d)
make[1]: *** [generated/linux/release/32/libphobos2.a] Error 1
make: *** [release] Error 2

Looks like the bug only triggers when you rebuild druntime before
rebuilding phobos.

Hope this helps. Let me know if you want me to test anything else.


T

--
Freedom: (n.) Man's self-given right to be enslaved by his own depravity.
May 05 2012
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, May 05, 2012 at 09:51:40AM -0700, H. S. Teoh wrote:
 On Fri, May 04, 2012 at 02:39:00PM -0700, Adam Wilson wrote:
 On Fri, 04 May 2012 14:12:16 -0700, H. S. Teoh
 <hsteoh quickfur.ath.cx> wrote:
[...]
Exactly. And while we're at it, *really* strip unnecessary stuff from
.di files, like function bodies, template bodies, etc.. That stuff is
required by the compiler, not the user, so stick that in the object
files and let the compiler deal with it. The .di file should be ONLY
what's needed for the user to understand how to use the library.
[...]
 I've written code to do this, but apparently it breaks Phobos in the
 autotester. I can't get it to break Phobos on my local machine so I'm
 at a loss as how to fix it. Maybe you can help? The code is here:
 https://github.com/LightBender/dmd.git
[...] Sorry for taking so long to respond, been busy. I got some time this morning to clone your repo and build dmd, then rebuilt druntime and phobos, and got this error from phobos: ../druntime/import/core/sys/posix/sys/select.di(25): function declaration without return type. (Note that constructors are always named 'this') ../druntime/import/core/sys/posix/sys/select.di(25): no identifier for declarator __FDELT(int d) ../druntime/import/core/sys/posix/sys/select.di(27): function declaration without return type. (Note that constructors are always named 'this') ../druntime/import/core/sys/posix/sys/select.di(27): no identifier for declarator __FDMASK(int d) make[1]: *** [generated/linux/release/32/libphobos2.a] Error 1 make: *** [release] Error 2
[...] Oh, and here's the snippet from the offending file (core/sys/posix/sys/select.di): ------SNIP------ private { alias c_long __fd_mask; enum uint __NFDBITS = 8 * __fd_mask.sizeof; extern (D) auto __FDELT(int d); // this is line 25 extern (D) auto __FDMASK(int d); // this is line 27 } ------SNIP------ Looks like the problem is caused by the auto, perhaps? T -- Lottery: tax on the stupid. -- Slashdotter
May 05 2012
prev sibling parent reply "foobar" <foo bar.com> writes:
On Friday, 4 May 2012 at 21:11:22 UTC, H. S. Teoh wrote:
 Exactly. And while we're at it, *really* strip unnecessary 
 stuff from
 .di files, like function bodies, template bodies, etc.. That 
 stuff is
 required by the compiler, not the user, so stick that in the 
 object
 files and let the compiler deal with it. The .di file should be 
 ONLY
 what's needed for the user to understand how to use the library.


 T
You contradict yourself. The purpose of .di files *is* to provide
the compiler the required info to use the binary object/library.
If you want human-readable docs we already have DDoc (and other
3rd party tools) for that. If you don't like the default HTML
output (I can't fathom why) you can easily define appropriate
macros for other output types such as TeX (and PDF via external
converter), text based, etc.
May 04 2012
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, May 05, 2012 at 12:07:16AM +0200, foobar wrote:
 On Friday, 4 May 2012 at 21:11:22 UTC, H. S. Teoh wrote:
Exactly. And while we're at it, *really* strip unnecessary stuff from
.di files, like function bodies, template bodies, etc.. That stuff is
required by the compiler, not the user, so stick that in the object
files and let the compiler deal with it. The .di file should be ONLY
what's needed for the user to understand how to use the library.


T
You contradict yourself. The purpose of di files *is* to provide the compiler the required info to use the binary object/library. If you want human readable docs we already have DDoc (and other 3rd party tools) for that. If you don't like the default HTML output (I can't fathom why) you can easily define appropriate macros for other output types such as TeX (and PDF via external converter), text based, etc..
HTML is a stupid format, and ddoc output is not very navigable, but
that's beside the point. I prefer to be reading actual code, to be 100%
sure that ddoc isn't leaving out some stuff that I should know about. All
it takes is for somebody to leave out a doc comment and a particular
declaration becomes invisible. (For example, std.uni was next to useless
before I discovered that it actually had functions that I needed, but
they didn't show up on dlang.org 'cos somebody failed to write doc
comments for them.) I've seen too many commercial projects to believe for
a moment that documentation is ever up-to-date. It depends on the library
authors to provide ddoc output in a sane, usable format. Whereas if the
compiler had a standardized, uniform, understandable format in well-known
code syntax, that's a lot more dependable.

It's often impossible to debug something if you don't get to see what the
compiler sees. I suppose you could argue that leaving out function bodies
and stuff amounts to the same thing, but at least the language's
interface for a function is the function's signature. When you have a .di
file, you're guaranteed that all public declarations are there, and you
can see exactly what they are. Of course, IF ddoc can be guaranteed to
produce exactly what's in a .di file, then I concede that it is
sufficient for this purpose.


T

--
Recently, our IT department hired a bug-fix engineer. He used to work
for Volkswagen.
May 04 2012
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-05-05 00:39, H. S. Teoh wrote:

 It's often impossible to debug something if you don't get to see what
 the compiler sees. I suppose you could argue that leaving out function
 bodies and stuff amounts to the same thing, but at least the language's
 interface for a function is the function's signature. When you have a
 .di file, you're guaranteed that all public declarations are there, and
 you can see exactly what they are. Of course, IF ddoc can be guaranteed
 to produce exactly what's in a .di file, then I concede that it is
 sufficient this purpose.
If the compiler can extract the .di files from an object file, so can
other tools. I don't see the problem.

--
/Jacob Carlborg
May 05 2012
prev sibling parent "foobar" <foo bar.com> writes:
On Friday, 4 May 2012 at 22:38:27 UTC, H. S. Teoh wrote:
 On Sat, May 05, 2012 at 12:07:16AM +0200, foobar wrote:
 On Friday, 4 May 2012 at 21:11:22 UTC, H. S. Teoh wrote:
Exactly. And while we're at it, *really* strip unnecessary 
stuff from
.di files, like function bodies, template bodies, etc.. That 
stuff is
required by the compiler, not the user, so stick that in the 
object
files and let the compiler deal with it. The .di file should 
be ONLY
what's needed for the user to understand how to use the 
library.


T
You contradict yourself. The purpose of di files *is* to provide the compiler the required info to use the binary object/library. If you want human readable docs we already have DDoc (and other 3rd party tools) for that. If you don't like the default HTML output (I can't fathom why) you can easily define appropriate macros for other output types such as TeX (and PDF via external converter), text based, etc..
HTML is a stupid format, and ddoc output is not very navigable, but that's beside the point. I prefer to be reading actual code to be 100% sure that ddoc isn't leaving out some stuff that I should know about. All it takes is for somebody to leave out a doc comment and a particular declaration becomes invisible. (For example, std.uni was next to useless before I discovered that it actually had functions that I needed, but they didn't show up in dlang.org 'cos somebody failed to write doc comments for them.) I've seen too many commercial projects to believe for a moment that documentation is ever up-to-date. It depends on the library authors to provide ddoc output formats in a sane, usable format. Whereas if the compiler had a standardized, uniform, understandable format in well-known code syntax, that's a lot more dependable. It's often impossible to debug something if you don't get to see what the compiler sees. I suppose you could argue that leaving out function bodies and stuff amounts to the same thing, but at least the language's interface for a function is the function's signature. When you have a .di file, you're guaranteed that all public declarations are there, and you can see exactly what they are. Of course, IF ddoc can be guaranteed to produce exactly what's in a .di file, then I concede that it is sufficient this purpose. T
This all amounts to issues you have with the current implementation of
DDoc, which I agree needs more work. The solution, then, is to
fix/enhance DDoc. Doxygen, for instance, has a setting to output all
declarations whether documented or not, thus addressing your main point.

The projects you speak of I assume are written in C/C++? Those tend to
have poor documentation precisely because people assume the header files
are enough. C/C++ requires you to install a 3rd party doc tool and learn
that tool's doc syntax - effort that people are too lazy to invest. In
the Java world the syntax is standardized, the tool comes bundled with
the compiler, all tools speak it, and IDEs will even insert empty doc
comments for you automatically. Frankly, it takes effort to *not*
document your code in this setting. D provides DDoc precisely because it
strives to provide the same doc-friendly setting as Java.
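For instance, a minimal sketch of the macro override mechanism referred
to above, with made-up file names; a .ddoc file passed on the dmd command
line can replace the DDOC wrapper macro and so swap out the default HTML
page skeleton:

// mylib.d: an ordinary documented module.
module mylib;

/// Adds two integers.
int add(int a, int b) { return a + b; }

// plain.ddoc would contain a single macro override:
//
//     DDOC = $(BODY)
//
// and compiling with
//
//     dmd -D -Dfmylib.txt plain.ddoc mylib.d
//
// emits the documentation without the default HTML page skeleton.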
May 05 2012
prev sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
Delphi, Turbo Pascal and FreePascal do the same.

All the required information is stored in the .tpu/.ppu files (Turbo/Free
Pascal Unit).

A command line tool or IDE can easily show the unit interface.

--
Paulo

"foobar"  wrote in message news:abzrrvpylkxhdzsdhesg forum.dlang.org...

On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:
 I'm interested in starting a project to make a linker besides optlink for 
 dmd on windows.
Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.
 My vision is to create a linker in a relatively modern language (D) and 
 to release the project as open source.
If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
How about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored? I think that's what Go did in their linker but I don't know what format they use, if it's something specific to Go or general.
May 06 2012
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 5/4/12, Trass3r <un known.com> wrote:
 I'm interested in starting a project to make a linker besides optlink
 for dmd on windows.
Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.
Hear hear. But I wouldn't mind seeing a linker in D, just for research purposes.
May 04 2012
prev sibling next sibling parent "Pierre LeMoine" <yarr.luben+dlang gmail.com> writes:
On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:

 Imho changing dmd to use COFF (incl. 64 support) instead of 
 that crappy OMF would be more beneficial than yet another 
 linker.
I'd love to, but I don't think I can spend a whole summer doing that ;)
 If you do write a linker then make it cross-platform right from 
 the start; and modular so it can support all object file 
 formats.
I intend to first make something that works, gather experience, and get
a firm grasp of all the quirks of writing a linker. And there seem to be
nice features such as dead code elimination and template magic to
consider as well. It would be a shame to limit the capabilities by making
the design too well defined at the beginning of the project, I think. So
I'll defer the modularity & cross-platforminess for now but keep it in
mind for the long run :)
May 06 2012
prev sibling parent mta`chrono <chrono mta-international.net> writes:
Am 04.05.2012 01:47, schrieb Trass3r:
 I'm interested in starting a project to make a linker besides optlink
 for dmd on windows.
Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.
 My vision is to create a linker in a relatively modern language (D)
 and to release the project as open source.
If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
Yes, supporting COFF would be a great benefit on Windows and would allow
the user to use other compilers and linkers in conjunction with D.

The other point: writing a linker as part of GSoC 2013 will be easier for
you if you've already implemented COFF, since you won't need any further
ramp-up time ;-).
May 08 2012
prev sibling parent reply "Roald Ribe" <roald.ribe hotmail.com> writes:
On Thu, 03 May 2012 19:47:19 -0300, Pierre LeMoine <yarr.luben+dlang gmail.com>
wrote:

 Hi!

 I'm interested in starting a project to make a linker besides
 optlink for dmd on windows. If possible it'd be cool to run it as
 a gsoc-project, but if that's not an option I'll try to get it
 admitted as a soc-project at my university.

 Anyway, the project would aim to be a replacement or alternative
 to optlink on windows. I've personally encountered quite a few
 seemingly random problems with optlink, and the error messages
 are not exactly friendly. My vision is to create a linker in a
 relatively modern language (D) and to release the project as open
 source.

 So, I'm curious about some things; Is it too late to get this
 accepted as a summer of code project? Are there any current
 alternative linkers for dmd on windows, or any current projects
 aiming to create one? And do any of you know of a "Everything you
 need to know to write the best linker ever" resource center? ;]
If you are interested in getting results rather than reinventing the
wheel, I would advise you to have a look at the openwatcom.org wlink, and
the forked jwlink, as a starting point. The linker is open source,
written in C, and has user documentation (not source documentation,
unfortunately).

Roald
May 07 2012
parent reply "Pierre LeMoine" <yarr.luben+dlang gmail.com> writes:
On Monday, 7 May 2012 at 12:36:18 UTC, Roald Ribe wrote:
 If you are interested in getting results rather than 
 reinventing the wheel,
 I would advice you to have a look at the openwatcom.org wlink, 
 and the
 forked jwlink as a starting point. The linker is open source, 
 written in
 C and has user documentation (not source doc unfortunately).

 Roald
Thanks for the tip! :) What level of reinventing the wheel are we talking
about? Did you suggest I fork (j)wlink or some such, or that I take a
look at how it's implemented instead of reinventing from scratch? :)

And does anyone know if wlink is able to link programs from dmd? I made a
half-hearted attempt myself, but didn't manage to get it to work ;p

/Pierre
May 07 2012
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-05-07 17:41, Pierre LeMoine wrote:
 On Monday, 7 May 2012 at 12:36:18 UTC, Roald Ribe wrote:
 If you are interested in getting results rather than reinventing the
 wheel,
 I would advice you to have a look at the openwatcom.org wlink, and the
 forked jwlink as a starting point. The linker is open source, written in
 C and has user documentation (not source doc unfortunately).

 Roald
Thanks for the tip! :) What level of reinventing the wheel are we talking about? Did you suggest i fork (j)wlink or somesuch, or that i take a look at how it's implemented instead of reinventing from scratch? :) And does anyone know if wlink is able to link programs from dmd? I made a half-hearted attempt myself, but didn't manage to get it to work ;p /Pierre
Perhaps you could have a look at "gold" as well: http://en.wikipedia.org/wiki/Gold_%28linker%29 -- /Jacob Carlborg
May 07 2012
prev sibling parent "Roald Ribe" <roald.ribe hotmail.com> writes:
On Mon, 07 May 2012 12:41:09 -0300, Pierre LeMoine <yarr.luben+dlang gmail.com>
wrote:

 On Monday, 7 May 2012 at 12:36:18 UTC, Roald Ribe wrote:
 If you are interested in getting results rather than
 reinventing the wheel,
 I would advice you to have a look at the openwatcom.org wlink,
 and the
 forked jwlink as a starting point. The linker is open source,
 written in
 C and has user documentation (not source doc unfortunately).

 Roald
Thanks for the tip! :) What level of reinventing the wheel are we talking about? Did you suggest i fork (j)wlink or somesuch, or that i take a look at how it's implemented instead of reinventing from scratch? :) And does anyone know if wlink is able to link programs from dmd? I made a half-hearted attempt myself, but didn't manage to get it to work ;p /Pierre
I believed that this guy had done it already, but it turns out it was for
the DMC compilers, not D. He might have some advice for you.
http://cmeerw.org/prog/dm/

I can't really tell you what is best to achieve what you want. Have a
look at the sources, ask the maintainers, evaluate the supporting
environment of the available choices, and find out.

The openwatcom.org project also has a really nice debugger that could
support D if anyone made the necessary changes.

Roald
May 08 2012