digitalmars.D - GSOC Linker project
- Pierre LeMoine (17/17) May 03 2012 Hi!
- Alex Rønne Petersen (15/31) May 03 2012 Absolutely possible, though too late for this year's GSoC. If you're
- H. S. Teoh (30/40) May 03 2012 [...]
- Alex Rønne Petersen (7/45) May 03 2012 I know, but that doesn't mean you can't write the linker so that it
- H. S. Teoh (16/28) May 03 2012 [...]
- Jacob Carlborg (7/21) May 04 2012 He can start with a version for Windows. If as much as possible of the
- Pierre LeMoine (14/26) May 06 2012 Too bad for me i guess, but i'll still try to get into my
- Trass3r (4/8) May 03 2012 If you do write a linker then make it cross-platform right from the star...
- Steven Schveighoffer (3/7) May 04 2012 +1
- foobar (7/17) May 04 2012 How about augmenting the object format so that libraries would be
- simendsjo (2/20) May 04 2012 http://dsource.org/projects/ddl
- foobar (4/31) May 04 2012 This is D1 only and AFAIK was abandoned long ago.
- Andrej Mitrovic (4/8) May 04 2012 How would you use a library you don't even have the interface to? I
- foobar (10/22) May 04 2012 How about using the documentation? It's meant to be consumed by
- Andrej Mitrovic (6/12) May 04 2012 I'd say the docs are more likely to be out of sync than .di code. If
- foobar (15/31) May 04 2012 I'd say you'd be wrong.
- Andrew Wiley (8/40) May 04 2012 I like the idea, but what about templates? For them, you'd basically be
- foobar (8/68) May 04 2012 C++ has pre-compiled header files (.pch) which speedup
- Steven Schveighoffer (7/9) May 04 2012 Nothing wrong with this. There is still a gain here -- object code
- Alex Rønne Petersen (6/41) May 04 2012 Storing the AST would basically equal storing the source code except
- Paulo Pinto (19/58) May 06 2012 AST/symbol table manipulation is way faster than reparsing code.
- Jens Mueller (4/16) May 07 2012 Do you happen to remember to exact title of that paper?
- Paulo Pinto (5/26) May 07 2012 I'll try to find it, as I don't recall the title.
- Jens Mueller (7/36) May 07 2012 Many thanks.
- Paulo Pinto (14/41) May 07 2012 I think it was there where I read about it.
- Paulo Pinto (18/54) May 07 2012 Hi,
- Andre Tampubolon (4/71) May 07 2012 Interesting reading.
- Paulo Pinto (9/66) May 07 2012 Oops, copy/paste error. :(
- Paulo Pinto (3/74) May 08 2012 The correct link should have been
- Jacob Carlborg (6/11) May 04 2012 They would need to be able to read the library and extract the .di
- Steven Schveighoffer (4/12) May 04 2012 Ever heard of Java?
- Andrej Mitrovic (2/3) May 04 2012 Ever heard of not requiring a bring-your-quadcore-to-its-knees IDE?
- Steven Schveighoffer (9/12) May 04 2012 This is a totally false comparison :) Java's storage of its interface i...
- Andrej Mitrovic (6/8) May 04 2012 Yes but then you need to *modify* existing tools in order to add a new
- Steven Schveighoffer (6/15) May 04 2012 Current tools: read .di files and extract API
- Andrej Mitrovic (3/6) May 04 2012 I thought he meant libraries that are only distributed in binary form.
- Steven Schveighoffer (16/23) May 04 2012 No reason for .di files if the object file already serves as the interfa...
- foobar (2/28) May 04 2012 Exactly :)
- H. S. Teoh (16/29) May 04 2012 [...]
- Alex Rønne Petersen (8/35) May 04 2012 Purity inference won't happen either way. Purity is part of your API and...
- Steven Schveighoffer (10/15) May 07 2012
- Alex Rønne Petersen (9/22) May 07 2012 But that kind of inferred purity is something a compiler back end cares
- Steven Schveighoffer (19/40) May 07 2012
- Alex Rønne Petersen (7/43) May 07 2012 OK, point taken; didn't consider that. But in the first place, for
- Steven Schveighoffer (13/63) May 07 2012
- Andrew Wiley (18/80) May 07 2012
- Steven Schveighoffer (17/40) May 07 2012
- Andrew Wiley (17/50) May 07 2012
- Steven Schveighoffer (11/33) May 07 2012 Shared library entry points have to have *no* inference. Otherwise you ...
- Artur Skawina (8/13) May 07 2012 In WPO mode - it doesn't matter - it's just another internal compiler op...
- Paulo Pinto (13/16) May 06 2012 I also don't see the issue.
- dennis luehring (5/18) May 05 2012 ever heard about Turbo Pascal (and delphi) got this feature since turbo
- dennis luehring (9/30) May 05 2012 an more up-to-date example can be seen using the freepascal compiler and
- Paulo Pinto (21/42) May 06 2012 I really really think that mankind did a wrong turn when C won over Pasc...
- dennis luehring (10/19) May 07 2012 we should collect all the advantages of turbo pascal/delphi
- Paulo Pinto (15/25) May 07 2012 I like the idea, need to check what information I could provide.
- Paulo Pinto (9/39) May 07 2012 Description of the Free Pascal unit format
- Steven Schveighoffer (8/27) May 07 2012 Honestly? No. I've heard of those languages, I don't know anyone who
- Paulo Pinto (18/37) May 07 2012 This just confirms what I saw yesterday on a presentation.
- Steven Schveighoffer (8/14) May 07 2012 Again, don't take offense. I never suggested Java's use of an already
- Paulo Pinto (21/37) May 07 2012 No offense taken.
- H. S. Teoh (14/33) May 07 2012 [...]
- Jacob Carlborg (4/15) May 07 2012 So true, so true. I feel exactly the same.
- Adrian (2/10) May 05 2012 Delphi does this since ages!
- Jacob Carlborg (5/8) May 04 2012 That would be nice. I guess that would mean that compiler needs to be
- H. S. Teoh (9/18) May 04 2012 Exactly. And while we're at it, *really* strip unnecessary stuff from
- Adam Wilson (12/28) May 04 2012 I've written code to do this, but apparently it breaks Phobos in the
- H. S. Teoh (19/30) May 05 2012 [...]
- H. S. Teoh (17/43) May 05 2012 [...]
- foobar (8/18) May 04 2012 You contradict yourself.
- H. S. Teoh (24/41) May 04 2012 HTML is a stupid format, and ddoc output is not very navigable, but
- Jacob Carlborg (5/13) May 05 2012 If the compiler can extract the .di files from an object file so can
- foobar (17/83) May 05 2012 This all amounts to the issues you have with the current
- Paulo Pinto (14/22) May 06 2012 Delphi, Turbo Pascal and FreePascal do the same.
- Andrej Mitrovic (3/7) May 04 2012 Hear hear.
- Pierre LeMoine (11/17) May 06 2012 I'd love to, but i don't think i can spend a whole summer doing
- mta`chrono (6/18) May 08 2012 Yes supporting COFF would be a great benefit on Windows and would allow
- Roald Ribe (6/22) May 07 2012 If you are interested in getting results rather than reinventing the whe...
- Pierre LeMoine (9/17) May 07 2012 Thanks for the tip! :)
- Jacob Carlborg (5/20) May 07 2012 Perhaps you could have a look at "gold" as well:
- Roald Ribe (10/28) May 08 2012 I believed that this guy had done it already, but turns out it was
Hi! I'm interested in starting a project to make a linker besides optlink for dmd on windows. If possible it'd be cool to run it as a gsoc-project, but if that's not an option I'll try to get it admitted as a soc-project at my university. Anyway, the project would aim to be a replacement or alternative to optlink on windows. I've personally encountered quite a few seemingly random problems with optlink, and the error messages are not exactly friendly. My vision is to create a linker in a relatively modern language (D) and to release the project as open source. So, I'm curious about some things: Is it too late to get this accepted as a summer of code project? Are there any current alternative linkers for dmd on windows, or any current projects aiming to create one? And do any of you know of an "Everything you need to know to write the best linker ever" resource center? ;] /Pierre
May 03 2012
On 04-05-2012 00:47, Pierre LeMoine wrote:Hi! I'm interested in starting a project to make a linker besides optlink for dmd on windows. If possible it'd be cool to run it as a gsoc-project, but if that's not an option I'll try to get it admitted as a soc-project at my university.Absolutely possible, though too late for this year's GSoC. If you're still interested in working on it for GSoC 2013 (if Google decides to do another GSoC (which they most likely will)), then be sure to submit a proposal!Anyway, the project would aim to be a replacement or alternative to optlink on windows. I've personally encountered quite a few seemingly random problems with optlink, and the error messages are not exactly friendly. My vision is to create a linker in a relatively modern language (D) and to release the project as open source.Sounds like a good idea to me. Though in my personal opinion, you should try to make the linker as platform-agnostic as possible, so it's easy to adapt for new platforms / file formats.So, I'm curious about some things; Is it too late to get this accepted as a summer of code project? Are there any current alternative linkers for dmd on windows, or any current projects aiming to create one? And do any of you know of a "Everything you need to know to write the best linker ever" resource center? ;]Too late for this year's GSoC. For another linker option, see Unilink. As for resources on linkers, I think your best bet is reading the LLVM and GCC source code. I think someone also started an LLVM (machine code) linker project recently, but I don't know where to find it./Pierre-- - Alex
May 03 2012
On Fri, May 04, 2012 at 12:53:02AM +0200, Alex Rønne Petersen wrote:On 04-05-2012 00:47, Pierre LeMoine wrote:[...][...] The problem with writing linkers is that they are usually closely tied to implementation details on the host OS. At the very least, they must play nice with the OS's runtime dynamic linker (or *be* the dynamic linker themselves, like ld on the *nixes). They must also play nice with object files produced by other compilers on that platform, since otherwise it sorta defeats the purpose of rewriting optlink in the first place. This means that they must understand all the intimate details of every common object file and executable format on that OS. The basic concept behind a linker is very simple, really, but it's the implementation where details get ugly. To be frank, I question the wisdom of not just using ld on Posix systems... but OTOH, the world *needs* better linker technology than we currently have, so projects like this one are a good thing. Linkers date from several decades ago, when programs could be broken up into separate, self-contained source files in a simple way. Things have changed a lot since then. Nowadays, we have template functions, virtual functions, dynamic libraries, etc., which require hacks like weak symbols to work properly. And we're *still* missing a sound conceptual framework for things like cross-module dead code elimination, cross-module template instantiation, duplicate code merging (like overlapping immutable arrays), etc. These things _sorta_ work right now, but they're sorta hacked on top of basic 30-year-old linker technology, rather than being part of a sound, conceptual linker paradigm. T -- There are 10 kinds of people in the world: those who can count in binary, and those who can't.Anyway, the project would aim to be a replacement or alternative to optlink on windows. I've personally encountered quite a few seemingly random problems with optlink, and the error messages are not exactly friendly.
My vision is to create a linker in a relatively modern language (D) and to release the project as open source.Sounds like a good idea to me. Though in my personal opinion, you should try to make the linker as platform-agnostic as possible, so it's easy to adapt for new platforms / file formats.
May 03 2012
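[Editor's illustration] The message above makes two points: the core job of a linker is conceptually simple, and features like weak symbols are layered on top of it. The toy symbol-resolution pass below sketches both. This is a hedged illustration in Python, not how any real linker represents object files; the dict-based "object file" format is invented purely for this example.

```python
# Toy symbol resolution: each "object file" is a dict with defined
# symbols (name -> (binding, address)) and a set of undefined refs.
# This representation is invented for illustration only.

def resolve(objects):
    defined = {}          # symbol name -> (binding, address)
    undefined = set()
    for obj in objects:
        for name, (binding, addr) in obj["defs"].items():
            prev = defined.get(name)
            if prev is None:
                defined[name] = (binding, addr)
            elif prev[0] == "weak" and binding == "strong":
                defined[name] = (binding, addr)    # strong overrides weak
            elif prev[0] == "strong" and binding == "strong":
                raise ValueError("duplicate strong symbol: " + name)
            # weak-vs-weak and strong-vs-weak: first definition wins
        undefined.update(obj["refs"])
    unresolved = undefined - defined.keys()
    if unresolved:
        raise ValueError("unresolved symbols: " + ", ".join(sorted(unresolved)))
    return defined

crt = {"defs": {"_start": ("strong", 0x100)}, "refs": {"main"}}
lib = {"defs": {"main": ("weak", 0x200), "helper": ("strong", 0x300)}, "refs": set()}
app = {"defs": {"main": ("strong", 0x400)}, "refs": {"helper"}}

table = resolve([crt, lib, app])
print(table["main"])   # the strong definition wins over the weak one
```

The weak/strong rule here mirrors why template instantiations can appear in many object files without triggering duplicate-symbol errors: duplicates are tolerated and one copy is picked, which is exactly the kind of bolted-on mechanism the message complains about.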
On 04-05-2012 01:57, H. S. Teoh wrote:On Fri, May 04, 2012 at 12:53:02AM +0200, Alex Rønne Petersen wrote:I know, but that doesn't mean you can't write the linker so that it actually *is* portable at *all* (unlike a certain other linker ;).On 04-05-2012 00:47, Pierre LeMoine wrote:[...][...] The problem with writing linkers is that they are usually closely tied to implementation details on the host OS. At the very least, they must play nice with the OS's runtime dynamic linker (or *be* the dynamic linker themselves, like ld on the *nixes). They must also play nice with object files produced by other compilers on that platform, since otherwise it sorta defeats the purpose of rewriting optlink in the first place. This means that they must understand all the intimate details of every common object file and executable format on that OS.Anyway, the project would aim to be a replacement or alternative to optlink on windows. I've personally encountered quite a few seemingly random problems with optlink, and the error messages are not exactly friendly. My vision is to create a linker in a relatively modern language (D) and to release the project as open source.Sounds like a good idea to me. Though in my personal opinion, you should try to make the linker as platform-agnostic as possible, so it's easy to adapt for new platforms / file formats.The basic concept behind a linker is very simple, really, but it's the implementation where details get ugly. To be frank, I question the wisdom of not just using ld on Posix systems... but OTOH, the world *needs* better linker technology than we currently have, so this projects like this one is a good thing.Well, there's currently an LLVM linker in the works. If anything, that's probably the way forward. But seeing as DMD is not using LLVM...Linkers date from several decades ago, where programs can be broken up into separate, self-contained source files in a simple way. Things have changed a lot since then. 
Nowadays, we have template functions, virtual functions, dynamic libraries, etc., which require hacks like weak symbols to work properly. And we're *still* missing a sound conceptual framework for things like cross-module dead code elimination, cross-module template instantiation, duplicate code merging (like overlapping immutable arrays), etc.. These things _sorta_ work right now, but they're sorta hacked on top of basic 30-year-old linker technology, rather than being part of a sound, conceptual linker paradigm. T-- - Alex
May 03 2012
On Fri, May 04, 2012 at 02:43:34AM +0200, Alex Rønne Petersen wrote:On 04-05-2012 01:57, H. S. Teoh wrote:[...][...]The problem with writing linkers is that they are usually closely tied to implementation details on the host OS.I know, but that doesn't mean you can't write the linker so that it actually *is* portable at *all* (unlike a certain other linker ;).True, you could have a properly designed generic framework that makes plugging in new OS-dependent code very easy. I believe something like this is done by GNU BFD (binutils & family, probably subsuming ld as well). But then, you might as well just use binutils in the first place. :-) The only catch is that windows has its own conventions on stuff, and binutils is (AFAIK) tied to Posix. [...][...] As long as LDC is an option, I think all is well. :-) T -- Only boring people get bored. -- JMTo be frank, I question the wisdom of not just using ld on Posix systems... but OTOH, the world *needs* better linker technology than we currently have, so this projects like this one is a good thing.Well, there's currently an LLVM linker in the works. If anything, that's probably the way forward. But seeing as DMD is not using LLVM...
May 03 2012
On 2012-05-04 01:57, H. S. Teoh wrote:To be frank, I question the wisdom of not just using ld on Posix systems... but OTOH, the world *needs* better linker technology than we currently have, so projects like this one are a good thing.He can start with a version for Windows. If as much of the code as possible has a generic, modular design, it should be easy to add support for new formats and platforms.Linkers date from several decades ago, when programs could be broken up into separate, self-contained source files in a simple way. Things have changed a lot since then. Nowadays, we have template functions, virtual functions, dynamic libraries, etc., which require hacks like weak symbols to work properly. And we're *still* missing a sound conceptual framework for things like cross-module dead code elimination, cross-module template instantiation, duplicate code merging (like overlapping immutable arrays), etc. These things _sorta_ work right now, but they're sorta hacked on top of basic 30-year-old linker technology, rather than being part of a sound, conceptual linker paradigm.That would be really nice. -- /Jacob Carlborg
May 04 2012
On Thursday, 3 May 2012 at 22:53:03 UTC, Alex Rønne Petersen wrote:Absolutely possible, though too late for this year's GSoC. If you're still interested in working on it for GSoC 2013 (if Google decides to do another GSoC (which they most likely will)), then be sure to submit a proposal!Too bad for me i guess, but i'll still try to get into my university's SoC-program. And it'd be better to start the project now compared to waiting for a year to start ;pSounds like a good idea to me. Though in my personal opinion, you should try to make the linker as platform-agnostic as possible, so it's easy to adapt for new platforms / file formats.Thanks! I'll try to make it modular and awesome in the end, but for a start i'll just aim to make a linker that's usable with dmd on windows. It's easier to make a good design after getting some more hands-on experience, i think.As for resources on linkers, I think your best bet is reading the LLVM and GCC source code. I think someone also started an LLVM (machine code) linker project recently, but I don't know where to find it.Guess i've got some interesting reading to do.. =) I've come across http://www.iecc.com/linker/ which is quite interesting to read. It seems that it is "quite old", but i don't know how much the linker infrastructure has progressed the last ten years so it's probably still reasonably up to date, i hope ;p
May 06 2012
I'm interested in starting a project to make a linker besides optlink for dmd on windows.Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.My vision is to create a linker in a relatively modern language (D) and to release the project as open source.If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
May 03 2012
On Thu, 03 May 2012 19:47:24 -0400, Trass3r <un known.com> wrote:+1 -SteveI'm interested in starting a project to make a linker besides optlink for dmd on windows.Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.
May 04 2012
On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:How about augmenting the object format so that libraries would be self-contained and would not require additional .di files? Is this possible with optlink, e.g. by adding special sections that would otherwise be ignored? I think that's what Go did in their linker, but I don't know what format they use, or whether it's something specific to Go or general.I'm interested in starting a project to make a linker besides optlink for dmd on windows.Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.My vision is to create a linker in a relatively modern language (D) and to release the project as open source.If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
May 04 2012
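[Editor's illustration] The suggestion above is to carry the library's interface inside the binary itself, in a chunk that a linker would simply skip. The sketch below shows the round-trip with an invented length-prefixed ".dinterface" trailer; a real implementation would use a proper OMF/COFF/ELF section rather than this made-up format.

```python
# Sketch: stash D interface text inside a binary blob, in a tagged
# trailer a linker could ignore. The MAGIC/length layout is invented
# for illustration, not any real object-file section format.
import struct

MAGIC = b".dinterface"

def embed_interface(object_bytes, interface_text):
    payload = interface_text.encode("utf-8")
    # append: magic marker, 4-byte little-endian length, payload
    return object_bytes + MAGIC + struct.pack("<I", len(payload)) + payload

def extract_interface(blob):
    pos = blob.rfind(MAGIC)
    if pos < 0:
        return None                  # no embedded interface present
    off = pos + len(MAGIC)
    (length,) = struct.unpack_from("<I", blob, off)
    return blob[off + 4 : off + 4 + length].decode("utf-8")

obj = b"\x7fFAKEOBJ\x00...machine code bytes..."
di = "module mylib;\nint square(int x);\n"
recovered = extract_interface(embed_interface(obj, di))
print(recovered == di)
```

With something like this, a compiler importing a library could pull the declarations straight out of the .lib file, and no separate .di file would need to be shipped or kept in sync.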
On Fri, 04 May 2012 18:57:44 +0200, foobar <foo bar.com> wrote:On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:http://dsource.org/projects/ddlHow about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored? I think that's what Go did in their linker but I don't know what format they use, if it's something specific to Go or general.I'm interested in starting a project to make a linker besides optlink for dmd on windows.Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.My vision is to create a linker in a relatively modern language (D) and to release the project as open source.If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
May 04 2012
On Friday, 4 May 2012 at 17:52:54 UTC, simendsjo wrote:On Fri, 04 May 2012 18:57:44 +0200, foobar <foo bar.com> wrote:This is D1 only and AFAIK was abandoned long ago. Was a very good idea though and should be adopted by "official" D tool chain.On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:http://dsource.org/projects/ddlHow about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored? I think that's what Go did in their linker but I don't know what format they use, if it's something specific to Go or general.I'm interested in starting a project to make a linker besides optlink for dmd on windows.Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.My vision is to create a linker in a relatively modern language (D) and to release the project as open source.If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
May 04 2012
On 5/4/12, foobar <foo bar.com> wrote:How about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 04 2012
On Friday, 4 May 2012 at 17:54:47 UTC, Andrej Mitrovic wrote:On 5/4/12, foobar <foo bar.com> wrote:How about using the documentation? It's meant to be consumed by humans and comes (or should, if it doesn't yet) with nicely formatted explanations. The .di files are mostly meant to be machine-read (e.g. by the compiler), and this belongs as part of the library file in order to provide ease of use and maintain the relationship between the binary code and its interface. Maintaining two sets of files that could easily get out of sync and *not* using the docs is way more insane.How about augmenting the object format so that libraries would be self-contained and would not require additional .di files? Is this possible with optlink, e.g. by adding special sections that would otherwise be ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 04 2012
On 5/4/12, foobar <foo bar.com> wrote:The di files are mostly meant to be machine read (e.g. the compiler) and this belongs as part of the library file in order to provide ease of use and maintain the relationship between the binary code and it's interface. maintaining two sets of files that could easily get out of sync and *not* using the docs is way more insane.I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented. And then what about existing tools like IDEs and editors. E.g. autocomplete wouldn't work anymore.
May 04 2012
On Friday, 4 May 2012 at 18:30:32 UTC, Andrej Mitrovic wrote:On 5/4/12, foobar <foo bar.com> wrote:I'd say you'd be wrong. Both di and docs are auto-generated from the same source. As I said docs are designed for human consumption. This includes all sorts of features such as a table of contents, a symbol index, the symbols should have links, the docs provide usage examples, etc, etc. Docs can be put online thus ensuring they're always up-to-date. Tools should either read the data from the lib file or retrieve it from the web. Keeping separate local di files is simply insane. And really, have you never heard of Java? How about Pascal? Should I continue back in history to all the languages that implemented this feature decades ago? C/C++ is a huge PITA with their nonsense compilation model which we shouldn't have copied verbatim in D.The di files are mostly meant to be machine read (e.g. the compiler) and this belongs as part of the library file in order to provide ease of use and maintain the relationship between the binary code and it's interface. maintaining two sets of files that could easily get out of sync and *not* using the docs is way more insane.I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented. And then what about existing tools like IDEs and editors. E.g. autocomplete wouldn't work anymore.
May 04 2012
On Fri, May 4, 2012 at 1:46 PM, foobar <foo bar.com> wrote:On Friday, 4 May 2012 at 18:30:32 UTC, Andrej Mitrovic wrote:I like the idea, but what about templates? For them, you'd basically be stuffing source code into the object files (unless you came up with a way to store the AST, but that seems like the effort/benefit ratio wouldn't be worth it since we currently have no way to preserve an AST tree between compiler runs). Otherwise, I find this idea very compelling. I'm sure there are probably other issues, though.On 5/4/12, foobar <foo bar.com> wrote:I'd say you'd be wrong. Both di and docs are auto-generated from the same source. As I said docs are designed for human consumption. This includes all sorts of features such as a table of contents, a symbol index, the symbols should have links, the docs provide usage examples, etc, etc. Docs can be put online thus ensuring they're always up-to-date. Tools should either read the data from the lib file or retrieve it from the web. Keeping separate local di files is simply insane. And really, have you never heard of Java? How about Pascal? Should I continue back in history to all the languages that implemented this feature decades ago? C/C++ is a huge PITA with their nonsense compilation model which we shouldn't have copied verbatim in D.The di files are mostly meant to be machine read (e.g. the compiler) and this belongs as part of the library file in order to provide ease of use and maintain the relationship between the binary code and it's interface. maintaining two sets of files that could easily get out of sync and *not* using the docs is way more insane.I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented. And then what about existing tools like IDEs and editors. E.g. autocomplete wouldn't work anymore.
May 04 2012
On Friday, 4 May 2012 at 19:21:02 UTC, Andrew Wiley wrote:On Fri, May 4, 2012 at 1:46 PM, foobar <foo bar.com> wrote:C++ has pre-compiled header files (.pch) which speed up compilation times for projects with lots'o'templates. The same kind of info could be stored inside the object files, for example by serializing the AST as you said yourself. There are many uses for this kind of technology. We can store additional info that currently isn't available for all sorts of link-time optimizations.On Friday, 4 May 2012 at 18:30:32 UTC, Andrej Mitrovic wrote:I like the idea, but what about templates? For them, you'd basically be stuffing source code into the object files (unless you came up with a way to store the AST, but that seems like the effort/benefit ratio wouldn't be worth it since we currently have no way to preserve an AST tree between compiler runs). Otherwise, I find this idea very compelling. I'm sure there are probably other issues, though.On 5/4/12, foobar <foo bar.com> wrote:I'd say you'd be wrong. Both di and docs are auto-generated from the same source. As I said docs are designed for human consumption. This includes all sorts of features such as a table of contents, a symbol index, the symbols should have links, the docs provide usage examples, etc, etc. Docs can be put online thus ensuring they're always up-to-date. Tools should either read the data from the lib file or retrieve it from the web. Keeping separate local di files is simply insane. And really, have you never heard of Java? How about Pascal? Should I continue back in history to all the languages that implemented this feature decades ago? C/C++ is a huge PITA with their nonsense compilation model which we shouldn't have copied verbatim in D.The di files are mostly meant to be machine read (e.g. the compiler) and this belongs as part of the library file in order to provide ease of use and maintain the relationship between the binary code and it's interface.
maintaining two sets of files that could easily get out of sync and *not* using the docs is way more insane.I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented. And then what about existing tools like IDEs and editors. E.g. autocomplete wouldn't work anymore.
May 04 2012
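[Editor's illustration] The serialize-the-AST idea above can be sketched in miniature: parse declarations once, store the pre-parsed form in the library, and let consumers skip parsing entirely. The "AST" below is deliberately just a declaration table and the "parser" is a one-liner per declaration; both are stand-ins invented for this sketch, nothing like a real D frontend.

```python
# Sketch: store a pre-parsed interface (a tiny "AST") instead of
# source text. Parser and AST shape are invented for illustration.
import json

def parse_decls(source):
    # Trivial "parser": one C-style declaration per non-empty line.
    decls = []
    for line in source.splitlines():
        line = line.strip()
        if line and not line.startswith("//"):
            ret, _, rest = line.partition(" ")
            name = rest.split("(")[0]
            decls.append({"return": ret, "name": name})
    return decls

def serialize(decls):
    return json.dumps(decls).encode("utf-8")   # what the library would carry

def deserialize(blob):
    return json.loads(blob.decode("utf-8"))    # consumer: no parsing needed

src = """
// comments and blank lines are 'trivia' and vanish in the stored form
int square(int x);
void log(string msg);
"""
ast = parse_decls(src)
blob = serialize(ast)
assert deserialize(blob) == ast
print([d["name"] for d in ast])
```

Note the trade-off visible even in this toy: the stored form drops comments and whitespace but is otherwise close to the source, which is the crux of the disagreement in the following replies about whether storing the AST buys much over shipping source.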
On Fri, 04 May 2012 14:56:56 -0400, Andrew Wiley <wiley.andrew.j gmail.com> wrote:I like the idea, but what about templates? For them, you'd basically be stuffing source code into the object filesNothing wrong with this. There is still a gain here -- an object file that carries the original template source is tightly coupled with that template. You can be sure that this object file will link against one that you build based on the template contained in it. -Steve
May 04 2012
On Friday 04 May 2012 08:56 PM, Andrew Wiley wrote:On Fri, May 4, 2012 at 1:46 PM, foobar <foo bar.com <mailto:foo bar.com>> wrote: On Friday, 4 May 2012 at 18:30:32 UTC, Andrej Mitrovic wrote: On 5/4/12, foobar <foo bar.com <mailto:foo bar.com>> wrote: The di files are mostly meant to be machine read (e.g. the compiler) and this belongs as part of the library file in order to provide ease of use and maintain the relationship between the binary code and it's interface. maintaining two sets of files that could easily get out of sync and *not* using the docs is way more insane. I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented. And then what about existing tools like IDEs and editors. E.g. autocomplete wouldn't work anymore. I'd say you'd be wrong. Both di and docs are auto-generated from the same source. As I said docs are designed for human consumption. This includes all sorts of features such as a table of contents, a symbol index, the symbols should have links, the docs provide usage examples, etc, etc. Docs can be put online thus ensuring they're always up-to-date. Tools should either read the data from the lib file or retrieve it from the web. Keeping separate local di files is simply insane. And really, have you never heard of Java? How about Pascal? Should I continue back in history to all the languages that implemented this feature decades ago? C/C++ is a huge PITA with their nonsense compilation model which we shouldn't have copied verbatim in D. I like the idea, but what about templates? For them, you'd basically be stuffing source code into the object files (unless you came up with a way to store the AST, but that seems like the effort/benefit ratio wouldn't be worth it since we currently have no way to preserve an AST tree between compiler runs). Otherwise, I find this idea very compelling. 
I'm sure there are probably other issues, though.Storing the AST would basically equal storing the source code except 'trivia' like white space and unneeded tokens. At that point, you may as well ship the source. -- - Alex
May 04 2012
AST/symbol table manipulation is way faster than reparsing code. People keep talking about D and Go compilation speed, while I was already enjoying such compile times back in 1990 with Turbo Pascal on computers much less powerful than my laptop. But C and C++, with their 70's compiler technology, somehow won the market share, and then people started complaining about compilation speeds. Adele Goldberg once wrote a paper describing how C made compiler technology regress several decades. -- Paulo

"Alex Rønne Petersen" wrote in message news:jo1s2b$2bie$1 digitalmars.com... [...] Storing the AST would basically equal storing the source code except 'trivia' like white space and unneeded tokens. At that point, you may as well ship the source. -- - Alex
May 06 2012
Paulo Pinto wrote: AST/symbol table manipulation is way faster than reparsing code. People keep talking about D and Go compilation speed, while I was already enjoying such compile times back in 1990 with Turbo Pascal on computers much less powerful than my laptop. But C and C++, with their 70's compiler technology, somehow won the market share, and then people started complaining about compilation speeds. Adele Goldberg once wrote a paper describing how C made compiler technology regress several decades.

Do you happen to remember the exact title of that paper? Thanks. Jens
May 07 2012
On Monday, 7 May 2012 at 07:26:44 UTC, Jens Mueller wrote: [...] Do you happen to remember the exact title of that paper?

I'll try to find it, as I don't recall the title. I just remember that it made some remarks about how primitive C was in regard to Algol toolchains.
May 07 2012
Paulo Pinto wrote: [...] I'll try to find it, as I don't recall the title. I just remember that it made some remarks about how primitive C was in regard to Algol toolchains.

Many thanks. I couldn't find it myself, and I'm interested because Fran Allen said something similar in Coders at Work. I didn't understand what she meant. Andrei suggested that it is mostly (only?) about overlapping pointers to memory. I'm just curious. Jens
May 07 2012
I think it was there where I read about it. I'll update you if I have any success; otherwise I need to retract my statement. :( -- Paulo

"Jens Mueller" wrote in message news:mailman.380.1336380192.24740.digitalmars-d puremagic.com... [...] Many thanks. I couldn't find it myself, and I'm interested because Fran Allen said something similar in Coders at Work. I didn't understand what she meant. Andrei suggested that it is mostly (only?) about overlapping pointers to memory. I'm just curious. Jens
May 07 2012
Hi, it seems I have to excuse myself. I could not find anything from Adele Goldberg, so my statement is false. Most likely I ended up confusing Fran Allen's interview in Coders at Work with some nonsense in my head. Still, I leave here a few links I managed to find from Fran Allen.

Some remarks about bad languages on page 27: http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf

Complaint about C on slide 23: http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf

Another remark about C: http://www.windley.com/archives/2008/02/fran_allen_compilers_and_parallel_computing_systems.shtml

A video recorded at Purdue University; she also talks about C at minute 51: http://www.youtube.com/watch?v=Si3ZW3nI6oA

-- Paulo

Am 07.05.2012 10:41, schrieb Jens Mueller: [...] Many thanks. I couldn't find it myself, and I'm interested because Fran Allen said something similar in Coders at Work. [...]
May 07 2012
Interesting reading. I took a look at page 23, and didn't find the mention of C. Maybe I didn't read carefully?

On 5/8/2012 3:34 AM, Paulo Pinto wrote: Hi, it seems I have to excuse myself. I could not find anything from Adele Goldberg, so my statement is false. [...] Complaint about C on slide 23: http://www-03.ibm.com/ibm/history/witexhibit/pdf/allen_history.pdf [...]
May 07 2012
Oops, copy/paste error. :( I'll check it when I get back home. -- Paulo

"Andre Tampubolon" wrote in message news:joa0lq$1t2k$1 digitalmars.com... Interesting reading. I took a look at page 23, and didn't find the mention of C. Maybe I didn't read carefully? [...]
May 07 2012
The correct link should have been http://uhaweb.hartford.edu/ccscne/Allen.pdf

Am 08.05.2012 04:33, schrieb Andre Tampubolon: Interesting reading. I took a look at page 23, and didn't find the mention of C. Maybe I didn't read carefully? [...]
May 08 2012
On 2012-05-04 20:30, Andrej Mitrovic wrote:I'd say the docs are more likely to be out of sync than .di code. If the .di code is really out of sync you'll likely even get linker errors. And not everything ends up being documented.Then you need to manage your docs better.And then what about existing tools like IDEs and editors. E.g. autocomplete wouldn't work anymore.They would need to be able to read the library and extract the .di files. Isn't this basically just how Java works? -- /Jacob Carlborg
May 04 2012
On Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:

On 5/4/12, foobar <foo bar.com> wrote: How about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible with optlink, e.g. by adding special sections that would be otherwise ignored?

How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.

Ever heard of Java? -Steve
May 04 2012
On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:Ever heard of Java?Ever heard of not requiring a bring-your-quadcore-to-its-knees IDE?
May 04 2012
On Fri, 04 May 2012 14:31:24 -0400, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:This is a totally false comparison :) Java's storage of its interface in its object files has nothing to do with its IDE's performance. What I'm saying is, it's completely possible to store the API in binary format *in* the object files, and use documentation generators to document the API. You do not have to read the interface files to understand the API, and Java is a good example of a language that successfully does that. -SteveEver heard of Java?Ever heard of not requiring a bring-your-quadcore-to-its-knees IDE?
May 04 2012
On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:What I'm saying is, it's completely possible to store the API in binary format *in* the object files, and use documentation generators to documentYes but then you need to *modify* existing tools in order to add a new feature that extracts information from object files. Either that, or you'd have to somehow extract the .di files back from the object files. How else can you see the interface in your text editor without the source files? :)
May 04 2012
On Fri, 04 May 2012 14:48:04 -0400, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:Current tools: read .di files and extract API new tools: read .dobj files and extract API. I'm not really seeing the difficulty here... -SteveWhat I'm saying is, it's completely possible to store the API in binary format *in* the object files, and use documentation generators to documentYes but then you need to *modify* existing tools in order to add a new feature that extracts information from object files. Either that, or you'd have to somehow extract the .di files back from the object files. How else can you see the interface in your text editor without the source files? :)
May 04 2012
On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:Current tools: read .di files and extract API new tools: read .dobj files and extract API. I'm not really seeing the difficulty here...I thought he meant libraries that are only distributed in binary form. So no .di files anywhere. Maybe I misunderstood..
May 04 2012
On Fri, 04 May 2012 15:07:43 -0400, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:No reason for .di files if the object file already serves as the interface file. I think he meant that object (and library) binary files would be augmented by API segments that provide what di files provide now -- an interface-only version of the code. It doesn't have to be text, it can be binary (maybe even partially compiled). The really nice thing you get from this is, the compiler now would use this object file instead of .d files for importing. So not only do you eliminate errors from having two possibly separately maintained files, but the compiler can build *extra* details into the .dobj file. For example, it could put in metadata that would allow for full escape analysis. Or tag that a function is implied pure (without actually having to tag the function with the pure attribute). -SteveCurrent tools: read .di files and extract API new tools: read .dobj files and extract API. I'm not really seeing the difficulty here...I thought he meant libraries that are only distributed in binary form. So no .di files anywhere. Maybe I misunderstood..
May 04 2012
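For context on the .di interface files being discussed: dmd can already generate them from a module with the -H switch. A minimal sketch of the relationship (module and symbol names are illustrative, not from the thread):

```d
// mylib.d -- compiling with `dmd -H -c mylib.d` emits mylib.di,
// the separate interface file discussed in this thread. For this
// module the generated interface boils down to the declaration:
//
//     int square(int x) pure;
//
// (in practice dmd may retain small bodies in the .di to enable
// inlining). The proposal above is to embed this interface -- in
// a richer, possibly binary form -- in the object file itself, so
// no separate .di file has to be shipped or kept in sync.
module mylib;

int square(int x) pure
{
    return x * x;
}
```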
On Friday, 4 May 2012 at 19:13:21 UTC, Steven Schveighoffer wrote: [...] The really nice thing you get from this is, the compiler now would use this object file instead of .d files for importing. So not only do you eliminate errors from having two possibly separately maintained files, but the compiler can build *extra* details into the .dobj file. [...]

Exactly :)
May 04 2012
On Fri, May 04, 2012 at 03:13:21PM -0400, Steven Schveighoffer wrote: [...] I think he meant that object (and library) binary files would be augmented by API segments that provide what di files provide now -- an interface-only version of the code. It doesn't have to be text; it can be binary (maybe even partially compiled). The really nice thing you get from this is, the compiler now would use this object file instead of .d files for importing. So not only do you eliminate errors from having two possibly separately maintained files, but the compiler can build *extra* details into the .dobj file. For example, it could put in metadata that would allow for full escape analysis. Or tag that a function is implied pure (without actually having to tag the function with the pure attribute). [...]

+1. It's about time we moved on from 30+ year old outdated linker technology to something more powerful. Full escape analysis, compiler-deduced function attributes like pureness, all the stuff that's impractical to implement in the current system, can all be done in a reasonable way if we stuck this information into the object files. The linker doesn't have to care what's in those extra sections; the compiler reads the info and does what it needs to do. The linker can omit the extra info from the final executable. (Or make use of it, if we implement a smarter linker. Like do cross-module string optimization, or something.) T -- Кто везде - тот нигде. ("He who is everywhere is nowhere.")
May 04 2012
On Friday 04 May 2012 11:17 PM, H. S. Teoh wrote:On Fri, May 04, 2012 at 03:13:21PM -0400, Steven Schveighoffer wrote: [...]Purity inference won't happen either way. Purity is part of your API and also meant to help you reason about your code. If the compiler just infers purity in a function and you later change the implementation so it's no longer pure, you break your users' code. Also, purity would no longer help you reason about your code if it's not explicit. -- - AlexI think he meant that object (and library) binary files would be augmented by API segments that provide what di files provide now -- an interface-only version of the code. It doesn't have to be text, it can be binary (maybe even partially compiled). The really nice thing you get from this is, the compiler now would use this object file instead of .d files for importing. So not only do you eliminate errors from having two possibly separately maintained files, but the compiler can build *extra* details into the .dobj file. For example, it could put in metadata that would allow for full escape analysis. Or tag that a function is implied pure (without actually having to tag the function with the pure attribute).[...] +1. It's about time we moved on from 30+ year old outdated linker technology, to something more powerful. Full escape analysis, compiler deduced function attributes like pureness, all the stuff that's impractical to implement in the current system, can all be done in a reasonable way if we stuck this information into the object files. The linker doesn't have to care what's in those extra sections; the compiler reads the info and does what it needs to do. The linker can omit the extra info from the final executable. (Or make use of it, if we implement a smarter linker. Like do cross-module string optimization, or something.) T
May 04 2012
On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote: Purity inference won't happen either way. Purity is part of your API and also meant to help you reason about your code. If the compiler just infers purity in a function and you later change the implementation so it's no longer pure, you break your users' code. Also, purity would no longer help you reason about your code if it's not explicit.

It can be pure for the purposes of optimization without affecting code whatsoever. Inferred purity can be marked separately from explicit purity, and explicitly pure functions would not be allowed to call implicitly pure functions. -Steve
May 07 2012
On 07-05-2012 13:21, Steven Schveighoffer wrote:On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:But that kind of inferred purity is something a compiler back end cares about, not something the language should have to care about at all. In practice, most compilers *do* analyze all functions for possible side-effects and use that information where applicable. (Note that what you described thus is the current situation, just that inferred purity is not part of the language (no reason it has to be).) -- - AlexPurity inference won't happen either way. Purity is part of your API and also meant to help you reason about your code. If the compiler just infers purity in a function and you later change the implementation so it's no longer pure, you break your users' code. Also, purity would no longer help you reason about your code if it's not explicit.It can be pure for the purposes of optimization without affecting code whatsoever. Inferred purity can be marked separately from explicit purity, and explicitly pure functions would not be allowed to call implicitly pure functions. -Steve
May 07 2012
On Mon, 07 May 2012 07:41:43 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote: [...] But that kind of inferred purity is something a compiler back end cares about, not something the language should have to care about at all. In practice, most compilers *do* analyze all functions for possible side-effects and use that information where applicable.

It affects how the caller's code will be generated. If I have a function int foo(int x); and I have another function which calls foo like: int y = foo(x) + foo(x); then the optimization is applied to whatever function this exists in. If the source isn't available for foo, the compiler cannot make this optimization. I have no idea if this is a back-end or front-end issue. I'm not a compiler writer. But I do understand that the compiler needs extra information in the signature to determine if it can make this optimization. -Steve
May 07 2012
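The common-subexpression optimization Steven describes can be seen in D today when purity is declared explicitly; a minimal sketch (function name and body are illustrative):

```d
import std.stdio;

// foo is marked pure: no side effects, result depends only on
// its argument. That fact is part of the signature, so a caller
// compiled against only the declaration can still exploit it.
int foo(int x) pure
{
    return x * x + 1;
}

void main()
{
    int x = 3;
    // Because foo is pure, a compiler may legally evaluate this
    // as a single call to foo, doubled. If purity were merely
    // inferred from a body the caller's compiler cannot see, this
    // rewrite would not be possible -- which is exactly the
    // information the proposed object-file interface would carry.
    int y = foo(x) + foo(x);
    writeln(y); // 20
}
```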
On 07-05-2012 14:50, Steven Schveighoffer wrote:On Mon, 07 May 2012 07:41:43 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:OK, point taken; didn't consider that. But in the first place, for inference of purity to work, the source would have to be available. Then, that inferred property has to be propagated somehow so that the compiler can make use of it when linking to the code as a library... -- - AlexOn 07-05-2012 13:21, Steven Schveighoffer wrote:It affects how callers code will be generated. If I have a function int foo(int x); and I have another function which calls foo like: int y = foo(x) + foo(x); Then the optimization is applied to whatever function this exists in. If the source isn't available for foo, the compiler cannot make this optimization. I have no idea if this is a back-end or front-end issue. I'm not a compiler writer. But I do understand that the compiler needs extra information in the signature to determine if it can make this optimization. -SteveOn Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:But that kind of inferred purity is something a compiler back end cares about, not something the language should have to care about at all. In practice, most compilers *do* analyze all functions for possible side-effects and use that information where applicable.Purity inference won't happen either way. Purity is part of your API and also meant to help you reason about your code. If the compiler just infers purity in a function and you later change the implementation so it's no longer pure, you break your users' code. Also, purity would no longer help you reason about your code if it's not explicit.It can be pure for the purposes of optimization without affecting code whatsoever. Inferred purity can be marked separately from explicit purity, and explicitly pure functions would not be allowed to call implicitly pure functions. -Steve
May 07 2012
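As a side note on "the source would have to be available": D already does attribute inference in precisely that situation, for template functions, whose source is necessarily visible at each use site. A small sketch (names are illustrative):

```d
// twice carries no explicit `pure` attribute, yet the compiler
// infers purity (likewise nothrow, @safe, etc.) for template
// functions because their bodies are available when they are
// instantiated -- the case Alex describes.
T twice(T)(T x)
{
    return x + x;
}

// An explicitly pure function may therefore call it:
int caller(int x) pure
{
    return twice(x); // OK: twice!int is inferred pure
}
```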
On Mon, 07 May 2012 09:27:32 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote: [...] OK, point taken; didn't consider that. But in the first place, for inference of purity to work, the source would have to be available. Then, that inferred property has to be propagated somehow so that the compiler can make use of it when linking to the code as a library...

That's exactly what storing the interface in the object file does. You don't need the source because the object file contains the compiler's interpretation of the source, and any inferred properties it has discovered. -Steve
May 07 2012
On Mon, May 7, 2012 at 8:42 AM, Steven Schveighoffer <schveiguy yahoo.com> wrote: [...] That's exactly what storing the interface in the object file does. You don't need the source because the object file contains the compiler's interpretation of the source, and any inferred properties it has discovered.

Putting inferred purity into an object file sounds like a bad idea. It's not hard to imagine this scenario:

- function foo in libSomething is inferred as pure (but not declared pure by the author)
- exeSomethingElse is compiled to use libSomething, and the compiler takes advantage of purity optimizations when calling foo
- libSomething is recompiled and foo is no longer pure, and exeSomethingElse silently breaks

Purity inference is fine for templates (because recompiling the library won't change the generated template code in an executable that depends on it), but in all other cases, the API needs to be exactly what the author declared it to be, or strange things will happen.
May 07 2012
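The caller-side optimization Steve describes can be sketched in D (illustrative names; this only shows the language-level rule that purity must be visible in the signature — whether a given compiler actually folds the calls is an optimizer detail):

```d
// Purity as part of the signature: when only a declaration is visible,
// the compiler may fold repeated calls only if 'pure' is declared.
pure int bar(int x) { return x * x; } // explicitly pure
int foo(int x) { return x * x; }      // purity only discoverable from the body

int twiceBar(int x)
{
    // bar is declared pure, so this may legally be compiled as:
    //   auto t = bar(x); return t + t;
    return bar(x) + bar(x);
}

int twiceFoo(int x)
{
    // If foo's body is unavailable (e.g. only a .di declaration exists),
    // nothing in the signature permits eliding the second call.
    return foo(x) + foo(x);
}

void main()
{
    assert(twiceBar(3) == 18);
    assert(twiceFoo(3) == 18);
}
```

The result is the same either way; only the number of calls the compiler must emit differs, which is exactly why the inferred attribute has to travel with the interface.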
On Mon, 07 May 2012 12:59:24 -0400, Andrew Wiley <wiley.andrew.j gmail.com> wrote:On Mon, May 7, 2012 at 8:42 AM, Steven Schveighoffer <schveiguy yahoo.com> wrote:On Mon, 07 May 2012 09:27:32 -0400, Alex Rønne Petersen < That's exactly what storing the interface in the object file does. You don't need the source because the object file contains the compiler's interpretation of the source, and any inferred properties it has discovered.Putting inferred purity into an object file sounds like a bad idea. It's not hard to imagine this scenario: -function foo in libSomething is inferred as pure (but not declared pure by the author) -exeSomethingElse is compiled to use libSomething, and the compiler takes advantage of purity optimizations when calling foo -libSomething is recompiled and foo is no longer pure, and exeSomethingElse silently breaksno, it just doesn't link.Purity inference is fine for templates (because recompiling the library won't change the generated template code in an executable that depends on it), but in all other cases, the API needs to be exactly what the author declared it to be, or strange things will happen.I agree that's the case with the current object/linker model. Something that puts inferred properties into the object file needs a new model, one which does not blindly link code that wasn't compiled from the same sources. -Steve
May 07 2012
On Mon, May 7, 2012 at 12:21 PM, Steven Schveighoffer <schveiguy yahoo.com> wrote:On Mon, 07 May 2012 12:59:24 -0400, Andrew Wiley <wiley.andrew.j gmail.com> wrote: On Mon, May 7, 2012 at 8:42 AM, Steven Schveighoffer <schveiguy yahoo.com> wrote:On Mon, 07 May 2012 09:27:32 -0400, Alex Rønne Petersen < That's exactly what storing the interface in the object file does. You don't need the source because the object file contains the compiler's interpretation of the source, and any inferred properties it has discovered.Putting inferred purity into an object file sounds like a bad idea. It's not hard to imagine this scenario: -function foo in libSomething is inferred as pure (but not declared pure by the author) -exeSomethingElse is compiled to use libSomething, and the compiler takes advantage of purity optimizations when calling foo -libSomething is recompiled and foo is no longer pure, and exeSomethingElse silently breaksno, it just doesn't link. Purity inference is fine for templates (because recompiling the library won't change the generated template code in an executable that depends on it), but in all other cases, the API needs to be exactly what the author declared it to be, or strange things will happen.I agree that's the case with the current object/linker model. Something that puts inferred properties into the object file needs a new model, one which does not blindly link code that wasn't compiled from the same sources.Then all you've done is to make attributes the author can't control part of the API, which will force library users to recompile their code more often for non-obvious reasons. Avoiding that is one of the points of shared libraries. I think we're actually talking about different contexts. I'm speaking in the context of shared libraries, where I think the API needs to be exactly what the author requests and nothing more.
With object files, static libraries, and static linking, I agree that this sort of thing could work and wouldn't cause the same problems because it's impossible to swap the library code without recompiling/relinking the entire program.
May 07 2012
On Mon, 07 May 2012 13:34:49 -0400, Andrew Wiley <wiley.andrew.j gmail.com> wrote:On Mon, May 7, 2012 at 12:21 PM, Steven Schveighoffer <schveiguy yahoo.com>wrote:Shared library entry points have to have *no* inference. Otherwise you could inadvertently change the public API without explicitly tagging it. I believe in D, shared library entry points have to be tagged with export. Not to mention, shared libraries on a certain platform usually have to be linked by the platform's linker. So we can't exactly overtake that aspect with a new model.I agree that's the case with the current object/linker model. Something that puts inferred properties into the object file needs a new model, one which does not blindly link code that wasn't compiled from the same sources.Then all you've done is to make attributes the author can't control part of the API, which will force library users to recompile their code more often for non-obvious reasons. Avoiding that is one of the points of shared libraries.I think we're actually talking about different contexts. I'm speaking in the context of shared libraries, where I think the API needs to be exactly what the author requests and nothing more. With object files, static libraries, and static linking, I agree that this sort of thing could work and wouldn't cause the same problems because it's impossible to swap the library code without recompiling/relinking the entire program.OK, that makes sense, I think you are right, we were talking about two different pieces of the model. -Steve
May 07 2012
On 05/07/12 13:21, Steven Schveighoffer wrote:On Fri, 04 May 2012 20:30:05 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:In WPO mode - it doesn't matter - it's just another internal compiler optimization. Otherwise in general it can't be done - a change to the function definition would change its signature - which means that all callers need to be recompiled. So at best only the intra-module calls can be affected, when the compiler knows that the caller will always be generated together with the callee. And the latter has to be assumed impure if it's not private, for the same reasons. arturPurity inference won't happen either way. Purity is part of your API and also meant to help you reason about your code. If the compiler just infers purity in a function and you later change the implementation so it's no longer pure, you break your users' code. Also, purity would no longer help you reason about your code if it's not explicit.It can be pure for the purposes of optimization without affecting code whatsoever. Inferred purity can be marked separately from explicit purity, and explicitly pure functions would not be allowed to call implicitly pure functions.
May 07 2012
I also don't see the issue. This is already a long tradition in the languages that don't have to carry C linker baggage. - Turbo Pascal 4.0, 1987 - Oberon 1.0, 1986 So I also don't see why a 2012 language can't have a similar mechanism. -- Paulo "Andrej Mitrovic" wrote in message news:mailman.324.1336158548.24740.digitalmars-d puremagic.com... On 5/4/12, Steven Schveighoffer <schveiguy yahoo.com> wrote:Current tools: read .di files and extract API new tools: read .dobj files and extract API. I'm not really seeing the difficulty here...I thought he meant libraries that are only distributed in binary form. So no .di files anywhere. Maybe I misunderstood..
May 06 2012
Am 04.05.2012 20:26, schrieb Steven Schveighoffer:On Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:Ever heard of Turbo Pascal (and Delphi)? It has had this feature since Turbo Pascal 4, around 1987, and Turbo Pascal and Delphi are extremely fast native compilers without any Java/.NET magicOn 5/4/12, foobar<foo bar.com> wrote:Ever heard of Java? -SteveHow about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 05 2012
Am 05.05.2012 09:06, schrieb dennis luehring:Am 04.05.2012 20:26, schrieb Steven Schveighoffer:A more up-to-date example can be seen using the Free Pascal compiler and its ppudump tool: http://www.freepascal.org/tools/ppudump.var and Turbo Pascal has had, ever since 1987, a very good package system like a Java JAR file - you can just integrate compiled Pascal sources (.pas -> .tpu) into something called a .tpl file (Turbo Pascal library); the Free Pascal compiler has something similar called .ppl. These "technologies" are damn good and were invented long ago - but are sometimes totally unknown to all the obj-file-linker guysOn Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:ever heard about Turbo Pascal (and delphi) got this feature since turbo pascal 4 around 1987 and turbo pascal and delphi are extremely fast native compilers without any Java, .Net magicOn 5/4/12, foobar<foo bar.com> wrote:Ever heard of Java? -SteveHow about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 05 2012
I really really think that mankind took a wrong turn when C won over Pascal in the 80's. And that Wirth somehow lost interest in the industry and did not try to push Modula-* or Oberon. There are some papers where he states this. Now we suffer from - dangling pointers - buffer overflows - pre-historic compiler toolchains -- Paulo "dennis luehring" wrote in message news:jo2kb8$htd$1 digitalmars.com... Am 05.05.2012 09:06, schrieb dennis luehring:Am 04.05.2012 20:26, schrieb Steven Schveighoffer:a more up-to-date example can be seen using the freepascal compiler and its ppudump tool: http://www.freepascal.org/tools/ppudump.var and turbo pascal has had, ever since 1987, a very good package system like a Java JAR file - you can just integrate compiled pascal sources (.pas -> .tpu) into something called a .tpl file (turbo pascal library); the freepascal compiler has something similar called .ppl these "technologies" are damn good and were invented long ago - but are sometimes totally unknown to all the obj-file-linker guysOn Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:ever heard about Turbo Pascal (and delphi) got this feature since turbo pascal 4 around 1987 and turbo pascal and delphi are extremely fast native compilers without any Java, .Net magicOn 5/4/12, foobar<foo bar.com> wrote:Ever heard of Java? -SteveHow about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 06 2012
Am 07.05.2012 07:53, schrieb Paulo Pinto:I really really think that mankind took a wrong turn when C won over Pascal in the 80's. And that Wirth somehow lost interest in the industry and did not try to push Modula-* or Oberon. There are some papers where he states this. Now we suffer from - dangling pointers - buffer overflows - pre-historic compiler toolchainswe should collect all the advantages of the Turbo Pascal/Delphi object file formats and make a small description post to show others, in a clearly understandable way, how good and long-lived these techniques are: the unit system (Turbo Pascal: .pas -> .tpu, Delphi: .pas -> .dcu, Free Pascal: .pas -> .ppu), the tpumover/ppumover for .tpl or .ppl libraries, the Delphi DLL solution .bpl, and the advantage of controlling the output kind from inside the source (program -> exe, unit -> object, library -> dynamic library), etc. Any ideas how to start?
May 07 2012
I like the idea, need to check what information I could provide. Wirth's books about Oberon also provide similar information. -- Paulo "dennis luehring" wrote in message news:jo85t1$1n9b$1 digitalmars.com... Am 07.05.2012 07:53, schrieb Paulo Pinto:I really really think that mankind took a wrong turn when C won over Pascal in the 80's. And that Wirth somehow lost interest in the industry and did not try to push Modula-* or Oberon. There are some papers where he states this. Now we suffer from - dangling pointers - buffer overflows - pre-historic compiler toolchainswe should collect all the advantages of the Turbo Pascal/Delphi object file formats and make a small description post to show others, in a clearly understandable way, how good and long-lived these techniques are: the unit system (Turbo Pascal: .pas -> .tpu, Delphi: .pas -> .dcu, Free Pascal: .pas -> .ppu), the tpumover/ppumover for .tpl or .ppl libraries, the Delphi DLL solution .bpl, and the advantage of controlling the output kind from inside the source (program -> exe, unit -> object, library -> dynamic library), etc. Any ideas how to start?
May 07 2012
Am 07.05.2012 15:27, schrieb Paulo Pinto:I like the idea, need to check what information I could provide. Wirth's books about Oberon also provide similar information. -- Paulo "dennis luehring" wrote in message news:jo85t1$1n9b$1 digitalmars.com... Am 07.05.2012 07:53, schrieb Paulo Pinto:Description of the Free Pascal unit format http://www.freepascal.org/docs-html/prog/progap1.html#progse67.html How the dump command works http://www.freepascal.org/tools/ppudump.htm The source code of the ppudump utility http://svn.freepascal.org/cgi-bin/viewvc.cgi/trunk/compiler/utils/ppudump.pp?view=markup -- PauloI really really think that mankind took a wrong turn when C won over Pascal in the 80's. And that Wirth somehow lost interest in the industry and did not try to push Modula-* or Oberon. There are some papers where he states this. Now we suffer from - dangling pointers - buffer overflows - pre-historic compiler toolchainswe should collect all the advantages of the Turbo Pascal/Delphi object file formats and make a small description post to show others, in a clearly understandable way, how good and long-lived these techniques are: the unit system (Turbo Pascal: .pas -> .tpu, Delphi: .pas -> .dcu, Free Pascal: .pas -> .ppu), the tpumover/ppumover for .tpl or .ppl libraries, the Delphi DLL solution .bpl, and the advantage of controlling the output kind from inside the source (program -> exe, unit -> object, library -> dynamic library), etc. Any ideas how to start?
May 07 2012
On Sat, 05 May 2012 03:06:52 -0400, dennis luehring <dl.soluz gmx.net> wrote:Am 04.05.2012 20:26, schrieb Steven Schveighoffer:Honestly? No. I've heard of those languages, I don't know anyone who uses them, and I've never used them. I don't mean this as a slight or rebuttal. Java is just more recognizable. Using either language (Java or TurboPascal) is still a good way to prove the point that it is possible and works well. -SteveOn Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:ever heard about Turbo Pascal (and delphi) got this feature since turbo pascal 4 around 1987On 5/4/12, foobar<foo bar.com> wrote:Ever heard of Java? -SteveHow about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 07 2012
This just confirms what I saw yesterday on a presentation. Many developers re-invent the wheel, or jump to the fad technology of the year, because they don't have the knowledge of old already proven technologies, that for whatever reason, are no longer common. We need better ways to preserve knowledge in our industry. -- Paulo "Steven Schveighoffer" wrote in message news:op.wdxra01ceav7ka steves-laptop... On Sat, 05 May 2012 03:06:52 -0400, dennis luehring <dl.soluz gmx.net> wrote:Am 04.05.2012 20:26, schrieb Steven Schveighoffer:Honestly? No. I've heard of those languages, I don't know anyone who uses them, and I've never used them. I don't mean this as a slight or rebuttal. Java is just more recognizable. Using either language (Java or TurboPascal) is still a good way to prove the point that it is possible and works well. -SteveOn Fri, 04 May 2012 13:54:38 -0400, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:ever heard about Turbo Pascal (and delphi) got this feature since turbo pascal 4 around 1987On 5/4/12, foobar<foo bar.com> wrote:Ever heard of Java? -SteveHow about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 07 2012
On Mon, 07 May 2012 09:22:05 -0400, Paulo Pinto <pjmlp progtools.org> wrote:This just confirms what I saw yesterday on a presentation. Many developers re-invent the wheel, or jump to the fad technology of the year, because they don't have the knowledge of old already proven technologies, that for whatever reason, are no longer common. We need better ways to preserve knowledge in our industry.Again, don't take offense. I never suggested Java's use of an already existing technology was in some way a "new" thing, just that it proves it can work. I'm sure back in the day, TurboPascal had to walk uphill through the snow to school both ways too. :) -Steve
May 07 2012
Am 07.05.2012 15:30, schrieb Steven Schveighoffer:On Mon, 07 May 2012 09:22:05 -0400, Paulo Pinto <pjmlp progtools.org> wrote:No offense taken. My reply was just a small rant, based on your answer about the lack of contact with Turbo Pascal and the other languages I mentioned. Yesterday I watched a presentation where the speaker complains about knowledge being lost due to the lack of proper mentors in the industry, http://www.infoq.com/presentations/The-Frustrated-Architect I have spent a huge amount of time in the university learning about compiler development, reading old books and papers from the early computing days. So in a general way, and not directed at you now, it saddens me that a great part of that knowledge is lost to most youth nowadays. Developers get amazed with JavaScript JIT compilation, and yet it already existed in Smalltalk systems. Go advertises fast compilation speeds, and they were already available in some language systems in the late 70's and early 80's. We are discussing storing module interfaces directly in the library files, and most seem to have never heard of it. And the list goes on. Sometimes I wonder what students learn in modern CS courses. -- PauloThis just confirms what I saw yesterday on a presentation. Many developers re-invent the wheel, or jump to the fad technology of the year, because they don't have the knowledge of old already proven technologies, that for whatever reason, are no longer common. We need better ways to preserve knowledge in our industry.Again, don't take offense. I never suggested Java's use of an already existing technology was in some way a "new" thing, just that it proves it can work. I'm sure back in the day, TurboPascal had to walk uphill through the snow to school both ways too. :) -Steve
May 07 2012
On Mon, May 07, 2012 at 07:21:54PM +0200, Paulo Pinto wrote: [...]I have spent a huge time in the university learning about compiler development, reading old books and papers from the early computing days. So in a general way, and not directed to you now, I saddens me that a great part of that knowledge is lost to most youth nowadays. Developers get amazed with JavaScript JIT compilation, and yet it already existed in Smalltalk systems. Go advertises fast compilation speeds, and they were already available to some language systems in the late 70's, early 80's. We are discussing storing module interfaces directly in the library files, and most seem to never heard of it. And the list goes on. Sometimes I wonder what do students learn in modern CS courses.[...] Way too much theory and almost no practical applications. At least, that was my experience when I was in college. It gets worse the more prestigious the college is, apparently. I'm glad I spent much of my free time working on my own projects, and doing _real_ coding, like actually use C/C++ outside of the trivial assignments they hand out in class. About 90% of what I do at my job is what I learned during those free-time projects. Only 10% or maybe even less is what I got from CS courses. T -- The two rules of success: 1. Don't tell everything you know. -- YHL
May 07 2012
On 2012-05-07 20:13, H. S. Teoh wrote:On Mon, May 07, 2012 at 07:21:54PM +0200, Paulo Pinto wrote:So true, so true. I feel exactly the same. -- /Jacob CarlborgSometimes I wonder what do students learn in modern CS courses.[...] Way too much theory and almost no practical applications. At least, that was my experience when I was in college. It gets worse the more prestigious the college is, apparently. I'm glad I spent much of my free time working on my own projects, and doing _real_ coding, like actually use C/C++ outside of the trivial assignments they hand out in class. About 90% of what I do at my job is what I learned during those free-time projects. Only 10% or maybe even less is what I got from CS courses.
May 07 2012
Am 04.05.2012 19:54, schrieb Andrej Mitrovic:On 5/4/12, foobar<foo bar.com> wrote:Delphi has done this for ages!How about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 05 2012
On 2012-05-04 18:57, foobar wrote:How about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?That would be nice. I guess that would mean that the compiler needs to be changed as well to be able to read the .di files from the library. -- /Jacob Carlborg
May 04 2012
On Fri, May 04, 2012 at 07:54:38PM +0200, Andrej Mitrovic wrote:On 5/4/12, foobar <foo bar.com> wrote:Exactly. And while we're at it, *really* strip unnecessary stuff from .di files, like function bodies, template bodies, etc.. That stuff is required by the compiler, not the user, so stick that in the object files and let the compiler deal with it. The .di file should be ONLY what's needed for the user to understand how to use the library. T -- There are three kinds of people in the world: those who can count, and those who can't.How about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 04 2012
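The stripping Teoh proposes can be pictured with a hypothetical module (names invented for illustration; today dmd's -H switch generates .di files, but it may keep bodies that the compiler still needs, e.g. for templates and inlining):

```d
// square.d -- the full implementation; under the proposal this body
// would live only in the object file, not in the distributed interface.
int square(int x)
{
    return x * x;
}

// The matching user-facing .di would carry nothing but the signature:
//
//     int square(int x);
//
// which is all a library user needs to call the function.

void main()
{
    assert(square(7) == 49);
}
```

The point of contention in the thread is exactly which of these two views the .di file should serve: the user (signatures only) or the compiler (signatures plus whatever it needs to optimize and instantiate).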
On Fri, 04 May 2012 14:12:16 -0700, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:On Fri, May 04, 2012 at 07:54:38PM +0200, Andrej Mitrovic wrote:I've written code to do this, but apparently it breaks Phobos in the autotester. I can't get it to break Phobos on my local machine so I'm at a loss as how to fix it. Maybe you can help? The code is here: https://github.com/LightBender/dmd.git -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/On 5/4/12, foobar <foo bar.com> wrote:Exactly. And while we're at it, *really* strip unnecessary stuff from .di files, like function bodies, template bodies, etc.. That stuff is required by the compiler, not the user, so stick that in the object files and let the compiler deal with it. The .di file should be ONLY what's needed for the user to understand how to use the library. THow about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible optlink by e.g. adding special sections that would be otherwise ignored?How would you use a library you don't even have the interface to? I mean if you can't even look at the API in your editor.. that'd be insane.
May 04 2012
On Fri, May 04, 2012 at 02:39:00PM -0700, Adam Wilson wrote:On Fri, 04 May 2012 14:12:16 -0700, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:[...][...]Exactly. And while we're at it, *really* strip unnecessary stuff from .di files, like function bodies, template bodies, etc.. That stuff is required by the compiler, not the user, so stick that in the object files and let the compiler deal with it. The .di file should be ONLY what's needed for the user to understand how to use the library.I've written code to do this, but apparently it breaks Phobos in the autotester. I can't get it to break Phobos on my local machine so I'm at a loss as how to fix it. Maybe you can help? The code is here: https://github.com/LightBender/dmd.git[...] Sorry for taking so long to respond, been busy. Got some time this morning; I cloned your repo and built dmd, then rebuilt druntime and phobos, and got this error from phobos: ../druntime/import/core/sys/posix/sys/select.di(25): function declaration without return type. (Note that constructors are always named 'this') ../druntime/import/core/sys/posix/sys/select.di(25): no identifier for declarator __FDELT(int d) ../druntime/import/core/sys/posix/sys/select.di(27): function declaration without return type. (Note that constructors are always named 'this') ../druntime/import/core/sys/posix/sys/select.di(27): no identifier for declarator __FDMASK(int d) make[1]: *** [generated/linux/release/32/libphobos2.a] Error 1 make: *** [release] Error 2 Looks like the bug only triggers when you rebuild druntime before rebuilding phobos. Hope this helps. Let me know if you want me to test anything else. T -- Freedom: (n.) Man's self-given right to be enslaved by his own depravity.
May 05 2012
On Sat, May 05, 2012 at 09:51:40AM -0700, H. S. Teoh wrote:On Fri, May 04, 2012 at 02:39:00PM -0700, Adam Wilson wrote:[...] Oh, and here's the snippet from the offending file (core/sys/posix/sys/select.di): ------SNIP------ private { alias c_long __fd_mask; enum uint __NFDBITS = 8 * __fd_mask.sizeof; extern (D) auto __FDELT(int d); // this is line 25 extern (D) auto __FDMASK(int d); // this is line 27 } ------SNIP------ Looks like the problem is caused by the auto, perhaps? T -- Lottery: tax on the stupid. -- SlashdotterOn Fri, 04 May 2012 14:12:16 -0700, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:[...][...]Exactly. And while we're at it, *really* strip unnecessary stuff from .di files, like function bodies, template bodies, etc.. That stuff is required by the compiler, not the user, so stick that in the object files and let the compiler deal with it. The .di file should be ONLY what's needed for the user to understand how to use the library.I've written code to do this, but apparently it breaks Phobos in the autotester. I can't get it to break Phobos on my local machine so I'm at a loss as how to fix it. Maybe you can help? The code is here: https://github.com/LightBender/dmd.git[...] Sorry for taking so long to respond, been busy. Got some time this morning to cloned your repo and built dmd, then rebuilt druntime and phobos, and got this error from phobos: ../druntime/import/core/sys/posix/sys/select.di(25): function declaration without return type. (Note that constructors are always named 'this') ../druntime/import/core/sys/posix/sys/select.di(25): no identifier for declarator __FDELT(int d) ../druntime/import/core/sys/posix/sys/select.di(27): function declaration without return type. (Note that constructors are always named 'this') ../druntime/import/core/sys/posix/sys/select.di(27): no identifier for declarator __FDMASK(int d) make[1]: *** [generated/linux/release/32/libphobos2.a] Error 1 make: *** [release] Error 2
May 05 2012
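A minimal reproduction of the failure mode Teoh spotted (hypothetical names; the real declarations live in core.sys.posix.sys.select): an auto return type can only be inferred from a function body, so a .di generator must either keep the body of an auto function or write out the inferred type explicitly.

```d
// OK: the body is present, so the return type can be inferred (here, int).
// The divisor 32 mirrors __NFDBITS on a 32-bit fd_mask, purely for illustration.
auto fdelt(int d) { return d / 32; }

// This fails exactly like select.di line 25 -- no body, nothing to infer:
//     extern (D) auto fdelt2(int d);
// A .di generator has two safe options:
//     auto fdelt2(int d) { return d / 32; }  // keep the body, or
//     int fdelt2(int d);                      // emit the inferred type

void main()
{
    assert(fdelt(64) == 2);
}
```

So the suspicion at the end of the previous post is right: stripping the bodies of auto-returning functions leaves declarations the parser cannot type, and a stripping pass has to special-case them.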
On Friday, 4 May 2012 at 21:11:22 UTC, H. S. Teoh wrote:Exactly. And while we're at it, *really* strip unnecessary stuff from .di files, like function bodies, template bodies, etc.. That stuff is required by the compiler, not the user, so stick that in the object files and let the compiler deal with it. The .di file should be ONLY what's needed for the user to understand how to use the library. TYou contradict yourself. The purpose of di files *is* to provide the compiler the required info to use the binary object/library. If you want human readable docs we already have DDoc (and other 3rd party tools) for that. If you don't like the default HTML output (I can't fathom why) you can easily define appropriate macros for other output types such as TeX (and PDF via external converter), text based, etc..
May 04 2012
On Sat, May 05, 2012 at 12:07:16AM +0200, foobar wrote:On Friday, 4 May 2012 at 21:11:22 UTC, H. S. Teoh wrote:HTML is a stupid format, and ddoc output is not very navigable, but that's beside the point. I prefer to be reading actual code to be 100% sure that ddoc isn't leaving out some stuff that I should know about. All it takes is for somebody to leave out a doc comment and a particular declaration becomes invisible. (For example, std.uni was next to useless before I discovered that it actually had functions that I needed, but they didn't show up in dlang.org 'cos somebody failed to write doc comments for them.) I've seen too many commercial projects to believe for a moment that documentation is ever up-to-date. It depends on the library authors to provide ddoc output formats in a sane, usable format. Whereas if the compiler had a standardized, uniform, understandable format in well-known code syntax, that's a lot more dependable. It's often impossible to debug something if you don't get to see what the compiler sees. I suppose you could argue that leaving out function bodies and stuff amounts to the same thing, but at least the language's interface for a function is the function's signature. When you have a .di file, you're guaranteed that all public declarations are there, and you can see exactly what they are. Of course, IF ddoc can be guaranteed to produce exactly what's in a .di file, then I concede that it is sufficient for this purpose. T -- Recently, our IT department hired a bug-fix engineer. He used to work for Volkswagen.Exactly. And while we're at it, *really* strip unnecessary stuff from .di files, like function bodies, template bodies, etc.. That stuff is required by the compiler, not the user, so stick that in the object files and let the compiler deal with it. The .di file should be ONLY what's needed for the user to understand how to use the library. TYou contradict yourself.
The purpose of di files *is* to provide the compiler the required info to use the binary object/library. If you want human readable docs we already have DDoc (and other 3rd party tools) for that. If you don't like the default HTML output (I can't fathom why) you can easily define appropriate macros for other output types such as TeX (and PDF via external converter), text based, etc..
May 04 2012
On 2012-05-05 00:39, H. S. Teoh wrote:It's often impossible to debug something if you don't get to see what the compiler sees. I suppose you could argue that leaving out function bodies and stuff amounts to the same thing, but at least the language's interface for a function is the function's signature. When you have a .di file, you're guaranteed that all public declarations are there, and you can see exactly what they are. Of course, IF ddoc can be guaranteed to produce exactly what's in a .di file, then I concede that it is sufficient this purpose.If the compiler can extract the .di files from an object file so can other tools. I don't see the problem. -- /Jacob Carlborg
May 05 2012
On Friday, 4 May 2012 at 22:38:27 UTC, H. S. Teoh wrote:On Sat, May 05, 2012 at 12:07:16AM +0200, foobar wrote:This all amounts to the issues you have with the current implementation of DDoc which I agree needs more work. The solution then is to fix/enhance DDoc. Doxygen for instance has a setting to output all declarations whether documented or not, thus addressing your main point. The projects you speak of I assume are written in C/C++? Those tend to have poor documentation precisely because people assume the header files are enough. C/C++ requires you to install a 3rd party doc tool and learn that tool's doc syntax - effort that people are too lazy to invest. In the Java world the syntax is standardized, the tool comes bundled with the compiler, all tools speak it and IDEs will even insert empty doc comment for you automatically. Frankly it takes effort to *not* document your code in this setting. D provides DDoc precisely because it strives to provide the same doc friendly setting as Java.On Friday, 4 May 2012 at 21:11:22 UTC, H. S. Teoh wrote:HTML is a stupid format, and ddoc output is not very navigable, but that's beside the point. I prefer to be reading actual code to be 100% sure that ddoc isn't leaving out some stuff that I should know about. All it takes is for somebody to leave out a doc comment and a particular declaration becomes invisible. (For example, std.uni was next to useless before I discovered that it actually had functions that I needed, but they didn't show up in dlang.org 'cos somebody failed to write doc comments for them.) I've seen too many commercial projects to believe for a moment that documentation is ever up-to-date. It depends on the library authors to provide ddoc output formats in a sane, usable format. Whereas if the compiler had a standardized, uniform, understandable format in well-known code syntax, that's a lot more dependable. It's often impossible to debug something if you don't get to see what the compiler sees. 
I suppose you could argue that leaving out function bodies and stuff amounts to the same thing, but at least the language's interface for a function is the function's signature. When you have a .di file, you're guaranteed that all public declarations are there, and you can see exactly what they are. Of course, IF ddoc can be guaranteed to produce exactly what's in a .di file, then I concede that it is sufficient for this purpose. TExactly. And while we're at it, *really* strip unnecessary stuff from .di files, like function bodies, template bodies, etc.. That stuff is required by the compiler, not the user, so stick that in the object files and let the compiler deal with it. The .di file should be ONLY what's needed for the user to understand how to use the library. TYou contradict yourself. The purpose of di files *is* to provide the compiler the required info to use the binary object/library. If you want human readable docs we already have DDoc (and other 3rd party tools) for that. If you don't like the default HTML output (I can't fathom why) you can easily define appropriate macros for other output types such as TeX (and PDF via external converter), text based, etc..
May 05 2012
Delphi, Turbo Pascal and FreePascal do the same. All the required information is stored in the tpu/fpu files (Turbo/Free Pascal Unit). A command line tool or IDE can easily show the unit interface. -- Paulo "foobar" wrote in message news:abzrrvpylkxhdzsdhesg forum.dlang.org... On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:How about augmenting the object format so that libraries would be self contained and would not require additional .di files? Is this possible with optlink, e.g. by adding special sections that would otherwise be ignored? I think that's what Go did in their linker but I don't know what format they use, if it's something specific to Go or general.I'm interested in starting a project to make a linker besides optlink for dmd on windows.Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.My vision is to create a linker in a relatively modern language (D) and to release the project as open source.If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
May 06 2012
On 5/4/12, Trass3r <un known.com> wrote:Hear hear. But I wouldn't mind seeing a linker in D, just for research purposes.I'm interested in starting a project to make a linker besides optlink for dmd on windows.Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.
May 04 2012
On Thursday, 3 May 2012 at 23:47:26 UTC, Trass3r wrote:Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.I'd love to, but i don't think i can spend a whole summer doing that ;)If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.I intend to first make something that works and gather experience and get a firm grasp of all the quirks of writing a linker. And there seems to be nice features such as dead code elimination and template magic to consider as well. Would be a shame to limit the capabilities by making the design too well defined in the beginning of the project, i think. So i'll defer the modularity & cross-platforminess for now but keep it in mind for the long run :)
May 06 2012
Am 04.05.2012 01:47, schrieb Trass3r:Yes supporting COFF would be a great benefit on Windows and would allow the user to use other compilers and linkers in conjunction with D. The other point: Writing a linker as part of GSoC 2013 will be easier for you if you've implemented COFF, since you won't need any further ramp-up time ;-).I'm interested in starting a project to make a linker besides optlink for dmd on windows.Imho changing dmd to use COFF (incl. 64 support) instead of that crappy OMF would be more beneficial than yet another linker.My vision is to create a linker in a relatively modern language (D) and to release the project as open source.If you do write a linker then make it cross-platform right from the start; and modular so it can support all object file formats.
May 08 2012
On Thu, 03 May 2012 19:47:19 -0300, Pierre LeMoine <yarr.luben+dlang gmail.com> wrote:Hi! I'm interested in starting a project to make a linker besides optlink for dmd on windows. If possible it'd be cool to run it as a gsoc-project, but if that's not an option I'll try to get it admitted as a soc-project at my university. Anyway, the project would aim to be a replacement or alternative to optlink on windows. I've personally encountered quite a few seemingly random problems with optlink, and the error messages are not exactly friendly. My vision is to create a linker in a relatively modern language (D) and to release the project as open source. So, I'm curious about some things; Is it too late to get this accepted as a summer of code project? Are there any current alternative linkers for dmd on windows, or any current projects aiming to create one? And do any of you know of an "Everything you need to know to write the best linker ever" resource center? ;]If you are interested in getting results rather than reinventing the wheel, I would advise you to have a look at the openwatcom.org wlink, and the forked jwlink as a starting point. The linker is open source, written in C and has user documentation (not source doc unfortunately). Roald
May 07 2012
On Monday, 7 May 2012 at 12:36:18 UTC, Roald Ribe wrote:If you are interested in getting results rather than reinventing the wheel, I would advice you to have a look at the openwatcom.org wlink, and the forked jwlink as a starting point. The linker is open source, written in C and has user documentation (not source doc unfortunately). RoaldThanks for the tip! :) What level of reinventing the wheel are we talking about? Did you suggest i fork (j)wlink or somesuch, or that i take a look at how it's implemented instead of reinventing from scratch? :) And does anyone know if wlink is able to link programs from dmd? I made a half-hearted attempt myself, but didn't manage to get it to work ;p /Pierre
May 07 2012
On 2012-05-07 17:41, Pierre LeMoine wrote:On Monday, 7 May 2012 at 12:36:18 UTC, Roald Ribe wrote:Perhaps you could have a look at "gold" as well: http://en.wikipedia.org/wiki/Gold_%28linker%29 -- /Jacob CarlborgIf you are interested in getting results rather than reinventing the wheel, I would advice you to have a look at the openwatcom.org wlink, and the forked jwlink as a starting point. The linker is open source, written in C and has user documentation (not source doc unfortunately). RoaldThanks for the tip! :) What level of reinventing the wheel are we talking about? Did you suggest i fork (j)wlink or somesuch, or that i take a look at how it's implemented instead of reinventing from scratch? :) And does anyone know if wlink is able to link programs from dmd? I made a half-hearted attempt myself, but didn't manage to get it to work ;p /Pierre
May 07 2012
On Mon, 07 May 2012 12:41:09 -0300, Pierre LeMoine <yarr.luben+dlang gmail.com> wrote:On Monday, 7 May 2012 at 12:36:18 UTC, Roald Ribe wrote:I believed that this guy had done it already, but turns out it was for the DMC compilers, not D. He might have some advice for you. http://cmeerw.org/prog/dm/ I can't really tell you what is best to achieve what you want. Have a look at the sources, ask the maintainers, evaluate the supporting environment of the available choices and find out. The openwatcom.org project also has a really nice debugger that could support D if anyone made the necessary changes. RoaldIf you are interested in getting results rather than reinventing the wheel, I would advise you to have a look at the openwatcom.org wlink, and the forked jwlink as a starting point. The linker is open source, written in C and has user documentation (not source doc unfortunately). RoaldThanks for the tip! :) What level of reinventing the wheel are we talking about? Did you suggest i fork (j)wlink or somesuch, or that i take a look at how it's implemented instead of reinventing from scratch? :) And does anyone know if wlink is able to link programs from dmd? I made a half-hearted attempt myself, but didn't manage to get it to work ;p /Pierre
May 08 2012