
digitalmars.D - What Makes A Programming Language Good

reply Walter Bright <newshound2 digitalmars.com> writes:
http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/
Jan 17 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 07:20:56 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/
So, why do users still get a scary linker error when they try to compile a program with more than 1 module?

IMO, sticking to the C-ism of "one object file at a time" and dependency on external build tools / makefiles is the biggest mistake DMD did in this regard. Practically everyone to whom I recommended to try D hit this obstacle.

rdmd is nice but I see no reason why this shouldn't be in the compiler. Think of the time wasted by build tool authors (bud, rebuild, xfbuild and others, and now rdmd), which could have been put to better use if this were handled by the compiler, who could do it much easier (until relatively recently it was very hard to track dependencies correctly).

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 07:20:56 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-progra
ming-language-good/ 
So, why do users still get a scary linker error when they try to compile a program with more than 1 module?
What is that message?
 IMO, sticking to the C-ism of "one object file at a time" and dependency 
 on external build tools / makefiles is the biggest mistake DMD did in 
 this regard. Practically everyone to whom I recommended to try D hit 
 this obstacle. rdmd is nice but I see no reason why this shouldn't be in 
 the compiler. Think of the time wasted by build tool authors (bud, 
 rebuild, xfbuild and others, and now rdmd), which could have been put to 
 better use if this were handled by the compiler, who could do it much 
 easier (until relatively recently it was very hard to track dependencies 
 correctly).
dmd can build entire programs with one command:

    dmd file1.d file2.d file3.d ...etc...
Jan 18 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 11:05:34 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 07:20:56 +0200, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/
So, why do users still get a scary linker error when they try to compile a program with more than 1 module?
What is that message?
C:\Temp\D\Build> dmd test1.d
OPTLINK (R) for Win32  Release 8.00.8
Copyright (C) Digital Mars 1989-2010  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
test1.obj(test1)
 Error 42: Symbol Undefined _D5test21fFZv
--- errorlevel 1

1) The error message is very technical:
   a) does not indicate what exactly is wrong (module not passed to linker, not that the linker knows that)
   b) does not give any indication of what the user has to do to fix it
2) OPTLINK doesn't demangle D mangled names, when it could, and it would improve the readability of its error messages considerably.
   (I know not all mangled names are demangleable, but it'd be a great improvement regardless)
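For reference, here is a minimal two-module setup that reproduces this exact error (file and function names are inferred from the mangled symbol _D5test21fFZv, which is test2.f()):

// test2.d
module test2;
void f() {}

// test1.d
module test1;
import test2;
void main() { f(); }

// dmd test1.d          -> compiles, but linking fails with "Error 42: Symbol Undefined _D5test21fFZv"
// dmd test1.d test2.d  -> builds fine, because both object files reach the linker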
 dmd can build entire programs with one command:

     dmd file1.d file2.d file3.d ...etc...
That doesn't scale anywhere. What if you want to use a 3rd-party library with a few dozen modules?

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 11:11:01 +0200, Vladimir Panteleev  
<vladimir thecybershadow.net> wrote:

   a) does not indicate what exactly is wrong (module not passed to  
 linker, not that the linker knows that)
By the way, disregarding extern(C) declarations et cetera, the compiler has the ability to detect when such linker errors will appear and take appropriate measures (e.g. suggest using the -c flag, passing the appropriate .d or .obj file on its command line, or using a build tool).

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
prev sibling next sibling parent reply Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes:
On 01/18/11 03:11, Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 11:05:34 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:
 
 Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 07:20:56 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/
So, why do users still get a scary linker error when they try to compile a program with more than 1 module?
What is that message?
C:\Temp\D\Build> dmd test1.d
OPTLINK (R) for Win32  Release 8.00.8
Copyright (C) Digital Mars 1989-2010  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
test1.obj(test1)
 Error 42: Symbol Undefined _D5test21fFZv
--- errorlevel 1

1) The error message is very technical:
   a) does not indicate what exactly is wrong (module not passed to linker, not that the linker knows that)
   b) does not give any indication of what the user has to do to fix it
2) OPTLINK doesn't demangle D mangled names, when it could, and it would improve the readability of its error messages considerably.
   (I know not all mangled names are demangleable, but it'd be a great improvement regardless)
 dmd can build entire programs with one command:

     dmd file1.d file2.d file3.d ...etc...
That doesn't scale anywhere. What if you want to use a 3rd-party library with a few dozen modules?
Then I would expect the library vendor provides either a pre-compiled binary library, or the means to readily generate same -- whether that means a Makefile, a script, or what have you. At that time, there is no need to provide DMD with anything -- unless you are one-lining it a la 'dmd file1 file2 file3 third_party_stuff.lib'.

Forgive me if I misunderstand, but I really don't want a language/compiler that goes too far into hand-holding. Let me screw up if I want to.

--
Chris N-S
Jan 18 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 12:07:21 +0200, Christopher Nicholson-Sauls  
<ibisbasenji gmail.com> wrote:

 That doesn't scale anywhere. What if you want to use a 3rd-party library
 with a few dozen modules?
Then I would expect the library vendor provides either a pre-compiled binary library, or the means to readily generate same -- whether that means a Makefile, a script, or what have you.
Why? You're saying that both the user and every library maintainer must do that additional work. Why should the user have to deal with pre-compiled libraries at all? The only thing the user should bother with is the package name for the library.

D can take care of everything else: check out the library sources from version control, build a library and generate .di files. The .di files can include pragmas which specify linking to that library. There are no technical reasons against this. In fact, DSSS already does most of this. AFAIK Ruby takes care of everything else, even when the library isn't installed on your system.
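As a rough sketch of what such a generated interface file could look like (the library and function names here are hypothetical; pragma(lib) is the existing D mechanism for embedding a link directive):

// mylib.di -- hypothetical auto-generated interface file
module mylib;

pragma(lib, "mylib");      // tells the compiler to pass the prebuilt library to the linker

void doSomething(int x);   // declaration only; the implementation lives in the prebuilt library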
 Forgive me if I misunderstand, but I really don't want a
 language/compiler that goes too far into hand-holding.  Let me screw up
 if I want to.
So, you want D to force people to do more work, for no practical reason?

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
parent reply bearophile <bearophileHUGS lycos.com> writes:
Vladimir Panteleev:

 So, you want D to force people to do more work, for no practical reason?
When you develop a large system, the nice hand-holding that works with small systems often stops working (because the whole language ecosystem is often not much designed for hierarchical decomposition of problems). In this situation you are often on your own, and often the automatic features work against you because their work and actions are often opaque. So those programmers develop a mistrust toward a compiler and tools that hold their hand too much.

A related problem is visible in old automatic pilot systems. They are very useful, but when their operating limits are reached (because some emergency has pushed the plane state outside them), they suddenly stop working, and leave the human pilots in bad waters because the humans don't have a lot of time to wake from their sleepy state and understand the situation well enough to face the problems. So those old automatic pilot systems were actively dangerous (new automatic pilot systems have found ways to reduce such problems).

To solve the situation, the future automatic D tools need to work in a very transparent way, giving all the information in an easy to use and understand way, showing all they do in a very clear way. So when they fail or when they stop being enough, the programmer doesn't need to work three times harder to solve the problems manually.

Bye,
bearophile
Jan 18 2011
parent el muchacho <nicolas.janin gmail.com> writes:
Le 18/01/2011 11:45, bearophile a écrit :
 Vladimir Panteleev:
 
 So, you want D to force people to do more work, for no practical reason?
When you develop a large system, the nice hand-holding that works with small systems often stops working (because the whole language ecosystem is often not much designed for hierarchical decomposition of problems). In this situation you are often on your own, and often the automatic features work against you because their work and actions are often opaque. So those programmers develop a mistrust toward a compiler and tools that hold their hand too much.

A related problem is visible in old automatic pilot systems. They are very useful, but when their operating limits are reached (because some emergency has pushed the plane state outside them), they suddenly stop working, and leave the human pilots in bad waters because the humans don't have a lot of time to wake from their sleepy state and understand the situation well enough to face the problems. So those old automatic pilot systems were actively dangerous (new automatic pilot systems have found ways to reduce such problems).

To solve the situation, the future automatic D tools need to work in a very transparent way, giving all the information in an easy to use and understand way, showing all they do in a very clear way. So when they fail or when they stop being enough, the programmer doesn't need to work three times harder to solve the problems manually.

Bye,
bearophile
My 2 cents: There is no need for transparency in the compilation and linking processes if things are well defined. Armies of developers in Java shops (including banks) trust their IDE to do almost everything, be it Eclipse, NetBeans or IntelliJ, sometimes all three at the same time in the same team.

This is the case in my team, where some developers use IntelliJ while others use Eclipse, out of the same source code repository. Both IDEs can compile and debug the software, and the final build is made by a big ant file which can check out, generate code, build with javac and run tests. So there are 3 build systems in parallel. One task of the ant file is run once by each developer to generate the code, and then the build is entirely handled by the build system, that is, the compiler of the IDE. There is no need to specify any dependency in the ant file. Of course, the IDE's compiler needs to be told where to find the library dependencies because we don't use Maven yet, but apart from that, there is no need to specify anything else.

This is in contrast with the horrible makefiles that still cripple most C++ projects, and still prevent C++ shops from benefiting from efficient IDEs. Having worked both on large C++ systems and Java systems, my only conclusion is: make is a huge waste of time.
Jan 29 2011
prev sibling parent reply Trass3r <un known.com> writes:
 Then I would expect the library vendor provides either a pre-compiled
 binary library
As soon as you provide templates in your library this isn't sufficient anymore.
 or the means to readily generate same -- whether that
 means a Makefile, a script, or what have you.
We must avoid having the same disastrous situation like C/C++ where everyone uses a different system, CMake, make, scons, blabla. Makefiles aren't portable (imo stuff like msys is no solution, it's a hack) and especially for small or medium-sized projects it's often enough to compile a single main file and all of its dependencies.

We really need a standard, portable way to compile D projects, be it implemented in the compiler or in some tool everyone uses. dsss was kind of promising but as you know it's dead.
Jan 18 2011
next sibling parent Gour <gour atmarama.net> writes:
On Tue, 18 Jan 2011 10:32:53 +0000 (UTC)
Trass3r <un known.com> wrote:

 We must avoid having the same disastrous situation like C/C++ where
 everyone uses a different system, CMake, make, scons, blabla.
I agree (planning not to use blabla build system, but waf). Otoh, I hope D2 will also be able to avoid things like:

http://cdsmith.wordpress.com/2011/01/16/haskells-own-dll-hell/

However, for now I'm more concerned to see 64bit DMD, complete QtD (or some other workable GUI bindings), some database bindings etc. first...

Sincerely,
Gour

--
Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA
----------------------------------------------------------------
Jan 18 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/18/11 4:32 AM, Trass3r wrote:
 Then I would expect the library vendor provides either a pre-compiled
 binary library
As soon as you provide templates in your library this isn't sufficient anymore.
 or the means to readily generate same -- whether that
 means a Makefile, a script, or what have you.
We must avoid having the same disastrous situation like C/C++ where everyone uses a different system, CMake, make, scons, blabla. Makefiles aren't portable (imo stuff like msys is no solution, it's a hack) and especially for small or medium-sized projects it's often enough to compile a single main file and all of its dependencies.

We really need a standard, portable way to compile D projects, be it implemented in the compiler or in some tool everyone uses. dsss was kind of promising but as you know it's dead.
You may add to bugzilla the features that rdmd needs to acquire. Andrei
Jan 18 2011
parent reply Trass3r <un known.com> writes:
 the features that rdmd needs to acquire
Well something that's also missing in xfBuild is a proper way to organize different build types: (debug, release) x (x86, x64) x ... But that would require config files similar to dsss' ones I think.
Jan 18 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Trass3r" <un known.com> wrote in message 
news:ih4ij7$1g01$1 digitalmars.com...
 the features that rdmd needs to acquire
Well something that's also missing in xfBuild is a proper way to organize different build types: (debug, release) x (x86, x64) x ... But that would require config files similar to dsss' ones I think.
FWIW, stbuild (part of semitwist d tools) exists to do exactly that:

http://www.dsource.org/projects/semitwist/browser/trunk/src/semitwist/apps/stmanage/stbuild
http://www.dsource.org/projects/semitwist/browser/trunk/bin

Although I'm thinking of replacing it with something more rake-like.
Jan 18 2011
parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:ih4p4o$1r1o$1 digitalmars.com...
 "Trass3r" <un known.com> wrote in message 
 news:ih4ij7$1g01$1 digitalmars.com...
 the features that rdmd needs to acquire
Well something that's also missing in xfBuild is a proper way to organize different build types: (debug, release) x (x86, x64) x ... But that would require config files similar to dsss' ones I think.
FWIW, stbuild (part of semitwist d tools) exists to do exactly that:

http://www.dsource.org/projects/semitwist/browser/trunk/src/semitwist/apps/stmanage/stbuild
http://www.dsource.org/projects/semitwist/browser/trunk/bin
Oh, and an example of the config file: http://www.dsource.org/projects/semitwist/browser/trunk/stbuild.conf
 Although I'm thinking of replacing it with something more rake-like.

 
Jan 18 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 11:05:34 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:
 
 Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 07:20:56 +0200, Walter Bright 
 <newshound2 digitalmars.com> wrote:

 http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-progra
ming-language-good/ 
So, why do users still get a scary linker error when they try to compile a program with more than 1 module?
What is that message?
C:\Temp\D\Build> dmd test1.d
OPTLINK (R) for Win32  Release 8.00.8
Copyright (C) Digital Mars 1989-2010  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
test1.obj(test1)
 Error 42: Symbol Undefined _D5test21fFZv
--- errorlevel 1

1) The error message is very technical:
   a) does not indicate what exactly is wrong (module not passed to linker, not that the linker knows that)
There could be many reasons for the error, see:

http://www.digitalmars.com/ctg/OptlinkErrorMessages.html#symbol_undefined

which is linked from the url listed:

http://www.digitalmars.com/ctg/optlink.html

and more directly from the FAQ:

http://www.digitalmars.com/faq.html
   b) does not give any indication of what the user has to do to fix it
The link above does give such suggestions, depending on what the cause of the error is.
 2) OPTLINK doesn't demangle D mangled names, when it could, and it would 
 improve the readability of its error messages considerably.
    (I know not all mangled names are demangleable, but it'd be a great 
 improvement regardless)
The odd thing is that Optlink did demangle the C++ mangled names, and people actually didn't like it that much.
 dmd can build entire programs with one command:

     dmd file1.d file2.d file3.d ...etc...
That doesn't scale anywhere. What if you want to use a 3rd-party library with a few dozen modules?
Just type the filenames and library names on the command line. You can put hundreds if you like.

If you do blow up the command line processor (nothing dmd can do about that), you can put all those files in a file, say "cmd", and invoke with:

     dmd  cmd

The only limit is the amount of memory in your system.
Jan 18 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 13:28:32 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 What is that message?
C:\Temp\D\Build> dmd test1.d
OPTLINK (R) for Win32  Release 8.00.8
Copyright (C) Digital Mars 1989-2010  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
test1.obj(test1)
 Error 42: Symbol Undefined _D5test21fFZv
--- errorlevel 1

1) The error message is very technical:
   a) does not indicate what exactly is wrong (module not passed to linker, not that the linker knows that)
There could be many reasons for the error, see:
Sorry, you're missing the point. The toolchain has the ability to output a much more helpful error message (or just do the right thing and compile the whole project, which is obviously what the user intends to do 99% of the time).
 http://www.digitalmars.com/ctg/OptlinkErrorMessages.html#symbol_undefined

 which is linked from the url listed:

 http://www.digitalmars.com/ctg/optlink.html

 and more directly from the FAQ:

 http://www.digitalmars.com/faq.html

   b) does not give any indication of what the user has to do to fix it
The link above does give such suggestions, depending on what the cause of the error is.
This is not nearly good enough. I can bet you that over 95% of users will Google for the error message instead. Furthermore, that webpage is very technical. Some D users (those wanting a high-performance high-level programming language) don't even need to know what a linker is or does.
 2) OPTLINK doesn't demangle D mangled names, when it could, and it  
 would improve the readability of its error messages considerably.
    (I know not all mangled names are demangleable, but it'd be a great  
 improvement regardless)
The odd thing is that Optlink did demangle the C++ mangled names, and people actually didn't like it that much.
I think we can agree that there is a significant difference between the two audiences (users of your C++ toolchain who need a high-end, high-performance C++ compiler, vs. people who want to try a new programming language). You can make it an option, or just print both mangled and demangled.
 dmd can build entire programs with one command:

     dmd file1.d file2.d file3.d ...etc...
That doesn't scale anywhere. What if you want to use a 3rd-party library with a few dozen modules?
Just type the filenames and library names on the command line. You can put hundreds if you like.

If you do blow up the command line processor (nothing dmd can do about that), you can put all those files in a file, say "cmd", and invoke with:

     dmd  cmd

The only limit is the amount of memory in your system.
That's not what I meant - I meant it doesn't scale as far as user effort is concerned. There is no reason why D should force users to maintain response files, make files, etc. D (the language) doesn't need them, and nor should the reference implementation.

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
parent reply Jim <bitcirkel yahoo.com> writes:
      dmd  cmd

 The only limit is the amount of memory in your system.
That's not what I meant - I meant it doesn't scale as far as user effort is concerned. There is no reason why D should force users to maintain response files, make files, etc. D (the language) doesn't need them, and nor should the reference implementation.
I have to second that. Your main.d imports abd.d which, in turn, imports xyz.d. Why can't the compiler traverse this during compilation in order to find all relevant modules and compile them if needed? I imagine such a compiler could also do some interesting optimisations based on its greater perspective. The single file as a compilation unit seems a little myopic to me. Its reasons are historic, I bet.
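For illustration, a sketch of that layout (file names taken from the example above); rdmd already does this kind of traversal by asking the compiler for the import list:

// xyz.d
void helper() {}

// abd.d
import xyz;
void run() { helper(); }

// main.d
import abd;
void main() { run(); }

// "rdmd main.d" asks dmd for main.d's dependencies, discovers abd.d and xyz.d
// next to it, and compiles and links all three in one go.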
Jan 18 2011
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 14:47:29 +0200, Jim <bitcirkel yahoo.com> wrote:

 I imagine such a compiler could also do some interesting optimisations  
 based on its greater perspective.
Compiling the entire program at once opens the door to much more than just optimizations. You could have virtual templated methods, for one.

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
prev sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
Jim wrote:
 Why can't the compiler traverse this during compilation in order to
 find all relevant modules and compile them if needed?
How will it find all the modules? Since modules and files don't have to have matching names, it can't assume "import foo;" will necessarily be found in "foo.d". I use this fact a lot to get all a program's dependencies in one place.

The modules don't necessarily have to be under the current directory either. It'd have a lot of files to search, which might be brutally slow.

... but, if you do want that behavior, you can get it today somewhat easily: dmd *.d, which works quite well if all the things are in one folder anyway.
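For illustration, a sketch of the mismatch being described (file and module names are hypothetical):

// file: alldeps.d -- gathers a dependency under a different module name
module foo;      // the module name need not match the file name
void doFoo() {}

// file: main.d
import foo;      // a filename-based search would look for foo.d, which doesn't exist
void main() { doFoo(); }

// so alldeps.d has to be named on the command line explicitly:
//     dmd main.d alldeps.d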
Jan 18 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 15:51:58 +0200, Adam Ruppe <destructionator gmail.com>  
wrote:

 Jim wrote:
 Why can't the compiler traverse this during compilation in order to
 find all relevant modules and compile them if needed?
How will it find all the modules? Since modules and files don't have to have matching names, it can't assume "import foo;" will necessarily be found in "foo.d". I use this fact a lot to get all a program's dependencies in one place.
I think this is a misfeature. I suppose you avoid using build tools and prefer makefiles/build scripts for some reason?
 The modules don't necessarily have to be under the current
 directory either. It'd have a lot of files to search, which might
 be brutally slow.
Not if the compiler knows the file name based on the module name.
 ... but, if you do want that behavior, you can get it today somewhat
 easily: dmd *.d, which works quite well if all the things are in
 one folder anyway.
...which won't work on Windows, for projects with packages, and if you have any unrelated .d files (backups, test programs) in your directory (which I almost always do).

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
parent reply Adam Ruppe <destructionator gmail.com> writes:
Vladimir Panteleev:
 I think [file/module name mismatches] is a misfeature.
Maybe. 9/10 times they match anyway, but I'd be annoyed if the package names had to match the containing folder. Here's what I think might work: just use the existing import path rule. If it gets a match, great. If not, the user can always manually add the other file to the command line anyway.
 I suppose you avoid using build tools and
 prefer makefiles/build scripts for some reason?
Yeah, makefiles and build scripts are adequately fit already. That is, they don't suck enough to justify the effort of getting something new. I've thought about making an automatic build+download thing myself in the past, but the old way has been good enough for me.

(If I were to do it, I'd take rdmd and add a little http download facility to it. If you reference a module that isn't already there, it'd look up the path to download it from a config file, grab it, and try the compile. If the config file doesn't exist, it can grab one automatically from a central location. That way, it'd be customizable and extensible by anyone, but still just work out of the box. But, like I said, it stalled out because my classic makefile and simple scripts have been good enough for me.)
 ...which won't work on Windows, for projects with packages, and if
 you have any unrelated .d files (backups, test programs) in your
 directory (which I almost always do).
Indeed.
Jan 18 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 16:58:31 +0200, Adam Ruppe <destructionator gmail.com>  
wrote:

 Yeah, makefiles and build scripts are adequately fit already.
Then the question is: does the time you spent writing and maintaining makefiles and build scripts exceed the time it would take you to set up a build tool?

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
next sibling parent Adam Ruppe <destructionator gmail.com> writes:
Vladimir Panteleev wrote:
 Then the question is: does the time you spent writing and maintaining
 makefiles and build scripts exceed the time it would take you to
 set up a build tool?
I never spent too much time on it anyway, but this thread prompted me to write my own build thing. It isn't 100% done yet, but it does basically work in just 100 lines of code:

http://arsdnet.net/dcode/build.d

Also depends on these:

http://arsdnet.net/dcode/exec.d
http://arsdnet.net/dcode/curl.d

The exec.d is Linux only, so this program is Linux only too. When the new std.process gets into Phobos, exec.d will be obsolete and we'll be cross platform. I borrowed some code from rdmd, so thanks to Andrei for that. I didn't use rdmd directly though since it seems more script oriented than I wanted.

The way it works:

build somefile.d

It uses dmd -v (same as rdmd) to get the list of files it tries to import. It watches dmd's error output for files it can't find. It then tries to fetch those files from my dpldocs.info http folder and tries again (http://dpldocs.info/repository/FILE). If dmd -v completes without errors, it moves on to run the actual compile. All of build's arguments are passed straight to dmd.

In my other post, I talked about a configuration file. That would be preferred over just using my own http server so we can spread out our efforts. I just wanted something simple now to see if it actually works well.

It worked on my simple program, but on my more complex program, the linker failed... complaining about the stupid associative array opApply. Usually my hack to add object_.d from druntime fixes that, but not here. I don't know why.

undefined reference to `_D6object30__T16AssociativeArrayTAyaTyAaZ16AssociativeArray7opApplyMFMDFKAyaKyAaZiZi'

Meh, I should get to my real work anyway, maybe I'll come back to it. The stupid AAs give me more linker errors than anything else, and they are out of my control!
Jan 18 2011
prev sibling parent reply Austin Hastings <ah08010-d yahoo.com> writes:
On 1/18/2011 10:31 AM, Vladimir Panteleev wrote:
 Then the question is: does the time you spent writing and maintaining
 makefiles and build scripts exceed the time it would take you to set up
 a build tool?
For D, no.

When I tried to get started with D2, there were a lot of pointers to kewl build utilities on d-source. None of them worked. None of them that needed to self-build were capable of it. (Some claimed to "just run," which was also false.) So I wound up pissing away about two days (spread out here and there as one library or another would proudly report "this uses build tool Z - isn't it cool?!" and I'd chase down another failure).

On the other hand, Gnu Make works. And Perl works. And the dmd2 compiler spits out a dependency list that, with a little bit of perl foo, turns into a makefile fragment nicely. So now I have a standard makefile package that knows about parsing D source to figure out all the incestuous little details about what calls what. And I'm able, thanks to the miracle of "start here and recurse," to move this system from project to project with about 15 minutes of tweaking. Sometimes more, if there's a whole bunch of targets getting built.

What's even more, of course, is that my little bit of Makefile foo is portable. I can use make with C, D, Java, C++, Perl, XML, or whatever language-of-the-week I'm playing with. Which is certainly not true of "L33+ build tool Z." And make is pretty much feature-complete at this point, again unlike any of the D build tools. Which means that investing in knowing how to tweak make pays off way better than time spent learning BTZ.

=Austin
Jan 18 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wed, 19 Jan 2011 07:16:40 +0200, Austin Hastings <ah08010-d yahoo.com>  
wrote:

 None of them worked.
Most of those build utilities do exactly what make + your perl-foo do.

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
parent reply Austin Hastings <ah08010-d yahoo.com> writes:
On 1/19/2011 12:50 AM, Vladimir Panteleev wrote:
 On Wed, 19 Jan 2011 07:16:40 +0200, Austin Hastings
 <ah08010-d yahoo.com> wrote:

 None of them worked.
Most of those build utilities do exactly what make + your perl-foo do.
No, they don't.

That's the point: I was _getting started_ with D2. I had no strong desire to reinvent the wheel, build tool-wise. But the tools I was pointed at just didn't work. I don't mean in a theoretic way - "this tool doesn't detect clock skew on a network that spans the international date line!" - I mean they wouldn't compile, or would compile but couldn't parse the D2 source files, or would compile but then crashed when I ran them.

=Austin
Jan 18 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wed, 19 Jan 2011 08:09:11 +0200, Austin Hastings <ah08010-d yahoo.com>  
wrote:

 On 1/19/2011 12:50 AM, Vladimir Panteleev wrote:
 On Wed, 19 Jan 2011 07:16:40 +0200, Austin Hastings
 <ah08010-d yahoo.com> wrote:

 None of them worked.
Most of those build utilities do exactly what make + your perl-foo do.
No, they don't.
Actually, you're probably right here. To my knowledge, there are only two build tools that take advantage of the -deps compiler option - rdmd and xfbuild. Older ones were forced to parse the source files - rebuild even used DMD's frontend for that. There's also a relatively new tool (dbuild oslt?) which generates makefiles.
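Roughly, the mechanism those two tools rely on looks like this (the deps output format is abbreviated here, not quoted exactly):

dmd -deps=deps.txt -o- main.d
# deps.txt now lists, one line per import, every module that main.d pulls in
# (transitively) together with the file it was found in; a build tool can then
# feed exactly those files back to dmd in a single invocation.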
 That's the point: I was _getting started_ with D2. I had no strong  
 desire to reinvent the wheel, build tool-wise. But the tools I was  
 pointed at just didn't work.
When a tool works for the author and many other users but not for you, you have to wonder where the fault really is. Besides, aren't all these tools open-source? The one time I had a problem with DSSS, it was easy to fix, and I sent the author a patch and everyone was better off from it. Isn't that how open-source works? :)

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 19.01.2011 07:35, schrieb Vladimir Panteleev:
 On Wed, 19 Jan 2011 08:09:11 +0200, Austin Hastings <ah08010-d yahoo.com>
wrote:

 On 1/19/2011 12:50 AM, Vladimir Panteleev wrote:
 On Wed, 19 Jan 2011 07:16:40 +0200, Austin Hastings
 <ah08010-d yahoo.com> wrote:

 None of them worked.
Most of those build utilities do exactly what make + your perl-foo do.
No, they don't.
Actually, you're probably right here. To my knowledge, there are only two build tools that take advantage of the -deps compiler option - rdmd and xfbuild. Older ones were forced to parse the source files - rebuild even used DMD's frontend for that. There's also a relatively new tool (dbuild oslt?) which generates makefiles.
 That's the point: I was _getting started_ with D2. I had no strong desire to
 reinvent the wheel, build tool-wise. But the tools I was pointed at just
 didn't work.
When a tool works for the author and many other users but not for you, you have to wonder where the fault really is. Besides, aren't all these tools open-source? The one time I had a problem with DSSS, it was easy to fix, and I sent the author a patch and everyone was better off from it. Isn't that how open-source works? :)
When you're learning a language, you want to get familiar with it before starting to fix stuff.
Jan 19 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote:
 When you're learning a language, you want to get familiar with it before 
 starting to fix stuff.
I tend to learn things by fixing them :-)
Feb 07 2011
next sibling parent Gour <gour atmarama.net> writes:
On Mon, 07 Feb 2011 01:06:46 -0800
Walter Bright <newshound2 digitalmars.com> wrote:

 I tend to learn things by fixing them :-)
Heh... this is called 'engineer'. ;)

Sincerely,
Gour

--
Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA
----------------------------------------------------------------
Feb 07 2011
prev sibling parent spir <denis.spir gmail.com> writes:
On 02/07/2011 10:06 AM, Walter Bright wrote:
 Daniel Gibson wrote:
 When you're learning a language, you want to get familiar with it before
 starting to fix stuff.
I tend to learn things by fixing them :-)
¡ great !

Though original authors often do not appreciate this attitude very much, early fans even less ;-) [whatever your true interest, humility, and, hum, say, "good will"]

Denis
--
_________________
vita es estrany
spir.wikidot.com
Feb 07 2011
prev sibling parent reply "nedbrek" <nedbrek yahoo.com> writes:
"Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message 
news:op.vpjlwrletuzx1w cybershadow.mshome.net...
 On Wed, 19 Jan 2011 08:09:11 +0200, Austin Hastings <ah08010-d yahoo.com> 
 wrote:

 On 1/19/2011 12:50 AM, Vladimir Panteleev wrote:
Actually, you're probably right here. To my knowledge, there are only two build tools that take advantage of the -deps compiler option - rdmd and xfbuild. Older ones were forced to parse the source files - rebuild even used DMD's frontend for that. There's also a relatively new tool (dbuild oslt?) which generates makefiles.
Can someone tell me the corner case that requires a build tool to parse the whole source file? My make helper is awk, it just looks for the "import" and strips out the needed info... Thanks, Ned
Jan 19 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"nedbrek" <nedbrek yahoo.com> wrote in message 
news:ih6o0g$2geu$1 digitalmars.com...
 "Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message 
 news:op.vpjlwrletuzx1w cybershadow.mshome.net...
 On Wed, 19 Jan 2011 08:09:11 +0200, Austin Hastings <ah08010-d yahoo.com> 
 wrote:

 On 1/19/2011 12:50 AM, Vladimir Panteleev wrote:
Actually, you're probably right here. To my knowledge, there are only two build tools that take advantage of the -deps compiler option - rdmd and xfbuild. Older ones were forced to parse the source files - rebuild even used DMD's frontend for that. There's also a relatively new tool (dbuild oslt?) which generates makefiles.
Can someone tell me the corner case that requires a build tool to parse the whole source file? My make helper is awk, it just looks for the "import" and strips out the needed info...
Just as a few examples:

mixin("import foo.bar;");

// or

enum a = "import ";
enum b = "foo.";
enum c = "bar;";
mixin(a~b~c);

// or

static if(/+some fancy condition here+/)
    import foo.bar;
Jan 19 2011
next sibling parent "nedbrek" <nedbrek yahoo.com> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:ih7dj0$s4j$1 digitalmars.com...
 "nedbrek" <nedbrek yahoo.com> wrote in message 
 news:ih6o0g$2geu$1 digitalmars.com...
 "Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message 
 news:op.vpjlwrletuzx1w cybershadow.mshome.net...
 On Wed, 19 Jan 2011 08:09:11 +0200, Austin Hastings 
 <ah08010-d yahoo.com> wrote:

 On 1/19/2011 12:50 AM, Vladimir Panteleev wrote:
Actually, you're probably right here. To my knowledge, there are only two build tools that take advantage of the -deps compiler option - rdmd and xfbuild. Older ones were forced to parse the source files - rebuild even used DMD's frontend for that. There's also a relatively new tool (dbuild oslt?) which generates makefiles.
Can someone tell me the corner case that requires a build tool to parse the whole source file? My make helper is awk, it just looks for the "import" and strips out the needed info...
Just as a few examples:

mixin("import foo.bar;");

// or

enum a = "import ";
enum b = "foo.";
enum c = "bar;";
mixin(a~b~c);

// or

static if(/+some fancy condition here+/)
    import foo.bar;
Thanks! Fortunately, I am the only one on this project, so I will be careful to avoid such things! :) Ned
Jan 19 2011
prev sibling parent el muchacho <nicolas.janin gmail.com> writes:
Le 19/01/2011 20:20, Nick Sabalausky a écrit :
 "nedbrek" <nedbrek yahoo.com> wrote in message 
 news:ih6o0g$2geu$1 digitalmars.com...
 "Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message 
 news:op.vpjlwrletuzx1w cybershadow.mshome.net...
 On Wed, 19 Jan 2011 08:09:11 +0200, Austin Hastings <ah08010-d yahoo.com> 
 wrote:

 On 1/19/2011 12:50 AM, Vladimir Panteleev wrote:
Actually, you're probably right here. To my knowledge, there are only two build tools that take advantage of the -deps compiler option - rdmd and xfbuild. Older ones were forced to parse the source files - rebuild even used DMD's frontend for that. There's also a relatively new tool (dbuild oslt?) which generates makefiles.
Can someone tell me the corner case that requires a build tool to parse the whole source file? My make helper is awk, it just looks for the "import" and strips out the needed info...
Just as a few examples:

mixin("import foo.bar;");

// or

enum a = "import ";
enum b = "foo.";
enum c = "bar;";
mixin(a~b~c);

// or

static if(/+some fancy condition here+/)
    import foo.bar;
This is exactly the reason why the build system must be included in the compiler and not in external tools.
Jan 29 2011
prev sibling parent reply Jim <bitcirkel yahoo.com> writes:
Adam Ruppe Wrote:
 Maybe. 9/10 times they match anyway, but I'd be annoyed if
 the package names had to match the containing folder.
This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use.
Jan 18 2011
next sibling parent reply spir <denis.spir gmail.com> writes:
On 01/18/2011 06:33 PM, Jim wrote:
 Adam Ruppe Wrote:
 Maybe. 9/10 times they match anyway, but I'd be annoyed if
 the package names had to match the containing folder.
This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use.
The D styleguide requires on one hand capitalised names for types, and lowercase for filenames on the other. How are we supposed to make them match?

Denis
_________________
vita es estrany
spir.wikidot.com
Jan 18 2011
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
spir:

 The D styleguide requires on one hand capitalised names for types, and 
 lowercase for filenames on the other. How are we supposed to make them 
 match?
Why do you want them to match? Bye, bearophile
Jan 18 2011
parent reply spir <denis.spir gmail.com> writes:
On 01/18/2011 07:10 PM, bearophile wrote:
 spir:

 The D styleguide requires on one hand capitalised names for types, and
 lowercase for filenames on the other. How are we supposed to make them
 match?
Why do you want them to match?
Because when a module defines a type Foo (or rather, it's what is exported), I like it to be called Foo.d. A module called doFoo.d would certainly mainly define a func doFoo. So, people directly know what's in there (and this, from D's own [supposed] naming rules :-). Simple, no?

Denis
_________________
vita es estrany
spir.wikidot.com
Jan 19 2011
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
spir:

 Because when a module defines a type Foo (or rather, it's what is 
 exported), I like it to be called Foo.d.
Generally D modules contain many types. Bye, bearophile
Jan 19 2011
parent reply spir <denis.spir gmail.com> writes:
On 01/19/2011 12:56 PM, bearophile wrote:
 spir:

 Because when a module defines a type Foo (or rather, it's what is
 exported), I like it to be called Foo.d.
Generally D modules contain many types.
Yep, but often one is the main exported element. When there are several, hopefully sensibly related, exported things, then it's easy to indicate: mathFuncs, stringTools, bitOps... while still following D naming conventions.

Was it me or you who heavily & repeatedly insisted on the importance of consistent style, in particular naming, in a programming community (I strongly support you on this point)? Why should modules not benefit from this?

For sure, there are case-insensitive filesystems, but that only prevents using _in the same dir_ (or package) module names that differ only in case. I guess.

Denis
_________________
vita es estrany
spir.wikidot.com
Jan 19 2011
parent bearophile <bearophileHUGS lycos.com> writes:
spir:

 Yep, but often one is the main exported element.
That's not true for Phobos, my dlibs1, and a lot of my code that uses those libs.
When there are several, hopefully sensibly related, exported things, then it's
easy to indicate: mathFuncs, stringTools, bitOps... while still following D
naming conventions.<
D module names are better fully lowercase. This is their D standard... Bye, bearophile
Jan 19 2011
prev sibling next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wed, 19 Jan 2011 12:57:42 +0200, spir <denis.spir gmail.com> wrote:

 Because when a module defines a type Foo (or rather, it's what is  
 exported), I like it to be called Foo.d. A module called doFoo.d would  
 certainly mainly define a func doFoo. So, people directly know what's in  
 there (and this, from D's own [supposed] naming rules :-). Simple, no?
I actually tried this convention for a project. It turned out to be a not very good idea, because if you want to access a static member or subclass of said class, you must specify the type twice (once for the module name, and another for the type) - e.g. "Foo.Foo.bar()".

Besides, it's against the recommended D code style convention:
http://www.digitalmars.com/d/2.0/dstyle.html

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 19 2011
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"spir" <denis.spir gmail.com> wrote in message 
news:mailman.710.1295434677.4748.digitalmars-d puremagic.com...
 On 01/18/2011 07:10 PM, bearophile wrote:
 spir:

 The D styleguide requires on one hand capitalised names for types, and
 lowercase for filenames on the other. How are we supposed to make them
 match?
Why do you want them to match?
Because when a module defines a type Foo (or rather, it's what is exported), I like it to be called Foo.d. A module called doFoo.d would certainly mainly define a func doFoo. So, people directly know what's in there (and this, from D's own [supposed] naming rules :-). Simple, no?
If I have a class Foo in its own module, I call the module (and file) "foo". I find this to be simple too, because this way types are always capitalized and modules are always non-capitalized. Plus, like Vladimir indicated, this makes it a lot easier to distinguish between the type ("Foo") and the module ("foo").
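A tiny sketch of that convention (names hypothetical):

// file: foo.d
module foo;        // module/file name lowercase
class Foo          // type name capitalized
{
    static void bar() {}
}

// elsewhere:
//   import foo;
//   foo.Foo.bar();   // module and type stay visually distinct in qualified names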
Jan 19 2011
prev sibling parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 18.01.2011 18:41, schrieb spir:
 On 01/18/2011 06:33 PM, Jim wrote:
 Adam Ruppe Wrote:
 Maybe. 9/10 times they match anyway, but I'd be annoyed if
 the package names had to match the containing folder.
This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use.
The D styleguide requires on one hand capitalised names for types, and lowercase for filenames on the other. How are we supposed to make them match? Denis _________________ vita es estrany spir.wikidot.com
Filenames should match with the module they contain, not with the contained class(es).
Jan 18 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Jim wrote:
 Adam Ruppe Wrote:
 Maybe. 9/10 times they match anyway, but I'd be annoyed if the package
 names had to match the containing folder.
This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use.
Forcing the module name to match the file name sounds good, but in practice it makes it hard to debug modules. What I like to do is to copy a suspicious module to foo.d (or whatever.d) and link it in explicitly, which will override the breaking one. Then, I hack away at it until I discover the problem, then fix the original.
Jan 18 2011
next sibling parent reply Jim <bitcirkel yahoo.com> writes:
Walter Bright Wrote:
 Forcing the module name to match the file name sounds good, but in practice it 
 makes it hard to debug modules. What I like to do is to copy a suspicious
module 
 to foo.d (or whatever.d) and link it in explicitly, which will override the 
 breaking one. Then, I hack away at it until I discover the problem, then fix
the 
 original.
This would admittedly impose some constraints, but I think it would ultimately be worth it. It makes everything much clearer and creates a bunch of opportunities for further development.

I'd create a branch (in git or mercurial) for that task, it's quick and dirt cheap, very easy to switch to and from, and you get the diff for free.
Jan 18 2011
parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Jim Wrote:

 Walter Bright Wrote:
 Forcing the module name to match the file name sounds good, but in practice it 
 makes it hard to debug modules. What I like to do is to copy a suspicious
module 
 to foo.d (or whatever.d) and link it in explicitly, which will override the 
 breaking one. Then, I hack away at it until I discover the problem, then fix
the 
 original.
This would admittedly impose some constraints, but I think it would ultimately be worth it. It makes everything much clearer and creates a bunch of opportunities for further development.
I don't see such benefit. First off, I don't see file/module names not matching very often. Tools can be developed to assume such structure exists, which means more incentive to keep such structure; I believe rdmd already makes this assumption. It also wouldn't be hard to make a program that takes a list of files, names and places them into the proper structure.
 I'd create a branch (in git or mercurial) for that task, it's quick and dirt
 cheap, very easy to switch to and from, and you get the diff for free.
Right, using such tools is great. But what if you are like me and don't have a dev environment set up for Phobos, but I want to fix some module? Do I have to set up such an environment, or can I just throw the file in a folder std/ and do some work on it? I don't really know how annoying I would find such a change, but I don't think I would ever see it as a feature.
Jan 18 2011
prev sibling parent reply Thias <void invalid.com> writes:
On 18/01/11 20:26, Walter Bright wrote:
 Jim wrote:
 Adam Ruppe Wrote:
 Maybe. 9/10 times they match anyway, but I'd be annoyed if the package
 names had to match the containing folder.
This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use.
Forcing the module name to match the file name sounds good, but in practice it makes it hard to debug modules. What I like to do is to copy a suspicious module to foo.d (or whatever.d) and link it in explicitly, which will override the breaking one. Then, I hack away at it until I discover the problem, then fix the original.
Couldn’t you do exactly the same thing by just copying the file?

cp suspicious.d suspicious.orig
edit suspicious.d
Jan 18 2011
parent "Nick Sabalausky" <a a.a> writes:
"Thias" <void invalid.com> wrote in message 
news:ih52a8$2bba$1 digitalmars.com...
 On 18/01/11 20:26, Walter Bright wrote:
 Jim wrote:
 Adam Ruppe Wrote:
 Maybe. 9/10 times they match anyway, but I'd be annoyed if the package
 names had to match the containing folder.
This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use.
Forcing the module name to match the file name sounds good, but in practice it makes it hard to debug modules. What I like to do is to copy a suspicious module to foo.d (or whatever.d) and link it in explicitly, which will override the breaking one. Then, I hack away at it until I discover the problem, then fix the original.
Couldn’t you do exactly the same thing by just copying the file?

cp suspicious.d suspicious.orig
edit suspicious.d
That's what I do. Works fine. (Although I keep the .d extension, and do like "suspicious orig.d")
Jan 18 2011
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/18/11, Walter Bright <newshound2 digitalmars.com> wrote:
 You can put
 hundreds if you like.
DMD can, but Optlink can't handle long arguments.
Jan 18 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
 On 1/18/11, Walter Bright <newshound2 digitalmars.com> wrote:
 You can put
 hundreds if you like.
DMD can, but Optlink can't handle long arguments.
Example?
Jan 18 2011
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/18/11, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 On 1/18/11, Walter Bright <newshound2 digitalmars.com> wrote:
 You can put
 hundreds if you like.
DMD can, but Optlink can't handle long arguments.
Although now that I've read the error description I might have passed a wrong argument somehow. I'll take a look.
Jan 18 2011
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Vladimir Panteleev:

 IMO, sticking to the C-ism of "one object file at a time" and dependency  
 on external build tools / makefiles is the biggest mistake DMD did in this  
 regard.
A Unix philosophy is to create tools that are able to do only one thing well, and rdmd uses DMD to do its job of helping compile small projects automatically. Yet the D compiler is not following that philosophy in many situations, because it is doing a lot of stuff besides compiling D code: a profiler, code coverage analyser, unittester, docs generator, JSON summary generator, and more. The D1 compiler used to have a cute literate programming feature too, the kind that's often used by Haskell blogs. Here Walter is pragmatic: the docs generator happens to be quicker to create and maintain if it's built inside the compiler.

So is it right to fold this rdmd functionality into the compiler? Is this practically useful, e.g. is this going to increase rdmd speed? Folding rdmd functionality inside the compiler may risk freezing the future evolution of future D build tools, so it has risks/costs too.

Bye,
bearophile
Jan 18 2011
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 11:22:36 +0200, bearophile <bearophileHUGS lycos.com>  
wrote:

 Folding rdmd functionality inside the compiler may risk freezing the  
 future evolution of future D build tools, so it has risks/costs too.
Nobody needs more than one (good) D build tool. How many build tools do Go/Scala/Haskell/etc. have? Regardless of whether it's in the compiler or not, the only real requirement is that the source is maintainable, and the barrier to contribute to it is low enough (hurray for GitHub).

--
Best regards,
Vladimir                            mailto:vladimir thecybershadow.net
Jan 18 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-01-18 10:22, bearophile wrote:
 Vladimir Panteleev:

 IMO, sticking to the C-ism of "one object file at a time" and dependency
 on external build tools / makefiles is the biggest mistake DMD did in this
 regard.
A Unix philosophy is to create tools that are able to do only one thing well, and rdmd uses DMD to do its job of helping compile small projects automatically. Yet the D compiler is not following that philosophy in many situations, because it is doing a lot of stuff besides compiling D code: a profiler, code coverage analyser, unittester, docs generator, JSON summary generator, and more. The D1 compiler used to have a cute literate programming feature too, the kind that's often used by Haskell blogs. Here Walter is pragmatic: the docs generator happens to be quicker to create and maintain if it's built inside the compiler.

So is it right to fold this rdmd functionality into the compiler? Is this practically useful, e.g. is this going to increase rdmd speed? Folding rdmd functionality inside the compiler may risk freezing the future evolution of future D build tools, so it has risks/costs too.

Bye,
bearophile
I would say that in this case the LLVM/Clang approach would be the best. Build a solid compiler library that other tools can be built upon, including the compiler.

--
/Jacob Carlborg
Jan 18 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote:
 IMO, sticking to the C-ism of "one object file at a time" and dependency 
 on external build tools / makefiles is the biggest mistake DMD did in 
 this regard.
You don't need such a tool with dmd until your project exceeds a certain size. Most of my little D projects' "build tool" is a one line script that looks like:

    dmd foo.d bar.d

There's just no need to go farther than that.
Jan 18 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 22:17:08 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 IMO, sticking to the C-ism of "one object file at a time" and  
 dependency on external build tools / makefiles is the biggest mistake  
 DMD did in this regard.
You don't need such a tool with dmd until your project exceeds a certain size. Most of my little D projects' "build tool" is a one line script that looks like: dmd foo.d bar.d There's just no need to go farther than that.
Let's review the two problems discussed in this thread:

1) Not passing all modules to the compiler results in a nearly-incomprehensible (for some) linker error.

2) DMD's inability (or rather, unwillingness) to build the whole program when it's in the position to, which creates the dependency on external build tools (or solutions that require unnecessary human effort).

Are you saying that there's no need to fix either of these because they don't bother you personally? -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 18 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/18/11 11:37 PM, Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 22:17:08 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 IMO, sticking to the C-ism of "one object file at a time" and
 dependency on external build tools / makefiles is the biggest mistake
 DMD did in this regard.
You don't need such a tool with dmd until your project exceeds a certain size. Most of my little D projects' "build tool" is a one line script that looks like: dmd foo.d bar.d There's just no need to go farther than that.
Let's review the two problems discussed in this thread: 1) Not passing all modules to the compiler results in a nearly-incomprehensible (for some) linker error. 2) DMD's inability (or rather, unwillingness) to build the whole program when it's in the position to, which creates the dependency on external build tools (or solutions that require unnecessary human effort). Are you saying that there's no need to fix neither of these because they don't bother you personally?
I think the larger picture is even more important. We need a package system that takes Internet distribution into account. I got word on IRC that dsss was that. It would be great to resurrect that, or start a new project with such goals. Andrei
Jan 18 2011
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-01-19 06:55, Andrei Alexandrescu wrote:
 On 1/18/11 11:37 PM, Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 22:17:08 +0200, Walter Bright
 <newshound2 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 IMO, sticking to the C-ism of "one object file at a time" and
 dependency on external build tools / makefiles is the biggest mistake
 DMD did in this regard.
You don't need such a tool with dmd until your project exceeds a certain size. Most of my little D projects' "build tool" is a one line script that looks like: dmd foo.d bar.d There's just no need to go farther than that.
Let's review the two problems discussed in this thread: 1) Not passing all modules to the compiler results in a nearly-incomprehensible (for some) linker error. 2) DMD's inability (or rather, unwillingness) to build the whole program when it's in the position to, which creates the dependency on external build tools (or solutions that require unnecessary human effort). Are you saying that there's no need to fix neither of these because they don't bother you personally?
I think the larger picture is even more important. We need a package system that takes Internet distribution into account. I got word on the IRC that dsss was that. It would be great to resurrect that, or start a new project with such goals. Andrei
I've been thinking for a while about doing a package system for D, basically gems but for D. But I first want to finish (finish as in somewhat usable and release it) another project I'm working on. -- /Jacob Carlborg
Jan 19 2011
prev sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
Andrei wrote:
  We need a package system that takes Internet distribution
 into account.
Do you think something like my simple http based system would work?

Fetch dependencies. Try to compile. If the linker complains about missing files, download them from http://somewebsite/somepath/filename, try again from the beginning.

There's no metadata, no version tracking, nothing like that, but I don't think such things are necessary. Worst case, just download the specific version you need for your project manually.
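To make that loop concrete, here is a rough sketch in D (using current Phobos names). It keys off dmd's "module X is in file 'X.d' which cannot be read" message rather than the linker, since that is easier to parse; the base URL is the placeholder above, and the error-message wording, the retry limit and the module-name-equals-file-name assumption are all illustrative guesses, not how any real tool works:

    import std.net.curl : download;
    import std.process : executeShell;
    import std.regex : matchAll, regex;
    import std.stdio : writeln;

    enum baseUrl = "http://somewebsite/somepath/";  // the placeholder above

    void main(string[] args)
    {
        string cmd = "dmd " ~ args[1];
        foreach (attempt; 0 .. 20)                  // give up eventually
        {
            auto r = executeShell(cmd);
            if (r.status == 0) return;              // compiled and linked fine

            // Assumption: dmd reports unknown imports roughly as
            //   Error: module foo is in file 'foo.d' which cannot be read
            auto misses = matchAll(r.output,
                regex(`is in file '([^']+)' which cannot be read`));
            if (misses.empty) { writeln(r.output); return; }  // some other error

            foreach (m; misses)
            {
                download(baseUrl ~ m[1], m[1]);     // fetch the missing module
                cmd ~= " " ~ m[1];                  // and compile it next round
            }
        }
    }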
Jan 19 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
On 19.01.2011 14:56, Adam Ruppe wrote:
 Andrei wrote:
   We need a package system that takes Internet distribution
 into account.
Do you think something like my simple http based system would work? Fetch dependencies. Try to compile. If the linker complains about missing files, download them from http://somewebsite/somepath/filename, try again from the beginning.
That'd suck horribly for bigger projects, and also when you've got a lot of dependencies, I guess.
 There's no metadata, no version tracking, nothing like that, but
 I don't think such things are necessary. Worst case, just download
 the specific version you need for your project manually.
I don't think it's such a big burden to list the dependencies for your project. Or, even better: combine both ideas: Automatically create and save a list of dependencies by trying (like you described). Then when you release your project, the dependency list is there and all dependencies can be fetched before building. Cheers, - Daniel
Jan 19 2011
parent Adam Ruppe <destructionator gmail.com> writes:
Daniel Gibson wrote:
 That'd suck horribly for bigger projects, and also when
 you've got a lot of dependencies, I guess
Maybe, especially if the dependencies have dependencies (it'd have to download one set before knowing what to look for for the next set), but that is a one-time cost - after downloading the files the first time, there's no need to download them again. It could probably cache the dependency list too, though I'm not sure the lag of checking it again is that bad anyway.

Though, IMO, the biggest advantage for a system like this is for small programs instead of big ones. If you're doing a big program, it isn't much of an added effort to include the deps or manually script the build/makefile. You probably have some compile switches anyway. For a little program though, it is somewhat annoying to have to list all that stuff manually. Writing out the command line might take longer than writing the program!
 Or, even better: combine both ideas: Automatically create and
 save a list of dependencies by trying (like you described).
Yea, that'd work too. It could possibly figure out whole packages to grab that way too, instead of doing individual files. It'd be a little extra effort, though.
Jan 19 2011
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/19/11 7:56 AM, Adam Ruppe wrote:
 Andrei wrote:
   We need a package system that takes Internet distribution
 into account.
Do you think something like my simple http based system would work? Fetch dependencies. Try to compile. If the linker complains about missing files, download them from http://somewebsite/somepath/filename, try again from the beginning. There's no metadata, no version tracking, nothing like that, but I don't think such things are necessary. Worst case, just download the specific version you need for your project manually.
I'm not sure. A friend of mine who is well versed in such issues suggested two sources of inspiration: apt-get and cpan. As a casual user of both, I can indeed say that they are doing a very good job. Andrei
Jan 19 2011
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-01-19 14:56, Adam Ruppe wrote:
 Andrei wrote:
   We need a package system that takes Internet distribution
 into account.
Do you think something like my simple http based system would work? Fetch dependencies. Try to compile. If the linker complains about missing files, download them from http://somewebsite/somepath/filename, try again from the beginning. There's no metadata, no version tracking, nothing like that, but I don't think such things are necessary. Worst case, just download the specific version you need for your project manually.
That doesn't sound like a good solution. I think you would have to manually specify the dependencies. -- /Jacob Carlborg
Jan 19 2011
prev sibling parent reply retard <re tard.com.invalid> writes:
Wed, 19 Jan 2011 13:56:17 +0000, Adam Ruppe wrote:

 Andrei wrote:
  We need a package system that takes Internet distribution
 into account.
Do you think something like my simple http based system would work? Fetch dependencies. Try to compile. If the linker complains about missing files, download them from http://somewebsite/somepath/filename, try again from the beginning. There's no metadata, no version tracking, nothing like that, but I don't think such things are necessary. Worst case, just download the specific version you need for your project manually.
A build tool without any kind of dependency versioning support is a complete failure. Especially if it also tries to handle external non-D dependencies. It basically makes supporting all libraries with rapid API changes quite impossible.
Jan 19 2011
parent reply Adam Ruppe <destructionator gmail.com> writes:
retard wrote:
 A build tool without any kind of dependency versioning support is a
 complete failure.
You just delete the old files and let it re-download them to update. If the old one is working for you, simply keep it.
Jan 19 2011
next sibling parent reply retard <re tard.com.invalid> writes:
Wed, 19 Jan 2011 19:41:47 +0000, Adam Ruppe wrote:

 retard wrote:
 A build tool without any kind of dependency versioning support is a
 complete failure.
You just delete the old files and let it re-download them to update. If the old one is working for you, simply keep it.
I meant that if the latest version 0.321 of the project 'foobar' depends on 'bazbaz 0.5.8.2', but versions 0.5.8.4 - 0.5.8.11 (API- but not ABI-compatible), 0.5.9 (mostly incompatible) and 0.6 - 0.9.12.3 (totally incompatible) also exist, the build fails badly when downloading the latest library. If you don't document the versions of the dependencies anywhere, it's almost impossible to build the project even manually.
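Even a minimal dependency record has to carry that constraint, not just a package name. A rough sketch of the kind of check a resolver needs before "just fetch the latest" can work (the dotted-version scheme and the simple low/high range are purely illustrative):

    import std.algorithm : map;
    import std.array : array, split;
    import std.conv : to;

    int[] parse(string v) { return v.split(".").map!(to!int).array; }

    int cmpVer(string a, string b)
    {
        auto x = parse(a), y = parse(b);
        foreach (i; 0 .. (x.length > y.length ? x.length : y.length))
        {
            int xi = i < x.length ? x[i] : 0;   // missing components count as 0
            int yi = i < y.length ? y[i] : 0;
            if (xi != yi) return xi < yi ? -1 : 1;
        }
        return 0;
    }

    bool satisfies(string have, string lo, string hi)
    {
        return cmpVer(have, lo) >= 0 && cmpVer(have, hi) <= 0;
    }

    unittest
    {
        // the example above: 0.5.8.2 .. 0.5.8.11 is acceptable, 0.6 is not
        assert(satisfies("0.5.8.4", "0.5.8.2", "0.5.8.11"));
        assert(!satisfies("0.6", "0.5.8.2", "0.5.8.11"));
    }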
Jan 19 2011
parent reply Adam Ruppe <destructionator gmail.com> writes:
 I meant that if the latest version 0.321 of the project 'foobar'
 depends on 'bazbaz 0.5.8.2'
Personally, I'd just prefer people to package their damned dependencies with their app.... But, a configuration file could fix that easily enough. Set one up like this:

    bazbaz = http://bazco.com/0.5.8.2/

Then it'd try to download http://bazco.com/0.5.8.2/bazbaz.module.d instead of the default site (which is presumably the latest version).

This approach also makes it easy to add third party servers and libraries, so you wouldn't be dependent on a central source for your code.

Here's a potential problem: what if bazbaz needs some specific version of something too? Maybe it could check for a config file on its server too, and use those directives when getting the library.
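A small sketch of how that lookup could work in D. The "name = baseurl" format, the default site and the "<name>.module.d" naming come from the description above; everything else, including the config file name, is made up:

    import std.stdio : File, writeln;
    import std.string : indexOf, strip;

    enum defaultSite = "http://somewebsite/somepath/";   // "the default site"

    string[string] readOverrides(string path)
    {
        string[string] overrides;
        foreach (line; File(path).byLineCopy)
        {
            auto eq = line.indexOf('=');
            if (eq < 0) continue;                        // skip anything malformed
            overrides[line[0 .. eq].strip] = line[eq + 1 .. $].strip;
        }
        return overrides;
    }

    string urlFor(string pkg, string[string] overrides)
    {
        auto p = pkg in overrides;
        return (p ? *p : defaultSite) ~ pkg ~ ".module.d";  // naming from above
    }

    void main()
    {
        auto o = readOverrides("deps.conf");  // holds e.g.  bazbaz = http://bazco.com/0.5.8.2/
        writeln(urlFor("bazbaz", o));         // -> http://bazco.com/0.5.8.2/bazbaz.module.d
    }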
Jan 19 2011
parent reply retard <re tard.com.invalid> writes:
Wed, 19 Jan 2011 20:01:28 +0000, Adam Ruppe wrote:

 I meant that if the latest version 0.321 of the project 'foobar'
 depends on 'bazbaz 0.5.8.2'
Personally, I'd just prefer people to package their damned dependencies with their app.... But, a configuration file could fix that easily enough. Set one up like this: bazbaz = http://bazco.com/0.5.8.2/ Then it'd try to download http://bazco.com/0.5.8.2/bazbaz.module.d instead of the default site (which is presumably the latest version). This approach also makes it easy to add third party servers and libraries, so you wouldn't be dependent on a central source for your code. Here's a potential problem: what if bazbaz needs some specific version of something too? Maybe it could check for a config file on its server too, and use those directives when getting the library.
That's how it goes: you come up with more and more features once you spend some time THINKING about the possible functionality for such a tool. Instead of NIH, why don't you just study what the existing tools do and pick up all the relevant features? The reason there are so many open source tools doing exactly the same thing is that developers are too lazy to study the previous work and start writing code before common sense kicks in.
Jan 19 2011
parent Adam Ruppe <destructionator gmail.com> writes:
retard wrote:
 How it goes is you come up with more and more features if you spend
 sometime THINKING about the possible functionality for such a tool.
It, as written now, does everything I've ever wanted. If I try to do every possible function, it'll never be done. The question is what's trivially easy to automate, somewhat difficult to do by other means, and fairly useful. I'm not convinced building falls under that *at all*, much less every random edge case under the sun.
Jan 19 2011
prev sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wed, 19 Jan 2011 21:41:47 +0200, Adam Ruppe <destructionator gmail.com>  
wrote:

 retard wrote:
 A build tool without any kind of dependency versioning support is a
 complete failure.
You just delete the old files and let it re-download them to update. If the old one is working for you, simply keep it.
You're missing the point. You want to install package X (either directly or as a dependency for something else), which was written for a specific version of Y. Your tool will just download the latest version of Y and the whole thing crashes and burns. Someone posted this somewhere else in this thread, I believe it's quite relevant: http://cdsmith.wordpress.com/2011/01/16/haskells-own-dll-hell/ -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 19 2011
parent reply Adam Ruppe <destructionator gmail.com> writes:
Vladimir Panteleev wrote:
 Your tool will just download the latest version of Y and the
 whole thing crashes and burns.
My problem is I don't see how that'd happen in the first place. Who would distribute something they've never compiled? If they compiled it, it would have downloaded the other libs already, so any sane distribution *already* has the dependent libraries, making this whole thing moot. The build tool is meant to help the developer, not the user. If the user needs help, it means the developer didn't do his job properly. That said, the configuration file, as described in my last post, seems like it can solve this easily enough.
Jan 19 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Meh.

Just give us File access in CTFE and we'll be done talking about build
tools. Just run DMD on the thing and the app automagically tracks and
downloads all of its dependencies.

I'm kidding. But file access in CTFE would be so damn cool. :)
Jan 19 2011
parent reply Mafi <mafi example.org> writes:
Am 19.01.2011 21:22, schrieb Andrej Mitrovic:
 Meh.

 Just give us File access in CTFE and we'll be done talking about build
 tools. Just run DMD on the thing and the app automagically tracks and
 downloads all of its dependencies.

 Im kidding. But file access in CTFE would be so damn cool. :)
What about the alternative import:

    import("file.ext")  // compile-time string of the contents of file.ext

You can do for example:

    mixin(import("special.d"));  // C-style import/include
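A tiny self-contained example of that route (file names made up); note the catch raised in the reply below: the directory holding special.d has to be passed as a string import path with -J.

    // special.d -- plain text as far as the compiler is concerned
    enum greeting = "hello from special.d";

    // main.d
    import std.stdio;
    mixin(import("special.d"));   // textual include via a string import

    void main() { writeln(greeting); }

    // build with:  dmd -J. main.d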
Jan 19 2011
parent Jesse Phillips <jessekphillips+D gmail.com> writes:
Mafi Wrote:

 Am 19.01.2011 21:22, schrieb Andrej Mitrovic:
 Meh.

 Just give us File access in CTFE and we'll be done talking about build
 tools. Just run DMD on the thing and the app automagically tracks and
 downloads all of its dependencies.

 Im kidding. But file access in CTFE would be so damn cool. :)
What about the alternative import import("file.ext") //compile time string of the contents of file.ext You can do for example: mixin(import("special.d")); //c-style import/include
Then you have to add -J command line switches for the location of the importable files.
Jan 19 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-01-19 21:13, Adam Ruppe wrote:
 Vladimir Panteleev wrote:
 Your tool will just download the latest version of Y and the
 whole thing crashes and burns.
My problem is I don't see how that'd happen in the first place. Who would distribute something they've never compiled? If they compiled it, it would have downloaded the other libs already, so any sane distribution *already* has the dependent libraries, making this whole thing moot. The build tool is meant to help the developer, not the user. If the user needs help, it means the developer didn't do his job properly.
I would say it's for the user of the library. He only cares about the library he wants to use and not its dependencies.
 That said, the configuration file, as described in my last post,
 seems like it can solve this easily enough.
-- /Jacob Carlborg
Jan 20 2011
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/
It's a cute blog post. It suggests that it will be good to:

Getting Code:
1) Have a central repository for D modules that is easy to use for both submitters and users.
- D code in such a repository must Just Work.
- I must stress that having a shared community-wide style to write D code helps a lot when you want to use modules written by other people in your program. Otherwise your program looks like a patchwork of wildly different styles.
- The D language must be designed to help write code that works well on both 32 and 64 bit systems, helping to avoid the traps listed here: http://www.viva64.com/en/a/0065/
- Path-related problems must be minimized.
- Probably the D package system needs to be improved. Some Java people are even talking about introducing means to create superpackages. Some module system theory from ML-like languages may help here.

Figuring Out Code:
- The D compiler has to improve its error messages a lot. Error messages need to become polished and sharp. Linker errors are bad, they need to be avoided where possible (this means the compiler catches some errors before they become linker errors), and there should be a way to read about an error and its causes and solutions.

Writing Code:
- Interactive Console: it will be good to have something like this built into the D distribution. From the article:
Further, while I know there are people who like IDEs, for me they serve to
cripple the exploratory play of coding so that it’s about as fun as filling out
tax forms.<
In this regard I think D has to take two things into account:
- People today like to use modern IDEs, so the core of the language too needs to be designed to work well with IDEs. Currently D doesn't look very IDE-friendly by design.
- On the other hand I see it as a failure if the language requires a complex IDE to write code. I am able to write complex Python programs with no IDE; this means that Python is well designed.

One more thing: from a recent discussion with Walter about software engineering, it seems that computer languages are both a design tool and an engineering building material. Python is better than D for exploratory programming (http://en.wikipedia.org/wiki/Exploratory_programming ), or even to invent new algorithms and to explore new software ideas. D is probably better than Python for building larger software engineering systems. Lately Python has added several features to improve its "programming in the large" skills (decorators, Abstract Base Classes, optional annotations, etc); likewise I think D would enjoy some little handy features that help both exploratory programming and "programming in the small", like tuple unpacking syntax (http://d.puremagic.com/issues/show_bug.cgi?id=4579 ). There are features like named arguments (http://en.wikipedia.org/wiki/Parameter_%28computer_programming%29#Named_parameters ) that are useful for both little and large programs; named arguments are one of the things I'd still like in D.

Bye,
bearophile
Jan 18 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 12:10:25 +0200, bearophile <bearophileHUGS lycos.com>  
wrote:

 Walter:

 http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/
It's a cute blog post. It suggests that it will be good to: Getting Code: 1) Have a central repository for D modules that is easy to use for both submitters and users.
Forcing a code repository is bad. Let authors use anything that they're comfortable with. The "repository" must be nothing more than a database of metadata (general information about a package, and how to download it).
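As a rough illustration of how little such a metadata record needs to hold (the field names here are made up, not a proposal):

    // Illustrative only: roughly the minimum a metadata-only registry
    // would store per package; the code itself stays wherever the author hosts it.
    struct PackageInfo
    {
        string name;          // e.g. "bazbaz"
        string description;   // one-line summary for browsing/searching
        string homepage;      // the author's own site
        string downloadUrl;   // how to fetch it (tarball, VCS URL, ...)
        string[] versions;    // known release tags, newest last
    }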
 - D code in such repository must Just Work.
This is not practical. The only practical way is to put that responsibility on the authors, and to encourage forking and competition.
 - I must stress that having a shared community-wide style to write D  
 code helps a lot when you want to use in your program modules written by  
 other people. Otherwise your program looks like a patchwork of wildly  
 different styles.
I assume you mean naming conventions and not actual code style (indentation etc.)
 - Probably D the package system needs to be improved. Some Java people  
 are even talking about introducing means to create superpackages. Some  
 module system theory from ML-like languages may help here.
Why?
 Writing Code:
 - Interactive Console: it will be good to have sometime like this built  
 in the D distribution.
I don't think this is practical until someone writes a D interpreter. Have you ever seen an interactive console for a purely-compiled language?
 - People today like to use modern IDEs. So the core of the language too  
 needs be designed to work well with IDEs. Currently D doesn't look much  
 designed to be IDE-friendly.
How would DMD become even more IDE-friendly than it already is? What about -X?
 One more thing: from a recent discussion with Walter about software  
 engineering, it seems that computer languages are both a design tool and  
 engineering building material. Python is better than D for exploratory  
 programming (http://en.wikipedia.org/wiki/Exploratory_programming ), or  
 even to invent new algorithms and to explore new software ideas. D is  
 probably better than Python to build larger software engineering  
 systems. Lately Python has added several features to improve its  
 "programming in the large" skills (decorators, Abstract Base Classes,  
 optional annotations, etc), likewise I think D will enjoy some little  
 handy features that help both exploratory programming and "programming  
 in the small", like tuple unpacking syntax  
 (http://d.puremagic.com/issues/show_bug.cgi?id=4579 ). There are  
 features like named arguments  
 (http://en.wikipedia.org/wiki/Parameter_%28computer_programming
29#Named_parameters  
 ) that are useful for both little and large programs, named arguments  
 are one of the things I'd like in D still.
I have to agree that named arguments are awesome, they make the code much more readable and maintainable in many instances. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 18 2011
next sibling parent "Simen kjaeraas" <simen.kjaras gmail.com> writes:
Vladimir Panteleev <vladimir thecybershadow.net> wrote:

 - I must stress that having a shared community-wide style to write D  
 code helps a lot when you want to use in your program modules written  
 by other people. Otherwise your program looks like a patchwork of  
 wildly different styles.
I assume you mean naming conventions and not actual code style (indentation etc.)
Likely he meant more than that. At least such is the impression I've had before. I am not vehemently opposed to such an idea, and I definitely agree that naming conventions should be observed, but I have at times had the impression that bearophile wants all aspects of code to be controlled by such a coding style. -- Simen
Jan 18 2011
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Vladimir Panteleev:

 Forcing a code repository is bad.
In this case I was not suggesting to force things :-) But having a place to find reliable modules is very good.
 This is not practical.
It works in Python, Ruby and often in Perl too, so I don't agree.
 I assume you mean naming conventions and not actual code style (indentation
etc.)
I meant that it's better when D code written by different people looks similar, where possible. C/C++ programmers have too much freedom where freedom is not necessary. Reducing some of that useless freedom helps improve the code ecosystem.
 - Probably D the package system needs to be improved. Some Java people  
 are even talking about introducing means to create superpackages. Some  
 module system theory from ML-like languages may help here.
 Why?
- Currently D packages are not working well yet; there are bug reports on this.
- Something higher level than packages is useful when you build very large systems.
- Module system theory from ML-like languages shows many-years-old ideas that otherwise will need to be painfully re-invented, half-broken, by D language developers. Sometimes wasting three days reading saves you some years of pain.
 I don't think this is practical until someone writes a D interpreter.
The CTFE interpreter is already there :-)
 How would DMD become even more IDE-friendly that it already is?
- error messages that give the column number
- folding annotations?
- less usage of string mixins and more of delegates and normal D code
- more introspection
- etc.
 I have to agree that named arguments are awesome, they make the code much more
readable and maintainable in many instances.<
I have not yet written an enhancement request on this because until a few weeks ago I thought that named arguments mainly improve the usage of functions with many arguments, so they might encourage D programmers to create more functions like this one from the Windows API:

    HWND CreateWindow(LPCTSTR lpClassName, LPCTSTR lpWindowName, DWORD style,
                      int x, int y, int width, int height, HWND hWndParent,
                      HMENU hMenu, HANDLE hInstance, LPVOID lpParam);

But lately I have understood that this is not the whole truth: named arguments are useful even when your functions have just 3 arguments. They make code more readable in little script-like programs, and help avoid some mistakes in larger programs too.

Bye,
bearophile
Jan 18 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 13:27:56 +0200, bearophile <bearophileHUGS lycos.com>  
wrote:

 Vladimir Panteleev:

 Forcing a code repository is bad.
In this case I was not suggesting to force things :-) But having a place to find reliable modules is very good.
 This is not practical.
It works in Python, Ruby and often in Perl too, so I don't agree.
I think we have a misunderstanding, then? Who ensures that the modules "just work"? If someone breaks something, are they thrown out of The Holy Repository?
 I assume you mean naming conventions and not actual code style  
 (indentation etc.)
I meant that D code written by different people is better looking similar, where possible. C/C++ programmers have too much freedom where freedom is not necessary. Reducing some of such useless freedom helps improve the code ecosystem.
It also demotivates and alienates programmers.
 - Currently D packages are not working well yet, there are bug reports  
 on this.
 - Something higher level than packages is useful when you build very  
 large systems.
 - Module system theory from ML-like languages shows many years old ideas  
 that otherwise will need to be painfully re-invented half-broken by D  
 language developers. Sometimes wasting three days reading saves you some  
 years of pain.
I'm curious (not arguing), can you provide examples? I can't think of any drastic improvements to the package system.
 I don't think this is practical until someone writes a D interpreter.
CTFE interpter is already there :-)
So you think the subset of D that's CTFE-able is good enough to make an interactive console that's actually useful? -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 18 2011
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Vladimir Panteleev:

 I think we have a misunderstanding, then? Who ensures that the modules
 "just work"? If someone breaks something, are they thrown out of The Holy  
 Repository?
There is no single solution to such problems. It's a matter of creating rules and a lot of work to enforce them as the years pass. If you talk about Holy things you are pushing this discussion in a stupid direction.
 It also demotivates and alienates programmers.
Many programmers are able to understand the advantages of removing some unnecessary freedoms. Python has shown me that brace wars are not productive :-)
 I'm curious (not arguing), can you provide examples? I can't think of any
drastic
 improvements to the package system.
I was talking about fixing bugs, improving strength, maybe later adding super-packages, and generally taking a good look at the literature about the damn ML-style module systems and their theory.
 So you think the subset of D that's CTFE-able is good enough to make an
 interactive console that's actually useful?
The built-in interpreter needs some improvements in its memory management, and eventually it may support exceptions and some other missing things. Currently functions can't access global mutable state in the compile-time execution path, even though they don't need to be wholly pure. But in a REPL you may want to do almost everything, like mutating global variables, importing modules and opening a GUI window on the fly, etc.

So currently the D CTFE interpreter is not good enough for a console, but I think it's already better than nothing (I'd like right now a console able to run D code with the current limitations of the CTFE interpreter), it will be improved, and it may even be made more flexible so it's usable both for CTFE with pure-ish functions and, in a different modality, for the console. This allows a single interpreter to serve two purposes. Most modern video games are partially written in a scripting language, like Lua. So a third possible purpose is to allow run-time execution of code (so the program needs the compiler at run time too), avoiding the need for a Lua/Python/MiniD interpreter.

Bye,
bearophile
Jan 18 2011
parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 14:30:53 +0200, bearophile <bearophileHUGS lycos.com>  
wrote:

 Vladimir Panteleev:

 I think we have a misunderstanding, then? Who ensures that the modules
 "just work"? If someone breaks something, are they thrown out of The  
 Holy
 Repository?
There is no single solution to such problems. It's a matter of creating rules and lot of work to enforce them as years pass. If you talk about Holy things you are pushing this discussion toward a stupid direction.
If a single entity controls the inclusion of submissions into an important set, then there will inevitably be conflicts. Also I still have no idea what you meant when you said that Python, Ruby and Perl do it. AFAIK their repositories are open and anyone can submit their project.
 I'm curious (not arguing), can you provide examples? I can't think of  
 any drastic
 improvements to the package system.
I was talking about fixing bugs, improving strength, maybe later adding super-packages, and generally taking a good look at the literature about the damn ML-style module systems and their theory.
I meant examples of why this is useful for D. (Why are you damning the ML-style module systems?) -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 18 2011
prev sibling parent reply el muchacho <nicolas.janin gmail.com> writes:
On 18/01/2011 13:01, Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 13:27:56 +0200, bearophile
 <bearophileHUGS lycos.com> wrote:
 
 Vladimir Panteleev:
It also demotivates and alienates programmers.
I don't believe so. I've never seen any C++ programmer who has worked with other languages like Java complain about the Java naming conventions or the obligatory one class = one file. Never. On the contrary, I believe most of them, when going back to C++, try to follow the same conventions as much as possible.
Jan 29 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
On 29.01.2011 21:21, el muchacho wrote:
 On 18/01/2011 13:01, Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 13:27:56 +0200, bearophile
 <bearophileHUGS lycos.com>  wrote:

 Vladimir Panteleev:
It also demotivates and alienates programmers.
I don't believe so. I've never seen any C++ programmer who has worked on other languages like Java complain about the Java naming conventions or the obligatory one class = one file. Never. In the contrary, I believe most of them, when going back to C++, try to follow the same conventions as much as possible.
I often find one class = one file annoying. I haven't done much with C++, but some stuff with Java and D1. I mostly agree with Java's naming conventions, though. Cheers, - Daniel
Jan 29 2011
parent foobar <foo bar.com> writes:
Daniel Gibson Wrote:

 On 29.01.2011 21:21, el muchacho wrote:
 On 18/01/2011 13:01, Vladimir Panteleev wrote:
 On Tue, 18 Jan 2011 13:27:56 +0200, bearophile
 <bearophileHUGS lycos.com>  wrote:

 Vladimir Panteleev:
It also demotivates and alienates programmers.
I don't believe so. I've never seen any C++ programmer who has worked on other languages like Java complain about the Java naming conventions or the obligatory one class = one file. Never. In the contrary, I believe most of them, when going back to C++, try to follow the same conventions as much as possible.
I often find one class = one file annoying. I haven't done much with C++, but some stuff with Java and D1. I mostly agree with javas naming conventions, though. Cheers, - Daniel
I just wanted to point out that the accurate Java rule is one _public_ class per file. You can have more than one class in a file as long as only one is declared public. I dunno about your experience, but mine was that this is not a problem in practice, at least not for me. As usually said about this kind of thing, YMMV.
Jan 29 2011
prev sibling parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Vladimir Panteleev wrote:

 On Tue, 18 Jan 2011 12:10:25 +0200, bearophile <bearophileHUGS lycos.com>
 wrote:
 
 Walter:

 http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-
programming-language-good/
 It's a cute blog post. It suggests that it will be good to:

 Getting Code:
 1) Have a central repository for D modules that is easy to use for both
 submitters and users.
Forcing a code repository is bad. Let authors use anything that they're comfortable with. The "repository" must be nothing more than a database of metadata (general information about a package, and how to download it).
I'm pretty happy that my Fedora repositories are just a handful, most of which are set up out of the box. It's a big time saver, one of its best features. I would use / evaluate much less software if I had to read instructions and download each package manually.
 - D code in such repository must Just Work.
This is not practical. The only practical way is to put that responsibility on the authors, and to encourage forking and competition.
True, though one of the cool things Gregor did back in the day with dsss was to automagically run unittests for each package in the repository and publish the results. It wasn't perfect but it gave at least some indication.
Jan 18 2011
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn  
<lutger.blijdestijn gmail.com> wrote:

 I'm pretty happy that my Fedora repositories are just a handful, most of
 which are setup out of the box. It's a big time saver, one of it's best
 features. I would use / evaluate much less software if I had to read
 instructions and download each package manually.
I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic.
 True, though one of the cool things Gregor did back the days with dsss is
 automagically run unittests for each package in the repository and  
 publish
 the results. It wasn't perfect but gave at least some indication.
I think that idea is taken from CPAN. CPAN refuses to install the package if it fails unit tests (unless you force it to). -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 18 2011
parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Vladimir Panteleev wrote:

 On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn
 <lutger.blijdestijn gmail.com> wrote:
 
 I'm pretty happy that my Fedora repositories are just a handful, most of
 which are setup out of the box. It's a big time saver, one of it's best
 features. I would use / evaluate much less software if I had to read
 instructions and download each package manually.
I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic.
Why? It works quite well for Ruby as well as other languages.
Jan 18 2011
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tue, 18 Jan 2011 14:36:43 +0200, Lutger Blijdestijn  
<lutger.blijdestijn gmail.com> wrote:

 Vladimir Panteleev wrote:

 On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn
 <lutger.blijdestijn gmail.com> wrote:

 I'm pretty happy that my Fedora repositories are just a handful, most  
 of
 which are setup out of the box. It's a big time saver, one of it's best
 features. I would use / evaluate much less software if I had to read
 instructions and download each package manually.
I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic.
Why? It works quite well for Ruby as well as other languages.
Um? Maybe I don't know enough about RubyGems (I don't use Ruby but used it once or twice for a Ruby app) but AFAIK it isn't maintained by a group of people who select and package libraries from authors' web pages, but it is the authors who publish their libraries directly on RubyGems. -- Best regards, Vladimir mailto:vladimir thecybershadow.net
Jan 18 2011
parent Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Vladimir Panteleev wrote:

 On Tue, 18 Jan 2011 14:36:43 +0200, Lutger Blijdestijn
 <lutger.blijdestijn gmail.com> wrote:
 
 Vladimir Panteleev wrote:

 On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn
 <lutger.blijdestijn gmail.com> wrote:

 I'm pretty happy that my Fedora repositories are just a handful, most
 of
 which are setup out of the box. It's a big time saver, one of it's best
 features. I would use / evaluate much less software if I had to read
 instructions and download each package manually.
I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic.
Why? It works quite well for Ruby as well as other languages.
Um? Maybe I don't know enough about RubyGems (I don't use Ruby but used it once or twice for a Ruby app) but AFAIK it isn't maintained by a group of people who select and package libraries from authors' web pages, but it is the authors who publish their libraries directly on RubyGems.
Aha, I've been misunderstanding you all this time, thinking you were arguing against the very idea of standard repository and package *format*. Then I agree, I also prefer something more decentralized.
Jan 18 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/18/11 6:36 AM, Lutger Blijdestijn wrote:
 Vladimir Panteleev wrote:

 On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn
 <lutger.blijdestijn gmail.com>  wrote:

 I'm pretty happy that my Fedora repositories are just a handful, most of
 which are setup out of the box. It's a big time saver, one of it's best
 features. I would use / evaluate much less software if I had to read
 instructions and download each package manually.
I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic.
Why? It works quite well for Ruby as well as other languages.
Package management is something we really need to figure out for D. The question is, do we have an expert on board (apt-get architecture, cpan, rubygems...)? Andrei
Jan 18 2011
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-01-18 17:29, Andrei Alexandrescu wrote:
 On 1/18/11 6:36 AM, Lutger Blijdestijn wrote:
 Vladimir Panteleev wrote:

 On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn
 <lutger.blijdestijn gmail.com> wrote:

 I'm pretty happy that my Fedora repositories are just a handful,
 most of
 which are setup out of the box. It's a big time saver, one of it's best
 features. I would use / evaluate much less software if I had to read
 instructions and download each package manually.
I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic.
Why? It works quite well for Ruby as well as other languages.
Package management something we really need to figure out for D. Question is, do we have an expert on board (apt-get architecture, cpan, rubygems...)? Andrei
I'm not an expert but I've been thinking for a while about doing a package system for D, basically RubyGems but for D. But I first want to finish (finish as in somewhat usable and release it) another project I'm working on. -- /Jacob Carlborg
Jan 19 2011
parent reply Gour <gour atmarama.net> writes:
On Wed, 19 Jan 2011 14:07:27 +0100
Jacob Carlborg <doob me.com> wrote:

 I'm not an expert but I've been thinking for a while about doing a
 package system for D, basically RubyGems but for D.
Have you thought about waf, which already has some support for D as a build system and is intended to be a build framework? (http://waf-devel.blogspot.com/2010/12/make-your-own-build-system-with-waf.html) Sincerely, Gour -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA
Jan 19 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-01-19 14:39, Gour wrote:
 On Wed, 19 Jan 2011 14:07:27 +0100
 Jacob Carlborg<doob me.com>  wrote:

 I'm not an expert but I've been thinking for a while about doing a
 package system for D, basically RubyGems but for D.
Have you thought about waf (which already has some support for D as build system) and it is intended to be build framework? (http://waf-devel.blogspot.com/2010/12/make-your-own-build-system-with-waf.html) Sincerely, Gour
Never heard of it, I'll have a look. -- /Jacob Carlborg
Jan 19 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-01-19 18:44, Jacob Carlborg wrote:
 On 2011-01-19 14:39, Gour wrote:
 On Wed, 19 Jan 2011 14:07:27 +0100
 Jacob Carlborg<doob me.com> wrote:

 I'm not an expert but I've been thinking for a while about doing a
 package system for D, basically RubyGems but for D.
Have you thought about waf (which already has some support for D as build system) and it is intended to be build framework? (http://waf-devel.blogspot.com/2010/12/make-your-own-build-system-with-waf.html) Sincerely, Gour
Never heard of it, I'll have a look.
1. it uses python, yet another dependency
2. it seems complicated
Jan 19 2011
parent reply Gour <gour atmarama.net> writes:
On Wed, 19 Jan 2011 19:40:49 +0100
Jacob Carlborg <doob me.com> wrote:

 1. it uses python, yet another dependency
True, but it brings more features than e.g. cmake 'cause you have a full language at your disposal.
 2. it seems complicated
Well, build systems are complex... ;) Sincerely, Gour -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA
Jan 20 2011
next sibling parent reply Russel Winder <russel russel.org.uk> writes:
I missed a lot of this thread and coming in part way through may miss
lots of past nuances, or even major facts.

On Thu, 2011-01-20 at 10:19 +0100, Gour wrote:
 On Wed, 19 Jan 2011 19:40:49 +0100
 Jacob Carlborg <doob me.com> wrote:
 1. it uses python, yet another dependency
 True, but it brings more features over e.g. cmake 'cause you have full language on disposal.
Waf and SCons (both Python based) are top of the pile in the C/C++/Fortran/LaTeX build game, with CMake a far-back third and everything else failing to finish. In the Java/Scala/Groovy/Clojure build game Gradle beats Maven beats Gant beats Ant, for the reason that Groovy beats XML as a build specification language. Internal DSLs using dynamic languages just win in this game. (Though the Scala crew are trying to convince people that SBT, a build tool written such that you use Scala code to specify the build, is good. It is appealing a priori, but it has some really critical negative design issues.)
 2. it seems complicated
 Well, build systems are complex... ;)
Definitely. Well, at least for anything other than trivial projects anyway. The trick is to make it as easy as possible to specify the complexity comprehensibly. Make did this in 1978, but is not now the tool of choice. Autotools was an heroic attempt to do something based on Make. CMake likewise. SCons, Waf, and Gradle are currently the tools of choice. -- Russel. Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Jan 20 2011
parent reply Gour <gour atmarama.net> writes:
On Thu, 20 Jan 2011 10:13:00 +0000
Russel Winder <russel russel.org.uk> wrote:

 SCons, Waf, and Gradle are currently the tools of choice.
Gradle is (mostly) for Java-based projects, afaict? Sincerely, Gour -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA
Jan 20 2011
parent reply Russel Winder <russel russel.org.uk> writes:
On Thu, 2011-01-20 at 12:32 +0100, Gour wrote:
 On Thu, 20 Jan 2011 10:13:00 +0000
 Russel Winder <russel russel.org.uk> wrote:
 SCons, Waf, and Gradle are currently the tools of choice.
 Gradle is (mostly) for Java-based projects, afaict?
It is the case that there are two more or less distinct domains of build -- JVM-oriented, and everything else. There is though nothing stopping a single build system from trying to be more universal. Sadly every attempt to date has failed for one reason or another (not necessarily technical). Basically there seems to be a positive feedback loop in action keeping the two domains separate: the tools from one domain don't work well on the opposite domain and so no-one uses them there, so no evolution happens to improve things. In this particular case, Gradle has great support for everything JVM-related and no real support for C, C++, Fortran, etc. All attempts to raise the profile of the Ant C/C++ compilation tasks, which Gradle could use trivially, have come to nothing. -- Russel. Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Jan 20 2011
parent reply Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:
Russel Winder wrote:

 On Thu, 2011-01-20 at 12:32 +0100, Gour wrote:
 On Thu, 20 Jan 2011 10:13:00 +0000
 Russel Winder <russel russel.org.uk> wrote:
 
 SCons, Waf, and Gradle are currently the tools of choice.
Gradle is (mostly) for Java-based projects, afaict?
It is the case that there are two more or less distinct domains of build -- JVM-oriented, and everything else. There is though nothing stopping a single build system from trying to be more universal. Sadly every attempt to date has failed for one reason or another (not necessarily technical). Basically there seems to be a positive feedback loop in action keeping the two domains separate: basically the tools from one domain don't work well on the opposite domain and so no-one uses them there, so no evolution happens to improve things. In this particular case, Gradle has great support for everything JVM-related and no real support for C, C++, Fortran, etc. All attempts to raise the profile of the Ant C/C++ compilation tasks, which Gradle could use trivially, have come to nothing.
Do you have an opinion for the .NET world? I'm currently just using MSBuild, but know just enough to get it working. It sucks.
Jan 20 2011
parent Russel Winder <russel russel.org.uk> writes:
On Thu, 2011-01-20 at 19:24 +0100, Lutger Blijdestijn wrote:
[ . . . ]
 Do you have an opinion for the .NET world? I'm currently just using MSBuild,
 but know just enough to get it working. It sucks.
I thought .NET was dominated by NAnt -- I have no direct personal experience, so am "speaking" from a position of deep ignorance. SCons and Waf should both work. -- Russel. Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Jan 21 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-01-20 10:19, Gour wrote:
 On Wed, 19 Jan 2011 19:40:49 +0100
 Jacob Carlborg<doob me.com>  wrote:

 1. it uses python, yet another dependency
True, but it brings more features over e.g. cmake 'cause you have full language on disposal.
I would go with a tool that uses a dynamic language as a DSL. I'm assuming you can embed the dynamic language completely without the need for external dependencies.
 2. it seems complicated
Well, build systems are complex... ;) Sincerely, Gour
Hm, right. I was actually kind of thinking about a build tool, not a package system/tool. But it seemed complicated anyway; it should be possible to keep it quite simple. -- /Jacob Carlborg
Jan 20 2011
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 18/01/2011 16:29, Andrei Alexandrescu wrote:
 Package management something we really need to figure out for D.
 Question is, do we have an expert on board (apt-get architecture, cpan,
 rubygems...)?

 Andrei
I agree. Having worked on Eclipse a lot, which uses OSGi as the underlying package management system, I really stand by its usefulness. For larger projects it is chaos without it (more or less chaos depending on the particular situation). -- Bruno Medeiros - Software Engineer
Feb 04 2011
prev sibling next sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
Interestingly, my own experience with Ruby, a few years ago, was
almost 180 degrees opposite of the blogger's.

The two most frustrating aspects were documentation and deployment.
The documents were sparse and useless and deployment was the
hugest headache I've ever experienced, in great part due to Rubygems
not working properly!

They've probably improved it a lot since then, but it reinforced
my long-standing belief that third party libraries are, more often
than not, more trouble than they're worth anyway.
Jan 18 2011
parent Brad <brad.lanam.comp_nospam nospam_gmail.com> writes:
In digitalmars.D, you wrote:
 The two most frustrating aspects were documentation and deployment.
 The documents were sparse and useless and deployment was the
 hugest headache I've ever experienced, in great part due to Rubygems
 not working properly!

 They've probably improved it a lot since then, but it reinforced
 my long-standing belief that third party libraries are, more often
 than not, more trouble than they're worth anyway.
I only poked into RubyGems briefly and I had the same impression at the time. Perl's CPAN is much more mature.

Much of the time, I feel as you do about 3rd party libraries. They try to do too much, are inflexible and not customizable. But many of the perl packages on CPAN are written to address a single task, are flexible and easy to use. I use several and have my favorites. Others are not worth the trouble.

But this problem is going to happen to any system. Some of the packages are simply useless, poorly designed, too specific, not supported, out of date, etc. But other packages are well designed, well supported, work great. Some haven't changed in ages and work well.

I think counters can help -- how many downloads, indicating popularity. How many _recent_ downloads, or a histogram of downloads by month so the user can tell if the package is out of date. Don't like the rating systems much, but that's also a possibility.

An integrated bug database and forums similar to sourceforge would be very useful. You can check activity and see if the author of the package is active and keeps on top of problems.

-- Brad
Jan 20 2011
prev sibling next sibling parent reply Jim <bitcirkel yahoo.com> writes:
Jesse Phillips Wrote:
 It makes everything much clearer and creates a bunch of opportunities for
further development.
I don't see such benefit.
It's easier for the programmer to find the module if it shares the name with the file. Especially true when faced with other people's code, or code that's more than 6 months old, or just large projects. The same goes for packages and directories. The relationship is clear: each file defines a module. The natural thing would be to have them bear the same name.

It lets the compiler traverse dependencies by itself. This is good for the following reasons:

1) You don't need build tools, makefiles. Just "dmd myApp.d". Do you know how many build tools there are, each trying to do the same thing? They are at a disadvantage to the compiler because the compiler can do conditional compiling and generally understands the code better than other programs (a naive sketch of what an external tool has to do instead follows below). There's also extra work involved in keeping makefiles current. They are just like header files are for C/C++ -- an old solution.

2) The compiler can do more optimisation, inlining, reduction and refactoring. The compiler also knows which code interacts with other code and can use that information for cache-specific optimisations. Vladimir suggested it would open the door to new language features (like virtual templated methods). Generally I think it would be good for templates, mixins and the like. In the TDPL book Andrei hints at future AST-introspection functionality. Surely this would benefit from access to the source.

It would simplify error messages now caused by the linker. Names within a program wouldn't need to be mangled. More information about the caller / callee would also be available at the point of error.

It would also be of great help to third-party developers. Static code analysers (for performance, correctness, bugs, documentation etc), package managers... They could all benefit from the simpler structure. They wouldn't have to guess what code is used or built (by matching names themselves or trying to interpret makefiles).

It would be easier for novices. The simpler it is to build a program the better.

It could be good for the community of D programmers. Download some code and it would fit right in. Naming is a little bit of a Wild West now. Standardised naming makes it easier to sort, structure and reuse code.
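Here is the deliberately naive sketch promised above of what an external tool has to do: scan sources for plain "import foo.bar;" lines and map them to files. It misses version blocks, static ifs and string mixins -- exactly the cases only the compiler resolves correctly. All names and the module-to-path rule are illustrative:

    import std.array : join, replace;
    import std.file : exists, readText;
    import std.regex : matchAll, regex;
    import std.stdio : writeln;

    void collect(string file, ref bool[string] seen)
    {
        if (file in seen || !file.exists) return;   // skip what we've seen,
        seen[file] = true;                          // and anything not local (Phobos etc.)
        foreach (m; matchAll(readText(file), regex(`import\s+([\w.]+)`)))
            collect(m[1].replace(".", "/") ~ ".d", seen);   // module a.b -> a/b.d
    }

    void main(string[] args)
    {
        bool[string] seen;
        collect(args[1], seen);
        writeln("dmd ", seen.byKey.join(" "));      // the command such a tool would run
    }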
 I'd create a branch (in git or mercury) for that task, it's quick and dirt
cheap, very easy to switch to and from, and you get the diff for free.
Right, using such tools is great. But what if you are like me and don't have a dev environment set up for Phobos, but I want to fix some module? Do I have to set up such an environment, or throw the file in a folder std/ and just do some work on it?
You have compilers, linkers and editors but no version control system? They are generally very easy to install and use. When you have used one for a while you wonder how you ever got along without it before. In git for example, creating a feature branch is one command (or two clicks with a gui). There you can tinker and experiment all you want without causing any trouble with other branches. I usually create a new branch for every new feature. I do some coding on one, switch to another branch and fix something else. They are completely separate. When they are done you merge them into your mainline.
Jan 18 2011
parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Jim Wrote:

 Jesse Phillips Wrote:
 It makes everything much clearer and creates a bunch of opportunities for
further development.
I don't see such benefit.
It's easier for the programmer to find the module if it shares the name with the file. Especially true when faced with other people's code, or code that's more than 6 months old, or just large projects. The same goes for packages and directories. The relationship is clear: each file defines a module. The natural thing would be to have them bear the same name.
Like I said, I haven't seen this as an issue. People don't go around naming their files completely differently from the module name. There are just too many benefits in keeping them the same to do otherwise; I believe the include path makes use of this.
 It lets the compiler traverse dependencies by itself. This is good for the
following reasons:
 1) You don't need build tools, makefiles. Just "dmd myApp.d". Do you know how
many build tools there are, each trying to do the same thing. They are at
disadvantage to the compiler because the compiler can do conditional compiling
and generally understands the code better than other programs. There's also
extra work involved in keeping makefiles current. They are just like header
files are for C/C++ -- an old solution.
This is what the "Open Scalable Language Toolchains" talk is about http://vimeo.com/16069687 The idea is that the compile has the job of compiling the program and providing information about the program to allow other tools to make use of the information without their own lex/parser/analysis work. Meaning the compile should not have an advantage. Lastly Walter has completely different reasons for not wanting to have "auto find" in the compiler. It will become yet another magic black box that will still confuse people when it fails.
 2) The compiler can do more optimisation, inlining, reduction and refactoring.
The compiler also knows which code interacts with other code and can use that
information for cache-specific optimisations. Vladimir suggested it would open
the door to new language features (like virtual templated methods). Generally I
think it would be good for templates, mixins and the like. In the TDPL book
Andrei makes hints about future AST-introspection functionality. Surely access
to the source would benefit from this.
No, you do not get optimization benefits from how the files are stored on the disk. What Vladimir was talking about was the restriction that the compilation unit was the module. DMD already provides many of these benefits if you just list all the files you want compiled on the command line.
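For instance (file names are made up):

dmd -O -inline main.d util.d net.d

One invocation puts all three modules into the same compilation, so the optimizer sees them together, without changing anything about how the files are laid out on disk.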
 It would simplify error messages now caused by the linker. Names within a
program wouldn't need to be mangled. More information about the caller / callee
would also be available at the point of error.
Nope, because the module you are looking for could be in a library somewhere, and if you forget to point the linker to it, you'll still get linker errors.
 It would also be of great help to third-party developers. Static code
analysers (for performance, correctness, bugs, documentation etc), packet
managers... They could all benefit from the simpler structure. They wouldn't
have to guess what code is used or built (by matching names themselves or
trying to interpret makefiles).
As I said, have all these tools assume such a structure. If people aren't already using the layout, they will if they want to use these tools. I believe that is how using the import path already works in dmd.
 It would be easier for novices. The simpler it is to build a program the
better. It could be good for the community of D programmers. Download some code
and it would fit right in. Naming is a little bit of a Wild West now.
Standardised naming makes it easier to sort, structure and reuse code.
rdmd is distributed with the compiler... do you have examples of poorly chosen module names, which have caused issue?
 Right, using such tools is great. But what if you are like me and don't have a
dev environment set up for Phobos, but I want to fix some module? Do I have to
setup such an environment or through the file in a folder std/ just do some
work on it?
You have compilers, linkers and editors but no version control system? They are generally very easy to install and use. When you have used one for a while you wonder how you ever got along without it before. In git for example, creating a feature branch is one command (or two clicks with a gui). There you can tinker and experiment all you want without causing any trouble with other branches. I usually create a new branch for every new feature. I do some coding on one, switch to another branch and fix something else. They are completely separate. When they are done you merge them into your mainline.
No no no, having git installed on the system is completely different from having a dev environment for Phobos. You'd have to download all the Phobos files and Druntime into their proper locations, and deal with any other dependencies/issues you run into when you try to build it. Then you would need a dmd installation which used your custom test build of Phobos.
Jan 18 2011
next sibling parent reply Jim <bitcirkel yahoo.com> writes:
Jesse Phillips Wrote:
 This is what the "Open Scalable Language Toolchains" talk is about
 http://vimeo.com/16069687
 
The idea is that the compiler has the job of compiling the program and
providing information about the program to allow other tools to make use of the
information without their own lex/parser/analysis work. Meaning the compiler
should not have an advantage.
Yes, I like that idea very much. I wouldn't mind having a D toolchain like that. Seems modular and nice. The point is not needing to manually write makefiles, or having different and conflicting ways to build source code. The D language itself is all that is needed for declaring dependencies by using import statements, and the compiler could very well traverse these files along the way.
 Lastly Walter has completely different reasons for not wanting to have "auto
find" in the compiler. It will become yet another magic black box that will
still confuse people when it fails.
I'm not talking about any magic at all. Just plain D semantics. Make use of it.
 2) The compiler can do more optimisation, inlining, reduction and refactoring.
The compiler also knows which code interacts with other code and can use that
information for cache-specific optimisations. Vladimir suggested it would open
the door to new language features (like virtual templated methods). Generally I
think it would be good for templates, mixins and the like. In the TDPL book
Andrei makes hints about future AST-introspection functionality. Surely access
to the source would benefit from this.
No, you do not get optimization benefits from how the files are stored on the disk. What Vladimir was talking about was the restriction that compilation unit was the module. DMD already provides many of these benefits if you just list all the files you want compiled on the command line.
I never claimed that file storage was an optimisation. The compiler can optimise better by seeing more source code (or a greater AST if you will) at compile time. Inlining, for example, can only occur within a compilation unit. I'm arguing that a file is not the optimal compilation unit. Computers today have enough memory to hold the entire program in memory while doing the compilation. It should be up to the compiler to make the best of it. If you need to manually list the files then, well, you do unnecessary labour.
 It would simplify error messages now caused by the linker. Names within a
program wouldn't need to be mangled. More information about the caller / callee
would also be available at the point of error.
Nope, because the module you are looking for could be in a library somewhere, and if you forget to point the linker to it, you'll still get linker errors.
I didn't say "no linking errors". I said simpler errors messages, as in easier to understand. It could, for example, say where you tried to access a particular function: file and line number. A linker alone cannot say that. Also, you wouldn't have to tell the linker anything other than where your libraries resides. It would find the correct ones based on their modules' names.
 It would also be of great help to third-party developers. Static code
analysers (for performance, correctness, bugs, documentation etc), packet
managers... They could all benefit from the simpler structure. They wouldn't
have to guess what code is used or built (by matching names themselves or
trying to interpret makefiles).
As I said, have all these tools assume such a structure. If people aren't already using the layout, they will if they want to use these tools. I believe that is how using the import path already works in dmd.
Standards are better than assumptions.
 No no no, having git installed on the system is completely different from have
a dev environment for Phobos. You'd have to download all the Phobos files and
Druntime into their proper location and any other dependencies/issues you run
into when you try and build it. Then you would need a dmd installation which
used your custom test build of Phobos.
It seems I misunderstood you. Of course you have to download all dependencies before you build something. Otherwise it wouldn't be a dependency, would it? How many megabytes are these, 15? Frankly, I don't see the problem. What is it really that you don't like?

I'm trying to argue for less manual dependency juggling by using the specification that is already there: your source code. The second thing, I guess, is not being overly restrictive about files as compilation units. It made sense long ago, but today it is arbitrary. Remember, C/C++ even compels you to declare your symbols in a particular order -- probably because of how the parsing algorithm was conceived at the time. It's unfortunate when that becomes language specification.
Jan 18 2011
parent Adam Ruppe <destructionator gmail.com> writes:
Jim wrote:
 I never claimed that file storage was an optimisation. The compiler
 can optimise better by seeing more source code (or a greater AST if
 you will) at compile time. Inlining, for example, can only occur
 within a compilation unit. I'm arguing that a file is not the optimal
 compilation unit. Computers today have enough memory to hold the
 entire program in memory while doing the compilation. It should be up
 to the compiler to make the best of it.
Note that dmd already does this, if you pass all the files on the command line at once. My new build.d program fetches the dependency list from dmd, then compiles by passing them all at once - it's a really simple program, just adding the dependencies onto the end of the command line (and trying to download them if they don't exist). So then you wouldn't have to do it manually either.
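A rough sketch of that flow (this is not the actual build.d; it assumes dmd's -deps switch):

dmd -o- -deps=deps.txt myapp.d      (no object file is written; the import graph goes to deps.txt)
dmd myapp.d <every local .d file listed in deps.txt>

The first step is roughly what rdmd and build.d do behind the scenes; the second is the single whole-program compile a human would otherwise have to type by hand.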
Jan 19 2011
prev sibling parent spir <denis.spir gmail.com> writes:
On 01/19/2011 05:16 AM, Jesse Phillips wrote:

 This is what the "Open Scalable Language Toolchains" talk is about
 http://vimeo.com/16069687

The idea is that the compiler has the job of compiling the program and
providing information about the program to allow other tools to make use of the information without their own lex/parser/analysis work. Meaning the compiler should not have an advantage.

Let us call "decoder" the part of a compiler that scans, parses, and "semanticises" source code; and (syntactic/semantic) tree the resulting representation of the code. What I dream of is a decoder that (on demand) spits out a data-description module of this tree. I mean a source code module --ideally in the source language itself, here D-- that can be imported by any other tool needing the said tree as input.

[D is not that bad as a data-description language, thanks to its nice literal notations (not comparable to Lua, indeed, but Lua was designed for that). It's also easy in D, I guess, to define proper types for the various kinds of nodes the tree would hold. D's main obstacle AFAIK is that the data description must all be put in the module's "static this" clause (for some reason I haven't yet understood); but we can survive that.]

Denis
_________________
vita es estrany
spir.wikidot.com
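For illustration, a tiny hand-written module of the kind described might look like this (the node type and layout are hypothetical; no compiler emits this today):

module myprog.asttree;

// one generic node kind keeps the sketch short
struct Node
{
    string kind;      // "Module", "Function", "Call", ...
    string name;
    Node[] children;
}

Node programTree;

// the data description goes in the module constructor, as noted above
static this()
{
    programTree = Node("Module", "myprog.main",
    [
        Node("Function", "main",
        [
            Node("Call", "writeln", [])
        ])
    ]);
}

Any tool could then just "import myprog.asttree;" and walk programTree.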
Jan 19 2011
prev sibling next sibling parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Adam Ruppe Wrote:

 Vladimir Panteleev wrote:
 Your tool will just download the latest version of Y and the
 whole thing crashes and burns.
My problem is I don't see how that'd happen in the first place. Who would distribute something they've never compiled? If they compiled it, it would have downloaded the other libs already, so any sane distribution *already* has the dependent libraries, making this whole thing moot.
But if they haven't done any development on it for the last year, but the library it depends on has...
 The build tool is meant to help the developer, not the user. If
 the user needs help, it means the developer didn't do his job
 properly.
Isn't the developer the user?
 That said, the configuration file, as described in my last post,
 seems like it can solve this easily enough.
Jan 19 2011
parent reply Adam Ruppe <destructionator gmail.com> writes:
Jesse Phillips wrote:
 But if they haven't done any development on it for the last year, but
 the library it depends on has...
Unless you give library authors write access to your hard drive, it doesn't matter. They can't make your old, saved version magically disappear. If you then distribute that saved version with the rest of your app or library, it Just Works for the customer. I think I'm now at the point where I've spent more time posting to this thread than I've ever spent maintaining makefiles! If I have a different perspective on this from everyone else, there's really no point talking about it further. We might as well just go our separate ways.
Jan 19 2011
parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Adam Ruppe Wrote:

 Jesse Phillips wrote:
 But if they haven't done any development on it for the last year, but
 the library it depends on has...
Unless you give library authors write access to your hard drive, it doesn't matter. They can't make your old, saved version magically disappear. If you then distribute that saved version with the rest of your app or library, it Just Works for the customer.
You can have the author release packaged libraries for developers to use, and the author should do this. So this begs the question: what is the repository for? Why is the tool going out to different URLs and downloading files when you are supposed to use the pre-built lib?

I believe the reason for pushing a system like rubygems is not so that the original author can compile their program; it is already set up for them. The purpose is to make it easier on those that want to start contributing to the project, those that are using the library and need to fix a bug, and those that want to take over the project long after it has died. Having a standard way in which things are built, having easy access to all relevant libraries, and knowing you can find most of what you need in one place. Those are the reasons.
 I think I'm now at the point where I've spent more time
 posting to this thread than I've ever spent maintaining makefiles!
 
 If I have a different perspective on this from everyone else, there's
 really no point talking about it further. We might as well just
 go our separate ways.
Jan 19 2011
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Jesse Phillips wrote:
 You can have the author release packaged libraries for developers
 to use and the author should do this. So this begs the question of
 what is the repository for?
It's so you have a variety of libraries available at once with minimal hassle when you are originally writing something. I really don't care about those libraries' implementation details. I just want it so when I type "import something.lib;" in my program it actually works. If something.lib's author wants to use other.thing, great, I just don't want to think about it anymore than I think about his private classes or functions.
 Why is the tool going out to different URLs and downloading files
 when you are supposed to use the pre-built lib?
The second level of downloads is an implementation detail, aiming to provide the illusion of a pre-built lib when the author didn't actually provide one.

The first level of downloads (the things you actually import in your own program) is there for your own convenience. It's supposed to make this huge ecosystem of third party libraries available with the same kind of ease you get with Phobos. You just write "import std.coolness;" and it works. No downloading anything, no adding things to the command line.

I want third party modules to be equally available. But, just like you don't care if Phobos uses zlib 1.3.x internally or whatever, you shouldn't care if third party modules do either. Phobos comes with "batteries included"; so should everything else.
 Having a standard way for which things are built, having easy
 access to all relevant libraries, and knowing you can find most
 of what you need in one place. Those are the reasons.
We agree here. The difference is I'm only interested in the top most layer - the modules I import myself. I couldn't care less about what those other modules import. In my mind, they are just like private functions - not my problem.
Jan 19 2011
next sibling parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Adam D. Ruppe Wrote:

 Jesse Phillips wrote:
 You can have the author release packaged libraries for developers
 to use and the author should do this. So this begs the question of
 what is the repository for?
It's so you have a variety of libraries available at once with minimal hassle when you are originally writing something. I really don't care about those libraries' implementation details. I just want it so when I type "import something.lib;" in my program it actually works. If something.lib's author wants to use other.thing, great, I just don't want to think about it anymore than I think about his private classes or functions.
Perfect, so what you want is a repository of pre-built packages which have all the bells and whistles you need.
 Why is the tool going out to different URLs and downloading files
 when you are supposed to use the pre-built lib?
The second level of downloads is an implementation detail, aiming to provide the illusion of a pre-built lib when the author didn't actually provide one.
And this is where things stop "just working." You are providing an illusion that isn't there. And it is one that is very fragile. You can of course ignore the issues and say the software must be maintained to support newer versions of the libraries if the feature is to work. I don't have an answer as to whether this is acceptable. The only way to find out is if a successful build tool takes this approach.
 The first level of downloads (the things you actually import in
 your own program) are there for your own convenience. It's
 supposed to make this huge ecosystem of third party libraries
 available with the same kind of ease you get with Phobos. You
 just write "import std.coolness;" and it works. No downloading
 anything, no adding things to the command line.
Right, and since the files are going to be downloaded you have to decide what you want to do with them. Pollute the programmer's project with all the third-party libraries he imported? Save them to a location on the import path? Both of these are reasonable, but both have drawbacks. It is more common to place third-party libraries into a reusable location.
 I want third party modules to be equally available. But, just
 like you don't care if Phobos uses zlib 1.3.x internally or
 whatever, you shouldn't care if third party modules do either.
 Phobos comes with "batteries included"; so should everything else.
Yes this is generally how Windows software is developed. Linux on the other hand has had a long history of making libraries available to all those who want it. Of course it is usually only something to worry about for much larger libraries, at which point we have package managers from our OS (well some of us do).
 Having a standard way for which things are built, having easy
 access to all relevant libraries, and knowing you can find most
 of what you need in one place. Those are the reasons.
We agree here. The difference is I'm only interested in the top most layer - the modules I import myself. I couldn't care less about what those other modules import. In my mind, they are just like private functions - not my problem.
Right, unless you are making the software that is trying to hide those details from you. I think there is a fine line between the person using the libraries and the person that is adding libraries for other people. We want the barrier to entry of both to be as minimal as possible.

So the question I want to put forth is: what is that library writer required to do to commit his work for use by this magical tool? Should it be compilable without any command line flags? Should it contain all relevant code so there aren't external dependencies? Does it need to be packaged in a tarball? Can it reside in a source repository? Does it need to be specified as an application or library?

Creating such a system around your simple needs just means it won't be used when things become less simple. Creating a complex system means it won't be used if it is too hard to use or it doesn't work. DSSS seemed to provide a great amount of simplicity and power... the problem is that it didn't always work.
Jan 19 2011
parent reply Gary Whatmore <no spam.sp> writes:
Jesse Phillips Wrote:

 DSSS seemed to provide a great amount of simplicity and power... the problem
is that it didn't always work.
I always wondered what happened to that boy. He had impressive coding skills and lots of pragmatic common sense. There was at least one weakness in his persona if he left D: he was dumb enough not to realize the capability our language has now. That's just plainly the stupidest thing to do wrt programming languages.
Jan 19 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/19/11 9:04 PM, Gary Whatmore wrote:
 Jesse Phillips Wrote:

 DSSS seemed to provide a great amount of simplicity and power... the problem
is that it didn't always work.
I always wondered what happened to that boy. He had impressive coding skills and lots of pragmatic common sense. There was at least one weakness in his persona if he left D, he was dumb enough to not realize the capability our language has now. That's just plainly stupidest thing to do wrt programming languages.
Gregor left when he started grad school, and grad school asks for all your time and then some more. http://www.cs.purdue.edu/homes/gkrichar/ Andrei
Jan 19 2011
prev sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 20.01.2011 00:54, schrieb Adam D. Ruppe:
 Jesse Phillips wrote:
 You can have the author release packaged libraries for developers
 to use and the author should do this. So this begs the question of
 what is the repository for?
It's so you have a variety of libraries available at once with minimal hassle when you are originally writing something. I really don't care about those libraries' implementation details. I just want it so when I type "import something.lib;" in my program it actually works. If something.lib's author wants to use other.thing, great, I just don't want to think about it anymore than I think about his private classes or functions.
 Why is the tool going out to different URLs and downloading files
 when you are supposed to use the pre-built lib?
Pre-built libs aren't all that useful anyway, for several reasons:

1. Templates
2. different operating systems: there would have to be pre-built libs for Windows, OSX, Linux and FreeBSD (if not even more)
3. different architectures: there would have to be pre-built libs for x86, AMD64 and, thanks to GDC and LDC, for about any platform supported by Linux..

Just provide source, so people can build their own libs from it or just compile the sources like their own source files. This can still be done automagically by the build-tool/package management.

Cheers,
- Daniel
Jan 20 2011
next sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
 Pre-built libs aren't all that useful anyway, for several reasons:
By "pre-built" I mean all the source is in one place, so the compile Just Works, not necessarily being pre-compiled. So if you downloaded mylib.zip, every file it needs is in there. No need to separately hunt down random.garbage.0.5.3.2.tar.xz as well. Bascially, the developer can compile it on his machine. He sends me the files he used to build it all in one place. That way, it is guaranteed to work - everything needed is right there.
Jan 20 2011
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 20.01.2011 14:48, schrieb Adam Ruppe:
 Pre-built libs aren't all that useful anyway, for several reasons:
By "pre-built" I mean all the source is in one place, so the compile Just Works, not necessarily being pre-compiled. So if you downloaded mylib.zip, every file it needs is in there. No need to separately hunt down random.garbage.0.5.3.2.tar.xz as well. Bascially, the developer can compile it on his machine. He sends me the files he used to build it all in one place. That way, it is guaranteed to work - everything needed is right there.
Ah, ok. I'd prefer a dependency-system though, so if mylib needs random.garbage.0.5.etc it should fetch it from the repository as well. So when there's a non-breaking security update to random.garbage, mylib automatically gets it upon rebuild. However, when there are breaking changes, random.garbage needs a new version (e.g. 0.6.etc instead of 0.5.etc).
Jan 20 2011
parent reply Adam Ruppe <destructionator gmail.com> writes:
 However, when there are breaking changes, random.garbage needs a new
version (e.g. 0.6.etc instead of 0.5.etc).

IMO the best way to do that would be to get everyone in the habit of including the version in their modules.

module random.garbage.0.6;

import random.garbage.0.6;

That way, it is explicit, in the code itself, what you need and what the library provides. This also lets the two versions reside next to each other. Say I import cool.stuff.1.0. cool.stuff imports useless.crap.0.4. But I want useless.crap.1.0 in my main app. If they were both called just plain old "useless.crap", the two imports will probably break something when we build the whole application. But if the version is part of the module name, we can both import what we need and use it at the same time.

There would be no generic "import useless.crap" module provided that actually does work. At best, it'd say pragma(msg, "useless.crap.1.0 is the most recent, please use it"); If the thing without a version annotation actually compiles, it'll break eventually anyway, so forcing something there ensures long-term usefulness.

The bug fix version wouldn't need to be in the module name, since those are (ideally) forward and backward compatible. Unit tests could probably confirm that automatically.
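A side note: D package and module name components must be valid identifiers, so the version digits would need some spelling such as underscores. A minimal sketch under that assumption (library names are invented):

// the library states its version in its own module name
module cool.stuff.v1_0;

// the application pins exactly what it wants
import cool.stuff.v1_0;
import useless.crap.v1_0;   // even though cool.stuff internally imports useless.crap.v0_4

Since the module names differ, the mangled symbols differ as well, so both versions of useless.crap can coexist in one binary.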
Jan 20 2011
parent reply so <so so.do> writes:
On Thu, 20 Jan 2011 16:30:40 +0200, Adam Ruppe <destructionator gmail.com>  
wrote:

 IMO the best way to do that would be to get everyone in the habit
 of including the version in their modules.

 module random.garbage.0.6;

 import random.garbage.0.6;
Even better, we could enforce this only on module writers:

module random.garbage.0.6;

import random.garbage;

When you compile, you have to provide a path anyhow; it's less hostile to the user and you don't have to change the code.
Jan 20 2011
parent reply Adam Ruppe <destructionator gmail.com> writes:
 When you compile, you have to provide a path anyhow, less hostile to
 user and you don't have to change the code.
One of the things implicit in the thread now is removing the need to provide a path - the compiler can (usually) figure it out on its own. Try dmd -v and search for import lines.

But requiring it on the user side just makes sense if versioning is important. Your program won't compile with a different version - you aren't importing a generic thing, you're depending on something specific. It should be explicit.

(Btw, this is the big failure of Linux dynamic libraries. They started with a decent idea of having version numbers in the filename. But then they ruined it by having generic symlinks that people can use. They start using libwhatever.so when they really wanted libwhatever.so.4.2. It's a symlink on their system, so Works for Me, but if they give that binary to someone with a different symlink, it won't work. Gah.)
Jan 20 2011
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 20 Jan 2011 09:58:17 -0500, Adam Ruppe <destructionator gmail.com>  
wrote:

 When you compile, you have to provide a path anyhow, less hostile to
 user and you don't have to change the code.
One of the things implicit in the thread now is removing the need to provide a path - the compiler can (usually) figure it out on its own. Try dmd -v and search for import lines. But requiring it on the user side just makes sense if versioning is important. Your program won't compile with a different version - you aren't importing a generic thing, you're depending on something specific. It should be explicit. (Btw, this is the big failure of Linux dynamic libraries. They started with a decent idea of having version numbers in the filename. But then they ruined it by having generic symlinks that people can use. They start using libwhatever.so when they really wanted libwhatever.so.4.2. It's a symlink on their system, so Works for Me, but if they give that binary to someone with a different symlink, it won't work. Gah.)
Hm... I thought the symlink was meant to point to binary-compatible bug-fix releases. So for example, if you need libwhatever.so.4.2, you have a symlink called libwhatever.so.4 which points to the latest point revision that is binary compatible with all 4.x versions. I think you still simply link with -lwhatever, but the binary requires the .so.4 version.

I have seen a lot of libs where the symlink version seems to have nothing to do with the linked-to version (e.g. /lib/libc.so.6 -> libc-2.12.1.so), that doesn't really help matters.

Given that almost all Linux releases are compiled from source, it's quite possible that one OS' libwhatever.so.4 is not compiled exactly the same as your libwhatever.so.4 (and might be binary incompatible). This is definitely an issue among linuxen.

-Steve
Jan 20 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-01-20 15:58, Adam Ruppe wrote:
 When you compile, you have to provide a path anyhow, less hostile to
 user and you don't have to change the code.
One of the things implicit in the thread now is removing the need to provide a path - the compiler can (usually) figure it out on its own. Try dmd -v and search for import lines. But requiring it on the user side just makes sense if versioning is important. Your program won't compile with a different version - you aren't importing a generic thing, you're depending on something specific. It should be explicit. (Btw, this is the big failure of Linux dynamic libraries. They started with a decent idea of having version numbers in the filename. But then they ruined it by having generic symlinks that people can use. They start using libwhatever.so when they really wanted libwhatever.so.4.2. It's a symlink on their system, so Works for Me, but if they give that binary to someone with a different symlink, it won't work. Gah.)
This is where the "bundle" tool (often used together with rails) shines. It's basically a dependency tool on top of rubygems which creates like a bubble for your application. * You specify, in a in a gemfile, all the package/libraries your application depends on, if you want to can also specify a specific version of a package. * Then when you want to deploy your application (deploy your rails site to the server) you lock the gemfile and it will create a new locked gemfile. The locked gemfile specifies the exact version of all the packages (even those you never specified a version for). * Later on the server you run "bundle install" and it will use the locked gemfile and it will install the exact same versions of the packages you had on your developer machine. -- /Jacob Carlborg
Jan 20 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-01-20 13:12, Daniel Gibson wrote:
 Am 20.01.2011 00:54, schrieb Adam D. Ruppe:
 Jesse Phillips wrote:
 You can have the author release packaged libraries for developers
 to use and the author should do this. So this begs the question of
 what is the repository for?
It's so you have a variety of libraries available at once with minimal hassle when you are originally writing something. I really don't care about those libraries' implementation details. I just want it so when I type "import something.lib;" in my program it actually works. If something.lib's author wants to use other.thing, great, I just don't want to think about it anymore than I think about his private classes or functions.
 Why is the tool going out to different URLs and downloading files
 when you are supposed to use the pre-built lib?
Pre-built libs aren't all that useful anyway, for several reasons:

1. Templates
2. different operating systems: there would have to be pre-built libs for Windows, OSX, Linux and FreeBSD (if not even more)
3. different architectures: there would have to be pre-built libs for x86, AMD64 and, thanks to GDC and LDC, for about any platform supported by Linux..
And then one library for each of the compilers (ldc, gdc and dmd); do the math and one will soon realize that this won't work. Although pre-built libraries that only target a given platform might work.
 Just provide source, so people can build their own libs from it or just
 compile the sources like their own source files. This can still be done
 automagically by the build-tool/package management.

 Cheers,
 - Daniel
-- /Jacob Carlborg
Jan 20 2011
prev sibling next sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 18/01/2011 05:20, Walter Bright wrote:
 http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/
"I quit being a professional programmer." I usually avoid discussions and dismiss out of hand opinions about software development from those who no longer develop code (did that recently with my boss, to cut off a discussion). Mostly for time saving, it's not that I think they automatically wrong, or even likely to be wrong. Still, I read that article, and it's not bad, there are some good points. In fact, I strongly agree, in essence, with one of the things he said: That language ecosystems are what matter, not just the language itself. At least for most programmers, what you want is to develop software, software that is useful or interesting, it's not about staring at the beauty of your code and that's it. This, any language community that focuses excessively on the language only and forsakes, dismisses, or forgets the rest of the toolchain and ecosystem, will never succeed beyond a niche. (*cough* LISP *cough*) -- Bruno Medeiros - Software Engineer
Feb 04 2011
next sibling parent so <so so.do> writes:
 "I quit being a professional programmer."

 I usually avoid discussions and dismiss out of hand opinions about  
 software development from those who no longer develop code (did that  
 recently with my boss, to cut off a discussion). Mostly for time saving,  
 it's not that I think they automatically wrong, or even likely to be  
 wrong.
We are still in the stone age of programming; what has changed in the last 10 years? Nothing.
Feb 04 2011
prev sibling parent Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/2/4 Bruno Medeiros <brunodomedeiros+spam com.gmail>:
 language ecosystems are what matter, not just the language itself. At least
 for most programmers, what you want is to develop software, software that is
 useful or interesting, it's not about staring at the beauty of your code and
 that's it.
My view is that "language ecosystems matters TOO", but it's not enough if the language itself, or the platforms it's tied to makes me grit my teeth. What good is earning lots of money, to buy the finest food, if I only have my gums left to chew it with? (Figuratively speaking, of course)
Feb 06 2011
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Bruno Medeiros:

 That language ecosystems are what matter, not just the language itself.
This is true, but only once your language is already very good :-) Bye, bearophile
Feb 04 2011
next sibling parent spir <denis.spir gmail.com> writes:
On 02/04/2011 09:55 PM, bearophile wrote:
 Bruno Medeiros:

 That language ecosystems are what matter, not just the language itself.
This is true, but only once your language is already very good :-)
A key point is, imo, that whether the eco-system grows, and how harmoniously, is completely unrelated to whether your language is "very good", or how good it is compared to others of a similar vein. I would say these are close to orthogonal dimensions, with probability close to 1 ;-) (sorry for the unclear formulation, I guess you get the point anyway).

Denis
-- 
_________________
vita es estrany
spir.wikidot.com
Feb 04 2011
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 04/02/2011 20:55, bearophile wrote:
 Bruno Medeiros:

 That language ecosystems are what matter, not just the language itself.
This is true, but only once your language is already very good :-) Bye, bearophile
I disagree. I think an average language with an average toolchain (I'm not even considering the whole ecosystem here, just the toolchain - compilers, debuggers, IDEs, profilers, and some other tools) will be better than a good language with a mediocre toolchain. By better I mean that people will be more willing to use it, and better programs will be created.

Obviously it is very hard to quantify in a non-subjective way what exactly good/average/mediocre is in terms of a language and toolchain. But roughly speaking, I think the above to be true.

The only advantage that a good language with a bad toolchain has over another ecosystem is in terms of *potential*: it might be easier to improve the toolchain than to improve the language. This might be relevant if one is still an early-adopter or hobbyist, but if you want to do a real, important non-trivial project, what you care about is the state of the toolchain and ecosystem *now*.

-- 
Bruno Medeiros - Software Engineer
Feb 16 2011
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, February 16, 2011 09:23:04 Bruno Medeiros wrote:
 On 04/02/2011 20:55, bearophile wrote:
 Bruno Medeiros:
 That language ecosystems are what matter, not just the language itself.
This is true, but only once your language is already very good :-) Bye, bearophile
I disagree. I think an average language with an average toolchain (I'm not even considering the whole ecosystem here, just the toolchain - compilers, debuggers, IDEs, profilers, and some other tools) will be better than a good language with a mediocre toolchain. By better I mean that people will be more willing to use it, and better programs will be created. Obviously it is very hard to quantify in a non-subjective way what exactly good/average/mediocre is in terms of a language and toolchain. But roughly speaking, I think the above to be true. The only advantage that a good language with bad toolchain has over another ecosystem, is in terms of *potential*: it might be easier to improve the toolchain than to improve the language. This might be relevant if one is still an early-adopter or hobbyist, but if you want to do a real, important non-trivial project, what you care is what is the state of the toolchain and ecosystem *now*.
There are people who will want a better language and will be willing to go to some effort to find and use one, but there are plenty of programmers who just don't care. They might like to have a better language, but they're not willing to go to great lengths to find and use one. So, even if they're presented with a fantastic language that they might like to use, if it's a pain to use due to toolchain issues or whatnot, they won't use it.

Every barrier of entry reduces the number of people willing to use a particular programming language. For someone to be willing to put up with or work past a particular barrier of entry, the effort has to appear to be worthwhile to them. And often, it won't take much for a particular barrier of entry to be enough for someone to not try a language which they think _might_ be much better but don't know is much better, because they've never really used it. So, until the toolchain is reasonable to your average programmer, your average programmer isn't going to use the language.

Now, by no means does that mean that the toolchain is the most important aspect of getting people to use a new language, but it _does_ mean that if you want to increase your user base, you need a solid toolchain. To get your language ironed out, you need a fair-sized user base, but there's also no point in growing the user base to large sizes long before the language is even properly usable. So, I don't know what point in the development process is the best time to really be worrying about the toolchain. As it stands, D has generally worried more about getting the language right. Now that the spec is mostly frozen, the focus is shifting towards ironing out the major bugs and fixing the major holes in the toolchain (such as the lack of support for 64-bit and shared libraries). So, D has definitely taken the approach of trying to iron out the language before ironing out the toolchain.

Regardless, it's definitely true that problems with the toolchain are preventing people from using D at this point. How good the language is if you can put up with some of its problems, or how good it will be once those problems are solved, is irrelevant to many programmers. They want a good toolchain _now_. And when most programmers are used to dealing with toolchains with virtually no bugs (or at least no bugs that they typically run into), they're not going to put up with one with as many bugs as D's has. We're really going to need a solid toolchain for D to truly grow its user base. And we're getting there, but it takes time. Until then though, there are a lot of programmers who won't touch D.

- Jonathan M Davis
Feb 16 2011
prev sibling parent retard <re tard.com.invalid> writes:
Wed, 16 Feb 2011 17:23:04 +0000, Bruno Medeiros wrote:

 On 04/02/2011 20:55, bearophile wrote:
 Bruno Medeiros:

 That language ecosystems are what matter, not just the language
 itself.
This is true, but only once your language is already very good :-) Bye, bearophile
I disagree. I think an average language with an average toolchain (I'm not even considering the whole ecosystem here, just the toolchain - compilers, debuggers, IDEs, profilers, and some other tools) will be better than a good language with a mediocre toolchain. By better I mean that people will be more willing to use it, and better programs will be created. Obviously it is very hard to quantify in a non-subjective way what exactly good/average/mediocre is in terms of a language and toolchain. But roughly speaking, I think the above to be true. The only advantage that a good language with bad toolchain has over another ecosystem, is in terms of *potential*: it might be easier to improve the toolchain than to improve the language. This might be relevant if one is still an early-adopter or hobbyist, but if you want to do a real, important non-trivial project, what you care is what is the state of the toolchain and ecosystem *now*.
Surprisingly this is exactly what I've been saying several times.

I'd also like to point out that part of the potential for new languages comes from the fact that you can design much cleaner standard & de facto libs before it takes off. Some of the issues with "old" languages come from the standard utilities and libraries. It sometimes takes an enormous effort to replace those. So, 100% of the potential doesn't come from redesign of the language, it's also the redesign of tools and the ecosystem. I'm also quite sure it's a redesign every time now. There are simply too many languages already to choose from.

Some examples of failed designs which are still in use: PHP's stdlib with weird parameter conventions and intensive use of globals, (GNU) C/C++ build tools, Java's wasteful (in terms of heap allocation) stdlib, C++'s thread/multicore unaware runtime, C++'s metaprogramming libraries using the terrible template model, Javascript's "bad" parts from the era when it still was a joke.

However there has been a constant flux of new languages since the 1950s. I'm sure many new languages can beat Java and C++ in several ways. But in general a new language isn't some kind of a silver bullet. Advancements in language design follow the law of diminishing returns -- even though we see complex breakthroughs in type system design, better syntax and cleaner APIs, something around 5-50% better usability/productivity/safety many times isn't worth the effort. I've seen numbers that moving from procedural programming to OOP only improved the productivity about 20-40%. Moving from OOP language 1 to OOP language 2 quite likely improves the numbers a lot less.

As an example, Java's toolchain and its set of available libraries are so huge that you need millions of $$$ and thousands of man years to beat it in many domains. There simply isn't any valid technical reason not to use that tool (assuming it's the tool people typically use to get the work done). If you need a low cost web site and only php hosting is available at that price, you can't do a shit with D. Some hardcore fanboy would perhaps build a PHP backend for D, but it doesn't make any sense. It's 1000 lines of PHP vs 100000 lines of D. And reclaiming the potential takes forever. It's not worth it.
Feb 16 2011