
digitalmars.D - A possible future purpose for D1

reply bearophile <bearophileHUGS lycos.com> writes:
I think this comment contains a grain of truth: languages that start simple can
gain a user base, and then they can slowly grow more complex:

http://www.reddit.com/r/programming/comments/b74jv/scala_books_in_general_are_just_not_selling_well/

The const/nothrow/pure system of D2 is useful, but in practice it's restrictive
and a bit fussy too: there are legal and useful D1 programs that just can't be
compiled by D2. I have shown a small example problem here (I think this problem
can be fixed):
http://d.puremagic.com/issues/show_bug.cgi?id=3833
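
As one minimal illustration of that kind of friction (this is not the code from
issue 3833, just a sketch of a well-known D1/D2 difference around string
literals and constness):

// Valid D1: string literals have type char[] there, so this compiles.
// D2 rejects the same call, because literals are typed immutable(char)[]
// and cannot be bound to a mutable char[] parameter without .dup.
void greet(char[] name)
{
    // ... use name ...
}

void main()
{
    greet("world");        // fine in D1; a type error in D2
    // greet("world".dup); // the usual D2-friendly rewrite
}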

So the D1 language can be useful as a ladder to climb toward the complexity heights
of the D2 language. People can learn D1, which is simpler and less fussy. Once they
know D1, if they like it and they need it, they can learn D2 too. Sometimes you
want to use D2 just because you want to create larger programs (while D1 can be
a better fit for smaller ones).

If this usage of D1 is more than an illusion of mine, then the future evolution
of the D1 language can be shaped to help with such didactic/introductory purposes.
To do this, D1 can be changed a little: removing some of its features that are
absent in D2 (but not all of them, since some were removed from D2 only because
of other improvements that are missing in D1), and adding a few useful features
that are very simple to use and understand, like struct constructors. Despite
increasing the compiler complexity a little, I think struct constructors can
decrease the complexity of the language a little, because they remove a special
case: you don't need to remember that structs lack such constructors.
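
For example, a minimal sketch of the difference (all of it compiles as D2; the
static opCall idiom shown for PointD1 is the usual D1-era workaround, and the
names are just illustrative):

// D2: structs may declare constructors.
struct Point
{
    int x, y;
    this(int x, int y) { this.x = x; this.y = y; }
}

// D1 has no struct constructors; the common workaround is a static opCall.
struct PointD1
{
    int x, y;
    static PointD1 opCall(int x, int y)
    {
        PointD1 p;
        p.x = x;
        p.y = y;
        return p;
    }
}

void main()
{
    auto a = Point(3, 4);    // D2: calls the constructor
    auto b = PointD1(3, 4);  // D1 style: actually calls static opCall
}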

Bye,
bearophile
Feb 27 2010
next sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
bearophile wrote:

 I think this comment contains a grain of truth: languages that start
 simple can gain an user base, and then they can slowly grow more complex:
 
 
 http://www.reddit.com/r/programming/comments/b74jv/scala_books_in_general_are_just_not_selling_well/
I think you need to distinguish between accidental complexity and inherent
complexity. For example, C++ has a lot of accidental complexity that is not
actually necessary; this is nothing more than a hindrance. But take javascript's
lack of a module system: it makes the language itself simpler, but scaling up
javascript programs much harder. In this regard python's module system is way
more complex, which makes programming way more simple.

Tacking on features while remaining backwards compatible is what grows a lot of
accidental complexity in languages. D2 is not free from that of course, as its
designers are leaving the 'no issue left behind' stage of development behind in
order to actually finish something.
 The const/nothrow/pure system of D2 is useful, but in practice it's
 restrictive and a bit fussy too: there are legal and useful D1 programs
 that just can't be compiled by D2. I have shown a small example problem
 here (I think this problem can be fixed):
 http://d.puremagic.com/issues/show_bug.cgi?id=3833
But these examples are genuine bugs, right? They are not inherent flaws in the type system (I hope).
 So the D1 language can be useful as ladder to climb to the complexity
 heights of D2 language. People can learn D1, that's simpler and less
 fussy. Once they know D1, if they like it and they need it they can learn
 D2 too. Sometimes you want to use D2 just because you want to create
 larger programs (while D1 can be fitter for smaller ones).
 
 If this usage of D1 is more than an illusion of mine, then the future
 evolution of D1 language can be shaped to help in such
 didactic/introduction purposes. To do this the D1 can be changed a little,
 removing some of its features that are absent in D2 (but not all of them,
 some of them were removed in D2 because of other improvements that are
 missing in D1), and adding few useful features that are very simple to use
 and understand, (like struct constructors, that I think (despite
 increasing the compiler complexity a little) can decrease a little the
 complexity of the language because they remove a special case, you don't
 need to remember that structs don't have those constructors).
 
 Bye,
 bearophile
D is a complex language all right, but it's made to implement complex programs.
Once the D2 spec is frozen and the most important bugs are fixed, it will be
clearer whether it is *too* complex or not. I think (hope) that most of the D2
features you mention as complex can simply be left unused if they are not suited
to the kind of program you are writing.

What about D1? I think a better stepping stone to D2 is just the subset of D2
that is simple or more familiar to most programmers. At least you don't get any
questions about what 'public static void main()' means and what a class is when
introducing hello world :)

D1 has but one major advantage over D2: it is much more mature. I think D1 has a
future as long as that is the case, or as long as there is a large enough body
of code depending on it. Assuming Walter Bright keeps supporting it of course
(as he has).
Feb 27 2010
parent reply Fawzi Mohamed <fawzi gmx.ch> writes:
On 27-feb-10, at 15:49, Lutger wrote:

 D1 has but one major advantage over D2: it is much more mature. I think D1
 has a future as long as that is the case, or as long as there is a large
 enough body of code depending on it. Assuming Walter Bright keeps supporting
 it of course (as he has).
Well, that was my idea too, but lately I am wondering if it is really the case.
I am coming from a big rewrite of my correct code to work around compiler bugs.
Since 1.047 there have been almost 10 releases, and not one of them was able to
compile my (correct) code, due to one regression or another. As so many versions
passed by, these regressions were also picked up by ldc, so I had the choice of
having an older ldc based on a non-released llvm and tango, or the new
regressions.

Things like http://d.puremagic.com/issues/show_bug.cgi?id=3867 are very annoying
and time consuming to find. Also http://d.puremagic.com/issues/show_bug.cgi?id=3792
needs invasive changes. Ldc has also acquired another bug that forces one to
prefix fields of some nested structs with this. For
http://d.puremagic.com/issues/show_bug.cgi?id=3803 I found no other way but to
patch the compiler.

Looking at the D community, it seems that almost all big users of the language
have touched the compiler, or patched it at some point; this is something that I
have resisted as much as possible. I don't want to work on the compiler; I have
already spent way too much time on blip and tango, which are just means to
develop my programs. I don't want to start another time sink, and I should not
have to.

Normally I try to code defensively, to avoid forward refs, to test well, and to
rewrite my code in such a way that I avoid bugs if possible. And when a tricky
part is done and works well, I want to be able to forget about it. Regressions
that need rewrites and are hard to debug make that difficult, and force me to
keep direct control of almost all the code; I cannot easily depend on external D
libs, because I might need to hack them to use them with my compiler version.

Maybe I am painting the situation more dire than it is, but I sure got annoyed
by it, and I hope that it will be rectified soon. More than new language
features, D needs stable and efficient libraries, something that can come only
if the compiler is stable enough, and at least for D1.0 that should be the case.

Fawzi
Mar 01 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Fawzi Mohamed wrote:
 Maybe I am painting the situation more dire than it is, but I sure got 
 annoyed by it, and I hope that it will be rectified soon.
 More than new language features D needs stable and efficient libraries, 
 something that can come only if the compiler is stable enough, and at 
 least for D1.0 that should be the case
Regressions are caused by fixing bugs in the compiler while having an inadequate test suite. The good news in this is that every fixed problem in bugzilla also winds up in the test suite, so it stays fixed.
Mar 01 2010
next sibling parent reply Fawzi Mohamed <fawzi gmx.ch> writes:
On 2-mar-10, at 01:26, Walter Bright wrote:

 Fawzi Mohamed wrote:
 Maybe I am painting the situation more dire than it is, but I sure  
 got annoyed by it, and I hope that it will be rectified soon.
 More than new language features D needs stable and efficient  
 libraries, something that can come only if the compiler is stable  
 enough, and at least for D1.0 that should be the case
Regressions are caused by fixing bugs in the compiler while having an inadequate test suite. The good news in this, is that every fixed problem in bugzilla also winds up in the test suite, so it stays fixed.
That's good, but maybe for a release one should also try to compile some of the
largish projects that are done in D (even an older frozen version) to see if
something comes up in larger codebases... At least for D 1.0 on a fixed system,
the idea "if it did compile, it should compile again" is something that could be
considered. You could ask people to make a "setup my program" script that does
it with the dmd in the path. This could be done for some releases (stable ones);
yes, bugs found this way are harder to isolate, but it could be worthwhile.

Fawzi
Mar 01 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Fawzi Mohamed wrote:
 that s good, but maybe for a release one should also try to compile some 
 of the largish projects that are done in D (even al older frozen 
 version) to see if in larger codebases something comes up...
 At least for D 1.0 on a fixed system the idea "if id did compile it 
 should compile again" is something that could be considered.
 You could ask people to make a script "setup my program" that does it 
 with the dmd in the path.
 This could be done for some releases (stable ones), yes bugs found this 
 way are harder to isolate, but it could be worthwhile.
I generally don't do that because:

1. I didn't write the code, and when it fails to compile it can be rather time
consuming because I don't know what it is intended to be doing.

2. They often don't come with a test suite, and simply compiling it doesn't
really say if it works or not. If it does come with a test suite, and the test
suite fails, debugging someone else's code can be very hard.

3. It makes running the test suite take considerably longer. I need it to be
fast, as I run it very often when developing things.

4. Nearly all problems boil down to less than 10 lines of code. This makes for a
very compact and fast running test suite. If the test suite fails, the problem
is usually already isolated down.

5. I've discovered over the years that programmers write in particular "islands"
of the language. No matter how large a code base they produce, they never stray
outside that island, so once the bugs they initially encountered are fixed, they
never run into compiler bugs anymore. The coverage of a test suite is simply not
a function of number of lines of code thrown at it.

In other words, large applications tend to make lousy test suites. A test suite
needs to be written to be a test suite.

On the other hand, anyone can subscribe to the dmd-beta mailing list, and have a
chance to check for regressions on their own code before the release. It's the
point of the list.
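
To illustrate point 4, a hypothetical example of the shape such a reduced case
takes (this is not an actual entry from the dmd test suite):

// A complete program of a few lines that exercises exactly one construct
// and asserts the expected result.
struct Pair
{
    int a, b;
    int sum() { return a + b; }
}

void main()
{
    Pair p;
    p.a = 40;
    p.b = 2;
    assert(p.sum() == 42);
}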
Mar 02 2010
next sibling parent Fawzi Mohamed <fmohamed mac.com> writes:
On 2010-03-02 09:25:05 +0100, Walter Bright <newshound1 digitalmars.com> said:

 Fawzi Mohamed wrote:
 that s good, but maybe for a release one should also try to compile 
 some of the largish projects that are done in D (even al older frozen 
 version) to see if in larger codebases something comes up...
 At least for D 1.0 on a fixed system the idea "if id did compile it 
 should compile again" is something that could be considered.
 You could ask people to make a script "setup my program" that does it 
 with the dmd in the path.
 This could be done for some releases (stable ones), yes bugs found this 
 way are harder to isolate, but it could be worthwhile.
 I generally don't do that because:
 
 1. I didn't write the code, and when it fails to compile it can be rather time
 consuming because I don't know what it is intended to be doing.
 
 2. They often don't come with a test suite, and simply compiling it doesn't
 really say if it works or not. If it does come with a test suite, and the test
 suite fails, debugging someone else's code can be very hard.
 
 3. It makes running the test suite take considerably longer. I need it to be
 fast, as I run it very often when developing things.
 
 4. Nearly all problems boil down to less than 10 lines of code. This makes for a
 very compact and fast running test suite. If the test suite fails, the problem
 is usually already isolated down.
 
 5. I've discovered over the years that programmers write in particular "islands"
 of the language. No matter how large a code base they produce, they never stray
 outside that island, so once the bugs they initially encountered are fixed, they
 never run into compiler bugs anymore. The coverage of a test suite is simply not
 a function of number of lines of code thrown at it.
 
 In other words, large applications tend to make lousy test suites. A test suite
 needs to be written to be a test suite.
 
 On the other hand, anyone can subscribe to the dmd-beta mailing list, and have a
 chance to check for regressions on their own code before the release. It's the
 point of the list.
I did not know about dmd-beta, and have now subscribed to it. I must have missed
its announcement, as I don't always read all the posts on the NG; I just check it
out from time to time.

It is definitely a good idea: a way for people not involved in compiler
development to quickly, and possibly painlessly, check whether a new release
breaks something. I fully agree that large codebases aren't a good testsuite for
compiler development, but they are a good test for beta releases. As you say, it
is indeed better if the developer of the lib/app does the test, and it scales
better, so dmd-beta is the right idea.

Fawzi
Mar 02 2010
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 5. I've discovered over the years that programmers write in particular 
 "islands" of the language. No matter how large a code base they produce, 
 they never stray outside that island, so once the bugs they initially 
 encountered are fixed, they never run into compiler bugs anymore. The 
 coverage of a test suite is simply not a function of number of lines of 
 code thrown at it.
This is interesting. I know that different groups of C++ programmers use
different "tidy" subsets of C++ (for example, the Google coding standards forbid
many C++ features), but I have never read about this before. I presume the mind
of human programmers works in a more grammar-based way compared to normal
linguistic generative capabilities. I'll send an email to Steven Pinker about
this.

Bye and thank you,
bearophile
Mar 02 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 5. I've discovered over the years that programmers write in
 particular "islands" of the language. No matter how large a code
 base they produce, they never stray outside that island, so once
 the bugs they initially encountered are fixed, they never run into
 compiler bugs anymore. The coverage of a test suite is simply not a
 function of number of lines of code thrown at it.
This is interesting. I know that different groups of C++ programmers use a different "tidy" subset of C++ (for example Google coding standards forbid many C++ features), but I have never read of this. I presume the mind of human programmers works in a more grammar-based way compared to the normal linguistic generative capabilities. I'll send an email to Steven Pinker about this.
You see a similar thing with writers. A writer can be identified by doing a statistical analysis of the words/phrases used.
Mar 02 2010
parent retard <re tard.com.invalid> writes:
Tue, 02 Mar 2010 11:57:13 -0800, Walter Bright wrote:

 bearophile wrote:
 Walter Bright:
 5. I've discovered over the years that programmers write in particular
 "islands" of the language. No matter how large a code base they
 produce, they never stray outside that island, so once the bugs they
 initially encountered are fixed, they never run into compiler bugs
 anymore. The coverage of a test suite is simply not a function of
 number of lines of code thrown at it.
This is interesting. I know that different groups of C++ programmers use a different "tidy" subset of C++ (for example Google coding standards forbid many C++ features), but I have never read of this. I presume the mind of human programmers works in a more grammar-based way compared to the normal linguistic generative capabilities. I'll send an email to Steven Pinker about this.
You see a similar thing with writers. A writer can be identified by doing a statistical analysis of the words/phrases used.
I don't think the language can change the way of thinking that much. No matter what language you have, certain types of developers use a very different coding style. Even if the language is minimalistic or simple (Scheme / Io) or horribly complex like C++, code review always reveals varying coding styles.
Mar 02 2010
prev sibling parent Brad Roberts <braddr puremagic.com> writes:
On 3/1/2010 11:43 PM, Fawzi Mohamed wrote:
 
 On 2-mar-10, at 01:26, Walter Bright wrote:
 
 Fawzi Mohamed wrote:
 Maybe I am painting the situation more dire than it is, but I sure
 got annoyed by it, and I hope that it will be rectified soon.
 More than new language features D needs stable and efficient
 libraries, something that can come only if the compiler is stable
 enough, and at least for D1.0 that should be the case
Regressions are caused by fixing bugs in the compiler while having an inadequate test suite. The good news in this, is that every fixed problem in bugzilla also winds up in the test suite, so it stays fixed.
that s good, but maybe for a release one should also try to compile some of the largish projects that are done in D (even al older frozen version) to see if in larger codebases something comes up... At least for D 1.0 on a fixed system the idea "if id did compile it should compile again" is something that could be considered. You could ask people to make a script "setup my program" that does it with the dmd in the path. This could be done for some releases (stable ones), yes bugs found this way are harder to isolate, but it could be worthwhile. Fawzi
Which is exactly why there are beta releases: to give people a chance to do
exactly those sorts of test builds. If you want to be a part of the solution,
subscribe to the dmd-beta mailing list and test your applications. Reported
regressions have a high chance of being fixed prior to release.

Expecting Walter to do all of that regression testing on un-reduced test cases
is unrealistic. This isn't a new topic; see the newsgroup history for periodic
repeats of it.

Later,
Brad
Mar 01 2010
prev sibling next sibling parent reply Bane <branimir.milosavljevic gmail.com> writes:
bearophile Wrote:

 I think this comment contains a grain of truth: languages that start simple
can gain an user base, and then they can slowly grow more complex:
 
 http://www.reddit.com/r/programming/comments/b74jv/scala_books_in_general_are_just_not_selling_well/
 
 The const/nothrow/pure system of D2 is useful, but in practice it's
restrictive and a bit fussy too: there are legal and useful D1 programs that
just can't be compiled by D2. I have shown a small example problem here (I
think this problem can be fixed):
 http://d.puremagic.com/issues/show_bug.cgi?id=3833
 
 So the D1 language can be useful as ladder to climb to the complexity heights
of D2 language. People can learn D1, that's simpler and less fussy. Once they
know D1, if they like it and they need it they can learn D2 too. Sometimes you
want to use D2 just because you want to create larger programs (while D1 can be
fitter for smaller ones).
 
I strongly object, sir! As practice has proved too many times, the only
functional and maintainable large programs are those composed of many simple
components. Therefore, D1, in my opinion, is capable of producing even large
programs.

On the other hand, D2 carries more complexity than D1: more power, at a greater
risk of potentially more dangerous programs (due to programmer fault). As the D
language homepage states, D aims to balance simplicity and power. Seems to me D1
leans to the first, while D2 leans to the second. I see a place for both in this
world, for making both small and large programs.
 If this usage of D1 is more than an illusion of mine, then the future
evolution of D1 language can be shaped to help in such didactic/introduction
purposes. To do this the D1 can be changed a little, removing some of its
features that are absent in D2 (but not all of them, some of them were removed
in D2 because of other improvements that are missing in D1), and adding few
useful features that are very simple to use and understand, (like struct
constructors, that I think (despite increasing the compiler complexity a
little) can decrease a little the complexity of the language because they
remove a special case, you don't need to remember that structs don't have those
constructors).
 
 Bye,
 bearophile
Feb 28 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bane wrote:
 On the other hand, D2 carries more complexity than D1, more power at
 a greater risk of potentially more dangerous programs (due to
 programmers fault). As Language D homepage states, D aims to balance
 simplicity and power. Seems to me D1 leans to first, while D2 to
 second. I see place for both in this world for making both small and
 large programs.
Actually, I think D2 is a much safer language than D1.

The fundamental problem with simple languages is that they tend to push the
complexity off onto the user's source code. Whenever you have an IDE that
generates many lines of boilerplate at the push of a button, that is a red flag
that the language is too simple.
Feb 28 2010
parent Bane <branimir.milosavljevic gmail.com> writes:
Walter Bright Wrote:

 Bane wrote:
 On the other hand, D2 carries more complexity than D1, more power at
 a greater risk of potentially more dangerous programs (due to
 programmers fault). As Language D homepage states, D aims to balance
 simplicity and power. Seems to me D1 leans to first, while D2 to
 second. I see place for both in this world for making both small and
 large programs.
Actually, I think D2 is a much safer language than D1.
I hope so; I'll be the first to switch to it once it becomes stable enough. And
as for D2 being more dork-safe, only time and dorks will prove that (your
opinion is not that of an average user) :D
 The fundamental problem with simple languages is that they tend to push 
 the complexity off upon the user source code. Whenever you have an IDE 
 that generates many lines of boilerplate at the push of a button, that 
 is a red flag that the language is too simple.
Mar 01 2010
prev sibling parent Norbert Nemec <Norbert Nemec-online.de> writes:
I strongly disagree: Having two versions of the language can only lead 
to confusion.

If there is a need for a "simplified" D, this should be achieved by 
defining D in several levels, not just by branching off the project. The 
simplified D should still evolve alongside full D and be kept in sync.




bearophile wrote:
 I think this comment contains a grain of truth: languages that start simple
can gain an user base, and then they can slowly grow more complex:
 
 http://www.reddit.com/r/programming/comments/b74jv/scala_books_in_general_are_just_not_selling_well/
 
 The const/nothrow/pure system of D2 is useful, but in practice it's
restrictive and a bit fussy too: there are legal and useful D1 programs that
just can't be compiled by D2. I have shown a small example problem here (I
think this problem can be fixed):
 http://d.puremagic.com/issues/show_bug.cgi?id=3833
 
 So the D1 language can be useful as ladder to climb to the complexity heights
of D2 language. People can learn D1, that's simpler and less fussy. Once they
know D1, if they like it and they need it they can learn D2 too. Sometimes you
want to use D2 just because you want to create larger programs (while D1 can be
fitter for smaller ones).
 
 If this usage of D1 is more than an illusion of mine, then the future
evolution of D1 language can be shaped to help in such didactic/introduction
purposes. To do this the D1 can be changed a little, removing some of its
features that are absent in D2 (but not all of them, some of them were removed
in D2 because of other improvements that are missing in D1), and adding few
useful features that are very simple to use and understand, (like struct
constructors, that I think (despite increasing the compiler complexity a
little) can decrease a little the complexity of the language because they
remove a special case, you don't need to remember that structs don't have those
constructors).
 
 Bye,
 bearophile
Feb 28 2010