
digitalmars.D - Re: null references redux + Looney Tunes

reply Justin Johansson <no spam.com> writes:
For the interest of newsgroups readers, I dropped in at the Cafe the other day
and
the barista had this to say

http://cafe.elharo.com/programming/imagine-theres-no-null/

Disclaimer: YMMV

Cheers

-- Justin Johansson
Oct 02 2009
next sibling parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
Justin Johansson wrote:
 For the interest of newsgroups readers, I dropped in at the Cafe the other day
and
 the barista had this to say
 
 http://cafe.elharo.com/programming/imagine-theres-no-null/
 
 Disclaimer: YMMV
 
 Cheers
 
 -- Justin Johansson
Most of the bugs he exposes are trivial to debug and mostly come from beginners.

From the article: "The distinction between primitive and object types is a relic of days when 40 MHz was considered a fast CPU"

I so disagree with that on so many levels. That's exactly what I believe is wrong with programmers today: they excuse their sloppy programming and lazy debugging with safe constructs which have way more overhead than is actually needed. It doesn't really make the program easier to code, it just makes the programmer less careful, leading to new kinds of bugs.

Maybe for financial or medical domains it's acceptable since speed is not an issue, but I expect my $3k computer not to slow down to a crawl because its software is written in a "safe" way, and I'd like people with older computers to still be able to run my programs without waiting 5 minutes between any two mouse clicks.
Oct 02 2009
next sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 02/10/2009 16:16, Jeremie Pelletier wrote:
 Justin Johansson wrote:
 For the interest of newsgroups readers, I dropped in at the Cafe the
 other day and
 the barista had this to say

 http://cafe.elharo.com/programming/imagine-theres-no-null/

 Disclaimer: YMMV

 Cheers

 -- Justin Johansson
Most of the bugs he expose are trivial to debug and mostly come from beginners. From the article: "The distinction between primitive and object types is a relic of days when 40 MHz was considered a fast CPU" I so disagree with that on so many levels. That's exactly what I believe is wrong with programmers today, they excuse their sloppy programming and lazy debugging with safe constructs which have way more overhead than is actually needed. It doesn't really make the program easier to code but the programmer less careful, leading to new kind of bugs. Maybe for financial or medical domains its acceptable since speed is not an issue, but I expect my $3k computer to not slow down to a crawl because its software is written in a "safe" way and I like people with older computers to still be able to run my programs without waiting 5 minutes between any two mouse clicks.
All I can say is: thank God I'm an atheist.

It seems you do not want to hear a different opinion, despite the fact that option types have existed in FP for half a century already and provide the correct semantics for nullable types.

With your logic we should remove seat-belts from cars, since they make for less careful drivers.
Oct 02 2009
parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
Yigal Chripun wrote:
 On 02/10/2009 16:16, Jeremie Pelletier wrote:
 Justin Johansson wrote:
 For the interest of newsgroups readers, I dropped in at the Cafe the
 other day and
 the barista had this to say

 http://cafe.elharo.com/programming/imagine-theres-no-null/

 Disclaimer: YMMV

 Cheers

 -- Justin Johansson
Most of the bugs he expose are trivial to debug and mostly come from beginners. From the article: "The distinction between primitive and object types is a relic of days when 40 MHz was considered a fast CPU" I so disagree with that on so many levels. That's exactly what I believe is wrong with programmers today, they excuse their sloppy programming and lazy debugging with safe constructs which have way more overhead than is actually needed. It doesn't really make the program easier to code but the programmer less careful, leading to new kind of bugs. Maybe for financial or medical domains its acceptable since speed is not an issue, but I expect my $3k computer to not slow down to a crawl because its software is written in a "safe" way and I like people with older computers to still be able to run my programs without waiting 5 minutes between any two mouse clicks.
all I can say is: Thank God I'm an atheist. it seems you do not want to hear a different opinion despite the fact that option types exist in FP for half a century already and provide the correct semantics for nullable types. with your logic we should remove seat-belts from cars since it makes for less careful drivers.
I don't think you understood what I meant: seat-belts don't require you to buy a bigger car engine, because they don't affect the performance of the car whatsoever. They're also not enforced; the car will run just as well if you don't wear them.
Oct 03 2009
parent Yigal Chripun <yigal100 gmail.com> writes:
On 03/10/2009 16:09, Jeremie Pelletier wrote:
 I don't think you understood what I meant, seat-belts don't require you
 to buy a bigger car engine because they don't affect the performance of
 the car whatsoever. They're also not enforced, the car will run just as
 fine if you don't wear them.
I understood you quite well. Seat belts do not require a bigger car engine, and compile-time safety features do not add any overhead to run-time execution.

Seat belts *ARE* enforced: they are required by law and you get a hefty fine if you violate this. Also, seat belts save lives and are used by regular drivers and car racers.
Oct 03 2009
prev sibling next sibling parent reply language_fan <foo bar.com.invalid> writes:
Fri, 02 Oct 2009 10:16:05 -0400, Jeremie Pelletier thusly wrote:

 I expect my $3k computer to not slow down to a crawl
 because its software is written in a "safe" way and I like people with
 older computers to still be able to run my programs without waiting 5
 minutes between any two mouse clicks.
Your $3k computer can probably run about 7200 billion instructions in 5 minutes - it will eventually get old in the coming years. I really hope the bloat in various software components never gets that bad that you would have to wait 5 minutes between two mouse clicks!
Oct 02 2009
parent Jeremie Pelletier <jeremiep gmail.com> writes:
language_fan wrote:
 Fri, 02 Oct 2009 10:16:05 -0400, Jeremie Pelletier thusly wrote:
 
 I expect my $3k computer to not slow down to a crawl
 because its software is written in a "safe" way and I like people with
 older computers to still be able to run my programs without waiting 5
 minutes between any two mouse clicks.
Your $3k computer can probably run about 7200 billion instructions in 5 minutes - it will eventually get old in the coming years. I really hope the bloat in various software components never gets that bad that you would have to wait 5 minutes between two mouse clicks!
I was talking about older computers (for example a PIII 500 MHz); I know my laptop will perform just fine for the next 5-10 years :)
Oct 03 2009
prev sibling parent Jeremie Pelletier <jeremiep gmail.com> writes:
Nick Sabalausky wrote:
 "Jeremie Pelletier" <jeremiep gmail.com> wrote in message 
 news:ha51v1$24ps$1 digitalmars.com...
 Justin Johansson wrote:
 For the interest of newsgroups readers, I dropped in at the Cafe the 
 other day and
 the barista had this to say

 http://cafe.elharo.com/programming/imagine-theres-no-null/

 Disclaimer: YMMV

 Cheers

 -- Justin Johansson
Most of the bugs he expose are trivial to debug and mostly come from beginners. From the article: "The distinction between primitive and object types is a relic of days when 40 MHz was considered a fast CPU" I so disagree with that on so many levels. That's exactly what I believe is wrong with programmers today, they excuse their sloppy programming and lazy debugging with safe constructs which have way more overhead than is actually needed. It doesn't really make the program easier to code but the programmer less careful, leading to new kind of bugs. Maybe for financial or medical domains its acceptable since speed is not an issue, but I expect my $3k computer to not slow down to a crawl because its software is written in a "safe" way and I like people with older computers to still be able to run my programs without waiting 5 minutes between any two mouse clicks.
Holy crap, I feel like I have a clone ;) (Hopefully that was original enough to rationalize a blatant "me too" post ;) )
It certainly was, thanks :)
Oct 03 2009
prev sibling next sibling parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Fri, Oct 2, 2009 at 8:13 AM, Justin Johansson <no spam.com> wrote:
 For the interest of newsgroups readers, I dropped in at the Cafe the other day
and
 the barista had this to say

 http://cafe.elharo.com/programming/imagine-theres-no-null/

 Disclaimer: YMMV

 Cheers

 -- Justin Johansson
I always think it's funny when people are like "so, I had this idea, lemme throw this out there. I know it sounds weird, but just bear with me - what if there were _no null_? Did I just _blow your mind?_"

And from the perspective of languages with better type systems, it's like.. and?

data Maybe T = Just T | Nothing

The whole null/nonnull debate is a complete nonissue in languages like Haskell because _they actually treat it formally and correctly_. And they've _been_ doing this for years. For all the Java-ites to be like "OMG PARADIGM SHIFT" it's just funny.
Oct 02 2009
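As an aside, here is a minimal sketch of what a Maybe-style wrapper can look like as a plain D struct. The name Option and its members are hypothetical, purely for illustration, not an existing Phobos type:

import std.stdio;

// Hypothetical option type: either holds a T or explicitly holds nothing.
struct Option(T)
{
    private T value;
    private bool present = false;

    static Option some(T v)
    {
        Option o;
        o.value = v;
        o.present = true;
        return o;
    }

    static Option none()
    {
        return Option.init;
    }

    bool isSome() { return present; }

    // Force callers to acknowledge the "nothing" case before unwrapping.
    T get()
    {
        assert(present, "tried to unwrap an empty Option");
        return value;
    }
}

void main()
{
    auto a = Option!int.some(42);
    auto b = Option!int.none();
    writeln(a.isSome() ? a.get() : -1); // prints 42
    writeln(b.isSome() ? b.get() : -1); // prints -1
}

The point is only that the "no value" case becomes an explicit state the caller has to check, rather than a null that blows up at a distance.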
parent reply language_fan <foo bar.com.invalid> writes:
Fri, 02 Oct 2009 10:30:24 -0400, Jarrett Billingsley thusly wrote:

 I always think it's funny when people are like "so, I had this idea,
 lemme throw this out there. I know it sounds weird, but just bear with
 me - what if there were _no null_? Did I just _blow your mind?_"
 
 And the perspective of languages with **better type systems**, it's 
 like..
*plonk* :-P (old-timers might know)
 The whole null/nonnull debate is a complete nonissue in languages like
 Haskell because _they actually treat it formally and correctly_. And
 they've _been_ doing this for years. For all the Java-ites to be like
 "OMG PARADIGM SHIFT" it's just funny.
You know, the mainstream is pretty much religion-driven.. many might have already plonked you automatically because your postings have contained the words 'disagree', 'progress', 'haskell', or 'scala'. The performance-focused people from the C++ land seem to have a strong conservative view towards new things - like it or not. Walter being mostly a C++ guy and not having written much code in any other language (including D!) only makes the situation a bit worse, if you prefer progress.
Oct 02 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
language_fan wrote:
 Fri, 02 Oct 2009 10:30:24 -0400, Jarrett Billingsley thusly wrote:
 
 I always think it's funny when people are like "so, I had this idea,
 lemme throw this out there. I know it sounds weird, but just bear with
 me - what if there were _no null_? Did I just _blow your mind?_"

 And the perspective of languages with **better type systems**, it's 
 like..
 *plonk* :-P (old-timers might know)
 The whole null/nonnull debate is a complete nonissue in languages like
 Haskell because _they actually treat it formally and correctly_. And
 they've _been_ doing this for years. For all the Java-ites to be like
 "OMG PARADIGM SHIFT" it's just funny.
You know, mainstream is pretty much religion driven.. many might have already plonked you automatically because your postings have contained the words 'disagree', 'progress', 'haskell', or 'scala'. The performance focused people from the c++ land seem to have a strong conservative view towards new things - like it or not. Walter being mostly a C++ guy and not having written much code in any other language (including D!) only makes the situation a bit worse, if you prefer progress.
I'll note two things. For one, Walter is a heck of a lot more progressive than his pedigree might lead one to think. He has taken quite some risks with a number of features that made definite steps outside the mainstream, and I feel he bet on the right horse more often than not. Second, this particular discussion is not about efficiency.

Andrei
Oct 02 2009
parent reply language_fan <foo bar.com.invalid> writes:
Fri, 02 Oct 2009 12:38:33 -0500, Andrei Alexandrescu thusly wrote:

 I'll note two things. For one, Walter is a heck more progressive than
 his pedigree might lead one to think. He has taken quite some risks with
 a number of features that made definite steps outside the mainstream,
 and I feel he bet on the right horse more often than not. Second, this
 particular discussion is not about efficiency.
I apologize for saying it in a way that might hurt Walter. I know he is an extremely talented programmer and also open to new ideas. That is often not the problem.

But it is not that hard to find features in D that are there only to make old C++ users feel comfortable. E.g. C-style pointer syntax is harmful for the syntax of new features like tuples. It is also really confusing, but somehow has to be there since D "must" feel like C++, otherwise someone would notice that D is actually a modern multi-paradigm language that allows even functional programming, which is a bit bad for the reputation in conservative C++ circles.

Some people would not even touch the language with a 10 foot pole if someone dared to provide a practical garbage collector library for it. Because that would mean that there are people with wrong opinions (tm) in the community. I know there is an old and stubborn language war between academic foofoo and "practical aspects".
Oct 02 2009
parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
language_fan wrote:
 Fri, 02 Oct 2009 12:38:33 -0500, Andrei Alexandrescu thusly wrote:
 
 I'll note two things. For one, Walter is a heck more progressive than
 his pedigree might lead one to think. He has taken quite some risks with
 a number of features that made definite steps outside the mainstream,
 and I feel he bet on the right horse more often than not. Second, this
 particular discussion is not about efficiency.
I apologize that I said it in a way that might hurt Walter. I know he is extremely talented programmer and also open for new ideas. That is often not the problem. But it is not that hard to find features in D that are there only to make old C++ users feel comfortable. E.g. C style pointer syntax is harmful for the syntax of new features like tuples. It is also really confusing, but somehow has to be there since D "must" feel like C+ +, otherwise someone would notice that D is actually a modern multi- paradigm language that allows even functional programming, which is a bit bad for the reputation in conservative c++ circles.
You will never be able to please everyone, or get everyone's attention. I don't believe D has some features merely to attract attention to it; that's the thing I like best about D: it provides a very large set of tools and lets me choose how to use them, instead of enforcing a certain model or paradigm.

Pointers are a critical feature of D; they allow both binary compatibility with C code and optimizations not possible without pointers. I use pointers all the time in D, just not nearly as much as I would in C/C++.
 Some people would not even touch the language with a 10 foot pole, if 
 someone dared to provide a practical garbage collector library for it. 
 Because that would mean that there are people with wrong opinions (tm) in 
 the community. I know there is a old and stubborn language war between 
 academic foofoo and "practical aspects".
Academics also seem to live in a fantasy world where code executes instantly and everyone in the world owns the latest computer hardware. They may not have a pet language, but they have pet designs, which is quite equivalent. There are conservative people on all sides :)
Oct 03 2009
parent reply language_fan <somewhere internet.com.invalid> writes:
On Sat, 03 Oct 2009 10:32:28 -0400, Jeremie Pelletier wrote:

 I don't believe D is having some features merely to attract attention to
 it, that's the thing I like best about D; it provides a very large set
 of tools and let me choose how to use them, instead of enforcing a
 certain model or paradigm.
There has to be some limit on the number of features a language can have before managing the complexity gets too large. Imagine that D 4.0 had 50 more keywords than D 2.0 currently has, and those features made your code 5% faster. Would you still love D?
 Pointers are a critical feature of D, they allow both binary
 compatibility with C code and optimizations not possible without
 pointers. I use pointers all the time in D, just not nearly as much as I
 would in C/C++.
I did not argue against pointers in general! Pointers can be useful, but you do not need the C-style syntax for declaring pointers to functions anywhere. I find it hard to read, especially after reading too much maths or functional code.
Oct 03 2009
parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
language_fan wrote:
 On Sat, 03 Oct 2009 10:32:28 -0400, Jeremie Pelletier wrote:
 
 I don't believe D is having some features merely to attract attention to
 it, that's the thing I like best about D; it provides a very large set
 of tools and let me choose how to use them, instead of enforcing a
 certain model or paradigm.
There has to be some limit on the amount of features a language can have before managing the complexity gets too large. Imagine that D 4.0 had 50 keywords more than D 2.0 currently has. Those features would make your code 5% faster. Would you still love D?
Think of the English language: how many words does it have? I would hate to try and express my ideas if I had only 100 words to choose from. Some people do, but we call them simple-minded or uneducated :)

Same for programming: D could have 100 keywords and be the most flexible language ever. Some would think it's the best thing since sliced bread, others would only use the subset they're comfortable with, and a few would be scared away back to JavaScript. People using a very limited subset of words to express their ideas tend to talk more to say less.
 Pointers are a critical feature of D, they allow both binary
 compatibility with C code and optimizations not possible without
 pointers. I use pointers all the time in D, just not nearly as much as I
 would in C/C++.
I did not argue against pointers, in general! Pointers can be useful but you do not need the C style syntax for declaring pointers to functions anywhere. I find it hard to read, especially after reading too much maths or functional code.
It makes writing C bindings that much easier; function pointers are mostly used for C code anyway, since D has the much better delegate type.
Oct 03 2009
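For readers less used to D declarations, a small illustrative comparison of D's function-pointer syntax and a delegate; the C-style form appears only in a comment:

import std.stdio;

int twice(int x) { return x * 2; }

void main()
{
    // D's function-pointer syntax; the equivalent C-style declaration
    // would read "int (*fp)(int)".
    int function(int) fp = &twice;

    // A delegate additionally carries a context pointer, so it can
    // capture local state; a plain function pointer cannot.
    int offset = 10;
    int delegate(int) dg = delegate(int x) { return x + offset; };

    writeln(fp(3)); // 6
    writeln(dg(5)); // 15
}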
parent reply language_fan <somewhere internet.com.invalid> writes:
On Sat, 03 Oct 2009 14:35:22 -0400, Jeremie Pelletier wrote:

 language_fan wrote:
 On Sat, 03 Oct 2009 10:32:28 -0400, Jeremie Pelletier wrote:
 
 I don't believe D is having some features merely to attract attention
 to it, that's the thing I like best about D; it provides a very large
 set of tools and let me choose how to use them, instead of enforcing a
 certain model or paradigm.
There has to be some limit on the amount of features a language can have before managing the complexity gets too large. Imagine that D 4.0 had 50 keywords more than D 2.0 currently has. Those features would make your code 5% faster. Would you still love D?
Think of the english languages, how many words does it have? I would hate to try and express my ideas if I had only 100 words to choose from. Some people do but we call them simple minded or uneducated :)
Comparing spoken languages and formal languages used to program computers is rather far-fetched. Even a small child recognizes more words than a complex programming language has keywords. There are programming languages with a rather minimal set of core keywords and constructs; this makes them in no way more suitable for less intelligent people. And your stance of disagreeing with everyone here does not make you better than the rest of us, it is just irritating.

D is pretty verbose in many respects. There are some totally unnecessary words like 'body' in the grammar. Also, things like foreach_reverse should just die. Even a novice programmer can write a meta-program to replace foreach_reverse without any runtime performance hit.

Designing a crappy programming language is not hard. Usually the elegance arises from clever use of powerful, generic core structures.
Oct 03 2009
next sibling parent reply Justin Johansson <no spam.com> writes:
language_fan Wrote:

 On Sat, 03 Oct 2009 14:35:22 -0400, Jeremie Pelletier wrote:
 
 language_fan wrote:
 On Sat, 03 Oct 2009 10:32:28 -0400, Jeremie Pelletier wrote:
 
 I don't believe D is having some features merely to attract attention
 to it, that's the thing I like best about D; it provides a very large
 set of tools and let me choose how to use them, instead of enforcing a
 certain model or paradigm.
There has to be some limit on the amount of features a language can have before managing the complexity gets too large. Imagine that D 4.0 had 50 keywords more than D 2.0 currently has. Those features would make your code 5% faster. Would you still love D?
Think of the english languages, how many words does it have? I would hate to try and express my ideas if I had only 100 words to choose from. Some people do but we call them simple minded or uneducated :)
Comparing spoken languages and formal languages used to program computers is rather far fetched. Even a small child recognizes more words than a complex programming language has keywords. There are programming languages with rather minimal set of core keywords and constructs. This makes them in no way more suitable for less intelligent people. And your stance of disagreeing with everyone here does not make you better than the rest of us, it is just irritating. D is pretty verbose in many respects. There are some totally unnecessary words like 'body' in the grammar. Also things like foreach_reverse should just die. Even a novice programmer can write a meta-program to replace foreach_reverse without any runtime performance hit. Designing a crappy programming language is not hard. Usually the elegance arises from clever use of powerful, generic core structures.
Re foreach_reverse:

People might remember that when I picked up D and joined this forum just some 3 or so weeks ago I made mention of being a Scala refugee.*** When asked what I didn't like about Scala I commented about there being too many language constructs. Someone here (maybe you, Fan?) consequently pointed out some of the superfluous cruft like foreach_reverse in D. I couldn't agree more; foreach_reverse should be euthanased by intralexical injection forthwith.

(***To be fair, my current interest is in non-JVM-hosted languages and I wouldn't be using a minimalistic language like Clojure (also JVM hosted) either at the moment.)
 Even a novice programmer can write a meta-program to replace foreach_reverse
 without any runtime performance hit.
I haven't had much time to investigate/learn the meta-programming facilities in D, so I'm less than a novice in this respect. If it's not too much trouble, Fan, please post your solution for replacing foreach_reverse with a meta-program; I know it sounds lazy of me, but your answer will save me precious time from having to RTFM. Guessing it's a recursive solution, and btw I am making use of opApply already in a small collection library that I'm messing with.

Cheers
Justin Johansson
Oct 03 2009
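A minimal sketch of the library-side replacement that comes up later in the thread (foreach over retro), assuming std.range.retro is available in the Phobos at hand:

import std.range : retro;
import std.stdio : writeln;

void main()
{
    auto a = [1, 2, 3, 4];

    // Library replacement for foreach_reverse: retro lazily walks the
    // range back to front, with no extra allocation and no cost beyond
    // the usual range primitives.
    foreach (e; retro(a))
        writeln(e); // 4 3 2 1
}

For user collections, the same effect can be had by exposing a range, or an opApply-style wrapper that walks backwards.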
parent reply language_fan <somewhere internet.com.invalid> writes:
On Sat, 03 Oct 2009 16:39:29 -0400, Justin Johansson wrote:

 People might remember that when I picked up D and joined this forum just
 some 3 or so weeks ago I made mention of being a Scala refugee.***  When
 asked what I didn't like about Scala I commented about there being too
 many language constructs.
Compared to D that is not even true. The Scala language spec lists 40 keywords + 10 additional reserved tokens. The D 2.0 spec lists 106 keywords + a bit over 60 reserved tokens. In general there are no features in Scala that are not built around those keywords and tokens. The keywords and tokens are not more heavily overloaded than in D - on the contrary, in my subjective opinion. So how I see things is that the language core in Scala is about 75% smaller, faster to learn, and easier to reason about.

I have to admit that the features often are more powerful than in D. You need to recognize concepts like contra/co/invariance, higher-order functions and kinds, and algebra that is based on terms discussed in lambda calculus books.
Oct 04 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
language_fan wrote:
 On Sat, 03 Oct 2009 16:39:29 -0400, Justin Johansson wrote:
 
 People might remember that when I picked up D and joined this forum just
 some 3 or so weeks ago I made mention of being a Scala refugee.***  When
 asked what I didn't like about Scala I commented about there being too
 many language constructs.
Compared to D that is not even true. The Scala language spec lists 40 keywords + 10 additional reserved tokens. D 2.0 spec lists 106 keywords + a bit over 60 reserved tokens. In general there are no features in Scala that are not built around those keywords and tokens. The keywords and token are not more heavily overloaded than in D, on the contrary in my subjective opinion. So how I see things is that the language core in Scala is about 75% smaller, faster to learn, and easier to reason about. I have to admit that the features often are more powerful than in D. You need to recognize concepts like contra/co/invariance, higher order functions and kinds, and algebra that is based on terms discussed in lambda calculus books.
I agree that a hemorrhage of keywords is of dubious value, and Walter has been much more generous with keywords than I would have ever liked. Assuming you're not hanging out in this group just to feel smug: what steps do you think we could take to make D a better language than it currently is? Andrei
Oct 04 2009
parent reply language_fan <somewhere internet.com.invalid> writes:
On Sun, 04 Oct 2009 04:28:35 -0500, Andrei Alexandrescu wrote:

 language_fan wrote:
 On Sat, 03 Oct 2009 16:39:29 -0400, Justin Johansson wrote:
 
 People might remember that when I picked up D and joined this forum
 just some 3 or so weeks ago I made mention of being a Scala
 refugee.***  When asked what I didn't like about Scala I commented
 about there being too many language constructs.
Compared to D that is not even true. The Scala language spec lists 40 keywords + 10 additional reserved tokens. D 2.0 spec lists 106 keywords + a bit over 60 reserved tokens. In general there are no features in Scala that are not built around those keywords and tokens. The keywords and token are not more heavily overloaded than in D, on the contrary in my subjective opinion. So how I see things is that the language core in Scala is about 75% smaller, faster to learn, and easier to reason about. I have to admit that the features often are more powerful than in D. You need to recognize concepts like contra/co/invariance, higher order functions and kinds, and algebra that is based on terms discussed in lambda calculus books.
I agree that a hemorrhage of keywords is of dubious value, and Walter has been much more generous with keywords than I would have ever liked. Assuming you're not hanging out in this group just to feel smug: what steps do you think we could take to make D a better language than it currently is?
I would concentrate on combining and generalizing the core constructs and types to cut down language complexity. Starting from basic algebraic facts, D lacks built-in first-class sum and product types. Higher-order type operators also feel like a hack. The level of orthogonality is often very poor -- this is the result of the uncontrolled language evolution. D was not designed to be very orthogonal in these respects. Practical languages are rarely built with types in mind. The same applies to the rationale behind built-in meta-properties of types and free-function-like constructs.

Even though templates and string mixins provide some kind of macro facility, I would like to offer something more Scheme-like, with a Template Haskell-like flavor. String mixins are powerful, but unfortunately they do not provide any kind of meta-level type system. As a result, the compile errors appear on the wrong abstraction level. They also fail at capturing symbol references in a nicely scoped manner. Andrei, I remember you also suggested all kinds of macro systems, but the discussion died ages ago.

In OOP I have found Scala and some prototype-based OOP languages to behave in the most elegant way. You should read the OOP articles by Odersky. For instance, try to find a solution to the Node-Edge subtyping problem in D. Experiment with traits to see how powerful they are. Try to find justifications for the lack of genuine new features of Scala (self types, etc.).

I cannot foretell how changing the low level abstractions changes the overall look and feel. Probably some constructs become unnecessary, others will remain. I do not have the skills to build a full language that works optimally in all possible ways.

The keyword comparison was a bit unfair. I find it acceptable for a lower level language to have a bit more keywords, since there are many hardware capabilities that need to have a mapping on the language level.
Oct 04 2009
next sibling parent Jeremie Pelletier <jeremiep gmail.com> writes:
language_fan wrote:
 On Sun, 04 Oct 2009 04:28:35 -0500, Andrei Alexandrescu wrote:
 
 language_fan wrote:
 On Sat, 03 Oct 2009 16:39:29 -0400, Justin Johansson wrote:

 People might remember that when I picked up D and joined this forum
 just some 3 or so weeks ago I made mention of being a Scala
 refugee.***  When asked what I didn't like about Scala I commented
 about there being too many language constructs.
Compared to D that is not even true. The Scala language spec lists 40 keywords + 10 additional reserved tokens. D 2.0 spec lists 106 keywords + a bit over 60 reserved tokens. In general there are no features in Scala that are not built around those keywords and tokens. The keywords and token are not more heavily overloaded than in D, on the contrary in my subjective opinion. So how I see things is that the language core in Scala is about 75% smaller, faster to learn, and easier to reason about. I have to admit that the features often are more powerful than in D. You need to recognize concepts like contra/co/invariance, higher order functions and kinds, and algebra that is based on terms discussed in lambda calculus books.
I agree that a hemorrhage of keywords is of dubious value, and Walter has been much more generous with keywords than I would have ever liked. Assuming you're not hanging out in this group just to feel smug: what steps do you think we could take to make D a better language than it currently is?
I would concentrate on combining and generalizing the core constructs and types to cut down language complexity. Starting from basic algebraic facts, D lacks built-in first class sum and product types. Higher order type operators also feel like a hack. The level of orthogonality is often very poor -- this is the result of the uncontrolled language evolution. D was not designed to be very orthogonal in these respects. Practical languages are rarely built with types in mind. Same applies to rationale behind built-in meta-properties of types and free function like constructs. Even though templates and string mixins provide some kind of macro facility, I would like to offer something more Scheme like, with Template Haskell like flavor. String mixins are powerful, but unfortunately they do not provide any kind of meta-level type system. As a result the compile errors on the wrong abstraction level. It also fails at capturing symbol references in a nicely scoped manner. Andrei, I remember you also suggested all kinds of macro systems, but the discussion died ages ago.
I couldn't agree more; string mixins often feel like a hack in D, and you lose all the semantic information an IDE could use to generate intellisense, for example.
 In OOP I have found Scala and some prototype based OOP languages to 
 behave in the most elegant way. You should read the OOP articles by 
 Odersky. For instance, try to find a solution to the Node-Edge subtyping 
 problem in D. Experiment with traits to see how powerful they are. Try to 
 find justifications for the lack of genuine new features of Scala (self 
 types, etc.).
Aren't prototype-based objects only possible with a VM? I mean, the prototype can be extended at any time throughout execution. It also adds a level of indirection, since the objects don't hold a direct reference to the vtable: they hold a reference to the prototype, which contains the dynamic vtable, and string identifiers are used to resolve calls to the proper method since vtable indices aren't known at compile time.

Prototype-based OOP is a great model, but I don't think it can be implemented in compiled languages like D.
Oct 04 2009
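To make the indirection described above concrete, here is a tiny hand-rolled sketch (all names hypothetical): methods live in a string-keyed table on a prototype object, lookups fall back to a parent prototype, and the table can grow at run time.

import std.stdio;

// A "prototype" is just a mutable table of named methods.
class Proto
{
    int delegate(int)[string] methods;
    Proto parent; // lookup falls back to the parent prototype

    int delegate(int) lookup(string name)
    {
        if (auto p = name in methods)
            return *p;
        if (parent !is null)
            return parent.lookup(name);
        throw new Exception("no such method: " ~ name);
    }
}

void main()
{
    auto base = new Proto;
    base.methods["double"] = delegate(int x) { return x * 2; };

    auto child = new Proto;
    child.parent = base;
    // The prototype can be extended at run time.
    child.methods["triple"] = delegate(int x) { return x * 3; };

    writeln(child.lookup("double")(7)); // 14, resolved through the parent
    writeln(child.lookup("triple")(7)); // 21
}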
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
language_fan wrote:
 On Sun, 04 Oct 2009 04:28:35 -0500, Andrei Alexandrescu wrote:
 
 language_fan wrote:
 On Sat, 03 Oct 2009 16:39:29 -0400, Justin Johansson wrote:

 People might remember that when I picked up D and joined this forum
 just some 3 or so weeks ago I made mention of being a Scala
 refugee.***  When asked what I didn't like about Scala I commented
 about there being too many language constructs.
Compared to D that is not even true. The Scala language spec lists 40 keywords + 10 additional reserved tokens. D 2.0 spec lists 106 keywords + a bit over 60 reserved tokens. In general there are no features in Scala that are not built around those keywords and tokens. The keywords and token are not more heavily overloaded than in D, on the contrary in my subjective opinion. So how I see things is that the language core in Scala is about 75% smaller, faster to learn, and easier to reason about. I have to admit that the features often are more powerful than in D. You need to recognize concepts like contra/co/invariance, higher order functions and kinds, and algebra that is based on terms discussed in lambda calculus books.
I agree that a hemorrhage of keywords is of dubious value, and Walter has been much more generous with keywords than I would have ever liked. Assuming you're not hanging out in this group just to feel smug: what steps do you think we could take to make D a better language than it currently is?
I would concentrate on combining and generalizing the core constructs and types to cut down language complexity. Starting from basic algebraic facts, D lacks built-in first class sum and product types. Higher order type operators also feel like a hack. The level of orthogonality is often very poor -- this is the result of the uncontrolled language evolution. D was not designed to be very orthogonal in these respects. Practical languages are rarely built with types in mind. Same applies to rationale behind built-in meta-properties of types and free function like constructs.
Well, this is a bit vague to act on. So, you say D lacks built-in first-class sum and product types. Yet Tuple is a product type. In spite of appearances, it's a built-in type, just that it has no literal. I don't see that as a deal breaker. Then I fail to find fault with Algebraic (in std.variant) as a sum type. I need to add visitation to it, but other than that I don't think Algebraic is worse than a built-in type.

I agree that the language is unorthogonal, but realistically there isn't a lot we can do about that now.
 Even though templates and string mixins provide some kind of macro 
 facility, I would like to offer something more Scheme like, with Template 
 Haskell like flavor. String mixins are powerful, but unfortunately they 
 do not provide any kind of meta-level type system. As a result the 
 compile errors on the wrong abstraction level. It also fails at capturing 
 symbol references in a nicely scoped manner. Andrei, I remember you also 
 suggested all kinds of macro systems, but the discussion died ages ago.
It hasn't died. We just concluded that it would take many months to define and implement a decent macro system. We also had a ton of other things to do, so we decided macros have to wait.
 In OOP I have found Scala and some prototype based OOP languages to 
 behave in the most elegant way. You should read the OOP articles by 
 Odersky. For instance, try to find a solution to the Node-Edge subtyping 
 problem in D. Experiment with traits to see how powerful they are. Try to 
 find justifications for the lack of genuine new features of Scala (self 
 types, etc.).
I am very familiar with much of Odersky's work and have a lot of respect for it. But then Walter created D and has brought his world view into D, not someone else's. We can't go like, hey, let's wheelbarrow whatever's good in language X into D. That's why I specifically asked "what steps we need to take", hoping for much more detail and aim at integration than "Scala is good".

Regarding the Node-Edge subtyping problem, I'd appreciate a link. The first relevant result returned by Google is your own post :o). Is this relevant? http://www.jot.fm/issues/issue_2008_06/article3.pdf
 I cannot foretell how changing the low level abstractions changes the 
 overall look and feel. Probably some constructs become unnecessary, 
 others will remain. I do not have the skills to build a full language 
 that works optimally in all possible ways.
I guess you're in good company - nobody quite does.
 The keyword comparison was a bit unfair. I find it acceptable for a lower 
 level language to have a bit more keywords since there are many hardware 
 capabilities that need to have a mapping on the language level.
I don't know. I agree that D's relative abundance of keywords doesn't seem to be a huge problem in practice, but I also think we shouldn't add many more lest a phase shift of sorts occur. And then, just like you, I prefer orthogonality, and abundance of keywords is abundance of magic and exceptions.

I'd prefer "classinfo" to be an identifier like any other and obey the same rules that every symbol in its scope does. I'm very happy that Tomasz Stachowiak revealed to me that in-situ class allocation can be elegantly defined as a library facility. I don't care for foreach_reverse because foreach (e; retro(r)) actually does better. The popping of "length" into scope inside slice bounds is about the most distasteful hack there is, which gives "length" an almost-keyword status. And so on...

Andrei
Oct 04 2009
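For concreteness, a short sketch of the two library types mentioned above, std.typecons.Tuple as a product type and std.variant.Algebraic as a sum type, assuming a Phobos version where both are available:

import std.stdio;
import std.typecons : Tuple, tuple;
import std.variant : Algebraic;

void main()
{
    // Product type: a value that holds an int *and* a string.
    Tuple!(int, string) pair = tuple(42, "answer");
    writeln(pair[0], " ", pair[1]);

    // Sum type: a value that holds an int *or* a double, never both.
    Algebraic!(int, double) num = 3;
    assert(num.type == typeid(int));
    num = 2.5;
    assert(num.type == typeid(double));
    writeln(num);
}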
parent language_fan <foo bar.com.invalid> writes:
Sun, 04 Oct 2009 11:08:52 -0500, Andrei Alexandrescu thusly wrote:

 So, you say D lacks built-in
 first-class sum and product types. Yet Tuple is a product type. In spite
 of appearances, it's a built-in type, just that it has no literal.
Not true. A tuple of tuples, for instance, breaks the property (so you need struct tuple hacks). The auto-flattening is just harmful. Also, not only does it not have a literal, in many places its use has been disabled. Recent versions of the compiler have started to throw errors in those use cases. Previously I expected it to be fixed, but apparently the feature was considered too good to be allowed.
 don't see that a deal breaker. Then I fail to find fault for Algebraic
 (in std.variant) as a sum type. I need to add visitation to it, but
 other than that I don't think Algebraic is worse than a built-in type.
Ok, might be. I have not used it yet. At least it's too verbose for my taste, and too much verbosity makes a feature impractical to use.
 I remember you also suggested all kinds of macro systems, but the
 discussion died ages ago.
It hasn't died. We just concluded that it would take many months to define and implement a decent macro system. We also had a ton of other things to do, so we decided macros have to wait.
Ok.
 I am very familiar with much of Odersky's work and have a lot of respect
 for it. But then Walter created D and has brought his world view in D,
 not someone else's. We can't go like, hey, let's wheelbarrow whatever's
 good in language X into D. That's why I specifically asked "what steps
 we need to take" hoping for much more detail and aim at integration than
 "Scala is good".
I agree you don't need to copy each feature. There are just some open problems, and it would be really nice if the language could solve them. Since D is a practical language, you might say that it doesn't need to solve every possible problem (especially not high-level problems), just some low-level systems programming related ones.
 Regarding the Node-Edge subtyping problem, I'd appreciate a link.
http://lampwww.epfl.ch/~odersky/papers/ScalableComponent.html, page 7 in the pdf.
Oct 04 2009
prev sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
language_fan wrote:

 On Sat, 03 Oct 2009 16:39:29 -0400, Justin Johansson wrote:
 
 People might remember that when I picked up D and joined this forum just
 some 3 or so weeks ago I made mention of being a Scala refugee.***  When
 asked what I didn't like about Scala I commented about there being too
 many language constructs.
Compared to D that is not even true. The Scala language spec lists 40 keywords + 10 additional reserved tokens. D 2.0 spec lists 106 keywords + a bit over 60 reserved tokens. In general there are no features in Scala that are not built around those keywords and tokens. The keywords and token are not more heavily overloaded than in D, on the contrary in my subjective opinion. So how I see things is that the language core in Scala is about 75% smaller, faster to learn, and easier to reason about. I have to admit that the features often are more powerful than in D. You need to recognize concepts like contra/co/invariance, higher order functions and kinds, and algebra that is based on terms discussed in lambda calculus books.
How do you think Scala is going to manage to be a popular alternative to Java by requiring its users to read books about lambda calculus?

The keyword metric is so flawed you just cannot base any argument around it. Take for example the debug statement in D. It's very unorthogonal, basically just the same as version(debug). D is a syntax-heavy language, but a lot of that comes from such simplistic features. Heck, you could build a whole programming language with the number of tokens D reserves specially for stuff around floating-point types! But they make the language only slightly more complicated by requiring the programmer to remember a few more words and meanings that don't interact so heavily with other features. Contrast this to, say, the interaction between the C preprocessor and the C++ template system, or the various meanings of const in C++. Now that makes a language more *complex*. If you want to measure the 'size' of a language (what does that mean, by the way?), not all tokens should weigh equally.

I highly recommend this short and subjective article on the subject by Yukihiro Matsumoto: "Treating Code as an Essay" from the book Beautiful Code. Unfortunately I couldn't find it online.
Oct 04 2009
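To make the overlap concrete: the debug statement behaves much like a version block keyed to a compiler switch. A tiny sketch, where the identifier verbose is arbitrary:

import std.stdio;

void main()
{
    // Compiled in only when dmd is run with -debug.
    debug writeln("debug build");

    // Compiled in only when dmd is run with -version=verbose.
    version (verbose) writeln("verbose build");

    writeln("always printed");
}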
next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Forgot to qualify my reply: I don't know Scala, so you might as well be right, and I do not mean to say that D isn't a complex language.
Oct 04 2009
prev sibling parent reply language_fan <somewhere internet.com.invalid> writes:
On Sun, 04 Oct 2009 11:51:13 +0200, Lutger wrote:

 How do you think Scala is going to manage to be a popular alternative
 for Java by requiring its user to read books about lambda calculus?
It's not necessary. Often removing extra semicolons and changing the form '<type> <value>' to '<value> : <type>' suffices. Especially when porting Java code.
 The keyword metric is so flawed you just cannot base any argument around
 it.
I admitted that later. Some of the keywords have a strong justification behind them. Others feel irritatingly unnecessary.

Imagine if D had keywords goto_unknown, goto_forward, goto_backward, and goto_here (which is rewritten as while(true) {} by the compiler), based on how the program counter will change. Jumps to addresses with a lower program counter value use goto_backward, vice versa for goto_forward. Situations where the pc value is runtime-determined would have to use goto_unknown. D 2.0 would also have the construct 'goto dynamic (addr);' which would behave identically to 'goto_unknown addr;'. Now some of you might see a pattern here and would try to combine the various gotos into a single keyword. I am talking about a similar situation, but my focus is on a higher level.
 I highly recommend this short and subjective article on the subject by
 Yukihiro Matsumoto: "Treating Code as an Essay" from the book Beautiful
 Code. Unfortunately I couldn't find it online.
Thanks. Added to my todo list.
Oct 04 2009
parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
language_fan wrote:
 On Sun, 04 Oct 2009 11:51:13 +0200, Lutger wrote:
 
 How do you think Scala is going to manage to be a popular alternative
 for Java by requiring its user to read books about lambda calculus?
It's not necessary. Often removing extra semicolons and changing the form '<type> <value>' to '<value> : <type>' suffices. Especially when porting Java code.
 The keyword metric is so flawed you just cannot base any argument around
 it.
I admitted that later. Some of the keywords have a strong justification behind them. Others feel irritatingly unnecessary.
I would rather have many different specialized keywords than a few keywords with many different meanings. It's *much* easier to remember a large set of simple words than a small set of complex words.
 Imagine if D had keywords goto_unknown, goto_forward and goto_backward, 
 and goto_here (which is rewritten as while(true) {} by the compiler), 
 based on how the program counter will change. Jumps to addresses with a 
 lower program counter value use goto_backward, vice versa for 
 goto_forward. Situations where the pc value is runtime determined would 
 have to use goto_unknown. D 2.0 would also have construct 'goto dynamic
 (addr);' which would behave identically to 'goto_unknown addr;'. Now some 
 of you might see a pattern here and would try to combine the various 
 gotos into a single keyword. I am talking about similar situation, but my 
 focus is on a higher level.
I can't make sense out of this example, since there's only one possible goto with current hardware, and even VMs don't care where you jump to. I think you meant to say that we could generalize keywords and let the compiler decide what we meant to do. And that is not always a good thing; sometimes you want your keyword to be verbose to other programmers too, for example the recent discussion about casts which stated D should have support for explicit static, dynamic and reinterpret casts just like C++.
Oct 04 2009
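The explicitly named casts mentioned above can be spelled as ordinary templates in today's D; the helpers below are hypothetical names, not existing library functions, and are only meant to show that the extra verbosity needs no new keywords:

import std.stdio;

class Base { }
class Derived : Base { int extra = 7; }

// Downcast between class references; D's cast already does a runtime
// check here and yields null on failure, much like C++'s dynamic_cast.
T dynamicCast(T, U)(U obj) if (is(T == class) && is(U == class))
{
    return cast(T) obj;
}

// Reinterpret the bit pattern of a value as another type of equal size.
T reinterpretCast(T, U)(U value) if (T.sizeof == U.sizeof)
{
    return *cast(T*) &value;
}

void main()
{
    Base b = new Derived;
    auto d = dynamicCast!Derived(b);
    assert(d !is null && d.extra == 7);

    float f = 1.0f;
    uint bits = reinterpretCast!uint(f);
    writeln(bits); // 1065353216, the IEEE-754 bit pattern of 1.0f
    assert(bits == 0x3f800000);
}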
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jeremie Pelletier wrote:
 language_fan wrote:
 I admitted that later. Some of the keywords have a strong 
 justification behind them. Others feel irritatingly unnecessary.
I would rather have many different specialized keywords than a few keywords with many different meanings. Its *much* easier to remember a large set of simple words than a small set of complex words.
Many of the keywords come from each basic type having its own keyword. Sure, it could be done like C does with "unsigned long", etc., but those were always hard to grep for. Also, the complex and imaginary types will be removed at some point and replaced with a library type; there go 6 keywords.
Oct 04 2009
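A bare-bones sketch of what a library complex type can look like as a plain struct, using D2's opBinary overloading; it is illustrative only and omits everything a real library version needs (division, special values, formatting):

import std.stdio;

struct Complex(T)
{
    T re = 0;
    T im = 0;

    Complex opBinary(string op : "+")(Complex rhs)
    {
        return Complex(re + rhs.re, im + rhs.im);
    }

    Complex opBinary(string op : "*")(Complex rhs)
    {
        // (a+bi)(c+di) = (ac - bd) + (ad + bc)i
        return Complex(re * rhs.re - im * rhs.im,
                       re * rhs.im + im * rhs.re);
    }
}

void main()
{
    auto a = Complex!double(1, 2);
    auto b = Complex!double(3, -1);
    auto s = a + b;
    auto p = a * b;
    writeln(s.re, " ", s.im); // 4 1
    writeln(p.re, " ", p.im); // 5 5
}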
parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
Walter Bright wrote:
 Jeremie Pelletier wrote:
 language_fan wrote:
 I admitted that later. Some of the keywords have a strong 
 justification behind them. Others feel irritatingly unnecessary.
I would rather have many different specialized keywords than a few keywords with many different meanings. Its *much* easier to remember a large set of simple words than a small set of complex words.
Many of the keywords come from each basic type having its own keyword. Sure, it could be done like C does with "unsigned long", etc., but those were always hard to grep for.
I agree, especially since most libraries redefine these types so they don't have to use "unsigned long" and the like all over the place, and to abstract over different compilers. Having standard types in D is one of its best features; it just makes everything much easier.
 Also, the complex and imaginary types will be removed at some point and 
 replaced with a library type; there goes 6 keywords.
Why? What's the rationale behind such a move? These types will always be handled the same no matter what library implements them. They are always tricky to use in C since different compilers implement them differently; why do the same in D?
Oct 04 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jeremie Pelletier wrote:
 Walter Bright wrote:
 Also, the complex and imaginary types will be removed at some point 
 and replaced with a library type; there goes 6 keywords.
Why? What's the rationale behind such a move? These types will always be handled the same no matter what library implements them. These are always tricky to use in C since different compilers implement them differently, why do the same in D?
Using a standard library type solves the standardization problem. The big reason for moving it to a library type is that the user-defined type capabilities of D have grown to the point where there is no longer much of any advantage to having it built in. Simplifying the internal logic of the compiler then has a lot of advantages.
Oct 04 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 The 
 big reason for moving it to a library type is the user defined type 
 capabilities of D have grown to the point where there is no longer much 
 of any advantage to having it built in.
If the compiler/language is now flexible enough to allow the creation of a very good complex number, and the compilation time for such library numbers is good enough, and they get compiled efficiently enough, then removing them from the language is positive. But is the compiler now good enough to allow implementing very good complex numbers in the std lib?

One problem is to have a good syntax to define and use complex numbers. Some time ago I even suggested keeping the complex syntax in the compiler and moving the implementation into the std lib.

Another problem that I think is still present is the lack of a method like opBool, that gets called in implicit boolean situations like if(somecomplex){...

A third and more serious question is whether the library complex type avoids the pitfalls discussed in the page about built-in complex numbers on the digitalmars site.

Bye,
bearophile
Oct 04 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Walter Bright:
 
 The big reason for moving it to a library type is the user defined
 type capabilities of D have grown to the point where there is no
 longer much of any advantage to having it built in.
If the compiler/language is now flexible enough to allow the creation of a very good complex number, and the compilation time for such library numbers is good enough, and they get compiled efficiently enough, then removing them from the language is positive. But is the compiler now good enough to allow to implement very good complex numbers in the std lib?
Quoting myself: please name five remarkable complex literals.
 One problem is to have a good syntax to define and use complex
 numbers. Time ago I have even suggested to keep the complex syntax in
 the compiler, and move the implementation in the std lib.
 
 Another problem that I think is present still is the lack of a method
 like opBool, that gets called in implicit boolean situations like
 if(somecomplex){...
That's not opBool, it's opIf. Testing with if does not mean conversion to bool and then testing the bool.
 A third and more serious question is if the library complex type
 avoids the pitfalls discussed in the page about built-in complex
 numbers in the digitalmars site.
I'd love to hear more about that. I've asked several times about it and never got a clear answer. My feeling is that an obscure mathematician burped during a conference in the 1960s and was overheard and misunderstood by someone who spread the news that complex numbers must be built-in, or else. Andrei
Oct 04 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

please name five remarkable complex literals.<
I agree that having a syntax is often not necessary (but it may be handy). Complex literals in a program can be unremarkable too: they can be arguments of complex functions, integration intervals, default values that replace missing argument inputs, etc.
 That's not opBool, it's opIf. Testing with if does not mean conversion 
 to bool and then testing the bool.
opIf sounds strange :-) Why don't you like the idea of the implicit conversion to bool followed by the testing of the bool? (Someone may have already answered a similar question; please bear with me.) (Python has such a standard method, as I have described (its name is different),
 I'd love to hear more about that. I've asked several times about it and 
 never got a clear answer.
Probably you have to ask people who use complex numbers heavily, like in refined numerical simulations. Do you know some researcher at a university? A numerical physicist or a teacher of numerical computation may be fine. If you ask normal programmers they probably will not give you a good answer. To design certain language features you need experts :-)

Bye,
bearophile
Oct 04 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 opIf sounds strange :-) Why don't you like the idea of the implicit
 conversion to bool followed by the testing of the bool? (someone may
 have already answered a similar question, please bear with me).
Try this:

void * p;
if (p) {}

Then try this:

void * p;
bool b = p;

Andrei
Oct 04 2009
parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Sun, Oct 4, 2009 at 5:58 PM, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:
 bearophile wrote:
 opIf sounds strange :-) Why don't you like the idea of the implicit
 conversion to bool followed by the testing of the bool? (someone may
 have already answered a similar question, please bear with me).
 Try this:

 void * p;
 if (p) {}

 Then try this:

 void * p;
 bool b = p;
Okay, how about if if() used the result of an *explicit* cast to bool? Barring the silly opCast operator (and maybe providing something else), would this be consistent?
Oct 04 2009
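For what it's worth, a struct can already opt into if tests in D2 through opCast to bool, which is close to the explicit-cast behavior asked about here. A minimal sketch, assuming a compiler where this rule applies:

import std.stdio;

struct Handle
{
    void* ptr;

    // Called when the struct is used in a boolean context such as if (h).
    bool opCast(T : bool)()
    {
        return ptr !is null;
    }
}

void main()
{
    Handle h;
    if (h)
        writeln("non-null");
    else
        writeln("null");        // this branch runs

    // The conversion stays explicit elsewhere: plain assignment to a bool
    // still requires a written-out cast.
    bool b = cast(bool) h;
    writeln(b); // false
}

Note that the if test does not imply a general implicit conversion to bool, which is the distinction being argued above.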
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
bearophile wrote:
 (Python has such standard method, as I have described (its name is diffeernt),

Hi bearophile,
Oct 05 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Ary Borenszweig:

They are just named operator true and operator false. What I don't understand is why there are two of them; one of them seems enough to me:

http://msdn.microsoft.com/en-us/library/6x6y6z4d%28loband%29.aspx
http://www.blackwasp.co.uk/CSharpTrueFalseOverload.aspx

Bye,
bearophile
Oct 05 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
To test my level of ignorance of D2, and to test how structs can be used to
implement complex numbers in the std lib, I have done a few experiments with
something similar: subsets of integers.

I have found some problems; some of them may come from my ignorance. This is a
natural number:

import std.stdio: writeln;

struct Natural {
    int x = 1;
    alias x this;

    invariant() {
        assert(x >= 1);
    }
}

void main() {
    Natural x, y;
    x = 10;
    y = -20; // OK, invariant() isn't called because it's a direct access

    // From the D2 docs: http://www.digitalmars.com/d/2.0/class.html#Invariant
    // The invariant can be checked when a class object is the argument
    // to an assert() expression, as:
    assert(x); // test.d(20): Error: expression x of type Natural does not have a boolean value
}

It seems invariant() can be used in structs too (even though the D2 docs call
them class invariants), but assert(x) doesn't seem to work.
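A possible workaround, assuming the assert-on-a-pointer form in the spec applies here (for structs, the invariant is supposed to run when the argument of assert is a pointer to the instance):

assert(&x); // runs Natural's invariant and passes
assert(&y); // runs the invariant and fails, since y holds -20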


To solve the problem of the invariant not being called on assignment, I have
written this second version:

import std.stdio: writeln;
import std.conv: to;

struct Natural {
    int x_ = 1;

    int x() { return this.x_; }
    int x(int xx) { this.x_ = xx; return xx; }
    alias x this;

    invariant() { assert(this.x_ >= 1, "not a natural"); }

    string toString() { return to!string(this.x_); }
}

void main() {
    Natural x, y;
    x = 10;
    writeln(x, " ", x + x * 3); // OK

    // a problem: the error message gives the line number of the
    // assert instead of the assignment
    y = -20; // core.exception.AssertError test2.d(11): not a natural
}

Now it works better, but the assert gives an unhelpful line number.
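One hedged way around the line-number problem, assuming __FILE__/__LINE__ default arguments (which are filled in at the call site) and the core.exception.AssertError constructor that takes a file and line: let the setter report the caller's location itself instead of leaving it to the invariant.

import core.exception: AssertError;

struct Natural {
    int x_ = 1;

    int x() { return this.x_; }
    int x(int xx, string file = __FILE__, size_t line = __LINE__) {
        if (xx < 1)
            throw new AssertError("not a natural", file, line);
        return this.x_ = xx;
    }
    alias x this;
}

void main() {
    Natural y;
    y = -20; // the AssertError now reports this line
}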



This is a variant, a ranged value:

import std.stdio: writeln;
import std.conv: to;

struct Ranged(int RANGED_MIN, int RANGED_MAX) {
    int x_ = RANGED_MIN;

    int x() { return this.x_; }
    int x(int xx) { this.x_ = xx; return xx; }
    alias x this;

    invariant() {
        //assert(this.x_ >= RANGED_MIN, "Ranged value too much small");
        assert(this.x_ < RANGED_MAX, "Ranged value too much big");
    }

    string toString() { return to!string(this.x_); }
}

void main() {
    typedef Ranged!(10, 20) ranged;
    ranged x;
    writeln(x);
    ranged y = 1000; // Uh, bypasses the setter, no errors
    writeln(y); // 0?
}


I have commented out the first assert to understand better what's happening.
In the line:
ranged y = 1000;
The invariant isn't called, the value 1000 goes nowhere, and even the
int x_ = RANGED_MIN; initializer is bypassed, so this.x_ ends up zero.
This will be another problem for library-defined complex numbers. So in the end
I may like to keep complex numbers in the language for now.
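A hedged workaround for the initialization bypass, as a sketch only (using alias instead of typedef and leaving out the alias-this plumbing): give the struct a constructor, so initializing it with a value has to go through a check.

struct Ranged(int MIN, int MAX) {
    int x_ = MIN;

    this(int v) {
        assert(v >= MIN && v < MAX, "Ranged value out of range");
        x_ = v;
    }
}

void main() {
    alias Ranged!(10, 20) ranged;
    ranged x = ranged(15);      // fine
    // ranged y = ranged(1000); // fails the assert, and at this line
    assert(x.x_ == 15);
}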

Bye,
bearophile
Oct 04 2009
parent bearophile <bearophileHUGS lycos.com> writes:
     ranged y = 1000; // Uh, bypasses the setter, no errors
     writeln(y); // 0?
In the latest version of DMD it gives an error:

ranged y = 1000;
temp.d(23): Error: cannot implicitly convert expression (1000) of type int to ranged

Better. And opCast may help here. No implicit cast. Bye, bearophile
Oct 05 2009
prev sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 05 Oct 2009 01:03:01 +0400, bearophile <bearophileHUGS lycos.com>  
wrote:

 Walter Bright:

 The
 big reason for moving it to a library type is the user defined type
 capabilities of D have grown to the point where there is no longer much
 of any advantage to having it built in.
If the compiler/language is now flexible enough to allow the creation of a very good complex number, and the compilation time for such library numbers is good enough, and they get compiled efficiently enough, then removing them from the language is positive. But is the compiler now good enough to allow very good complex numbers to be implemented in the std lib? One problem is having a good syntax to define and use complex numbers. Some time ago I even suggested keeping the complex syntax in the compiler and moving the implementation into the std lib. Another problem that I think is still present is the lack of a method like opBool that gets called in implicit boolean situations like if(somecomplex){... A third and more serious question is whether the library complex type avoids the pitfalls discussed in the page about built-in complex numbers on the digitalmars site. Bye, bearophile
I don't see any reason why

if (someComplexNumber) { ... }

should be valid code; it hardly makes any sense to me. In fact, I try to avoid if (foo) as much as possible (unless foo is a bool, of course):

if (somePtr !is null) { }
if (someInt != 0) { }

but:

if (someCondition) { }

I wouldn't like to sacrifice code clarity to save a few keystrokes, but maybe it's just me.
Oct 04 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Denis Koroskin:

I don't see any reason why if (someComplexNumber) { ... } should be a  
valid code, it hardly makes any sense for me.< In general I think adding a boolean-evaluation standard method to D can be positive and handy and not that bug-prone. But complex numbers are FP, so you usually test whether they are close to zero (a test for exactly zero can be useful to know whether a value was not changed, etc.). So I agree with you that for complex numbers such a boolean-evaluation method isn't very useful. Once D has such an operator, it can be defined for library complex numbers too, but it probably will not be used often. It's useful if you want to write generic code: if in a templated function you use if(x){... it will work with integers, doubles, complex values, etc., without special casing. Bye, bearophile
Oct 05 2009
parent reply downs <default_357-line yahoo.de> writes:
bearophile wrote:
 Denis Koroskin:
 
 I don't see any reason why if (someComplexNumber) { ... } should be a  
 valid code, it hardly makes any sense for me.< In general I think adding a boolean-evaluation standard method to D can be positive and handy and not that bug-prone. But complex numbers are FP, so you usually test whether they are close to zero (a test for exactly zero can be useful to know whether a value was not changed, etc.). So I agree with you that for complex numbers such a boolean-evaluation method isn't very useful. Once D has such an operator, it can be defined for library complex numbers too, but it probably will not be used often. It's useful if you want to write generic code: if in a templated function you use if(x){... it will work with integers, doubles, complex values, etc., without special casing.
I'm not buying that. What kind of function would that be? I can't imagine a need for this.
Oct 05 2009
parent bearophile <bearophileHUGS lycos.com> writes:
downs:

 I'm not buying that. What kind of function would that be? I can't imagine a
need for this.
I don't know, sorry. But I'd like to have such a method to define my collections as false when they are empty; this is really handy. And it may be useful to make nullable values false. Bye, bearophile
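A minimal sketch of the empty-collection case, assuming a struct container and the newer templated opCast (the Stack type is made up for illustration):

struct Stack(T) {
    T[] items;

    void push(T v) { items ~= v; }
    bool opCast(B : bool)() const { return items.length != 0; }
}

void main() {
    Stack!int s;
    assert(!s);          // empty stack evaluates as false
    s.push(42);
    if (s)               // non-empty stack evaluates as true
        s.push(43);
    assert(s.items.length == 2);
}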
Oct 05 2009
prev sibling parent reply Justin Johansson <no spam.com> writes:
Jeremie Pelletier Wrote:

 Walter Bright wrote:
 Jeremie Pelletier wrote:
 language_fan wrote:
 I admitted that later. Some of the keywords have a strong 
 justification behind them. Others feel irritatingly unnecessary.
I would rather have many different specialized keywords than a few keywords with many different meanings. It's *much* easier to remember a large set of simple words than a small set of complex words.
Many of the keywords come from each basic type having its own keyword. Sure, it could be done like C does with "unsigned long", etc., but those were always hard to grep for.
I agree, especially since most libraries redefine these types to not have to use "unsigned long" and others all over the place and to abstract different compilers. Having standard types in D is one of its best features; it just makes everything much easier.
 Also, the complex and imaginary types will be removed at some point and 
 replaced with a library type; there goes 6 keywords.
 Many of the keywords come from each basic type having its own keyword. 
 Sure, it could be done like C does with "unsigned long", etc., but those 
 were always hard to grep for.
Agree 110%. Not only hard to grep for, but also an example of orthogonality not actually helping. The number of C/C++ programs that I've had to deal with over the years, all defining their own standard-library int types just to nail down the bit size, is mind-boggling. UINT8, uint8, UInt8, SINT8, int8, Int8, int8_t, uint8_t, ubyte, sbyte ... and these are just some of the 8-bit variations. Then, to make things worse, someone invents compiler switches to make ints signed or unsigned by default. What madness.
 Why? What's the rationale behind such a move? These types will always be 
 handled the same no matter what library implements them. These are 
 always tricky to use in C since different compilers implement them 
 differently, why do the same in D?
I'm with Jeremie on this one .. or at least the jury should still be out.

Imaginary numbers have the same right to life as real numbers. How many scientific, engineering, and applied maths problems have been solved because of the invention or discovery of complex numbers? I like the idea of a language that treats its complex numbers as first-class citizens. Come to think of it, it was one of the first salient features of D that drew me to the language.

I speak not only with an emotive affection towards complex numbers but with many years of practical experience with DSP (digital signal processing) software, software development at the coalface using GMM (Gaussian mixture models) for speech processing, FFT (Fast Fourier Transforms) in general, and the FFTW (The Fastest Fourier Transform in the West*) C FFT library (whose authors, by the way, received a prestigious award for their contribution to numerical software**).

* http://www.fftw.org/
** http://www.mcs.anl.gov/research/opportunities/wilkinsonprize/3rd-1999.php

It's a difficult challenge to get high-performance, readable and maintainable code out of complex-number-intensive algorithms. Use of library types for complex numbers has, in my experience, been problematic. Complex numbers should be first-class value types, I say.

My $0.05

-- Justin Johansson
Oct 04 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Justin Johansson" <no spam.com> wrote in message 
news:haavf1$2gs7$1 digitalmars.com...
 It's a difficult challenge to get high performance, readable and 
 maintainable code out of complex number
 intensive algorithms.   Use of library types for complex numbers has, in 
 my experience been problematic.
 Complex numbers should be first class value types I say.
There's been discussion before (I can't find it now, or remember the name for it) of type systems that allow for proper handling of things like m/s vs. m/(s*s) vs. inch/min, etc. I haven't actually worked with such a feature or with complex/imaginary numbers in any actual code, so I can't be sure, but I've been wondering if a type system like that would be an appropriate (or even ideal) way to handle real/complex/imaginary numbers. (I've also been wondering if it might be a huge benefit for distinguishing between strings that represent a filename vs. file content vs. file-extension-only vs. relative-path+filename vs. absolute-path-only, etc. I've been really wanting a better way to handle that than just a variable naming convention.)
Oct 04 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 "Justin Johansson" <no spam.com> wrote in message 
 news:haavf1$2gs7$1 digitalmars.com...
 It's a difficult challenge to get high performance, readable and 
 maintainable code out of complex number
 intensive algorithms.   Use of library types for complex numbers has, in 
 my experience been problematic.
 Complex numbers should be first class value types I say.
There's been discussion before (I can't find it now, or remember the name for it) of type systems that allow for proper handling of things like m/s vs. m/(s*s) vs inch/min etc. I haven't actually worked with such a feature or with complex/imaginary numbers in any actual code, so I can't be sure, but I've been wondering if a type system like that would be an appropriate (or even ideal) way to handle real/complex/imaginary numbers.
It better be. Complex numbers aren't that complicated of a notion. What's lost in pulling them out of the language is the ability to define literals. Now please name five remarkable complex literals. The feature you're referring to is called dimensional analysis.
 (I've also 
 been wondering if it might be a huge benefit for distinguishing between 
 strings that represent a filename vs file content vs file-extention-only vs 
 relative-path+filename, vs absolute-path-only, etc. I've been really wanting 
 a better way to handle that than just a variable naming convention.) 
I don't quite think so. In fact I don't think so at all. Pathnames of various flavors evolve quite a bit in many programs, and having to worry about tracking their type throughout is too much aggravation to be worthwhile. The last thing I'd want when manipulating pathnames would be a stickler of a library slapping my wrist anytime I misuse one of its six dedicated types. Andrei
Oct 04 2009
next sibling parent reply Justin Johansson <no spam.com> writes:
Andrei Alexandrescu Wrote:

 Nick Sabalausky wrote:
 "Justin Johansson" <no spam.com> wrote in message 
 news:haavf1$2gs7$1 digitalmars.com...
 It's a difficult challenge to get high performance, readable and 
 maintainable code out of complex number
 intensive algorithms.   Use of library types for complex numbers has, in 
 my experience been problematic.
 Complex numbers should be first class value types I say.
There's been discussion before (I can't find it now, or remember the name for it) of type systems that allow for proper handling of things like m/s vs. m/(s*s) vs inch/min etc. I haven't actually worked with such a feature or with complex/imaginary numbers in any actual code, so I can't be sure, but I've been wondering if a type system like that would be an appropriate (or even ideal) way to handle real/complex/imaginary numbers.
It better be. Complex numbers aren't that complicated of a notion. What's lost in pulling them out of the language is the ability to define literals.
 "Now please name five remarkable complex literals."
(re, im) ::= (0, 0), (1, 0), (0, 1), (1, 1), (pi/2, 0), (0, pi/2), e_to_the_power_(minus j), e_to_the_power_(minus j * pi/2)

Is that what you mean?

Guess one has to study Maxwell's equations, microwaves and the black art of Smith charts to appreciate them .. not that anyone really cares too much these days .. no need to design antennae for transmitters and receivers from scratch now that microwave towers and iPhones are consumer items.

(FYI, electrical engineers use j and mathematicians use i to denote sqrt(-1). We do that since the letter i is conventionally used for current, as in amperes.)

-- Justin
Oct 04 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Justin Johansson wrote:
 Andrei Alexandrescu Wrote:
 
 Nick Sabalausky wrote:
 "Justin Johansson" <no spam.com> wrote in message 
 news:haavf1$2gs7$1 digitalmars.com...
 It's a difficult challenge to get high performance, readable and 
 maintainable code out of complex number
 intensive algorithms.   Use of library types for complex numbers has, in 
 my experience been problematic.
 Complex numbers should be first class value types I say.
There's been discussion before (I can't find it now, or remember the name for it) of type systems that allow for proper handling of things like m/s vs. m/(s*s) vs inch/min etc. I haven't actually worked with such a feature or with complex/imaginary numbers in any actual code, so I can't be sure, but I've been wondering if a type system like that would be an appropriate (or even ideal) way to handle real/complex/imaginary numbers.
It better be. Complex numbers aren't that complicated of a notion. What's lost in pulling them out of the language is the ability to define literals.
 "Now please name five remarkable complex literals."
(re, im) ::= (0, 0), (1,0), (0,1), (1,1), (pi/2, 0), (0, pi/2), e_to_the_power_(minus j), e_to_the_power_(minus j * pi/2) Is that what you mean?
(Three of those are real.) What I meant was that complex literals helped by syntax are seldom likely to improve code quality. Many numeric literals are of questionable taste anyway and should at best be defined as symbols. I don't see why complex literals shouldn't follow the same recommendation. Andrei
Oct 04 2009
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:hac2ku$1nfu$1 digitalmars.com...
 Justin Johansson wrote:
 "Now please name five remarkable complex literals."
(re, im) ::= (0, 0), (1,0), (0,1), (1,1), (pi/2, 0), (0, pi/2), e_to_the_power_(minus j), e_to_the_power_(minus j * pi/2) Is that what you mean?
(Three of those are real.) What I meant was that complex literals helped by syntax are seldom likely to improve code quality. Many numeric literals are of questionable taste anyway and should at best defined as symbols. I don't see why complex literals shouldn't follow the same recommendation.
I think people just don't like the idea of having to deal with a distinction of "Some types can have nice handy literals but others can't."
Oct 04 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:hac2ku$1nfu$1 digitalmars.com...
 Justin Johansson wrote:
 "Now please name five remarkable complex literals."
(re, im) ::= (0, 0), (1,0), (0,1), (1,1), (pi/2, 0), (0, pi/2), e_to_the_power_(minus j), e_to_the_power_(minus j * pi/2) Is that what you mean?
(Three of those are real.) What I meant was that complex literals helped by syntax are seldom likely to improve code quality. Many numeric literals are of questionable taste anyway and should at best defined as symbols. I don't see why complex literals shouldn't follow the same recommendation.
I think people just don't like the idea of having to deal with a distinction of "Some types can have nice handy literals but others can't."
We got to stop somewhere. Andrei
Oct 04 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 We got to stop somewhere.
The precise stopping point can be discussed. In the new C++ they have even added some flexibility in this regard: http://stackoverflow.com/questions/237804/user-defined-literals-in-c0x-a-much-needed-addition-or-making-c-even-more-bl Bye, bearophile
Oct 05 2009
prev sibling parent Justin Johansson <no spam.com> writes:
Andrei Alexandrescu Wrote:

 Justin Johansson wrote:
 Andrei Alexandrescu Wrote:
 
 Nick Sabalausky wrote:
 "Justin Johansson" <no spam.com> wrote in message 
 news:haavf1$2gs7$1 digitalmars.com...
 It's a difficult challenge to get high performance, readable and 
 maintainable code out of complex number
 intensive algorithms.   Use of library types for complex numbers has, in 
 my experience been problematic.
 Complex numbers should be first class value types I say.
There's been discussion before (I can't find it now, or remember the name for it) of type systems that allow for proper handling of things like m/s vs. m/(s*s) vs inch/min etc. I haven't actually worked with such a feature or with complex/imaginary numbers in any actual code, so I can't be sure, but I've been wondering if a type system like that would be an appropriate (or even ideal) way to handle real/complex/imaginary numbers.
It better be. Complex numbers aren't that complicated of a notion. What's lost in pulling them out of the language is the ability to define literals.
 "Now please name five remarkable complex literals."
(re, im) ::= (0, 0), (1,0), (0,1), (1,1), (pi/2, 0), (0, pi/2), e_to_the_power_(minus j), e_to_the_power_(minus j * pi/2) Is that what you mean?
(Three of those are real.) What I meant was that complex literals helped by syntax are seldom likely to improve code quality. Many numeric literals are of questionable taste anyway and should at best defined as symbols. I don't see why complex literals shouldn't follow the same recommendation.
"Three of those are real." What? Members of the set of real numbers are not members of the set of complex numbers? If anything remove from the language type which is a subtype of a bigger type!!! -- Devil's advocate :-) P.S. Thanks for the clarification, Andrei, I understand what you mean now.
Oct 05 2009
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:hab3r2$2pgr$1 digitalmars.com...
 Nick Sabalausky wrote:
 There's been discussion before (I can't find it now, or remember the name 
 for it) of type systems that allow for proper handling of things like m/s 
 vs. m/(s*s) vs inch/min etc. I haven't actually worked with such a 
 feature or with complex/imaginary numbers in any actual code, so I can't 
 be sure, but I've been wondering if a type system like that would be an 
 appropriate (or even ideal) way to handle real/complex/imaginary numbers.
It better be. Complex numbers aren't that complicated of a notion. What's lost in pulling them out of the language is the ability to define literals. Now please name five remarkable complex literals. The feature you're referring to is called dimensional analysis.
 (I've also been wondering if it might be a huge benefit for 
 distinguishing between strings that represent a filename vs file content 
 vs file-extention-only vs relative-path+filename, vs absolute-path-only, 
 etc. I've been really wanting a better way to handle that than just a 
 variable naming convention.)
I don't quite think so. In fact I don't think so at all. Pathnames of various flavors evolve quite a bit in many programs, and having to worry about tracking their type throughout is too much aggravation to be worthwhile. Last thing I'd want when manipulating pathnames would be a sticker of a library slapping my wrist anytime I misuse one of its six dedicated types.
Last thing I'd want when dealing with paths and filenames would be making one small mistake and getting *neither* a runtime nor a compile-time error, and then have the program happily trounce about the filesystem with corrupted names or paths (sort of like getting m/s and m/(s*s) mixed up). Sure, it *might* error out with an "access denied" or "file not found" or something like that before actually doing anything wrong, but that's far from guaranteed. And even if it doesn't, it might not cause real harm, but there are times when it could, so why risk it?

I guess I just don't see why something like (fig. A) would be bad if (fig. B) and (fig. C) are considered good (as I certainly consider them):

-- fig A -----------
open(inFile ~ "out" ~ inFile.file ~ "log");
// As many as 4 different errors that could be caught but currently aren't.
// (But obviously all would be overridable, of course)
// Without such checking, if inFile contained "/usr/home/joe/foo/bar.dat",
// the result would be:
//   "/usr/home/joe/foo/bar.datoutbar.datlog"

// Meant to do this:
open(inFile.path ~ "out/" ~ inFile.name ~ ".log");
// Result now: "/usr/home/joe/foo/out/bar.log"
---------------------

-- fig B -----------
int add(int* a, int* b)
{
    return a + b; // Error currently caught
    // Meant:
    return *a + *b;
}
---------------------

-- fig C -----------
auto accel = velocity * time; // Error caught with that "dimensional analysis"
// Meant this:
auto accel = velocity / time;
---------------------

Granted, I agree that as things currently are, it might not be possible to write a library to handle such file/path manipulations without it being too unwieldy for the library's prospective users. But with a good "dimensional analysis" system...maybe not?
Oct 04 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:hab3r2$2pgr$1 digitalmars.com...
 Nick Sabalausky wrote:
 There's been discussion before (I can't find it now, or remember the name 
 for it) of type systems that allow for proper handling of things like m/s 
 vs. m/(s*s) vs inch/min etc. I haven't actually worked with such a 
 feature or with complex/imaginary numbers in any actual code, so I can't 
 be sure, but I've been wondering if a type system like that would be an 
 appropriate (or even ideal) way to handle real/complex/imaginary numbers.
It better be. Complex numbers aren't that complicated of a notion. What's lost in pulling them out of the language is the ability to define literals. Now please name five remarkable complex literals. The feature you're referring to is called dimensional analysis.
 (I've also been wondering if it might be a huge benefit for 
 distinguishing between strings that represent a filename vs file content 
 vs file-extention-only vs relative-path+filename, vs absolute-path-only, 
 etc. I've been really wanting a better way to handle that than just a 
 variable naming convention.)
I don't quite think so. In fact I don't think so at all. Pathnames of various flavors evolve quite a bit in many programs, and having to worry about tracking their type throughout is too much aggravation to be worthwhile. Last thing I'd want when manipulating pathnames would be a sticker of a library slapping my wrist anytime I misuse one of its six dedicated types.
Last thing I'd want when dealing with paths and filenames would be making one small mistake and getting *neither* a runtime nor a compile-time error, and then have the program happily trounce about the filesystem with corrupted names or paths (sort of like getting m/s and m/(s*s) mixed up). Sure, it *might* error out with an "access denied" or "file not found" or something like that before actually doing anything wrong, but that's far from guaranteed. And even if it doesn't, it might not cause real harm, but there are times when it could so why risk it?
I think it's a judgment call. I did see a few examples throughout the years that created special types for various pathname components. I think one early C++ book had such an example. None of those attempts survived, and perhaps that could mean something. What may be happening is that in applications, the same variable may be e.g. a filename or a dirname, and handling that with an algebraic type may be considered too much aggravation.
 I guess I just don't see why something like (fig. A) would be bad if (fig. 
 B) and (fig. C) are considered good (as I certainly consider them):
 
 -- fig A -----------
 open(inFile ~ "out" ~ inFile.file ~ "log");
 // As many as 4 different errors that could be caught but currently aren't.
 // (But obviously all would be overridable, of course)
 // Without such checking, if inFile contained "/usr/home/joe/foo/bar.dat",
 // the result would be:
 //   "/usr/home/joe/foo/bar.datoutbar.datlog"
 
 // Meant to do this:
 open(inFile.path ~ "out/" ~ inFile.name ~ ".log");
 // Result now: "/usr/home/joe/foo/out/bar.log"
 ---------------------
 
 -- fig B -----------
 int add(int* a, int* b)
 {
     return a + b; // Error currently caught
     // Meant:
     return *a + *b;
 }
 ---------------------
 
 -- fig C -----------
 auto accel = velocity * time; // Error caught with that "dimensional 
 analysis"
 // Meant this:
 auto accel = velocity / time;
 ---------------------
 
 Granted, I agree that as things currently are, it might not be possible to 
 write a library to handle such file/path manipulations without it being too 
 unweildy for the library's prospective users. But with a good "dimensional 
 analysis" system...maybe not?
Well, it may be a good idea to try it and see how it feels. A little algebra of path components isn't difficult to define and implement. Andrei
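For what it's worth, a minimal sketch of such an algebra, using the newer opBinary scheme (the type names and the choice of ~ are made up for illustration): only dir ~ dir and dir ~ file compose, so the kind of mix-up in fig. A simply doesn't compile.

struct FileName { string value; }

struct DirPath {
    string value;

    // dir ~ dir extends the directory; dir ~ file yields a full path.
    DirPath opBinary(string op : "~")(DirPath d) {
        return DirPath(value ~ d.value);
    }
    FilePath opBinary(string op : "~")(FileName f) {
        return FilePath(this, f);
    }
}

struct FilePath {
    DirPath dir;
    FileName name;
    string toString() { return dir.value ~ name.value; }
}

void main() {
    auto logFile = DirPath("/usr/home/joe/foo/") ~ DirPath("out/") ~ FileName("bar.log");
    assert(logFile.toString() == "/usr/home/joe/foo/out/bar.log");
    // DirPath("/tmp/") ~ "bar.log"; // a bare string doesn't compile -- which is the point
}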
Oct 04 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Nick Sabalausky:

 -- fig A -----------
 open(inFile ~ "out" ~ inFile.file ~ "log");
 // As many as 4 different errors that could be caught but currently aren't.
 // (But obviously all would be overridable, of course)
 // Without such checking, if inFile contained "/usr/home/joe/foo/bar.dat",
 // the result would be:
 //   "/usr/home/joe/foo/bar.datoutbar.datlog"
 
 // Meant to do this:
 open(inFile.path ~ "out/" ~ inFile.name ~ ".log");
 // Result now: "/usr/home/joe/foo/out/bar.log"
 ---------------------
 
 -- fig B -----------
 int add(int* a, int* b)
 {
     return a + b; // Error currently caught
     // Meant:
     return *a + *b;
 }
 ---------------------
 
 -- fig C -----------
 auto accel = velocity * time; // Error caught with that "dimensional 
 analysis"
 // Meant this:
 auto accel = velocity / time;
 ---------------------
Scala has a powerful type system that allows such things to be implemented well enough:

http://www.michaelnygard.com/blog/2009/05/units_of_measure_in_scala.html

Bye, bearophile
Oct 05 2009
next sibling parent reply Rainer Deyke <rainerd eldwood.com> writes:
bearophile wrote:
 Scala has a powerful type system that allows to implement such things
 in a good enough way:
 
 http://www.michaelnygard.com/blog/2009/05/units_of_measure_in_scala.html
Either I'm missing something, or this system only checks units at runtime (which would make it both slow and unsafe). Boost.Units (C++) checks units at compile time. There is no reason why D could not use the same approach. -- Rainer Deyke - rainerd eldwood.com
Oct 05 2009
next sibling parent reply language_fan <foo bar.com.invalid> writes:
Mon, 05 Oct 2009 05:29:20 -0600, Rainer Deyke thusly wrote:

 bearophile wrote:
 Scala has a powerful type system that allows to implement such things
 in a good enough way:
 
 http://www.michaelnygard.com/blog/2009/05/
units_of_measure_in_scala.html
 
 Either I'm missing something, or this system only checks units at
 runtime (which would make it both slow and unsafe).
 
 Boost.Units (C++) checks units at compile time.  There is no reason why
 D could not use the same approach.
There have been several implementations of SI unit libraries for D, by Oskar Linde et al. The checking can be done statically, without any runtime performance penalty.
Oct 05 2009
parent reply language_fan <foo bar.com.invalid> writes:
Mon, 05 Oct 2009 11:41:52 +0000, language_fan wrote:

 Mon, 05 Oct 2009 05:29:20 -0600, Rainer Deyke thusly wrote:
 
 bearophile wrote:
 Scala has a powerful type system that allows to implement such things
 in a good enough way:
 
 http://www.michaelnygard.com/blog/2009/05/
units_of_measure_in_scala.html
 
 Either I'm missing something, or this system only checks units at
 runtime (which would make it both slow and unsafe).
 
 Boost.Units (C++) checks units at compile time.  There is no reason why
 D could not use the same approach.
There have been several existing implementations of SI unit libraries for D. By Oskar Linde et al. The checking can be built statically without any runtime performance penalty.
The only problem with these was that there was no way to signal the location of the type error in the client code; it always reported the location of the (static) assert in the library, which is pretty much useless.
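One hedged way to get the error reported at the caller, as a sketch only: put the check in a template constraint rather than in a static assert inside the library, so a mismatch fails overload resolution and the compiler names the offending call. The toy types below are made up for illustration.

struct Meters  { double value; }
struct Seconds { double value; }

// On a mismatch the error ("cannot deduce function from argument types")
// points at the caller's line, not at a static assert in the library.
auto add(A, B)(A a, B b) if (is(A == B)) {
    return A(a.value + b.value);
}

void main() {
    auto ok = add(Meters(1), Meters(2));       // fine
    // auto bad = add(Meters(1), Seconds(2));  // error reported here, at the call site
    assert(ok.value == 3);
}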
Oct 05 2009
next sibling parent "Nick Sabalausky" <a a.a> writes:
"language_fan" <foo bar.com.invalid> wrote in message 
news:hacm6o$2en$2 digitalmars.com...
 Mon, 05 Oct 2009 11:41:52 +0000, language_fan wrote:
 There have been several existing implementations of SI unit libraries
 for D. By Oskar Linde et al. The checking can be built statically
 without any runtime performance penalty.
The only problem with these was that there was no way to signal the location of the type error in the client code, it always reported the location of the (static) assert in the library, which is pretty much useless.
Yea, that's been a problem for a *lot* of stuff... Although I think it might be fixed now... I could swear there was a patch on bugzilla for that that I had grabbed and made a custom-build of DMD with, and then saw that a newer version of DMD had incorporated it... But in my current sleepless stupor I might be confusing it with something else...
Oct 05 2009
prev sibling parent Rainer Deyke <rainerd eldwood.com> writes:
language_fan wrote:
 The only problem with these was that there was no way to signal the 
 location of the type error in the client code, it always reported the 
 location of the (static) assert in the library, which is pretty much 
 useless.
That's a compiler problem, no? I don't think this is a big deal either way. Unit errors should be exceedingly rare. The purpose of a unit library is not to track down unit errors, but to formally prove that correct code is correct.
Oct 05 2009
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Rainer Deyke" <rainerd eldwood.com> wrote in message 
news:haclah$33m$1 digitalmars.com...
 bearophile wrote:
 Scala has a powerful type system that allows to implement such things
 in a good enough way:

 http://www.michaelnygard.com/blog/2009/05/units_of_measure_in_scala.html
Either I'm missing something, or this system only checks units at runtime (which would make it both slow and unsafe). Boost.Units (C++) checks units at compile time. There is no reason why D could not use the same approach.
I've been thinking it might be nice to have both. Compile-time for obvious reasons but then also run-time ones that could do conversions:

double(inch) distance = 7;
double(minute) time = 1.5;

double(meter/second) velocity = distance / time; // Compile-time error
auto velocity = convert(meter/second)(distance / time); // Actual runtime-conversion
auto velocity = convert(meter)(distance / time); // Compile-time error: Incompatible

Although that would probably be far from trivial to design/implement.
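A toy sketch of that mix, with made-up names (Quantity, inches, minutes) and the newer opBinary scheme: the dimension lives in the type and is checked at compile time, while a unit conversion is just a run-time scale factor applied on the way in.

struct Quantity(int M, int S) {  // exponents of meters and seconds
    double value;                // always stored in SI base units

    Quantity!(M - rM, S - rS) opBinary(string op : "/", int rM, int rS)(Quantity!(rM, rS) rhs) {
        return typeof(return)(value / rhs.value);
    }
}

alias Quantity!(1, 0)  Length;
alias Quantity!(0, 1)  Time;
alias Quantity!(1, -1) Velocity;

// Run-time conversions only rescale; the dimensions must already match.
Length inches(double x)  { return Length(x * 0.0254); }
Time   minutes(double x) { return Time(x * 60.0); }

void main() {
    Velocity v = inches(7) / minutes(1.5);    // compiles: meters per second
    // Length bad = inches(7) / minutes(1.5); // wrong dimension: does not compile
    assert(v.value > 0);
}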
Oct 05 2009
next sibling parent reply language_fan <foo bar.com.invalid> writes:
Mon, 05 Oct 2009 08:03:11 -0400, Nick Sabalausky thusly wrote:

 "Rainer Deyke" <rainerd eldwood.com> wrote in message
 I've been thinking it might be nice to have both. Compile-time for
 obvious reasons but then also run-time ones that could do conversions:
 Although that would probably be far from trivial to design/implement.
Why? Because of NIH syndrome? Just grab O. Linde's sources from the ng archives and build the conversion function you mentioned above. It's not that hard to do.
Oct 05 2009
parent "Nick Sabalausky" <a a.a> writes:
"language_fan" <foo bar.com.invalid> wrote in message 
news:hacnq3$2en$3 digitalmars.com...
 Mon, 05 Oct 2009 08:03:11 -0400, Nick Sabalausky thusly wrote:

 "Rainer Deyke" <rainerd eldwood.com> wrote in message
 I've been thinking it might be nice to have both. Compile-time for
 obvious reasons but then also run-time ones that could do conversions:
 Although that would probably be far from trivial to design/implement.
Why? Because of the NIH syndrome?
Because everything seems super-hard when I'm as tired as I am right now ;)
Oct 05 2009
prev sibling parent Rainer Deyke <rainerd eldwood.com> writes:
Nick Sabalausky wrote:
 I've been thinking it might be nice to have both. Compile-time for obvious 
 reasons but then also run-time ones that could do conversions:
 
 auto velocity = convert(meter/second)(distance / time); // Actual 
 runtime-conversion
I'm pretty sure Boost.Units already does this, although in general it's probably better to stick with SI units in your code and only perform conversions on input/output. -- Rainer Deyke - rainerd eldwood.com
Oct 05 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Rainer Deyke wrote:
 Boost.Units (C++) checks units at compile time.  There is no reason why
 D could not use the same approach.
Oskar's code I posted does it at compile time.
Oct 05 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Scala has a powerful type system that allows to implement such things in a
good enough way:
 
 http://www.michaelnygard.com/blog/2009/05/units_of_measure_in_scala.html
So does D: =============================================== // by Oskar Linde Aug 2006 // This is just a quick hack to test // IFTI operators opMul and opDel import std.stdio; import std.math; import std.string; version = unicode; struct SiQuantity(T,int e1, int e2, int e3, int e4, int e5, int e6, int e7) { T value = 0; alias T ValueType; const exp1 = e1; const exp2 = e2; const exp3 = e3; const exp4 = e4; const exp5 = e5; const exp6 = e6; const exp7 = e7; static assert(SiQuantity.sizeof == ValueType.sizeof); template AddDimensions(int mul, U) { static assert(is(U.ValueType == ValueType) || is(U == ValueType), "incompatible value types"); static if (is(U == ValueType)) alias SiQuantity AddDimensions; else alias SiQuantity!(T,exp1+mul*U.exp1,exp2+mul*U.exp2, exp3+mul*U.exp3,exp4+mul*U.exp4, exp5+mul*U.exp5,exp6+mul*U.exp6, exp7+U.exp7) AddDimensions; } SiQuantity opAddAssign(SiQuantity rhs) { value += rhs.value; return this; } SiQuantity opSubAssign(SiQuantity rhs) { value -= rhs.value; return this; } const { SiQuantity opAdd(SiQuantity rhs) { SiQuantity ret; ret.value = value + rhs.value; return ret; } SiQuantity opSub(SiQuantity rhs) { SiQuantity ret; ret.value = value - rhs.value; return ret; } SiQuantity opNeg() { SiQuantity ret; ret.value = -value; return ret; } SiQuantity opPos() { typeof(return) ret; ret.value = value; return ret; } int opCmp(SiQuantity rhs) { if (value > rhs.value) return 1; if (value < rhs.value) return -1; return 0; // BUG: NaN } AddDimensions!(+1,Rhs) opMul(Rhs)(Rhs rhs) { AddDimensions!(+1,Rhs) ret; static if (is(Rhs : T)) ret.value = value * rhs; else ret.value = value * rhs.value; return ret; } AddDimensions!(-1,Rhs) opDiv(Rhs)(Rhs rhs) { AddDimensions!(-1,Rhs) ret; static if (is(Rhs : T)) ret.value = value / rhs; else ret.value = value / rhs.value; return ret; } SiQuantity opMul_r(T lhs) { SiQuantity ret; ret.value = lhs * value; return ret; } SiQuantity!(T,-e1,-e2,-e3,-e4,-e5,-e6,-e7) opDiv_r(T lhs) { SiQuantity!(T,-e1,-e2,-e3,-e4,-e5,-e6,-e7) ret; ret.value = lhs / value; return ret; } string toString() { string prefix = ""; T multiplier = 1; T value = this.value; string unit; static if (is(typeof(UnitName!(SiQuantity)))) unit = UnitName!(SiQuantity); else { value *= pow(cast(real)1e3,cast(uint)e2); // convert kg -> g // Take mass (e2) first to handle kg->g prefix issue if (e2 != 0) unit ~= format("·g^%s",e2); if (e1 != 0) unit ~= format("·m^%s",e1); if (e3 != 0) unit ~= format("·s^%s",e3); if (e4 != 0) unit ~= format("·A^%s",e4); if (e5 != 0) unit ~= format("·K^%s",e5); if (e6 != 0) unit ~= format("·mol^%s",e6); if (e7 != 0) unit ~= format("·cd^%s",e7); if (unit) unit = unit[2..$].split("^1").join(""); } if (value >= 1e24) { prefix = "Y"; multiplier = 1e24; } else if (value >= 1e21) { prefix = "Z"; multiplier = 1e21; } else if (value >= 1e18) { prefix = "E"; multiplier = 1e18; } else if (value >= 1e15) { prefix = "P"; multiplier = 1e15; } else if (value >= 1e12) { prefix = "T"; multiplier = 1e12; } else if (value >= 1e9) { prefix = "G"; multiplier = 1e9; } else if (value >= 1e6) { prefix = "M"; multiplier = 1e6; } else if (value >= 1e3) { prefix = "k"; multiplier = 1e3; } else if (value >= 1) { } else if (value >= 1e-3) { prefix = "m"; multiplier = 1e-3; } else if (value >= 1e-6) { version(unicode) prefix = "µ"; else prefix = "u"; multiplier = 1e-6; } else if (value >= 1e-9) { prefix = "n"; multiplier = 1e-9; } else if (value >= 1e-12) { prefix = "p"; multiplier = 1e-12; } else if (value >= 1e-15) { prefix = "f"; multiplier = 1e-15; } else if (value >= 
1e-18) { prefix = "a"; multiplier = 1e-18; } else if (value >= 1e-21) { prefix = "z"; multiplier = 1e-21; } else if (value >= 1e-24) { prefix = "y"; multiplier = 1e-24; } return format("%.3s %s%s",value/multiplier, prefix, unit); } } } //length meter m //mass kilogram kg //time second s //electric current ampere A //thermodynamic temperature kelvin K //amount of substance mole mol //luminous intensity candela cd // Si base quantities alias SiQuantity!(real,1,0,0,0,0,0,0) Length; alias SiQuantity!(real,0,1,0,0,0,0,0) Mass; alias SiQuantity!(real,0,0,1,0,0,0,0) Time; alias SiQuantity!(real,0,0,0,1,0,0,0) Current; alias SiQuantity!(real,0,0,0,0,1,0,0) Temperature; alias SiQuantity!(real,0,0,0,0,0,1,0) AmountOfSubstance; alias SiQuantity!(real,0,0,0,0,0,0,1) Intensity; alias SiQuantity!(real,0,0,0,0,0,0,0) UnitLess; // Derived quantities alias typeof(Length*Length) Area; alias typeof(Length*Area) Volume; alias typeof(Mass/Volume) Density; alias typeof(Length*Mass/Time/Time) Force; alias typeof(1/Time) Frequency; alias typeof(Force/Area) Pressure; alias typeof(Force*Length) Energy; alias typeof(Energy/Time) Power; alias typeof(Time*Current) Charge; alias typeof(Power/Current) Voltage; alias typeof(Charge/Voltage) Capacitance; alias typeof(Voltage/Current) Resistance; alias typeof(1/Resistance) Conductance; alias typeof(Voltage*Time) MagneticFlux; alias typeof(MagneticFlux/Area) MagneticFluxDensity; alias typeof(MagneticFlux/Current) Inductance; alias typeof(Intensity*UnitLess) LuminousFlux; alias typeof(LuminousFlux/Area) Illuminance; // SI fundamental units const Length meter = {1}; const Mass kilogram = {1}; const Time second = {1}; const Current ampere = {1}; const Temperature kelvin = {1}; const AmountOfSubstance mole = {1}; const Intensity candela = {1}; // Derived units const Frequency hertz = {1}; const Force newton = {1}; const Pressure pascal = {1}; const Energy joule = {1}; const Power watt = {1}; const Charge coulomb = {1}; const Voltage volt = {1}; const Capacitance farad = {1}; const Resistance ohm = {1}; const Conductance siemens = {1}; const MagneticFlux weber = {1}; const MagneticFluxDensity tesla = {1}; const Inductance henry = {1}; const LuminousFlux lumen = {1}; const Illuminance lux = {1}; template UnitName(U:Frequency) { const UnitName = "Hz"; } template UnitName(U:Force) { const UnitName = "N"; } template UnitName(U:Pressure) { const UnitName = "Pa"; } template UnitName(U:Energy) { const UnitName = "J"; } template UnitName(U:Power) { const UnitName = "W"; } template UnitName(U:Charge) { const UnitName = "C"; } template UnitName(U:Voltage) { const UnitName = "V"; } template UnitName(U:Capacitance){ const UnitName = "F"; } version(unicode) { template UnitName(U:Resistance) { const UnitName = "?"; } } else { template UnitName(U:Resistance) { const UnitNAme = "ohm"; } } template UnitName(U:Conductance){ const UnitName = "S"; } template UnitName(U:MagneticFlux){ const UnitName = "Wb"; } template UnitName(U:MagneticFluxDensity) { const UnitName = "T"; } template UnitName(U:Inductance) { const UnitName = "H"; } void main() { Area a = 25 * meter * meter; Length l = 10 * 1e3 * meter; Volume vol = a * l; Mass m = 100 * kilogram; assert(!is(typeof(vol / m) == Density)); //Density density = vol / m; // dimension error -> syntax error Density density = m / vol; writefln("The volume is %s",vol.toString); writefln("The mass is %s",m.toString); writefln("The density is %s",density.toString); writef("\nElectrical example:\n\n"); Voltage v = 5 * volt; Resistance r = 1 * 1e3 * ohm; Current i 
= v/r; Time ti = 1 * second; Power w = v*v/r; Energy e = w * ti; // One wishes the .toString was unnecessary... writefln("A current of ",i.toString); writefln("through a voltage of ",v.toString); writefln("requires a resistance of ",r.toString); writefln("and produces ",w.toString," of heat."); writefln("Total energy used in ",ti.toString," is ",e.toString); writef("\nCapacitor time curve:\n\n"); Capacitance C = 0.47 * 1e-6 * farad; // Capacitance Voltage V0 = 5 * volt; // Starting voltage Resistance R = 4.7 * 1e3 * ohm; // Resistance for (Time t; t < 51 * 1e-3 * second; t += 1e-3 * second) { Voltage Vt = V0 * exp((-t / (R*C)).value); writefln("at %5s the voltage is %s",t.toString,Vt.toString); } }
Oct 05 2009
prev sibling parent reply Justin Johansson <no spam.com> writes:
Nick Sabalausky Wrote:

 "Justin Johansson" <no spam.com> wrote in message 
 news:haavf1$2gs7$1 digitalmars.com...
 It's a difficult challenge to get high performance, readable and 
 maintainable code out of complex number
 intensive algorithms.   Use of library types for complex numbers has, in 
 my experience been problematic.
 Complex numbers should be first class value types I say.
There's been discussion before (I can't find it now, or remember the name for it) of type systems that allow for proper handling of things like m/s vs. m/(s*s) vs inch/min etc. I haven't actually worked with such a feature or with complex/imaginary numbers in any actual code, so I can't be sure, but I've been wondering if a type system like that would be an appropriate (or even ideal) way to handle real/complex/imaginary numbers.
 "I've also 
 been wondering if it might be a huge benefit for distinguishing between 
 strings that represent a filename vs file content vs file-extention-only vs 
 relative-path+filename, vs absolute-path-only, etc. I've been really wanting 
 a better way to handle that than just a variable naming convention.)"
Bingo. I'm sure there would be a huge benefit in being able to distinguish a string or any primitive type in such a manner without having to invent a Filename class, an AbsolutePathName class, etc.

This whole business about types has much to do with how to construct (and validate) the type from primitive information -- often given in lexical form. If I pass an email-address type (even if it is really just a string) to a sendmail function, that function should not have to revalidate/reparse the string to be sure that the string data is indeed a (lexically) valid email address.

Yeah, good point you bring up.

-- Justin Johansson
Oct 04 2009
next sibling parent reply language_fan <foo bar.com.invalid> writes:
Sun, 04 Oct 2009 17:39:50 -0400, Justin Johansson thusly wrote:

 Nick Sabalausky Wrote:
 "I've also
 been wondering if it might be a huge benefit for distinguishing between
 strings that represent a filename vs file content vs
 file-extention-only vs relative-path+filename, vs absolute-path-only,
 etc. I've been really wanting a better way to handle that than just a
 variable naming convention.)"
Bingo. I'm sure there would be a huge benefit to be able to distinguish string or any primitive type in such manner without having to invent a Filename class, AbsolutePathName class etc.
You could use a variant:

typedef Filename = char[];
typedef Path = char[];
typedef File = JustFile Filename
             | RelPath Path Filename
             | AbsPath Path Filename
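A rough D analogue of that variant, assuming std.variant's Algebraic (the three structs are made up to mirror the alternatives above):

import std.variant;

struct JustFile { string name; }
struct RelPath  { string path; string name; }
struct AbsPath  { string path; string name; }

alias Algebraic!(JustFile, RelPath, AbsPath) File;

void main() {
    File f = RelPath("foo/", "bar.dat");
    assert(f.peek!RelPath !is null);   // find out which alternative is stored
    assert(f.peek!AbsPath is null);
}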
 
 This whole business about types is much to do about how to construct
 (and validate) the type from primitive information -- often given in
 lexical form.  If I pass an email-address type (even if it is really
 just a string) to an sendmail function, that function should not have to
 revalidate/reparse the string to be sure that the string data is indeed
 a (lexically) valid email address.
Something that came to my mind while reading this: typedefs could also be extended to support contracts just like functions.
Oct 04 2009
parent "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 05 Oct 2009 01:47:00 +0400, language_fan <foo bar.com.invalid>  
wrote:

 Something that came to my mind while reading this: typedefs could also be
 extended to support contracts just like functions.
It is a nice idea! It would make D typedef much more powerful and useful.
Oct 04 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Justin Johansson Wrote:

 Bingo. I'm sure there would be a huge benefit to be able to distinguish string
or any primitive type
 in such manner without having to invent a Filename class, AbsolutePathName
class etc.
Can D typedef be used for such purpose? Bye, bearophile
Oct 04 2009
parent Justin Johansson <no spam.com> writes:
bearophile Wrote:

 Justin Johansson Wrote:
 
 Bingo. I'm sure there would be a huge benefit to be able to distinguish string
or any primitive type
 in such manner without having to invent a Filename class, AbsolutePathName
class etc.
Can D typedef be used for such purpose? Bye, bearophile
Yes, it can to a degree. It is useful, and a little smarter about type safety than its C counterpart, going by the few D tests I've done on my journey into the language.

The drawback, as far as I can tell, is the lack of a constructor for typedef'ed values. It seems you need to synthesise a typedef'ed value with a function that, inside, casts a primitive value to the typedef type and returns the casted result. Otherwise, reminiscent of the leaky fountain pens of yesteryear, you end up blotting casts all over your code. I may be wrong; there may be a better way in D.

Doesn't C++ allow you to construct primitive values with syntax similar to a function call, e.g. int(20)? Whether the same works for typedef'ed types in C++ I cannot recall right now.

Ciao
Justin
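For comparison, a sketch (against today's D and Phobos, with EmailAddress and its check made up for illustration) of doing it with a struct wrapper instead of typedef: validation happens once in the constructor, a plain string is not accepted, and alias this keeps the value usable as a string afterwards.

import std.string: indexOf;

struct EmailAddress {
    string value;

    this(string s) {
        assert(s.indexOf('@') > 0, "not an email address: " ~ s);
        value = s;
    }

    alias value this;   // still usable wherever a string is expected
}

void sendmail(EmailAddress to, string message) {
    // no need to revalidate here: the type says it was checked on construction
}

void main() {
    sendmail(EmailAddress("joe@example.com"), "hi");
    // sendmail("not an address", "hi"); // doesn't compile: a plain string isn't accepted
}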
Oct 04 2009
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"language_fan" <somewhere internet.com.invalid> wrote in message 
news:ha87kd$2j3d$1 digitalmars.com...
 On Sat, 03 Oct 2009 14:35:22 -0400, Jeremie Pelletier wrote:
 Think of the english languages, how many words does it have? I would
 hate to try and express my ideas if I had only 100 words to choose from.
 Some people do but we call them simple minded or uneducated :)
Comparing spoken languages with the formal languages used to program computers is rather far-fetched. Even a small child recognizes more words than a complex programming language has keywords. There are programming languages with a rather minimal set of core keywords and constructs; this in no way makes them more suitable for less intelligent people.
I think his point was that the number of keywords isn't a particularly good measure of language complexity. To bring it back to programming languages, VB has keywords coming out the wazoo, but the only thing complex about VB is the complexity involved in trying to express high-level (or low-level) concepts that VB is just too *simple* to handle.
 And your
 stance of disagreeing with everyone here does not make you better than
 the rest of us, it is just irritating.
I think I missed the memo indicating that disagreeing with others had suddenly become a bad thing. ;)
 D is pretty verbose in many respects. There are some totally unnecessary
 words like 'body' in the grammar. Also things like foreach_reverse should
 just die. Even a novice programmer can write a meta-program to replace
 foreach_reverse without any runtime performance hit. Designing a crappy
 programming language is not hard. Usually the elegance arises from clever
 use of powerful, generic core structures.
Fair enough. *But*, I really think "elegantly simple" language design is a double-edged sword. In my experience, and I think this is what Jeremie was alluding to, I've found that an "elegantly simple" language, no matter how well-chosen the primitives are, generally results in a problematic lack of expressiveness and a frequent sense of fighting against the language instead of merely using it.

For example, the most elegantly simple languages I've seen are Java (at least earlier versions, anyway), JavaScript, Smalltalk, Haskell, and maybe Forth. And I really do admire those languages from a theoretical perspective...but only in the same sense that I admire Brainfuck. I'd never want to actually use any of those languages for any real work, simply because their "elegantly simple" designs lead to many cases where I'd have to (or have had to) fight against them to accomplish what I need.

I guess it just comes down to "as simple as possible, *but no more*." In a more complex language like D, I never feel like I need to try to keep the whole language in my head. I just need some subset at any particular time, and then when (and I do mean "when", not "if") I need something else, it's nice to know that it's there to use, and that I won't have to try to cram it into the wrong tool for the job or constantly switch between an array of languages while trying to keep them all playing nice with each other.

It's like a professional handyman having the smallest possible toolbox with only the barest essentials, versus a big super-toolbox that has all the *right* tools he might need. Just because it's there doesn't mean it has to be used, but if I were a handyman and had to remove a phillips-head screw, I'd want to be able to reach for a forward/reverse drill and an appropriately-sized phillips-head bit, and not have to pry it out with the bare minimum (the back of a hammer, or a sort-of-sized-similarly manual flathead screwdriver), and also not have to put one specialized mini-toolbox back and switch to a differently-specialized mini-toolbox for every different task.
Oct 03 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 Fair enough. *But*, I really think "elegantly simple" language design is 
 double-edged sword. In my experience, and I think this is what Jeremie was 
 alluding to, I've found that an "elegantly simple" language, no matter how 
 well-chosen the primitives are, generally results in a problematic lack of 
 expressiveness and a frequent sense of fighting against the language instead 
 of merely using it.
It's a good point. One finds when programming in a simple language that one has to write a lot of rather complex code to make up for it. C is an obvious example - try writing OOP in C. It can and has been done, but it's ugly, verbose, complex, error-prone and inelegant.
 It's like a professional handyman having the smallest possible 
 toolbox with only the barest essentials, versus a big super-toolbox that 
 has all the *right* tools he might need. Just because it's there doesn't 
 mean it has to be used, but if I were a handyman and had to remove a 
 phillips-head screw, I'd want to be able to reach for a forward/reverse 
 drill and an appropriately-sized phillips-head bit, and not have to pry it 
 out with the bare minimum (the back of a hammer, or a 
 sort-of-sized-similarly manual flathead screwdriver), and also not have to 
 put one specialized mini-toolbox back and switch to a 
 differently-specialized mini-toolbox for every different task.
That resonates with me. When I was a kid working on cars, I had nothing but the most basic tools. You could get things done, but the workarounds were unpleasant and difficult, and I often wound up damaging the parts in the process. Now, I just go buy the specialized tool, and get it done quickly and easily, and no damage. For example, it's so nice to have a drill press and get the hole *straight* <g>.
Oct 04 2009
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Justin Johansson wrote:
 For the interest of newsgroups readers, I dropped in at the Cafe the other day
and
 the barista had this to say
 
 http://cafe.elharo.com/programming/imagine-theres-no-null/
 
 Disclaimer: YMMV
 
 Cheers
 
 -- Justin Johansson
This article brings up a very interesting point that beats Walter's argument to a pulp, then puts salt on it.

Walter's overriding argument (I'm sure you know it, he repeated it claiming nobody understands it until we learned it by heart) was: "I don't want the compiler to require a value there! People will just put some crappy value in to get the code to compile, and the code with errors in it will soldier on instead of duly crashing! How is that better???" etc.

Yet D has structs. Walter knows D has structs, and knows how D structs operate. He put structs in D because he thought structs, vegetables, exercising, and flossing are good for you. Yet structs operate the exact way that Walter claims is pernicious. Structs don't have a singular null value and always are nominally valid objects.

Yet I've never heard Walter continuing his argument with "Just look at those stinky structs. It must be a million times I had a bug caused by the absence of null structs! I just had to put a crappy struct there in my code, and my code soldiered on in error instead of crashing!"

Why didn't he continue his argument that way? And why didn't anybody else continue his argument that way? Because nobody has had such a problem. Everybody uses structs, and everybody's happy about them lacking null.

To complete the irony, Walter and I discussed structs and .init values a while ago. We concluded that D, at least for the time being, will allow struct construction without any code invocation, by just bitcopying the .init value of the struct over. He was very happy about that because a lot of code generation got majorly simplified that way. (I was less happy because that meant less user control over struct construction.) So Walter was happy that he had for structs a feature he thinks is amazingly dangerous for classes.

So I don't think Walter's argument is invalid, I think it simply doesn't exist. This post made it disappear.

Andrei
Oct 02 2009
parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
Andrei Alexandrescu wrote:
 Justin Johansson wrote:
 For the interest of newsgroups readers, I dropped in at the Cafe the 
 other day and
 the barista had this to say

 http://cafe.elharo.com/programming/imagine-theres-no-null/

 Disclaimer: YMMV

 Cheers

 -- Justin Johansson
This article brings up a very interesting point that beats Walter's argument to a pulp, then puts salt on it. Walter's overriding argument (I'm sure you know it, he repeated it claiming nobody understands it until we learned it by heart) was: "I don't want the compiler to require a value there! People will just put some crappy value in to get the code to compile, and the code with errors in it will soldier on instead of duly crashing! How is that better???" etc. Yet D has structs. Walter knows D has structs, and knows how D structs operate. He put structs in D because he thought structs, vegetables, exercising, and flossing are good for you.
We all know a serving of structs a day makes for healthy programmers!
 Yet structs operate the exact way that Walter claim is pernicious. 
 Structs don't have a singular null value and always are nominally valid 
 objects.
You're comparing value types to reference types. A class object is also always valid; it's the reference that can be null by pointing to no object. You would have the same semantics by using struct pointers.
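A short sketch of that point (Point is an invented type, not from anyone's code): take the struct through a pointer and you get exactly the nullable behaviour of a class reference.

    struct Point { int x = 1; }

    void main()
    {
        Point  val;       // value: always a valid Point, blitted from Point.init
        Point* ptr;       // pointer: default-initialized to null, like a class reference
        assert(val.x == 1);
        assert(ptr is null);
        // ptr.x = 5;     // would dereference null -- same failure mode as a null class reference
    }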
 Yet I've never heard Walter continuing his argument with "Just look at 
 those stinky structs. It must be a million times I had a bug caused by 
 the absence of null structs! I just had to put a crappy struct there in 
 my code, and my code soldiered on in error instead of crashing!"
I never pass structs by value in D, except for returns because of RVO, so I can get a null struct pointer sometimes, but that's what contracts and backtraces are for. If my code were still executing on an invalid struct reference, it would be much, much harder to pinpoint the origin of the bug.
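A sketch of the kind of contract alluded to, in the in/body contract syntax of the time; the function name and message are hypothetical. The precondition rejects a null pointer at the call boundary, so a bad call fails right away (with a backtrace) instead of soldiering on.

    struct Point { int x; }

    void consume(Point* p)
    in
    {
        assert(p !is null, "consume: p must not be null");
    }
    body
    {
        p.x += 1;
    }

    void main()
    {
        auto p = new Point;   // heap-allocated struct, yields a Point*
        consume(p);           // fine
        // consume(null);     // would trip the precondition in a non-release build
    }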
 Why didn't he continue his argument that way? And why didn't anybody 
 else continue his argument that way? Because nobody has had such a 
 problem. Everybody uses structs, and everybody's happy about them 
 lacking null.
Once again, structs are value types.
 To complete the irony, Walter and I discussed a while ago about structs 
 and .init values. We concluded that D, at least for the time being, will 
 allow struct construction without any code invocation, by just 
 bitcopying the .init value of the struct over. He was very happy about 
 that because a lot of code generation got majorly simplified that way. 
 (I was less happy because that meant less user control over struct 
 construction.) So Walter was happy that he had for structs a feature he 
 thinks is amazingly dangerous for classes.
Classes have an initializer too; it's just copied by the GC after allocation instead.
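A small sketch of that (the field value is invented): 'new' copies the class's init data over the freshly allocated block before any constructor runs, so fields carry their declared initial values from the start.

    class Node
    {
        int value = 42;       // recorded in Node's init image, not executed as code
    }

    void main()
    {
        auto n = new Node;    // allocation copies the init image, then runs the (default) ctor
        assert(n.value == 42);
    }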
 So I don't think Walter's argument is invalid, I think it simply doesn't 
 exist. This post made it disappear.
 
 
 Andrei
I disagree; you're comparing apples to oranges here. Correct me if I'm wrong, but I don't think we should base an argument about reference types on the behaviour of value types. Jeremie
Oct 03 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jeremie Pelletier wrote:
 Andrei Alexandrescu wrote:
 Justin Johansson wrote:
 For the interest of newsgroups readers, I dropped in at the Cafe the 
 other day and
 the barista had this to say

 http://cafe.elharo.com/programming/imagine-theres-no-null/

 Disclaimer: YMMV

 Cheers

 -- Justin Johansson
This article brings up a very interesting point that beats Walter's argument to a pulp, then puts salt on it. Walter's overriding argument (I'm sure you know it, he repeated it claiming nobody understands it until we learned it by heart) was: "I don't want the compiler to require a value there! People will just put some crappy value in to get the code to compile, and the code with errors in it will soldier on instead of duly crashing! How is that better???" etc. Yet D has structs. Walter knows D has structs, and knows how D structs operate. He put structs in D because he thought structs, vegetables, exercising, and flossing are good for you.
We all know a serving of structs a day makes for healthy programmers!
 Yet structs operate the exact way that Walter claim is pernicious. 
 Structs don't have a singular null value and always are nominally 
 valid objects.
You're comparing value types to reference types. A class object is also always valid, its the reference that can be null by pointing to no object. You would have the same semantics by using struct pointers.
 Yet I've never heard Walter continuing his argument with "Just look at 
 those stinky structs. It must be a million times I had a bug caused by 
 the absence of null structs! I just had to put a crappy struct there 
 in my code, and my code soldiered on in error instead of crashing!"
I never pass structs by value in D, except for returns because of RVO, so I can get a null struct pointer sometimes, but thats what contracts and backtraces are for. If my code was still executing on an invalid struct reference it would be much, much harder to pinpoint the origin of the bug.
 Why didn't he continue his argument that way? And why didn't anybody 
 else continue his argument that way? Because nobody has had such a 
 problem. Everybody uses structs, and everybody's happy about them 
 lacking null.
Once again, structs are value types.
 To complete the irony, Walter and I discussed a while ago about 
 structs and .init values. We concluded that D, at least for the time 
 being, will allow struct construction without any code invocation, by 
 just bitcopying the .init value of the struct over. He was very happy 
 about that because a lot of code generation got majorly simplified 
 that way. (I was less happy because that meant less user control over 
 struct construction.) So Walter was happy that he had for structs a 
 feature he thinks is amazingly dangerous for classes.
Classes have an initializer too, it's just copied by the GC after allocation instead.
 So I don't think Walter's argument is invalid, I think it simply 
 doesn't exist. This post made it disappear.


 Andrei
I disagree, you compared apples to oranges here. Correct me if I'm wrong, but I don't think we should base an argument over reference types by using value types. Jeremie
Save for address taking, value types are indistinguishable from immutable reference types. Value types don't have a null and that doesn't seem to make them unusable. It's a simple point. Andrei
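A tiny sketch of the second point, with an invented Day type: a value type offers no null to check for, and passing it around works fine all the same.

    struct Day { int n; }             // a value type: there is no null Day

    Day tomorrow(Day d)               // d is a private copy of the caller's value
    {
        return Day(d.n + 1);
    }

    void main()
    {
        Day today = Day(5);
        assert(tomorrow(today).n == 6);
        assert(today.n == 5);         // caller's value untouched, and never a null in sight
    }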
Oct 03 2009
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Justin Johansson" <no spam.com> wrote in message 
news:ha4qpi$189h$1 digitalmars.com...
 For the interest of newsgroups readers, I dropped in at the Cafe the other 
 day and
 the barista had this to say

 http://cafe.elharo.com/programming/imagine-theres-no-null/

 Disclaimer: YMMV

 Cheers
This is gonna sound trivial (and probably is), but it's been bugging the hell out of me: What is the meaning of the "+ Looney Tunes" added to the title of this sub-thread? I don't see a connection...?
Oct 05 2009
parent Justin Johansson <no spam.com> writes:
Nick Sabalausky Wrote:

 "Justin Johansson" <no spam.com> wrote in message 
 news:ha4qpi$189h$1 digitalmars.com...
 For the interest of newsgroups readers, I dropped in at the Cafe the other 
 day and
 the barista had this to say

 http://cafe.elharo.com/programming/imagine-theres-no-null/

 Disclaimer: YMMV

 Cheers
This is gonna sound trivial (and probably is), but it's been bugging the hell out of me: What is the meaning of the "+ Looney Tunes" added to the title of this sub-thread? I don't see a connection...?
Good question. Next question? :-)

Okay, Nick, this is the twist (warped as it may be). For starters I'm guessing that you are somewhat younger than me. (Walter said once that D seems to attract a lot of the younger crowd; myself, I'm Walter's vintage.)

I grew up watching a lot of Marx Brothers movies (and, btw, I'm continually surprised by just how many people, mostly younger, have never heard of the Marx Brothers, so just in case you are a member of said set, here's his bio: http://en.wikipedia.org/wiki/Groucho_Marx ). Groucho Marx was the absolute master of wisecracks; IMHO, few comedians have ever come close to matching his undeniable ability to make fast-talking wit out of even the most subtle of connections. Alan Alda (in M*A*S*H) often played out Groucho, much to my delight.

I also grew up with eyes glued to every Bugs Bunny cartoon that I could watch and every Looney Tune that accompanied the same. ( http://en.wikipedia.org/wiki/Looney_Tunes ) So now, having mentioned Marx, Alda and Bunny, this explains where my sense of humour comes from.

Now the title of the Elliotte Rusty Harold article, http://cafe.elharo.com/programming/imagine-theres-no-null/ , that started this thread was a play on words on the title of the song "Imagine there's no Heaven" by John Lennon, and acknowledged by ERH at the end of the article. (Also so happens that The Beatles along with Pink Floyd are some of my favourite bands.) ( Lyrics to Imagine written up here - http://www.slymarketing.com/2007/10/imagine-there-is-no-heaven/ )

So at the time I was thinking, "Hey, that's a good controversial story line to throw in for a bit of Fudd** on the D newsgroup" and, come to think of it (still saying to myself), it's probably a bit of a Looney Tune.

Now having said all that, several possible interpretations are left open for the amusement of readers.

** Fudd.
   Acronym   Definition
   FUDD      Functional Description Document
   FUDD      Fear, Uncertainty, Doubt and Disinformation
Also Google on (exactly as typed below)
   define:fudd
to get a further connection with Looney Tunes.

Cheers

-- Justin Johansson
Oct 05 2009