digitalmars.D - [Sorta OT] License Restrictions
- Paul Bonser (11/11) Feb 03 2005 Some mention of license problems got me thinking about this piece of
- Charles Hixson (7/16) Feb 04 2005 The contexts I've usually seen that in is a disclaimer of
- Anders F Björklund (4/14) Feb 04 2005 Might cover long distance missiles :-)
- pragma (5/9) Feb 04 2005 Its interesting that you bring that up. Walter may want to clarify tha...
- Walter (7/16) Feb 04 2005 applications
- Matthew (5/17) Feb 04 2005 Guys, if we persist with the mechanism of no compile-time detection of
- Matthew (3/22) Feb 04 2005 "and switch cases"
- Vathix (1/6) Feb 04 2005 Would you fly to mars in debug mode?
- John Reimer (2/14) Feb 04 2005 Maybe if there were a debugger available, and one could single step. ;-)
- Walter (4/13) Feb 04 2005 If it was critical software, yes, I'd run it with all the debugging chec...
- Matthew (30/41) Feb 04 2005 Well, to seriously answer your question: I think production code, at
- Dave (29/44) Feb 05 2005 How about some middle ground?
- Charles (8/56) Feb 05 2005 I would even settle for less , like a -release-with-cp flag.
- Matthew (13/93) Feb 05 2005 I just think that we should have CP and debug/release somewhat
- Thomas Kuehne (12/23) Feb 05 2005 -----BEGIN PGP SIGNED MESSAGE-----
- Matthew (25/66) Feb 05 2005 If you mean that we should be able to individually select on/off the
- Dave (15/68) Feb 05 2005 I think you're right. What I mentioned above would tie PwC too closely t...
- Kris (2/12) Feb 05 2005 Aye - add my voice to that call.
- Matthew (10/84) Feb 05 2005 I would agree, for reasons of legacy breaking, were it not that D is
- Walter (11/19) Feb 05 2005 It is on by default. All -release does is turn it off. You could compile
- Matthew (5/32) Feb 05 2005 Cool. Sounds like the _only_ thing to do is rename the misnomer
- Walter (6/9) Feb 05 2005 The original idea behind -release was to not require the DMD programmer ...
- Matthew (5/17) Feb 05 2005 Yes, it's that real world again. Better to have several switches which
- Mark T (5/14) Feb 06 2005 Maybe you could recycle -release to turn off contracts and debug, etc. I...
- Anders F Björklund (17/18) Feb 06 2005 For the most part, yes. See this page:
- sai (7/11) Feb 06 2005 I would say, -release is more intutive and self-explainatory, its just a...
- Anders F Björklund (15/17) Feb 06 2005 Okay, so -release is "intuitive" and "self-explanatory" - but you'll
- sai (9/15) Feb 06 2005 Yes, -release means ..... it is a release version with all contracts (in...
- Matthew (8/17) Feb 06 2005 I think the original issue under debate, sadly largely ignored since, is...
- Derek Parnell (17/36) Feb 06 2005 I'm thinking as I write here, so I could be way off ...
- Matthew (20/64) Feb 06 2005 Well, yes and no. Yes, in the sense of a literal interpretation of that
- Derek Parnell (46/123) Feb 06 2005 Of course. In the same sense that nothing is ever perfect. By 'testing' ...
- Matthew (80/236) Feb 07 2005 Gotcha
- Unknown W. Brackets (32/38) Feb 07 2005 I can also lament this; when people feel they've found a bug, they feel
- Regan Heath (6/12) Feb 08 2005 On Mon, 07 Feb 2005 22:56:53 -0800, Unknown W. Brackets
- Dave (36/39) Feb 07 2005
- Anders F Björklund (4/8) Feb 07 2005 Maybe I missed something, but what's the difference between
- Dave (9/17) Feb 07 2005 I'm not sure what the consensus would be on that but PwC expresses the
- Anders F Björklund (13/17) Feb 07 2005 Nah, it was only that I had to look it up myself,
- Kris (17/34) Feb 07 2005 I'd just like to point out that DbC is but a subset of AOP. The latter w...
- Matthew (17/27) Feb 07 2005 It's a distinction primarily promoted by Chris Diggins
- Anders F Björklund (15/21) Feb 06 2005 I just don't think contracts has anything to do with release vs. debug ?
- Matthew (15/38) Feb 06 2005 I think it's getting clear that we're going to need fine grained control...
- Anders F Björklund (13/45) Feb 07 2005 After doing some more reading on the subject, I've come to agree with
- Kris (8/26) Feb 06 2005 Got my vote, on both counts.
- Lars Ivar Igesund (6/31) Feb 07 2005 Sorry if I didn't say so before Matthew, but I agree wholeheartedly.
- Dave (67/77) Feb 06 2005 I don't think you failed - actually I think the original plan and how it...
- Dave (7/24) Feb 05 2005 But right now (when not using -release) there are invariant calls for ea...
- Walter (63/66) Feb 04 2005 NASA uses C, C++, Ada and assembler for space hardware.
- Unknown W. Brackets (38/45) Feb 04 2005 I'd just like to say how much I agree with this. Of course, this is an
- Matthew (114/231) Feb 04 2005 This is all tired old ground, and I know I'm not going to prevail.
- Walter (78/187) Feb 05 2005 If the error is silently ignored, it will be orders of magnitude harder ...
- Matthew (82/339) Feb 05 2005 I'm not arguing for that!
- Unknown W. Brackets (88/88) Feb 05 2005 Matthew, this response makes it sound like you're ignoring Walter's
- Matthew (2/8) Feb 05 2005 I didn't say that. You appear to have caught Walter's disease.
- John Reimer (5/14) Feb 05 2005 I better stay out of this... but Matthew's last post did clarify that he...
- Walter (42/44) Feb 05 2005 That's essentially right.
- Matthew (49/93) Feb 05 2005 This is total rubbish. A maintenance engineer is stymied by *both*
- Walter (44/74) Feb 05 2005 From a C/C++ perspective, you're right, this is the only correct solutio...
- Matthew (22/132) Feb 05 2005 Sorry, mate, I've given up - I'll just have to content myself with
- Kris (12/89) Feb 05 2005 Here's a suggestion that /might/ help: are you, Walter, familiar with AO...
- Regan Heath (3/137) Feb 06 2005 AOP is cool, I wish it was possible to use it in D.
- Walter (4/5) Feb 06 2005 I looked at Kris' reference, but AOP is one of those things I don't
- Regan Heath (19/24) Feb 06 2005 I looked too, I found it less easy to follow than the article on
- Regan Heath (60/65) Feb 07 2005 Ok, here is my attempt at a syntax for AOP for D.
- Regan Heath (4/71) Feb 07 2005 Small addition added above, specifically:
- Walter (5/5) Mar 02 2005 Thank-you. That actually does make sense. I can see now why it would be ...
- Regan Heath (4/13) Mar 02 2005 Yes, that's essentially it.
- pandemic (7/12) Mar 02 2005 Yes and no. As I understand it, the real power of AOP lies in its abilit...
- Regan Heath (24/46) Mar 02 2005 Well, true, technically.
- Charlie Patterson (14/29) Mar 04 2005 Before the topic gets dropped, I read about AOP a couple of years ago an...
- xs0 (35/93) Mar 05 2005 Hi,
- Regan Heath (12/42) Mar 06 2005 Correct.
- xs0 (12/24) Mar 07 2005 If you look at
- Regan Heath (5/23) Mar 07 2005 The very reason I don't like it. What if I want to use the old class and...
- xs0 (21/28) Mar 07 2005 Well, as far as I know, you can't - the aspect is an integral part of
- Regan Heath (22/39) Mar 07 2005 Using my concept, I can. What do you mean by "as far as I know" are you ...
- xs0 (68/101) Mar 07 2005 First, I'd like to say that I responded to your claim that an aspect
- Regan Heath (57/151) Mar 07 2005 This is less efficient.
- xs0 (23/51) Mar 07 2005 Well, I looked at several AOP languages, and you seem to be the only one...
- Regan Heath (20/67) Mar 07 2005 Sorry, no can do. The article is in DDJ, and you have to be a subscriber...
- xs0 (18/26) Mar 07 2005 It's not no code at all.. if you apply an aspect to a method/function
- Regan Heath (14/34) Mar 07 2005 Correct. If I don't need the "original+aspect" then being forced to use ...
- Regan Heath (8/44) Mar 07 2005 I can't seem to 'edit' with my client "Opera".. allow me to re-phrase th...
- xs0 (6/12) Mar 07 2005 I see no point in arguing this further. You're just making arbitrary
- Regan Heath (6/11) Mar 08 2005 *sigh* I just don't understand what I'm doing that makes you so hostile....
- xs0 (51/59) Mar 08 2005 I'm not trying to be hostile, perhaps that is the result of my limited
- Regan Heath (67/146) Mar 09 2005 The reason I didn't address the cache comment is because you
- xs0 (34/40) Mar 09 2005 Yes it did. You're suggesting that there exist two methods (actually,
- Regan Heath (14/53) Mar 13 2005 Yes, two classes.
- h3r3tic (4/5) Feb 06 2005 I've written a simple aspect preprocessor for D, but it hasn't received
- Kris (36/113) Feb 07 2005 I'm jumping into this at a somewhat arbitrary point, but the general cla...
- Regan Heath (18/186) Feb 07 2005 I agree 'length' seems to be poorly implemented, or perhaps is simply a ...
- Matthew (5/13) Feb 07 2005 Oh come on! It goes to the motivation behind the missing return value.
- Regan Heath (17/30) Feb 07 2005 Good.
- Matthew (7/29) Feb 07 2005 Marvellous stuff. Keep going. I'm sure you've got one for every
- Regan Heath (31/63) Feb 07 2005 Sorry, don't agree with what in particular?
- Matthew (20/94) Feb 07 2005 Very well, put. What you either fail to recognise, or may recognise all
- Ben Hinkle (5/112) Feb 08 2005 Jeepers, guys. Chill out. I'm half-way not believing that Matthew posted...
- Matthew (6/136) Feb 08 2005 Agreed. Bad day behaviour. I guess I just don't like being told what to
- Regan Heath (14/150) Feb 08 2005 Matthew, I too am sorry. My intention wasn't to tell you what to do or h...
- Matthew (12/188) Feb 08 2005 Regan, I am, like everyone else, flawed in myriad ways. One of 'em is I
- Regan Heath (6/197) Feb 08 2005 Understood. I'll do my best to curtail my religeous zeal.
- Matthew (1/37) Feb 08 2005 You're welcome to it.
- Regan Heath (5/10) Feb 08 2005 (secretly stealing the last word again)
- Matthew (3/13) Feb 08 2005 It was nothing
- Regan Heath (3/17) Feb 08 2005 Again! I fear I am no match...
- John Reimer (3/31) Feb 08 2005 Okay, guys! This is rediculous. I'll have the last word and be done with...
- Matthew (3/21) Feb 08 2005 Surely not
- Regan Heath (18/127) Feb 08 2005 Please explain, I don't understand.
- Derek (18/55) Feb 07 2005 And this is one of the reason why I use 'decorated' identifier names; to
- Anders F Björklund (4/7) Feb 07 2005 I'm not sure that warts classifies as decorations in all cultures ? :-)
- Derek (5/15) Feb 07 2005 Well its been working for me and my teams for 10 years now, so sue me. ;...
- Matthew (22/37) Feb 07 2005 Type decoration - void fn(long lLimit); - is bad, because it is
- Kris (14/26) Feb 07 2005 I applaud any group that sets their own standards to deal with complexit...
- Derek Parnell (25/58) Feb 07 2005 Sorry I digressed from the main point of your post. I tend to agree with
- Kris (19/22) Feb 07 2005 Amen, Derek. But that's a somewhat different topic.
- Matthew (24/87) Feb 07 2005 Very sensible. But very sad that we must do so, given total ugliness of
- Derek Parnell (18/114) Feb 07 2005 Agreed, if the only purpose of using decorated words for identifiers is
- Dave (4/95) Feb 07 2005 How about:
- Matthew (11/123) Feb 07 2005 IIRC, that was a popular suggestion at the time, as was
- Ben Hinkle (3/10) Feb 07 2005 Unicode to the rescue: array[from..\u221E]
- Kris (9/21) Feb 07 2005 The problem is that one may need to reference the array-length within an
- Matthew (4/35) Feb 07 2005 Ah, of course. Silly me.
- Matthew (6/187) Feb 07 2005 I'm afraid I was out working + book writing when that went in. Very poor...
- Unknown W. Brackets (10/13) Feb 07 2005 I agree. IMHO, "length" should either be:
- Ben Hinkle (6/11) Feb 08 2005 Two things:
- Matthew (6/22) Feb 08 2005 It's got to be 1. Kris is right that this is just a crazy idea. (While I...
- Unknown W. Brackets (14/28) Feb 05 2005 I'd just like to say three more things and then I'll shut up since no
- Ben Hinkle (16/21) Feb 05 2005 [snip]
- Walter (8/20) Feb 05 2005 would
- Ben Hinkle (15/19) Feb 06 2005 I hope someone picks this up. Some of the possible rules that come to mi...
- Anders F Björklund (7/11) Feb 06 2005 Currently this throws an Error at run-time, for non-release builds.
- Ben Hinkle (18/29) Feb 06 2005 That's why it would make a good candidate for dlint. It is legal code bu...
- zwang (10/34) Feb 06 2005 and some more:
- Ben Hinkle (11/18) Feb 06 2005 Interesting idea but it could be hard to compute and hard to know what t...
- zwang (6/38) Feb 06 2005 I was just thinking about a list of classes sorted by their complexity,
- Derek (44/96) Feb 05 2005 And the coder probably should have done some more like ...
- Walter (5/7) Feb 05 2005 It's fair to disagree. I just want to get across the reasoning, so the
- Scott Wood (30/43) Feb 06 2005 No, its root cause *is* bad programmers. A good programmer would not
- Walter (34/38) Feb 06 2005 Back when I used to work for Boeing, a major focus of attention was maki...
- Derek Parnell (9/54) Feb 06 2005 It appears to me then that you now have DMD informing the pilot at 30,00...
- Georg Wrede (6/59) Mar 08 2005 Hey, hey, one really sould put an assert(0) there!
- Charles (16/376) Feb 05 2005 represents allot of responses from Walter when he's dead set on somethin...
- Matthew (14/543) Feb 05 2005 Yes. Kind of like calling upon a disinterested god.
- Regan Heath (15/37) Feb 06 2005 The problem is not whether compile time detecting missing returns is goo...
- Regan Heath (9/12) Feb 06 2005 On Sat, 5 Feb 2005 20:26:43 +1100, Matthew
- Regan Heath (69/425) Feb 06 2005 Disclaimer: Please correct me if I have miss-represented anyone, I
- Matthew (7/554) Feb 06 2005 Sounds good to me.
- Regan Heath (9/598) Feb 06 2005 He already has, it's the (a) option below, the 'worst case' scenario.
- John Reimer (3/10) Feb 06 2005 If it's that new Apple laptop you've got on order, please send it to me
- Matthew (4/14) Feb 06 2005 Nah! It's not arrived yet. It'd be thing 5 kilo old Dell sitting on the
- Carlos Santander B. (4/11) Feb 09 2005 Does that mean you don't want it? I'll take it. Seriously.
- Matthew (5/14) Feb 09 2005 He he. No, sorry. I've ordered a hinge from Dell - a company that seems
- Carlos Santander B. (4/11) Feb 10 2005 Oh, ok. Can't say I didn't try... :D
- Matthew (5/15) Feb 08 2005 Cancelled it. I'm not going to document here why Apple have lost my
- John Reimer (5/11) Feb 08 2005 Oh no! why?! Too many delays?
- Matthew (6/13) Feb 08 2005 Yeah, plus an unbelievably slack attitude. World leaders in customer
- John Reimer (3/26) Feb 08 2005 Um... something like that! :-(
- Matthew (9/33) Feb 08 2005 Well, I've sent of a snotty letter to sales@apple.com and
- John Reimer (2/12) Feb 08 2005 Ok, Matthew. Quit holding back. Where's your blog site?
- Matthew (6/16) Feb 08 2005 It's on Artima, where I can rub shoulders with people who really know
- Anders F Björklund (8/19) Feb 09 2005 Being new to the Mac, it's easy how you could misunderstand this.
- John Reimer (43/63) Feb 09 2005 Actually, I'm not really that new to the mac. I grew up with one. My
- Charles Patterson (52/150) Feb 08 2005 First time reader, first time poster!
- Matthew (2/14) Feb 08 2005 Agreed. Last time this was debated, someone suggested a "neverreturn"
- Regan Heath (10/26) Feb 08 2005 I agree.
- Regan Heath (5/131) Feb 08 2005 I think you and I have very similar opinions on this matter.
- Matthew (6/156) Feb 08 2005 Absolutely. That's the entire problem. Walter thinks that if the
- Derek Parnell (11/23) Feb 08 2005 The best compromise I've heard of so far is to have the '-v' (verbose) D...
- Kris (11/34) Feb 08 2005 Good stuff.
- Regan Heath (13/26) Feb 08 2005 Because of real life contraints, time, etc
- Unknown W. Brackets (20/26) Feb 08 2005 I disagree. He doesn't think that. He thinks (umm, I think he thinks)
- Charlie Patterson (51/60) Feb 09 2005 I don't see the problem as that.
- Derek (11/15) Feb 09 2005 [snip]
- Matthew (8/23) Feb 09 2005 Agreed.
- Matthew (33/68) Feb 09 2005 Agreed. As I've just posted on the 'const/readonly string' thread, I
- Carlos Santander B. (6/8) Feb 09 2005 No offense intended, but it reminds me of what a teacher of mine said
- Matthew (8/15) Feb 09 2005 No offence to me, I'm not an atheist. (Certainly on either side
- Alex Stevenson (7/23) Feb 09 2005 I tend to use 'Bob' as a generic drop-in replacement for deity invocatio...
- Charlie Patterson (19/24) Feb 10 2005 When the fog lifts... (-:
- Derek (40/61) Feb 05 2005 I come from the position that a compiler's job (a part from compiling), ...
- Walter (36/57) Feb 05 2005 so
- Derek (50/117) Feb 05 2005 I don't think I made myself very clear.
- Walter (7/35) Feb 05 2005 What you're advocating sounds very much like how compile time warnings w...
- Derek (20/22) Feb 05 2005 Firstly, be they 'warning', 'information', 'error', 'FOOBAR', 'coaching'...
- Ben Hinkle (4/26) Feb 06 2005 A statement about inserting code could be made in verbose mode (-v) sinc...
- Derek Parnell (6/9) Feb 06 2005 Now that's a decent idea.
- Regan Heath (30/33) Feb 06 2005
- Matthew (3/37) Feb 06 2005 Sounds like a pretty excellent compromise to me!!
- Regan Heath (7/59) Feb 06 2005 It did to me too, however after more thought it appears more complex, se...
- Charles Patterson (7/16) Feb 08 2005 Not to pick on you Regan, because this is the second note of yours I've
- Regan Heath (31/47) Feb 08 2005 I don't feel you are. :)
- Matthew (41/86) Feb 05 2005 This is eminently sensible. I give it 0.01% chance of getting traction.
- Walter (7/10) Feb 05 2005 It is the conventional wisdom, and all other things being equal, it's
- Matthew (7/67) Feb 05 2005 It is indeed mostly better. Unfortunately, in the ways in which it is
- Derek Parnell (9/32) Feb 06 2005 Of course, this is all a moot point if you compile using the -release
- Matthew (5/40) Feb 06 2005 Shit! Is that so? I hadn't cottoned on to that. If that is indeed the
- Unknown W. Brackets (13/14) Feb 06 2005 I agree this behavior is not nice at all, unless you've never seen
- Paul Bonser (8/17) Feb 07 2005 Leave it to you guys to take a perfectly good semi-off-topic thread and
- Paul Bonser (6/6) Feb 22 2005 I'm proud to have fathered such a successful thread...
- John Reimer (5/9) Feb 22 2005 He he... well I don't think this is just /a/ thread... it's a
Some mention of license problems got me thinking about this piece of
standard Sun boilerplate:

"Nuclear, missile, chemical biological weapons or nuclear maritime end
uses or end users, whether direct or indirect, are strictly prohibited."

Are we going to have that kind of restrictions on D, or will we be free
to use it to guide weapons of mass destruction? :P

--
-PIB

"C++ also supports the notion of *friends*: cooperative classes that are
permitted to see each other's private parts." - Grady Booch
Feb 03 2005
Paul Bonser wrote:
> Some mention of license problems got me thinking about this piece of
> standard Sun boilerplate:
>
> "Nuclear, missile, chemical biological weapons or nuclear maritime end
> uses or end users, whether direct or indirect, are strictly prohibited."
>
> Are we going to have that kind of restrictions on D, or will we be free
> to use it to guide weapons of mass destruction? :P

The context I've usually seen that in is a disclaimer of responsibility
for the results of using (this or that) product for (this or that)
purpose. I doubt that it would have any effect (IANAL), but supposedly
the claim is implicitly "We aren't responsible if you use it that way,
so you can't sue us, and neither can your victims."
Feb 04 2005
Charles Hixson wrote:
>> Are we going to have that kind of restrictions on D, or will we be
>> free to use it to guide weapons of mass destruction? :P
>
> The context I've usually seen that in is a disclaimer of responsibility
> for the results of using (this or that) product for (this or that)
> purpose. I doubt that it would have any effect (IANAL), but supposedly
> the claim is implicitly "We aren't responsible if you use it that way,
> so you can't sue us, and neither can your victims."

I think the D license's:

> Do not use this software for life critical applications, or
> applications that could cause significant harm or property damage.

Might cover long distance missiles :-)

--anders
Feb 04 2005
In article <cu0q2s$a8v$1 digitaldaemon.com>, Anders F Björklund wrote:
> I think the D license's:
>> Do not use this software for life critical applications, or
>> applications that could cause significant harm or property damage.
> Might cover long distance missiles :-)

It's interesting that you bring that up. Walter may want to clarify that
language because it would clearly put great organizations like NASA or
ESA out of the loop... that is, if it's not changed after v1.0.

- EricAnderton at yahoo
Feb 04 2005
"pragma" <pragma_member pathlink.com> wrote in message
news:cu0qpe$avq$1 digitaldaemon.com...
> It's interesting that you bring that up. Walter may want to clarify
> that language because it would clearly put great organizations like
> NASA or ESA out of the loop... that is, if it's not changed after v1.0.

I don't care for the liability. An organization could use it for such
purposes, but only if they're willing to send me a signed statement
assuming liability and indemnifying Digital Mars.
Feb 04 2005
"pragma" <pragma_member pathlink.com> wrote in message
news:cu0qpe$avq$1 digitaldaemon.com...
> It's interesting that you bring that up. Walter may want to clarify
> that language because it would clearly put great organizations like
> NASA or ESA out of the loop... that is, if it's not changed after v1.0.

Guys, if we persist with the mechanism of no compile-time detection of
return paths, and rely on the runtime exceptions, do we really think
NASA would use D? Come on!
Feb 04 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu15pb$jqf$1 digitaldaemon.com...
> Guys, if we persist with the mechanism of no compile-time detection of
> return paths

"and switch cases"

> , and rely on the runtime exceptions, do we really think NASA would
> use D? Come on!
Feb 04 2005
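The bug class under debate can be sketched in C++ (an illustration, not code from the thread): a function whose control flow can fall off the end without returning. C++ compilers typically diagnose this shape at compile time, whereas the D compiler of this era accepted it and inserted a runtime halt on the uncovered path.

```cpp
// Illustrative only. The buggy shape, which C++ compilers flag at
// compile time (e.g. via -Wreturn-type) and 2005-era D deferred to a
// runtime error:
//
//     int sign(int x) {
//         if (x > 0) return 1;
//         if (x < 0) return -1;
//     }   // falls off the end when x == 0
//
// Covering every path removes the hazard under either scheme:
int sign(int x) {
    if (x > 0) return 1;
    if (x < 0) return -1;
    return 0;   // the previously missing path
}
```

A switch statement with an unhandled case is the analogous hazard behind the "and switch cases" correction in the follow-up post.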
> Guys, if we persist with the mechanism of no compile-time detection of
> return paths "and switch cases", and rely on the runtime exceptions,
> do we really think NASA would use D? Come on!

Would you fly to Mars in debug mode?
Feb 04 2005
Vathix wrote:
> Would you fly to Mars in debug mode?

Maybe if there were a debugger available, and one could single step. ;-)
Feb 04 2005
"Vathix" <vathix dprogramming.com> wrote in message
news:opslo861ihkcck4r esi...
> Would you fly to Mars in debug mode?

If it was critical software, yes, I'd run it with all the debugging
checks turned on.
Feb 04 2005
"Vathix" <vathix dprogramming.com> wrote in message
news:opslo861ihkcck4r esi...
> Would you fly to Mars in debug mode?

Well, to seriously answer your question: I think production code, at
least for 'important commercial' systems, should be shipped with
contract programming enforcement on.

I recently worked on a large-scale multi-protocol, multi-threaded,
multi-process, non-stop system, and used a lot of contract programming
(CP) in it. It's now humming away happily with all that good contract
enforcement, and suicidal servers. I have to tell you, I had a devil of
a time persuading the project managers of the utility of CP, and even
the techie guy had his qualms.

Like all commercial projects, this one started system testing the day
it went into production. And, do you know, it has only had two bugs so
far. One of these had an invariant condition ready for it, so it killed
itself informatively and the bug was fixed in 10 minutes. The second
did not have an invariant coded for it - much to my chagrin - and took
over a week to find.

So, the lesson to me is that CP should always be on, and the more
complex the system the more important it is that that be so. Although
I've worked on a few complex large-scale systems in the past that did
not have it (and which have run without flaw for years), I will not do
so in the future. CP all the way!

btw, we're going to write this up as a case study for an instalment of
Bjorn Karlsson's and my Smart Pointers column, called "The Nuclear
Reactor and the Deep Space Probe". It's mostly written, including some
excellent quotes from big-W, and we hope to get it out sometime this
month. (The column's on Artima.com, and available free for anyone; no
sign-up required.)

Cheers

Matthew
Feb 04 2005
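The always-on enforcement argued for above can be sketched in C++ (the class, names, and conditions here are invented for illustration; D expresses the same idea natively with in/out contracts and invariant blocks). The point is that the checks stay compiled into the shipping build, so the first violation fails loudly at the point of misuse.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical example: a class whose invariant and preconditions are
// checked even in release builds, so corruption is reported at its
// source instead of surfacing as a mystery bug weeks later.
class Account {
    long balance_;   // invariant: never negative

    void check_invariant(const char* where) const {
        if (balance_ < 0)
            throw std::logic_error(
                std::string("invariant violated in ") + where);
    }

public:
    Account() : balance_(0) {}

    void deposit(long amount) {
        if (amount <= 0)   // precondition (an in-contract in D)
            throw std::invalid_argument("deposit: amount must be positive");
        balance_ += amount;
        check_invariant("deposit");   // invariant re-checked on exit
    }

    long balance() const { return balance_; }
};
```

The 10-minute fix versus the week-long hunt in the anecdote above is exactly the difference between a check like this firing and silent corruption.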
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu1q0r$1609$1 digitaldaemon.com...
> Well, to seriously answer your question: I think production code, at
> least for 'important commercial' systems, should be shipped with
> contract programming enforcement on.

How about some middle ground?

    class Foo
    {
        ...
        invariant(X)   // runs with debug=X
        {
            ...
        }

        int foo(int i)
        in(Y) {}       // runs with debug=Y
        out(Y) {}
        body(Z) {}     // "error: missing body { ... } after in or out"
                       // if debug=Y but not debug=Z
    }

Then you could do this:

    dmd -O -inline -release -debug=X -debug=Y foo.d

(-release and -debug are mutually exclusive, but not -release and
-debug=ident.)

And the contract code could be turned on/off via already built-in
command-line functionality and keywords.

Would it create a mess for compiler implementors to do something like
this (Walter)? Would it make sense in the commercial world where PwC is
used (like your last project, Matthew)?

IMO, this could give the best of both worlds. Keep the contracts where
they are really needed but let the optimizer do its thing everywhere
else, like removing asserts and array bounds checking.

- Dave
Feb 05 2005
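A rough C++ analogue of the per-group idea above (an illustration, not D syntax: the flag name below is an assumption standing in for a -debug=Y identifier). Each contract group compiles in or out independently while the rest of the build is untouched.

```cpp
#include <stdexcept>

// Assumption: CHECK_PRECONDITIONS plays the role of the debug=Y group
// in the proposal; defining it to 0 on the compiler command line
// strips the in-contract, just as omitting -debug=Y would.
#ifndef CHECK_PRECONDITIONS
#define CHECK_PRECONDITIONS 1
#endif

int foo(int i) {
#if CHECK_PRECONDITIONS
    // the in(Y)-style contract: active only when its group is enabled
    if (i < 0)
        throw std::invalid_argument("foo: i must be non-negative");
#endif
    return i * 2;
}
```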
> IMO, this could give the best of both worlds.

I would even settle for less, like a -release-with-cp flag.

Charlie

"Dave" <Dave_member pathlink.com> wrote in message
news:cu2uq3$288a$1 digitaldaemon.com...
> How about some middle ground?
> [snip]
Feb 05 2005
I just think that we should have CP and debug/release somewhat
independent. Ideally, I'd like:

    "-debug"                  to have debugging info _and_ CP
    "-release"                to have CP
    "-release -contracts=off" to have neither

and, if anyone's that perverse,

    "-debug -contracts=off"   to have debugging info only

This all seems eminently straightforward. The only 'twist' is that CP
is on by default, unless one explicitly requests it to be off. (I'm
sure we can now start a heated battle about that ...)

"Charles" <no email.com> wrote in message
news:cu36bt$2f2h$1 digitaldaemon.com...
> I would even settle for less, like a -release-with-cp flag.
> [snip]
Feb 05 2005
Matthew wrote on Sun, 6 Feb 2005 07:08:19 +1100:
> I just think that we should have CP and debug/release somewhat
> independent.
> [snip]

How about using GDC? Have a look at:

    d/dmd/mars.h -> Param.useAssert|useInvariants|useIn|useOut|useArrayBounds|useSwitchError|useUnitTests
    d/d-lang.cc  -> opt_code, d_handle_option

Thomas
Feb 05 2005
"Dave" <Dave_member pathlink.com> wrote in message news:cu2uq3$288a$1 digitaldaemon.com..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1q0r$1609$1 digitaldaemon.com...If you mean that we should be able to individually select on/off the components of CP, i.e. preconditions, postconditions and invariants. At first blush, I'd say yes. But I think this should take some thinking about, as there may be good reasons against that don't immediately spring to mind. Oh, no, I see. You mean mix CP constructs with version. I'd say no, I think this is _too_ much flexibility. In the rare cases where people really need to have some class's constructs versioned, I think version would suffice. (That'd require that body can be supplied without in or out. I don't know if that's currently legal, but it should be.)"Vathix" <vathix dprogramming.com> wrote in message news:opslo861ihkcck4r esi...How about some middle ground? class Foo { ... invariant(X) // runs with debug=X { ... } int foo(int i) in(Y) {} // runs with debug=Y out(Y) {} body(Z) {} // "error: missing body { ... } after in or out" if debug=Y but not debug=Z } Then you could do this: dmd -O -inline -release -debug=X -debug=Y foo.d (-release and -debug are mutually exclusive, but not -release and -debug=ident)Well, to seriously answer your question: I think production code, at least for 'important commercial', should be shipped with contract programming enforcement on.Would you fly to mars in debug mode?Guys, if we persist with the mechanism of no compile-time detection of return paths"and switch cases", and rely on the runtime exceptions, do we really think NASA would use D? Come on!Would it make sense in the commercial world where PwC is used (like your last project, Matthew)?I'd say probably not. In this last large project, I did consider having a more granular approach, but the components run with ample speed anyway. 
The specific constructs which are 'purely debug' are left as simple debug-time asserts ACME_ASSERT / ACME_MESSAGE_ASSERT, while the CP constructs are ACME_ASSERT_PRECONDITION, ACME_ASSERT_POSTCONDITION, ACME_ASSERT_INVARIANT. Now this does raise the question of whether/how we discriminate between CP constructs that we want moderated with the "-contracts=on/off" flag and those with the "-debug" flag. In my recent experience, I would say that it's *very important* to be able to have both types, and to control them separately. But that's going to require a new keyword, since people will not be willing to pepper their code with debug { ... } blocks. Walter, your thoughts?
Feb 05 2005
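[For readers skimming the thread: Dave's invariant(X)/in(Y)/out(Y)/body(Z) proposal above is an extension of D's existing design-by-contract syntax. As a point of reference, the plain, unversioned syntax looks roughly like this; a sketch based on the D specification of the time, with an invented Account class for illustration:]

```d
// Sketch of D's standard contract syntax (circa 2005), which the proposal
// above would extend with version identifiers. Account is a made-up example.
class Account
{
    int balance;

    invariant    // checked on entry/exit of public methods; elided by -release
    {
        assert(balance >= 0);
    }

    int withdraw(int amount)
    in       // precondition; elided by -release
    {
        assert(amount > 0 && amount <= balance);
    }
    out (result)    // postcondition; elided by -release
    {
        assert(result == amount);
    }
    body
    {
        balance -= amount;
        return amount;
    }
}
```

[Under the proposal, each of these blocks could additionally be gated on a -debug=ident identifier, rather than being controlled wholesale by -release.]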
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3a23$2iuq$1 digitaldaemon.com..."Dave" <Dave_member pathlink.com> wrote in message news:cu2uq3$288a$1 digitaldaemon.com...I think you're right. What I mentioned above would tie PwC too closely to debug(...) and also potentially complicate large projects greatly, by allowing too much granularity. I'm with what you and Charlie posted earlier, some kind of -contracts=[on,off] flag or some such that could be used to override what -debug and -release both enforce now. I'm currently thinking that the defaults should perhaps act the same as-is ('-debug' implies '-contracts=on'; '-release' implies '-contracts=off') because that is what current D users have come to expect and also because PwC is generally considered to be "debug" related (in other words, I think those defaults would be more intuitive for the majority of people familiar with PwC, but of course I could be wrong). - Dave"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1q0r$1609$1 digitaldaemon.com...If you mean that we should be able to individually select on/off the components of CP, i.e. preconditions, postconditions and invariants. At first blush, I'd say yes. But I think this should take some thinking about, as there may be good reasons against that don't immediately spring to mind. I'd say probably not. In this last large project, I did consider having a more granular approach, but the components run with ample speed"Vathix" <vathix dprogramming.com> wrote in message news:opslo861ihkcck4r esi...How about some middle ground? class Foo { ... invariant(X) // runs with debug=X { ... } int foo(int i) in(Y) {} // runs with debug=Y out(Y) {} body(Z) {} // "error: missing body { ... 
} after in or out" if debug=Y but not debug=Z } Then you could do this: dmd -O -inline -release -debug=X -debug=Y foo.d (-release and -debug are mutually exclusive, but not -release and -debug=ident)Well, to seriously answer your question: I think production code, at least for 'important commercial', should be shipped with contract programming enforcement on.Would you fly to mars in debug mode?Guys, if we persist with the mechanism of no compile-time detection of return paths"and switch cases", and rely on the runtime exceptions, do we really think NASA would use D? Come on!
Feb 05 2005
In article <cu3iia$2r7v$1 digitaldaemon.com>, Dave says...I'm with what you and Charlie posted earlier, some kind of -contracts=[on,off] flag or some such that could be used to override what -debug and -release both enforce now. I'm currently thinking that the defaults should perhaps act the same as-is ('-debug' implies '-contracts=on'; '-release' implies '-contracts=off') because that is what current D users have come to expect and also because PwC is generally considered to be "debug" related (in other words, I think those defaults would be more intuitive for the majority of people familiar with PwC, but of course I could be wrong). - DaveAye - add my voice to that call.
Feb 05 2005
"Dave" <Dave_member pathlink.com> wrote in message news:cu3iia$2r7v$1 digitaldaemon.com..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3a23$2iuq$1 digitaldaemon.com...I would agree, for reasons of legacy breaking, were it not that D is pre-1.0. Since I'm coming to believe more and more that PwC should _not_ be considered a debug only thing, I think it should be the default. Naturally, I'm looking at this from the perspective of large commercial systems. For simple utilities, I'd be adding -contracts=off to my makefiles, and be content with that decision. Walter, may we have it on by default? (and divorce from debug?) Please. I'll be polite. Honest, ...., mate! :-)"Dave" <Dave_member pathlink.com> wrote in message news:cu2uq3$288a$1 digitaldaemon.com...I think you're right. What I mentioned above would tie PwC too closely to debug(...) and also potentially complicate large projects greatly, by allowing too much granularity. I'm with what you and Charlie posted earlier, some kind of -contracts=[on,off] flag or some such that could be used to override what -debug and -release both enforce now. I'm currently thinking that the defaults should perhaps act the same as-is ('-debug' implies '-contracts=on'; '-release' implies '-contracts=off') because that is what current D users have come to expect and also because PwC is generally considered to be "debug" related (in other words, I think those defaults would be more intuitive for the majority of people familiar with PwC, but of course I could be wrong)."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1q0r$1609$1 digitaldaemon.com...If you mean that we should be able to individually select on/off the components of CP, i.e. preconditions, postconditions and invariants. At first blush, I'd say yes. But I think this should take some thinking about, as there may be good reasons against that don't immediately spring to mind. I'd say probably not. 
In this last large project, I did consider having a more granular approach, but the components run with ample speed"Vathix" <vathix dprogramming.com> wrote in message news:opslo861ihkcck4r esi...How about some middle ground? class Foo { ... invariant(X) // runs with debug=X { ... } int foo(int i) in(Y) {} // runs with debug=Y out(Y) {} body(Z) {} // "error: missing body { ... } after in or out" if debug=Y but not debug=Z } Then you could do this: dmd -O -inline -release -debug=X -debug=Y foo.d (-release and -debug are mutually exclusive, but not -release and -debug=ident)Well, to seriously answer your question: I think production code, at least for 'important commercial', should be shipped with contract programming enforcement on.Would you fly to mars in debug mode?Guys, if we persist with the mechanism of no compile-time detection of return paths"and switch cases", and rely on the runtime exceptions, do we really think NASA would use D? Come on!
Feb 05 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3tn3$292$1 digitaldaemon.com...I would agree, for reasons of legacy breaking, were it not that D is pre-1.0. Since I'm coming to believe more and more that PwC should _not_ be considered a debug only thing, I think it should be the default. Naturally, I'm looking at this from the perspective of large commercial systems. For simple utilities, I'd be adding -contracts=off to my makefiles, and be content with that decision. Walter, may we have it on by default? (and divorce from debug?) Please. I'll be polite. Honest, ...., mate! :-)It is on by default. All -release does is turn it off. You could compile with: dmd -O foo and you'll get debug off, contracts on, optimization on. -debug turns on the debug() statements. -g turns on the "generate symbolic debug info". These are all independent of each other. The reason -inline is a seperate switch is that sometimes inlining can make things slower, debugging can be difficult with inlining happening, and profiling is more accurate with inlining off.
Feb 05 2005
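[Summarizing Walter's explanation as concrete invocations; a sketch, with behaviour as described in his post rather than verified against any particular DMD version:]

```shell
dmd foo.d                      # default build: contracts ON, debug() blocks off
dmd -O foo.d                   # optimized, contracts still ON
dmd -debug foo.d               # contracts ON, plus debug() statements compiled in
dmd -g foo.d                   # contracts ON, plus symbolic debug info
dmd -O -inline -release foo.d  # "full release": contracts and related checks OFF
```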
"Walter" <newshound digitalmars.com> wrote in message news:cu454v$7km$1 digitaldaemon.com..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3tn3$292$1 digitaldaemon.com...Cool. Sounds like the _only_ thing to do is rename the misnomer "-release" to "-nocontracts". Can we have that, and forestall all the wasted mental cycles for people who have to learn what it really means?I would agree, for reasons of legacy breaking, were it not that D is pre-1.0. Since I'm coming to believe more and more that PwC should _not_ be considered a debug only thing, I think it should be the default. Naturally, I'm looking at this from the perspective of large commercial systems. For simple utilities, I'd be adding -contracts=off to my makefiles, and be content with that decision. Walter, may we have it on by default? (and divorce from debug?) Please. I'll be polite. Honest, ...., mate! :-)It is on by default. All -release does is turn it off. You could compile with: dmd -O foo and you'll get debug off, contracts on, optimization on. -debug turns on the debug() statements. -g turns on the "generate symbolic debug info". These are all independent of each other. The reason -inline is a seperate switch is that sometimes inlining can make things slower, debugging can be difficult with inlining happening, and profiling is more accurate with inlining off.
Feb 05 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu45td$89b$1 digitaldaemon.com...Cool. Sounds like the _only_ thing to do is rename the misnomer "-release" to "-nocontracts". Can we have that, and forestall all the wasted mental cycles for people who have to learn what it really means?The original idea behind -release was to not require the DMD programmer to learn a bunch of arcane weird switches (look at any C++ compiler!), there'd be a switch that would make it "just work". Looks like I failed :-(
Feb 05 2005
"Walter" <newshound digitalmars.com> wrote in message news:cu4c88$ecu$1 digitaldaemon.com..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu45td$89b$1 digitaldaemon.com...Yes, it's that real world again. Better to have several switches which tell the absolute truth about what they each do than a few umbrella switches that mislead, don't you think? :-)Cool. Sounds like the _only_ thing to do is rename the misnomer "-release" to "-nocontracts". Can we have that, and forestall all the wasted mental cycles for people who have to learn what it really means?The original idea behind -release was to not require the DMD programmer to learn a bunch of arcane weird switches (look at any C++ compiler!), there'd be a switch that would make it "just work". Looks like I failed :-(
Feb 05 2005
In article <cu4c88$ecu$1 digitaldaemon.com>, Walter says..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu45td$89b$1 digitaldaemon.com...Maybe you could recycle -release to turn off contracts and debug, etc. It appears that you also need to provide individual control for the major D features. Do the GDC folks use the same switches?Cool. Sounds like the _only_ thing to do is rename the misnomer "-release" to "-nocontracts". Can we have that, and forestall all the wasted mental cycles for people who have to learn what it really means?The original idea behind -release was to not require the DMD programmer to learn a bunch of arcane weird switches (look at any C++ compiler!), there'd be a switch that would make it "just work". Looks like I failed :-(
Feb 06 2005
Mark T wrote:Do the GDC folks use the same switches?For the most part, yes. See this page: http://home.earthlink.net/~dvdfrdmn/d/ With GDC, they are called for instance: -frelease -finline-functions -fbounds-check -fdeprecated -funittest -fdebug (i.e. usually the same, with an "f" prefix) There's also a "dmd" wrapper perl script that converts DMD syntax to a GDC call... --anders PS. The Missing Manual Pages can be found at: http://www.algonet.se/~afb/d/d-manpages/
Feb 06 2005
In article <cu4c88$ecu$1 digitaldaemon.com>, Walter says...The original idea behind -release was to not require the DMD programmer to learn a bunch of arcane weird switches (look at any C++ compiler!), there'd be a switch that would make it "just work". Looks like I failed :-(I would say -release is more intuitive and self-explanatory; it's just a matter of documentation to specify what -release does. Or have multiple switches (-nocontracts, -noarrayboundchecks, etc.) and specify in the documentation that -release is a shortcut for all the above switches. Sai
Feb 06 2005
sai wrote:I would say -release is more intuitive and self-explanatory; it's just a matter of documentation to specify what -release does.Okay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does ? Does not compute :-) I find "-debug -release" to be a rather weird combination of DFLAGS ? But the first is just a version, and the second means to drop contracts (including pre-conditions, invariants, post-conditions and assertions) and also array-bounds and switch-default checks. Thus it is OK to mix. And of course, the -O and -g are somewhat related to debug vs. release too - but in D that's something else: optimization and debug symbols (neither of which is affected by settings for "-debug" or "-release") --anders PS. GDC has already added two flags that are not found in DMD: -fbounds-check (for ArrayBounds) and -femit-templates (needed for template workarounds, on compilers without one-only linkage)
Feb 06 2005
Anders F Björklund says...Okay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does ? Does not compute :-) I find "-debug -release" to be a rather weird combination of DFLAGS ?Yes, -release means it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc. turned off; quite self-explanatory to me!! Now, to know what all types of checks, contracts, pre-conditions, invariants etc are supported by the compiler, see its documentation.But the first is just a version, and the second means to drop contracts (including pre-conditions, invariants, post-conditions and assertions) and also array-bounds and switch-default checks. Thus it is OK to mix.I didn't say it is OK to mix. I usually don't put a -debug switch along with a -release switch. Mixing both switches doesn't make sense either. Sai
Feb 06 2005
"sai" <sai_member pathlink.com> wrote in message news:cu65me$27i4$1 digitaldaemon.com...Anders_F_Bj=F6rklund?= says...I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineeringOkay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does ? Does not compute :-) I find "-debug -release" to be a rather weird combination of DFLAGS ?Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!
Feb 06 2005
On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:"sai" <sai_member pathlink.com> wrote in message news:cu65me$27i4$1 digitaldaemon.com...I'm thinking as I write here, so I could be way off ... Isn't the idea of contracts just a mechanism to assist *coders* locate bugs during testing. And by 'bugs', I mean behaviour that is not documented in the program's (business) requirements specifications. As distinct from runtime handling of bad data or unexpected situations. If so, then by the time you build a final production version of the application, all the testing is completed. And thus contracts can be removed from the final release. However, you might keep them in for a beta release. Bad data and unexpected situations should be still addressed by exceptions and/or simple messages, designed to be read by an *end* user and not only the developers. -- Derek Melbourne, Australia 7/02/2005 9:39:01 AMAnders_F_Bj=F6rklund?= says...I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineeringOkay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does ? Does not compute :-) I find "-debug -release" to be a rather weird combination of DFLAGS ?Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!
Feb 06 2005
"Derek Parnell" <derek psych.ward> wrote in message news:cu66qc$29vn$1 digitaldaemon.com...On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:Well, yes and no. Yes, in the sense of a literal interpretation of that sentence. No in the sense that testing never ends - there is no non-trivial code that can be demonstrated to be fully tested! As such, there's a strong argument that contracts should stay in. IMO, the only reasonable refutations of that argument are on performance grounds."sai" <sai_member pathlink.com> wrote in message news:cu65me$27i4$1 digitaldaemon.com...I'm thinking as I write here, so I could be way off ... Isn't the idea of contracts just a mechanism to assist *coders* locate bugs during testing.Anders_F_Bj=F6rklund?= says...I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineeringOkay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does ? Does not compute :-) I find "-debug -release" to be a rather weird combination of DFLAGS ?Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!And by 'bugs', I mean behaviour that is not documented in the program's (business) requirements specifications. As distinct from runtime handling of bad data or unexpected situations.Well, your terminology is a bit off. You say "distinct from runtime handling of bad data or unexpected situations", implying 'bad data' and 'unexpected situations' are kind of part of the same thing. A lot of this depends on which term one wishes to use for what concept. 
Hence, one could argue that if a program encounters an 'unexpected situation', then it's operating counter to its design, and is invalid.If so, then by the time you build a final production version of the application, all the testing is completed.As I said, this can never be asserted with 100% confidence.And thus contracts can be removed from the final release.So this conclusion may not be drawn.However, you might keep them in for a beta release.Most certainly. Again, if, for performance reasons, a decision is made on performance grounds.Bad data and unexpected situations should be still addressed by exceptions and/or simple messages, designed to be read by an *end* user and not only the developers.Assuming that your unexpected situations are in the 'bad data' camp, rather than invariant violations, in which case: Yes.
Feb 06 2005
On Mon, 7 Feb 2005 13:36:48 +1100, Matthew wrote:"Derek Parnell" <derek psych.ward> wrote in message news:cu66qc$29vn$1 digitaldaemon.com...Of course. In the same sense that nothing is ever perfect. By 'testing' I was referring to the formal development process. And I was thinking more about *who* was doing the testing (as a formal process). The contract code, as I see it, is designed to interact with a developer and not an end user.On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:Well, yes and no. Yes, in the sense of a literal interpretation of that sentence. No in the sense that testing never ends - there is no non-trivial code that can be demonstrated to be fully tested!"sai" <sai_member pathlink.com> wrote in message news:cu65me$27i4$1 digitaldaemon.com...I'm thinking as I write here, so I could be way off ... Isn't the idea of contracts just a mechanism to assist *coders* locate bugs during testing.Anders_F_Bj=F6rklund?= says...I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineeringOkay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does ? Does not compute :-) I find "-debug -release" to be a rather weird combination of DFLAGS ?Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!As such, there's a strong argument that contracts should stay in. IMO, the only reasonable refutations of that argument are on performance grounds.'stay in' what? The executable shipped to the end user? Well of course you could, as in the end, its really a matter of style. 
I'm using the model that says that "contract code" is that portion of the source code that is only examining stuff so that it can detect specification errors. Other sorts of errors, such as bad data, and such as (illogical?) situations that *have not been specified*, are being tested by different portions of source code at run-time. So it's just a matter of definition, I guess. I'm just segregating the types of errors being tested based on who will be getting the messages about said errors. "Contract" code assumes its audience for its messages is the development team, "Other Error Testing" code assumes its audience for its messages is both development people *and* end users. "Contract" code checks for bad output using good input, bad input caused by coding errors (i.e. not user entered data), illogical process flows, etc... "Other Error Testing" code checks for bad inputs, bad environments (eg. missing files), temporal anomalies (eg. a file which was open, is suddenly found to be closed), etc...Sorry. They are two (of many) distinct classes of errors. 'bad data' is one type of error. 'unexpected situations' is another type of error.And by 'bugs', I mean behaviour that is not documented in the program's (business) requirements specifications. As distinct from runtime handling of bad data or unexpected situations.Well, your terminology is a bit off. You say "distinct from runtime handling of bad data or unexpected situations", implying 'bad data' and 'unexpected situations' are kind of part of the same thing.A lot of this depends on which term one wishes to use for what concept. Hence, one could argue that if a program encounters an 'unexpected situation', then it's operating counter to its design, and is invalid.I meant 'unexpected' in the sense that it is a situation that was not documented in the requirements specification, but happened anyway. It could be seen as a bug in the spec, rather than the code.Again it's a definition thing. 
"testing is completed" means that the formal testing process for release candidate X is completed and the source code for that candidate is frozen. A production build is produced from that and a 'gold disk' created for the marketing/sales group. Of course, the test builds for the code still exist, but they are only used in house and by beta testers. But yes, I agree that end users are also involuntary gamma testers ;-)If so, then by the time you build a final production version of the application, all the testing is completed.As I said, this can never be asserted with 100% confidence.I'm thinking about the *cause* of invariant violations. When caused by coding errors, then they should be tested for by contract code. When caused by inputting bad data, then they should be handled by non-contract testing code. I say this, just because I can conceive that some testing code is not suitable for shipping to unsuspecting customers, and should really just be handled in-house. Such code needs to be removed from production versions and the DMD -release switch is the current mechanism for doing that. -- Derek Melbourne, Australia 7/02/2005 1:51:12 PMAnd thus contracts can be removed from the final release.So this conclusion may not be drawn.However, you might keep them in for a beta release.Most certainly. Again, if, for performance reasons, a decision is made on performance grounds.Bad data and unexpected situations should be still addressed by exceptions and/or simple messages, designed to be read by an *end* user and not only the developers.Assuming that your unexpected situations are in the 'bad data' camp, rather than invariant violations, in which case: Yes.
Feb 06 2005
"Derek Parnell" <derek psych.ward> wrote in message news:cu6noo$9a1$1 digitaldaemon.com...On Mon, 7 Feb 2005 13:36:48 +1100, Matthew wrote:Gotcha"Derek Parnell" <derek psych.ward> wrote in message news:cu66qc$29vn$1 digitaldaemon.com...Of course. In the same sense that nothing is ever perfect. By 'testing' I was referring to the formal development process.On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:Well, yes and no. Yes, in the sense of a literal interpretation of that sentence. No in the sense that testing never ends - there is no non-trivial code that can be demonstrated to be fully tested!"sai" <sai_member pathlink.com> wrote in message news:cu65me$27i4$1 digitaldaemon.com...I'm thinking as I write here, so I could be way off ... Isn't the idea of contracts just a mechanism to assist *coders* locate bugs during testing.Anders_F_Bj=F6rklund?= says...I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineeringOkay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does ? Does not compute :-) I find "-debug -release" to be a rather weird combination of DFLAGS ?Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!And I was thinking more about *who* was doing the testing (as a formal process). The contract code, as I see it, is designed to interact with a developer and not an end user.Yes, that's a valid pov. 
For myself, I tend towards thinking of the contract as being between the software design and its reification in code (by fallible programmers), whose primary purpose is to protect the system on which it runs from damage. Sounds a bit calamitous, I know, but I've found that to be the most helpful, albeit a bit strict, perspective.Most certainly. Since a contract violation indicates that the code is performing out of the bounds / against its design, that's something that needs to be detected and handled in all possible circumstances. (Naturally, that's a completely different thing from handling out of bounds runtime conditions.)As such, there's a strong argument that contracts should stay in. IMO, the only reasonable refutations of that argument are on performance grounds.'stay in' what? The executable shipped to the end user?Well of course you could, as in the end, it's really a matter of style. I'm using the model that says that "contract code" is that portion of the source code that is only examining stuff so that it can detect specification errors. Other sorts of errors, such as bad data, and such as (illogical?) situations that *have not been specified*, are being tested by different portions of source code at run-time. So it's just a matter of definition, I guess.It is indeed.I'm just segregating the types of errors being tested based on who will be getting the messages about said errors. "Contract" code assumes its audience for its messages is the development team, "Other Error Testing" code assumes its audience for its messages is both development people *and* end users.From the perspective of who gets to see them, then I agree with what you say. Furthermore, I think it's a nice way of looking at it. Attractive as it is, however, I don't think it can be allowed to sway us into accepting that contracts should not manifest in the presence of users just because they're ill-equipped to deal with the messages that will be produced. 
Their prime purpose is to detect invalid programs, which must, axiomatically, be something that a user would not want to be executing on their system. More practically, I can say that from the experience of my recent work for a client - the project that has informed on / firmed up my commitment to 'CP-live' - that the users did indeed find it strange that the software informed them that it was invalid. However, when I (i) explained what this meant, and (ii) fixed the bug and had everything flying again within 10 minutes, they grok'ed it. Enthusiastically."Contract" code checks for bad output using good input, bad input caused by coding errors (i.e. not user entered data), illogical process flows, etc...Yes."Other Error Testing" code checks for bad inputs, bad environments (eg. missing files), temporal anomalies (eg. a file which was open, is suddenly found to be closed), etc...Yes, although I would hazard to suggest that the latter (the unexpectedly closed) would more likely be a sign of a bug in most instances in which it might possibly happen. Now, if you meant unexpectedly deleted, that would be more a runtime error condition, rather than violation.Here is where we get woolled up on the terminology, I think. Bad data is unequivocally a runtime error condition. But 'unexpected situations' can be that, or it can be a contract violation, depending on circumstances. The (10 minute) violation of which I've spoken a couple of times was most certainly an unexpected situation - hence it fired the violation assert and killed the process. There were other 'unexpected conditions' that were, in a sense, not so unexpected, since they were catered for in the code. One or two of these did occur, even though we didn't expect them to, but because we'd accounted for them, they resulted in a graceful reset and restart of the offended communications channel. I think I'd have to say that 'unexpected situation' is too malleable a term to be meaningful. 
I see it in black-and-white: there are contract violations and there are runtime error conditions. The former detect violations of the design assumptions. The latter are part of the design. (It is in the cracks between the two where the nightmares occur. The only other bug in the system was not caught for a week because the invariant for the Channel class was insufficiently specific. Thus, this can be said to be a deficiency in the design of the contracts, just as much as it manifests as a bug in the code.)Sorry. They are two (of many) distinct classes of errors. 'bad data' is one type of error. 'unexpected situations' is another type of error.And by 'bugs', I mean behaviour that is not documented in the program's (business) requirements specifications. As distinct from runtime handling of bad data or unexpected situations.Well, your terminology is a bit off. You say "distinct from runtime handling of bad data or unexpected situations", implying 'bad data' and 'unexpected situations' are kind of part of the same thing.Between spec and code is design. If it was accounted for in the design, then the code should handle it. If not, violation! <G>A lot of this depends on which term one wishes to use for what concept. Hence, one could argue that if a program encounters an 'unexpected situation', then it's operating counter to its design, and is invalid.I meant 'unexpected' in the sense that it is a situation that was not documented in the requirements specification, but happened anyway. It could be seen as a bug in the spec, rather than the code.An excellent phrase! I shall quote you with gay abandon. <G>Again it's a definition thing. "testing is completed" means that the formal testing process for release candidate X is completed and the source code for that candidate is frozen. A production build is produced from that and a 'gold disk' created for the marketing/sales group. 
Of course, the test builds for the code still exist, but they are only used in house and by beta testers. But yes, I agree that end users are also involuntary gamma testers ;-)If so, then by the time you build a final production version of the application, all the testing is completed.As I said, this can never be asserted with 100% confidence.By definition, a contract violation cannot be as a result of bad data.I'm thinking about the *cause* of invariant violations. When caused by coding errors, then they should be tested for by contract code. When caused by inputting bad data, then they should be handled by non-contract testing code.And thus contracts can be removed from the final release.So this conclusion may not be drawn.However, you might keep them in for a beta release.Most certainly. Again, only if a decision is made on performance grounds.Bad data and unexpected situations should be still addressed by exceptions and/or simple messages, designed to be read by an *end* user and not only the developers.Assuming that your unexpected situations are in the 'bad data' camp, rather than invariant violations, in which case: Yes.I say this, just because I can conceive that some testing code is not suitable for shipping to unsuspecting customers, and should really just be handled in-house.As can I. In the latest project - the only one in which I've really gone to town on C.P. - there was a lot of code that was debug only. I wrote ACMECLIENT_ASSERT and ACMECLIENT_MESSAGE_ASSERT for the debugging stuff, and there was ACMECLIENT_ASSERT_PRECONDITION, ACMECLIENT_ASSERT_POSTCONDITION and ACMECLIENT_ASSERT_INVARIANT for the C.P. The presence/absence of the two sets were independent. In practice, we have everything in the debug build, and only the CP ones in release.Such code needs to be removed from production versions and the DMD -release switch is the current mechanism for doing that.Agreed. That's why I'm suggesting that we need:
1. Separate 'assertion' constructs for C.P., as opposed to debug/developer-only assertions
2. To separate the elision/enabling of the two types. Specifically, I suggest that assertions are in unless "-release" is specified, but C.P. constructs are in unless "-nocontracts" is specified.
However, I fear all this good stuff we've been covering in the last few days will falter because Walter may not want to, at this stage, introduce separate constructs for CP vs debug/developer assertions. Which, then, makes writing large system stuff in D harder, as Kris's been regretfully observing. Perhaps a solution is that "-nocontracts" elides all assertions within in/out/invariant blocks, and "-release" all those outside them. Walter? Cheers Matthew
Feb 07 2005
I can also lament this; when people feel they've found a bug, they feel one of two things: anger, that there's a bug (most common in production) or satisfaction, for finding a mistake in someone else's work. In fact, some of the second class of people will get noticeably disappointed if you verify that what they experienced, for whatever reason, is NOT a bug - even if it's not their fault either (I remember remarking on this, but at the moment I can't think of how these two conditions can both be true.) Anyway, if someone finds a bug and they get a warning describing something about what happened, they tend to be more often of the second class. Not only because the error tends to corrupt their data less (not transparently screwing up, but catching itself) but because it's definitely a bug. There's still often anger there, but there's usually satisfaction too. But, next, if you have it fixed almost immediately after it's reported... well, let's be realistic: Most end users know that much of the software they use has bugs. They've been frustrated by bugs in Windows, spyware (which often crashes Internet Explorer for them), and other even good software. There are always bugs, and so EVEN WHEN THEY DON'T FIND ANY, they expect them. But, if they report a bug (which hopefully should be unlikely anyway) and it gets a fast response, that's something else. Confidence. They knew there would be bugs before, but now, NOW they know that if they ever find one, they'll have an easy message to report, and once you get it you'll fix it for them and get them the new version. They will love you. Maybe if we were back 20 years ago, we could try to fix this. But, it's too late now. We can't pretend bugs don't exist, or are so uncommon our clients won't expect them - even in OUR software. Nor can we pretend they won't, because... they will. So, the trick is optimizing the solution. Making them trust us, you, again. At least, imho. 
-[Unknown]
Feb 07 2005
On Mon, 07 Feb 2005 22:56:53 -0800, Unknown W. Brackets <unknown simplemachines.org> wrote: <snip>But, if they report a bug (which hopefully should be unlikely anyway) and it gets fast response, that's something else. Confidence. They knew there would be bugs before, but now, NOW they know that if they ever find one, they'll have an easy message to report, and once you get it you'll fix it for them and get them the new version. They will love you.This is the principle that the company I work for operates by. By and large, it appears to work as you have described. Regan
Feb 08 2005
In article <cu6k7l$2dk$1 digitaldaemon.com>, Matthew says...<snip>As such, there's a strong argument that contracts should stay in. IMO, the only reasonable refutations of that argument are on performance grounds.<rant> I agree. I mean let's face it, we've probably all shipped production code with what amounts to non-language formalized PwC in it, hence my earlier suggestion to divorce PwC from any switches, or at least any of the default "meta" switches. In the current ref. compiler implementation of PwC, the only perf. side effect that I can see is that calls are made to check for invariants even when there are none defined for a class. That is extremely expensive because it is done for every call to every method. If the compiler front-end can optimize those away then we only pay for what we use. I'm afraid that as-is, I'd end up falling back to some non-formalized form of PwC (and not even use it as D intends) because as you pointed out, Matthew, non-trivial & complicated code is hard to ever completely debug. As an example, consider an ad-hoc reporting UI where the user can select all sorts of different interdependent parameters that may have side-effects on any number of other parameters. Typically most of these different combinations will never be tested before release. The UI param class doesn't need high performance, but the report data-gen class does. What to do? You can either write gobs of code spread out all over the place to try and validate the input params, or you can place it all in an invariant block and live with the shitty performance of the data-gen class that doesn't even use invariant (or end up segregating the code - and compiler switches in the build script - even though it may not otherwise make sense to do so, only because of the performance side effects of invariants on classes that don't use them). 
The compiler already has to walk the class genealogy tree for things like making sure that super ctors are implemented with correct arguments, to enforce protection attributes, etc., etc. I suspect it can't add much complexity to add a cd->hasInvariant || cd->baseClass->hasInvariant flag or whatever to make the decision on whether or not to emit the invariant call for a class. Formalized PwC is a great feature of D and I just want to see it used. After all, (hopefully) our D applications will go through many more CPU cycles in production than in test. </rant> - Dave
Feb 07 2005
Dave wrote:I agree. I mean let's face it, we've probably all shipped production code with what amounts to non-language formalized PwC in it, hence my earlier suggestion to divorce PwC from any switches, or at least any of the default "meta" switches.Maybe I missed something, but what's the difference between Programming with Contracts (PwC) and Design by Contract (DbC) ? --anders
Feb 07 2005
In article <cu87kd$2soo$1 digitaldaemon.com>, Anders F Björklund says...Dave wrote:I'm not sure what the consensus would be on that but PwC expresses the implementation side of the DbC idea better in my mind, and it seems that the terms are used interchangeably anyhow. Whether or not the originators of the two terms intended that I don't know. If the consensus here is that DbC is the better blanket term, I'll be more than happy to use that. - DaveI agree. I mean let's face it, we've probably all shipped production code with what amounts to non-language formalized PwC in it, hence my earlier suggestion to divorce PwC from any switches, or at least any of the default "meta" switches.Maybe I missed something, but what's the difference between Programming with Contracts (PwC) and Design by Contract (DbC) ? --anders
Feb 07 2005
Dave wrote:Maybe I missed something, but what's the difference betweenProgramming with Contracts (PwC) and Design by Contract (DbC) ?If the consensus here is that DbC is the better blanket term, I'll be more than happy to use that.Nah, it was only that I had to look it up myself, - AFAIK, it stood for PricewaterhouseCoopers :-) The D spec only used Design by Contract™, though. As in http://www.digitalmars.com/d/dbc.html "Design by Contract" is now a trademark by Eiffel Software. (yet another reason to use a generic: Contract Programming) But "Contracts" seems to be a simple enough term to use ? (otherwise we'll just debate "in/out" vs. "pre/post"...) There does seem to be quite the confusion about contracts vs. exceptions vs. unittests vs. release vs. whatever, so anything specific added to the D language spec helps here. --anders
Feb 07 2005
In article <cu8bdv$389$1 digitaldaemon.com>, Anders F Björklund says...Dave wrote:I'd just like to point out that DbC is but a subset of AOP. The latter wields some pretty awesome mechanics for 'instrumenting' code, in highly manageable and ultimately configurable ways. Where DbC is about manually instrumenting each method and class, AOP subsumes that and extends it across classes; across behavior. It would seem to be a perfect match for the stated goals of D. Note that some 'aspects' of AOP relate to the injection of code before and after a particular set of methods is executed. D supports this via in{} and out{} constructs, plus invariant{}. This is why, I imagine, one can write an AOP preprocessor for D. However, AOP goes far beyond that. I encourage all to read up on AOP, just to see what the potential is. Why does this relate to the -release switch? The D version{} feature could be leveraged to enable/disable broad swathes of AOP functionality, at a high & adroitly manageable level. This would represent the ultimate in controlling which particular tests are retained for any given compile. - KrisMaybe I missed something, but what's the difference betweenProgramming with Contracts (PwC) and Design by Contract (DbC) ?If the consensus here is that DbC is the better blanket term, I'll be more than happy to use that.Nah, it was only that I had to look it up myself, - AFAIK, it stood for PricewaterhouseCoopers :-) The D spec only used Design by Contract™, though. As in http://www.digitalmars.com/d/dbc.html "Design by Contract" is now a trademark by Eiffel Software. (yet another reason to use a generic: Contract Programming) But "Contracts" seems to be a simple enough term to use ? (otherwise we'll just debate "in/out" vs. "pre/post"...) There does seem to be quite the confusion about contracts vs. exceptions vs. unittests vs. release vs. whatever, so anything specific added to the D language spec helps here. --anders
Feb 07 2005
"Anders F Björklund" <afb algonet.se> wrote in message news:cu87kd$2soo$1 digitaldaemon.com...Dave wrote:It's a distinction primarily promoted by Chris Diggins (www.heron-language.com), which attempts to draw meaningful distinctions between the practice of contract programming techniques in code, and the use of contracts as design specifications. Being a decompositionist by nature, I get where he's coming from, and there's good historical support for his position - people have been using asserts for a long time in blissful ignorance of the lofty ideals of CP (or DbC, if you want to pay Mssr Dr Meyer lots of cash), and - it's eminently reasonable to use a contract to specify the behaviour of a function without enforcing it at runtime; we all do it all the time! However, since the two things are more and more coming together, I tend to prefer to follow Walter's lead, and just call it Contract Programming.I agree. I mean let's face it, we've probably all shipped production code with what amounts to non-language formalized PwC in it, hence my earlier suggestion to divorce PwC from any switches, or at least any of the default "meta" switches.Maybe I missed something, but what's the difference between Programming with Contracts (PwC) and Design by Contract (DbC) ?
Feb 07 2005
Matthew wrote:I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineeringI just don't think contracts have anything to do with release vs. debug ? For instance, I had to build a non-release version of the Phobos lib just to make it check the runtime contracts in my own debugging builds. I thought that pure debugging code was to be put in debug {} blocks ? And that the contracts *could* remain, even in released versions... Array-bounds and switch-default are probably OK to strip for release. Maybe stripping asserts and contracts in release builds is standard procedure, but it would be more straight-forward if called -contract ? (which could be a "subflag" that is triggered to 0 by -release, but) Or maybe I am just mixing up exceptions versus contracts, as usual... http://research.remobjects.com/blogs/mh/archive/2005/01/11/232.aspx Even so, having a libphobos-debug.a version has helped me catch a few. (i.e. for debugging builds I use -lphobos-debug, -lphobos for release) --anders
Feb 06 2005
I think it's getting clear that we're going to need fine-grained control of what stays in a final production build, and what does not. I also think that we should probably either: 1 make "-release" mean "*all* CP stuff stays in", or 2 make it mean "*all* CP is elided". Any halfway house is just likely to lead to confusion. I, personally, think that 2 is a bad thing, but I strongly suspect there'll be objections to 1, for obverse reasons. Maybe the answer might be to drop "-release" entirely. Can someone with a more detailed understanding than me describe what this might entail, i.e. what the diff between "-debug" and "" might be? Vaguely, yours Matthew "Anders F Björklund" <afb algonet.se> wrote in message news:cu66t7$29pc$1 digitaldaemon.com...Matthew wrote:I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineeringI just don't think contracts have anything to do with release vs. debug ? For instance, I had to build a non-release version of the Phobos lib just to make it check the runtime contracts in my own debugging builds. I thought that pure debugging code was to be put in debug {} blocks ? And that the contracts *could* remain, even in released versions... Array-bounds and switch-default are probably OK to strip for release. Maybe stripping asserts and contracts in release builds is standard procedure, but it would be more straight-forward if called -contract ? (which could be a "subflag" that is triggered to 0 by -release, but) Or maybe I am just mixing up exceptions versus contracts, as usual... http://research.remobjects.com/blogs/mh/archive/2005/01/11/232.aspx Even so, having a libphobos-debug.a version has helped me catch a few. (i.e. 
for debugging builds I use -lphobos-debug, -lphobos for release) --anders
Feb 06 2005
Matthew wrote:I think it's getting clear that we're going to need fine grained control of what stays in a final production build, and what does not. I also think that we should probably either: 1 make "-release" mean "*all* CP stuff stays in", or 2 make "*all* CP is elided". Any halfway house is just likely to lead to confusion. Since I, personally, think that 2 is a bad thing, but I strongly suspect there'll be objections to 1, for obverse reasons.After doing some more reading on the subject, I've come to agree with the default setting of DMD, which is to strip all contracts on -release. But there could still be optional individual flags to toggle each of: *) contracts (all four of them) *) array bounds *) switch default There are already such flags in the code, just not on the command line.if (global.params.release) { global.params.useInvariants = 0; global.params.useIn = 0; global.params.useOut = 0; global.params.useAssert = 0; global.params.useArrayBounds = 0; global.params.useSwitchError = 0; }Just didn't make it all the way clear to the documentation, I suppose ?Maybe the answer might be to drop "-release" entirely. Can someone with a more detailed understanding than me describe what this might entail, i.e. what the diff between "-debug" and "" might be?"debug" is just a version, i.e. it triggers the debug { ... } sections. It doesn't have *anything* to do with the "-release" flag whatsoever... 
Whereas leaving out "-release" leaves the default values for the above:global.params.useAssert = 1; global.params.useInvariants = 1; global.params.useIn = 1; global.params.useOut = 1; global.params.useArrayBounds = 1; global.params.useSwitchError = 1;I found some more interesting opinions on: http://c2.com/cgi/wiki?WhatAreAssertionsSince assertions that don't fail are no-ops, once a program has been thoroughly tested and bug-fixed, it is possible to recompile the source code without the assertions to produce a program that is both smaller and fasterOnce upon a time assertions were not executable: See AssertionsAsComments.--anders
Feb 07 2005
Got my vote, on both counts. Contracts 'on' by default and, perhaps more pressing, a means to retain the contracts whilst removing asserts, invariants, array-bounds, etc. I don't suppose there will be any real agreement upon which of the latter are appropriate or not; hence it would seem prudent to expose a compound flag, controlling which of those tests should be on or off ... <sigh> - Kris In article <cu65vg$289g$1 digitaldaemon.com>, Matthew says..."sai" <sai_member pathlink.com> wrote in message news:cu65me$27i4$1 digitaldaemon.com...Anders F Björklund says...I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineeringOkay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does ? Does not compute :-) I find "-debug -release" to be a rather weird combination of DFLAGS ?Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self-explanatory to me!!
Feb 06 2005
Sorry if I didn't say so before Matthew, but I agree wholeheartedly. Which switches to use I don't really care about, though I agree that '-release' is somewhat badly named as it means different things to different people, dependent on experience, company practices and more. Lars Ivar Igesund Matthew wrote:"sai" <sai_member pathlink.com> wrote in message news:cu65me$27i4$1 digitaldaemon.com...Anders F Björklund says...I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineeringOkay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does ? Does not compute :-) I find "-debug -release" to be a rather weird combination of DFLAGS ?Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self-explanatory to me!!
Feb 07 2005
"Walter" <newshound digitalmars.com> wrote in message news:cu4c88$ecu$1 digitaldaemon.com..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu45td$89b$1 digitaldaemon.com...I don't think you failed - actually I think the original plan and how it's implemented makes a lot of sense, with perhaps one exception on the implementation.. If this issue: http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D/15834 is addressed (basically, so a call to check for invariants at run-time is done /only/ if there is an invariant defined or inherited for a class), then I think PwC can be divorced from the -release switch altogether. What that would mean for developers is that 1) they could always have PwC without paying for it where it isn't used, 2) there would be no new switches and 3) they would have to change asserts to throws, i.e.: import std.stream; class X { int i,j; this(int iVal, int jVal) { i = iVal; j = jVal; } // code that also manipulates i and j here invariant { // assert(i >= 6 && i <= 10); if(i < 6 || i > 10) throw new Exception("Class X value i is out of range"); // assert(i >= 20 && i <= 100); if(j < 20 || j > 100) throw new Exception("Class X value j is out of range"); } } void main() { X x; int iVal = 0, jVal = 0; // simulate a confused user picking the values iVal = 6; jVal = 102; while(!foo(x,iVal,jVal)) { if(!(jVal % 2)) iVal--; else iVal++; jVal--; } // ... if(x) stdout.writefln("x.i,j = %d,%d",x.i,x.j); } bool foo(out X x, int iVal, int jVal) { try { x = new X(iVal,jVal); return true; } catch(Exception e) { stdout.writefln("%s",e); delete x; return false; } } which will probably work out to be a good thing because the error messages displayed to users can be better than the asserts, plus the developer can use invariant blocks to roll up common exceptions for a class into one place if they so desire. 
What that would mean for /users/ is that 1) they could always have PwC without paying for it where it isn't used, 2) Developers would be encouraged to use catchable exceptions in PwC that would make more sense than cryptic asserts and allow for better exception recovery. This could move PwC into being something that commonly makes it into production code instead of stopping at the release build. I think that could be a great thing to move the whole concept of language formalized PwC forward. - DaveCool. Sounds like the _only_ thing to do is rename the misnomer "-release" to "-nocontracts". Can we have that, and forestall all the wasted mental cycles for people who have to learn what it really means?The original idea behind -release was to not require the DMD programmer to learn a bunch of arcane weird switches (look at any C++ compiler!), there'd be a switch that would make it "just work". Looks like I failed :-(
Feb 06 2005
In article <cu454v$7km$1 digitaldaemon.com>, Walter says..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3tn3$292$1 digitaldaemon.com...But right now (when not using -release) there are invariant calls for each method of each class even when the class doesn't have or inherit an invariant definition. Can that issue be addressed? Thanks, - DaveI would agree, for reasons of legacy breaking, were it not that D is pre-1.0. Since I'm coming to believe more and more that PwC should _not_ be considered a debug only thing, I think it should be the default. Naturally, I'm looking at this from the perspective of large commercial systems. For simple utilities, I'd be adding -contracts=off to my makefiles, and be content with that decision. Walter, may we have it on by default? (and divorce from debug?) Please. I'll be polite. Honest, ...., mate! :-)It is on by default. All -release does is turn it off. You could compile with: dmd -O foo and you'll get debug off, contracts on, optimization on.
Feb 05 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu15pb$jqf$1 digitaldaemon.com...Guys, if we persist with the mechanism of no compile-time detection of return paths, and rely on the runtime exceptions, do we really think NASA would use D? Come on!NASA uses C, C++, Ada and assembler for space hardware. http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that: 1) make it impossible to ignore situations the programmer did not think of 2) the bias is to force bugs to show themselves in an obvious manner 3) not making it easy for the programmer to insert dead code to "shut up the compiler" This is why the return and the switch defaults are the way they are. The illustrative example of why this is a superior approach is the Java compiler's insistence on function signatures listing every exception they might raise. Sounds like a great idea to create robust code. Unfortunately, the opposite happens. Java programmers get used to just inserting catch all statements just to get the compiler to shut up. The end result is that critical errors get SILENTLY IGNORED rather than dealt with. The ABSOLUTELY WORST thing a critical software app can do is silently ignore errors. I've talked to a couple NASA probe engineers. They insert "deadman" switches in the computers. If the code crashes, locks, or has an unhandled exception, the deadman trips and the computer resets. The other approach I've seen in critical systems is "shut me down, notify the pilot, and engage the backup" upon crash, lock, or unhandled exception. This won't happen if the error is silently ignored. Having the compiler complain about lack of a return statement will encourage the programmer to just throw in a return 0; statement. 
Compiler is happy, the potential bug is NOT fixed, maintenance programmer is left wondering why there's a statement there that never gets executed, visual inspection of the code will not reveal anything obviously wrong, and testing will likely not reveal the bug since the function returns normally. Testers who use code coverage analyzers (an excellent QA technique) will have dead code sticking in their craws. However, if the runtime exception does throw, the programmer knows he has a REAL bug, not a HYPOTHETICAL bug, and it's something that needs fixing, not an annoyance that needs shutting up. Testing will likely reveal it. If it happens in the field in critical software, the deadman or backup can be engaged. A return 0; will paper over the bug, potentially causing far worse things to happen. The same comments apply to the switch default issue. Correct me if I'm wrong, but your position is that the compiler issuing an error will ensure that the programmer will correct the hypothetical error by inserting dead code, thereby making it correct. This may happen most of the time, but I worry about the cases where the shut the compiler up dead code is inserted instead, as what happens in Java even by Java experts who KNOW BETTER yet do it anyway. (I know this because they've told me they do this even knowing they shouldn't.) I've used compilers that insisted that I insert dead code. I usually add a comment saying it's dead code to shut the compiler up. It doesn't look good <g>. I want to comment on the idea that having an unhandled exception happening to the customer makes the app developer look bad. Yep, it makes the developer look bad. Bugs always make the developer look bad. Silently ignoring bugs doesn't make them go away. At least with the exception you have a good chance of being able to reproduce the problem and get it fixed. That's much better for the customer than having a silent papered over bug insidiously corrupt his expensive database he didn't back up. 
In short, I strongly believe that inserting dead code (code that will never be executed) is not the answer to writing bug free code. Having such code in there is misleading at best, and at worst will cause critical errors to be silently ignored.
Feb 04 2005
I'd just like to say how much I agree with this. Of course, this is an open source concept, really, but in practice it is something I have found to polish software much more robustly. As an example, I write forum software (in PHP - I also do stuff in D) backed by a database. One of the primary causes of bugs is database errors - meaning, syntax or data errors created by unexpected input (for example, not selecting any items but then clicking "delete selected".) Of course, showing such database errors to end users is a bad idea. In the worst case, showing detailed information about these errors can more easily expose a security hole which might otherwise be patched by the time we heard of the error. Instead, database errors are shown to administrators (that is, people with privilege to see them) and also logged in the database for later retrieval. Now, this may not seem to translate directly, but to me it does. In previous versions of the software, database error messages were neither logged nor shown to anyone. After the change, fixing bugs became much easier... especially for third party add-on developers. It was then possible to fix bugs much more quickly and easily for all involved (including the users, who were sometimes programmers themselves), only increasing productivity and stability. Moreover, sometimes relying on the compiler to detect your errors makes you soft. By this, I don't mean you just stick in dead code - I mean that you expect the compiler to tell you if there are any paths that lead to a missing return (as an admittedly bad example.) If the compiler, for any reason, mistakenly ignores a possibility... you will ignore it too. Yes, this could be considered a bug in the compiler... but that only compounds the number of bugs. IMHO, one of the best ways to make software stable is to make it so that if there ARE bugs, they won't do as much damage as they might otherwise. Some people think they can dream up some way to rid the world of bugs. 
You can't do it - they can live through nuclear blasts, darn it! Prevention and a good strong boot are the only things that work, and thinking otherwise is only going to cause infestation not salvation. For those who don't like metaphors, I only mean to emphasize what I said above; there is no catch all solution to software bugs - even misplaced returns. -[Unknown]I want to comment on the idea that having an unhandled exception happening to the customer makes the app developer look bad. Yep, it makes the developer look bad. Bugs always make the developer look bad. Silently ignoring bugs doesn't make them go away. At least with the exception you have a good chance of being able to reproduce the problem and get it fixed. That's much better for the customer than having a silent papered over bug insidiously corrupt his expensive database he didn't back up.
Feb 04 2005
This is all tired old ground, and I know I'm not going to prevail. However, the fact that my comment's got your back up sufficiently to post a long and erudite response must indicate that you realise that I'm not the sole barking-mad dog, howling at the wind. So, I'll bite. Just a little. Before I kick off, I must say I find a disappointing lack of weight to your list of points, which I think reflects the lack of cogency to the state of D around this area: That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that: 1) make it impossible to ignore situations the programmer did not think of. So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program. 2) the bias is to force bugs to show themselves in an obvious manner. So do I. But this statement is too bland to be worth anything. What is "obvious"? *Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late? Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile-time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment time) to find them, is simply crazy. 3) not making it easy for the programmer to insert dead code to "shut up the compiler". I completely agree. But you're hamstringing 100% of all developers for the careless/unprofessional/inept of a few. Do you really think it's worth it? Will those handful % of better-employed-working-in-the-spam-industry find no other way to screw up their systems? 
Is this really going to answer all the issues attendant with a lack of skill/learning/professionalism/adequate quality mechanisms (incl. design reviews, code reviews, documentation, refactoring, unit testing, system testing, etc. etc.)? But I'm not going to argue point by point with your post, since you lost me at "Java's exceptions". The analogy is specious, and thus unconvincing. (Though I absolutely concur that they were a little tried 'good idea', like C++'s exception specifications or, in fear of drawing unwanted venom from my friends in the C++ firmament, export.) My position is simply that compile-time error detection is better than runtime error detection. Further, where compile-time detection is not possible, runtime protection should be to the MAX: practically, this means that I *strongly* believe that contract violations mean death for an application, without exception. (So, FTR, the last several paragraphs of your post most certainly don't apply to this position. I'm highly confident you already know I hold this position, so I assume they're in there for wider pedagogical purposes, and will not comment on them further.) Your position now - or maybe it's just expressed altogether in a single place for the first time - seems to be that having a compiler detect potential errors opens up the door for programmers to shut the compiler up with dead code. This is indeed true. You seem to argue that, as a consequence, it's better to prevent the compiler from giving (what you admit would be: "This may happen most of the time ...") very useful help in the majority of cases. I disagree. Now you're absolutely correct that an invalid state throwing an exception, leading to application/system reset is a good thing. Absolutely. But let's be honest. All that achieves is to prevent a bad program from continuing to function once it is established to be bad. It doesn't make that program less bad, or help it run well again. 
Depending on the vagaries of its operating environment, it may well just keep going bad, in the same (hopefully very short) amount of time, again and again and again. The system's not being (further) corrupted, but it's not getting anything done either. It's clear, or so it seems to me, that this issue, at least as far as the strictures of D are concerned, is a balance between the likelihoods of: 1. producing a non-violating program, and 2. preventing a violating program from continuing its execution and, therefore, potentially wrecking a system. You seem to be of the opinion that the current situation of missing return/case handling (MRCH) minimises the likelihood of 2. I agree that it does so. However, contrarily, I assert that D's MRCH minimises the likelihood of producing a non-violating program in the first place. The reasons are obvious, so I'll not go into them. (If anyone cares to disagree, I ask you to write a non-trivial C++ program in a hurry, disable *all* warnings, and go straight to production with it.) Walter, I think that you've hung D on the petard of 'absolutism in the name of simplicity', on this and other issues. For good reasons, you won't conscience warnings, or pragmas, or even switch/function decorator keywords (e.g. "int allcases func(int i) { if (i < 0) return -1; }"). Indeed, as I think most participants will acknowledge, there are good reasons for all the decisions made for D thus far. But there are also good reasons against most/all of those decisions. (Except for slices. Slices are *the best thing* ever, and coupled with auto+GC, will eventually stand D out from all other mainstream languages.<G>). Software engineering hasn't yet found a perfect language. D is not perfect, and it'd be surprising to hear anyone here say that it is. That being the case, how can the policy of absolutism be deemed a sensible one? 
It cannot be sanely argued that throwing on missing returns is a perfect solution, any more than it can be argued that compiler errors on missing returns is. That being the case, why has D made manifest in its definition the stance that one of these positions is indeed perfect? I know the many dark roads that await once the tight control on the language is loosened, but the real world's already here, batting on the door. I have an open mind, and willing fingers to all kinds of languages. I like D a lot, and I want it to succeed a *very great deal*. But I really cannot imagine recommending use of D to my clients with these flaws of absolutism. (My hopeful guess for the future is that other compiler variants will arise that will, at least, allow warnings to detect such things at compile time, which may alter the commercial landscape markedly; D is, after all, full of a great many wonderful things.) One last word: I recall a suggestion a year or so ago that would require the programmer to explicitly insert what is currently inserted implicitly. This would have the compiler report errors to me if I missed a return. It'd have the code throw errors to you if an unexpected code path occurred. Other than screwing over people who prize typing one less line over robustness, what's the flaw? And yet it got no traction .... [My goodness! That was way longer than I wanted. I guess we'll still be arguing about this when the third edition of DPD's running hot through the presses ...] Matthew "Walter" <newshound digitalmars.com> wrote in message news:cu1clr$r71$1 digitaldaemon.com..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu15pb$jqf$1 digitaldaemon.com...Guys, if we persist with the mechanism of no compile-time detection of return paths, and rely on the runtime exceptions, do we really think NASA would use D? Come on! NASA uses C, C++, Ada and assembler for space hardware. 
http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that: 1) make it impossible to ignore situations the programmer did not think of 2) the bias is to force bugs to show themselves in an obvious manner 3) not making it easy for the programmer to insert dead code to "shut up the compiler" This is why the return and the switch defaults are the way they are. The illustrative example of why this is a superior approach is the Java compiler's insistence on function signatures listing every exception they might raise. Sounds like a great idea to create robust code. Unfortunately, the opposite happens. Java programmers get used to just inserting catch all statements just to get the compiler to shut up. The end result is that critical errors get SILENTLY IGNORED rather than dealt with. The ABSOLUTELY WORST thing a critical software app can do is silently ignore errors. I've talked to a couple NASA probe engineers. They insert "deadman" switches in the computers. If the code crashes, locks, or has an unhandled exception, the deadman trips and the computer resets. The other approach I've seen in critical systems is "shut me down, notify the pilot, and engage the backup" upon crash, lock, or unhandled exception. This won't happen if the error is silently ignored. Having the compiler complain about lack of a return statement will encourage the programmer to just throw in a return 0; statement. Compiler is happy, the potential bug is NOT fixed, maintenance programmer is left wondering why there's a statement there that never gets executed, visual inspection of the code will not reveal anything obviously wrong, and testing will likely not reveal the bug since the function returns normally. 
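[Editorial note: for readers joining the thread, the kind of function under discussion looks something like the sketch below. This is an illustration of the semantics as described in these 2005 posts - DMD then accepted a possibly-missing return and inserted a runtime check; later compilers reject it at compile time - and daysInMonth is an invented example, not code from the thread.]

```d
// A function with a path that falls off the end without returning.
// DMD (circa 2005) compiled this and threw at runtime if the
// unhandled path (m out of range) was ever taken.
int daysInMonth(int m)
{
    if (m == 2) return 28;
    if (m == 4 || m == 6 || m == 9 || m == 11) return 30;
    if (m >= 1 && m <= 12) return 31;
    // m < 1 or m > 12 falls through here
}

// The "shut the compiler up" fix Walter warns about: in a language
// that makes the missing return a compile error, a hurried programmer
// appends a dead return, silently papering over the bad-input case.
int daysInMonthPapered(int m)
{
    if (m == 2) return 28;
    if (m == 4 || m == 6 || m == 9 || m == 11) return 30;
    if (m >= 1 && m <= 12) return 31;
    return 0; // forgotten case? dead code? a maintainer can't tell
}
```

The disagreement in the thread is precisely about which of these two failure modes - a runtime throw on the first version, or a masked bug in the second - is the lesser evil.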
Testers who use code coverage analyzers (an excellent QA technique) will have dead code sticking in their craws. However, if the runtime exception does throw, the programmer knows he has a REAL bug, not a HYPOTHETICAL bug, and it's something that needs fixing, not an annoyance that needs shutting up. Testing will likely reveal it. If it happens in the field in critical software, the deadman or backup can be engaged. A return 0; will paper over the bug, potentially causing far worse things to happen. The same comments apply to the switch default issue. Correct me if I'm wrong, but your position is that the compiler issuing an error will ensure that the programmer will correct the hypothetical error by inserting dead code, thereby making it correct. This may happen most of the time, but I worry about the cases where the shut the compiler up dead code is inserted instead, as what happens in Java even by Java experts who KNOW BETTER yet do it anyway. (I know this because they've told me they do this even knowing they shouldn't.) I've used compilers that insisted that I insert dead code. I usually add a comment saying it's dead code to shut the compiler up. It doesn't look good <g>. I want to comment on the idea that having an unhandled exception happening to the customer makes the app developer look bad. Yep, it makes the developer look bad. Bugs always make the developer look bad. Silently ignoring bugs doesn't make them go away. At least with the exception you have a good chance of being able to reproduce the problem and get it fixed. That's much better for the customer than having a silent papered over bug insidiously corrupt his expensive database he didn't back up. In short, I strongly believe that inserting dead code (code that will never be executed) is not the answer to writing bug free code. Having such code in there is misleading at best, and at worst will cause critical errors to be silently ignored.
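[Editorial note: the deadman/backup pattern Walter describes amounts to a top-level handler around the main loop. A rough sketch, with runAutopilot, notifyPilot and engageBackup as hypothetical names, and the catch clause written against 2005-era D, where all throwable objects derived from Object:]

```d
void main()
{
    try
    {
        runAutopilot();      // hypothetical: the primary control loop
    }
    catch (Object fault)     // catch-all in 2005-era D; any unhandled failure,
    {                        // including a missing-return or switch-default throw
        // Self-detected fault: disable this unit, engage the backup,
        // tell the pilot - never limp on with a papered-over error.
        notifyPilot(fault.toString());
        engageBackup();
    }
}
```

This only works if faults actually surface as exceptions; a dead `return 0;` on the failing path would sail straight past the handler.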
Feb 04 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1pe6$15ks$1 digitaldaemon.com...If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping. 1) make it impossible to ignore situations the programmer did not think of. So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program. Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error. 2) the bias is to force bugs to show themselves in an obvious manner. So do I. But this statement is too bland to be worth anything. What is "obvious"? *Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late? As early as possible. Putting in the return 0; means the showing will be late. Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile-time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment time) to find them, is simply crazy. I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code. But you're hamstringing 100% of all developers for the careless/unprofessional/inept of a few. I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. 
People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong, even though they know it is wrong and tell others it is wrong, that is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do." Do you really think it's worth it? Absolutely. Will those handful % of better-employed-working-in-the-spam-industry find no other way to screw up their systems? Is this really going to answer all the issues attendant with a lack of skill/learning/professionalism/adequate quality mechanisms (incl. design reviews, code reviews, documentation, refactoring, unit testing, system testing, etc. etc.)? D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers. But I'm not going to argue point by point with your post, since you lost me at "Java's exceptions". The analogy is specious, and thus unconvincing. (Though I absolutely concur that they were a little tried 'good idea', like C++'s exception specifications or, in fear of drawing unwanted venom from my friends in the C++ firmament, export.) I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up on booting exception specifications out of C++ completely. 
The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility. There's some growing thought that even static type checking is an emperor without clothes, that dynamic type checking (like Python does) is more robust and more productive. I'm not at all convinced of that yet <g>, but it's fun seeing the conventional wisdom being challenged. It's good for all of us. My position is simply that compile-time error detection is better than runtime error detection. In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error. Further, where compile-time detection is not possible, runtime protection should be to the MAX: practically, this means that I *strongly* believe that contract violations mean death for an application, without exception. (So, FTR, the last several paragraphs of your post most certainly don't apply to this position. I'm highly confident you already know I hold this position, so I assume they're in there for wider pedagogical purposes, and will not comment on them further.) Your position now - or maybe it's just expressed altogether in a single place for the first time - seems to be that having a compiler detect potential errors opens up the door for programmers to shut the compiler up with dead code. This is indeed true. You seem to argue that, as a consequence, it's better to prevent the compiler from giving (what you admit would be: "This may happen most of the time ...") very useful help in the majority of cases. I disagree. I know we disagree. <g> Now you're absolutely correct that an invalid state throwing an exception, leading to application/system reset is a good thing. Absolutely. But let's be honest. All that achieves is to prevent a bad program from continuing to function once it is established to be bad. 
It doesn't make that program less bad, or help it run well again. Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct. Depending on the vagaries of its operating environment, it may well just keep going bad, in the same (hopefully very short) amount of time, again and again and again. The system's not being (further) corrupted, but it's not getting anything done either. One of the Mars landers went silent for a couple of days. Turns out it was a self-detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well. On airliners, the self-detected faults trigger a dedicated circuit that disables the faulty computer and engages the backup. The last, last, last thing you want the autopilot on an airliner to do is execute a return 0; some programmer threw in to shut the compiler up. An exception thrown, shutting down the autopilot, engaging the backup, and notifying the pilot is what you'd much rather happen. It's clear, or so it seems to me, that this issue, at least as far as the strictures of D are concerned, is a balance between the likelihoods of: 1. producing a non-violating program, and 2. preventing a violating program from continuing its execution and, therefore, potentially wrecking a system. There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug. You seem to be of the opinion that the current situation of missing return/case handling (MRCH) minimises the likelihood of 2. I agree that it does so. 
However, contrarily, I assert that D's MRCH minimises the likelihood of producing a non-violating program in the first place. The reasons are obvious, so I'll not go into them. (If anyone cares to disagree, I ask you to write a non-trivial C++ program in a hurry, disable *all* warnings, and go straight to production with it.) Walter, I think that you've hung D on the petard of 'absolutism in the name of simplicity', on this and other issues. For good reasons, you won't conscience warnings, or pragmas, or even switch/function decorator keywords (e.g. "int allcases func(int i) { if (i < 0) return -1; }"). Indeed, as I think most participants will acknowledge, there are good reasons for all the decisions made for D thus far. But there are also good reasons against most/all of those decisions. (Except for slices. Slices are *the best thing* ever, and coupled with auto+GC, will eventually stand D out from all other mainstream languages.<G>). Jan Knepper came up with the slicing idea. Sheer genius! Software engineering hasn't yet found a perfect language. D is not perfect, and it'd be surprising to hear anyone here say that it is. That being the case, how can the policy of absolutism be deemed a sensible one? Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-) It cannot be sanely argued that throwing on missing returns is a perfect solution, any more than it can be argued that compiler errors on missing returns is. That being the case, why has D made manifest in its definition the stance that one of these positions is indeed perfect? I don't believe it is perfect. I believe it is the best balance of competing factors. I know the many dark roads that await once the tight control on the language is loosened, but the real world's already here, batting on the door. I have an open mind, and willing fingers to all kinds of languages. 
I like D a lot, and I want it to succeed a *very great deal*. But I really cannot imagine recommending use of D to my clients with these flaws of absolutism. (My hopeful guess for the future is that other compiler variants will arise that will, at least, allow warnings to detect such things at compile time, which may alter the commercial landscape markedly; D is, after all, full of a great many wonderful things.)I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.One last word: I recall a suggestion a year or so ago that would required the programmer to explicitly insert what is currently inserted implicitly. This would have the compiler report errors to me if I missed a return. It'd have the code throw errors to you if an unexpected code path occured. Other than screwing over people who prize typing one less line over robustness, what's the flaw? And yet it got no traction ....Essentially, that means requiring the programmer to insert: assert(0); return 0; It just seems that requiring some fixed boilerplate to be inserted means that the language should do that for you. After all, that's what computers are good at![My goodness! That was way longer than I wanted. I guess we'll still be arguing about this when the third edition of DPD's running hot through the presses ...]I don't expect we'll agree on this anytime soon.
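[Editorial note: the recalled proposal Walter summarises - writing out explicitly what the compiler currently inserts implicitly - would look roughly like this. The lookup function is an invented example; the boilerplate is exactly the assert(0); return 0; Walter names.]

```d
// Under the recalled proposal, the programmer writes the
// "this path should be unreachable" marker out by hand:
int lookup(int key)
{
    if (key == 0) return 100;
    if (key == 1) return 200;
    assert(0);  // halts at runtime if this path is ever taken
    return 0;   // satisfies the compiler's return-path check
}
// Omit the boilerplate and the compiler reports an error (the
// compile-time detection Matthew wants); reach it at runtime and the
// program halts loudly (the non-ignorable failure Walter wants).
```

Walter's objection is that since the boilerplate is fixed, the compiler may as well insert it itself; Matthew's counter is that forcing the programmer to write it makes a genuinely forgotten return visible at compile time.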
Feb 05 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1pe6$15ks$1 digitaldaemon.com...I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.) If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping. 1) make it impossible to ignore situations the programmer did not think of. So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program. Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be dealt with at runtime. Your response: exceptions are the best way to indicate runtime error. Come on. Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left. Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error. 2) the bias is to force bugs to show themselves in an obvious manner. So do I. But this statement is too bland to be worth anything. What is "obvious"? Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place? *Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late? As early as possible. 
Putting in the return 0; means the showing will be late.Disagree.Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile-time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment time) to find them, is simply crazy.I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. This is not an analogy, it's a mirror with some smoke machines behind it.But you're hamstringing 100% of all developers for the careless/unprofessional/inept of a few.I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong even though they know it is wrong and tell others it is wrong, is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do."Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.Will those handful % of better-employed-working-in-the-spam-industry find no other way to screw up their systems? 
Is this really going to answer all the issues attendant with a lack of skill/learning/professionalism/adequate quality mechanisms (incl, design reviews, code reviews, documentation, refactoring, unit testing, system testing, etc. etc. )?D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.All of this is of virtually no relevance to the topic under discussionBut I'm not going to argue point by point with your post, since you lost me at "Java's exceptions". The analogy is specious, and thus unconvincing. (Though I absolutely concur that they were a little tried 'good idea', like C++'s exception specifications or, in fear of drawing unwanted venom from my friends in the C++ firmament, export.)I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up on booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.There's some growing thought that even static type checking is an emperor without clothes, that dynamic type checking (like Python does) is more robust and more productive. I'm not at all convinced of that yet <g>, but it's fun seeing the conventional wisdom being challenged. It's good for all of us.I'm with you there.Nothing is *always* true. 
That's kind of one of the bases of my thesis. My position is simply that compile-time error detection is better than runtime error detection. In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error. Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth. So, again, my position is: Checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do. Except when that error can be detected at compile time. Please stop arguing against your demons on this, and address my point. If an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. I'm not one of 'em. Now you're absolutely correct that an invalid state throwing an exception, leading to application/system reset is a good thing. Absolutely. But let's be honest. All that achieves is to prevent a bad program from continuing to function once it is established to be bad. It doesn't make that program less bad, or help it run well again. Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct. Abso-bloody-lutely spot-on behaviour. What: you think I'm arguing that the lander should have all its checking done at compile time (as if that's even possible) and eschew runtime checking? 
At no time have I ever said such a thing. Depending on the vagaries of its operating environment, it may well just keep going bad, in the same (hopefully very short) amount of time, again and again and again. The system's not being (further) corrupted, but it's not getting anything done either. One of the Mars landers went silent for a couple of days. Turns out it was a self-detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well. On airliners, the self-detected faults trigger a dedicated circuit that disables the faulty computer and engages the backup. The last, last, last thing you want the autopilot on an airliner to do is execute a return 0; some programmer threw in to shut the compiler up. An exception thrown, shutting down the autopilot, engaging the backup, and notifying the pilot is what you'd much rather happen. Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have been addressing. Absolutely. But that is not, in and of itself, sufficient justification for ditching compile-time detection in favour of runtime detection. Yet again, we're having to swallow absolutism - dare I say dogma? - instead of coming up with a solution that handles all requirements to a healthy degree. It's clear, or so it seems to me, that this issue, at least as far as the strictures of D are concerned, is a balance between the likelihoods of: 1. producing a non-violating program, and 2. preventing a violating program from continuing its execution and, therefore, potentially wrecking a system. There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug. Truly. You seem to be of the opinion that the current situation of missing return/case handling (MRCH) minimises the likelihood of 2. I agree that it does so. 
However, contrarily, I assert that D's MRCH minimises the likelihood of producing a non-violating program in the first place. The reasons are obvious, so I'll not go into them. (If anyone cares to disagree, I ask you to write a non-trivial C++ program in a hurry, disable *all* warnings, and go straight to production with it.)

Walter, I think that you've hung D on the petard of 'absolutism in the name of simplicity', on this and other issues. For good reasons, you won't conscience warnings, or pragmas, or even switch/function decorator keywords (e.g. "int allcases func(int i) { if (i < 0) return -1; }"). Indeed, as I think most participants will acknowledge, there are good reasons for all the decisions made for D thus far. But there are also good reasons against most/all of those decisions. (Except for slices. Slices are *the best thing* ever, and coupled with auto+GC, will eventually make D stand out from all other mainstream languages. <G>)

Jan Knepper came up with the slicing idea. Sheer genius!

? If you're trying to say that I've implied that compile-time detection can handle everything, leaving nothing to be done at runtime, you're either kidding, sly, or mental. I'm assuming kidding, from the smiley, but it's a bit disingenuous at this level of the debate, don't you think?

Software engineering hasn't yet found a perfect language. D is not perfect, and it'd be surprising to hear anyone here say that it is. That being the case, how can the policy of absolutism be deemed a sensible one?

Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)

I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.

It cannot be sanely argued that throwing on missing returns is a perfect solution, any more than it can be argued that compiler errors on missing returns are.
That being the case, why has D made manifest in its definition the stance that one of these positions is indeed perfect?

I don't believe it is perfect. I believe it is the best balance of competing factors.

I'm not talking about lint. I confidently predict that the least badness that will happen will be the general use of non-standard compilers and the general un-use of DMD. But I realistically think that D'll splinter as a result of making the same kinds of mistakes, albeit for different reasons, as C++. :-(

I know the many dark roads that await once the tight control on the language is loosened, but the real world's already here, batting on the door. I have an open mind, and willing fingers to all kinds of languages. I like D a lot, and I want it to succeed a *very great deal*. But I really cannot imagine recommending use of D to my clients with these flaws of absolutism. (My hopeful guess for the future is that other compiler variants will arise that will, at least, allow warnings to detect such things at compile time, which may alter the commercial landscape markedly; D is, after all, full of a great many wonderful things.)

I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.

That is not the suggested syntax, at least not to the best of my recollection.

One last word: I recall a suggestion a year or so ago that would have required the programmer to explicitly insert what is currently inserted implicitly. This would have the compiler report errors to me if I missed a return. It'd have the code throw errors to you if an unexpected code path occurred. Other than screwing over people who prize typing one less line over robustness, what's the flaw?
And yet it got no traction....

Essentially, that means requiring the programmer to insert: assert(0); return 0;

It just seems that requiring some fixed boilerplate to be inserted means that the language should do that for you. After all, that's what computers are good at!

LOL! Well, there's no arguing with you there, eh? You don't want the compiler to automate the bits I want. I don't want it to automate the bits you want. I suggest a way to resolve this, by requiring more of the programmer - fancy that! - and you discount that because it's something the compiler should do. Just in case anyone's missed the extreme illogic of that position, I'll reiterate.

Camp A want behaviour X to be done automatically by the compiler. Camp B want behaviour Y to be done automatically by the compiler. X and Y are incompatible, when done automatically. By having Z done manually, X and Y are moot, and everything works well. (To the degree that D will, then, and only then, achieve resultant robustnesses undreamt of.) Walter reckons that Z should be done automatically by the compiler. Matthew auto-defolicalises and goes to wibble his frimble in the back drim-drim with the other nimpins.

Less insanely, I'm keen to hear if there's any on-point response to this?

Agreed.

[My goodness! That was way longer than I wanted. I guess we'll still be arguing about this when the third edition of DPD's running hot through the presses ...]

I don't expect we'll agree on this anytime soon.
Feb 05 2005
Matthew, this response makes it sound like you're ignoring Walter's primary argument, which you earlier stated you disagree with. Walter says: if it's compile time, programmers will patch it without thinking. That's bad. So let's use runtime. You say: Runtime checking is bad. Let's use compile time, that fixes everything! You and Derek, who posted earlier, have implied that the runtime checking can still supplement the compile time checking. Perhaps I've missed something crucial here, but I don't understand how - either there is a return there, or there isn't. Example: int main() { return 0; } I do not see any space for runtime checking there. None. Not a single bit. So, by that, we can logically come to the conclusion that if compile time checking is used runtime checking is impossible, because it makes no sense. Walter, to my reckoning, is saying that the problem is this: int main(char[][] args) { if (args[1] != "--help") { doStuff(); return 0; } else showHelp(); } Oops. Forgot the "return 1;". His argument is that, in a more complicated function (with many lines and possibly different return values...) it may be difficult to tell what should be returned here. Tell me, if you're working on a group project, using CVS or otherwise, and you are testing some code you've just added which you are about to check in... but someone else has checked in some code which no longer compiles because of said return warning - what is your instinct? To sit on it until the return is fixed? Maybe. Or, maybe you want to fix it. Being that you didn't write the code, you might say... well, it looks like if it gets to here it should return a 0. Maybe you're right. Maybe you're wrong. Maybe if you're wrong, the original author will notice and fix it. Maybe not. I hate maybes, they mean bugs. Now, I'm sure I'm misrepresenting you. We're all good patient programmers, and we'll wait for the guy on vacation who wrote this to come back and add his return. 
Then we'll all break his bones for checking in code that doesn't even compile. Here's another example. Someone might argue that the compiler should give errors/warnings for the following: if (true) 1; else if (var == 4) 2; Obviously, 2 will never happen. Unreachable code detected, yes? But what if it's this: if (true) //var == 3) 1; else if (var == 4) 2; Suddenly, the obviousness of this error is gone. It's no longer an error, it's testing. 2 isn't unreachable at all, it's only "commented out" so to speak! What about this...? version (1) 1; else version(2) 2; Is that an error? No else for the versions... shouldn't there (probably) be a static assert there or similar? Yes, maybe. Obviously that can't be relied on, because sometimes it won't be true. But, should you be forced to do this? version (1) 1; else version(2) 2; else 1 == 1; Okay. Let me reformat this example. Should you be forced to do this? int doIt(int var) { if (var == 1) return 1; else if (var == 2) return 2; else return 0; } Same thing. You'll say no, though. These are different. One's returning things, the other isn't, you'll say. -[Unknown]
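P.S. To be concrete about the static assert aside above: the defensive version block I had in mind would look something like this (a sketch only; the version identifiers V1/V2 and the declarations are made up for illustration):

```d
// Hypothetical version identifiers, invented for this sketch.
version (V1)
    const int mode = 1;
else version (V2)
    const int mode = 2;
else
    static assert(0); // fail the build if no expected version matched
```

As noted, that can't always be relied on - sometimes the fall-through really is legitimate - which is the same judgement call as the missing return.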
Feb 05 2005
Matthew, this response makes it sound like you're ignoring Walter's primary argument, which you earlier stated you disagree with.

Does it? How?

Walter says: if it's compile time, programmers will patch it without thinking. That's bad. So let's use runtime. You say: Runtime checking is bad. Let's use compile time, that fixes everything!

I didn't say that. You appear to have caught Walter's disease.
Feb 05 2005
Unknown W. Brackets wrote:

Matthew, this response makes it sound like you're ignoring Walter's primary argument, which you earlier stated you disagree with. Walter says: if it's compile time, programmers will patch it without thinking. That's bad. So let's use runtime. You say: Runtime checking is bad. Let's use compile time, that fixes everything!

I better stay out of this... but Matthew's last post did clarify that he was /not/ against runtime checking. He states that quite clearly. <Ducks away again> - John R.
Feb 05 2005
"Unknown W. Brackets" <unknown simplemachines.org> wrote in message news:cu25p2$1jbc$1 digitaldaemon.com...Walter says: if it's compile time, programmers will patch it without thinking. That's bad. So let's use runtime.That's essentially right. I'll add one more example to the ones you presented: int foo(Collection c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } } By the nature of the program I'm writing, "y" is guaranteed to be within c. Therefore, there is only one return from the function, and that is the one shown. But the compiler cannot verify this. You recommend that the compiler complain about it. I, the programmer, knows this can never happen, and I'm in a hurry with my mind on other things and I want to get it to compile and move on, so I write: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } return 0; } I'm not saying you would advocate "fixing" the code this way. I don't either. Nobody would. I am saying that this is often how real programmers will fix it. I know this because I see it done, time and again, in response to compilers that emit such error messages. This kind of code is a disaster waiting to happen. No compiler will detect it. It's hard to pick up on a code review. Testing isn't going to pick it up. It's an insidious, nasty kind of bug. It's root cause is not bad programmers, but a compiler error message that encourages writing bad code. Instead, having the compiler insert essentially an assert(0); where the missing return is means that if it isn't a bug, nothing happens, and everyone is happy. If it is a bug, the assert gets tripped, and the programmer *knows* it's a real bug that needs a real fix, and he won't be tempted to insert a return of an arbitrary value "because it'll never be executed anyway". This is the point I have consistently failed to make clear.
Feb 05 2005
"Walter" <newshound digitalmars.com> wrote in message news:cu3rt4$ra$1 digitaldaemon.com..."Unknown W. Brackets" <unknown simplemachines.org> wrote in message news:cu25p2$1jbc$1 digitaldaemon.com...This is total rubbish. A maintenance engineer is stymied by *both* forms, and confused contrarily: the first looks like a bug but may not be, the second is a bug but doesn't look like it. The only form that stands up to maintenance is something along the lines of what Derek's talking about: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } throw logic_error("This function has encountered a situation which contradicts its design and/or the design of the software within which it resides"); return 0; } This is what I also do in such cases, and I believe (and have witnessed) it being a widely practiced technique. Walter, you're just digging yourself in deeper. It's embarassing. It strongly gives the impression that you only work with yourself. You're keen to mould D with a view to catering for, or at least mitigating the actions of, the lowest common denominators of the programming gene pool. Yet you seem decidely uninterested in addressing the concerns of large scale and/or commercial and/or large-teams and/or long-lasting codebases. How can this attitude help D to prosper? One of the reviewers of Imperfect C++ made the sage comment that I was spending too much time "protecting from Machiavelli". He said that that was a quest without end, and he's spot on. Your measure adds an indeterminately timed exception fire, in the case that a programmer doesn't add a return 0. That's great, so far as it goes. But here's the fly in your soup: what's to stop them adding the return 0? The code's still wrong, but now it doesn't even have your backup plan active. Here's a thought. When people cotton on to this implicit behaviour in D, maybe there'll be a large-scale propagation of "Make sure you get all your returns in!" warnings. 
Do you have empirical evidence that there won't be a concomitant swell of crappy / neophyte programmers who will add a return X; at the end of every function by rote, to avoid the dreaded swipe of the indeterminate exception? Maybe you're going to actually exacerbate the problem you think you're countering!

Walter says: if it's compile time, programmers will patch it without thinking. That's bad. So let's use runtime.

That's essentially right. I'll add one more example to the ones you presented: int foo(Collection c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } } By the nature of the program I'm writing, "y" is guaranteed to be within c. Therefore, there is only one return from the function, and that is the one shown. But the compiler cannot verify this. You recommend that the compiler complain about it. I, the programmer, know this can never happen, and I'm in a hurry with my mind on other things and I want to get it to compile and move on, so I write: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } return 0; }

Instead, having the compiler insert essentially an assert(0); where the missing return is means that if it isn't a bug, nothing happens, and everyone is happy. If it is a bug, the assert gets tripped, and the programmer *knows* it's a real bug that needs a real fix, and he won't be tempted to insert a return of an arbitrary value "because it'll never be executed anyway". This is the point I have consistently failed to make clear.

Man, this is *so* frustrating. You obviously (now admittedly!) think that we're all just not getting your point. I get it. WE GET IT! I/we just think you're wrong. There's a problem with two opposing automatic ways of doing things. So the answer is to not have things automatic. I feel like Cassandra. Gah! I give up.
Feb 05 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3v58$3c3$1 digitaldaemon.com...A maintenance engineer is stymied by *both* forms, and confused contrarily: the first looks like a bug but may not be, the second is a bug but doesn't look like it. The only form that stands up to maintenance is something along the lines of what Derek's talking about: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } throw logic_error("This function has encountered a situation which contradicts its design and/or the design of the software within which it resides"); return 0; }From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null. I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.This is what I also do in such cases, and I believe (and have witnessed) it being a widely practiced technique.Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.You're keen to mould D with a view to catering for, or at least mitigating the actions of, the lowest common denominators of the programming gene pool.I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.Yet you seem decidely uninterested in addressing the concerns of large scale and/or commercial and/or large-teams and/or long-lasting codebases. How can this attitude help D to prosper?I have to disagree with this. 
Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. That you disagree with the efficacy of one of the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.

Your measure adds an indeterminately timed exception firing, in the case that a programmer doesn't add a return 0. That's great, so far as it goes. But here's the fly in your soup: what's to stop them adding the return 0?

Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does. I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit.

Let's put it this way; here are the choices (numbers pulled out of dimension X): 1) A bug-catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard-to-reproduce-and-find bug. 2) A bug-catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that, when it fails, fails cleanly, in an easy-to-reproduce, find and therefore fixable manner.

It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long-term success of a code base.
I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe that they overshadow everything else.
Feb 05 2005
Sorry, mate, I've given up - I'll just have to content myself with writing Imperfect D with all the good ammo you're providing - and am about to take the family out for some retail therapy. Ah, D.J.'s pesto, there's nothing like it..... once you've tasted it, anything that comes out of a bottle might as well be cat vomit.

As for our doomed debate, I'll leave this parting shot: you've ignored the two most salient points of the debate recently made, namely the effect that missing error harbinging will have on the mindset - will it cause people to (mis-)add more return 0's than they would have anyway? - and the issue of having the compiler require what Derek wisely suggests. Alas, it appears that you are wont to do so.

Dr Cassandra Bigboy

P.S. For all the people who've joined the NG since the middle of last year, and have not seen such stag-rutting battles between Walter and myself (and others), you should know that despite (what I believe) are his stunning misapprehensions, I have a higher (technical and good-egg-edness) regard for big-W than almost anyone I know, famous or just quietly-good-at-their-job. Maybe it's because of that that I find his wrongness so affronting? Kind of like finding out your mother farts. ;)

"Walter" <newshound digitalmars.com> wrote in message news:cu44i1$739$1 digitaldaemon.com...

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3v58$3c3$1 digitaldaemon.com...

A maintenance engineer is stymied by *both* forms, and confused contrarily: the first looks like a bug but may not be, the second is a bug but doesn't look like it.
The only form that stands up to maintenance is something along the lines of what Derek's talking about: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } throw logic_error("This function has encountered a situation which contradicts its design and/or the design of the software within which it resides"); return 0; }

From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null. I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.

This is what I also do in such cases, and I believe (and have witnessed) it being a widely practiced technique.

Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.

You're keen to mould D with a view to catering for, or at least mitigating the actions of, the lowest common denominators of the programming gene pool.

I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.

Yet you seem decidedly uninterested in addressing the concerns of large-scale and/or commercial and/or large-team and/or long-lasting codebases. How can this attitude help D to prosper?

I have to disagree with this. Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area.
That you disagree with the efficacy of one of the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.

Your measure adds an indeterminately timed exception firing, in the case that a programmer doesn't add a return 0. That's great, so far as it goes. But here's the fly in your soup: what's to stop them adding the return 0?

Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does. I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit.

Let's put it this way; here are the choices (numbers pulled out of dimension X): 1) A bug-catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard-to-reproduce-and-find bug. 2) A bug-catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that, when it fails, fails cleanly, in an easy-to-reproduce, find and therefore fixable manner.

It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long-term success of a code base. I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe that they overshadow everything else.
Feb 05 2005
Here's a suggestion that /might/ help: are you, Walter, familiar with AOP at all? If so, you might consider treating such things as part of "cross-cutting concerns", where a) a "point-cut" is declared, by the programmer, to add code to those non-void-returning functions which don't actually end with a return statement. b) the code generated at the end of a function would thus be an "advice". One which has been explicitly provided by the programmer, rather than by the compiler.

Here's a little blurb on AOP, via Google: http://www.onjava.com/pub/a/onjava/2004/01/14/aop.html

- Kris

In article <cu44i1$739$1 digitaldaemon.com>, Walter says...

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3v58$3c3$1 digitaldaemon.com...

A maintenance engineer is stymied by *both* forms, and confused contrarily: the first looks like a bug but may not be, the second is a bug but doesn't look like it. The only form that stands up to maintenance is something along the lines of what Derek's talking about: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } throw logic_error("This function has encountered a situation which contradicts its design and/or the design of the software within which it resides"); return 0; }

From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null.
I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.

This is what I also do in such cases, and I believe (and have witnessed) it being a widely practiced technique.

Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.

You're keen to mould D with a view to catering for, or at least mitigating the actions of, the lowest common denominators of the programming gene pool.

I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.

Yet you seem decidedly uninterested in addressing the concerns of large-scale and/or commercial and/or large-team and/or long-lasting codebases. How can this attitude help D to prosper?

I have to disagree with this. Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. That you disagree with the efficacy of one of the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.

Your measure adds an indeterminately timed exception firing, in the case that a programmer doesn't add a return 0. That's great, so far as it goes. But here's the fly in your soup: what's to stop them adding the return 0?

Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does.
I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit.

Let's put it this way; here are the choices (numbers pulled out of dimension X): 1) A bug-catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard-to-reproduce-and-find bug. 2) A bug-catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that, when it fails, fails cleanly, in an easy-to-reproduce, find and therefore fixable manner.

It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long-term success of a code base. I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe that they overshadow everything else.
Feb 05 2005
AOP is cool, I wish it was possible to use it in D.

On Sun, 6 Feb 2005 05:34:27 +0000 (UTC), Kris <Kris_member pathlink.com> wrote:

Here's a suggestion that /might/ help: are you, Walter, familiar with AOP at all? If so, you might consider treating such things as part of "cross-cutting concerns", where a) a "point-cut" is declared, by the programmer, to add code to those non-void-returning functions which don't actually end with a return statement. b) the code generated at the end of a function would thus be an "advice". One which has been explicitly provided by the programmer, rather than by the compiler.

Here's a little blurb on AOP, via Google: http://www.onjava.com/pub/a/onjava/2004/01/14/aop.html

- Kris

In article <cu44i1$739$1 digitaldaemon.com>, Walter says...

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3v58$3c3$1 digitaldaemon.com...

A maintenance engineer is stymied by *both* forms, and confused contrarily: the first looks like a bug but may not be, the second is a bug but doesn't look like it. The only form that stands up to maintenance is something along the lines of what Derek's talking about: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } throw logic_error("This function has encountered a situation which contradicts its design and/or the design of the software within which it resides"); return 0; }

From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null.
I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.

This is what I also do in such cases, and I believe (and have witnessed) it being a widely practiced technique.

Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.

You're keen to mould D with a view to catering for, or at least mitigating the actions of, the lowest common denominators of the programming gene pool.

I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.

Yet you seem decidedly uninterested in addressing the concerns of large-scale and/or commercial and/or large-team and/or long-lasting codebases. How can this attitude help D to prosper?

I have to disagree with this. Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. That you disagree with the efficacy of one of the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.

Your measure adds an indeterminately timed exception firing, in the case that a programmer doesn't add a return 0. That's great, so far as it goes. But here's the fly in your soup: what's to stop them adding the return 0?

Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does.
I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit. Let's put it this way, here are the choices (numbers pulled out of dimension X):

1) A bug catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard to reproduce & find bug.

2) A bug catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that when it fails, fails cleanly, in an easy to reproduce, find and therefore fixable manner.

It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long term success of a code base. I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe they overshadow everything else.
Feb 06 2005
"Regan Heath" <regan netwin.co.nz> wrote in message news:opslsuv5qc23k2f5 ally...AOP is cool, I wish it was possible to use it in D.I looked at Kris' reference, but AOP is one of those things I don't understand at all.
Feb 06 2005
On Sun, 6 Feb 2005 19:42:14 -0800, Walter <newshound digitalmars.com> wrote:"Regan Heath" <regan netwin.co.nz> wrote in message news:opslsuv5qc23k2f5 ally...I looked too, I found it less easy to follow than the article on Aspect-Oriented Programming that Christopher Diggins wrote in the August 2004 edition of Dr Dobbs Journal. A simple example of AOP is... you have a class Bob, you want to log calls to all its functions, dumping state etc. You write the code to do the logging and 'hook' it up to certain methods in Bob, but the important part is that it does not require changes to Bob, and the new code can be 'hook'ed up to another class in the same way. Three things are involved: the original class, the new code, and a 'pointcut' which defines the methods the new code affects, i.e. how to 'hook' it up. C/C++ achieves it using the preprocessor. I am not sure how Java does it. If you/we think about it a bit I'm sure we can come up with a syntax for D. Mixins are almost it, though what you need is a way of defining where the mixins go without actually modifying the original class. See also: http://www.aspectc.org/ ReganAOP is cool, I wish it was possible to use it in D.I looked at Kris' reference, but AOP is one of those things I don't understand at all.
Feb 06 2005
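Regan's Bob-and-logging description above maps fairly directly onto any language with first-class functions. Here is a minimal sketch in Python (rather than D, since the D syntax under discussion is hypothetical); the names Bob, apply_aspect and the pointcut list are all invented for illustration:

```python
import functools

# The original class: its source is never edited to gain logging.
class Bob:
    def greet(self):
        return "hello"
    def leave(self):
        return "bye"

def apply_aspect(cls, pointcut, before):
    """Wrap each method named in 'pointcut' so the 'before' advice runs first."""
    for name in pointcut:
        original = getattr(cls, name)
        @functools.wraps(original)
        def wrapper(self, *args, _orig=original, _name=name, **kwargs):
            before(_name)                       # the advice
            return _orig(self, *args, **kwargs) # the original method body
        setattr(cls, name, wrapper)

log = []
# The pointcut: only 'greet' is hooked; the same call could hook any class.
apply_aspect(Bob, pointcut=["greet"],
             before=lambda n: log.append("Entering(" + n + ")"))

b = Bob()
b.greet()   # advice fires
b.leave()   # not in the pointcut, so no advice
```

The point being illustrated is exactly Regan's: Bob is unchanged, the logging code knows nothing about Bob, and only the pointcut ties them together.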
On Sun, 6 Feb 2005 19:42:14 -0800, Walter <newshound digitalmars.com> wrote:"Regan Heath" <regan netwin.co.nz> wrote in message news:opslsuv5qc23k2f5 ally...Ok, here is my attempt at a syntax for AOP for D. In reality I am no expert on AOP. Or on context free grammar.

//An attempt at a syntax for Aspect Oriented Programming in D.
//Based on the Aug 2004 DDJ article by Christopher Diggins
//-Regan Heath

//the original class, remains unmodified by this process.
class Original {
    this() { }
    ~this() { }
    void foo() { }
    void bar() { }
    void baz() { }
}

//the aspect: to be added to classes as defined in a pointcut.
aspect Logging {
    //joinpoint: code to be placed before the start of a function
    void in { writefln("Entering(",this.name,")"); }
    //joinpoint: code to be placed after the end of a function
    void out { writefln("Leaving(",this.name,")"); }
    //joinpoint: called before all other joinpoints, if it returns false the joinpoint is skipped
    bool query { }
    //joinpoint: executed on an exception
    void except { }
    //joinpoint: called after execution of the joinpoint even if 'query' returns false
    void finally { }
    //more joinpoints could be defined and added, requires more thought.
    //the definitions of the above are not set in stone, requires more thought.
}

//defines the new class, based on an existing class and 1 or more aspects
pointcut newOriginal, Original {
    Logging { this, foo, bar }
    //<other aspect name> {
    //    this, bar, baz
    //}
    //..etc..
}

//how to use the new class
void main() {
    newOriginal o = new newOriginal();
}

AOP is cool, I wish it was possible to use it in D.I looked at Kris' reference, but AOP is one of those things I don't understand at all.
Feb 07 2005
On Tue, 08 Feb 2005 12:48:45 +1300, Regan Heath <regan netwin.co.nz> wrote:On Sun, 6 Feb 2005 19:42:14 -0800, Walter <newshound digitalmars.com> wrote:Small addition added above, specifically: //defines the method to apply the aspect to Regan"Regan Heath" <regan netwin.co.nz> wrote in message news:opslsuv5qc23k2f5 ally...Ok, here is my attempt at a syntax for AOP for D. In reality I am no expert on AOP. Or on context free grammar. //An attempt at a syntax for Aspect Oriented Programming in D. //Based on the Aug 2004 DDJ article by Christopher Diggins //-Regan Heath //the original class, remains unmodified by this process. class Original { this() { } ~this() { } void foo() { } void bar() { } void baz() { } } //the aspect: to be added to classes as defined in a pointcut. aspect Logging { //joinpoint: code to be placed before the start of a function void in { writefln("Entering(",this.name,")"); } //joinpoint: code to be placed after the end of a function void out { writefln("Leaving(",this.name,")"); } //joinpoint: called before all other joinpoints, if it returns false the joinpoint is skipped bool query { } //joinpoint: executed on an exception void except { } //joinpoint: called after execution of the joinpoint even if 'query' returns false void finally { } //more joinpoints could be defined and added, requires more thought. //the definitions of the above are not set in stone, requires more thought. } //defines the new class, based on an existing class and 1 or more aspects pointcut newOriginal, Original { Logging { //defines the method to apply the aspect to this, foo, bar } //<other aspect name> { // this, bar, baz //} //..etc.. } //how to use the new class void main() { newOriginal o = new newOriginal(); }AOP is cool, I wish it was possible to use it in D.I looked at Kris' reference, but AOP is one of those things I don't understand at all.
Feb 07 2005
Thank-you. That actually does make sense. I can see now why it would be an interesting feature. I can only understand these things in terms of how they are implemented :-). So for AOP, what I see is being essentially a derived class, with the modified methods being created that are wrappers around the base class's methods. The aspect code is inserted into the wrappers.
Mar 02 2005
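Walter's "derived class with wrapper methods" reading can be sketched as follows (Python, with invented names; one plausible way a weaver could lower an aspect, not a description of any particular AOP tool):

```python
# The base class, untouched by the aspect.
class Renderer:
    def draw(self):
        return "frame"

calls = []

# What the weaver would conceptually emit: a derived class whose
# overridden method wraps the base-class method with the aspect code.
class TracedRenderer(Renderer):
    def draw(self):
        calls.append("before draw")   # advice inserted before the call
        result = super().draw()       # the original method body
        calls.append("after draw")    # advice inserted after the call
        return result

r = TracedRenderer()
r.draw()
```

The wrapper preserves the base method's behaviour and return value; only the before/after advice is added around it.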
On Wed, 2 Mar 2005 10:34:14 -0800, Walter <newshound digitalmars.com> wrote:Thank-you. That actually does make sense. I can see now why it would be an interesting feature. I can only understand these things in terms of how they are implemented :-). So for AOP, what I see is being essentially a derived class, with the modified methods being created that are wrappers around the base class's methods. The aspect code is inserted into the wrappers.Yes, that's essentially it. Regan
Mar 02 2005
In article <d051g0$fq8$1 digitaldaemon.com>, Walter says...Thank-you. That actually does make sense. I can see now why it would be an interesting feature. I can only understand these things in terms of how they are implemented :-). So for AOP, what I see is being essentially a derived class, with the modified methods being created that are wrappers around the base class's methods. The aspect code is inserted into the wrappers.Yes and no. As I understand it, the real power of AOP lies in its ability to cut across multiple, perhaps unrelated, classes, without a common base class and without multiple inheritance. It's not really class-oriented, since the methods involved are often identified using a limited regex form (select all 'put' methods across all classes, for example). As suggested, the additional code is wrapped around the methods.
Mar 02 2005
On Wed, 2 Mar 2005 23:39:21 +0000 (UTC), pandemic <pandemic_member pathlink.com> wrote:In article <d051g0$fq8$1 digitaldaemon.com>, Walter says...Well, true, technically. The way I see it, you're simply stating: take class A, add concerns C1,C2 to methods x, y, and z; call it C12_A. Take class B, add concerns C1,C3 to methods x and y; call it C13_B. It is similar to inheritance, as in, you could do it manually...

class A { void foo() {} }

class C12_A : A {       <- take class A, call it C12_A
    void foo() {
        ..concern..     <- add concern to method foo
        super.foo();
        ..~concern..
    }
}

but the idea is that it's done automatically, via some description/format, and can be done to any base class, not just A; that you can add several different concerns to a class; and that you can pick methods for each concern, which may differ from the picks for another concern. Or am I missing your point? I must admit my understanding of it comes from a couple of articles I've read and not much else. ReganThank-you. That actually does make sense. I can see now why it would be an interesting feature. I can only understand these things in terms of how they are implemented :-). So for AOP, what I see is being essentially a derived class, with the modified methods being created that are wrappers around the base class's methods. The aspect code is inserted into the wrappers.Yes and no. As I understand it, the real power of AOP lies in its ability to cut across multiple, perhaps unrelated, classes. Without a common base-class, and no multiple inheritance. It's not really class-oriented, since the methods involved are often identified using a limited regex form (select all 'put' methods across all classes, for example).
Mar 02 2005
"Regan Heath" <regan netwin.co.nz> wrote in message news:opsm1bmre123k2f5 ally...Well, true, technically. The way I see it, you're simply stating. Take class A. add concern C1,C2 to method x,y, and z. call it C12_A. Take class B. add concern C1,C3 to method x, and y. call it C13_B. It is similar to inheritance, as in, you could do it manually... class A { void foo() {} } class C12_A : A { <- take class A, call it C12_A void foo() { ..concern.. <- add concern to method foo super.foo(); ..~concern.. } }Before the topic gets dropped, I read about AOP a couple of years ago and forgot about it. (-: But there was a Java reference implementation which could be perused. Also, I *think* you can do cross-cuts on variables as well as functions. This may pose a problem for class T { int _a; } class B { public T t; } B = new B(); // special case for init of t B.t = something(); // check t again B.t._a = 3; // could even have a check on an int
Mar 04 2005
Hi, are you sure a new class gets defined? I thought the point was to add functionality to existing classes without actually modifying them directly. There are three issues that are addressed by AOP - scattering (similar code all over the place), tangling (same span of code doing more than one thing) and crosscutting (if I get it right, the problem of connecting modules that do completely different stuff, like a profiler and profilee). The typical case seems to be logging - you normally have to include logging code in all the classes you want to log (scattering+tangling), and you need to have a Logger class and pass it all around (crosscutting). This results in a large amount of code, and it's not even related to the original class functionality (an ImageProcessor should process images, not concern itself with logging). If you don't want to use Logger anymore, but a different class, you have to change all the classes that use it. Conversely, if you have AOP at hand, you can just write an aspect that takes care of logging without modifying the original classes' code. This is obviously more efficient, especially in the amount of code to be written, ease of turning the functionality on/off (you can just remove the aspect) and modifiability (is this a word? anyway, it's really easy to do something different with all the concerned classes). It's also a cleanly defined "link" between the logging part of your app and its other parts. Now, as far as compiling goes, what is done at compile time is that all aspects that are turned on are resolved and classes are compiled with their original code wrapped in aspect code. I'm really almost sure that you don't get a new class (it does have additional functionality, of course, but its name and position in class hierarchy are still exactly the same). For example, in AspectJ, you can attach code to reading/writing a field and calls of methods, ctors and exception handlers (you can match both before and after). 
So, for each of those, all matching aspects are identified and their code is inserted. xs0 Regan Heath wrote:On Wed, 2 Mar 2005 23:39:21 +0000 (UTC), pandemic <pandemic_member pathlink.com> wrote:In article <d051g0$fq8$1 digitaldaemon.com>, Walter says...Well, true, technically. The way I see it, you're simply stating. Take class A. add concern C1,C2 to method x,y, and z. call it C12_A. Take class B. add concern C1,C3 to method x, and y. call it C13_B. It is similar to inheritance, as in, you could do it manually... class A { void foo() {} } class C12_A : A { <- take class A, call it C12_A void foo() { ..concern.. <- add concern to method foo super.foo(); ..~concern.. } } but the idea is that it's done automatically, via some description/format and can be done to any base class, not just A, that you can add several different concerns to a class, that you can pick methods for each concern and they may differ to picks for another concern. Or am I missing your point? I must admit my understanding of it comes from a couple of articles I've read and not much else. ReganThank-you. That actually does make sense. I can see now why it would be an interesting feature. I can only understand these things in terms of how they are implemented :-). So for AOP, what I see is being essentially a derived class, with the modified methods being created that are wrappers around the base class's methods. The aspect code is inserted into the wrappers.Yes and no. As I understand it, the real power of AOP lies in its ability to cut across multiple, perhaps unrelated, classes. Without a common base-class, and no multiple inheritance. It's not really class-oriented, since the methods involved are often identifed using a limited regex form (select all 'put' methods across all classes, for example).
Mar 05 2005
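xs0's model — weaving modifies the class itself, with no renamed copy appearing — can be sketched like this (a Python illustration with invented names, standing in for what an AspectJ-style weaver does at compile time):

```python
events = []

class ImageProcessor:
    def process(self):
        return "image"

p = ImageProcessor()          # an instance created before weaving

# "Weaving": the original method is wrapped in place. The class keeps
# its name and its position in the hierarchy; no second class appears.
_original = ImageProcessor.process
def _process_logged(self):
    events.append("Entering(process)")   # the aspect's advice
    return _original(self)               # the original code, unchanged
ImageProcessor.process = _process_logged

p.process()   # even the pre-existing instance picks up the advice
```

Note the contrast with the derived-class picture: here every reference to ImageProcessor, old or new, sees the woven behaviour, which is exactly the property xs0 is arguing for.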
On Sat, 05 Mar 2005 10:45:31 +0100, xs0 <xs0 xs0.com> wrote:are you sure a new class gets defined?Yes, though it's defined in a different manner to inheritance.I though the point was to add functionality to existing classes without actually modifying them directly.Correct.There are three issues that are addressed by AOP - scattering (similar code all over the place), tangling (same span of code doing more than one thing) and crosscutting (if I get it right, the problem of connecting modules that do completely different stuff, like a profiler and profilee). The typical case seems to be logging - you normally have to include logging code in all the classes you want to log (scattering+tangling), and you need to have a Logger class and pass it all around (crosscutting). This results in a large amount of code, and it's not even related to the original class functionality (an ImageProcessor should process images, not concern itself with logging). If you don't want to use Logger anymore, but a different class, you have to change all the classes that use it. Conversely, if you have AOP at hand, you can just write an aspect that takes care of logging without modifying the original classes' code. This is obviously more efficient, especially in the amount of code to be written, ease of turning the functionality on/off (you can just remove the aspect) and modifyability (is this a word? anyway, it's really easy to do something different with all the concerned classes). 
It's also a cleanly defined "link" between the logging part of your app and its other parts.I agree with the description(s) above.Now, as far as compiling goes, what is done at compile time is that all aspects that are turned on are resolved and classes are compiled with their original code wrapped in aspect code.That seems to me to be how Walter sees it working (from his reply earlier).I'm really almost sure that you don't get a new class (it does have additional functionality, of course, but its name and position in class hierarchy are still exactly the same).You need a new name to refer to the old class + new functionality. The old class + new functionality is a new 'thing' which sits somewhere else in the hierarchy, it's not identical to the old class. IMO AOP is just a different form of code sharing, like a mixin, and the result is a new class. Regan
Mar 06 2005
Hopefully we won't do the same thing as the last time :)If you look at http://dev.eclipse.org/viewcvs/indextech.cgi/~checkout~/aspectj-home/doc/progguide/examples-development.html you'll see that no new class names are produced. It wouldn't make sense either - the point, if you want logging, for example, is to have exactly the same code (including class hierarchy) for the logged part, and turn on logging from outside.. Or, to put it another way, there is a "new" class, but it has the same name as the "old" class and the old class doesn't exist anymore.. Or, yet another way, with AOP, the class is no longer just itself, but itself+all its aspects.. xs0I'm really almost sure that you don't get a new class (it does have additional functionality, of course, but its name and position in class hierarchy are still exactly the same).You need a new name to refer to the old class + new functionality. The old class + new functionality is a new 'thing' which sits somewhere else in the heirarchy, it's not identical to the old class. IMO AOP is just a different form of code sharing, like a mixin and the result is a new class.
Mar 07 2005
On Mon, 07 Mar 2005 09:29:33 +0100, xs0 <xs0 xs0.com> wrote:I see. I don't like it.If you look at http://dev.eclipse.org/viewcvs/indextech.cgi/~checkout~/aspectj-home/doc/progguide/examples-development.html you'll see that no new class names are produced. It wouldn't make sense either - the point, if you want logging, for example, is to have exactly the same code (including class hierarchy) for the logged part, and turn on logging from outside..I'm really almost sure that you don't get a new class (it does have additional functionality, of course, but its name and position in class hierarchy are still exactly the same).You need a new name to refer to the old class + new functionality. The old class + new functionality is a new 'thing' which sits somewhere else in the heirarchy, it's not identical to the old class. IMO AOP is just a different form of code sharing, like a mixin and the result is a new class.Or, to put it another way, there is a "new" class, but it has the same name as the "old" class and the old class doesn't exist anymore.. Or, yet another way, with AOP, the class is no longer just itself, but itself+all its aspects..The very reason I don't like it. What if I want to use the old class and the new class in the same application? Regan
Mar 07 2005
Well, as far as I know, you can't - the aspect is an integral part of the class (or method), just like class members are, it's just defined elsewhere because it may not be what the class is about. For example, the purpose of a Shape class is to have something that can draw itself, and not also to perform timing measurements, so it makes sense to put such profiling code outside in an aspect, where it can be turned off when no longer needed. It can also be reused for all other cases where you want to measure performance, because it is not tied in to Shape. Of course, you can reuse such code as is by putting it inside some class (Profiler), but you need to change your classes to use it, and then again to not use it anymore, when you're done. If you take a look at what the typical aspects are (tracing, logging, change monitoring, etc.), it would seem you don't use them in cases where you don't want the new behavior. Like, if you want to log all calls to some method (or whatever), you can't also want to not log some of them (of course, the logging code can choose to not do anything, but it's still "turned on" all the time). You do have the option of defining pointcuts for just classes that are of interest, and, of course, if you want to control this inside the classes themselves, you don't need aspects, I guess.. xs0Or, to put it another way, there is a "new" class, but it has the same name as the "old" class and the old class doesn't exist anymore.. Or, yet another way, with AOP, the class is no longer just itself, but itself+all its aspects..The very reason I don't like it. What if I want to use the old class and the new class in the same application?
Mar 07 2005
On Mon, 07 Mar 2005 11:02:58 +0100, xs0 <xs0 xs0.com> wrote:Using my concept, I can. What do you mean by "as far as I know"? Are you talking about an existing implementation? If so, is it the "JAspect" one, and if so, why do we have to do it that way?Well, as far as I know, you can'tOr, to put it another way, there is a "new" class, but it has the same name as the "old" class and the old class doesn't exist anymore.. Or, yet another way, with AOP, the class is no longer just itself, but itself+all its aspects..The very reason I don't like it. What if I want to use the old class and the new class in the same application?If you take a look at what the typical aspects are (tracing, logging, change monitoring, etc.), it would seem you don't use them in cases where you don't want the new behavior.Well obviously you won't use them if you don't want them. My point is that it's entirely possible I want to use them on a class at one point in my code and not on that same class at another point. Eg. Locking, I need to lock access to the members of a class, but only if it's being shared between threads.Like, if you want to log all calls to some method (or whatever), you can't also want to not log some of themYes, I can. It's called targeting a specific instance. It would be great for debugging.(of course, the logging code can choose to not do anything, but it's still "turned on" all the time).There was a facility in the AOP article I read to do this. The aspect was included, but it decided not to do its thing some of the time.You do have the option of defining pointcuts for just classes that are of interest, and, of course, if you want to control this inside the classes themselves, you don't need aspects, I guess..You never want to "control this inside the classes themselves"; that would defeat the purpose of AOP. 
However, you might want to enable or disable logging with a button, that button would flip a variable, that variable would be checked by the AOP code (not the class itself), it's the feature I described above. In addition to this feature, you might want to apply the AOP code to an instance of a class and not another. I see no point in limiting AOP in the ways you describe. Regan
Mar 07 2005
First, I'd like to say that I responded to your claim that an aspect causes a new class to be produced (with the original one still available), which I disagreed with, so please, let's keep the discussion focused on that.No, I was talking in general. I think that if you want two versions of the class, you need to declare the new class (if for no other reason, to give it a name), which is very different than declaring an aspect (which just modifies the existing class). If you do declare a new class, you can of course implement the new functionality using an aspect that matches the new class but not the old class. Or, you can just use a mixin..Well, as far as I know, you can'tUsing my concept, I can. What do you mean by "as far as I know" are you talking about an existing implementation, if so, is it the "JAspect" one, if so, why do we have to do it that way?Well, then you can implement the aspect in a way that supports this. The point is that the aspect code still gets executed all the time, even though it can obviously do nothing if it is written that way. That does not require two different classes.If you take a look at what the typical aspects are (tracing, logging, change monitoring, etc.), it would seem you don't use them in cases where you don't want the new behavior.Well obviously you won't use them if you don't want them. My point is that it's entirely possible I want to use them on a class at one point in my code and not on that same class at another point.Eg. Locking, I need to lock access to the members of a class, but only if it's being shared between threads.I don't see how you can do this by producing a new class (how will you switch implementation in runtime? except by creating a new instance, or by using a proxy, but I guess that causes more trouble than it solves), while I can see how you could do this with a single class (e.g. if (shared) { mutex.acquire(); }). 
So, please provide an example how you would do this.That's not logical - if you want to log all calls, you want to log all calls, not just some :) On a more serious note, you can easily implement what you want by having the aspect check some variable to see whether the instance is the one you're interested in. However, the aspect will still get executed on all calls to the method in all instances.Like, if you want to log all calls to some method (or whatever), you can't also want to not log some of themYes, I can. It's called targetting a specific instance. It would be great for debugging.Can I have the link to the article?(of course, the logging code can choose to not do anything, but it's still "turned on" all the time).There was a facility in the AOP article I read to do this. The aspect was included, but it decided not to do it's thing some of the time.You do have the option of defining pointcuts for just classes that are of interest, and, of course, if you want to control this inside the classes themselves, you don't need aspects, I guess..However, you might want to enable or disable logging with a button, that button would flip a variable, that variable would be checked by the AOP code (not the class itself), it's the feature I described above.Sure, AOP code can check a variable, but I don't see how this requires a new class to be produced.In addition to this feature, you might want to apply the AOP code to an instance of a class and not another. I see no point in limiting AOP in the ways you describe.I would argue that your approach is the one that is limiting. Consider this: class A { } aspect B { // match A and do something with it } A obj=new A(); Now, if the aspect produces a new class (even if it is named A_B (or whatever) automatically), you need to change the last line to A obj=new A_B(); to use the aspect. I don't see how that could be useful (I mean, it's far easier to just modify A than to modify all references to A). 
You seem to see aspects as something similar to mixins, but they are actually quite different, even though they superficially seem to do the same thing - include some code somewhere. Mixins' primary purpose is to reuse a piece of code instead of typing it over and over again. Aspects' primary purpose is to connect two parts of an app that do not have much in common, in a way that is clean and doesn't require those parts to handle each other. For example, you can have a rendering module and a profiling module. It does not make sense for the rendering module to call the profiling module (which is the non-AOP way), because the rendering module should not concern itself with profiling. Likewise, the profiling module should not need to know that there exists a rendering module, because its purpose is to measure time (or memory or whatnot). So, the solution AOP provides is to have those two modules completely unaware of each other, and the only thing that provides profiling of rendering is the aspect. The benefits are obvious - in non-AOP code, you will need to have every rendering class you want to profile be aware of a profiler, you will need to implement methods to set the profiler that is used (and you will also need to set it somewhere), each draw() method will need to call functions of the profiler; when you decide you no longer need the profiling code, you will have to manually delete it from everywhere (or set the profiler to null, but that will require a bunch of null-checks slowing the thing down). If you decide to use a completely different profiler (i.e. a non-compatible class), you will again have to manually change all references to the new one, possibly also changing which methods get called and in what order. If you use AOP, you avoid all that. xs0
Mar 07 2005
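xs0's rendering/profiling example is worth making concrete. A sketch (Python, with hypothetical module/class names): neither class knows about the other, and only the aspect ties them together.

```python
import time

# The "rendering module": knows nothing about profiling.
class SceneRenderer:
    def draw(self):
        return "frame"

# The "profiling module": knows nothing about rendering.
class Profiler:
    def __init__(self):
        self.samples = []
    def record(self, name, seconds):
        self.samples.append((name, seconds))

# The aspect: the only piece aware of both sides. Removing these few
# lines removes profiling entirely, with no edits to either module.
def profile_method(cls, name, prof):
    original = getattr(cls, name)
    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        try:
            return original(self, *args, **kwargs)
        finally:
            prof.record(name, time.perf_counter() - start)
    setattr(cls, name, wrapper)

profiler = Profiler()
profile_method(SceneRenderer, "draw", profiler)
SceneRenderer().draw()
```

Swapping in a different profiler means changing only the aspect, which is the maintainability win xs0 describes above.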
On Mon, 07 Mar 2005 14:16:18 +0100, xs0 <xs0 xs0.com> wrote:Nope. IMO adding aspects to a class defines a new class.No, I was talking in general. I think that if you want two versions of the class, you need to declare the new class (if for no other reason, to give it a name), which is very different than declaring an aspect (which just modifies the existing class).Well, as far as I know, you can'tUsing my concept, I can. What do you mean by "as far as I know" are you talking about an existing implementation, if so, is it the "JAspect" one, if so, why do we have to do it that way?This is less efficient.Well, then you can implement the aspect in a way that supports this. The point is that the aspect code still gets executed all the time, even though it can obviously do nothing if it is written that way. That does not require two different classes.If you take a look at what the typical aspects are (tracing, logging, change monitoring, etc.), it would seem you don't use them in cases where you don't want the new behavior.Well obviously you won't use them if you don't want them. My point is that it's entirely possible I want to use them on a class at one point in my code and not on that same class at another point.You won't. You simply need a locked version of class Foo at one point, and not at another.Eg. Locking, I need to lock access to the members of a class, but only if it's being shared between threads.I don't see how you can do this by producing a new class (how will you switch implementation in runtime?except by creating a new instanceExactly. You create a LockedFoo when you need a shared one, and a Foo when you don't (see below)., or by using a proxy, but I guess that causes more trouble than it solves), while I can see how you could do this with a single class (e.g. if (shared) { mutex.acquire(); }). 
So, instead of "if (shared) mutex.acquire();" we just apply the locked aspect to the class. E.g.

class Foo { void baz() {} }

<.. AOP definition defining locked version of Foo ..> LockedFoo;

LockedFoo a;
static this() { a = new LockedFoo(); }

void main() {
    Foo b;
    ..create threads, threads share 'a'..
    b = new Foo();  <- nothing but main accesses 'b'
    ..use b..
    a.baz();        <- aspect calls mutex.acquire();
}

It's perfectly logical. I didn't say I wanted to "log all calls". I said "targeting a specific instance", in other words I wanted to "log calls for specific instances", not "all calls".That's not logical - if you want to log all calls, you want to log all calls, not just some :)Like, if you want to log all calls to some method (or whatever), you can't also want to not log some of themYes, I can. It's called targeting a specific instance. It would be great for debugging.On a more serious note, you can easily implement what you want by having the aspect check some variable to see whether the instance is the one you're interested in. However, the aspect will still get executed on all calls to the method in all instances.I could, but that requires a global var, and is less efficient.It was in DrDobbs Journal. Written by "Christopher Diggin" (sp?) I cannot remember the issue (the mag is at work). I have posted the article info in another message to this NG, if you search you might find it.Can I have the link to the article?(of course, the logging code can choose to not do anything, but it's still "turned on" all the time).There was a facility in the AOP article I read to do this. The aspect was included, but it decided not to do its thing some of the time.It doesn't. This is another feature described in the article. 
It's for runtime enable/disable of an aspect.You do have the option of defining pointcuts for just classes that are of interest, and, of course, if you want to control this inside the classes themselves, you don't need aspects, I guess..However, you might want to enable or disable logging with a button, that button would flip a variable, that variable would be checked by the AOP code (not the class itself), it's the feature I described above.Sure, AOP code can check a variable, but I don't see how this requires a new class to be produced.no, A_B obj = new A_B();In addition to this feature, you might want to apply the AOP code to an instance of a class and not another. I see no point in limiting AOP in the ways you describe.I would argue that your approach is the one that is limiting. Consider this:

class A { }
aspect B { /* match A and do something with it */ }
A obj = new A();

Now, if the aspect produces a new class (even if it is named A_B (or whatever) automatically), you need to change the last line to A obj = new A_B();I don't see how that could be usefulIt's useful because you can _also_ say: A obj = new A(); at the same time, and use both the normal class and the class with aspects applied.(I mean, it's far easier to just modify A than to modify all references to A).True, which is why, if it's an aspect for debugging/profiling, one that would be enabled/disabled a lot and/or periodically you would use an alias, eg.

class NormalFoo {}
<.. aspect ..> LockedFoo;
alias NormalFoo Foo;
...
Foo f = new Foo();

just as we've been doing in C/C++ for years (with #define).You seem to see aspects as something similar to mixins, but they are actually quite different, even though they superficially seem to do the same thing - include some code somewhere. Mixins' primary purpose is to reuse a piece of code instead of typing it over and over again.
Aspects' primary purpose is to connect two parts of an app that do not have much in common, in a way that is clean and doesn't require those parts to handle each other.I understand your concept, I just don't think it makes for a better AOP implementation than my own. Yours appears (to me) to be less flexible and/or less efficient (due to inflexibility).For example, you can have a rendering module and a profiling module. It does not make sense for the rendering module to call the profiling module (which is the non-AOP way), because the rendering module should not concern itself with profiling. Likewise, the profiling module should not need to know that there exists a rendering module, because its purpose is to measure time (or memory or whatnot). So, the solution AOP provides is to have those two modules completely unaware of each other, and the only thing that provides profiling of rendering is the aspect. The benefits are obvious - in non-AOP code, you will need to have every rendering class you want to profile be aware of a profiler, you will need to implement methods to set the profiler that is used (and you will also need to set it somewhere), each draw() method will need to call functions of the profiler; when you decide you no longer need the profiling code, you will have to manually delete it from everywhere (or set the profiler to null, but that will require a bunch of null-checks slowing the thing down). If you decide to use a completely different profiler (i.e. a non-compatible class), you will again have to manually change all references to the new one, possibly also changing which methods get called and in what order. If you use AOP, you avoid all that.I agree with this example, it's a good description of where you'd use AOP. I still prefer my concept/method of implementing it. Regan
Mar 07 2005
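Regan's model above — applying an aspect produces a distinct, named class, leaving the original untouched — can be sketched concretely. This is a hypothetical illustration in Python (the thread's D snippets are pseudocode and no D aspect weaver existed at the time); `apply_aspect`, `locked_aspect` and the class names are all invented for the sketch:

```python
import threading

class Foo:
    def baz(self):
        return "baz ran"

def locked_aspect(method):
    """Advice: acquire a mutex around the wrapped method."""
    def advice(self, *args, **kwargs):
        with self._mutex:
            return method(self, *args, **kwargs)
    return advice

def apply_aspect(cls, aspect, name):
    """Weave `aspect` around every public method, producing a NEW class
    (Regan's model: the original class is left untouched)."""
    namespace = {"_mutex": threading.Lock()}  # simplified: one lock per woven class
    for attr, value in vars(cls).items():
        if callable(value) and not attr.startswith("_"):
            namespace[attr] = aspect(value)
    return type(name, (cls,), namespace)

LockedFoo = apply_aspect(Foo, locked_aspect, "LockedFoo")

a = LockedFoo()   # shared between threads: baz() takes the lock
b = Foo()         # used only by main: no aspect code runs at all
```

Both classes coexist, which is the property Regan argues for: `b.baz()` executes no advice whatsoever, while `a.baz()` is wrapped.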
Nope. IMO adding aspects to a class defines a new class.Well, I looked at several AOP languages, and you seem to be the only one that thinks aspects define a new class. If I'm wrong, please provide a reference (preferably on web this time).How? I'd say it's faster to check a var than to execute completely different code, because modern CPUs rely on cache so heavily, its far more efficient to stay within cache than to avoid two CPU instructions. That is even more true in the case you're arguing (tracking a single instance), because the branch predictor will be right most of the time, avoiding even the potential cost of conditional jump (i.e. pipeline flush).Well, then you can implement the aspect in a way that supports this. The point is that the aspect code still gets executed all the time, even though it can obviously do nothing if it is written that way. That does not require two different classes.This is less efficient.Why do you need an aspect for this? There is no cross-cutting concern and whatnot, if that is what you want to do..I don't see how you can do this by producing a new class (how will you switch implementation in runtime?You wont. You simply need a locked version of class Foo at one point, and not at another.How is that less of a change?Now, if the aspect produces a new class (even if it is named A_B (or whatever) automatically), you need to change the last line to A obj=new A_B();no, A_B obj = new A_B();But there is no point in using aspects if all you want is different versions of the same class. Or, as a question, why would you use an aspect in this case?I don't see how that could be usefulIt's useful because you can _also_ say: A obj = new A(); at the same time, and use both the normal class and the class with aspects applied.I agree with this example, it's a good description of where you'd use AOP. 
I still prefer my concept/method of implementing it.Well, it's a contradiction that you agree with what I said and also think that aspects should produce new classes. If an aspect produces a new class, you still have to manually change all references from OriginalClassName to AOPClassName (and back when you no longer want it), which is again far more work than just changing the original class, so rather pointless. Why would you do more work with same benefits (i.e. new functionality) and how is that better? xs0
Mar 07 2005
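xs0's single-class alternative — the advice is always woven in, and a runtime flag decides whether it does anything, as in his "if (shared) { mutex.acquire(); }" — would look roughly like this hypothetical Python sketch (names invented for illustration):

```python
import threading

class Foo:
    """One class for both uses; locking is controlled per instance."""
    def __init__(self, shared=False):
        self.shared = shared              # runtime switch instead of a second class
        self._mutex = threading.Lock()

    def baz(self):
        if self.shared:                   # the woven-in check runs on every call...
            with self._mutex:
                return self._baz_impl()
        return self._baz_impl()           # ...but is a no-op when disabled

    def _baz_impl(self):
        return "baz ran"

a = Foo(shared=True)   # instance handed to other threads
b = Foo()              # main-thread-only instance, same class
```

The cost of the disabled path is one flag test per call, which is the efficiency point the two posters go on to argue about.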
On Tue, 08 Mar 2005 02:07:28 +0100, xs0 <xs0 xs0.com> wrote:Sorry, no can do. The article is in DDJ, and you have to be a subscriber to read it, IIRC. I'm not saying "you're wrong". I'm saying I prefer my concept to yours (the one you're describing).Nope. IMO adding aspects to a class defines a new class.Well, I looked at several AOP languages, and you seem to be the only one that thinks aspects define a new class. If I'm wrong, please provide a reference (preferably on web this time).It's not "execute completely different code", it's "execute no code at all" as in, the class without the aspect applied. So, it's faster.How? I'd say it's faster to check a var than to execute completely different codeWell, then you can implement the aspect in a way that supports this. The point is that the aspect code still gets executed all the time, even though it can obviously do nothing if it is written that way. That does not require two different classes.This is less efficient., because modern CPUs rely on cache so heavily, its far more efficient to stay within cache than to avoid two CPU instructions. That is even more true in the case you're arguing (tracking a single instance), because the branch predictor will be right most of the time, avoiding even the potential cost of conditional jump (i.e. pipeline flush).I'm saying additional code will make it slower.How else can I apply a set of generic code to specific methods of any number of existing classes?Why do you need an aspect for this? There is no cross-cutting concern and whatnot, if that is what you want to do..I don't see how you can do this by producing a new class (how will you switch implementation in runtime?You wont. 
You simply need a locked version of class Foo at one point, and not at another.It's not, I was correcting a mistake.How is that less of a change?Now, if the aspect produces a new class (even if it is named A_B (or whatever) automatically), you need to change the last line to A obj=new A_B();no, A_B obj = new A_B();Yes there is, because there is no other generic way to do it.But there is no point in using aspects if all you want is different versions of the same class.I don't see how that could be usefulIt's useful because you can _also_ say: A obj = new A(); at the same time, and use both the normal class and the class with aspects applied.Or, as a question, why would you use an aspect in this case?Why not? It appears to be the best way to achieve what I want.No, it's not. I agreed with your description of a problem. A problem solved by AOP. There are other problems, also solved by AOP. I believe my concept solves them better than the one you're describing.I agree with this example, it's a good description of where you'd use AOP. I still prefer my concept/method of implementing it.Well, it's a contradiction that you agree with what I said and also think that aspects should produce new classes.If an aspect produces a new class, you still have to manually change all references from OriginalClassName to AOPClassName (and back when you no longer want it),No. I've already explained how that would be done. Using alias.which is again far more work than just changing the original class, so rather pointless. Why would you do more work with same benefits (i.e. new functionality) and how is that better?It's not more work. My way has more benefits i.e. is more flexible. That is why I prefer it. Regan
Mar 07 2005
It's not no code at all.. if you apply an aspect to a method/function (and all code is in a method or a function), there are then two functions, the original, and the original+aspect (and in the case the original does nothing, why is it there?)How? I'd say it's faster to check a var than to execute completely different codeIt's not "execute completely different code", it's "execute no code at all" as in, the class without the aspect applied. So, it's faster.I'm saying additional code will make it slower.Good for you.. I was wondering, however, what your arguments are? And, like explained above, there is actually more code in your case..How else can I apply a set of generic code to specific methods of any number of existing classes?Well, OK, that might be true, but let's compare: - in "my" version, you can easily define a new class that extends the original and have the aspect target just the new one. All you need to do is to define the new class, which takes a line of code. So, you have both versions, which you seem to want, while you can still use aspects to change existing classes in cases where you don't want new classes.. - in "your" version, new classes are always produced, which might be useful in some cases, but is completely useless when you don't want new classes, as there is no single-line-of-code "workaround" (unless you go and change all the rest of the code as well; again, this defeats the purpose) xs0
Mar 07 2005
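xs0's one-line counter — leave the original class alone and point the aspect at an empty subclass — might be sketched like this (a hypothetical Python rendering; the `trace` aspect and class names are invented):

```python
calls = []   # records which advised methods were entered

class Foo:
    def baz(self):
        return "plain"

class SharedFoo(Foo):
    """Empty subclass: its only job is to give the pointcut something to match."""
    pass

def trace(method):
    """Stand-in aspect: log entry, then run the original method."""
    def advice(self, *args, **kwargs):
        calls.append(method.__name__)
        return method(self, *args, **kwargs)
    return advice

# the "pointcut" targets SharedFoo only; plain Foo instances stay aspect-free
SharedFoo.baz = trace(Foo.baz)
```

One extra line of class definition buys both versions, which is the comparison xs0 is drawing against always-new-classes.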
On Tue, 08 Mar 2005 03:13:23 +0100, xs0 <xs0 xs0.com> wrote:Sorry, I meant to say "no additional code".It's not no code at all..How? I'd say it's faster to check a var than to execute completely different codeIt's not "execute completely different code", it's "execute no code at all" as in, the class without the aspect applied. So, it's faster.if you apply an aspect to a method/function (and all code is in a method or a function), there are then two functions, the original, and the original+aspectCorrect. If I don't need the "original+aspect" then being forced to use it, but skip it with a variable will be slower than "the original" without aspect applied.True.How else can I apply a set of generic code to specific methods of any number of existing classes?Well, OK, that might be true, but let's compare: - in "my" version, you can easily define a new class that extends the original and have the aspect target just the new one. All you need to do is to define the new class, which takes a line of code. So, you have both versions, which you seem to want, while you can still use aspects to change existing classes in cases where you don't want new classes..- in "your" version, new classes are always produced, which might be useful in some cases, but is completely useless when you don't want new classesTrue., as there is no single-line-of-code "workaround"There is a workaround, alias, it's a couple of lines, it's comparable to what is done in C/C++ for the same reason. I still prefer to create a new class. If, simply because when a class behaviour is modified I think the name should change to reflect that. It appears there is little or no function difference between our ideas, I simply prefer mine. Go figure. Regan
Mar 07 2005
On Tue, 08 Mar 2005 15:36:36 +1300, Regan Heath <regan netwin.co.nz> wrote:On Tue, 08 Mar 2005 03:13:23 +0100, xs0 <xs0 xs0.com> wrote:I can't seem to 'edit' with my client "Opera".. allow me to re-phrase the para above: I still prefer to create a new class. If, simply because when a class behaviour is modified I think the name should change to reflect that. It appears there is little or no functional difference between our ideas, I simply prefer mine. Go figure. ReganSorry, I meant to say "no additional code".It's not no code at all..How? I'd say it's faster to check a var than to execute completely different codeIt's not "execute completely different code", it's "execute no code at all" as in, the class without the aspect applied. So, it's faster.if you apply an aspect to a method/function (and all code is in a method or a function), there are then two functions, the original, and the original+aspectCorrect. If I don't need the "original+aspect" then being forced to use it, but skip it with a variable will be slower than "the original" without aspect applied.True.How else can I apply a set of generic code to specific methods of any number of existing classes?Well, OK, that might be true, but let's compare: - in "my" version, you can easily define a new class that extends the original and have the aspect target just the new one. All you need to do is to define the new class, which takes a line of code. So, you have both versions, which you seem to want, while you can still use aspects to change existing classes in cases where you don't want new classes..- in "your" version, new classes are always produced, which might be useful in some cases, but is completely useless when you don't want new classesTrue., as there is no single-line-of-code "workaround"There is a workaround, alias, it's a couple of lines, it's comparable to what is done in C/C++ for the same reason. I still prefer to create a new class.
If, simply because when a class behaviour is modified I think the name should change to reflect that. It appears there is little or no function difference between our ideas, I simply prefer mine. Go figure.
Mar 07 2005
I see no point in arguing this further. You're just making arbitrary unsubstantiated claims and/or saying that your preference is somehow a good argument in itself. Even when I said the whole world disagrees with you, the only thing you managed to respond with was that I'm not a DDJ subscriber.. xs0I still prefer to create a new class. If, simply because when a class behaviour is modified I think the name should change to reflect that. It appears there is little or no functional difference between our ideas, I simply prefer mine. Go figure. Regan
Mar 07 2005
On Tue, 08 Mar 2005 08:50:54 +0100, xs0 <xs0 xs0.com> wrote:I see no point in arguing this further.I agree.You're just making arbitrary unsubstantiated claims and/or say that your preference is somehow a good argument in itself. Even when I said the whole world disagrees with you, the only thing you managed to respond with was that I'm not a DDJ subscriber..*sigh* I just don't understand what I'm doing that makes you so hostile. Obviously it must be something *I'm doing* because "the whole world disagrees with [me]" Regan
Mar 08 2005
Disclaimer: this post is long and meant for Regan, so it is probably not worth your time reading it.I'm not trying to be hostile, perhaps that is the result of my limited knowledge of english or something. But since you asked why you [annoy] me (in random order): - you ignore half of what other people write (e.g. I said something about CPU cache and how checking a flag could actually be faster, you just said "no, it's slower" without even considering _why_ I said checking a flag _could_ be faster) - when you misread something, you'll tag the other person as basically stupid without considering that you may be the one that made the mistake (e.g. in the thread on stable functions you misunderstood the comment on caching and suggested that the poster is proposing some bizarre global caching scheme) - you cling to your ideas like it was a matter of life and death (e.g. in the opCast thread, even after several people, including me, said that using cast to select a method is ridiculous, you still went on and on how it is something natural; if it was the natural thing to do, there would obviously be no disagreement) - you use a type of argument, but don't allow others to use that same type of argument (e.g., again in the opCast thread, the whole Brad is with me/Greg is with you thing) - you never admit you're wrong (e.g. I said "if you want to log all calls, you can't also want to not log some calls", and you said you can; that's simply a logical fallacy, but you failed to admit even something that simple) - even though you're quick to point out that other people use their preference as arguments, _your_ own preference is often the only argument you have (e.g. 
"I'm saying I prefer my concept to yours" without any argumentation; if you do manage to say something like "I prefer it because it is more flexible", you totally fail to argue that it is indeed more flexible, at least in my opinion) - you fail to provide counterarguments in most cases, and just say something arbitrary. Our typical conversation goes like

me: A
you: B
me: ~B, because C, D, E
you: B
me: ~B, because D, F, G
you: B, isn't it obvious?

This thread was also going this exact same way, so I decided to drop it, because I don't feel any of us is gaining something from it. - you take things out of context way too often (e.g. "the whole world disagrees with you"; the point of that sentence was that you fail to counterargue and I exaggerated a bit to make that point clearer; you sliced it and took it out of context (which naturally completely changes its meaning) and again failed to counterargue (which you could do by showing that you do indeed counterargue)) There you have it, it got much longer than I planned, but I tried to argue that you do indeed do those things that bother me :) xs0
Mar 08 2005
On Tue, 08 Mar 2005 12:05:09 +0100, xs0 <xs0 xs0.com> wrote:I'm not trying to be hostile, perhaps that is the result of my limited knowledge of english or something. But since you asked why you [annoy] me (in random order): - you ignore half of what other people write (e.g. I said something about CPU cache and how checking a flag could actually be faster, you just said "no, it's slower" without even considering _why_ I said checking a flag _could_ be faster)The reason I didn't address the cache comment is because you misunderstood what I was trying to say, here is the thread: <quote>Correct. If I don't need the "original+aspect" then being forced to use it, but skip it with a variable will be slower than "the original" without aspect applied.It's not no code at all.. if you apply an aspect to a method/function (and all code is in a method or a function), there are then two functions, the original, and the original+aspect (and in the case the original does nothing, why is it there?)It's not "execute completely different code", it's "execute no code at all" as in, the class without the aspect applied. So, it's faster.How? I'd say it's faster to check a var than to execute completely different code, because modern CPUs rely on cache so heavily, its far more efficient to stay within cache than to avoid two CPU instructions.That is even more true in the case you're arguing (tracking a single instance), because the branch predictor will be right most of the time, avoiding even the potential cost of conditional jump (i.e. pipeline flush).Well, then you can implement the aspect in a way that supports this. The point is that the aspect code still gets executed all the time, even though it can obviously do nothing if it is written that way. That does not require two different classes.This is less efficient.</quote> Caching didn't apply to what I was saying.- when you misread something, you'll tag the other person as basically stupid without considering that you may be the one that made the mistake (e.g. in the thread on stable functions you misunderstood the comment on caching and suggested that the poster is proposing some bizarre global caching scheme)FYI: I don't "tag" people as anything. In that particular example the OP suggested a compile time optimisation, I amended their definition of "stable functions" to include "known at compile time", they agreed. <quote "martin">Exactly. </quote> Ilya disagreed "No need to limit it to compile-time known arguments. ... " we discussed it, Sebastian joined in, I am still unclear exactly what he was suggesting. I would still like to know.Instead I suggest the concept of "stable functions". A class of functions (and methods), that have no side effects and are guaranteed to generate the same result given the same inputknown at compile time.- you cling to your ideas like it was a matter of life and death (e.g. in the opCast thread, even after several people, including me, said that using cast to select a method is ridiculous, you still went on and on how it is something natural; if it was the natural thing to do, there would obviously be no disagreement)I will argue my own point of view up until I convince you, you convince me, or we agree to go our separate ways. It appears (to me) you do the same thing. Yes, some people disagreed, you, and Brad. Some people also agreed. <quote "georg"> If we had overloading on return type, then in some situations we'd want some way to choose which return type to use. Using cast for this would seem natural. </quote> <quote "derek"> To counter this, one could make the rule that every call to a function must either assign the result or indicate to the compiler which return type is being ignored/required.
This would help make programs more robust and help readers know the coder's intentions better. For example...

cast(int)foo('x');           // Call the 'int' version and ignore the result.
bar( cast(real)foo('y') );   // Call the 'real' version of foo and bar.

</quote>- you use a type of argument, but don't allow others to use that same type of argument (e.g., again in the opCast thread, the whole Brad is with me/Greg is with you thing)That was a bad comment on my part, "taking sides" should not happen in a NG. Sorry.- you never admit you're wrong (e.g. I said "if you want to log all calls, you can't also want to not log some calls", and you said you can; that's simply a logical fallacy, but you failed to admit even something that simple)You misrepresented my argument (and you're doing it again), that is a logical fallacy. <quote "regan"> The very reason I don't like it. What if I want to use the old class and the new class in the same application? </quote> To which you replied with a very long paragraph, which I won't quote in its entirety, the part in question read: <quote> Like, if you want to log all calls to some method (or whatever), you can't also want to not log some of them </quote> You brought up logging all calls, not I. You misunderstood my comment.- even though you're quick to point out that other people use their preference as arguments, _your_ own preference is often the only argument you have (e.g. "I'm saying I prefer my concept to yours" without any argumentation; if you do manage to say something like "I prefer it because it is more flexible", you totally fail to argue that it is indeed more flexible, at least in my opinion)The entire thread was my argument as to why I preferred my idea. As it turned out, our ideas were almost functionally identical. But, they were different and I preferred the tradeoffs of my idea to the tradeoffs of yours.
Those statements were reflections of that fact.- you fail to provide counterarguments in most cases, and just say something arbitrary. Our typical conversation goes like

me: A
you: B
me: ~B, because C, D, E
you: B
me: ~B, because D, F, G
you: B, isn't it obvious?

This thread was also going this exact same way, so I decided to drop it, because I don't feel any of us is gaining something from it.Please post an example of this. I don't believe it has ever occurred as I intentionally make a point to address every argument someone makes (the exception being when I got "fed up" halfway thru a reply to you).- you take things out of context way too often (e.g. "the whole world disagrees with you"; the point of that sentence was that you fail to counterargue and I exaggerated a bit to make that point clearer;You should have said "you fail to counterargue". Instead your statement could not be proven and was simply inflammatory.you sliced it and took it out of context (which naturally completely changes its meaning) and again failed to counterargue (which you could do by showing that you do indeed counterargue))What is the point in arguing with a statement which cannot be proven?There you have it, it got much longer than I planned, but I tried to argue that you do indeed do those things that bother me :)I honestly believe that a lot of the 'problems' we seem to have with each other stem from misunderstanding. You've stated that you have a "limited knowledge of english", so I will take extra care to be as clear as possible in future discussions. For the record I can only speak English, I have respect for anyone who is multi-lingual. Regan
Mar 09 2005
I'll reply to OT stuff via e-mail later, as it probably is of no interest to anybody but us..The reason I didn't address the cache comment is because you missunderstood what I was trying to say, here is the thread: [snip quotes] caching didn't apply to what I was saying.Yes it did. You're suggesting that there exist two methods (actually, two entire classes), one without the aspect code, the other one with aspect code. Code also occupies cache. If the compiled original method is 1000 bytes long, and the new method is 1200 bytes, they take 2200 bytes of cache. If you just have one version that checks a flag, it's like 1210 bytes (including the flag). Considering that L1 cache is usually really small (like 16K for code and 16K for data), that can be a significant difference. I'm not saying it's always the case that it's better to check flags than to have two methods, I'm just saying it can be faster in some cases. Not to even mention how much more flexible a flag is than conditionally doing something with two separate classes.. If you test this with really simple/short functions (I did test), flag checking is indeed slower, because you'll have everything in cache anyway (although even such a simple thing as "if (flag) a++; else a+=2" is only like 2% slower compared to having two methods that "a++" or "a+=2"), but in "real" code, flag checking may be faster. Efficiency (as in speed) in modern systems is really not that simple anymore. I read an article the other day on real-time ray tracing, and the two things that provided the biggest speed gains were using SSE instructions (because they can work on more than one data at a time) and a cache-friendly layout of data structures. It was faster to unconditionally do 4 calculations than to conditionally do one. It was faster to convert everything to triangles so that only one case exists, than to handle other primitives (even though a single sphere became like 50 triangles). 
It was faster to just compute some stuff than to have a check if it is even needed and only then compute it (even though the check was far simpler and the total executed instructions count would be lower, the time that took was longer). These cases all go against the conventional wisdom that the fastest code is the one that doesn't get executed (that is still true, of course, just not 100% of time). xs0
Mar 09 2005
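xs0's measurement (flag check vs. two specialised functions) is easy to reproduce at toy scale. As the post itself notes, code this small is entirely cache-resident, so the numbers mainly show that the gap is tiny rather than predicting real-code behaviour; a hypothetical sketch:

```python
import timeit

def bump_flag(a, flag):
    """One function; behaviour selected by a runtime flag."""
    return a + 1 if flag else a + 2

def bump_one(a):      # two specialised functions, no flag to test
    return a + 1

def bump_two(a):
    return a + 2

flagged = timeit.timeit(lambda: bump_flag(0, True), number=100_000)
special = timeit.timeit(lambda: bump_one(0), number=100_000)
# On toy code the specialised version tends to win by a few percent at most;
# once real method bodies compete for the instruction cache, the flagged
# version can come out ahead, which is xs0's point.
```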
On Thu, 10 Mar 2005 07:33:21 +0100, xs0 <xs0 xs0.com> wrote:I'll reply to OT stuff via e-mail later, as it probably is of no interest to anybody but us..If you like.Yes, two classes.The reason I didn't address the cache comment is because you missunderstood what I was trying to say, here is the thread: [snip quotes] caching didn't apply to what I was saying.Yes it did. You're suggesting that there exist two methods (actually, two entire classes), one without the aspect code, the other one with aspect code.Yes.Code also occupies cache. If the compiled original method is 1000 bytes long, and the new method is 1200 bytes, they take 2200 bytes of cache. If you just have one version that checks a flag, it's like 1210 bytes (including the flag). Considering that L1 cache is usually really small (like 16K for code and 16K for data), that can be a significant difference. I'm not saying it's always the case that it's better to check flags than to have two methods, I'm just saying it can be faster in some cases.Ahh.. I see what you're saying now.Not to even mention how much more flexible a flag is than conditionally doing something with two separate classes..It's more flexible in that it allows a runtime change in behaviour. I think it has a place regardless which method Walter chooses to use (if he chooses to implement AOP).If you test this with really simple/short functions (I did test), flag checking is indeed slower, because you'll have everything in cache anyway (although even such a simple thing as "if (flag) a++; else a+=2" is only like 2% slower compared to having two methods that "a++" or "a+=2"), but in "real" code, flag checking may be faster.So, in short, if code is cached, flags are slower, but, if the code isn't cached, flags may be faster.Efficiency (as in speed) in modern systems is really not that simple anymore. 
I read an article the other day on real-time ray tracing, and the two things that provided the biggest speed gains were using SSE instructions (because they can work on more than one piece of data at a time) and a cache-friendly layout of data structures. It was faster to unconditionally do 4 calculations than to conditionally do one. It was faster to convert everything to triangles so that only one case exists, than to handle other primitives (even though a single sphere became like 50 triangles). It was faster to just compute some stuff than to have a check if it is even needed and only then compute it (even though the check was far simpler and the total executed instruction count would be lower, the time that took was longer). These cases all go against the conventional wisdom that the fastest code is the one that doesn't get executed (that is still true, of course, just not 100% of the time).So, from this we can conclude that efficiency is neither a pro nor a con for either method, as it's dependent on the exact situation in which the code is used. Regan
Mar 13 2005
Regan Heath wrote:AOP is cool, I wish it was possible to use it in D.I've written a simple aspect preprocessor for D, but it hasn't received too much attention in the ng. If still anyone wants to take a look, it's here: http://codeinsane.info/download/adp.zip
Feb 06 2005
I'm jumping into this at a somewhat arbitrary point, but the general claim Walter (apparently) makes is that 1) D tries to catch dumb mistakes made by a user 2) D tries to steer the programmer in the 'right' direction Let's see here: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0..length]; } The above is just an arbitrary example of the apparent hypocritical nature of (1) and (2). The function is supposed to return the subset of its argument only as far as a newline. Do you see the insidious bug there? Many of you will not, so I'll spell it out: Walter added a very subtle pseudo-reserved word, one that's only used when it comes to arrays. Yes, it's the word "length". When used within square brackets, it always means "the length of the enclosing array". Of course, this overrides any other variable that happens to be called "length". Naturally, no warning is emitted. This would perhaps not be so bad if the pseudo-reserved word were "implicitArrayLength" or something like that. But NO! Walter uses an undecorated, and exceptionally common, variable name instead. Oh; and this was introduced to ease the implementation of certain templates - on technical merits. Oh! And Walter feels this pseudo-reserved name should /not/ change from "length" to a 'decorated' version instead. Any talk about D with regard to (1) and (2) is moot, when D clearly injects subtle and glorious ways to f%ck the programmer in simple, and shall I say common, ways. Fair warning :-) I fully sympathize with your head-beating-wall exercise, Matthew. Keep it up! In article <cu44i1$739$1 digitaldaemon.com>, Walter says..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3v58$3c3$1 digitaldaemon.com...A maintenance engineer is stymied by *both* forms, and confused contrarily: the first looks like a bug but may not be, the second is a bug but doesn't look like it. 
The only form that stands up to maintenance is something along the lines of what Derek's talking about: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } throw logic_error("This function has encountered a situation which contradicts its design and/or the design of the software within which it resides"); return 0; }From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null. I agree this is a different way of thinking about the code, and that coming from a solid C/C++ background it might be a bit off-putting.This is what I also do in such cases, and I believe (and have witnessed) it being a widely practiced technique.Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.You're keen to mould D with a view to catering for, or at least mitigating the actions of, the lowest common denominators of the programming gene pool.I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.Yet you seem decidedly uninterested in addressing the concerns of large-scale and/or commercial and/or large-team and/or long-lasting codebases. How can this attitude help D to prosper?I have to disagree with this. Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. 
That you disagree with the efficacy of one of the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.Your measure adds an indeterminately timed exception fire, in the case that a programmer doesn't add a return 0. That's great, so far as it goes. But here's the fly in your soup: what's to stop them adding the return 0?Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does. I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause and trying to mitigate the damage is of net benefit. Let's put it this way, here are the choices (numbers pulled out of dimension X): 1) A bug-catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard-to-reproduce-and-find bug. 2) A bug-catching feature that 70% of the time will cause the programmer to write correct code, but the 30% who get it wrong produce code that, when it fails, fails cleanly, in an easy-to-reproduce, find and therefore fixable manner. It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long-term success of a code base. I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe they overshadow everything else.
Feb 07 2005
I agree 'length' seems to be poorly implemented, or perhaps is simply a bad idea. I think using a symbol like $ is better for this very reason. It should be an "error" IMO (not a warning) to 'hide' a variable from an enclosing scope. There could be three options for avoiding this error: 1. rename the variable or the enclosing variable 2. specify the reference in full. We can specify the enclosing variable in full, but we need to be able to say <in this scope>.varname also. 3. use an alias; we need some way to pick the preferred variable, i.e. the inner variable, allowing you to specify the enclosing var in full. Regardless, either you're arguing that because this is bad about D, the missing-return behaviour must also be bad, which is clearly illogical. Or, this post is simply an attack designed to make Walter's position seem weaker, when in fact it supplies no logical evidence to do so. In other words I can't see how this post has any bearing on the argument at hand. At best it's a strawman: http://www.datanation.com/fallacies/straw.htm On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris <Kris_member pathlink.com> wrote:[snip: verbatim quote of Kris's post and Walter's replies, reproduced in full above]
Feb 07 2005
Regardless, either you're arguing that because this is bad about D, the missing return behaviour must also be bad, which is clearly illogical.He's not saying that at all.Or, this post is simply an attack designed to make Walter's position seem weaker, when in fact it supplies no logical evidence to do so.Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.In other words I can't see how this post has any bearing on the argument at hand. At best it's a strawman: http://www.datanation.com/fallacies/straw.htmYawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Feb 07 2005
On Tue, 8 Feb 2005 09:21:12 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Good.Regardless, either you're arguing that because this is bad about D, the missing return behaviour must also be bad, which is clearly illogical.He's not saying that at all.I don't see how showing someone's past mistake (a matter of opinion, which I happen to share) has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walter's motivation may be as stated, however, clearly he believes he is being true to that motivation WRT the missing return situation, therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.Or, this post is simply an attack designed to make Walter's position seem weaker, when in fact it supplies no logical evidence to do so.Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.In other words I can't see how this post has any bearing on the argument at hand. At best it's a strawman: http://www.datanation.com/fallacies/straw.htmYawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.1. You have chosen to attack the method in which I have presented my argument, instead of the actual argument itself: http://www.datanation.com/fallacies/style.htm 2. To put it simply, "whether there is a link or not, has no bearing on whether it's important or not", to argue otherwise is clearly illogical. Regan
Feb 07 2005
Well put. I just don't agree.I don't see how showing someone's past mistake (a matter of opinion, which I happen to share) has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walter's motivation may be as stated, however, clearly he believes he is being true to that motivation WRT the missing return situation, therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.Or, this post is simply an attack designed to make Walter's position seem weaker, when in fact it supplies no logical evidence to do so.Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.In other words I can't see how this post has any bearing on the argument at hand. At best it's a strawman: http://www.datanation.com/fallacies/straw.htmYawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.1. You have chosen to attack the method in which I have presented my argument, instead of the actual argument itself:Well, it appears that you're more adept at quoting others' wisdom than acquiring your own. Specifically, I _did_ attack the argument, and the proof of that is that you responded to my point. Doh! Let's see what gnomic little nugget you're going to proffer next ...
Feb 07 2005
On Tue, 8 Feb 2005 09:48:10 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Sorry, don't agree with what in particular? - Walter believes it's true to his motivation. - The behaviour is true to Walter's motivation. - This argument has no bearing on the other.Well put. I just don't agree.I don't see how showing someone's past mistake (a matter of opinion, which I happen to share) has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walter's motivation may be as stated, however, clearly he believes he is being true to that motivation WRT the missing return situation, therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.Or, this post is simply an attack designed to make Walter's position seem weaker, when in fact it supplies no logical evidence to do so.Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.By definition I have one for every instance in which someone appears to _me_ to be illogical. (I accept the possibility that I could be wrong and welcome a rebuttal)Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.In other words I can't see how this post has any bearing on the argument at hand. At best it's a strawman: http://www.datanation.com/fallacies/straw.htmYawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.Now you're attacking me: http://www.datanation.com/fallacies/attack.htm1. You have chosen to attack the method in which I have presented my argument, instead of the actual argument itself:Well, it appears that you're more adept at quoting others' wisdom than acquiring your own.Specifically, I _did_ attack the argument, and the proof of that is that you responded to my point. Doh!You attacked _both_ the argument _and_ the method in which it was proposed. 
The first is fine, the second is illogical.Let's see what gnomic little nugget you're going to proffer next ...The reason I proffer these links is simple. In my experience a skillful writer/speaker can sway an audience to believe/disbelieve just about anything; they can do it without providing any logical or rational reasoning. These links helped _me_ understand what they were doing and why it was illogical, and I hope to enlighten as many people as I can, so that we can all get on with having logical, rational debates with good sound reasoning. Now, I'm not saying either you or Kris _are_ illogical and/or irrational at all, you both exhibit very good logical and rational reasoning, however in this particular case I think the argument is illogical and I'm trying to explain why to the best of my ability. My intention is not to attack the person at all (for that would be illogical), however for some reason you seem to have taken it as an attack against the person, and attacked back in that fashion. I may be wrong about this argument being illogical. If you believe so, please make an attempt to refute my argument in the same manner in which it was proffered, with logic. Regan
Feb 07 2005
"Regan Heath" <regan netwin.co.nz> wrote in message news:opsluof7fx23k2f5 ally...On Tue, 8 Feb 2005 09:48:10 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Very well put. What you either fail to recognise, or may recognise all too well, is that by contextualising both your own arguments and those of others in logic terms (I'd say logical terms, but that'd be confusing, illogical as that may be), you are attempting to coerce just as surely as those whom you (claim to) refute. Indeed, examining your posts from a psychological perspective reveals all manner of interesting little tactics. For example, "please make an attempt to refute my argument in the same manner in which it was proffered, with logic". This not only attempts to (subconsciously) persuade the recipient (me) _and_ others to accept that my/Kris' arguments thus far are devoid of logic, it also inclines us all to treat your posts as logical because you explicitly and overtly put in your impressive links. Furthermore, it attempts to control the debate - in your favour no doubt - by prescribing its form. I'm neither impressed with your tactics (though I recognise that they may well be effective in many of your online relationships), nor am I inclined to comply with your attempts to frame the debates according to your own terms.[snip: verbatim re-quote of the preceding exchange]
Feb 07 2005
In article <cu9eiv$26pa$1 digitaldaemon.com>, Matthew says...[snip: verbatim quote of Matthew's post, above]Jeepers, guys. Chill out. I'm half-way not believing that Matthew posted that since it doesn't really sound like him. This "debate" has gotten too polarized IMO. Everyone put the knives down and back away... :-P -Ben
Feb 08 2005
"Ben Hinkle" <Ben_member pathlink.com> wrote in message news:cuaddf$1tqk$1 digitaldaemon.com...In article <cu9eiv$26pa$1 digitaldaemon.com>, Matthew says...Agreed. Bad day behaviour. I guess I just don't like being told what to do, or how to think. Sorry all round. The Ranting Twit .....[snip: verbatim re-quote of the preceding exchange]
Feb 08 2005
On Wed, 9 Feb 2005 06:42:44 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:"Ben Hinkle" <Ben_member pathlink.com> wrote in message news:cuaddf$1tqk$1 digitaldaemon.com...Agreed. Bad day behaviour. I guess I just don't like being told what to do, or how to think. Sorry all round.Matthew, I too am sorry. My intention wasn't to tell you what to do or how to do it, but rather to share an ideal to which I subscribe. I have replied to your last post, probably because I have to have the last word. :) I would be happy if you felt like reading and replying, though I'll understand if you simply want to leave the horse where it lies, so to speak. To re-iterate: I have the greatest respect for both you and Kris (and many other people here); at the same time I have strong opinions of my own and will always share them. I realise that I can come across aggressively; I fear it's a flaw of what I hope is a passionate nature. Regan
Feb 08 2005
"Regan Heath" <regan netwin.co.nz> wrote in message news:opslwiflq723k2f5 ally...On Wed, 9 Feb 2005 06:42:44 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Regan, I am, like everyone else, flawed in myriad ways. One of 'em is I don't like being told what to do. Add that to a few frustrating days in my work life, and you get overreaction, rudeness, patronisation and general bad form. I think the way you carry on with the logic is irritating, but (i) I overreacted, and (ii) I know full well that I can be, and often am, at least as irritating, and probably in several different ways. 'nuff said? Cheers The Huffy Nerfal ....."Ben Hinkle" <Ben_member pathlink.com> wrote in message news:cuaddf$1tqk$1 digitaldaemon.com...Matthew, I too am sorry. My intention wasn't to tell you what to do or how to do it, but rather to share an ideal to which I prescribe. I have replied to your last post, probably because I have to have the last word. :) I would be happy if you felt like reading and replying, thought I'll understand if you simply want to leave the horse where it lies, so to speak. To re-iterate I have the greatest respect for both you and Kris (and many other people here), at the same time I have strong opinions of my own and will always share them. I realise that I can come across aggressively, I fear it's a flaw of what I hope is a passionate nature.In article <cu9eiv$26pa$1 digitaldaemon.com>, Matthew says...Agreed. Bad day behaviour. I guess I just don't like being told what to do, or how to think. Sorry all round."Regan Heath" <regan netwin.co.nz> wrote in message news:opsluof7fx23k2f5 ally...Jeepers, guys. Chill out. I'm half-way not believing that Matthew posted that since it doesn't really sound like him. This "debate" has gotten too polarized IMO. Everyone put the knives down and back away... :-POn Tue, 8 Feb 2005 09:48:10 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Very well, put. 
What you either fail to recognise, or may recognise all too well, is that by contextualising both your own arguments and those of others in logic terms (I'd say logical terms, but that'd be confusing, illogical as that may be), you are attempting to coerce just as surely as those whom you (claim to) refute. Indeed, examining your posts from a psychological perspective reveals all manner of interesting little tactics. For example, "please make an attempt to refute my argument in the same manner in which it was preferred, with logic". This not only attempts to (subconsciously) persuade the recipient (me) _and_ others to accept that my/Kris' arguments thus far are devoid of logic, it also inclines us all to treat your posts as logical because you explicitly and overtly put in your impressive links. Furthermore, it attempts to control the debate - in your favour no doubt - by prescribing its form. I'm neither impressed with your tactics (though I recognise that they may well be effective in many of your online relationships), nor am I inclined to comply with your attempts to frame the debates according to your own terms.Sorry, don't agree with what in particular? - Walter believes it's true to his motivation. - The behaviour is true to Walters motivation. - This argument has no bearing on the other.Well put. I just don't agree.I don't see how showing someones past mistake (a matter of opinion, which I happen to share), has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walters motivation may be as stated, however, clearly he believes he is being true to that motivation WRT to the missing return situation, therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.Or, this post is simply an attack designed to make Walters position seem weaker, when in fact it supplies no logical evidence to do so.Oh come on! 
It goes to the motivation behind the missing return value. Plain as the nose on your face.By defintion I have one for every instance in which someone appears to _me_ to be illogical. (I accept the posibility that I could be wrong and welcome a rebuttal)Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.In other words I can't see how this post has any bearing on the argument at hand. At best it's a strawman: http://www.datanation.com/fallacies/straw.htmYawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.Now you're attacking me: http://www.datanation.com/fallacies/attack.htm1. You have chosen to attack the method in which I have presented my argument, instead of the actual argument itself:Well, it appears that you're more adept at quoting other's wisdoms, than acquiring your own.Specifically, I _did_ attack the argument, and the proof of that is that you responded to my point. Doh!You attacked _both_ the argument _and_ the method in which it was proposed. The first is fine, the seccond is illogical.Let's see what gnomic little nugget you're going to profer next ...The reason I profer these links is simple. In my experience a skillful writer/speaker can sway an audience to believe/disbelieve just about anything, they can do it without providing any logical or rational reasoning. These links helped _me_ understand what they were doing and why it was illogical, I hope to enlighten as many people as I can, so that we can all get on with having logical, rational debates with good sound reasoning. Now, I'm not saying either you or Kris _are_ illogical and/or irrational at all, you both exhibit very good logical and rational reasoning, however in this particular case I think the argument is illogical and I'm trying to explain why to the best of my ability. 
My intention is not to attack the person at all (for that would be illogical), however for some reason you seem to have taken it as an attack against the person, and attacked back in that fashion. I may be wrong about this argument being illogical. If you believe so please make an attempt to refute my argument in the same manner in which it was proferred, with logic.
Feb 08 2005
On Wed, 9 Feb 2005 10:07:19 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:"Regan Heath" <regan netwin.co.nz> wrote in message news:opslwiflq723k2f5 ally...I think the way you carry on with the logic is irritating, butUnderstood. I'll do my best to curtail my religious zeal.(i) I overreacted, and (ii) I know full well that I can be, and often am, at least as irritating, and probably in several different ways. 'nuff said?Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... ) Regan
Feb 08 2005
You're welcome to it.'nuff said?Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
Feb 08 2005
On Wed, 9 Feb 2005 10:23:50 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:(secretly stealing the last word again) LOL.. elegantly done! ReganYou're welcome to it.'nuff said?Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
Feb 08 2005
"Regan Heath" <regan netwin.co.nz> wrote in message news:opslwlvgg823k2f5 ally...On Wed, 9 Feb 2005 10:23:50 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:It was nothing(secretly stealing the last word again) LOL.. elegantly done!You're welcome to it.'nuff said?Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
Feb 08 2005
On Wed, 9 Feb 2005 11:22:27 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:"Regan Heath" <regan netwin.co.nz> wrote in message news:opslwlvgg823k2f5 ally...Again! I fear I am no match...On Wed, 9 Feb 2005 10:23:50 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:It was nothing(secretly stealing the last word again) LOL.. elegantly done!You're welcome to it.'nuff said?Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
Feb 08 2005
Regan Heath wrote:On Wed, 9 Feb 2005 11:22:27 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Okay, guys! This is ridiculous. I'll have the last word and be done with it! :-P"Regan Heath" <regan netwin.co.nz> wrote in message news:opslwlvgg823k2f5 ally...Again! I fear I am no match...On Wed, 9 Feb 2005 10:23:50 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:It was nothing(secretly stealing the last word again) LOL.. elegantly done!You're welcome to it.'nuff said?Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
Feb 08 2005
"Regan Heath" <regan netwin.co.nz> wrote in message news:opslwm1ox523k2f5 ally...On Wed, 9 Feb 2005 11:22:27 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Surely not"Regan Heath" <regan netwin.co.nz> wrote in message news:opslwlvgg823k2f5 ally...Again! I fear I am no match...On Wed, 9 Feb 2005 10:23:50 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:It was nothing(secretly stealing the last word again) LOL.. elegantly done!You're welcome to it.'nuff said?Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
Feb 08 2005
On Tue, 8 Feb 2005 15:18:52 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:"Regan Heath" <regan netwin.co.nz> wrote in message news:opsluof7fx23k2f5 ally...Please explain, I don't understand.On Tue, 8 Feb 2005 09:48:10 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Very well put. What you either fail to recognise, or may recognise all too well, is that by contextualising both your own arguments and those of others in logic terms (I'd say logical terms, but that'd be confusing, illogical as that may be), you are attempting to coerce just as surely as those whom you (claim to) refute.Indeed, examining your posts from a psychological perspective reveals all manner of interesting little tactics. For example, "please make an attempt to refute my argument in the same manner in which it was proffered, with logic". 
This not only attempts to (subconsciously) persuade the recipient (me) _and_ others to accept that my/Kris' arguments thus far are devoid of logicI see what you mean, and I agree, that sentence was ill considered. What I meant by it was that I believed some of the arguments presented were illogical, in particular those that I indicated were, and why., it also inclines us all to treat your posts as logical because you explicitly and overtly put in your impressive links.The links merely serve to better explain the concepts I am trying to convey. I can see your point: some people view links as 'authoritative', and posting links therefore has an effect. What can I say, I wish it were not so. It all gets a bit circular; by reading these links I've learnt to spot things like this, but I had to follow the link to do so.Furthermore, it attempts to control the debate - in your favour no doubt - by prescribing its form.You seem to be assuming malicious intent on my part? I realise not everything can be expressed logically, but it appears to me that this can, and should be.I'm neither impressed with your tactics (though I recognise that they may well be effective in many of your online relationships), nor am I inclined to comply with your attempts to frame the debates according to your own terms.Saying I have 'tactics' implies that I am trying to beat you in some way. My intent is for us to find and share common ground, not war. Regan
Feb 08 2005
On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:I'm jumping into this at a somewhat arbitrary point, but the general claim Walter (apparently) makes is that 1) D tries to catch dumb mistakes made by a user 2) D tries to steer the programmer in the 'right' direction Let's see here:

char[] getLine (char[] s)
{
    uint length = s.length;
    foreach (uint i, char c; s)
    {
        if (c == '\n')
            length = i;
    }
    return s [0..length];
}

The above is just an arbitrary example of the apparent hypocritical nature of (1) and (2). The function is supposed to return the subset of its argument only as far as a newline. Do you see the insidious bug there? Many of you will not, so I'll spell it out: Walter added a very subtle pseudo-reserved word, that's only used when it comes to arrays. Yes, it's the word "length". When used within square-brackets, it always means "the length of the enclosing array". Of course, this overrides any other variable that happens to be called "length". Naturally, no warning is emitted. This would perhaps not be so bad if the pseudo-reserved word were "implicitArrayLength" or something like that. But NO! Walter uses an undecorated, and exceptionally common variable name instead. Oh; and this was introduced to ease the implementation of certain templates - on technical merits. Oh! And Walter feels this pseudo-reserved name should /not/ change from "length" to a 'decorated' version instead.And this is one of the reasons why I use 'decorated' identifier names; to avoid clashes with language keywords.

char[] getLine (char[] pString)
{
    uint lLength = pString.length;
    foreach (uint fIdx, char fCurrChar; pString)
    {
        if (fCurrChar == '\n')
            lLength = fIdx;
    }
    return pString [0..lLength];
}

(The prefixes give hints as to the identifiers' scope) -- Derek Melbourne, Australia
Feb 07 2005
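The shadowing Kris describes can be condensed into a minimal sketch. This assumes the 2005-era DMD rule he quotes - that the identifier `length` inside slice brackets denotes the enclosing array's length - and the function names here are illustrative (later D compilers replaced this implicit symbol with `$`):

```d
// Intended behaviour: return the text up to the first newline.
// Under the implicit-'length' rule, the slice expression below does
// NOT see the local variable named 'length'; inside [] it resolves to
// s.length, so the whole string comes back, newline and all.
char[] firstLine (char[] s)
{
    uint length = s.length;      // local, meant to be the slice end
    foreach (uint i, char c; s)
    {
        if (c == '\n')
        {
            length = i;          // records the newline position...
            break;
        }
    }
    return s[0..length];         // ...ignored: this is s.length here
}

// The workaround: any name other than 'length' behaves as expected.
char[] firstLineFixed (char[] s)
{
    uint end = s.length;
    foreach (uint i, char c; s)
    {
        if (c == '\n')
        {
            end = i;
            break;
        }
    }
    return s[0..end];            // slices up to, not including, '\n'
}
```

Under the rule Kris quotes, `firstLine("abc\ndef")` would return all seven characters, while `firstLineFixed` returns only `"abc"` - same logic, different spelling of one local variable, and no diagnostic either way.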
Derek wrote:And this is one of the reasons why I use 'decorated' identifier names;I'm not sure that warts classify as decorations in all cultures? :-) http://www.digitalmars.com/d/dstyle.html:Hungarian Notation Just say no.--anders
Feb 07 2005
On Mon, 07 Feb 2005 22:21:25 +0100, Anders F Björklund wrote:Derek wrote:Well, it's been working for me and my teams for 10 years now, so sue me. ;-) -- Derek Melbourne, AustraliaAnd this is one of the reasons why I use 'decorated' identifier names;I'm not sure that warts classify as decorations in all cultures? :-) http://www.digitalmars.com/d/dstyle.html:Hungarian Notation Just say no.
Feb 07 2005
"Derek" <derek psych.ward> wrote in message news:1lxyunvbocmz8$.jojxhcylx4ev.dlg 40tude.net...On Mon, 07 Feb 2005 22:21:25 +0100, Anders F Björklund wrote:Type decoration - void fn(long lLimit); - is bad, because it is non-portable, and introduces strong probabilities that the code itself will be turned into a liar. Purpose decoration - void fn(char const *name, int bOverwrite); - is good, notwithstanding its uglification. (It's still better to do without, if that does not promote ambiguities.) (For a better exposition, consult section 17.4 of your copy of Imperfect C++ <g>) Cheers -- Matthew Wilson Author: "Imperfect C++", Addison-Wesley, 2004 (http://www.imperfectcplusplus.com) Contributing editor, C/C++ Users Journal (http://www.synesis.com.au/articles.html#columns) STLSoft moderator (http://www.stlsoft.org) "I can't sleep nights till I found out who hurled what ball through what apparatus" -- Dr Niles Crane -------------------------------------------------------------------------------Derek wrote:Well its been working for me and my teams for 10 years now, so sue me. ;-)And this is one of the reason why I use 'decorated' identifier names;I'm not sure that warts classifies as decorations in all cultures ? :-) http://www.digitalmars.com/d/dstyle.html:Hungarian Notation Just say no.
Feb 07 2005
In article <1lxyunvbocmz8$.jojxhcylx4ev.dlg 40tude.net>, Derek says...On Mon, 07 Feb 2005 22:21:25 +0100, Anders F Björklund wrote:I applaud any group that sets their own standards to deal with complexity; and then sticks with it. The issue here is that D slyly injects its own variable named 'length', which then (a) forces one to adopt just such a standard, once you've hopefully noticed the bug, and (b) D does not tell you what it did to f&ck you in the first place :-( Given Walter's current position on this particular language 'idiom', one must resort to (a) Hence, one has to adopt a somewhat tongue-in-cheek attitude to lofty claims regarding the goals of D to "protect and serve". I understand Walter invoked the "do as I say, not do as I do" as a rebuke within this thread somewhere. My opinion, and suggestion, is that perhaps he might reflect upon that for a while :-)Derek wrote:Well its been working for me and my teams for 10 years now, so sue me. ;-)And this is one of the reason why I use 'decorated' identifier names;I'm not sure that warts classifies as decorations in all cultures ? :-) http://www.digitalmars.com/d/dstyle.html:Hungarian Notation Just say no.
Feb 07 2005
On Mon, 7 Feb 2005 22:38:27 +0000 (UTC), Kris wrote:In article <1lxyunvbocmz8$.jojxhcylx4ev.dlg 40tude.net>, Derek says...Sorry I digressed from the main point of your post. I tend to agree with your assessment of the 'length' decision. However, even with that said, and without using decorated identifiers, your example could do with improved identifier naming, for example ... char[] getLine (char[] text_string) { uint newline_position = text_string.length; foreach (uint curr_position, char curr_char; text_string) { if (curr_char == '\n') { newline_position = curr_position; break; } } return text_string [0..newline_position]; } Short identifier names do not always lead to better legibility, just as longer ones do not always enhance legibility. But there is a balance that can work. -- Derek Melbourne, Australia 8/02/2005 11:24:10 AMOn Mon, 07 Feb 2005 22:21:25 +0100, Anders F Björklund wrote:I applaud any group that sets their own standards to deal with complexity; and then sticks with it. The issue here is that D slyly injects its own variable named 'length', which then (a) forces one to adopt just such a standard, once you've hopefully noticed the bug, and (b) D does not tell you what it did to f&ck you in the first place :-( Given Walter's current position on this particular language 'idiom', one must resort to (a) Hence, one has to adopt a somewhat tongue-in-cheek attitude to lofty claims regarding the goals of D to "protect and serve". I understand Walter invoked the "do as I say, not do as I do" as a rebuke within this thread somewhere. My opinion, and suggestion, is that perhaps he might reflect upon that for a while :-)Derek wrote:Well its been working for me and my teams for 10 years now, so sue me. ;-)And this is one of the reason why I use 'decorated' identifier names;I'm not sure that warts classifies as decorations in all cultures ? :-) http://www.digitalmars.com/d/dstyle.html:Hungarian Notation Just say no.
Feb 07 2005
In article <cu91gc$1fc7$1 digitaldaemon.com>, Derek Parnell says... <snip>Short identifier names do not always lead to better legibility, just as longer ones do always enhance legibility. But there is a balance that can work.Amen, Derek. But that's a somewhat different topic. This one is "Compiler support for writing bug free code". The point is that, in this case, the compiler does just the opposite. If I may be so bold: What bothers me is that while Walter acknowledges this issue, he doesn't think it's worthy enough to warrant any attention. This is in rather stark contrast to the "preaching" and "hand clasping" that's somewhat evident in parts of this thread. It would be funny, if it weren't so sad :-) Anyway; I must apologise for drifting this thread away from the original problem, so I'll finish with the following: ultimately, we all want D to be a better language -- not better than C++/Java -- better than D is currently. To that end, we have to point out all of the shortcomings and endeavour to have them resolved appropriately. This is why I'm not giving your alternate topic of "better variable names; best practices" the credit it duly & truly deserves, yet keep carping on about the hypocrisy that needs to be rectified :~} Cheers! - Kris
Feb 07 2005
"Derek" <derek psych.ward> wrote in message news:19f0tg7229js5.y2wq16gmtkq3.dlg 40tude.net...On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:Very sensible. But very sad that we must do so, given total ugliness of decorations in general, and the almost total uselessness of Hungarian nature. The last I recall from last year was that the implicit length was going to be $. I'm sure there were reasons against, but they cannot be as compelling as the example Kris gave. Here's a possible compromise, although I'm not sure I like it: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0 .. .length]; } The . before length indicates its 'local' to 's'. Hmmm, on second thoughts, that stinks. In general - indeed, it's harder to think of a contrary example - verbose code is better than dangerous code. Kris is quite right when he says that D has introduced some of the latter.I'm jumping into this at a somewhat arbitrary point, but the general claim Walter (apparently) makes is that 1) D tries to catch dumb mistakes made by a user 2) D tries to steer the programmer in the 'right' direction Let's see here: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0..length]; } The above is just an arbitrary example of the apparent hypocritical nature of (1) and (2). The function is supposed to return the subset of its argument only as far as a newline. Do you see the insideous bug there? Many of you will not, so I'll spell it out: Walter added a very subtle pseudo-reserved word, that's only used when it comes to arrays. Yes, it's the word "length". When used within square-brackets, it always means "the length of the enclosing array". Of course, this overrides any other variable that happens to be called "length". Naturally, no warning is emitted. 
This would perhaps not be so bad if the pseudo-reserved word were "implicitArrayLength" or something like that. But NO! Walter uses an undecorated, and exceptionally common variable name instead. Oh; and this was introduced to ease the implementation of certain templates - on technical merits. Oh! And Walter feels this pseudo-reserved name should /not/ change from "length" to a 'decorated' version instead.And this is one of the reason why I use 'decorated' identifier names; to avoid clashes with language keywords. char[] getLine (char[] pString) { uint lLength = pString.length; foreach (uint fIdx, char fCurrChar; pString) { if (fCurrChar == '\n') lLength = fIdx; } return pString [0..lLength]; } (The prefixes give hints as to the identifiers' scope)
Feb 07 2005
On Tue, 8 Feb 2005 09:24:38 +1100, Matthew wrote:"Derek" <derek psych.ward> wrote in message news:19f0tg7229js5.y2wq16gmtkq3.dlg 40tude.net...Agreed, if the only purpose of using decorated words for identifiers is to work around clashes with keywords. However, another major reason for using the decoration scheme that we do use here, is to make it faster for people to understand the code that they are reading. By having 'scope/purpose' hints in the identifier names, it usually saves people scanning large blocks of code looking for where an identifier was declared.On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:Very sensible. But very sad that we must do so, given total ugliness of decorations in general, and the almost total uselessness of Hungarian nature.I'm jumping into this at a somewhat arbitrary point, but the general claim Walter (apparently) makes is that 1) D tries to catch dumb mistakes made by a user 2) D tries to steer the programmer in the 'right' direction Let's see here: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0..length]; } The above is just an arbitrary example of the apparent hypocritical nature of (1) and (2). The function is supposed to return the subset of its argument only as far as a newline. Do you see the insideous bug there? Many of you will not, so I'll spell it out: Walter added a very subtle pseudo-reserved word, that's only used when it comes to arrays. Yes, it's the word "length". When used within square-brackets, it always means "the length of the enclosing array". Of course, this overrides any other variable that happens to be called "length". Naturally, no warning is emitted. 
Oh; and this was introduced to ease the implementation of certain templates - on technical merits. Oh! And Walter feels this pseudo-reserved name should /not/ change from "length" to a 'decorated' version instead.And this is one of the reason why I use 'decorated' identifier names; to avoid clashes with language keywords. char[] getLine (char[] pString) { uint lLength = pString.length; foreach (uint fIdx, char fCurrChar; pString) { if (fCurrChar == '\n') lLength = fIdx; } return pString [0..lLength]; } (The prefixes give hints as to the identifiers' scope)The last I recall from last year was that the implicit length was going to be $. I'm sure there were reasons against, but they cannot be as compelling as the example Kris gave.You are preaching to the converted, brother ;-)Here's a possible compromise, although I'm not sure I like it: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0 .. .length]; } The . before length indicates its 'local' to 's'. Hmm, on second thoughts, that stinks.Yes, it does. But nice try though.In general - indeed, it's harder to think of a contrary example - verbose code is better than dangerous code. Kris is quite right when he says that D has introduced some of the latter.Agreed. I have always supported the use of a symbol rather than an English word to represent the array's length property. I'm keen to promote the readability of source code by humans, so an extra 'dot' seems counterproductive to that aim. -- Derek Melbourne, Australia 8/02/2005 11:15:38 AM
Feb 07 2005
In article <cu8qfi$11q3$1 digitaldaemon.com>, Matthew says..."Derek" <derek psych.ward> wrote in message news:19f0tg7229js5.y2wq16gmtkq3.dlg 40tude.net...How about: array[from...] // analogous to array[from .. array.length]; - DaveOn Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:Very sensible. But very sad that we must do so, given total ugliness of decorations in general, and the almost total uselessness of Hungarian nature. The last I recall from last year was that the implicit length was going to be $. I'm sure there were reasons against, but they cannot be as compelling as the example Kris gave. Here's a possible compromise, although I'm not sure I like it: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0 .. .length]; } The . before length indicates its 'local' to 's'. Hmmm, on second thoughts, that stinks. In general - indeed, it's harder to think of a contrary example - verbose code is better than dangerous code. Kris is quite right when he says that D has introduced some of the latter.I'm jumping into this at a somewhat arbitrary point, but the general claim Walter (apparently) makes is that 1) D tries to catch dumb mistakes made by a user 2) D tries to steer the programmer in the 'right' direction Let's see here: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0..length]; } The above is just an arbitrary example of the apparent hypocritical nature of (1) and (2). The function is supposed to return the subset of its argument only as far as a newline. Do you see the insideous bug there? Many of you will not, so I'll spell it out: Walter added a very subtle pseudo-reserved word, that's only used when it comes to arrays. Yes, it's the word "length". When used within square-brackets, it always means "the length of the enclosing array". Of course, this overrides any other variable that happens to be called "length". 
Naturally, no warning is emitted. This would perhaps not be so bad if the pseudo-reserved word were "implicitArrayLength" or something like that. But NO! Walter uses an undecorated, and exceptionally common variable name instead. Oh; and this was introduced to ease the implementation of certain templates - on technical merits. Oh! And Walter feels this pseudo-reserved name should /not/ change from "length" to a 'decorated' version instead.And this is one of the reason why I use 'decorated' identifier names; to avoid clashes with language keywords. char[] getLine (char[] pString) { uint lLength = pString.length; foreach (uint fIdx, char fCurrChar; pString) { if (fCurrChar == '\n') lLength = fIdx; } return pString [0..lLength]; } (The prefixes give hints as to the identifiers' scope)
Feb 07 2005
"Dave" <Dave_member pathlink.com> wrote in message news:cu91eb$1f90$1 digitaldaemon.com...In article <cu8qfi$11q3$1 digitaldaemon.com>, Matthew says...IIRC, that was a popular suggestion at the time, as was array[ .. 2] // from 0 => 2 and array[ .. ] // from 0 => length but they were not accepted. I can't remember why, and they seem ok to me. In neither Ruby nor Python are such things in the least confusing. (Although Ruby's use of inclusive and exclusive ranges via .. and ... gets a little confusing - I'd have to have a look in the book now to tell you which was which.)"Derek" <derek psych.ward> wrote in message news:19f0tg7229js5.y2wq16gmtkq3.dlg 40tude.net...How about: array[from...] // analogous to array[from .. array.length];On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:Very sensible. But very sad that we must do so, given total ugliness of decorations in general, and the almost total uselessness of Hungarian nature. The last I recall from last year was that the implicit length was going to be $. I'm sure there were reasons against, but they cannot be as compelling as the example Kris gave. Here's a possible compromise, although I'm not sure I like it: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0 .. .length]; } The . before length indicates its 'local' to 's'. Hmmm, on second thoughts, that stinks. In general - indeed, it's harder to think of a contrary example - verbose code is better than dangerous code. 
Kris is quite right when he says that D has introduced some of the latter.I'm jumping into this at a somewhat arbitrary point, but the general claim Walter (apparently) makes is that 1) D tries to catch dumb mistakes made by a user 2) D tries to steer the programmer in the 'right' direction Let's see here: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0..length]; } The above is just an arbitrary example of the apparent hypocritical nature of (1) and (2). The function is supposed to return the subset of its argument only as far as a newline. Do you see the insideous bug there? Many of you will not, so I'll spell it out: Walter added a very subtle pseudo-reserved word, that's only used when it comes to arrays. Yes, it's the word "length". When used within square-brackets, it always means "the length of the enclosing array". Of course, this overrides any other variable that happens to be called "length". Naturally, no warning is emitted. This would perhaps not be so bad if the pseudo-reserved word were "implicitArrayLength" or something like that. But NO! Walter uses an undecorated, and exceptionally common variable name instead. Oh; and this was introduced to ease the implementation of certain templates - on technical merits. Oh! And Walter feels this pseudo-reserved name should /not/ change from "length" to a 'decorated' version instead.And this is one of the reason why I use 'decorated' identifier names; to avoid clashes with language keywords. char[] getLine (char[] pString) { uint lLength = pString.length; foreach (uint fIdx, char fCurrChar; pString) { if (fCurrChar == '\n') lLength = fIdx; } return pString [0..lLength]; } (The prefixes give hints as to the identifiers' scope)
Feb 07 2005
Unicode to the rescue: array[from..\u221E] For those who don't immediately recognize \u221E, it is the codepoint for infinity. :-)How about: array[from...] // analogous to array[from .. array.length];IIRC, that was a popular suggestion at the time, as was array[ .. 2] // from 0 => 2 and array[ .. ] // from 0 => length
Feb 07 2005
In article <cu930u$1j33$1 digitaldaemon.com>, Matthew says...The problem is that one may need to reference the array-length within an expression; an expression within the brackets. This extends to templates, which needed a means to explicitly reference the array length whilst avoiding recanting the array itself (or something like that). The upshot, I understand, was that the notion of an implicit array-length 'temporary' seemed appropriate. Unfortunately it was implemented as a pseudo-reserved "length", rather than an alternate manner that didn't quite shove it up the programmers' proverbial arseHow about: array[from...] // analogous to array[from .. array.length];IIRC, that was a popular suggestion at the time, as was array[ .. 2] // from 0 => 2 and array[ .. ] // from 0 => length but they were not accepted. I can't remember why, and they seem ok to me. In neither Ruby nor Python are such things in the least confusing. (Although Ruby's use of inclusive and exclusive ranges via .. and ... gets a little confusing - I'd have to have a look in the book now to tell you which was which.)
Feb 07 2005
"Kris" <Kris_member pathlink.com> wrote in message news:cu956n$1msm$1 digitaldaemon.com...In article <cu930u$1j33$1 digitaldaemon.com>, Matthew says...Ah, of course. Silly me.The problem is that one may need to reference the array-length within an expression; an expression within the brackets.How about: array[from...] // analogous to array[from .. array.length];IIRC, that was a popular suggestion at the time, as was array[ .. 2] // from 0 => 2 and array[ .. ] // from 0 => length but they were not accepted. I can't remember why, and they seem ok to me. In neither Ruby nor Python are such things in the least confusing. (Although Ruby's use of inclusive and exclusive ranges via .. and ... gets a little confusing - I'd have to have a look in the book now to tell you which was which.)This extends to templates, which needed a means to explicitly reference the array length whilst avoiding recanting the array itself (or something like that). The upshot, I understand, was that the notion of an implicit array-length 'temporary' seemed appropriate. Unfortunately it was implemented as a pseudo-reserved "length", rather than an alternate manner that didn't quite shove it up the programmers' proverbial arseIndeed
Feb 07 2005
I'm afraid I was out working + book writing when that went in. Very poor form indeed. (And you're quite right that it totally takes the legs out from Walter's arguments on missing returns.) :-( "Kris" <Kris_member pathlink.com> wrote in message news:cu8gk2$e0a$1 digitaldaemon.com...I'm jumping into this at a somewhat arbitrary point, but the general claim Walter (apparently) makes is that 1) D tries to catch dumb mistakes made by a user 2) D tries to steer the programmer in the 'right' direction Let's see here: char[] getLine (char[] s) { uint length = s.length; foreach (uint i, char c; s) { if (c == '\n') length = i; } return s [0..length]; } The above is just an arbitrary example of the apparent hypocritical nature of (1) and (2). The function is supposed to return the subset of its argument only as far as a newline. Do you see the insideous bug there? Many of you will not, so I'll spell it out: Walter added a very subtle pseudo-reserved word, that's only used when it comes to arrays. Yes, it's the word "length". When used within square-brackets, it always means "the length of the enclosing array". Of course, this overrides any other variable that happens to be called "length". Naturally, no warning is emitted. This would perhaps not be so bad if the pseudo-reserved word were "implicitArrayLength" or something like that. But NO! Walter uses an undecorated, and exceptionally common variable name instead. Oh; and this was introduced to ease the implementation of certain templates - on technical merits. Oh! And Walter feels this pseudo-reserved name should /not/ change from "length" to a 'decorated' version instead. Any talk about D with regard to (1) and (2) are moot, when D clearly injects subtle and glorious ways to f%ck the programmer in simple, and shall I say common, ways. Fair warning :-) I fully sympathize with your head-beating-wall exercise, Matthew. Keep it up! 
In article <cu44i1$739$1 digitaldaemon.com>, Walter says..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3v58$3c3$1 digitaldaemon.com...A maintenance engineer is stymied by *both* forms, and confused contrarily: the first looks like a bug but may not be, the second is a bug but doesn't look like it. The only form that stands up to maintenance is something along the lines of what Derek's talking about: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y)return v.z;} throw logic_error("This function has encountered a situation which contradicts its design and/or the design of the software within which it resides"); return 0; }From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null. I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.This is what I also do in such cases, and I believe (and have witnessed) it being a widely practiced technique.Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.You're keen to mould D with a view to catering for, or at least mitigating the actions of, the lowest common denominators of the programming gene pool.I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.Yet you seem decidely uninterested in addressing the concerns of large scale and/or commercial and/or large-teams and/or long-lasting codebases. How can this attitude help D to prosper?I have to disagree with this. 
Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. That you disagree with the efficacy of one of the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.Your measure adds an indeterminately timed exception fire, in the case that a programmer doesn't add a return 0. That's great, so far as it goes. But here's the fly in your soup: what's to stop them adding the return 0?Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does. I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit. Let's put it this way, here are the choices (numbers pulled out of dimension X): 1) A bug catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard to reproduce & find bug. 2) A bug catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that when it fails, fails cleanly, in an easy to reproduce, find and therefore fixable manner. It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long term success of a code base. 
I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe they overshadow everything else.
Feb 07 2005
I agree. IMHO, "length" should either be: - reserved; it should be an error to use it as a variable name, etc. - an error to use within a slice (in other words, if the scope contains a "length" variable and you want to use "length" within a slice - whichever you mean, you get an error!) - removed, and either not replaced or replaced with a symbol. I think you're advocating 2 or 3, which I agree with. If neither of those, 1 would be a good choice. But, I think it is fully clear that the current situation is a problem. -[Unknown]The above is just an arbitrary example of the apparent hypocritical nature of (1) and (2). The function is supposed to return the subset of its argument only as far as a newline.
Feb 07 2005
Walter added a very subtle pseudo-reserved word, that's only used when it comes to arrays. Yes, it's the word "length". When used within square-brackets, it always means "the length of the enclosing array". Of course, this overrides any other variable that happens to be called "length". Naturally, no warning is emitted.Two things: 1) I wouldn't mind seeing that feature removed or changed so that it works with overloaded opIndex and friends. I avoid using it. It's a piece of syntactic salt (looks like sugar but doesn't taste quite right). 2) A dlint program could flag shadowed variables called "length". -Ben
Feb 08 2005
"Ben Hinkle" <Ben_member pathlink.com> wrote in message news:cuafb1$22ud$1 digitaldaemon.com...It's got to be 1. Kris is right that this is just a crazy idea. (While I disagree with the return value thingy, I can actually see some sense in it. With this, though, it's just plain wrong.) Can someone enlighten me as to why $ was rejected?Walter added a very subtle pseudo-reserved word, that's only used when it comes to arrays. Yes, it's the word "length". When used within square-brackets, it always means "the length of the enclosing array". Of course, this overrides any other variable that happens to be called "length". Naturally, no warning is emitted.Two things: 1) I wouldn't mind seeing that feature removed or changed so that it works with overloaded opIndex and friends. I avoid using it. It's a piece of syntactic salt (looks like sugar but doesn't taste quite right). 2) A dlint program could flag shadowed variables called "length".
Feb 08 2005
On Wed, 9 Feb 2005 06:44:54 +1100, Matthew wrote: [snip]Can someone enlighten me as to why $ was rejected?I believe it was being saved for later, just in case a better usage came about. -- Derek Melbourne, Australia
Feb 08 2005
"Derek" <derek psych.ward> wrote in message news:1q7ahceywgxwl.156v9zjgzkizw$.dlg 40tude.net...On Wed, 9 Feb 2005 06:44:54 +1100, Matthew wrote: [snip]Fair enough. Though one might observe that even if a different behaviour was forthcoming, it may well be orthogonal to an array range expression. [OT] btw, if Walter was to build regex into the language, like in Ruby, I'd go for that. :-)Can someone enlighten me as to why $ was rejected?I believe it was being saved for later, just in case a better usage came about.
Feb 08 2005
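[For readers of the archive: D did eventually adopt a symbol for exactly this purpose; `$` denotes the enclosing array's length inside index and slice expressions, which removes the shadowing hazard debated above. A minimal sketch in present-day D (it will not compile with the DMD of this thread's era):]

```d
void main()
{
    int[] a = [1, 2, 3, 4];

    // $ stands for a.length, but only inside the brackets
    assert(a[1 .. $] == [2, 3, 4]);
    assert(a[0 .. $ - 1] == [1, 2, 3]);   // usable in arbitrary expressions

    // A local named 'length' is no longer silently shadowed
    int length = 2;
    assert(a[0 .. length] == [1, 2]);
}
```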
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cub9la$mbe$1 digitaldaemon.com...[OT] btw, if Walter was to build regex into the language, like in Ruby, I'd go for that. :-)Doing that, as I argued elsewhere, would break some of the core principles on which the D grammar is based. But something close to it can be achieved, see the new regexp and string functions in DMD 0.114.
Mar 02 2005
I'd just like to say three more things and then I'll shut up since no one asked me anyway: 1. I never said whether I actually think a warning of some sort would be nice. To say it now: I would like one indeed, although lint would be okay. 2. Some of you live in a vacuum where only bad programmers make mistakes and where maintenance programmers find every flaw there is. Read this: Customers see bugs in software all of the time. It happens. Get over it. Now make it so when they see it they don't use other software because it screws them over. 3. The code below is what I hate about this feature in some compilers. If I throw an exception I *should not* have to put a return (but if it can detect unreachable code, why can't it detect that?) -[Unknown]int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } throw logic_error("This function has encountered a situation which contradicts its design and/or the design of the software within which it resides"); return 0; }
Feb 05 2005
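[On point 3 above: D does behave the way Unknown asks for here; its flow analysis treats `throw` as ending the control path, so no dummy return is needed after it. A minimal sketch (the `find` function is hypothetical, not from the thread):]

```d
int find(int[] haystack, int needle)
{
    foreach (v; haystack)
        if (v == needle)
            return v;
    // 'throw' terminates the function as far as the compiler is
    // concerned, so no placeholder 'return 0;' is required here.
    throw new Exception("value not found");
}

void main()
{
    assert(find([1, 2, 3], 2) == 2);
}
```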
"Walter" <newshound digitalmars.com> wrote in message news:cu3rt4$ra$1 digitaldaemon.com..."Unknown W. Brackets" <unknown simplemachines.org> wrote in message news:cu25p2$1jbc$1 digitaldaemon.com...[snip] I don't know where to jump into this thread so I'll jump here. Walter, would it be possible to get a "lint" program that flags dubious constructs? I'd like to work something like that into the D emacs mode so that typical errors can be flagged as the source code is written instead - but at the user's request. At work we use a tool like this called mlint - obviously based on the old lint program and it does wonders for cleaning up code and suggesting more efficient constructs etc. We have it integrated into all of our MATLAB editor tools so that you just hit a button and it highlights all the lines with recommendations and what the recommendations are. I hope that given D's lack of preprocessor and simpler syntax a dlint program would be able to generate some very useful recommendations. -BenWalter says: if it's compile time, programmers will patch it without thinking. That's bad. So let's use runtime.That's essentially right.
Feb 05 2005
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message news:cu40vq$4ne$1 digitaldaemon.com...I don't know where to jump into this thread so I'll jump here. Walter, would it be possible to get a "lint" program that flags dubious constructs? I'd like to work something like that into the D emacs mode so that typical errors can be flagged as the source code is written instead - but at the user's request. At work we use a tool like this called mlint - obviously based on the old lint program and it does wonders for cleaning up code and suggesting more efficient constructs etc. We have it integrated into all of our MATLAB editor tools so that you just hit a button and it highlights all the lines with recommendations and what the recommendations are. I hope that given D's lack of preprocessor and simpler syntax a dlint program would be able to generate some very useful recommendations.I don't think it would be hard to morph the D front end code into a lint. Such a program could also be configurable by the end user to enforce the local coding style guide. I think it could be a valuable tool. Anyone looking for a D project to do? <g>
Feb 05 2005
I don't think it would be hard to morph the D front end code into a lint. Such a program could also be configurable by the end user to enforce the local coding style guide. I think it could be a valuable tool. Anyone looking for a D project to do? <g>I hope someone picks this up. Some of the possible rules that come to mind:
1) casting arrays to pointers vs using the ptr property
2) unused variables
3) dead code
4) missing returns at the end of functions
5) switch statements without default clauses
6) checks for compilation errors
7) properties with getters and setters that mismatch types
8) replace simple for loops with foreach
9) comparing an object with null using ==, < or >
maybe some others...
10) replace memmove/memcpy with slice assignment
11) using a floating-point or char variable that has not been explicitly initialized (I know D initializes all variables, but the initial values for some types are chosen to usually force programmers to initialize variables)
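[A few of the rules above can be illustrated with short D snippets. This is an editorial sketch only; the class, function, and variable names are invented for the example, and the exact compiler behaviour described in the comments is that discussed elsewhere in this thread.]

```d
// Hypothetical fragments a dlint tool might flag.
import std.stdio;

class Value { int x; }

void examples(Value v, int[] src, int[] dst)
{
    // Rule 9: == on an object reference dispatches to opEquals, which
    // blows up at run time if the reference is null; `is` simply
    // compares the references.
    if (v is null) return;      // preferred over: if (v == null)

    // Rule 10: slice assignment is the idiomatic, bounds-checked
    // replacement for memcpy/memmove (lengths must match).
    dst[] = src[];              // preferred over: memcpy(dst.ptr, src.ptr, ...)

    // Rule 11: floating-point variables default to NaN, so reading one
    // before assignment usually signals a missing initialization.
    double d;
    writefln("%s", d);          // d is still double.nan - worth a dlint warning
}
```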
Feb 06 2005
Ben Hinkle wrote:Some of the possible rules that come to mind:5) switch statements without default clausesCurrently this throws an Error at run-time, for non-release builds. "Error: Switch Default" This Exception is not thrown with -release. (like ArrayBoundsError)6) checks for compilation errorsUsing -c sort of works, with the side effect of generating objects.9) comparing an object with null using ==, < or >Hopefully this will be made into a "hard" compilation error, even ? --anders
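[A minimal sketch of the behaviour Anders describes, as the D compilers of the time behaved; note that later DMD releases made the missing default clause a compile-time error, so this is illustrative of the 2005 semantics only.]

```d
// A switch with no default clause: in a non-release build, a value
// that matches no case trips an implicit runtime check ("Error:
// Switch Default" / SwitchError); with -release the check is omitted
// and execution silently falls through.
void classify(int n)
{
    switch (n)
    {
        case 0:  /* handle zero */  break;
        case 1:  /* handle one  */  break;
        // no default: classify(2) fails at run time, not compile time
    }
}
```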
Feb 06 2005
"Anders F Björklund" <afb algonet.se> wrote in message news:cu5d4b$19r6$1 digitaldaemon.com...Ben Hinkle wrote:That's why it would make a good candidate for dlint. It is legal code but potentially dangerous. I can sympathize with Walter's argument that "potentially dangerous" shouldn't necessarily be "illegal" (maybe I shouldn't put words in his mouth but hopefully I'm not too far off the mark). A tool to find potentially dangerous code should be available.Some of the possible rules that come to mind:5) switch statements without default clausesCurrently this throws an Error at run-time, for non-release builds. "Error: Switch Default" This Exception is not thrown with -release. (like ArrayBoundsError)plus dlint should generate output that is easy to parse by other tools. The compiler generates errors when it has to but its main job is to compile code. I wouldn't expect it to have a nice interface for dlint-like uses. Plus I would bet the output format would change from compiler to compiler so each tool would have to have special logic for parsing the output of every supported compiler.6) checks for compilation errorsUsing -c sort of works, with the side effect of generating objects.Actually I would be satisfied if dlint caught it. I just worry about my ported Java code that has these things floating around. I've been scanning the code manually or with grep but without some tool I just don't have confidence that I've found all the bugs. To me it doesn't really matter if that tool is the compiler or something else.9) comparing an object with null using ==, < or >Hopefully this will be made into a "hard" compilation error, even ?--anders
Feb 06 2005
Ben Hinkle wrote:and some more: 12) detect unused parameters/methods/members/labels/code blocks 13) suggest the use of "with" statement when applicable 14) list classes with high cyclomatic complexity measurements 15) list classes with getters/setters only 16) rant on use of uninstantiated objects 17) highlight ill-named identifiers ... wait a sec. is there anyone passionate enough to launch the dlint project?I don't think it would be hard to morph the D front end code into a lint. Such a program could also be configurable by the end user to enforce the local coding style guide. I think it could be a valuable tool. Anyone looking for a D project to do? <g>I hope someone picks this up. Some of the possible rules that come to mind: 1) casting arrays to pointers vs using the ptr property 2) unused variables 3) dead code 4) returns at end of functions 5) switch statements without default clauses 6) checks for compilation errors 7) properties with getters and setters that mismatch types 8) replace simple for loops with foreach 9) comparing an object with null using ==, < or > maybe some others... 10) replace memmove/memcpy with slice assignment 11) using an floating point or char variable that has been initialized (I know D initializes all variables but the initial values for some types are chosen to usually force programmers to initialize variables)
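[Rule 13 in the list above may benefit from a quick illustration. A minimal sketch; the struct and function names are invented for the example.]

```d
// The "with" statement shortens repeated member access on one object.
struct Point { int x, y; }

void nudge(ref Point p)
{
    // Without "with":
    //   p.x += 1; p.y += 2;
    with (p)
    {
        x += 1;   // refers to p.x
        y += 2;   // refers to p.y
    }
}
```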
Feb 06 2005
14) list classes with high cyclomatic complexity measurementsInteresting idea but it could be hard to compute and hard to know what to do about it. Can you see a code analysis program saying something like "Your code is too complicated. Make it simpler." Or are you hinting at a dangerous problem involving cyclic references?15) list classes with getters/setters onlywhy is this one a problem?16) rant on use of uninstantiated objectsI like it - a lint with attitude. For those C++ programmers who don't read the documentation carefully enough.17) highlight ill-named identifiersStyle-guides would be nice - as long as they are very customizable and optional!... wait a sec. is there anyone passionate enough to launch the dlint project?If it's still not done by the time MinWin gets settled (still many months) then I will definitely give it a shot. I could use a dlint ASAP.
Feb 06 2005
Ben Hinkle wrote:I was just thinking about a list of classes sorted by their complexity, showing potential directions of high-level refactoring.14) list classes with high cyclomatic complexity measurementsInteresting idea but it could be hard to compute and hard to know what to do about it. Can you see a code analysis program saying something like "Your code is too complicated. Make it simpler." Or are you hinting at a dangerous problem involving cyclic references?It's not. I don't know why I wrote it down :p15) list classes with getters/setters onlywhy is this one a problem?I suppose MinWin is more anticipated by the D community. Keep up the good work :)16) rant on use of uninstantiated objectsI like it - a lint with attitude. For those C++ programmers who don't read the documenation carefully enough.17) highlight ill-named identifiersStyle-guides would be nice - as long as they are very customizable and optional!... wait a sec. is there anyone passionate enough to launch the dlint project?If it's still not done by the time MinWin gets settled (still many months) then I will definitely give it a shot. I could use a dlint ASAP.
Feb 06 2005
On Sat, 5 Feb 2005 17:27:07 -0800, Walter wrote:"Unknown W. Brackets" <unknown simplemachines.org> wrote in message news:cu25p2$1jbc$1 digitaldaemon.com...And the coder probably should have done something more like ... int foo(Collection c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } throw new Error("'y' has an impossible value '" ~ toString(y) ~ "'"); }Walter says: if it's compile time, programmers will patch it without thinking. That's bad. So let's use runtime.That's essentially right. I'll add one more example to the ones you presented: int foo(Collection c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } }By the nature of the program I'm writing, "y" is guaranteed to be within c. Therefore, there is only one return from the function, and that is the one shown. But the compiler cannot verify this. You recommend that the compiler complain about it.You use the word 'complain', whereas I'd tend to use the phrase 'alert the coder to a potential problem'.I, the programmer, know this can never happen, and I'm in a hurry with my mind on other things and I want to get it to compile and move on, so I write: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } return 0; }For which the peer review team should give you a smack on the hand. Most commercial coders do not work in a vacuum. The lone coder is always going to be a minority. Most coders have multiple others reading and critiquing their code long before it gets into commercial release.I'm not saying you would advocate "fixing" the code this way. I don't either. Nobody would. I am saying that this is often how real programmers will fix it.Is that 'real' as opposed to 'unreal'? Or are you implying that 'real' also means the majority of commercial coders working in a typical organisation that cares about quality?I know this because I see it done, time and again, in response to compilers that emit such error messages. This kind of code is a disaster waiting to happen. 
No compiler will detect it. It's hard to pick up on a code review.Why is that? Common checklist items for functions include:
** Does every possible return value meet the contract for the function?
** Does the default return value meet the function contract?
** Does the default return value imply an error situation, and if so, does the caller respond to the error value?
** Is the default return value also one of the possible non-default values, and if so, is it allowed for in the function requirements?
Testing isn't going to pick it up.Testing *may not* pick it up. Sometimes it does.It's an insidious, nasty kind of bug.Amen to that brother.Its root cause is not bad programmers, but a compiler error message that encourages writing bad code.No, its root cause *is* bad programmers. 'Bad' in the sense that they are not responsible coders. The compiler message would remind responsible coders to do the right thing; to others it just pisses them off.Instead, having the compiler insert essentially an assert(0); where the missing return is means that if it isn't a bug, nothing happens, and everyone is happy. If it is a bug, the assert gets tripped, and the programmer *knows* it's a real bug that needs a real fix, and he won't be tempted to insert a return of an arbitrary value "because it'll never be executed anyway".Actually, it's more likely that it is not the programmer who finds out about it, but one of his customers. Then the programmer *and* the customer are not so happy. What is so wrong with trying to find problems as early as possible? Why wait till the customer rings you up to complain?This is the point I have consistently failed to make clear.You have made your point very clear. I really, really, really do understand your point of view. I just don't agree with you that it is useful. -- Derek Melbourne, Australia
Feb 05 2005
"Derek" <derek psych.ward> wrote in message news:1iq2ryvz6s0l9.id90ayeemth4$.dlg 40tude.net...You have made your point very clear. I really, really , really do understand your point of view. I just don't agree with you that is useful.It's fair to disagree. I just want to get across the reasoning, so the decision doesn't look arbitrary. Now that I've succeeded in that, I'll retire for the moment from this debate <g>.
Feb 05 2005
On Sat, 5 Feb 2005 17:27:07 -0800, Walter <newshound digitalmars.com> wrote:I'm not saying you would advocate "fixing" the code this way. I don't either. Nobody would. I am saying that this is often how real programmers will fix it. I know this because I see it done, time and again, in response to compilers that emit such error messages. This kind of code is a disaster waiting to happen. No compiler will detect it. It's hard to pick up on a code review. Testing isn't going to pick it up. It's an insidious, nasty kind of bug. It's root cause is not bad programmers, but a compiler error message that encourages writing bad code.No, its root cause *is* bad programmers. A good programmer would not interpret a missing-return error as an encouragement to mindlessly stick a "return 0;" in the code, but rather an indication that there's a code path that isn't properly terminated. Now, it may be the case that there are a lot of bad programmers out there, but that doesn't make it the compiler's fault[1], nor does it mean that those who would not commit this particular offense should have a useful compile-time diagnostic denied to them. What if the error message were "missing return statement or assert(0)" rather than just "missing return statement"?Instead, having the compiler insert essentially an assert(0); where the missing return is means that if it isn't a bug, nothing happens, and everyone is happy.The person reading the code isn't happy when he can't tell whether it was an error that simply hadn't been caught in testing (or that was, but he's unsure of which piece of code to blame), or an intentional implicit assert(0). 
The person who mainly gets missing-return errors for cases where there really should be a return but it was forgotten (such as when a function is changed from returning void to returning non-void, or a non-void-returning stub function that was left completely empty) is also not happy that he doesn't find the bug until run-time testing.If it is a bug, the assert gets tripped, and the programmer *knows* it's a real bug that needs a real fix,Only if it shows up in testing. If it's on a rare but known-possible execution path, where the programmer would have realized the need for a proper return statement (or other action) if he had been shown an error message, the bug gets found later. -Scott [1] This isn't in the same class as array bounds checking, garbage collection, etc. where the programmer mistakes that they avoid arise out of overlooking something (which all humans do from time to time); in this case, the programmer looked directly at the problem and decided to do something stupid.
Feb 06 2005
"Scott Wood" <scott buserror.net> wrote in message news:slrnd0cocc.5nk.nospam odin.buserror.net...No, its root cause *is* bad programmers. A good programmer would not interpret a missing-return error as an encouragement to mindlessly stick a "return 0;" in the code, but rather an indication that there's a code path that isn't properly terminated.Back when I used to work for Boeing, a major focus of attention was making it impossible to cross the hydraulic lines to critical flight controls. There have been many crashes and near disasters from this happening. Measures taken include:
1) making sure the lines are not long enough to connect to the wrong port
2) using different diameter lines and fittings for each port
3) on one port using left-hand threads, on the other right-hand threads
4) warnings and labels
5) test procedures to verify correct hookup
6) preflight checks to verify correct hookup
Real life FAA certified mechanics sometimes went to astonishing lengths to idiotically cross the lines. You can't rely on better training, more certification, etc., eliminating the problem. You've got to design the machine to minimize the potential of mechanics getting it wrong. This kind of effort to eliminate tempting sources of error is pervasive in jetliner design. We all have to work with "bad" programmers, we can't wish them away or assume that better training will transform them. Even great programmers make silly mistakes. I tried to design D in a way that doing the right thing is *less* work than doing the wrong thing. This is because the wrong things often happen because they are easier. Sure, a determined mechanic could still cross the lines. But he's going to have to go through a great deal of effort to do it, and hopefully at some point the thought will cross his mind "this is too hard, I must be doing something wrong." P.S. There was a crash a few years back of a fighter (not a Boeing design). The pitch controls were reversed. 
Reversing the controls on that bird was as easy as moving a control rod from one spot to the wrong one. Their ace mechanic did it by mistake, and the pilot didn't do his preflight checkout right. Both got blamed. Me, I blame the design of the flight controls. http://www.aviationtoday.com/sia/20010801.htm http://www.ntsb.gov/ntsb/brief.asp?ev_id=20001212X24781&key=1
Feb 06 2005
On Sun, 6 Feb 2005 13:35:14 -0800, Walter wrote:"Scott Wood" <scott buserror.net> wrote in message news:slrnd0cocc.5nk.nospam odin.buserror.net...It appears to me then that you now have DMD informing the pilot at 30,000 feet that the lines are crossed and the system is shutting down, rather than letting maintenance people on the ground know before the plane takes off. -- Derek Melbourne, Australia 7/02/2005 10:06:54 AMNo, its root cause *is* bad programmers. A good programmer would not interpret a missing-return error as an encouragement to mindlessly stick a "return 0;" in the code, but rather an indication that there's a code path that isn't properly terminated.Back when I used to work for Boeing, a major focus of attention was making it impossible to cross the hydraulic lines to critical flight controls. There have been many crashes and near disasters from this happening. Measures taken include: 1) making sure the lines are not long enough to connect to the wrong port 2) using different diameter lines and fittings for each port 3) one one port use left hand threads, on the other use right hand threads 4) warnings and labels 5) test proceedures to verify correct hookup 6) preflight checks to verify correct hookup Real life FAA certified mechanics sometimes went to astonishing lengths to idiotically cross the lines. You can't rely on better training, more certification, etc., eliminating the problem. You've got to design the machine to minimize the potential of mechanics getting it wrong. This kind of effort to eliminate tempting sources of error is pervasive in jetliner design. We all have to work with "bad" programmers, we can't wish them away or assume that better training will transform them. Even great programmers make silly mistakes. I tried to design D in a way that doing the right thing is *less* work than doing the wrong thing. This is because the wrong things often happen because they are easier. Sure, a determined mechanic could still cross the lines. 
But he's going to have to go through a great deal of effort to do it, and hopefully at some point the thought will cross his mind "this is too hard, I must be doing something wrong." P.S. There was a crash a few years back of a fighter (not a Boeing design). The pitch controls were reversed. Reversing the controls on that bird was as easy as moving a control rod from one spot to the wrong one. Their ace mechanic did it by mistake, and the pilot didn't do his preflight checkout right. Both got blamed. Me, I blame the design of the flight controls. http://www.aviationtoday.com/sia/20010801.htm http://www.ntsb.gov/ntsb/brief.asp?ev_id=20001212X24781&key=1
Feb 06 2005
"Derek Parnell" <derek psych.ward> wrote in message news:cu6835$2cmj$1 digitaldaemon.com...It appears to me then that you now have DMD informing the pilot at 30,000 feet that the lines are crossed and the system is shutting down, rather than letting maintenance people on the ground know before the plane takes off.Actually, the point of that story was that one cannot assume away "bad" programmers, one must assume their existence and design to prevent errors or mitigate the damage they can cause. I view the compiler error in this case as akin to "Hey, hydraulic fluid was leaking. A couple of the lines weren't hooked up, so I just screwed them into a couple of ports nearby. It doesn't leak anymore!" It's a mistake to assume that the mechanic will go read the documentation and hook them to the correct port. Sooner or later, some mechanic will do the easiest "fix" possible, so it's very, very important to design so that the easiest fix is the correct one. The solution you use, while very correct, is not the easiest one. I wish all programmers were as careful as you obviously are.
Feb 06 2005
I think Walter's stand on 'missing return' is the opposite extreme of Java's checked-exception enforcement. I mean, Java enforces checked exceptions, many people didn't like it, Walter didn't like it, he hated it, and came to believe that the compiler should never enforce certain things, including 'missing returns'. But not checking for missing return statements is just too extreme. People hated the Java compiler for its over-involvement, and now they might hate D for its under-involvement!! Just my opinion Sai
Feb 06 2005
Walter wrote:"Unknown W. Brackets" <unknown simplemachines.org> wrote in message news:cu25p2$1jbc$1 digitaldaemon.com...Hey, hey, one really should put an assert(0) there! And then I remembered that a year ago I would've put the return(0) there myself, if all it was for was to shut up the compiler.Walter says: if it's compile time, programmers will patch it without thinking. That's bad. So let's use runtime.That's essentially right. I'll add one more example to the ones you presented: int foo(Collection c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } } By the nature of the program I'm writing, "y" is guaranteed to be within c. Therefore, there is only one return from the function, and that is the one shown. But the compiler cannot verify this. You recommend that the compiler complain about it. I, the programmer, know this can never happen, and I'm in a hurry with my mind on other things and I want to get it to compile and move on, so I write: int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } return 0; }I'm not saying you would advocate "fixing" the code this way. I don't either. Nobody would. I am saying that this is often how real programmers will fix it. I know this because I see it done, time and again, in response to compilers that emit such error messages. This kind of code is a disaster waiting to happen. No compiler will detect it. It's hard to pick up on a code review. Testing isn't going to pick it up. It's an insidious, nasty kind of bug. Its root cause is not bad programmers, but a compiler error message that encourages writing bad code. Instead, having the compiler insert essentially an assert(0); where the missing return is means that if it isn't a bug, nothing happens, and everyone is happy. 
If it is a bug, the assert gets tripped, and the programmer *knows* it's a real bug that needs a real fix, and he won't be tempted to insert a return of an arbitrary value "because it'll never be executed anyway". This is the point I have consistently failed to make clear.You got me. Now I see it your way!
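[The two versions under discussion, side by side. An editorial sketch: `Value` here is a stand-in struct for whatever collection element type the thread's `Collection` holds.]

```d
struct Value { int x, z; }

// The "shut-up" fix: if y is ever absent from c, the bug is
// silently masked and an arbitrary 0 propagates through the program.
int fooBad(Value[] c, int y)
{
    foreach (Value v; c)
        if (v.x == y) return v.z;
    return 0;               // "because it'll never be executed anyway"
}

// The fix both sides prefer: if the "y is always in c" guarantee is
// ever violated, the program fails loudly at the point of violation,
// which is also what the compiler's implicit check produces.
int fooGood(Value[] c, int y)
{
    foreach (Value v; c)
        if (v.x == y) return v.z;
    assert(0);              // y is guaranteed to be in c
}
```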
Mar 08 2005
I can see both your points. And I do not want to gang up on Walter butQ: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.represents a lot of responses from Walter when he's dead set on something. I agree that putting in "shut-up" code can indeed lead to more bugs like Walter was saying, but in this case, I don't think that a 'return' statement is one of them. Using C/C++, it's always been an error on my compilers to not have a return statement, and it's _never_ been a problem (probably saved many bugs because of it). Have to agree 100% with Matthew on that one. Now a switch with no default and no matching 'case'? I can see the argument for that. I think that is a good example of 'shut-up code' doing harm, where it's better caught at runtime than at compile time. But it's almost GUARANTEED not to run right with a missing return statement, so best to catch it at compile time. Charlie "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu23g5$1hh3$1 digitaldaemon.com..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1pe6$15ks$1 digitaldaemon.com...I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.1) make it impossible to ignore situations the programmer did not think ofSo do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. 
If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be runtime. Your response: exceptions are the best way to indicate runtime error. Come on. Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.2) the bias is to force bugs to show themselves in an obvious manner.So do I. But this statement is too bland to be worth anything. What's is "obvious"?Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?*Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late?As early as possible. Putting in the return 0; means the showing will be late.Disagree.Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile-time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment time) to find them, is simply crazy.I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. 
This is not an analogy, it's a mirror with some smoke machines behind it.But you're hamstringing 100% of all developers for the careless/unprofessional/inept of a few.I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong even though they know it is wrong and tell others it is wrong, is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do."Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.Will those handful % of better-employed-working-in-the-spam-industry find no other way to screw up their systems? Is this really going to answer all the issues attendant with a lack of skill/learning/professionalism/adequate quality mechanisms (incl, design reviews, code reviews, documentation, refactoring, unit testing, system testing, etc. etc. )?D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.All of this is of virtually no relevance to the topic under discussionBut I'm not going to argue point by point with your post, since you lost me at "Java's exceptions". The analogy is specious, and thus unconvincing. 
(Though I absolutely concur that they were a little tried 'good idea', like C++'s exception specifications or, in fear of drawing unwanted venom from my friends in the C++ firmament, export.)I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up on booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.There's some growing thought that even static type checking is an emperor without clothes, that dynamic type checking (like Python does) is more robust and more productive. I'm not at all convinced of that yet <g>, but it's fun seeing the conventional wisdom being challenged. It's good for all of us.I'm with you there.Nothing is *always* true. That's kind of one of the bases of my thesis.My position is simply that compile-time error detection is better than runtime error detection.In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth. So, again, my position is: Checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do. Except when that error can be detected at compile time. Please stop arguing against your demons on this, and address my point. If an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. 
I'm not one of 'em.Now you're absolutely correct that an invalid state throwing an exception, leading to application/system reset is a good thing. Absolutely. But let's be honest. All that achieves is to prevent a bad program from continuing to function once it is established to be bad. It doesn't make that program less bad, or help it run well again.Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct.Abso-bloody-lutely spot on behaviour. What: you think I'm arguing that the lander should have all its checking done at compile time (as if that's even possible) and eschew runtime checking? At no time have I ever said such a thing.Depending on the vagaries of its operating environment, it may well just keep going bad, in the same (hopefully very short) amount of time, again and again and again. The system's not being (further) corrupted, but it's not getting anything done either.One of the Mars landers went silent for a couple of days. Turns out it was a self detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well.On airliners, the self detected faults trigger a dedicated circuit that disables the faulty computer and engages the backup. The last, last, last thing you want the autopilot on an airliner to do is execute a return 0; some programmer threw in to shut the compiler up. An exception thrown, shutting down the autopilot, engaging the backup, and notifying the pilot is what you'd much rather happen.Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have been addressing.Absolutely. 
But that is not, in and of itself, sufficient justification for ditching compile detection in favour of runtime detection. Yet again, we're having to swallow absolutism - dare I say dogma? - instead of coming up with a solution that handles all requirements to a healthy degree.It's clear, or seems to me, that this issue, at least as far as the strictures of D is concerned, is a balance between the likelihoods of: 1. producing a non-violating program, and 2. preventing a violating program from continuing its execution and, therefore, potentially wrecking a system.There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug.TrulyYou seem to be of the opinion that the current situation of missing return/case handling (MRCH) minimises the likelihood of 2. I agree that it does so. However, contrarily, I assert that D's MRCH minimises the likelihood of producing a non-violating program in the first place. The reasons are obvious, so I'll not go into them. (If anyone cares to disagree, I ask you to write a non-trivial C++ program in a hurry, disable *all* warnings, and go straight to production with it.) Walter, I think that you've hung D on the petard of 'absolutism in the name of simplicity', on this and other issues. For good reasons, you won't conscience warnings, or pragmas, or even switch/function decorator keywords (e.g. "int allcases func(int i) { if i < 0 return -1'; }"). Indeed, as I think most participants will acknowledge, there are good reasons for all the decisions made for D thus far. But there are also good reasons against most/all of those decisions. (Except for slices. Slices are *the best thing* ever, and coupled with auto+GC, will eventually stand D out from all other mainstream languages.<G>).Jan Knepper came up with the slicing idea. Sheer genius! 
If you're trying to say that I've implied that compile-time detection can handle everything, leaving nothing to be done at runtime, you're either kidding, sly, or mental. I'm assuming kidding, from the smiley, but it's a bit disingenuous at this level of the debate, don't you think?

Software engineering hasn't yet found a perfect language. D is not perfect, and it'd be surprising to hear anyone here say that it is. That being the case, how can the policy of absolutism be deemed a sensible one?

Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)

I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.

It cannot be sanely argued that throwing on missing returns is a perfect solution, any more than it can be argued that compiler errors on missing returns is. That being the case, why has D made manifest in its definition the stance that one of these positions is indeed perfect?

I don't believe it is perfect. I believe it is the best balance of competing factors.

I'm not talking about lint. I confidently predict that the least badness that will happen will be the general use of non-standard compilers and the general un-use of DMD. But I realistically think that D'll splinter as a result of making the same kinds of mistakes, albeit for different reasons, as C++. :-(

I know the many dark roads that await once the tight control on the language is loosened, but the real world's already here, batting on the door. I have an open mind, and willing fingers to all kinds of languages. I like D a lot, and I want it to succeed a *very great deal*. But I really cannot imagine recommending use of D to my clients with these flaws of absolutism.
(My hopeful guess for the future is that other compiler variants will arise that will, at least, allow warnings to detect such things at compile time, which may alter the commercial landscape markedly; D is, after all, full of a great many wonderful things.)

I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.

That is not the suggested syntax, at least not to the best of my recollection.

One last word: I recall a suggestion a year or so ago that would require the programmer to explicitly insert what is currently inserted implicitly. This would have the compiler report errors to me if I missed a return. It'd have the code throw errors to you if an unexpected code path occurred. Other than screwing over people who prize typing one less line over robustness, what's the flaw? And yet it got no traction ....

Essentially, that means requiring the programmer to insert: assert(0); return 0;

It just seems that requiring some fixed boilerplate to be inserted means that the language should do that for you. After all, that's what computers are good at!

LOL! Well, there's no arguing with you there, eh? You don't want the compiler to automate the bits I want. I don't want it to automate the bits you want. I suggest a way to resolve this, by requiring more of the programmer - fancy that! - and you discount that because it's something the compiler should do.

Just in case anyone's missed the extreme illogic of that position, I'll reiterate.

Camp A want behaviour X to be done automatically by the compiler. Camp B want behaviour Y to be done automatically by the compiler. X and Y are incompatible, when done automatically. By having Z done manually, X and Y are moot, and everything works well. (To the degree that D will, then, and only then, achieve resultant robustnesses undreamt of.)
Walter reckons that Z should be done automatically by the compiler. Matthew auto-defolicalises and goes to wibble his frimble in the back drim-drim with the other nimpins.

Less insanely, I'm keen to hear if there's any on-point response to this?

Agreed.

[My goodness! That was way longer than I wanted. I guess we'll still be arguing about this when the third edition of DPD's running hot through the presses ...]

I don't expect we'll agree on this anytime soon.
Feb 05 2005
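As an editorial aside: the "assert(0); return 0;" idiom debated above is easy to make concrete. A minimal C++ sketch (C++ rather than D, since the thread uses C++ compilers as its reference point; `sign()` and its branches are invented for illustration) of the three possible endings for a function whose final path the compiler cannot prove unreachable:

```cpp
#include <cassert>

// Hypothetical function: the programmer "knows" every int falls into
// one of the three branches, but the compiler's flow analysis cannot
// prove that, so it demands a return on the fall-through path.
int sign(int i)
{
    if (i < 0)
        return -1;
    if (i > 0)
        return 1;
    if (i == 0)
        return 0;

    // Ending 1 ("shut-up" code):          a bare `return 0;` that is
    //                                      silently wrong if ever reached.
    // Ending 2 (D's implicit behaviour):  the compiler inserts a runtime
    //                                      throw here on its own.
    // Ending 3 (the explicit proposal):   both lines below, by hand.
    assert(0);      // loud, greppable, fatal in a checked build
    return 0;       // satisfies the compiler's flow analysis
}
```

With ending 3 the compiler is satisfied at compile time, a checked build still hard-fails on the "impossible" path at runtime, and a reader can see at a glance that the path is believed dead rather than lazily stubbed out.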
I can see both your points. And I do not want to gang up on Walter, but that represents a lot of responses from Walter when he's dead set on something.

Yes. Kind of like calling upon a disinterested god. A night's sleep has intervened, and I tire of smashing my head on the same brick wall, so I'll segue on y'all to say:

Given the fact that the majority of people are right-handed, and steering correctly is probably more important than changing gear correctly, driving on the LHS is the better thing. So nya nya from Australasia/Japan/UK to all the rest of the world. Of course, an increasing number of people drive automatics, so it's largely moot. And then there's that bizarre steering-wheel-change business that you NW hemisphere types enjoy, which totally blows my argument. :-)

"Charles" <no email.com> wrote in message news:cu3747$2foi$1 digitaldaemon.com...

Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.

I can see both your points. And I do not want to gang up on Walter, but that represents a lot of responses from Walter when he's dead set on something. I agree that putting in "shut-up" code can indeed lead to more bugs, like Walter was saying, but in this case I don't think that a 'return' statement is one of them. Using C/C++, it's always been an error on my compilers to not have a return statement, and it's _never_ been a problem (it has probably saved many bugs). Have to agree 100% with Matthew on that one. Now a switch with no default and no matching 'case'? I can see the argument for that.
I think that is a good example of 'shut-up code' doing harm, where it's better caught at runtime than at compile time. But it's almost GUARANTEED not to run right with a missing return statement; best to catch it at compile time.

Charlie

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu23g5$1hh3$1 digitaldaemon.com...

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1pe6$15ks$1 digitaldaemon.com...

I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)

If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.

1) make it impossible to ignore situations the programmer did not think of

So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.

Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be runtime. Your response: exceptions are the best way to indicate runtime error. Come on.

Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.

Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.

2) the bias is to force bugs to show themselves in an obvious manner.

So do I. But this statement is too bland to be worth anything. What is "obvious"?

Oh?
And that'd be later than the compiler preventing it from even getting to object code in the first place?

*Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late?

As early as possible. Putting in the return 0; means the showing will be late.

Disagree.

Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment time) to find them, is simply crazy.

I understand your point, but for this case, I do not agree, for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.

Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. This is not an analogy, it's a mirror with some smoke machines behind it.

But you're hamstringing 100% of all developers for the carelessness/unprofessionalism/ineptitude of a few.

I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong, even though they know it is wrong and tell others it is wrong, that is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language where the experts will say "do as I say, not as I do."

Sorry, but wrong again.
As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.

Will those handful % of better-employed-working-in-the-spam-industry find no other way to screw up their systems? Is this really going to answer all the issues attendant with a lack of skill/learning/professionalism/adequate quality mechanisms (incl. design reviews, code reviews, documentation, refactoring, unit testing, system testing, etc.)?

D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real-world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.

All of this is of virtually no relevance to the topic under discussion.

But I'm not going to argue point by point with your post, since you lost me at "Java's exceptions". The analogy is specious, and thus unconvincing. (Though I absolutely concur that they were a little-tried 'good idea', like C++'s exception specifications or, in fear of drawing unwanted venom from my friends in the C++ firmament, export.)

I believe it is an apt analogy, as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up of booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.

There's some growing thought that even static type checking is an emperor without clothes, that dynamic type checking (like Python does) is more robust and more productive.
I'm not at all convinced of that yet <g>, but it's fun seeing the conventional wisdom being challenged. It's good for all of us.

I'm with you there.

Nothing is *always* true. That's kind of one of the bases of my thesis.

My position is simply that compile-time error detection is better than runtime error detection.

In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.

Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth. So, again, my position is: checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do. Except when that error can be detected at compile time. Please stop arguing against your demons on this, and address my point. If an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. I'm not one of 'em.

<snip>
Feb 05 2005
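Charles's switch point can also be made concrete. A hypothetical C++ sketch (the `Color` enum and both functions are invented for illustration): a default added only to silence the compiler quietly absorbs unhandled values, whereas a hard failure on an unmatched case, roughly the effect D gets by implicitly throwing, is loud:

```cpp
#include <stdexcept>
#include <string>

enum Color { Red, Green, Blue };

// "Shut-up" default: an unhandled value silently becomes "red",
// masking the bug -- including any Color added to the enum later.
std::string name_quiet(Color c)
{
    switch (c) {
    case Green: return "green";
    case Blue:  return "blue";
    default:    return "red";
    }
}

// Hard-fail style: an unmatched value fails loudly and immediately,
// which is roughly what D's implicit switch error provides.
std::string name_checked(Color c)
{
    switch (c) {
    case Red:   return "red";
    case Green: return "green";
    case Blue:  return "blue";
    }
    throw std::logic_error("unhandled Color in name_checked");
}
```

Calling `name_quiet` with a value outside the handled cases returns "red" as if nothing were wrong; `name_checked` throws instead, so the fault surfaces at the point of the bug rather than somewhere downstream.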
On Sat, 5 Feb 2005 13:34:54 -0600, Charles <no email.com> wrote:

<snip>

Now a switch with no default and no matching 'case'? I can see the argument for that. I think that is a good example of 'shut-up code' doing harm, where it's better caught at runtime than at compile time. But it's almost GUARANTEED not to run right with a missing return statement; best to catch it at compile time.

The problem is not whether compile-time detection of missing returns is good or bad; most would agree that it's good. IIRC the problem is that it's not possible to do it with 100% accuracy, and further, doing it to a degree that approaches 100% requires a lot of code which basically amounts to checking for each and every weird possible combination of factors.

We're not simply talking about a missing return. We're talking about a return which:
- The programmer believed was not required, as it would never be executed.
- The compiler complained about (correctly, but seemingly incorrectly).
- The programmer added a 'return 0;' "to shut it up".
- Which was then later executed.
- Causing unexpected behaviour, potentially unnoticed for ...

Regan
Feb 06 2005
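Regan's sequence is easy to reproduce. A hypothetical C++ illustration (`highest_bit()` is invented for the purpose): the author "knows" the argument is never zero, the compiler correctly complains that the loop might not return, and a `return 0;` is added to shut it up, after which the bad input yields an answer indistinguishable from a legitimate one:

```cpp
// Hypothetical: index of the highest set bit of n. The author assumed
// n != 0, so in their mind the loop "always" returns.
int highest_bit(unsigned n)
{
    for (int i = 31; i >= 0; --i)
        if (n & (1u << i))
            return i;
    return 0;   // added only to silence the compiler: "never happens".
                // But when n == 0 does occur, the result is identical
                // to the correct answer for n == 1, so the bug hides.
}
```

An `assert(0)` before that return, or a thrown exception as D inserts implicitly, would make `highest_bit(0)` fail loudly instead of returning a plausible-looking value that is silently wrong.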
On Sat, 5 Feb 2005 20:26:43 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:

<snip>

Camp A want behaviour X to be done automatically by the compiler. Camp B want behaviour Y to be done automatically by the compiler. X and Y are incompatible, when done automatically.

Not true, assuming:

X == compile-time warning/error for the fault
Y == auto-insert of code to cause a runtime error for the fault

As described, the compiler can do both.

<snip>

Regan
Feb 06 2005
Disclaimer: Please correct me if I have misrepresented anyone; I apologise in advance for doing so, it was not my intent. The following is my impression of the points/positions in this argument:

1. Catching things at compile time is better than at runtime. - all parties agree
2. If it cannot be caught at compile time, then a hard failure at runtime is desired. - all parties agree
3. An error which causes the programmer to add code to 'shut the compiler up' causes hidden bugs. - Walter (Matthew?)
4. Programmers should take responsibility for the code they add to 'shut the compiler up' by adding an assert/exception. - Matthew (Walter?)
5. The language/compiler should, where it can, make it hard for the programmer to write bad code. - Walter (Matthew?)

IMO it seems to be a disagreement about what happens in the "real world"; IMO Matthew has an optimistic view, Walter a pessimistic view, eg.

Matthew: If it were a warning, programmers would notice immediately, consider the error, fix it or add an assert for protection; thus the error would be caught immediately or at runtime. It seems to me that Matthew's position is that warning the programmer at compile time about the situation gives them the opportunity to fix it at compile time, and I agree.

Walter: If it were a warning, programmers might add 'return 0;', causing the error to remain undetected for longer. It seems to me that Walter's position is that if it were a warning there is potential for the programmer to do something stupid, and I agree.

So why can't we have both? To explore this, an imaginary situation:

- Compiler detects problem.
- Adds code to handle it (hard-fail at runtime).
- Gives notification of the potential problem.
- Programmer either:
  a. cannot see the problem, adds code to shut the compiler up (causing removal of the auto hard-fail code).
  b. cannot see the problem, adds an assert (hard-fail) and code to shut the compiler up.
  c. sees the problem, fixes it.

if a then the bug could remain undetected for longer.
if b then the bug is caught at runtime.
if c then the bug is avoided.

Without the notification, (a) is impossible, so it seems Walter's position removes the worst-case scenario. BUT, without the notification, (c) is impossible, so it seems Walter's position removes the best-case scenario also. Of course, for any programmer who would choose (b) over (a) all the time, Matthew's position is clearly the superior one; however...

The real question is: in the real world, are there more programmers who choose (a), as Walter imagines, or more choosing (b), as Matthew imagines?

Those that choose (a), do they do so out of ignorance, impatience, or stupidity (or some other reason)? If stupidity, there is no cure for stupidity. If impatience (as Walter has suggested), what do we do, can we do anything? If ignorance, then how do we teach them? Does auto-inserting the hard fail and giving no warning do so? Would giving the warning do a better/worse job? E.g. "There is the potential for undefined behaviour here; an exception has been added automatically. Please consider the situation and either: A. add your own exception, or B. fix the bug."

Regan

On Sat, 5 Feb 2005 20:26:43 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:

<snip>
You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system . If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be runtime. Your response: exceptions are the best way to indicate runtime error. Come on. Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.2) the bias is to force bugs to show themselves in an obvious manner.So do I. But this statement is too bland to be worth anything. What's is "obvious"?Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?*Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late?As early as possible. Putting in the return 0; means the showing will be late.Disagree.Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile-time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment time) to find them, is simply crazy.I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. 
This is not an analogy; it's a mirror with some smoke machines behind it.

But you're hamstringing 100% of all developers for the carelessness/unprofessionalism/ineptitude of a few.

I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong, even though they know it is wrong and tell others it is wrong, that is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language of which the experts will say "do as I say, not as I do."

Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.

Will those handful of percent better employed working in the spam industry find no other way to screw up their systems? Is this really going to answer all the issues attendant on a lack of skill/learning/professionalism/adequate quality mechanisms (incl. design reviews, code reviews, documentation, refactoring, unit testing, system testing, etc.)?

D is based on my experience, and that of many others, of how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real-world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.

All of this is of virtually no relevance to the topic under discussion.

But I'm not going to argue point by point with your post, since you lost me at "Java's exceptions". The analogy is specious, and thus unconvincing. (Though I absolutely concur that they were a little-tried 'good idea', like C++'s exception specifications or, in fear of drawing unwanted venom from my friends in the C++ firmament, export.)

I believe it is an apt analogy, as it shows how forcing programmers to do something unnatural leads to worse problems than those it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up of booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.

There's some growing thought that even static type checking is an emperor without clothes, and that dynamic type checking (like Python does) is more robust and more productive. I'm not at all convinced of that yet <g>, but it's fun seeing the conventional wisdom being challenged. It's good for all of us.

I'm with you there.

Nothing is *always* true. That's kind of one of the bases of my thesis.

My position is simply that compile-time error detection is better than runtime error detection.

In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.

Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth. So, again, my position is: checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do, except when that error can be detected at compile time. Please stop arguing against your demons on this, and address my point: if an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop the general carping at the non-CP adherents. I'm not one of 'em.

Now you're absolutely correct that an invalid state throwing an exception, leading to application/system reset, is a good thing. Absolutely. But let's be honest: all that achieves is to prevent a bad program from continuing to function once it is established to be bad. It doesn't make that program less bad, or help it run well again.

Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and that the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, too, as the error is found closer to its source, meaning it will be easier to reproduce, find and correct.

Abso-bloody-lutely spot-on behaviour. What: you think I'm arguing that the lander should have all its checking done at compile time (as if that's even possible) and eschew runtime checking? At no time have I ever said such a thing.

Depending on the vagaries of its operating environment, it may well just keep going bad, in the same (hopefully very short) amount of time, again and again and again. The system's not being (further) corrupted, but it's not getting anything done either.

One of the Mars landers went silent for a couple of days. Turns out it was a self-detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked up, oh well.

On airliners, the self-detected faults trigger a dedicated circuit that disables the faulty computer and engages the backup. The last, last, last thing you want the autopilot on an airliner to do is execute a "return 0;" some programmer threw in to shut the compiler up. An exception thrown, shutting down the autopilot, engaging the backup, and notifying the pilot is what you'd much rather happen.

Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have been addressing.

Absolutely. But that is not, in and of itself, sufficient justification for ditching compile-time detection in favour of runtime detection. Yet again, we're having to swallow absolutism (dare I say dogma?) instead of coming up with a solution that handles all requirements to a healthy degree.

It's clear, or so it seems to me, that this issue, at least as far as the strictures of D are concerned, is a balance between the likelihoods of: 1. producing a non-violating program, and 2. preventing a violating program from continuing its execution and, therefore, potentially wrecking a system.

There's a very, very important additional point: that of not enticing the programmer into inserting "shut up" code to please the compiler, code that winds up masking a bug.

Truly.

You seem to be of the opinion that the current situation of missing return/case handling (MRCH) minimises the likelihood of 2. I agree that it does. However, contrarily, I assert that D's MRCH minimises the likelihood of producing a non-violating program in the first place. The reasons are obvious, so I'll not go into them. (If anyone cares to disagree, I ask you to write a non-trivial C++ program in a hurry, disable *all* warnings, and go straight to production with it.)

Walter, I think that you've hung D on the petard of 'absolutism in the name of simplicity', on this and other issues. For good reasons, you won't countenance warnings, or pragmas, or even switch/function decorator keywords (e.g. "int allcases func(int i) { if (i < 0) return -1; }"). Indeed, as I think most participants will acknowledge, there are good reasons for all the decisions made for D thus far. But there are also good reasons against most/all of those decisions. (Except for slices. Slices are *the best thing* ever, and coupled with auto+GC, they will eventually make D stand out from all other mainstream languages. <G>)

Jan Knepper came up with the slicing idea. Sheer genius!
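The MRCH situation being debated can be sketched as follows (a C++ sketch for brevity, since D's switch syntax is nearly identical; the enum, function names, and values are illustrative, not from the thread). The first function placates the compiler with a silent fallback; the second hard-fails loudly, which is what D's implicit runtime check amounts to:

```cpp
#include <stdexcept>

enum Mode { Read, Write, Append };

// "Shut up" style: the forgotten Append case silently yields 0,
// and the wrong value flows onward undetected.
int silentHandler(Mode m) {
    switch (m) {
    case Read:  return 1;
    case Write: return 2;
    default:    return 0; // inserted only to quiet the compiler
    }
}

// Hard-fail style: the unhandled case throws, so the fault is
// loud, immediate, and close to its source.
int loudHandler(Mode m) {
    switch (m) {
    case Read:  return 1;
    case Write: return 2;
    default:    throw std::logic_error("unhandled Mode");
    }
}
```

Matthew's point is that a compile-time diagnostic would force the Append case to be confronted before the program ever runs; Walter's is that, absent the hard failure, the silent 0 above is exactly the masked bug he fears.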
If you're trying to say that I've implied that compile-time detection can handle everything, leaving nothing to be done at runtime, you're either kidding, sly, or mental. I'm assuming kidding, from the smiley, but it's a bit disingenuous at this level of the debate, don't you think?

Software engineering hasn't yet found a perfect language. D is not perfect, and it'd be surprising to hear anyone here say that it is. That being the case, how can the policy of absolutism be deemed a sensible one?

Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)

I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.

It cannot be sanely argued that throwing on missing returns is a perfect solution, any more than it can be argued that compiler errors on missing returns are. That being the case, why has D made manifest in its definition the stance that one of these positions is indeed perfect?

I don't believe it is perfect. I believe it is the best balance of competing factors.

I'm not talking about lint. I confidently predict that the least badness that will happen will be the general use of non-standard compilers and the general un-use of DMD. But I realistically think that D'll splinter, as a result of making the same kinds of mistakes, albeit for different reasons, as C++. :-(

I know the many dark roads that await once the tight control on the language is loosened, but the real world's already here, battering on the door.

I have an open mind, and willing fingers, for all kinds of languages. I like D a lot, and I want it to succeed a *very great deal*. But I really cannot imagine recommending use of D to my clients with these flaws of absolutism. (My hopeful guess for the future is that other compiler variants will arise that will, at least, allow warnings to detect such things at compile time, which may alter the commercial landscape markedly; D is, after all, full of a great many wonderful things.)

I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.

That is not the suggested syntax, at least not to the best of my recollection.

One last word: I recall a suggestion a year or so ago that would have required the programmer to explicitly insert what is currently inserted implicitly. This would have the compiler report errors to me if I missed a return. It'd have the code throw errors to you if an unexpected code path occurred. Other than screwing over people who prize typing one less line over robustness, what's the flaw? And yet it got no traction...

Essentially, that means requiring the programmer to insert: assert(0); return 0;

It just seems that requiring some fixed boilerplate to be inserted means that the language should do that for you. After all, that's what computers are good at!

LOL! Well, there's no arguing with you there, eh? You don't want the compiler to automate the bits I want. I don't want it to automate the bits you want. I suggest a way to resolve this, by requiring more of the programmer (fancy that!), and you discount it because it's something the compiler should do. Just in case anyone's missed the extreme illogic of that position, I'll reiterate:

Camp A wants behaviour X to be done automatically by the compiler.
Camp B wants behaviour Y to be done automatically by the compiler.
X and Y are incompatible, when done automatically.
By having Z done manually, X and Y become moot, and everything works well. (To the degree that D will, then, and only then, achieve resultant robustnesses undreamt of.)

Walter reckons that Z should be done automatically by the compiler. Matthew auto-defolicalises and goes to wibble his frimble in the back drim-drim with the other nimpins. Less insanely, I'm keen to hear if there's any on-point response to this.

Agreed.

[My goodness! That was way longer than I wanted. I guess we'll still be arguing about this when the third edition of DPD's running hot through the presses...]

I don't expect we'll agree on this anytime soon.
Feb 06 2005
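The explicit boilerplate discussed in the post above ("assert(0); return 0;") looks like this in practice (a C++ sketch; D's syntax is nearly identical, and the function name and logic are illustrative):

```cpp
#include <cassert>

// The suggestion: the programmer writes explicitly what D currently
// inserts implicitly at the end of a path with no return.
int sign(int i) {
    if (i < 0) return -1;
    if (i > 0) return +1;
    // The i == 0 case was forgotten. In a debug build this assert
    // fires, giving the loud hard failure Walter wants.
    assert(0);
    return 0; // dead code, present only to satisfy the flow checker
}
```

Under this scheme the compiler would reject the function without the last two lines, giving Matthew his compile-time diagnostic, while the assert preserves the non-ignorable runtime failure; D instead supplies the equivalent check automatically and says nothing at compile time.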
Sounds good to me. But I suspect Walter will argue that giving the programmer any hint of the problem will result in them putting something in to shut the compiler up. At which point I'll have to smash myself in the head with my laptop.

"Regan Heath" <regan netwin.co.nz> wrote in message news:opslstuwzg23k2f5 ally...

Disclaimer: Please correct me if I have misrepresented anyone; I apologise in advance if I have done so, it was not my intent. The following is my impression of the points/positions in this argument:

1. Catching things at compile time is better than at runtime. - all parties agree
2. If it cannot be caught at compile time, then a hard failure at runtime is desired. - all parties agree
3. An error which causes the programmer to add code to 'shut the compiler up' causes hidden bugs. - Walter; Matthew?
4. Programmers should take responsibility for the code they add to 'shut the compiler up' by adding an assert/exception. - Matthew; Walter?
5. The language/compiler should, where it can, make it hard for the programmer to write bad code. - Walter; Matthew?

IMO it seems to be a disagreement about what happens in the "real world": Matthew has an optimistic view, Walter a pessimistic one, e.g.

Matthew: If it were a warning, programmers would notice immediately, consider the error, and fix it or add an assert for protection; thus the error would be caught immediately or at runtime. It seems to me that Matthew's position is that warning the programmer at compile time about the situation gives them the opportunity to fix it at compile time, and I agree.

Walter: If it were a warning, programmers might add 'return 0;', causing the error to remain undetected for longer. It seems to me that Walter's position is that if it were a warning there is potential for the programmer to do something stupid, and I agree.

So why can't we have both? To explore this, an imaginary situation:
- Compiler detects problem.
- Adds code to handle it (hard-fail at runtime).
- Gives notification of the potential problem.
- Programmer either:
a. cannot see the problem, adds code to shut the compiler up (causing removal of the auto hard-fail code).
b. cannot see the problem, adds an assert (hard-fail) and code to shut the compiler up.
c. sees the problem, fixes it.

If (a), then the bug could remain undetected for longer. If (b), then the bug is caught at runtime. If (c), then the bug is avoided.

Without the notification, (a) is impossible, so it seems Walter's position removes the worst-case scenario; BUT without the notification (c) is also impossible, so it seems Walter's position removes the best-case scenario too. Of course, for any programmer who would choose (b) over (a) 'all the time', Matthew's position is clearly the superior one. However...

The real question is: in the real world, are there more programmers who choose (a), as Walter imagines, or more who choose (b), as Matthew imagines? Those that choose (a), do they do so out of ignorance, impatience, or stupidity (or some other reason)? If stupidity, there is no cure for stupidity. If impatience (as Walter has suggested), what do we do? Can we do anything? If ignorance, then how do we teach them? Does auto-inserting the hard fail and giving no warning do so? Would giving the warning do a better or worse job? E.g.:

"There is the potential for undefined behaviour here; an exception has been added automatically. Please consider the situation and either: A. add your own exception, or B. fix the bug."

Regan

On Sat, 5 Feb 2005 20:26:43 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1pe6$15ks$1 digitaldaemon.com...

I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)

If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.

1) make it impossible to ignore situations the programmer did not think of

So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they are, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.

Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be dealt with at runtime. Your response: exceptions are the best way to indicate a runtime error. Come on. Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.

Throwing an uncaught exception is designed to be obvious, and is the preferred method of being obvious about a runtime error.

2) the bias is to force bugs to show themselves in an obvious manner.

So do I. But this statement is too bland to be worth anything. What is "obvious"?

Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?

*Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late?

As early as possible. Putting in the return 0; means the showing will be late.

Disagree.

Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment) time to find them, is simply crazy.

I understand your point, but for this case I do not agree, for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.

Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy.
Feb 06 2005
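Regan's three programmer responses, (a), (b) and (c), quoted in the post above, can be sketched side by side (a C++ sketch; the functions are illustrative stand-ins for a routine whose negative-input path went unhandled):

```cpp
#include <stdexcept>

// (a) shut the compiler up: the unhandled path silently returns 0,
// so the bug can remain undetected for longer.
int respA(int i) {
    if (i > 0) return i;
    return 0;
}

// (b) take responsibility: acknowledge the hole with a hard failure,
// so the bug is at least caught loudly at runtime.
int respB(int i) {
    if (i > 0) return i;
    throw std::logic_error("unhandled: i <= 0");
}

// (c) see the problem and fix it: handle the path properly,
// so the bug is avoided altogether.
int respC(int i) {
    if (i > 0) return i;
    return -i;
}
```

All three agree on the inputs the programmer thought about; they differ only on the path nobody considered, which is exactly where the warning-versus-silent-insertion debate lives.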
On Mon, 7 Feb 2005 13:27:06 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Sounds good to me. But I suspect Walter will argue that given the programmer any hint of the problem will result in them putting something in to shut the compiler up.He already has, it's the (a) option below, the 'worst case' scenario. I did note that you mentioned it would be possible for people to start using a "return 0;" to avoid the auto assert, I agree it's possible, I just don't think it's very likely.At which point I'll have to smash myself in the head with my laptop.I do hope it's one of those new ones that isn't very big and/or solid, not like the one I used to have which survived being run over by a car. Regan"Regan Heath" <regan netwin.co.nz> wrote in message news:opslstuwzg23k2f5 ally...Disclaimer: Please correct me if I have miss-represented anyone, I appologise in advance for doing so, it was not my intent. The following is my impression of the points/positions in this argument: 1. Catching things at compile time is better than at runtime. - all parties agree 2. If it cannot be caught at compile time, then a hard failure at runtime is desired. - all parties agree 3. An error which causes the programmer to add code to 'shut the compiler up' causes hidden bugs - Walter Matthew? 4. Programmers should take responsibilty for the code they add to 'shut the compiler up' by adding an assert/exception. - Matthew Walter? 5. The language/compiler should where it can make it hard for the programmer to write bad code - Walter Matthew? IMO it seems to be a disagreement about what happens in the "real world", IMO Matthew has an optimistic view, Walter a pessimistic view, eg. Matthew: If it were a warning, programmers would notice immediately, consider the error, fix it or add an assert for protection, thus the error would be caught immediately or at runtime. 
It seems to me that Matthews position is that warning the programmer at compile time about the situation gives them the opportunity to fix it at compile time, and I agree. Walter: If it were a warning, programmers might add 'return 0;' causing the error to remain un-detected for longer. It seems to me that Walters position is that if it were a warning there is potential for the programmer to do something stupid, and I agree. So why can't we have both? To explore this, an imaginary situation: - Compiler detects problem. - Adds code to handle it (hard-fail at runtime). - Gives notification of the potential problem. - Programmer either: a. cannot see the problem, adds code to shut the compiler up. (causing removal of auto hard-fail code) b. cannot see the problem, adds an assert (hard-fail) and code to shut the compiler up. c. sees the problem, fixes it. if a then the bug could remain undetected for longer. if b then the bug is caught at runtime. if c then the bug is avoided. Without the notification (a) is impossible, so it seems Walters position removes the worst case scenario, BUT, without the notification (c) is impossible, so it seems Walters position removes the best case scenario also. Of course for any programmer who would choose (b) over (a) 'all the time' Matthews position is clearly the superior one, however... The real question is. In the real world are there more programmers who choose (a), as Walter imagines, or are there more choosing (b) as Matthew imagines? Those that choose (a), do they do so out of ignorance, impatience, or stupidity? (or some other reason) If stupidity, there is no cure for stupidity. If impatience (as Walter has suggested) what do we do, can we do anything. If ignorance, then how do we teach them? does auto-inserting the hard fail and giving no warning do so? would giving the warning do a better/worse job? eg. 
"There is the potential for undefined behaviour here, an exception has been added automatically please consider the situation and either: A. add your own exception or B. fix the bug." Regan On Sat, 5 Feb 2005 20:26:43 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1pe6$15ks$1 digitaldaemon.com...I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.1) make it impossible to ignore situations the programmer did not think ofSo do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system . If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be runtime. Your response: exceptions are the best way to indicate runtime error. Come on. Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.2) the bias is to force bugs to show themselves in an obvious manner.So do I. But this statement is too bland to be worth anything. What's is "obvious"?Oh? 
And that'd be later than the compiler preventing it from even getting to object code in the first place?*Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late?As early as possible. Putting in the return 0; means the showing will be late.Disagree.Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile-time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment time) to find them, is simply crazy.I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. This is not an analogy, it's a mirror with some smoke machines behind it.But you're hamstringing 100% of all developers for the careless/unprofessional/inept of a few.I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong even though they know it is wrong and tell others it is wrong, is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do."Sorry, but wrong again. 
As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.Will those handful % of better-employed-working-in-the-spam-industry find no other way to screw up their systems? Is this really going to answer all the issues attendant with a lack of skill/learning/professionalism/adequate quality mechanisms (incl. design reviews, code reviews, documentation, refactoring, unit testing, system testing, etc. etc.)?D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.All of this is of virtually no relevance to the topic under discussion.But I'm not going to argue point by point with your post, since you lost me at "Java's exceptions". The analogy is specious, and thus unconvincing. (Though I absolutely concur that they were a little-tried 'good idea', like C++'s exception specifications or, in fear of drawing unwanted venom from my friends in the C++ firmament, export.)I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up on booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.There's some growing thought that even static type checking is an emperor without clothes, that dynamic type checking (like Python does) is more robust and more productive. 
I'm not at all convinced of that yet <g>, but it's fun seeing the conventional wisdom being challenged. It's good for all of us.I'm with you there.Nothing is *always* true. That's kind of one of the bases of my thesis.My position is simply that compile-time error detection is better than runtime error detection.In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth. So, again, my position is: Checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do. Except when that error can be detected at compile time. Please stop arguing against your demons on this, and address my point. If an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. I'm not one of 'em.Now you're absolutely correct that an invalid state throwing an exception, leading to application/system reset is a good thing. Absolutely. But let's be honest. All that achieves is to prevent a bad program from continuing to function once it is established to be bad. It doesn't make that program less bad, or help it run well again.Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct.Abso-bloody-lutely spot on behaviour. What: you think I'm arguing that the lander should have all its checking done at compile time (as if that's even possible) and eschew runtime checking. 
At no time have I ever said such a thing.Depending on the vagaries of its operating environment, it may well just keep going bad, in the same (hopefully very short) amount of time, again and again and again. The system's not being (further) corrupted, but it's not getting anything done either.One of the Mars landers went silent for a couple days. Turns out it was a self-detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well.On airliners, the self-detected faults trigger a dedicated circuit that disables the faulty computer and engages the backup. The last, last, last thing you want the autopilot on an airliner to do is execute a return 0; some programmer threw in to shut the compiler up. An exception thrown, shutting down the autopilot, engaging the backup, and notifying the pilot is what you'd much rather happen.Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have been addressing.Absolutely. But that is not, in and of itself, sufficient justification for ditching compile detection in favour of runtime detection. Yet again, we're having to swallow absolutism - dare I say dogma? - instead of coming up with a solution that handles all requirements to a healthy degree.It's clear, or seems to me, that this issue, at least as far as the strictures of D are concerned, is a balance between the likelihoods of: 1. producing a non-violating program, and 2. preventing a violating program from continuing its execution and, therefore, potentially wrecking a system.There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug.TrulyYou seem to be of the opinion that the current situation of missing return/case handling (MRCH) minimises the likelihood of 2. I agree that it does so. 
However, contrarily, I assert that D's MRCH minimises the likelihood of producing a non-violating program in the first place. The reasons are obvious, so I'll not go into them. (If anyone cares to disagree, I ask you to write a non-trivial C++ program in a hurry, disable *all* warnings, and go straight to production with it.) Walter, I think that you've hung D on the petard of 'absolutism in the name of simplicity', on this and other issues. For good reasons, you won't conscience warnings, or pragmas, or even switch/function decorator keywords (e.g. "int allcases func(int i) { if (i < 0) return -1; }"). Indeed, as I think most participants will acknowledge, there are good reasons for all the decisions made for D thus far. But there are also good reasons against most/all of those decisions. (Except for slices. Slices are *the best thing* ever, and coupled with auto+GC, will eventually stand D out from all other mainstream languages.<G>).Jan Knepper came up with the slicing idea. Sheer genius!? If you're trying to say that I've implied that compile-time detection can handle everything, leaving nothing to be done at runtime, you're either kidding, sly, or mental. I'm assuming kidding, from the smiley, but it's a bit disingenuous at this level of the debate, don't you think?Software engineering hasn't yet found a perfect language. D is not perfect, and it'd be surprising to hear anyone here say that it is. That being the case, how can the policy of absolutism be deemed a sensible one?Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.It cannot be sanely argued that throwing on missing returns is a perfect solution, any more than it can be argued that compiler errors on missing returns is. 
That being the case, why has D made manifest in its definition the stance that one of these positions is indeed perfect?I don't believe it is perfect. I believe it is the best balance of competing factors.I'm not talking about lint. I confidently predict that the least badness that will happen will be the general use of non-standard compilers and the general un-use of DMD. But I realistically think that D'll splinter as a result of making the same kinds of mistakes, albeit for different reasons, as C++. :-(I know the many dark roads that await once the tight control on the language is loosened, but the real world's already here, batting on the door. I have an open mind, and willing fingers to all kinds of languages. I like D a lot, and I want it to succeed a *very great deal*. But I really cannot imagine recommending use of D to my clients with these flaws of absolutism. (My hopeful guess for the future is that other compiler variants will arise that will, at least, allow warnings to detect such things at compile time, which may alter the commercial landscape markedly; D is, after all, full of a great many wonderful things.)I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.That is not the suggested syntax, at least not to the best of my recollection.One last word: I recall a suggestion a year or so ago that would require the programmer to explicitly insert what is currently inserted implicitly. This would have the compiler report errors to me if I missed a return. It'd have the code throw errors to you if an unexpected code path occurred. Other than screwing over people who prize typing one less line over robustness, what's the flaw? 
And yet it got no traction ....Essentially, that means requiring the programmer to insert: assert(0); return 0;It just seems that requiring some fixed boilerplate to be inserted means that the language should do that for you. After all, that's what computers are good at!LOL! Well, there's no arguing with you there, eh? You don't want the compiler to automate the bits I want. I don't want it to automate the bits you want. I suggest a way to resolve this, by requiring more of the programmer - fancy that! - and you discount that because it's something the compiler should do. Just in case anyone's missed the extreme illogic of that position, I'll reiterate. Camp A want behaviour X to be done automatically by the compiler Camp B want behaviour Y to be done automatically by the compiler. X and Y are incompatible, when done automatically. By having Z done manually, X and Y are moot, and everything works well. (To the degree that D will, then, and only then, achieve resultant robustnesses undreamt of.) Walter reckons that Z should be done automatically by the compiler. Matthew auto-defolicalises and goes to wibble his frimble in the back drim-drim with the other nimpins. Less insanely, I'm keen to hear if there's any on-point response to this?Agreed[My goodness! That was way longer than I wanted. I guess we'll still be arguing about this when the third edition of DPD's running hot through the presses ...]I don't expect we'll agree on this anytime soon.
Feb 06 2005
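[The pattern the two camps keep circling can be made concrete. Below is a minimal sketch, in C++ rather than D purely so it stands alone; the function name and the collection type are invented for illustration. It shows the style Matthew's camp argues for: the "can't happen" fall-through path is stated explicitly and fails loudly, instead of a silent `return 0;` inserted to placate the compiler.]

```cpp
#include <stdexcept>
#include <utility>
#include <vector>

// The function shape under debate: the programmer believes the loop
// always returns, so the fall-through path "can't happen".
int find_z(const std::vector<std::pair<int, int>>& c, int y)
{
    for (const auto& v : c)
    {
        if (v.first == y)
            return v.second;  // the intended exit
    }

    // State the "can't happen" path explicitly, so a violated assumption
    // fails loudly and attributably instead of yielding a quiet 0.
    throw std::logic_error("find_z: supposedly unreachable path reached");
}
```

[If the assumption ever breaks, the failure is immediate and names its location, which is the "non-ignorable" behaviour both camps say they want; the disagreement in this thread is only over whether the compiler or the programmer should write that last line.]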
Matthew wrote:Sounds good to me. But I suspect Walter will argue that giving the programmer any hint of the problem will result in them putting something in to shut the compiler up. At which point I'll have to smash myself in the head with my laptop.If it's that new Apple laptop you've got on order, please send it to me before you smash it on your head! ;-)
Feb 06 2005
"John Reimer" <brk_6502 yahoo.com> wrote in message news:cu6mt4$785$1 digitaldaemon.com...Matthew wrote:Nah! It's not arrived yet. It'd be this 5 kilo old Dell sitting on the desk, with its miserable little broken hinge.Sounds good to me. But I suspect Walter will argue that giving the programmer any hint of the problem will result in them putting something in to shut the compiler up. At which point I'll have to smash myself in the head with my laptop.If it's that new Apple laptop you've got on order, please send it to me before you smash it on your head! ;-)
Feb 06 2005
Matthew wrote:Nah! It's not arrived yet. It'd be this 5 kilo old Dell sitting on the desk, with its miserable little broken hinge.Does that mean you don't want it? I'll take it. Seriously. _______________________ Carlos Santander Bernal
Feb 09 2005
"Carlos Santander B." <csantander619 gmail.com> wrote in message news:cuefto$o4k$1 digitaldaemon.com...Matthew wrote:He he. No, sorry. I've ordered a hinge from Dell - a company that seems to know how to provide at least acceptable, if not great, customer service - and it's about to become a Linux machine. (GDC, here I come!)Nah! It's not arrived yet. It'd be this 5 kilo old Dell sitting on the desk, with its miserable little broken hinge.Does that mean you don't want it? I'll take it. Seriously.
Feb 09 2005
Matthew wrote:He he. No, sorry. I've ordered a hinge from Dell - a company that seems to know how to provide at least acceptable, if not great, customer service - and it's about to become a Linux machine. (GDC, here I come!)Oh, ok. Can't say I didn't try... :D _______________________ Carlos Santander Bernal
Feb 10 2005
"John Reimer" <brk_6502 yahoo.com> wrote in message news:cu6mt4$785$1 digitaldaemon.com...Matthew wrote:Cancelled it. I'm not going to document here why Apple have lost my business, but suffice it to say, one can understand their consistent lack of market share. Tossers!Sounds good to me. But I suspect Walter will argue that giving the programmer any hint of the problem will result in them putting something in to shut the compiler up. At which point I'll have to smash myself in the head with my laptop.If it's that new Apple laptop you've got on order, please send it to me before you smash it on your head! ;-)
Feb 08 2005
Matthew wrote:Cancelled it. I'm not going to document here why Apple have lost my business, but suffice it to say, one can understand their consistent lack of market share. Tossers!Oh no! why?! Too many delays? Apple losing /your/ business is not a good thing. Darn it. - John
Feb 08 2005
"John Reimer" <brk_6502 yahoo.com> wrote in message news:cuc6at$1g83$1 digitaldaemon.com...Matthew wrote:Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.Cancelled it. I'm not going to document here why Apple have lost my business, but suffice it to say, one can understand their consistent lack of market share. Tossers!Oh no! why?! Too many delays?Apple losing /your/ business is not a good thing.Why? What's so special about me? Do you wonder whether I may be inclined to document their shortcomings ... ;)
Feb 08 2005
Matthew wrote:"John Reimer" <brk_6502 yahoo.com> wrote in message news:cuc6at$1g83$1 digitaldaemon.com...Oh Bother!Matthew wrote:Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.Cancelled it. I'm not going to document here why Apple have lost my business, but suffice it to say, one can understand their consistent lack of market share. Tossers!Oh no! why?! Too many delays?Um... something like that! :-(Apple losing /your/ business is not a good thing.Why? What's so special about me? Do you wonder whether I may be inclined to document their shortcomings ... ;)
Feb 08 2005
"John Reimer" <brk_6502 yahoo.com> wrote in message news:cuc7r9$1i7t$1 digitaldaemon.com...Matthew wrote:Well, I've sent off a snotty letter to sales apple.com and sales apple.com.au. The latter bounced, from which I deduce that Apple probably don't have, or don't service, any guessable email addresses - lord knows, there are none on their websites - and so it's gone in the bit bucket. If I don't hear anything back soon, I reckon there'll be a blog entry coming in a couple of weeks ..."John Reimer" <brk_6502 yahoo.com> wrote in message news:cuc6at$1g83$1 digitaldaemon.com...Oh Bother!Matthew wrote:Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.Cancelled it. I'm not going to document here why Apple have lost my business, but suffice it to say, one can understand their consistent lack of market share. Tossers!Oh no! why?! Too many delays?Um... something like that! :-(Apple losing /your/ business is not a good thing.Why? What's so special about me? Do you wonder whether I may be inclined to document their shortcomings ... ;)
Feb 08 2005
Matthew wrote:Well, I've sent off a snotty letter to sales apple.com and sales apple.com.au. The latter bounced, from which I deduce that Apple probably don't have, or don't service, any guessable email addresses - lord knows, there are none on their websites - and so it's gone in the bit bucket. If I don't hear anything back soon, I reckon there'll be a blog entry coming in a couple of weeks ...Ok, Matthew. Quit holding back. Where's your blog site?
Feb 08 2005
"John Reimer" <brk_6502 yahoo.com> wrote in message news:cucak4$1kcp$1 digitaldaemon.com...Matthew wrote:It's on Artima, where I can rub shoulders with people who really know what they're talking about. But I haven't posted any yet. I've been, er, busy. I will be kicking it off next week, for sure, now I've got my back up!Well, I've sent off a snotty letter to sales apple.com and sales apple.com.au. The latter bounced, from which I deduce that Apple probably don't have, or don't service, any guessable email addresses - lord knows, there are none on their websites - and so it's gone in the bit bucket. If I don't hear anything back soon, I reckon there'll be a blog entry coming in a couple of weeks ...Ok, Matthew. Quit holding back. Where's your blog site?
Feb 08 2005
John Reimer wrote:Being new to the Mac, it's easy to see how you could misunderstand this. Apple is famous for their design, and infamous for their service. And over the years, they've also produced a fair share of "lemons"... Some of us like them anyway, and just make a big glass of lemonade. If you don't like that, you can always buy your Mac els... nevermind. :-P --andersOh Bother!Matthew wrote:Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.Cancelled it. I'm not going to document here why Apple have lost my business, but suffice it to say, one can understand their consistent lack of market share. Tossers!Oh no! why?! Too many delays?
Feb 09 2005
On Wed, 09 Feb 2005 10:31:48 +0100, Anders F Björklund wrote:John Reimer wrote:Actually, I'm not really that new to the mac. I grew up with one. My parents got the first Mac 128 as soon as it came out. I spent hours on it with word processing, music programs, games (my favourite was Fokker Triplane Simulator: hours of fun). I think I even did a little programming in BASIC on it (not much; I didn't start programming until a few years later on the C64). We used it for years. I've read tons of MacWorld Mags while growing up. I learned to type on the Mac with Typing Tutor 3 when I was 9 or 10. My folks upgraded to a new PowerPC mac years later. That machine was also plagued with problems. Apple eventually replaced the motherboard because of known problems with the model (I don't think the cost was completely covered by Apple, though). I've followed Apple history closely throughout the time, watching their successes and many failures. It was the era I grew up in. Despite all this, for some inexplicable reason, I've maintained a certain fondness for the machine. It doesn't make sense, really. :-) That said, I've never personally taken the plunge to get my own personal Mac (I've used other computers for years). Now with Mac OS X and cheaper macs available, I'm almost ready to take the plunge DESPITE Apple's tenuous grasp on market share /and/ reputation for abysmal customer service. The disappointment I share here is just an expression of sadness that Apple's poor customer service might destroy their chances at success. They have to impress people like Matthew, if they know what's good for them. ;-) I think Apple needs to succeed, at the very least to give Microsoft the competition it so badly needs.Being new to the Mac, it's easy to see how you could misunderstand this. Apple is famous for their design, and infamous for their service.Oh Bother!Matthew wrote:Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.Cancelled it. 
I'm not going to document here why Apple have lost my business, but suffice it to say, one can understand their consistent lack of market share. Tossers!Oh no! why?! Too many delays?And over the years, they've also produced a fair share of "lemons"...Some of us like them anyway, and just make a big glass of lemonade.Oh yes, they sure have made their share of lemons. In fact, I must correct myself. I /did/ buy myself a Mac once... an iMac (or was it an eMac; it was a G3 machine, one of the first models of the new shell design). I loved it! ... that is until it began to repeatedly crash, and the CD drive refused to work within a few days of purchasing it. I was mortified. After years of wanting the machine, I finally got one and look what happened. I sent it back and never got a replacement... That was several years ago, and I haven't tried again yet. I'm hoping my next attempt won't meet with such failure.If you don't like that, you can always buy your Mac els... nevermind. :-P --andersThere aren't many options, really. It's too bad. It's too bad Apple isn't a little more open (as in clones; I realize they tried that once before), but then they are probably afraid of losing market share within their own ranks... *sigh*. Later, John R.
Feb 09 2005
First time reader, first time poster! I think some people are missing the point that code rots. A coder might add a return(0) to shut up the compiler and this might be a reasonable thing to do *at that point in time*. For instance, if it is simply impossible for the code to reach this point, then there is probably no acceptable real artifact that can be returned, so he might as well return 0. Later, if the code morphs around this function, the returned "artifact" might not come through as such. Zero might be an actual useful value and was just returned as an error because there was nothing more appropriate. The compiler cannot guess that return(0) has become outdated. Restated, as code changes and assumptions change, leaving a potentially leaky return boldly states, "this will never happen and the compiler can add assertions all it wants". However, adding a return(0) loses that intention. So the next argument is, "OK. So don't use a no-nag return but put an assertion in. This will accomplish the same thing without involving the compiler."... First, this will still not help until run-time. I think there is an elegance issue to work through as well. And I'll mention up front that I don't have any conclusions. (-: The following, Andrew's code, is clean code. int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } } Aahhh. This reminds me of the simple code snippets you would find in an analysis of algorithms tome -- no extra if's used for checking on production results. Could real code truly be this clean? And yet the question will nag: is this programmer boldly stating that he will always return inside the loop and so he needs no terminal case? What if the programmer simply forgot his terminal exception case? I wouldn't be upset with forcing the coder to have his own assertion (or throw one in for him if not?), but it does seem less elegant. However, code like the above rarely appears in production code. 
It's usually tons of tests against file opens, matrix reads, zero values, etc. The "inelegance" of peppering your own code with asserts isn't too bad. So I think it is 6 of one, half dozen of the other. Perusing a few of the messages on this newsgroup, it appears that this language is Walter's puppy, and when it comes to a tie, I'd let him have his way. (-: I'd hate to see this language suffer the same burnt-out, never-finished fate of so many other promising starts. Let me also say that I don't think it is worth distorting a language too much in order to get as much compile time checking as possible. I think the reasonable limits are fairly well known (strong typing, etc.) I wouldn't over-use the argument that the compiler *could have* caught this. - Charlie Regan Heath wrote:Disclaimer: Please correct me if I have misrepresented anyone, I apologise in advance for doing so, it was not my intent. The following is my impression of the points/positions in this argument: 1. Catching things at compile time is better than at runtime. - all parties agree 2. If it cannot be caught at compile time, then a hard failure at runtime is desired. - all parties agree 3. An error which causes the programmer to add code to 'shut the compiler up' causes hidden bugs - Walter Matthew? 4. Programmers should take responsibility for the code they add to 'shut the compiler up' by adding an assert/exception. - Matthew Walter? 5. The language/compiler should, where it can, make it hard for the programmer to write bad code - Walter Matthew? IMO it seems to be a disagreement about what happens in the "real world", IMO Matthew has an optimistic view, Walter a pessimistic view, eg. Matthew: If it were a warning, programmers would notice immediately, consider the error, fix it or add an assert for protection, thus the error would be caught immediately or at runtime. 
It seems to me that Matthew's position is that warning the programmer at compile time about the situation gives them the opportunity to fix it at compile time, and I agree. Walter: If it were a warning, programmers might add 'return 0;' causing the error to remain undetected for longer. It seems to me that Walter's position is that if it were a warning there is potential for the programmer to do something stupid, and I agree. So why can't we have both? To explore this, an imaginary situation: - Compiler detects problem. - Adds code to handle it (hard-fail at runtime). - Gives notification of the potential problem. - Programmer either: a. cannot see the problem, adds code to shut the compiler up. (causing removal of auto hard-fail code) b. cannot see the problem, adds an assert (hard-fail) and code to shut the compiler up. c. sees the problem, fixes it. if a then the bug could remain undetected for longer. if b then the bug is caught at runtime. if c then the bug is avoided. Without the notification (a) is impossible, so it seems Walter's position removes the worst case scenario, BUT, without the notification (c) is impossible, so it seems Walter's position removes the best case scenario also. Of course for any programmer who would choose (b) over (a) 'all the time' Matthew's position is clearly the superior one, however... The real question is: in the real world, are there more programmers who choose (a), as Walter imagines, or are there more choosing (b) as Matthew imagines? Those that choose (a), do they do so out of ignorance, impatience, or stupidity? (or some other reason) If stupidity, there is no cure for stupidity. If impatience (as Walter has suggested) what do we do? Can we do anything? If ignorance, then how do we teach them? 
"There is the potential for undefined behaviour here; an exception has been added automatically. Please consider the situation and either: A. add your own exception, or B. fix the bug."
Feb 08 2005
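[Charles's code-rot point can be sketched concretely. A hedged, hypothetical example (C++ for self-containment; the account/balance domain is invented, not from anyone's code): the `return 0;` is defensible when written, becomes silently wrong once 0 is a legitimate value, while an assert on the same path keeps the original "can't happen" intent alive through the rot.]

```cpp
#include <cassert>
#include <map>
#include <string>

// Version 1: when this was written, callers never asked for unknown
// accounts, so the fall-through `return 0;` merely silenced the compiler.
int balance_v1(const std::map<std::string, int>& accounts,
               const std::string& name)
{
    auto it = accounts.find(name);
    if (it != accounts.end())
        return it->second;
    return 0;  // "can't happen" -- true at the time it was written
}

// Later the code morphs: 0 is now a perfectly legitimate balance, so an
// unknown account is indistinguishable from an empty one. An assert on
// the "impossible" path preserves the intent through the rot:
int balance_v2(const std::map<std::string, int>& accounts,
               const std::string& name)
{
    auto it = accounts.find(name);
    if (it != accounts.end())
        return it->second;
    assert(!"balance_v2: supposedly unreachable path");  // intent survives
    return 0;  // unreachable while asserts are enabled
}
```

[With `balance_v1`, an unknown account and a zero balance return the same value, which is exactly the outdated-return-0 rot described above; `balance_v2` aborts loudly on the violated assumption instead.]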
First time reader, first time poster! I think some people are missing the point that code rots. A coder might add a return(0) to shut up the compiler and this might be a reasonable thing to do *at that point in time*. For instance, if it is simply impossible for the code to reach this point, then there is probably no acceptable real artifact that can be returned, so he might as well return 0. Later, if the code morphs around this function, the returned "artifact" might not come through as such. Zero might be an actual useful value and was just returned as an error because there was nothing more appropriate. The compiler can not guess that return(0) has become outdated.Agreed. Last time this was debated, someone suggested a "neverreturn" keyword, or some such. That's less likely to rot, don't you think?
Feb 08 2005
On Wed, 9 Feb 2005 06:47:39 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:I agree. Having some sort of way to tell the compiler what you mean is useful; in addition, it tells the next guy who looks at the code your intent. It's not strictly necessary to have a keyword, an assert can do the job, but it has to be visible in the code to have the full effect. The problem (it seems this is what Walter is most worried about) is how to get people to use it... a keyword might achieve that goal? ReganFirst time reader, first time poster! I think some people are missing the point that code rots. A coder might add a return(0) to shut up the compiler and this might be a reasonable thing to do *at that point in time*. For instance, if it is simply impossible for the code to reach this point, then there is probably no acceptable real artifact that can be returned, so he might as well return 0. Later, if the code morphs around this function, the returned "artifact" might not come through as such. Zero might be an actual useful value and was just returned as an error because there was nothing more appropriate. The compiler cannot guess that return(0) has become outdated.Agreed. Last time this was debated, someone suggested a "neverreturn" keyword, or some such. That's less likely to rot, don't you think?
Feb 08 2005
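[For what it's worth, the "neverreturn" idea discussed here did win out elsewhere: C++11 later standardized essentially the same annotation as the `[[noreturn]]` attribute. A minimal C++ sketch (function names invented) of how such an annotation carries the programmer's intent visibly, so it cannot silently rot the way a bare `return 0;` can:]

```cpp
#include <cstdio>
#include <cstdlib>

// [[noreturn]] tells both the reader and the compiler that control never
// comes back from this call, so the compiler raises no missing-return
// diagnostic, and the "this path is impossible" claim stays in the code.
[[noreturn]] void never_reached(const char* where)
{
    std::fprintf(stderr, "unreachable path reached in %s\n", where);
    std::abort();
}

int sign(int x)
{
    if (x > 0) return 1;
    if (x < 0) return -1;
    if (x == 0) return 0;
    never_reached("sign");  // all cases covered; the claim is explicit
}
```

[The annotation is the keyword-style equivalent of the visible assert Regan describes: it documents intent to the next reader, and if maintenance ever makes the path reachable, the program halts loudly rather than returning a fabricated value.]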
I think you and I have very similar opinions on this matter. I think most all of us here agree on what the best outcome is, what we seem to disagree over is what the compiler can best do to achieve it. On Tue, 08 Feb 2005 13:07:21 -0500, Charles Patterson <charliep1 excite.com> wrote:First time reader, first time poster! I think some people are missing the point that code rots. A coder might add a return(0) to shut up the compiler and this might be a reasonable thing to do *at that point in time*. For instance, if it is simply impossible for the code to reach this point, then there is probably no acceptable real artifact that can be returned, so he might as well return 0. Later, if the code morphs around this function, the returned "artifact" might not come through as such. Zero might be an actual useful value and was just returned as an error because there was nothing more appropriate. The compiler can not guess that return(0) has become outdated. Restated, as code changes and assumptions change, leaving a potentially leaky return boldly states, "this will never happen and the compiler can add assertions all it wants". However, adding a return(0) loses that intention. So next argument is, "OK. So don't use a no-nag return but put an assertion in. This will accomplish the same thing without involving the compiler."... First, this will still not help until run-time. I think there is an elegance issue to work through as well. And I'll mention up front that I don't have any conclusions. (-: The following, Andrew code, is clean code. int foo(CollectionClass c, int y) { foreach (Value v; c) { if (v.x == y) return v.z; } } Aahhh. This reminds me of the simple code snippets you would find in an analysis of algorithms tome -- no extra if's used for checking on production results. Could real code truly be this clean? And yet the question will nag: is this programmer boldly stating that he will always return inside the loop and so he needs no terminal case? 
what if the programmer simply forgot his terminal exception case? I wouldn't be upset with forcing the coder to have his own assertion, (or throw one in for him if not?) but it does seem less elegant. However, code like the above rarely appears in production code. It's usually tons of tests against file opens, matrix reads, zero values, etc. The "inelegance" of peppering your own code with asserts isn't too bad. So I think it is 6 of one, half dozen of the other. Perusing a few of the messages on this newsgroup, it appears that this language is Walter's puppy, and when it comes to a tie, I'd let him have his way. (-: I'd hate to see this language suffer the same burnt-out, never-finished fate of so many other promising starts. Let me also say that I dont' think it is worth distorting a language too much in order to get as much compile time checking as possible. I think the reasonable limits are fairly well known (strong typing, etc.) I wouldn't over-use the argument that the compiler *could have* caught this. - Charlie Regan Heath wrote:Disclaimer: Please correct me if I have miss-represented anyone, I appologise in advance for doing so, it was not my intent. The following is my impression of the points/positions in this argument: 1. Catching things at compile time is better than at runtime. - all parties agree 2. If it cannot be caught at compile time, then a hard failure at runtime is desired. - all parties agree 3. An error which causes the programmer to add code to 'shut the compiler up' causes hidden bugs - Walter Matthew? 4. Programmers should take responsibilty for the code they add to 'shut the compiler up' by adding an assert/exception. - Matthew Walter? 5. The language/compiler should where it can make it hard for the programmer to write bad code - Walter Matthew? IMO it seems to be a disagreement about what happens in the "real world", IMO Matthew has an optimistic view, Walter a pessimistic view, eg. 
Matthew: If it were a warning, programmers would notice immediately, consider the error, fix it or add an assert for protection, thus the error would be caught immediately or at runtime. It seems to me that Matthew's position is that warning the programmer at compile time about the situation gives them the opportunity to fix it at compile time, and I agree. Walter: If it were a warning, programmers might add 'return 0;' causing the error to remain undetected for longer. It seems to me that Walter's position is that if it were a warning there is potential for the programmer to do something stupid, and I agree. So why can't we have both? To explore this, an imaginary situation: - Compiler detects problem. - Adds code to handle it (hard-fail at runtime). - Gives notification of the potential problem. - Programmer either: a. cannot see the problem, adds code to shut the compiler up. (causing removal of auto hard-fail code) b. cannot see the problem, adds an assert (hard-fail) and code to shut the compiler up. c. sees the problem, fixes it. if a then the bug could remain undetected for longer. if b then the bug is caught at runtime. if c then the bug is avoided. Without the notification (a) is impossible, so it seems Walter's position removes the worst case scenario, BUT, without the notification (c) is impossible, so it seems Walter's position removes the best case scenario also. Of course for any programmer who would choose (b) over (a) 'all the time' Matthew's position is clearly the superior one, however... The real question is: in the real world, are there more programmers who choose (a), as Walter imagines, or are there more choosing (b) as Matthew imagines? Those that choose (a), do they do so out of ignorance, impatience, or stupidity? (or some other reason) If stupidity, there is no cure for stupidity. If impatience (as Walter has suggested) what do we do, can we do anything. If ignorance, then how do we teach them?
Does auto-inserting the hard fail and giving no warning do so? Would giving the warning do a better/worse job? eg. "There is the potential for undefined behaviour here; an exception has been added automatically. Please consider the situation and either: A. add your own exception or B. fix the bug."
Feb 08 2005
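[Editor's note: the (a)/(b)/(c) split in the post above is easiest to see in code. Below is a hedged sketch in C++ rather than D, purely so it stands alone; the `Value`/`foo` names echo the earlier example and the collection type is an assumption. It shows option (b): the programmer keeps the compiler quiet, but turns the "impossible" path into a hard runtime failure instead of a silent `return 0;`.]

```cpp
#include <cassert>
#include <vector>

// Illustrative types echoing the thread's example; not from any real codebase.
struct Value { int x, y, z; };

int foo(const std::vector<Value>& c, int y) {
    for (const Value& v : c) {
        if (v.x == y)
            return v.z;   // the "always taken" path the coder believes in
    }
    // Option (b): document that belief with a hard runtime failure rather
    // than a shut-the-compiler-up dummy value.
    assert(!"foo: no element matched -- caller broke the precondition");
    return 0; // unreachable while assertions are enabled
}
```

With assertions compiled in, option (b) gives exactly the hard fail option (a) would silently throw away.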
I think you and I have very similar opinions on this matter. I think most all of us here agree on what the best outcome is, what we seem to disagree over is what the compiler can best do to achieve it.Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise. [snip re-quoted text]
Feb 08 2005
On Wed, 9 Feb 2005 09:25:50 +1100, Matthew wrote:The best compromise I've heard of so far is to have the '-v' (verbose) DMD switch to tell me (a hopefully responsible coder) where DMD has inserted the assert(0) code. Most people do not use -v in normal compilations, so only anal coders, such as myself, could use it to find out where I could improve my poor coding practices. [snipped stuff that is not relevant to the above comment] -- Derek Melbourne, Australia 9/02/2005 10:11:31 AMI think you and I have very similar opinions on this matter. I think most all of us here agree on what the best outcome is, what we seem to disagree over is what the compiler can best do to achieve it.Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.
Feb 08 2005
In article <1koccnt7piohu.8r5ivwwg1lso.dlg 40tude.net>, Derek Parnell says...On Wed, 9 Feb 2005 09:25:50 +1100, Matthew wrote:Good stuff. Whilst on the subject, let's add the implicit "default:" injection to that list of "diagnostics" also. Frankly, I find it vaguely annoying when a compiler thinks it knows best, and does so silently. All changes made to the original code, as 'designed' by the programmer, should be clearly noted during compile time -- if that requires a -v switch, then great! Diagnostics are not warnings; therefore there cannot be any wiffle waffle about them, and Walter may actually accept that as a compromise. - KrisThe best compromise I've heard of so far is to have the '-v' (verbose) DMD switch to tell me (a hopefully responsible coder) where DMD has inserted the assert(0) code. Most people do not use -v in normal compilations, so only anal coders, such as myself, could use it to find out where I could improve my poor coding practices. [snipped stuff that is not relevant to the above comment] -- Derek Melbourne, Australia 9/02/2005 10:11:31I think you and I have very similar opinions on this matter. I think most all of us here agree on what the best outcome is, what we seem to disagree over is what the compiler can best do to achieve it.Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.
Feb 08 2005
On Wed, 9 Feb 2005 09:25:50 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:Because of real life constraints, time, etcI think you and I have very similar opinions on this matter. I think most all of us here agree on what the best outcome is, what we seem to disagree over is what the compiler can best do to achieve it.Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional.Some are.This hamstrings all diligent engineers.I agree. It also mitigates mistakes made by less diligent engineers, or diligent engineers having a bad day or in a bad position. Basically my problem is that I can see both sides and have no way to measure which side is correct. Interestingly it's my impression people like the default switch behaviour and dislike the missing return; I am struggling to find the difference between the two, though I have a nagging feeling there is one.Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.Wouldn't life be boring if we were all the same. Regan
Feb 08 2005
I disagree. He doesn't think that. He thinks (umm, I think he thinks) that one possible outcome of such a situation is shut up code. He has expressed that he believes this happens 10% of the time. Regardless, I think we can probably all agree that verbose should *definitely* show this message. It seems to me as if Walter doesn't like too many options, but to me this argument and the sides in it seem to indicate the perfect opportunity for some sort of "tell me when I omit returns" option. The obvious problem is that then, you never know if the compiler has this enabled, and so sometimes good programs when built on other machines (e.g. other platforms, etc.) will give these warnings to the great annoyance of the programmer(s). But, perhaps, if there was a way to indicate options in the current directory (e.g. ".dmd") that wouldn't be a problem. And, everyone could enable this option if they understood its effects and that shut up code is bad. The default could remain off. Then again, I like options. When I open Firefox and go to "about:config", it makes me happy. That was one of the main things that sold me on the browser. Oh, well. -[Unknown]Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.
Feb 08 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cubfs7$t5n$1 digitaldaemon.com...I don't see the problem as that. <preach> I don't think it is fair to call one side pessimistic. That's pretty rhetorical. And if this is a discussion between pessimists and optimists, then I'm not interested because both camps are typically full of non-thinkers. I think the appropriate position would be realist here. Sorry. </preach> <ot> Let me weigh in that most programmers *are* unprofessional. (-: You don't have to dig through too many books on software engineering to find that out. There is a factor of 30 between the productivity of the bad and good coders, for example. And most people in any "sweatshop" environment, of which there are plenty in programming, do the minimum work they can. But I don't think that matters. </ot> I see the problem as a matter of elegance and consistency. And I think it is more elegant to provide the assert as a run-time check. Doesn't D also have array bounds checks? Would you also think that the user should be forced to make these explicitly? What if the code // I am not a D programmer yet, so bear with me if this is C/C++ // A has atleast 10 elements and we only care about the first 10 for ( int i = 0; i < 10; i++ ) test( A[ i ] ); caused a compile-time warning because it can't tell for sure that your assumption is correct. The only way to "shut-up" the compiler is to have an assert in-line: for ( int i = 0; i < 10; i++ ) { assert( i < A.size ); A[ i ] = i' } So I think the compiler should supply one automatically for the return(0) case just like it does for other situations where the coder is checked on but given the flexibility to write the code as he sees fit. And this is all assuming you can tell the assert is related to the problem. How smart will the compiler have to be to force an assert in the return(0) case? 
int foo(CollectionClass c, int y) { int t; foreach (Value v; c) { if (v.x == y) return v.z; } assert( t == 0 ); } How does the compiler know that the assert is unrelated to the loop?I think you and I have very similar opinions on this matter. I think most all of us here agree on what the best outcome is, what we seem to disagree over is what the compiler can best do to achieve it.Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.
Feb 09 2005
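[Editor's note: Charlie's bounds-check comparison can be made concrete. This is an illustrative C++ rendering of the "make the assumption explicit" variant he describes; the function name, parameter, and assertion message are invented.]

```cpp
#include <cassert>
#include <vector>

// Spell out the bounds assumption with an assert, rather than relying on
// the compiler (or risking a silent out-of-bounds read).
int first_ten_sum(const std::vector<int>& a) {
    assert(a.size() >= 10 && "caller must supply at least 10 elements");
    int total = 0;
    for (int i = 0; i < 10; ++i)
        total += a[i]; // in D, indexing is range-checked unless built with -release
    return total;
}
```

The assert costs one check per call; per-element bounds checking costs one per access, which is the cost difference Nick raises later in the thread.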
On Wed, 9 Feb 2005 11:31:46 -0500, Charlie Patterson wrote: [snip]I don't see the problem as that.[snip]I see the problem as a matter of elegance and consistency. And I think it is more elegant to provide the assert as a run-time check.I have no problem with this as well. The issue for me has boiled down to whether or not the compiler tells me that's what it's done. I want to be told whenever the compiler does this sort of thing on my behalf. And I'm happy to have to ask for this too, for example via the -v compiler switch. -- Derek Melbourne, Australia
Feb 09 2005
"Derek" <derek psych.ward> wrote in message news:r5usmcalgzby.16ym9b7gewm7p.dlg 40tude.net...On Wed, 9 Feb 2005 11:31:46 -0500, Charlie Patterson wrote: [snip]Agreed.I don't see the problem as that.[snip]I see the problem as a matter of elegance and consistency. And I think it is more elegant to provide the assert as a run-time check.I have no problem with this as well. The issue for me has boiled down to whether or not the compiler tells me that's what its done. I want to be told whenever the compiler does this sort of thing on my behalf.And I'm happy to have to ask for this too, for example via the -v compiler switch.I think this is a mistake, but if this is as far as Walter can be pushed on this issue, then this'd an invocation of matthew->ShutUp("You get it as a flag on the compiler") would probably not throw an exception at this stage, (though it would result in the printing of a critical missive to stdwhine).
Feb 09 2005
"Charlie Patterson" <charliep1 excite.com> wrote in message news:cuddto$2o74$1 digitaldaemon.com..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cubfs7$t5n$1 digitaldaemon.com...Agreed. As I've just posted on the 'const/readonly string' thread, I think this issue maybe needs to go back to a simple distillation of the problem.I don't see the problem as that. <preach> I don't think it is fair to call one side pessimistic. That's pretty rhetorical.I think you and I have very similar opinions on this matter. I think most all of us here agree on what the best outcome is, what we seem to disagree over is what the compiler can best do to achieve it.Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.And if this is a discussion between pessimists and optimists, then I'm not interested because both camps are typically full of non-thinkers. I think the appropriate position would be realist here. Sorry. </preach> <ot> Let me weigh in that most programmers *are* unprofessional. (-: You don't have to dig through too many books on software engineering to find that out.Yes, alas, I guess I maybe have to admit that a significant proportion of them are. Lord knows - btw, is there an aetheistic/agnostic equivalent to "Lord knows"?? - I've worked with enough of them in years gone by. I suppose my thinking's been coloured these last few years because I've spent most of my time writing, wherein one learns all the myriad ways in which one's assumptions / implementations are wrong/bad, and any that are missed are gleefully pointed out by reviewers. The times I have worked have been consultative projects where I've done most/all the implementation myself. 
I confess that some years past I've worked with people who should be selling fake jewellery on street corners, rather than working on complex systems. But if we agree that there are many unprofessional programmers, must we not also agree that there will be a continuum, rather than just, say 10% programmers are highly professional and 90% are utterly unprofessional. Given that, I think a language which actually lays traps for people who are somewhere in between - and I am convinced that D does indeed do that - is a bad thing. Someone who's half-arsed may well put in "return 0;"s in every function by rote, so as to avoid the dreaded "indeterminate return" that their (somewhat more professional) colleague has warned them about (or their team-leader has castigated them about, after they've caused run-time f-ups for the third time). To me the only sane solution is a middle ground. But this clashes with Walter's strong intent to avoid warnings (and for good reason). My position is that something's got to give.There is a factor of 30 between the productivity of the bad and good coders, for example.Indeed, but try getting a 3000% pay rise on the strength of that. (btw, IIRC, it's only 29x <g>.)And most people in any "sweatshop" environment, of which there are plenty in programming, do the minimum work they can. But I don't think that matters. </ot> I see the problem as a matter of elegance and consistency. And I think it is more elegant to provide the assert as a run-time check. Doesn't D also have array bounds checks?Not in release, AFAIK. <snip reason = "ran out of time; mentally foggy">
Feb 09 2005
Matthew wrote:of them are. Lord knows - btw, is there an aetheistic/agnostic equivalent to "Lord knows"?? - I've worked with enough of them in yearsNo offense intended, but it reminds me of what a teacher of mine said once: "Atheist are weird, especially when they say: 'I swear by God that I'm atheist'" :D _______________________ Carlos Santander Bernal
Feb 09 2005
"Carlos Santander B." <csantander619 gmail.com> wrote in message news:cuegnn$opn$1 digitaldaemon.com...Matthew wrote:No offence to me, I'm not an atheist. (Certainly on either side indicates, to me, a decidedly unnerving degree of certitude.) I just wanted to know if anyone could suggest an alternative to "Lord knows, " or "Heaven knows, ", which I find myself saying more often than I'd like. :-)of them are. Lord knows - btw, is there an aetheistic/agnostic equivalent to "Lord knows"?? - I've worked with enough of them in yearsNo offense intended, but it reminds me of what a teacher of mine said once: "Atheist are weird, especially when they say: 'I swear by God that I'm atheist'" :D
Feb 09 2005
On Thu, 10 Feb 2005 13:49:51 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:"Carlos Santander B." <csantander619 gmail.com> wrote in message news:cuegnn$opn$1 digitaldaemon.com...I tend to use 'Bob' as a generic drop-in replacement for deity invocation - "Bob only knows" warning: Do not use around people called Bob. -- Using Opera's revolutionary e-mail client: http://www.opera.com/m2/Matthew wrote:No offence to me, I'm not an atheist. (Certainly on either side indicates, to me, a decidedly unnerving degree of certitude.) I just wanted to know if anyone could suggest an alternative to "Lord knows, " or "Heaven knows, ", which I find myself saying more often than I'd like. :-)of them are. Lord knows - btw, is there an aetheistic/agnostic equivalent to "Lord knows"?? - I've worked with enough of them in yearsNo offense intended, but it reminds me of what a teacher of mine said once: "Atheist are weird, especially when they say: 'I swear by God that I'm atheist'" :D
Feb 09 2005
"Alex Stevenson" <ans104 cs.york.ac.uk> wrote in message news:opslywebqz08qma6 mjolnir.spamnet.local...On Thu, 10 Feb 2005 13:49:51 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:That's great! Thanks Bob only knows why I didn't think of it before. :-)"Carlos Santander B." <csantander619 gmail.com> wrote in message news:cuegnn$opn$1 digitaldaemon.com...I tend to use 'Bob' as a generic drop-in replacement for deity invocation - "Bob only knows" warning: Do not use around people called Bob.Matthew wrote:No offence to me, I'm not an atheist. (Certainly on either side indicates, to me, a decidedly unnerving degree of certitude.) I just wanted to know if anyone could suggest an alternative to "Lord knows, " or "Heaven knows, ", which I find myself saying more often than I'd like. :-)of them are. Lord knows - btw, is there an aetheistic/agnostic equivalent to "Lord knows"?? - I've worked with enough of them in yearsNo offense intended, but it reminds me of what a teacher of mine said once: "Atheist are weird, especially when they say: 'I swear by God that I'm atheist'" :D
Feb 09 2005
In article <cuetqd$14g2$1 digitaldaemon.com>, Matthew says..."Alex Stevenson" <ans104 cs.york.ac.uk> wrote in message news:opslywebqz08qma6 mjolnir.spamnet.local...What about Void? Isn't that supposed to cover everything? Void only knows! or cast(void)(Diety) only knows! Sorry, couldn't help myself!On Thu, 10 Feb 2005 13:49:51 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:That's great! Thanks Bob only knows why I didn't think of it before. :-)"Carlos Santander B." <csantander619 gmail.com> wrote in message news:cuegnn$opn$1 digitaldaemon.com...I tend to use 'Bob' as a generic drop-in replacement for deity invocation - "Bob only knows" warning: Do not use around people called Bob.Matthew wrote:No offence to me, I'm not an atheist. (Certainly on either side indicates, to me, a decidedly unnerving degree of certitude.) I just wanted to know if anyone could suggest an alternative to "Lord knows, " or "Heaven knows, ", which I find myself saying more often than I'd like. :-)of them are. Lord knows - btw, is there an aetheistic/agnostic equivalent to "Lord knows"?? - I've worked with enough of them in yearsNo offense intended, but it reminds me of what a teacher of mine said once: "Atheist are weird, especially when they say: 'I swear by God that I'm atheist'" :D
Feb 09 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cueank$jur$1 digitaldaemon.com...When the fog lifts... (-: So why is it OK to remove array-bounds checks at release but not functional return points? They seem remarkably similar to me. The assumption seems to be that if you didn't catch them in debug, you'll be alright. And why should the user be forced to insert dummy return point (or assertions at return points), but not dummy array-bounds checks or assertions? I'm also OK with Derek that a compile option such as --sanity would point out the automatically inserted assertions, but how big might this list be if it includes, again, array-bounds checks, etc? For the record, I hate it when compilers dump warnings about things that aren't a problem. I guess I'm anal like that, but it frustrates me for the compiler to point out non-errors. It's like being micro-managed. Like I'm painting and someone is standing over my shoulder saying, "Hey you missed a spot! Did you mean to leave it uneven? Are you going to put something there?" Plus, I really hate inheriting code that throws warnings. I dont' know the code base yet so how worried should I be?I see the problem as a matter of elegance and consistency. And I think it is more elegant to provide the assert as a run-time check. Doesn't D also have array bounds checks?Not in release, AFAIK. <snip reason = "ran out of time; mentally foggy">
Feb 10 2005
In article <cufvel$2aln$1 digitaldaemon.com>, Charlie Patterson says...So why is it OK to remove array-bounds checks at release but not functional return points? They seem remarkably similar to me. The assumption seems to be that if you didn't catch them in debug, you'll be alright.One difference is that bounds checking costs cycles for every array lookup, but an assert(0) at the end of a function doesn't cost anything by just being there. Nick
Feb 10 2005
"Charlie Patterson" <charliep1 excite.com> wrote in message news:cufvel$2aln$1 digitaldaemon.com..."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cueank$jur$1 digitaldaemon.com...Assuming we're correct re checks in release, I agree it's inconsistent, as you point outWhen the fog lifts... (-: So why is it OK to remove array-bounds checks at release but not functional return points? They seem remarkably similar to me. The assumption seems to be that if you didn't catch them in debug, you'll be alright. And why should the user be forced to insert dummy return point (or assertions at return points), but not dummy array-bounds checks or assertions?I see the problem as a matter of elegance and consistency. And I think it is more elegant to provide the assert as a run-time check. Doesn't D also have array bounds checks?Not in release, AFAIK. <snip reason = "ran out of time; mentally foggy">
Feb 10 2005
On Fri, 4 Feb 2005 18:53:15 -0800, Walter wrote:"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu15pb$jqf$1 digitaldaemon.com...I come from the position that a compiler's job (apart from compiling) is to help the coder write correct programs. Of course, it can't do this to the Nth degree because how does the compiler 'know' what is correct or not? However, a compiler is often able to detect things that are *probably* incorrect or have a high probability to cause the application to function incorrectly. Thus I think that a good compiler is one that is allowed to have the ability to point these situations out to the code writer. (The compiler should also allow coders to tell the compiler that the coder knows what they are doing in this instance and just let me get on with it, okay?!) Now, what to do though if the code writer chooses to ignore the compiler's observations? I would suggest that the compiler should insert run time code that prevents the application from continuing if the application tries to continue past the code that the compiler thinks might (i.e. highly likely) cause bad results. You seem to be concerned that a coder will always insert 'dead code' just so the compiler will stop nagging them. Of course, some coders are just this immature. They either grow up or wither. As a coder matures, they will begin to take the compiler seriously and add in code that makes sense in the context. I'm 50 years old and I've been coding for 28 years. You will often find in my code such things as ... Abort("Logic Error #nnn. If you see this, a mistake was made by the programmer. This should never be seen. Inform your supplier about this message."); You might regard this as superfluous 'dead code', however a 'nice' message from the coder to the user is better than a compiler generated 'jargon' message that the user must decode.
Thus my switch constructs always have a default clause, and any 'if' statement in which an unhandled false would cause problems, I have an 'else' phrase. I always have a return statement at the end of my function text, even if it will never be executed (if all goes well). Call it overkill if you like, but in the long run, it keeps the users better informed and *more* importantly, keeps future maintainers aware of the previous coder's intentions and reasons for doing things. Currently, D is way too dogmatic and unreasonably unhelpful to the coder. It is mostly still better than C/C++ though. -- Derek Melbourne, AustraliaGuys, if we persist with the mechanism of no compile-time detection of return paths, and rely on the runtime exceptions, do we really think NASA would use D? Come on!NASA uses C, C++, Ada and assembler for space hardware. http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that: 1) make it impossible to ignore situations the programmer did not think of 2) the bias is to force bugs to show themselves in an obvious manner 3) not making it easy for the programmer to insert dead code to "shut up the compiler"
Feb 05 2005
"Derek" <derek psych.ward> wrote in message news:1uqf5a6fc42ei$.8hb72yklj5d2.dlg 40tude.net...You seem to be concerned that a coder will always insert 'dead code' just so the compiler will stop nagging them.Always? No. But it happens much more often than one would think. It usually happens when one is in a hurry, or thinking about something else at the time. One promises oneself that one will go back and fix it later. But that never happens.I'm 50 years old and I've been coding for 28 years. You will often find in my code such things as ... Abort("Logic Error #nnn. If you see this, a mistake was made by the programmer. This should never be seen. Inform your supplier about this message."); You might regard this as superfluous 'dead code',No, I do not. I think such practice as yours is fine. My concern is with the temptation to just insert a return statement without any abort call. I've seen it happen, a lot, by professional programmers. This is where the good intentions of the compiler error message have gone awry and caused things to be worse.however a 'nice' message from the coder to the user is better than a compiler generated 'jargon' message that the user must decode.This is good, and is also achievable by putting a catch in at the top level to catch any wayward uncaught exceptions and print out any message desired.Thus my switch constructs always have a default clause,That's my normal practice with C/C++ code; years ago I had a paper advocating such, called "Defensive Programming". Many of those ideas have been automated in D, as D will insert a default clause for you if none is specified, and that inserted default clause will throw an exception. It takes the place of all the default: assert(0); break; I write in C/C++. I think we are much more in agreement on these issues than not.and any 'if' statement in which an unhandled false would cause problems, I have an 'else' phrase. 
I always have a return statement at the end of my function text, even if it will never be executed (if all goes well). Call it overkill if you like, but in the long run, it keeps the users better informed and *more* importantly, keeps future maintainers aware of the previous coder's intentions and reasons for doing things.No, I don't regard it as overkill. But I would regard an inserted return statement (that is not preceded by one of the abort messages you showed above) that is not intended to ever be executed as masking a potential bug. I also believe that dead code, unless it is marked with an Abort() like your example, is a problem for future maintainers. He'll see that return, and wonder what it's for and why it doesn't seem to be possible to execute it. (As an aside, it's interesting how much dead code tends to accumulate in an app. You can find such by running a coverage analyzer. Dead code accumulates like all the useless DNA we carry around <g>. I've been thinking of writing such a tool for D, it would be a good complement to the profiler.)Currently, D is way too dogmatic and unreasonably unhelpful to the coder.But I think that a compiler requiring dead code to be inserted is being dogmatic! <g> Guess it's all in one's perspective.It is mostly still better than C/C++ though.I surely hope so!
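[Editor's sketch] Walter's suggestion of a catch at the top level, turning any wayward uncaught exception into a friendly message, might look like this in C++ (std::logic_error stands in for D's runtime errors; appMain() and runGuarded() are hypothetical names, not code from the thread):

```cpp
#include <exception>
#include <iostream>
#include <stdexcept>

// Sketch of a top-level catch-all: any uncaught exception (here,
// modelling D's implicit exception-throwing switch default with
// std::logic_error) surfaces as one friendly message at the
// outermost level instead of a raw runtime diagnostic.
int appMain()
{
    // Stand-in for the real application logic; something went wrong.
    throw std::logic_error("unhandled case 3 in dispatch()");
}

int runGuarded()
{
    try
    {
        return appMain();
    }
    catch (const std::exception &e)
    {
        std::cerr << "Internal error: " << e.what()
                  << "\nPlease inform your supplier.\n";
        return 1;
    }
}
```

In a real program, main() would simply return runGuarded(), so the 'jargon' message is rewritten once, in one place.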
Feb 05 2005
I don't think I made myself very clear. I would like to see a compiler that would insert run-time code that would crash an application in those instances where it detected a probable mistake made by the coder, *and* inform the coder about what the compiler has done about it. The coder can then take one of three choices for each of these instances, (1) Do nothing. The coder lives with the compiler informing them and accepts the compiler inserted code. (2) The coder modifies their code so that the situation detected by the compiler no longer exists. If the coder adds irresponsible code then they have just continued their stupid (or uneducated) behaviour, as this is just as poor as leaving it unattended. The issue here is, whose responsibility is it to code well? The coder or the compiler? I maintain it is the coder, and one role that the compiler plays is similar to that of a mentor or coach, rather than a moral enforcement officer. (3) The coder adds into their code, a statement that informs the compiler that the coder acknowledges the situation and that the compiler no longer needs to inform the coder of it. The compiler still inserts the run-time code but no longer informs the coder. With the current DMD behaviour, if the coder is the type of person who continually writes irresponsible code, then it is more likely that the first person to find this out would be the user of the application rather than the coder. I believe it is politer for the compiler to mention the poor code to the coder before the end user is disturbed by it. If the coder does not mend their ways, then they probably deserve their consumer backlash. However, if the compiler's inserted code is what the user sees, then the whole programming community is tarnished, not just the original coder.
On Sat, 5 Feb 2005 02:19:15 -0800, Walter wrote:"Derek" <derek psych.ward> wrote in message news:1uqf5a6fc42ei$.8hb72yklj5d2.dlg 40tude.net...I work in the software production industry. It pays my wages. 
I also have a load of hands-on experience from many projects - large and small. The behaviour you just described is not universal. In our development regime, peer inspection, bloody-minded testers, and management supported quality control processes virtually ensure that irresponsible coding practices are detected, corrected, and perpetrators retrained.You seem to be concerned that a coder will always insert 'dead code' just so the compiler will stop nagging them.Always? No. But it happens much more often than one would think. It usually happens when one is in a hurry, or thinking about something else at the time. One promises oneself that one will go back and fix it later. But that never happens.Define 'professional'? My definition includes the concept of responsibility.I'm 50 years old and I've been coding for 28 years. You will often find in my code such things as ... Abort("Logic Error #nnn. If you see this, a mistake was made by the programmer. This should never be seen. Inform your supplier about this message."); You might regard this as superfluous 'dead code',No, I do not. I think such practice as yours is fine. My concern is with the temptation to just insert a return statement without any abort call. I've seen it happen, a lot, by professional programmers. This is where the good intentions of the compiler error message have gone awry and caused things to be worse.Yes, and that is just one of many mechanisms to achieve this effect.however a 'nice' message from the coder to the user is better than a compiler generated 'jargon' message that the user must decode.This is good, and is also achievable by putting a catch in at the top level to catch any wayward uncaught exceptions and print out any message desired.Having D insert this code is not a problem. But having DMD be silent about it is. 
I would regard it as good manners to inform the coder about what you have done to their code.Thus my switch constructs always have a default clause,That's my normal practice with C/C++ code; years ago I had a paper advocating such, called "Defensive Programming". Many of those ideas have been automated in D, as D will insert a default clause for you if none is specified, and that inserted default clause will throw an exception. It takes the place of all the default: assert(0); break; I write in C/C++. I think we are much more in agreement on these issues than not.and any 'if' statement in which an unhandled false would cause problems, I have an 'else' phrase. I always have a return statement at the end of my function text, even if it will never be executed (if all goes well). Call it overkill if you like, but in the long run, it keeps the users better informed and *more* importantly, keeps future maintainers aware of the previous coder's intentions and reasons for doing things.No, I don't regard it as overkill. But I would regard an inserted return statement (that is not preceded by one of the abort messages you showed above) that is not intended to ever be executed as masking a potential bug. I also believe that dead code, unless it is marked with an Abort() like your example, is a problem for future maintainers. He'll see that return, and wonder what it's for and why it doesn't seem to be possible to execute it. (As an aside, it's interesting how much dead code tends to accumulate in an app. You can find such by running a coverage analyzer. Dead code accumulates like all the useless DNA we carry around <g>. I've been thinking of writing such a tool for D, it would be a good complement to the profiler.)See my choices above. The compiler is not required to force the coder to add dead code. The coder should be able to tell the compiler that "Hey! 
I know what I'm doing, ok!?"Currently, D is way too dogmatic and unreasonably unhelpful to the coder.But I think that a compiler requiring dead code to be inserted is being dogmatic! <g> Guess it's all in one's perspective.But it is still not as good as you can make it, Walter. You can make it even better still. The journey is not over with v1.0. -- Derek Melbourne, AustraliaIt is mostly still better than C/C++ though.I surely hope so!
Feb 05 2005
What you're advocating sounds very much like how compile time warnings work in typical C/C++ compilers. Is this what you mean? "Derek" <derek psych.ward> wrote in message news:w9f4rolhiyrh.t9depibclhn6$.dlg 40tude.net...I don't think I made myself very clear. I would like to see a compiler that would insert run-time code that would crash an application in those instances where it detected a probable mistake made by the coder, *and* inform the coder about what the compiler has done about it. The coder can then take one of three choices for each of these instances, (1) Do nothing. The coder lives with the compiler informing them and accepts the compiler inserted code. (2) The coder modifies their code so that the situation detected by the compiler no longer exists. If the coder adds irresponsible code then they have just continued their stupid (or uneducated) behaviour, as this is just as poor as leaving it unattended. The issue here is, whose responsibility is it to code well? The coder or the compiler? I maintain it is the coder, and one role that the compiler plays is similar to that of a mentor or coach, rather than a moral enforcement officer. (3) The coder adds into their code, a statement that informs the compiler that the coder acknowledges the situation and that the compiler no longer needs to inform the coder of it. The compiler still inserts the run-time code but no longer informs the coder. With the current DMD behaviour, if the coder is the type of person who continually writes irresponsible code, then it is more likely that the first person to find this out would be the user of the application rather than the coder. I believe it is politer for the compiler to mention the poor code to the coder before the end user is disturbed by it. If the coder does not mend their ways, then they probably deserve their consumer backlash. 
However, if the compiler's inserted code is what the user sees, then the whole programming community is tarnished, not just the original coder.
Feb 05 2005
On Sat, 5 Feb 2005 17:46:37 -0800, Walter wrote:What you're advocating sounds very much like how compile time warnings work in typical C/C++ compilers. Is this what you mean?Firstly, be they 'warning', 'information', 'error', 'FOOBAR', 'coaching', whatever... messages, I don't care. I don't care what you call the messages. I am asking for better (useful, helpful, detailed) information to be passed from the compiler to the coder. As I know you have some deep-seated hang-up with the concept of 'warning message', what say we call them Transitory Information/Problem Status messages (TIPS for short). Secondly, we are only talking about two or three distinct situations, not all the hundreds of possible constructs out there. Currently DMD *already* takes special action in these situations, so it's not a big difference. DMD already has all the information at its fingertips, so to speak; all it needs to do is pass this information on to the coder. If the coder decides to ignore them, or add stupid code, or tells DMD to shut up, then there is nothing more you can do. It's not your fault! It's okay, really. You did your best to help. In the long run, one cannot protect oneself, or others, from idiots. A fool-proof system just causes the universe to come up with a better class of fool. -- Derek Melbourne, Australia
Feb 05 2005
In article <w34t3lnducuh$.1cbudn9r87umu.dlg 40tude.net>, Derek says...On Sat, 5 Feb 2005 17:46:37 -0800, Walter wrote:A statement about inserting code could be made in verbose mode (-v) since that is a flag to the compiler to get all the details about what it is doing. Non-verbose mode should be... non-verbose.What you're advocating sounds very much like how compile time warnings work in typical C/C++ compilers. Is this what you mean?Firstly, be they 'warning', 'information', 'error', 'FOOBAR', 'coaching', whatever... messages, I don't care. I don't care what you call the messages. I am asking for better (useful, helpful, detailed) information to be passed from the compiler to the coder. As I know you have some deep seated hang-up with the concept of 'warning message', what say we call them Transitory Information/Problem Status messages (TIPS for short). Secondly, we are only talking about two or three distinct situations, not all the hundreds of possible constructs out there. Currently DMD *already* takes special action in these situations, so its not a big difference. DMD already has all the information at its fingertips, so to speak, all it needs to do is pass this information on to the coder. If the coder decides to ignore them, or add stupid code, or tells DMD to shut up, then there is nothing more you can do. Its not your fault! Its okay, really. You did your best to help. In the long run, one cannot protect oneself, or others, from idiots. A fool-proof system just causes the universe to come up with a better class of fool. -- Derek Melbourne, Australia
Feb 06 2005
On Mon, 7 Feb 2005 04:10:40 +0000 (UTC), Ben Hinkle wrote:A statement about inserting code could be made in verbose mode (-v) since that is a flag to the compiler to get all the details about what it is doing. Non-verbose mode should be... non-verbose.Now that's a decent idea. -- Derek Melbourne, Australia 7/02/2005 5:48:22 PM
Feb 06 2005
On Sat, 5 Feb 2005 17:46:37 -0800, Walter <newshound digitalmars.com> wrote:What you're advocating sounds very much like how compile time warnings work in typical C/C++ compilers. Is this what you mean?<snip> To me, there seems to be one important difference between this proposition and the current compile time warnings of a C/C++ compiler. That difference is that the D compiler is going to do something about it, e.g. switch(a) { case 1: case 2: } a C compiler might say, "Warning: no default case". The D compiler is going to add a default case which throws an exception if triggered; now, can't it also issue a notification of what it has done, e.g. "Note: default case added". To me, this behaviour cannot be called a 'warning', either by the literal definition of the word: http://dictionary.reference.com/search?q=warning or by the behaviour we are all used to. According to the web pages, the reason for removing warnings.. "No Warnings D compilers will not generate warnings for questionable code. Code will either be acceptable to the compiler or it will not be. This will eliminate any debate about which warnings are valid errors and which are not, and any debate about what to do with them. The need for compiler warnings is symptomatic of poor language design." This new imagined behaviour does not violate the above paragraph. As mentioned previously this new behaviour allows you to catch the bug in the compile phase and before the testing phase. Regan
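[Editor's sketch] A rough C++ rendering of the behaviour Regan describes, where a switch with no default acts as if an exception-throwing default had been written; the SwitchError type models D's runtime error, and lookup() is an illustrative function, not code from the thread:

```cpp
#include <stdexcept>
#include <string>

// A switch with no handler for the actual value does not fall through
// silently: the (in D, compiler-inserted) default throws, pinpointing
// the bug at the switch instead of letting execution wander on.
struct SwitchError : std::logic_error
{
    using std::logic_error::logic_error;
};

std::string lookup(int a)
{
    switch (a)
    {
    case 1:
        return "first";
    case 2:
        return "second";
    default:
        // Implicit in D; shown explicitly here.
        throw SwitchError("unhandled switch value");
    }
}
```

The proposal in the thread is only that the compiler additionally say "Note: default case added" when it inserts this clause for you.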
Feb 06 2005
"Regan Heath" <regan netwin.co.nz> wrote in message news:opslsqnpdr23k2f5 ally...On Sat, 5 Feb 2005 17:46:37 -0800, Walter <newshound digitalmars.com> wrote:Sounds like a pretty excellent compromise to me!!What you're advocating sounds very much like how compile time warnings work in typical C/C++ compilers. Is this what you mean?<snip> To me, there seems to be one important difference between this proposition and the current compile time warnings of a c/c++ compiler. That difference is that the D compiler is going to do something about it, eg. switch(a) { case 1: case 2: } a C compiler might say, "Warning: no default case". The D compiler is going to add a default case which throws an exception if triggered, now, can't it also issue a notification of what it has done eg. "Note: default case added". To me, this behaviour cannot be called a 'warning' but either the literal definition of the word: http://dictionary.reference.com/search?q=warning or by the behaviour we are all used to. According to the web pages, the reason for removing warnings.. "No Warnings D compilers will not generate warnings for questionable code. Code will either be acceptable to the compiler or it will not be. This will eliminate any debate about which warnings are valid errors and which are not, and any debate about what to do with them. The need for compiler warnings is symptomatic of poor language design." This new imagined behaviour does not violate the above paragraph. As mentioned previously this new behaviour allows you to catch the bug in the compile phase and before the testing phase. Regan
Feb 06 2005
On Mon, 7 Feb 2005 09:33:27 +1100, Matthew <admin stlsoft.dot.dot.dot.dot.org> wrote:"Regan Heath" <regan netwin.co.nz> wrote in message news:opslsqnpdr23k2f5 ally...It did to me too, however after more thought it appears more complex, see my later post dated: Mon, 07 Feb 2005 12:21:58 +1300 in this same thread for my later ramblings. ReganOn Sat, 5 Feb 2005 17:46:37 -0800, Walter <newshound digitalmars.com> wrote:Sounds like a pretty excellent compromise to me!!What you're advocating sounds very much like how compile time warnings work in typical C/C++ compilers. Is this what you mean?<snip> To me, there seems to be one important difference between this proposition and the current compile time warnings of a c/c++ compiler. That difference is that the D compiler is going to do something about it, eg. switch(a) { case 1: case 2: } a C compiler might say, "Warning: no default case". The D compiler is going to add a default case which throws an exception if triggered, now, can't it also issue a notification of what it has done eg. "Note: default case added". To me, this behaviour cannot be called a 'warning' but either the literal definition of the word: http://dictionary.reference.com/search?q=warning or by the behaviour we are all used to. According to the web pages, the reason for removing warnings.. "No Warnings D compilers will not generate warnings for questionable code. Code will either be acceptable to the compiler or it will not be. This will eliminate any debate about which warnings are valid errors and which are not, and any debate about what to do with them. The need for compiler warnings is symptomatic of poor language design." This new imagined behaviour does not violate the above paragraph. As mentioned previously this new behaviour allows you to catch the bug in the compile phase and before the testing phase. Regan
Feb 06 2005
Regan wrote:switch(a) { case 1: case 2: } a C compiler might say, "Warning: no default case". The D compiler is going to add a default case which throws an exception if triggered, now, can't it also issue a notification of what it has done eg. "Note: default case added".Not to pick on you Regan, because this is the second note of yours I've replied to, but if it is possible to leave off a default case *on purpose*, then I hate it when I have done what I intended and the compiler is spitting out any warnings or errors. Maybe I'm just anal, but I wouldn't like to see the compiler say "5 notes; 0 errors" when I'm through. I bring this up because I bet I'm not alone.
Feb 08 2005
On Tue, 08 Feb 2005 13:22:06 -0500, Charles Patterson <charliep1 excite.com> wrote:Regan wrote:I don't feel you are. :)switch(a) { case 1: case 2: } a C compiler might say, "Warning: no default case". The D compiler is going to add a default case which throws an exception if triggered, now, can't it also issue a notification of what it has done eg. "Note: default case added".Not to pick on you Regan, because this is the second note of yours I've replied to, but if it is possible to leave off a default case *on purpose*Sure, you leave it off because you don't believe it can 'ever' occur, I can understand that. The current D compiler behaviour is such that in this case the compiler inserts the 'assert' and if you're right, it never occurs, no harm done. But, if you're wrong you get an assert which clearly shows where the bug is. The alternative, if it did not add the assert, is for the program to continue having not done any of your switch statements, likely crashing shortly thereafter, in which case you'd be looking in the wrong place for the bug., then I hate it when I have done what I intended and the compiler is spitting out any warnings or errors. Maybe I'm just anal, but I wouldn't like to see the compiler say "5 notes; 0 errors" when I'm through. I bring this up because I bet I'm not alone.You're not alone. In fact, I agree with you. I too would find the "5 notes" annoying and would want to 'fix' them, I think it's part of our nature. This is exactly the behaviour Walter is describing which causes programmers to add 'dead code' in order to 'shut the compiler up'. So.. if it gave the 'note' you'd fix it, at best by adding a default case with an assert (which the compiler is already doing automatically), at worst by adding a default case with nothing in it. 
(there are other options, but I believe they fall into one of the two categories based on their outcome) I say at best and at worst because those two actions cause the two results I have described above: an assert at the bug, or a crash at some indeterminate stage later. I get the impression most people like the current behaviour WRT switch statements; it seems people dislike the same behaviour WRT missing returns. I am confused as to why; to me they seem like the same thing.. though there is a nagging in the back of my mind that I cannot put into words that says there is a difference somewhere. Regan
Feb 08 2005
I don't think I made myself very clear. I would like to see a compiler that would insert run-time code that would crash an application in those instances where it detected a probable mistake made by the coder, *and* inform the coder about what the compiler has done about it. The coder can then take one of three choices for each of these instances, (1) Do nothing. The coder lives with the compiler informing them and accepts the compiler inserted code. (2) The coder modifies their code so that the situation detected by the compiler no longer exists. If the coder adds irresponsible code then they have just continued their stupid (or uneducated) behaviour, as this is just as poor as leaving it unattended. The issue here is, whose responsibility is it to code well? The coder or the compiler? I maintain it is the coder, and one role that the compiler plays is similar to that of a mentor or coach, rather than a moral enforcement officer. (3) The coder adds into their code, a statement that informs the compiler that the coder acknowledges the situation and that the compiler no longer needs to inform the coder of it. The compiler still inserts the run-time code but no longer informs the coder.This is eminently sensible. I give it 0.01% chance of getting traction. Sarcasm aside, it's a warning by any other name, and we're not allowed warnings in D. :-(With the current DMD behaviour, if the coder is the type of person who continually writes irresponsible code, then it is more likely that the first person to find this out would be the user of the application rather than the coder. I believe it is politer for the compiler to mention the poor code to the coder before the end user is disturbed by it.Brilliantly put. I'm going to email myself a copy of this and quote you next chance I get. <g>If the coder does not mend their ways, then they probably deserve their consumer backlash. 
However, if the compiler's inserted code is what the user sees, then the whole programming community is tarnished, not just the original coder.In Chapter 1 of IC++, I expressed it more verbosely, as: " 1.1 Eggs And Ham I'm no doubt teaching all you gentle readers about egg-sucking here, but it's an important thing to state nevertheless. Permit me to wax awhile:
· It's better to catch a bug at design time than at coding/compile time[1].
· It's better to catch a bug at coding/compile time than during unit testing[2].
· It's better to catch a bug during unit testing than during debug system testing.
· It's better to catch a bug during debug system testing than in pre-release/beta system testing.
· It's better to catch a bug during pre-release/beta system testing than have your customer catch one.
· It's better to have your customer catch a bug (in a reasonably sophisticated/graceful manner), than to have no customers.
This is all pretty obvious stuff, although customers would probably disagree with the last one; best keep that one to ourselves. There are two ways in which such enforcements can take effect: at compile-time and at run-time, and these form the substance of this chapter. [1] I'm not a waterfaller, so coding time and compiling time are the same time for me. But even though I like unit-tests and have had some blisteringly fast pair-programming partnerships, I don't think I'm an XP-er [Beck2000] either. [2] This assumes you do unit testing. If you don't, then you need to start doing so, sharpish! " Walter was one of the reviewers of IC++, and never expressed reservations on this section. Was I wrong? Matthew P.S. Sorry for having the bad manners to be quoting myself. But worry not, most of the gnomes I proffer originate from others, so oftentimes I'm actually quoting someone worth listening to. <CG>
Feb 05 2005
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu3vu0$3sv$1 digitaldaemon.com...Walter was one of the reviewers of IC++, and never expressed reservations on this section. Was I wrong?It is the conventional wisdom, and all other things being equal, it's correct. It makes an implicit assumption that all bugs are equally bad. I'll refer you to the tradeoff I mentioned in the other posting, about preferring a lightweight bug in greater quantity to a heavyweight bug in lesser quantity.
Feb 05 2005
I come from the position that a compiler's job (apart from compiling) is to help the coder write correct programs. Of course, it can't do this to the Nth degree because how does the compiler 'know' what is correct or not? However, a compiler is often able to detect things that are *probably* incorrect or have a high probability to cause the application to function incorrectly. Thus I think that a good compiler is one that is allowed to have the ability to point these situations out to the code writer. (The compiler should also allow coders to tell the compiler that the coder knows what they are doing in this instance and just let me get on with it, okay?!) Now, what to do though if the code writer chooses to ignore the compiler's observations? I would suggest that the compiler should insert run time code that prevents the application from continuing if the application tries to continue past the code that the compiler thinks might (i.e. highly likely) cause bad results. You seem to be concerned that a coder will always insert 'dead code' just so the compiler will stop nagging them. Of course, some coders are just this immature. They either grow up or wither. As a coder matures, they will begin to take the compiler seriously and add in code that makes sense in the context. I'm 50 years old and I've been coding for 28 years. You will often find in my code such things as ... Abort("Logic Error #nnn. If you see this, a mistake was made by the programmer. This should never be seen. Inform your supplier about this message.");He he! Great stuffYou might regard this as superfluous 'dead code', however a 'nice' message from the coder to the user is better than a compiler generated 'jargon' message that the user must decode. Thus my switch constructs always have a default clause, and any 'if' statement in which an unhandled false would cause problems, I have an 'else' phrase. 
I always have a return statement at the end of my function text, even if it will never be executed (if all goes well). Call it overkill if you like, but in the long run, it keeps the users better informed and *more* importantly, keeps future maintainers aware of the previous coder's intentions and reasons for doing things. Currently, D is way too dogmatic and unreasonably unhelpful to the coder. It is mostly still better than C/C++ though.It is indeed mostly better. Unfortunately, in the ways in which it is not better it is disconcertingly flawed. I've been involved with D for nearly three years now, and I've yet to meet a client who doesn't have use-preventing reservations about it. Though a big fan of D, and a hoper for its future, I myself do not use it for anything serious.
Feb 05 2005
On Fri, 4 Feb 2005 18:53:15 -0800, Walter wrote:"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu15pb$jqf$1 digitaldaemon.com...Of course, this is all a moot point if you compile using the -release switch. In that case, one gets neither run time nor compile time messages, but the bugs still remain. So a really lazy/ignorant/impatient/stupid coder just always compiles using -release and never gets nagged by the compiler. -- Derek Melbourne, Australia 7/02/2005 10:36:21 AMGuys, if we persist with the mechanism of no compile-time detection of return paths, and rely on the runtime exceptions, do we really think NASA would use D? Come on!NASA uses C, C++, Ada and assembler for space hardware. http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that: 1) make it impossible to ignore situations the programmer did not think of 2) the bias is to force bugs to show themselves in an obvious manner 3) not making it easy for the programmer to insert dead code to "shut up the compiler" This is why the return and the switch defaults are the way they are.
Feb 06 2005
"Derek Parnell" <derek psych.ward> wrote in message news:cu69r8$2gqi$1 digitaldaemon.com...On Fri, 4 Feb 2005 18:53:15 -0800, Walter wrote:Shit! Is that so? I hadn't cottoned on to that. If that is indeed the case, then this whole thing is just a joke. Wake me up when things get sane again."Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu15pb$jqf$1 digitaldaemon.com...Of course, this is all a moot point if you compile using the -release switch. In that case, one gets neither run time nor compile time messages, but the bugs still remain. So a really lazy/ignorant/impatient/stupid coder just always compiles using -release and never gets nagged by the compiler.Guys, if we persist with the mechanism of no compile-time detection of return paths, and rely on the runtime exceptions, do we really think NASA would use D? Come on!NASA uses C, C++, Ada and assembler for space hardware. http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that: 1) make it impossible to ignore situations the programmer did not think of 2) the bias is to force bugs to show themselves in an obvious manner 3) not making it easy for the programmer to insert dead code to "shut up the compiler" This is why the return and the switch defaults are the way they are.
Feb 06 2005
I agree this behavior is not nice at all, unless you've never seen programs like "Windows" and believe that code can be totally bug-free when compiled in release mode. I might suggest making it an error when in release mode, but obviously that has flaws: not only *could* someone be MORE keen to shut the compiler up when switching to release, but it would be very annoying if it was working along and suddenly wouldn't compile using -release :/. So, I suppose that won't work. Personally, I think the asserts should go away and instead we should get your all-knowing exceptions thrown, whether in debug or release, with line and file information (assuming the case where no compile-time error/warning is shown.) Alas, I dream. -[Unknown]Wake me up when things get sane again.
Feb 06 2005
Paul Bonser wrote:Some mention of license problems got me thinking about this piece of standard Sun boilerplate: "Nuclear, missile, chemical biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited." Are we going to have that kind of restrictions on D, or will we be free to use it to guide weapons of mass destruction? :PLeave it to you guys to take a perfectly good semi-off-topic thread and bring it onto a topic :P -- -PIB -- "C++ also supports the notion of *friends*: cooperative classes that are permitted to see each other's private parts." - Grady Booch
Feb 07 2005
I'm proud to have fathered such a successful thread... -- -PIB -- "C++ also supports the notion of *friends*: cooperative classes that are permitted to see each other's private parts." - Grady Booch
Feb 22 2005
Paul Bonser wrote:I'm proud to have fathered such a successful thread...He he... well I don't think this is just /a/ thread... it's a multi-thread. Hard to believe there can be so many topics on one. :-) - John R.
Feb 22 2005