
digitalmars.D - Go Programming talk [OT]

reply bearophile <bearophileHUGS lycos.com> writes:
A recent talk about Go, "Google I/O 2010 - Go Programming"; the talk proper ends at
about the 33-minute mark:
http://www.youtube.com/user/GoogleDevelopers#p/u/9/jgVhBThJdXc

At 9:30 you can see a switch used on a type :-)
You can see a similar example here:
http://golang.org/src/pkg/exp/datafmt/datafmt.go
Look for the line
switch t := fexpr.(type) {


Originally Go looked almost like a toy language, and I thought Google was treating
it as one, but now I think Google is getting more serious about it, and Go has
developed some solid features for doing the basic things.

So maybe Andrei was wrong: you can design a good, flexible language that doesn't
need templates.

Compared to Go, D2 is far more complex. I don't know whether people today want to
learn a language as complex as D2.

Go doesn't target C++-class flexibility and performance (though it probably
wouldn't be too difficult to build a compiler able to produce very efficient Go
programs).

In the talk they show interfaces and other things done with free functions;
I don't know how those get compiled down to assembly.

Bye,
bearophile
Jun 06 2010
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Slides:
http://dl.google.com/googleio/2010/tech-talks-go-programming.pdf

Reddit thread:
http://www.reddit.com/r/programming/comments/cc2wf/go_language_google_io/
Jun 06 2010
prev sibling next sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
On 6/6/10, bearophile <bearophileHUGS lycos.com> wrote:
 At 9.30 you can see the switch used on a type type :-)
 You can see a similar example here:
 http://golang.org/src/pkg/exp/datafmt/datafmt.go
That example looks really similar to the D1 "D-style Variadic Functions" example here:
http://digitalmars.com/d/1.0/function.html

Of course, the D1 thing there is an if/else/if chain instead of a switch, but they
really look to operate basically the same way.
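That if/else typeid chain looks roughly like this minimal D sketch (not the docs'
exact code; the function and types here are made up): dispatch on the runtime
TypeInfo of each argument, much like Go's type switch does with a switch statement.

import core.vararg;
import std.stdio;

void print(...)
{
    foreach (i; 0 .. _arguments.length)
    {
        // Compare the runtime type info of each variadic argument.
        if (_arguments[i] == typeid(int))
            writeln("int: ", va_arg!int(_argptr));
        else if (_arguments[i] == typeid(double))
            writeln("double: ", va_arg!double(_argptr));
        else if (_arguments[i] == typeid(string))
            writeln("string: ", va_arg!string(_argptr));
        else
            writeln("unhandled: ", _arguments[i]);
    }
}

void main()
{
    print(1, 2.5, "three");
}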
Jun 06 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Adam Ruppe, on June 6 at 19:06, you wrote:
 On 6/6/10, bearophile <bearophileHUGS lycos.com> wrote:
 At 9.30 you can see the switch used on a type type :-)
 You can see a similar example here:
 http://golang.org/src/pkg/exp/datafmt/datafmt.go
That example looks really similar to the D1 "D-style Variadic Functions" example here: http://digitalmars.com/d/1.0/function.html Of course, the D1 thing there is an if/else/if chain instead of a switch, but they really look to operate basically the same.
It looks like Go now has scope (exit) =)
http://golang.org/doc/go_spec.html#DeferStmt

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Jun 06 2010
next sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
On 6/6/10, Leandro Lucarella <llucax gmail.com> wrote:
 It looks like Go now have scope (exit) =)
Not quite the same (defer is apparently only on function level), but definitely good to have. The scope statements are awesome beyond belief.
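To make the block-vs-function difference concrete, here's a minimal D sketch (the
file names are made up): scope(exit) fires at the end of the enclosing block, while
Go's defer only fires when the surrounding function returns.

import std.stdio;

void main()
{
    foreach (name; ["a.txt", "b.txt", "c.txt"])
    {
        auto f = File(name, "w");
        scope(exit) f.close();   // runs at the end of *this* iteration's block

        f.writeln("hello from ", name);
    }   // f is already closed here, before the next iteration begins
}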
Jun 06 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Adam Ruppe, on June 6 at 21:24, you wrote:
 On 6/6/10, Leandro Lucarella <llucax gmail.com> wrote:
 It looks like Go now have scope (exit) =)
Not quite the same (defer is apparently only on function level), but definitely good to have. The scope statements are awesome beyond belief.
Yes, they are not implemented exactly the same, but the concept is very similar.
And I agree that scope is really a life saver; it makes life much easier and code
much more readable.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Jun 07 2010
parent reply Adam Ruppe <destructionator gmail.com> writes:
On 6/7/10, Leandro Lucarella <llucax gmail.com> wrote:
 Yes, they are not implemented exactly the same, but the concept is very
 similar. And I agree that scope is really a life saver, it makes life
 much easier and code much more readable.
There is one important difference though: Go doesn't seem to have scope(failure)
vs scope(success). I guess it doesn't have exceptions, so it is moot, but it looks
to me like suckage. Take some recent code I wrote:

void bid(MySql db, Money amount)
{
    db.query("START TRANSACTION");
    scope(success) db.query("COMMIT");
    scope(failure) db.query("ROLLBACK");

    // moderately complex logic of verifying and storing the bid, written as
    // a simple linear block of code, with the faith that the scope guards
    // and exceptions keep everything sane
}

Just beautiful: that scales in complexity and leaves no error unhandled.

Looks like in Go, you'd be stuck mucking up the main logic with return value
checks, then using a goto fail; like pattern, which is bah. It works reasonably
well, but leaves potential for gaps in the checking, uglies up the work code, and
might have side effects on variables. (The Go spec says goto isn't allowed to skip
a variable declaration... so when I do:

auto result = db.query();
if(result.failed) goto error; // refuses to compile thanks to the next line!
auto otherResult = db.query();
error:

Ew, gross.)

That sucks hard. I prefer it to finally{} though, since finally doesn't scale as
well in code complexity (it'd do fine in this case, but not if there were nested
transactions), but both suck compared to the scalable, beautiful, and *correct*
elegance of D's scope guards.

That said, of course, Go's defer /is/ better than nothing, and it does have goto,
so it is a step up from C. But it is leagues behind D.
Jun 07 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Adam Ruppe wrote:
 That sucks hard. I prefer it to finally{} though, since finally
 doesn't scale as well in code complexity (it'd do fine in this case,
 but not if there were nested transactions), but both suck compared to
 the scalable, beautiful, and *correct* elegance of D's scope guards.
I agree. D's scope statement looks fairly innocuous and one can easily pass it by
with "blah, blah, another statement, blah, blah" but the more I use it the more
I realize it is a game changer in how one writes code.

For example, here's the D1 implementation of std.file.read:

-------------------------------------------------------------
/********************************************
 * Read file name[], return array of bytes read.
 * Throws:
 *      FileException on error.
 */

void[] read(char[] name)
{
    DWORD numread;
    HANDLE h;

    if (useWfuncs)
    {
        wchar* namez = std.utf.toUTF16z(name);
        h = CreateFileW(namez,GENERIC_READ,FILE_SHARE_READ,null,OPEN_EXISTING,
            FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN,cast(HANDLE)null);
    }
    else
    {
        char* namez = toMBSz(name);
        h = CreateFileA(namez,GENERIC_READ,FILE_SHARE_READ,null,OPEN_EXISTING,
            FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN,cast(HANDLE)null);
    }
    if (h == INVALID_HANDLE_VALUE)
        goto err1;

    auto size = GetFileSize(h, null);
    if (size == INVALID_FILE_SIZE)
        goto err2;

    auto buf = std.gc.malloc(size);
    if (buf)
        std.gc.hasNoPointers(buf.ptr);

    if (ReadFile(h,buf.ptr,size,&numread,null) != 1)
        goto err2;

    if (numread != size)
        goto err2;

    if (!CloseHandle(h))
        goto err;

    return buf[0 .. size];

err2:
    CloseHandle(h);
err:
    delete buf;
err1:
    throw new FileException(name, GetLastError());
}
----------------------------------------------------------

Note the complex logic to recover and unwind from errors (none of the called
functions throw exceptions), and the care with which this is constructed to ensure
everything is done properly. Contrast this with D2's version written by Andrei:

-----------------------------------------------------------
void[] read(in char[] name, size_t upTo = size_t.max)
{
    alias TypeTuple!(GENERIC_READ,
            FILE_SHARE_READ, (SECURITY_ATTRIBUTES*).init, OPEN_EXISTING,
            FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN,
            HANDLE.init)
        defaults;
    auto h = useWfuncs
        ? CreateFileW(std.utf.toUTF16z(name), defaults)
        : CreateFileA(toMBSz(name), defaults);

    cenforce(h != INVALID_HANDLE_VALUE, name);
    scope(exit) cenforce(CloseHandle(h), name);
    auto size = GetFileSize(h, null);
    cenforce(size != INVALID_FILE_SIZE, name);
    size = min(upTo, size);
    auto buf = GC.malloc(size, GC.BlkAttr.NO_SCAN)[0 .. size];
    scope(failure) delete buf;

    DWORD numread = void;
    cenforce(ReadFile(h, buf.ptr, size, &numread, null) == 1
            && numread == size, name);
    return buf[0 .. size];
}
--------------------------------------------------------

The code is the same logic, but using scope it is dramatically simplified. There's
not a single control flow statement in it! Furthermore, it is correct even if
functions like CloseHandle throw exceptions.
Jun 07 2010
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:hujd7m$11gj$1 digitalmars.com...
 Adam Ruppe wrote:
 That sucks hard. I prefer it to finally{} though, since finally
 doesn't scale as well in code complexity (it'd do fine in this case,
 but not if there were nested transactions), but both suck compared to
 the scalable, beautiful, and *correct* elegance of D's scope guards.
I agree. D's scope statement looks fairly innocuous and one can easily pass it by
with "blah, blah, another statement, blah, blah" but the more I use it the more
I realize it is a game changer in how one writes code. For example, here's the D1
implementation of std.file.read:
-------------------------------------------------------------
...
Looking at that, if I didn't know better, I would think you were a VB programmer ;)
Jun 07 2010
prev sibling next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Mon, Jun 7, 2010 at 11:19 AM, Walter Bright
<newshound1 digitalmars.com> wrote:

 Adam Ruppe wrote:

 That sucks hard. I prefer it to finally{} though, since finally
 doesn't scale as well in code complexity (it'd do fine in this case,
 but not if there were nested transactions), but both suck compared to
 the scalable, beautiful, and *correct* elegance of D's scope guards.
I agree. D's scope statement looks fairly innocuous and one can easily pass it by with "blah, blah, another statement, blah, blah" but the more I use it the more I realize it is a game changer in how one writes code. For example, here's the D1 implementation of std.file.read: ------------------------------------------------------------- ...
 ----------------------------------------------------------

 Note the complex logic to recover and unwind from errors (none of the
 called functions throw exceptions), and the care with which this is
 constructed to ensure everything is done properly. Contrast this with D2's
 version written by Andrei:

 ...
 The code is the same logic, but using scope it is dramatically simplified.
 There's not a single control flow statement in it! Furthermore, it is
 correct even if functions like CloseHandle throw exceptions.
Hmm, but I can actually understand your code. :-( --bb
Jun 07 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Hmm, but I can actually understand your code.  :-( 
Yeah, but how long would it take you to be sure that it is handling all errors correctly and cleaning up properly in case of those errors? It'd probably take me at least 5 intensive minutes. But in the scope version, once you're comfortable with scope and enforce, it wouldn't take half that.
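For reference, a minimal sketch of the enforce idiom (the condition and message
here are made up): enforce throws when the condition is false, so the call site
doesn't need an explicit if/throw or error-code check.

import std.exception;   // enforce; older Phobos had it in std.contracts
import std.stdio;

void main()
{
    int divisor = 0;

    // Throws an Exception with the given message when the condition is false.
    enforce(divisor != 0, "divisor must not be zero");

    writeln(10 / divisor);
}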
Jun 07 2010
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Mon, Jun 7, 2010 at 12:25 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 Hmm, but I can actually understand your code.  :-(
Yeah, but how long would it take you to be sure that it is handling all errors
correctly and cleaning up properly in case of those errors? It'd probably take me
at least 5 intensive minutes. But in the scope version, once you're comfortable
with scope and enforce, it wouldn't take half that.

Probably so.  What's cenforce do anyway?

--bb
Jun 07 2010
parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Probably so.  What's cenforce do anyway?
private T cenforce(T, string file = __FILE__, uint line = __LINE__)
    (T condition, lazy const(char)[] name)
{
    if (!condition)
    {
        throw new FileException(
            text("In ", file, "(", line, "), data file ", name),
            .getErrno);
    }
    return condition;
}
Jun 07 2010
prev sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
On 6/7/10, Bill Baxter <wbaxter gmail.com> wrote:
 Hmm, but I can actually understand your code.  :-(
The confusing part is probably cenforce, which is a little helper function in the
std.file module.

cenforce(condition, filename)

is the same as

if( ! condition)
    throw new FileException(filename, __FILE__, __LINE__, GetLastError());

So the new read() does still have control statements, but they are hidden in that
helper function template so you don't have to repeat them all over the main code.
Then, of course, the scope guards clean up in the event of those exceptions, so
you don't have to worry about the special error labels, which is what allows the
helper function to actually be useful!
Jun 07 2010
parent Walter Bright <newshound1 digitalmars.com> writes:
Adam Ruppe wrote:
 On 6/7/10, Bill Baxter <wbaxter gmail.com> wrote:
 Hmm, but I can actually understand your code.  :-(
The confusing part is probably cenforce, which is a little helper function in the std.file module. cenforce(condition, filename) is the same as
The tldr version of what cenforce does is convert a C-style error code return into an exception. Hence the "C" in enforce.
Jun 07 2010
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Adam Ruppe, on June 7 at 11:30, you wrote:
 On 6/7/10, Leandro Lucarella <llucax gmail.com> wrote:
 Yes, they are not implemented exactly the same, but the concept is very
 similar. And I agree that scope is really a life saver, it makes life
 much easier and code much more readable.
There is one important difference though: Go doesn't seem to have scope(failure) vs scope(success). I guess it doesn't have exceptions, so it is moot, but it looks to me like suckage.
Go doesn't have exceptions, so scope(failure/success) makes no sense. You can
argue about whether not having exceptions is good or bad (I don't have a strong
opinion about it; sometimes I feel exceptions are nice, sometimes I think they are
evil), though.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Jun 07 2010
parent reply Ali Çehreli <acehreli yahoo.com> writes:
Leandro Lucarella wrote:

 Go doesn't have exceptions, so scope(failure/success) makes no sense.
 You can argue about if not having exceptions is good or bad (I don't
 have a strong opinion about it, sometimes I feel exceptions are nice,
 sometimes I think they are evil), though.
Just to compare the two styles...

Without exceptions, every step of the code must be checked explicitly:

// C code:
int foo()
{
    int err = 0;

    // allocate resources

    err = bar();
    if (err) goto finally;

    err = zar();
    if (err) goto finally;

    err = car();
    if (err) goto finally;

finally:
    // do cleanup

    return err;
}

(Ordinarily, the if(err) checks are hidden inside macros like check_error,
check_error_null, etc.)

With exceptions, the actual code emerges:

// C++ or D code
void foo()
{
    // allocate resources

    bar();
    zar();
    car();
}

Ali
Jun 07 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Ali Çehreli, on June 7 at 14:41, you wrote:
 Leandro Lucarella wrote:
 
Go doesn't have exceptions, so scope(failure/success) makes no sense.
You can argue about if not having exceptions is good or bad (I don't
have a strong opinion about it, sometimes I feel exceptions are nice,
sometimes I think they are evil), though.
Just to compare the two styles... Without exceptions, every step of the code must
be checked explicitly:
...
With exceptions, the actual code emerges:
...
You are right, but when I see the former code, I know exactly what is going on,
and when I see the latter code I don't have a clue how errors are handled, or
whether they are handled at all. And try adding the try/catch statements: the code
is even more verbose than the code without exceptions.

It's a trade-off. When you don't handle the errors, exceptions might be a win, but
when you do handle them, I'm not so sure. And again, I'm not saying I particularly
like one more than the other; I don't have a strong opinion =)

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Jun 07 2010
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Leandro Lucarella wrote:
 Ali Çehreli, on June 7 at 14:41, you wrote:
 Leandro Lucarella wrote:

 Go doesn't have exceptions, so scope(failure/success) makes no sense.
 You can argue about if not having exceptions is good or bad (I don't
 have a strong opinion about it, sometimes I feel exceptions are nice,
 sometimes I think they are evil), though.
 Just to compare the two styles... Without exceptions, every step of
 the code must be checked explicitly:
 ...
 With exceptions, the actual code emerges:
 ...

 You are right, but when I see the former code, I know exactly was it
 going on, and when I see the later code I don't have a clue how errors
 are handled, or if they are handled at all. And try adding the try/catch
 statements, the code is even more verbose than the code without
 exceptions.

 Is a trade-off. When you don't handle the errors, exceptions might be
 a win, but when you do handle them, I'm not so sure. And again, I'm not
 saying I particularly like one more than the other, I don't have a
 strong opinion =)

Of course, the problem is that you rarely see the former code. Most of the time,
people just write the second one with or without exceptions and don't bother about
error checking if there are no exceptions. You are a lot more likely to get them
to handle errors properly with exceptions than without (particularly with D's
scope statements).

	Jerome
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jun 08 2010
parent reply Bane <branimir.milosavljevic gmail.com> writes:
Jérôme M. Berger Wrote:

 Leandro Lucarella wrote:
 Ali Çehreli, on June 7 at 14:41, you wrote:
 Leandro Lucarella wrote:

 Go doesn't have exceptions, so scope(failure/success) makes no sense.
 You can argue about if not having exceptions is good or bad (I don't
 have a strong opinion about it, sometimes I feel exceptions are nice,
 sometimes I think they are evil), though.
Just to compare the two styles... Without exceptions, every step of the code must
be checked explicitly:
...
With exceptions, the actual code emerges:
...
You are right, but when I see the former code, I know exactly was it going on, and when I see the later code I don't have a clue how errors are handled, or if they are handled at all. And try adding the try/catch statements, the code is even more verbose than the code without exceptions. Is a trade-off. When you don't handle the errors, exceptions might be a win, but when you do handle them, I'm not so sure. And again, I'm not saying I particularly like one more than the other, I don't have a strong opinion =)
Of course, the problem is that you rarely see the former code. Most of the time, people just write the second one with or without exceptions and don't bother about error checking if there are no exceptions. You are a lot more likely to get them to handle errors properly with exceptions than without (particularly with D's scope statements). Jerome -- mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
Being lazy as I am, exceptions are faster and easier to use than manual error
checking. There will always be some unchecked return value; with exceptions that
can't happen. In a way it's the same as GC vs. manual memory handling. I always
enclose each thread of a program in a try/catch, so everything is caught.
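Something like this minimal sketch of that habit (workBody is a made-up
placeholder): the entire body of the thread runs inside one try/catch, so nothing
can escape unnoticed.

import core.thread;
import std.stdio;

void workBody()
{
    // ... the thread's real work goes here ...
    throw new Exception("something went wrong");
}

void threadMain()
{
    try
        workBody();
    catch (Exception e)
        stderr.writeln("worker thread failed: ", e.msg);
}

void main()
{
    auto worker = new Thread(&threadMain);
    worker.start();
    worker.join();
}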
 
Jun 08 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Bane, on June 8 at 14:42, you wrote:
 Is a trade-off. When you don't handle the errors, exceptions might be
 a win, but when you do handle them, I'm not so sure. And again, I'm not
 saying I particularly like one more than the other, I don't have a
 strong opinion =)
 
Of course, the problem is that you rarely see the former code. Most of the time, people just write the second one with or without exceptions and don't bother about error checking if there are no exceptions. You are a lot more likely to get them to handle errors properly with exceptions than without (particularly with D's scope statements).
Being lazy as I am, exceptions are faster and easier to use than manual error checking. There will always be some unchecked return value, with exceptions it can't happen. In a way same as GC vs manual memory handling. Each thread of program I make I always enclose in try catch, so everything is cought.
Yes, I agree that "safety" is the best argument in favour of exceptions (as
explicitness is the best argument in favour of no-exceptions). The Zen of Python
puts it this way:

    Errors should never pass silently.
    Unless explicitly silenced.

That's what I like the most about exceptions. I think try/catch is really ugly
though. There has to be something better.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Jun 08 2010
next sibling parent Adam Ruppe <destructionator gmail.com> writes:
On 6/8/10, Leandro Lucarella <llucax gmail.com> wrote:
 That's what I like the most about exceptions. I think try/catch is
 really ugly though. There has to be something better.
Isn't there actually a function buried somewhere in Phobos that translates
exceptions into return values? Yes, there is:

http://dpldocs.info/std.contracts.collectException

Still not the most beautiful thing ever, but perhaps something along those lines
would be better for you?
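Roughly like this minimal sketch (the failing conversion is just an example;
collectException is imported from std.exception here, which is where the contents
of std.contracts ended up):

import std.conv;
import std.exception;
import std.stdio;

void main()
{
    int value;

    // Evaluates the expression and returns the thrown exception (or null)
    // instead of letting it propagate.
    auto err = collectException(to!int("not a number"), value);

    if (err !is null)
        writeln("conversion failed: ", err.msg);
    else
        writeln("converted: ", value);
}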
Jun 08 2010
prev sibling parent reply Pelle <pelle.mansson gmail.com> writes:
On 06/09/2010 01:04 AM, Leandro Lucarella wrote:
 Bane, on June 8 at 14:42, you wrote:
 Is a trade-off. When you don't handle the errors, exceptions might be
 a win, but when you do handle them, I'm not so sure. And again, I'm not
 saying I particularly like one more than the other, I don't have a
 strong opinion =)
Of course, the problem is that you rarely see the former code. Most of the time, people just write the second one with or without exceptions and don't bother about error checking if there are no exceptions. You are a lot more likely to get them to handle errors properly with exceptions than without (particularly with D's scope statements).
Being lazy as I am, exceptions are faster and easier to use than manual error checking. There will always be some unchecked return value, with exceptions it can't happen. In a way same as GC vs manual memory handling. Each thread of program I make I always enclose in try catch, so everything is cought.
Yes, I agree that "safety" is the best argument in favour of exceptions (as explicitness is the best argument in favour of no-exceptions). The Python Zen put it this way: Errors should never pass silently. Unless explicitly silenced. That's what I like the most about exceptions. I think try/catch is really ugly though. There has to be something better.
Careful use of scope(exit) and simply avoiding catching exceptions works well for me. Except when you have to catch, of course. :)
Jun 09 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/09/2010 06:28 AM, Pelle wrote:
 On 06/09/2010 01:04 AM, Leandro Lucarella wrote:
 Bane, on June 8 at 14:42, you wrote:
 Is a trade-off. When you don't handle the errors, exceptions might be
 a win, but when you do handle them, I'm not so sure. And again, I'm
 not
 saying I particularly like one more than the other, I don't have a
 strong opinion =)
Of course, the problem is that you rarely see the former code. Most of the time, people just write the second one with or without exceptions and don't bother about error checking if there are no exceptions. You are a lot more likely to get them to handle errors properly with exceptions than without (particularly with D's scope statements).
Being lazy as I am, exceptions are faster and easier to use than manual error checking. There will always be some unchecked return value, with exceptions it can't happen. In a way same as GC vs manual memory handling. Each thread of program I make I always enclose in try catch, so everything is cought.
Yes, I agree that "safety" is the best argument in favour of exceptions (as explicitness is the best argument in favour of no-exceptions). The Python Zen put it this way: Errors should never pass silently. Unless explicitly silenced. That's what I like the most about exceptions. I think try/catch is really ugly though. There has to be something better.
Careful use of scope(exit) and simply avoiding catching exceptions works well for me. Except when you have to catch, of course. :)
Same here. I think a good application only has a few try/catch statements, so the
fact that try is a relatively heavy statement is not very important.

Andrei
Jun 09 2010
next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, on June 9 at 09:52, you wrote:
That's what I like the most about exceptions. I think try/catch is
really ugly though. There has to be something better.
Careful use of scope(exit) and simply avoiding catching exceptions works well for me. Except when you have to catch, of course. :)
Same here. I think a good application only has few try/catch statements, so the fact that try is a relatively heavy statement is not very important.
I don't feel the same, but that's probably because I mostly write programs that
need to take good care of errors. Maybe the vast majority of programs can afford
to exit with a nice error message, catching any exception at main level (or simply
letting the runtime print the error/stack trace for you).

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Jun 09 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/09/2010 11:14 AM, Leandro Lucarella wrote:
 Andrei Alexandrescu, on June 9 at 09:52, you wrote:
 That's what I like the most about exceptions. I think try/catch is
 really ugly though. There has to be something better.
Careful use of scope(exit) and simply avoiding catching exceptions works well for me. Except when you have to catch, of course. :)
Same here. I think a good application only has few try/catch statements, so the fact that try is a relatively heavy statement is not very important.
I don't feel the same, but that's probably because I mostly write programs that need to take good care of errors.
I'm not sure that few try/catch statements imply there's some sloppy approach to
errors going on. Exceptions are all about centralized error handling, so too many
try/catch statements indicate a failure to centralize error handling.

Andrei
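A minimal sketch of what "centralized" means here (the function and message are
made up): the lower layers just throw, and a single try/catch near main() decides
what to do about it.

import std.stdio;

void loadConfig(string path)
{
    // A hypothetical failure somewhere deep in the program.
    throw new Exception("cannot read " ~ path);
}

int main()
{
    try
    {
        loadConfig("app.conf");
        // ... the rest of the program ...
        return 0;
    }
    catch (Exception e)
    {
        stderr.writeln("fatal: ", e.msg);
        return 1;
    }
}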
Jun 09 2010
prev sibling parent Bane <branimir.milosavljevic gmail.com> writes:
Andrei Alexandrescu Wrote:

 On 06/09/2010 06:28 AM, Pelle wrote:
 On 06/09/2010 01:04 AM, Leandro Lucarella wrote:
 Bane, on June 8 at 14:42, you wrote:
 Is a trade-off. When you don't handle the errors, exceptions might be
 a win, but when you do handle them, I'm not so sure. And again, I'm
 not
 saying I particularly like one more than the other, I don't have a
 strong opinion =)
Of course, the problem is that you rarely see the former code. Most of the time, people just write the second one with or without exceptions and don't bother about error checking if there are no exceptions. You are a lot more likely to get them to handle errors properly with exceptions than without (particularly with D's scope statements).
Being lazy as I am, exceptions are faster and easier to use than manual error checking. There will always be some unchecked return value, with exceptions it can't happen. In a way same as GC vs manual memory handling. Each thread of program I make I always enclose in try catch, so everything is cought.
Yes, I agree that "safety" is the best argument in favour of exceptions (as explicitness is the best argument in favour of no-exceptions). The Python Zen put it this way: Errors should never pass silently. Unless explicitly silenced. That's what I like the most about exceptions. I think try/catch is really ugly though. There has to be something better.
Careful use of scope(exit) and simply avoiding catching exceptions works well for me. Except when you have to catch, of course. :)
Same here. I think a good application only has few try/catch statements, so the fact that try is a relatively heavy statement is not very important. Andrei
In my experience exceptions have one great advantage: they are far easier to add
to or remove from an existing codebase without changing a lot of things. They
require much less coding and do not obfuscate code flow. So for what they offer
and at what price, they are extremely worth it.
Jun 09 2010
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Pelle, on June 9 at 13:28, you wrote:
Yes, I agree that "safety" is the best argument in favour of exceptions
(as explicitness is the best argument in favour of no-exceptions). The
Python Zen put it this way:

Errors should never pass silently.
Unless explicitly silenced.

That's what I like the most about exceptions. I think try/catch is
really ugly though. There has to be something better.
Careful use of scope(exit) and simply avoiding catching exceptions works well for me. Except when you have to catch, of course. :)
I'm talking precisely about the case when you have to catch. In that case I think
the resulting code is uglier and more convoluted than the code to manage errors by
returning error codes or similar.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Jun 09 2010
parent reply Leandro Lucarella <llucax gmail.com> writes:
Leandro Lucarella, on June 9 at 11:37, you wrote:
 Pelle, on June 9 at 13:28, you wrote:
Yes, I agree that "safety" is the best argument in favour of exceptions
(as explicitness is the best argument in favour of no-exceptions). The
Python Zen put it this way:

Errors should never pass silently.
Unless explicitly silenced.

That's what I like the most about exceptions. I think try/catch is
really ugly though. There has to be something better.
Careful use of scope(exit) and simply avoiding catching exceptions works well for me. Except when you have to catch, of course. :)
I'm talking precisely about the case when you have to catch. In that case I think the resulting code is uglier and more convoluted than the code to manage errors by returning error codes or similar.
BTW, here is a PhD thesis with a case against exceptions. I didn't read it (just
had a peek) and it's rather old (1982), so it might not be that interesting, but
I thought I'd post it here since the thread became mostly about exceptions and
someone might be interested =)

http://web.cecs.pdx.edu/~black/publications/Black%20D.%20Phil%20Thesis.pdf

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Jun 09 2010
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Leandro Lucarella" <llucax gmail.com> wrote in message 
news:20100609162223.GC16920 burns.springfield.home...
 BTW, here is a PhD thesis with a case against exceptions. I didn't read
 it (just have a peek) and it's rather old (1982), so it might be not
 that interesting, but I thought posting it here as the thread became
 mostly about exceptions and someone might be interested =)

 http://web.cecs.pdx.edu/~black/publications/Black%20D.%20Phil%20Thesis.pdf
I didn't have time to read the whole thing, but I read the Abstract, Introduction,
and roughly the first half of the Conclusion. But I did ignore all the sections
where he yapps on about what a programming language is, what their point is and
what makes a good one (Maybe in his next paper he'll take a stab at P=NP while
making sure to explain in perfect detail why 2 + 2 is 4.)

Regarding what he said about exceptions: Some things struck me as things that may
have seemed to make reasonable sense back in 1982 when it was written, but don't
anymore. Other times, the guy seemed completely out of his mind.

I did take a [very] brief skim through chapters 5 and 6, though, where he explains
his alternative to exceptions. I could be wrong, but it looks like what he's
describing amounts to using algebraic data types and/or tagged unions along with
having the concept of "error" be a value of its own strong type that works similar
to propagated NaNs or the propagated "error" expressions Walter recently
implemented in DMD to improve error-reporting. Ie, something like this:

--------------------------------
// Singleton, and vaguely similar to the "null object" idiom.
class Error
{
    public static Error error;
    public static this()
    {
        error = new Error();
    }
    private this() {}
}

alias Algebraic!(double, Error) doubleOrErr;

doubleOrErr divide(doubleOrErr a, doubleOrErr b)
{
    if(a == Error.error || b == Error.error || b == 0)
        return Error.error;

    return a/b;
}
--------------------------------

I think that's an interesting idea that might be worth looking into. Although it
would seem to bloat the memory usage and maybe speed of a program since just about
anything would have to be an Algebraic.

My analysis of what I saw as his main points about exceptions:

In the Abstract:

"The very real problems of survival after a component violates its specification
[ie, general fault-tolerance] is not addressed by exception handling. [(He appears
to be talking about things that are addressed by redundant systems, watchdogs,
fail-safes, etc.)]"

True. But neither do the things exceptions are used to replace. And exceptions at
least make it possible in a few small cases (moreso than the things exceptions are
used to replace).

In the Introduction:

"[People think exceptions are a cure-all for program errors. They're not, and
thinking that they do is dangerous.]"

Maybe there were people making/believing such broad claims about exceptions back
then, but I don't think anyone does now. It's kind of as if he were saying "GC is
worse than manual memory management because it makes people believe they never
have to think about memory management issues". Programmers, of course, have
already figured out that's not the case.

"On practical grounds, too, exception handling seems to be poorly motivated..."

The paragraph that starts with that sentence (in the introduction) seems to be
nothing but a bunch of hand-waving.

"[A compiler that supports exceptions] is likely to be larger, slower, more prone
to bugs, and less explicit with its error messages."

Probably would have been true in 1982, when the paper was written, but these days
the difference would be minimal.

"Each nested recovery compartment represents a frame of reference in which
progressively more disastrous events are anticipated, but progressively less
comprehensive recovery is attempted."

That's completely dependent on the exception-handling code. What the system
allows, and he seems to be ignoring, is an in-process version of a watchdog.
Not as reliable as an out-of-process one, granted, but it can help.

In the Conclusion, under the heading "8.1 Exception Handling is Unnecessary":

"...abstract specifications can be written without abstract errors..."

I'd have to read the chapter he cites to have anything meaningful to say about
this.

"...a programming language exception mechanism neither prevents the construction
of erroneous programs nor provides a way of dealing with errors that absolves the
programmer from thinking about them."

Straw-man. I don't know what the programming climate was like in 1982, but I've
never heard anyone claim that exceptions do either of those things. However, they
*do* improve what had ended up becoming a very bad situation, so the idea that
they're "unnecessary" is, at best, an overstatement.

"[Bugs (ie, cases where the program deviates from its specification)] should be
corrected before the program is delivered, not handled while it is being run."

HA HA HA HA HA!!! (/Wipes tear/) AHH HA HA HA HA HA!!!!

Translation: He seems to be living in lala-bizarro-land where the waterfall model
produces good, reliable results and one can simply "choose" to find all bugs
before shipping. Also, everybody is always happy, there is no disease, and
everyone spends all day frolicking in the daisies, climbing rainbows and playing
"Kumbayah" on their guitars.

Then there's a big convoluted, meandering three-paragraph rant involving illegal
states, but it doesn't appear to have any obvious point I can discern. My best
guess is he's trying to say that exceptions are part of the specification of the
program, therefore they don't technically qualify as actually being "exceptions"
at all, which, of course, is just useless semantic bickering. But frankly, I'm not
sure even he knows what the hell his point in those paragraphs is.

"Exception handling mechanisms are unnecessary because they can always be replaced
by [he lists some other things here]"

A thoroughly pointless pedantic argument as most useful programming language
constructs can be replaced by other ones. So exceptions can be, too. Who is
surprised? Who the hell cares?

In the Conclusion, under the heading "8.2 Exception Handling is Undesirable":

"The reason that exception handling mechanisms are undesirable is that, whereas
they are irrelevant to the correction of erroneous programs, the difficulties they
introduce are very real."

I really did go into this paper with an open mind, but the deeper I get, the
deeper the hyperbole seems to get. I can certainly grant that there are
difficulties introduced by exceptions, but what bugs me is the implication that
exceptions are undesirable purely because "they are irrelevant to the correction
of erroneous programs" (and have other drawbacks). There are two ways you can take
that "irrelevant to the correction of erroneous programs" part, and either way
leads to the argument becoming a big pile of bullcrap:

A. You can interpret it literally and pedantically, in which case it might be
*technically* true, but conveniently ignores other benefits of exceptions (ex,
they are relevant *and* useful for managing and mitigating problems caused by
certain types of programmer/user/system/hardware errors, and for keeping
high-level intent clear while doing so).

B. You can interpret it more liberally, in which case it's just a plain load of
hyperbolic crap right at face value.

Then he lists the difficulties he sees exceptions as introducing:

"(i) They permit non-local transfers of protocol, re-introducing the dangers of
the goto and the problem of clearing up."

The only such things they introduce that aren't already comparable to "return",
"if", and "while" have been solved by "finally" and scope guards (and the
occasional RAII helps too).

"(ii) [something about different processes]"

I'm not going to re-type or refute this one as I'm not really sure what he's
getting at anyway.

"(iii) If "functions" are allowed to generate exceptions, a rift is introduced
between the concepts of "function" in the programming language and in mathematics.
This complicates the semantics of the language and makes reasoning about programs
more difficult."

I don't know about 1982, but I've been programming since the late 80's, and as far
back as I can remember, the rift between the languages of math and programming
(particularly imperative) has always been a mile wide. He may as well complain
that chemical "equations" don't behave the same as they do in algebra.

"(iv) Recursive exception handling () is extremely complex."

It's certainly an issue, but I don't know about it being "extremely" complex.
Things like linked exceptions (ie, an exception with a "next") certainly help.

"(v) An exception handling mechanism may be used to deliver information to a level
of abstraction where it ought not to be available, violating the principle of
information hiding."

You can violate information hiding with an ordinary function, too. Or with
pointers. So what? It may make it easier to accidentally leak some information,
but I've used exceptions extensively and have never found it to be particularly
problematic in actual practice. Actually, such problems are pretty easy to avoid
if you just consider anything you throw to be "public".

"(vi) [checked exceptions are a PITA]..."

Agreed.

"(vi) ...On the other hand, if exceptions are not specified then the dangers of
using them are increased, and many of the advantages of strong typing are lost."

I haven't a clue why he feels the advantages of strong typing would be lost. As
for the dangers of using exceptions being increased, that may be true to a small
extent, but I think he's overstating the dangers. Also, there's nothing preventing
the creation of a code-analysis tool that checks what exceptions can be thrown
(directly or indirectly) from each function (in fact this has always been one of
my arguments against checked exceptions). That would provide the best of both
worlds.
Jun 09 2010
next sibling parent Bane <branimir.milosavljevic gmail.com> writes:
 "[Bugs (ie, cases where the program deviates from its specification)] should 
 be corrected before the program is delivered, not handled while it is being 
 run."
 
 HA HA HA HA HA!!! (/Wipes tear/) AHH HA HA HA HA HA!!!!
 
 Translation: He seems to be living in lala-bizarro-land where the waterfall 
 model produces good, reliable results and one can simply "choose" to find 
 all bugs before shipping. Also, everybody is always happy, there is no 
 disease, and everyone spends all day frolicking in the daisies, climbing 
 rainbows and playing "Kumbayah" on their guitars.
Sounds to me like that guy never wrote a program that was used for anything other
than lecturing.
Jun 09 2010
prev sibling parent "Jer" <jersey chicago.com> writes:
Nick Sabalausky wrote:
 "Leandro Lucarella" <llucax gmail.com> wrote in message
 news:20100609162223.GC16920 burns.springfield.home...
 BTW, here is a PhD thesis with a case against exceptions. I didn't
 read it (just have a peek) and it's rather old (1982), so it might
 be not that interesting, but I thought posting it here as the thread
 became mostly about exceptions and someone might be interested =)

 http://web.cecs.pdx.edu/~black/publications/Black%20D.%20Phil%20Thesis.pdf
I didn't have time to read the whole thing, but
But he noted you 28 years ago. I wonder if that student now has made millions of dollars on the web.
 I read the Abstract,
 Introduction, and roughly the first half of the Conclusion. But I did
 ignore all the sections where he yapps on about what a programming
 language is,
I don't remember that, but you seem to take offense to it. Should I go back and read those parts? (rhetorical).
 what their point is and what makes a good one (Maybe in
 his next paper he'll take a stab at P=NP while making sure to explain
 in perfect detail why 2 + 2 is 4.)
In what you have "said" thus far, makes you look like someone flustered that your snakeoil is just what it is. (At least it is the feeling I get straight away).
 Regarding what he said about exceptions: Some things struck me as
 things that may have seemed to make reasonable sense back in 1982
 when it was written, but don't anymore.
As you are obviously flustered and about to go off on a rant, I'll just sit back and enjoy the show. ;)
 Other times, the guy seemed
 completely out of his mind.
Oh yeah, now you sound "objective"! LOL!
 I did take a [very] brief skim through chapters 5 and 6, though,
 where he explains his alternative to exceptions. I could be wrong,
 but it looks like what he's describing amounts to using algebraic
 data types and/or tagged unions
[blah, blah and Walter] Nuff said, no comment to your affiliations. I hear that "grouping" is "good".
 My analysis of what I saw as his main points about exceptions:

 In the Abstract:

 "The very real problems of survival after a component violates its
 specification [ie, general fault-tolerance] is not addressed by
 exception handling. [(He appears to be talking about things that are
 addressed by redundant systems, watchdogs, fail-safes, etc.)]"
No, he refers to such. Hello. We're all "big boys" here. We know about real-time systems concepts (though most of "us" are probably (?) not real-time systems programmers). Don't read "advanced" material if you are oblivious to it and then comment on it. Nor use it as a vehicle to show you know the elements of it. Nuff said: stop wasting bandwidth with your personal issues.
 True. But neither do the things exceptions are used to replace. And
 exceptions at least make it possible in a few small cases (moreso
 than the things exceptions are used to replace).
The point, may have been, and surely was (like I have time to read such a WORDY thing word-for-word, mind you), that a LARGE amount of complexity is not justified for incremental and debatable gain. You seem to be playing politics in the "in-between the line" space, in which, of course, you are completely just outright being a "liar" (for lack of the better term).
 In the Introduction:

 "[People think exceptions are a cure-all for program errors. They're
 not, and thinking that they do is dangerous.]"

 Maybe there were people making/believing such broad claims about
 exceptions back then, but I don't think anyone does now.
I don't 100% know, but I think that is still true and because I am alive today and can assess the programming languages of today and see that they "don't get it". "big ships turn slowly"-syndrome. Could a 1982 "rant thesis" be relevant in 2010! "Hypocrisy!" is outcried!
 It's kind of
 as if he were saying "GC
Ah. Can't have a thread of discussion in a D NG without interjecting GC. Sounds like where one bugs out of other religions.
 "On practical grounds, too, exception handling handling seems to be
 poorly motivated..."

 The paragraph that starts with that sentence (in the introduction)
 seems to be nothing but a bunch of hand-waving.
No one is going to go back and ponder your tirade about the details of a document. Get a grip dude.
 "[A compiler that supports exceptions] is likely to be larger,
 slower, more prone to bugs, and less explicit with its error
 messages."
 Probably would have been true in 1982, when the paper was written,
 but these days the difference would be minimal.
You are arguing FOR complexity, while he argues for simplicity. See the point now?
 "Each nested recovery compartment represents a frame of reference in
 which progressively more disastrous events are anticipated, but
 progressively less comprehensive recovery is attempted."

 That's completely dependent on the exception-handling code.
No, no... hang on.. what "exception handling code"? Godwin's law? If you start out with the mindset that "exceptions are..." and then conclude "so see, exceptions are", you haven't said anything.
 In the Conclusion, under the hading "8.1 Exception Handling is
 Unnecessary":
 "...abstract specifications can be written without abstract errors..."

 I'd have to read the chapter he cites to have anything meaningful to
 say about this.
I didn't remember that as key, but it was a long read. Certainly it is too abstract! ;)
 "...a programming language exception mechanism neither prevents the
 construction of erroneous programs nor provides a way of dealing with
 errors that absolves the programmer from thinking about them."

 Straw-man.
Not at all. He said: 1. a programming language exception mechanism does not prevent the construction of erroneous programs. 2. a programming language exception mechanism curtails the programmer from considering all possible consequences of action. I don't quote him. (2) is what is embodied in the thesis and to single out the passage and not realize what was meant, is to be 1. lazy 2. unknowledgeable 3. political. Which are you: lazy, unknowledgeable or political? "ain't no strawman"... except for your thoughts on the topic.
 I don't know what the programming climate was like in
 1982, but I've never heard anyone claim that exceptions do either of
 those things.
Hello. It doesn't require such harkening.
 However, they *do* improve what had ended up becoming a
 very bad situation, so the idea that they're "unnecessary" is, at
 best, an overstatement.
Now THAT is a strawman! You are a funny guy!
 "[Bugs (ie, cases where the program deviates from its specification)]
 should be corrected before the program is delivered, not handled
 while it is being run."

 HA HA HA HA HA!!! (/Wipes tear/) AHH HA HA HA HA HA!!!!
Well that must be an "inside secret" then. Show me, then laugh and maybe I'll laugh with you.
 Translation: He seems to be living in lala-bizarro-land where the
 waterfall model produces good, reliable results and one can simply
 "choose" to find all bugs before shipping. Also, everybody is always
 happy, there is no disease, and everyone spends all day frolicking in
 the daisies, climbing rainbows and playing "Kumbayah" on their
 guitars.
He actually seems to be THINKING about programming rather than just doing it as instructed. (Ha ha on me, cuz I wouldn't hire anyone to program for me but other than MY way).
 Then there's a big convoluted, meandering three-paragraph rant
 involving illegal states but doesn't appear have any obvious point I
 can discern.
You seem to be deeply and emotionally entranced by the document. Like those that follow the Grateful Dead?
 My best guess
My bad, people here actually attune to your "best guesses"?
  is he's trying to say that exceptions are
 part of the specification of the program,
OK, I see you studying under him. You're trying to figure it out (?). (I think you have a paradigm though). He has stated many times that no one know what an "exception" is. Hello.
 therefore they don't
 technically qualify as actually being "exceptions" at all, which, of
 course, is just useless semantic bickering. But frankly, I'm not sure
 even he knows what the hell his point in those paragraphs is.
Obviously YOU don't. (And keep ME out of this please, it's between you and him, not me).
 "Exception handling mechanisms are unnecessary because they can
 always be replaced by [he lists some other things here]"

 A thoroughly pointless pedantic argument as most useful programming
 language constructs can be replaced by other ones.
No. You are playing a semantic game, ignoring the context of the thesis. It doesn't make much sense to go on and on in a conversation (or, duh, thesis, cuz they'd throw you out of grad school, unless your parents had money... or on a (false) date with a woman). Him (in context): I did it this way to same effect. You: Wait!, I'll generalize to a level above and "win"!. See how you are just a boy and not a man?
 In the Conclusion, under the hading "8.2 Exception Handling is
 Undesirable":
 "The reason that exception handling mechanisms are undesirable is
 that, whereas they are irrelevant to the correction of erroneous
 programs, the difficulties they introduce are very real."

 I really did go into this paper with an open mind, but the deeper I
 get, the deeper the hyperbole seems to get.
And the more your snakeoil sales can climb! huh. Dude, he called you snakeoil from his grave (I surely hope he went on to do things that make him happy).
 I can certainly grant
Oh? How much? I don't talk to anyone who can't grant more than a million. THINK before you speak.
 that there are difficulties introduced by exceptions,
THAT is a strawman.
  but what bugs
 me
That is an indication of paradigm or politics.
 is the implication that exceptions are undesirable purely because
That is the godwin defense.
 Then he lists the difficulties he sees exceptions as introducing:

 "(i) They permit non-local transfers of protocol, re-introducing the
 dangers of the goto and the problem of clearing up."

 The only such things they introduce that aren't already comparable to
 "return", "if", and "while", have been solved by "finally" and scope
 guards (and the occasional RAII helps too).
Hello. Your stock price just went to 0.
 "(ii) [something about different processes]"

 I'm not going to re-type or refute this one as I'm not really sure
 what he's getting at anyway.
Then why mention it? His thesis disrupted your whole life and now you need to regroup or need help? I can't help you there! (Good luck with that idiocy).
 "(iii) If "functions" are allowed to generate exceptions, a rift is
 introduced between the concepts of "function" in the programming
 language and in mathematics. This complicates the semantics of the
 language and makes reasoning about programs more difficult."

 I don't know about 1982, but I've been programming since the late
 80's, and as far back as I can remember, the rift between the
 languages of math and programming (particularly imperative) has
 always been a mile wide. He may as well complain that chemical
 "equations" don't behave the same as they do in algebra.
He was saying NOTHING about programming and math. Dude: in 1982 there was the lingering of math, before that there was Copernicus.
 "(iv) Recursive exception handling () is extremely complex."

 It's certainly an issue, but I don't know about it being "extremely"
 complex.
You fix it, I may use it. ('may' is the keyword).
 Things like linked exceptions (ie, an exception with a
 "next") certainly help.
I wouldn't know. ;)
 "(v) An exception handling mechanism may be used to deliver
 information to a level of abstraction where it ought not to be
 available, violating the principle of information hiding."
that's true, obviously. In how many pages of context was this little passage? The guy was a STUDENT dude. Get a grip.
 "(vi) [checked exceptions are a PITA]..."

 Agreed.
How do you mean? The thesis asked you for an alternative, eh? Pfft, never mind that I ever asked. Pfft. I never asked that. It can happen in humanity time, but it can't in computer time. Real-time programming should come before any other type of programming instruction.
 "(vi) ...On the other hand, if exceptions are not specified then the
 dangers of using them are increased, and many of the advantages of
 strong typing are lost."

 I haven't a clue why he feels the advantages of strong typing would
 be lost.
;) You are "too learned".
 As for dangers of using exceptions being increased, that may be true
 to a small extent, but I think he's overstating the dangers.
But he presented a thesis. I await yours. (No, no offense, I won't wait).
 Also,
 there's nothing preventing the creation of a code-analysis tool that
 checks what exceptions can be thrown (directly or indirectly) from
 each function (in fact this has always been one of my arguments
 against checked exceptions). That would provide the best of both
 worlds.
Sure. Market it. Sell it. I dare you.
Jun 10 2010
prev sibling parent "Jer" <jersey chicago.com> writes:
Leandro Lucarella wrote:
 Leandro Lucarella, on June 9 at 11:37 you wrote to me:
 Pelle, on June 9 at 13:28 you wrote to me:
 Yes, I agree that "safety" is the best argument in favour of
 exceptions (as explicitness is the best argument in favour of
 no-exceptions). The Python Zen put it this way:

 Errors should never pass silently.
 Unless explicitly silenced.

 That's what I like the most about exceptions. I think try/catch is
 really ugly though. There has to be something better.
Careful use of scope(exit) and simply avoiding catching exceptions works well for me. Except when you have to catch, of course. :)
I'm talking precisely about the case when you have to catch. In that case I think the resulting code is uglier and more convoluted than the code to manage errors by returning error codes or similar.
BTW, here is a PhD thesis with a case against exceptions. I didn't read it (just had a peek) and it's rather old (1982), so it might not be that interesting, but I thought I'd post it here as the thread became mostly about exceptions and someone might be interested =) http://web.cecs.pdx.edu/~black/publications/Black%20D.%20Phil%20Thesis.pdf
Thanks. I was feeling kinda lonely. Circa 1982, np.
Jun 10 2010
prev sibling parent Kagamin <spam here.lot> writes:
Leandro Lucarella Wrote:

 It looks like Go now have scope (exit) =)
 
 http://golang.org/doc/go_spec.html#DeferStmt
 
And in order to execute a block of statements you must make the compiler happy:

// f returns 1
func f() (result int) {
    defer func() { result++ }()
    return 0
}
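For comparison, a rough, untested D sketch of the same idea -- in D, scope(exit) takes a statement or a block directly, so no function literal is needed:

import std.stdio;

void f()
{
    scope(exit) writeln("last");        // a plain statement is fine
    scope(exit) { writeln("almost"); }  // so is a block
    writeln("first");                   // output order: first, almost, last
}

void main() { f(); }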
Jun 06 2010
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/06/2010 05:13 PM, bearophile wrote:
 A recent talk about Go, Google I/O 2010 - Go Programming, the real
 talk stops at about 33 minutes:
 http://www.youtube.com/user/GoogleDevelopers#p/u/9/jgVhBThJdXc

 At 9.30 you can see the switch used on a type type :-) You can see a
 similar example here:
 http://golang.org/src/pkg/exp/datafmt/datafmt.go Look for the line
 switch t := fexpr.(type) {


 Originally Go was looking almost like a toy language, I thought
 Google was thinking of it as a toy, but I now think Google is getting
 more serious about it, and I can see Go has developed some serious
 features to solve/do the basic things.

 So maybe Andrei was wrong, you can design a good flexible language
 that doesn't need templates.
Which part of the talk conveyed to you that information?
 Compared to Go D2 is way more complex. I don't know if people today
 want to learn a language as complex as D2.

 Go target flexibility and performance is not C++-class one (but
 probably it's not too much difficult to build a compiler able to
 produce very efficient Go programs).

 In the talk they show some interfaces and more things done with free
 functions, I don't know those things get compiled in assembly.

 Bye, bearophile
I'm surprised you found the talk compelling, as I'm sure you know better. The talk uses a common technique - cherry-picking examples and avoiding any discussion of costs and tradeoffs - to make the language look good. In addition, the talk puts Go in relation to the likes of Java, C++, and Python but ignores the fact that Go's choices have been made by other languages as well, along with their inherent advantages and disadvantages. The reality is that in programming language design, decisions that are all-around wins are few and far between; it's mostly tradeoffs and compromises. Structural conformance and implicit, run-time checked interfaces are well known and have advantages but also disadvantages. Flat, two-level hierarchies with interfaces and implementations are also well known, along with their advantages and disadvantages. I think an honest discussion - as I hope is the tone in TDPL - serves the language and its users better than giving half of the story. Andrei
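P.S. To make the comparison concrete, here is a rough, untested D sketch of the two styles as they might look in D -- an explicit interface on one side, compile-time structural conformance through a template on the other (all names are made up):

import std.stdio;

// Nominal conformance: the relationship is declared up front.
interface Writer { void put(string s); }

class ConsoleWriter : Writer
{
    void put(string s) { write(s); }
}

// Structural conformance, checked at compile time: any type with a
// matching put() is accepted, no declaration needed.
void log(W)(ref W w, string msg)
{
    w.put(msg ~ "\n");
}

struct Buffer
{
    string data;
    void put(string s) { data ~= s; }
}

void main()
{
    Writer w = new ConsoleWriter;
    log(w, "through the declared interface");

    Buffer b;
    log(b, "through the structural route");
    write(b.data);
}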
Jun 07 2010
next sibling parent reply Kagamin <spam here.lot> writes:
Andrei Alexandrescu Wrote:

 I think 
 an honest discussion - as I hope is the tone in TDPL - serves the 
 language and its users better than giving half of the story.
An honest advertisement is an unusual thing. I saw none. You think TDPL is the first one. Many features in other languages come with no mention of their costs, and every forgotten cost is a lie.
Jun 07 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/07/2010 06:36 AM, Kagamin wrote:
 Andrei Alexandrescu Wrote:

 I think an honest discussion - as I hope is the tone in TDPL -
 serves the language and its users better than giving half of the
 story.
An honest advertisement is an unusual thing. I saw none. You think, TDPL is the first one. There're many features in other languages no speak of costs, and every forgotten cost is a lie.
Agreed. Well, we'll see. I try to discuss costs, tradeoffs, and alternative approaches that were considered and rejected (and why) for all major features of D. For example, few books give a balanced view of dynamic polymorphism; in some you'll find the nice parts, in others you'll find the not-so-nice parts. I tried to discuss both in TDPL: ==================== \dee's reference semantics approach to handling class objects is similar to that found in many object-oriented languages. Using reference semantics and \idx{garbage collection} for class objects has both positive and negative consequences, among which the following: \begin{itemize*} \item[+]\index{polymorphism}\emph{\textbf{Polymorphism.}} The level of indirection brought by the consistent use of references enables support for \idx{polymorphism}. All references have the same size, but related objects can have different sizes even though they have ostensibly the same type (through the use of inheritance, which we'll discuss shortly). Because references have the same size regardless of the size of the object they refer to, you can always substitute references to derived objects for references to base objects. Also, arrays of objects work properly even when the actual objects in the array have different sizes. If you've used C++, you sure know about the necessity of using pointers with \idx{polymorphism}, and about the various lethal problems you encounter when you forget to. \item[+]\emph{\textbf{Safety.}} Many of us see \idx{garbage collection} as just a convenience that simplifies coding by relieving the programmer of managing memory. Perhaps surprisingly, however, there is a very strong connection between the infinite lifetime model (which \idx{garbage collection} makes practical) and memory safety. Where there's infinite lifetime, there are no dangling references, that is, references to some object that has gone out of existence and has had its memory reused for an unrelated object. Note that it would be just as safe to use value semantics throughout (have \cc{auto a2 = a1;} duplicate the A object that a1 refers to and have a2 refer to the copy). That setup, however, is hardly interesting because it disallows creation of any referential structure (such as lists, trees, graphs, and more generally shared resources). \item[--]\emph{\textbf{Allocation cost.}} Generally, class objects must reside in the \index{garbage collection}garbage-collected heap, which generally is slower and eats more memory than memory on the stack. The margin has diminished quite a bit lately but is still nonzero. \item[--]\emph{\textbf{Long-range coupling.}} The main risk with using references is undue aliasing. Using reference semantics throughout makes it all too easy to end up with references to the same object residing in different---and unexpected---places. In~Figure~\vref{fig:aliasing}, a1 and a2 may be arbitrarily far from each other as far as the application logic is concerned, and additionally there may be many other references hanging off the same object. Interestingly, if the referred object is immutable, the problem vanishes---as long as nobody modifies the object, there is no coupling. Difficulties arise when one change effected in a certain context affects surprisingly and dramatically the state as seen in a different part of the application. Another way to alleviate this problem is explicit duplication, often by calling a special method clone , whenever passing objects around. 
The downside of that technique is that it is based on discipline and that it could lead to inefficiency if several parts of an application decide to conservatively clone objects ``just to be sure.'' \end{itemize*} Contrast reference semantics with value semantics \`a~la int . Value semantics has advantages, notably equational reasoning: you can always substitute equals for equals in expressions without altering the result. (In contrast, references that use method calls to modify underlying objects do not allow such reasoning.) Speed is also an important advantage of value semantics, but if you want the dynamic generosity of \idx{polymorphism}, reference semantics is a must. Some languages tried to accommodate both, which earned them the moniker of ``impure,'' in contrast to pure object-oriented languages that foster reference semantics uniformly across all types. \dee is impure and up-front about it. You get to choose at design time whether you use~OOP for a particular type, in which case you use \kidx{class}; otherwise, you go with struct and forgo the particular~OOP amenities that go hand in hand with reference semantics. ==================== Andrei
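P.S. A tiny, untested D sketch of the aliasing point above:

import std.stdio;

class C { int x; }
struct S { int x; }

void main()
{
    auto c1 = new C;
    auto c2 = c1;     // copies the reference: both names refer to one object
    c2.x = 42;
    writeln(c1.x);    // 42 -- the long-range coupling discussed above

    S s1;
    auto s2 = s1;     // copies the value: two independent objects
    s2.x = 42;
    writeln(s1.x);    // 0 -- no coupling
}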
Jun 07 2010
parent reply Kagamin <spam here.lot> writes:
Andrei Alexandrescu Wrote:

 You get  to choose  at design  time  whether you
 use~OOP for  a particular  type, in which  case you  use \kidx{class};
 otherwise, you go with  struct  and forgo the particular~OOP amenities
 that go hand in hand with reference semantics.
Good, but this is about the user's decision. I meant decisions that were made by the language designer, so if you want a feature, you're forced to choose between languages. Well, I'm not sure whether such a book can be "about just D".
Jun 07 2010
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/07/2010 09:02 AM, Kagamin wrote:
 Andrei Alexandrescu Wrote:

 You get  to choose  at design  time  whether you use~OOP for  a
 particular  type, in which  case you  use \kidx{class}; otherwise,
 you go with  struct  and forgo the particular~OOP amenities that go
 hand in hand with reference semantics.
Good, but this is about user's decision. I meant decisions that were made by the language designer, so if you want a feature, you're forced to choose between languages. Well, I'm not sure whether such book can be "about just D".
The book includes honest discussions of language design decisions, including merits of approaches D decided to diverge from. Example: ==================== Now what happens when the compiler sees the improved definition of find ? The compiler faces a tougher challenge compared to the int[] case because now\sbs T is not known yet---it could be just about any type. And different types are stored differently, passed around differently, and sport different definitions of~ == . Dealing with this challenge is important because type parameters really open up possibilities and multiply reusability of code. When it comes to generating code for type parameterization, two schools of thought are prevalent today~\cite{pizza}: \begin{itemize*} \item\emph{Homogeneous translation:} Bring all data to a common format, which allows compiling only one version of find that will work for everybody. \item\emph{Heterogeneous translation:} Invoking find with various type arguments (e.g., int versus double versus string ) prompts the compiler to generate as many specialized versions of find . \end{itemize*} In homogeneous translation, the language must offer a uniform access interface to data as a prerequisite to presenting it to find . Heterogeneous translation is pretty much as if you had an assistant writing one special find for each data format you may come up with, all built from the same mold. Clearly the two approaches have relative advantages and disadvantages, which are often the subject of passionate debates in various languages' communities. Homogeneous translation favors uniformity, simplicity, and compact generated code. For example, traditional functional languages favor putting everything in list format, and many traditional object-oriented languages favor making everything an object offering a uniform access to its features. However, the disadvantages of homogeneous translation may include rigidity, lack of expressive power, and inefficiency. In contrast, heterogeneous translation favors specialization, expressive power, and speed of generated code. The costs may include bloating of generated code, increases in language complexity, and an awkward compilation model (a frequently aired argument against heterogeneous approaches is that they're glorified macros [gasp]; and ever since~C gave such a bad reputation to macros, the label evokes quite a powerful negative connotation). A detail worth noting is an inclusion relationship: heterogeneous translation includes homogeneous translation for the simple reason that ``many formats'' includes ``one format,'' and ``many implementations'' includes ``one implementation.'' Therefore it can be argued (all other issues left aside) that heterogeneous translation is more powerful than homogeneous translation. If you have heterogeneous translation means at your disposal, at least in principle there's nothing stopping you from choosing one unified data format and one unified function when you so wish. The converse option is simply not available under a homogeneous approach. However, it would be oversimplifying to conclude that heterogeneous approaches are ``better'' because aside from expressive power there are, again, other arguments that need to be taken into consideration. \dee~uses heterogeneous translation with (warning, incoming technical terms flak) statically scoped symbol lookup and deferred typechecking. 
This means that when the~\dee compiler sees the generic find definition, it parses and saves the body, remembers where the function was defined, and does nothing else until find gets called. At that point, the compiler fetches the parsed definition of find and attempts to compile it with the type that the caller chose in lieu of\sbs T . When the function uses symbols, they are looked up in the context in which the function was defined. ==================== Andrei
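P.S. A stripped-down, untested sketch of the kind of generic find discussed above; under the heterogeneous approach, each distinct type argument prompts its own specialization:

T[] find(T)(T[] haystack, T needle)
{
    while (haystack.length > 0 && haystack[0] != needle)
        haystack = haystack[1 .. $];
    return haystack;
}

void main()
{
    auto a = find([1, 2, 3], 2);      // instantiates find!int
    auto b = find(["a", "b"], "b");   // instantiates find!string -- a separate body
    assert(a == [2, 3] && b == ["b"]);
}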
Jun 07 2010
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 Which part of the talk conveyed to you that information?
After thinking well about this question, my conclusion is that I was not just (as usual) wrong, I was trolling: I didn't know what I was talking about. I am sorry. I have not even programmed in Go. Bye, bearophile
Jun 07 2010
prev sibling next sibling parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
On Sun, 06 Jun 2010 18:13:36 -0400, bearophile wrote:

 At 9.30 you can see the switch used on a type type :-) You can see a
 similar example here:
 http://golang.org/src/pkg/exp/datafmt/datafmt.go Look for the line
 switch t := fexpr.(type) {
 
 ...

 Bye,
 bearophile
That isn't a type type. Untested D code:

void fun(T, U)(T op, U y) {
    switch(typeof(y)) {
        case "immutable(char)[]":
        case "int":
    }
}
Jun 07 2010
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/07/2010 07:44 PM, Jesse Phillips wrote:
 On Sun, 06 Jun 2010 18:13:36 -0400, bearophile wrote:

 At 9.30 you can see the switch used on a type type :-) You can see a
 similar example here:
 http://golang.org/src/pkg/exp/datafmt/datafmt.go Look for the line
 switch t := fexpr.(type) {

 ...

 Bye,
 bearophile
That isn't a type type. Untested D code void fun(T, U)(T op, U y) { switch(typeof(y)) { case "immutable(char)[]": case "int": } }
Actually the uses are not equivalent. A closer example is:

import std.stdio;

class A {}

void main()
{
    Object a = new A;
    switch (typeid(a).name)
    {
        case "object.Object":
            writeln("it's an object");
            break;
        case "test.A":
            writeln("yeah, it's an A");
            break;
        default:
            writeln("default: ", typeid(a).name);
            break;
    }
}

Go stores the dynamic types together with objects, so what looks like a simple typedef for int is in fact a full-fledged class with one data member. Those objects are stored on the garbage-collected heap.

Andrei
Jun 07 2010
parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Thanks. The important thing to note is that D can do what Go was doing in
the example. Sorry, bearophile.

On Mon, 07 Jun 2010 19:55:06 -0500, Andrei Alexandrescu wrote:

 On 06/07/2010 07:44 PM, Jesse Phillips wrote:
 On Sun, 06 Jun 2010 18:13:36 -0400, bearophile wrote:

 At 9.30 you can see the switch used on a type type :-) You can see a
 similar example here:
 http://golang.org/src/pkg/exp/datafmt/datafmt.go Look for the line
 switch t := fexpr.(type) {

 ...

 Bye,
 bearophile
That isn't a type type. Untested D code void fun(T, U)(T op, U y) { switch(typeof(y)) { case "immutable(char)[]": case "int": } }
Actually the uses are not equivalent. A closer example is: class A {} void main() { Object a = new A; switch (typeid(a).name) { case "object.Object": writeln("it's an object"); break; case "test.A": writeln("yeah, it's an A"); break; default: writeln("default: ", typeid(a).name); break; } } Go stores the dynamic types together with objects, so what looks like a simple typedef for int is in fact a full-fledged class with one data member. Those objects are stored on the garbage-collected heap. Andrei
Jun 07 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Jesse Phillips:

 Thanks, the important thing to note is that D can do what Go was doing in 
 the example, Sorry bearophile.
First, don't be sorry, I am using D2 instead of Go. If you show me D2 is better I am happy :-) Second, saying that D2 can do something is not so interesting, because D2 is a wide language, so you can do many things with it. What's important is not just being able to do something in D, but also whether such a thing is idiomatic (meaning most D programmers do it). Bye, bearophile
Jun 08 2010
prev sibling parent reply nobody <someone somewhere.com> writes:
Linus Torvalds shows his opinion about why he chooses C here:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2

He wants a language that is context-free, simple, and down to the metal.
He dislikes C++ because it has many abstractions.

I think some D experts should post some comments.
Jun 10 2010
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2
 
 
 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.
 
 I think some D experts should post some comments.
I posted a comment. We'll see. http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2
Jun 10 2010
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/10/2010 12:36 PM, Walter Bright wrote:
 nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
 http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2


 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.

 I think some D experts should post some comments.
I posted a comment. We'll see. http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2
I fear that's not going to go anywhere interesting. Linus makes a sensible point. You do, too, but understanding that requires knowing a little about D. Andrei
Jun 10 2010
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 On 06/10/2010 12:36 PM, Walter Bright wrote:
 I posted a comment. We'll see.

http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2
I fear that's not going to go anywhere interesting. Linus makes a sensible point. You do, too, but understanding that requires knowing a little about D.
I agree Linus has a good point. He may already have looked at D and have an opinion about it. In any case, my post needed to be very short or nobody would read it, so I am reduced to just a few teaser points.
Jun 10 2010
prev sibling parent "Jer" <jersey chicago.com> writes:
Andrei Alexandrescu wrote:
 On 06/10/2010 12:36 PM, Walter Bright wrote:
 nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
 http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2


 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.

 I think some D experts should post some comments.
I posted a comment. We'll see. http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2
I fear
I believe you. Everyone is.
Jun 10 2010
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 I posted a comment. We'll see.
 http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2
This is a part of what Linus said about C++:
 It tries to solve all the wrong problems, and
 does not tackle the right ones. The things C++ "solves"
 are trivial things, almost purely syntactic extensions to
 C rather than fixing some true deep problem.
Do you know what "right" and "true deep" problems he is talking about? This can become material to improve D3. Bye, bearophile
Jun 10 2010
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
bearophile Wrote:

 Walter Bright:
 I posted a comment. We'll see.
 http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2
This is a part of what Linus said about C++:
 It tries to solve all the wrong problems, and
 does not tackle the right ones. The things C++ "solves"
 are trivial things, almost purely syntactic extensions to
 C rather than fixing some true deep problem.
Do you know what "right" and "true deep" problems he is talking about? This can become material to improve D3. Bye, bearophile
And when D3 comes out, you'll be waiting for D4? I don't understand your posts mentioning D3. I mean, D2 isn't even finalized yet (right?), and you're already treating the language like it came out 50 years ago and needs to be abandoned. Just my two cents..
Jun 10 2010
next sibling parent "Jer" <jersey chicago.com> writes:
Andrej Mitrovic wrote:
 bearophile Wrote:

 Walter Bright:
 I posted a comment. We'll see.
 http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2
This is a part of what Linus said about C++:
 It tries to solve all the wrong problems, and
 does not tackle the right ones. The things C++ "solves"
 are trivial things, almost purely syntactic extensions to
 C rather than fixing some true deep problem.
Do you know what "right" and "true deep" problems he is talking about? This can become material to improve D3. Bye, bearophile
And when D3 comes out, you'll be waiting for D4? I don't understand your posts mentioning D3. I mean, D2 isn't even finalized yet (right?), and you're already treating the language like it came out 50 years ago and needs to be abandoned.
Leave him alone.
 Just my two cents..
That used to be parking meter coin when we used to... nevermind. It matters. Shut up.
Jun 10 2010
prev sibling parent BCS <none anon.com> writes:
Hello Andrej,

 And when D3 comes out, you'll be waiting for D4? I don't understand
 your posts mentioning D3. I mean, D2 isn't even finalized yet
 (right?), and you're already treating the language like it came out 50
 years ago and needs to be abandoned.
 
I get the impression that some of the people here are more interested in the development *of* D than development *in* D. In some ways, that's a good thing, as long as we don't just produce a toy for language developers. -- ... <IXOYE><
Jun 12 2010
prev sibling parent "Jer" <jersey chicago.com> writes:
bearophile wrote:
 Walter Bright:
 I posted a comment. We'll see.
 http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2
This is a part of what Linus said about C++:
 It tries to solve all the wrong problems, and
 does not tackle the right ones. The things C++ "solves"
 are trivial things, almost purely syntactic extensions to
 C rather than fixing some true deep problem.
Do you know what "right" and "true deep" problems he is talking about? This can become material to improve D3.
I read one of your posts and it makes me cry.
Jun 10 2010
prev sibling next sibling parent "Jer" <jersey chicago.com> writes:
Walter Bright wrote:
 nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
 http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2


 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.

 I think some D experts should post some comments.
I posted a comment. We'll see.
Well what is the price for you to be free and go on with your life?
Jun 10 2010
prev sibling next sibling parent Kagamin <spam here.lot> writes:
Walter Bright Wrote:

 I posted a comment. We'll see.
 
 http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2
Linus wants a readable language. D is designed to be writable. You gave him the wrong tool.
Jun 11 2010
prev sibling parent reply Jonathan M Davis <jmdavisProg gmail.com> writes:
Walter Bright wrote:

 nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
 
http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2
 
 
 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.
 
 I think some D experts should post some comments.
I posted a comment. We'll see.
http://www.realworldtech.com/forums/index.cfm?action=detail&id=110634&threadid=110549&roomid=2 He seems to particularly dislike function overloading, which is not only part of D but pretty integral to object-oriented programming in general, so my guess is that he wouldn't be all that enthused with any object-oriented language. At minimum, it would have to give him something that he thought was definitely worth the extra cost of dealing with function overloading. While D certainly improves greatly on C++ (including in how it deals with function overloading), it's close enough to C++ that I suspect that he wouldn't like it either. But of course, he'd have to actually look at D and then tell us what he thought for us to know for sure. - Jonathan M Davis
Jun 11 2010
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jonathan M Davis wrote:
 He seems to particularly dislike function overloading which is not only part 
 of D but pretty integral to object-oriented programming in general, so my 
 guess is that he wouldn't be all that enthused with any object-oriented 
 language. At minimium, it would have to give him something that he thought 
 was definitely worth the extra cost of dealing with function overloading. 
 While D certainly improves greatly on C++ (including in how it deals with 
 function overloading), it's close enough to C++ that I suspect that he 
 wouldn't like it either. But of course, he'd have to actually look at D and 
 then tell us what he thought for us to know for sure.
I don't expect him to switch to anything. When you've built up such a store in one language, it would take an incredible push to change. Nevertheless, D has some features, such as transitive immutability and purity, scope guard and memory safety, which are a nice fit for his style of programming. What would be more likely is D features moving into C.
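For the curious, a tiny, untested sketch of what those features look like in D (the file name is made up):

import std.stdio;

// Transitive immutability: nothing reachable through an immutable
// reference can be mutated.
immutable int[] table = [1, 2, 3];

// Purity and safety are checked by the compiler.
@safe pure int twice(int x) { return 2 * x; }

void main()
{
    auto f = File("build.log", "w");
    scope(exit) f.close();            // scope guard: cleanup sits next to acquisition

    f.writeln(twice(table[1]));
}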
Jun 11 2010
parent reply Alex Makhotin <alex bitprox.com> writes:
Walter Bright wrote:
 
 When you've built up such a 
 store in one language, it would take an incredible push to change.
The Linux kernel is a large monolithic monster; one single bug can bring the system down. So, considering this, Linus is right. He is afraid that one single language feature may become the source of a myriad of bugs, just because the language allows its use, and he assumes the worst case - that everybody will misuse the feature, as they always do. -- Alex Makhotin, the founder of BITPROX, http://bitprox.com
Jun 11 2010
parent Walter Bright <newshound1 digitalmars.com> writes:
Alex Makhotin wrote:
 Walter Bright wrote:
 When you've built up such a store in one language, it would take an 
 incredible push to change.
Linux kernel is a large monolithic monster, one single bug can bring the system down. So considering this, Linus is right. He afraid of one single language feature may be the source of myriad of bugs, just because the language allows to use it, and assuming the worst case - everybody will misuse the feature as they always do.
I agree that there's no freakin' way the Linux kernel could be moved to another language, regardless of the merits of that language. However, other independent utilities could be.
Jun 11 2010
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2
 
 
 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.
 
 I think some D experts should post some comments.
This showed up on reddit: http://www.reddit.com/r/programming/comments/cdncx/linus_about_c_productivity_again/
Jun 10 2010
parent "Jer" <jersey chicago.com> writes:
Walter Bright wrote:
 nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
 http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2


 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.

 I think some D experts should post some comments.
This showed up on reddit: http://www.reddit.com/r/programming/comments/cdncx/linus_about_c_productivity_again/
That's a long thing. Care to sum it up? And what is your value anyway? Do tell. Can I buy you? How much! Can I skip college? I have a car. Hello.
Jun 10 2010
prev sibling next sibling parent Alex Makhotin <alex bitprox.com> writes:
nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2
 
 
 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.
 
 I think some D experts should post some comments.
I think the context problem comes from a wrong understanding of OOP. C programmers, when they try to write C++ object-oriented code, still continue to think in a procedural, everything-available (global) way. But the object-oriented way assumes the programmer actually thinks in an abstract manner.

Sure, when it comes to reading the C++ code of a C++ programmer who is not mature, who didn't adopt the object-oriented paradigm, and who mixes his past practical experience (and he knows it works! why abandon it?) with OOP, it's hard to understand what he means by his inconsistent code. Add poor documentation to this, and yes, understanding C++ code can be very difficult.

My opinion on the topic is that abstraction comes at the cost of a hidden implementation. Such a hidden implementation must have good, proper documentation, or a readable interface. When programming with OOP, the programmer must think in OOP, not in the C procedural way.

And a few words about interfacing again. When I agitate for interfaces I usually mean this: OK, your code is good at what it does and does it well (or you think it does), so give me a proper, documented interface so that I can understand how to use it and apply it in real life. If you do not do this, I must read your code (oh no!) and understand how it works (isn't that the problem your code is aimed to solve?!), so that I build an interface in my imagination(!) for how to use your code. Eventually I will not remember the whole imagined picture and will therefore be dissatisfied with the code; it can even drive me not to use your code any more, or C++ code in general.

-- Alex Makhotin, the founder of BITPROX, http://bitprox.com
Jun 11 2010
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2
 
 
 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.
This is another interesting point of view by Linus: http://www.realworldtech.com/forums/index.cfm?action=detail&id=110699&threadid=110549&roomid=2
Jun 11 2010
parent Don <nospam nospam.com> writes:
Walter Bright wrote:
 nobody wrote:
 Linus Torvalds shows his opinion about why he chooses C here:
http://www.realworldtech.com/forums/index.cfm?action=detail&id=110618&threadid=110549&roomid=2


 He wants a language that context-free, simple, down to the metal.
 He dislikes C++ b/c it has many abstraction.
This is another interesting point of view by Linus: http://www.realworldtech.com/forums/index.cfm?action=detail&id=110699&threadid=110549&roomid=2
Most of his complaints seem to be about OOP. I particularly like the line: "Complicated problems don't have some simple strictly hierarchical data structures." I've read comments from Stepanov which criticise OOP and many of the other features of C++, for exactly the same reasons which Linus gives. It's interesting that their conclusions are so different.
Jun 16 2010