
digitalmars.D - dereferencing null

reply "Nathan M. Swan" <nathanmswan gmail.com> writes:
Am I correct that trying to use an Object null results in 
undefined behavior?

     Object o = null;
     o.opCmp(new Object); // segmentation fault on my OSX machine

This seems a bit non-D-ish to me, as other bugs like this throw 
Errors (e.g. RangeError).

It would be nice if it would throw a NullPointerError or 
something like that, because I spent a long time trying to find a 
bug that crashed the program before writeln-debugging statements 
could be flushed.

NMS
Mar 01 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, March 02, 2012 05:37:46 Nathan M. Swan wrote:
 Am I correct that trying to use an Object null results in
 undefined behavior?
 
      Object o = null;
      o.opCmp(new Object); // segmentation fault on my OSX machine
 
 This seems a bit non-D-ish to me, as other bugs like this throw
 Errors (e.g. RangeError).
 
 It would be nice if it would throw a NullPointerError or
 something like that, because I spent a long time trying to find a
 bug that crashed the program before writeln-debugging statements
 could be flushed.
It's defined. The operating system protects you. You get a segfault on *nix and an access violation on Windows. Walter's take on it is that there is no point in checking for what the operating system is already checking for, especially when it adds additional overhead. Plenty of folks disagree, but that's the way it is.

If you really care about checking for it, then just assert:

assert(obj !is null);

or

assert(obj);

(The second one will also call the object's invariant.)

- Jonathan M Davis
Mar 01 2012
next sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Friday, 2 March 2012 at 04:53:02 UTC, Jonathan M Davis wrote:
 On Friday, March 02, 2012 05:37:46 Nathan M. Swan wrote:
 Am I correct that trying to use an Object null results in
 undefined behavior?
 
      Object o = null;
      o.opCmp(new Object); // segmentation fault on my OSX 
 machine
 
 This seems a bit non-D-ish to me, as other bugs like this throw
 Errors (e.g. RangeError).
 
 It would be nice if it would throw a NullPointerError or
 something like that, because I spent a long time trying to 
 find a
 bug that crashed the program before writeln-debugging 
 statements
 could be flushed.
It's defined. The operating system protects you. You get a segfault on *nix and an access violation on Windows.
False.

-----------------------
import std.stdio;

class Foo
{
    final void bar()
    {
        writeln("I'm null!");
    }
}

void main()
{
    Foo foo;
    foo.bar();
}
-----------------------

% dmd test.d -O -release -inline
% ./test
I'm null!
%
-----------------------

You only get an error if there is a memory access involved (vtable, member data etc.)
Mar 02 2012
next sibling parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Peter Alexander" <peter.alexander.au gmail.com> wrote in message 
news:vicaibqyaerogseqsjbe forum.dlang.org...
 It's defined. The operating system protects you. You get a segfault on 
 *nix and
 an access violation on Windows.
False. [snip] You only get an error if there is a memory access involved (vtable, member data etc.)
It _is_ defined: you get an access violation whenever there's a dereference. Yes, you can call some types of member functions without any dereferences, but this is also well defined and sometimes quite useful.
Mar 02 2012
parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Friday, 2 March 2012 at 10:01:32 UTC, Daniel Murphy wrote:
 "Peter Alexander" <peter.alexander.au gmail.com> wrote in 
 message
 news:vicaibqyaerogseqsjbe forum.dlang.org...
 It's defined. The operating system protects you. You get a 
 segfault on *nix and
 an access violation on Windows.
False. [snip] You only get an error if there is a memory access involved (vtable, member data etc.)
It _is_ defined: you get an access violation whenever there's a dereference. Yes, you can call some types of member functions without any dereferences, but this is also well defined and sometimes quite useful.
Ok, if it is defined, then please tell me what the defined behaviour of my code snippet is.
Mar 02 2012
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Peter Alexander" <peter.alexander.au gmail.com> wrote in message 
news:jxloisomieykanavmmlj forum.dlang.org...
 On Friday, 2 March 2012 at 10:01:32 UTC, Daniel Murphy wrote:
 "Peter Alexander" <peter.alexander.au gmail.com> wrote in message
 news:vicaibqyaerogseqsjbe forum.dlang.org...
 It's defined. The operating system protects you. You get a segfault on 
 *nix and
 an access violation on Windows.
False. [snip] You only get an error if there is a memory access involved (vtable, member data etc.)
It _is_ defined: you get an access violation whenever there's a dereference. Yes, you can call some types of member functions without any dereferences, but this is also well defined and sometimes quite useful.
Ok, if it is defined, then please tell me what the defined behaviour of my code snippet is.
Assertion failure in debug mode, prints the message in release mode. (I think)
Mar 02 2012
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-03-02 10:22, Peter Alexander wrote:
 On Friday, 2 March 2012 at 04:53:02 UTC, Jonathan M Davis wrote:
 On Friday, March 02, 2012 05:37:46 Nathan M. Swan wrote:
 Am I correct that trying to use an Object null results in
 undefined behavior?

 Object o = null;
 o.opCmp(new Object); // segmentation fault on my OSX machine

 This seems a bit non-D-ish to me, as other bugs like this throw
 Errors (e.g. RangeError).

 It would be nice if it would throw a NullPointerError or
 something like that, because I spent a long time trying to find a
 bug that crashed the program before writeln-debugging statements
 could be flushed.
It's defined. The operating system protects you. You get a segfault on *nix and an access violation on Windows.
False.

-----------------------
import std.stdio;

class Foo
{
    final void bar()
    {
        writeln("I'm null!");
    }
}

void main()
{
    Foo foo;
    foo.bar();
}
-----------------------

% dmd test.d -O -release -inline
% ./test
I'm null!
%
-----------------------

You only get an error if there is a memory access involved (vtable, member data etc.)
I never thought about that. -- /Jacob Carlborg
Mar 02 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, March 02, 2012 11:56:52 Jacob Carlborg wrote:
 I never thought about that.
It's the same in C++. I was quite surprised when I first ran into it. But as Daniel points out, the behavior is still defined, just less expected. If you actually use "this" (or any member variable, since that would use "this") inside that member function, though, you'll get a segfault, just as if it had been dereferenced before calling the function, as it would be with a virtual function.

- Jonathan M Davis
Mar 02 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-03-02 12:05, Jonathan M Davis wrote:
 On Friday, March 02, 2012 11:56:52 Jacob Carlborg wrote:
 I never thought about that.
It's the same in C++. I was quite surprised when I first ran into it. But as Daniel points out, the behavior is still defined, just less expected. If you actually use this (or any member variable, since that would use this) inside of that member function though, you'll get a segfault just like if it had been dereferenced before calling the function like it would be with a virtual function. - Jonathan M Davis
Yeah, that makes sense. A final method that doesn't access "this" is effectively a static method, which is like a free function scoped inside a class. -- /Jacob Carlborg
Mar 02 2012
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 02/03/2012 11:56, Jacob Carlborg a écrit :
 False.

 -----------------------
 import std.stdio;

 class Foo
 {
 final void bar()
 {
 writeln("I'm null!");
 }
 }

 void main()
 {
 Foo foo;
 foo.bar();
 }
 -----------------------

 % dmd test.d -O -release -inline
 % ./test
 I'm null!
 %

 -----------------------

 You only get an error if there is a memory access involved (vtable,
 member data etc.)
I never thought about that.
This is a common C++ interview question.
Mar 02 2012
parent reply "Marco Leise" <Marco.Leise gmx.de> writes:
Am 02.03.2012, 14:01 Uhr, schrieb deadalnix <deadalnix gmail.com>:

 Le 02/03/2012 11:56, Jacob Carlborg a écrit :
 You only get an error if there is a memory access involved (vtable,
 member data etc.)
I never thought about that.
This is a common C++ interview question.
Don't scare him! I only had interviews in small companies with self-educated programmers. One question was, what Delphi cannot do. I said, writing a kernel and cross-platform development wouldn't work, but the answer the employer was looking for was COBRA. (And I think even at that time there was already a solution for that.)

The difficulty really depends on how sophisticated a company's personnel department is. :-)
Mar 02 2012
parent deadalnix <deadalnix gmail.com> writes:
Le 02/03/2012 14:30, Marco Leise a écrit :
 Am 02.03.2012, 14:01 Uhr, schrieb deadalnix <deadalnix gmail.com>:

 Le 02/03/2012 11:56, Jacob Carlborg a écrit :
 You only get an error if there is a memory access involved (vtable,
 member data etc.)
I never thought about that.
This is a common C++ interview question.
Don't scare him! I only had interviews in small companies with self-educated programmers. One question was, what Delphi cannot do. I said, writing a kernel and cross-platform development wouldn't work, but the answer the employer was looking for was COBRA. (And I think even at that time there was already a solution for that.) The difficulty really depends on how sophisticated a company's personnel department is. :-)
Yes, many companies don't go that far in interviews. But this is a red flag to me.
Mar 02 2012
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 03/02/2012 10:22 AM, Peter Alexander wrote:
 You only get an error if there is a memory access involved (vtable,
 member data etc.)
In non-release mode you get an assertion failure.
Mar 02 2012
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Friday, 2 March 2012 at 09:22:28 UTC, Peter Alexander wrote:
 You only get an error if there is a memory access involved 
 (vtable, member data etc.)
By the way, my favorite application of that in C++ is debug helper member functions (think: using DMD's toChar() in GDB), which don't crash when invoked on a null pointer by checking if (this == 0) before accessing member variables. David
Mar 02 2012
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Peter Alexander" <peter.alexander.au gmail.com> wrote in message 
news:vicaibqyaerogseqsjbe forum.dlang.org...
 On Friday, 2 March 2012 at 04:53:02 UTC, Jonathan M Davis wrote:
 It's defined. The operating system protects you. You get a segfault on 
 *nix and
 an access violation on Windows.
False.

-----------------------
import std.stdio;

class Foo
{
    final void bar()
    {
        writeln("I'm null!");
    }
}

void main()
{
    Foo foo;
    foo.bar();
}
-----------------------

% dmd test.d -O -release -inline
% ./test
I'm null!
%
-----------------------

You only get an error if there is a memory access involved (vtable, member data etc.)
Technically speaking, there is no dereference of null occurring there. It *looks* like there is because of the "foo.bar" notation, but remember, calling a member function is not really dereferencing. It's just sugar for:

foo.vtable[index_of_bar](foo);

Since "bar()" is a final member of "Foo" and "foo" is statically known to be of type "Foo", the usual vtable indirection is unnecessary, so it reduces to:

bar(foo);

Passing null into a function doesn't involve dereferencing the null, and bar() doesn't dereference "this", so there's never any dereferencing of null. Therefore the rule about null dereferences always being caught does still hold true. It is counter-intuitive, though.
Mar 02 2012
parent "Martin Nowak" <dawg dawgfoto.de> writes:
 Technically speaking, there is no dereference of null occurring there. It
 *looks* like there is because of the "foo.bar" notation, but remember,
 calling a member function is not really dereferencing. It's just sugar  
 for:

 foo.vtable[index_of_bar](foo);
But usually there's a class invariant.
Mar 02 2012
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 02/03/2012 05:51, Jonathan M Davis a écrit :
 On Friday, March 02, 2012 05:37:46 Nathan M. Swan wrote:
 Am I correct that trying to use an Object null results in
 undefined behavior?

       Object o = null;
       o.opCmp(new Object); // segmentation fault on my OSX machine

 This seems a bit non-D-ish to me, as other bugs like this throw
 Errors (e.g. RangeError).

 It would be nice if it would throw a NullPointerError or
 something like that, because I spent a long time trying to find a
 bug that crashed the program before writeln-debugging statements
 could be flushed.
It's defined. The operating system protects you. You get a segfault on *nix and an access violation on Windows. Walter's take on it is that there is no point in checking for what the operating system is already checking for - especially when it adds additional overhead. Plenty of folks disagree, but that's the way it is.
The assertion that it has overhead isn't true. You'll find solutions without overhead (using libsigsegv in druntime, for example).

BTW, objects should be non-nullable by default, if you ask me. The drawback of null is way bigger than any benefit.
Mar 02 2012
parent reply Jacob Carlborg <doob me.com> writes:
On 2012-03-02 14:00, deadalnix wrote:
 Le 02/03/2012 05:51, Jonathan M Davis a écrit :
 On Friday, March 02, 2012 05:37:46 Nathan M. Swan wrote:
 Am I correct that trying to use an Object null results in
 undefined behavior?

 Object o = null;
 o.opCmp(new Object); // segmentation fault on my OSX machine

 This seems a bit non-D-ish to me, as other bugs like this throw
 Errors (e.g. RangeError).

 It would be nice if it would throw a NullPointerError or
 something like that, because I spent a long time trying to find a
 bug that crashed the program before writeln-debugging statements
 could be flushed.
It's defined. The operating system protects you. You get a segfault on *nix and an access violation on Windows. Walter's take on it is that there is no point in checking for what the operating system is already checking for - especially when it adds additional overhead. Plenty of folks disagree, but that's the way it is.
The assertion that it has overhead isn't true.You'll find solutions without overhead (using libsigsegv in druntime for example). BTW, object should be non nullable by default, if you ask me. The drawback of null is way bigger than any benefit.
Isn't it quite unsafe to throw an exception in a signal handler? -- /Jacob Carlborg
Mar 02 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 02/03/2012 15:37, Jacob Carlborg a écrit :
 On 2012-03-02 14:00, deadalnix wrote:
 Le 02/03/2012 05:51, Jonathan M Davis a écrit :
 On Friday, March 02, 2012 05:37:46 Nathan M. Swan wrote:
 Am I correct that trying to use an Object null results in
 undefined behavior?

 Object o = null;
 o.opCmp(new Object); // segmentation fault on my OSX machine

 This seems a bit non-D-ish to me, as other bugs like this throw
 Errors (e.g. RangeError).

 It would be nice if it would throw a NullPointerError or
 something like that, because I spent a long time trying to find a
 bug that crashed the program before writeln-debugging statements
 could be flushed.
It's defined. The operating system protects you. You get a segfault on *nix and an access violation on Windows. Walter's take on it is that there is no point in checking for what the operating system is already checking for - especially when it adds additional overhead. Plenty of folks disagree, but that's the way it is.
The assertion that it has overhead isn't true.You'll find solutions without overhead (using libsigsegv in druntime for example). BTW, object should be non nullable by default, if you ask me. The drawback of null is way bigger than any benefit.
Isn't it quite unsafe to throw an exception in a signal ?
The signal handler is called on top of the stack, but the information needed to retrieve the stack trace is system-dependent. BTW, using a lib like libsigsegv can help a lot to make it safe. It isn't safe ATM, but it is doable.
Mar 02 2012
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, March 02, 2012 16:19:13 deadalnix wrote:
 Le 02/03/2012 15:37, Jacob Carlborg a écrit :
 On 2012-03-02 14:00, deadalnix wrote:
 Le 02/03/2012 05:51, Jonathan M Davis a écrit :
 On Friday, March 02, 2012 05:37:46 Nathan M. Swan wrote:
 Am I correct that trying to use an Object null results in
 undefined behavior?
 
 Object o = null;
 o.opCmp(new Object); // segmentation fault on my OSX machine
 
 This seems a bit non-D-ish to me, as other bugs like this throw
 Errors (e.g. RangeError).
 
 It would be nice if it would throw a NullPointerError or
 something like that, because I spent a long time trying to find a
 bug that crashed the program before writeln-debugging statements
 could be flushed.
It's defined. The operating system protects you. You get a segfault on *nix and an access violation on Windows. Walter's take on it is that there is no point in checking for what the operating system is already checking for - especially when it adds additional overhead. Plenty of folks disagree, but that's the way it is.
The assertion that it has overhead isn't true.You'll find solutions without overhead (using libsigsegv in druntime for example). BTW, object should be non nullable by default, if you ask me. The drawback of null is way bigger than any benefit.
Isn't it quite unsafe to throw an exception in a signal ?
The signal handler is called on top of the stack, but the information to retrieve the stack trace are system dependant. BTW, using lib like libsigsegv can help a lot to make it safe. It isn't safe ATM, but it is doable.
You could definitely set it up to print a stack trace in the signal handler on at least some systems, but throwing an exception would _not_ be a good idea. So, a NullPointerException _would_ require additional overhead, because it would require checking the pointer/reference for null every time that you dereference it.

Of course, it wouldn't work if anyone installed their own signal handler, so using a signal handler has its limits anyway, but it could be done.

- Jonathan M Davis
Mar 02 2012
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 02 Mar 2012 10:19:13 -0500, deadalnix <deadalnix gmail.com> wrote:

 Le 02/03/2012 15:37, Jacob Carlborg a écrit :
 Isn't it quite unsafe to throw an exception in a signal ?

One does not need to throw an exception. Just print a stack trace. I've advocated for this multiple times. I agree it costs nothing to implement, and who cares about safety when the app is about to crash?!

 The signal handler is called on top of the stack, but the information to retrieve the stack trace are system dependant. BTW, using lib like libsigsegv can help a lot to make it safe. It isn't safe ATM, but it is doable.

libsigsegv is used to perform custom handling of page faults (e.g. loading pages of memory from a database instead of the MMC). You do not need libsigsegv to handle SEGV signals.

-Steve
Mar 05 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 05/03/2012 15:26, Steven Schveighoffer a écrit :
 On Fri, 02 Mar 2012 10:19:13 -0500, deadalnix <deadalnix gmail.com> wrote:

 Le 02/03/2012 15:37, Jacob Carlborg a écrit :
 Isn't it quite unsafe to throw an exception in a signal ?
One does not need to throw an exception. Just print a stack trace. I've advocated for this multiple times. I agree it costs nothing to implement, and who cares about safety when the app is about to crash?!
 The signal handler is called on top of the stack, but the information
 to retrieve the stack trace are system dependant. BTW, using lib like
 libsigsegv can help a lot to make it safe. It isn't safe ATM, but it
 is doable.
libsigsegv is used to perform custom handling of page faults (e.g. loading pages of memory from a database instead of the MMC). You do not need libsigsegv to handle SEGV signals. -Steve
No you don't, but if you want to know whether you are facing a stack overflow or a null dereference, for example, this greatly helps the implementation.
Mar 05 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 05 Mar 2012 13:29:09 -0500, deadalnix <deadalnix gmail.com> wrote:

 Le 05/03/2012 15:26, Steven Schveighoffer a écrit :
 On Fri, 02 Mar 2012 10:19:13 -0500, deadalnix <deadalnix gmail.com> wrote:

 Le 02/03/2012 15:37, Jacob Carlborg a écrit :
 Isn't it quite unsafe to throw an exception in a signal ?

 One does not need to throw an exception. Just print a stack trace. I've advocated for this multiple times. I agree it costs nothing to implement, and who cares about safety when the app is about to crash?!

 The signal handler is called on top of the stack, but the information to retrieve the stack trace are system dependant. BTW, using lib like libsigsegv can help a lot to make it safe. It isn't safe ATM, but it is doable.

 libsigsegv is used to perform custom handling of page faults (e.g. loading pages of memory from a database instead of the MMC). You do not need libsigsegv to handle SEGV signals.

 -Steve

 No you don't, but if you want to know if you are facing a stackoverflow or a null deference for exemple, this greatly help the implementation.

It's somewhat off the table with its GPL license. But even so, I don't see that it helps here, we are not looking to continue execution, just more information on the crash than "Segmentation Fault".

-Steve
Mar 05 2012
parent "Jérôme M. Berger" <jeberger free.fr> writes:
Steven Schveighoffer wrote:
 On Mon, 05 Mar 2012 13:29:09 -0500, deadalnix <deadalnix gmail.com> wrote:

 Le 05/03/2012 15:26, Steven Schveighoffer a écrit :
 On Fri, 02 Mar 2012 10:19:13 -0500, deadalnix <deadalnix gmail.com> wrote:

 Le 02/03/2012 15:37, Jacob Carlborg a écrit :
 Isn't it quite unsafe to throw an exception in a signal ?

 One does not need to throw an exception. Just print a stack trace. I've advocated for this multiple times. I agree it costs nothing to implement, and who cares about safety when the app is about to crash?!

 The signal handler is called on top of the stack, but the information to retrieve the stack trace are system dependant. BTW, using lib like libsigsegv can help a lot to make it safe. It isn't safe ATM, but it is doable.

 libsigsegv is used to perform custom handling of page faults (e.g. loading pages of memory from a database instead of the MMC). You do not need libsigsegv to handle SEGV signals.

 -Steve

 No you don't, but if you want to know if you are facing a stackoverflow or a null deference for exemple, this greatly help the implementation.

 It's somewhat off the table with its GPL license. But even so, I don't see that it helps here, we are not looking to continue execution, just more information on the crash than "Segmentation Fault".

I wonder if deadalnix isn't confusing it with libSegFault, which is part of GNU's glibc: http://blog.andrew.net.au/2007/08/15/

Jerome

PS: Sorry if this message is sent twice, there was an error the first time and it looks like it didn't get through...

-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Mar 05 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/1/2012 8:51 PM, Jonathan M Davis wrote:
 It's defined. The operating system protects you.
Not exactly. It's a feature of the hardware. You get this for free, and your code runs at full speed. Adding in software checks for null pointers will dramatically slow things down.
Mar 02 2012
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter:

 Adding in software checks for null pointers will dramatically slow things down.
Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here? Bye, bearophile
Mar 03 2012
next sibling parent "Tove" <tove fransson.se> writes:
On Saturday, 3 March 2012 at 10:13:34 UTC, bearophile wrote:
 Walter:

 Adding in software checks for null pointers will dramatically 
 slow things down.
Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here? Bye, bearophile
It's not a fair comparison, because the Java JIT will optimize the null checks away... Signal handlers might be the answer though, if the same behavior can be guaranteed on all major platforms...
Mar 03 2012
prev sibling next sibling parent James Miller <james aatch.net> writes:
On 3 March 2012 23:13, bearophile <bearophileHUGS lycos.com> wrote:
 Walter:

 Adding in software checks for null pointers will dramatically slow things down.
Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here? Bye, bearophile
Not to be too cheeky here, but since it's done in hardware, and therefore no software checks are done, I'm going to say that it is infinitely slower. Since even 1 extra instruction is infinitely more than the current 0.
Mar 03 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/3/2012 2:13 AM, bearophile wrote:
 Walter:

 Adding in software checks for null pointers will dramatically slow things
 down.
Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here?
You can try it for yourself. Take some OOP code of yours, and insert a null check in front of every dereference of the class handle.
Mar 03 2012
next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 03/03/2012 20:06, Walter Bright a écrit :
 On 3/3/2012 2:13 AM, bearophile wrote:
 Walter:

 Adding in software checks for null pointers will dramatically slow
 things
 down.
Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here?
You can try it for yourself. Take some OOP code of yours, and insert a null check in front of every dereference of the class handle.
Why would you want to check every time? Your program will get a signal from the system if it tries to dereference a null pointer, so things can be done in the signal handler, and no cost is involved.
Mar 03 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/03/2012 09:00 PM, deadalnix wrote:
 Le 03/03/2012 20:06, Walter Bright a écrit :
 On 3/3/2012 2:13 AM, bearophile wrote:
 Walter:

 Adding in software checks for null pointers will dramatically slow
 things
 down.
Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here?
You can try it for yourself. Take some OOP code of yours, and insert a null check in front of every dereference of the class handle.
Why would you want to check every time ? You program will get a signal from the system if it try to deference a null pointer, so thing can be done in the signal handler, and no cost is involved.
The signal will likely be the same for the following two code snippets:

void main(){
    Object o;
    o.toString();
}

void main(){
    *cast(int*)0xDEADBEEF = 1337;
}

How to detect whether or not the access violation was actually caused by a null pointer?
Mar 03 2012
parent deadalnix <deadalnix gmail.com> writes:
Le 03/03/2012 21:10, Timon Gehr a écrit :
 On 03/03/2012 09:00 PM, deadalnix wrote:
 Le 03/03/2012 20:06, Walter Bright a écrit :
 On 3/3/2012 2:13 AM, bearophile wrote:
 Walter:

 Adding in software checks for null pointers will dramatically slow
 things
 down.
Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here?
You can try it for yourself. Take some OOP code of yours, and insert a null check in front of every dereference of the class handle.
Why would you want to check every time ? You program will get a signal from the system if it try to deference a null pointer, so thing can be done in the signal handler, and no cost is involved.
The signal will likely be the same for the following two code snippets:

void main(){
    Object o;
    o.toString();
}

void main(){
    *cast(int*)0xDEADBEEF = 1337;
}

How to detect whether or not the access violation was actually caused by a null pointer?
Signal handlers are provided a system-dependent structure that contains such information. This is used to detect stack overflow as well as null pointer dereference. Libs like libsigsegv can help a lot to implement such a thing.
Mar 05 2012
prev sibling parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/03/2012 02:06 PM, Walter Bright wrote:
 On 3/3/2012 2:13 AM, bearophile wrote:
 Walter:

 Adding in software checks for null pointers will dramatically slow
 things
 down.
Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here?
You can try it for yourself. Take some OOP code of yours, and insert a null check in front of every dereference of the class handle.
I have a hard time buying this as a valid reason to avoid inserting such checks. I do think they should be optional, but they should be available, if not default, with optimizations for signal handlers and such taken in the cases where they apply.

Even if it slows my code down 4x, it'll be a huge win for me to avoid this stuff. Because you know what pisses me off a helluva lot more than slightly slower code? Spending hours trying to figure out what made my program say "Segmentation fault". That's what. I hate hate HATE vague error messages that don't help me.

I really want to emphasize how super dumb and counterproductive this is. If I find that my code is too slow all of a sudden, then let me turn off the extra checks. Otherwise, I expect my crashes to give me some indication of what happened.

This is reminding me that I can't do stuff like this:

class Bar
{
    int foo;
}

void main()
{
    Bar bar;

    try
    {
        bar.foo = 5;
    }
    catch ( Exception e )
    {
        writefln("%s",e);
    }
}

DMD 2.057 on Gentoo Linux, compiled with "-g -debug". It prints this:

Segmentation fault

Very frustrating! (And totally NOT worth whatever optimization this buys me.)
Mar 04 2012
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 5 March 2012 at 02:32:12 UTC, Chad J wrote:
 I hate hate HATE vague error messages that don't help me.
In a lot of cases, getting more info is very, very easy:

$ dmd -g -debug test9
$ ./test9
Segmentation fault
$ gdb ./test9
GNU gdb (GDB) 7.1
[...]
(gdb) r
Starting program: /home/me/test9
[Thread debugging using libthread_db enabled]

Program received signal SIGSEGV, Segmentation fault.
0x08067a57 in _Dmain () at test9.d:12
12          bar.foo = 5;
(gdb) where
(gdb) print bar
$1 = (struct test9.Bar *) 0x0

My gdb is out of the box unmodified; you don't need anything special to get basic info like this.

There's two cases where null annoys me though:

1) if it is stored somewhere where it isn't supposed to be. Then, the location of the dereference doesn't help - the question is how it got there in the first place.

2) Segfaults in the middle of a web app, where running it under the same conditions again in the debugger is a massive pain in the butt.

I've trained myself to use assert (or functions with assert in out contracts/invariants) a lot to counter these.
Mar 04 2012
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 05, 2012 at 03:43:15AM +0100, Adam D. Ruppe wrote:
[...]
 There's two cases where null annoys me though:
 
 1) if it is stored somewhere where it isn't supposed to be.
 Then, the location of the dereference doesn't help - the
 question is how it got there in the first place.
And having the compiler insert explicit null checks doesn't help here either.
 2) Segfaults in the middle of a web app, where running it under the
 same conditions again in the debugger is a massive pain in the butt.
I've come to the conclusion after years of fighting with making the debugger work over the network to debug embedded apps, that fprintf is a lot less painful than using a debugger. (Yes I heard that groan.)

A well-placed fprintf can narrow down the location of the problem considerably. A nicely-wrapped multiprocess-safe fprintf that appends to a debug file complete with getpid() information is even better. Especially as a debug library optionally linked into the app. :-)

The only downside is that if your app takes a long time to build (or takes too much effort to install) then a debugger is the better ticket.

[...]
 I've trained myself to use assert (or functions with assert in out
 contracts/invariants) a lot to counter these.
Yeah, asserts and DbC are extremely useful in detecting the problem at its source rather than who knows how long later down the road where all traces to the source are practically already non-existent.

Thing is, you have to consistently do this, everywhere in your code. And everyone else on the project as well. Leave out one place, and it will just be that very place that eventually causes problems. Murphy's law at work. :-)

T

-- 
Having a smoking section in a restaurant is like having a peeing section in a swimming pool. -- Edward Burr
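A minimal sketch of the boundary-checking style described above (names such as Account and credit are hypothetical; this uses the era's in/body contract syntax, which modern D spells with "do"). The point is that the assert fires at function entry, close to where the bad value came from, and compiles away in -release builds:

```d
import std.stdio;

class Account { int balance; }

// Null check at the function boundary via an in-contract:
// the failure points at the caller that passed null, not at some
// dereference several frames later.
void credit(Account acct, int amount)
in
{
    assert(acct !is null, "credit: acct must not be null");
}
body
{
    acct.balance += amount;
}

void main()
{
    auto a = new Account;
    credit(a, 10);
    writeln(a.balance); // prints 10
}
```

As the post notes, the catch is coverage: the contract only helps on the functions where you remembered to write it.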
Mar 04 2012
prev sibling next sibling parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/04/2012 09:43 PM, Adam D. Ruppe wrote:
 On Monday, 5 March 2012 at 02:32:12 UTC, Chad J wrote:
 I hate hate HATE vague error messages that don't help me.
In a lot of cases, getting more info is very, very easy: $ dmd -g -debug test9 $ ./test9 Segmentation fault $ gdb ./test9 GNU gdb (GDB) 7.1 [...] (gdb) r Starting program: /home/me/test9 [Thread debugging using libthread_db enabled] Program received signal SIGSEGV, Segmentation fault. 0x08067a57 in _Dmain () at test9.d:12 12 bar.foo = 5; (gdb) where (gdb) print bar $1 = (struct test9.Bar *) 0x0 My gdb is out of the box unmodified; you don't need anything special to get basic info like this.
News to me. I've had bad runs with that back in the day, but maybe things have improved a bit.
 There's two cases where null annoys me though:

 1) if it is stored somewhere where it isn't supposed to be.
 Then, the location of the dereference doesn't help - the
 question is how it got there in the first place.
True, but that's a different problem space to me. Non-nullable types would be really cool right about now.
 2) Segfaults in the middle of a web app, where running it under
 the same conditions again in the debugger is a massive pain in
 the butt.
THIS. This is why I expect what I expect. It's not web apps in my case. It's that I simply cannot expect users to run my code in a debugger. That is just /not acceptable/.
 I've trained myself to use assert (or functions with assert
 in out contracts/invariants) a lot to counter these.
*quiver*

It's not that I don't like assertions, contracts, or invariants. These are very cool.

The problem is that they don't help me when I missed a spot and didn't use assertions, contracts, or invariants. Back to spending a bunch of time inserting writefln statements to do something that I should be able to accomplish with my eyeballs and a stack trace pretty much instantaneously.
Mar 04 2012
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 5 March 2012 at 03:24:32 UTC, Chad J wrote:
 News to me.  I've had bad runs with that back in the day, but 
 maybe things have improved a bit.
Strangely, I've never had a problem with gdb and D, as far back as 2007. (at least for the basic stack trace kind of stuff). But, yeah, they've been improving a lot of things recently too.
 Non-nullable types would be really cool right about now.
Huh, I thought there was one in phobos by now.

You could spin your own with something like this:

struct NotNull(T) {
    T t;
    alias t this;
    @disable this();
    @disable this(typeof(null));
    this(T value) {
        assert(value !is null);
        t = value;
    }

    @disable typeof(this) opAssign(typeof(null));
    typeof(this) opAssign(T rhs) {
        assert(rhs !is null);
        t = rhs;
        return this;
    }
}

This will catch usages of the null literal at compile time, and other null references at runtime as soon as you try to use it.

With the disabled default constructor, you are forced to provide an initializer when you use it, so no accidental null will slip in.

The alias this means NotNull!T is substitutable for T, so you can drop it into existing apis.
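For illustration, a hedged sketch of how a wrapper like the one above behaves in use (the struct is condensed from the post; Foo and its field are placeholders):

```d
// Condensed from the NotNull sketch in the post above.
struct NotNull(T) {
    T t;
    alias t this;
    @disable this();               // no accidental default null
    @disable this(typeof(null));   // null literal rejected at compile time
    this(T value) { assert(value !is null); t = value; }
}

class Foo { int x = 42; }

void main()
{
    // NotNull!Foo nn;               // error: default construction disabled
    // auto bad = NotNull!Foo(null); // error: null literal disabled
    auto nn = NotNull!Foo(new Foo);
    assert(nn.x == 42);  // alias this forwards member access
    Foo plain = nn;      // substitutable where a plain Foo is expected
    assert(plain !is null);
}
```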
 It's that I simply cannot expect users to run my code in a 
 debugger.
:) I'm lucky if I can get more from my users than "the site doesn't work"!
 The problem is that they don't help me when I missed a spot and 
 didn't use assertions, contracts, or invariants.
Aye, I've had it happen. The not null types might help, though tbh I've never used anything like this in practice so maybe not. I don't really know.
Mar 04 2012
next sibling parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/04/2012 11:39 PM, Adam D. Ruppe wrote:
 On Monday, 5 March 2012 at 03:24:32 UTC, Chad J wrote:
 News to me. I've had bad runs with that back in the day, but maybe
 things have improved a bit.
Strangely, I've never had a problem with gdb and D, as far back as 2007. (at least for the basic stack trace kind of stuff). But, yeah, they've been improving a lot of things recently too.
 Non-nullable types would be really cool right about now.
Huh, I thought there was one in phobos by now. You could spin your own with something like this: struct NotNull(T) { T t; alias t this; disable this(); disable this(typeof(null)); this(T value) { assert(value !is null); t = value; } disable typeof(this) opAssign(typeof(null)); typeof(this) opAssign(T rhs) { assert(rhs !is null); t = rhs; return this; } } This will catch usages of the null literal at compile time, and other null references at runtime as soon as you try to use it. With the disabled default constructor, you are forced to provide an initializer when you use it, so no accidental null will slip in. The alias this means NotNull!T is substitutable for T, so you can drop it into existing apis.
That's cool. Maybe someone should stick it in Phobos? I haven't had time to try it yet though. I also didn't know about @disable; that's a nifty addition.
 It's that I simply cannot expect users to run my code in a debugger.
:) I'm lucky if I can get more from my users than "the site doesn't work"!
Ugh! This sort of thing has happened in non-web code at work. This is on an old OpenVMS system with a DIBOL derivative language and people accessing it from character-based terminals. Once I finally got the damn system capable of broadcasting emails reliably (!!) and without using disk IO (!!), I started having it send me stack traces of things before it dies.

The only thing left that's really annoying about this is I still have no way of determining whether an exception is going to be caught or not before I send out the email, so I can't use it in cases where things are expected to throw sometimes (ex: end of file exception, key not found exception). So I can only do this effectively for errors that are pretty much guaranteed to be bad news.

I hope Phobos will have (or already has) the ability to print stack traces without crashing from an exception. There are (surprisingly frequent) times when something abnormal happens and I want to know why, but it is safe to continue running the program and the last thing I want to do is crash on the user. In those cases it is very useful for me to grab a stacktrace and send it to myself in an email.

I can definitely see web stuff being a lot less cut-and-dry than this though, and also having a lot of blind-spots in technologies that you can't control very easily.
 The problem is that they don't help me when I missed a spot and didn't
 use assertions, contracts, or invariants.
Aye, I've had it happen. The not null types might help, though tbh I've never used anything like this in practice so maybe not. I don't really know.
Mar 04 2012
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 5 March 2012 at 06:19:53 UTC, Chad J wrote:
 That's cool.  Maybe someone should stick it in Phobos?
I just made a patch and sent it up. It'll have to be reviewed by the others, but I expect we'll have something in std.typecons for the next release.
 I also didn't know about  disabled; that's a nifty addition.
Yeah, we brought it up in one of the previous null discussions, and Walter+Andrei liked it as a general solution to enable this and other checks, like ranged integers. Still somewhat new, though; haven't realized all its potential yet.
Mar 05 2012
prev sibling next sibling parent "Nick Sabalausky" <a a.a> writes:
"Adam D. Ruppe" <destructionator gmail.com> wrote in message 
news:ewuffoakafwmuybbzztb forum.dlang.org...
 On Monday, 5 March 2012 at 03:24:32 UTC, Chad J wrote:
 It's that I simply cannot expect users to run my code in a debugger.
:) I'm lucky if I can get more from my users than "the site doesn't work"!
I *hate* those reports!! But they get worse than that: Fairly soon after retaliating to a "Durr...It don't work!" email with a nice formal (and painfully friendly) explanation of how and why to give me useful reports (which he even acknowledged as being a good point), I got from the same damn person (ie *the top guy in charge of the project in question!*):

"So-and-so person told me that one of *their* people told them that the site didn't work when they tried it last week."

WHAT THE FUCKING FUCK?!?!?! Shit like that I'm inclined to just blame on user error. I mean, crap, with a report like that, how am I supposed to know they spelled the URL right or even had a fucking internet connection at all? Or even a damn computer.

I swear, as soon as a computer enters the picture, most people turn shit stupid (well, more stupid than usual): I can't imagine that *even these people* would go up to an auto mechanic and say "Driving to Detroit didn't work!" But that's exactly the crap I have to put up with. And then *I* have to (politely!) explain to these shitheads how to not be a moron...only to have them come back and pull the same shit two weeks later?

Fuck, and people wonder why I hate humans.
Mar 05 2012
prev sibling next sibling parent reply "Jason House" <jason.james.house gmail.com> writes:
On Monday, 5 March 2012 at 04:39:59 UTC, Adam D. Ruppe wrote:

 Huh, I thought there was one in phobos by now.

 You could spin your own with something like this:

 struct NotNull(T) {
   T t;
   alias t this;
   @disable this();
   @disable this(typeof(null));
   this(T value) {
      assert(value !is null);
      t = value;
   }

   @disable typeof(this) opAssign(typeof(null));
   typeof(this) opAssign(T rhs) {
       assert(rhs !is null);
       t = rhs;
       return this;
   }
 }
The opAssign kills all type safety. I think only NotNull!T should be accepted... So "foo = bar" won't compile if bar is nullable. To fix, "foo = NotNull(bar)".
Mar 05 2012
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tuesday, 6 March 2012 at 00:01:20 UTC, Jason House wrote:
 The opAssign kills all type safety. I think only NotNull!T 
 should be accepted... So "foo = bar" won't compile if bar is 
 nullable. To fix, "foo = NotNull(bar)",
In both cases, the assert(x !is null), whether in opAssign or in the constructor, is going to fire, preventing the assignment.

Losing the opAssign would force you to think about it, but I was concerned that it'd make it annoying to use in a case where you already know it is not null. Perhaps worth it? I'm undecided.

The constructor is the important part, definitely.
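A sketch of the stricter variant under discussion, with hypothetical names: instead of an opAssign(T) that asserts at runtime, the plain-T assignment is @disable'd outright, so every nullable-to-non-null conversion must go through the checked constructor at the call site:

```d
struct NotNull(T) {
    private T t;
    alias t this;
    @disable this();
    this(T value) { assert(value !is null); t = value; }
    // Reject direct assignment from a plain (possibly null) T.
    // Identity assignment NotNull = NotNull is still generated by the
    // compiler, so re-seating the reference stays possible.
    @disable void opAssign(T rhs);
}

class Foo {}

void main()
{
    auto a = NotNull!Foo(new Foo);
    Foo maybeNull = new Foo;
    // a = maybeNull;            // rejected: opAssign(T) is @disable'd
    a = NotNull!Foo(maybeNull);  // conversion is explicit and checked
}
```

The trade-off Adam mentions is visible here: even when you already know the value is non-null, you must write the wrap explicitly.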
Mar 05 2012
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Jason House:

 The opAssign kills all type safety. I think only NotNull!T should 
 be accepted... So "foo = bar" won't compile if bar is nullable. 
 To fix, "foo = NotNull(bar)",
I think NotNull also needs this, as a workaround to what I think is a DMD bug:

@disable enum init = 0;

It disallows code like:

auto f1 = NotNull!Foo.init;

And I think NotNull needs a guard, to refuse std.typecons.Nullable:

struct NotNull(T) if (!IsNullable!T) {...

Where the IsNullable template is similar to std.typecons.isTuple (I think std.typecons needs a isTemplateInstance(T,Template)).

Bye,
bearophile
Mar 05 2012
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/05/2012 05:39 AM, Adam D. Ruppe wrote:
 On Monday, 5 March 2012 at 03:24:32 UTC, Chad J wrote:
 News to me. I've had bad runs with that back in the day, but maybe
 things have improved a bit.
Strangely, I've never had a problem with gdb and D, as far back as 2007. (at least for the basic stack trace kind of stuff). But, yeah, they've been improving a lot of things recently too.
 Non-nullable types would be really cool right about now.
Huh, I thought there was one in phobos by now. You could spin your own with something like this: struct NotNull(T) { T t; alias t this; disable this(); disable this(typeof(null)); this(T value) { assert(value !is null); t = value; } disable typeof(this) opAssign(typeof(null)); typeof(this) opAssign(T rhs) { assert(rhs !is null); t = rhs; return this; } } This will catch usages of the null literal at compile time, and other null references at runtime as soon as you try to use it. With the disabled default constructor, you are forced to provide an initializer when you use it, so no accidental null will slip in. The alias this means NotNull!T is substitutable for T, so you can drop it into existing apis.
 It's that I simply cannot expect users to run my code in a debugger.
:) I'm lucky if I can get more from my users than "the site doesn't work"!
 The problem is that they don't help me when I missed a spot and didn't
 use assertions, contracts, or invariants.
Aye, I've had it happen. The not null types might help, though tbh I've never used anything like this in practice so maybe not. I don't really know.
This is quite close, but real support for non-nullable types means that they are the default and checked statically, ideally using data flow analysis.
Mar 06 2012
parent reply "foobar" <foo bar.com> writes:
On Tuesday, 6 March 2012 at 10:19:19 UTC, Timon Gehr wrote:

 This is quite close, but real support for non-nullable types 
 means that they are the default and checked statically, ideally 
 using data flow analysis.
I agree that non-nullable types should be made the default and statically checked but data flow analysis here is redundant.

consider:

T foo = ..;  // T is not-nullable
T? bar = ..; // T? is nullable
bar = foo;   // legal implicit coercion T -> T?
foo = bar;   // compile-time type mismatch error

// correct way:
if (bar) { // make sure bar isn't null
    // compiler knows that cast(T)bar is safe
    foo = bar;
}

of course we can employ additional syntax sugar such as:

foo = bar || <default_value>;

furthermore:

foo.method(); // legal
bar.method(); // compile-time error

it's all easily implementable in the type system.
Mar 06 2012
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/06/2012 04:46 PM, foobar wrote:
 On Tuesday, 6 March 2012 at 10:19:19 UTC, Timon Gehr wrote:

 This is quite close, but real support for non-nullable types means
 that they are the default and checked statically, ideally using data
 flow analysis.
I agree that non-nullable types should be made the default and statically checked but data flow analysis here is redundant. consider: T foo = ..; // T is not-nullable T? bar = ..; // T? is nullable bar = foo; // legal implicit coercion T -> T? foo = bar; // compile-time type mismatch error //correct way: if (bar) { // make sure bar isn't null // compiler knows that cast(T)bar is safe foo = bar; }
Right. This example already demonstrates some simplistic data flow analysis.
 of course we can employ additional syntax sugar such as:
 foo = bar || <default_value>;

 furthermore:
 foo.method(); // legal
 bar.method(); // compile-time error

 it's all easily implementable in the type system.
Actually it requires some thinking because making initialization of non-null fields safe is not entirely trivial. For example:
http://pm.inf.ethz.ch/publications/getpdf.php/bibname/Own/id/SummersMuellerTR11.pdf

CTFE and static constructors solve that issue for static data.
Mar 06 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/06/2012 12:19 PM, Timon Gehr wrote:
 On 03/06/2012 04:46 PM, foobar wrote:
 On Tuesday, 6 March 2012 at 10:19:19 UTC, Timon Gehr wrote:

 This is quite close, but real support for non-nullable types means
 that they are the default and checked statically, ideally using data
 flow analysis.
I agree that non-nullable types should be made the default and statically checked but data flow analysis here is redundant. consider: T foo = ..; // T is not-nullable T? bar = ..; // T? is nullable bar = foo; // legal implicit coercion T -> T? foo = bar; // compile-time type mismatch error //correct way: if (bar) { // make sure bar isn't null // compiler knows that cast(T)bar is safe foo = bar; }
Right. This example already demonstrates some simplistic data flow analysis.
 of course we can employ additional syntax sugar such as:
 foo = bar || <default_value>;

 furthermore:
 foo.method(); // legal
 bar.method(); // compile-time error

 it's all easily implementable in the type system.
Actually it requires some thinking because making initialization of non-null fields safe is not entirely trivial. For example: http://pm.inf.ethz.ch/publications/getpdf.php/bibname/Own/id/SummersMuellerTR11.pdf CTFE and static constructors solve that issue for static data.
I can't seem to download the PDF... it always gives me just two bytes.

But to initialize non-null fields, I suspect we would need to be able to do stuff like this:

class Foo
{
    int dummy;
}

class Bar
{
    Foo foo = new Foo();

    this() { foo.dummy = 5; }
}

Which would be lowered by the compiler into this:

class Bar
{
    // Assume we've already checked for bogus assignments.
    // It is now safe to make this nullable.
    Nullable!(Foo) foo;

    this()
    {
        // Member initialization is done first.
        foo = new Foo();

        // Then programmer-supplied ctor code runs after.
        foo.dummy = 5;
    }
}

allow this. Without it, I have to repeat myself a lot, and that is just wrong ;).

Allowing this kind of initialization might also make it possible for us to have zero-argument struct constructors.
Mar 06 2012
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Chad J:

 I can't seem to download the PDF... it always gives me just two bytes.
 
 But to initialize non-null fields, I suspect we would need to be able to 
 do stuff like this:
There are some links here: http://d.puremagic.com/issues/show_bug.cgi?id=4571 Bye, bearophile
Mar 06 2012
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 03/07/2012 02:40 AM, Chad J wrote:
 On 03/06/2012 12:19 PM, Timon Gehr wrote:
...
 For example:
 http://pm.inf.ethz.ch/publications/getpdf.php/bibname/Own/id/SummersMuellerTR11.pdf



 CTFE and static constructors solve that issue for static data.
I can't seem to download the PDF... it always gives me just two bytes.
Same here. Strange. Interestingly, it works if you copy and paste the link into google.
Mar 07 2012
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/07/2012 02:40 AM, Chad J wrote:
 But to initialize non-null fields, I suspect we would need to be able to
 do stuff like this:

 class Foo
 {
 int dummy;
 }

 class Bar
 {
 Foo foo = new Foo();

 this() { foo.dummy = 5; }
 }

 Which would be lowered by the compiler into this:

 class Bar
 {
 // Assume we've already checked for bogus assignments.
 // It is now safe to make this nullable.
 Nullable!(Foo) foo;

 this()
 {
 // Member initialization is done first.
 foo = new Foo();

 // Then programmer-supplied ctor code runs after.
 foo.dummy = 5;
 }
 }


 allow this. Without it, I have to repeat myself a lot, and that is just
 wrong ;).]
It is not sufficient.

class Bar{
    Foo foo = new Foo(this);
    void method(){...}
}
class Foo{
    this(Bar bar){bar.foo.method();}
}
Mar 07 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/07/2012 04:41 AM, Timon Gehr wrote:
 On 03/07/2012 02:40 AM, Chad J wrote:
 But to initialize non-null fields, I suspect we would need to be able to
 do stuff like this:

 class Foo
 {
 int dummy;
 }

 class Bar
 {
 Foo foo = new Foo();

 this() { foo.dummy = 5; }
 }

 Which would be lowered by the compiler into this:

 class Bar
 {
 // Assume we've already checked for bogus assignments.
 // It is now safe to make this nullable.
 Nullable!(Foo) foo;

 this()
 {
 // Member initialization is done first.
 foo = new Foo();

 // Then programmer-supplied ctor code runs after.
 foo.dummy = 5;
 }
 }


 allow this. Without it, I have to repeat myself a lot, and that is just
 wrong ;).]
It is not sufficient. class Bar{ Foo foo = new Foo(this); void method(){...} } class Foo{ this(Bar bar){bar.foo.method();} }
Lowered it a bit to try to compile, because it seems Foo doesn't have a method() :

import std.stdio;

class Bar{
    Foo foo;
    this() { foo = new Foo(this); }
    void method(){ writefln("poo"); }
}

class Foo{
    this(Bar bar){bar.foo.method();}
}

void main()
{
}

And, it doesn't:

main.d(12): Error: no property 'method' for type 'main.Foo'

Though, more to the point: I would probably forbid "Foo foo = new Foo(this);". The design that leads to this is creating circular dependencies, which is usually bad to begin with. Would we lose much of value?
Mar 07 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/08/2012 01:24 AM, Chad J wrote:
 On 03/07/2012 04:41 AM, Timon Gehr wrote:
 On 03/07/2012 02:40 AM, Chad J wrote:
 But to initialize non-null fields, I suspect we would need to be able to
 do stuff like this:

 class Foo
 {
 int dummy;
 }

 class Bar
 {
 Foo foo = new Foo();

 this() { foo.dummy = 5; }
 }

 Which would be lowered by the compiler into this:

 class Bar
 {
 // Assume we've already checked for bogus assignments.
 // It is now safe to make this nullable.
 Nullable!(Foo) foo;

 this()
 {
 // Member initialization is done first.
 foo = new Foo();

 // Then programmer-supplied ctor code runs after.
 foo.dummy = 5;
 }
 }


 allow this. Without it, I have to repeat myself a lot, and that is just
 wrong ;).]
It is not sufficient. class Bar{ Foo foo = new Foo(this); void method(){...} } class Foo{ this(Bar bar){bar.foo.method();} }
Lowered it a bit to try to compile, because it seems Foo doesn't have a method() : import std.stdio; class Bar{ Foo foo; this() { foo = new Foo(this); } void method(){ writefln("poo"); } } class Foo{ this(Bar bar){bar.foo.method();} } void main() { } And, it doesn't: main.d(12): Error: no property 'method' for type 'main.Foo'
Just move the method from Bar to Foo.

import std.stdio;

class Bar{
    Foo foo;
    this() { foo = new Foo(this); }
}

class Foo{
    this(Bar bar){bar.foo.method();}
    void method(){ writefln("poo"); }
}

void main() {
    auto bar = new Bar;
}
 Though, more to the point:
 I would probably forbid "Foo foo = new Foo(this);". The design that
 leads to this is creating circular dependencies, which is usually bad to
 begin with.
Circular object references are often justified.
 Would we lose much of value?
Well this would amount to forbidding escaping an object from its constructor, as well as forbidding calling any member functions from the constructor. Also, if you *need* to create a circular structure, you'd have to use sentinel objects. Those are worse than null.
Mar 07 2012
parent Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/07/2012 07:39 PM, Timon Gehr wrote:
 On 03/08/2012 01:24 AM, Chad J wrote:

 Though, more to the point:
 I would probably forbid "Foo foo = new Foo(this);". The design that
 leads to this is creating circular dependencies, which is usually bad to
 begin with.
Circular object references are often justified.
 Would we lose much of value?
Well this would amount to forbidding escaping an object from its constructor, as well as forbidding calling any member functions from the constructor. Also, if you *need* to create a circular structure, you'd have to use sentinel objects. Those are worse than null.
OK, that does sound unusually harsh.
Mar 07 2012
prev sibling parent "Christopher Bergqvist" <spambox0 digitalpoetry.se> writes:
On Tuesday, 6 March 2012 at 15:46:54 UTC, foobar wrote:
 On Tuesday, 6 March 2012 at 10:19:19 UTC, Timon Gehr wrote:

 This is quite close, but real support for non-nullable types 
 means that they are the default and checked statically, 
 ideally using data flow analysis.
I agree that non-nullable types should be made the default and statically checked but data flow analysis here is redundant. consider: T foo = ..; // T is not-nullable T? bar = ..; // T? is nullable bar = foo; // legal implicit coercion T -> T? foo = bar; // compile-time type mismatch error //correct way: if (bar) { // make sure bar isn't null // compiler knows that cast(T)bar is safe foo = bar; } of course we can employ additional syntax sugar such as: foo = bar || <default_value>; furthermore: foo.method(); // legal bar.method(); // compile-time error it's all easily implementable in the type system.
I agree with the above and would also suggest something along the lines of:

assert (bar) { // make sure it isn't null in debug builds
    bar.method(); // legal
}

The branchy null-check would then disappear in build configurations with asserts disabled.
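No such assert-block statement exists in D, but its intended behavior can be approximated today with a small helper (the name withNonNull is hypothetical): the assert vanishes under -release, while the body still runs:

```d
import std.stdio;

// Hedged sketch of the proposed `assert (bar) { ... }` sugar:
// the null check is an assert, so it compiles away in -release builds,
// and the delegate plays the role of the statement's body.
void withNonNull(T)(T obj, scope void delegate(T) dg)
{
    assert(obj !is null, "unexpected null");
    dg(obj);
}

class Bar { void method() { writeln("ok"); } }

void main()
{
    auto bar = new Bar;
    withNonNull(bar, (Bar b) { b.method(); }); // prints "ok"
}
```

Unlike the proposal, this sketch cannot inform the compiler's type checking; it only centralizes the debug-time check.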
Mar 06 2012
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Adam D. Ruppe:

 I've trained myself to use assert (or functions with assert
 in out contracts/invariants) a lot to counter these.
I think contracts are better left to higher level ideas. The simple not-null contracts you are adding are better left to a type system that manages a succinct not-null syntax. Bye, bearophile
Mar 05 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/4/2012 6:31 PM, Chad J wrote:
 class Bar
 {
 int foo;
 }

 void main()
 {
 Bar bar;
 try {
 bar.foo = 5;
 } catch ( Exception e ) {
 writefln("%s",e);
 }
 }

 DMD 2.057 on Gentoo Linux, compiled with "-g -debug". It prints this:
 Segmentation fault

 Very frustrating!
This is what I get (I added in an import std.stdio;):

dmd foo -gc
gdb foo
GNU gdb (GDB) 7.2-ubuntu
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /home/walter/cbx/mars/foo...done.
(gdb) run
Starting program: /home/walter/cbx/mars/foo
[Thread debugging using libthread_db enabled]

Program received signal SIGSEGV, Segmentation fault.
0x0000000000401e45 in D main () at foo.d:13
13          bar.foo = 5;
(gdb) bt
(gdb)

By running it under gdb (the debugger), it tells me what file and line it failed on, and gives a lovely stack trace. There really are only 3 gdb commands you need (and the only ones I remember):

run (run your program)
bt (print a backtrace)
quit (exit gdb)

Voila!

Also, a null pointer exception is only one of a whole menagerie of possible hardware-detected errors. There's a limit on the compiler instrumenting code to detect these. At some point, it really is worth learning how to use the debugger.
Mar 04 2012
next sibling parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/05/2012 01:25 AM, Walter Bright wrote:
 On 3/4/2012 6:31 PM, Chad J wrote:
 class Bar
 {
 int foo;
 }

 void main()
 {
 Bar bar;
 try {
 bar.foo = 5;
 } catch ( Exception e ) {
 writefln("%s",e);
 }
 }

 DMD 2.057 on Gentoo Linux, compiled with "-g -debug". It prints this:
 Segmentation fault

 Very frustrating!
This is what I get (I added in an import std.stdio;): dmd foo -gc gdb foo GNU gdb (GDB) 7.2-ubuntu Copyright (C) 2010 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>... Reading symbols from /home/walter/cbx/mars/foo...done. (gdb) run Starting program: /home/walter/cbx/mars/foo [Thread debugging using libthread_db enabled] Program received signal SIGSEGV, Segmentation fault. 0x0000000000401e45 in D main () at foo.d:13 13 bar.foo = 5; (gdb) bt (gdb) By running it under gdb (the debugger), it tells me what file and line it failed on, and gives a lovely stack trace. There really are only 3 gdb commands you need (and the only ones I remember): run (run your program) bt (print a backtrace) quit (exit gdb) Voila! Also, a null pointer exception is only one of a whole menagerie of possible hardware-detected errors. There's a limit on the compiler instrumenting code to detect these. At some point, it really is worth learning how to use the debugger.
Problems:

- I have to rerun the program in a debugger to see the stack trace. This is a slow workflow. It's a big improvement if the segfault is hard to find, but only a small improvement if the segfault is easy to find. Very bad if I'm prototyping experimental code and I have a bunch to go through.

- It only gives one line number. I imagine there's a way to get it to spill the rest? At least it's the most important line number. Nonetheless, I commonly encounter cases where the real action is happening a few levels into the stack, which means I want to see ALL the line numbers /at one time/.

- As I mentioned in another post, it is unreasonable to expect others to run your programs in a debugger. I like it when my users can send me stacktraces. (And they need to have ALL the line numbers displayed with no extra coercion.) There are a number of occasions where I don't even need to ask how to reproduce the bug, because I can just tell by looking at the trace. Super useful!

- It doesn't seem to be possible to catch() these hardware errors. Booooo.

I wouldn't even expect ALL hardware errors to be instrumented in the compiler. At least get the common ones. Null dereference is remarkably common. I can't actually think of others I care about right now. Array boundary errors and assertions already seem to have their own exceptions now; they were great pests back in the day when this was not so. The error messages could use a lot of work though. (Range Violation should print the index used and the index boundaries, and simpler assertions such as equality should print the values of their operands.) Haven't used those two in a while though.

Haxe... totally got this right. Also Actionscript 3 by proxy. Hell, even Synergy/DE, the DIBOL (!!) derivative that I use at work, /gets this right/. I get stacktraces for null dereferences in these languages. It's /really/ convenient and useful. I consider D to be very backwards in this regard.
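For the "users can't run a debugger" case, one POSIX-only workaround (a sketch, not anything proposed in this thread) is to install a SIGSEGV handler that dumps a raw C-level backtrace via glibc's backtrace facilities. Caveats: symbol names need linking with -L--export-dynamic, D symbols come out mangled, and continuing after SIGSEGV is unsafe, so the handler must exit:

```d
import core.stdc.stdlib : _Exit;
import core.sys.posix.signal;

// Declarations for glibc's execinfo.h (assumed available on Linux).
extern (C) nothrow @nogc int backtrace(void** buffer, int size);
extern (C) nothrow @nogc void backtrace_symbols_fd(void** buffer, int size, int fd);

extern (C) nothrow @nogc void segvHandler(int sig)
{
    void*[32] frames;
    auto depth = backtrace(frames.ptr, cast(int) frames.length);
    backtrace_symbols_fd(frames.ptr, depth, 2); // raw trace to stderr
    _Exit(1); // not safe to resume after a segfault
}

void installHandler()
{
    sigaction_t sa;
    sa.sa_handler = &segvHandler;
    sigaction(SIGSEGV, &sa, null);
}

class Bar { int foo; }

void main()
{
    installHandler();
    Bar bar;     // null reference
    bar.foo = 5; // SIGSEGV -> handler prints a trace, exits 1
}
```

This gives users something to paste into a bug report without ever touching gdb, at the cost of a much uglier trace than a real language-level NullPointerError would provide.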
Mar 04 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/4/2012 11:50 PM, Chad J wrote:
 Problems:
 - I have to rerun the program in a debugger to see the stack trace. This is a
 slow workflow. It's a big improvement if the segfault is hard to find, but only
 a small improvement if the segfault is easy to find. Very bad if I'm
prototyping
 experimental code and I have a bunch to go through.
I don't get this at all. I find it trivial to run the program with a debugger:

   gdb foo
   >run

that's it.
 - It only gives one line number. I imagine there's a way to get it to spill the
 rest? At least it's the most important line number. Nonetheless, I commonly
 encounter cases where the real action is happening a few levels into the stack,
 which means I want to see ALL the line numbers /at one time/.
That's because the runtime library is compiled without debug symbols in it. If it was compiled with -g, the line numbers would be there. You of course can compile the library that way if you want. Debug symbols substantially increase your program size.
 - As I mentioned in another post, it is unreasonable to expect others to run
 your programs in a debugger. I like it when my users can send me stacktraces.
 (And they need to have ALL the line numbers displayed with no extra coercion.)
 There are a number of occasions where I don't even need to ask how to reproduce
 the bug, because I can just tell by looking at the trace. Super useful!
I agree that customers emailing you a stack trace is a reasonable point. Andrei also brought up the point of the problems with using a debugger on a remote server machine.
 I wouldn't even expect ALL hardware errors to be instrumented in the compiler.
 At least get the common ones. Null dereference is remarkably common. I can't
 actually think of others I care about right now. Array boundary errors and
 assertions already seem to have their own exceptions now; they were great pests
 back in the day when this was not so.
No hardware support for them, so no choice.
 The error messages could use a lot of work
 though. (Range Violation should print the index used and the index boundaries,
 and simpler assertions such as equality should print the values of their
operands.)
The added bloat for this would be substantial.

 Haven't used those two in a while though. Haxe... totally got this right. Also Actionscript
3
 by proxy. Hell, even Synergy/DE, the DIBOL (!!) derivative that I use at work,
 /gets this right/. I get stacktraces for null dereferences in these languages.
 It's /really/ convenient and useful. I consider D to be very backwards in this
 regard.
Notably, C and C++ do not do what you suggest.
Mar 05 2012
next sibling parent reply "Sandeep Datta" <datta.sandeep gmail.com> writes:
 No hardware support for them, so no choice.
I am just going to leave this here... *Fast Bounds Checking Using Debug Register* http://www.ecsl.cs.sunysb.edu/tr/TR225.pdf
Mar 05 2012
parent reply bearophile <bearophileHUGS lycos.com> writes:
Sandeep Datta:

 I am just going to leave this here...
 
 *Fast Bounds Checking Using Debug Register*
 
 http://www.ecsl.cs.sunysb.edu/tr/TR225.pdf
Is this idea usable in DMD to speed up D code compiled in non-release mode? Bye, bearophile
Mar 06 2012
parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Tue, 06 Mar 2012 14:22:09 +0100, bearophile <bearophileHUGS lycos.com>  
wrote:

 Sandeep Datta:

 I am just going to leave this here...

 *Fast Bounds Checking Using Debug Register*

 http://www.ecsl.cs.sunysb.edu/tr/TR225.pdf
Is this idea usable in DMD to speed up D code compiled in non-release mode? Bye, bearophile
Array accesses are already bounds checked in a much more reliable way than what the paper proposes. Furthermore the solution in the paper needs kernel support.
Mar 06 2012
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-03-05 11:38, Walter Bright wrote:
 On 3/4/2012 11:50 PM, Chad J wrote:

 Haven't used
 those two in a while though. Haxe... totally got this right. Also
 Actionscript 3
 by proxy. Hell, even Synergy/DE, the DIBOL (!!) derivative that I use
 at work,
 /gets this right/. I get stacktraces for null dereferences in these
 languages.
 It's /really/ convenient and useful. I consider D to be very backwards
 in this
 regard.
Notably, C and C++ do not do what you suggest.
Just because C and C++ do something in a certain way doesn't make it a valid reason to do the same thing in D. I think this is an argument we need to stop using immediately. It just shows we're stuck in our ways, can't innovate and can't think for ourselves.

-- 
/Jacob Carlborg
Mar 05 2012
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Personally I'd love to get more info about out-of-bounds errors. E.g.
for arrays, which index the code attempted to access, and for hashes
which key.

Sure it's easy to use an enforce, but I'd rather have that inserted in
debug builds with bounds checking anyway. For example:

void main()
{
    int[int] aa;
    int key = 1;
    auto a = aa[key];
}

core.exception.RangeError test(22): Range violation

That's not too much information (well there's also that stacktrace
which is still broken on XP regardless of dbghelp.dll). This is
better:

import std.exception;
import std.string;
import core.exception;

void main()
{
    int[int] aa;
    int key = 1;
    enforce(key in aa, new RangeError(format(": Key %s not in hash. ", key)));
    auto a = aa[key];
}

core.exception.RangeError : Key 1 not in hash. (20): Range violation

I'd rather not have to depend on debuggers or code duplication (even
mixins) for this basic information.

Side-note: RangeError is missing a constructor that takes a *message*
as the first parameter, the one that was called takes a file string
parameter. With the ctor fixed the error becomes:
core.exception.RangeError test.d(20): Range violation: Key 1 not in hash.

That would help me so much without having to change code, recompile,
and then wait 20 seconds at runtime to reach that failing test again.
Mar 05 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 05, 2012 at 02:30:14PM +0100, Andrej Mitrovic wrote:
 Personally I'd love to get more info about out-of-bounds errors. E.g.
 for arrays, which index the code attempted to access, and for hashes
 which key.
Personally, I'd love to see D's added string capabilities put to good use in *all* exception messages. It's been how many decades since the first C compiler was written? Yet we still haven't moved on from using static strings in Exceptions. This is silly.

A "file not found" exception should tell you what the offending filename is. It's as simple as:

    throw new IOException("File '%s' not found".format(filename));

A range violation should say what the offending index was:

[...]
     enforce(key in aa, new RangeError(format(": Key %s not in hash. ", key)));
[...]
 core.exception.RangeError : Key 1 not in hash. (20): Range violation
A numerical conversion error should say what the offending malformed number was. Or at least, include the non-digit character that it choked on. A syntax error in getopt should tell you what the offending option was. (How'd you like it if you ran some random program, and it says "command line error" with no indication at all of what the error was?)

It's pure common sense. I mean, if the only message dmd ever gave was "syntax error" without telling you *what* caused the syntax error or *where* (file, line number, perhaps column), we'd all be beating down Walter's door. So why should exceptions in other applications be any different?

T

-- 
The volume of a pizza of thickness a and radius z can be described by the following formula: pi zz a. -- Wouter Verhelst
Mar 05 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/5/2012 8:07 AM, H. S. Teoh wrote:
 A "file not found" exception should tell you what the offending filename is.
std.file.read("adsfasdf") gives: std.file.FileException std\file.d(305): adsfasdf: The system cannot find the file specified.
Mar 05 2012
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 05, 2012 at 11:06:40AM -0800, Walter Bright wrote:
 On 3/5/2012 8:07 AM, H. S. Teoh wrote:
A "file not found" exception should tell you what the offending filename is.
std.file.read("adsfasdf") gives: std.file.FileException std\file.d(305): adsfasdf: The system cannot find the file specified.
That's good to know. I just picked "file not found" as my favorite whipping boy. :-)

Now we just need this consistently across all the standard exceptions. (The including-the-wrong-parameter part, not the whipping part.)

T

-- 
Today's society is one of specialization: as you grow, you learn more and more about less and less. Eventually, you know everything about nothing.
Mar 05 2012
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/5/12, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
<snip>

Yep, agreed with everything you've said.

Also, I find your message signatures amusing. :P
Mar 05 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/5/2012 4:27 AM, Jacob Carlborg wrote:
 On 2012-03-05 11:38, Walter Bright wrote:
 Notably, C and C++ do not do what you suggest.
Just because C and C++ do something in a certain way doesn't make it a valid reason to do the same thing in D. I think this is an argument we need to stop using immediately. It just shows we're stuck in our ways, can't innovate and can't think for our self.
Doing things differently than well established practice requires a strong reason. There are often good reasons for that established practice that aren't obvious.
Mar 05 2012
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jj32l2$1dk$1 digitalmars.com...
 On 3/5/2012 4:27 AM, Jacob Carlborg wrote:
 On 2012-03-05 11:38, Walter Bright wrote:
 Notably, C and C++ do not do what you suggest.
Just because C and C++ do something in a certain way doesn't make it a valid reason to do the same thing in D. I think this is an argument we need to stop using immediately. It just shows we're stuck in our ways, can't innovate and can't think for our self.
Doing things differently than well established practice requires a strong reason. There are often good reasons for that established practice that aren't obvious.
Ok, then what's the strong reason for abandoning the practice here that's been well established by damn near everything other than C/C++?
Mar 05 2012
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-03-05 20:02, Walter Bright wrote:
 On 3/5/2012 4:27 AM, Jacob Carlborg wrote:
 On 2012-03-05 11:38, Walter Bright wrote:
 Notably, C and C++ do not do what you suggest.
Just because C and C++ do something in a certain way doesn't make it a valid reason to do the same thing in D. I think this is an argument we need to stop using immediately. It just shows we're stuck in our ways, can't innovate and can't think for our self.
Doing things differently than well established practice requires a strong reason. There are often good reasons for that established practice that aren't obvious.
Yeah, C and C++ might not do what's suggested but basically all other languages do it. -- /Jacob Carlborg
Mar 05 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/5/2012 11:51 AM, Jacob Carlborg wrote:
 Yeah, C and C++ might not do what's suggested but basically all other languages
 do it.
People turn to C and C++ for systems work and high performance.
Mar 06 2012
parent Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/06/2012 03:27 AM, Walter Bright wrote:
 On 3/5/2012 11:51 AM, Jacob Carlborg wrote:
 Yeah, C and C++ might not do what's suggested but basically all other
 languages
 do it.
People turn to C and C++ for systems work and high performance.
Optional. Flags. If there is a truly unavoidable trade-off, then you give users CHOICE. Your opinion on this matter does not work well for /everyone/ in practice.

Not to mention that there seems to be a completely avoidable trade-off here. A few posts have mentioned handlers in Linux that would work for printing traces, though not for recovery. Seems kinda no-brainer.

I'd still want the /choice/ to incur higher runtime overhead to make null dereferences and maybe a few other obvious ones behave consistently with other exceptions in the language. *I* would set this flag in a heartbeat; *you* don't have to.
Mar 06 2012
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 05, 2012 at 06:12:43PM +0100, Andrej Mitrovic wrote:
[...]
 Also, I find your message signatures amusing. :P
I have a file of amusing quotes that I collected over the years from various sources (including some not-so-funny ones I made up myself), and a 1-line perl script hooked to my Mutt compose function that randomly selects a quote and puts it on my signature line. Once in a while it can coincidentally pick a quote relevant to the actual discussion in the message body, which makes it even funnier. T -- Laissez-faire is a French term commonly interpreted by Conservatives to mean 'lazy fairy,' which is the belief that if governments are lazy enough, the Good Fairy will come down from heaven and do all their work for them.
Mar 05 2012
prev sibling next sibling parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jj252v$15vf$1 digitalmars.com...
 On 3/4/2012 11:50 PM, Chad J wrote:
 Problems:
 - I have to rerun the program in a debugger to see the stack trace. This 
 is a
 slow workflow. It's a big improvement if the segfault is hard to find, 
 but only
 a small improvement if the segfault is easy to find. Very bad if I'm 
 prototyping
 experimental code and I have a bunch to go through.
I don't get this at all. I find it trivial to run the program with a debugger: gdb foo >run that's it.
Not all software is minimally-interactive CLI.

 Haven't used
 those two in a while though. Haxe... totally got this right. Also 
 Actionscript 3
 by proxy. Hell, even Synergy/DE, the DIBOL (!!) derivative that I use at 
 work,
 /gets this right/. I get stacktraces for null dereferences in these 
 languages.
 It's /really/ convenient and useful. I consider D to be very backwards in 
 this
 regard.
Notably, C and C++ do not do what you suggest.
So what? C and C++ suck ass. That's why D exists.
Mar 05 2012
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 05 Mar 2012 05:38:20 -0500, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 3/4/2012 11:50 PM, Chad J wrote:
 Problems:
 - I have to rerun the program in a debugger to see the stack trace.  
 This is a
 slow workflow. It's a big improvement if the segfault is hard to find,  
 but only
 a small improvement if the segfault is easy to find. Very bad if I'm  
 prototyping
 experimental code and I have a bunch to go through.
I don't get this at all. I find it trivial to run the program with a debugger: gdb foo >run that's it.
This argument continually irks me to no end. It seems like the trusty (rusty?) sword you always pull out when defending the current behavior, but it falls flat on its face when a programmer is faced with a Seg Fault that has occurred on a program that was running for several days/weeks, possibly not in his development environment, and now he must run it via a debugger to wait another several days/weeks to (hopefully) get the same error. Please stop using this argument, it's only valid on trivial bugs that crash immediately during development.

I wholeheartedly agree that we should use the hardware features that we are given, and that NullPointerException is not worth the bloat. But we should be doing *something* better than just printing "Segmentation Fault".

-Steve
Mar 05 2012
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 05, 2012 at 05:31:34PM -0500, Steven Schveighoffer wrote:
[...]
 I wholeheartedly agree that we should use the hardware features that
 we are given, and that NullPointerException is not worth the bloat.
 But we should be doing *something* better than just printing
 "Segmentation Fault".
[...]

On Linux, you can catch SIGSEGV and print a stacktrace in the signal handler. This is pretty standard procedure.

In theory, we *could* have druntime install a handler for SIGSEGV upon program startup that prints a stacktrace and exits (or do whatever the equivalent is on Windows, if compiled on Windows).

T

-- 
Let's eat some disquits while we format the biskettes.
Mar 05 2012
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 5 March 2012 at 22:50:46 UTC, H. S. Teoh wrote:
 In theory, we *could* have druntime install a handler for 
 SIGSEGV upon program startup that prints a stacktrace and
 exits
This sounds like a good idea to me.
 (or do whatever the
 equivalent is on Windows, if compiled on Windows).
On Windows, hardware exceptions are turned into D exceptions by the SEH system. The source is in druntime/src/rt/deh.d You can catch null pointers on Windows if you want; the OS makes this possible.
Mar 05 2012
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/5/12 2:31 PM, Steven Schveighoffer wrote:
 On Mon, 05 Mar 2012 05:38:20 -0500, Walter Bright
 <newshound2 digitalmars.com> wrote:
 I don't get this at all. I find it trivial to run the program with a
 debugger:

 gdb foo
run
that's it.
This argument continually irks me to no end. It seems like the trusty (rusty?) sword you always pull out when defending the current behavior, but it falls flat on its face when a programmer is faced with a Seg Fault that has occurred on a program that was running for several days/weeks, possibly not in his development environment, and now he must run it via a debugger to wait another several days/weeks to (hopefully) get the same error. Please stop using this argument, it's only valid on trivial bugs that crash immediately during development.
I second that. Andrei
Mar 05 2012
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2012-03-05 22:31:34 +0000, "Steven Schveighoffer" 
<schveiguy yahoo.com> said:

 On Mon, 05 Mar 2012 05:38:20 -0500, Walter Bright  
 <newshound2 digitalmars.com> wrote:
 
 I don't get this at all. I find it trivial to run the program with a  debugger:
 
    gdb foo
    >run
 
 that's it.
This argument continually irks me to no end. It seems like the trusty (rusty?) sword you always pull out when defending the current behavior, but it falls flat on its face when a programmer is faced with a Seg Fault that has occurred on a program that was running for several days/weeks, possibly not in his development environment, and now he must run it via a debugger to wait another several days/weeks to (hopefully) get the same error. Please stop using this argument, it's only valid on trivial bugs that crash immediately during development.
Walter's argument about using gdb doesn't make sense in many scenarios. He's probably a little too used to programs which are short lived and have easily reproducible inputs (like compilers).

That said, throwing an exception might not be a better response all the time. On my operating system (Mac OS X) when a program crashes I get a nice crash log with the date, a stack trace for each thread with named functions, the list of all loaded libraries, and the list of VM regions dumped into ~/Library/Logs/CrashReporter/. That's very useful when you have a customer experiencing a crash with your software, as you can ask for the crash log. Can't you do the same on other operating systems?

Whereas if an exception is thrown without it being caught I get a stack trace on the console and nothing else, which is both less informative and easier to lose than a crash log sitting there on the disk.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Mar 05 2012
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 05 Mar 2012 20:17:32 -0500, Michel Fortin  
<michel.fortin michelf.com> wrote:

 That said, throwing an exception might not be a better response all the  
 time. On my operating system (Mac OS X) when a program crashes I get a  
 nice crash log with the date, a stack trace for each thread with named  
 functions, the list of all loaded libraries, and the list of VM regions  
 dumped into ~/Library/Logs/CrashReporter/. That's very useful when you  
 have a customer experiencing a crash with your software, as you can ask  
 for the crash log. Can't you do the same on other operating systems?
It depends on the OS facilities and the installed libraries for such features. It's eminently possible, and I think on Windows, you can catch such exceptions too in external programs to do the same sort of dumping.

On Linux, you get a "Segmentation Fault" message (or nothing if you have no terminal showing the output), and the program goes away. That's the default behavior.

I think it's better in any case to do *something* other than just print "Segmentation Fault" by default. If someone has a way to hook this in a better fashion, we can include that, but I hazard to guess it will not be on stock Linux boxes.
 Whereas if an exception is thrown without it being catched I get a stack  
 trace on the console and nothing else, which is both less informative an  
 easier to lose than a crash log sitting there on the disk.
Certainly for Mac OS X, it should do the most informative appropriate thing for the OS it's running on. Does the above happen for D programs currently on Mac OS X?

Also, I don't think an exception is the right thing in any case -- it may not actually get caught if the Seg Fault is due to memory issues. I'd rather the program do its best attempt to print a stack trace and then abort.

-Steve
Mar 05 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, March 05, 2012 21:04:20 Steven Schveighoffer wrote:
 On Mon, 05 Mar 2012 20:17:32 -0500, Michel Fortin
 
 <michel.fortin michelf.com> wrote:
 That said, throwing an exception might not be a better response all the
 time. On my operating system (Mac OS X) when a program crashes I get a
 nice crash log with the date, a stack trace for each thread with named
 functions, the list of all loaded libraries, and the list of VM regions
 dumped into ~/Library/Logs/CrashReporter/. That's very useful when you
 have a customer experiencing a crash with your software, as you can ask
 for the crash log. Can't you do the same on other operating systems?
It depends on the OS facilities and the installed libraries for such features. It's eminently possible, and I think on Windows, you can catch such exceptions too in external programs to do the same sort of dumping. On Linux, you get a "Segmentation Fault" message (or nothing if you have no terminal showing the output), and the program goes away. That's the default behavior. I think it's better in any case to do *something* other than just print "Segmentation Fault" by default. If someone has a way to hook this in a better fashion, we can include that, but I hazard to guess it will not be on stock Linux boxes.
All you have to do is add a signal handler which handles SIGSEGV and have it print out a stacktrace. It's pretty easy to do.

It _is_ the sort of thing that programs may want to override (to handle other signals), so I'm not quite sure what the best way to handle that is without causing problems for them (e.g. initialization order could affect which handler is added last and is therefore the one used). Maybe a function should be added to druntime which wraps the glibc function so that programs can add their signal handler through _it_, and if that happens, the default one won't be used.

Regardless, I'm not sure whether the functions involved are POSIX or not, so I don't know whether it'll work on anything besides Linux. It would still be of benefit even if it were Linux-only though.

- Jonathan M Davis
Mar 05 2012
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 05 Mar 2012 22:51:28 -0500, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Monday, March 05, 2012 21:04:20 Steven Schveighoffer wrote:
 On Mon, 05 Mar 2012 20:17:32 -0500, Michel Fortin

 <michel.fortin michelf.com> wrote:
 That said, throwing an exception might not be a better response all  
the
 time. On my operating system (Mac OS X) when a program crashes I get a
 nice crash log with the date, a stack trace for each thread with named
 functions, the list of all loaded libraries, and the list of VM  
regions
 dumped into ~/Library/Logs/CrashReporter/. That's very useful when you
 have a customer experiencing a crash with your software, as you can  
ask
 for the crash log. Can't you do the same on other operating systems?
It depends on the OS facilities and the installed libraries for such features. It's eminently possible, and I think on Windows, you can catch such exceptions too in external programs to do the same sort of dumping. On Linux, you get a "Segmentation Fault" message (or nothing if you have no terminal showing the output), and the program goes away. That's the default behavior. I think it's better in any case to do *something* other than just print "Segmentation Fault" by default. If someone has a way to hook this in a better fashion, we can include that, but I hazard to guess it will not be on stock Linux boxes.
All you have to do is add a signal handler which handles SIGSEV and have it print out a stacktrace. It's pretty easy to do. It _is_ the sort of thing that programs may want to override (to handle other signals), so I'm not quite sure what the best way to handle that is without causing problems for them (e.g. initialization order could affect which handler is added last and is therefore the one used). Maybe a function should be added to druntime which wraps the glibc function so that programs can add their signal handler through _it_, and if that happens, the default one won't be used.
Install the default (stack-trace printing) handler before calling any of the static constructors. Any call to signal after that will override the installed handler. -Steve
Mar 07 2012
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-03-06 03:04, Steven Schveighoffer wrote:
 Certainly for Mac OS X, it should do the most informative appropriate
 thing for the OS it's running on. Does the above happen for D programs
 currently on Mac OS X?
When an exception is thrown and uncaught it will print the stack trace in the terminal (if run in the terminal). If the program ends with a segmentation fault the stack trace will be outputted to a log file.

-- 
/Jacob Carlborg
Mar 05 2012
parent reply Jacob Carlborg <doob me.com> writes:
On 2012-03-06 08:53, Jacob Carlborg wrote:
 On 2012-03-06 03:04, Steven Schveighoffer wrote:
 Certainly for Mac OS X, it should do the most informative appropriate
 thing for the OS it's running on. Does the above happen for D programs
 currently on Mac OS X?
When an exception if thrown and uncaught it will print the stack trace to in the terminal (if run in the terminal). If the program ends with a segmentation fault the stack trace will be outputted to a log file.
Outputting to a log file is handled by the OS and not by druntime.

-- 
/Jacob Carlborg
Mar 05 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/06/2012 02:54 AM, Jacob Carlborg wrote:
 On 2012-03-06 08:53, Jacob Carlborg wrote:
 On 2012-03-06 03:04, Steven Schveighoffer wrote:
 Certainly for Mac OS X, it should do the most informative appropriate
 thing for the OS it's running on. Does the above happen for D programs
 currently on Mac OS X?
When an exception if thrown and uncaught it will print the stack trace to in the terminal (if run in the terminal). If the program ends with a segmentation fault the stack trace will be outputted to a log file.
Outputting to a log file is handle by the OS and not by druntime.
It sounds like what you'd want to do is walk the stack and print a trace to stderr without actually jumping execution. Well, check for being caught first. Once that's printed, then trigger an OS error and quit.
Mar 06 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-03-06 13:48, Chad J wrote:
 On 03/06/2012 02:54 AM, Jacob Carlborg wrote:
 On 2012-03-06 08:53, Jacob Carlborg wrote:
 On 2012-03-06 03:04, Steven Schveighoffer wrote:
 Certainly for Mac OS X, it should do the most informative appropriate
 thing for the OS it's running on. Does the above happen for D programs
 currently on Mac OS X?
When an exception if thrown and uncaught it will print the stack trace to in the terminal (if run in the terminal). If the program ends with a segmentation fault the stack trace will be outputted to a log file.
Outputting to a log file is handle by the OS and not by druntime.
It sounds like what you'd want to do is walk the stack and print a trace to stderr without actually jumping execution. Well, check for being caught first. Once that's printed, then trigger an OS error and quit.
I'm just writing how it works, not what I want to do. -- /Jacob Carlborg
Mar 06 2012
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-03-06 02:17, Michel Fortin wrote:
 On 2012-03-05 22:31:34 +0000, "Steven Schveighoffer"
 <schveiguy yahoo.com> said:

 On Mon, 05 Mar 2012 05:38:20 -0500, Walter Bright
 <newshound2 digitalmars.com> wrote:

 I don't get this at all. I find it trivial to run the program with a
 debugger:

 gdb foo
run
that's it.
This argument continually irks me to no end. It seems like the trusty (rusty?) sword you always pull out when defending the current behavior, but it falls flat on its face when a programmer is faced with a Seg Fault that has occurred on a program that was running for several days/weeks, possibly not in his development environment, and now he must run it via a debugger to wait another several days/weeks to (hopefully) get the same error. Please stop using this argument, it's only valid on trivial bugs that crash immediately during development.
Walter's argument about using gdb doesn't make sense in many scenarios. He's probably used a little too much to programs which are short lived and have easily reproducible inputs (like compilers). That said, throwing an exception might not be a better response all the time. On my operating system (Mac OS X) when a program crashes I get a nice crash log with the date, a stack trace for each thread with named functions, the list of all loaded libraries, and the list of VM regions dumped into ~/Library/Logs/CrashReporter/. That's very useful when you have a customer experiencing a crash with your software, as you can ask for the crash log. Can't you do the same on other operating systems? Whereas if an exception is thrown without it being catched I get a stack trace on the console and nothing else, which is both less informative an easier to lose than a crash log sitting there on the disk.
If possible, it would be nice to have both. If I do have a tool that is short-lived and I'm developing on it, I don't want to have to look up the exception in the log files.

-- 
/Jacob Carlborg
Mar 05 2012
parent reply James Miller <james aatch.net> writes:
If you have a possible null, then check for it *yourself*. Sometimes
you know it's null, sometimes you don't have any control. However, the
compiler has no way of knowing that. It's basically an all-or-nothing
thing with the compiler.

However, the compiler can (and I think does) warn of possible
null-related errors. It doesn't fail, because, again, it can't be
certain of what is an error and what is not. And it can't know, since
that is the Halting Problem.

I'm not sure what the fuss is here, we cannot demand that every little
convenience be packed into D, at some point we need to face facts that
we are still programming, and sometimes things go wrong. The best
argument I've seen so far is to install a handler that catches the
SIGSEGV on Linux, does whatever SEH stuff it does on Windows, and
prints a stacktrace. If this happens in a long-running process, then,
to be blunt, tough. Unless you're telling me that the only way to
reproduce the bug is to run the program for the same amount of time in
near-identical conditions, then sir, you fail at being a detective.
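The "install a handler that catches the SEGFAULT and prints a stacktrace" idea can be sketched in a few lines. This is a hypothetical, Linux-only sketch, not what druntime actually installs; it assumes druntime's POSIX bindings (core.sys.posix.signal) and the glibc backtrace functions (core.sys.linux.execinfo) are available:

```d
// Hypothetical sketch (Linux-only): print a backtrace on SIGSEGV,
// then still terminate, since the process state is untrustworthy.
import core.sys.posix.signal;
import core.sys.posix.unistd : STDERR_FILENO, write;
import core.sys.linux.execinfo : backtrace, backtrace_symbols_fd;
import core.stdc.stdlib : _Exit;

extern(C) void onSegfault(int sig)
{
    enum msg = "Segmentation fault; backtrace:\n";
    write(STDERR_FILENO, msg.ptr, msg.length);

    // Collect and dump the current call stack to stderr.
    void*[32] frames;
    auto n = backtrace(frames.ptr, cast(int) frames.length);
    backtrace_symbols_fd(frames.ptr, n, STDERR_FILENO);

    _Exit(1); // do not attempt to continue after a segfault
}

void installSegfaultHandler()
{
    sigaction_t sa;
    sa.sa_handler = &onSegfault;
    sigaction(SIGSEGV, &sa, null);
}
```

Note the handler still terminates the process; it only adds information, it does not try to recover.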

If you have a specific need for extreme safety and no sharp corners,
use Java, or some other VM language, PHP comes to mind as well. If you
want a systems programming language that is geared for performance,
with modern convenience then stick around, do I have the language for
you! Stop thinking in hypotheticals, because no language can cover
every situation; "What if this is running in a space ship for 12 years
and the segfault is caused by space bees?!" is not something we should
be thinking about. If a process fails, then it fails; you try to
figure out what happened (you do have logging on this mysterious
program, right?), then fix it.

It's not easy, but if it were easy, we'd be out of jobs.

</rant>

--
James Miller
Mar 06 2012
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-03-06 10:11, James Miller wrote:
 If you have a possible null, then check for it *yourself* sometimes
 you know its null, sometimes you don't have any control. However, the
 compiler has no way of knowing that. Its basically an all-or-nothing
 thing with the compiler.
If I know there is a possible null, then of course I check for it. But I MAY NOT KNOW there is a possible null. People make mistakes; I make mistakes. But I guess you just turn off exceptions and all other error handling completely, because you never ever make a mistake.
 However, the compiler can (and I think does) warn of possible
 null-related errors. It doesn't fail, because, again, it can't be
 certain of what is an error and what is not. And it can't know, since
 that is the Halting Problem.

 I'm not sure what the fuss is here, we cannot demand that every little
 convenience be packed into D, at some point we need to face facts that
 we are still programming, and sometimes things go wrong. The best
 arguments I've seen so far is to install a handler that catches the
 SEGFAULT in linux, and does whatever SEH stuff it does in windows and
 print a stacktrace. If this happens in a long-running process, then,
 to be blunt, tough. Unless you're telling me that the only way to
 reproduce the bug is to run the program for the same amount of time in
 near-identical conditions, then sir, you fail at being a detective.
On Mac OS X the runtime would only need to catch any exception (as it already does) and print the stack trace, but also re-throw the exception to let the OS handle the logging of it (at least I hope that will work).
 If you have a specific need for extreme safety and no sharp corners,
 use Java, or some other VM language, PHP comes to mind as well. If you
 want a systems programming language that is geared for performance,
 with modern convenience then stick around, do I have the language for
 you! Stop thinking in hypotheticals, because no language can cover
 every situation; "What if this is running in a space ship for 12 years
 and the segfault is caused by space bees?!" is not something we should
 be thinking about. If a process fails, then it fails, you try to
 figure out what happened (you do have logging on this mysterious
 program right?" then fix it.

 Its not easy, but if it was easy, we'd be out of jobs.

 </rant>

 --
 James Miller
-- /Jacob Carlborg
Mar 06 2012
parent Michel Fortin <michel.fortin michelf.com> writes:
On 2012-03-06 10:53:19 +0000, Jacob Carlborg <doob me.com> said:

 On Mac OS X the runtime would only need to catch any exception (as it 
 already does) and print the stack trace. But also re-throw the 
 exception to let the OS handle the logging of the exception (at least I 
 hope that will work).
Actually, if you want a useful crash log, the exception shouldn't be caught at all, because reaching the catch handler requires unwinding the stack, which will ruin the stack trace for the log file. Printing the stack trace should be done in the exception handling code if no catch handler can be found, after which it can crash and let the OS do its thing. And for that to work there should be no catch block around the call to D main in the runtime initialization code.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Mar 06 2012
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
James Miller:

 If you have a possible null, then check for it *yourself* sometimes
 you know its null, sometimes you don't have any control. However, the
 compiler has no way of knowing that. Its basically an all-or-nothing
 thing with the compiler.
In a normal program there are many situations where the programmer knows a class reference or a pointer can't be null. If the type system of the language allows you to write down this semantic information with some kind of annotation, and the compiler is able to analyze the code a bit to make that correct and handy, some null-related bugs don't happen. And this costs nothing at run-time (but maybe it increases the compilation time a bit).
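The annotation bearophile describes can be approximated today as a library type, though real language support would reject nulls at compile time rather than at construction. A hypothetical sketch (NotNull is not a Phobos type):

```d
// Hypothetical library-level approximation of a non-nullable type.
// Real language support would make the check static; this version
// only concentrates it at construction time.
struct NotNull(T) if (is(T == class))
{
    private T _payload;

    this(T value)
    {
        assert(value !is null, "NotNull constructed from null");
        _payload = value;
    }

    // Forward member access to the wrapped reference.
    alias _payload this;

    // Disable default construction, which would leave _payload null.
    @disable this();
}

class Foo
{
    int bar = 4;
}

int useFoo(NotNull!Foo f)
{
    // No null check needed here: the type vouches for it.
    return f.bar;
}
```

With this, `useFoo(NotNull!Foo(new Foo()))` works, while an uninitialized `NotNull!Foo n;` is rejected at compile time.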
 I'm not sure what the fuss is here, we cannot demand that every little
 convenience be packed into D, at some point we need to face facts that
 we are still programming, and sometimes things go wrong.
I agree. On the other hand null-related bugs are common enough and bad enough that improving their management in some way (helping avoid them, helping their debug) is more than just a little convenience.
 If you have a specific need for extreme safety and no sharp corners,
 use Java, or some other VM language, PHP comes to mind as well.
PHP is not at the top of the list of the languages for people that want "extreme safety". Bye, bearophile
Mar 06 2012
prev sibling parent "foobar" <foo bar.com> writes:
On Tuesday, 6 March 2012 at 09:11:07 UTC, James Miller wrote:
 If you have a possible null, then check for it *yourself* 
 sometimes
 you know its null, sometimes you don't have any control. 
 However, the
 compiler has no way of knowing that. Its basically an 
 all-or-nothing
 thing with the compiler.

 However, the compiler can (and I think does) warn of possible
 null-related errors. It doesn't fail, because, again, it can't 
 be
 certain of what is an error and what is not. And it can't know, 
 since
 that is the Halting Problem.

 I'm not sure what the fuss is here, we cannot demand that every 
 little
 convenience be packed into D, at some point we need to face 
 facts that
 we are still programming, and sometimes things go wrong. The 
 best
 arguments I've seen so far is to install a handler that catches 
 the
 SEGFAULT in linux, and does whatever SEH stuff it does in 
 windows and
 print a stacktrace. If this happens in a long-running process, 
 then,
 to be blunt, tough. Unless you're telling me that the only way 
 to
 reproduce the bug is to run the program for the same amount of 
 time in
 near-identical conditions, then sir, you fail at being a 
 detective.

 If you have a specific need for extreme safety and no sharp 
 corners,
 use Java, or some other VM language, PHP comes to mind as well. 
 If you
 want a systems programming language that is geared for 
 performance,
 with modern convenience then stick around, do I have the 
 language for
 you! Stop thinking in hypotheticals, because no language can 
 cover
 every situation; "What if this is running in a space ship for 
 12 years
 and the segfault is caused by space bees?!" is not something we 
 should
 be thinking about. If a process fails, then it fails, you try to
 figure out what happened (you do have logging on this mysterious
 program right?" then fix it.

 Its not easy, but if it was easy, we'd be out of jobs.

 </rant>

 --
 James Miller
The only halting problem I see here is trying to find any logic in the above misplaced rant.

The compiler can implement non-nullable types and prevent NPE bugs with zero run-time cost by employing the type system. This is a simple concept that has nothing to do with VMs, and implementations of it do exist in other languages. Even the inventor of the pointer concept himself confesses that nullability was a grave mistake. [I forgot his name, but I'm sure Google can find the video]

I really wish that people would stop comparing everything to C/C++. Both are ancient pieces of obsolete technology and the trade-offs they provide are irrelevant today. This is why we use new languages such as D.
Mar 06 2012
prev sibling parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
On Mon, 05 Mar 2012 23:31:34 +0100, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Mon, 05 Mar 2012 05:38:20 -0500, Walter Bright  
 <newshound2 digitalmars.com> wrote:

 On 3/4/2012 11:50 PM, Chad J wrote:
 Problems:
 - I have to rerun the program in a debugger to see the stack trace.  
 This is a
 slow workflow. It's a big improvement if the segfault is hard to find,  
 but only
 a small improvement if the segfault is easy to find. Very bad if I'm  
 prototyping
 experimental code and I have a bunch to go through.
I don't get this at all. I find it trivial to run the program with a debugger: gdb foo >run that's it.
This argument continually irks me to no end. It seems like the trusty (rusty?) sword you always pull out when defending the current behavior, but it falls flat on its face when a programmer is faced with a segfault that has occurred in a program that was running for several days/weeks, possibly not in his development environment, and now he must run it via a debugger and wait another several days/weeks to (hopefully) get the same error. Please stop using this argument; it's only valid for trivial bugs that crash immediately during development.

I wholeheartedly agree that we should use the hardware features that we are given, and that NullPointerException is not worth the bloat. But we should be doing *something* better than just printing "Segmentation Fault".

-Steve
There are two independent discussions being conflated here. One about getting more information out of crashes even in release mode and the other about adding runtime checks to prevent crashing merely in debug builds.
Mar 05 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Tuesday, March 06, 2012 05:11:30 Martin Nowak wrote:
 There are two independent discussions being conflated here. One about
 getting more
 information out of crashes even in release mode and the other about
 adding runtime checks to prevent crashing merely in debug builds.
A segfault should _always_ terminate a program - as should dereferencing a null pointer. Those are fatal errors. If we had extra checks, they would have to result in NullPointerErrors, not NullPointerExceptions. It's horribly broken to try and recover from dereferencing a null pointer. So, the question then becomes whether adding the checks and getting an Error thrown is worth doing as opposed to simply detecting it and printing out a stack trace. And throwing an Error is arguably _worse_, because it means that you can't get a useful core dump. Really, I think that checking for null when dereferencing is out of the question. What we need is to detect it and print out a stacktrace. That will maximize the debug information without costing performance. - Jonathan M Davis
Mar 05 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/05/2012 11:27 PM, Jonathan M Davis wrote:
 On Tuesday, March 06, 2012 05:11:30 Martin Nowak wrote:
 There are two independent discussions being conflated here. One about
 getting more
 information out of crashes even in release mode and the other about
 adding runtime checks to prevent crashing merely in debug builds.
A segfault should _always_ terminate a program - as should dereferencing a null pointer. Those are fatal errors. If we had extra checks, they would have to result in NullPointerErrors, not NullPointerExceptions. It's horribly broken to try and recover from dereferencing a null pointer. So, the question then becomes whether adding the checks and getting an Error thrown is worth doing as opposed to simply detecting it and printing out a stack trace. And throwing an Error is arguably _worse_, because it means that you can't get a useful core dump. Really, I think that checking for null when dereferencing is out of the question. What we need is to detect it and print out a stacktrace. That will maximize the debug information without costing performance. - Jonathan M Davis
Why is it fatal? I'd like to be able to catch these. I tend to run into a lot of fairly benign sources of these, and they should be try-caught so that the user doesn't get the boot unnecessarily. Unnecessary crashing can lose user data.

Maybe a warning message is sufficient: "hey, that last thing you did didn't turn out so well; please don't do that again." followed by some automatic emailing of admins. And the email would contain a nice stack trace with line numbers and stack values and... I can dream, huh.

I might be convinced that things like segfaults in the /general case/ are fatal. It could be writing to memory outside the bounds of an array which is both not bounds-checked and may or may not live on the stack. Yuck, huh. But this is not the same as a null-dereference:

Foo f = null;
f.bar = 4; // This is exception worthy, yes,
           // but how does it affect unrelated parts of the program?
Mar 05 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, March 05, 2012 23:58:48 Chad J wrote:
 On 03/05/2012 11:27 PM, Jonathan M Davis wrote:
 On Tuesday, March 06, 2012 05:11:30 Martin Nowak wrote:
 There are two independent discussions being conflated here. One about
 getting more
 information out of crashes even in release mode and the other about
 adding runtime checks to prevent crashing merely in debug builds.
A segfault should _always_ terminate a program - as should dereferencing a null pointer. Those are fatal errors. If we had extra checks, they would have to result in NullPointerErrors, not NullPointerExceptions. It's horribly broken to try and recover from dereferencing a null pointer. So, the question then becomes whether adding the checks and getting an Error thrown is worth doing as opposed to simply detecting it and printing out a stack trace. And throwing an Error is arguably _worse_, because it means that you can't get a useful core dump. Really, I think that checking for null when dereferencing is out of the question. What we need is to detect it and print out a stacktrace. That will maximize the debug information without costing performance. - Jonathan M Davis
Why is it fatal? I'd like to be able to catch these. I tend to run into a lot of fairly benign sources of these, and they should be try-caught so that the user doesn't get the boot unnecessarily. Unnecessary crashing can lose user data. Maybe a warning message is sufficient: "hey that last thing you did didn't turn out so well; please don't do that again." followed by some automatic emailing of admins. And the email would contain a nice stack trace with line numbers and stack values and... I can dream huh. I might be convinced that things like segfaults in the /general case/ are fatal. It could be writing to memory outside the bounds of an array which is both not bounds-checked and may or may not live on the stack. Yuck, huh. But this is not the same as a null-dereference: Foo f = null; f.bar = 4; // This is exception worthy, yes, // but how does it affect unrelated parts of the program?
If you dereference a null pointer, there is a serious bug in your program. Continuing is unwise. And if it actually goes so far as to be a segfault (since the hardware caught it rather than the program), it is beyond a doubt unsafe to continue. On rare occasion, it might make sense to try and recover from dereferencing a null pointer, but it's like catching an AssertError. It's rarely a good idea. Continuing would mean trying to recover from a logic error in your program. Your program obviously already assumed that the variable wasn't null, or it would have checked for null. So from the point of view of your program's logic, you are by definition in an undefined state, and continuing will have unexpected and potentially deadly behavior. - Jonathan M Davis
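The assumption Jonathan describes ("your program obviously already assumed that the variable wasn't null") can be made explicit with a contract instead of left implicit in the dereference. A small sketch (Session and use are hypothetical names):

```d
// Hypothetical sketch: state the non-null assumption as a checked
// precondition rather than leaving it implicit in the dereference.
class Session
{
    int hits;
    void touch() { ++hits; }
}

void use(Session s)
in
{
    // In a debug build this fails loudly at the point of the bad
    // call, instead of segfaulting somewhere inside.
    assert(s !is null, "use() requires a non-null session");
}
do
{
    s.touch();
}
```

The precondition compiles away in release builds, so it costs nothing where performance matters.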
Mar 05 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/06/2012 12:07 AM, Jonathan M Davis wrote:
 If you dereference a null pointer, there is a serious bug in your program.
 Continuing is unwise. And if it actually goes so far as to be a segfault
 (since the hardware caught it rather than the program), it is beyond a doubt
 unsafe to continue. On rare occasion, it might make sense to try and recover
 from dereferencing a null pointer, but it's like catching an AssertError. It's
 rarely a good idea. Continuing would mean trying to recover from a logic error
 in your program. Your program obviously already assumed that the variable
 wasn't null, or it would have checked for null. So from the point of view of
 your program's logic, you are by definition in an undefined state, and
 continuing will have unexpected and potentially deadly behavior.

 - Jonathan M Davis
This could be said for a lot of things: array-out-of-bounds exceptions, file-not-found exceptions, conversion exceptions, etc. If the programmer thought about it, they would have checked the array length, checked for file existence before opening it, been more careful about converting things, etc.

To me, the useful difference between fatal and non-fatal things is how well isolated the failure is. Out-of-memory errors and writes into unexpected parts of memory are very bad things and can corrupt completely unrelated sections of code. The other things I've mentioned, null-dereference included, cannot do this.

Null-dereferences and such can be isolated to sections of code. A section of code might become compromised by the dereference, but the code outside of that section is still fine and can continue working.

Example:

// riskyShenanigans does some dubious things with nullable references.
// It was probably written late at night after one too many caffeine
// pills and alcoholic beverages. This guy is operating under much
// worse conditions and easier objectives than the guy writing
// someFunc().
//
// Thankfully, it can be isolated.
//
int riskyShenanigans()
{
    Foo f = new Foo();
    ... blah blah blah ...
    f = null; // surprise!
    ... etc etc ...
    // Once this happens, we can't count on 'f' or
    // anything else in this function to be valid
    // anymore.
    return f.bar;
}

// The author of someFunc() is trying to be a bit more careful.
// In fact, they'll even go so far as to make this thing nothrow.
// Maybe it's a server process and it's not allowed to die.
//
nothrow void someFunc()
{
    int cheesecake = 7;
    int donut = 0;

    // Here we will make sure that riskyShenanigans() is
    // well isolated from everything else.
    try
    {
        // All statefulness inside this scope cannot be
        // trusted when the null dereference happens.
        donut = riskyShenanigans();
    }
    catch( NullDereferenceException e )
    {
        // donut can be recovered if we are very
        // explicit about it.
        // It MUST be restored to some known state
        // before we consider using it again.
        donut = 0; // (so we restore it.)
    }

    // At this point, we HAVE accounted for null-dereferences.
    // donut is either a valid value, or it is zero.
    // We know what it will behave like.
    omnom(donut);

    // An even stronger case:
    // cheesecake had nothing to do with riskyShenanigans.
    // It is completely impossible for that null-dereference
    // to have touched the cheesecake in this code.
    omnom(cheesecake);
}

And if riskyShenanigans were to modify global state... well, it's no longer so well isolated anymore. This is just a disadvantage of global state, and it will be true with many other possible exceptions too.

Long story short: I don't see how an unexpected behavior in one part of a program will necessarily create unexpected behavior in all parts of the program, especially when good encapsulation is practiced.

Thoughts?
Mar 05 2012
parent reply Mantis <mail.mantis.88 gmail.com> writes:
06.03.2012 8:04, Chad J wrote:
 On 03/06/2012 12:07 AM, Jonathan M Davis wrote:
 If you dereference a null pointer, there is a serious bug in your 
 program.
 Continuing is unwise. And if it actually goes so far as to be a segfault
 (since the hardware caught it rather than the program), it is beyond 
 a doubt
 unsafe to continue. On rare occasion, it might make sense to try and 
 recover
 from dereferencing a null pointer, but it's like catching an 
 AssertError. It's
 rarely a good idea. Continuing would mean trying to recover from a 
 logic error
 in your program. Your program obviously already assumed that the 
 variable
 wasn't null, or it would have checked for null. So from the point of 
 view of
 your program's logic, you are by definition in an undefined state, and
 continuing will have unexpected and potentially deadly behavior.

 - Jonathan M Davis
This could be said for a lot of things: array-out-of-bounds exceptions, file-not-found exceptions, conversion exception, etc. If the programmer thought about it, they would have checked the array length, checked for file existence before opening it, been more careful about converting things, etc.
It's different: with array-out-of-bounds there's no hardware detection, so it's either checked in software or unchecked (in the best case you'll get an access violation or segfault, but otherwise going past the bounds of an array leads to undefined behavior). Both file-not-found and conv exceptions often rely on user input, in which case they do not necessarily indicate a bug in the program.
 To me, the useful difference between fatal and non-fatal things is how 
 well isolated the failure is.  Out of memory errors and writes into 
 unexpected parts of memory are very bad things and can corrupt 
 completely unrelated sections of code.  The other things I've 
 mentioned, null-dereference included, cannot do this.

 Null-dereferences and such can be isolated to sections of code.  A 
 section of code might become compromised by the dereference, but the 
 code outside of that section is still fine and can continue working.

 Example:
 [...]
 And if riskyShenanigans were to modify global state... well, it's no 
 longer so well isolated anymore.  This is just a disadvantage of 
 global state, and it will be true with many other possible exceptions 
 too.

 Long story short: I don't see how an unexpected behavior in one part 
 of a program will necessarily create unexpected behavior in all parts 
 of the program, especially when good encapsulation is practiced.

 Thoughts?
If riskyShenanigans nullifies a reference in the process, then it must check it before dereferencing. There's obviously a bug, and if the program leaves a proper crash log you shouldn't have problems finding and fixing it. If you don't have access to the function's source, then you cannot guarantee its safety and isolation, so recovering from the exception is unsafe.
Mar 06 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/06/2012 03:39 PM, Mantis wrote:
06.03.2012 8:04, Chad J wrote:
 On 03/06/2012 12:07 AM, Jonathan M Davis wrote:
 If you dereference a null pointer, there is a serious bug in your
 program.
 Continuing is unwise. And if it actually goes so far as to be a segfault
 (since the hardware caught it rather than the program), it is beyond
 a doubt
 unsafe to continue. On rare occasion, it might make sense to try and
 recover
 from dereferencing a null pointer, but it's like catching an
 AssertError. It's
 rarely a good idea. Continuing would mean trying to recover from a
 logic error
 in your program. Your program obviously already assumed that the
 variable
 wasn't null, or it would have checked for null. So from the point of
 view of
 your program's logic, you are by definition in an undefined state, and
 continuing will have unexpected and potentially deadly behavior.

 - Jonathan M Davis
This could be said for a lot of things: array-out-of-bounds exceptions, file-not-found exceptions, conversion exception, etc. If the programmer thought about it, they would have checked the array length, checked for file existence before opening it, been more careful about converting things, etc.
It's different: with array-out-of-bounds there's no hardware detection, so its either checked in software or unchecked (in best case you'll have access violation or segfault, but otherwise going past the bounds of array leads to undefined behavior). Both file-not-found and conv exceptions often rely on user's input, in which case they do not necessarily mean bug in a program.
Alright.
 To me, the useful difference between fatal and non-fatal things is how
 well isolated the failure is. Out of memory errors and writes into
 unexpected parts of memory are very bad things and can corrupt
 completely unrelated sections of code. The other things I've
 mentioned, null-dereference included, cannot do this.

 Null-dereferences and such can be isolated to sections of code. A
 section of code might become compromised by the dereference, but the
 code outside of that section is still fine and can continue working.

 Example:
 [...]
 And if riskyShenanigans were to modify global state... well, it's no
 longer so well isolated anymore. This is just a disadvantage of global
 state, and it will be true with many other possible exceptions too.

 Long story short: I don't see how an unexpected behavior in one part
 of a program will necessarily create unexpected behavior in all parts
 of the program, especially when good encapsulation is practiced.

 Thoughts?
If riskyShenanigans nullifies reference in a process, then it must check it before dereferencing. There's obviously a bug, and if program will leave a proper crash log you shouldn't have problems finding and fixing this bug. If you don't have access to function's source, then you cannot guarantee it's safeness and isolation, so recovering from exception is unsafe.
But what do you say to the notion of isolation? someFunc is isolated from riskyShenanigans because it /knows/ what state is touched by riskyShenanigans. If riskyShenanigans does something strange and unexpected, and yes, it does have a bug in it, then I feel that someFunc should be able to reset the state touched by riskyShenanigans and continue.

The thing I find really strange here is that there's this belief that if feature A is buggy then the unrelated feature B shouldn't work either. Why? Shouldn't the user be able to continue using feature B?

Btw, crashing a program is bad. That can lose data that the user has entered but not yet stored. I should have a very good reason before I let this happen. It would also be extremely frustrating for a user to have a program become crippled because some feature they don't even use will occasionally dereference null and crash the thing. Then they have to wait for me to fix it, and I'm busy, so it could be a while.

My impression so far is that this hinges on some kind of "where there's one, there's more" argument. I am unconvinced, because programs tend to have bugs anyway: riskyShenanigans dereferencing null once doesn't mean it's any more likely to produce corrupt results the rest of the time; it can produce corrupt results anyway, because it is a computer program written by a fallible human being. Anyone trying to be really careful should validate the results in someFunc.
Mar 06 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/6/2012 5:29 PM, Chad J wrote:
 But what do you say to the notion of isolation? someFunc is isolated from
 riskyShenanigans becuase it /knows/ what state is touched by riskyShenanigans.
 If riskyShenanigans does something strange and unexpected, and yes, it does
have
 a bug in it, then I feel that someFunc should be able to reset the state
touched
 by riskyShenanigans and continue.
That's the theory. But in practice, when you get a seg fault, there's (at minimum) a logical error in your program, and it is in an undefined state. Since memory is all shared, you have no idea whether that error is isolated or not, and you *cannot* know, because there's a logic error you didn't know about. Continuing on after the program has entered an unknown and undefined state is just a recipe for disaster.
Mar 06 2012
parent reply Sean Kelly <sean invisibleduck.org> writes:
On Mar 6, 2012, at 6:29 PM, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/6/2012 5:29 PM, Chad J wrote:
 But what do you say to the notion of isolation? someFunc is isolated from
 riskyShenanigans because it /knows/ what state is touched by riskyShenanigans.
 If riskyShenanigans does something strange and unexpected, and yes, it does have
 a bug in it, then I feel that someFunc should be able to reset the state touched
 by riskyShenanigans and continue.

 That's the theory. But in practice, when you get a seg fault, there's (at minimum) a logical error in your program, and it is in an undefined state. Since memory is all shared, you have no idea whether that error is isolated or not, and you *cannot* know, because there's a logic error you didn't know about.

Minor point, but some apps are designed such that segfaults are intended. I worked on a DB that dynamically mapped memory in the segfault handler and then resumed execution. Since D is a systems language, very few assumptions can be made about error conditions.
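The pattern Sean describes, mapping memory on demand from inside the fault handler and then resuming, can be sketched like this on Linux. This is an illustrative guess at the technique, not code from that DB, and it omits all error handling:

```d
// Hypothetical sketch (Linux-only): treat SIGSEGV as "page not yet
// mapped", map a fresh page at the faulting address, and return so
// the faulting instruction is retried.
import core.sys.posix.signal;
import core.sys.posix.sys.mman;

enum pageSize = 4096;

extern(C) void onFault(int sig, siginfo_t* info, void* ctx)
{
    // Round the faulting address down to a page boundary.
    auto addr = cast(size_t) info.si_addr & ~(cast(size_t) pageSize - 1);

    // Map a zeroed page there; on return from the handler, the
    // faulting access is retried and now succeeds.
    // (Real code would check the return value and the fault cause.)
    mmap(cast(void*) addr, pageSize, PROT_READ | PROT_WRITE,
         MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
}

void installFaultMapper()
{
    sigaction_t sa;
    sa.sa_sigaction = &onFault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, null);
}
```

Note that a plain null dereference would also be "handled" by this scheme, which is exactly why such handlers are very system- and application-specific.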
Mar 06 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/6/2012 7:08 PM, Sean Kelly wrote:
 Minor point, but some apps are designed such that segfaults are intended. I
 worked on a DB that dynamically mapped memory in the segfault handler and
 then resumed execution.  Since D is a systems language, very few assumptions
 can be made about error conditions.
Yes, and I've written a GC implementation that relied on intercepting invalid page writes to construct its list of 'dirty' pages. There's nothing in D preventing one from doing that, although for sure such code will be very, very system specific. What I'm talking about is the idea that one can recover from seg faults resulting from program bugs.
Mar 06 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/6/2012 8:05 PM, Walter Bright wrote:
 What I'm talking about is the idea that one can recover from seg faults
 resulting from program bugs.
I've written about this before, but I want to emphasize that attempting to recover from program BUGS is absolutely the WRONG way to go about writing fail-safe, critical, fault-tolerant software.
Mar 06 2012
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 06 Mar 2012 23:07:24 -0500, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 3/6/2012 8:05 PM, Walter Bright wrote:
 What I'm talking about is the idea that one can recover from seg faults
 resulting from program bugs.
I've written about this before, but I want to emphasize that attempting to recover from program BUGS is the absolutely WRONG way to go about writing fail-safe, critical, fault-tolerant software.
100% agree. I just want as much information about the bug as possible before the program exits. -Steve
Mar 07 2012
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
Oh alright. Then we're in complete agreement.

On Mar 6, 2012, at 8:05 PM, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/6/2012 7:08 PM, Sean Kelly wrote:
 Minor point, but some apps are designed such that segfaults are intended. I
 worked on a DB that dynamically mapped memory in the segfault handler and
 then resumed execution. Since D is a systems language, very few assumptions
 can be made about error conditions.

 Yes, and I've written a GC implementation that relied on intercepting invalid page writes to construct its list of 'dirty' pages.

 There's nothing in D preventing one from doing that, although for sure such code will be very, very system specific.

 What I'm talking about is the idea that one can recover from seg faults resulting from program bugs.
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Mar 06, 2012 at 08:29:35PM -0500, Chad J wrote:
[...]
 But what do you say to the notion of isolation?  someFunc is
 isolated from riskyShenanigans becuase it /knows/ what state is
 touched by riskyShenanigans.  If riskyShenanigans does something
 strange and unexpected, and yes, it does have a bug in it, then I
 feel that someFunc should be able to reset the state touched by
 riskyShenanigans and continue.

 The thing I find really strange here is that there's this belief
 that if feature A is buggy then the unrelated feature B shouldn't
 work either. Why?  Shouldn't the user be able to continue using
 feature B?
If feature A is buggy and the user is trying to use it, then there's a problem. If the user doesn't use feature A or knows that feature A is buggy and so works around it, then feature A doesn't (shouldn't) run and won't crash.
 Btw, crashing a program is bad.  That can lose data that the user
 has entered but not yet stored.  I should have a very good reason
 before I let this happen.
I don't know what your software design is, but when I write code, if there is the possibility of data loss, I always make the program back up the data at intervals. I don't trust the integrity of user data after a major problem like dereferencing a null pointer happens. Obviously there's a serious logic flaw in the program that led to this, so all bets are off as to whether the user's data is even usable.
 It would also be extremely frustrating for a user to have a program
 become crippled because some feature they don't even use will
 occasionally dereference null and crash the thing.  Then they have
 to wait for me to fix it, and I'm busy, so it could be awhile.
The fact that the unused feature is running even though the user isn't using it is, to me, a sign that something like a null pointer dereference should be fatal: it means you assumed the unused feature had been behaving consistently in the background, but that turned out to be false, so who knows what else it has been doing wrong before it hit the null pointer. I should hate for the program to continue running after that, since consistency has been compromised; continuing will probably only worsen the problem.
 My impression so far is that this hinges on some kind of "where
 there's one, there's more" argument.  I am unconvinced because
 programs tend to have bugs anyways.  riskyShenanigans doing a
 null-dereference once doesn't mean it's any more likely to produce
 corrupt results the rest of the time: it can produce corrupt results
 anyways, because it is a computer program written by a fallible
 human being.  Anyone trying to be really careful should validate the
 results in someFunc.
It sounds like what you want is some kind of sandbox isolation function, and null pointers are just the most obvious problem among other things that could go wrong.

We could have a std.sandbox module that can run some given code (say PossiblyBuggyFeatureA) inside a sandbox, so that if it dereferences a null pointer, corrupts memory, or whatever, it won't affect UnrelatedFeatureB which runs in a different sandbox, or the rest of the system. This way you can boldly charge forward in spite of any problems, because you know that only the code inside the sandbox is in a bad state, and the rest of the program (presumably) is still in good working condition.

In Linux this is easily implemented by fork() and perhaps chroot() (if you're *really* paranoid) and message-passing (so the main program is guaranteed to have no corruption even when BadPluginX goes crazy and starts trashing memory everywhere). I don't know about Windows, but I assume there is some way to do sandboxing as well.

T

-- 
Customer support: the art of getting your clients to pay for your own incompetence.
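For what it's worth, the fork()-based isolation idea can be sketched in a few lines of D (POSIX only; `riskyFeatureA` and `runSandboxed` are made-up names for illustration, not an existing std.sandbox API). The risky code runs in a child process, so a segfault there cannot corrupt the parent:

```d
import core.sys.posix.sys.types : pid_t;
import core.sys.posix.sys.wait : waitpid;
import core.sys.posix.unistd : fork, _exit;
import std.stdio : writeln;

void riskyFeatureA()
{
    // imagine this sometimes dereferences null and segfaults
}

bool runSandboxed(void function() risky)
{
    pid_t pid = fork();
    if (pid == 0)
    {
        risky();
        _exit(0);       // child exits cleanly if the risky code survived
    }
    int status;
    waitpid(pid, &status, 0);
    return status == 0; // a crash in the child shows up as a nonzero status
}

void main()
{
    if (runSandboxed(&riskyFeatureA))
        writeln("feature A ran fine");
    else
        writeln("feature A crashed; the rest of the program is unaffected");
}
```

A real version would add message-passing to get results back out of the child, as the post suggests.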
Mar 06 2012
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 05 Mar 2012 23:58:48 -0500, Chad J  
<chadjoan __spam.is.bad__gmail.com> wrote:

 On 03/05/2012 11:27 PM, Jonathan M Davis wrote:
 On Tuesday, March 06, 2012 05:11:30 Martin Nowak wrote:
 There are two independent discussions being conflated here. One about
 getting more
 information out of crashes even in release mode and the other about
 adding runtime checks to prevent crashing merely in debug builds.
A segfault should _always_ terminate a program - as should dereferencing a null pointer. Those are fatal errors. If we had extra checks, they would have to result in NullPointerErrors, not NullPointerExceptions. It's horribly broken to try and recover from dereferencing a null pointer. So, the question then becomes whether adding the checks and getting an Error thrown is worth doing as opposed to simply detecting it and printing out a stack trace. And throwing an Error is arguably _worse_, because it means that you can't get a useful core dump. Really, I think that checking for null when dereferencing is out of the question. What we need is to detect it and print out a stacktrace. That will maximize the debug information without costing performance. - Jonathan M Davis
Why is it fatal?
A segmentation fault indicates that a program tried to access memory that is not available. Since the 0 page is never allocated, any null pointer dereference results in a seg fault.

However, there are several causes of seg faults:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now points into no-mem land.
3. You are accessing memory that has been deallocated.

Only 1 is benign. 2 and 3 are fatal. Since you cannot know which of these three happened, the only valid choice is to terminate.

I think the correct option is to print a stack trace, and abort the program.
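The "print something, then abort" behavior can be sketched with a signal handler (POSIX only; a real stack trace would need platform-specific unwinding such as glibc's backtrace(), so this sketch only reports and terminates):

```d
import core.stdc.signal : signal, SIGSEGV;
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

// Handler must be extern(C) and async-signal-safe-ish; it reports and dies.
extern (C) void onSegfault(int sig) nothrow @nogc
{
    fprintf(stderr, "caught signal %d; aborting\n", sig);
    abort();  // terminate immediately; never try to resume after a segfault
}

void main()
{
    signal(SIGSEGV, &onSegfault);
    // ... the rest of the program runs normally ...
}
```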
 I'd like to be able to catch these.  I tend to run into a lot of fairly  
 benign sources of these, and they should be try-caught so that the user  
 doesn't get the boot unnecessarily.  Unnecessary crashing can lose user  
 data.  Maybe a warning message is sufficient: "hey that last thing you  
 did didn't turn out so well; please don't do that again." followed by  
 some automatic emailing of admins.  And the email would contain a nice  
 stack trace with line numbers and stack values and... I can dream huh.
You cannot be sure if your program is in a sane state.
 I might be convinced that things like segfaults in the /general case/  
 are fatal.  It could be writing to memory outside the bounds of an array  
 which is both not bounds-checked and may or may not live on the stack.  
 Yuck, huh.  But this is not the same as a null-dereference:

 Foo f = null;
 f.bar = 4;  // This is exception worthy, yes,
              // but how does it affect unrelated parts of the program?
Again, this is a simple case. There is also this case:

Foo f = new Foo();
... // some code that corrupts f so that it is now null
f.bar = 4;

This is not a "continue execution" case, and cannot be distinguished from the simple case by compiler or library code.

Philosophically, any null pointer access is a program error, not a user error, and should not be considered for "normal" execution. Terminating execution is the only right choice.

-Steve
Mar 07 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/07/2012 07:57 AM, Steven Schveighoffer wrote:
 On Mon, 05 Mar 2012 23:58:48 -0500, Chad J
 <chadjoan __spam.is.bad__gmail.com> wrote:

 Why is it fatal?
A segmentation fault indicates that a program tried to access memory that is not available. Since the 0 page is never allocated, any null pointer dereference results in a seg fault.

However, there are several causes of seg faults:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now points into no-mem land.
3. You are accessing memory that has been deallocated.

Only 1 is benign. 2 and 3 are fatal. Since you cannot know which of these three happened, the only valid choice is to terminate.

I think the correct option is to print a stack trace, and abort the program.
Alright, I think I see where the misunderstanding is coming from.

I have only ever encountered (1). And I've encountered it a lot.

I didn't even consider (2) and (3) as possibilities. Those are far from my mind.

I still have a nagging doubt though: since the dereference in question is null, then there is no way for that particular dereference to corrupt other memory. The only way this happens in (2) and (3) is that related code tries to write to invalid memory. But if we have other measures in place to prevent that (bounds checking, other hardware signals, etc), then how is it still possible to corrupt memory?
 [...]

 -Steve
Mar 07 2012
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 07 Mar 2012 09:22:27 -0500, Chad J  
<chadjoan __spam.is.bad__gmail.com> wrote:

 On 03/07/2012 07:57 AM, Steven Schveighoffer wrote:
 On Mon, 05 Mar 2012 23:58:48 -0500, Chad J
 <chadjoan __spam.is.bad__gmail.com> wrote:

 Why is it fatal?
A segmentation fault indicates that a program tried to access memory that is not available. Since the 0 page is never allocated, any null pointer dereference results in a seg fault.

However, there are several causes of seg faults:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now points into no-mem land.
3. You are accessing memory that has been deallocated.

Only 1 is benign. 2 and 3 are fatal. Since you cannot know which of these three happened, the only valid choice is to terminate.

I think the correct option is to print a stack trace, and abort the program.
Alright, I think I see where the misunderstanding is coming from. I have only ever encountered (1). And I've encountered it a lot.
(1) occurs a lot, and in most cases, happens reliably. Most QA cycles should find them. There should be no case in which this is not a program error, to be fixed.

(2) and (3) are sinister because errors that occur are generally far away from the root cause, and the memory you are using is compromised. For example, a memory corruption can cause an error several hours later when you try to use the corrupted memory.

If allowed to continue, such corrupt memory programs can cause lots of problems, e.g. corrupt your saved data, or run malicious code (buffer overflow attack). It's not worth saving anything.
 I didn't even consider (2) and (3) as possibilities.  Those are far from  
 my mind.

 I still have a nagging doubt though: since the dereference in question  
 is null, then there is no way for that particular dereference to corrupt  
 other memory.  The only way this happens in (2) and (3) is that related  
 code tries to write to invalid memory.  But if we have other measures in  
 place to prevent that (bounds checking, other hardware signals, etc),  
 then how is it still possible to corrupt memory?
The null dereference may be a *result* of memory corruption. Example:

class Foo { void foo() {} }

void main()
{
    int[2] x = [1, 2];
    Foo f = new Foo;
    x.ptr[2] = 0; // oops, killed f
    f.foo();      // segfault
}

Again, this one is benign, but it doesn't have to be. I could have just nullified my return stack pointer, etc. along with f.

The larger point is, a SEGV means memory is not as it is expected. Once you don't trust your memory, you might as well stop.

-Steve
Mar 07 2012
prev sibling next sibling parent reply "Chad J" <chadjoan __spam.is.bad__gmail.com> writes:
On Wednesday, 7 March 2012 at 14:23:18 UTC, Chad J wrote:
 On 03/07/2012 07:57 AM, Steven Schveighoffer wrote:
 On Mon, 05 Mar 2012 23:58:48 -0500, Chad J
 <chadjoan __spam.is.bad__gmail.com> wrote:

 Why is it fatal?
A segmentation fault indicates that a program tried to access memory that is not available. Since the 0 page is never allocated, any null pointer dereference results in a seg fault.

However, there are several causes of seg faults:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now points into no-mem land.
3. You are accessing memory that has been deallocated.

Only 1 is benign. 2 and 3 are fatal. Since you cannot know which of these three happened, the only valid choice is to terminate.

I think the correct option is to print a stack trace, and abort the program.
Alright, I think I see where the misunderstanding is coming from. I have only ever encountered (1). And I've encountered it a lot. I didn't even consider (2) and (3) as possibilities. Those are far from my mind. I still have a nagging doubt though: since the dereference in question is null, then there is no way for that particular dereference to corrupt other memory. The only way this happens in (2) and (3) is that related code tries to write to invalid memory. But if we have other measures in place to prevent that (bounds checking, other hardware signals, etc), then how is it still possible to corrupt memory?
 [...]

 -Steve
I spoke too soon! We missed one:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now points into no-mem land.
3. You are accessing memory that has been deallocated.
4. null was being used as a sentinel value, and it snuck into a place where the value should not be a sentinel anymore.

I will now change what I said to reflect this:

I think I see where the misunderstanding is coming from.

I encounter (1) from time to time. It isn't a huge problem because usually if I declare something the next thing on my mind is initializing it. Even if I forget, I'll catch it in early testing. It tends to never make it to anyone else's desk, unless it's a regression. Regressions like this aren't terribly common though. If you make my program crash from (1), I'll live.

I didn't even consider (2) and (3) as possibilities. Those are far from my mind. I think I'm used to VM languages at this point (e.g. Java, Actionscript 3, Haxe, Synergy/DE|DBL, etc). In the VM, (2) and (3) can't happen. I never worry about those. Feel free to crash these in D.

I encounter (4) a lot. I really don't want my programs crashed when (4) happens. Such crashes would be super annoying, and they can happen at very bad times.

------

Now then, I have 2 things to say about this:

- Why can't we distinguish between these? As I said in my previous thoughts, we should have ways of ruling out (2) and (3), thus ensuring that our NullDerefException was caused by only (1) or (4). It's possible in VM languages, and given that the VM is merely a cheesy abstraction, I believe that it's always possible to accomplish the same things in D 100% of the time. Usually this requires isolating the system bits from the abstractions. Saying it can't be done would be giving up way too easily, and you can miss the hidden treasure that way.

- If I'm given some sensible way of handling sentinel values then (4) will become a non-issue. Then that leaves (1-3), and I am OK if those cause mandatory crashing.
I know I'm probably opening an old can of worms, but D is quite powerful and I think we should be able to solve this stuff. My instincts tell me that managing sentinel values with special patterns in memory (ex: null values or separate boolean flags) all have pitfalls (null-derefs or SSOT violations that lead to desync). Perhaps D's uber-powerful type system can rescue us? The only other problem with this is... what if our list is not exhaustive, and (5) exists?
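One stock way D's type system can already encode "maybe absent" is std.typecons.Nullable, which moves the empty state into the type so callers must check before unwrapping (a sketch, not a full answer to the sentinel problem; `indexOf` is a made-up example function):

```d
import std.typecons : Nullable, nullable;

// Hypothetical lookup that may find nothing; the "empty" state lives in the
// type instead of in a null class reference.
Nullable!int indexOf(int[] haystack, int needle)
{
    foreach (i, v; haystack)
        if (v == needle)
            return nullable(cast(int) i);
    return Nullable!int.init;  // explicitly "no value"
}

void main()
{
    auto r = indexOf([10, 20, 30], 20);
    assert(!r.isNull && r.get == 1);
    assert(indexOf([10], 99).isNull);  // caller must check; no deref possible
}
```

Calling `get` on an empty Nullable fails an assertion rather than segfaulting, which is at least a catchable, diagnosable failure mode.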
Mar 07 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 07 Mar 2012 10:10:32 -0500, Chad J  
<chadjoan __spam.is.bad__gmail.com> wrote:

 On Wednesday, 7 March 2012 at 14:23:18 UTC, Chad J wrote:
 On 03/07/2012 07:57 AM, Steven Schveighoffer wrote:
 On Mon, 05 Mar 2012 23:58:48 -0500, Chad J
 <chadjoan __spam.is.bad__gmail.com> wrote:

 Why is it fatal?
A segmentation fault indicates that a program tried to access memory that is not available. Since the 0 page is never allocated, any null pointer dereference results in a seg fault.

However, there are several causes of seg faults:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now points into no-mem land.
3. You are accessing memory that has been deallocated.

Only 1 is benign. 2 and 3 are fatal. Since you cannot know which of these three happened, the only valid choice is to terminate.

I think the correct option is to print a stack trace, and abort the program.
Alright, I think I see where the misunderstanding is coming from. I have only ever encountered (1). And I've encountered it a lot. I didn't even consider (2) and (3) as possibilities. Those are far from my mind. I still have a nagging doubt though: since the dereference in question is null, then there is no way for that particular dereference to corrupt other memory. The only way this happens in (2) and (3) is that related code tries to write to invalid memory. But if we have other measures in place to prevent that (bounds checking, other hardware signals, etc), then how is it still possible to corrupt memory?
 [...]

 -Steve
I spoke too soon! We missed one:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now points into no-mem land.
3. You are accessing memory that has been deallocated.
4. null was being used as a sentinel value, and it snuck into a place where the value should not be a sentinel anymore.

I will now change what I said to reflect this:

I think I see where the misunderstanding is coming from.

I encounter (1) from time to time. It isn't a huge problem because usually if I declare something the next thing on my mind is initializing it. Even if I forget, I'll catch it in early testing. It tends to never make it to anyone else's desk, unless it's a regression. Regressions like this aren't terribly common though. If you make my program crash from (1), I'll live.

I didn't even consider (2) and (3) as possibilities. Those are far from my mind. I think I'm used to VM languages at this point (e.g. Java, Actionscript 3, Haxe, Synergy/DE|DBL, etc). In the VM, (2) and (3) can't happen. I never worry about those. Feel free to crash these in D.

I encounter (4) a lot. I really don't want my programs crashed when (4) happens. Such crashes would be super annoying, and they can happen at very bad times.
You can use sentinels other than null. -Steve
Mar 07 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/07/2012 10:21 AM, Steven Schveighoffer wrote:
 On Wed, 07 Mar 2012 10:10:32 -0500, Chad J
 <chadjoan __spam.is.bad__gmail.com> wrote:

 On Wednesday, 7 March 2012 at 14:23:18 UTC, Chad J wrote:

 I spoke too soon!
 We missed one:

 1. You forgot to initialize a variable.
 2. Your memory has been corrupted, and some corrupted pointer
 now points into no-mem land.
 3. You are accessing memory that has been deallocated.
 4. null was being used as a sentinel value, and it snuck into
 a place where the value should not be a sentinel anymore.

 I will now change what I said to reflect this:

 I think I see where the misunderstanding is coming from.

 I encounter (1) from time to time. It isn't a huge problem because
 usually if I declare something the next thing on my mind is
 initializing it. Even if I forget, I'll catch it in early testing. It
 tends to never make it to anyone else's desk, unless it's a
 regression. Regressions like this aren't terribly common though. If
 you make my program crash from (1), I'll live.

 I didn't even consider (2) and (3) as possibilities. Those are far
 from my mind. I think I'm used to VM languages at this point (e.g.
 Java, Actionscript 3, Haxe, Synergy/DE|DBL, etc). In the VM, (2) and
 (3) can't happen. I never worry about those. Feel free to crash these
 in D.

 I encounter (4) a lot. I really don't want my programs crashed when
 (4) happens. Such crashes would be super annoying, and they can happen
 at very bad times.
You can use sentinels other than null. -Steve
Example?

Here, if you want, I'll start with a typical case. Please make it right.

class UnreliableResource
{
    this(string sourceFile) {...}
    this(uint userId) {...}
    void doTheThing() {...}
}

void main()
{
    // Set this to a sentinel value for cases where the source does
    // not exist, thus preventing proper initialization of res.
    UnreliableResource res = null;

    // The point here is that obtaining this unreliable resource
    // is tricky business, and therefore complicated.
    //
    if ( std.file.exists("some_special_file") )
    {
        res = new UnreliableResource("some_special_file");
    }
    else
    {
        uint uid = getUserIdSomehow();
        if ( isValidUserId(uid) )
        {
            res = new UnreliableResource(uid);
        }
    }

    // Do some other stuff.
    ...

    // Now use the resource.
    try
    {
        thisCouldBreakButItWont(res);
    }
    // Fairly safe if we were in a reasonable VM.
    catch ( NullDerefException e )
    {
        writefln("This shouldn't happen, but it did.");
    }
}

void thisCouldBreakButItWont(UnreliableResource res)
{
    if ( res !is null )
    {
        res.doTheThing();
    }
    else
    {
        doSomethingUsefulThatCanHappenWhenResIsNotAvailable();
        writefln("Couldn't find the resource thingy.");
        writefln("Resetting the m-rotor. (NOOoooo!)");
    }
}

Please follow these constraints:

- Do not use a separate boolean variable for determining whether or not 'res' could be created. This violates a kind of SSOT (http://en.wikipedia.org/wiki/Single_Source_of_Truth) because it allows cases where the hypothetical "resIsInitialized" variable is true but res isn't actually initialized, or where "resIsInitialized" is false but res is actually initialized. It also doesn't throw catchable exceptions when the uninitialized class has methods called on it. In my pansy VM-based languages I always prefer to risk the null sentinel.

- Do not modify the implementation of UnreliableResource. It's not always possible.

- Try to make the solution something that could, in principle, be placed into Phobos and reused without a lot of refactoring in the original code.

...

Now I will think about this a bit...
This reminds me a lot of algebraic data types. I kind of want to say something like:

auto res = empty | UnreliableResource;

and then unwrap it:

    ...
    thisCantBreakAnymore(res);
}

void thisCantBreakAnymore(UnreliableResource res)
{
    res.doTheThing();
}

void thisCantBreakAnymore(empty)
{
    doSomethingUsefulThatCanHappenWhenResIsNotAvailable();
    writefln("Couldn't find the resource thingy.");
    writefln("Resetting the m-rotor. (NOOoooo!)");
}

I'm not absolutely sure I'd want to go that path though, and since D is unlikely to do any of those things, I just want to be able to catch an exception if the sentinel value tries to have the "doTheThing()" method called on it.

I can maybe see invariants being used for this:

class UnreliableResource
{
    bool initialized = false;

    invariant
    {
        if (!initialized)
            throw new Exception("Not initialized.");
    }

    void initialize(string sourceFile) { ... }
    void initialize(uint userId) { ... }
    void doTheThing() {...}
}

But as I think about it, this approach already has a lot of problems:

- It violates the condition that UnreliableResource shouldn't be modified to solve the problem. Sometimes the class in question is upstream or otherwise not available for modification.

- I have to add this stupid boilerplate to every class.

- There could be a mixin template to ease the boilerplate, but the D spec states that there can be only one invariant in a class. Using such a mixin would nix my ability to have an invariant for other things.

- Calling initialize(...) would violate the invariant. It can't be initialized in the constructor because we need to be able to have the instance exist temporarily in a state where it is constructed from a nullary do-nothing constructor and remains uninitialized until a beneficial codepath initializes it properly.

- It will not be present in release mode. This could be a deal-breaker in some cases.
- Using this means that instances of UnreliableResource should just never be null, and thus I am required to do an allocation even when the program will take codepaths that don't actually use the class. I'm usually not concerned too much with premature optimization, but allocations are probably a nasty thing to sprinkle about unnecessarily.

Maybe a proxy struct with opDispatch and such could be used to get around these limitations? Ex usage:

Initializable!(UnreliableResource) res;
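The proxy idea might look something like this (a rough sketch; `Initializable` is hypothetical and not in Phobos, and `UnreliableResource` here is a stand-in with invented members):

```d
import std.stdio : writeln;

// Hypothetical proxy: holds a possibly-uninitialized class reference and
// turns any use of an empty one into a catchable exception via opDispatch.
struct Initializable(T) if (is(T == class))
{
    private T payload;

    void initialize(Args...)(Args args) { payload = new T(args); }
    bool isInitialized() const { return payload !is null; }

    auto opDispatch(string name, Args...)(Args args)
    {
        if (payload is null)
            throw new Exception("uninitialized " ~ T.stringof ~ "." ~ name);
        return mixin("payload." ~ name ~ "(args)");
    }
}

class UnreliableResource
{
    this(uint userId) {}
    int doTheThing() { return 42; }
}

void main()
{
    Initializable!UnreliableResource res;
    try res.doTheThing();
    catch (Exception e) writeln("caught: ", e.msg);  // instead of a segfault

    res.initialize(7u);
    assert(res.doTheThing() == 42);
}
```

This keeps the wrapped class unmodified and needs no allocation until initialize() is actually called, which addresses two of the constraints above.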
Mar 07 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, March 07, 2012 20:44:59 Chad J wrote:
 On 03/07/2012 10:21 AM, Steven Schveighoffer wrote:
 You can use sentinels other than null.
 
 -Steve
Example?
Create an instance of the class which is immutable and represents an invalid value. You could check whether something is that value with the is operator, since there's only one of it. You could even make it a derived class and have all of its functions throw a particular exception if someone tries to call them. - Jonathan M Davis
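A minimal sketch of this suggestion (names invented): one shared sentinel object, compared with `is`, implemented as a derived class whose methods throw instead of segfaulting:

```d
class Resource
{
    void doTheThing() { /* normal behavior */ }
}

// Derived sentinel type: any attempted use throws instead of segfaulting.
class InvalidResource : Resource
{
    override void doTheThing()
    {
        throw new Exception("tried to use the invalid-resource sentinel");
    }
}

// Exactly one sentinel object, created at program startup.
__gshared Resource INVALID;
shared static this() { INVALID = new InvalidResource; }

void use(Resource r)
{
    if (r is INVALID)   // identity check; there is only one sentinel
        return;         // handle "no resource" gracefully
    r.doTheThing();
}
```

Because there is a single instance, the `is` identity check distinguishes "deliberately absent" from an accidental null.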
Mar 07 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/07/2012 10:08 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 20:44:59 Chad J wrote:
 On 03/07/2012 10:21 AM, Steven Schveighoffer wrote:
 You can use sentinels other than null.

 -Steve
Example?
Create an instance of the class which is immutable and represents an invalid value. You could check whether something is that value with the is operator, since there's only one of it. You could even make it a derived class and have all of its functions throw a particular exception if someone tries to call them. - Jonathan M Davis
Makes sense. Awfully labor-intensive though. Doesn't work well on classes that can't be easily altered. That is, it violates this:
 - Do not modify the implementation of UnreliableResource.  It's not always
possible.
But, maybe it can be turned into a template and made to work for arrays too...
Mar 07 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, March 07, 2012 22:36:50 Chad J wrote:
 On 03/07/2012 10:08 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 20:44:59 Chad J wrote:
 On 03/07/2012 10:21 AM, Steven Schveighoffer wrote:
 You can use sentinels other than null.
 
 -Steve
Example?
Create an instance of the class which is immutable and represents an invalid value. You could check whether something is that value with the is operator, since there's only one of it. You could even make it a derived class and have all of its functions throw a particular exception if someone tries to call them. - Jonathan M Davis
Makes sense. Awfully labor-intensive though. Doesn't work well on classes that can't be easily altered. That is, it violates this:
 - Do not modify the implementation of UnreliableResource.  It's not always
 possible.
But, maybe it can be turned into a template and made to work for arrays too...
Personally, I'd probably just use null. But if you want a sentinel other than null, it's quite feasible. - Jonathan M Davis
Mar 07 2012
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/07/2012 10:40 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 22:36:50 Chad J wrote:
 On 03/07/2012 10:08 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 20:44:59 Chad J wrote:
 On 03/07/2012 10:21 AM, Steven Schveighoffer wrote:
 You can use sentinels other than null.

 -Steve
Example?
Create an instance of the class which is immutable and represents an invalid value. You could check whether something is that value with the is operator, since there's only one of it. You could even make it a derived class and have all of its functions throw a particular exception if someone tries to call them. - Jonathan M Davis
Makes sense. Awfully labor-intensive though. Doesn't work well on classes that can't be easily altered. That is, it violates this:
 - Do not modify the implementation of UnreliableResource.  It's not always
 possible.
But, maybe it can be turned into a template and made to work for arrays too...
Personally, I'd probably just use null. But if you want a sentinel other than null, it's quite feasible. - Jonathan M Davis
Wait, so you'd use null and then have the program unconditionally crash whenever you (inevitably) mess up sentinel logic?
Mar 07 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, March 07, 2012 22:58:44 Chad J wrote:
 On 03/07/2012 10:40 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 22:36:50 Chad J wrote:
 On 03/07/2012 10:08 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 20:44:59 Chad J wrote:
 On 03/07/2012 10:21 AM, Steven Schveighoffer wrote:
 You can use sentinels other than null.
 
 -Steve
Example?
Create an instance of the class which is immutable and represents an invalid value. You could check whether something is that value with the is operator, since there's only one of it. You could even make it a derived class and have all of its functions throw a particular exception if someone tries to call them. - Jonathan M Davis
Makes sense. Awfully labor-intensive though. Doesn't work well on classes that can't be easily altered. That is, it violates this:
 - Do not modify the implementation of UnreliableResource.  It's not
 always
 possible.
But, maybe it can be turned into a template and made to work for arrays too...
Personally, I'd probably just use null. But if you want a sentinel other than null, it's quite feasible. - Jonathan M Davis
Wait, so you'd use null and then have the program unconditionally crash whenever you (inevitably) mess up sentinel logic?
Yes. Proper testing will find most such problems. And it's not like having a non-null sentinel is going to prevent you from having problems. It just means that you're not distinguishing between a variable that you forgot to initialize and one which you set to the sentinel value. Your program can die from a variable being null in either case. And in _both_ cases, it's generally unsafe to continue executing your program anyway. And honestly, in my experience, null pointers are a very rare thing. You catch them through solid testing. - Jonathan M Davis
Mar 07 2012
parent Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/07/2012 11:17 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 22:58:44 Chad J wrote:
 On 03/07/2012 10:40 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 22:36:50 Chad J wrote:
 On 03/07/2012 10:08 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 20:44:59 Chad J wrote:
 On 03/07/2012 10:21 AM, Steven Schveighoffer wrote:
 You can use sentinels other than null.

 -Steve
Example?
Create an instance of the class which is immutable and represents an invalid value. You could check whether something is that value with the is operator, since there's only one of it. You could even make it a derived class and have all of its functions throw a particular exception if someone tries to call them. - Jonathan M Davis
Makes sense. Awfully labor-intensive though. Doesn't work well on classes that can't be easily altered. That is, it violates this:
 - Do not modify the implementation of UnreliableResource.  It's not
 always
 possible.
But, maybe it can be turned into a template and made to work for arrays too...
Personally, I'd probably just use null. But if you want a sentinel other than null, it's quite feasible. - Jonathan M Davis
Wait, so you'd use null and then have the program unconditionally crash whenever you (inevitably) mess up sentinel logic?
Yes. Proper testing will find most such problems. And it's not like having a non-null sentinel is going to prevent you from having problems. It just means that you're not distinguishing between a variable that you forgot to initialize and one which you set to the sentinel value. Your program can die from a variable being null in either case. And in _both_ cases, it's generally unsafe to continue executing your program anyway.
The important difference in using explicit sentinel values here is that they are not null, and thus very unlikely to have been caused by memory corruption. It allows us to distinguish between the two sources of empty variables. With a better way to do sentinel values, I can isolate my cleaner-looking code from the scarier-looking code that comes from any number of places.

I also am not too worried about null values that come from stuff that was simply forgotten, instead of intentionally nulled. I DO tend to catch those really early in testing, and they are unlikely to happen to begin with due to the close association between declaration and initialization.
 And honestly, in my experience, null pointers are a very rare thing. You catch
 them through solid testing.

 - Jonathan M Davis
Sorry, your testing doesn't help me as well as you probably wish it does. Our experiences must be very different. I run into a lot of cases where things can't be tested automatically, or at least not easily. Think along the lines of graphics operations, interactively driven code (ex: event lifetimes), network code, etc. Testing can help things between endpoints, but it doesn't help much where the rubber meets the road. And that's just game dev.

Then I go to work at my job, the one that makes money, and experience code from the 80s. Rewriting it is completely impractical for near-term projects (though a complete phase-out of crufty old crap is on the horizon one way or another!). Yes it has bugs. If I had an attitude of "crash on every little nit" then these things wouldn't last a few seconds (OK, exaggeration). So I recover as well as possible, and occasionally rewrite strategically important pieces. But the world is NOT perfect, so relying on it being perfect is 100% unhelpful to me. Also, "quit your job" is not an acceptable solution. ;)

Now, in principle, we will never have to deal with D code like that. Nonetheless, these experiences do make me severely afraid of lacking the tools that keep me safe. And then there are still those occasional weird problems where sentinel values are needed, and it's so stateful that there's a vanishingly close-to-zero chance that testing will catch the stuff that it needs to. So I test it as well as I can and leave a "if all else fails, DO THIS" next to the dubious code. Indiscriminate segfaulting deprives me of this last-ditch option. There is no longer even a way to crash elegantly. It all just goes to hell.

Long story short: in practice, I find that recovering from sentinel dereference is not only VERY safe, but also orders of magnitude less frustrating for both my users and me. (Memory corruption, on the other hand, is something I am very unfamiliar with, and sort of afraid of. So I'm willing to ditch nulls.)
Mar 07 2012
prev sibling next sibling parent "Chad J" <chadjoan __spam.is.bad__gmail.com> writes:
On Wednesday, 7 March 2012 at 14:23:18 UTC, Chad J wrote:
 On 03/07/2012 07:57 AM, Steven Schveighoffer wrote:
 On Mon, 05 Mar 2012 23:58:48 -0500, Chad J
 <chadjoan __spam.is.bad__gmail.com> wrote:

 Why is it fatal?
A segmentation fault indicates that a program tried to access memory that is not available. Since the 0 page is never allocated, any null pointer dereferencing results in a seg fault. However, there are several causes of seg faults:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now points into no-mem land.
3. You are accessing memory that has been deallocated.

Only 1 is benign. 2 and 3 are fatal. Since you cannot know which of these three happened, the only valid choice is to terminate. I think the correct option is to print a stack trace, and abort the program.
Alright, I think I see where the misunderstanding is coming from. I have only ever encountered (1). And I've encountered it a lot. I didn't even consider (2) and (3) as possibilities. Those are far from my mind.

I still have a nagging doubt though: since the dereference in question is null, then there is no way for that particular dereference to corrupt other memory. The only way this happens in (2) and (3) is that related code tries to write to invalid memory. But if we have other measures in place to prevent that (bounds checking, other hardware signals, etc), then how is it still possible to corrupt memory?
 [...]

 -Steve
I spoke too soon! We missed one:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now points into no-mem land.
3. You are accessing memory that has been deallocated.
4. null was being used as a sentinel value, and it snuck into a place where the value should not be a sentinel anymore.

I will now change what I said to reflect this:

I think I see where the misunderstanding is coming from. I encounter (1) from time to time. It isn't a huge problem because usually if I declare something the next thing on my mind is initializing it. Even if I forget, I'll catch it in early testing. It tends to never make it to anyone else's desk, unless it's a regression. Regressions like this aren't terribly common though. If you make my program crash from (1), I'll live.

I didn't even consider (2) and (3) as possibilities. Those are far from my mind. I think I'm used to VM languages at this point; in a VM, (2) and (3) can't happen. I never worry about those. Feel free to crash these in D.

I encounter (4) a lot. I really don't want my programs crashed when (4) happens. Such crashes would be super annoying, and they can happen at very bad times.

------

Now then, I have 2 things to say about this:

- Why can't we distinguish between these? As I said in my previous thoughts, we should have ways of ruling out (2) and (3), thus ensuring that our NullDerefException was caused by only (1) or (4). It's possible in VM languages, but given that the VM is merely a cheesy abstraction, I believe that it's always possible to accomplish the same things in D 100% of the time. Usually this requires isolating the system bits from the abstractions. Saying it can't be done would be giving up way too easily, and you can miss the hidden treasure that way.

- If I'm given some sensible way of handling sentinel values then (4) will become a non-issue. Then that leaves (1-3), and I am OK if those cause mandatory crashing.
I know I'm probably opening an old can of worms, but D is quite powerful and I think we should be able to solve this stuff. My instincts tell me that managing sentinel values with special patterns in memory (ex: null values or separate boolean flags) all have pitfalls (null-derefs or SSOT violations that lead to desync). Perhaps D's uber-powerful type system can rescue us? The only other problem with this is... what if our list is not exhaustive, and (5) exists?
Mar 07 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Mar 07, 2012 at 09:22:27AM -0500, Chad J wrote:
 On 03/07/2012 07:57 AM, Steven Schveighoffer wrote:
[...]
However, there are several causes of seg faults:

1. You forgot to initialize a variable.
2. Your memory has been corrupted, and some corrupted pointer now
points into no-mem land.
3. You are accessing memory that has been deallocated.

Only 1 is benign. 2 and 3 are fatal. Since you cannot know which of
these three happened, the only valid choice is to terminate.

I think the correct option is to print a stack trace, and abort the
program.
Alright, I think I see where the misunderstanding is coming from. I have only ever encountered (1). And I've encountered it a lot. I didn't even consider (2) and (3) as possibilities. Those are far from my mind.

I still have a nagging doubt though: since the dereference in question is null, then there is no way for that particular dereference to corrupt other memory. The only way this happens in (2) and (3) is that related code tries to write to invalid memory. But if we have other measures in place to prevent that (bounds checking, other hardware signals, etc), then how is it still possible to corrupt memory?
[...] It's not that the null pointer itself corrupts memory. It's that the null pointer is a sign that something may have corrupted memory *before* you got to that point.

The point is, it's impossible to tell whether the null pointer was merely the result of forgetting to initialize something, or it's a symptom of a far more sinister problem. The source of the problem could potentially be very far away, in unrelated code, and only when you tried to access the pointer, you discover that something is wrong.

At that point, it may very well be the case that the null pointer isn't just a benign uninitialized pointer, but the result of a memory corruption, perhaps an exploit in the process of taking over your application, or some internal consistency error that is in the process of destroying user data. Trying to continue is a bad idea, since you'd be letting the exploit take over, or allowing user data to get even more corrupted than it already is.

T

--
Be in denial for long enough, and one day you'll deny yourself of things you wish you hadn't.
Mar 07 2012
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, March 07, 2012 07:55:35 H. S. Teoh wrote:
 It's not that the null pointer itself corrupts memory. It's that the
 null pointer is a sign that something may have corrupted memory *before*
 you got to that point.
 
 The point is, it's impossible to tell whether the null pointer was
 merely the result of forgetting to initialize something, or it's a
 symptom of a far more sinister problem. The source of the problem could
 potentially be very far away, in unrelated code, and only when you tried
 to access the pointer, you discover that something is wrong.
 
 At that point, it may very well be the case that the null pointer isn't
 just a benign uninitialized pointer, but the result of a memory
 corruption, perhaps an exploit in the process of taking over your
 application, or some internal consistency error that is in the process
 of destroying user data. Trying to continue is a bad idea, since you'd
 be letting the exploit take over, or allowing user data to get even more
 corrupted than it already is.
Also, while D does much more to protect you from stuff like memory corruption than C/C++ does, it's still a systems language. Stuff like that can definitely happen. If you're writing primarily in SafeD, then it's very much minimized, but it's not necessarily eliminated. All it takes is a bug in @system code which could corrupt memory, and voila, you have corrupted memory, and an @safe function could get a segfault even though it's correct code. It's likely to be a very rare occurrence, but it's possible.

And since when you get a segfault, you can't know what caused it, you have to assume that it could have been caused by one of the nastier possibilities rather than a relatively benign one. And since ultimately, your program should be checking for null before dereferencing a variable in any case where it could be null, segfaulting due to dereferencing a null pointer is a program bug which should be caught in testing - like assertions in general are - rather than having the program attempt to recover from it. And if you do _that_, the odds of a segfault being due to something very nasty just go up, making it that much more of a bad idea to try and recover from one.

- Jonathan M Davis
Mar 07 2012
parent Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/07/2012 02:09 PM, Jonathan M Davis wrote:
 On Wednesday, March 07, 2012 07:55:35 H. S. Teoh wrote:
 It's not that the null pointer itself corrupts memory. It's that the
 null pointer is a sign that something may have corrupted memory *before*
 you got to that point.

 The point is, it's impossible to tell whether the null pointer was
 merely the result of forgetting to initialize something, or it's a
 symptom of a far more sinister problem. The source of the problem could
 potentially be very far away, in unrelated code, and only when you tried
 to access the pointer, you discover that something is wrong.

 At that point, it may very well be the case that the null pointer isn't
 just a benign uninitialized pointer, but the result of a memory
 corruption, perhaps an exploit in the process of taking over your
 application, or some internal consistency error that is in the process
 of destroying user data. Trying to continue is a bad idea, since you'd
 be letting the exploit take over, or allowing user data to get even more
 corrupted than it already is.
Also, while D does much more to protect you from stuff like memory corruption than C/C++ does, it's still a systems language. Stuff like that can definitely happen. If you're writing primarily in SafeD, then it's very much minimized, but it's not necessarily eliminated. All it takes is a bug in @system code which could corrupt memory, and voila, you have corrupted memory, and an @safe function could get a segfault even though it's correct code. It's likely to be a very rare occurrence, but it's possible.

And since when you get a segfault, you can't know what caused it, you have to assume that it could have been caused by one of the nastier possibilities rather than a relatively benign one. And since ultimately, your program should be checking for null before dereferencing a variable in any case where it could be null, segfaulting due to dereferencing a null pointer is a program bug which should be caught in testing - like assertions in general are - rather than having the program attempt to recover from it. And if you do _that_, the odds of a segfault being due to something very nasty just go up, making it that much more of a bad idea to try and recover from one.

- Jonathan M Davis
I can see where you're coming from now. As I mentioned in another post, my lack of consideration for this indicator of memory corruption is probably a reflection of my bias towards VM'd languages.

I still don't buy the whole "it's a program bug that should be caught in testing". I mean... true, but sometimes it isn't. Especially since testing and assertions can never be 100% thorough. What then? Sorry, enjoy your suffering? At that point I would like to have a better way to do sentinel values. I'd at least like to get an exception of some kind if I try to access a value that /shouldn't/ be there (as opposed to something that /should/ be there but /isn't/). Combine that with sandboxing and I might just be satisfied for the time being.

See my reply to Steve for more details. It's the one that starts like this:
 Example?

 Here, if you want, I'll start with a typical case.  Please make it right.

 class UnreliableResource
 {
     this(string sourceFile) {...}
     this(uint userId) {...}
     void doTheThing() {...}
 }
Mar 07 2012
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-03-05 07:25, Walter Bright wrote:
 On 3/4/2012 6:31 PM, Chad J wrote:
 class Bar
 {
 int foo;
 }

 void main()
 {
 Bar bar;
 try {
 bar.foo = 5;
 } catch ( Exception e ) {
 writefln("%s",e);
 }
 }

 DMD 2.057 on Gentoo Linux, compiled with "-g -debug". It prints this:
 Segmentation fault

 Very frustrating!
This is what I get (I added in an import std.stdio;):

dmd foo -gc
gdb foo
GNU gdb (GDB) 7.2-ubuntu
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /home/walter/cbx/mars/foo...done.
(gdb) run
Starting program: /home/walter/cbx/mars/foo
[Thread debugging using libthread_db enabled]

Program received signal SIGSEGV, Segmentation fault.
0x0000000000401e45 in D main () at foo.d:13
13 bar.foo = 5;
(gdb) bt
(gdb)

By running it under gdb (the debugger), it tells me what file and line it failed on, and gives a lovely stack trace. There really are only 3 gdb commands you need (and the only ones I remember):

run (run your program)
bt (print a backtrace)
quit (exit gdb)

Voila!

Also, a null pointer exception is only one of a whole menagerie of possible hardware-detected errors. There's a limit on the compiler instrumenting code to detect these. At some point, it really is worth learning how to use the debugger.
Is demangling supposed to work on Mac OS X? -- /Jacob Carlborg
Mar 05 2012
parent reply "David Nadlinger" <see klickverbot.at> writes:
On Monday, 5 March 2012 at 09:09:30 UTC, Jacob Carlborg wrote:
 On 2012-03-05 07:25, Walter Bright wrote:
 […]
 run (run your program)
 bt (print a backtrace)
 quit (exit gdb)
 […]
Is demangling supposed to work on Mac OS X?
As the D demangling support was added in GDB 7.<something>, unfortunately no, as Apple ships an ancient (customized) version. David
Mar 05 2012
parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Mon, 05 Mar 2012 10:43:22 +0100, David Nadlinger <see klickverbot.at> wrote:

 On Monday, 5 March 2012 at 09:09:30 UTC, Jacob Carlborg wrote:
 On 2012-03-05 07:25, Walter Bright wrote:
 […]
 run (run your program)
 bt (print a backtrace)
 quit (exit gdb)
 […]
Is demangling supposed to work on Mac OS X?
 As the D demangling support was added in GDB 7.<something>, unfortunately no, as Apple ships an ancient (customized) version.

 David
From GDB 7.3 OSX Mach-O support is somewhat fixed, but it's buggy when being used with dmd because it doesn't relocate the "__textcoal_nt" sections.
Mar 05 2012
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, March 04, 2012 21:31:21 Chad J wrote:
 On 03/03/2012 02:06 PM, Walter Bright wrote:
 On 3/3/2012 2:13 AM, bearophile wrote:
 Walter:
 Adding in software checks for null pointers will dramatically slow
 things
 down.
Define this use of "dramatically" in a more quantitative and objective way, please. Is Java "dramatically" slower than C++/D here?
You can try it for yourself. Take some OOP code of yours, and insert a null check in front of every dereference of the class handle.
I have a hard time buying this as a valid reason to avoid inserting such checks. I do think they should be optional, but they should be available, if not default, with optimizations for signal handlers and such taken in the cases where they apply. Even if it slows my code down 4x, it'll be a huge win for me to avoid this stuff. Because you know what pisses me off a helluva lot more than slightly slower code? Spending hours trying to figure out what made my program say "Segmentation fault". That's what.
Really? I rarely run into segfaults, and when I do, it's easy enough to enable core dumps, rerun the program, and then get an actual stack trace (along with the whole state of the program for that matter). Yes, upon occasion, it would be useful - especially if you're talking about a large program where you can't simply rerun it with core dumps enabled and quickly reproduce the problem - but in my experience, the reality of the matter is that it's a very rare occurrence. And if it really is something that keeps causing you problems, on Linux at least, it's very easy to enable a signal handler to get you a stack trace. So, I can see your complaint, but I find that it's rarely justified in practice.

- Jonathan M Davis
Mar 04 2012
prev sibling next sibling parent reply "Sandeep Datta" <datta.sandeep gmail.com> writes:
I would recommend doing what Microsoft does in this case, use SEH
(Structured exception handling) on windows i.e. use OS facilities
to trap and convert hardware exceptions into software exceptions.

See the /EHa flag in the Microsoft C++ compiler.

I hope Linux has something similar, then we are all set!

ref:
http://msdn.microsoft.com/en-us/library/1deeycx5(v=vs.80).aspx

On Saturday, 3 March 2012 at 02:51:41 UTC, Walter Bright wrote:
 On 3/1/2012 8:51 PM, Jonathan M Davis wrote:
 It's defined. The operating system protects you.
Not exactly. It's a feature of the hardware. You get this for free, and your code runs at full speed. Adding in software checks for null pointers will dramatically slow things down.
Mar 03 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/3/2012 12:29 PM, Sandeep Datta wrote:
 I would recommend doing what Microsoft does in this case, use SEH
 (Structured exception handling) on windows i.e. use OS facilities
 to trap and convert hardware exceptions into software exceptions.
D for Windows already does that. It's been there for 10 years, and turns out to be a solution looking for a problem.
Mar 03 2012
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Saturday, 3 March 2012 at 22:24:59 UTC, Walter Bright wrote:
 On 3/3/2012 12:29 PM, Sandeep Datta wrote:
 I would recommend doing what Microsoft does in this case, use 
 SEH (Structured exception handling) on windows i.e. use OS 
 facilities to trap and convert hardware exceptions into 
 software exceptions.
D for Windows already does that. It's been there for 10 years, and turns out to be a solution looking for a problem.
Getting a stack trace without having to start a debugger is kinda nice.
Mar 03 2012
prev sibling parent reply "Sandeep Datta" <datta.sandeep gmail.com> writes:
 It's been there for 10 years, and turns out to be a solution 
 looking for a problem.
I beg to differ, the ability to catch and respond to such asynchronous exceptions is vital to the stable operation of long running software. It is not hard to see how this can be useful in programs which depend on plugins to extend functionality (e.g. IIS, Visual Studio, OS with drivers as plugins etc). A misbehaving plugin has the potential to bring down the whole house if hardware exceptions cannot be safely handled within the host application. Thus the inability of handling such exceptions undermines D's ability to support dynamically loaded modules of any kind and greatly impairs modularity.

Also note hardware exceptions are not limited to segfaults; there are other exceptions like division by zero, invalid operation, floating point exceptions (overflow, underflow) etc. Plus by using this approach (SEH) you can eliminate the software null checks and avoid taking a hit on performance.

So in conclusion I think it will be worth our while to supply something like a NullReferenceException (and maybe NullPointerException for raw pointers) which will provide more context than a simple segfault (and that too without a core dump). Additional information may include things like a stacktrace (like Vladimir said in another post) with line numbers, file/module names etc. Please take a look at C#'s exception hierarchy for some inspiration (not that you need any but it's nice to have some consistency across languages too). I am just a beginner in D but I hope D has something like exception chaining using which we can chain exceptions as we go to capture the chain of events which led to failure.
Mar 03 2012
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Sunday, 4 March 2012 at 02:53:54 UTC, Sandeep Datta wrote:
 Thus the inability of handling such exceptions undermines D's 
 ability to support dynamically loaded modules of any kind and 
 greatly impairs modularity.
You can catch it in D (on Windows):

import std.stdio;

void main() {
    int* a;
    try {
        *a = 0;
    } catch(Throwable t) {
        writefln("I caught it! %s", t.msg);
    }
}

dmd test9
test9
I caught it! Access Violation
Mar 03 2012
parent reply "Sandeep Datta" <datta.sandeep gmail.com> writes:
 You can catch it in D (on Windows):
This is great. All we have to do now is provide a more specific exception (say NullReferenceException) so that the programmer has the ability to provide a specific exception handler for NullReferenceException etc. I gave it a try on Linux but unfortunately it leads to a segfault (DMD 2.056, x86-64).
Mar 03 2012
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Sunday, 4 March 2012 at 03:15:19 UTC, Sandeep Datta wrote:
 All we have to do now is provide a more specific exception (say 
 NullReferenceException) so that the programmer has the ability 
 to provide a specific exception handler for 
 NullReferenceException etc.
Looks like it is pretty easy to do. Check out dmd2/src/druntime/src/rt/deh.d. The Access Violation error is thrown at about line 635. There's a big switch that handles a bunch of errors.
 I gave it a try on Linux but unfortunately it leads to a 
 segfault
Yeah, Linux does it as a signal which someone has tried to turn into an exception before, which kinda works but is easy to break. (I don't remember exactly why it didn't work right, but there were problems.)
Mar 03 2012
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 04, 2012 at 04:29:27AM +0100, Adam D. Ruppe wrote:
 On Sunday, 4 March 2012 at 03:15:19 UTC, Sandeep Datta wrote:
All we have to do now is provide a more specific exception (say
NullReferenceException) so that the programmer has the ability to
provide a specific exception handler for NullReferenceException etc.
Looks like it is pretty easy to do. Check out dmd2/src/druntime/src/rt/deh.d The Access Violation error is thrown on about line 635. There's a big switch that handles a bunch of errors.
I gave it a try on Linux but unfortunately it leads to a segfault
Yeah, Linux does it as a signal which someone has tried to turn into an exception before, which kinda works but is easy to break. (I don't remember exactly why it didn't work right, but there were problems.)
Yeah, according to the Posix specs, trying to continue execution after catching SIGSEGV or SIGILL is ... to say the least, extremely dangerous. Basically you'll have to unwind the stack and run the rest of the program inside signal handler context, which means certain operations (like calling signal unsafe syscalls) are not guaranteed to do what you'd expect. Of course, there are ways around it, but it does depend on the specific way Linux implements signal handling, which is not guaranteed to not change across Linux versions (because it's not part of the Posix spec). So it would be very fragile, and prone to nasty bugs. T -- This is a tpyo.
Mar 03 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/3/2012 6:53 PM, Sandeep Datta wrote:
 It's been there for 10 years, and turns out to be a solution looking for a
 problem.
I beg to differ, the ability to catch and respond to such asynchronous exceptions is vital to the stable operation of long running software. It is not hard to see how this can be useful in programs which depend on plugins to extend functionality (e.g. IIS, Visual Studio, OS with drivers as plugins etc). A misbehaving plugin has the potential to bring down the whole house if hardware exceptions cannot be safely handled within the host application. Thus the inability of handling such exceptions undermines D's ability to support dynamically loaded modules of any kind and greatly impairs modularity. Also note hardware exceptions are not limited to segfaults; there are other exceptions like division by zero, invalid operation, floating point exceptions (overflow, underflow) etc. Plus by using this approach (SEH) you can eliminate the software null checks and avoid taking a hit on performance. So in conclusion I think it will be worth our while to supply something like a NullReferenceException (and maybe NullPointerException for raw pointers) which will provide more context than a simple segfault (and that too without a core dump). Additional information may include things like a stacktrace (like Vladimir said in another post) with line numbers, file/module names etc. Please take a look at C#'s exception hierarchy for some inspiration (not that you need any but it's nice to have some consistency across languages too). I am just a beginner in D but I hope D has something like exception chaining using which we can chain exceptions as we go to capture the chain of events which led to failure.
As I said, it already does that (on Windows). There is an access violation exception. Try it on windows, you'll see it.

1. SEH isn't portable. There's no way to make it work under non-Windows systems.

2. Converting SEH to D exceptions is not necessary to make a stack trace dump work.

3. Intercepting and recovering from seg faults, div by 0, etc., all sounds great on paper. In practice, it is almost always wrong. The only exception (!) to the rule is when sandboxing a plugin (as you suggested). Making such a sandbox work is highly system specific, and doesn't always fit into the D exception model (in fact, it never does outside of Windows).
Mar 03 2012
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 03, 2012 at 07:34:50PM -0800, Walter Bright wrote:
[...]
 3. Intercepting and recovering from seg faults, div by 0, etc., all
 sounds great on paper. In practice, it is almost always wrong. The
 only exception (!) to the rule is when sandboxing a plugin (as you
 suggested). Making such a sandbox work is highly system specific, and
 doesn't always fit into the D exception model (in fact, it never does
 outside of Windows).
[...] I wonder if there's some merit to a std.sandbox module in Phobos... In Linux (any Posix), for example, it could run the sandbox code inside a fork()ed process, and watch for termination by signal, for example. Data could be returned via a pipe, or maybe a shared memory segment of some sort. Don't know how this would work on Windows, but presumably there are clean ways of doing it that doesn't endanger the health of the process creating the sandbox. T -- Freedom of speech: the whole world has no right *not* to hear my spouting off!
Mar 03 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/3/2012 8:20 PM, H. S. Teoh wrote:
 Don't know how this would work on Windows, but presumably there are
 clean ways of doing it that doesn't endanger the health of the process
 creating the sandbox.
If you're dealing with plugins from an unknown source, it's a good design to separate plugins and such as entirely separate processes. Then, when one goes down, it cannot bring down anyone else, since there is no shared address space. They can communicate with the OS-supplied interprocess communications API.
Mar 03 2012
parent "Sandeep Datta" <datta.sandeep gmail.com> writes:
 If you're dealing with plugins from an unknown source, it's a 
 good design to separate plugins and such as entirely separate 
 processes. Then, when one goes down, it cannot bring down 
 anyone else, since there is no shared address space.

 They can communicate with the OS-supplied interprocess 
 communications API.
Yes I think this is a good idea in general but the process/IPC overhead can be substantial if you have a lot of (small) plugins. I think Google Chrome uses this trick (among others) to good effect in providing fault tolerance ( http://www.geekosystem.com/google-chrome-hacking-prize/ ).
Mar 03 2012
prev sibling next sibling parent "Sandeep Datta" <datta.sandeep gmail.com> writes:
 1. SEH isn't portable. There's no way to make it work under 
 non-Windows systems.
Ok after some digging around it appears (prima facie) that Linux doesn't have anything close to SEH. I am aware of POSIX signals but I am not sure if they work for individual threads in a process. Last I checked the whole process has to be hosed when you receive a segfault and there isn't much you can do about it. I am a Linux newbie but I am almost seriously considering implementing SEH for linux (in the kernel). Any Linux Gurus here who think this is a good idea?
Mar 03 2012
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jiunst$qrm$1 digitalmars.com...
 3. Intercepting and recovering from seg faults, div by 0, etc., all sounds 
 great on paper. In practice, it is almost always wrong. The only exception 
 (!) to the rule is when sandboxing a plugin (as you suggested).
The purpose of catching exceptions is to respond to a condition. Recovery is merely *one* such type of response.
Mar 05 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/5/2012 12:05 PM, Nick Sabalausky wrote:
 "Walter Bright"<newshound2 digitalmars.com>  wrote in message
 news:jiunst$qrm$1 digitalmars.com...
 3. Intercepting and recovering from seg faults, div by 0, etc., all sounds
 great on paper. In practice, it is almost always wrong. The only exception
 (!) to the rule is when sandboxing a plugin (as you suggested).
The purpose of catching exceptions is to respond to a condition. Recovery is merely *one* such type of response.
Right, but Sandeep and I were specifically talking about recovering.
Mar 06 2012
prev sibling parent reply Don Clugston <dac nospam.com> writes:
On 04/03/12 04:34, Walter Bright wrote:
 On 3/3/2012 6:53 PM, Sandeep Datta wrote:
 It's been there for 10 years, and turns out to be a solution looking
 for a
 problem.
I beg to differ, the ability to catch and respond to such asynchronous exceptions is vital to the stable operation of long running software. It is not hard to see how this can be useful in programs which depend on plugins to extend functionality (e.g. IIS, Visual Studio, OS with drivers as plugins etc). A misbehaving plugin has the potential to bring down the whole house if hardware exceptions cannot be safely handled within the host application. Thus the inability of handling such exceptions undermines D's ability to support dynamically loaded modules of any kind and greatly impairs modularity.

Also note hardware exceptions are not limited to segfaults, there are other exceptions like division by zero, invalid operation, floating point exceptions (overflow, underflow) etc. Plus by using this approach (SEH) you can eliminate the software null checks and avoid taking a hit on performance.

So in conclusion I think it will be worth our while to supply something like a NullReferenceException (and maybe NullPointerException for raw pointers) which will provide more context than a simple segfault (and that too without a core dump). Additional information may include things like a stacktrace (like Vladimir said in another post) with line numbers, file/module names etc. Please you need any but it's nice to have some consistency across languages too). I am just a using which we can chain exceptions as we go to capture the chain of events which led to failure.
As I said, it already does that (on Windows). There is an access violation exception. Try it on windows, you'll see it.

1. SEH isn't portable. There's no way to make it work under non-Windows systems.

2. Converting SEH to D exceptions is not necessary to make a stack trace dump work.

3. Intercepting and recovering from seg faults, div by 0, etc., all sounds great on paper. In practice, it is almost always wrong. The only exception (!) to the rule is when sandboxing a plugin (as you suggested). Making such a sandbox work is highly system specific, and doesn't always fit into the D exception model (in fact, it never does outside of Windows).
Responding to traps is one of the very few examples I know of, where Windows got it completely right, and *nix got it absolutely completely wrong. Most notably, the hardware is *designed* for floating point traps to be fully recoverable. It makes perfect sense to catch them and continue. But unfortunately the *nix operating systems are completely broken in this regard and there's nothing we can do to fix them.
Mar 06 2012
next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
On Mar 6, 2012, at 3:14 AM, Don Clugston <dac nospam.com> wrote:
 Responding to traps is one of the very few examples I know of, where Windows got it completely right,
 and *nix got it absolutely completely wrong. Most notably, the hardware is *designed* for floating point traps to be fully recoverable. It makes perfect sense to catch them and continue.
 But unfortunately the *nix operating systems are completely broken in this regard and there's nothing we can do to fix them.

Does SEH allow recovery at the point of error like signals do? Sometimes I think it would be enough if the Posix spec were worded in a way that allowed exceptions to be thrown from signal handlers.
Mar 06 2012
parent reply Don Clugston <dac nospam.com> writes:
On 06/03/12 17:05, Sean Kelly wrote:
 On Mar 6, 2012, at 3:14 AM, Don Clugston<dac nospam.com>  wrote:
 Responding to traps is one of the very few examples I know of, where Windows
got it completely right,
 and *nix got it absolutely completely wrong. Most notably, the hardware is
*designed* for floating point traps to be fully recoverable. It makes perfect
sense to catch them and continue.
 But unfortunately the *nix operating systems are completely broken in this
regard and there's nothing we can do to fix them.
Does SEH allow recovery at the point of error like signals do?
Yes, it does. It really acts like an interrupt. You can, for example, modify registers or memory locations, then perform the equivalent of an asm { iret; }, so that you continue at the next instruction. Or, you can pass control to any function, after unwinding the stack by any number of frames you choose. And, you regain control if any other exception occurs during the unwinding, and you're given the chance to change strategy at that point.

An SEH handler behaves a bit like setjmp(), it's not a callback. Most importantly, in comparison to Posix, there are NO LIMITATIONS about what you can do in an SEH exception handler. You can call any function you like. The documentation is terrible, but it's really a beautiful design.

 Sometimes I think it would be enough if the Posix spec were worded in a way that allowed exceptions to be thrown from signal handlers.

I think that would be difficult to allow. Some of the restrictions seem to be quite fundamental. (Otherwise, I'm sure they would have got rid of them by now!)
Mar 08 2012
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 08 Mar 2012 03:58:22 -0500, Don Clugston <dac nospam.com> wrote:

 On 06/03/12 17:05, Sean Kelly wrote:
 On Mar 6, 2012, at 3:14 AM, Don Clugston<dac nospam.com>  wrote:
 Responding to traps is one of the very few examples I know of, where  
 Windows got it completely right,
 and *nix got it absolutely completely wrong. Most notably, the  
 hardware is *designed* for floating point traps to be fully  
 recoverable. It makes perfect sense to catch them and continue.
 But unfortunately the *nix operating systems are completely broken in  
 this regard and there's nothing we can do to fix them.
Does SEH allow recovery at the point of error like signals do?
 Yes, it does. It really acts like an interrupt. You can, for example, modify registers or memory locations, then perform the equivalent of an asm { iret; }, so that you continue at the next instruction. Or, you can pass control to any function, after unwinding the stack by any number of frames you choose. And, you regain control if any other exception occurs during the unwinding, and you're given the chance to change strategy at that point.

 An SEH handler behaves a bit like setjmp(), it's not a callback. Most importantly, in comparison to Posix, there are NO LIMITATIONS about what you can do in an SEH exception handler. You can call any function you like. The documentation is terrible, but it's really a beautiful design.

 Sometimes I think it would be enough if the Posix spec were worded in a way that allowed exceptions to be thrown from signal handlers.

 I think that would be difficult to allow. Some of the restrictions seem to be quite fundamental. (Otherwise, I'm sure they would have got rid of them by now!)
IIRC, SEH is patented... -Steve
Mar 08 2012
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I think what Chad is looking for is a BlackHole/WhiteHole equivalent
which doesn't need abstract functions but which figures out all the
methods of a class at compile time and creates a subclass that
throws/does nothing on method invocation. An 'alias this' field would
be used that is default-initialized with this sentry object. I don't
know why we don't have __traits(allFunction). We have
'getVirtualFunctions' but it requires a function name, but using
allMembers to filter out function names is damn difficult if you ask
me. I've never had an easy time interacting with __traits.
Mar 08 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Mar 08, 2012 at 02:57:00PM +0100, Andrej Mitrovic wrote:
[...]
 I don't know why we don't have __traits(allFunction). We have
 'getVirtualFunctions' but it requires a function name, but using
 allMembers to filter out function names is damn difficult if you ask
 me.
foreach (name; __traits(allMembers, typeof(obj))) {
    static if (__traits(compiles, &__traits(getMember, obj, name)))
    {
        alias typeof(__traits(getMember, obj, name)) type;
        static if (is(type == function)) {
            // name refers to a function of type 'type' here
        }
    }
}
 I've never had an easy time interacting with __traits.
Me too. I'm guessing that __traits is the way it is due to ease of implementation in the compiler. It's certainly not very friendly to use. T -- Маленькие детки - маленькие бедки.
Mar 08 2012
parent Chad J <chadjoan __spam.is.bad__gmail.com> writes:
On 03/08/2012 10:40 AM, H. S. Teoh wrote:
 On Thu, Mar 08, 2012 at 02:57:00PM +0100, Andrej Mitrovic wrote:

 I think what Chad is looking for is a BlackHole/WhiteHole equivalent
 which doesn't need abstract functions but which figures out all the
 methods of a class at compile time and creates a subclass that
 throws/does nothing on method invocation. An 'alias this' field would
 be used that is default-initialized with this sentry object. I don't
 know why we don't have __traits(allFunction). We have
 'getVirtualFunctions' but it requires a function name, but using
 allMembers to filter out function names is damn difficult if you ask
 me. I've never had an easy time interacting with __traits.
Yep. That would be cool. Although there should be ways of doing the same thing for arrays and possibly even structs. For structs I wouldn't mind adding a boolean field to keep track of its empty-or-not status.
 	foreach (name; __traits(allMembers, typeof(obj))) {
 		static if (__traits(compiles,&__traits(getMember, obj,
 				name)))
 		{
 			alias typeof(__traits(getMember, obj, name))
 				type;
 			static if (is(type==function)) {
 				// name refers to a function of type
 				// 'type' here
 			}
 		}
 	}

 I've never had an easy time interacting with __traits.
Me too. I'm guessing that __traits is the way it is due to ease of implementation in the compiler. It's certainly not very friendly to use. T
Tried this, but it wasn't picking everything up. I also suspect that inheriting the class being wrapped and picking on its methods is going to be a losing battle in the long run for this thing. It doesn't allow the sentry to throw on field access. It also wouldn't work for final classes. I wonder if opDispatch would do better.

Another mess I was running into was forwarding templated methods. I wonder if this is even possible. I wish __traits had some way of picking up on previous template instantiations (with cycles forbidden, of course). If __traits were beefed up enough, maybe it would be better than opDispatch after all. Hmmm.

I did try this as a struct with an "private T t; alias t this;" in it. It looks like this:

-----------------------------------
import std.c.stdlib;
import std.stdio;

struct Emptiable(T)
{
    private static Emptiable!T m_sentinel;

    // private NotNull!T t; // TODO
    private T t;
    alias t this;

    public static @property Emptiable!T sentinel() { return m_sentinel; }

    static this()
    {
        void* sentinelMem = malloc(T.sizeof);
        m_sentinel = cast(T)sentinelMem;
    }

    this(A...)(A args) { t = new T(args); }

    @property bool isEmpty() const
    {
        if ( this is m_sentinel )
            return true;
        else
            return false;
    }
}

static auto makeEmpty(T)(ref T v)
{
    v = T.sentinel;
    return v;
}

class Bar
{
    int i;

    this(int j) { i = j; }

    void blah() { writefln("blah!"); }

    /+void letsTemplate(T)(T f) { writefln("%s",f); }+/
}

void main()
{
    auto bar = Emptiable!(Bar).sentinel;
    auto foo = new Emptiable!Bar(5);

    if ( bar.isEmpty )
        writefln("bar is empty, as it should be.");
    if ( !foo.isEmpty )
        writefln("foo is full, as it should be.");

    //foo.letsTemplate("Just a string.");

    writefln("foo.i is %s", foo.i);
    foo.i = 2;
    writefln("foo.i is %s", foo.i);

    /+
    makeEmpty(foo);
    if ( foo.isEmpty )
        writefln("foo is now empty.");
    writefln("foo.i is %s", foo.i);
    +/
}
-----------------------------------

Prints:

bar is empty, as it should be.
foo is full, as it should be.
foo.i is 5
foo.i is 2

-----------------------------------

It's super incomplete. I am starting to realize that this sort of thing leads to a huge amount of weird corner cases. Also note what happens when it tries to use makeEmpty: it fails because "auto foo = new Emptiable!Bar(5);" allocates an instance of the Emptiable struct on the heap, which then allocates a separate instance of Bar. Yuck. It's that old struct-vs-class construction schism again.
Mar 08 2012
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/8/12, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 	foreach (name; __traits(allMembers, typeof(obj))) {
 		static if (__traits(compiles, &__traits(getMember, obj,
 				name)))
 		{
 			alias typeof(__traits(getMember, obj, name))
 				type;
 			static if (is(type==function)) {
 				// name refers to a function of type
 				// 'type' here
 			}
 		}
 	}

 I've never had an easy time interacting with __traits.
Yesterday I tried the same thing, but it didn't work because I was missing the &__traits call. With that in place, here's a very hardcoded example of what you can do in D: http://paste.pocoo.org/show/562933/

So now you can catch the exception if the object was uninitialized. And you didn't have to modify the target class at all.
Mar 08 2012
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Sorry,
mixin(getOverloads!(UnreliableResource)()); should be:
mixin(getOverloads!(Base)());
Mar 08 2012
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Plus I left over some code. Anyway the point is that was just a
hardcoded example of something that's doable, you'd probably want it
to be much more sophisticated before it goes into a library.
Mar 08 2012
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Mar 08, 2012 at 05:49:20PM +0100, Andrej Mitrovic wrote:
 On 3/8/12, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 	foreach (name; __traits(allMembers, typeof(obj))) {
 		static if (__traits(compiles, &__traits(getMember, obj,
 				name)))
 		{
 			alias typeof(__traits(getMember, obj, name))
 				type;
 			static if (is(type==function)) {
 				// name refers to a function of type
 				// 'type' here
 			}
 		}
 	}

 I've never had an easy time interacting with __traits.
Yesterday I've tried the same thing but it didn't work because I was missing the &__traits call. With that in place, here's a very hardcoded example of what you can do in D: http://paste.pocoo.org/show/562933/ So now you can catch the exception if the object was uninitialized. And you didn't have to modify the target class at all.
Cool! That's a really neat way of doing it. Love the combination of alias this, compile-time introspection, and the awesomeness of D templates. D r0x0rs! T -- Your inconsistency is the only consistent thing about you! -- KD
Mar 08 2012
prev sibling next sibling parent Sean Kelly <sean invisibleduck.org> writes:
On Mar 8, 2012, at 12:58 AM, Don Clugston <dac nospam.com> wrote:

 On 06/03/12 17:05, Sean Kelly wrote:
 On Mar 6, 2012, at 3:14 AM, Don Clugston<dac nospam.com>  wrote:
  Responding to traps is one of the very few examples I know of, where Windows got it completely right,
  and *nix got it absolutely completely wrong. Most notably, the hardware is *designed* for floating point traps to be fully recoverable. It makes perfect sense to catch them and continue.
  But unfortunately the *nix operating systems are completely broken in this regard and there's nothing we can do to fix them.

  Does SEH allow recovery at the point of error like signals do?

 Yes, it does. It really acts like an interrupt. You can, for example, modify registers or memory locations, then perform the equivalent of an asm { iret; }, so that you continue at the next instruction. Or, you can pass control to any function, after unwinding the stack by any number of frames you choose. And, you regain control if any other exception occurs during the unwinding, and you're given the chance to change strategy at that point.

 An SEH handler behaves a bit like setjmp(), it's not a callback.
 Most importantly, in comparison to Posix, there are NO LIMITATIONS about what you can do in an SEH exception handler. You can call any function you like.

Wow, sounds like paradise compared to signals.

 The documentation is terrible, but it's really a beautiful design.

 Sometimes I think it would be enough if the Posix spec were worded in a way that allowed exceptions to be thrown from signal handlers.

 I think that would be difficult to allow. Some of the restrictions seem to be quite fundamental. (Otherwise, I'm sure they would have got rid of them by now!)

I'm sure it's too late now. And I imagine there was a good reason for it that I'm not aware of, just like the restrictions on kernel calls (which could have blocked signals during execution). Performance was probably part of it. Ah well. The last signal issue I ran into was a deadlock caused by a logger routine calling ctime_r inside a sigchild handler. What a pain.
Mar 08 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/8/2012 12:58 AM, Don Clugston wrote:
 The documentation is terrible, but it's really a beautiful design.
I agree, it really is very flexible and useful. The downside is that it leaves significant overhead in functions that are exception-aware, even if they never throw or unwind. And you're right about the documentation. It's incredibly obtuse, probably the worst I've ever seen.
Mar 08 2012
next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Thursday, 8 March 2012 at 22:20:10 UTC, Walter Bright wrote:
 I agree, it really is very flexible and useful. The downside is 
 that it leaves significant overhead in functions that are 
 exception-aware, even if they never throw or unwind.
This problem is avoided by the switch to the table-based implementation on x86_64, though, while the flexibility, as far as I know, still remains.
 And you're right about the documentation. It's incredibly 
 obtuse, probably the worst I've ever seen.
Yes – I love how everybody (including me) working on SEH code seems to end up reverse-engineering at least certain parts of it on their own. David
Mar 08 2012
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Regarding people that desire a compiler switch to add run-time tests inside the
binary that guard against null seg-faults to turn them into errors with a
stack trace: I have recently shown the D language to a friend. She answered me
that most code she writes is for the Web and it's not CPU-bound.

So maybe for her D language is not needed, other more flexible languages are
better for her. On the other hand if you already know D and you already use D
for other CPU-intensive purposes you may want to use it for situations where a
very high performance is not necessary at all.

In such situations you don't compile in release-mode (leaving array bound tests
on), and maybe you want null deference exceptions too, and other things (and
tolerating some more bloat of the binary). Things like integral overflow
errors and null errors allow you to use D in other niches of the programming
landscape, while they do not hurt high performance D code at all because you
are able to refuse those things with a compiler switch (the disadvantage is a
little more complex compiler, but I think in this case this is acceptable).

Generally not giving such choice to the programmers restricts the applicability
and ecological niche of the D language, giving back no gain.

Bye,
bearophile
Mar 08 2012
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Friday, 9 March 2012 at 00:19:38 UTC, bearophile wrote:
 So maybe for her D language is not needed, other more flexible 
 languages are better for her.
D rox the web (and has for a while).
Mar 08 2012
parent reply bearophile <bearophileHUGS lycos.com> writes:
Adam D. Ruppe:

 D rox the web (and has for a while).
(Oh, you are starting to copy Andrei talk style now :-) The birth of community words, idioms and sub-languages is a very common thing, sociology studies such things a lot).

But there's always some space for improvements :-) In D.learn there is a thread titled "0 < negative loop condition bug or misunderstanding on my part": http://forum.dlang.org/thread/tbsvfbotcupussmeticq forum.dlang.org

Web coders are not going to appreciate such traps. I think integral bound tests at run-time are able to catch part of those problems.

Bye,
bearophile
Mar 08 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/09/2012 01:43 AM, bearophile wrote:
 Adam D. Ruppe:

 D rox the web (and has for a while).
(Oh, you are starting to copy Andrei talk style now :-) The birth of community words, idioms and sub-languages is a very common thing, sociology studies such things a lot). But there's always some space for improvements :-) In D.learn there is a thread titled "0< negative loop condition bug or misunderstanding on my part": http://forum.dlang.org/thread/tbsvfbotcupussmeticq forum.dlang.org Web coders are not going to appreciate such traps. I think integral bound tests at run-time are able to catch part of those problems. Bye, bearophile
Comparing signed/unsigned is perfectly reasonable. This is what causes the problem discussed in D.learn:

assert(-1<0U); // fail.
Mar 09 2012
parent "bearophile" <bearophileHUGS lycos.com> writes:
Timon Gehr:

 Comparing signed/unsigned is perfectly reasonable.
Right, but only if the numbers don't get implicit reinterpretations to other intervals, as C/C++/D do. Bye, bearophile
Mar 09 2012
prev sibling parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Tue, 06 Mar 2012 12:14:56 +0100, Don Clugston <dac nospam.com> wrote:

 On 04/03/12 04:34, Walter Bright wrote:
 On 3/3/2012 6:53 PM, Sandeep Datta wrote:
 It's been there for 10 years, and turns out to be a solution looking
 for a
 problem.
 I beg to differ, the ability to catch and respond to such asynchronous exceptions is vital to the stable operation of long running software. It is not hard to see how this can be useful in programs which depend on plugins to extend functionality (e.g. IIS, Visual Studio, OS with drivers as plugins etc). A misbehaving plugin has the potential to bring down the whole house if hardware exceptions cannot be safely handled within the host application. Thus the inability of handling such exceptions undermines D's ability to support dynamically loaded modules of any kind and greatly impairs modularity.

 Also note hardware exceptions are not limited to segfaults, there are other exceptions like division by zero, invalid operation, floating point exceptions (overflow, underflow) etc. Plus by using this approach (SEH) you can eliminate the software null checks and avoid taking a hit on performance.

 So in conclusion I think it will be worth our while to supply something like a NullReferenceException (and maybe NullPointerException for raw pointers) which will provide more context than a simple segfault (and that too without a core dump). Additional information may include things like a stacktrace (like Vladimir said in another post) with line numbers, file/module names etc. Please you need any but it's nice to have some consistency across languages too). I am just a using which we can chain exceptions as we go to capture the chain of events which led to failure.
 As I said, it already does that (on Windows). There is an access violation exception. Try it on windows, you'll see it.

 1. SEH isn't portable. There's no way to make it work under non-Windows systems.

 2. Converting SEH to D exceptions is not necessary to make a stack trace dump work.

 3. Intercepting and recovering from seg faults, div by 0, etc., all sounds great on paper. In practice, it is almost always wrong. The only exception (!) to the rule is when sandboxing a plugin (as you suggested). Making such a sandbox work is highly system specific, and doesn't always fit into the D exception model (in fact, it never does outside of Windows).
Responding to traps is one of the very few examples I know of, where Windows got it completely right, and *nix got it absolutely completely wrong. Most notably, the hardware is *designed* for floating point traps to be fully recoverable. It makes perfect sense to catch them and continue. But unfortunately the *nix operating systems are completely broken in this regard and there's nothing we can do to fix them.
Yeah, it's true for FPU traps. You need signals+longjmp to handle them. Though with SEH you shouldn't forget to fninit before continuing or your FPU stack might overflow.
Mar 06 2012
prev sibling parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
On Sun, 04 Mar 2012 03:53:53 +0100, Sandeep Datta  
<datta.sandeep gmail.com> wrote:

 It's been there for 10 years, and turns out to be a solution looking  
 for a problem.
 I beg to differ, the ability to catch and respond to such asynchronous exceptions is vital to the stable operation of long running software. It is not hard to see how this can be useful in programs which depend on plugins to extend functionality (e.g. IIS, Visual Studio, OS with drivers as plugins etc). A misbehaving plugin has the potential to bring down the whole house if hardware exceptions cannot be safely handled within the host application. Thus the inability of handling such exceptions undermines D's ability to support dynamically loaded modules of any kind and greatly impairs modularity.
A misbehaving plugin could easily corrupt your process. Destroying data is always much worse than crashing. The only sensible reaction to an async exception is dumping/tracing. If you want stable plugins you'll have to run them in another process.
Mar 03 2012
parent "Sandeep Datta" <datta.sandeep gmail.com> writes:
 A misbehaving plugin could easily corrupt your process. 
 Destroying data
 is always much worse than crashing.
At this point I usually say memory corruption is not an option for type safe languages, but D doesn't really provide runtime type safety guarantees, or does it? I think in the future (D 4.0 or something) we could seriously consider something like proof-carrying code to take memory/type safety to the next level. People interested in this will be aware of Google's effort in this direction, NaCl ( http://code.google.com/p/nativeclient/ )
Mar 03 2012
prev sibling parent reply "Nathan M. Swan" <nathanmswan gmail.com> writes:
On Saturday, 3 March 2012 at 02:51:41 UTC, Walter Bright wrote:
 Adding in software checks for null pointers will dramatically 
 slow things down.
What about the debug/release difference? Isn't the point of debug mode to allow checks such as assert, RangeError, etc? "Segmentation fault: 11" prevents memory from corrupting, but it isn't helpful in locating a bug.
Mar 04 2012
parent "Jesse Phillips" <jessekphillips+D gmail.com> writes:
On Monday, 5 March 2012 at 00:33:18 UTC, Nathan M. Swan wrote:
 On Saturday, 3 March 2012 at 02:51:41 UTC, Walter Bright wrote:
 Adding in software checks for null pointers will dramatically 
 slow things down.
What about the debug/release difference? Isn't the point of debug mode to allow checks such as assert, RangeError, etc? "Segmentation fault: 11" prevents memory from corrupting, but it isn't helpful in locating a bug.
It can in Linux. Enable debug symbols and core dumps, then open the core in gdb:

$ ulimit -c unlimited
$ dmd files.d -gc
$ gdb ./files core
Mar 05 2012
prev sibling parent reply "Nathan M. Swan" <nathanmswan gmail.com> writes:
On Friday, 2 March 2012 at 04:53:02 UTC, Jonathan M Davis wrote:
 It's defined. The operating system protects you. You get a 
 segfault on *nix and
 an access violation on Windows. Walter's take on it is that 
 there is no point
 in checking for what the operating system is already checking 
 for - especially
 when it adds additional overhead. Plenty of folks disagree, but 
 that's the way
 it is.
 - Jonathan M Davis
One thing we must consider is that this violates scope safety. This scope(failure) doesn't execute:

import std.stdio;

void main() {
    Object o = null;
    scope(failure) writeln("error");
    o.opCmp(new Object());
}

That's _very_ inconsistent with the scope(failure) guarantee of _always_ executing.

NMS
Mar 05 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Tuesday, March 06, 2012 07:16:52 Nathan M. Swan wrote:
 On Friday, 2 March 2012 at 04:53:02 UTC, Jonathan M Davis wrote:
 It's defined. The operating system protects you. You get a
 segfault on *nix and
 an access violation on Windows. Walter's take on it is that
 there is no point
 in checking for what the operating system is already checking
 for - especially
 when it adds additional overhead. Plenty of folks disagree, but
 that's the way
 it is.
 - Jonathan M Davis
One thing we must consider is that this violates scope safety. This scope(failure) doesn't execute: import std.stdio; void main() { Object o = null; scope(failure) writeln("error"); o.opCmp(new Object()); } That's _very_ inconsistent with the scope(failure) guarantee of _always_ executing.
scope(failure) is _not_ guaranteed to always execute on failure. It is _only_ guaranteed to run when an Exception is thrown. Any other Throwable - Errors included - skips all finally blocks, scope statements, and destructors. That's one of the reasons why it's so horrible to try and catch an Error.

If dereferencing null pointers was checked for, it would result in an Error just like RangeError, which skips all destructors, finally blocks, and scope statements. Such problems are considered unrecoverable. If they occur, your program is in an invalid state, and it's better to kill it than to continue. If you want to recover from attempting to dereference a null object, then you need to check before you dereference it.

- Jonathan M Davis
Mar 05 2012
parent "Nathan M. Swan" <nathanmswan gmail.com> writes:
On Tuesday, 6 March 2012 at 06:27:31 UTC, Jonathan M Davis wrote:
 scope(failure) is _not_ guaranteed to always execute on 
 failure. It is _only_
 guaranteed to run when an Exception is thrown. Any other 
 Throwable - Errors
 included - skip all finally blocks, scope statements, and 
 destructors. That's
 one of the reasons why it's so horrible to try and catch an 
 Error.
Maybe not guaranteed, but this happens:

code:

import std.stdio;

void main() {
    scope(failure) writeln("bad things just happened");
    int[] x = new int[4_000_000_000_000_000_000];
}

output:

bad things just happened
core.exception.OutOfMemoryError
Mar 05 2012
prev sibling next sibling parent "Marco Leise" <Marco.Leise gmx.de> writes:
Am 02.03.2012, 05:37 Uhr, schrieb Nathan M. Swan <nathanmswan gmail.com>:

 I spent a long time trying to find a
 bug that crashed the program before writeln-debugging statements
 could be flushed.
Instead of writeln use stdout.writeln.
Mar 02 2012
prev sibling parent reply dennis luehring <dl.soluz gmx.net> writes:
could it be a good idea to add something like a check-scope for
modules, functions, etc. for typical fails to ease the detection?

wild-idea-and-syntax-list:
 @CheckForNull
 @CheckForNaN
 @CheckForUnnormalFloat
...

---- x.d

module x

 @CheckForNull // will introduce NullChecks for the complete module

..
..
..

-----

int test()
{
 @CheckForNull; // introduces Null-Checks on the function scope
 @CheckForNaN; // introduces NaN-Checks on the function scope
    ...very long evil function...
}

total waste of compiler-developer time - or something to make D better 
in the end?
Mar 08 2012
parent "David Eagen" <dontmailme mailinator.com> writes:
I like the way Scala handles this with the Option class. None indicates no value, which is equivalent to your null sentinel value, but it is a value itself, so it is always safe to use.

Combined with pattern matching, the orElse methods make it very easy to use one variable that both stores the value and at the same time indicates whether it is valid or not. It's not two variables that could get out of sync.
Mar 08 2012