digitalmars.D - D - Unsafe and doomed

reply "NoUseForAName" <no spam.com> writes:
This piece (recently seen on the Hacker News front page):

http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html

.. includes a pretty damning assessment of D as "unsafe" 
(compared to Rust) and generally doomed. I remember hearing 
Walter Bright talking a lot about "safe code" during a D 
presentation. Was that about a different kind of safety? Is the 
author just wrong? Basically I want to hear the counterargument 
(if there is one).
Jan 03 2014
next sibling parent reply "Kelet" <kelethunter gmail.com> writes:
On Saturday, 4 January 2014 at 02:09:51 UTC, NoUseForAName wrote:
 This piece (recently seen on the Hacker News front page):

 http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html

 .. includes a pretty damning assessment of D as "unsafe" 
 (compared to Rust) and generally doomed. I remember hearing 
 Walter Bright talking a lot about "safe code" during a D 
 presentation. Was that about a different kind of safety? Is the 
 author just wrong? Basically I want to hear the counterargument 
 (if there is one).
Disclaimer: I only have a cursory understanding of the subject.

With Rust, there are no dangling or null pointers: if a pointer exists, it points to a valid object of the appropriate type. When a pointer does not point to a valid object, accessing it results in undefined behavior or an error in languages that allow it. Rust implements all of these pointer safety checks at compile time, so they incur no performance penalty. While `@safe` helps reduce this class of errors, it does not go as far as Rust -- you can still have null and dangling pointers, hence D is usually considered inferior with regard to safety. There was a SafeD[1] subset of D being worked on, but I'm not sure whether it is still active.

As for D slowly dying, I would say that is not true. It has been growing by all measures lately. With projects like DUB and Derelict making progress, the ecosystem is more inviting to users. I think a lot of people have a bad taste in their mouth from D1 with Phobos/Tango. D exceeds Rust in some respects, but my understanding is that Rust is the safer language.

Anyhow, my analysis may be wrong, so I expect someone may correct it.

Regards,
Kelet
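To make the dangling-pointer difference concrete, here is a minimal sketch (names are made up; behavior as of DMD at the time) of code that plain D accepts but `@safe` rejects at compile time:

int* p;

void grab()           // @system by default: compiles, but p dangles
{                     // as soon as grab() returns
    int local = 42;
    p = &local;
}

@safe void grabSafe()
{
    int local = 42;
    p = &local;       // error: cannot take the address of a local
}                     // variable in @safe code

Rust rules out the first function statically as well; in D the unchecked version is simply the default.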
Jan 03 2014
next sibling parent "NoUseForAName" <no spam.com> writes:
Thanks!
Jan 03 2014
prev sibling next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Saturday, 4 January 2014 at 02:27:24 UTC, Kelet wrote:
 While `@safe` helps reduce this class of logic errors […]
 you can still have […] dangling pointers, hence it is
 usually considered inferior with regards to safety.
This is not true. While it _is_ possible to get null pointers in @safe code, they are not a safety problem, as the first page is never mapped in any D process (yes, I'm aware of the subtle issues w.r.t. object size here, cf. Bugzilla). And if you find a way to obtain a dangling pointer in @safe code, please report it to the bug tracker; this is not supposed to happen.
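For readers unfamiliar with @safe, a minimal sketch of where the line is drawn (the error messages are paraphrased):

@safe int read(int* p)
{
    // p += 1;                 // error: pointer arithmetic not allowed in @safe code
    // auto q = cast(int*)123; // error: cast from integer to pointer not allowed
    return *p;                 // fine: a null p just segfaults at runtime
}

Dereferencing is allowed precisely because of the unmapped-first-page argument above.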
 There was a SafeD[1] subset of D being worked on, but I'm not 
 sure if it is active anymore.
SafeD is D in @safe mode.

Cheers,
David
Jan 03 2014
next sibling parent reply "Kelet" <kelethunter gmail.com> writes:
On Saturday, 4 January 2014 at 04:20:30 UTC, David Nadlinger 
wrote:
 On Saturday, 4 January 2014 at 02:27:24 UTC, Kelet wrote:
 While `@safe` helps reduce this class of logic errors […]
 you can still have […] dangling pointers, hence it is
 usually considered inferior with regards to safety.
This is not true. While it _is_ possible to get null pointers in @safe code, they are not a safety problem, as the first page is never mapped in any D process (yes, I'm aware of the subtle issues w.r.t. object size here, cf. Bugzilla). And if you find a way to obtain a dangling pointer in @safe code, please report it to the bug tracker; this is not supposed to happen.
 There was a SafeD[1] subset of D being worked on, but I'm not 
 sure if it is active anymore.
SafeD is D in @safe mode.

Cheers,
David
Thanks for the corrections. Ultimately, it sounds like Rust primarily takes the 'default on' approach to things like safety and immutability, whereas D takes the 'default off' approach.

Regards,
Kelet
Jan 03 2014
parent reply "logicchains" <jonathan.t.barnard gmail.com> writes:
On Saturday, 4 January 2014 at 04:26:24 UTC, Kelet wrote:
 Ultimately, it sounds like Rust primarily takes the 'default 
 on' approach for things like safety and immutability, whereas D 
 takes the 'default off' approach.
Sometimes the Rust approach is simply different. For instance, mutable globals are disallowed by default: `static mut` is needed to enable them, and all accesses must be declared unsafe. D on the other hand just makes them all thread-local, requiring explicit 'shared' declarations. I think the default D approach here may actually be safer, and it is definitely more convenient.
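The D side of this looks roughly as follows (a sketch; the names are made up):

int counter;          // module scope, but thread-local: one copy per thread
shared int total;     // explicitly shared across threads, checked by the type system
__gshared int legacy; // classic C-style global, no checks (escape hatch)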
Jan 03 2014
parent "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Saturday, 4 January 2014 at 04:49:47 UTC, logicchains wrote:
 D on the other hand just makes them all thread-local, requiring 
 explicit 'shared' declarations. I think the default D approach 
 here may actually be safer, and is definitely more convenient.
Even then they aren't really globals, they have module scope; it is a small distinction and may not be applicable to a comparison to Rust.
Jan 04 2014
prev sibling next sibling parent reply "Thiez" <thiezz gmail.com> writes:
On Saturday, 4 January 2014 at 04:20:30 UTC, David Nadlinger 
wrote:
 This is not true. While it _is_ possible to get null pointers 
 in  safe code, they are not a safety problem, as the first page 
 is never mapped in any D processes (yes, I'm aware of the 
 subtle issues w.r.t. object size here, c.f. Bugzilla). And if 
 you find a way to obtain a dangling pointer in  safe code, 
 please report it to the bug tracker, this is not supposed to 
 happen.
What happens when you have an object/array/struct/whatever that is larger than a page, and access one of the members/indices that is more than one page-size away from the starting point? Wouldn't this cause memory corruption if the second page is mapped and you have a NULL pointer?
Jan 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 9:13 AM, Thiez wrote:
 On Saturday, 4 January 2014 at 04:20:30 UTC, David Nadlinger wrote:
 This is not true. While it _is_ possible to get null pointers in  safe code,
 they are not a safety problem, as the first page is never mapped in any D
 processes (yes, I'm aware of the subtle issues w.r.t. object size here, c.f.
 Bugzilla). And if you find a way to obtain a dangling pointer in  safe code,
 please report it to the bug tracker, this is not supposed to happen.
What happens when you have an object/array/struct/whatever that is larger than a page, and access one of the members/indices that is more than one page-size away from the starting point? Wouldn't this cause memory corruption if the second page is mapped and you have a NULL pointer?
Yes, it would. Many systems, in order to deal with this, map out the first 64K, not just the first page. Java, to deal with this, makes objects larger than 64K illegal.
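A sketch of the failure mode under discussion (the sizes are illustrative):

struct Huge
{
    ubyte[1024 * 1024] pad; // 1 MB: pushes the next field past any guard area
    int field;
}

void poke(Huge* p)
{
    p.field = 1; // with p == null this writes near address 0x100000, which
                 // may well be mapped: silent corruption instead of a segfault
}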
Jan 04 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 4 January 2014 at 19:11:27 UTC, Walter Bright wrote:
 Many systems, in order to deal with this, map out the first 
 64K, not just the first page. Java, to deal with this, makes 
 objects larger than 64K illegal.
That is only true if Linux and BSD-style systems aren't considered major (the amount of low memory that is mapped out is configurable on both). Java does not limit the size of objects to 64k; it limits the number of methods (or members, I don't have the spec in front of me right now), not the size. Java relies on runtime checks for null.
Jan 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 11:20 AM, deadalnix wrote:
 Java does not limit the size of objects to 64k; it limits the number of
 methods (or members, I don't have the spec in front of me right now), not
 the size. Java relies on runtime checks for null.
Java must have changed that, then.
Jan 04 2014
parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 4 January 2014 at 20:08:32 UTC, Walter Bright wrote:
 On 1/4/2014 11:20 AM, deadalnix wrote:
 Java does not limit the size of objects to 64k; it limits the number of
 methods (or members, I don't have the spec in front of me right now), not
 the size. Java relies on runtime checks for null.
Java must have changed that, then.
No, the whole 64k story came from Andrei misinterpreting the Java spec. There never was such a limit.
Jan 04 2014
prev sibling parent reply "Maxim Fomin" <maxim maxim-fomin.ru> writes:
On Saturday, 4 January 2014 at 04:20:30 UTC, David Nadlinger 
wrote:
 On Saturday, 4 January 2014 at 02:27:24 UTC, Kelet wrote:
 While `@safe` helps reduce this class of logic errors […]
 you can still have […] dangling pointers, hence it is
 usually considered inferior with regards to safety.
This is not true. While it _is_ possible to get null pointers in @safe code, they are not a safety problem, as the first page is never mapped in any D process (yes, I'm aware of the subtle issues w.r.t. object size here, cf. Bugzilla). And if you find a way to obtain a dangling pointer in @safe code, please report it to the bug tracker; this is not supposed to happen.

Cheers,
David
There are many examples where one can get a dangling pointer in @safe code; they are fixed slowly, if ever (like slicing a static array: it has been in Bugzilla for some time and is still not fixed AFAIK, let alone other issues which received zero response). By the way, asking to post such examples to Bugzilla contradicts the idea that it is impossible to have such code. And being in Bugzilla is no excuse for these bugs.
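The static-array case referred to here looks roughly like this (a sketch of the then-open Bugzilla issue):

@safe int[] leak()
{
    int[4] buf;
    return buf[]; // slicing a stack array was accepted as @safe at the time,
                  // but the returned slice dangles once leak() returns
}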
Jan 04 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 9:35 AM, Maxim Fomin wrote:
 There are many examples where one can get a dangling pointer in @safe code;
 they are fixed slowly, if ever (like slicing a static array: it has been in
 Bugzilla for some time and is still not fixed AFAIK, let alone other issues
 which received zero response). By the way, asking to post such examples to
 Bugzilla contradicts the idea that it is impossible to have such code. And
 being in Bugzilla is no excuse for these bugs.
Pull requests to fix bugzilla issues are always welcome.
Jan 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/3/2014 6:27 PM, Kelet wrote:
 With Rust, there are no dangling or null pointers: if a pointer exists, it
 points to a valid object of the appropriate type. When a pointer does not
 point to a valid object, accessing it results in undefined behavior or an
 error in languages that allow it. Rust implements all of these pointer
 safety checks at compile time, so they incur no performance penalty. While
 `@safe` helps reduce this class of errors, it does not go as far as Rust --
 you can still have null and dangling pointers, hence D is usually considered
 inferior with regard to safety.
Null pointers are not a safety issue. Safety means no memory corruption.
 There was a SafeD[1] subset of D being worked on, but I'm not sure if it
 is active anymore.
That became @safe, which is very much active.
Jan 03 2014
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01/04/2014 05:31 AM, Walter Bright wrote:
 ...

 Null pointers are not a safety issue.
In the general sense of the word, yes they are.
 Safety means no memory corruption.
 ...
That's memory safety.
Jan 03 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/3/2014 8:36 PM, Timon Gehr wrote:
 On 01/04/2014 05:31 AM, Walter Bright wrote:
 ...

 Null pointers are not a safety issue.
In the general sense of the word, yes they are.
Please explain.
Jan 03 2014
next sibling parent reply "ilya-stromberg" <ilya-stromberg-2009 yandex.ru> writes:
On Saturday, 4 January 2014 at 05:16:38 UTC, Walter Bright wrote:
 On 1/3/2014 8:36 PM, Timon Gehr wrote:
 On 01/04/2014 05:31 AM, Walter Bright wrote:
 ...

 Null pointers are not a safety issue.
In the general sense of the word, yes they are.
Please explain.
I don't know Timon Gehr's opinion, but it would be very nice to have NOT NULL pointers. A NULL pointer means that I don't have any valid object, and that is a fine situation. But there are a lot of situations where a function must take a valid object (at least a NOT NULL pointer). D allows:

1) use `if(p is null)` and then throw an exception - it is safe, but I pay for an additional `if` check
2) use `assert(p !is null)` - theoretically it is safe, but the program can behave differently in release mode and fail (for example, because nobody hit the same case in debug mode)
3) do nothing - the programmer just forgot to add any checks

Also, I have to add unit tests for every possible usage of that function with a valid object. So it's a kind of dynamic typing check that could be done by the compiler's type system.

So, in a few cases null pointers are a safety issue.
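Spelled out as code, the three options look like this (a sketch; Foo and the messages are made up):

import std.exception : enforce;

class Foo { void method() { } }

// 1) explicit check: always on, costs a branch, throws on null
void f1(Foo p) { enforce(p !is null, "p is null"); p.method(); }

// 2) assert: compiled out by -release, so release builds behave differently
void f2(Foo p) { assert(p !is null); p.method(); }

// 3) no check at all: dereferencing a null p segfaults at runtime
void f3(Foo p) { p.method(); }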
Jan 03 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/3/2014 11:42 PM, ilya-stromberg wrote:
 A NULL pointer means that I don't have any valid object, and that is a fine
 situation. But there are a lot of situations where a function must take a
 valid object (at least a NOT NULL pointer). D allows:

 1) use `if(p is null)` and then throw an exception - it is safe, but I pay
 for an additional `if` check
 2) use `assert(p !is null)` - theoretically it is safe, but the program can
 behave differently in release mode and fail (for example, because nobody hit
 the same case in debug mode)
 3) do nothing - the programmer just forgot to add any checks

 Also, I have to add unit tests for every possible usage of that function
 with a valid object. So it's a kind of dynamic typing check that could be
 done by the compiler's type system.

 So, in a few cases null pointers are a safety issue.
I believe this is a misunderstanding of what safety is. It means memory safety - i.e. no memory corruption. It does not mean "no bugs". Memory corruption happens when you've got a pointer to garbage, and then you read/write that garbage. Null pointers seg fault when they are dereferenced, halting your program. While a programming bug, it is not a safety issue.
Jan 04 2014
next sibling parent reply "ilya-stromberg" <ilya-stromberg-2009 yandex.ru> writes:
On Saturday, 4 January 2014 at 08:10:18 UTC, Walter Bright wrote:
 On 1/3/2014 11:42 PM, ilya-stromberg wrote:
 So, in a few cases null pointers are a safety issue.
I believe this is a misunderstanding of what safety is. It means memory safety - i.e. no memory corruption. It does not mean "no bugs".
OK, but this feature could also be useful. For example:

import std.stdio;

class Foo
{
    int i;
}

void main(string[] args)
{
    Foo f; // Oops!
    writeln(f.i);
}

It's definitely a bug, but the compiler doesn't report any error. I know that I'll get a seg fault at runtime, but seeing an error at compile time would be much better. Have you got any plans to improve this situation?
Jan 04 2014
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 4 January 2014 at 12:37:34 UTC, ilya-stromberg wrote:
 Have you got any plans to improve this situation?
I wrote a NotNull struct for Phobos that could catch that situation. I don't think it got pulled though.

http://arsdnet.net/dcode/notnull.d

With @disable it becomes reasonably possible to restrict built-in types with wrapper structs. It isn't perfect, but it isn't awful either.

The big thing people have asked for before is:

Object foo;
if(auto obj = checkNull(foo)) {
    obj == NotNull!Object
} else {
   // foo is null
}

and I haven't figured that out yet...
Jan 04 2014
next sibling parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-01-04 14:46:26 +0000, "Adam D. Ruppe" <destructionator gmail.com> said:

 On Saturday, 4 January 2014 at 12:37:34 UTC, ilya-stromberg wrote:
 Have you got any plans to improve this situation?
 I wrote a NotNull struct for Phobos that could catch that situation. I
 don't think it got pulled though.

 http://arsdnet.net/dcode/notnull.d

 With @disable it becomes reasonably possible to restrict built-in types
 with wrapper structs. It isn't perfect, but it isn't awful either.

 The big thing people have asked for before is:

 Object foo;
 if(auto obj = checkNull(foo)) {
     obj == NotNull!Object
 } else {
    // foo is null
 }

 and I haven't figured that out yet...
In my nice little C++ world where I'm abusing macros and for loops:

#define IF_VALID(a) \

Usage:

T * ptr = ...something...;
IF_VALID (ptr) {
    ... here ptr is of type ValidPtr<T> which can't be null
    ... can be passed to functions that want a ValidPtr<T>
} else {
    ... here ptr is of type T*
}

Can't do that in D.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Jan 04 2014
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01/04/2014 03:46 PM, Adam D. Ruppe wrote:
 On Saturday, 4 January 2014 at 12:37:34 UTC, ilya-stromberg wrote:
 Have you got any plans to improve this situation?
 I wrote a NotNull struct for Phobos that could catch that situation. I
 don't think it got pulled though.

 http://arsdnet.net/dcode/notnull.d

 With @disable it becomes reasonably possible to restrict built-in types
 with wrapper structs. It isn't perfect, but it isn't awful either. ...
This mechanism would be more useful if moving were specified to occur whenever provably possible using live-variable analysis. Currently it is impossible to implement even a type analogous to Rust's ~T type, a unique reference (without a non-dereferenceable state); see the sketch below.
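A sketch of that problem: any Unique-style wrapper in current D has to leave something behind when the value is moved out, and that something is null, i.e. exactly the non-dereferenceable state mentioned above (the names are made up; T is assumed to be a class reference or pointer):

struct Unique(T)
{
    private T payload;
    @disable this(this); // forbid copying

    T release()
    {
        auto p = payload;
        payload = null;  // the wrapper itself re-enters a null state here
        return p;
    }
}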
 The big thing people have asked for before is

 Object foo;
 if(auto obj = checkNull(foo)) {
     obj == NotNull!Object
 } else {
    // foo is null
 }

 and i haven't figured that out yet...
I think it is impossible to do, because the boolean value tested must be computable from the result of checkNull, which must be a variable of type NotNull!Object, which does not have a state for null.

The following is possible:

auto checkNull(alias notnull, alias isnull, T)(T arg) /+if(...)+/ {
    return arg !is null ? notnull(assumeNotNull(arg)) : isnull();
}

Object foo;
foo.checkNull!(
    obj => ... /* is(typeof(obj)==NotNull!Object) */,
    () => ... /* foo is null */,
);
Jan 04 2014
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 4 January 2014 at 17:57:26 UTC, Timon Gehr wrote:
 This mechanism would be more useful if moving was specified to 
 occur whenever provably possible using live variable analysis.
Yes, I agree. I'd really like to have the unique and lent stuff.
 The following is possible:
Right, and it isn't too bad.
Jan 04 2014
prev sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/4/14, Adam D. Ruppe <destructionator gmail.com> wrote:
 The big thing people have asked for before is

 Object foo;
 if(auto obj = checkNull(foo)) {
     obj == NotNull!Object
 } else {
    // foo is null
 }

 and i haven't figured that out yet...
Here you go:

-----
import std.stdio;

struct NotNull(T) { T obj; }

struct CheckNull(T)
{
    private T _payload;

    auto opCast(X = bool)() { return _payload !is null; }

    @property NotNull!T getNotNull() { return NotNull!T(_payload); }
    alias getNotNull this;
}

CheckNull!T checkNull(T)(T obj)
{
    return CheckNull!T(obj);
}

class C { }

void main()
{
    Object foo;

    if (auto obj = checkNull(foo))
    {
        writeln("foo is not null");
    }
    else
    {
        writeln("foo is null");
    }

    foo = new C;

    if (auto obj = checkNull(foo))
    {
        // note: ":" rather than "==" due to alias this.
        static assert(is(typeof(obj) : NotNull!Object));

        // assignment will work of course (alias this)
        NotNull!Object obj2 = obj;

        writeln("foo is not null");
    }
    else
    {
        writeln("foo is null");
    }
}
-----
Jan 04 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
Well, whatever happens, I really hope that I don't have to start 
adding tests for null before using a library function just 
because the non-nullable type system cannot establish that a 
pointer is not null and therefore flags it as a compile-time 
error.

That would be annoying.

In my opinion a constraint system should go hand in hand with 
higher-level symbolic optimization (think maxima etc), but I 
think more polish is needed on what is in the language first…

What could be useful is whole-program analysis that moves checks 
to the calling code where possible, because then hopefully the 
backend will remove some of the checks and it might make more 
sense to leave them in for more than debugging. If each function 
has two entry points that could probably work out fine, if the 
backends can handle it (which they probably don't if they are 
C-centric).

Hm, this is probably one of the few times in my life that I have 
felt an urge to revisit Ole-Johan Dahl's book Verifiable 
Programming... The treatment of types was pretty nice though, IIRC 
(proving correspondence between formal type definitions used for 
proofs of correctness and implementations, or something like that).
Jan 04 2014
prev sibling next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 4 January 2014 at 22:08:59 UTC, Andrej Mitrovic 
wrote:
 Here you go:
Genius! I'm pretty happy with that, and it can be used for all kinds of range checks following the same pattern.
Jan 04 2014
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/5/14, Adam D. Ruppe <destructionator gmail.com> wrote:
 On Saturday, 4 January 2014 at 22:08:59 UTC, Andrej Mitrovic
 wrote:
 Here you go:
Genius! I'm pretty happy with that, and it can be used for all kinds of range checks following the same pattern.
Yeah. I actually posted about this trick a few years ago; I think one of the main devs (Kenji/Walter) said it might be a bug that it works, but I've been using it for years and I think it deserves to be upgraded to a full language feature (meaning we document it). It's been sitting in my library; here are some unittests:

https://github.com/AndrejMitrovic/minilib/blob/master/src/minilib/core/types.d#L98

Note how you can even use enforce.
Jan 05 2014
prev sibling parent reply "Organic Farmer" <x x.de> writes:
On Saturday, 4 January 2014 at 22:08:59 UTC, Andrej Mitrovic 
wrote:
 On 1/4/14, Adam D. Ruppe <destructionator gmail.com> wrote:
 The big thing people have asked for before is

 Object foo;
 if(auto obj = checkNull(foo)) {
     obj == NotNull!Object
 } else {
    // foo is null
 }

 and i haven't figured that out yet...
 Here you go:

 -----
 import std.stdio;

 struct NotNull(T) { T obj; }

 struct CheckNull(T)
 {
     private T _payload;

     auto opCast(X = bool)() { return _payload !is null; }

     @property NotNull!T getNotNull() { return NotNull!T(_payload); }
     alias getNotNull this;
 }

 CheckNull!T checkNull(T)(T obj)
 {
     return CheckNull!T(obj);
 }

 class C { }

 void main()
 {
     Object foo;

     if (auto obj = checkNull(foo))
     {
         writeln("foo is not null");
     }
     else
     {
         writeln("foo is null");
     }

     foo = new C;

     if (auto obj = checkNull(foo))
     {
         // note: ":" rather than "==" due to alias this.
         static assert(is(typeof(obj) : NotNull!Object));

         // assignment will work of course (alias this)
         NotNull!Object obj2 = obj;

         writeln("foo is not null");
     }
     else
     {
         writeln("foo is null");
     }
 }
 -----
Excuse me, not so fast ...

1. static assert asserts at compile time, so let's not put it in a runtime check of obj (converted to bool).

2.

Object foo;
assert(foo is null);
NotNull!Object test = NotNull!Object(foo);
assert(is(typeof(test) : NotNull!Object));
static assert(is(typeof(test) : NotNull!Object));

always passes for a null reference, with or without static, as it should. So what good is NotNull?

3. To wit: if you replace main with

void main()
{
    Object foo;
    auto obj = checkNull(foo);

    if (obj)
    {
        static assert(is(typeof(obj) : NotNull!Object));
        // let's leave this inside the dynamic if, just to demonstrate (yuck!)
        assert(is(typeof(obj) : NotNull!Object));
        NotNull!Object obj2 = obj;
        writeln("foo is not null");
    }
    else
    {
        static assert(is(typeof(obj) : NotNull!Object));
        assert(is(typeof(obj) : NotNull!Object));
        writeln("foo is null");
    }

    foo = new C;
    obj = checkNull(foo);

    if (obj)
    {
        static assert(is(typeof(obj) : NotNull!Object));
        assert(is(typeof(obj) : NotNull!Object));
        NotNull!Object obj2 = obj;
        writeln("foo is not null");
    }
    else
    {
        static assert(is(typeof(obj) : NotNull!Object));
        assert(is(typeof(obj) : NotNull!Object));
        writeln("foo is null");
    }
}

all asserts pass, be they static or dynamic. The output is correct as in the original, but it has nothing to do with obj being convertible to NotNull!Object or not. It always is. The output is correct due to the simple runtime check of whether obj is null or not.

Greets.

(PS: I think I just poked me in the eye with my pencil.)
Jan 05 2014
parent reply "Organic Farmer" <x x.de> writes:
* correct: whether obj converts to true or false
Jan 05 2014
parent reply "Organic Farmer" <x x.de> writes:
Just found out that when I replace

struct NotNull(T) { T obj; }

with (http://arsdnet.net/dcode/notnull.d)'s definition of NotNull 
it all makes sense.

Greets.
Jan 06 2014
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 6 January 2014 at 09:59:55 UTC, Organic Farmer wrote:
 Just found out that when I replace

 struct NotNull(T) { T obj; }

 with (http://arsdnet.net/dcode/notnull.d)'s definition of 
 NotNull it all makes sense.
Yes, it is very important to use the full type so you get the checks. The reason this is better than the segfault is that here, the run-time error occurs closer to the point of assignment instead of at the point of use.

my_function(enforceNotNull(obj)); // throw right here if it is null

This especially matters if the function stores the object somewhere. Having an unexpected null in the middle of a container can be a hidden bug for some time. Fixing it means finding out how the null got in there in the first place, and the segfault stack trace is almost no help at all. The not-null things, though, catch it early, and then the type system (almost*) ensures it stays that way.

* it is still possible to use casts and stuff to get a null in there, but surely nobody would actually do that!
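For reference, a sketch of what enforceNotNull could look like on top of a NotNull wrapper; both names here are simplified stand-ins for the versions in notnull.d, not the real code:

import std.exception : enforce;

struct NotNull(T)
{
    private T _payload;
    alias _payload this; // use it like the wrapped reference afterwards
}

NotNull!T enforceNotNull(T)(T obj)
{
    enforce(obj !is null, "unexpected null"); // throws here, at the
    return NotNull!T(obj);                    // assignment site
}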
Jan 06 2014
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Jan 04, 2014 at 12:10:20AM -0800, Walter Bright wrote:
 On 1/3/2014 11:42 PM, ilya-stromberg wrote:
NULL pointer means that I don't have any valid object, and it's good
situation.  But there are a lot of situations when function must take
a valid object (at least NOT NULL pointer). D allows:

 1) use `if(p is null)` and then throw an exception - it is safe,
 but I pay for an additional `if` check
 2) use `assert(p !is null)` - theoretically it is safe, but the
 program can behave differently in release mode and fail (for
 example, because nobody hit the same case in debug mode)
 3) do nothing - the programmer just forgot to add any checks

 Also, I have to add unit tests for every possible usage of that
 function with a valid object. So it's a kind of dynamic typing
 check that could be done by the compiler's type system.

So, in a few cases null pointers are a safety issue.
I believe this is a misunderstanding of what safety is. It means memory safety - i.e. no memory corruption. It does not mean "no bugs". Memory corruption happens when you've got a pointer to garbage, and then you read/write that garbage. Null pointers seg fault when they are dereferenced, halting your program. While a programming bug, it is not a safety issue.
Keep in mind, though, that for sufficiently large objects, null pointers may not segfault (e.g., when you dereference a field at the end of the object, which, when large enough, will have a sufficiently large address to not cause a segfault when the base pointer is null -- you *will* end up with memory corruption in that case). T -- In theory, software is implemented according to the design that has been carefully worked out beforehand. In practice, design documents are written after the fact to describe the sorry mess that has gone on before.
Jan 04 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 9:40 AM, H. S. Teoh wrote:
 Keep in mind, though, that for sufficiently large objects, null pointers
 may not segfault (e.g., when you dereference a field at the end of the
 object, which, when large enough, will have a sufficiently large address
 to not cause a segfault when the base pointer is null -- you *will* end
 up with memory corruption in that case).
Yes, that was already mentioned.
Jan 04 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 4 January 2014 at 07:42:51 UTC, ilya-stromberg wrote:
 I don't know Timon Gehr's opinion, but it will be very nice to 
 have NOT NULL pointers.
I don't disagree, but isn't that just a special case of type constraints? Why limit it arbitrarily to null values? Limiting the range of values is useful for ints and floats too. If you move the constraint check to the function caller you can avoid testing when it isn't needed.
Jan 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 1:18 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 I don't disagree, but isn't that just a special case of type constraints?
 Why limit it arbitrarily to null values? Limiting the range of values is
 useful for ints and floats too. If you move the constraint check to the
 function caller you can avoid testing when it isn't needed.
Yes, the non-NULL thing is just one example of a useful constraint one can put on types.
Jan 04 2014
next sibling parent "ilya-stromberg" <ilya-stromberg-2009 yandex.ru> writes:
On Saturday, 4 January 2014 at 19:05:00 UTC, Walter Bright wrote:
 On 1/4/2014 1:18 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 I don't disagree, but isn't that just a special case of type constraints?
 Why limit it arbitrarily to null values? Limiting the range of values is
 useful for ints and floats too. If you move the constraint check to the
 function caller you can avoid testing when it isn't needed.
Yes, the non-NULL thing is just one example of a useful constraint one can put on types.
Yes, exactly. And we have contract programming for these rules, but DMD doesn't support any contract checks at compile time. Do you have any plans to improve the situation? For example, we could add `static in` and `static out` contracts; a sketch of what today's runtime contracts look like follows.
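For comparison, what D's contract programming gives today; the checks run at runtime and are compiled out with -release (a sketch using the current syntax):

int first(int[] a)
in { assert(a.length > 0, "need a non-empty array"); } // checked on entry
out (r) { assert(r == a[0]); }                          // checked on exit
body
{
    return a[0];
}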
Jan 04 2014
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Jan 04, 2014 at 11:04:59AM -0800, Walter Bright wrote:
 On 1/4/2014 1:18 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
I don't disagree, but isn't that just a special case of type constraints?
Why limit it arbitrarily to null values? Limiting the range of values is
useful for ints and floats too. If you move the constraint check to the
function caller you can avoid testing when it isn't needed.
Yes, the non-NULL thing is just one example of a useful constraint one can put on types.
I still like what Walter said in the past about this issue:

    Making non-nullable pointers is just plugging one hole in a
    cheese grater. -- Walter Bright

:-)

There are many other issues to be addressed in an ideal programming language. Range constraints are but another hole in the cheese grater; there are many others.

T

-- 
If creativity is stifled by rigid discipline, then it is not true creativity.
Jan 04 2014
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01/04/2014 06:16 AM, Walter Bright wrote:
 On 1/3/2014 8:36 PM, Timon Gehr wrote:
 On 01/04/2014 05:31 AM, Walter Bright wrote:
 ...

 Null pointers are not a safety issue.
In the general sense of the word, yes they are.
Please explain.
Safety is some kind of guarantee that something bad never happens. E.g. memory safety guarantees that memory never gets corrupted, and null safety guarantees that null pointers never get dereferenced. Any property that can be stated in this way is a safety property. Hence it is fine to claim that the lack of dereferenceable null pointers makes a language safer, even though it has no bearing on memory safety.
Jan 04 2014
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 4 January 2014 at 13:49:40 UTC, Timon Gehr wrote:
 On 01/04/2014 06:16 AM, Walter Bright wrote:
 On 1/3/2014 8:36 PM, Timon Gehr wrote:
 On 01/04/2014 05:31 AM, Walter Bright wrote:
 ...

 Null pointers are not a safety issue.
In the general sense of the word, yes they are.
Please explain.
Safety is some kind of guarantee that something bad never happens. Eg. memory safety guarantees that memory never gets corrupted and null safety guarantees that null pointers never get dereferenced. Any property that can be stated in this way is a safety property. Hence it is fine to claim that the lack of dereferenceable null pointers makes a language safer, even though it has no bearing on memory safety.
Amen
Jan 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 5:49 AM, Timon Gehr wrote:
 Hence it is fine to claim that the lack of dereferenceable null pointers
 makes a language safer, even though it has no bearing on memory safety.
I believe it is misusing the term by conflating safety with bug-free.
Jan 04 2014
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 4 January 2014 at 19:05:54 UTC, Walter Bright wrote:
 On 1/4/2014 5:49 AM, Timon Gehr wrote:
 Hence it is fine to claim that the lack of dereferenceable null pointers
 makes a language safer, even though it has no bearing on memory safety.
I believe it is misusing the term by conflating safety with bug-free.
Dereferencing a null pointer is ALWAYS an error, just as dereferencing freed memory is.
Jan 04 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 I believe it is misusing the term by conflating safety with 
 bug-free.
Regarding the bugs, this is quoted from a recently posted open letter. It's about Xamarin Studio, which I think is written in C#:

https://gist.github.com/anonymous/38850edf6b9105ee1f8a
SH*TTON of null exceptions at runtime. Every now and then I get 
a nice error popup showing a null exception somewhere in Xamarin 
Studio. Most often this happens when I move a file, do some 
changes in Android UI designer or just do something non-trivial. 
And yes, I always restart the IDE after that, because when one 
exception pops up, many more are to come, so restart is 
mandatory here.<
If you write an IDE in D language you wish to avoid this situation :-) Bye, bearophile
Jan 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 11:41 AM, bearophile wrote:
 If you write an IDE in D language you wish to avoid this situation :-)
If you write an IDE in any language, you wish to avoid having bugs in it. I know that non-NULL was popularized by that billion dollar mistake article, but step back a moment. Non-NULL is really only a particular case of having a type with a constrained set of values. It isn't all that special.
Jan 04 2014
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 4 January 2014 at 20:16:29 UTC, Walter Bright wrote:
 On 1/4/2014 11:41 AM, bearophile wrote:
 If you write an IDE in D language you wish to avoid this 
 situation :-)
If you write an IDE in any language, you wish to avoid having bugs in it. I know that non-NULL was popularized by that billion dollar mistake article, but step back a moment. Non-NULL is really only a particular case of having a type with a constrained set of values. It isn't all that special.
If you step back one step further, you'll notice that having a nullable type may be desirable for almost anything, not only classes/pointers.
Jan 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 12:24 PM, deadalnix wrote:
 On Saturday, 4 January 2014 at 20:16:29 UTC, Walter Bright wrote:
 On 1/4/2014 11:41 AM, bearophile wrote:
 If you write an IDE in D language you wish to avoid this situation :-)
If you write an IDE in any language, you wish to avoid having bugs in it. I know that non-NULL was popularized by that billion dollar mistake article, but step back a moment. Non-NULL is really only a particular case of having a type with a constrained set of values. It isn't all that special.
If you step back one step further, you'll notice that having a nullable type may be desirable for almost anything, not only classes/pointers.
I don't really understand your point. Null is not that special.

For example, you may want a constrained type:

1. a float guaranteed to be not NaN
2. a code point guaranteed to be a valid code point
3. a prime number guaranteed to be a prime number
4. a path+filename guaranteed to be well-formed according to operating system rules
5. an SQL argument guaranteed to not contain an injection attack

The list is endless. Why is null special?
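All of these fit the same library pattern as a NonNull wrapper (a minimal sketch; the names are made up):

struct Constrained(T, alias pred)
{
    private T value;

    this(T v)
    {
        assert(pred(v), "constraint violated"); // checked once, on construction
        value = v;
    }

    alias value this; // reads like a plain T afterwards
}

alias NonNull(T) = Constrained!(T, p => p !is null);
alias NonNaN    = Constrained!(double, x => x == x); // NaN is the only value != itself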
Jan 04 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 4 January 2014 at 22:06:13 UTC, Walter Bright wrote:
 I don't really understand your point. Null is not that special.

 For example, you may want a constrained type:

 1. a float guaranteed to be not NaN
 2. a code point guaranteed to be a valid code point
 3. a prime number guaranteed to be a prime number
 4. a path+filename guaranteed to be well-formed according to 
 operating system rules
 5. an SQL argument guaranteed to not contain an injection attack

 The list is endless. Why is null special?
Because it is an instant crash; because it is not possible to make it safe without a runtime check; because it is known to fool the optimizer and cause really nasty bugs (typically, a pointer is dereferenced, so the optimizer assumes it isn't null and removes the null check after the dereference, and then the dereference itself is removed as it is dead: buggy code that could have crashed will now behave in random ways).

On the other hand, it is really easy to make all of this burden disappear at language level.

2 should also be ensured by @safe. 3, 4, 5 can easily be ensured by the current type system. I'm not knowledgeable enough about the floating point standard to express any opinion on 1.
Jan 04 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 4 January 2014 at 23:04:12 UTC, deadalnix wrote:
 Because it is an instant crash, because it is not possible to
Actually, array out of bounds is no less an instant "crash" than trapping page 0, which is similar to implementing stack growth by trapping page faults.

What is likely to happen if you add non-null pointers without organization-wide code reviews to enforce them, or a state-of-the-art partial correctness proof system to back them up, is that people create null objects and point to those instead. And that will solve absolutely no bugs.

It makes more sense for high-level languages than for languages that will receive a steady stream of null pointers from various libraries. It makes sense for Rust, because it is a priority issue for the organization backing the project. It might have made sense for Go, which is trying to stay tiny and not low level and doesn't care all that much about performance, but for D… get the feature set stable and prove that correct (sound) before starting on a route to a partial correctness proof system.
Jan 04 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 3:04 PM, deadalnix wrote:
 On Saturday, 4 January 2014 at 22:06:13 UTC, Walter Bright wrote:
 I don't really understand your point. Null is not that special.

 For example, you may want a constrained type:

 1. a float guaranteed to be not NaN
 2. a code point guaranteed to be a valid code point
 3. a prime number guaranteed to be a prime number
 4. a path+filename guaranteed to be well-formed according to operating system
 rules
 5. an SQL argument guaranteed to not contain an injection attack

 The list is endless. Why is null special?
Because it is an instant crash,
Would things going on and a random thing happening randomly later be better?
 because it is not possible to make it safe
 without runtime check,
Wrapper types can handle this.
 because it is known to fool the optimizer and cause really nasty bugs
 (typically, a pointer is dereferenced, so the optimizer assumes it isn't
 null and removes the null check after the dereference, and then the
 dereference itself is removed as it is dead.
I'd like to see a case where this is nasty. I can't think of one.
 buggy code that could have crashed will now behave in random ways).
Above it seems you were preferring it to fail in random ways rather than instant and obvious seg fault :-) For the record, I vastly prefer the instant seg fault.
 On the other hand, it is really easy to make all of this burden disappear at
 language level.
I've posted a NonNull wrapper here a couple of times. I think it is adequately addressable at the library level, with the bonus that the same technique will work for other constrained types.
 2 should also be ensured by @safe.
@safe is for memory safety.
 3, 4, 5 can easily be ensured by current type system.
By exactly the same technique as non-null can be. Non-null does not require a special language case.
 I'm not knowledgeable enough on floating point standard to express any opinion
 on 1.
It's the same issue.
Jan 04 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 5 January 2014 at 00:05:46 UTC, Walter Bright wrote:
 On 1/4/2014 3:04 PM, deadalnix wrote:
 Because it is an instant crash,
Would things going on and a random thing happening randomly later be better?
In a web-service server it is desirable to trap the SIGSEGV so that an appropriate http status can be returned before going down (telling the client to not do that again).
Jan 04 2014
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jan 05, 2014 at 07:51:31AM +0000, digitalmars-d-bounces puremagic.com
wrote:
 On Sunday, 5 January 2014 at 00:05:46 UTC, Walter Bright wrote:
On 1/4/2014 3:04 PM, deadalnix wrote:
Because it is an instant crash,
Would things going on and a random thing happening randomly later be better?
In a web-service server it is desirable to trap the SIGSEGV so that an appropriate http status can be returned before going down (telling the client to not do that again).
Isn't that usually handled by running the webserver itself as a separate process, so that when the child segfaults the parent returns HTTP 501? Trusting the faulty process to return a sane status sounds rather risky to me (how do you know somebody didn't specially craft an attack to dump the contents of /etc/passwd to stdout, which gets redirected over the HTTP link? I'd rather the process segfault immediately than continue to run when it has detected an obvious logic problem with its own code).

T

-- 
Almost all proofs have bugs, but almost all theorems are true. -- Paul Pedersen
Jan 05 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 5 January 2014 at 15:19:15 UTC, H. S. Teoh wrote:
 Isn't that usually handled by running the webserver itself as a separate
 process, so that when the child segfaults the parent returns HTTP 501?
You can do that. The hard part is how to deal with the other 99 non-offending concurrent requests running in the faulty process. How does the parent process know which request was the offending one? And what if the parent process was the one failing? Then you should handle it in the front-end proxy anyway. Worse, cutting off all requests could leave trash around in the system where requests write to temporary data stores and where it is undesirable to implement a full logging/cross-server transactional mechanism. That could be a DoS vector.
 HTTP link? I'd rather the process segfault immediately than continue to
 run when it has detected an obvious logic problem with its own code).
And not start up again, keeping the service down until a bugfix arrives? A null pointer error can be an innocent bug for some services, so I don't think the programming language should dictate what you do, though you probably should have write-protected code pages with the execute flag.

E.g. I don't think it makes sense to shut down a trivial service written in Python if it has a logic flaw that tries to access a None pointer for a specific request and you know where in the code it happens. It makes sense to raise an exception, catch it in the request handler, free all temporarily allocated resources, tell the offending client not to do that again, and keep the process running, completing all other requests. Otherwise you have a DoS vector.

It should be up to the application programmer whether the program should recover and complete the other 99 concurrent requests before resetting, not the language. If one http request can shut down the other 99 requests in the process then it becomes a DoS vector.
Jan 05 2014
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Jan 06, 2014 at 02:24:09AM +0000, digitalmars-d-bounces puremagic.com
wrote:
 On Sunday, 5 January 2014 at 15:19:15 UTC, H. S. Teoh wrote:
Isn't that usually handled by running the webserver itself as a
separate process, so that when the child segfaults the parent returns
HTTP 501?
You can do that. The hard part is how to deal with the other 99 non-offending concurrent requests running in the faulty process.
Since a null pointer implies that there's some kind of logic error in the code, how much confidence do you have that the other 99 concurrent requests aren't being wrongly processed too?
 How does the parent process know which request was the offending,
 and what if the parent process was the one failing, then you should
 handle it in the front-end-proxy anyway?
Usually the sysadmin would set things up so that if the front-end proxy dies, it would be restarted by a script in (hopefully) a clean state.
 Worse, cutting off all requests could leave trash around in the
 system where requests write to temporary data stores where it is
 undesirable to implement a full logging/cross-server transactional
 mechanism. That could be a DoS vector.
I've had to deal with this issue before at my work (it's not related to webservers, but I think the same principle applies). There's a daemon that has to run an operation to clean up a bunch of auxiliary data after the user initiates the removal of certain database objects. The problem is, some of the cleanup operations are non-trivial and have a possibility of failure (could be an error returned from deep within the cleanup code, or a segfault, or whatever). So I wrote some complex scaffolding code to catch these kinds of problems, and to try to clean things up afterwards.

But eventually we found that attempting this sort of error recovery is actually counterproductive, because it made the code more complicated and added intermediate states: in addition to "object present" and "object deleted", there was now "object partially deleted" -- now all code has to detect this and decide what to do with it. Then customers started seeing the "object partially deleted" state, which was never part of the design of the system, which led to all sorts of odd behaviour (certain operations don't work, the object shows up in some places but not others, etc.). Finally, we decided that it's better to keep the system in simple, well-defined states (only "object present" and "object not present"), even if it comes at the cost of leaving stray unreferenced data lying around from a previous failed cleanup operation.

Based on this, I'm inclined to say that if a web request process encounters a NULL pointer, it's probably better to just reset back to a known-good state by restarting. Sure, it leaves a bunch of stray data around, but reducing code complexity often outweighs saving wasted space.
 HTTP link? I'd rather the process segfault immediately than continue to
 run when it has detected an obvious logic problem with its own code).
And not start up again, keeping the service down until a bugfix arrives?
No, usually you'd set things up so that if the webserver goes down, an init script restarts it. Restarting is preferable, because it resets the program back to a known-good state. Continuing to barge on when something has obviously gone wrong (a null pointer where it's not expected) is risky, because what if that null pointer is not due to a careless bug, but a symptom of somebody attempting to inject a root exploit? Blindly continuing will only play into the hands of the attacker.
 A null pointer error can be a innocent bug for some services, so I
 don't think the programming language should dictate what you do,
 though you probably should have write protected code-pages with
 execute flag.
The thing is, a null pointer error isn't just an exceptional condition caused by bad user data; it's a *logic* error in the code. It's a sign that something is wrong with the program logic. I don't consider that an "innocent error"; it's a sign that the code can no longer be trusted to do the right thing. So I'd say it's safer to terminate the program and have the restart script reset the program state back to a known-good initial state.
 E.g. I don't think it makes sense to shut down a trivial service
 written in "Python" if it has a logic flaw that tries to access a
 None pointer for a specific request if you know where in the code it
 happens. It makes sense to issue an exception, catch it in the
 request handler free all temporary allocated resources and tell the
 offending client not to do that again and keep the process running
 completing all other requests. Otherwise you have a DoS vector?
Tell the client not to do that again? *That* sounds like the formula for a DoS vector (a rogue client deliberately sending the crashing request over and over again).
 It should be up to the application programmer whether the program
 should recover and complete the other 99 concurrent requests before
 resetting, not the language. If one http request can shut down the
 other 99 requests in the process then it becomes a DoS vector.
I agree with the principle that the programmer should decide what happens, but I think there's a wrong assumption here that the *program* is fit to make this decision after encountering a logic error like an unexpected null pointer. Again, it's not a case of bad user input, where the problem is just with the data and you can just throw away the bad data and start over. This is a case of a problem with the *code*, which means you cannot trust the program will continue doing what you designed it to -- the null pointer proves that the program state *isn't* what you assumed it is, so now you can no longer trust that any subsequent code will actually do what you think it should do. This kind of misplaced assumption is the underlying basis for things like stack corruption exploits: under normal circumstances your function call will simply return to its caller after it finishes, but now, it actually *doesn't* return to the caller. There's no way you can predict where it will go, because the fundamental assumptions about how the stack works no longer hold due to the corruption. Blindly assuming that things will still work the way you think they work, will only lead to your program running the exploit code that has been injected into the corrupted stack. The safest recourse is to reset the program back to a known state. T -- People say I'm arrogant, and I'm proud of it.
Jan 05 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 6 January 2014 at 04:16:56 UTC, H. S. Teoh wrote:
 Since a null pointer implies that there's some kind of logic 
 error in
 the code, how much confidence do you have that the other 99 
 concurrent
 requests aren't being wrongly processed too?
That doesn't matter if the service isn't critical, it only matters if it destructively writes to a database. You can also shut down parts of the service rather than the entire service.
 Based on this, I'm inclined to say that if a web request process
 encountered a NULL pointer, it's probably better to just reset 
 back to a known-good state by restarting.
In many cases it might be, but it should be up to the project management or the organization to set the policy, not the language designer. This is an issue I have with many of the "C++ wannabe languages": they enforce policies that shouldn't be set at the level of a tool (a compiler option would be fine, though). My pet peeve is Go and its banning of assert(), even though many programmers use it in an appropriate manner. In D you have the overloading of conditionals and other such decisions. With Ada and Rust it is OK, because they exist to enforce a policy for existing organizations (DoD, Mozilla). Generic programming languages that claim generality should be more adaptable.
 No, usually you'd set things up so that if the webserver goes 
 down, an init script would restart it. Restarting is 
 preferable, because it resets the program back to a known-good 
 state.
The program might be written in such a way that you know it is in a good state when you catch the null exception.
 careless bug, but a symptom of somebody attempting to inject a 
 root
 exploit?  Blindly continuing will only play into the hand of the
 attacker.
Protection against root exploits should be done at a lower level (jail).
 The thing is, a null pointer error isn't just an exceptional 
 condition
 caused by bad user data; it's a *logic* error in the code. It's 
 a sign
 that something is wrong with the program logic.
And so are array out-of-bounds accesses and division by zero.
 Tell the client not to do that again? *That* sounds like the 
 formula for
 a DoS vector (a rogue client deliberately sending the crashing 
 request
 over and over again).
What else can you do? You return an error and block subsequent requests if appropriate. In a networked computer game you log the misbehaviour, you drop the client after a random delay, and you can block the offender. What you do not want is to disable the entire service.

It is better to run a somewhat faulty service that entertains and retains your customers than to shut down until a bug fix appears. If it takes 15-30 seconds to bring the server back up then you cannot afford to reset all the time. I can point to many launches of online computer games that have resulted in massive losses due to servers going down during the first few weeks. That is actually one good reason not to use C++ in game servers: the lack of robustness to failure.

In some domains the ability to keep the service running, and the ability to turn off parts of the service, is more important than correctness. What you want is a log of player resources so that you can restore game balance after a failure.
 data and start over. This is a case of a problem with the 
 *code*, which
 means you cannot trust the program will continue doing what you
That depends on how the program is written and in which area the null exception happened. It might even be a known bug that takes a long time to locate and fix, but that is known to be innocent.
 things will still work the way you think they work, will only 
 lead to
 your program running the exploit code that has been injected 
 into the
 corrupted stack.
Pages with the execute bit set should be write-protected. You can only jump into existing code; injection of new code isn't really possible. So if the existing code is unknown to the attacker, that attack vector is weak.
 The safest recourse is to reset the program back to a known 
 state.
I see no problem with trapping None-failures in pure Python and keeping the service running. The places where it can happen tend to be when you are looking up a non-existing object in a database. Quite innocent, if you can backtrack all the way down to the request handler and return an appropriate status code. If you use the @safe subset of D, why should it be different?
Jan 06 2014
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 5 January 2014 at 00:05:46 UTC, Walter Bright wrote:
 Because it is an instant crash,
Would things going on and a random thing happening randomly later be better?
A compile-time error is preferable.
 because it is not possible to make it safe
 without runtime check,
Wrapper types can handle this.
 because it is known to fool optimizer and cause really
 nasty bugs (typically, a pointer is dereferenced, so the 
 optimizer assume it
 isn't null and remove null check after the dereference, and 
 then the dereference
 is removed as it is dead.
I'd like to see a case where this is nasty. I can't think of one.
A recent Linux kernel exploit was caused by this. Reread carefully: this nasty behavior is created by the optimizer, and avoiding it means preventing the optimizer from optimizing away loads unless it can prove the pointer is non-null. As D is meant to be fast, this limitation on the optimizer is highly undesirable.
 buggy code that could have crashed will now behave in random ways).
Above it seems you were preferring it to fail in random ways rather than with an instant and obvious seg fault :-) For the record, I vastly prefer the instant seg fault.
You made that up. I do not prefer such behavior.
 I've posted a NonNull wrapper here a couple of times. I think 
 it is adequately addressable at the library level, with the 
 bonus that the same technique will work for other constrained 
 types.
We already have a Nullable type in the standard library.
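
For reference, a minimal sketch of what such a wrapper could look like (hypothetical code, not the exact wrapper Walter posted):

struct NonNull(T)
    if (is(T == class) || is(T : U*, U))
{
    private T payload;

    this(T value)
    {
        assert(value !is null, "NonNull cannot hold null");
        payload = value;
    }

    // No default state: the null-initialized default is disabled.
    @disable this();

    @property inout(T) get() inout { return payload; }
    alias get this; // forward member access to the wrapped value
}

unittest
{
    int x = 42;
    auto p = NonNull!(int*)(&x);
    assert(*p == 42); // alias this lets it be used like the raw pointer
}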
Jan 05 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/5/2014 3:59 PM, deadalnix wrote:
 because it is known to fool the optimizer and cause really nasty bugs
 (typically, a pointer is dereferenced, so the optimizer assumes it isn't
 null and removes the null check after the dereference, and then the
 dereference is removed as it is dead.
I'd like to see a case where this is nasty. I can't think of one.
A recent Linux kernel exploit was caused by this. Reread carefully: this nasty behavior is created by the optimizer, and avoiding it means preventing the optimizer from optimizing away loads unless it can prove the pointer is non-null. As D is meant to be fast, this limitation in the optimizer is highly undesirable.
I'd still like to see an example, even a contrived one.
Jan 05 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 6 January 2014 at 00:13:19 UTC, Walter Bright wrote:
 On 1/5/2014 3:59 PM, deadalnix wrote:
 because it is known to fool the optimizer and cause really nasty bugs
 (typically, a pointer is dereferenced, so the optimizer assumes it
 isn't null and removes the null check after the dereference, and then
 the dereference is removed as it is dead.
I'd like to see a case where this is nasty. I can't think of one.
A recent Linux kernel exploit was caused by this. Reread carefully: this nasty behavior is created by the optimizer, and avoiding it means preventing the optimizer from optimizing away loads unless it can prove the pointer is non-null. As D is meant to be fast, this limitation in the optimizer is highly undesirable.
I'd still like to see an example, even a contrived one.
void foo(int* ptr) {
    *ptr;
    if (ptr is null) {
        // do stuff
    }

    // do stuff.
}

The code looks stupid, but this is quite common after a first pass of optimization/inlining; you end up with something like that when a null check is forgotten.

The problem here is that the if can be removed, as you can't reach that point if the pointer is null, but *ptr can also be removed later as it is a dead load.

The resulting code won't crash; it will do random shit instead.
Jan 05 2014
next sibling parent reply "Thiez" <thiezz gmail.com> writes:
On Monday, 6 January 2014 at 00:20:59 UTC, deadalnix wrote:
 void foo(int* ptr) {
     *ptr;
     if (ptr is null) {
         // do stuff
     }

     // do stuff.
 }

 The code looks stupid, but this is quite common after a first pass of
 optimization/inlining; you end up with something like that when a null
 check is forgotten.

 The problem here is that the if can be removed, as you can't 
 reach that point if the pointer is null, but *ptr can also be 
 removed later as it is a dead load.

 The resulting code won't crash; it will do random shit instead.
If you read http://people.csail.mit.edu/akcheung/papers/apsys12.pdf there is a nice instance where a compiler moved a division above the check that was designed to prevent division by zero, because it assumed a function would return (when in fact it wouldn't). I imagine a similar scenario could happen with a null pointer, e.g.:

if (ptr is null) {
    perform_function_that_never_returns();
}
auto x = *ptr;

If the compiler assumes that 'perform_function_that_never_returns()' returns, it will recognize the whole if-statement and its body as dead code. Optimizers can be a little too smart for their own good at times.
Jan 05 2014
parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 6 January 2014 at 00:43:22 UTC, Thiez wrote:
 On Monday, 6 January 2014 at 00:20:59 UTC, deadalnix wrote:
 void foo(int* ptr) {
    *ptr;
    if (ptr is null) {
        // do stuff
    }

    // do stuff.
 }

 The code looks stupid, but this is quite common after a first pass of
 optimization/inlining; you end up with something like that when a null
 check is forgotten.

 The problem here is that the if can be removed, as you can't 
 reach that point if the pointer is null, but *ptr can also be 
 removed later as it is a dead load.

 The resulting code won't crash; it will do random shit instead.
If you read http://people.csail.mit.edu/akcheung/papers/apsys12.pdf there is a nice instance where a compiler moved a division above the check that was designed to prevent division by zero, because it assumed a function would return (when in fact it wouldn't). I imagine a similar scenario could happen with a null pointer, e.g.:

if (ptr is null) {
    perform_function_that_never_returns();
}
auto x = *ptr;

If the compiler assumes that 'perform_function_that_never_returns()' returns, it will recognize the whole if-statement and its body as dead code. Optimizers can be a little too smart for their own good at times.
Your example is a bug in the optimizer. Mine isn't.
Jan 05 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/5/2014 4:20 PM, deadalnix wrote:
 On Monday, 6 January 2014 at 00:13:19 UTC, Walter Bright wrote:
 I'd still like to see an example, even a contrived one.
void foo(int* ptr) {
    *ptr;
    if (ptr is null) {
        // do stuff
    }

    // do stuff.
}

The code looks stupid, but this is quite common after a first pass of optimization/inlining; you end up with something like that when a null check is forgotten.
The code is fundamentally broken. I don't know of any legitimate optimization transforms that would move a dereference from after a null check to before, so I suspect the code was broken before that first pass of optimization/inlining.
 The problem here is that the if can be removed, as you can't reach that
 point if the pointer is null, but *ptr can also be removed later as it is
 a dead load.

 The resulting code won't crash; it will do random shit instead.
If you're writing code where you expect undefined behavior to cause a crash, then that code has faulty assumptions. This is why many languages work to eliminate undefined behavior - but still, as a professional programmer, you should not be relying on undefined behavior, and it is not the optimizer's fault if you did. If you deliberately rely on UB (and I do on occasion) then you should be prepared to take your lumps if the compiler changes.
Jan 05 2014
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Jan 05, 2014 at 06:03:04PM -0800, Walter Bright wrote:
 On 1/5/2014 4:20 PM, deadalnix wrote:
On Monday, 6 January 2014 at 00:13:19 UTC, Walter Bright wrote:
I'd still like to see an example, even a contrived one.
void foo(int* ptr) {
    *ptr;
    if (ptr is null) {
        // do stuff
    }

    // do stuff.
}
[...]
 If you're writing code where you expect undefined behavior to cause a
 crash, then that code has faulty assumptions.
 
 This is why many languages work to eliminate undefined behavior -
 but still, as a professional programmer, you should not be relying
 on undefined behavior, and it is not the optimizer's fault if you
 did. If you deliberately rely on UB (and I do on occasion) then you
 should be prepared to take your lumps if the compiler changes.
On that note, some time last year I fixed a bug in std.bigint where a division by zero was deliberately triggered with the assumption that it would cause an exception / trap. But it didn't, so the code caused a malfunction further on, since control passed on to where the original author assumed it wouldn't.
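
A contrived sketch of that failure mode (hypothetical code, not the actual std.bigint internals):

void divideGuard(int divisor)
{
    auto guard = 1 / divisor; // meant to trap when divisor == 0
    // 'guard' is never used, so the division is a dead computation;
    // an optimizer may delete it, and with it the intended trap, so
    // control falls through to code that assumes divisor != 0.
}

T

-- 
Philosophy: how to make a career out of daydreaming.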
Jan 05 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/5/2014 7:25 PM, H. S. Teoh wrote:
 On that note, some time last year I fixed a bug in std.bigint where a
 division by zero was deliberately triggered with the assumption that it
 would cause an exception / trap. But it didn't, so the code caused a
 malfunction further on, since control passed on to where the original
 author assumed it wouldn't.
A nice example of what I was talking about.
Jan 05 2014
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 6 January 2014 at 02:03:03 UTC, Walter Bright wrote:
 On 1/5/2014 4:20 PM, deadalnix wrote:
 On Monday, 6 January 2014 at 00:13:19 UTC, Walter Bright wrote:
 I'd still like to see an example, even a contrived one.
void foo(int* ptr) {
    *ptr;
    if (ptr is null) {
        // do stuff
    }

    // do stuff.
}

The code looks stupid, but this is quite common after a first pass of optimization/inlining; you end up with something like that when a null check is forgotten.
The code is fundamentally broken. I don't know of any legitimate optimization transforms that would move a dereference from after a null check to before, so I suspect the code was broken before that first pass of optimization/inlining.
I know. But this code will behave in random ways, not fail instantly. This example shows that the instant-fail approach you seem to like is inherently flawed.
 If you're writing code where you expect undefined behavior to 
 cause a crash, then that code has faulty assumptions.

 This is why many languages work to eliminate undefined behavior 
 - but still, as a professional programmer, you should not be 
 relying on undefined behavior, and it is not the optimizer's 
 fault if you did. If you deliberately rely on UB (and I do on 
 occasion) then you should be prepared to take your lumps if the 
 compiler changes.
Are you saying that dereferencing null must be undefined behavior, and not instant failure? That contradicts the position you gave before.
Jan 06 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/6/2014 7:01 AM, deadalnix wrote:
 I know. But this code will behave in random ways, not fail instantly. This
 example shows that the instant-fail approach you seem to like is inherently flawed.
That code is broken whether types are nullable or not.
 Are you saying that dereferencing null must be undefined behavior, and not
 instant failure? That contradicts the position you gave before.
I've also said I think it is better to eliminate undefined behavior by defining it.
Jan 06 2014
prev sibling parent reply "fra" <a b.it> writes:
On Monday, 6 January 2014 at 00:20:59 UTC, deadalnix wrote:
 On Monday, 6 January 2014 at 00:13:19 UTC, Walter Bright wrote:
 I'd still like to see an example, even a contrived one.
void foo(int* ptr) {
    *ptr;
    if (ptr is null) {
        // do stuff
    }

    // do stuff.
}
 The problem here is that the if can be removed, as you can't 
 reach that point if the pointer is null, but *ptr can also be 
 removed later as it is a dead load.

 The resulting code won't crash; it will do random shit instead.
"Code can't be reached if pointer is null" means "The code could fail before reaching here". Honestly, this looks like an optimizer issue to me. Who the **** would remove code that could fail?
Jan 06 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 6 January 2014 at 15:56:09 UTC, fra wrote:
 On Monday, 6 January 2014 at 00:20:59 UTC, deadalnix wrote:
 On Monday, 6 January 2014 at 00:13:19 UTC, Walter Bright wrote:
 I'd still like to see an example, even a contrived one.
void foo(int* ptr) {
    *ptr;
    if (ptr is null) {
        // do stuff
    }

    // do stuff.
}
 The problem here is that the if can be removed, as you can't 
 reach that point if the pointer is null, but *ptr can also be 
 removed later as it is a dead load.

 The resulting code won't crash; it will do random shit instead.
"Code can't be reached if pointer is null" means "The code could fail before reaching here". Honestly, this looks like an optimizer issue to me. Who the **** would remove code that could fail?
That is the whole crux of the issue.

As a matter of fact, any load can trap. Considering this, either we want the optimizer to prove that a load won't trap before optimizing it away, OR we consider the trap a special case that can be removed by the optimizer.

The thing is that the first option is highly undesirable for performance reasons, as the optimizer won't be able to remove most loads. This isn't something small, as memory is WAY slower than the CPU nowadays (a cache miss costs at the very least 200 cycles, typically more like 300; add memory bandwidth limitations and you get the idea). That is what I covered briefly in one of my previous posts.

We could decide that the optimizer can't remove loads unless it can prove that they do not trap. That means most loads can't be optimized away anymore. Or we can decide that trapping is not guaranteed, and then dereferencing null is undefined behavior, which is much worse than a compile time failure.
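
To make the dead-load case concrete, a minimal sketch (hypothetical code):

void touch(int* p)
{
    int tmp = *p; // dead load: 'tmp' is never used
    // Under "loads must be assumed to trap" semantics the load stays,
    // and a null 'p' faults right here. Under "traps are not guaranteed"
    // semantics the optimizer deletes the load, and a null 'p' sails through.
}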
Jan 06 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/6/2014 11:21 AM, deadalnix wrote:
 Or we can decide that trapping is not guaranteed, and then dereferencing
 null is undefined behavior, which is much worse than a compile time failure.
Realistically, this is a non-problem.
Jan 06 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/6/2014 1:03 PM, Walter Bright wrote:
 On 1/6/2014 11:21 AM, deadalnix wrote:
 Or we can decide that trapping is not guaranteed, and then dereferencing
 null is undefined behavior, which is much worse than a compile time failure.
Realistically, this is a non-problem.
Or better, if you want to issue a pull request for the documentation that says unless it is a dead load, a null reference will cause a program-ending fault of one sort or another, I'll back it.
Jan 06 2014
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 6 January 2014 at 21:09:44 UTC, Walter Bright wrote:
 On 1/6/2014 1:03 PM, Walter Bright wrote:
 On 1/6/2014 11:21 AM, deadalnix wrote:
 Or we can decide that trapping is not guaranteed, and then dereferencing
 null is undefined behavior, which is much worse than a compile time failure.
Realistically, this is a non-problem.
Or better, if you want to issue a pull request for the documentation that says unless it is a dead load, a null reference will cause a program-ending fault of one sort or another, I'll back it.
You realize that, under that definition, every foo.bar(); is undefined behavior unless it is preceded by a null check?
Jan 06 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/6/2014 4:13 PM, deadalnix wrote:
 On Monday, 6 January 2014 at 21:09:44 UTC, Walter Bright wrote:
 On 1/6/2014 1:03 PM, Walter Bright wrote:
 On 1/6/2014 11:21 AM, deadalnix wrote:
 Or we can decide that trapping is not guaranteed, and then dereferencing
 null is undefined behavior, which is much worse than a compile time failure.
Realistically, this is a non-problem.
Or better, if you want to issue a pull request for the documentation that says unless it is a dead load, a null reference will cause a program-ending fault of one sort or another, I'll back it.
You realize that, under that definition, every foo.bar(); is undefined behavior unless it is preceded by a null check?
No, I don't realize that. Or you could amend the documentation to say that null checks will not be removed even if they occur after a dereference.
Jan 06 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright wrote:
 On 1/6/2014 4:13 PM, deadalnix wrote:
 On Monday, 6 January 2014 at 21:09:44 UTC, Walter Bright wrote:
 On 1/6/2014 1:03 PM, Walter Bright wrote:
 On 1/6/2014 11:21 AM, deadalnix wrote:
 Or we can decide that trapping is not guaranteed, and then dereferencing
 null is undefined behavior, which is much worse than a compile time failure.
Realistically, this is a non-problem.
Or better, if you want to issue a pull request for the documentation that says unless it is a dead load, a null reference will cause a program-ending fault of one sort or another, I'll back it.
You realize that, under that definition, every foo.bar(); is undefined behavior unless it is preceded by a null check?
No, I don't realize that. Or you could amend the documentation to say that null checks will not be removed even if they occur after a dereference.
Which won't be true with LDC and GDC.
Jan 06 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/6/2014 7:20 PM, deadalnix wrote:
 On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright wrote:
 Or you could amend the documentation to say that null checks will not be
 removed even if they occur after a dereference.
Which won't be true with LDC and GDC.
You're assuming that LDC and GDC are stuck with C semantics.
Jan 06 2014
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 7 January 2014 at 04:37:12 UTC, Walter Bright wrote:
 On 1/6/2014 7:20 PM, deadalnix wrote:
 On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright 
 wrote:
 Or you could amend the documentation to say that null checks 
 will not be
 removed even if they occur after a dereference.
Which won't be true with LDC and GDC.
You're assuming that LDC and GDC are stuck with C semantics.
Unless we plan to rewrite our own optimizer, they are to some extent.
Jan 06 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/6/2014 8:55 PM, deadalnix wrote:
 On Tuesday, 7 January 2014 at 04:37:12 UTC, Walter Bright wrote:
 On 1/6/2014 7:20 PM, deadalnix wrote:
 On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright wrote:
 Or you could amend the documentation to say that null checks will not be
 removed even if they occur after a dereference.
Which won't be true with LDC and GDC.
You're assuming that LDC and GDC are stuck with C semantics.
Unless we plan to rewrite our own optimizer, they are to some extent.
I don't buy that. The back ends are built to compile multiple languages, hence they'll have multiple sets of requirements to contend with.
Jan 06 2014
next sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On 7 January 2014 06:03, Walter Bright <newshound2 digitalmars.com> wrote:
 On 1/6/2014 8:55 PM, deadalnix wrote:
 On Tuesday, 7 January 2014 at 04:37:12 UTC, Walter Bright wrote:
 On 1/6/2014 7:20 PM, deadalnix wrote:
 On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright wrote:
 Or you could amend the documentation to say that null checks will not
 be
 removed even if they occur after a dereference.
Which won't be true with LDC and GDC.
You're assuming that LDC and GDC are stuck with C semantics.
Unless we plan to rewrite our own optimizer, they are to some extent.
I don't buy that. The back ends are built to compile multiple languages, hence they'll have multiple sets of requirements to contend with.
Half and half. In GCC, though the default is to follow C semantics, the front-end language is allowed to overrule the optimiser with its own semantics at certain stages of the compilation.
Jan 07 2014
prev sibling parent "Araq" <rumpf_a web.de> writes:
On Tuesday, 7 January 2014 at 06:03:55 UTC, Walter Bright wrote:
 On 1/6/2014 8:55 PM, deadalnix wrote:
 On Tuesday, 7 January 2014 at 04:37:12 UTC, Walter Bright 
 wrote:
 On 1/6/2014 7:20 PM, deadalnix wrote:
 On Tuesday, 7 January 2014 at 03:18:01 UTC, Walter Bright 
 wrote:
 Or you could amend the documentation to say that null 
 checks will not be
 removed even if they occur after a dereference.
Which won't be true with LDC and GDC.
You're assuming that LDC and GDC are stuck with C semantics.
Unless we plan to rewrite our own optimizer, they are to some extent.
I don't buy that. The back ends are built to compile multiple languages, hence they'll have multiple sets of requirements to contend with.
Another case where D is "inherently faster" than C? ;-)
Jan 07 2014
prev sibling parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-01-06 21:09:44 +0000, Walter Bright <newshound2 digitalmars.com> said:

 On 1/6/2014 1:03 PM, Walter Bright wrote:
 On 1/6/2014 11:21 AM, deadalnix wrote:
 Or we can decide that trapping is not guaranteed, and then dereferencing
 null is undefined behavior, which is much worse than a compile time failure.
Realistically, this is a non-problem.
Or better, if you want to issue a pull request for the documentation that says unless it is a dead load, a null reference will cause a program-ending fault of one sort or another, I'll back it.
That's pretty much the same as undefined behaviour, because "dead load" is not defined. What counts as a dead load actually depends on how much inlining is done and how the optimizer works, and that's hard to define as part of the language.

For instance, you could dereference a value and pass it to a function (as in "foo(*x)"). If that function gets inlined, and if all that function does is multiply the passed integer by zero, then the optimizer might rewrite the program to never load the value; the null dereference has simply disappeared.

I think the best way to describe what happens is this: the only guarantee made when dereferencing a null pointer is that the program will stop instead of using a garbage value. An optimizing compiler might find ways to avoid or delay using dereferenced values, which will allow the program to continue running beyond the null dereference. In general one shouldn't count on dereferencing a null pointer to stop a program at the right place, or at all.

I think this is a good explanation of what happens, but it obviously leaves undefined the *if and when* it'll stop the program, because this highly depends on inlining and what the optimizer does.
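
A minimal sketch of that inlining scenario (hypothetical function names):

int timesZero(int v) { return v * 0; }

void caller(int* x)
{
    auto r = timesZero(*x); // after inlining: r = 0
    // The loaded value no longer affects the result, so the optimizer
    // may delete the load of *x entirely; a null 'x' then no longer
    // stops the program here.
}

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca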
Jan 06 2014
prev sibling parent reply "Alex Burton" <alexibu.remove me.com> writes:
On Sunday, 5 January 2014 at 00:05:46 UTC, Walter Bright wrote:
 On 1/4/2014 3:04 PM, deadalnix wrote:
 On Saturday, 4 January 2014 at 22:06:13 UTC, Walter Bright 
 wrote:
 I don't really understand your point. Null is not that 
 special.

 For example, you may want a constrained type:

 1. a float guaranteed to be not NaN
 2. a code point guaranteed to be a valid code point
 3. a prime number guaranteed to be a prime number
 4. a path+filename guaranteed to be well-formed according to 
 operating system
 rules
 5. an SQL argument guaranteed to not contain an injection 
 attack

 The list is endless. Why is null special?
Null is special in this set of examples because all of the examples show subclasses of the original type. Math operations on NaN are well defined and don't result in a crash. All the others should result in an exception at some point.

Exceptions allow stack unwinding, which allows people to write code that doesn't leave things in undefined states in the event of an exception: files half written, database transactions half done, and all sorts of hardware with state left in intermediate states. It also allows the program to recover gracefully, lets the user save their work and continue working, etc., with only a slightly embarrassing (and, for bug reporting, possibly descriptive) message stating that a problem occurred.

Dereferencing null on Windows can result in stack unwinding, but on Linux etc. it is a segfault with no unwinding.

Null is not a valid pointer value. People that assume all pointers can be null are essentially treating all pointers as:

union {
    PointerType pointer;
    bool pointerIsValid;
};

This might be a perfectly valid thing to do, but it is exceptional in the above list and therefore should require a new type (and more keyboard typing :) ), while the default case should be non-null pointers.
 Because it is an instant crash,
 Would things going on, and a random thing happening randomly later, be better?
 because it is not possible to make it safe without a runtime check,
 buggy code that could have crashed will now behave in random ways).
 Above it seems you were preferring it to fail in random ways rather than
 with an instant and obvious seg fault :-) For the record, I vastly prefer
 the instant seg fault.
Yes I think this is certainly easier to debug, but the user experience will be equivalent, and the reputational damage and bug report will be equivalent.
Jan 06 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/6/2014 3:02 PM, Alex Burton wrote:
 All the others should result in an exception at some point.
 Exceptions allow stack unwinding, which allows people to write code that
doesn't
 leave things in undefined states in the event of an exception.
Hardware exceptions allow for the same thing.
Jan 06 2014
parent reply "alex burton" <alexibu.remove me.com> writes:
On Monday, 6 January 2014 at 23:13:14 UTC, Walter Bright wrote:
 On 1/6/2014 3:02 PM, Alex Burton wrote:
 All the others should result in an exception at some point.
 Exceptions allow stack unwinding, which allows people to write 
 code that doesn't
 leave things in undefined states in the event of an exception.
Hardware exceptions allow for the same thing.
I am not sure what you mean by the above. To be clear: the program below does not unwind, at least on Linux. Same result using dmd or gdc: Segmentation fault (core dumped). When I see this from a piece of software I think: ABI problem, or amateur programmer?

import std.stdio;

void main()
{
    class Foo
    {
        void bar() {}
    }

    try
    {
        Foo f;   // never assigned: f is null
        f.bar(); // null dereference: segfaults on Linux, no unwinding
    }
    catch (Throwable)
    {
        writefln("Sorry something went wrong");
    }
}

In my code the vast majority of references to classes can be relied on to point to an instance of the class. Where it is optional for a reference to be valid, I am happy to explicitly state that with a new type like Optional!Foo f or Nullable!Foo f.

The philosophy of D you have applied in other areas says that the design is chosen so that code is correct, common mistakes are prevented, and unwanted features inherited from C are discarded. In my view it would be consistent to make class references difficult to leave or make null by default. I am sure you could still cast a null in there if you tried, but the default natural language should not do this.

In code where changing this would produce a compiler error, in my experience the code is fragile and prone to bugs anyway, so without a counterexample I think the worst that could happen if D changed in this way is that people would fix their code and probably find some potential bugs they were not aware of.

Pointers to structs would still be valuable for interfacing with C libraries and implementing efficient data structures, but the high-level day-to-day code of the average user, where objects are classes by default, would benefit from having the compiler prevent null class references.
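
As a minimal sketch of that opt-in style, using Phobos' std.typecons.Nullable (Optional is hypothetical; Nullable exists today):

import std.typecons : Nullable;

class Foo
{
    void bar() {}
}

void useMaybe(Nullable!Foo maybe)
{
    if (!maybe.isNull)
        maybe.get.bar(); // validity is checked explicitly, in one place
}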
Jan 07 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 7 January 2014 at 11:29:18 UTC, alex burton wrote:
 Hardware exceptions allow for the same thing.
I am not sure what you mean by the above.
You can trap the segfault and access an OS-specific data structure which tells you where it happened, then recover if the runtime supports it.
Jan 07 2014
parent reply "alex burton" <alexibu.remove me.com> writes:
On Tuesday, 7 January 2014 at 11:36:50 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 7 January 2014 at 11:29:18 UTC, alex burton wrote:
 Hardware exceptions allow for the same thing.
I am not sure what you mean by the above.
You can trap the segfault and access an OS-specific data structure which tells you where it happened, then recover if the runtime supports it.
Thanks for this. I tested the same code on Windows and it appears that you can catch exceptions of unknown type using catch with no exception variable. The stack is unwound properly, and scope(exit) calls work as expected, etc.

After reading about signal handling on Unix and structured exception handling on Windows, it sounds possible, though difficult, to implement a similar system on Unix: introduce an exception by trapping the seg fault signal, reading the data structure you mention, and then using assembler jump instructions to jump into the exception mechanism.

So I take Walter's statement to mean that hardware exceptions (AKA non-software exceptions / SEH on Windows) fix the problem, where programmers have put catch-unknown-exception statements after their normal catch statements in the appropriate places. And that a seg fault exception should result on Linux; it just happens that it is not yet implemented, which is why we just get the signal and crash.
Jan 07 2014
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tuesday, 7 January 2014 at 12:51:51 UTC, alex burton wrote:
 After reading about signal handling in unix and structured 
 exception handling on Windows, it sounds possible though 
 difficult to implement a similar system on unix to introduce an 
 exception by trapping the seg fault signal, reading the data 
 structure you mention and then using assembler jump 
 instructions to jump into the exception mechanism.
If you are on Linux and add this file to your project: dmd2/src/druntime/import/etc/linux/memoryerror.d (it is part of the regular dmd zip), you might have to import it and call registerMemoryErrorHandler(), but then it will do the magic tricks to turn a segfault into a D exception. But this is a bit unreliable, so it isn't in any default build.
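
A minimal usage sketch (the exact exception type thrown may vary by druntime version, so this just catches Error):

import etc.linux.memoryerror : registerMemoryErrorHandler;
import std.stdio;

void main()
{
    registerMemoryErrorHandler(); // turn SIGSEGV into a throwable

    int* p = null;
    try
        writeln(*p);
    catch (Error e)
        writeln("caught: ", e.msg);
}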
Jan 07 2014
prev sibling parent "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Saturday, 4 January 2014 at 23:04:12 UTC, deadalnix wrote:
 The list is endless. Why is null special?
 Because it is an instant crash, because it is not possible to make it safe without a runtime check, because it is known to fool the optimizer and cause really nasty bugs (typically, a pointer is dereferenced, so the optimizer assumes it isn't null and removes the null check after the dereference, and then the dereference is removed as it is dead; buggy code that could have crashed will now behave in random ways).
An instant crash is a very nice way to fail, compared to, for example, what failure means for an SQL injection or a buffer overrun. A crash is bad, but it's better than a program continuing to execute erroneously. I have to agree with Walter here. Non-null is certainly nice, but it's just one kind of error out of a million, and not a particularly serious one at that.
Jan 05 2014
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 01/04/2014 09:16 PM, Walter Bright wrote:
 Non-NULL is really only a particular case of having a type with a
 constrained set of values. It isn't all that special.
If you allow a crude analogy: Constraining a nullable pointer to be not null is like sending an invitation to your birthday party to all your friends and also Chuck, including a notice that Chuck cannot come. You are defending this practise based on the observation that some birthday parties have a required dress code.
Jan 04 2014
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 01/04/2014 11:24 PM, Timon Gehr wrote:
 You are defending this practise based on the observation that some
 birthday parties have a required dress code.
Agh. *practice*.
Jan 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 2:24 PM, Timon Gehr wrote:
 On 01/04/2014 09:16 PM, Walter Bright wrote:
 Non-NULL is really only a particular case of having a type with a
 constrained set of values. It isn't all that special.
If you allow a crude analogy: Constraining a nullable pointer to be not null is like sending an invitation to your birthday party to all your friends and also Chuck, including a notice that Chuck cannot come. You are defending this practise based on the observation that some birthday parties have a required dress code.
No, I am not defending it. I am pointing out that there's excessive emphasis on just one hole in that cheesegrater.
Jan 04 2014
parent reply "Chris Cain" <clcain uncg.edu> writes:
On Saturday, 4 January 2014 at 22:36:38 UTC, Walter Bright wrote:
 No, I am not defending it. I am pointing out that there's 
 excessive emphasis on just one hole in that cheesegrater.
I think that's because it's the one "hole in the cheesegrater" that matters to most people. A non-null constraint has an effect on a lot more code than a "must be prime" constraint, for instance.
Jan 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 2:41 PM, Chris Cain wrote:
 On Saturday, 4 January 2014 at 22:36:38 UTC, Walter Bright wrote:
 No, I am not defending it. I am pointing out that there's excessive emphasis
 on just one hole in that cheesegrater.
I think that's because it's the one "hole in the cheesegrater" that matters to most people. A non-null constraint has an effect on a lot more code than a "must be prime" constraint, for instance.
If you look at code carefully, you'll see that most usages of types are constrained. Not only that, an awful lot of types have an "invalid" value, which is used to denote an error or missing data. The classic would be the -1 values returned by many int-returning C functions. Using those without checking first doesn't even give the courtesy of a seg fault.

I think a solution that potentially plugs all the holes in the cheesegrater, rather than only one, would be better.
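
A minimal sketch of that more general direction (hypothetical code), extending the wrapper technique to arbitrary constraints:

struct Constrained(T, alias pred)
{
    private T value;

    this(T v)
    {
        assert(pred(v), "constraint violated");
        value = v;
    }

    @disable this(); // no unchecked default state

    @property T get() const { return value; }
    alias get this;
}

// Two of the constraints from the list above:
alias NotNaN   = Constrained!(double, (double x) => x == x);
alias Positive = Constrained!(int, (int x) => x > 0);

unittest
{
    auto d = NotNaN(1.5);
    assert(d + 1.0 == 2.5); // usable as a plain double via alias this
}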
Jan 04 2014
parent reply "Organic Farmer" <x x.de> writes:
Oh my! My pencil is so *unsafe*. Whenever I try to write down 
this new no 1 chart hit, everybody just tells me my music sounds 
like crap.

Would someone here please provide me with a *safe* pencil?
Jan 04 2014
parent reply "Organic Farmer" <x x.de> writes:
Are there any developers left who can afford to choose their 
programming language for its expressive power and not for the
"safety" of the language.

But I guess that's just someone speaking who, in the ol' days, 
didn't have a problem even in large C++ projects with matching 
each *new* at the start of a block with its *delete* at the end.
Jan 04 2014
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Sunday, 5 January 2014 at 00:49:43 UTC, Organic Farmer wrote:
 Are there any developers left who can afford to choose their 
 programming language for its expressive power and not for the
 "safety" of the language.
Well it depends. In my case, the technology stack is always chosen by the customers.

Our freedom to choose is quite limited.
 But I guess that's just someone speaking who, in the ol' days, 
 didn't have a problem even in large C++ projects with matching 
 each *new* at the start of a block with its *delete* at the end.
I also don't have any problem, but my experience tells me that it doesn't scale when you have developers of mixed experience on teams scattered across multiple sites.

I had my share of tracking down pointer issues as the senior developer covering up the mess, while customers were keeping technical support's ears warm. :(

I don't miss those days.

--
Paulo
Jan 05 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-01-05 13:58, Paulo Pinto wrote:

 Well it depends. In my case, the technology stack is always chosen by
 the customers.

 Our freedom to choose is quite limited.
One could think that the technology stack is chosen based on the task it should solve. -- /Jacob Carlborg
Jan 05 2014
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Sunday, 5 January 2014 at 16:21:31 UTC, Jacob Carlborg wrote:
 On 2014-01-05 13:58, Paulo Pinto wrote:

 Well it depends. In my case, the technology stack is always chosen by
 the customers.

 Our freedom to choose is quite limited.
One could think that the technology stack is chosen based on the task it should solve.
I oversimplified our use case. Usually in my line of work, the company gets a request for proposal stating a certain problem and the corresponding technology to be used.

We then look for developers with the skill sets being asked for. The teams are usually a mix of people with the requested skill sets and new ones who will learn on the job, as a means to acquire those skills in case similar projects appear.

So the direction of which technologies the company masters is driven by customer requests, not by what we might suggest.

--
Paulo
Jan 06 2014
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 5 January 2014 at 00:49:43 UTC, Organic Farmer wrote:
 Are there any developers left who can afford to choose their 
 programming language for its expressive power and not for the
 "safety" of the language.
Safety nets can be provided with ways to bypass them. Smart developers know they are mostly idiots.
 But I guess that's just someone speaking who, in the ol' days, 
 didn't have a problem even in large C++ projects with matching 
 each *new* at the start of a block with its *delete* at the end.
And an exception in the middle. Ooops!
Jan 05 2014
prev sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 4 January 2014 at 04:31:16 UTC, Walter Bright wrote:
 Null pointers are not a safety issue. Safety means no memory 
 corruption.
That's all well and good until you corrupt the interrupt vector table through a null pointer. We are talking about kernels after all. (Though I think this is different in 32- and 64-bit modes, but as you'll probably remember, the interrupt table in 16-bit DOS was located at address 0.)
Jan 04 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 6:51 AM, Adam D. Ruppe wrote:
 (though i think this is different in 32 and 64 bit, but as you'll probably
 remember, the interrupt table in 16 bit DOS was located at address 0.)
That was such a bad CPU design decision. It sure was a costly error. It would have been so much better to put the boot ROM at address 0.
Jan 04 2014
prev sibling next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Saturday, 4 January 2014 at 02:09:51 UTC, NoUseForAName wrote:
 .. includes a pretty damning assessment of D as "unsafe" 
 (compared to Rust) and generally doomed.
I'd say the author is simply wrong about the doomed thing; the link he cites doesn't make a convincing case, and is many years old anyway.

As for the safety thing, I partially agree. The concepts Rust has are potentially very useful when working without the garbage collector. If you can use the GC, it obviates much of that (the owner of all items is the GC and they have an infinite lifetime, so tracking those things is trivial), but writing a kernel is one place where you probably don't want to use it, so that makes sense.

It is possible to use the Rust concepts in D, but you don't get as much help from the compiler. Still better than C, but the Rust system is nice in this respect.

but i hate the rust syntax lol
Jan 03 2014
parent Paulo Pinto <pjmlp progtools.org> writes:
On 04.01.2014 03:39, Adam D. Ruppe wrote:
 On Saturday, 4 January 2014 at 02:09:51 UTC, NoUseForAName wrote:
 .. includes a pretty damning assessment of D as "unsafe" (compared to
 Rust) and generally doomed.
I'd say the author is simply wrong about the doomed thing; the link he cites doesn't make a convincing case, and is many years old anyway.

As for the safety thing, I partially agree. The concepts Rust has are potentially very useful when working without the garbage collector. If you can use the GC, it obviates much of that (the owner of all items is the GC and they have an infinite lifetime, so tracking those things is trivial), but writing a kernel is one place where you probably don't want to use it, so that makes sense.

It is possible to use the Rust concepts in D, but you don't get as much help from the compiler. Still better than C, but the Rust system is nice in this respect.

but i hate the rust syntax lol
I love it except for the pointer sigils, but then again I use ML -- Paulo
Jan 04 2014
prev sibling next sibling parent reply "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Saturday, 4 January 2014 at 02:09:51 UTC, NoUseForAName wrote:
 This piece (recently seen on the Hacker News front page):

 http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html

 .. includes a pretty damning assessment of D as "unsafe" 
 (compared to Rust) and generally doomed. I remember hearing 
 Walter Bright talking a lot about "safe code" during a D 
 presentation. Was that about a different kind of safety? Is the 
 author just wrong? Basically I want to hear the counterargument 
 (if there is one).
I'd say Kelet has it right, and I don't think the author has it wrong either. He goes into the specific issue he has in the section about Rust:

"Go and D provide memory safety but with all objects being automatically managed with a garbage collector (over which languages users have little control). Rust provides a way for programmers to declare objects that are automatically managed or explicitly managed, and statically checks that explicitly managed objects are used safely."

Basically D provides safety, but it also provides means to do unsafe things. I'm not familiar with Rust, but I wouldn't be surprised if unsafe actions could also be taken.
Jan 03 2014
next sibling parent "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Saturday, 4 January 2014 at 03:16:37 UTC, Jesse Phillips wrote:
 Basically D provides safety, but it also provides means to do 
 unsafe things. I'm not familiar with Rust, but I wouldn't be 
 surprised if unsafe actions could also be taken.
Haha, he covers that in the next section, just before I stopped reading to reply.

"Rust still provides an escape hatch to allow students to experiment with unsafe code."

So really Rust requires safety by default while D allows unsafe code by default. This leads me to believe that the reason Rust is safer is threefold: the SafeD system (@safe, @trusted, @system) isn't fully implemented, not enough libraries are marked @safe, and we don't have a good library to encapsulate unsafe manual memory management (a library could probably get pretty close to what Rust's compiler does).
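
A minimal sketch of how those attributes divide up the work (hypothetical code):

@safe int readThrough(int* p)
{
    // Allowed in @safe code: dereferencing (a null p faults at runtime).
    // Disallowed: pointer arithmetic, unsafe casts, etc.
    return *p;
}

@system void pokeAround(int* p)
{
    ++p;     // pointer arithmetic: fine in @system, rejected in @safe
    *p = 42;
}

@trusted void bridge(int* p)
{
    pokeAround(p); // manually verified; callable from @safe code
}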
Jan 03 2014
prev sibling parent reply "logicchains" <jonathan.t.barnard gmail.com> writes:
On Saturday, 4 January 2014 at 03:16:37 UTC, Jesse Phillips wrote:
 Basically D provides safety, but it also provides means to do 
 unsafe things. I'm not familiar with Rust, but I wouldn't be 
 surprised if unsafe actions could also be taken.
You can still take unsafe actions, they just need to be wrapped in an 'unsafe' block. Any code that calls this block also needs to be marked as 'unsafe'. I recently wrote the exact same program in both D and Rust, and if you compare the two you'll find that almost the entire Rust program is enclosed in 'unsafe' blocks (note the Rust code is for a release from a couple of months ago, so the syntax is outdated). https://github.com/logicchains/ParticleBench/blob/master/D.d https://github.com/logicchains/ParticleBench/blob/master/R.rs
Jan 03 2014
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 4 January 2014 03:31, logicchains <jonathan.t.barnard gmail.com> wrote:
 On Saturday, 4 January 2014 at 03:16:37 UTC, Jesse Phillips wrote:
 Basically D provides safety, but it also provides means to do unsafe
 things. I'm not familiar with Rust, but I wouldn't be surprised if unsafe
 actions could also be taken.
You can still take unsafe actions, they just need to be wrapped in an 'unsafe' block. Any code that calls this block also needs to be marked as 'unsafe'. I recently wrote the exact same program in both D and Rust, and if you compare the two you'll find that almost the entire Rust program is enclosed in 'unsafe' blocks (note the Rust code is for a release from a couple of months ago, so the syntax is outdated).
Rust syntax changes every couple of months?!?!?
Jan 04 2014
parent reply "ilya-stromberg" <ilya-stromberg-2009 yandex.ru> writes:
On Saturday, 4 January 2014 at 12:31:06 UTC, Iain Buclaw wrote:
 Rust syntax changes every couple of months?!?!?
It looks like yes: "The following code examples are valid as of Rust 0.8. Syntax and semantics may change in subsequent versions." http://en.wikipedia.org/wiki/Rust_%28programming_language%29
Jan 04 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 4:44 AM, ilya-stromberg wrote:
 On Saturday, 4 January 2014 at 12:31:06 UTC, Iain Buclaw wrote:
 Rust syntax changes every couple of months?!?!?
It looks like yes: "The following code examples are valid as of Rust 0.8. Syntax and semantics may change in subsequent versions." http://en.wikipedia.org/wiki/Rust_%28programming_language%29
I attended a presentation on Rust a couple months ago, by one of the Rust developers, and he said his own slides were syntactically out of date :-)
Jan 04 2014
prev sibling next sibling parent reply "Maxim Fomin" <maxim maxim-fomin.ru> writes:
On Saturday, 4 January 2014 at 02:09:51 UTC, NoUseForAName wrote:
 This piece (recently seen on the Hacker News front page):

 http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html

 .. includes a pretty damning assessment of D as "unsafe" 
 (compared to Rust) and generally doomed. I remember hearing 
 Walter Bright talking a lot about "safe code" during a D 
 presentation. Was that about a different kind of safety? Is the 
 author just wrong? Basically I want to hear the counterargument 
 (if there is one).
Quoting:

"The biggest disadvantage of D compared to Rust is that it does not have the kind of safety perspective that Rust does, and in particular does not provide safe constructs for concurrency."

On the surface this looks like an explanation of why D is unsafe, but the article fails to study the real issues which were discussed in the newsgroups or filed in Bugzilla. From my experience, there are much better opportunities to elaborate on why D is unsafe. The quoted citation looks extremely naive.

"The other argument against using D is that it has been around more than 10 years now, without much adoption and appears to be more likely on its way out rather than increasing popularity."

I doubt it. Why have you posted this ungrounded Rust advertisement anyway?
Jan 03 2014
parent "JR" <zorael gmail.com> writes:
On Saturday, 4 January 2014 at 03:45:22 UTC, Maxim Fomin wrote:
 Why have you posted this ungrounded Rust advertisement anyway?
To spark discussion?
Jan 05 2014
prev sibling next sibling parent "Dylan Knutson" <tcdknutson gmail.com> writes:
The article, based on the title, struck me as FUD, and reading it 
just confirmed that suspicion.

It really is just an advertisement for Rust. And I like Rust, except for the buggy compiler and the near impossibility of doing advanced metaprogramming. But I like D better, because (IMO) I can write more succinct, correct code without having to fight the type system, and do insanely powerful compile-time and template magic (I'd like to note my project, Temple, as an example of how powerful D's CTFE is: github.com/dymk/temple).

 From the article:
 and in particular does not provide safe constructs for 
 concurrency
Did the author do their research at all? Of course D has safe constructs for concurrency, albeit optional ones.
Jan 04 2014
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
NoUseForAName:

 http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html
Why aren't they using Ada? It has a really refined and safe model of parallelism, it's quite safe, it teaches a student the correct ways of dealing with pointers, memory, etc. in a low-level setting. It's usable for hard real-time. And it's way more commonly used than Rust. There are books on Ada. Its compilers are solid, and while Ada is being updated significantly (the latest is Ada 2012), there's no risk of important parts of the language becoming backward incompatible in the short term.

Ada code is not sexy, but this is not a significant problem for an advanced course lasting a few months. Ada is a complex language, but it's the right kind of complexity: it's not special cases piled on special cases, it's features piled on features to deal correctly with different needs (just like in D, even though D is designed less for correctness than Ada is).

Bye,
bearophile
Jan 04 2014
next sibling parent reply "QAston" <qaston gmail.com> writes:
On Saturday, 4 January 2014 at 11:36:20 UTC, bearophile wrote:
 NoUseForAName:

 http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html
Why aren't they using Ada? It has a really refined and safe parallelism, it's quite safe, it teaches a student the correct ways of dealing with pointers, memory etc in a low-level setting. It's usable for hard-realtime. And it's way more commonly used than Rust. There are books on Ada. Its compilers are solid, and while Ada is being updated significantly (the latest is Ada2012) there's no risk in having important parts of the language become backward incompatible in the short term. Ada code is not sexy, but this is not a significant problem for an advanced course lasting few months. Ada is a complex language, but it's the right kind of complexity, it's not special cases piled on special cases, it's features piled on features to deal correctly with different needs (just like in D, despite D is less designed for correctness compared to Ada). Bye, bearophile
Ada is not hype enough, so it doesn't qualify. J/K (no death-threats please). I gave Rust a try; I couldn't get it to run on my OS.
Jan 04 2014
parent Paulo Pinto <pjmlp progtools.org> writes:
On 04.01.2014 13:09, QAston wrote:
 On Saturday, 4 January 2014 at 11:36:20 UTC, bearophile wrote:
 NoUseForAName:

 http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html
Why aren't they using Ada? It has a really refined and safe parallelism, it's quite safe, it teaches a student the correct ways of dealing with pointers, memory etc in a low-level setting. It's usable for hard-realtime. And it's way more commonly used than Rust. There are books on Ada. Its compilers are solid, and while Ada is being updated significantly (the latest is Ada2012) there's no risk in having important parts of the language become backward incompatible in the short term. Ada code is not sexy, but this is not a significant problem for an advanced course lasting few months. Ada is a complex language, but it's the right kind of complexity, it's not special cases piled on special cases, it's features piled on features to deal correctly with different needs (just like in D, despite D is less designed for correctness compared to Ada). Bye, bearophile
Ada is not hype enough, so it doesn't qualify. J/K (no death-threats please), I gave rust a try, i couldn't get it to run on my OS.
I agree with you here. Ada seems to be growing in Europe, at least from what I can tell every time I attend FOSDEM. I would say we have to thank C's lack of safety and the availability of GNAT for it. But the language uses Algol based syntax and is verbose for C developers, which makes it not hype enough as you say. -- Paulo
Jan 04 2014
prev sibling parent reply "renoX" <renozyx gmail.com> writes:
On Saturday, 4 January 2014 at 11:36:20 UTC, bearophile wrote:
[cut]
 Why aren't they using Ada?[cut]
Because the "software world" is unfortunately very much a "fashion world": if it's old, it's not interesting.. :-( renoX
Jan 06 2014
parent "bearophile" <bearophileHUGS lycos.com> writes:
renoX:

 Because the "software world" is unfortunately very much a 
 "fashion world": if it's old, it's not interesting.. :-(
This good article helps get a better point of view on the topic: http://www.inventio.co.uk/threeforthsmakeahole.htm Bye, bearophile
Jan 06 2014
prev sibling next sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
 The biggest disadvantage of D compared to Rust is that it does 
 not have the kind of safety perspective that Rust does, and in 
 particular does not provide safe constructs for concurrency.
Pretty sure immutable, purity, and thread-local statics are all safe constructs for concurrency; not to mention all the library features. Rust probably is safer by some metric, but all those pointer types add considerable complexity.
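
For instance, a minimal sketch of message passing with std.concurrency, which avoids shared mutable state by default:

import std.concurrency;
import std.stdio;

void worker()
{
    auto n = receiveOnly!int(); // typed, copied message
    ownerTid.send(n * 2);
}

void main()
{
    auto tid = spawn(&worker);
    tid.send(21);
    writeln(receiveOnly!int()); // prints 42
}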
 The other argument against using D is that it has been around 
 more than 10 years now, without much adoption and appears to be 
 more likely on its way out rather than increasing popularity.
This is just false. Any metric you look at suggests D use is on the increase, and it is certainly starting to get more commercial interest.

It's worth noting that many languages take a long time before they blossom. It took Ruby 10 years before Rails appeared.
Jan 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/4/2014 4:51 AM, Peter Alexander wrote:
 It's worth noting that many languages take a long time before they blossom. It
 took Ruby 10 years before Rails appeared.
Many languages have a long history before they burst on the scene. It's much like rock bands. The Beatles labored fruitlessly for years in the salt mines before appearing out of nowhere.
Jan 04 2014
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Jan 04, 2014 at 11:07:37AM -0800, Walter Bright wrote:
 On 1/4/2014 4:51 AM, Peter Alexander wrote:
It's worth noting that many languages take a long time before they
blossom. It took Ruby 10 years before Rails appeared.
Many languages have a long history before they burst on the scene. It's much like rock bands. The Beatles labored fruitlessly for years in the salt mines before appearing out of nowhere.
I never trusted in the "hot new emerging trends" thing. Artifacts of quality take time to produce and develop, and bandwagons have a reputation of turning out to be disappointments. That goes for programming languages, and also software in general. What stands the test of time is what has the real value. T -- It said to install Windows 2000 or better, so I installed Linux instead.
Jan 04 2014
parent "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Saturday, 4 January 2014 at 21:03:53 UTC, H. S. Teoh wrote:
 I never trusted in the "hot new emerging trends" thing. 
 Artifacts of
 quality take time to produce and develop, and bandwagons have a
 reputation of turning out to be disappointments. That goes for
 programming languages, and also software in general. What 
 stands the test of time is what has the real value.


 T
Yeah, the statement would have been better written as: "D has been around more than 10 years now without much adoption, and has been through internal segregation, but appears to be pushing through and increasing development support. While this past is concerning, it has at least demonstrated survival ability." But I'm not sure that is obvious from the outside, though there certainly is evidence that supports it.
Jan 04 2014
prev sibling next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On 04.01.2014 03:09, NoUseForAName wrote:
 This piece (recently seen on the Hacker News front page):

 http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html

 .. includes a pretty damning assessment of D as "unsafe" (compared to
 Rust) and generally doomed. I remember hearing Walter Bright talking a
 lot about "safe code" during a D presentation. Was that about a
 different kind of safety? Is the author just wrong? Basically I want to
 hear the counterargument (if there is one).
He gets his assumptions about D wrong, but I find it nice that he decided to show his students that there are other, safer languages to write operating systems in.

If UNIX had not spread outside academia, most likely C would never have reached the status it achieved.

--
Paulo
Jan 04 2014
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/3/14 6:09 PM, NoUseForAName wrote:
 This piece (recently seen on the Hacker News front page):

 http://rust-class.org/pages/using-rust-for-an-undergraduate-os-course.html

 .. includes a pretty damning assessment of D as "unsafe" (compared to
 Rust) and generally doomed. I remember hearing Walter Bright talking a
 lot about "safe code" during a D presentation. Was that about a
 different kind of safety? Is the author just wrong? Basically I want to
 hear the counterargument (if there is one).
This thread is very interesting - I posted a link to it to the reddit discussion: http://www.reddit.com/r/programming/comments/1ucvtd/using_rust_for_an_undergraduate_os_course/ceh5ysq Andrei
Jan 04 2014