
digitalmars.D - Discussion Thread: DIP 1028--Make @safe the Default--Final Review

reply Mike Parker <aldacron gmail.com> writes:
This is the discussion thread for the Final Review of DIP 1028, 
"Make  safe the Default":

https://github.com/dlang/DIPs/blob/5afe088809bed47e45e14c9a90d7e78910ac4054/DIPs/DIP1028.md

The review period will end at 11:59 PM ET on April 8, or when I 
make a post declaring it complete. Discussion in this thread may 
continue beyond that point.

Here in the discussion thread, you are free to discuss anything 
and everything related to the DIP. Express your support or 
opposition, debate alternatives, argue the merits, etc.

However, if you have any specific feedback on how to improve the 
proposal itself, then please post it in the feedback thread. The 
feedback thread will be the source for the review summary I write 
at the end of this review round. I will post a link to that 
thread immediately following this post. Just be sure to read and 
understand the Reviewer Guidelines before posting there:

https://github.com/dlang/DIPs/blob/master/docs/guidelines-reviewers.md

And my blog post on the difference between the Discussion and 
Feedback threads:

https://dlang.org/blog/2020/01/26/dip-reviews-discussion-vs-feedback/

Please stay on topic here. I will delete posts that are 
completely off-topic.
Mar 25 2020
next sibling parent Mike Parker <aldacron gmail.com> writes:
On Wednesday, 25 March 2020 at 07:02:32 UTC, Mike Parker wrote:

 However, if you have any specific feedback on how to improve 
 the proposal itself, then please post it in the feedback 
 thread. The feedback thread will be the source for the review 
 summary I write at the end of this review round. I will post a 
 link to that thread immediately following this post. Just be 
 sure to read and understand the Reviewer Guidelines before 
 posting there:
The Feedback thread is here: https://forum.dlang.org/post/wkdpnzarkbtqryighzpx@forum.dlang.org #DIP1028
Mar 25 2020
prev sibling next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
I have an alternative to @safe by default that may be a better option 
for the near future.

Given a headconst tied to lifetime (similar to a borrowed pointer), we 
can then change the default of such a function to be @safe.

This can be slowly expanded to include raw pointer declarations that 
are never assigned to as well.

It doesn't get us 100% of the way there, but it does get those who 
care about memory safety about 80% of the way there and gives us more 
time to adapt code.

Thoughts welcome.
Mar 25 2020
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
In response to Walter's response to ag*, I would say that there is a 
fatal problem with automatically allowing extern(C) function prototypes 
(and really, anything that does not mangle @safe) to be default @safe.

The reason is simple -- the change is silent and automatically marks 
everything @safe that has not been checked.

I would argue that if the compiler is going to make things @safe by 
default, then things that are not marked and are not @safe should not 
compile AT ALL COSTS. Otherwise the value of @safe is completely lost.

The DIP should be rejected IMO unless all functions with no mechanism to 
mangle @safe into the name (e.g. extern(C), extern(C++), etc) that have 
no implementation are either:

a) required to be marked, or
b) default @system.

Everything else in the DIP is possibly annoying to deal with but at 
least doesn't silently destroy the meaning of @safe.

I will note that I support the notion of @safe by default. I would be in 
favor of the DIP as long as this fatal flaw is not included.

-Steve
Mar 25 2020
next sibling parent reply bachmeier <no spam.net> writes:
On Wednesday, 25 March 2020 at 14:10:18 UTC, Steven Schveighoffer 
wrote:

 Everything else in the DIP is possibly annoying to deal with 
 but at least doesn't silently destroy the meaning of  safe.
To be perfectly honest, I can't imagine D being a sensible option for someone wanting to work heavily with C code if you have to add pointless annotations and constantly deal with compiler errors. It's not a matter of annoyance, it's simply impractical to add that kind of overhead, particularly if someone else is involved. If you're using C, you're well aware that it's not going to be safe. Rust was designed for *writing* safe code, not for wrapping C libraries, which is maybe the main use of D right now.
Mar 25 2020
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Mar 25, 2020 at 05:34:11PM +0000, bachmeier via Digitalmars-d wrote:
 On Wednesday, 25 March 2020 at 14:10:18 UTC, Steven Schveighoffer wrote:
 
 Everything else in the DIP is possibly annoying to deal with but at
 least doesn't silently destroy the meaning of  safe.
 To be perfectly honest, I can't imagine D being a sensible option for someone wanting to work heavily with C code if you have to add pointless annotations and constantly deal with compiler errors. It's not a matter of annoyance, it's simply impractical to add that kind of overhead, particularly if someone else is involved. If you're using C, you're well aware that it's not going to be safe.
[...]

If you're interfacing D code with C code, your main() is probably already @system anyway, so you might as well just stick @system: at the top of your extern(C) declarations and call it a day.


T

-- 
The two rules of success: 1. Don't tell everything you know. -- YHL
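To illustrate the suggestion, here is a minimal sketch of what that could look like; the module and C function names are hypothetical:

// hypothetical binding module for some C library
module bindings.mylib;

@system:   // nothing below has been verified, so keep it out of @safe code

extern (C):
void* my_alloc(size_t size);
void  my_free(void* p);
int   my_process(const(char)* data, size_t len);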
Mar 25 2020
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/25/20 1:34 PM, bachmeier wrote:
 On Wednesday, 25 March 2020 at 14:10:18 UTC, Steven Schveighoffer wrote:
 
 Everything else in the DIP is possibly annoying to deal with but at 
 least doesn't silently destroy the meaning of  safe.
 To be perfectly honest, I can't imagine D being a sensible option for someone wanting to work heavily with C code if you have to add pointless annotations and constantly deal with compiler errors. It's not a matter of annoyance, it's simply impractical to add that kind of overhead, particularly if someone else is involved. If you're using C, you're well aware that it's not going to be safe. Rust was designed for *writing* safe code, not for wrapping C libraries, which is maybe the main use of D right now.

This is overblown. Adding @system: at the top of a C library header is not hard. Tools which generate headers for C libraries (e.g. dpp) can automatically do the right thing.

-Steve
Mar 25 2020
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, March 25, 2020 12:04:33 PM MDT Steven Schveighoffer via 
Digitalmars-d wrote:
 On 3/25/20 1:34 PM, bachmeier wrote:
 On Wednesday, 25 March 2020 at 14:10:18 UTC, Steven Schveighoffer wrote:
 Everything else in the DIP is possibly annoying to deal with but at
 least doesn't silently destroy the meaning of  safe.
 To be perfectly honest, I can't imagine D being a sensible option for someone wanting to work heavily with C code if you have to add pointless annotations and constantly deal with compiler errors. It's not a matter of annoyance, it's simply impractical to add that kind of overhead, particularly if someone else is involved. If you're using C, you're well aware that it's not going to be safe. Rust was designed for *writing* safe code, not for wrapping C libraries, which is maybe the main use of D right now.

 This is overblown. Adding @system: at the top of a C library header is not hard. Tools which generate headers for C libraries (e.g. dpp) can automatically do the right thing.

Not only that, but if _any_ function is automatically marked as @safe when the compiler can't verify that it is, and no programmer has verified that it is and marked it with @trusted, then @safe is borderline pointless and useless, because it's not actually guaranteeing memory safety. We have enough of a problem with programmers incorrectly using @trusted without the compiler doing it. @safe needs to provide actual compiler guarantees, or it just provides a false sense of security.

extern(C) functions ultimately exist in the call stack of every D program (even if most of them are buried in D code rather than being used directly by the program), and to have those treated as @system with no verification basically throws @safe's guarantees out the window.

I agree with Walter's assertion that the DIP applies to extern(C) functions unless it says otherwise, but that being the case, the DIP needs to be fixed so that it does not apply to extern(C) functions, or it will do _far_ more damage than good.

- Jonathan M Davis
Mar 25 2020
parent reply Jonathan Marler <johnnymarler gmail.com> writes:
On Wednesday, 25 March 2020 at 19:27:22 UTC, Jonathan M Davis 
wrote:
 On Wednesday, March 25, 2020 12:04:33 PM MDT Steven 
 Schveighoffer via Digitalmars-d wrote:
 On 3/25/20 1:34 PM, bachmeier wrote:
 On Wednesday, 25 March 2020 at 14:10:18 UTC, Steven 
 Schveighoffer wrote:
 Everything else in the DIP is possibly annoying to deal 
 with but at least doesn't silently destroy the meaning of 
  safe.
 To be perfectly honest, I can't imagine D being a sensible option for someone wanting to work heavily with C code if you have to add pointless annotations and constantly deal with compiler errors. It's not a matter of annoyance, it's simply impractical to add that kind of overhead, particularly if someone else is involved. If you're using C, you're well aware that it's not going to be safe. Rust was designed for *writing* safe code, not for wrapping C libraries, which is maybe the main use of D right now.

 This is overblown. Adding @system: at the top of a C library header is not hard. Tools which generate headers for C libraries (e.g. dpp) can automatically do the right thing.

 Not only that, but if _any_ function is automatically marked as @safe when the compiler can't verify that it is, and no programmer has verified that it is and marked it with @trusted, then @safe is borderline pointless and useless, because it's not actually guaranteeing memory safety. We have enough of a problem with programmers incorrectly using @trusted without the compiler doing it. @safe needs to provide actual compiler guarantees, or it just provides a false sense of security. extern(C) functions ultimately exist in the call stack of every D program (even if most of them are buried in D code rather than being used directly by the program), and to have those treated as @system with no verification basically throws @safe's guarantees out the window. I agree with Walter's assertion that the DIP applies to extern(C) functions unless it says otherwise, but that being the case, the DIP needs to be fixed so that it does not apply to extern(C) functions, or it will do _far_ more damage than good.

I agree that @safe is almost pointless if it's not on by default.

That being said, in practice I'm not sure how much benefit @safe actually provides. In theory it sounds nice. It could help audit code, but for me, I audit all my code the same whether or not it's @safe. So for me the whole feature seems kinda pointless. Maybe this is different for others? Does anyone have any real life examples/experience where @safe has helped? Has the benefit warranted the cost to manage these tags throughout your code?

Do we have any projects that are already using this behavior by putting "@safe:" at the top of every file? Does anyone have any pointers to projects that have done this? Have they seen any benefits from doing so?
Mar 25 2020
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Mar 25, 2020 at 09:58:40PM +0000, Jonathan Marler via Digitalmars-d
wrote:
[...]
 That being said, in practice I'm not sure how much benefit  safe
 actually provides. In theory it sounds nice.  It could help audit
 code, but for me, I audit all my code the same whether or not it's
 safe.  So for me the whole feature seems kinda pointless. Maybe this
 is different for others?  Does anyone have any real life
 examples/experience where  safe has helped?  Has the benefit warranted
 the cost to manage these tags throughout your code? Do we have any
 projects that are already using this behavior by putting " safe:" at
 the top of every file?  Does anyone have any pointers to projects that
 have done this?  Have they seen any benefits from doing so?
Some of the latest new features like DIP1000 are in full force only inside @safe code. I've run into a couple of escaping reference bugs that were not caught because I didn't tag my code @safe, but once I added @safe I immediately got a compiler error pinpointing the code that leaked a scoped reference. I wouldn't say this is a big impact, but it did catch a couple of bugs that would've been a pain to track down.

From this perspective, it makes sense to make @safe the default: most users would not bother with the pain of manually tagging everything @safe just to get a few minor benefits. But having it by default means everyone reaps the benefits, and where you need an escape to do something seemingly dangerous, you can explicitly use @system or @trusted to temporarily suspend the compiler's checks for specific bits of code.


T

-- 
Real men don't take backups. They put their source on a public FTP-server and let the world mirror it. -- Linus Torvalds
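A minimal sketch of the kind of bug being described, assuming hypothetical code compiled with -preview=dip1000: left un-annotated (@system), the leak below gets through; marking the functions @safe turns it into a compile error.

int* leaked;

void keep(scope int* p) @safe
{
    leaked = p;      // rejected in @safe code under -preview=dip1000:
                     // a scope pointer escapes into a global
}

void caller() @safe
{
    int local = 42;
    keep(&local);    // allowed: `p` is scope, so passing &local is fine
}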
Mar 25 2020
next sibling parent reply Jonathan Marler <johnnymarler gmail.com> writes:
On Wednesday, 25 March 2020 at 22:40:10 UTC, H. S. Teoh wrote:
 Some of the latest new features like DIP1000 are in full force 
 only inside  safe code.
 [...snip...]
Oh I didn't know DIP1000 was only enabled in safe code. Why is that?
Mar 25 2020
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Mar 25, 2020 at 11:12:46PM +0000, Jonathan Marler via Digitalmars-d
wrote:
 On Wednesday, 25 March 2020 at 22:40:10 UTC, H. S. Teoh wrote:
 
 Some of the latest new features like DIP1000 are in full force only
 inside  safe code.
 [...snip...]
 Oh I didn't know DIP1000 was only enabled in @safe code. Why is that?

I don't know if *all* of DIP1000 only applies in @safe code, I think some of the fixes apply to @system code as well. But the way I understand it, the idea is that in @system code the programmer is supposed to know what he's doing, and is assumed to be doing something "outside the box" that the compiler does not fully understand, so by default @system code is allowed to do seemingly "dangerous" things. @safe code is for when the programmer is committed to doing only safe operations, so that's where it makes the most sense to enforce these checks.

I think the intention is that most D code will be @safe, and things like DIP1000 will apply to enforce memory safety, and only when you need to "go under the hood" you'd have a @system function as the escape hatch to do the low-level hacks. The way I read it, this is all part of a general grand plan, which includes this DIP, to move towards having most D code be @safe and only occasionally drop down to @system for operations that cannot be mechanically verified. Hence, @safe by default.


T

-- 
Elegant or ugly code as well as fine or rude sentences have something in common: they don't depend on the language. -- Luca De Vitis
Mar 25 2020
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/25/2020 4:33 PM, H. S. Teoh wrote:
 I think the intention is that most D code will be  safe, and things like
 DIP1000 will apply to enforce memory safety, and only when you need to
 "go under the hood" you'd have a  system function as the escape hatch to
 do the low-level hacks.  The way I read it, this is all part of a
 general grand plan, which includes this DIP, to move towards having most
 D code be  safe and only occasionally drop down to  system for
 operations that cannot be mechanically verified.  Hence,  safe by
 default.
That's right.
Mar 27 2020
prev sibling next sibling parent Kagamin <spam here.lot> writes:
On Wednesday, 25 March 2020 at 22:40:10 UTC, H. S. Teoh wrote:
 On Wed, Mar 25, 2020 at 09:58:40PM +0000, Jonathan Marler via 
 Digitalmars-d wrote: [...]
 That being said, in practice I'm not sure how much benefit 
  safe actually provides. In theory it sounds nice.  It could 
 help audit code, but for me, I audit all my code the same 
 whether or not it's safe.  So for me the whole feature seems 
 kinda pointless. Maybe this is different for others?  Does 
 anyone have any real life examples/experience where  safe has 
 helped?  Has the benefit warranted the cost to manage these 
 tags throughout your code? Do we have any projects that are 
 already using this behavior by putting " safe:" at the top of 
 every file?  Does anyone have any pointers to projects that 
 have done this?  Have they seen any benefits from doing so?
 Some of the latest new features like DIP1000 are in full force only inside @safe code. I've run into a couple of escaping reference bugs that were not caught because I didn't tag my code @safe, but once I added @safe I immediately got a compiler error pinpointing the code that leaked a scoped reference. I wouldn't say this is a big impact, but it did catch a couple of bugs that would've been a pain to track down. From this perspective, it makes sense to make @safe the default: most users would not bother with the pain of manually tagging everything @safe just to get a few minor benefits.

Even to get those minor benefits, your code has to be annotated with `scope`, `return` and whatnot. And even then dip1000 is not the default yet, so even those minor benefits are out. If people are too lazy to add the six characters of `@safe:`, why do you think they aren't too lazy to keep their codebase dip1000 compliant?
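For reference, a minimal sketch of the kind of annotations dip1000 expects (hypothetical functions, compiled with -preview=dip1000):

// `scope`: the pointer may not escape the function at all
@safe void use(scope int* p)
{
    // *p can be read and written, but p cannot be stored anywhere longer-lived
}

// `return scope`: the result is tied to the argument's lifetime
@safe int* identity(return scope int* p)
{
    return p;
}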
Mar 25 2020
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/25/2020 3:40 PM, H. S. Teoh wrote:
 I wouldn't say this is a big impact, but it did catch a couple of bugs
 that would've been a pain to track down.
Even if escaping references to the stack are rare, it is very, very important to catch them, as they are hard to track down and cause silent data corruption. They're some of the worst bugs.

@safe recently found a data corruption bug in the druntime stack unwinder that had been there, latent, for years. (It hadn't caught the bug before because I had overlooked checking the msg argument to assert() for @safe errors.)
Mar 27 2020
prev sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Wednesday, 25 March 2020 at 21:58:40 UTC, Jonathan Marler 
wrote:
 Has the benefit warranted the cost to manage these tags 
 throughout your code?
Yes. Especially since the cost is trivial.
 Do we have any projects that are already using this behavior by 
 putting " safe:" at the top of every file?  Does anyone have 
 any pointers to projects that have done this?
All my projects that aren't called reggae. The only reason for that exception is that it's ancient and I didn't know any better then.

I don't know how we've managed, but we've utterly failed at marketing @safe to the community. Writing @safe code is easy unless you're doing manual memory management or trying to abstract it with a library. *Especially* with DIP1000.
Mar 26 2020
parent reply Mathias Lang <pro.mathias.lang gmail.com> writes:
On Thursday, 26 March 2020 at 10:55:44 UTC, Atila Neves wrote:
 On Wednesday, 25 March 2020 at 21:58:40 UTC, Jonathan Marler 
 wrote:
 Has the benefit warranted the cost to manage these tags 
 throughout your code?
Yes. Especially since the cost is trivial.
 Do we have any projects that are already using this behavior 
 by putting " safe:" at the top of every file?  Does anyone 
 have any pointers to projects that have done this?
 All my projects that aren't called reggae. The only reason for that exception is that it's ancient and I didn't know any better then. I don't know how we've managed, but we've utterly failed at marketing @safe to the community. Writing @safe code is easy unless you're doing manual memory management or trying to abstract it with a library. *Especially* with DIP1000.

There's a huge difference between correctly using `@safe` and having things compile. I have *never* seen a non-trivial library that manages to do the former without imposing strong requirements on the user. Let me repeat that: I haven't seen a *single* non-trivial library out there that does it correctly. It either forces user code to be `@safe`, or bypasses `@safe`ty checks completely.

And since exceptional claims call for exceptional proof, I wanted to check whether or not your libraries would be any different. It took me less than 5 minutes to find this: https://github.com/atilaneves/unit-threaded/issues/176

We didn't fail to market `@safe`. We failed to provide a construct that allows users to write libraries that take attributes (`@safe`, `nothrow`, `@nogc`, same fight) into account without incredible amounts of boilerplate.

Take any library that accepts a delegate: either one has to restrict what the delegate can do, or make a certain attribute un-enforceable. There's no way to write an interface that concisely expresses that its `@safe`ty, `@nogc`-ness or `nothrow`-ness depends on the user-provided delegate, unless you template absolutely everything, which is not a viable (and sometimes, not possible) solution.

We need something similar to `inout` for attributes. Or at the very least, a way to express this dependency. Only then will changing the default be a remotely viable possibility, in my opinion. At the moment, all this is doing is ignoring the problem and pushing the complexity from one demographic to the other.
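To make the delegate problem concrete, here is a minimal sketch (hypothetical names): a non-template parameter has to commit to one attribute set up front, while the template version gets its attributes inferred, at the cost of templating everything that ever touches the delegate.

// non-template: must pick an attribute set for the delegate type
@safe void eachChecked(scope void delegate() @safe dg)
{
    dg();   // fine, but callers can only pass @safe delegates
}

// template: safety is inferred per instantiation
void each(DG)(scope DG dg)
{
    dg();
}

void demo() @system
{
    int* p = new int;
    // eachChecked(() { p += 1; });  // rejected: the literal isn't @safe
    each(() { p += 1; });            // accepted: this instance is @system
}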
Mar 26 2020
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/26/2020 9:20 PM, Mathias Lang wrote:
 Take any library that accepts a delegate: Either one has to restrict what the 
 delegate can do, or make a certain attribute un-enforceable. There's no way to 
 write an interface that concisely expresses that its ` safe`ty, ` nogc`-ness
or 
 `nothrow`-ness depends on the user-provided delegate, unless you template 
 absolutely everything, which is not a viable (and sometimes, not possible) 
 solution.
Making @safe the default will substantially reduce this problem for the simple reason that the vast bulk of code should be @safe.

BTW, I have an upcoming DIP that changes the default attributes for delegate parameter types to match the function they appear in.
Mar 27 2020
next sibling parent reply Mathias Lang <pro.mathias.lang gmail.com> writes:
On Friday, 27 March 2020 at 09:07:01 UTC, Walter Bright wrote:
 Making  safe the default will substantially reduce this problem 
 for the simple reason that the vast bulk of code should be 
  safe.
I've been repeating for a while now that this is simply not true. It was the very point (and last sentence) in my previous message:
 At the moment, all this is doing is ignoring the problem and 
 pushing the complexity from one demographic to the other.
Changing the default makes it "easier" to deal with `@safe` by making it harder to deal with `@system`. How is that an improvement?
 BTW, I have an upcoming DIP that changes the default attributes 
 for delegate parameter types to match the function they appear 
 in.
I already gave it [a lengthy review](https://github.com/dlang/DIPs/pull/170#pullrequestreview-294073723).
Mar 27 2020
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/27/2020 2:40 AM, Mathias Lang wrote:
 Changing the default makes it "easier" to deal with ` safe` by making it
harder 
 to deal with ` system`. How is that an improvement?
The idea is that @system code should be relatively rare - much rarer than @safe code. Generally speaking, the common case should be the default one.
Apr 02 2020
prev sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 27 March 2020 at 09:07:01 UTC, Walter Bright wrote:
 BTW, I have an upcoming DIP that changes the default attributes 
 for delegate parameter types to match the function they appear 
 in.
You'd probably have to make an exception there for @trusted... if a @trusted function takes a @trusted delegate it would get silently ugly.

OR perhaps

@trusted void foo(@trusted void delegate() dg) { }

foo( () { /* do unsafe thing */ } );
// fails to compile unless the body infers to @safe; it will never
// assume a @system literal is @trusted

foo( () @trusted { /* do unsafe thing */ } );
// OK, you specifically said @trusted so it works
Mar 27 2020
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Friday, March 27, 2020 9:40:00 AM MDT Adam D. Ruppe via Digitalmars-d 
wrote:
 On Friday, 27 March 2020 at 09:07:01 UTC, Walter Bright wrote:
 BTW, I have an upcoming DIP that changes the default attributes
 for delegate parameter types to match the function they appear
 in.
 You'd probably have to make an exception there for @trusted... if a 
 @trusted function takes a @trusted delegate it would get silently ugly.

 OR perhaps

 @trusted void foo(@trusted void delegate() dg) { }

 foo( () { /* do unsafe thing */ } );
 // fails to compile unless the body infers to @safe; it will never
 // assume a @system literal is @trusted

 foo( () @trusted { /* do unsafe thing */ } );
 // OK, you specifically said @trusted so it works

@trusted should never have been part of the name mangling / linkage of D functions. @trusted and @safe need to be treated very differently when compiling functions, and they signal very different things to the programmer, but their difference is an implementation detail. A function calling an @safe or @trusted function doesn't care about the difference, and all it takes is one @safe function in-between, and an @trusted function effectively becomes an @safe one anyway.

If @trusted just mangled to the same thing as @safe, then that definitely improves the situation for stuff like delegates (it doesn't fix it given the attribute soup that we have, but it would certainly improve it). I don't know if we can reasonably change how @trusted is treated in name mangling at this point, but I definitely think that it was a mistake to distinguish between @safe and @trusted with name mangling.

- Jonathan M Davis
Mar 27 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/27/2020 2:02 PM, Jonathan M Davis wrote:
 I don't know if we can reasonably change how  trusted is treated in name
 mangling at this point, but I definitely think that it was a mistake to
 distinguish between  safe and  trusted with name mangling.
If @trusted wasn't part of the mangling, one could not turn the mangling back into the function signature.
Apr 04 2020
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04.04.20 09:01, Walter Bright wrote:
 On 3/27/2020 2:02 PM, Jonathan M Davis wrote:
 I don't know if we can reasonably change how  trusted is treated in name
 mangling at this point, but I definitely think that it was a mistake to
 distinguish between  safe and  trusted with name mangling.
 If @trusted wasn't part of the mangling, one could not turn the mangling back into the function signature.

Jonathan's more general point was that there is no reason to distinguish @safe and @trusted in function signatures. In fact, it would be an improvement if @trusted functions had @safe function signatures.
Apr 05 2020
prev sibling next sibling parent Atila Neves <atila.neves gmail.com> writes:
On Friday, 27 March 2020 at 04:20:47 UTC, Mathias Lang wrote:
 On Thursday, 26 March 2020 at 10:55:44 UTC, Atila Neves wrote:
 On Wednesday, 25 March 2020 at 21:58:40 UTC, Jonathan Marler 
 wrote:
 Has the benefit warranted the cost to manage these tags 
 throughout your code?
Yes. Especially since the cost is trivial.
 Do we have any projects that are already using this behavior 
 by putting " safe:" at the top of every file?  Does anyone 
 have any pointers to projects that have done this?
All my projects that aren't called reggae. The only reason for that exception is that it's ancient and I didn't know any better then. I don't know how we've managed, but we've utterly failed at marketing safe to the community. Writing safe code is easy unless you're doing manual memory management or trying to abstract it with a library. *Especially* with DIP1000.
 There's a huge difference between correctly using `@safe` and having things compile.

I think there's a huge difference in using `@trusted` correctly and having things compile.
 And since exceptional claims calls for exceptional proof, I 
 wanted to check whether or not your libraries would be any 
 different. It took me less than 5 minutes to find this: 
 https://github.com/atilaneves/unit-threaded/issues/176
Oops. Thanks for the bug report!

I've been quite bad at using @trusted myself. I think part of the reason that I've been using it wrongly is because @safe isn't the default, and code that *should* be @safe wasn't. The compiler complained and I wrongly applied @trusted somewhere. It's my belief that I would've screwed up far less if @safe had been the default.
 Take any library that accepts a delegate:
Yes, this is a problem. In my libclang binding (it's on dub) I wanted to write @safe pure code but couldn't since libclang takes a visitor callback. I *could* make the callback declaration @safe and pure, but that's too restrictive for other users.
Mar 27 2020
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/27/20 12:20 AM, Mathias Lang wrote:
 I haven't seen a *single* non-trivial library out there that does it 
 correctly.
It's not completely merged, but it's ready for review. If you can find anything here, I'll fix it:

https://github.com/mysql-d/mysql-native/pull/214

I also made diet-ng @safe, and it wasn't hard.

https://github.com/rejectedsoftware/diet-ng

-Steve
Mar 27 2020
prev sibling parent Arine <arine123445128843 gmail.com> writes:
On Friday, 27 March 2020 at 04:20:47 UTC, Mathias Lang wrote:
 And since exceptional claims calls for exceptional proof, I 
 wanted to check whether or not your libraries would be any 
 different. It took me less than 5 minutes to find this: 
 https://github.com/atilaneves/unit-threaded/issues/176
That's a good example of why @trusted is broken. And guess what the "proper" solution is?

https://github.com/atilaneves/unit-threaded/commit/b7457d5d317a2eb1f2bbb2ece9a80f3b26b71600#diff-1e0b7d3d93f30ae83fa8d01f92dfd5aaR59

Unsafe blocks like from Rust (aka the hacky @trusted lambda in D). The whole way @trusted works itself creates buggy unsafe code. You shouldn't be calling unsafe code from @safe code unless it is explicitly marked as unsafe. Here in this case, the "proper" way to solve this problem is the harder, uglier way to solve it. Is that the design philosophy that should be striven toward?

I remember from the other thread Walter saying that syntax is ugly on purpose so as to be avoided. Funnily, a lot of the time it's the proper way to solve the problem, yet the syntax is still ugly because someone doesn't seem to understand why.
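For readers unfamiliar with the idiom being criticised, this is roughly what the "@trusted lambda" pattern looks like; a sketch with a hypothetical C function:

extern (C) void some_c_function(int* p) @system;

void doThing(int* p) @safe
{
    // wrap just the unsafe call in an immediately-invoked @trusted lambda,
    // D's closest equivalent to Rust's `unsafe { ... }` block
    () @trusted { some_c_function(p); }();
}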
Mar 28 2020
prev sibling next sibling parent reply bachmeier <no spam.net> writes:
On Wednesday, 25 March 2020 at 18:04:33 UTC, Steven Schveighoffer 
wrote:
 On 3/25/20 1:34 PM, bachmeier wrote:
 On Wednesday, 25 March 2020 at 14:10:18 UTC, Steven 
 Schveighoffer wrote:
 
 Everything else in the DIP is possibly annoying to deal with 
 but at least doesn't silently destroy the meaning of  safe.
 To be perfectly honest, I can't imagine D being a sensible option for someone wanting to work heavily with C code if you have to add pointless annotations and constantly deal with compiler errors. It's not a matter of annoyance, it's simply impractical to add that kind of overhead, particularly if someone else is involved. If you're using C, you're well aware that it's not going to be safe. Rust was designed for *writing* safe code, not for wrapping C libraries, which is maybe the main use of D right now.

 This is overblown. Adding @system: at the top of a C library header is not hard. Tools which generate headers for C libraries (e.g. dpp) can automatically do the right thing.

Let me put it differently. Suppose I release a linear algebra library that's a wrapper over a C library. Nobody using D the way it's supposed to be used can use my library. It just doesn't make sense for a language that claims strong C interoperability.

Anyway, I'm going to let this die, because nobody else sees it as an issue.
Mar 25 2020
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, March 25, 2020 9:36:15 PM MDT bachmeier via Digitalmars-d 
wrote:
 Let me put it differently. Suppose I release a linear algebra
 library that's a wrapper over a C library. Nobody using D the way
 it's supposed to be used can use my library. It just doesn't make
 sense for a language that claims strong C interoperability.

 Anyway, I'm going to let this die, because nobody else sees it as
 an issue.
In general, a D wrapper library around a C library should be presenting an @safe API which was verified by the programmer who wrote that wrapper library to be @safe. In some cases, that isn't possible, but regardless of whether @safe and @system is actually a thing, code which can't present an @safe API is a potential memory safety problem. D just makes it possible for the compiler to flag stuff that it knows is not memory safe so that you can easily find it and fix it. And the code that actually needs to do stuff that the compiler can't guarantee is memory safe (and thus requires that the programmer verify it) is then segregated by @trusted code so that you only have to examine a relatively small portion of a library or program for potential memory safety bugs.

D code does make it much easier to integrate with C code than is the case with many other languages, but it also loses a lot of its value if the compiler treats C code as if it were @safe even though its memory safety was not verified by the compiler, and the programmer gave no indication to the compiler that they had verified it. Having the compiler treat C bindings as @safe by default would be a huge hole in @safe and make it much harder to track down bugs related to memory safety when they occur.

- Jonathan M Davis
Mar 25 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/25/2020 9:14 PM, Jonathan M Davis wrote:
 In general, a D wrapper library around a C library should be presenting an
  safe API
A D wrapper should be as thin as possible, which means if the C function being wrapped is safe then the D wrapper should be @safe, and if it is not safe then the D wrapper should be @system.

For the D wrapper developer, since he's providing a service to the D user, part of the job will be identifying which of the C interfaces are safe and which are @system (of course, he can just mark them all as @system just to get things done and move on, and it'll be up to the users of said library if that is acceptable or not).
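A sketch of that division of labour (the C functions below are hypothetical and the attribute choices are only an illustration): the wrapper author marks the interfaces he has verified as @trusted and leaves the rest @system.

extern (C)
{
    // no pointers involved, cannot corrupt memory: reasonable to mark @trusted
    int mylib_version() @trusted nothrow @nogc;

    // takes a raw pointer and a length it does not check: leave it @system
    void mylib_process(void* buf, size_t len) @system;
}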
 D code does make it much easier to integrate with C code than is the case
 with many other languages, but it also loses a lot of its value if the
 compiler treats C code as if it were  safe even though its memory safety was
 not verified by the compiler, and the programmer gave no indication to the
 compiler that they had verified it. Having the compiler treat C bindings as
 @safe by default would be a huge hole in @safe and make it much harder to
 track down bugs related to memory safety when they occur.
I seriously doubt that would be any harder than it is now. For those D programmers interfacing with C, they are more sophisticated than raw beginners, and it is reasonable to expect them to be capable of adding @system: at the start of the module before they go through and check which ones can be @safe.

P.S. I started looking through druntime/src/core/stdc/*.d. They all use @system: or @trusted: at the start. Unfortunately, little thought seems to have been put into it. For example:

https://github.com/dlang/druntime/blob/master/src/core/stdc/limits.d#L29

declares everything as @trusted, yet should be @safe. It isn't technically broken, but it isn't right.
Mar 25 2020
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, March 25, 2020 11:43:41 PM MDT Walter Bright via Digitalmars-d 
wrote:
 On 3/25/2020 9:14 PM, Jonathan M Davis wrote:
 In general, a D wrapper library around a C library should be presenting
 an  safe API
 A D wrapper should be as thin as possible, which means if the C function being wrapped is safe then the D wrapper should be @safe, and if it is not safe then the D wrapper should be @system. For the D wrapper developer, since he's providing a service to the D user, part of the job will be identifying which of the C interfaces are safe and which are @system (of course, he can just mark them all as @system just to get things done and move on, and it'll be up to the users of said library if that is acceptable or not).

There is a huge difference between providing bindings for a C library and providing a D wrapper library for a C library. IMHO, how thick the D wrapper should be very much depends on the C code in question, but in general, I would expect a D wrapper library to be trying to present a D API with the niceties that go with that, which does not necessarily mean that the wrapper is thin. Sometimes, a thin wrapper works just fine, but in general, if a wrapper is thin, then IMHO, it's providing very little value over simply using the C bindings directly and thus is of questionable utility.
 D code does make it much easier to integrate with C code than is the
 case
 with many other languages, but it also loses a lot of its value if the
 compiler treats C code as if it were  safe even though its memory safety
 was not verified by the compiler, and the programmer gave no indication
 to the compiler that they had verified it. Having the compiler treat C
 bindings as  safe by default would be a huge whole in  safe and make it
 much harder to track down bugs related to memory safety when they
 occur.
 I seriously doubt that would be any harder than it is now. For those D programmers interfacing with C, they are more sophisticated than raw beginners, and it is reasonable to expect them to be capable of adding @system: at the start of the module before they go through and check which ones can be @safe.

The problem is not that it's hard to mark extern(C) declarations as @system. The problem is that if they're automatically @safe, then it's harder to track them down when there's an @safety bug. It should always be possible to segregate memory safety bugs in @safe code by looking for @trusted code that it's calling. I don't think that _anything_ should be considered @safe unless the compiler has actually verified that it is, with @trusted of course being used to indicate that the programmer claims to have verified it. extern(C) declarations cannot be verified by the compiler and thus should never be considered @safe. At most, the programmer should be marking them with @trusted.
 P.S. I started looking through druntime/src/core/stdc/*.d. They all use
  system: or  trusted: at the start. Unfortunately, little thought seems
 to have been put into it. For example:


 https://github.com/dlang/druntime/blob/master/src/core/stdc/limits.d#L29

 declares everything as  trusted, yet should be  safe. It isn't technically
 broken, but it isn't right.
That's concerning, since it implies that whoever did it was just slapping @trusted on the various C declarations.

Honestly, I'm inclined to argue that using @trusted with : or {} is just plain bad practice. Non-extern(D) functions need to be individually verified and thus should be marked individually.

- Jonathan M Davis
Mar 25 2020
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/25/2020 11:14 PM, Jonathan M Davis wrote:
 On Wednesday, March 25, 2020 11:43:41 PM MDT Walter Bright via Digitalmars-d
 wrote:
 On 3/25/2020 9:14 PM, Jonathan M Davis wrote:
 In general, a D wrapper library around a C library should be presenting
 an  safe API
A D wrapper should be as thin as possible, which means if the C function being wrapped is safe then the D wrapper should be safe, and if is not safe then the D wrapper should be system. For the D wrapper developer, since he's providing a service to the D user, part of the job will be identifying which of the C interfaces are safe and which are system (of course, he can just mark them all as system just to get things done and move on, and it'll be up to the users of said library if that is acceptable or not).
There is a huge difference between providing bindings for a C library and providing a D wrapper library for a C library.
Yes, you're right.
 That's concerning, since it implies that whoever did it was just slapping
  trusted on the various C declarations.
Yup.
Mar 26 2020
prev sibling parent reply Kagamin <spam here.lot> writes:
On Thursday, 26 March 2020 at 05:43:41 UTC, Walter Bright wrote:
 https://github.com/dlang/druntime/blob/master/src/core/stdc/limits.d#L29

 declares everything as  trusted, yet should be  safe. It isn't 
 technically broken, but it isn't right.
That one should be @system, like any other C header. @safe is supposed to be checked by the compiler, so C headers can't possibly have @safe declarations.
Mar 25 2020
parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, March 26, 2020 12:49:25 AM MDT Kagamin via Digitalmars-d wrote:
 On Thursday, 26 March 2020 at 05:43:41 UTC, Walter Bright wrote:
 https://github.com/dlang/druntime/blob/master/src/core/stdc/limits.d#L29

 declares everything as  trusted, yet should be  safe. It isn't
 technically broken, but it isn't right.
 That one should be @system, like any other C header. @safe is supposed to be checked by the compiler, so C headers can't possibly have @safe declarations.

Really, the issue with that particular module is that it contains no function declarations whatsoever, making any kind of safety attribute utterly pointless. Either way, I agree that @safe makes no sense for extern(C) declarations of any kind. @trusted can make sense but not @safe. Unfortunately though, it _is_ currently legal to mark non-extern(D) function declarations with @safe even though the compiler isn't verifying anything.

- Jonathan M Davis
Mar 26 2020
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/25/20 11:36 PM, bachmeier wrote:
 On Wednesday, 25 March 2020 at 18:04:33 UTC, Steven Schveighoffer wrote:
 On 3/25/20 1:34 PM, bachmeier wrote:
 On Wednesday, 25 March 2020 at 14:10:18 UTC, Steven Schveighoffer wrote:

 Everything else in the DIP is possibly annoying to deal with but at 
 least doesn't silently destroy the meaning of  safe.
 To be perfectly honest, I can't imagine D being a sensible option for someone wanting to work heavily with C code if you have to add pointless annotations and constantly deal with compiler errors. It's not a matter of annoyance, it's simply impractical to add that kind of overhead, particularly if someone else is involved. If you're using C, you're well aware that it's not going to be safe. Rust was designed for *writing* safe code, not for wrapping C libraries, which is maybe the main use of D right now.

 This is overblown. Adding @system: at the top of a C library header is not hard. Tools which generate headers for C libraries (e.g. dpp) can automatically do the right thing.

 Let me put it differently. Suppose I release a linear algebra library that's a wrapper over a C library. Nobody using D the way it's supposed to be used can use my library. It just doesn't make sense for a language that claims strong C interoperability.

I understand, it is a good point. To rephrase (to make sure I understand): Today, people open an editor, start a main function, and import your library, and everything works. They don't realize or care that they are using @system code. If this DIP gets accepted, they open their editor, start a main function, which is now implied @safe, and they cannot use their library without marking their main @system.

It's an extra step, and one that forces them to think about what they are doing in terms of safety. So even though nothing has changed exactly (things that were @system are still @system), the status quo for D code changes, which means now your library moves from by-default acceptable to "you need to go into dangerous territory to use this library".

I still think this is the appropriate path. We cannot continue to ignore memory safety as a secondary concern just because C code is by-default unsafe. Memory unsafety HAS to be opt-in for any new modern language to succeed.

I think for sure people need to have guidance as to what makes sense. I think unlike something like Rust, the path to using all @system code is pretty straightforward in D. So yes, it's a burden, but with guidance and the tools we have, it should be a small burden.

And in actuality, most D code is @safe, so for most D code out there, this is not going to be a huge problem. You will have to mark few things. Wrappers/bindings for C libraries are going to be an exception, and that's just the pain we have to deal with. Either use @trusted markings to make everything @safe (after verification), or just punt to the user.
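As a sketch of what that extra step looks like for the user (all names here are hypothetical): the wrapper stays @system, and under the new default the user opts out explicitly.

extern (C) double c_solve(const(double)* a, const(double)* b, size_t n) @system;

double solve(const double[] a, const double[] b) @system
{
    return c_solve(a.ptr, b.ptr, a.length);
}

void main() @system   // explicit opt-out of the proposed @safe default
{
    auto r = solve([1.0, 2.0], [3.0, 4.0]);
}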
 Anyway, I'm going to let this die, because nobody else sees it as an issue.
I think it's worth discussing, and I hadn't thought of this perspective, so thanks for clarifying. -Steve
Mar 26 2020
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 26 March 2020 at 14:12:24 UTC, Steven Schveighoffer 
wrote:
 I still think this is the appropriate path. We cannot continue 
 to ignore memory safety as a secondary concern just because C 
 code is by-default unsafe. Memory unsafe HAS to be opt-in for 
 any new modern language to succeed.
What frustrates me about these discussions is the fact that slices always check bounds by default. The GC prevents use-after-free bugs by default. C doesn't do those. So assuming C's problems apply to D is fallacious.

Rust's complication is because they wanted to avoid the runtime checks. But D's runtime checks are also a valid solution.

I suspect 95+% of C's problems already are extremely rare in D, yet the @safe advocates never seem to consider this at all.
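For instance, even in un-annotated D (today's @system-by-default), an out-of-bounds slice access is caught at run time rather than silently corrupting memory, unless bounds checks are deliberately compiled out. A minimal sketch:

void main()
{
    auto a = [1, 2, 3];
    size_t i = a.length;   // a runtime value, so nothing is caught at compile time
    auto x = a[i];         // throws a RangeError instead of reading past the array
}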
Mar 26 2020
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/26/20 10:24 AM, Adam D. Ruppe wrote:
 On Thursday, 26 March 2020 at 14:12:24 UTC, Steven Schveighoffer wrote:
 I still think this is the appropriate path. We cannot continue to 
 ignore memory safety as a secondary concern just because C code is 
 by-default unsafe. Memory unsafe HAS to be opt-in for any new modern 
 language to succeed.
What frustrates me about these discussions is the facts that slices always check bounds by default. The GC prevents use-after-free bugs by default.
And so any code that uses the defaults will be safe and continue to compile.
 C doesn't do those. So assuming C's problems apply to D is fallacious. 
 Rust's complication is because they wanted to avoid the runtime checks. 
 But D's runtime checks are also a valid solution.
Unsafe D code can do exactly the same thing that C code does (use pointers and malloc). The runtime checks are gone at that point. How do you distinguish code like that from the good D code?
 I suspect 95+% of C's problems already are extremely rare in D, yet the 
  safe advocates never seem to consider this at all.
I consider that BECAUSE of these mitigating factors you listed, most D code is already @safe, just not marked that way. Most code that is written will currently "just work". Unless it doesn't. We are going to have a hard time finding that code or those migration pains without trying it. And it's hard to judge that this will be such a huge burden that we need to reconsider this path. This is why it should be a trial switch, and people should be encouraged to use the switch and report the pains that come from it. We can potentially make the transition pretty seamless, or even adjust our thinking to make things more palatable.

I've recently migrated 2 significant projects into fully-@safe code. One is diet-ng, which was a couple hours of work. Mostly it just consisted of slapping @safe: at the top of modules that had non-templates. This is because most of the code was ALREADY @safe. The one major difficulty? Object.opCmp is not @safe, even though the implementation in the classes was @safe. So I have this awesome shim:

https://github.com/rejectedsoftware/diet-ng/blob/e2e947f24faaa71a4bab9dd8bda6f93375c67755/source/diet/parser.d#L31-L36

The other project was mysql-native. I'm still not finished there, but the largest problem is/was Variant. Because of that, I had to create two almost identical APIs and split them into safe/unsafe packages (where eventually the unsafe packages will be deprecated and removed). Thanks to the awesome taggedalgebraic package, and Sönke, who has been really helpful including features to make it easier to do @safe code, we should be able to have a drop-in replacement for Variant that is @safe, and your code should (almost) just work. You can read about it here (not released yet):

https://github.com/schveiguy/mysql-native/blob/safeupdate/SAFE_MIGRATION.md

My point is, putting in the effort to migrate to @safe is the best way to determine where the sticking points are (and there are definitely sticking points). Hand-wavy statistics aren't persuasive.

-Steve
Mar 26 2020
next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 26 March 2020 at 15:14:35 UTC, Steven Schveighoffer 
wrote:
 My point is, putting in the effort to migrate to  safe is the 
 best way to determine where the sticking points are (and there 
 are definitely sticking points).
we should do a trial release with it all compiled in, with all the associated switches set by default, so it is dead easy to drop in and see what happens.
Mar 26 2020
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/26/2020 8:14 AM, Steven Schveighoffer wrote:
 My point is, putting in the effort to migrate to  safe is the best way to 
 determine where the sticking points are (and there are definitely sticking 
 points). Hand-wavy statistics aren't persuasive.
Yah, you never know if the paper airplane design will actually fly until you build it, gas it up, and convince your test pilot to risk his life.
Mar 26 2020
prev sibling next sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Thursday, 26 March 2020 at 14:24:24 UTC, Adam D. Ruppe wrote:

 What frustrates me about these discussions is the facts that 
 slices always check bounds by default.
As they should.
 C doesn't do those. So assuming C's problems apply to D is 
 fallacious.
C's problems apply to D as soon as you allocate on the C heap or use pointers to stack-allocated memory.
 Rust's complication is because they wanted to avoid the runtime 
 checks.
Rust's complication is because they wanted to avoid a GC, which was marketing genius. It has runtime checks for when access patterns can't be guaranteed at compile-time.
 But D's runtime checks are also a valid solution.
We can do better than that at compile time.
 I suspect 95+% of C's problems already are extremely rare in D,
Yes. The remaining 5% are all related to the stack and allocating on the C heap.
 yet the  safe advocates never seem to consider this at all.
I'm not sure what you mean by this. Is it your opinion that writing @safe code is hard and/or restrictive?

If you allocate on the GC heap and use -preview=dip1000, then writing @safe code is writing D code*, *except* when you call non-@safe library code. Unfortunately this is common because @safe isn't the default.

* Pretty much, but not exactly always
Mar 26 2020
next sibling parent reply 12345swordy <alexanderheistermann gmail.com> writes:
On Thursday, 26 March 2020 at 16:19:44 UTC, Atila Neves wrote:
 On Thursday, 26 March 2020 at 14:24:24 UTC, Adam D. Ruppe wrote:
 But D's runtime checks are also a valid solution.
We can do better that at compile time.
But not always though. There are scenarios where you have to rely on runtime checks, such as user-driven input for example.

Side note: There should be a significant push for runtime reflection if we are pushing for @safe by default. I recall Andrei talking about it in one of his presentations on how the compile-time reflection will lay the groundwork for runtime reflection. I have not seen any developments on that front, the reason being that classes can inherit from other classes.

-Alex
Mar 26 2020
parent Atila Neves <atila.neves gmail.com> writes:
On Thursday, 26 March 2020 at 17:02:51 UTC, 12345swordy wrote:
 On Thursday, 26 March 2020 at 16:19:44 UTC, Atila Neves wrote:
 On Thursday, 26 March 2020 at 14:24:24 UTC, Adam D. Ruppe 
 wrote:
 But D's runtime checks are also a valid solution.
We can do better that at compile time.
But not always though. There are scenarios where you have to rely on runtime checks, such as user driven input for example.
Sure.
 Side note: There should be a significant push for runtime 
 reflection if we are pushing for safe by default. I recall 
 Andrei talking about it in one of his presentation on how the 
 compile time reflection will lay ground work for runtime 
 reflection. I have not seen any developments on that front. 
 Reason being that classes can inherent from other classes.
I'm currently working on this.
Mar 27 2020
prev sibling parent Kagamin <spam here.lot> writes:
On Thursday, 26 March 2020 at 16:19:44 UTC, Atila Neves wrote:
 I suspect 95+% of C's problems already are extremely rare in D,
 Yes. The remaining 5% are all related to the stack and allocating on the C heap.

@safe doesn't deal with the stack and the C heap; it only suppresses C reflexes in the former 95%.
 If you allocate on the GC heap and use -preview=dip1000, then
Then you don't allocate on stack and C heap.
Mar 26 2020
prev sibling next sibling parent Kagamin <spam here.lot> writes:
On Thursday, 26 March 2020 at 14:24:24 UTC, Adam D. Ruppe wrote:
 On Thursday, 26 March 2020 at 14:12:24 UTC, Steven 
 Schveighoffer wrote:
 I still think this is the appropriate path. We cannot continue 
 to ignore memory safety as a secondary concern just because C 
 code is by-default unsafe. Memory unsafe HAS to be opt-in for 
 any new modern language to succeed.
 What frustrates me about these discussions is the fact that slices always check bounds by default. The GC prevents use-after-free bugs by default. C doesn't do those. So assuming C's problems apply to D is fallacious. Rust's complication is because they wanted to avoid the runtime checks. But D's runtime checks are also a valid solution. I suspect 95+% of C's problems already are extremely rare in D, yet the @safe advocates never seem to consider this at all.

This. Buffer overflows in D happen solely due to prejudice, when people abuse their C reflexes when writing in D, so compulsory safety may be useful to educate them to start using slices. But seriously, if it wasn't for C junkies, the last buffer overflow would have happened 30 years ago and not a second ago.
Mar 26 2020
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, March 26, 2020 8:24:24 AM MDT Adam D. Ruppe via Digitalmars-d 
wrote:
 On Thursday, 26 March 2020 at 14:12:24 UTC, Steven Schveighoffer

 wrote:
 I still think this is the appropriate path. We cannot continue
 to ignore memory safety as a secondary concern just because C
 code is by-default unsafe. Memory unsafe HAS to be opt-in for
 any new modern language to succeed.
 What frustrates me about these discussions is the fact that slices always check bounds by default. The GC prevents use-after-free bugs by default. C doesn't do those. So assuming C's problems apply to D is fallacious. Rust's complication is because they wanted to avoid the runtime checks. But D's runtime checks are also a valid solution. I suspect 95+% of C's problems already are extremely rare in D, yet the @safe advocates never seem to consider this at all.

Except that in @system code, the bounds checking gets turned off with -release. So, with @system as the default, a lot less bounds checking is going on than I think many people realize. Sure, D code is much less likely to have safety issues than C code, but the safety system is really designed with the idea that almost all code will be @safe with only pockets of it being @system or @trusted, and as long as a large percentage of code is @system, stuff like bounds checking or scope with DIP 1000 doesn't really do what it's supposed to.

- Jonathan M Davis
Mar 26 2020
next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 26 March 2020 at 23:10:17 UTC, Jonathan M Davis 
wrote:
 Except that in  system code, the bounds checking gets turned 
 off with -release.
This is one of the reasons why I tell people to NEVER use -release: it is plain awful and should be formally deprecated. The -boundscheck and -check switches fully replace it and are more obvious about what they do. But regardless, the right thing is still the default.
Mar 26 2020
prev sibling parent Kagamin <spam here.lot> writes:
On Thursday, 26 March 2020 at 23:10:17 UTC, Jonathan M Davis 
wrote:
 Except that in  system code, the bounds checking gets turned 
 off with -release. So, with  system as the default, a lot less 
 bounds checking is going on than I think many people realize.
The type system protects against involuntary mistakes; the -release switch is voluntary.
Mar 26 2020
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/26/2020 7:24 AM, Adam D. Ruppe wrote:
 I suspect 95+% of C's problems already are extremely rare in D, yet the  safe 
 advocates never seem to consider this at all.
Unfortunately, that other 5% can cost companies millions of dollars. 95% safe isn't good enough anymore. safe also saves me (and companies) time because certain types of C errors no longer have to be manually checked for. D will not prevent you from doing anything you want in the code; just add the system annotation.
Mar 26 2020
prev sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 26 March 2020 at 14:12:24 UTC, Steven Schveighoffer 
wrote:
 [snip]
 And in actuality, most D code is  safe, so for most D code out 
 there, this is not going to be a huge problem. You will have to 
 mark few things. Wrappers/bindings for C libraries are going to 
 be an exception, and that's just the pain we have to deal with. 
 Either use trusted markings to make everything safe (after 
 verification), or just punt to the user.
There is a lot of functionality that depends on C libraries. For instance, every or almost every function in lubeck calls at least one C function. If someone comes to D from python and wants to replace something from numpy/scipy with a lubeck equivalent, they will need to start slapping trusted or system on everything. That means they will need to understand the safety system and why stuff like that matters. For some people, that may be a big enough burden that they just throw up their hands and keep using python.
Mar 26 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/26/20 10:40 AM, jmh530 wrote:
 On Thursday, 26 March 2020 at 14:12:24 UTC, Steven Schveighoffer wrote:
 [snip]
 And in actuality, most D code is  safe, so for most D code out there, 
 this is not going to be a huge problem. You will have to mark few 
 things. Wrappers/bindings for C libraries are going to be an 
 exception, and that's just the pain we have to deal with. Either use 
 trusted markings to make everything safe (after verification), or just 
 punt to the user.
There is a lot of functionality that depends on C libraries. For instance, every or almost every function in lubeck calls at least one C function. If someone comes to D from python and wants to replace something from numpy/scipy with a lubeck equivalent, they will need to start slapping trusted or system on everything. That means they will need to understand the safety system and why stuff like that matters. For some people, that may be a big enough burden that they just throw up their hands and keep using python.
Writing system: at the top of your modules is not a big burden. If that drives you away from the language, then I'm sorry to say that you are missing out on the awesomeness of D, but I can't really help you. How many people were driven away from Windows development because they had to deal with __stdcall? You just googled for it (or whatever the hell was available at the time), said "oh, this is how you do it", and did it. It wasn't a problem, you just did it. This is only even a discussion because of the current situation. If D started out this way, you would have no idea there was even a problem here. -Steve
Mar 26 2020
next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 26 March 2020 at 15:02:15 UTC, Steven Schveighoffer 
wrote:
 [snip]

 At the top of your modules is not a big burden. If that drives 
 you away from the language, then I'm sorry to say that you are 
 missing out on the awesomeness of D, but I can't really help 
 you.
[snip]
I was talking about a hypothetical python person thinking about learning D, not about me personally.
Mar 26 2020
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/26/20 1:00 PM, jmh530 wrote:
 On Thursday, 26 March 2020 at 15:02:15 UTC, Steven Schveighoffer wrote:
 [snip]

 At the top of your modules is not a big burden. If that drives you 
 away from the language, then I'm sorry to say that you are missing out 
 on the awesomeness of D, but I can't really help you.
 [snip]
I was talking about a hypothetical python person thinking about learning D, not about me personally.
So was I.

Python user: How do I call libX from D?

Tutorial: Make sure you mark your functions as system, or use trusted escapes. To make things easy, just put system: at the top of your module.

User: OK, that's not too bad. (Probably.)

-Steve
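As a rough sketch of what that tutorial answer might look like in practice (the module name and c_sum are made up for illustration, not a real API):

    module mylib.bindings;

    @system:    // everything below is unchecked, just as it is today

    extern(C) double c_sum(const(double)* data, size_t len);

    // Optionally expose a @trusted wrapper so @safe user code can call it:
    double sum(const(double)[] data) @trusted
    {
        return c_sum(data.ptr, data.length);
    }

The explicit @trusted on the wrapper overrides the module-wide label, so the rest of the module stays unchecked while user code keeps its safety guarantees.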
Mar 26 2020
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/26/2020 10:00 AM, jmh530 wrote:
 I was talking about a hypothetical python person thinking about learning D,
not 
 about me personally.
Consider how the Rust folk have been successful at getting people to entirely re-engineer their programs and data structures and learn an entirely different language to get a small increment in memory safety. Rust's marketing department is very good.
Mar 27 2020
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 27/03/2020 10:26 PM, Walter Bright wrote:
 On 3/26/2020 10:00 AM, jmh530 wrote:
 I was talking about a hypothetical python person thinking about 
 learning D, not about me personally.
Consider how the Rust folk have been successful at getting people to entirely re-engineer their programs and data structures and learn an entirely different language to get a small increment in memory safety. Rust's marketing department is very good.
It is a much younger language with people who signed on for this. It's a bit late to take their approach.
Mar 27 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/27/2020 2:30 AM, rikki cattermole wrote:
 Its a bit late to take their approach.
Not at all too late. Plenty of room there.
Mar 31 2020
parent reply Arine <arine123445128843 gmail.com> writes:
On Tuesday, 31 March 2020 at 20:16:45 UTC, Walter Bright wrote:
 On 3/27/2020 2:30 AM, rikki cattermole wrote:
 Its a bit late to take their approach.
Not at all too late. Plenty of room there.
With the current implementation and proposal of live, comparing it to Rust is like comparing a pair of scissors to a lawn mower. To be comparable to something like Rust would require an entire language rewrite from the ground up. Even though there are already significant breaking changes, they aren't sufficient, and I don't imagine breaking everything completely is on the table.
Mar 31 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/31/2020 2:12 PM, Arine wrote:
 With the current implementation and proposal of  live, it is effectively the 
 equivalent of comparing a pair of scissors to a lawn mower. To be comparable
to 
 something like Rust would require an entire language rewrite from the ground
up. 
 Even though there are already significant breaking changes, they aren't 
 sufficient and I don't imagine breaking everything completely is on the table.
I've encountered such opinions my entire career. Fortunately, I never pay attention to them.
Apr 01 2020
parent Arine <arine123445128843 gmail.com> writes:
On Wednesday, 1 April 2020 at 21:32:29 UTC, Walter Bright wrote:
 On 3/31/2020 2:12 PM, Arine wrote:
 With the current implementation and proposal of  live, it is 
 effectively the equivalent of comparing a pair of scissors to 
 a lawn mower. To be comparable to something like Rust would 
 require an entire language rewrite from the ground up. Even 
 though there are already significant breaking changes, they 
 aren't sufficient and I don't imagine breaking everything 
 completely is on the table.
I've encountered such opinions my entire career. Fortunately, I never pay attention to them.
That's why Rust, a completely new language that doesn't follow the syntax of another language and requires developers to completely rewrite their code, is doing much, much better than D? Gotcha. https://github.com/dlang/dlang.org/commit/9bede81001c1b1486e749dbaf3ee81087476c9c6#diff-f40612a5a1a025f217fe29cb0df257ddR56
 The future of programming will be multicore, multithreaded. 
 Languages that
 make it easy to program them will supplant languages that don't.
 Transitive const is key to bringing D into this paradigm. The 
 surge in
 use of Haskell and Erlang is evidence of this coming trend (the 
 killer
 feature of those languages is they make it easy to do 
 multiprogramming).
This all sounds so familiar. Here you have one of D's most avoided features that was the key to bringing D into the "future". People that are gravitating towards Rust would not deem the implementation of live suitable. But ya'know, thank goodness you just ignored everyone else's opinions and continued in your spearheaded stubborn ways. I'll ask again since you didn't reply last time. Have you ever written Rust? Have you ever used Rust? From your implementation of live, I feel as though you haven't.
Apr 02 2020
prev sibling next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Friday, 27 March 2020 at 09:26:35 UTC, Walter Bright wrote:
 [snip]

 Consider how the Rust folk have been successful at getting 
 people to entirely re-engineer their programs and data 
 structures and learn an entirely different language to get a 
 small increment in memory safety.
There are use cases for Rust where an increment in memory safety without sacrificing performance is important. However, I rarely hear of anyone doing anything that interests me personally in Rust. I'm more interested in statistics and stuff like that.
 Rust's marketing department is very good.
Rust has the support of Mozilla in its marketing.
Mar 27 2020
prev sibling parent reply Arine <arine123445128843 gmail.com> writes:
On Friday, 27 March 2020 at 09:26:35 UTC, Walter Bright wrote:
 On 3/26/2020 10:00 AM, jmh530 wrote:
 I was talking about a hypothetical python person thinking 
 about learning D, not about me personally.
Consider how the Rust folk have been successful at getting people to entirely re-engineer their programs and data structures and learn an entirely different language to get a small increment in memory safety. Rust's marketing department is very good.
It's not marketing. They created a solution to a problem no one else has been able to solve. Trying the language, having it solve your problems, and continuing to use it afterwards isn't marketing. It is a completely refined package all the way down to its package manager. It's not just memory safety. Have you used Rust? Have you written Rust? Have you had to deal with a new version of Rust that breaks all your code? Have you used cargo? Have you used a cargo package that targets a different version of Rust than the one you use, and it just works (TM)? Rust is memory safe, but to attribute its success entirely to that is moronic. A GC is memory safe; do you think Rust would have been as successful with a GC? Rust provides more guarantees than simply being memory safe, something a GC doesn't provide.
Mar 28 2020
parent reply Atila Neves <atila.neves gmail.com> writes:
On Saturday, 28 March 2020 at 13:53:28 UTC, Arine wrote:
 On Friday, 27 March 2020 at 09:26:35 UTC, Walter Bright wrote:
 On 3/26/2020 10:00 AM, jmh530 wrote:
 I was talking about a hypothetical python person thinking 
 about learning D, not about me personally.
Consider how the Rust folk have been successful at getting people to entirely re-engineer their programs and data structures and learn an entirely different language to get a small increment in memory safety. Rust's marketing department is very good.
It's not marketing. They created a solution to a problem no one else has been able to.
I think they created a solution to a problem 99.9% of programmers don't have, which is "how to be memory safe without using a tracing GC". The reason it's marketing genius is it'll convince people who really really think "GCs are slow" without ever having run a profiler. Genius, I tells ya. Genius.
Mar 28 2020
parent reply Arine <arine123445128843 gmail.com> writes:
On Saturday, 28 March 2020 at 14:18:46 UTC, Atila Neves wrote:
 On Saturday, 28 March 2020 at 13:53:28 UTC, Arine wrote:
 On Friday, 27 March 2020 at 09:26:35 UTC, Walter Bright wrote:
 On 3/26/2020 10:00 AM, jmh530 wrote:
 I was talking about a hypothetical python person thinking 
 about learning D, not about me personally.
Consider how the Rust folk have been successful at getting people to entirely re-engineer their programs and data structures and learn an entirely different language to get a small increment in memory safety. Rust's marketing department is very good.
It's not marketing. They created a solution to a problem no one else has been able to.
I think they created a solution to a problem 99.9% of programmers don't have, which is "how to be memory safe without using a tracing GC". The reason it's marketing genius is it'll convince people who really really think "GCs are slow" without ever having run a profiler. Genius, I tells ya. Genius.
Like I said, Rust provides more than only a safety guarantee, something that a GC can't provide (on its own at least). I see a lot of projects that use C converting to Rust, including projects that Apple, Microsoft, etc. are working on. D is advertised as a replacement for C and C++, but it really isn't. It's designed to be easily convertible from those languages, but obviously that hasn't worked out; you don't get a benefit by keeping most of your same code and then trying to shoehorn in different design principles. People don't use C and C++ just because they "think" GCs are slow. They use them because GCs don't fit their requirements. Otherwise there's a plethora of languages built entirely around a GC, with more options in terms of a better-suited GC. I see Rust being used exactly where it makes sense: as a systems programming language where C would otherwise have been used. I've also seen it used to replace GC languages like Go on servers. Anyways, here's a good example of where a GC failed to meet the requirements, and Rust, which doesn't use one, solved the problem: https://blog.discordapp.com/why-discord-is-switching-from-go-to-rust-a190bbca2b1f
Mar 30 2020
next sibling parent reply Atila Neves <atila.neves gmail.com> writes:
 On Monday, 30 March 2020 at 15:49:55 UTC, Arine wrote:
 People don't use C and C++ just cause they "think" GCs are slow.
True. They use those languages because they were already using those languages.
 They use it because GCs don't fit their requirements.
Most of the time, because they *think* GCs don't fit their requirements.

On Monday, 30 March 2020 at 15:49:55 UTC, Arine wrote:
 Anyways, here's a good example of where GC failed to meet their 
 requirements,
Cases like this happen, but more often than not it's just, like, their opinion man.
Mar 30 2020
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 30.03.20 18:50, Atila Neves wrote:
 
 
 On Monday, 30 March 2020 at 15:49:55 UTC, Arine wrote:
 Anyways, here's a good example of where GC failed to meet their 
 requirements,
Cases like this happen, but more often than not it's just, like, their opinion man.
They were moving from _Go_ to Rust. The GC-related issue they were having seems as good an excuse as any to justify the move. :)
Mar 30 2020
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/30/20 1:24 PM, Timon Gehr wrote:
 On 30.03.20 18:50, Atila Neves wrote:
 On Monday, 30 March 2020 at 15:49:55 UTC, Arine wrote:
 Anyways, here's a good example of where GC failed to meet their 
 requirements,
Cases like this happen, but more often than not it's just, like, their opinion man.
They were moving from _Go_ to Rust. The GC-related issue they were having seems as good an excuse as any to justify the move. :)
Ya, I have not used Go, but I would suspect that D has mechanisms to mitigate some of these GC issues. Sociomantic avoids unpredictable GC cycles, but doesn't disable the GC (they still allow collections periodically IIRC). And they are built to be as fast as possible. That doesn't mean D would beat Rust in a competition on who makes the best discord software. It really depends on a lot of factors, and I don't think generalizing Go and D to be the same because they both have a GC is fair or accurate. -Steve
Mar 30 2020
parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Monday, 30 March 2020 at 18:12:03 UTC, Steven Schveighoffer 
wrote:
 Sociomantic avoids unpredictable GC cycles, but doesn't disable 
 it (they still allow collections periodically IIRC). And they 
 are built to be as fast as possible.
There were apps where a single GC cycle was not really an option, because even a small GC pause would delay our responding to requests where we couldn't afford the delay. But the solution was not really _that_ hard: be strict about using recyclable buffers and object pools, preallocate in advance so that there would be minimal resizing ... and then be really strict about keeping that policy. There's no reason Discord couldn't have done that with their Go app, but if I understood their blog post right, Go's GC force-activates a cycle every 2 minutes regardless of whether any new allocation has actually happened. (TBH I do wonder if this is really _really_ true, or whether they were just generating sufficient garbage to ensure this happened, despite their claims of efficiency.) But in any case Sociomantic could rely on the fact that in a D app no new allocations means no chance to trigger a GC cycle (which of course is why we preallocated as well as recycling buffers and objects: ideally, we wanted no heap allocation after app startup). However, it was only a few apps where this was really necessary. In fact I think a lot of the time we were much more strict about preallocation and reusable buffers than we needed to be, and the strictness was more of a hangover from working around historical bugs that occurred when using 32-bit DMD. Basically, the _other_ problem that arose in Sociomantic's use case was that if you want to keep a given app running indefinitely on the same server (and there were some apps that we never wanted to restart if we didn't absolutely have to for new deployments), then you really, really want to be sure that its long term memory usage is stable. A small daily growth can add up to a lot over months, and wind up bringing down the app or the server. And in the early days, what they found was that if they generated garbage, then slowly, over time, the memory usage would creep up and up ... so they instigated this strong "preallocate and reuse" policy to work around it. When I was fairly new in the company I got the chance to implement a new app, and quite early on my team lead sat down with me to show me how to implement and validate the prellocate-and-recycle way of doing things. The use-case meant that it was unlikely there would be a problem if we had a GC pause, and we wanted to iterate fast on this app, so I suggested we make the code simpler and just rely on the GC. He explained the long-term memory leak issue, but we agreed to let me try and observe to see what happened. And it turned out that no garbage-based memory leak emerged. Which was a nice surprise for my lead and all the other old lags in the R&D team. I don't think anybody ever did work out exactly what the problem had been in the early days, but it's likely relevant that by the time I broke the rules, the company had been using 64-bit DMD for a long time. IIRC what was suspected (N.B. this is from memory and from someone who is not an expert on the internals of the GC:-) was that with the 32-bit GC there was something about the size of GC pools or memory chunks that meant that it was very likely that you could wind up with a chunk of GC memory where all of it was in principle recyclable except for a couple of bytes, and hence you would allocate new chunks and then the same thing would happen with them, and so on until you were using far more chunks than should really have been needed. So, either in 64-bit DMD that didn't happen, or whatever GC bug it was had long been fixed anyway. 
And once that discovery was clearly established, I think we started relaxing the strictness a bit in apps that didn't need to care about GC pauses. The team that grew out of the app I was working on never did have to really care about GC issues, but ironically I did wind up rewriting that same app to make a lot more use of recyclable buffers, though not preallocation. I don't recall that it was ever really _necessary_, though: it was more of a precaution to try and ensure the same memory consumption for D1 and D2 builds of the same app, given that D2's GC seemed happy to allocate a lot more memory for the same "real" levels of use. Most likely D2 just allowed the size of the GC heap to grow a lot more before triggering a collection, but we were hyper-cautious about getting identical resource usage just on the offchance it might have been something nastier.
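For readers unfamiliar with the pattern, a rough sketch (not Sociomantic's actual code) of the preallocate-and-recycle approach described above: one allocation at startup, then the same buffer is reused for every request so the hot path never touches the GC heap.

    struct RequestBuffer
    {
        private ubyte[] storage;

        this(size_t capacity)
        {
            storage = new ubyte[](capacity); // the single allocation, at startup
            storage.length = 0;
            storage.assumeSafeAppend();      // allow in-place appending again
        }

        void put(const(ubyte)[] chunk)
        {
            storage ~= chunk;                // stays in the original block while under capacity
        }

        void reset()
        {
            storage.length = 0;              // keep the block, forget the contents
            storage.assumeSafeAppend();
        }

        const(ubyte)[] data() const { return storage; }
    }

A long-lived app would create one of these per worker at startup and call reset() between requests; the strictness was in making sure nothing on the hot path ever fell back to a plain new or an append into a fresh array.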
 That doesn't mean D would beat Rust in a competition on who 
 makes the best discord software. It really depends on a lot of 
 factors, and I don't think generalizing Go and D to be the same 
 because they both have a GC is fair or accurate.
For those apps that really couldn't afford a single GC cycle, we did have some discussions about how, if we were writing from scratch, Rust's memory model might have been a nice fit (it was no fun having to monitor those apps for signs of GC cycles and then work out what was causing them). It would certainly have been _interesting_ to try to write those apps in Rust. But I think we would have missed a lot of other things that were also important: the friendliness of the code, the ease of iteration, and especially the compile-time introspection and metaprogramming that even in D1 were a major, major help. I've had a little bit of a go at metaprogramming in Rust, and ... I can't say I like it :-) It's difficult not to feel that maybe what really made the difference for Discord was not really the language, but that this time they got the design right. But maybe, for them, Rust's strictness was a way of settling design questions that they could have sorted out for themselves but only by having debate and consensus and making sure that everyone was consistent in doing the right thing. And Rust probably took all that off the table.
Mar 31 2020
next sibling parent Jon Degenhardt <jond noreply.com> writes:
On Tuesday, 31 March 2020 at 21:26:41 UTC, Joseph Rushton 
Wakeling wrote:
 On Monday, 30 March 2020 at 18:12:03 UTC, Steven Schveighoffer 
 wrote:
 Sociomantic avoids unpredictable GC cycles, but doesn't 
 disable it (they still allow collections periodically IIRC). 
 And they are built to be as fast as possible.
There were apps where a single GC cycle was not really an option, because even a small GC pause would delay our responding to requests where we couldn't afford the delay. But the solution was not really _that_ hard: be strict about using recyclable buffers and object pools, preallocate in advance so that there would be minimal resizing ... and then be really strict about keeping that policy.
This is a very useful summary of Sociomantic's experience, thanks for taking the time to write it up and post it. The Discord blog post was a good read too.
Mar 31 2020
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
Nice and interesting writeup Joe!

I might shine some light here:

On 3/31/20 5:26 PM, Joseph Rushton Wakeling wrote:
 
 I don't think anybody ever did work out exactly what the problem had 
 been in the early days, but it's likely relevant that by the time I 
 broke the rules, the company had been using 64-bit DMD for a long time.  
 IIRC what was suspected (N.B. this is from memory and from someone who 
 is not an expert on the internals of the GC:-) was that with the 32-bit 
 GC there was something about the size of GC pools or memory chunks that 
 meant that it was very likely that you could wind up with a chunk of GC 
 memory where all of it was in principle recyclable except for a couple 
 of bytes, and hence you would allocate new chunks and then the same 
 thing would happen with them, and so on until you were using far more 
 chunks than should really have been needed.
The biggest problem in 32-bit land is that the address space is so small. With a conservative GC, it treats things that aren't pointers as pointers. This means that depending on where the system lays out your memory, likely integers have a better chance of "pinning" memory. In other words, some int on a stack somewhere is actually treated as a pointer holding some piece of memory from being collected. If that memory has pointers in it, maybe it also has ints too. Those ints are treated as pointers, so now more memory could be "caught". As your address space available shrinks, the chances of having false pinnings get higher, so it's a degenerative cycle. With 64-bit address space, typically everything is allocated far away from typical long values, so the pinning is much rarer. I'm not sure if this matches your exact problem, but I definitely am sure that 64-bit D is much less likely to leak GC memory than 32-bit D.
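A purely illustrative sketch of the false-pinning effect (hypothetical code; whether the block actually survives depends on the GC and the optimizer):

    void falsePin() @system
    {
        import core.memory : GC;

        auto block = new ubyte[](64 * 1024);
        size_t lookalike = cast(size_t) block.ptr; // an integer whose bit pattern
                                                   // happens to match a heap address
        block = null;                              // no real reference remains...
        GC.collect();                              // ...but conservative scanning of the
                                                   // stack sees `lookalike` and may keep
                                                   // the block alive anyway
    }

In a 32-bit address space, far more arbitrary integers happen to fall inside the GC heap's address range, which is why the effect compounds the way described above.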
 The team that grew out of the app I was working on never did have to 
 really care about GC issues, but ironically I did wind up rewriting that 
 same app to make a lot more use of recyclable buffers, though not 
 preallocation.  I don't recall that it was ever really _necessary_, 
 though: it was more of a precaution to try and ensure the same memory 
 consumption for D1 and D2 builds of the same app, given that D2's GC 
 seemed happy to allocate a lot more memory for the same "real" levels of 
 use.  Most likely D2 just allowed the size of the GC heap to grow a lot 
 more before triggering a collection, but we were hyper-cautious about 
 getting identical resource usage just on the offchance it might have 
 been something nastier.
This I'm sure I can answer :) It is actually something I added to the runtime -- the non-stomping array feature. In D1, an array was only appendable if it was pointing at the beginning of the block. There was no assumeSafeAppend. So if you for instance allocated a block of 16 bytes, you got a 16-byte block from the GC. But the drawback was that you could overwrite memory that was still referenced without meaning to. With the non-stomping feature, the "used" space of the array is stored in the block as well (at the end of the block). This allows the array runtime to know when it's safe to append in-place, or when a new block has to be allocated. This is actually quite necessary especially for immutable data such as strings (overwriting still-accessible immutable data is undefined behavior in D2). The drawback though, is that allocating an array of 16 bytes really needs 17 bytes (one byte for the array length stored in the block). Which actually ends up allocating a 32-byte block (GC blocks come in powers of 2). Since then, we are also storing the typeinfo in the block if the data has a destructor, meaning less space for actual data. So this probably explains why a D2 app is going to consume a bit more memory than a D1 app that is written the same. -Steve
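A small sketch of the behavior described above (exact block sizes and capacities depend on the runtime version, so treat the numbers as illustrative):

    void main()
    {
        import std.stdio : writeln;

        auto a = new ubyte[](16);
        // 16 bytes of payload plus the stored "used length" don't fit in a
        // 16-byte GC block, so the runtime hands out a larger one:
        writeln(a.capacity);

        auto b = a[0 .. 8];
        b ~= 1;                    // appending here would stomp a[8], so the
                                   // runtime reallocates b into a fresh block
        assert(b.ptr !is a.ptr);
    }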
Mar 31 2020
parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Wednesday, 1 April 2020 at 02:43:09 UTC, Steven Schveighoffer 
wrote:
 Nice and interesting writeup Joe!

 I might shine some light here:
Thanks! :-)
 The biggest problem in 32-bit land is that the address space is 
 so small. With a conservative GC, it treats things that aren't 
 pointers as pointers. This means that depending on where the 
 system lays out your memory, likely integers have a better 
 chance of "pinning" memory. In other words, some int on a stack 
 somewhere is actually treated as a pointer holding some piece 
 of memory from being collected. If that memory has pointers in 
 it, maybe it also has ints too. Those ints are treated as 
 pointers, so now more memory could be "caught". As your address 
 space available shrinks, the chances of having false pinnings 
 get higher, so it's a degenerative cycle.
Ah right, this was it! I remember several different folks discussing that with me at some point (probably my lead and Luca, on different occasions).
 With 64-bit address space, typically everything is allocated 
 far away from typical long values, so the pinning is much rarer.
Right. In fact, I think that may have been part of why my lead was happy to let me try to relax the rules with my app. I don't think we ever got _certainty_ that this was what had been impacting the older code and builds, but it was such a good contender that, with the problem no longer showing up (and 32-bit DMD long abandoned), it didn't seem worth anyone's time to dig deeper and prove it 100%. But I should check in with some folks to confirm. It may be that they explicitly identified the problem with 32-bit all those years ago, and that's why the strict rules were introduced in the first place.
 I'm not sure if this matches your exact problem, but I 
 definitely am sure that 64-bit D is much less likely to leak GC 
 memory than 32-bit D.
Yup. But once practical experience showed that, we never dived too deep on whether the problem was really not there with 64-bit, or if it was just happening so slowly that it didn't matter even for the long-lifetime server apps.
 With the non-stomping feature, the "used" space of the array is 
 stored in the block as well (at the end of the block). This 
 allows the array runtime to know when it's safe to append 
 in-place, or when a new block has to be allocated. This is 
 actually quite necessary especially for immutable data such as 
 strings (overwriting still-accessible immutable data is 
 undefined behavior in D2).

 The drawback though, is that allocating an array of 16 bytes 
 really needs 17 bytes (one byte for the array length stored in 
 the block). Which actually ends up allocating a 32-byte block 
 (GC blocks come in powers of 2).
Ah, interesting! I don't think we ever explicitly considered this (Dicebot might recall, as he thought about all the transition issues in much more depth than anyone else). It certainly could have been a factor. As I recall, apps that had really strict preallocate-and-reuse policies (and which added all the required `assumeSafeAppend` to avoid stomping prevention on reusable buffers) in general wound up with very similar memory usage (getting all the `assumeSafeAppend` in place was the tricky thing). But likely for those apps the preallocated buffers were large enough that a 1-byte addition wouldn't push them up into a larger block size. The places where we saw a big difference tended to be apps with a relatively small overall memory usage, and a quite profligate attitude towards generating garbage. But there were several of these (doing arguably quite similar things from a general design point of view) and fairly significant discrepancies in behaviour. So I suspect that more than one factor was in play. But the impact of stomping prevention and typeinfo on block size would have certainly been worth investigating if any of us had thought of it at the time (assuming my memory is right and we didn't:-) Before we derail the discussion thread any more, maybe I ought to have a chat with a few former colleagues just to refresh memories, and write this up as a blog post ...
Apr 01 2020
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/1/2020 5:40 AM, Joseph Rushton Wakeling wrote:
 Before we derail the discussion thread any more, maybe I ought to have a chat 
 with a few former colleagues just to refresh memories, and write this up as a 
 blog post ...
It'd make a great blog post. Please do!
Apr 01 2020
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/31/2020 2:26 PM, Joseph Rushton Wakeling wrote:
 [...]
Thanks for the great read.
Apr 01 2020
prev sibling next sibling parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Monday, 30 March 2020 at 17:24:44 UTC, Timon Gehr wrote:
 On 30.03.20 18:50, Atila Neves wrote:
 
 
 On Monday, 30 March 2020 at 15:49:55 UTC, Arine wrote:
 Anyways, here's a good example of where GC failed to meet 
 their requirements,
Cases like this happen, but more often than not it's just, like, their opinion man.
They were moving from _Go_ to Rust. The GC-related issue they were having seems as good an excuse as any to justify the move. :)
Reading the comments, a lot of them are related to the fact that they switched in May 2019 from Go 1.9 to Rust _nightly_, while, for example, Go 1.12 was released in Feb 2019. They updated the notes saying that they tested 1.10 without noticing any improvements ... folks ask for a comparison against at least 1.13 ... Well, knowing how much work has gone into improving Go's GC performance in the latest releases, I think Timon is right!
Mar 31 2020
prev sibling parent Atila Neves <atila.neves gmail.com> writes:
On Monday, 30 March 2020 at 17:24:44 UTC, Timon Gehr wrote:
 On 30.03.20 18:50, Atila Neves wrote:
 
 
 On Monday, 30 March 2020 at 15:49:55 UTC, Arine wrote:
 Anyways, here's a good example of where GC failed to meet 
 their requirements,
Cases like this happen, but more often than not it's just, like, their opinion man.
They were moving from _Go_ to Rust. The GC-related issue they were having seems as good an excuse as any to justify the move. :)
Of course! That's why I said "cases like this happen", by which I meant "sometimes, it's true that the project can't afford a GC".
Mar 31 2020
prev sibling parent reply Meta <jared771 gmail.com> writes:
On Monday, 30 March 2020 at 15:49:55 UTC, Arine wrote:
 I've also seen it used to replace GC languages like Go on 
 servers as well:

 Anyways, here's a good example of where GC failed to meet their 
 requirements, and Rust solved their problem as it doesn't use a 
 GC.

 https://blog.discordapp.com/why-discord-is-switching-from-go-to-rust-a190bbca2b1f
Let's be honest, *anything* would be better than Go, for a reasonable value of "anything". ;-)
Mar 30 2020
parent reply JN <666total wp.pl> writes:
On Monday, 30 March 2020 at 19:32:54 UTC, Meta wrote:
 On Monday, 30 March 2020 at 15:49:55 UTC, Arine wrote:
 I've also seen it used to replace GC languages like Go on 
 servers as well:

 Anyways, here's a good example of where GC failed to meet 
 their requirements, and Rust solved their problem as it 
 doesn't use a GC.

 https://blog.discordapp.com/why-discord-is-switching-from-go-to-rust-a190bbca2b1f
Let's be honest, *anything* would be better than Go, for a reasonable value of "anything". ;-)
Umm no. Go is a solid language, with a very good toolchain, tooling, documentation, ecosystem. It might not have the fanciest language features and it doesn't invent a new paradigm. I know this comment is sarcastic in nature, but I wouldn't underestimate Go.
Mar 31 2020
parent Mike Parker <aldacron gmail.com> writes:
On Tuesday, 31 March 2020 at 12:15:28 UTC, JN wrote:

 Umm no. Go is a solid language, with a very good toolchain, 
 tooling, documentation, ecosystem. It might not have the 
 fanciest language features and it doesn't invent a new 
 paradigm. I know this comment is sarcastic in nature, but I 
 wouldn't underestimate Go.
Everyone, this thread has gone way off topic. Let's please stick to discussion of DIP 1028. Thanks!
Mar 31 2020
prev sibling parent reply Arine <arine123445128843 gmail.com> writes:
On Thursday, 26 March 2020 at 15:02:15 UTC, Steven Schveighoffer 
wrote:
 On 3/26/20 10:40 AM, jmh530 wrote:
 On Thursday, 26 March 2020 at 14:12:24 UTC, Steven 
 Schveighoffer wrote:
 [snip]
 And in actuality, most D code is  safe, so for most D code 
 out there, this is not going to be a huge problem. You will 
 have to mark few things. Wrappers/bindings for C libraries 
 are going to be an exception, and that's just the pain we 
 have to deal with. Either use trusted markings to make 
 everything safe (after verification), or just punt to the 
 user.
There is a lot of functionality that depends on C libraries. For instance, every or almost every function in lubeck calls at least one C function. If someone comes to D from python and wants to replace something from numpy/scipy with a lubeck equivalent, they will need to start slapping trusted or system on everything. That means they will need to understand the safety system and why stuff like that matters. For some people, that may be a big enough burden that they just throw up their hands and keep using python.
Writing system: At the top of your modules is not a big burden. If that drives you away from the language, then I'm sorry to say that you are missing out on the awesomeness of D, but I can't really help you.
I can: just write trusted: instead. This will be my go-to solution, as D lacks the means many other languages have to maintain compatibility. In C++ I can choose the standard I want to use and avoid any removed or changed features, like any other established language.
 How many people were driven away from windows development 
 because they had to deal with __stdcall? You just googled for 
 it (or whatever the hell was available at the time), said "oh 
 this is how you do it", and did it. It wasn't a problem, you 
 just did it.
You don't need to do that anymore; they changed it (for the better). It probably did drive people away, especially when you deal with small issues constantly. They start piling up, and you can only deal with so many tiny issues (of which D has many, and the number will only grow with all the new changes coming forward).
 This is only even a discussion because of the current 
 situation. If D started out this way, you would have no idea 
 there was even a problem here.

 -Steve
Right, because people don't want to have to keep updating their code or have it be broken. Other languages ensure backwards compatibility, and if they don't, they provide a way to keep existing code working without having to modify it. D's solution to the problem: "this is the best practice now". That doesn't stop already-written code from being broken.
Mar 26 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/26/20 2:56 PM, Arine wrote:
 On Thursday, 26 March 2020 at 15:02:15 UTC, Steven Schveighoffer wrote:
 Writing

  system:

 At the top of your modules is not a big burden. If that drives you 
 away from the language, then I'm sorry to say that you are missing out 
 on the awesomeness of D, but I can't really help you.
I can, just write trusted:
This isn't a good idea. But yeah, you can do that at your own peril (and at the peril of anyone who uses your library). I'd highly recommend using system: instead.
 
 This will be my goto solution as D lacks the means many other languages 
 have to maintain compatibility.
The means are: use the correct markings. system works now, and will after this change.
 How many people were driven away from windows development because they 
 had to deal with __stdcall? You just googled for it (or whatever the 
 hell was available at the time), said "oh this is how you do it", and 
 did it. It wasn't a problem, you just did it.
You don't need to do that anymore, they changed it (for the better). It probably did drive people away. Especially when you deal with small issues consistently. They start piling up, and you can only deal with so many tiny issues (which D has many of; and will only grow with all the new changes coming forward).
Meh, you just do it. Saying "why do I have to write system when I don't care about safety" is like saying "why do I have to write void when I have no return value". Just do it, and your code works. For those who don't care.
 
 This is only even a discussion because of the current situation. If D 
 started out this way, you would have no idea there was even a problem 
 here.
Right, because people don't want to have to keep updating their code or have it be broken. Other languages ensure backwards compatibility, and if they don't they provide a way to keep the code working without having to modify the code to work.
I think the intention is to have an automated tool to mark things that are currently system as system explicitly. I would expect that feature in dfix before this becomes the default. You are free to use other languages if you feel that way. IMO this change is for the better, and provides a much healthier default. Without it, most code is safe but not marked as such, since system is the default. The new default will allow more usage of safe.
 D's solution to the problem: "this is the best practice now". That 
 doesn't stop already written code from being broken.
C++. Don't pick Swift though, they change stuff all the time, it probably is going to die soon, I doubt anyone will put up with that. -Steve
Mar 26 2020
next sibling parent reply Arine <arine123445128843 gmail.com> writes:
On Thursday, 26 March 2020 at 19:30:16 UTC, Steven Schveighoffer 
wrote:
 On 3/26/20 2:56 PM, Arine wrote:
 On Thursday, 26 March 2020 at 15:02:15 UTC, Steven 
 Schveighoffer wrote:
 Writing

  system:

 At the top of your modules is not a big burden. If that 
 drives you away from the language, then I'm sorry to say that 
 you are missing out on the awesomeness of D, but I can't 
 really help you.
I can, just write trusted:
This isn't a good idea. But yeah, you can do that at your own peril (and anyone who uses your library). I'd highly recommend using system: instead.
There are problems with system:, and they are outlined in great detail in the first review thread (just as safe: has the same issues). I don't really feel like repeating the same arguments over and over again just to have someone tell me to just use system: again. You can search the previous thread. trusted: will be the only truly easy solution. If you are writing system code, it really doesn't matter to you either way. Those that suffer are the people that care about safe, just as it is the people using system now that will suffer if this DIP goes through without any kind of proper backwards compatibility measure in place (like any good language has).
 This will be my goto solution as D lacks the means many other 
 languages have to maintain compatibility.
The means are: use the correct markings. system works now, and will after this change.
They are correct for the *current* version of D. This DIP will break that. If I'm using a library and the author hasn't updated it, it is now my burden to update their library as well. I have to maintain their library. The problem isn't what works now. The problem is that there is going to be a change, and that change is going to break code. That code will have to be updated to work after the change. There's no easy way to just have that code work again. What you are basically telling people is that if they are using 10 different libraries, they should take on the responsibility of maintaining those libraries and updating them to be compatible with this change. This is not **practical**. You know what Rust does? There's a version indicator in cargo config files that specifies what version of Rust to use. Let's say Rust decided to change the default to be "unsafe" in the next version. If you create a new project and use a library that still requires the old version of Rust, you can still build and use that library like normal. How is this such a foreign concept that people are willing to argue tooth and nail against having backwards compatibility, instead of breaking a large amount of code because a default is being changed? (Don't forget about nothrow; this is why DIPs shouldn't be looked at on their own, and why most other languages create different versions/standards.)
 How many people were driven away from windows development 
 because they had to deal with __stdcall? You just googled for 
 it (or whatever the hell was available at the time), said "oh 
 this is how you do it", and did it. It wasn't a problem, you 
 just did it.
You don't need to do that anymore, they changed it (for the better). It probably did drive people away. Especially when you deal with small issues consistently. They start piling up, and you can only deal with so many tiny issues (which D has many of; and will only grow with all the new changes coming forward).
Meh, you just do it. Saying "why do I have to write system when I don't care about safety" is like saying "why do I have to write void when I have no return value". Just do it, and your code works. For those who don't care.
Why do I have to fix my code because of a change the compiler made, with no easy way to maintain backwards compatibility? Updating my code isn't an easy solution. That's the problem. Practicality is what is at stake. People don't have time to waste just keeping their code compiling.
 This is only even a discussion because of the current 
 situation. If D started out this way, you would have no idea 
 there was even a problem here.
Right, because people don't want to have to keep updating their code or have it be broken. Other languages ensure backwards compatibility, and if they don't they provide a way to keep the code working without having to modify the code to work.
I think the intention is to have an automated tool to mark things that are currently system as system explicitly. I would expect that feature in dfix, before this would become the default. You are free to use other languages if you feel that way. IMO this change is for the better, and provides a much healthier default. Without it, most code is safe-but-not-marked as that is the default. The new default will allow more usage of safe.
This isn't the only breaking change taking place. There are a large number of breaking changes happening. Yes, this is only one DIP, and I'm sure someone will pop up saying "don't discuss other DIPs, this thread is only for DIPxxxx". That's the problem when there's only one "version" of D and DIPs are split apart without being able to look at the overall picture of what is happening. I agree, it is probably for the better. The changes made in Python 3 were for the better, too. Look at what they did for changes that were for the better. Did they break all code in existence? Or did they provide a means for people to keep their existing code working without a significant waste of resources to update it?
 D's solution to the problem: "this is the best practice now". 
 That doesn't stop already written code from being broken.
C++. Don't pick Swift though, they change stuff all the time, it probably is going to die soon, I doubt anyone will put up with that. -Steve
The way Apple operates, I wouldn't doubt Swift being killed off :). I see Walter and Apple having a lot in common, except that one is actually successful.
Mar 26 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/26/20 7:31 PM, Arine wrote:
 There are problems with  system:, they are outlined in great detail in 
 the first review thread (just as  safe: has the same issues)
They are not the same issues. If you put safe: at the top, you get no inference for templates, so those that could compile as system code cannot compile now. system: does indeed turn off inference too, but there are fewer restrictions for system, so more stuff can compile. If you don't care, then just put system: at the top. On the off chance that something needs to be safe (like a lambda), mark it trusted. -Steve
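A hedged sketch of the inference point (the names are made up): an explicit attribute label fixes a template's safety, while leaving it unannotated lets each instantiation be inferred.

    void apply(alias fn)() { fn(); }   // unannotated: safety inferred per instantiation

    @safe   void safeFn()   {}
    @system void systemFn() {}

    void user() @safe
    {
        apply!safeFn();        // fine: this instantiation is inferred @safe
        // apply!systemFn();   // error: that instantiation infers @system

        // Under a module-wide `@safe:` label, apply itself is forced @safe and
        // apply!systemFn cannot compile at all; under `@system:` it compiles,
        // and a @safe caller can still reach it through a @trusted escape:
        () @trusted { apply!systemFn(); } ();
    }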
Mar 26 2020
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2020-03-26 20:30, Steven Schveighoffer wrote:

 Don't  pick Swift though, they change stuff all the time, it probably is going 
 to die soon, I doubt anyone will put up with that.
In Swift 5, Apple stabilized the ABI. With every new version of Xcode, when you open your old project you get the option to automatically upgrade the code to the latest version. That's not something we're offering. -- /Jacob Carlborg
Mar 28 2020
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/25/2020 11:04 AM, Steven Schveighoffer wrote:
 This is overblown. Adding  system: at the top of a c library header is not
hard.
https://github.com/dlang/druntime/pull/3008/files
Mar 26 2020
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/25/2020 10:34 AM, bachmeier wrote:
 To be perfectly honest, I can't imagine D being a sensible option for someone 
 wanting to work heavily with C code if you have to add pointless annotations
and 
 constantly deal with compiler errors. It's not a matter of annoyance, it's 
 simply impractical to add that kind of overhead, particularly if someone else
is 
 involved.
As someone who has converted a great deal of C code to D, I was more than satisfied with the results. Very few annotations were needed, and the ones that were needed improved the self-documenting clarity of the code. Needing a system annotation on your code should be rare, and it is certainly not pointless.
 If you're using C, you're well aware that it's not going to be safe.
It's very possible to write safe C code by writing in "D style". Of course, the C compiler won't check that. That's where DasBetterC comes in!
Mar 25 2020
prev sibling next sibling parent Jesse Phillips <Jesse.K.Phillips+D gmail.com> writes:
On Wednesday, 25 March 2020 at 14:10:18 UTC, Steven Schveighoffer 
wrote:
 The DIP should be rejected IMO unless all functions with no 
 mechanism to mangle  safe into the name (e.g. extern(C), 
 extern(C++), etc) that have no implementation are either:

 a) required to be marked, or
 b) default  system.
I realize my deprecation suggestion has some complication in static analysis, but I really think it is required to smooth the transition.

* methods using unsafe language features must have a system annotation; trusted might be an option.
* a method must be annotated if it calls a system method

The first gives the first chance to determine if your API is safe. The second creates the next layer of opportunity to make a safe interface. If you mark it system, then the second rule continues to have you evaluate for a safe interface. This encourages safe interfaces early, while not doing this encourages safe interfaces only at the top.
Mar 25 2020
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On Thu, Mar 26, 2020 at 12:15 AM Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 In response to Walter's response to ag*, I would say that there is a
 fatal problem with automatically allowing extern(C) function prototypes
 (and really, anything that does not mangle  safe) to be default  safe.

 The reason is simple -- the change is silent and automatically marks
 everything  safe that has not been checked.

 I would argue that if the compiler is going to make things  safe by
 default, then things that are not marked and are not  safe should not
 compile AT ALL COSTS. Otherwise the value of  safe is completely lost.

 The DIP should be rejected IMO unless all functions with no mechanism to
 mangle  safe into the name (e.g. extern(C), extern(C++), etc) that have
 no implementation are either:

 a) required to be marked, or
 b) default  system.

 Everything else in the DIP is possibly annoying to deal with but at
 least doesn't silently destroy the meaning of  safe.

 I will note that I support the notion of  safe by default. I would be in
 favor of the DIP as long as this fatal flaw is not included.

 -Steve
I'm really on board with this. My feeling though is that it should be b; extern(Language) should be system by default, except maybe extern(Rust), which nobody has ever asked for.
Mar 26 2020
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On Thu, Mar 26, 2020 at 5:17 PM Manu <turkeyman gmail.com> wrote:

 On Thu, Mar 26, 2020 at 12:15 AM Steven Schveighoffer via Digitalmars-d <
 digitalmars-d puremagic.com> wrote:

 In response to Walter's response to ag*, I would say that there is a
 fatal problem with automatically allowing extern(C) function prototypes
 (and really, anything that does not mangle  safe) to be default  safe.

 The reason is simple -- the change is silent and automatically marks
 everything  safe that has not been checked.

 I would argue that if the compiler is going to make things  safe by
 default, then things that are not marked and are not  safe should not
 compile AT ALL COSTS. Otherwise the value of  safe is completely lost.

 The DIP should be rejected IMO unless all functions with no mechanism to
 mangle  safe into the name (e.g. extern(C), extern(C++), etc) that have
 no implementation are either:

 a) required to be marked, or
 b) default  system.

 Everything else in the DIP is possibly annoying to deal with but at
 least doesn't silently destroy the meaning of  safe.

 I will note that I support the notion of  safe by default. I would be in
 favor of the DIP as long as this fatal flaw is not included.

 -Steve
I'm really on board with this. My feeling though is that it should be b; extern(Language) should be system by default, except maybe extern(Rust), which nobody has ever asked for.
There's also the matter that with this DIP we REALLY need trusted expressions or scopes. It's inappropriate that you need to tag an entire function when you just want to make one unsafe call and the few lines of surrounding context assert any safety requirements. I also think that the names make little sense in a safe-by-default world, and really there should only be unsafe. It's not necessary to have system AND trusted.
Mar 26 2020
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/26/2020 12:29 AM, Manu wrote:
 There's also the matter that with this DIP we REALLY need  trusted
expressions 
() trusted { expression; } ();

Yes, it's a bit inconvenient, and it's supposed to be inconvenient. Making it too convenient means it'll get used far too often.
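For example, a hedged sketch of the idiom in context (someSystemCall is a placeholder, not a real function):

    extern(C) void someSystemCall(int* p, size_t n) @system;  // hypothetical prototype

    void process(int[] data) @safe
    {
        // the function stays @safe; only this one call is explicitly vouched for
        () @trusted { someSystemCall(data.ptr, data.length); } ();
    }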
 I also think that the names make little sense in a  safe-by-default world,
and 
 really there should only be  unsafe.
Please, no bikeshedding about the name. We considered unsafe originally, but decided on system because "unsafe" implies unsafe but that isn't what it means in this context, it means "not machine verified to be safe". "system" has the right connotations. We're not changing it. We've got far, far more important things to do than argue about the name.
 It's not necessary to have  system AND  trusted.
safe code isn't supposed to call system functions directly, because they have an unsafe interface.
Mar 26 2020
prev sibling next sibling parent reply Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Wednesday, 25 March 2020 at 14:10:18 UTC, Steven Schveighoffer 
wrote:
 I would argue that if the compiler is going to make things 
  safe by default, then things that are not marked and are not 
  safe should not compile AT ALL COSTS. Otherwise the value of 
  safe is completely lost.
I don't think extern(C) should get special behavior.

- For code which can be machine-verified as safe, the compiler should default to treating it as safe and ask the programmer to specify otherwise ( system or trusted).
- For code which the compiler cannot verify as safe, it should be assumed to be system.

This applies to extern(D) declarations, too. Effectively, this would mean that functions with a body won't need a safe annotation, but functions without a body will require it. However, I think this is the more correct solution despite this issue.
Mar 26 2020
parent reply Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Friday, 27 March 2020 at 02:31:13 UTC, Vladimir Panteleev 
wrote:
 This applies to extern(D) declarations, too. Effectively, this 
 would mean that functions with a body won't need a  safe 
 annotation, but functions without a body will require it. 
 However, I think this is a more correct solution despite this 
 issue.
An afterthought. Because safety is part of D mangling, body-less extern(D) declarations could be assumed to be safe. A mismatch (assumption of safe but a system implementation) will result in a linker error. This applies to all mangling schemes which can represent safe, which is only extern(D) at the moment. So, if any calling convention would be special in this regard, it would be extern(D).
Mar 26 2020
parent reply Mathias Lang <pro.mathias.lang gmail.com> writes:
On Friday, 27 March 2020 at 02:35:56 UTC, Vladimir Panteleev 
wrote:
 On Friday, 27 March 2020 at 02:31:13 UTC, Vladimir Panteleev 
 wrote:
 This applies to extern(D) declarations, too. Effectively, this 
 would mean that functions with a body won't need a  safe 
 annotation, but functions without a body will require it. 
 However, I think this is a more correct solution despite this 
 issue.
An afterthought. Because safety is part of D mangling, body-less extern(D) declarations could be assumed to be safe. A mismatch (assumption of safe but a system implementation) will result in a linker error. This applies to all mangling schemes which can represent safe, which is only extern(D) at the moment. So, if any calling convention would be special in this regard, it would be extern(D).
Linker errors are one of the most unfriendly ways to report a diagnostic. Having all declarations without definitions need to be explicitly annotated is IMO much better. There aren't that many places out there where such a pattern is used anyway.
Mar 26 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/26/20 11:44 PM, Mathias Lang wrote:
 On Friday, 27 March 2020 at 02:35:56 UTC, Vladimir Panteleev wrote:
 On Friday, 27 March 2020 at 02:31:13 UTC, Vladimir Panteleev wrote:
 This applies to extern(D) declarations, too. Effectively, this would 
 mean that functions with a body won't need a  safe annotation, but 
 functions without a body will require it. However, I think this is a 
 more correct solution despite this issue.
An afterthought. Because safety is part of D mangling, body-less extern(D) declarations could be assumed to be safe. A mismatch (assumption of safe but a system implementation) will result in a linker error. This applies to all mangling schemes which can represent safe, which is only extern(D) at the moment. So, if any calling convention would be special in this regard, it would be extern(D).
Linker error are some of the most unfriendly way to do diagnostic. Having all declarations without definitions needing to be explicitly annotated is IMO much better. There aren't that many places out there where such a pattern is used anyway.
How does this fix the problem though? Today, if you use a wrong type, you get a linker error. I don't see why that's less confusing than not marking a function system. And what if you mark it system, but it's really safe? Linker error. Just another thing to mess up. I would say extern(D) functions are fine as prototypes without extra markings. -Steve
Mar 26 2020
prev sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/25/20 10:10 AM, Steven Schveighoffer wrote:
 In response to Walter's response to ag*, I would say that there is a 
 fatal problem with automatically allowing extern(C) function prototypes 
 (and really, anything that does not mangle  safe) to be default  safe.
 
 The reason is simple -- the change is silent and automatically marks 
 everything  safe that has not been checked.
 
 I would argue that if the compiler is going to make things  safe by 
 default, then things that are not marked and are not  safe should not 
 compile AT ALL COSTS. Otherwise the value of  safe is completely lost.
 
 The DIP should be rejected IMO unless all functions with no mechanism to 
 mangle  safe into the name (e.g. extern(C), extern(C++), etc) that have 
 no implementation are either:
 
 a) required to be marked, or
 b) default  system.
 
 Everything else in the DIP is possibly annoying to deal with but at 
 least doesn't silently destroy the meaning of  safe.
 
 I will note that I support the notion of  safe by default. I would be in 
 favor of the DIP as long as this fatal flaw is not included.
I thought of a third option that could work: when an extern(C) safe function is compiled, produce 2 symbols that point at the same function text, one that is the normal extern(C) function symbol name (_foo), and one that is mangled with something like e.g. _dsafe_foo. When there is an extern(C) safe prototype, only look for the _dsafe_foo version. If there is an extern(C) trusted or system prototype, then look for the normal _foo symbol.

What does this accomplish?

1. You can keep safe-by-default extern(C) functions, as they now will not link if the real function is system or built in something other than D.
2. trusted "overrides" still work. In other words, the function is implemented in C but can technically be trusted because it doesn't involve memory safety problems.
3. Calls from other languages that bind to C still work. In other words, you can still call foo from C/C++ through the normal C symbol.
4. Any changes to the actual implementation that cause the prototype to be trusted or system will now fail to link, as the _dsafe_foo symbol goes away.

I don't know how this might work for extern(C++), and I'm also not sure how the symbol emission works, but I feel like this should be possible. It would be a benefit even if we don't make safe the default for extern(C).

-Steve
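A rough sketch of the user-facing side of this idea (the `_dsafe_` prefix comes from the proposal above; the modules and functions are made up, and the symbol-emission details are exactly the part that is not worked out):

```d
// ----- clib.d ----- (library side)
// Under the proposal, this definition would be emitted under two symbols:
// the plain C name (foo) and an additional safety-mangled one (_dsafe_foo).
extern(C) void foo() @safe
{
}

// ----- app.d ----- (client side)
// An extern(C) @safe prototype would bind only to the safety-mangled symbol,
// so it links only if the implementation really was verified @safe; a
// @system or @trusted prototype (or a call from C) would bind to the plain
// C symbol, as today.
extern(C) void foo() @safe;

void callIt() @safe
{
    foo();
}
```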
Apr 08 2020
prev sibling next sibling parent reply Jonathan Marler <johnnymarler gmail.com> writes:
On Wednesday, 25 March 2020 at 07:02:32 UTC, Mike Parker wrote:
 This is the discussion thread for the Final Review of DIP 1028, 
 "Make  safe the Default":

 https://github.com/dlang/DIPs/blob/5afe088809bed47e45e14c9a90d7e78910ac4054/DIPs/DIP1028.md

 The review period will end at 11:59 PM ET on April 8, or when I 
 make a post declaring it complete. Discussion in this thread 
 may continue beyond that point.

 Here in the discussion thread, you are free to discuss anything 
 and everything related to the DIP. Express your support or 
 opposition, debate alternatives, argue the merits, etc.

 However, if you have any specific feedback on how to improve 
 the proposal itself, then please post it in the feedback 
 thread. The feedback thread will be the source for the review 
 summary I write at the end of this review round. I will post a 
 link to that thread immediately following this post. Just be 
 sure to read and understand the Reviewer Guidelines before 
 posting there:

 https://github.com/dlang/DIPs/blob/master/docs/guidelines-reviewers.md

 And my blog post on the difference between the Discussion and 
 Feedback threads:

 https://dlang.org/blog/2020/01/26/dip-reviews-discussion-vs-feedback/

 Please stay on topic here. I will delete posts that are 
 completely off-topic.
Never thought D would actually do this. I'm on the fence as to whether this is a good idea. However, with all the D code I've written that would break from this, I'd prefer not to do it; that is, unless the people who want this feature are willing to go through the hundreds of thousands of lines of D code that I've written and fix it :) Looks like I've got 79 repos on github with D in them (although some are just forks and some are private). https://github.com/marler8997?tab=repositories&q=&type=&language=d
Mar 25 2020
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/25/20 1:53 PM, Jonathan Marler wrote:

 that is, unless the 
 people who want this feature are willing to go through hundreds of 
 thousands of lines of D code that I've written and fix it :)  Looks like 
 I've got 79 repos on github with D in them (although some are 
 just forks and some are private).
 
 https://github.com/marler8997?tab=repositories&q=&type=&language=d
I'm not willing, but I bet dfix will be. -Steve
Mar 25 2020
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/25/2020 10:53 AM, Jonathan Marler wrote:
 I'm on the fence as to whether this is 
 a good idea.  However, with all the D code I've written that would break from 
 this I'd prefer not to do this;
Are you sure there'd be that much that breaks? If much does break, I'd suggest re-evaluating the coding techniques you use. How about the bugs in your code that it may find? I'm looking forward to applying this to the D code I've written. For example, I'm pretty happy with the printf validator added to DMD. I had to fix a lot of my own code, and was glad to get it fixed. I'm happy about every bug that gets fixed the easy way (at compile time). safe by default is going to improve your code. As professionals, we should be eager for that.
Mar 25 2020
next sibling parent reply Jonathan Marler <johnnymarler gmail.com> writes:
On Thursday, 26 March 2020 at 02:58:26 UTC, Walter Bright wrote:
 On 3/25/2020 10:53 AM, Jonathan Marler wrote:
 I'm on the fence as to whether this is a good idea.  However, 
 with all the D code I've written that would break from this 
 I'd prefer not to do this;
Are you sure there'd be that much that breaks? If much does break, I'd suggest re-evaluating the coding techniques you use.
I'm constantly re-evaluating and improving my coding techniques. Something I love doing actually. I'm not sure how this feature breaking code is related to poor coding techniques though? The point is that to fix code after this feature comes in, I'll need to go through thousands of source files and tag code appropriately. What does this have to do with the quality of the code itself?
 How about the bugs in your code that it may find? I'm looking 
 forward to applying this to the D code I've written.
If it finds bugs then that's definitely an improvement that could justify the breakage. I don't know about you but the code I write almost never has bugs. It can have errors, and it can be unfinished, but almost never bugs. That's from the way I code though. I use a combination of many techniques that make it very difficult for bugs to get through. If it does find bugs I would be surprised, but I'd love it if it did!
 For example, I'm pretty happy with the printf validator added 
 to DMD. I had to fix a lot of my own code, and was glad to get 
 it fixed. I'm happy about every bug that gets fixed the easy 
 way (at compile time).
Yeah this sounds like a good thing to do for printf.
  safe by default is going to improve your code. As 
 professionals, we should be eager for that.
If that's true then I welcome it, but I've got to see it to believe it. I'd be willing to go through some projects and try this feature out to see if that's the case. I assume there's a command-line switch to enable it so I could see for myself?
Mar 25 2020
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, March 25, 2020 10:35:23 PM MDT Jonathan Marler via 
Digitalmars-d wrote:
 On Thursday, 26 March 2020 at 02:58:26 UTC, Walter Bright wrote:
 On 3/25/2020 10:53 AM, Jonathan Marler wrote:
 I'm on the fence as to whether this is a good idea.  However,
 with all the D code I've written that would break from this
 I'd prefer not to do this;
Are you sure there'd be that much that breaks? If much does break, I'd suggest re-evaluating the coding techniques you use.
I'm constantly re-evaluating and improving my coding techniques. Something I love doing actually. I'm not sure how this feature breaking code is related to poor coding techniques though? The point is that to fix code after this feature comes in, I'll need to go through thousands of source files and tag code appropriately. What does this have to do with the quality of the code itself?
Making it so that all code must be either verified by the compiler to be safe or be marked by the programmer to be trusted or system means that all code which could contain memory safety issues will be segregated by trusted or system, whereas right now, you can have large swathes of code which is not marked with anything and is unchecked. If the programmer is not using the compiler to verify safety and is not verifying system sections of code and marking it as trusted, then there are no compiler guarantees about memory safety in that code. Sure, the programmer may have done a good enough job that there are no memory safety bugs in the code (and that's far more likely with D code than C/C++ code), but by making safe the default, it makes it so that none of that will fall through the cracks unless the programmer explicitly tells the compiler not to check.

In general, it should result in either far more code being checked for memory safety by the compiler or programmers just telling the compiler to shut up by using some combination of trusted and system. So, in the cases where the programmer allows the compiler to check, there is a real possibility that memory safety bugs will be found (thus presumably resulting in those bugs being fixed), and in the cases where the programmer does not allow the compiler to check, it's much easier for other programmers to see that that's what's happening (e.g. simply grepping for system and trusted will show wherever it happens when templates are not involved, which is not currently the case).

It's certainly possible that you will find very few bugs in your code as a result of this change, but I think that it should be pretty clear that code in general will benefit, and you could be surprised by what you find in your own code. The only real downsides to this change that I'm aware of are:

1. You could have to add a bunch of annotations to existing code, which could be very annoying (which is what appears to be your main problem with the DIP).

2. It will make it a bit more annoying to throw together small programs, because you'll have to either actually make your code safe or slap system on it.

However, in both cases, if you don't care about safety, you can just slap system at the top of all of your modules, much as that isn't great practice. So, it might be briefly annoying, but it shouldn't be a big deal long term, and having safe be the default should result in far more D code in general being verified for memory safety, which will result in fewer bugs in D code in general.

- Jonathan M Davis
Mar 25 2020
parent reply IGotD- <nise nise.com> writes:
On Thursday, 26 March 2020 at 05:14:44 UTC, Jonathan M Davis 
wrote:
 Making it so that all code must be either verified by the 
 compiler to be  safe or be marked by the programmer to be 
  trusted or  system means that all code which could contain 
 memory safety issues will be segregated by  trusted or system, 
 whereas right now, you can have large swathes of code which is 
 not marked with anything and is unchecked. If the programmer is 
 not using the compiler to verify  safety and is not verifying 
  system sections of code and marking it as  trusted, then there 
 are no compiler guarantees about memory safety in that code. 
 Sure, the programmer may have done a good enough job that there 
 are no memory safety bugs in the code (and that's far more 
 likely with D code than C/C++ code), but by making  safe the 
 default, it makes it so that none of that will fall through the 
 cracks unless the programmer explicitly tells the compiler to 
 not check.
FFI functions like extern(C) must be system, as the compiler cannot check them. Should extern(C) automatically mean system? Well, I think so, because that's the only feasible possibility. I think we are heading into the safe, trusted, system discussion, and that's where I think the problem really lies: trusted just messes things up. If safe code can call system code directly, then we are good and we can use extern(C) just as before (with or without a system block, another discussion). If we have this strange "human verified" trusted nomenclature, then things start to become fuzzy. What is an FFI function, trusted or system? I think that trusted must go; then things start to clear up.
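One way to picture the trusted-vs-system question for FFI declarations (the particular markings below are an illustration, not something the DIP prescribes):

```d
// Takes and returns only values: there is no way to corrupt memory through
// its interface, so a human can reasonably hand-mark the declaration @trusted.
extern(C) @trusted int abs(int x);

// Takes a raw pointer it will free: easily misused from safe code, so the
// declaration should stay @system and be wrapped in a checked D interface.
extern(C) @system void free(void* p);
```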
Mar 26 2020
next sibling parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 26 March 2020 at 11:40:41 UTC, IGotD- wrote:
 On Thursday, 26 March 2020 at 05:14:44 UTC, Jonathan M Davis 
 wrote:
 Making it so that all code must be either verified by the 
 compiler to be  safe or be marked by the programmer to be 
  trusted or  system means that all code which could contain 
 memory safety issues will be segregated by  trusted or system, 
 whereas right now, you can have large swathes of code which is 
 not marked with anything and is unchecked. If the programmer 
 is not using the compiler to verify  safety and is not 
 verifying  system sections of code and marking it as  trusted, 
 then there are no compiler guarantees about memory safety in 
 that code. Sure, the programmer may have done a good enough 
 job that there are no memory safety bugs in the code (and 
 that's far more likely with D code than C/C++ code), but by 
 making  safe the default, it makes it so that none of that 
 will fall through the cracks unless the programmer explicitly 
 tells the compiler to not check.
FFI functions like extern(C) must be system as the compiler cannot check this. Should extern(C) automatically mean system? Well I think so because that's the only feasible possibility. I think we are heading into the safe, trusted, system discussion and that's where I think the problem really lies, that trusted just messes things up. If safe code can call system code directly then we are good and we can us extern(C) just as before (with or without system block, another discussion) If we have this strange "human verified" trusted nomenclature then things starts because fuzzy. What is an FFI function, trusted or system? I think that trusted must go, then things start to clear up.
Safe code is about preventing memory corruption:

- an external C library can have bugs, and corrupt memory
- the kernel can have bugs, and corrupt memory
- the CPU can have bugs, or designs that lead to memory dumping

At least, D has a tool, the trusted/safe attribute, that simply says:

"hey folks, I can't mechanically or manually check your code, as I don't have the sources, but function A's API description assures me that it can't corrupt memory, so I mark its declaration trusted, while function B can corrupt memory if I use the wrong parameters, so I mark it system, and I will provide a trusted D wrapper that will ensure that the correct parameters are used"

At least, that's how I read the whole thing.

Coming back to the discussion, 'not D' externs should be system by default.
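A minimal sketch of the wrapper pattern being described (c_sum and sumValues are made-up names):

```d
// The raw C declaration stays @system: handing it a bad pointer/length pair
// from safe code could corrupt memory.
extern(C) @system int c_sum(const(int)* values, size_t count);

// The wrapper is marked @trusted because its author has checked that the
// pointer/length pair it passes cannot be misused by the callee under the
// documented C contract; @safe code then calls the wrapper, never c_sum.
@trusted int sumValues(const(int)[] values)
{
    return c_sum(values.ptr, values.length);
}
```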
Mar 26 2020
parent reply IGotD- <nise nise.com> writes:
On Thursday, 26 March 2020 at 11:55:32 UTC, Paolo Invernizzi 
wrote:
 At least, D has a tool, the trusted/safe attribute that simply 
 says:

 "hey folks, I can't mechanically or manually check you code, as 
 I don't have the sources, but, function A API description 
 assurers me that can't corrupt memory, so I mark its 
 declaration trusted, while function B can corrupt memory if I 
 use the wrong parameters, so I mark it system, and I will 
 provide a trusted D wrapper that will assure that the correct 
 parameters are used"

 At least, that's how I read the whole things.

 Coming back to the discussion, 'not D' externs should be system 
 by default.
Another problem is that safe is "viral": if some function is system, then the whole function chain must be system. trusted was meant to bridge this gap, and as I think that trusted is an oxymoron, I think that we need system blocks in order to allow a system escape hatch in safe code. That's apart from my worries that safe by default will break more code than we think. The whole safe, trusted, system patchwork needs an overhaul as well in order for safe by default to be usable and convenient to work with.

The roadmap for safe as default, I think, should be:

1. Remove trusted.
2. Implement system blocks.
3. Then we can set safe as default.

Right now, in order for safe to be usable, we more or less have to introduce a new keyword like "AllowSystemInSafeCode:", otherwise code that uses FFIs cannot use safe at all. Alternatively we just make everything system, but that defeats the purpose of safe as default.
Mar 26 2020
parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 26 March 2020 at 12:18:05 UTC, IGotD- wrote:
 On Thursday, 26 March 2020 at 11:55:32 UTC, Paolo Invernizzi 
 wrote:
 At least, D has a tool, the trusted/safe attribute that simply 
 says:

 "hey folks, I can't mechanically or manually check you code, 
 as I don't have the sources, but, function A API description 
 assurers me that can't corrupt memory, so I mark its 
 declaration trusted, while function B can corrupt memory if I 
 use the wrong parameters, so I mark it system, and I will 
 provide a trusted D wrapper that will assure that the correct 
 parameters are used"

 At least, that's how I read the whole things.

 Coming back to the discussion, 'not D' externs should be 
 system by default.
<snip>
  The whole  safe,  trusted,  system patchwork needs an overhaul 
 as well in order for  safe by default should be usable and 
 convenient to work with.
I respectfully disagree: I'm pretty happy with the status quo. I would leave 'convenient' out of the discussion, as everyone has their own habits, but the current situation is definitely 'usable'.
Mar 26 2020
prev sibling next sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Thursday, 26 March 2020 at 11:40:41 UTC, IGotD- wrote:
 FFI functions like extern(C) must be  system as the compiler 
 cannot check this. Should extern(C) automatically mean  system? 
 Well I think so because that's the only feasible possibility.
The problem I see is this:

```d
extern(C) int add1(int i, int j) @safe { return i + j + 1; }
```

extern(C) doesn't necessarily mean the code is written in C and can't be verified by the compiler. It might just be written in D (or C++, or Rust, or...).

I do however understand why one would want to say that an external C library has no safe interface. But that can be done by applying system manually. I'm not sure what the best solution is.
Mar 26 2020
next sibling parent ag0aep6g <anonymous example.com> writes:
On 26.03.20 17:02, Atila Neves wrote:
 I do however understand why one would want to say that an 
 external C library has no  safe interface. But that can be done by 
 applying  system manually. I'm not sure what the best solution is.
Applying system manually is fine, as long as the compiler reminds you to do it with an error. Otherwise, the DIP that makes safe the default will end up weakening safe. Today, this rightfully fails compilation:

----
extern (C) void* memcpy(void* dst, const void* src, size_t n);

int main() @safe
{
    auto x = new int;
    auto y = new int(13);
    immutable r = new int(0);
    memcpy(x, y, 100);
    return *r;
}
----

With DIP 1028 in its current state, it passes. I think that's unacceptable. As far as I see, that's the majority opinion. Hopefully, you and/or Walter can agree and avoid accepting a DIP with a "fatal flaw" (as Steven put it).
Mar 26 2020
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, March 26, 2020 10:02:22 AM MDT Atila Neves via Digitalmars-d 
wrote:
 On Thursday, 26 March 2020 at 11:40:41 UTC, IGotD- wrote:
 FFI functions like extern(C) must be  system as the compiler
 cannot check this. Should extern(C) automatically mean  system?
 Well I think so because that's the only feasible possibility.
The problem I see is this: extern(C) int add1(int i, int j) safe { return i + j + 1; } extern(C) doesn't necessarily mean the code is written in C and can't be verified by the compiler. It might just be written in D (or C++, or Rust, or...). I do however understand why one would want to say that an external C library has no safe interface. But that can be done by applying system manually. I'm not sure what the best solution is.
The issue is that the compiler should never treat anything as safe unless it has verified that it's safe, or the programmer has explicitly marked it as trusted. Anything else introduces holes into the safety system. So, it should be fine to treat any and all function definitions as safe by default, because the compiler can verify their safety and provide the appropriate errors when the code isn't actually safe. However, any function _declarations_ which are not extern(D) must not be treated as anything other than system by default, or they introduce a hole into the safety system.

If the compiler treats them as safe by default, then it's essentially marking them as trusted for the programmer. They haven't been verified by the compiler, and they haven't been verified by the programmer. So, they silently introduce code that is potentially memory unsafe into your program. It then becomes impossible to find all memory safety issues by looking for trusted code, and it becomes trivial to accidentally use a function which really isn't safe without realizing it, because the compiler implicitly marked its declaration as safe even though it wasn't verified for safety.

extern(D) function declarations are of course fine, because the name mangling ensures that their definitions have actually been verified for safety if they're marked as safe, but the same is not true for any other function declarations. Either all non-extern(D) function declarations should be treated as system unless otherwise marked (as is currently the case), or they should require that the programmer explicitly mark them as system or trusted. Simply treating them as system by default would of course be the simplest, and I don't really see a problem with that. But no, it's not simply a question of extern(C) vs extern(D), because that's just a matter of linkage and name mangling. It's non-extern(D) function declarations specifically which are the problem.

As long as the DIP only makes safe the default for function definitions and extern(D) function declarations, then it should be fine, but it currently isn't very clear on the matter. It clearly indicates that extern(D) functions will be treated as safe by default, but it uses vague language about what's supposed to happen with non-extern(D) functions, and it makes no mention of function declaration vs definition.

- Jonathan M Davis
Mar 26 2020
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
And then there is .di files which complicates the matters further.

But I agree with you.

If the compiler _cannot_ or has _not_ confirmed it is  safe, a function 
should not be marked as  safe and be callable.
Mar 26 2020
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, March 26, 2020 4:54:53 PM MDT rikki cattermole via Digitalmars-
d wrote:
 And then there is .di files which complicates the matters further.

 But I agree with you.

 If the compiler _cannot_ or has _not_ confirmed it is  safe, a function
 should not be marked as  safe and be callable.
Actually, .di files don't complicate things much. non-extern(D) declarations have to be treated the same no matter where they are, and anything with a function body in a .di file is the same as if it were in a .d file. The only real difference is that you then have extern(D) declarations added into the mix, and because whether they're safe, trusted, or system is part of their name mangling, they cannot link if they don't have the same safety level as the corresponding function definition (which was verified by the compiler if it's safe). So, the compiler can treat extern(D) function declarations as safe by default just like it does function definitions without there being a problem. The only real issue is if the declaration and definition don't match, which is a problem regardless of what the default safety level is. - Jonathan M Davis
Mar 26 2020
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 27/03/2020 12:16 PM, Jonathan M Davis wrote:
 On Thursday, March 26, 2020 4:54:53 PM MDT rikki cattermole via Digitalmars-
 d wrote:
 And then there is .di files which complicates the matters further.

 But I agree with you.

 If the compiler _cannot_ or has _not_ confirmed it is  safe, a function
 should not be marked as  safe and be callable.
Actually, .di files don't complicate things much. non-extern(D) declarations have to be treated the same no matter where they are, and anything with a function body in a .di file is the same as if it were in a .d file. The only real difference is that you then have extern(D) declarations added into the mix, and because whether they're safe, trusted, or system is part of their name mangling, they cannot link if they don't have the same safety level as the corresponding function definition (which was verified by the compiler if it's safe). So, the compiler can treat extern(D) function declarations as safe by default just like it does function definitions without there being a problem. The only real issue is if the declaration and definition don't match, which is a problem regardless of what the default safety level is. - Jonathan M Davis
The problem that concerns me is not its linkage or where that is occurring; it's the fact that we treat .di files as trusted. We as a community have long said not to modify these files generated by the compiler. If a file is generated by the compiler, then the checks that are claimed to have occurred should actually have occurred: for example, safe on a function declaration without a body. But in a regular D file, these checks have not been run yet, as those files come from a human being and not a compiler. Hence, if a body isn't present, it cannot be safe: it has not been verified. Which means that what is allowed in a .d file is less than what is allowed in a .di file. If we follow this line of thought, it creates a problematic situation.
Mar 27 2020
parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Friday, March 27, 2020 1:04:49 AM MDT rikki cattermole via Digitalmars-d 
wrote:
 On 27/03/2020 12:16 PM, Jonathan M Davis wrote:
 On Thursday, March 26, 2020 4:54:53 PM MDT rikki cattermole via
 Digitalmars->
 d wrote:
 And then there is .di files which complicates the matters further.

 But I agree with you.

 If the compiler _cannot_ or has _not_ confirmed it is  safe, a function
 should not be marked as  safe and be callable.
Actually, .di files don't complicate things much. non-extern(D) declarations have to be treated the same no matter where they are, and anything with a function body in a .di file is the same as if it were in a .d file. The only real difference is that you then have extern(D) declarations added into the mix, and because whether they're safe, trusted, or system is part of their name mangling, they cannot link if they don't have the same safety level as the corresponding function definition (which was verified by the compiler if it's safe). So, the compiler can treat extern(D) function declarations as safe by default just like it does function definitions without there being a problem. The only real issue is if the declaration and definition don't match, which is a problem regardless of what the default safety level is. - Jonathan M Davis
The problem that concerns me is not its linkage or where that is occurring, its the fact that we treat .di files as trusted. We as a community have long said don't modify these files generated by the compiler. If it is generated by the compiler then the checks that have been said to have occurred should have occurred. For example safe on a function declaration without a body. But on a regular D file, these checks have not been checked yet. As these files come from a human being and not a compiler. Hence if a body isn't present, it cannot be safe. It has not been verified. Which means that in a .d file what is allowed is less than a .di file. If we follow this line of thought it creates a problematic situation.
I really don't get what you're trying to say. Regardless of how a .di file was created, if an extern(D) function declaration in it is safe (be it because the compiler implicitly treats it as safe or it's explicitly marked with safe), then it will only link if the corresponding function definition is safe. So, any attribute mismatches will be caught when you link your program. As such, there is no way for there to be a hole in safe due to using a .di file with an extern(D) function. Because the function definition will have been verified by the compiler, and it must have the exact same set of attributes as the function declaration for the program to link, the function _was_ verified even if your program imports the .di file rather than the .d file. You can of course have linker errors due to the .di and .d files not matching, but that happens with _anything_ that affects the function signature. It's not at all unique to safe. - Jonathan M Davis
Mar 27 2020
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Mar 26, 2020 at 04:22:01PM -0600, Jonathan M Davis via Digitalmars-d
wrote:
[...]
 So, it should be fine to treat any and all function definitions as
  safe by default, because the compiler can verify their  safety and
 provide the appropriate errors when the code isn't actually  safe.
 However, any function _declarations_ which are not extern(D) must not
 be treated as anything other than  system by default, or they
 introduce a hole into the  safety system.
[...] Very good, I think this is the key point here. If a function is *defined* (i.e., function body is visible to the compiler), then it doesn't matter whether it's extern(C) or not, it can be safe by default because the compiler will check the function body and reject it if it breaks safe. Where the problem crops up is when it's an extern(C) *declaration* without a function body. Then assuming safe by default is essentially equivalent to declaring trusted: at the top of the file, because who knows *what* that declaration will actually bind to at runtime. You're essentially blindly trusting that the C code (or D code, but the compiler can't tell) behind it is actually safe. And worse than the programmer writing trusted: at the top of the file, this is *implicit*, and the programmer may not even be aware that there's a problem. T -- There are four kinds of lies: lies, damn lies, and statistics.
Mar 26 2020
parent sarn <sarn theartofmachinery.com> writes:
On Thursday, 26 March 2020 at 23:09:23 UTC, H. S. Teoh wrote:
 On Thu, Mar 26, 2020 at 04:22:01PM -0600, Jonathan M Davis via 
 Digitalmars-d wrote: [...]
 So, it should be fine to treat any and all function 
 definitions as  safe by default, because the compiler can 
 verify their  safety and provide the appropriate errors when 
 the code isn't actually  safe. However, any function 
 _declarations_ which are not extern(D) must not be treated as 
 anything other than  system by default, or they introduce a 
 hole into the  safety system.
[...] Where the problem crops up is when it's an extern(C) *declaration* without a function body. Then assuming safe by default is essentially equivalent to declaring trusted: at the top of the file, because who knows *what* that declaration will actually bind to at runtime. You're essentially blindly trusting that the C code (or D code, but the compiler can't tell) behind it is actually safe. And worse than the programmer writing trusted: at the top of the file, this is *implicit*, and the programmer may not even be aware that there's a problem. T
I strongly agree with this viewpoint. Calling extern(C) code is like anything else safe prohibits. Yes, it *could* be safe, but the compiler should flag if safe code potentially calls code written in a language that isn't memory safe, and the onus should be on the programmer to check the code and mark it appropriately.
Mar 26 2020
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/26/2020 9:02 AM, Atila Neves wrote:
 extern(C) doesn't necessarily mean the code is written in C and can't be 
 verified by the compiler. It might just be written in D (or C++, or Rust, 
 or...).
That's right. It means "use a C function call interface". For example, one might be writing a function meant to be called from C code.
Apr 03 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/3/20 4:22 PM, Walter Bright wrote:
 On 3/26/2020 9:02 AM, Atila Neves wrote:
 extern(C) doesn't necessarily mean the code is written in C and can't 
 be verified by the compiler. It might just be written in D (or C++, or 
 Rust, or...).
That's right. It means "use a C function call interface". For example, one might be writing a function meant to be called from C code.
I want to make sure you understand that we are not talking about extern(C) functions that are written in D.

```d
extern(C) void foo() { import std.stdio; writeln("hello world!"); }
```

can absolutely be assumed safe. It has an implementation. The compiler can verify right there that it works. This should fail to compile in that case:

```d
extern(C) void foo() { *cast(int *)0xdeadbeef = 5; }
```

But what should absolutely not compile is:

```d
extern(C) int free(void *);

void foo(int *ptr) // now inferred @safe
{
   free(ptr);
}
```

Notice I didn't import core.stdc.stdlib. You cannot fix this code, it will break. Silently. Anything that depends on it will also break, cascading the error all the way through D code. Things that were system will magically become safe even though they are not. safe will become a cruel joke.

There are multiple options:

1. extern(C) (or really anything without safe name mangling) without implementation is assumed system.
2. extern(C) (et al.) without implementation must be marked system, safe, or trusted explicitly.
3. Option 1 or 2 PLUS such functions with implementation follow the same rules (for consistency).

There is no grey area -- safe is COMPLETELY destroyed if we assume all unmarked extern(C) prototypes are safe. Even if the function is written in D, the fact that the prototype marking could be forgotten is going to cause huge issues. How many times has someone needed a function from druntime that's not public but is extern(C) and just threw in a prototype? All those would now be safe!

The fact that we cannot control where/how people define their prototypes means we have to be firm on this. They need to opt-in to safe with extern(C), it cannot be default.

-Steve
Apr 03 2020
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Apr 03, 2020 at 05:06:28PM -0400, Steven Schveighoffer via
Digitalmars-d wrote:
[...]
 extern(C) int free(void *);
 
 void foo(int *ptr) // now inferred  safe
 {
    free(ptr);
 }
[...] To drive home the point even more:

```d
// ----- mymod.d -----
extern(C) void dealloc(void* p) @system
{
    import core.stdc.stdlib : free;
    free(p);
}

// ----- main.d -----
// N.B.: does not import mymod directly
extern(C) void dealloc(void* p); // assumed @safe by proposed rules

void main() @safe
{
    void* p;
    dealloc(p); // oops
}
```

Just because an extern(C) function is written in D guarantees NOTHING, because the mangled name does not encode safety. Notice above that the prototype is assumed safe, but this does not match the actual D implementation, which is system. However, this will not be caught by the linker because of extern(C): 'dealloc' will bind to the system function even though main() thought it was safe.

So yes, if this DIP gets implemented as-is, safe becomes a joke, and we might as well stop playing now.

T

-- 
Never trust an operating system you don't have source for! -- Martin Schulze
Apr 03 2020
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/3/20 5:06 PM, Steven Schveighoffer wrote:
 Even if the function is written in D, the fact that the prototype 
 marking could be forgotten is going to cause huge issues.
E.g. I don't want to see the day when subtle bugs crop up because someone forgot to go and mark this prototype as system:

https://github.com/dlang/phobos/blob/cd2b75560b089beaaf757011d308894dbe44dc93/std/stdio.d#L261

Consider it. On systems with HAS_GETDELIM, stdin.byLine, which was previously inferred system, is now inferred safe. All because stuff like this was missed. What a fun bug to track down and fix.

-Steve
Apr 03 2020
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/3/2020 2:06 PM, Steven Schveighoffer wrote:
 I want to make sure you understand that we are not talking about extern(C) 
 functions that are written in D.
I understand totally what you are talking about.
 But what should absolutely not compile is:
 
 extern(C) int free(void *);
 
 void foo(int *ptr) // now inferred  safe
 {
     free(ptr);
 }
I understand your proposal. You want C functions without bodies to be system.
 The fact that we cannot control where/how people define their prototypes means 
 we have to be firm on this. They need to opt-in to  safe with extern(C), it 
 cannot be default.
On the other hand, special cases like this tend to cause unexpected problems in the future. Experience pretty much guarantees it. It's likely to be tricky to implement as well. People remember simple rules. They don't remember rules with odd exceptions to them; that always winds up with trouble and bug reports. Simple rules applied evenly lead to a compiler that works and is reliable. I'm afraid the weight of all the special rules will crush D.

Now, as to what to do. I spent a few minutes and added ` system:` into the C headers in druntime for windows, posix, and linux. Done. I hope someone else will do it for freebsd, etc.; if not, I'll go do that too.
 is going to cause huge issues.
I doubt that, for the simple reason that system by default has not caused huge issues.

The rule is simple:

"For a D module with a bunch of C declarations in it, start it with ` system:`."

It's not a hard rule to check. It's one line. D's C interface has always relied on the user to get it right. Recall that C doesn't do any name mangling at all:

   ----- mylib.di
     extern (C) int addOne(int i);

   ----- mylib.c
     double addOne(double d) { return d + 1; }

That'll link fine and yet fail at runtime. Oops! Calling it system will not help at all. If the C implementation is bad, there's not a damn thing D can do about it, with or without system. It's always going to be this way.
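Spelled out, that one-line rule looks something like this (the module name and the particular C prototypes are made up for illustration):

```d
module mylib.bindings;

@system:  // everything below is explicitly unchecked, even with safe-by-default

extern(C) void* malloc(size_t size);
extern(C) void free(void* ptr);
extern(C) size_t strlen(const(char)* s);
```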
Apr 03 2020
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Saturday, April 4, 2020 12:53:57 AM MDT Walter Bright via Digitalmars-d 
wrote:
 On 4/3/2020 2:06 PM, Steven Schveighoffer wrote:
 I want to make sure you understand that we are not talking about
 extern(C) functions that are written in D.
I understand totally what you are talking about.
 But what should absolutely not compile is:

 extern(C) int free(void *);

 void foo(int *ptr) // now inferred  safe
 {

     free(ptr);

 }
I understand your proposal. You want C functions without bodies to be system.
Anything else would go against what safe is supposed to be promising. safe means that the compiler verified the function for memory safety, which it obviously didn't do in the case of extern(C) declarations. You're needlessly putting a hole in the safety system as part of making safe the default. We already have enough problems with trusted being used incorrectly without the compiler blindly applying it to declarations by assuming that they're safe with no verification whatsoever.
 The fact that we cannot control where/how people define their prototypes
 means we have to be firm on this. They need to opt-in to  safe with
 extern(C), it cannot be default.
On the other hand, special cases like this tend to cause unexpected problems in the future. Experience pretty much guarantees it. It's likely to be tricky to implement as well. People remember simple rules. They don't remember rules with odd exceptions to them, that always winds up with trouble and bug reports. Simple rules applied evenly lead to a compiler that works and is reliable. I'm afraid the weight of all the special rules will crush D.
I would have thought that a rule that function declarations which are not extern(D) are system by default would be very clear and simple. But if it's not, would it be more straightforward to just make all function declarations be treated as system by default? Or would it be simpler to just say that anything that isn't extern(D) is system by default? Or would it be simpler to just outright require that all non-extern(D) functions be explicitly marked by the programmer instead of assuming anything? There has got to be a straightforward rule here that doesn't involve the compiler assuming that code is safe when it hasn't verified it and the programmer hasn't told the compiler that it's trusted.

If this DIP is accepted as-is, it will no longer be the case that you can find all memory safety problems by searching for trusted code. All non-extern(D) functions will be a potential problem and will have to be checked for whether they've been explicitly marked with any attributes by the programmer or were simply assumed to be safe by the compiler, whereas right now, they have to be marked with trusted in order to be treated as safe, just like any function body where the compiler can't verify that it's memory safe.
 Now, as to what to do. I spent a few minutes and added ` system:` in to
 the C headers in druntime for windows, posix, and linux. Done. I hope
 someone else will do it for freebsd, etc., if not I'll go do that to.
If the programmer has to explicitly mark extern(C) declarations as system just so that they're not safe, then you're basically turning trusted on its head. Instead of the compiler guaranteeing that something is memory safe, and only treating something that it can't guarantee is memory safe as such when the programmer marks it as trusted, the compiler is then assuming that these functions are safe because it can't verify that they're not, forcing the programmer to mark them with system when they're not. I really don't understand how that can be viewed as acceptable, given that safe is supposed to be about compiler-verified memory safety.

And remember that not all extern(C) declarations are in druntime. Plenty of programmers interface with other C libraries. So, going through druntime and making sure that all of the extern(C) declarations in it are marked appropriately doesn't solve the problem. It just makes sure that those particular extern(C) declarations aren't being incorrectly treated as safe by the compiler.

It very much sounds like you're putting a hole in safe because you think that it's simpler to have a hole than it is to have the compiler tell the programmer that they need to actually verify extern(C) declarations for memory safety. This situation reminds me of how you tried to make it so that array bounds checking went away even in safe code. If you remove the checks, you remove the guarantees of memory safety.
  > is going to cause huge issues.

 I doubt that for the simple reason that  system by default has not caused
 huge issues.

 The rule is simple:

 "For a D module with a bunch of C declarations in it, start it with
 ` system:`."

 It's not a hard rule to check. It's one line. D's C interface has always
 relied on the user to get right. Recall that C doesn't do any name
 mangling at all:

    ----- mylib.di
      extern (C) int addOne(int i);

    ----- mylib.c
      double addOne(double d) { return d + 1; }


 That'll link fine and yet fail at runtime. Oops! Calling it  system will
 not help at all. If the C implementation is bad, there's not a damn thing
 D can do about it, with or without  system. It's always going to be this
 way.
The key difference is that right now, the programmer has to actually tell the compiler that a particular C declaration is trusted for it to be treated as safe. If they fail to take the time to mark an extern(C) declaration appropriately or they miss it for whatever reason, the code using it will have to deal with the fact that it wasn't marked properly. There will be errors when safe code tries to use it, and the programmers working on that code will know that they have to make sure that what they're doing can be trusted. With this DIP, as it currently stands, those mistakes will then be invisible, and it's much less likely that they will be caught. It goes completely against the idea that the compiler only treats code as safe if it can actually verify that it is or if the programmer has told the compiler that it is.

Of course, the programmer can use trusted incorrectly and screw themselves over, but then at least when the program has problems due to memory safety bugs, you only have to track down the trusted code to see what code you have to examine for memory safety bugs. None of it is invisible, and the compiler is not claiming that anything is safe when it's not.

Personally, I would much rather see safe not be the default than to have holes like this put in it. Yes, as things stand, too much code is left as system, and too often, programmers don't take the time to make sure that their code is safe where it can be. But at least the compiler isn't claiming that something is safe when it hasn't actually verified that it is.

And I really don't see why it would be a big problem to either treat all non-extern(D) function declarations as system by default (or even all function declarations as system by default if that's simpler) or to just simply require that the programmer explicitly mark them. It would avoid introducing holes into safe while still allowing us to have safe be the default, and it seems really simple to explain and easy to understand. Instead, with this DIP, we'll have to be explaining to people why they have to be extremely wary of any extern(C) declarations in their code leading to system code being treated as safe without the compiler saying anything about it.

- Jonathan M Davis
Apr 04 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/4/2020 1:09 AM, Jonathan M Davis wrote:
 Anything else would go against what  safe is supposed to be promising.
Extern declarations are the USER making promises. Even with extern (D), the return type isn't mangled into the name. The User has to get it right.
Apr 05 2020
next sibling parent reply Arafel <er.krali gmail.com> writes:
On 5/4/20 11:12, Walter Bright wrote:
 On 4/4/2020 1:09 AM, Jonathan M Davis wrote:
 Anything else would go against what  safe is supposed to be promising.
Extern declarations are the USER making promises. Even with extern (D), the return type isn't mangled into the name. The User has to get it right.
I'm not a heavy user or a big contributor of D, but just my 2¢: I find this totally unexpected. To me as a non-guru it breaks the principle of least astonishment, and I'm convinced it will just generate confusion and subtle bugs. Of course the user can always explicitly mark functions as system, but then I bet that more than 99% of the external declarations (without function body) will have to be tagged as such. Since Walter has always defended sane defaults, I think this is one prime case to apply this principle. In fact, a new user who is not that adept at D and its subtleties, and who just wants to interact with a C library, would find it very surprising that his expected safe code is no longer safe *without even a warning*.

Even more, marking declarations without a body as system wouldn't even be an exception to any rule. The way I understand the different safety levels is the following (please correct me if wrong):

safe: Memory safety has been proven and enforced by the compiler according to a known set of rules.
trusted: Memory safety has been manually proven / promised by the user.
system: Everything else.

If we follow this, marking *any* function without a body as system by default would be the logical thing to do, because it has been proven neither by the compiler nor (by default, one would assume) by the user. Furthermore, it shouldn't be possible for such a function to ever be safe; at most it should be trusted. In fact, assuming safe would break the promise above! The only exception would then be where the mangling already indicates this, as in "extern (D)". Since the mangling is already trusted, it would be no exception in this case.

Of course, there's the argument that the user should know that he has to do the verification for each function, and that by just putting the declaration there he is implicitly taking responsibility for the memory safety of the function, and that it's up to him to mark it as system if needed. But then, wouldn't that be an argument to assume trusted everywhere by default? If the user is responsible for marking non-trusted function declarations, why not do the same for non-trusted function definitions? Just assume that all function bodies are safe, and ask the user to slap system on those that aren't!

Isn't that an inconsistency as well?

A.
Apr 05 2020
parent reply Arafel <er.krali gmail.com> writes:
On 5/4/20 11:50, Arafel wrote:
 But then, wouldn't that be an argument to assume everywhere  trusted by 
 default? If the user is responsible for marking non-trusted function 
 declarations, why not do the same for non-trusted function definitions?
 
 Just assume that all function bodies are safe, and ask the user to slap 
  system on those who aren't!
 
 Isn't that an inconsistency as well?
To expand on this final point, I think it's quite important. Consider this:

```d
extern(C) void foo(); /* assumed @safe, or more properly, @trusted- */
extern(C) void bar() { } /* assumed @system, why not @trusted as well? */

void main() @safe {
    foo(); // OK: Here we assume the user verified the function
    bar(); // ERROR: Here we don't!!
}
```

Don't you find this inconsistent and confusing? For sure I do.

A.
Apr 05 2020
parent reply Arafel <er.krali gmail.com> writes:
On 5/4/20 12:06, Arafel wrote:
 ```d
 extern(C) void foo(); /* assumed  safe, or more properly,  trusted- */
 extern(C) void bar() { } /* assumed  system, why not  trusted as well? */
 
 void main()  safe {
      foo(); // OK: Here we assume the user verified the function
      bar(); // ERROR: Here we don't!!
 }
 ```
 
 Don't you find this inconsistent and confusing? For sure I do.
 
 A.
My bad, in this case bar() would be assumed safe and verified as well, and it would work. A better example would be:

```c
void foo(int **i) { /* assumed trusted! */
    *i = (int *) 0xDEADBEEF;
}
```

```d
extern(C) void foo(int **i); /* unsafe, but assumed @safe, or more properly, @trusted- */
extern(C) void bar(int **i) { /* properly checked, why not assume the user did it? */
    *i = cast (int *) 0xDEADBEEF;
}

void main() @safe {
    int **i;
    foo(i); // OK: Here we assume the user verified the function
    bar(i); // ERROR: Here we don't!!
}
```
Apr 05 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/5/20 6:17 AM, Arafel wrote:
 ```d
 extern(C) void foo(int **i); /* unsafe, but assumed  safe, or more 
 properly,  trusted- */
 extern(C) void bar(int **i) { /* properly checked, why not assume the 
 user did it? */
      *i = cast (int *) 0xDEADBEEF;
 }
Just to clarify, the DIP marks all functions safe by default, which means bar will fail to compile. In my proposal to change it, bar would compile, and both it and foo would be marked system (even in the cases where bar was actually safe).
 
 void main()  safe {
      int **i;
      foo(i); // OK: Here we assume the user verified the function
      bar(i); // ERROR: Here we don't!!
 }
 ```
The compiler won't get this far, bar will fail first. -Steve
Apr 05 2020
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Sunday, 5 April 2020 at 09:12:21 UTC, Walter Bright wrote:
 Extern declarations are the USER making promises. Even with 
 extern (D), the return type isn't mangled into the name. The 
 User has to get it right.
Yes but :-) The question is whether the language defaults help the user make the _right_ choices. If the easy thing to do is to do nothing -- not add any attribute -- is it more likely that this will be a correct choice, or a false promise?

I'm actually starting to think that we're getting the idea of ` safe` back to front in this discussion. As jmh530 pointed out earlier <https://forum.dlang.org/post/zxwazhgynudymqyueeyy forum.dlang.org>, ` safe` is not a guarantee that the function is memory-safe: it's a request that the compiler validate that certain potentially-unsafe operations are not performed within the function body.

BY DEFINITION those checks cannot be performed on functions where we do not have the body. So even if we ask for the checks of ` safe` to be on by default, there is nothing for them to validate in the case of external functions where we only have the signature. So it falls out quite naturally for the compiler to say here, "I can't validate this, so you have to tell me explicitly whether it is ` system` or ` trusted`". (Or perhaps to assume ` system` until explicitly told otherwise.)

The one exception would be external D functions where the mangling tells us they are ` safe`, where it would be reasonable to assume that those were checked when the library concerned was compiled. (This is by the way exactly what jmh530 already said, but I thought I'd emphasize it:-)
Apr 05 2020
prev sibling next sibling parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Sunday, April 5, 2020 3:12:21 AM MDT Walter Bright via Digitalmars-d 
wrote:
 On 4/4/2020 1:09 AM, Jonathan M Davis wrote:
 Anything else would go against what  safe is supposed to be promising.
Extern declarations are the USER making promises. Even with extern (D), the return type isn't mangled into the name. The User has to get it right.
Sure, the user has to get the declaration right, but @safe is about the compiler verifying code for memory safety, and in the case of extern(C) declarations, it has not done that. IMHO, it's not even appropriate for a programmer to put @safe on an extern(C) declaration, and it should be illegal, because the compiler didn't verify it. If the programmer verified it, then they should be using @trusted. And this DIP will end up having the compiler slap @safe on it even though it hasn't verified anything.

Right now, if a programmer doesn't mark an extern(C) declaration with @system, it's still treated as @system (and thus unverified by either the programmer or the compiler), and there are no safety problems. When the function is used in @safe code, the fact that the declaration is @system will be flagged as an error, and either the programmer will determine that it can be trusted (and thus explicitly mark it as @trusted), or the code using it will have to do stuff in a way that _it_ can be @trusted. Either way, you don't end up with code with memory safety bugs being treated as @safe unless the programmer screws up with @trusted.

After this DIP, if the programmer screws up and forgets to mark an extern(C) declaration as @system, the compiler will happily treat it as @safe even though neither the programmer nor the compiler has verified it for memory safety. And voila, the programmer's screw-up caused a bug in the code instead of being caught by the compiler, whereas with the current behavior, the compiler would catch it and force the programmer to verify the code for safety. With this DIP, the programmer can no longer rely on @safe code being memory safe so long as @trusted was not incorrectly applied.

Arguably even worse, all of the existing extern(C) declarations out there which currently are quite reasonably left unmarked and treated as @system will suddenly be treated as @safe. Code which worked perfectly fine before will break. This DIP effectively breaks _all_ existing non-extern(D) declarations which have not explicitly been marked as @system or @trusted, and it does so invisibly. To deal with this, every D library and program will have to be searched for declarations which are not extern(D) so that they can be marked with @system - something that the compiler really should be doing - and invariably, some of them will fall through the cracks and introduce safety bugs.

I really don't understand your position on this. It's so simple and straightforward to just treat all declarations which are not extern(D) as @system by default, and there's _no_ downside to it from the standpoint of anyone who isn't a compiler dev. On the contrary, it maintains the default that all non-extern(D) declarations should have, because they haven't actually been verified by the compiler for memory safety, and it prevents bugs. Making @safe the default for non-extern(D) declarations will just introduce bugs rather than fix them. It solves nothing while being error-prone and introducing a hole into @safe at the same time that we're trying to improve it.

From the standpoint of a programmer using the language, there is literally _no_ upside to treating extern(C) declarations as @safe by default. It just increases the odds that extern(C) declarations will be wrong and introduce bugs. @safe is supposed to mean that the compiler verified the function for memory safety. @trusted is supposed to mean that the programmer verified it. And @system is supposed to mean that it's unverified. _Please_ don't change that.

- Jonathan M Davis
Apr 05 2020
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/5/20 5:12 AM, Walter Bright wrote:
 On 4/4/2020 1:09 AM, Jonathan M Davis wrote:
 Anything else would go against what  safe is supposed to be promising.
Extern declarations are the USER making promises. Even with extern (D), the return type isn't mangled into the name. The User has to get it right.
But the user DID make a correct promise. Today, they promised "here's a C function, and it's system". Changing the language to mean the opposite of what it means today is not the user breaking the promise, it's the compiler moving the goalposts. -Steve
Apr 05 2020
prev sibling parent Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Sunday, 5 April 2020 at 09:12:21 UTC, Walter Bright wrote:
 On 4/4/2020 1:09 AM, Jonathan M Davis wrote:
 Anything else would go against what  safe is supposed to be 
 promising.
Extern declarations are the USER making promises.
Sorry, but that is wrong. An extern declaration is just that, a declaration of a call interface, nothing more. It tells how to call a function. It says strictly nothing about the implementation of said function. Assuming safety just by looking at the interface is completely bonkers. All the safety issues in C come from relying on USER promises.

extern (C) void hello(int);

----------------------------

void hello(int o)
{
    system("format c:");
}
 Even with extern (D), the return type isn't mangled into the 
 name. The User has to get it right.
Has no bearing on the problem of safety.
Apr 05 2020
prev sibling next sibling parent Aliak <something something.com> writes:
On Saturday, 4 April 2020 at 06:53:57 UTC, Walter Bright wrote:
 On 4/3/2020 2:06 PM, Steven Schveighoffer wrote:
 I want to make sure you understand that we are not talking 
 about extern(C) functions that are written in D.
I understand totally what you are talking about.
 But what should absolutely not compile is:
 
 extern(C) int free(void *);
 
 void foo(int *ptr) // now inferred  safe
 {
     free(ptr);
 }
I understand your proposal. You want C functions without bodies to be system.
 The fact that we cannot control where/how people define their 
 prototypes means we have to be firm on this. They need to 
 opt-in to  safe with extern(C), it cannot be default.
On the other hand, special cases like this tend to cause unexpected problems in the future. Experience pretty much guarantees it. It's likely to be tricky to implement as well.
How are rules like:

1. All D functions are @safe by default
2. All extern functions are @system by default

complicated or special-cased? They look simpler than D's conversion rules or function overloading rules. In the end, any rule can be called a "special case".
 People remember simple rules. They don't remember rules with
Also, 60-80% of people stick with defaults. The one thing I consistently see people get wrong in APIs and UX is that they make the choice that does the most damage the default. I'll leave this here for insight into research on defaults: https://uxdesign.cc/design-by-default-the-impact-of-using-default-options-in-user-centered-design-926c4d24385c
 odd exceptions to them, that always winds up with trouble and
I think calling “you need to explicitly mark uncheckable functions as safe” an odd exception is highly exaggerating the situation.
 bug reports. Simple rules applied evenly lead to a compiler 
 that works and is reliable. I'm afraid the weight of all the 
 special rules will crush D.

 Now, as to what to do. I spent a few minutes and added 
 ` system:` in to the C headers in druntime for windows, posix, 
 and linux. Done. I hope someone else will do it for freebsd, 
 etc., if not I'll go do that to.

 is going to cause huge issues.
 I doubt that for the simple reason that @system by default has not 
 caused huge issues.
 
 The rule is simple:
 
 "For a D module with a bunch of C declarations in it, start it with 
 `@system:`."
 
 It's not a hard rule to check. It's one line. D's C interface has always 
 relied on the user to get right. Recall that C doesn't do any name 
 mangling at all:
 
    ----- mylib.di
      extern (C) int addOne(int i);
 
    ----- mylib.c
      double addOne(double d) { return d + 1; }
 
 That'll link fine and yet fail at runtime. Oops! Calling it @system will 
 not help at all. If the C implementation is bad, there's not a damn 
 thing D can do about it, with or without @system. It's always going to 
 be this way.
And there’s not a damn thing D needs to do about bad C code. But D pretending all C code is safe is a disservice to users who care about their code being safe - and at the end of the day just plainly inaccurate.
Apr 04 2020
prev sibling next sibling parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Saturday, 4 April 2020 at 06:53:57 UTC, Walter Bright wrote:

 D's C interface has always relied on the user to get right. 
 Recall that C doesn't do any name mangling at all:

   ----- mylib.di
     extern (C) int addOne(int i);

   ----- mylib.c
     double addOne(double d) { return d + 1; }


 That'll link fine and yet fail at runtime. Oops! Calling it 
  system will not help at all. If the C implementation is bad, 
 there's not a damn thing D can do about it, with or without 
  system. It's always going to be this way.
That's not what we are discussing: we are discussing "memory safety". Marking an extern(C) declaration with @trusted simply tells the reviewer to verify the C API and check that the programmer has made no mistake in asserting that "really", the C function can't corrupt memory. The same goes for a @trusted wrapper.

I don't want to add more burden to reviewers (checking that @system: is present, or whatever different solution); I want MORE mechanical checks and LESS reviewer work. The review of written code is the bottleneck in most companies, not writing code.

That's also why I don't like DIP 1032: readability is much more important than syntactic sugar rules.
Apr 04 2020
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Apr 03, 2020 at 11:53:57PM -0700, Walter Bright via Digitalmars-d wrote:
[...]
 I understand your proposal. You want C functions without bodies to be
  system.
[...]
 On the other hand, special cases like this tend to cause unexpected
 problems in the future. Experience pretty much guarantees it. It's
 likely to be tricky to implement as well.
 
 People remember simple rules. They don't remember rules with odd
 exceptions to them, that always winds up with trouble and bug reports.
 Simple rules applied evenly lead to a compiler that works and is
 reliable.
The rule:

	extern(D) => @safe by default
	extern(C) => @system by default

hardly sounds "odd" to me.  It almost verbally describes what C is, and
what we want D to be, there's nothing easier to remember.
 I'm afraid the weight of all the special rules will crush D.
Now *that's* an odd statement, considering that you recently just posted
that memory safety by default is the way to go, and now you're proposing
to add a huge big hole to @safe, and even more ironically, this in the
name of making D code safer.

	// Current situation: user forgot to write @system on an
	// extern(C) function, the code fails to compile, the user is
	// informed of it, and makes the appropriate fix:
	extern(C) int myfunc();	// @system by default
	void main() @safe {
		myfunc();	// compile error
	}

	// Proposed situation: user forgot to write @system on an
	// extern(C) function, the code compiles beautifully, the user
	// is not informed of any potential problem, and didn't fix it
	// until it explodes in the customer's production server:
	extern(C) int myfunc();	// @safe by default, but actually @system
	void main() @safe {
		myfunc();	// no compile error, @safe is bypassed
	}

Yep, this certainly makes D all the more memory-safe, and D certainly
won't be crushed by the weight of all the newly-added safety loopholes,
nope, not at all! :-P

[...]
 The rule is simple:
 
 "For a D module with a bunch of C declarations in it, start it with
 ` system:`."
[...]

Since the rule is this simple, and can be mechanically automated, why is
it still left as a burden on the user to do it? What we're proposing is
simply to make this rule mechanically enforced, instead of coding by
convention -- and we all know all too well how coding by convention ends
up. Whatever happened to mechanically-verified correctness vs. coding by
convention?


T

-- 
Beware of bugs in the above code; I have only proved it correct, not tried it. -- Donald Knuth
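
For concreteness, this is roughly what Walter's one-line convention looks like in a hypothetical binding module (module and declaration names invented); nothing but discipline makes the first attribute line appear, and forgetting it would, under the DIP, silently make everything below callable from @safe code:

```d
module mylib.bindings;   // hypothetical C binding module

@system:                 // the one-line convention

extern(C) void* malloc(size_t size);
extern(C) void  free(void* ptr);
extern(C) int   some_c_api(void* handle, int flags);  // hypothetical C function
```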
Apr 04 2020
parent Jonathan Marler <johnnymarler gmail.com> writes:
On Saturday, 4 April 2020 at 11:57:50 UTC, H. S. Teoh wrote:
 [...]
Here's how I see function safety attributes:

             | verify body is safe? | callable from @safe code?
    -----------------------------------------------------------
    @system  |          NO          |           NO
    @trusted |          NO          |           YES
    @safe    |          YES         |           YES

The compiler can only verify whether a function is safe if it has a body. When it doesn't, it is solely up to the programmer to indicate whether it should be callable from @safe code.

So what about defaults? If we enable @safe by default on "functions with bodies", then we are telling the compiler to "verify" everything is safe by default. However, if we enable @safe by default on "functions without bodies", then we are telling the compiler to assume everything is "safe to call" by default. The two are very different changes and should not be conflated.

Changing the default for functions with bodies makes some sense:

    function has body => verify it is safe => callable from @safe code

For functions without bodies, not so much:

    function has NO body => CANNOT verify it is safe => ??? callable from @safe ???

P.S. Based on my table above, I don't think it makes sense to mark any function without a body as @safe; rather, they are either @system or @trusted.

P.S. Notice that there is a potential 4th attribute that verifies the body is safe, but does not allow it to be called from @safe code. It's possible for a function to only do safe things, but also be "unsafe" to call. However, D has taken the route that if a function's body is safe, it should also be safe to call it from @safe code. This decision indicates that D ignores function boundaries when it comes to safety. To me this indicates that functions without bodies should not be marked @safe by default, because D treats code safety the same whether or not it's in another function, and the compiler would never assume a block of code is safe without analyzing it.
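
A minimal sketch (not from the post, names invented) of the three rows of that table in code:

```d
import core.stdc.string : memset;

@safe    void safeFunc(int* p)    { *p = 0; }                   // body is mechanically checked
@trusted void trustedFunc(int* p) { memset(p, 0, int.sizeof); } // body is NOT checked; the author vouches for it
@system  void sysFunc(int* p)     { memset(p, 0, int.sizeof); } // body is NOT checked, and @safe callers are refused

void caller() @safe
{
    int* p = new int;  // GC allocation is fine in @safe code
    safeFunc(p);       // OK: the compiler verified the body
    trustedFunc(p);    // OK: the compiler takes the author's word for it
    // sysFunc(p);     // Error: @system function is not callable from @safe code
}

void main() { caller(); }
```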
Apr 04 2020
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/4/20 2:53 AM, Walter Bright wrote:
 On 4/3/2020 2:06 PM, Steven Schveighoffer wrote:
 But what should absolutely not compile is:

 extern(C) int free(void *);

 void foo(int *ptr) // now inferred  safe
 {
     free(ptr);
 }
I understand your proposal. You want C functions without bodies to be system.
This is the bare minimum proposal. There are various options, as I laid out. My preference is that ALL extern(C), extern(C++), and extern(Objective-C) functions are @system by default, with or without a body. This is the least intrusive and most consistent proposal that does not gut all safety guarantees in D.
 The fact that we cannot control where/how people define their 
 prototypes means we have to be firm on this. They need to opt-in to 
  safe with extern(C), it cannot be default.
On the other hand, special cases like this tend to cause unexpected problems in the future. Experience pretty much guarantees it. It's likely to be tricky to implement as well.
This is like saying we shouldn't keep loaded guns in a locked safe because we don't keep our shirts in a locked safe, because it's too complicated and "tricky to implement". Yep, it's not as easy. That's because it's dangerous.
 People remember simple rules. They don't remember rules with odd 
 exceptions to them, that always winds up with trouble and bug reports. 
 Simple rules applied evenly lead to a compiler that works and is 
 reliable. I'm afraid the weight of all the special rules will crush D.
This is not a "special rule" or even a complicated one. It's really simple -- extern(C) functions cannot be trusted, so they need to be system by default. H.S. Teoh put it perfectly:
 The rule:
 
 	extern(D) =>  safe by default
 	extern(C) =>  system by default
 
 hardly sounds "odd" to me.  It almost verbally describes what C is, and
 what we want D to be, there's nothing easier to remember.
Continuing...
 Now, as to what to do. I spent a few minutes and added ` system:` in to 
 the C headers in druntime for windows, posix, and linux. Done. I hope 
 someone else will do it for freebsd, etc., if not I'll go do that to.
I don't even have to look to know that you didn't get them all. There are peppered extern(C) prototypes all over Druntime and Phobos. I pointed out one in another post which you have not replied to. This does not even mention all D code in all repositories which frequently add an extern(C) prototype for functions needed. Our current documentation says [https://dlang.org/spec/interfaceToC.html#calling_c_functions]:
 C functions can be called directly from D. There is no need for wrapper
functions, argument swizzling, and the C functions do not need to be put into a
separate DLL.
 
 The C function must be declared and given a calling convention, most likely
the "C" calling convention, for example:
 
 extern (C) int strcmp(const char* string1, const char* string2);
 and then it can be called within D code in the obvious way:
 import std.string;
 int myDfunction(char[] s)
 {
     return strcmp(std.string.toStringz(s), "foo");
 }
Hey look, there's a safety violation right there! It doesn't say, "import core.stdc.string where I've helpfully already pre-marked your system functions for you" it says, just spit out a prototype (without a system tag) and you are good to go. Let's update that documentation, and then wait for the questions "why does it say to use system?" "Oh, that's because D decided to trust all C calls as perfectly memory safe, so it's on you to tell the compiler it's not safe. Make sure you do that." WAT.
 
  > is going to cause huge issues.
 
 I doubt that for the simple reason that  system by default has not 
 caused huge issues.
system by default is fine! It doesn't violate safety because you have to opt-in to trusting code. safe by default for code that CANNOT BE CHECKED is wrong. Just plain wrong, and breaks safety completely. safe by default for code that CAN be checked is fine, because the compiler will check it. If it's not safe, it won't compile. This is why it's ok to mark safe by default D functions with implementation, and even extern(D) prototypes (since the name mangling takes into account safety).
 
 The rule is simple:
 
 "For a D module with a bunch of C declarations in it, start it with 
 ` system:`."
This is your "simpler" solution? "Don't use the default because it's completely wrong" And that is not the only place C prototypes are declared. People who have a C function they need to call follow the spec, and add a prototype into their module that is full of D code. Putting system: at the top isn't a good idea.
 
 It's not a hard rule to check. It's one line. D's C interface has always 
 relied on the user to get right. Recall that C doesn't do any name 
 mangling at all:
 
    ----- mylib.di
      extern (C) int addOne(int i);
 
    ----- mylib.c
      double addOne(double d) { return d + 1; }
 
 
 That'll link fine and yet fail at runtime. Oops! Calling it  system will 
 not help at all. If the C implementation is bad, there's not a damn 
 thing D can do about it, with or without  system. It's always going to 
 be this way.
This is a red herring -- code that doesn't work is not code that I care about. ALL unmarked extern(C) prototypes in existence today THAT ARE CORRECTLY WRITTEN are @system!!! Making them "magically" switch to @safe is wrong, BUT THEY WILL STILL WORK. However, D safety will be utterly gutted and destroyed. I can't say this enough.

Consider a scenario. Before the switch: some library has code that does i/o. They use prototypes to interface with the system calls, as that's what the spec calls for:

extern(C) size_t read(int fd, void* buf, size_t length);

size_t doRead(int fd, void[] arr)
{
    return read(fd, &arr[0], arr.length);
}

Now, let's see what user code can do here:

void main()
{
    int[10] arr;
    doRead(0, arr[]);
}

This runs and builds, and is technically fine! But it's not @safe. Which is OK, because the user didn't opt in to safety.

This ALSO compiles:

void main()
{
    int[][10] arr;
    doRead(0, arr[]);
}

This compiles and builds and is NOT fine. It's now reading POINTERS out of stdin. It's still not @safe, and the user did not declare it @safe, so it's on him (maybe he knows what he's doing).

Now, let's move to the future where your new DIP is the default. BOTH versions of user code COMPILE, and are treated as @safe!!! Why? Simply because the read prototype is now considered @safe. This SILENTLY still works, and the library function doRead is now incorrectly @safe. Perhaps the user knew what he was doing when he was reading pointers from stdin. Maybe it's OK for now, but the library DEFINITELY is wrong for other uses.

You see, the problem isn't that "someone didn't do it right", it's that the thing that was right before is now wrong. Instantly, code that was completely correct in terms of memory safety is now completely incorrect. And it still silently builds and runs exactly as it did before. Changes like this should REQUIRE a deprecation and error period.

But the easier thing is to simply avoid marking things @safe that aren't safe. With that one change, this whole DIP becomes a benefit rather than the great destructor of all @safe code in existence.

Please reconsider your position. This is so important, I think a virtual live discussion is in order if you are not convinced by this. Let me know, I'm working from home for 3 weeks already, I'm fine having one any time.

-Steve
Apr 04 2020
parent jmh530 <john.michael.hall gmail.com> writes:
On Saturday, 4 April 2020 at 16:44:48 UTC, Steven Schveighoffer 
wrote:
 [snip]
For all the discussion about @safe meaning that the compiler has verified that a function is safe, that's not really what @safe does. @safe is a blacklist, preventing some operations in @safe functions, not a whitelist that only allows verifiable ones in @safe functions. Granted, I think a whitelist would have been a better idea if it had been done from the start, but that's not really what we have, and I'm not sure the current conversation appreciates that difference. @safe (currently) doesn't actually verify that everything is safe; rather, it just says you can't do these unsafe things.

Maybe extern(C) functions with no body should be added to the @safe blacklist? That would mean that if @safe is the default, then those functions won't compile. The user will need to use them through a @trusted interface. Now, I don't like that solution by itself, but at least it is consistent with how @safe is designed (as a blacklist).
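
A small sketch (not from the post) of the blacklist behaviour described above; the commented-out lines are the kind of operations @safe rejects, while anything not on the list is allowed:

```d
@safe void example(int[] arr, int* p)
{
    int x = arr[0];                     // OK: bounds-checked indexing
    // int* q = p + 1;                  // Error: pointer arithmetic is on the blacklist
    // int* r = cast(int*) 0xDEADBEEF;  // Error: casting an integer to a pointer is on the blacklist
}

void main()
{
    example([1, 2, 3], null);
}
```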
Apr 04 2020
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 04.04.20 08:53, Walter Bright wrote:
 On 4/3/2020 2:06 PM, Steven Schveighoffer wrote:
 I want to make sure you understand that we are not talking about 
 extern(C) functions that are written in D.
I understand totally what you are talking about.
 But what should absolutely not compile is:

 extern(C) int free(void *);

 void foo(int *ptr) // now inferred  safe
 {
     free(ptr);
 }
I understand your proposal. You want C functions without bodies to be system.
 The fact that we cannot control where/how people define their 
 prototypes means we have to be firm on this. They need to opt-in to 
  safe with extern(C), it cannot be default.
On the other hand, special cases like this tend to cause unexpected problems in the future.
This is the special case! @safe is supposed to mean the compiler has verified this function to not cause memory corruption. Why should extern(C) functions be an exception to that rule? You are essentially saying something like: "why should pointer arithmetic be disallowed in @safe functions when there are other operations on pointers that are okay? This seems like a special-case rule that will cause problems in the future."
 Experience pretty much guarantees it. It's 
 likely to be tricky to implement as well.
 ...
I really doubt that. It's a simple rule. The version that is easiest to implement is you simply disallow extern(C) functions without body to be marked safe. It's a single `if` statement in an appropriate place.
 People remember simple rules. They don't remember rules with odd 
 exceptions to them, that always winds up with trouble and bug reports. 
I think you are the only one in this thread who considers this to be the odd exception instead of the general rule.
 Simple rules applied evenly lead to a compiler that works and is 
 reliable. I'm afraid the weight of all the special rules will crush D.
 
 Now, as to what to do. I spent a few minutes and added ` system:` in to 
 the C headers in druntime for windows, posix, and linux. Done. I hope 
 someone else will do it for freebsd, etc., if not I'll go do that to.
 
  > is going to cause huge issues.
 
 I doubt that for the simple reason that  system by default has not 
 caused huge issues.
 ...
That makes no sense at all. You are arguing in favor of _ trusted_ by default! It should not even be _possible_ to mark an extern(C) function safe, it has to be either system or trusted. The compiler does not do any checking here.
 The rule is simple:
 
 "For a D module with a bunch of C declarations in it, start it with 
 ` system:`."
 ...
So it's memory safety by convention.
 It's not a hard rule to check. It's one line. D's C interface has always 
 relied on the user to get right. Recall that C doesn't do any name 
 mangling at all:
 
    ----- mylib.di
      extern (C) int addOne(int i);
 
    ----- mylib.c
      double addOne(double d) { return d + 1; }
 
 
 That'll link fine and yet fail at runtime. Oops! Calling it  system will 
 not help at all. If the C implementation is bad, there's not a damn 
 thing D can do about it, with or without  system. It's always going to 
 be this way.
Right. How can you claim that this is safe code?
Apr 05 2020
next sibling parent reply tsbockman <thomas.bockman gmail.com> writes:
On Sunday, 5 April 2020 at 19:22:59 UTC, Timon Gehr wrote:
 That makes no sense at all. You are arguing in favor of 
 _ trusted_ by default! It should not even be _possible_ to mark 
 an extern(C) function  safe, it has to be either  system or 
  trusted. The compiler does not do any checking here.
I agree completely with this: it should be a compile-time error to declare a bodyless extern(C) function safe, regardless of whether it is done implicitly by default, or explicitly with an annotation. The only distinction between safe and trusted is the compiler verification of the safety of a function's implementation. If that compiler verification wasn't done, then safe cannot rightly apply. If having a different implicit default for bodyless extern(C) functions is too complicated, then just don't have a default for them at all: require all such declarations to be explicitly annotated either system or trusted.
Apr 05 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/5/20 3:57 PM, tsbockman wrote:
 On Sunday, 5 April 2020 at 19:22:59 UTC, Timon Gehr wrote:
 That makes no sense at all. You are arguing in favor of _ trusted_ by 
 default! It should not even be _possible_ to mark an extern(C) 
 function  safe, it has to be either  system or  trusted. The compiler 
 does not do any checking here.
I agree completely with this: it should be a compile-time error to declare a bodyless extern(C) function safe, regardless of whether it is done implicitly by default, or explicitly with an annotation. The only distinction between safe and trusted is the compiler verification of the safety of a function's implementation. If that compiler verification wasn't done, then safe cannot rightly apply. If having a different implicit default for bodyless extern(C) functions is too complicated, then just don't have a default for them at all: require all such declarations to be explicitly annotated either system or trusted.
I disagree with disallowing @safe as a specific marking on extern(C) code. You can write @safe extern(C) functions in D, and it makes no sense to require that they are @trusted at the prototype. Assuming @safe, no. Explicitly @safe, OK: you marked it, you own it.

We have similar problems with inout -- preventing obvious incorrect cases makes sense, until it doesn't. I wish now I could go back and change what I thought, but unfortunately I'm not well-versed in the compiler to make the changes myself.

-Steve
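
A two-file sketch (invented names, not from the post) of the case Steve is describing: the body really is machine-checked; only the hand-written prototype is not:

```d
// ----- impl.d
extern(C) int addSquares(int a, int b) @safe
{
    return a*a + b*b;   // verified by the compiler when impl.d is built
}
```

```d
// ----- user.d
// The question in dispute: may this hand-written prototype repeat the @safe
// that the implementation really has, or must it say @trusted (or @system)?
extern(C) int addSquares(int a, int b) @safe;

void use() @safe
{
    auto n = addSquares(3, 4);   // links against the symbol from impl.d
}
```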
Apr 05 2020
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Sunday, April 5, 2020 5:38:21 PM MDT Steven Schveighoffer via 
Digitalmars-d wrote:
 On 4/5/20 3:57 PM, tsbockman wrote:
 On Sunday, 5 April 2020 at 19:22:59 UTC, Timon Gehr wrote:
 That makes no sense at all. You are arguing in favor of _ trusted_ by
 default! It should not even be _possible_ to mark an extern(C)
 function  safe, it has to be either  system or  trusted. The compiler
 does not do any checking here.
I agree completely with this: it should be a compile-time error to declare a bodyless extern(C) function safe, regardless of whether it is done implicitly by default, or explicitly with an annotation. The only distinction between safe and trusted is the compiler verification of the safety of a function's implementation. If that compiler verification wasn't done, then safe cannot rightly apply. If having a different implicit default for bodyless extern(C) functions is too complicated, then just don't have a default for them at all: require all such declarations to be explicitly annotated either system or trusted.
I disagree with disallowing safe as a specific marking on extern(C) code. You can write safe extern(C) functions in D, and it makes no sense to require that they are trusted at the prototype. Assuming safe, no. Explicitly safe OK, you marked it, you own it. We have similar problems with inout -- preventing obvious incorrect cases makes sense, until it doesn't. I wish now I could go back and change what I thought, but unfortunately I'm not well-versed in the compiler to make the changes myself.
Except that it _does_ make sense to mark it as @trusted, because the information about the body having been verified by the compiler to be @safe is lost due to the lack of extern(D) name mangling. The compiler has no way of knowing whether the function was written in D and compiled by dmd elsewhere or whether it was compiled as C code elsewhere. When you mark an extern(C) declaration with anything other than @system, you're telling the compiler that it's safe, and @trusted is the appropriate attribute for the programmer to be telling the compiler that a function is safe, whereas @safe is the appropriate attribute for when the compiler does it on its own. Allowing @safe on extern(C) declarations doesn't help at all over adding @trusted on them, and it makes it easier for them to be accidentally treated as @safe without the compiler verifying them if someone uses @safe:

Also - and perhaps most importantly - if an extern(C) function must be marked with @trusted or @system and cannot be marked with @safe, then when there's a memory safety bug, you know that you only have to look for @trusted code (and what it calls) to find the problem. Barring compiler bugs, you can ignore all @safe code. However, if @safe is allowed on non-extern(D) declarations, then you also have to go searching for all non-extern(D) declarations to make sure that they were not incorrectly marked with @safe. Being able to find code with memory safety bugs by grepping for @trusted is supposed to be a key feature of the safety system, but it doesn't work if @safe is allowed on code that the compiler didn't explicitly verify - and it has no way of knowing whether the body of an extern(C) function was verified when all it sees is the declaration.

@safe should _only_ be used when the compiler is promising that it's mechanically verified the code for safety. @trusted is what should be used if the programmer verified it, and in the case of extern(C) declarations, it's always going to be the programmer that verifies it regardless of whether the body itself was verified for safety by the D compiler. To allow @safe on extern(C) declarations just muddies what @safe means and makes it harder to track down bugs. If there were a way for the compiler to know that the extern(C) definition had been verified for safety, then the situation would be different, and it would be reasonable to mark the declaration as @safe, but the compiler has no way of knowing that, so @safe on extern(C) declarations is inappropriate and should be illegal.

- Jonathan M Davis
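
A minimal sketch (invented names) of the audit property described above, assuming bodyless extern(C) declarations may only be @trusted or @system:

```d
import core.stdc.string : memset;

extern(C) void c_frob(void* p, size_t n) @trusted;          // shows up when grepping for @trusted
@trusted void zeroOne(int* p) { memset(p, 0, int.sizeof); } // shows up when grepping for @trusted

void app() @safe
{
    auto p = new int;
    zeroOne(p);  // mechanically checked call into a human-vouched function
}

void main() { app(); }

// If c_frob were instead allowed to claim @safe, it would vanish from a
// `grep @trusted` audit while remaining just as unverified.
```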
Apr 05 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/5/20 9:28 PM, Jonathan M Davis wrote:
[snip]

I'm not disagreeing with any of this. Technically,  safe doesn't make 
sense on a C prototype.

But practically, preventing it doesn't buy us anything important. If you 
have a  safe extern(C) function (implemented in D), and it switches to 
 system, searching for  trusted to find its prototypes isn't the way to 
do it, you search for the function name. And technically, the  safe 
marking is correct, if the function is being checked.

There are a lot of rules in D which are technically sound, but result in 
pain and suffering in actual usage.

e.g:

static if(someBool)
    return x;
return y; // error statement not reachable

inout int[] arr;
writeln(arr); // OK
writeln(arr.filter!(a => a %5 == 0)); // Error: only parameters or stack 
based variables can be inout

I don't want to add to this list.

-Steve
Apr 06 2020
parent reply aliak <something something.com> writes:
On Monday, 6 April 2020 at 12:19:10 UTC, Steven Schveighoffer 
wrote:
 On 4/5/20 9:28 PM, Jonathan M Davis wrote:
 [snip]

 I'm not disagreeing with any of this. Technically,  safe 
 doesn't make sense on a C prototype.

 But practically, preventing it doesn't buy us anything 
 important. If you have a  safe extern(C) function (implemented 
 in D), and it switches to  system, searching for  trusted to 
 find its prototypes isn't the way to do it, you search for the 
 function name. And technically, the  safe marking is correct, 
 if the function is being checked.
It does make sense for an extern(C) D-defined function indeed. I think when it comes to an extern(C) function declaration that's linked to a C function, in that case @safe doesn't really make sense, since the body cannot be verified - which is what I believe is meant.
 There are a lot of rules in D which are technically sound, but 
 result in pain and suffering in actual usage.

 e.g:

 static if(someBool)
    return x;
 return y; // error statement not reachable

 inout int[] arr;
 writeln(arr); // OK
 writeln(arr.filter!(a => a %5 == 0)); // Error: only parameters 
 or stack based variables can be inout
I'm not sure those are technically sound. Maybe more incomplete implementations of the features?
 I don't want to add to this list.

 -Steve
Apr 06 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/6/20 11:35 AM, aliak wrote:
 e.g:

 static if(someBool)
    return x;
 return y; // error statement not reachable

 inout int[] arr;
 writeln(arr); // OK
 writeln(arr.filter!(a => a %5 == 0)); // Error: only parameters or 
 stack based variables can be inout
I'm not sure those are technically sound. Maybe more incomplete implementations of the features?
The issue is that filter returns a wrapper type that contains the original range. Since inout(int)[] is really the only working inout range in existence, it needs to be stored that way. But the compiler disallows it. So you see weird problems like this. -Steve
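
A minimal sketch (not from the post) of that limitation; the commented-out lines show roughly where and why the error appears:

```d
struct Wrapper
{
    // inout(int)[] src;  // Error: only parameters or stack based variables can be inout
}

inout(int)[] demo(inout(int)[] arr)
{
    import std.algorithm.iteration : filter;
    import std.stdio : writeln;

    writeln(arr);                              // fine: no wrapper type is involved
    // writeln(arr.filter!(a => a % 5 == 0));  // fails: filter's result would need
    //                                         // to store an inout(int)[] member
    return arr;
}

void main()
{
    int[] a = [5, 10, 3];
    demo(a);
}
```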
Apr 07 2020
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 06.04.20 01:38, Steven Schveighoffer wrote:
 On 4/5/20 3:57 PM, tsbockman wrote:
 On Sunday, 5 April 2020 at 19:22:59 UTC, Timon Gehr wrote:
 That makes no sense at all. You are arguing in favor of _ trusted_ by 
 default! It should not even be _possible_ to mark an extern(C) 
 function  safe, it has to be either  system or  trusted. The compiler 
 does not do any checking here.
I agree completely with this: it should be a compile-time error to declare a bodyless extern(C) function safe, regardless of whether it is done implicitly by default, or explicitly with an annotation. The only distinction between safe and trusted is the compiler verification of the safety of a function's implementation. If that compiler verification wasn't done, then safe cannot rightly apply. If having a different implicit default for bodyless extern(C) functions is too complicated, then just don't have a default for them at all: require all such declarations to be explicitly annotated either system or trusted.
I disagree with disallowing safe as a specific marking on extern(C) code. You can write safe extern(C) functions in D, and it makes no sense to require that they are trusted at the prototype.
The linker can hijack them. The function signature of a trusted function should be safe anyway.
 Assuming 
  safe, no. Explicitly  safe OK, you marked it, you own it.
 ...
@safe:

// a lot of code
// ...

extern(C) void corrupt_all_the_memory();

When did @safe become a matter of "it's your own fault if you shoot yourself in the foot, this memory corruption you are having is a good thing because it will teach you not to make more mistakes in the future"? If something may break @safe-ty as it trusts the programmer to get it right, it ought to be @trusted.
 We have similar problems with inout -- preventing obvious incorrect 
 cases makes sense, until it doesn't.
This is not analogous. Here, the problem is that the "obvious incorrect cases" were not actually incorrect.
 I wish now I could go back and 
 change what I thought, but unfortunately I'm not well-versed in the 
 compiler to make the changes myself.
 
 -Steve
I think you can just grep for the error messages and then remove the checks.
Apr 06 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/6/20 2:28 PM, Timon Gehr wrote:
 On 06.04.20 01:38, Steven Schveighoffer wrote:
 I disagree with disallowing  safe as a specific marking on extern(C) 
 code. You can write  safe extern(C) functions in D, and it makes no 
 sense to require that they are  trusted at the prototype.
The linker can hijack them. The function signature of a trusted function should be safe anyway.
The linker can always hijack it. Even without intention. Having it marked trusted isn't any better than having it marked safe in that case.
 
 Assuming  safe, no. Explicitly  safe OK, you marked it, you own it.
 ...
safe: // a lot of code // ... extern(C) corrupt_all_the_memory();
Why would you do this when safe is the default?
 
 When did  safe become a matter of "it's your own fault if you shoot 
 yourself in the foot, this memory corruption you are having is a good 
 thing because it will teach you not to make more mistakes in the 
 future"? If something may break  safe-ty as it trusts the programmer to 
 get it right, it ought to be  trusted.
If you mark a @safe extern(C) function @trusted, how does that help? I'm talking about extern(C) functions that are checked by the compiler as @safe. Why should I have to mark the prototype of that function @trusted? How does that prevent problems? It's functionally equivalent. It's not something that changes when the original extern(C) function for some reason becomes @system (unlike extern(D) code).

I agree with the concept that it's impossible for the compiler to know that an extern(C) prototype is @safe, but I think it's more bureaucratic than effective to make you use @trusted instead of @safe. It's like a law that has roots in good policy, but results in a lot of frivolous enforcement.
 We have similar problems with inout -- preventing obvious incorrect 
 cases makes sense, until it doesn't.
This is not analogous. Here, the problem is that the "obvious incorrect cases" where not actually incorrect.
No, they were obvious. If you have an inout return but no inout variables, the "what goes in comes out" doesn't make any sense. Just use const in that case. But it turns out it's just easier for generic code to have these cases devolve into the equivalents that just use const/immutable, so you don't have to use static if's everywhere.
 I think you can just grep for the error messages and then remove the 
 checks.
I think there are more things than that. For example, inside non-inout code, you need to figure out what inout means, and enforce that. -Steve
Apr 07 2020
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07.04.20 23:14, Steven Schveighoffer wrote:
 On 4/6/20 2:28 PM, Timon Gehr wrote:
 On 06.04.20 01:38, Steven Schveighoffer wrote:
 I disagree with disallowing  safe as a specific marking on extern(C) 
 code. You can write  safe extern(C) functions in D, and it makes no 
 sense to require that they are  trusted at the prototype.
The linker can hijack them. The function signature of a trusted function should be safe anyway.
The linker can always hijack it.
I guess I used the wrong word. What I meant was that there is zero checking that the extern(C) function you are calling was actually checked to be safe. For extern(D), the language at least makes a small effort to ensure the safe qualifier has some meaning, but given that return types are not mangled, that also seems like a lie.
 Even without intention. Having it 
 marked  trusted isn't any better than having it marked  safe in that case.
 ...
Memory corruption in a safe context should be traceable to trusted code. It's the entire point.
 Assuming  safe, no. Explicitly  safe OK, you marked it, you own it.
 ...
safe: // a lot of code // ... extern(C) corrupt_all_the_memory();
Why would you do this when safe is the default? ...
To show it's broken. Besides that, it's not currently the default, and if it is, you might want to temporarily switch to system and back. In any case, safe code by definition is code written by untrusted programmers. Why do you insist there should be a good reason? Maybe the programmer was a monkey. Or malicious.
 When did  safe become a matter of "it's your own fault if you shoot 
 yourself in the foot, this memory corruption you are having is a good 
 thing because it will teach you not to make more mistakes in the 
 future"? If something may break  safe-ty as it trusts the programmer 
 to get it right, it ought to be  trusted.
If you mark a safe extern(C) function trusted, how does that help? I'm talking about extern(C) functions that are checked by the compiler as safe. Why should I have to mark the prototype of that function trusted?
Because the extern(C) function can be changed to be system. Hence you must trust the maintainer of the prototype to keep it in sync with the implementation.
 How does that prevent problems?
The point is to be able to trace any problems to some trusted annotations. safe alone doesn't completely prevent memory safety problems, but it advertises giving you a way to completely avoid being blamed for them. (Except for your choice of programming language, I suppose.)
 It's functionally equivalent.
$ grep @trusted *.d

Also, if you had in fact successfully identified a case where @safe and @trusted are functionally equivalent, why would that not ring alarm bells? If you write @safe, you don't actually mean @trusted.
 It's not something that changes when the original extern(C) function for 
 some reason becomes  system (unlike extern(D) code).
 
 I agree with the concept that it's impossible for the compiler to know 
 that an extern(C) prototype is  safe, but I think it's more bureaucratic 
 than effective to make you use  trusted instead of  safe. It's like a 
 law that has roots in good policy, but results in a lot of frivolous 
 enforcement.
 ...
I really don't understand why anyone would argue in favor of a policy that allows code to become less safe when you annotate it safe.
 We have similar problems with inout -- preventing obvious incorrect 
 cases makes sense, until it doesn't.
This is not analogous. Here, the problem is that the "obvious incorrect cases" where not actually incorrect.
No, they were obvious.
They were not incorrect, and the bad reasoning about those cases was caused by a lack of understanding of formal logic and type theory. This is also the underlying cause for the inout-related type system holes.
Apr 07 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/7/20 10:01 PM, Timon Gehr wrote:
 On 07.04.20 23:14, Steven Schveighoffer wrote:
 On 4/6/20 2:28 PM, Timon Gehr wrote:
 On 06.04.20 01:38, Steven Schveighoffer wrote:
 I disagree with disallowing  safe as a specific marking on extern(C) 
 code. You can write  safe extern(C) functions in D, and it makes no 
 sense to require that they are  trusted at the prototype.
The linker can hijack them. The function signature of a trusted function should be safe anyway.
The linker can always hijack it.
I guess I used the wrong word. What I meant was that there is zero checking that the extern(C) function you are calling was actually checked to be safe. For extern(D), the language at least makes a small effort to ensure the safe qualifier has some meaning, but given that return types are not mangled, that also seems like a lie.
If you change the return type, then there can be detectable differences at runtime. If you change the safe vs. trusted vs. system, everything works *exactly* as before (unless the linker stops it). The biggest problem is code that works but is wrong in terms of safety. In that case, safe and trusted on prototypes are identical.
 
 Even without intention. Having it marked  trusted isn't any better 
 than having it marked  safe in that case.
 ...
Memory corruption in a safe context should be traceable to trusted code. It's the entire point.
And in this case, there is no @trusted code. It's all @safe, but you have a boy-who-cried-wolf effect on the @trusted prototype. "Oh, you can ignore those prototypes because the compiler made me do it." In fact I'm assuming you will see stuff like:

// really @safe
@trusted extern(C) ...

causing a reviewer to pass over those as possible problems. I'm not saying @safe marking of prototypes isn't prone to issues, I'm saying forcing @trusted markings doesn't change that fact.
 
 Assuming  safe, no. Explicitly  safe OK, you marked it, you own it.
 ...
safe: // a lot of code // ... extern(C) corrupt_all_the_memory();
Why would you do this when safe is the default? ...
To show it's broken. Besides that, it's not currently the default, and if it is, you might want to temporarily switch to system and back.
In that case, all is well! You properly marked the right ones @system.

In the current regime, @safe: at the top does the same thing you are trying to argue against. So there won't be existing (good) code that does this. In the new regime, @safe is the default, so you wouldn't need to put @safe: at the top.

Any time the compiler forces you to mark things differently than the truth, you become more numb to these compiler warnings, and don't put any stock into their significance.
 In any case,  safe code 
 by definition is code written by untrusted programmers. Why do you 
 insist there should be a good reason? Maybe the programmer was a monkey. 
 Or malicious.
safe code is mechanically checked. Why shouldn't I be able to declare that when it's true? And if you make me mark it trusted, and it for some reason becomes system (because it's not trustable, a very unlikely occurrence), having them marked trusted doesn't help. You still have to go find the prototypes (all of them) and mark them system instead.
 
 When did  safe become a matter of "it's your own fault if you shoot 
 yourself in the foot, this memory corruption you are having is a good 
 thing because it will teach you not to make more mistakes in the 
 future"? If something may break  safe-ty as it trusts the programmer 
 to get it right, it ought to be  trusted.
If you mark a safe extern(C) function trusted, how does that help? I'm talking about extern(C) functions that are checked by the compiler as safe. Why should I have to mark the prototype of that function trusted?
Because the extern(C) function can be changed to be system. Hence you must trust the maintainer of the prototype to keep it in sync with the implementation.
So you are saying this scenario is OK: "Hm... my @safe function is turning into @system. But that's OK because everyone had to mark their prototypes @trusted! So now it becomes their fault I changed it."

I don't get how this is helpful. I don't get how this is somehow logically superior to the same people being able to mark the functions @safe. In both cases, you are pulling the rug from underneath them.

It sounds more like a "good" non-monkey programmer should mark all extern(C) function prototypes @system, regardless of the actual safety of the function, and require @trusted escapes, because they can change at any time without warning and they might get blamed. This is what I meant by a bureaucratic solution.
 How does that prevent problems?
The point is to be able to trace any problems to some trusted annotations. safe alone doesn't completely prevent memory safety problems, but it advertises giving you a way to completely avoid being blamed for them. (Except for your choice of programming language, I suppose.)
In my interpretation, safe means the code inside the function was mechanically checked. This doesn't change when you are writing prototypes for your safe functions.
 
 It's functionally equivalent.
$ grep trusted *.d
And get a large noise/signal ratio of frivolous trusted markings for extern(C) functions that are really safe but were forced by the compiler to mark them trusted. How does this help find the problem?
 
 Also, if you had in fact successfully identified a case where  safe and 
  trusted are functionally equivalent, why would that not ring alarm 
 bells? If you write  safe, you don't actually mean  trusted.
Functionally equivalent in that you can interchange them and it doesn't change anything in terms of the ability to call or compile the functions (marking the prototypes, that is). But to a reviewer, they are not equivalent. @safe says the function was compiled as a @safe function, @trusted says it was compiled as a @trusted function. At this level the difference between the two is informational, not functional. That's why they are functionally equivalent.

I think we should allow the distinction on the prototype because we allow it on the implementation. And technically, if you don't trust the prototype writer to get the safety right, you should verify he got the parameters and return type right as well. So really it should be:

grep extern(C) *.d
 
 It's not something that changes when the original extern(C) function 
 for some reason becomes  system (unlike extern(D) code).

 I agree with the concept that it's impossible for the compiler to know 
 that an extern(C) prototype is  safe, but I think it's more 
 bureaucratic than effective to make you use  trusted instead of  safe. 
 It's like a law that has roots in good policy, but results in a lot of 
 frivolous enforcement.
 ...
I really don't understand why anyone would argue in favor of a policy that allows code to become less safe when you annotate it safe.
I don't understand why someone arguing that actual @safe functions cannot be marked @safe because they later could become @system would argue that, as long as the prototypes are marked @trusted, it's OK to switch to @system because it's someone else's fault. In neither case is the function author blameless because others were stupid to trust him with their prototypes. But with the ability to mark things @safe, you have the ability to tell the truth about the actual implementation.

What if the @safe prototype is auto-generated? Then whenever it changes to @system, the prototype will be changed. This means: you have a memory issue, you search for @trusted, and you find all these prototypes that were auto-generated for @safe functions. You now get into the habit of ignoring all @trusted prototypes as possible issues because it's just noise. Better to focus on the @trusted implementations, which is where the real problem can happen.
 
 We have similar problems with inout -- preventing obvious incorrect 
 cases makes sense, until it doesn't.
This is not analogous. Here, the problem is that the "obvious incorrect cases" where not actually incorrect.
No, they were obvious.
They were not incorrect, and the bad reasoning about those cases was caused by a lack of understanding of formal logic and type theory. This is also the underlying cause for the inout-related type system holes.
I'll defer to you on type system stuff, and admit that my arguments for inout not working in the cases that cause problems were flawed. But they made obvious sense to me. I see the same philosophy in my bad consideration of cases for inout in your arguments for this trusted requirement. Which is why I brought up the analogy. -Steve
Apr 08 2020
next sibling parent reply aliak <something something.com> writes:
On Wednesday, 8 April 2020 at 13:09:33 UTC, Steven Schveighoffer 
wrote:
 On 4/7/20 10:01 PM, Timon Gehr wrote:
 [...]
[...]
It really feels like you and Timon are talking past each other based on mixing up extern(C) declarations and extern(C) definitions? You can't mechanically check an extern(C) function declaration with no source code. And trusted on a mechanically checked extern(C) function definition doesn't help anyone.
Apr 08 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/8/20 9:35 AM, aliak wrote:
 On Wednesday, 8 April 2020 at 13:09:33 UTC, Steven Schveighoffer wrote:
 On 4/7/20 10:01 PM, Timon Gehr wrote:
 [...]
[...]
It really feels like you and Timon are talking past each other based on mixing up extern(C) declarations and extern(C) definitions?
No, I think we both understand that difference. I actually understand the point of view of Timon and Jonathan and agree with that point of view, I just think the restriction isn't helpful, and is just going to cause busywork for no true benefit.
 
 You can't mechanically check an extern(C) function declaration with no 
 source code. And  trusted on a mechanically checked extern(C) function 
 definition doesn't help anyone.
If you have an extern(C) function that is mechanically checked @safe, should you have to mark the prototype @trusted? This is the question we are discussing.

My opinion is that requiring a marking of @trusted doesn't make anything safer or more correct. If the function all of a sudden becomes @system (truly @system, and not semantically safe), then the @trusted is still a lie.

The result is just going to be people ignoring @trusted C prototypes beyond the prototype definition itself, so it makes @trusted less of a red flag than it should be.

If you see:

@trusted extern(C) void memcpy(void *dest, void *src, size_t size)

Then this is obviously wrong, and should be @system. It wouldn't be any different for a @safe marking.

If you see:

@trusted extern(C) void copy(ubyte[] dest, ubyte[] src)

Then you are going to go check it, see that it's really @safe in the implementation, and start ignoring such prototypes as flags. Maybe you comment on it so next time you don't waste time going to check that one:

// really @safe in implementation
@trusted extern(C) void copy(ubyte[] dest, ubyte[] src)

Now, if it all of a sudden changes to @system, your @trusted tag is still ignored, just like it was marked @safe. That is my point. It doesn't change what is interpreted by a reviewer to require @trusted, it just adds busywork to shut up the compiler.

In the end, the difference between @safe and @trusted on an extern(C) function prototype is documentation. Functionally, they are equivalent.

-Steve
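
For contrast, a common alternative to annotating the raw prototype at all is a small @trusted wrapper that establishes the preconditions itself; this is only a sketch with invented names:

```d
import core.stdc.string : memcpy;  // declared @system in druntime

@trusted void safeCopy(ubyte[] dest, const(ubyte)[] src)
{
    assert(dest.length >= src.length, "destination too small");
    if (src.length)
        memcpy(&dest[0], &src[0], src.length);  // bounds were checked above
}

void user() @safe
{
    ubyte[8] a;
    ubyte[4] b = [1, 2, 3, 4];
    safeCopy(a[], b[]);
}

void main() { user(); }
```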
Apr 08 2020
next sibling parent reply aliak <something something.com> writes:
On Wednesday, 8 April 2020 at 13:58:12 UTC, Steven Schveighoffer 
wrote:
 On 4/8/20 9:35 AM, aliak wrote:
 On Wednesday, 8 April 2020 at 13:09:33 UTC, Steven 
 Schveighoffer wrote:
 On 4/7/20 10:01 PM, Timon Gehr wrote:
 [...]
[...]
It really feels like you and Timon are talking past each other based on mixing up extern(C) declarations and extern(C) definitions?
No, I think we both understand that difference. I actually understand the point of view of Timon and Jonathan and agree with that point of view, I just think the restriction isn't helpful, and is just going to cause busywork for no true benefit.
Ok 👍My bad then.
 
 You can't mechanically check an extern(C) function declaration 
 with no source code. And  trusted on a mechanically checked 
 extern(C) function definition doesn't help anyone.
If you have an extern(C) function that is mechanically checked safe, should you have to mark the prototype trusted? This is the question we are discussing.
Ah, then yeah, as mentioned above, I'm in your boat; if it's mechanically checked then there's no need to mark it trusted, that'd just be a waste of time in the hunt for memory corruption.
 My opinion is that requiring a marking of  trusted doesn't make 
 anything safer or more correct. If the function all of a sudden 
 becomes  system (truly  system, and not semantically safe), 
 then the  trusted is still a lie.

 The result is just going to be people ignoring  trusted C 
 prototypes beyond the prototype definition itself, so it makes 
  trusted less of a red flag than it should be.

 If you see:

  trusted extern(C) void memcpy(void *dest, void *src, size_t 
 size)

 Then this is obviously wrong, and should be  system. It 
 wouldn't be any different for a  safe marking.
Yes it would? If this was @safe I wouldn't need to check its usage sites, because I'm supposed to trust that @safe can only be applied to functions that are mechanically checked. If it's @trusted then I know I need to check its usage sites, because it's not mechanically guaranteed to be a safe function.
 If you see:

  trusted extern(C) void copy(ubyte[] dest, ubyte[] src)

 Then you are going to go check it, see that it's really  safe 
 in the implementation, and start ignoring such prototypes as 
 flags. Maybe you comment on it so next time you don't waste 
 time going to check that one:
Yes, if I go to that definition and see that the body is @safe then I'll be annoyed. If that's a prototype for a function defined in a C library then it should not be allowed to be marked @safe.
 // really  safe in implementation
  trusted extern(C) void copy(ubyte[] dest, ubyte[] src)

 Now, if it all of a sudden changes to  system, your  trusted 
 tag is still ignored, just like it was marked  safe. That is my 
 point. It doesn't change what is interpreted by a reviewer to 
 require  trusted, it just adds busywork to shut up the compiler.

 In the end, the difference between  safe and  trusted on an 
 extern(C) function prototype is documentation. functionally, 
 they are equivalent.

 -Steve
Apr 08 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/8/20 11:52 AM, aliak wrote:
 If you see:

  trusted extern(C) void memcpy(void *dest, void *src, size_t size)

 Then this is obviously wrong, and should be  system. It wouldn't be 
 any different for a  safe marking.
Yes it would? If this was safe I wouldn't need to check its usage sights because I'm supposed to trust that safe can only be applied to functions that are mechanically checked. If it's trusted then I know I need to check it's usage sights because it's not mechanically guaranteed to be a safe function.
There may be something you are missing. You can mark e.g. memcpy @safe, and the compiler is fine with it. It will still link, and now be callable inside @safe code.

The connection between the prototype and the implementation is MANUALLY maintained for extern(C) code.

So if you saw @safe here, you should not be satisfied with it, as the marking is probably wrong, but will still compile. You shouldn't say "well, it's marked @safe, so memcpy must be safe".

This is the whole problem with extern(C) prototypes -- you are allowed to lie without consequence.

But in a lot of cases, D-implemented extern(C) code *is* @safe, so it wouldn't be a lie.

-Steve
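A minimal sketch of that lie: the attributes are not part of the C mangling, so this compiles, links against the real libc memcpy, and is callable from @safe code even though nothing was ever checked:

---
// a hand-written prototype that (wrongly) claims memcpy is @safe
extern(C) @safe void* memcpy(void* dst, const void* src, size_t n);

void main() @safe
{
    auto a = new int(1);
    auto b = new int;
    memcpy(b, a, int.sizeof);  // accepted as @safe; no checking ever happened
}
---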
Apr 08 2020
parent aliak <something something.com> writes:
On Wednesday, 8 April 2020 at 17:01:29 UTC, Steven Schveighoffer 
wrote:
 On 4/8/20 11:52 AM, aliak wrote:
 If you see:

  trusted extern(C) void memcpy(void *dest, void *src, size_t 
 size)

 Then this is obviously wrong, and should be  system. It 
 wouldn't be any different for a  safe marking.
Yes it would? If this was safe I wouldn't need to check its usage sights because I'm supposed to trust that safe can only be applied to functions that are mechanically checked. If it's trusted then I know I need to check it's usage sights because it's not mechanically guaranteed to be a safe function.
There may be something you are missing. You can mark e.g. memcpy safe, and the compiler is fine with it. It will still link, and now be callable inside safe code.
I was under the assumption we were talking about how to go forward from whatever the current status quo was. Though, I did not know that was how it worked now because frankly I never tried marking an extern(C) function (declaration, not definition) as safe - never even occurred to me. If that's possible now then it should be fixed as I can't see how that's correct behaviour.
 The connection between the prototype and the implementation is 
 MANUALLY maintained for extern(C) code.

 So if you saw  safe here, you should not be satisfied with it, 
 as the marking is probably wrong, but will still compile. You 
 shouldn't say "well, it's marked  safe, so memcpy must be 
  safe".

 This is the whole problem with extern(C) prototypes -- you are 
 allowed to lie without consequence.

 But in a lot of cases, D-implemented extern(C) code *is*  safe, 
 so it wouldn't be a lie.

 -Steve
Apr 08 2020
prev sibling parent reply Jonathan Marler <johnnymarler gmail.com> writes:
On Wednesday, 8 April 2020 at 13:58:12 UTC, Steven Schveighoffer 
wrote:
 On 4/8/20 9:35 AM, aliak wrote:
 On Wednesday, 8 April 2020 at 13:09:33 UTC, Steven 
 Schveighoffer wrote:
 On 4/7/20 10:01 PM, Timon Gehr wrote:
 [...]
[...]
It really feels like you and Timon are talking past each other based on mixing up extern(C) declarations and extern(C) definitions?
No, I think we both understand that difference. I actually understand the point of view of Timon and Jonathan and agree with that point of view, I just think the restriction isn't helpful, and is just going to cause busywork for no true benefit.
Which Jonathan? I'm on the side that if the compiler can verify it, it should be @safe. The extern'ness of a function seems to be orthogonal to whether it can be verified. Having a function body seems to be the primary criterion.
Apr 08 2020
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/8/20 11:56 AM, Jonathan Marler wrote:
 On Wednesday, 8 April 2020 at 13:58:12 UTC, Steven Schveighoffer wrote:
 On 4/8/20 9:35 AM, aliak wrote:
 On Wednesday, 8 April 2020 at 13:09:33 UTC, Steven Schveighoffer wrote:
 On 4/7/20 10:01 PM, Timon Gehr wrote:
 [...]
[...]
It really feels like you and Timon are talking past each other based on mixing up extern(C) declarations and extern(C) definitions?
No, I think we both understand that difference. I actually understand the point of view of Timon and Jonathan and agree with that point of view, I just think the restriction isn't helpful, and is just going to cause busywork for no true benefit.
Which Jonathan?  I'm on the side that if the compiler can verify it, it should be safe.  The extern'ness of a function seems to be orthogonal as to whether it can be verified.  Having a function body seems to be the primary criteria.
I was referring to the other one ;) But we are talking about prototypes not functions. If the function is implemented in D and safe, should you be required to mark the prototype trusted? If so, what benefit does that provide? I don't think the benefit is worth the cost of annoyance -- it ends up the same either way. I just thought of another way to fix this whole mess (even Walter's safe by default for extern(C)). I don't want to post it deep inside this thread, so I'll move it to the top. -Steve
Apr 08 2020
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08.04.20 17:56, Jonathan Marler wrote:
 On Wednesday, 8 April 2020 at 13:58:12 UTC, Steven Schveighoffer wrote:
 On 4/8/20 9:35 AM, aliak wrote:
 On Wednesday, 8 April 2020 at 13:09:33 UTC, Steven Schveighoffer wrote:
 On 4/7/20 10:01 PM, Timon Gehr wrote:
 [...]
[...]
It really feels like you and Timon are talking past each other based on mixing up extern(C) declarations and extern(C) definitions?
No, I think we both understand that difference. I actually understand the point of view of Timon and Jonathan and agree with that point of view, I just think the restriction isn't helpful, and is just going to cause busywork for no true benefit.
Which Jonathan?  I'm on the side that if the compiler can verify it, it should be safe.  The extern'ness of a function seems to be orthogonal as to whether it can be verified.  Having a function body seems to be the primary criteria.
To be very clear, Steven says the following is desirable to allow as a possibility:

---
module a;
extern(C) void corrupt_memory() @system {
    // ...
}
---
---
module b;
extern(C) void corrupt_memory() @safe;

void main() @safe {
    corrupt_memory();
}
---

A "prototype" is a function declaration without a function body.

Related: https://www.theregister.co.uk/2016/02/04/underhand_c_2015/
Apr 08 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/8/20 2:57 PM, Timon Gehr wrote:
 On 08.04.20 17:56, Jonathan Marler wrote:
 On Wednesday, 8 April 2020 at 13:58:12 UTC, Steven Schveighoffer wrote:
 On 4/8/20 9:35 AM, aliak wrote:
 On Wednesday, 8 April 2020 at 13:09:33 UTC, Steven Schveighoffer wrote:
 On 4/7/20 10:01 PM, Timon Gehr wrote:
 [...]
[...]
It really feels like you and Timon are talking past each other based on mixing up extern(C) declarations and extern(C) definitions?
No, I think we both understand that difference. I actually understand the point of view of Timon and Jonathan and agree with that point of view, I just think the restriction isn't helpful, and is just going to cause busywork for no true benefit.
Which Jonathan?  I'm on the side that if the compiler can verify it, it should be safe.  The extern'ness of a function seems to be orthogonal as to whether it can be verified.  Having a function body seems to be the primary criteria.
To be very clear, Steven says the following is desirable to allow as a possibility:

---
module a;
extern(C) void corrupt_memory() @system {
    // ...
}
---
---
module b;
extern(C) void corrupt_memory() @safe;

void main() @safe {
    corrupt_memory();
}
---

A "prototype" is a function declaration without a function body.

Related: https://www.theregister.co.uk/2016/02/04/underhand_c_2015/
To be clear, Timon says the following EQUIVALENT situation is desirable to allow as a possibility, and somehow this is different than the above:

extern(C) void corrupt_memory() @trusted; // go ahead, trust this function named corrupt_memory.

Simply because the unseen function is marked @trusted (when it is really @system). In other words, the function author made zero guarantees about memory safety, and the prototype author just trusts him blindly.

And really, what I'm saying is that I want:

---
module a;
extern(C) void perfectly_safe() @safe {
  // ...
}
---
---
module b;
extern(C) void perfectly_safe() @safe;
---

To be possible. If we can't have that then you have to lie (say it's @trusted when it's actually mechanically checked). You have to have the first (very very unlikely) case allowed in order to have the second case allowed. Disallowing one disallows the other.

Since the situation with @trusted requirements isn't any safer (I'd argue it's less safe since it dilutes the meaning of @trusted), I'd say don't put up red tape that doesn't help.

-Steve
Apr 08 2020
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08.04.20 21:19, Steven Schveighoffer wrote:
 
 ---
 module a;
 extern(C) void perfectly_safe() @safe {
   // ...
 }
 ---
 ---
 module b;
 extern(C) void perfectly_safe()  safe;
 ---
---
module a;
extern(C) size_t perfectly_safe() @safe {
  // ...
}
---
---
module b;
extern(C) int* perfectly_safe() @safe;
---
Apr 08 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/8/20 3:32 PM, Timon Gehr wrote:
 ---
 module a;
 extern(C) size_t perfectly_safe() @safe {
   // ...
 }
 ---
 ---
 module b;
 extern(C) int* perfectly_safe()  safe;
 ---
---
module a;
extern(C) size_t perfectly_safe() @safe {
  // ...
}
---
---
module b;
extern(C) int* perfectly_safe() @trusted;
---

No difference. Same result.

-Steve
Apr 08 2020
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08.04.20 21:37, Steven Schveighoffer wrote:
 On 4/8/20 3:32 PM, Timon Gehr wrote:
 ---
 module a;
 extern(C) size_t perfectly_safe() @safe {
   // ...
 }
 ---
 ---
 module b;
 extern(C) int* perfectly_safe()  safe;
 ---
---
module a;
extern(C) size_t perfectly_safe() @safe {
  // ...
}
---
---
module b;
extern(C) int* perfectly_safe() @trusted;
---

No difference. Same result.

-Steve
This is a waste of time.
Apr 08 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/8/20 3:44 PM, Timon Gehr wrote:
 On 08.04.20 21:37, Steven Schveighoffer wrote:
 No difference. Same result.
This is a waste of time.
That is something we can agree on. -Steve
Apr 08 2020
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08.04.20 21:53, Steven Schveighoffer wrote:
 On 4/8/20 3:44 PM, Timon Gehr wrote:
 On 08.04.20 21:37, Steven Schveighoffer wrote:
 No difference. Same result.
This is a waste of time.
That is something we can agree on. -Steve
I'll explain why I think it's the case: If your opinion is truly that the following two code snippets are equivalent, we have reached an irreducible position:

---
void corrupt_memory() @trusted { ... }

void main() @safe {
    corrupt_memory();
}
---
---
void corrupt_memory() @system { ... }

void main() @system {
    corrupt_memory();
}
---

Anyone who thinks those two code snippets are essentially the same can safely ignore my line of argumentation, but everyone else should dismiss yours.
Apr 08 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/8/20 4:04 PM, Timon Gehr wrote:
 
 I'll explain why I think it's the case: If your opinion is truly that 
 the following two code snippets are equivalent, we have reached an 
 irreducible position:
 
 ---
 void corrupt_memory() trusted{ ... }
 
 void main() safe{
      corrupt_memory();
 }
 ---
 ---
 void corrupt_memory() system{ ... }
 
 void main() system{
      corrupt_memory();
 }
 ---
 
 Anyone who thinks those two code snippets are essentially the same can 
 safely ignore my line of argumentation, but everyone else should dismiss 
 yours.
Strawman. I never said that @trusted is the same as @system. I said that a @system function which has a @safe prototype is identical to a @system function that has a @trusted prototype. In both cases, the compiler accepts the function as callable from @safe code, does no checking, and never alerts the user.

The only reasonable case where extern(C) code that is actually @system should be prototyped @trusted is when the code isn't written in D. In that case, it's not possible to mark it @trusted, but one can reasonably assume that the code is technically safe. For instance libc's cos() or something similar.

extern(C) code that is not checked @safe SHOULD NEVER be prototyped @safe. But it makes no sense to prevent extern(C) code that is @safe from being prototyped as @safe. Requiring one to write @trusted doesn't change anything as the code is still callable via @safe code, and just creates unnecessary annoyances.

We should not prevent accurate prototypes just because it's possible to write them incorrectly. Especially when the prevention doesn't disallow the exact same inaccurate prototypes written with @trusted vs. @safe.

If requiring @trusted prototypes for @safe functions gained any measurable quantity of safety, it would be worth it. It doesn't, so it's not.

-Steve
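A minimal sketch of the cos() case; the exact attribute set here is illustrative, not a claim about druntime's own declaration:

---
// the implementation lives in libc and cannot be machine-checked,
// so the prototype author vouches for it with @trusted
extern(C) @trusted nothrow @nogc double cos(double x);

double quarter_turn() @safe
{
    return cos(1.5707963267948966); // callable from @safe via the @trusted prototype
}
---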
Apr 08 2020
next sibling parent reply IGotD- <nise nise.com> writes:
On Wednesday, 8 April 2020 at 20:47:40 UTC, Steven Schveighoffer 
wrote:
 The only reasonable case where extern(C) code that is actually 
  system should be prototyped  trusted is when the code isn't 
 written in D. In that case, it's not possible to mark it 
  trusted, but one can reasonably assume that the code is 
 technically  safe. For instance libc's cos() or something 
 similar.

 extern(C) code that is not checked  safe SHOULD NEVER be 
 prototyped  safe. But it makes no sense to prevent extern(C) 
 code that is  safe from being prototyped as  safe. Requiring 
 one to write  trusted doesn't change anything as the code is 
 still callable via  safe code, and just creates unnecessary 
 annoyances.
Just to make a counter argument against extern(C) being @system by default, which is perhaps a bit counter-intuitive: the programmer wants to get things done fast, and in practice the programmer doesn't care about whether the FFI functions are @safe or @system. The programmer just wants to call the FFI function from the @safe function just as usual. In practice a programmer will almost every time put @safe or @trusted on every function that is imported.

Semantically, having extern(C) @system makes sense, but from a language usability point of view, every programmer will want to convert these in order to call them directly from @safe code without any wrapper. Programmers don't care (like a honey badger), they just want to call their code.
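A sketch of that blanket-@trusted binding style, with invented names; nothing behind these annotations has actually been audited, which is exactly the problem:

---
// hypothetical C library bindings, rubber-stamped @trusted so that
// @safe code can call them without wrappers
extern(C) @trusted nothrow @nogc
{
    int  vendor_init();
    void vendor_process(ubyte* data, size_t len);
    void vendor_shutdown();
}
---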
Apr 08 2020
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 08, 2020 at 09:14:14PM +0000, IGotD- via Digitalmars-d wrote:
[...]
 Just to make a counter argument against extern(C) being  system by
 default which is perhaps a bit counter intuitive. The programmer wants
 get things done fast and in practice the programmer doesn't care about
 if the FFI functions are  safe or  system. The programmer just want to
 call the FFI function the  safe function just as usual. In practice a
 programmer will almost every time put  safe or  trusted for every
 function that are imported.
This argument does not make sense. The whole point of @safe is to provide mechanically-checked guarantees that it does not do anything that might corrupt memory. If it calls a function that is not @safe, that means the program as a whole does not live up to the promises of @safe, and therefore should be marked @system. It does not make sense to hack it so that you can label a program @safe even though it calls something that ought to be @system.

If you need to call an FFI function (or whatever other function) and just want to get the job done, and you don't really care about how @safe works, then just write @system: at the top of your program and call it a day. Job done, we can all go home.
 Semantically having extern(C)  system makes sense but from a language
 usability point of view, every programmer will want to convert these
 in order to call the directly from  safe code without any wrapper.
 Programmers don't care (like a honey badger), they just want to call
 their code.
If they don't care, then their code is @system. End of story. I don't understand why someone would want to jump through hoops to call their code @safe when it actually isn't.


T

-- 
It only takes one twig to burn down a forest.
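A minimal sketch of that "just get the job done" route, using libc's abs so the example actually links:

---
module quickhack;
@system:   // opt the whole module out of @safe checking

extern(C) int abs(int x);   // libc prototype, no safety claim made

void main()
{
    auto r = abs(-42);      // fine: everything below the @system: label is @system
}
---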
Apr 08 2020
prev sibling parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Wednesday, 8 April 2020 at 21:14:14 UTC, IGotD- wrote:
 On Wednesday, 8 April 2020 at 20:47:40 UTC, Steven 
 Schveighoffer wrote:
 [...]
Just to make a counter argument against extern(C) being system by default which is perhaps a bit counter intuitive. The programmer wants get things done fast and in practice the programmer doesn't care about if the FFI functions are safe or system. The programmer just want to call the FFI function the safe function just as usual. In practice a programmer will almost every time put safe or trusted for every function that are imported. [...]
The code of that programmer will never pass a decent code review, plain and simple.
Apr 09 2020
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08.04.20 22:47, Steven Schveighoffer wrote:
 On 4/8/20 4:04 PM, Timon Gehr wrote:
 I'll explain why I think it's the case: If your opinion is truly that 
 the following two code snippets are equivalent, we have reached an 
 irreducible position:

 ---
 void corrupt_memory() trusted{ ... }

 void main() safe{
      corrupt_memory();
 }
 ---
 ---
 void corrupt_memory() system{ ... }

 void main() system{
      corrupt_memory();
 }
 ---

 Anyone who thinks those two code snippets are essentially the same can 
 safely ignore my line of argumentation, but everyone else should 
 dismiss yours.
Strawman. ...
Right back at you.
 I never said that  trusted is the same as  system.
Nor did I claim you did. The snippets above differ only in who is to blame for the memory corruption. You claimed that's a non-essential detail, and that is not true, but I don't know how to make that point to you.
Apr 08 2020
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/9/20 12:31 AM, Timon Gehr wrote:
 On 08.04.20 22:47, Steven Schveighoffer wrote:
 I never said that  trusted is the same as  system.
Nor did I claim you did.
You just said: "If your opinion is truly that the following two code snippets are equivalent" and then presented two code snippets that showed the same function with implementation tagged as system or trusted. I don't know how I'm supposed to interpret your claim other than you think I believe they are equivalent.
 The snippets above differ only in who is to 
 blame for the memory corruption. You claimed that's a non-essential 
 detail, and that is not true, but I don't know how to make that point to 
 you.
The snippets are different than what we are debating. We were not talking about @trusted code being called from @safe code, rather @system code being incorrectly prototyped as @safe or @trusted. Whether you mark it incorrectly @safe or incorrectly @trusted is non-essential.

In both cases, the person who wrote the prototype is to blame, as the person who wrote the original code clearly meant it to be @system, and the person who wrote the prototype got it wrong.

We have 3 situations:

1. The code is written as extern(C) in D, in which case, the exact safety attribute should be repeated as the prototype.

2. The code is written in some other language, but is @safe based on an examination either by spec or by actually proofreading the code. In which case, the prototype could be @trusted.

3. The code is written in some other language, and does not follow the safety rules of D (e.g. memcpy). It should be marked @system.

In no circumstances should extern(C) code that is written outside D be marked @safe. I would also consider it an error to write a prototype for an extern(C) D function other than what it actually is (@safe, @trusted, @system).

I consider both of those an error on the prototype author.

I would consider it a mistake to make it impossible to forward the exact attribute of a @safe extern(C) D function to the prototype.

-Steve
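A declaration-level sketch of those three situations; the D-implemented function in case 1 is invented, and cos/memcpy stand in for cases 2 and 3:

---
// 1. implemented in D as extern(C) and mechanically checked: repeat the attribute
extern(C) void copy_bytes(ubyte[] dst, ubyte[] src) @safe;

// 2. implemented elsewhere, judged memory-safe by spec or review: vouch for it
extern(C) double cos(double x) @trusted;

// 3. implemented elsewhere and not safe to call blindly
extern(C) void* memcpy(void* dst, const void* src, size_t n) @system;
---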
Apr 08 2020
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09.04.20 07:12, Steven Schveighoffer wrote:
 On 4/9/20 12:31 AM, Timon Gehr wrote:
 On 08.04.20 22:47, Steven Schveighoffer wrote:
 I never said that  trusted is the same as  system.
Nor did I claim you did.
You just said: "If your opinion is truly that the following two code snippets are equivalent" and then presented two code snippets that showed the same function with implementation tagged as system or trusted. I don't know how I'm supposed to interpret your claim other than you think I believe they are equivalent. ...
I showed two code snippets with the same runtime semantics and compiler diagnostics. This is precisely the condition you used to conclude that tagging extern(C) prototypes with safe or trusted is equivalent. Of course I am not claiming that you think safe and trusted are the same. I understand that you realize that one can write code that differs only in whether an annotation is safe or trusted but will give different compiler diagnostics. You are clearly very familiar with the mechanics implemented in the compiler but apparently you are confused about the underlying modular verification concepts.
 The snippets above differ only in who is to blame for the memory 
 corruption. You claimed that's a non-essential detail, and that is not 
 true, but I don't know how to make that point to you.
The snippets are different than what we are debating.
Not at all, but apparently I was wrong when I assumed you are aware of that fact, so I wonder why you agreed that this is a waste of time.
 We were not 
 talking about trusted code being called from safe code, rather system 
 code being incorrectly prototyped as  safe or  trusted. Whether you mark 
 it incorrectly  safe or incorrectly  trusted is non-essential.
 ...
Of course it is essential that if you write only @safe code you can't get memory safety wrong. It does not matter if the @safe annotation is implicit or explicit. If you took a quick step out of the current debate, would you really consider this statement controversial?
 In both cases, the person who wrote the prototype is to blame, as the 
 person who wrote the original code clearly meant it to be system, and 
 the person who wrote the prototype got it wrong.
 ...
You can't blame the programmer who wrote only @safe code. It's just not an option. @safe absolves that programmer from any responsibility for memory corruption. That's what @safe is for and how it is advertised.

@system means: It's your responsibility to call this correctly.
@trusted means: I am taking responsibility for memory safety.
@safe means: The language is taking responsibility for memory safety.

The checks in @safe code are just the concrete way the language implements that responsibility. The details of how this is done in practice can evolve over time, but that fundamental spec is what it is.
 We have 3 situations:
 
 1. The code is written as extern(C) in D, in which case, the exact 
 safety attribute should be repeated as the prototype.
No, in this case ideally you would just use the module system and not write any separate prototypes.
 2. The code is written in some other language, but is  safe based on an 
 examination either by spec or by actually proofreading the code. In 
 which case, the prototype could be  trusted.
Sure.
 3. The code is written in some other language, and does not follow 
 safety rules of D (e.g. memcpy). It should be marked  system.
 ...
Sure.
 In no circumstances should extern(C) code that is written outside D 
 should be marked  safe.
The compiler can not check that and has to assume the worst.
 I would also consider it an error to write a 
 prototype for an extern(C) D function other than what it actually is 
 ( safe,  trusted,  system).
 ...
So first you say there is no difference between safe and trusted on prototypes, and now suddenly it is wrong to interchange the two anyway.
 I consider both of those an error on the prototype author.
 
 I would consider it a mistake to make it impossible to forward the exact 
 attribute of a  safe extern(C) D function to the prototype.
 
 -Steve
As I said before, typeof(&foo) where foo is @trusted should be @safe. The difference is only important for the implementation/prototype, not the caller.
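A small sketch of that point about function pointer types, as far as I understand current compiler behavior (not verified against every release):

---
void foo() @trusted {}

void caller() @safe
{
    pragma(msg, typeof(&foo));  // today the @trusted shows up in the type...
    auto fp = &foo;             // ...but a @safe caller can still take the address
    fp();                       // and call through it, exactly as with a @safe function
}
---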
Apr 09 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/9/20 5:21 AM, Timon Gehr wrote:
 On 09.04.20 07:12, Steven Schveighoffer wrote:
 On 4/9/20 12:31 AM, Timon Gehr wrote:
 On 08.04.20 22:47, Steven Schveighoffer wrote:
 I never said that  trusted is the same as  system.
Nor did I claim you did.
You just said: "If your opinion is truly that the following two code snippets are equivalent" and then presented two code snippets that showed the same function with implementation tagged as system or trusted. I don't know how I'm supposed to interpret your claim other than you think I believe they are equivalent. ...
I showed two code snippets with the same runtime semantics and compiler diagnostics. This is precisely the condition you used to conclude that tagging extern(C) prototypes with safe or trusted is equivalent.
No, the system code will not prevent safe violations, so they are not the same diagnostics.
 The snippets are different than what we are debating.
Not at all, but apparently I was wrong when I assumed you are aware of that fact, so I wonder why you agreed that this is a waste of time.
They are different. Your example has trusted code that is compiled by the compiler. The example we were discussing does not. It is system code.
 
 We were not talking about trusted code being called from safe code, 
 rather system code being incorrectly prototyped as  safe or  trusted. 
 Whether you mark it incorrectly  safe or incorrectly  trusted is 
 non-essential.
 ...
Of course it is essential that if you write only safe code you can't get memory safety wrong. It does not matter if the safe annotation is implicit or explicit. If you took quick a step out of the current debate, would you really consider this statement controversial?
I now think this was NOT a waste of time! I didn't quite grasp what you are saying, though I still disagree.

I look at the attributes differently. @safe means "any code in here is mechanically checked". If I put @safe incorrectly on an extern(C) function that is really @trusted or @system, then I am wrong to do so. I don't view the compiler as having responsibility to ensure my prototypes are correctly marked (or enforce that they should be incorrectly marked, as you are suggesting).

Consider a library that is ONLY built with @safe code. It may call @trusted code from external sources, but has no @trusted markings anywhere. This includes extern(C) implementations and extern(C) prototypes for those implementations. If I can say:

grep -R '@trusted' source

And get no results, I can be satisfied that the library is free from safety errors (obviously to a certain degree).

If I have to mark those @safe prototypes @trusted, now I get hits that are frivolous. Now, I start to ignore them because they are just frivolously required by the compiler; in actuality they are perfectly safe. So @trusted starts to lose its importance -- you now have "trusted you can ignore" and "trusted you need to check".
 
 In both cases, the person who wrote the prototype is to blame, as the 
 person who wrote the original code clearly meant it to be system, and 
 the person who wrote the prototype got it wrong.
 ...
You can't blame the programmer who wrote only safe code. It's just not an option. safe absolves that programmer from any responsibility for memory corruption. That's what safe is for and how it is advertised.
I only wrote @safe, so I am blameless:

---
pragma(mangle, "_D4core6memory2GC4freeFNaNbNiPvZv")
@safe nothrow @nogc void gcfree(void* p);

void main() @safe
{
    auto p = new int;
    gcfree(p);
    *p = 5;
}
---

This is where we differ -- @safe is to designate functions for the compiler to check and to tag prototype functions that have been checked. It can only go so far. I would not say that @safe code is undeniable *proof* that it's safe. It is a guarantee within a certain set of rules.
 
  system means: It's your responsibility to call this correctly.
  trusted means: I am taking responsibility for memory safety.
  safe means: The language is taking responsibility for memory safety.
I'm more focused on the mechanical side. safe is an instruction to the compiler for mechanical checking, callability and linking. In practice, safe means you can be reasonably sure that the compiler has checked such code, as long as everyone follows the rules. It's always possible to break the rules.
 We have 3 situations:

 1. The code is written as extern(C) in D, in which case, the exact 
 safety attribute should be repeated as the prototype.
No, in this case ideally you would just use the module system and don't write any separate prototypes.
Ideally yes. But sometimes you need extern(C) code for other reasons. For instance, if you want code to be callable from C and D. To clarify, I meant when specifying prototypes you have 3 situations.
 In no circumstances should extern(C) code that is written outside D 
 should be marked  safe.
The compiler can not check that and has to assume the worst.
Possibly there is a solution that satisfies your requirements. See my post here: https://forum.dlang.org/post/r6kvm4$1vq5$1 digitalmars.com
 
 I would also consider it an error to write a prototype for an 
 extern(C) D function other than what it actually is ( safe,  trusted, 
  system).
 ...
So first you say there is no difference between safe and trusted on prototypes, and now suddenly it is wrong to interchange the two anyway.
This is what I mean:

---
extern(C) void systemFun() @system {...}
extern(C) void safeFun() @safe {...}
---
---
// no measurable difference. A lie is a lie. Both are callable from the same places.
extern(C) void systemFun() @safe;    // lie
extern(C) void systemFun() @trusted; // lie

// semantic difference, one is correct, one is wrong.
extern(C) void safeFun() @safe;      // truth
extern(C) void safeFun() @trusted;   // lie
---

-Steve
Apr 09 2020
prev sibling parent Arine <arine123445128843 gmail.com> writes:
On Thursday, 9 April 2020 at 05:12:41 UTC, Steven Schveighoffer 
wrote:
 We have 3 situations:

 1. The code is written as extern(C) in D, in which case, the 
 exact safety attribute should be repeated as the prototype.
I don't think that should be the case. If the code is written in D you should just import and use it. Otherwise all extern(C) declarations should be either @trusted or @system. The extern(C) declaration is a source for user error, and the types aren't checked through mangling. So the @trusted would indicate that the declaration needs to be checked. Even if the body of the function is in D and the function is defined as @safe.

Since C mangling doesn't incorporate types or safety, if you do define it as @safe in another module and then the implementation changes, it will still link to the function even if it becomes @trusted. I think this is where the whole @safe/@trusted/@system system starts to fall flat on its face, along with other situations. If you think of this as either unsafe or safe, then it is a lot easier.

An extern(C) declaration without a body would always be unsafe; it would be unsafe by default, and there's no "safe" keyword (ala Rust) to make it safe. Something that needs to default to unsafe should stay unsafe. If there's a body for the function it would be safe, as you can verify the implementation. If you import that module, it has all that information available and it knows the declaration is correct, what types and such it uses.

If you declare extern(C) elsewhere then that's the possibility for error -- the declaration goes out of sync with the implementation, and C's mangling won't save you. It *needs* to be verified by an individual.
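A sketch of that out-of-sync hazard, with invented names, in the same two-module style used earlier in the thread; because the C-mangled symbol carries neither types nor safety, the stale declaration still links:

---
// impl.d -- today's implementation
extern(C) int* give_pointer() @system { return new int; }
---
---
// user.d -- written against yesterday's signature: wrong return type,
// wrong safety attribute, and it links anyway, since the C symbol
// is just "give_pointer"
extern(C) size_t give_pointer() @safe;
---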
Apr 09 2020
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08.04.20 15:09, Steven Schveighoffer wrote:
 On 4/7/20 10:01 PM, Timon Gehr wrote:
 On 07.04.20 23:14, Steven Schveighoffer wrote:
 On 4/6/20 2:28 PM, Timon Gehr wrote:
 On 06.04.20 01:38, Steven Schveighoffer wrote:
 I disagree with disallowing  safe as a specific marking on 
 extern(C) code. You can write  safe extern(C) functions in D, and 
 it makes no sense to require that they are  trusted at the prototype.
The linker can hijack them. The function signature of a trusted function should be safe anyway.
The linker can always hijack it.
I guess I used the wrong word. What I meant was that there is zero checking that the extern(C) function you are calling was actually checked to be safe. For extern(D), the language at least makes a small effort to ensure the safe qualifier has some meaning, but given that return types are not mangled, that also seems like a lie.
If you change the return type, then there can be detectable differences at runtime. If you change the safe vs. trusted vs. system, everything works *exactly* as before (unless the linker stops it). The biggest problem is code that works but is wrong in terms of safety. In that case, safe and trusted on prototypes are identical. ...
Wrong. In terms of _language_ and _type system_ design, what's most important is that code marked @safe is actually memory safe, conditional on programmers getting the @trusted annotations right. That's the promise that the language is giving.
 Even without intention. Having it marked  trusted isn't any better 
 than having it marked  safe in that case.
 ...
Memory corruption in a safe context should be traceable to trusted code. It's the entire point.
And in this case, there is no trusted code.
You are trusting the prototype to be correct.
 It's all  safe, but you have 
 a boy-who-cried-wolf effect on the  trusted prototype. "Oh, you can 
 ignore those prototypes because the compiler made me do it." In fact I'm 
 assuming you will see stuff like:
 
 // really  safe
  trusted extern(C) ...
 
 causing a reviewer to pass over those as possible problems.
 ...
And of course if the module has no trusted code at all the reviewer will be extra careful?
 I'm not saying  safe marking of prototypes isn't prone to issues,
Purely safe code is not supposed to be prone to issues.
 I'm saying forcing  trusted markings doesn't change that fact.
 ...
trusted means "there may be issues here". safe means "no issues here".
 Assuming  safe, no. Explicitly  safe OK, you marked it, you own it.
 ...
@safe:
// a lot of code
// ...
extern(C) void corrupt_all_the_memory();
Why would you do this when safe is the default? ...
To show it's broken. Besides that, it's not currently the default, and if it is, you might want to temporarily switch to system and back.
In that case, all is well! You properly marked the right ones @system. ...
So we're right back to trusting the programmer even though that programmer did not actually write any trusted annotations.
 In the current regime,  safe: at the top does the same thing you are 
 trying to argue against. So there won't be existing (good) code that 
 does this.
 
 In the new regime,  safe is the default, so you wouldn't need to put 
  safe: at the top.
 ...
It does not have to be at the top of the file.
 Any time the compiler forces you to mark things differently than the 
 truth, you become more numb to these compiler warnings, and don't put 
 any stock into their significance.
 ...
trusted is the truth, safe is the lie.
 In any case,  safe code by definition is code written by untrusted 
 programmers. Why do you insist there should be a good reason? Maybe 
 the programmer was a monkey. Or malicious.
safe code is mechanically checked. Why shouldn't I be able to declare that when it's true?
Because the prototype is not mechanically checked. Programmers who are not allowed to write trusted code have no business writing safe extern(C) prototypes.
 And if you make me mark it  trusted, and it for 
 some reason becomes  system (because it's not  trustable, a very 
 unlikely occurrence), having them marked  trusted doesn't help. You 
 still have to go find the prototypes (all of them) and mark them  system 
 instead.
 ...
It's the prototype maintainer's responsibility to ensure it does not happen.
 When did  safe become a matter of "it's your own fault if you shoot 
 yourself in the foot, this memory corruption you are having is a 
 good thing because it will teach you not to make more mistakes in 
 the future"? If something may break  safe-ty as it trusts the 
 programmer to get it right, it ought to be  trusted.
If you mark a safe extern(C) function trusted, how does that help? I'm talking about extern(C) functions that are checked by the compiler as safe. Why should I have to mark the prototype of that function trusted?
Because the extern(C) function can be changed to be system. Hence you must trust the maintainer of the prototype to keep it in sync with the implementation.
So you are saying this scenario is OK: Hm... my safe function is turning into system. But that's OK because everyone had to mark their prototypes trusted! So now it becomes their fault I changed it. ...
It is their fault. Why did they make their safety conditional on the signature of an extern(C) function that apparently was outside their own control?
 I don't get how this is helpful. I don't get how this is somehow 
 logically superior to the same people being able to mark the functions 
  safe. In both cases, you are pulling the rug from underneath them.
 ...
But the code at the prototype is not lying about it.
 It sounds more like a "good" non-monkey programmer should mark all 
 extern(C) function prototypes  system, regardless of the actual safety 
 of the function, and require  trusted escapes, because they can change 
 at any time without warning and they might get blamed.
If that is in fact a possibility they should absolutely do that. Otherwise they need to have conventions in place that prevent it from happening. If the safety of a piece of code is conditional on such conventions, it should be marked trusted.
 ...
What if the safe prototype is auto-generated? Then whenever it changes to system, the prototype will be changed. This means, you have a memory issue, you search for trusted, find all these prototypes that are actually prototypes for safe functions auto generated. You now get into the habit of ignoring all trusted prototypes as possible issues because it's just noise. Better to focus on the trusted implementations, which is where the real problem can happen. ...
https://www.theregister.co.uk/2016/02/04/underhand_c_2015/
Apr 08 2020
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/5/2020 12:22 PM, Timon Gehr wrote:
 Right. How can you claim that this is  safe code?
Extern declarations are always going to rely on the user declaring them correctly. It's an inherent property of a language that supports separate compilation.
Apr 05 2020
parent Mathias LANG <geod24 gmail.com> writes:
On Monday, 6 April 2020 at 02:23:03 UTC, Walter Bright wrote:
 On 4/5/2020 12:22 PM, Timon Gehr wrote:
 Right. How can you claim that this is  safe code?
Extern declarations are always going to rely on the user declaring them correctly. It's an inherent property of a language that supports separate compilation.
But correctly is not a black-and-white thing, it's a spectrum. Take `memcpy` for example. Its prototype in C is:

```
void* memcpy(void *restrict dst, const void *restrict src, size_t n);
```

In druntime, it is:

```
extern (C):
@system:
nothrow:
@nogc:

void* memcpy(return void* s1, scope const void* s2, size_t n) pure;
```

https://github.com/dlang/druntime/blob/eb6911eeb4f632d42abe4e28f5030158c9e7af52/src/core/stdc/string.d#L42

Now what happens if we omit *any* of those attributes:

- `nothrow`: Can't call it from a `nothrow` function without a try/catch. So we *know* it does not throw, but the function doesn't guarantee it, so we are restricted in the way we call it.
- `@nogc`: Can't call it from a `@nogc` function. We also *know* it is `@nogc`, but the function doesn't expose that.
- `pure`: Same thing. While the function is `pure`, the lack of the attribute *restricts usage*.
- `scope` on a parameter: Same thing, usage is *restricted* to passing pointers which are not scope.
- A lack of `const` is also restrictive, because `const` is the loosest modifier (as mutable and immutable both implicitly convert to `const`).
- `return` is the *only* one that can cause trouble, because not adding it allows escaping a reference to a local variable.

So as long as I don't forget `return` on the first argument, and get the argument types right, I won't violate *any* promise of the language even by forgetting to add `nothrow`, `@nogc` or `scope`. And even forgetting `const` will not violate any promise of the language, it will just restrict my usage.
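A small usage sketch of why those prototype attributes matter in practice; the wrapper function below is invented for illustration:

---
void fill(ubyte[] dst, const(ubyte)[] src) @system nothrow @nogc
{
    import core.stdc.string : memcpy;
    // legal here only because the druntime prototype carries nothrow and @nogc;
    // without them this call would be rejected in this restricted context
    auto n = dst.length < src.length ? dst.length : src.length;
    memcpy(dst.ptr, src.ptr, n);
}
---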
Apr 05 2020
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/5/2020 12:22 PM, Timon Gehr wrote:
 I really doubt that. It's a simple rule. The version that is easiest to 
 implement is you simply disallow extern(C) functions without body to be marked 
  safe. It's a single `if` statement in an appropriate place.
Famous last words. Just look at the swamp of misery from "simple" C rules, such as their effect on C++ overloading. The quagmire got a lot worse when C++ added type inference. I attended a Scott Meyers talk that was a full hour long just on the weird special cases forced on C++ due to those simple rules. Companies pay Scott a boatload of cash for these lectures.

I wouldn't be surprised if more than half of the bugs in bugzilla are the result of an unexpected interaction between simple exceptions to rules. I've been around this block a few thousand times.

Remember, it ain't just the compiler. The users don't remember these exceptions. Every one of them makes the language harder to learn and master.
Apr 05 2020
next sibling parent Mathias LANG <geod24 gmail.com> writes:
On Monday, 6 April 2020 at 02:43:51 UTC, Walter Bright wrote:
 On 4/5/2020 12:22 PM, Timon Gehr wrote:
 I really doubt that. It's a simple rule. The version that is 
 easiest to implement is you simply disallow extern(C) 
 functions without body to be marked  safe. It's a single `if` 
 statement in an appropriate place.
Famous last words. Just look at the swamp of misery from "simple" C rules, such as their effect on C++ overloading. The quagmire got a lot worse when C++ added type inference. I attended a Scott Meyers talk that was a full hour long just on the weird special cases forced on C++ due to those simple rules. Companies pay Scott a boatload of cash for these lectures. I wouldn't be surprised if more than half of the bugs in bugzilla are the result of an unexpected interaction between simple exceptions to rules. I've been around this block a few thousand times. Remember, it ain't just the compiler. The users don't remember these exceptions. Every one of them makes the language harder to learn and master.
I fail to understand how you are not seeing it as a problem, given that every single other person on this thread has. Consensus isn't easily achieved in the D community (or any community, for that matter), but it seems that here, even if opinions on `@safe` & `@trusted` differ, everyone agrees that having `extern` functions without D linkage be `@safe` by default is a bad idea. Surely the voice of all the long-standing D contributors has to carry some weight?

Regardless, it doesn't have to be a rule. Just make it a compiler error. E.g. `extern(C) void foo();` leads to:

"Error: `extern(C) void foo()` needs to be explicitly marked as `@system` or `@trusted`"

If you don't want this error message, then let it be `@system`, and the compiler will complain with "Error: `@safe` function `...` cannot call system function `...`" and hopefully that'd be clear enough.

You're right that `extern` relies on the user declaring things correctly to work. However, having the extern declaration potentially mis-attributed by default is a sure way to shoot oneself in the foot. By setting the default of `extern` (non-D linkage) functions to `@system`, OR by requiring users to explicitly mark the prototype one way or the other, at least the user has to *actively* make the mistake.
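A sketch of how that rule would read at the declaration site (the first error text is the suggestion above, not current compiler output):

---
extern(C) void foo();                    // proposed: error, must say @system or @trusted
extern(C) void bar() @system;            // accepted: explicitly not callable from @safe
extern(C) double cos(double) @trusted;   // accepted: a human has vouched for it
---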
Apr 05 2020
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 06.04.20 04:43, Walter Bright wrote:
 On 4/5/2020 12:22 PM, Timon Gehr wrote:
 I really doubt that. It's a simple rule. The version that is easiest 
 to implement is you simply disallow extern(C) functions without body 
 to be marked  safe. It's a single `if` statement in an appropriate place.
Famous last words. Just look at the swamp of misery from "simple" C rules, such as their effect on C++ overloading. The quagmire got a lot worse when C++ added type inference. I attended a Scott Meyers talk that was a full hour long just on the weird special cases forced on C++ due to those simple rules. Companies pay Scott a boatload of cash for these lectures. I wouldn't be surprised if more than half of the bugs in bugzilla are the result of an unexpected interaction between simple exceptions to rules. I've been around this block a few thousand times. Remember, it ain't just the compiler. The users don't remember these exceptions. Every one of them makes the language harder to learn and master.
Moving goal posts. Your claim was it's hard to implement. The remainder of the post you quoted selectively out of context already refutes your argument comprehensively.
Apr 05 2020
prev sibling next sibling parent Jonathan Marler <johnnymarler gmail.com> writes:
On Monday, 6 April 2020 at 02:43:51 UTC, Walter Bright wrote:
 On 4/5/2020 12:22 PM, Timon Gehr wrote:
 I really doubt that. It's a simple rule. The version that is 
 easiest to implement is you simply disallow extern(C) 
 functions without body to be marked  safe. It's a single `if` 
 statement in an appropriate place.
Famous last words. Just look at the swamp of misery from "simple" C rules, such as their effect on C++ overloading. The quagmire got a lot worse when C++ added type inference. I attended a Scott Meyers talk that was a full hour long just on the weird special cases forced on C++ due to those simple rules. Companies pay Scott a boatload of cash for these lectures. I wouldn't be surprised if more than half of the bugs in bugzilla are the result of an unexpected interaction between simple exceptions to rules. I've been around this block a few thousand times. Remember, it ain't just the compiler. The users don't remember these exceptions. Every one of them makes the language harder to learn and master.
I agree that every language needs to be very careful with complex rules. But I don't see this as a complex rule. I would word it like this:

treat "the code" inside "code blocks" as @safe unless explicitly marked as @system

Note that this would exclude function declarations themselves, as they are not "code", but instead wrappers to executable code that comes from somewhere else and cannot be verified to be "@safe". Whereas a function with a body (i.e. a code block) would be treated as @safe, namely, because the compiler can verify it is safe.

I think the rule is simple when you know the right way to define it. I don't see any exceptions or "ifs", just a simple straightforward rule.

I would like to add however that I don't see you using this same "the rule is too complex" argument on DIP 1032. Which would read something like this:

function and delegate pointer types use the attributes that are listed, except when the type declarations themselves are function parameters or defined inside a function; in which case they would inherit all attributes from their containing function. (question: would nested functions also inherit attributes?). Although, if the type itself is defined outside the function and only an alias is used to declare a pointer inside the function then it will not inherit the attributes from the containing function

I also have low confidence that this covers all the complexity that rule would require, but that's what I came up with so far. That rule seems much more complex to me, and the only benefit I see is saving some tokens on some function/delegate type declarations, which I don't see come up too often anyway.

From my end it looks like you aren't applying the same rules to every proposal; rather, you're selectively cherry-picking arguments for each individual one.
Apr 05 2020
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/5/20 10:43 PM, Walter Bright wrote:
 On 4/5/2020 12:22 PM, Timon Gehr wrote:
 I really doubt that. It's a simple rule. The version that is easiest 
 to implement is you simply disallow extern(C) functions without body 
 to be marked  safe. It's a single `if` statement in an appropriate place.
Famous last words. Just look at the swamp of misery from "simple" C rules, such as their effect on C++ overloading. The quagmire got a lot worse when C++ added type inference. I attended a Scott Meyers talk that was a full hour long just on the weird special cases forced on C++ due to those simple rules. Companies pay Scott a boatload of cash for these lectures.
Is this really your argument? Do we need Scott Meyers to explain such "esoteric" compiler errors like:

Error: cannot call @system function memcpy from @safe function main.

I think this is not an hour-long talk, but a 10-second talk: "C functions aren't checked by the D compiler, so they are @system by default." Done.

I think possibly those folks are going to be much more vulnerable to the existing rules surrounding @trusted that many of the core D developers can't seem to get right.

@system by default extern(C) functions are literally the most understandable and correct part of the whole D @safe/@system/@trusted system. And you want to remove that.

Please reconsider.

-Steve
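For reference, a minimal program that produces roughly that error today, since druntime declares memcpy @system; it intentionally does not compile:

---
void main() @safe
{
    import core.stdc.string : memcpy;
    auto a = new int(1);
    auto b = new int;
    // rejected with an error along the lines of:
    //   @safe function 'main' cannot call @system function 'core.stdc.string.memcpy'
    memcpy(b, a, int.sizeof);
}
---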
Apr 06 2020
next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Monday, 6 April 2020 at 12:35:20 UTC, Steven Schveighoffer 
wrote:
 Is this really your argument? Do we need Scott Meyers to 
 explain such "esoteric" compiler errors like:

 Error: cannot call  system function memcpy from  safe function 
 main.

 I think this is not an hour-long talk, but a 10-second talk: "C 
 functions aren't checked by the D compiler, so they are  system 
 by default." Done.
We can do this even in the compiler. Since ` safe` checks cannot be performed on a bodyless function, it should be an error to declare a bodyless `extern(C)` function without either ` trusted` or ` system` attribute explicitly marked. And it should be straightforward to report this error, something like: Error: default safe checks cannot be performed on extern function `{name}`. Please indicate explicitly whether this function is system or trusted. Honestly, I think we'd be far more in need of a Scott Meyers to explain extern C functions being treated as safe by default (and how to avoid unintended bugs caused by this) than by the straightforward fact (well accepted e.g. by Rust users) that extern functions cannot be assumed safe, because, well, how do you prove it?
Apr 06 2020
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Apr 06, 2020 at 06:45:14PM +0000, Joseph Rushton Wakeling via
Digitalmars-d wrote:
[...]
 Honestly, I think we'd be far more in need of a Scott Meyers to
 explain extern C functions being treated as safe by default (and how
 to avoid unintended bugs caused by this) than by the straightforward
 fact (well accepted e.g. by Rust users) that extern functions cannot
 be assumed safe, because, well, how do you prove it?
I can see it already. DIP 1028 gets merged, and the following DConf Scott Meyers comes to give a talk about how safe in D is full of holes and doesn't fulfill its promises. T -- Famous last words: I *think* this will work...
Apr 06 2020
next sibling parent 12345swordy <alexanderheistermann gmail.com> writes:
On Monday, 6 April 2020 at 19:12:18 UTC, H. S. Teoh wrote:
 On Mon, Apr 06, 2020 at 06:45:14PM +0000, Joseph Rushton 
 Wakeling via Digitalmars-d wrote: [...]
 Honestly, I think we'd be far more in need of a Scott Meyers 
 to explain extern C functions being treated as safe by default 
 (and how to avoid unintended bugs caused by this) than by the 
 straightforward fact (well accepted e.g. by Rust users) that 
 extern functions cannot be assumed safe, because, well, how do 
 you prove it?
I can see it already. DIP 1028 gets merged, and the following DConf Scott Meyers comes to give a talk about how safe in D is full of holes and doesn't fulfill its promises. T
Better yet, email Scott Meyers and have him explain to Walter why extern(C) functions being marked @safe by default is a terrible idea.

-Alex
Apr 06 2020
prev sibling parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Monday, 6 April 2020 at 19:12:18 UTC, H. S. Teoh wrote:
 On Mon, Apr 06, 2020 at 06:45:14PM +0000, Joseph Rushton 
 Wakeling via Digitalmars-d wrote: [...]
 Honestly, I think we'd be far more in need of a Scott Meyers 
 to explain extern C functions being treated as safe by default 
 (and how to avoid unintended bugs caused by this) than by the 
 straightforward fact (well accepted e.g. by Rust users) that 
 extern functions cannot be assumed safe, because, well, how do 
 you prove it?
I can see it already. DIP 1028 gets merged, and the following DConf Scott Meyers comes to give a talk about how safe in D is full of holes and doesn't fulfill its promises. T
LOL!
Apr 07 2020
prev sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Monday, 6 April 2020 at 12:35:20 UTC, Steven Schveighoffer 
wrote:
 On 4/5/20 10:43 PM, Walter Bright wrote:
 On 4/5/2020 12:22 PM, Timon Gehr wrote:
 I really doubt that. It's a simple rule. The version that is 
 easiest to implement is you simply disallow extern(C) 
 functions without body to be marked  safe. It's a single `if` 
 statement in an appropriate place.
Famous last words. Just look at the swamp of misery from "simple" C rules, such as their effect on C++ overloading. The quagmire got a lot worse when C++ added type inference. I attended a Scott Meyers talk that was a full hour long just on the weird special cases forced on C++ due to those simple rules. Companies pay Scott a boatload of cash for these lectures.
Is this really your argument? Do we need Scott Meyers to explain such "esoteric" compiler errors like: Error: cannot call system function memcpy from safe function main. I think this is not an hour-long talk, but a 10-second talk: "C functions aren't checked by the D compiler, so they are system by default." Done. I think possibly those folks are going to be much more vulnerable to the existing rules surrounding trusted that many of the core D developers can't seem to get right. system by default extern(C) functions are literally the most understandable and correct part of the whole D safe/ system/ trusted system. And you want to remove that. Please reconsider.
Even though the formal review period is over I'd like to leave a response to an earlier post from Walter. I've tagged it on here since it picks up on Steve's thoughts on the topic, which I agree with, and because a direct response in the Feedback Thread was, understandably, deleted. For context, this is an excerpt from a post that Walter made in the Feedback Thread for DIP 1028 wherein he argues *against* system by default for extern declarations. Interspersed are my thoughts. [snip]
 1. it's a special case inconsistency, which has its own costs 
 and confusion.
C is not a safe language. Defaulting C externs to safe, as you propose, would be confusing.
 2. the compiler cannot verify any extern declarations as being 
 safe, even D
 ones. It's always going to be up to the user to annotate them 
 correctly.
safe should mean safe, as in machine checkable. My position is that if it can't be machine checked, it shouldn't be considered safe.
 3. the extern(C) specifies an ABI, it doesn't say anything 
 about how the
 function is implemented, or even which language it is 
 implemented in. A pretty
 big chunk of the dmd implementation (80-90%?) is extern(C++)
Exactly, "it doesn't say anything..." so the default should be system. I'm not sure what you're trying to say in the second part. Is it that we should default to safe because your code is safe-by-programmer-assertion and there's a lot of it?
 4. it's trivial to mark a block of C function declarations with 
  system and
 trivial to audit it. I've done it already to a bunch of 
 druntime headers
As you and others have noted, safety by convention doesn't scale.
 5. D's separate compilation model relies on extern declarations 
 where source is
 not available and safety cannot be machine checked. It's 
 inherent
Yes. Absent tooling upgrades, the super-safety conscious have whole program compilation and trusted reviews to work with.
 6. We're just talking about the default. The whole point of 
  safe being the
 default is that it is far and away the most common case, even 
 for C functions.
safe doesn't work as a probability. It aspires to certainty. ('aspires' because there could be bugs in the checker, or holes in the spec, or ...)
 7. A function having different attributes depending on whether 
 or not a body is
 present is surprising behavior
Not surprising to me. It just reflects the ability/inability of the compiler to check safety.
 8. annotating "extern(C) void free(void*);" as  safe doesn't 
 make it safe, either, again relying on the user
Relying on the user to overcome bad defaults isn't going to work. If it did you'd not need the safe by default DIP.
 [snip]
Like many others before me, I urge you to change direction regarding extern defaults. Barring that, I hope Atila will step in for the team to prevent an own goal.
Apr 11 2020
prev sibling parent reply Jonathan Marler <johnnymarler gmail.com> writes:
On Saturday, 4 April 2020 at 06:53:57 UTC, Walter Bright wrote:
 [....]
 Now, as to what to do. I spent a few minutes and added 
 ` system:` in to the C headers in druntime for windows, posix, 
 and linux. Done. I hope someone else will do it for freebsd, 
 etc., if not I'll go do that to.
The thought crossed my mind that this same reasoning could be applied as a counter-argument to this DIP:

... So functions are not safe by default? Now, as to what to do. I spent a few minutes and added ` safe:` in to the source files in druntime for windows, posix, and linux. Done. I hope someone else will do it for freebsd, etc., if not I'll go do that to.

That being said, it seems like a weak argument, so I don't think it changes much, but I thought it was interesting.
Apr 05 2020
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Monday, 6 April 2020 at 00:09:41 UTC, Jonathan Marler wrote:
 Now, as to what to do. I spent a few minutes and added
 ` safe:` in to the source files in druntime for windows, posix,
 and linux. Done.
That's my preferred solution to all this. It is a breaking change but much less of one. I wrote about it a few months ago in my blog: http://dpldocs.info/this-week-in-d/Blog.Posted_2020_01_13.html#my-attribute-proposal---no-changes-to-defaults
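For concreteness, a minimal sketch of that opt-in approach (the module and function names here are hypothetical, not taken from the blog post):

```
// mylib/textutil.d -- opts in to checking without changing any language defaults
module mylib.textutil;

@safe: // every declaration below this line is compiler-checked for memory safety

size_t countLines(const(char)[] text)
{
    size_t n = 1;
    foreach (c; text)
        if (c == '\n')
            ++n;
    return n;
}
```

Modules that aren't ready simply omit the attribute and keep compiling exactly as before.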
Apr 05 2020
parent Jonathan Marler <johnnymarler gmail.com> writes:
On Monday, 6 April 2020 at 00:54:16 UTC, Adam D. Ruppe wrote:
 On Monday, 6 April 2020 at 00:09:41 UTC, Jonathan Marler wrote:
 Now, as to what to do. I spent a few minutes and added
 ` safe:` in to the source files in druntime for windows, posix,
 and linux. Done.
That's my preferred solution to all this. It is a breaking change but much less of one. I wrote about it a few months ago in my blog: http://dpldocs.info/this-week-in-d/Blog.Posted_2020_01_13.html#my-attribute-proposal---no-changes-to-defaults
Yeah, I'd like to see projects start doing this and report back what bugs it found and how it improved code. Would be great to get some data on the benefits of " safe by default" in the real world.
Apr 05 2020
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 06.04.20 02:09, Jonathan Marler wrote:
 On Saturday, 4 April 2020 at 06:53:57 UTC, Walter Bright wrote:
 [....]
 Now, as to what to do. I spent a few minutes and added ` system:` in 
 to the C headers in druntime for windows, posix, and linux. Done. I 
 hope someone else will do it for freebsd, etc., if not I'll go do that 
 to.
The thought crossed my mind that this same reasoning could be applied as a counter-argument for this DIP: ... So functions are not safe by default? Now, as to what to do. I spent a few minutes and added ` safe:` in to the source files in druntime for windows, posix, and linux. Done. I hope someone else will do it for freebsd, etc., if not I'll go do that to. ...
So you annotated all those extern(C) function prototypes as safe? :)
 That being said, it seems like a weak argument, so I don't think it 
 changes much, but I thought it was interesting.
 
Apr 05 2020
parent Jonathan Marler <johnnymarler gmail.com> writes:
On Monday, 6 April 2020 at 01:43:58 UTC, Timon Gehr wrote:
 On 06.04.20 02:09, Jonathan Marler wrote:
 On Saturday, 4 April 2020 at 06:53:57 UTC, Walter Bright wrote:
 [....]
 Now, as to what to do. I spent a few minutes and added 
 ` system:` in to the C headers in druntime for windows, 
 posix, and linux. Done. I hope someone else will do it for 
 freebsd, etc., if not I'll go do that to.
The thought crossed my mind that this same reasoning could be applied as a counter-argument for this DIP: ... So functions are not safe by default? Now, as to what to do. I spent a few minutes and added ` safe:` in to the source files in druntime for windows, posix, and linux. Done. I hope someone else will do it for freebsd, etc., if not I'll go do that to. ...
So you annotated all those extern(C) function prototypes as safe? :)
That paragraph wasn't something I was saying. I took what Walter said (see above) and replaced a couple of words to show that the argument he was using to say extern functions should be considered safe by default could also be used to argue against the DIP itself.

A good way to test the merit of an argument is to apply it to multiple scenarios. For example, you could say "I like apples because they are red." To test if that's the real reason you like apples, you can try the same argument in other scenarios. You could try "I like tomatoes because they are red", but if you don't actually like tomatoes, which are also red, then the "redness" of objects may not have much to do with why you like apples. Redness may contribute to your "likeness" but there are probably more important factors.

In this situation Walter said (paraphrased) "If you want to make your extern functions system by default, put them in their own file and put " system:" at the top". But the whole point of this DIP is to change the default from system to safe, so the same argument he used would be an argument against his own DIP: "If you want to make your functions safe by default, put them in their own file and put " safe:" at the top".

This leads me to believe that he hasn't shared or identified the real reason he doesn't agree with what people are saying about defaulting to system on unverifiable functions. It's like he is saying "I like apples because they are red", and then turning around later and saying "I don't like tomatoes because they are red". Either defaults matter or they don't. If they don't, then you are saying your own DIP doesn't matter; if they do, then I wouldn't dismiss what people are saying about the default safeness of unverifiable functions.
Apr 06 2020
prev sibling parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, March 26, 2020 5:40:41 AM MDT IGotD- via Digitalmars-d wrote:
 On Thursday, 26 March 2020 at 05:14:44 UTC, Jonathan M Davis

 wrote:
 Making it so that all code must be either verified by the
 compiler to be  safe or be marked by the programmer to be
  trusted or  system means that all code which could contain
 memory safety issues will be segregated by  trusted or system,
 whereas right now, you can have large swathes of code which is
 not marked with anything and is unchecked. If the programmer is
 not using the compiler to verify  safety and is not verifying
  system sections of code and marking it as  trusted, then there
 are no compiler guarantees about memory safety in that code.
 Sure, the programmer may have done a good enough job that there
 are no memory safety bugs in the code (and that's far more
 likely with D code than C/C++ code), but by making  safe the
 default, it makes it so that none of that will fall through the
 cracks unless the programmer explicitly tells the compiler to
 not check.
FFI functions like extern(C) must be system as the compiler cannot check this. Should extern(C) automatically mean system? Well, I think so, because that's the only feasible possibility. I think we are heading into the safe, trusted, system discussion, and that's where I think the problem really lies: trusted just messes things up. If safe code can call system code directly then we are good and we can use extern(C) just as before (with or without a system block, another discussion). If we have this strange "human verified" trusted nomenclature then things start to become fuzzy. What is an FFI function, trusted or system? I think that trusted must go, then things start to clear up.
Without trusted, safe is completely broken. If you just think it through, that should be clear. Right now, if you have code such as

auto foo(int* ptr) @safe
{
    ...
    ++ptr;
    ...
}

the function will fail to compile, because you're attempting to do something that's system inside an safe function. You may or may not have realized that you were doing something system, so it could very well be catching a bug for you. Either way, you then have four options:

1. Alter the code so that it doesn't need to do anything system in order to do whatever it is that you're trying to do.

2. Mark the function as trusted to indicate that you know that what you're doing is actually safe.

3. Put the system code in another function which you mark as trusted, and call it from the safe function.

4. Make it so that the function is system instead, requiring that the caller verify that what they're doing is safe and deal with trusted.

Which solution is best depends on the code in question. However, what happens if trusted isn't a thing? Suddenly, there is no way to make this function safe. The compiler isn't smart enough to understand your code to the point that it can determine that what you're doing is safe, and you have no way to tell the compiler that you're sure that it's actually safe. So, there are then two possibilities:

1. The function must be system, and every function which calls it must be system. You therefore have no clue where the code is that potentially has memory safety problems, and none of that code is being verified by the compiler for safety. Ultimately, you end up with programs that might have pockets of safe code, but in general, they won't be able to do much with safe at all, because all it takes is a single system operation (like calling a C function for I/O), and then nothing in the call stack can be safe.

2. You allow safe functions to do system operations. This would then make safe utterly meaningless, because it wouldn't be verifying anything at all. You could get partial verification by disallowing system operations in safe functions while still allowing an safe function to call system functions, but that still hides that something system is happening and makes it so that the compiler is neither checking those function calls nor flagging them as errors so that you know that you need to do something to actually use them in an safe manner. It would introduce a massive hole in the safety system.

Without trusted, there is no bridge between safe and system code. safe needs to be verifying code for memory safety, or it's utterly pointless, and a program that can't call any system code is ultimately going to be useless. The downside to trusted is of course that it's up to the programmer to verify it, and they could get it wrong, but since the compiler isn't smart enough to verify that code for memory safety, there really isn't an alternative if you want the compiler to be verifying much of your program for memory safety at all. Ultimately, we do have to rely on the programmer to get trusted right, but if they don't, then the rules of the safety system make it so that when you have a memory safety issue, you only have to look at the code that's marked as trusted and the system code that it calls in order to track it down.
It can certainly be argued whether safe or system should be the default, and it can be argued whether it really makes sense to have trusted at the function level instead of having some kind of trusted block within a function, but trusted itself is vital to how the safety system works.

- Jonathan M Davis
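For illustration, a minimal sketch of option 3 from the list above -- a narrow @trusted bridge between checked @safe code and an @system C function (the wrapper name is hypothetical):

```
import core.stdc.string : memcpy; // extern(C) prototype, @system

// Narrow @trusted bridge: the programmer, not the compiler, vouches that this
// particular use of memcpy cannot corrupt memory, because the lengths are checked.
void copyInts(int[] dst, const(int)[] src) @trusted
{
    assert(dst.length >= src.length);
    memcpy(dst.ptr, src.ptr, src.length * int.sizeof);
}

void caller() @safe
{
    auto a = new int[](4);
    auto b = [1, 2, 3, 4];
    copyInts(a, b); // fine: @safe code calling the @trusted bridge
    // Calling memcpy directly here would be rejected: it is @system.
}
```

Any memory-safety bug in a program built this way has to originate in the @trusted function or in the @system code it calls, which is exactly the segregation described above.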
Mar 26 2020
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/25/2020 9:35 PM, Jonathan Marler wrote:
 I assume there's a command-line switch to enable it so I could see 
 for myself?
https://github.com/dlang/dmd/pull/10709/files#diff-8248195efc96675f6bf930674f8fcf6eR744
Mar 25 2020
parent reply Jonathan Marler <johnnymarler gmail.com> writes:
On Thursday, 26 March 2020 at 05:16:23 UTC, Walter Bright wrote:
 On 3/25/2020 9:35 PM, Jonathan Marler wrote:
 I assume there's a command-line switch to enable it so I could 
 see for myself?
https://github.com/dlang/dmd/pull/10709/files#diff-8248195efc96675f6bf930674f8fcf6eR744
Hmmm, unfortunately the first project I tried it on exposed a compiler bug. I've seen this bug before in recent compilers. Symbols for certain templates aren't getting emitted. That being said, I ran it on a few projects and didn't find a single issue. However, I'm pretty sure I was compiling unsafe code, as none of my code is tagged with safe/ system/ trusted. I was compiling my custom druntime/standard library and rootfs which contains assembly and all sorts of crazy stuff. There might be a bug in your PR? I rebased it and I've pushed the rebased version I compiled here: https://github.com/marler8997/dmd/tree/safeDefaultRebased

NOTE: I rebased it so I could use druntime/phobos off of master so I wouldn't have to cross-reference which versions of those repos to use to get everything working
Mar 25 2020
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/25/2020 11:43 PM, Jonathan Marler wrote:
 NOTE: I rebased it so I could use druntime/phobos off of master so I wouldn't 
 have to cross-reference which versions of those repos to use to get everything 
 working
I rebased the PR. It was getting a bit stale anyway :-/
Mar 26 2020
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Mar 26, 2020 at 06:59:23AM +0000, Andrej Mitrovic via Digitalmars-d
wrote:
 On Thursday, 26 March 2020 at 04:35:23 UTC, Jonathan Marler wrote:
 I don't know about you but the code I write almost never has bugs.
I don't usually post around here anymore, but this gave me a good belly laugh. Thanks for that.
+1 :-)

T

--
I think Debian's doing something wrong, `apt-get install pesticide', doesn't seem to remove the bugs on my system! -- Mike Dresser
Mar 26 2020
prev sibling next sibling parent Kagamin <spam here.lot> writes:
On Thursday, 26 March 2020 at 02:58:26 UTC, Walter Bright wrote:
 Are you sure there'd be that much that breaks? If much does 
 break, I'd suggest re-evaluating the coding techniques you use.
Safety checks are quite conservative to the point that you even proposed https://issues.dlang.org/show_bug.cgi?id=20691 and they will be more conservative to plug more holes.
Mar 25 2020
prev sibling parent welkam <wwwelkam gmail.com> writes:
On Thursday, 26 March 2020 at 02:58:26 UTC, Walter Bright wrote:
  safe by default is going to improve your code.
This swayed my opinion about this DIP. Now I am for it, but please try to limit breaking changes to once a year.
Apr 03 2020
prev sibling next sibling parent reply aliak <something something.com> writes:
On Wednesday, 25 March 2020 at 07:02:32 UTC, Mike Parker wrote:
 This is the discussion thread for the Final Review of DIP 1028, 
 "Make  safe the Default":

 https://github.com/dlang/DIPs/blob/5afe088809bed47e45e14c9a90d7e78910ac4054/DIPs/DIP1028.md

 The review period will end at 11:59 PM ET on April 8, or when I 
 make a post declaring it complete. Discussion in this thread 
 may continue beyond that point.

 Here in the discussion thread, you are free to discuss anything 
 and everything related to the DIP. Express your support or 
 opposition, debate alternatives, argue the merits, etc.

 However, if you have any specific feedback on how to improve 
 the proposal itself, then please post it in the feedback 
 thread. The feedback thread will be the source for the review 
 summary I write at the end of this review round. I will post a 
 link to that thread immediately following this post. Just be 
 sure to read and understand the Reviewer Guidelines before 
 posting there:

 https://github.com/dlang/DIPs/blob/master/docs/guidelines-reviewers.md

 And my blog post on the difference between the Discussion and 
 Feedback threads:

 https://dlang.org/blog/2020/01/26/dip-reviews-discussion-vs-feedback/

 Please stay on topic here. I will delete posts that are 
 completely off-topic.
I feel like this has gotten lost in the discussion, a comment from ag0aep6g:

Today, this rightfully fails compilation:

----
extern (C) void* memcpy(void* dst, const void* src, size_t n);

int main() @safe
{
    auto x = new int;
    auto y = new int(13);
    immutable r = new int(0);
    memcpy(x, y, 100);
    return *r;
}
----

With DIP 1028 in its current state, it passes.

I also do not think allowing that to compile is acceptable. But it was unclear from the DIP if that's allowed or not, because it specifically says D functions. So my understanding is: if extern (C) is applied on a function declaration, then an annotation is explicitly required? Is this correct?
Mar 27 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/27/2020 5:32 AM, aliak wrote:
 So my understanding is if extern (C) is applied on a function declaration,
then 
 an annotation is explicitly required?
No. Without an explicit annotation, it will be the default ( safe).
Apr 03 2020
next sibling parent reply Mathias LANG <geod24 gmail.com> writes:
On Friday, 3 April 2020 at 09:50:39 UTC, Walter Bright wrote:
 On 3/27/2020 5:32 AM, aliak wrote:
 So my understanding is if extern (C) is applied on a function 
 declaration, then an annotation is explicitly required?
No. Without an explicit annotation, it will be the default ( safe).
And absolutely no one else thinks it's a good idea, because it's essentially slapping ` trusted` on all `extern(C)` functions.
Apr 03 2020
parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Friday, 3 April 2020 at 10:15:35 UTC, Mathias LANG wrote:
 On Friday, 3 April 2020 at 09:50:39 UTC, Walter Bright wrote:
 On 3/27/2020 5:32 AM, aliak wrote:
 So my understanding is if extern (C) is applied on a function 
 declaration, then an annotation is explicitly required?
No. Without an explicit annotation, it will be the default ( safe).
And absolutely no one else thing it's a good idea, because it's essentially slapping ` trusted` on all `extern(C)` functions.
I think I agree. I'd sooner have no default at all on functions without implementation (i.e. you have to explicitly say system).
Apr 03 2020
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 04/04/2020 12:27 AM, John Colvin wrote:
 On Friday, 3 April 2020 at 10:15:35 UTC, Mathias LANG wrote:
 On Friday, 3 April 2020 at 09:50:39 UTC, Walter Bright wrote:
 On 3/27/2020 5:32 AM, aliak wrote:
 So my understanding is if extern (C) is applied on a function 
 declaration, then an annotation is explicitly required?
No. Without an explicit annotation, it will be the default ( safe).
And absolutely no one else thing it's a good idea, because it's essentially slapping ` trusted` on all `extern(C)` functions.
I think I agree. I'd sooner have no default at all on functions without implementation (i.e. you have to explicitly say system).
Unless the compiler has checked the body for safe, it cannot be assumed to be safe, as that breaks its guarantees. So I am on the side of: without a body, default = system.
Apr 03 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/3/20 10:39 AM, rikki cattermole wrote:
 On 04/04/2020 12:27 AM, John Colvin wrote:
 On Friday, 3 April 2020 at 10:15:35 UTC, Mathias LANG wrote:
 On Friday, 3 April 2020 at 09:50:39 UTC, Walter Bright wrote:
 On 3/27/2020 5:32 AM, aliak wrote:
 So my understanding is if extern (C) is applied on a function 
 declaration, then an annotation is explicitly required?
No. Without an explicit annotation, it will be the default ( safe).
And absolutely no one else thing it's a good idea, because it's essentially slapping ` trusted` on all `extern(C)` functions.
I think I agree. I'd sooner have no default at all on functions without implementation (i.e. you have to explicitly say system).
Unless the compiler has checked the body for safe, it cannot be assumed to be safe. As that breaks its guarantees. So I am on the side of, without body, default = system.
For extern(D) functions, the mangling takes care of the problem. I would say everything that doesn't mangle safe-ty into the name, assume system (breaks less code but inconsistent/less clear) OR error (requires annotation, so breaks code, but is clearer). Basically, this only comes into play for .di files. -Steve
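To make the mangling point concrete, here is a small sketch (illustrative only; the Nf/Ne markers come from the D name-mangling ABI, and the full output also contains the module name):

```
@safe    void checkedFn()   {}
@trusted void vouchedFn()   {}
@system  void uncheckedFn() {}
extern(C) void cFn();

pragma(msg, checkedFn.mangleof);   // ..."FNfZv" -- "Nf" encodes @safe
pragma(msg, vouchedFn.mangleof);   // ..."FNeZv" -- "Ne" encodes @trusted
pragma(msg, uncheckedFn.mangleof); // ..."FZv"   -- no marker means @system
pragma(msg, cFn.mangleof);         // "cFn"     -- C mangling carries no safety info at all
```

So an extern(D) declaration that lies about its safety simply fails to link against the real body, whereas an extern(C) declaration can claim anything.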
Apr 03 2020
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Friday, April 3, 2020 3:50:39 AM MDT Walter Bright via Digitalmars-d 
wrote:
 On 3/27/2020 5:32 AM, aliak wrote:
 So my understanding is if extern (C) is applied on a function
 declaration, then an annotation is explicitly required?
No. Without an explicit annotation, it will be the default ( safe).
Which is really, really bad, because it means that the compiler is basically slapping trusted on all extern(C) declarations. It's introducing a hole in safe, and there is absolutely no reason to do so. IMHO, _nothing_ should ever be marked with safe (explicitly or implicitly) unless the compiler has actually verified that it's memory safe. As such, non-extern(D) declarations should either be system or trusted, and trusted needs to be done by the programmer, because it indicates that the programmer is saying that they verified that it was memory safe. It makes no sense for the compiler to ever treat extern(C) declarations as if they were marked with safe. For the compiler to treat them as safe means that it is lying about what it's verified.

- Jonathan M Davis
Apr 03 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/3/2020 8:39 AM, Jonathan M Davis wrote:
 [...]
Yes, this has all been brought up and discussed before in this thread.
Apr 03 2020
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Friday, April 3, 2020 2:18:54 PM MDT Walter Bright via Digitalmars-d 
wrote:
 On 4/3/2020 8:39 AM, Jonathan M Davis wrote:
 [...]
Yes, this has all been brought up and discussed before in this thread.
Well, I don't think that you've actually acknowledged any of it, and what responses you do have make it seem like you're not aware of it or are ignoring it. - Jonathan M Davis
Apr 03 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/3/2020 5:44 PM, Jonathan M Davis wrote:
 Well, I don't think that you've actually acknowledged any of it, and what
 responses you do have make it seem like you're not aware of it or are
 ignoring it.
I have replied to it more than once.
Apr 03 2020
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04.04.20 08:55, Walter Bright wrote:
 On 4/3/2020 5:44 PM, Jonathan M Davis wrote:
 Well, I don't think that you've actually acknowledged any of it, and what
 responses you do have make it seem like you're not aware of it or are
 ignoring it.
I have replied to it more than once.
It however seems you have yet to reply in a way that makes sense to anyone (perhaps besides yourself). You could argue that the underlying problem is that it is even allowed to mark extern(C) functions as safe, and that this is a language design bug that exists today and is independent of DIP 1028, but acknowledging the problem (silent API breakage, insanely error prone defaults for extern functions) and then dismissing it as irrelevant by pointing out that there is no problem if you just fix all the existing code and then proceed to make no further mistakes is somewhat ridiculous.
Apr 05 2020
prev sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Friday, 3 April 2020 at 09:50:39 UTC, Walter Bright wrote:
 On 3/27/2020 5:32 AM, aliak wrote:
 So my understanding is if extern (C) is applied on a function 
 declaration, then an annotation is explicitly required?
No. Without an explicit annotation, it will be the default ( safe).
That sounds risky, and is exactly the opposite of how Rust handles extern functions: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#using-extern-functions-to-call-external-code "Functions declared within `extern` blocks are always unsafe to call from Rust code. The reason is that other languages don’t enforce Rust’s rules and guarantees, and Rust can’t check them, so responsibility falls on the programmer to ensure safety." Now I'm not suggesting that D should do it because Rust does ;-) But their reasoning seems sound, and I don't see an obvious reason for assuming safe-ty of functions that the D compiler cannot verify.
Apr 03 2020
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/3/2020 2:42 PM, Joseph Rushton Wakeling wrote:
 Now I'm not suggesting that D should do it because Rust does ;-) But their 
 reasoning seems sound, and I don't see an obvious reason for assuming  safe-ty 
 of functions that the D compiler cannot verify.
In both Rust and D the reliance is on the programmer when calling functions in another language. D gives you the option to allow the programmer to treat them as safe if he wants to.
Apr 03 2020
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04.04.20 08:58, Walter Bright wrote:
 On 4/3/2020 2:42 PM, Joseph Rushton Wakeling wrote:
 Now I'm not suggesting that D should do it because Rust does ;-) But 
 their reasoning seems sound, and I don't see an obvious reason for 
 assuming  safe-ty of functions that the D compiler cannot verify.
In both Rust and D the reliance is on the programmer when calling functions in another language. D gives you the option to allow the programmer to treat them as safe if he wants to.
Which is fully unnecessary because trusted fills that role perfectly fine. The real D innovation that you appear to be arguing in favor of is that D allows you to treat the function as safe by accident, i.e., even if you don't want to. :) I don't understand your "elite C hacker" attitude towards this. If (teams of) programmers never make mistakes, they don't need safe. (BTW: I am still getting new messages from up to two days ago. Not sure what's up with that.)
Apr 05 2020
prev sibling next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
As I just said over in this other thread:

https://forum.dlang.org/post/jipdihpufuslnfayhakh forum.dlang.org

if we get DIP 1032 done correctly, with the obvious consequences 
of it done, it makes this DIP 1028 a totally unnecessary major 
breaking change.
Apr 03 2020
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
Okay, this is really a response to a post from Walter in the feedback
thread, but since we're not allowed to post responses there, I'm just
copy-pasting it all into the discussion thread.

On Saturday, April 11, 2020 1:30:26 AM MDT Walter Bright via Digitalmars-d
wrote:
 On 3/25/2020 9:57 PM, Jonathan M Davis wrote:
 There's also the question of declaration vs definition for functions
 which aren't extern(D). It makes no sense for a function declaration
 which is not extern(D) to be treated as  safe by default, because the
 compiler can't guarantee its  safety. However, I don't think that it
 would be a problem for non-extern(D) function definitions, because then
 the compiler _can_ check their  safety. Either way, the DIP should be
 explicit about what happens with declarations and definitions which are
 not extern(D) - regardless of whether declarations and definitions are
 treated differently.

 I'd also argue that  safe should not be allowed on function declarations
 which are not extern(D), because the compiler can't guarantee their
  safety (meaning that if an explicit  safety attribute is supplied, it
 must either be  system or  trusted), and allowing  safe on
 non-extern(D) declarations makes it much harder for programmers to grep
 for  trusted code that could be the source of  safety problems. In
 practice, right now, I'm unaware of anyone marking extern(C)
 declarations as  safe, but if someone did, it could be very hard to
 track down the problem if it were hiding a memory safety bug, and given
 the ease of marking all declarations and definitions in a module with
  safe via  safe: or  safe {}, it wouldn't surprise me if some code
 bases are accidentally marking extern(C) declarations with  safe right
 now and thus hiding memory safety issues. However, allowing  safe on
 non-extern(D) function _definitions_ should be fine, since then the
 compiler actually is verifying the code. Regardless, much as I think
 that the DIP _should_ make it illegal to mark non-extern(D)
 declarations as  safe, that would arguably be an improvement over what
 the DIP currently does rather than being required for the DIP to
 actually make sense or be acceptable.
 On the other hand,

 1. it's a special case inconsistency, which has its own costs and
 confusion.
What this DIP proposes will cause even more confusion, and the "special case" here should be incredibly clear. The compiler cannot treat _anything_ as safe unless it can mechanically verify that it's safe, or the programmer has marked it as trusted, indicating that _they_ have verified it. And thus, as the compiler cannot determine that a non-extern(D) declaration is memory safe, it cannot treat it as safe. To do anything else actually makes safe harder to understand, not easier. It also makes it far, far more likely that extern(C) declarations will be incorrectly marked - especially since right now, leaving them unmarked results in them being correctly marked as system, whereas after the change, properly written C bindings would suddenly be treated as safe even though they had not been verified by the programmer to be safe. So, this DIP _will_ introduce bugs into existing code, and it will make writing new extern(C) declarations that much more error-prone.
 2. the compiler cannot verify any extern declarations as being safe, even
 D ones. It's always going to be up to the user to annotate them
 correctly.
Sure, but with the status quo, the compiler isn't lying about what it's verified, and you won't accidentally end up with code being treated as safe just because an attribute was missed. safe is about mechanical verification of memory safety, and extern(C) declarations cannot be mechanically verified, so it makes no sense for the compiler to treat them as safe.
 3. the extern(C) specifies an ABI, it doesn't say anything about how the
 function is implemented, or even which language it is implemented in. A
 pretty big chunk of the dmd implementation (80-90%?) is extern(C++)
Sure, and because the compiler hasn't verified it for memory safety, it's _completely_ inappropriate for it to claim that it has by treating it as safe.
 4. it's trivial to mark a block of C function declarations with  system
 and trivial to audit it. I've done it already to a bunch of druntime
 headers

 5. D's separate compilation model relies on extern declarations where
 source is not available and safety cannot be machine checked. It's
 inherent
Sure, but extern(C) declarations aren't always in separate modules where it would make sense to just slap system: at the top, and you're basically proposing that trusted be reversed for extern(C) declarations. Right now, they're not assumed to be safe unless the programmer has gone to the effort of indicating that they've verified them, whereas with the DIP, they will assumed to have been verified by the programmer unless they've been marked with system to indicate that the programmer hasn't verified them or that they've been verified to _not_ be memory safe. That's completely backwards from how trusted and safety in general is supposed to work. It's also much more error-prone. You're essentially asking the programmer to explicitly mark code where there might be bugs rather than having them mark where they've verified that there aren't.
 6. We're just talking about the default. The whole point of  safe being
 the default is that it is far and away the most common case, even for C
 functions.
Yes, and you're making the default error-prone and confusing for no benefit to the programmer. These functions have _not_ been verified by the compiler, so having the compiler mark them as safe is a lie, and it makes the programmer's job harder. They then have to worry about tracking down extern(C) declarations that haven't been marked and whether there are bugs in the code, because a programmer failed to mark an extern(C) declaration explicitly as system (and it could easily be that that declaration had been written prior to this DIP coming into effect, meaning that it had been correct at the time it was written but isn't anymore). Regardless of how likely it is that a particular C function is actually memory safe, unless the compiler can verify that, having it treat it like it is just means that that function is not necessarily being checked by anyone, whereas with the status quo, the programmer will get errors if they try to use it in safe code, forcing them to realize that it hasn't been marked as trusted and that it either should be marked as trusted, or it needs to be treated as system. It would make far, far more sense to require that all non-extern(D) declarations be explicitly marked with system or trusted than to have the compiler blindly treat them as safe.
 7. A function having different attributes depending on whether or not a
 body is present is surprising behavior
Is it really? It should be pretty clear that the compiler can't verify a function declaration for memory safety (even in the case of extern(D) function declarations, the compiler isn't verifying anything when compiling the declaration - it just knows that the body was checked). I would think that having the compiler treat code that it can't verify (and can't know was verified) for memory safety as safe would be far more surprising behavior. And if you think that having function declarations being treated differently than function definitions would be confusing, then simply treating all non-extern(D) functions as system by default shouldn't be confusing. The rule for that is _really_ simple, and unlike what the DIP proposes, it wouldn't result in invisible problems. It's less user-friendly for anyone writing extern(C) functions in D, but at least, it wouldn't result in the compiler lying about what it's verified for memory safety, and you wouldn't even potentially be introducing confusion about declarations and definitions being treated differently. Alternatively, you could just always require that in any cases where the compiler can't even attempt to verify code for safety, an safety attribute must be explicitly provided. Arguably, that's what should be happening with extern(C) declarations anyway, and it's essentially what you're claiming is best practice in the DIP. Why not just enforce that with the compiler? It would avoid having the compiler claim that code is safe when it hasn't verified diddly-squat, and it would avoid extern(C) declarations being accidentally treated as safe.
 8. annotating "extern(C) void free(void*);" as  safe doesn't make it safe,
 either, again relying on the user
IMHO, marking such a function declaration with safe shouldn't even be legal, because the compiler has not verified it for memory safety. trusted is the appropriate attribute for that. And while the programmer could also incorrectly mark the function declaration with trusted, at least as long as you can't mark anything with safe which isn't mechanically verified for safety by the compiler, you know that any memory safety bugs you run into are caused by trusted code (or system code that trusted code calls) and that that's where you need to look for them.

As long as non-extern(D) function declarations can be either implicitly or explicitly marked with safe, you can't find memory safety bugs just by looking for trusted code, and wasn't part of the whole point of trusted to make it so that you could segregate all code that hasn't actually been verified for memory safety by the compiler so that the programmer can easily find the code that could be causing memory safety bugs? We should be making it illegal to mark anything as safe which the compiler has not actually verified rather than making the compiler treat code that it hasn't verified as safe just because it can't check it.
 9. what do we do with "nothrow" by default? Say this doesn't apply to
 extern(C++) functions? Is anyone going to remember all these special
 cases?
nothrow by default would be a disaster even for extern(D), given that exceptions are by far the best way in general to deal with error conditions, and trying to disable them by default makes them much harder to use. So, I _really_ hope that you're not seriously considering making nothrow the default. Regardless, treating non-extern(D) declarations as nothrow by default presents many of the same problems as treating them as safe by default and makes no sense for the same reasons. The compiler can't verify that they're nothrow, so it should _not_ be treating them as nothrow. That should be simple for anyone to remember and understand. And regardless, compiler error messages should make it crystal clear if you try to call an extern(C++) function from a nothrow function, just like they should make it crystal clear if you try to call an system function from an safe function.

You seem to be the only person in any of the discussion for safe by default who thinks that it makes any sense for extern(C) function declarations to be treated as safe by default, and the only logic you seem to have to support it is:

1. You think that it would be a confusing special case if non-extern(D) function declarations were treated differently from function definitions (as would be the case if non-extern(D) function declarations were system by default or required explicit safety attributes), and you think that it would be a confusing special case to treat non-extern(D) functions differently from extern(D) functions (as would be the case if non-extern(D) declarations and definitions were system by default or required explicit safety attributes).

2. You seem to think that just because it's ultimately up to the programmer to not make mistakes with extern(C) declarations, the compiler shouldn't help them at all (especially if that help requires introducing a special case).

It can easily be argued that treating extern(C) function declarations as safe instead of system is more confusing, and that the fact that the compiler is claiming that code has been verified for memory safety when it hasn't been is _far_ more confusing. Regardless, I don't think that you can possibly argue that what you're proposing isn't more error-prone than the status quo. And pretty much everyone else in the discussion on this seems to think that because it's trivial for the compiler to see that an extern(C) declaration has not been verified for safety, the compiler should use that information to help the programmer, not ignore it. Treating extern(C) declarations as safe _will_ introduce bugs rather than catch them, and it doesn't make the programmer's life any easier.

_Please_ reconsider your position on this. If this DIP goes through, then it will definitely not be the case in D that safe code has been verified by the compiler for memory safety. It won't even be the case that it's either been verified by the compiler or the programmer. You're about to drive a truck through safety just so that you can make safe the default in places where it does make sense. There's no need to make it the default for non-extern(D) declarations in order to make it the default for function definitions, and you seem to be the only person here who thinks that it would make sense to do so. As this DIP stands, I pray to God that Atila rejects this DIP if you won't.
And at least from the perspective of anyone actually using the compiler rather than writing it, it would be _so_ simple and straightforward to just treat extern(C) declarations as system - and it would avoid bugs in the process at no cost to the programmer, whereas this DIP will introduce and hide bugs. And if it really is best practice to mark all non-extern(D) declarations explicitly with either system or trusted, then why not just require it? It would result in fewer bugs without making anything confusing.

If this DIP goes through as-is, we will be stuck explaining to D programmers for years to come why they have to be wary of the compiler treating extern(C) function declarations incorrectly and introducing invisible memory safety bugs into their programs if they're not very careful. The compiler will actively be making dealing with extern(C) harder and more error-prone - and require more explanation to programmers learning D. On the other hand, if non-extern(D) declarations are system by default or require explicit safety attributes, then it's _simple_ to explain to people that that means that they need to be verifying the bindings for safety and use trusted appropriately - or that they need to just mark them with system and leave it up to whoever is using them. The only memory bugs introduced at that point will be because trusted was used incorrectly rather than because an extern(C) declaration was incorrectly treated as safe, because the programmer missed it.

- Jonathan M Davis
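To make the status-quo-versus-DIP difference concrete, a minimal sketch (the post-DIP behavior shown in the comment is as described in this thread, not something a released compiler does today):

```
extern(C) void free(void* ptr); // unannotated C binding

void oops(int* p) @safe
{
    free(p); // today: error, because free defaults to @system
             // under DIP 1028 as written: accepted, the declaration is
             // silently treated as @safe even though nothing verified it
}
```

In the second case nothing in the source points a reviewer at the unverified call.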
Apr 12 2020
parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Sunday, 12 April 2020 at 11:49:34 UTC, Jonathan M Davis wrote:
 If this DIP goes through as-is, we will be stuck explaining to 
 D programmers for years to come why they have to be wary of the 
 compiler treating extern(C) function declarations incorrectly 
 and introducing invisible memory safety bugs into your program 
 if you're not very careful. The compiler will actively be 
 making dealing with extern(C) harder and more error-prone - and 
 require more explanation to programmers learning D. On the 
 other hand, if non-extern(D) declarations are  system by 
 default or require explicit  safety attributes, then it's 
 _simple_ to explain to people that that means that they need to 
 be verifying the bindings for  safety and use  trusted 
 appropriately - or that they need to just mark it them with 
  system and leave it up to whoever is using them. The only 
 memory bugs introduced at that point will be because  trusted 
 was used incorrectly rather than because an extern(C) 
 declaration was incorrectly treated as  safe, because the 
 programmer missed it.
I really want to strongly back Jonathan here. I don't usually like to pull the "as a corporate user..." line, but ... as a corporate user of D, it really matters to me that ` safe` means that memory safety checks have actually been performed on the code concerned, rather than just assumed. All things considered it seems to me like perhaps the most straightforward way to avoid a horrible clash of concerns here is just to insist that `extern` functions must always have an _explicit_ (not inferred) memory safety attribute. That doesn't ban putting ` safe` on them explicitly -- which might be something that could result e.g. from a `.di` file for a D lib implementing `extern(C)` code -- but it would mean that if ` safe` is on an extern function signature, it's because someone deliberately put it there.
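A sketch of what that requirement might look like for a block of C bindings (a hypothetical rule -- today an unannotated declaration simply gets whatever the default is):

```
// Hypothetical: explicit safety attribute required on extern declarations
extern(C) @system void* malloc(size_t size); // OK: explicitly unverified
extern(C) @trusted int  isdigit(int c);      // OK: a human deliberately vouched for it
// extern(C) int isalpha(int c);             // would be an error: no explicit attribute
```

Under such a rule, any safety claim on an extern signature can only ever be a deliberate act by a person, never a silent default.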
Apr 14 2020
parent Bruce Carneal <bcarneal gmail.com> writes:
On Tuesday, 14 April 2020 at 18:54:29 UTC, Joseph Rushton 
Wakeling wrote:
 On Sunday, 12 April 2020 at 11:49:34 UTC, Jonathan M Davis 
 wrote:
 If this DIP goes through as-is, we will be stuck explaining to 
 D programmers for years to come why they have to be wary of 
 the compiler treating extern(C) function declarations 
 incorrectly and introducing invisible memory safety bugs into 
 your program if you're not very careful. The compiler will 
 actively be making dealing with extern(C) harder and more 
 error-prone - and require more explanation to programmers 
 learning D. On the other hand, if non-extern(D) declarations 
 are  system by default or require explicit  safety attributes, 
 then it's _simple_ to explain to people that that means that 
 they need to be verifying the bindings for  safety and use 
  trusted appropriately - or that they need to just mark it 
 them with  system and leave it up to whoever is using them. 
 The only memory bugs introduced at that point will be because 
  trusted was used incorrectly rather than because an extern(C) 
 declaration was incorrectly treated as  safe, because the 
 programmer missed it.
I really want to strongly back Jonathan here. I don't usually like to pull the "as a corporate user..." line, but ... as a corporate user of D, it really matters to me that ` safe` means that memory safety checks have actually been performed on the code concerned, rather than just assumed. All things considered it seems to me like perhaps the most straightforward way to avoid a horrible clash of concerns here is just to insist that `extern` functions must always have an _explicit_ (not inferred) memory safety attribute. That doesn't ban putting ` safe` on them explicitly -- which might be something that could result e.g. from a `.di` file for a D lib implementing `extern(C)` code -- but it would mean that if ` safe` is on an extern function signature, it's because someone deliberately put it there.
The further we get from safe meaning "a vetted compiler checked this during this compilation" the less we should trust safety. safe via name mangling is, to me, just slightly less trustworthy than whole program compilation. A .di file is less trustworthy as it's more likely to have been human edited. An explicit safe attribute on an extern C/C++ declaration is even less trustworthy, I'd treat it like an trusted. As bad as those are, the DIP would be significantly worse. Defaulting extern C/C++ to safe means that you'd be on the hook for an trusted style audit against the transitive closure of any such routines that you pull in. Very few people are going to do that.
Apr 14 2020
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Apr 12, 2020 at 05:49:34AM -0600, Jonathan M Davis via Digitalmars-d
wrote:
[...]
 On Saturday, April 11, 2020 1:30:26 AM MDT Walter Bright via Digitalmars-d
 wrote:
[...]
 1. it's a special case inconsistency, which has its own costs and
 confusion.
What this DIP proposes will cause even more confusion, and the "special case" here should be incredibly clear. The compiler cannot treat _anything_ as safe unless it can mechanically verify that it's safe, or the programmer has marked it as trusted, indicating that _they_ have verified it. And thus, as the compiler cannot determine that a non-extern(D) declaration is memory safe, it cannot treat it as safe. To do anything else actually makes safe harder to understand, not easier.
Yes, I think a reasonable compromise is "D vs. non-D". I.e., if it's extern(D), then the D-specific defaults apply: safe, nothrow, etc. But if it's non-D, then pessimal defaults apply: system, throwing, etc., and it's up to the programmer to override them if it's otherwise. Blindly assuming safe (and nothrow, etc.) apply to non-D declarations by default makes no sense: you can't assume D-specific things apply to any other language. It will make safe essentially nothing more than lip service and programming by convention, and destroy whatever remnants of actual safety guarantees it may have had.

T

--
Bare foot: (n.) A device for locating thumb tacks on the floor.
Apr 12 2020
parent reply Dennis <dkorpel gmail.com> writes:
On Sunday, 12 April 2020 at 14:25:45 UTC, H. S. Teoh wrote:
 Blindly assuming  safe (and nothrow, etc.) apply to non-D 
 declarations by default makes no sense: you can't assume 
 D-specific things apply to any other language.
The thing is, like Walter said, extern(language) does not mean the code was written in another language. It strictly means to use the ABI and mangling scheme of that language. I can write my own math library in D:

```
module custommath;

float customSqrt(float x)
{
    // implementation doing nothing @system
}

float customSin(float x)
{
    // implementation doing nothing @system
}
```

Then I decide I want to make this a closed-source library available to C users, so I add extern(C): on top. Back on the D side, suddenly this code breaks:

```
import std;

void main()
{
    writeln(customSqrt(16.0)); // error: @safe main can't call @system customSqrt
}
```

Also, I write this .di file by simply removing the function bodies:

```
module custommath;

float customSqrt(float x);
float customSin(float x);
```

Again, I suddenly can't call them from my safe `main` function. Why did the attributes change?

I think the conflict of this discussion comes from balancing three expectations:

1. extern(language) merely specifies ABI and mangling
2. a non-auto/template function can be made extern by simply replacing the {body} with a ;
3. memory corruption can always be traced back to system or trusted code

Now there is actually a way to satisfy all three, and it's with system by default :) But with safe by default, one of them has to go.

Given that the DIP's rationale hinges on the importance of memory safety, you'd think point 3 is the most important, and either point 1 is sacrificed (have only extern(D) safe by default) or point 2 (bodyless functions must be trusted or system).

Surprisingly, Walter wants to keep 1 and 2 and justifies it by saying extern functions are out of scope for safe, similar to how safe cannot save you when bugs in the linker, operating system or processor cause memory corruption. If your bindings or the external code are faulty, you're in trouble regardless of whether you annotate it with safe / trusted / system. However, at least marking the bindings as trusted or system is in line with D's memory safety goal (point 3 above).
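If expectation 2 is the one that has to give, the stripped .di file just needs explicit annotations instead of relying on any default; a sketch, assuming the library author has audited the hidden bodies:

```
// custommath.di -- declarations annotated by hand after the bodies are stripped
module custommath;

@trusted float customSqrt(float x); // author vouches: the hidden body was checked @safe
@trusted float customSin(float x);
```

That keeps expectation 3 intact: a memory corruption bug can still only hide behind a @trusted or @system mark.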
Apr 12 2020
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 12.04.20 18:06, Dennis wrote:
 
 
 1. extern(language) merely specifies ABI and mangling
 2. a non-auto/template function can be made extern by simply replacing 
 the {body} into a ;
 3. memory corruption can always be traced back to  system or  trusted code
 
 Now there is actually a way to satisfy all three, and it's with  system 
 by default :)
No. 2 and 3 are mutually inconsistent.
Apr 12 2020