
digitalmars.D - DIP1000 observation

reply Bruce Carneal <bcarneal gmail.com> writes:
The lesson I take from the DIP 1000 history is that we need 
something that is simpler to explain, something that is much 
easier to use correctly, something that models the problem more 
clearly.

Bug reporting/fixing is great, but sometimes the bug pattern 
indicates a rethink is in order.  I believe this is one of those 
times.
Aug 25
next sibling parent "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 26/08/2024 5:55 AM, Bruce Carneal wrote:
 The lesson I take from the DIP 1000 history is that we need something 
 that is simpler to explain, something that is much easier to use 
 correctly, something that models the problem more clearly.
I have another observation that I believe to be the final nail in the coffin of DIP1000's current design.

What people refer to it as is DIP1000, its DIP number. This includes both Walter and Dennis. It is not called scoped pointers or escape analysis.

The human mind is highly associative; the fact that people are not connecting the dots here is a major sign that it is not in fact solving a problem that people have, let alone one they model in their minds. This should have happened in the last 10 years, yet it hasn't.
Aug 25
prev sibling next sibling parent reply Lance Bachmeier <no spam.net> writes:
On Sunday, 25 August 2024 at 17:55:04 UTC, Bruce Carneal wrote:
 The lesson I take from the DIP 1000 history is that we need 
 something that is simpler to explain, something that is much 
 easier to use correctly, something that models the problem more 
 clearly.

 Bug reporting/fixing is great, but sometimes the bug pattern 
 indicates a rethink is in order.  I believe this is one of 
 those times.
IMO the lesson is that this kind of complexity does not belong in the language by default. The second lesson is that the folks deciding on the direction of the language don't care at all about new users or basically anyone that's not doing Rust-style programming. But I'm not going to waste more time fighting this battle.
Aug 25
next sibling parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 25 August 2024 at 20:46:39 UTC, Lance Bachmeier wrote:
 On Sunday, 25 August 2024 at 17:55:04 UTC, Bruce Carneal wrote:
 The lesson I take from the DIP 1000 history is that we need 
 something that is simpler to explain, something that is much 
 easier to use correctly, something that models the problem 
 more clearly.

 Bug reporting/fixing is great, but sometimes the bug pattern 
 indicates a rethink is in order.  I believe this is one of 
 those times.
 IMO the lesson is that this kind of complexity does not belong in the language by default.
I agree, as do others that I've talked with.
 The second lesson is that the folks deciding on the direction 
 of the language don't care at all about new users or basically 
 anyone that's not doing Rust-style programming.
If you look at programming languages across three dimensions, safety X performance X convenience (thanks Paul), Rust appears to have capitulated on convenience in order to stand out in safety and performance. I believe we can and should do much better than Rust on this Pareto surface but we'll need something better than DIP1000 to make headway on the (safety X performance) front.
 But I'm not going to waste more time fighting this battle.
Thanks for your past exertions on behalf of a better language for us all. I hope you can find some other battle worth fighting or rejoin this one as/when better alternatives come to the fore.
Aug 25
parent Lance Bachmeier <no spam.net> writes:
On Sunday, 25 August 2024 at 21:52:36 UTC, Bruce Carneal wrote:

 If you look at programming languages across three dimensions, 
 safety X performance X convenience (thanks Paul), Rust appears 
 to have capitulated on convenience in order to stand out in 
 safety and performance.  I believe we can and should do much 
 better than Rust on this pareto surface but we'll need 
 something better than DIP1000 to make headway on the (safety X 
 performance) front.
I wish we'd make all the safety stuff opt-in, because it adds complexity but in many cases doesn't provide any benefit. I've never understood the desire to force it on everyone whether they have a use for it or not.
 But I'm not going to waste more time fighting this battle.
 Thanks for your past exertions on behalf of a better language for us all. I hope you can find some other battle worth fighting or rejoin this one as/when better alternatives come to the fore.
Oh, I'll keep using the language, but I'll focus on continuing to make D a full-featured option for data analysis, which is basically done. Arguing against language complexity just burns through time I could spend on that.
Aug 25
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 25 August 2024 at 20:46:39 UTC, Lance Bachmeier wrote:
 IMO the lesson is that this kind of complexity does not belong 
 in the language by default. The second lesson is that the folks 
 deciding on the direction of the language don't care at all 
 about new users or basically anyone that's not doing Rust-style 
 programming.

 But I'm not going to waste more time fighting this battle.
Actually the people running the language don't care about "Rust-style programming" either--that's why they've been clinging to the false simplicity of DIP 1000 instead of adopting a more powerful (but more complex) Rust-inspired approach to lifetimes. As far as I can tell, the only true motivating force is the desire to go on social media like Twitter and Hacker News and brag to uninformed internet users that "D is a memory safe language." The fact that this claim does not hold up to scrutiny is beside the point, because most people will never bother to check. Needless to say, with such leadership, D will never achieve anything of substance in this area.
Aug 25
parent Atila Neves <atila.neves gmail.com> writes:
On Monday, 26 August 2024 at 02:41:37 UTC, Paul Backus wrote:
 On Sunday, 25 August 2024 at 20:46:39 UTC, Lance Bachmeier 
 wrote:
 [...]
 Actually the people running the language don't care about "Rust-style programming" either--that's why they've been clinging to the false simplicity of DIP 1000 instead of adopting a more powerful (but more complex) Rust-inspired approach to lifetimes. As far as I can tell, the only true motivating force is the desire to go on social media like Twitter and Hacker News and brag to uninformed internet users that "D is a memory safe language." The fact that this claim does not hold up to scrutiny is beside the point, because most people will never bother to check. Needless to say, with such leadership, D will never achieve anything of substance in this area.
The motivating force for me is to have compiler-enforced memory safety without GC allocated memory, for when that's actually needed. I agree however that we need to rethink DIP1000.
Aug 29
prev sibling next sibling parent claptrap <clap trap.com> writes:
On Sunday, 25 August 2024 at 17:55:04 UTC, Bruce Carneal wrote:
 The lesson I take from the DIP 1000 history is that we need 
 something that is simpler to explain, something that is much 
 easier to use correctly, something that models the problem more 
 clearly.

 Bug reporting/fixing is great, but sometimes the bug pattern 
 indicates a rethink is in order.  I believe this is one of 
 those times.
DIP1000 is like quantum mechanics, even the people who invented it don't understand it.
Aug 25
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Sunday, August 25, 2024 11:55:04 AM MDT Bruce Carneal via Digitalmars-d 
wrote:
 The lesson I take from the DIP 1000 history is that we need
 something that is simpler to explain, something that is much
 easier to use correctly, something that models the problem more
 clearly.

 Bug reporting/fixing is great, but sometimes the bug pattern
 indicates a rethink is in order.  I believe this is one of those
 times.
I am strongly of the opinion that DIP 1000 is far more complex than it's worth and that we'd be better off just dropping it. If you're primarily using the GC, then DIP 1000 doesn't even help you, because it's focused on stuff like trying to make taking the address of a local variable @safe instead of just leaving that up to the programmer like we always have. D can already be memory-safe quite easily so long as you minimize stuff like taking the address of local variables, and we have @trusted to deal with those issues. The GC allows us to treat the vast majority of stuff as @safe, and @trusted lets us deal with the rest.

The main problem with the current situation IMHO is that slicing static arrays is not treated the same as taking the address of a local variable even though it's exactly the same thing except that the length of the array is passed along with the address. If we treat that as @system (and preferably also remove the implicit slicing of static arrays), then the main safety hole that we currently have that I'm aware of is plugged. And if we have additional safety holes, we can address each of those individually by treating stuff that's @system as @system instead of complicating the language further to try to treat more as @safe without needing @trusted.

The problem of course is then the folks who want to be able to do stuff like take the address of a local variable and have the compiler be smart enough to determine that that's @safe - particularly since if they're doing it correctly, it will be. However, I _really_ don't think that the complexity that comes with DIP 1000 is worth fixing that problem.
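The static-array hole described here can be sketched as follows (a hypothetical example, not from the thread; it assumes a compiler without `-preview=dip1000`, where slicing a local static array is accepted in `@safe` code):

```d
int[] global;

@safe void store(int[] arr)
{
    global = arr; // escapes its argument
}

@safe void hole()
{
    int[16] buf;
    // Slicing the static array yields the address of buf[0] plus a length.
    // Without -preview=dip1000 this compiles as @safe, yet after hole()
    // returns, `global` refers to dead stack memory.
    store(buf[]);
}
```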
Unless a D program is actively trying to not use the GC, only a small portion of the program is going to be doing stuff like taking the address of a local, and normally, taking the address of a local is restricted enough in what it's doing (since it pretty much has to be if you don't want the address to escape) that it shouldn't be hard to manually verify that no escaping is taking place. And if it is hard to verify manually, the odds are that you're going to have to cast away scope to shut up the compiler with DIP 1000 anyway, because the compiler won't have been able to verify it either.

If someone is able to come up with an alternate solution which is much simpler, then great! It would be awesome to be able to treat more stuff as @safe without needing @trusted. But as Walter has pointed out, a lot of the complexity that comes with DIP 1000 is because D is a complex language, and I'm not convinced that it's even possible to come up with a simple solution to this problem unless we do something like get rid of separate compilation so that the compiler can examine all of the code and track the lifetimes of everything (which we're obviously not going to do, since that has a very large cost in other areas like compilation times and integrating with C/C++ code).

D is already able to make a lot of code memory-safe thanks to the fact that it has a GC, and I think that we should stop digging this rabbit hole that's trying to make malloc and taking the address of locals memory-safe without requiring @trusted. It's just not worth it. Personally, every time that I've even tried to use DIP 1000, I've given up, because it's just too much of a pain. And honestly, I suspect that if DIP 1000 ever gets made the default, I will either just be casting away scope everywhere that the compiler infers it and slap @trusted on that code to shut the compiler up, or I'll just give up on making that code @safe.
I'm perfectly fine with manually verifying the rare case where I need to take the address of a local variable or slice a static array, and I do _not_ want to deal with figuring out where and how I need to slap scope everywhere to make the compiler happy - especially when it's then going to start complaining about stuff that worked perfectly fine and was quite memory safe prior to scope getting involved. - Jonathan M Davis
Aug 25
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/25/2024 6:28 PM, Jonathan M Davis wrote:
 I'm
 perfectly fine with manually verifying the rare case where I need to take
 the address of a local variable or slice a static array, and I do _not_ want
 to deal with figuring out where and how I need to slap scope everywhere to
 make the compiler happy - especially when it's then going to start
 complaining about stuff that worked perfectly fine and was quite memory safe
 prior to scope getting involved.
If you never take the address of a local, or a ref to a local, dip1000 is not going to complain about your code! For example:

```
struct S
{
    @safe ref int bar() { }
}

@safe int* foo(int i)
{
    S s;
    s.bar();
    return null;
}
```

compiles without error with -dip1000. The following does error:

```
@safe int* foo(int i)
{
    return bar(&i);
}

@trusted int* bar(int* p)
{
    return p;
}
```

```
reference to local variable `i` assigned to non-scope parameter `p` calling `bar`
```

Perhaps that error check on a @trusted function call should be suppressed.
Aug 25
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Monday, August 26, 2024 12:20:18 AM MDT Walter Bright via Digitalmars-d 
wrote:
 On 8/25/2024 6:28 PM, Jonathan M Davis wrote:
 I'm
 perfectly fine with manually verifying the rare case where I need to take
 the address of a local variable or slice a static array, and I do _not_
 want to deal with figuring out where and how I need to slap scope
 everywhere to make the compiler happy - especially when it's then going
 to start complaining about stuff that worked perfectly fine and was quite
 memory safe prior to scope getting involved.
 If you never take the address of a local, or a ref to a local, dip1000 is not going to complain about your code!
Sure, but in the cases where I do take the address of a local, I don't want to deal with the compiler inferring anything as scope. Then I'd have to start worrying about whether the functions that I pass the pointer to are scope, and if I mark up any code as scope to make the compiler happy, that quickly becomes viral, whereas right now, I can just take the address of a local and happily pass it around without any complaints. I just have to mark the function as @trusted, and if the code is complex enough that it's not easy to see whether a pointer is escaping, then I'm just not going to do something like take the address of a local.

Where it comes up more frequently for me though is static arrays. There are plenty of cases where it makes sense to create a static array to avoid allocations and slice it to actually operate on the data when there is no need to return any such slice, so it's easy to avoid any escaping. But using DIP 1000 means that scope has to be used all over the place. Part of the reason that I gave up on doing anything with DIP 1000 and dxml is because some of the tests use static arrays and slice them, and if DIP 1000 is enabled, then scope is needed all over the place. It's not helping. It just makes it so that I have to mark up a bunch of code with scope to make the compiler happy.

I don't want to have to mark up code with scope just to handle the case where someone might decide to do something like pass in the slice of a static array with code where it's obvious whether anything is escaping. It quickly becomes viral, and I'm not at all convinced that it's actually making things safer given that if the code is complex enough that you need the compiler to tell you that stuff isn't escaping, then the compiler is probably going to fail at figuring it out somewhere in the process anyway, forcing you to cast away scope. At that point, it's better to just not do something like take the address of a local or slice a static array.
And while to an extent, you can just avoid using scope, because DIP 1000 infers scope in various places, you sometimes end up with it whether you like it or not. So, if DIP 1000 ever becomes the default, it's going to affect folks who do not want the help. I fully expect that if DIP 1000 becomes the default, I will either be casting away scope whenever it gets inferred, or if that's enough of a pain, with my own code, I'll just give up on @safe entirely and mark everything as @system. I very much like the idea of @safe catching cases where I accidentally do something @system, but I don't want to have to mark up my code all over the place just so that the compiler can make a bit more code @safe without @trusted. The ROI isn't even vaguely there IMHO - especially with a language that already makes it possible to make the vast majority of code @safe thanks to the GC.

Right now, the issue of taking the address of locals is quite isolated, because it really only affects code around the place where you take the address (at the cost of some basic thinking to make sure that you're not doing something stupid with the pointer), whereas with DIP 1000, we're going to have cases where scope gets used all over the place just in case the pointer or array that's passed in has the address of a local variable. Instead of it being just the caller's problem, it becomes a problem for every function that's being called with that pointer or slice. And the situation is going to be particularly bad for library writers, since then even if you don't want to bother with scope, it's going to cause problems for users of your library who do want to use scope if you don't go to the extra effort of using it anywhere and everywhere that might need it for the calling code to be able to pass in something that's scope.
And the fact that scope gets inferred is likely to cause problems in templated code in particular, because then anyone who uses scope in their code and passes it to your library that doesn't use scope will get it inferred in some places (and potentially in a _lot_ of places if member functions are marked with scope), causing compilation errors. It seems to me that scope risks becoming an utter nightmare with ranges in particular if anyone decides that they want to mark any functions on their ranges with scope given how common it is for range-based code to wrap ranges with other ranges using templated code.

For my own libraries, I'll probably just say tough luck and say that folks will have to cast away scope or simply not use it if they want to use my libraries, whereas if I have to deal with it for Phobos, that will significantly reduce my interest in doing work for Phobos that I don't feel that I really need to do, because it's just going to be too much of a pain to deal with.
From where I sit, DIP 1000 is an extremely viral feature that provides minimal benefit. If anything, I'd like to see fewer attributes in D, not more, and while scope does technically already exist, its use is quite limited without DIP 1000, whereas with DIP 1000, there's a real risk that it's going to have to be used all over the place - particularly in library code.

- Jonathan M Davis
Aug 26
next sibling parent reply Dennis <dkorpel gmail.com> writes:
On Monday, 26 August 2024 at 10:56:06 UTC, Jonathan M Davis wrote:
 and if I mark up any code as scope to make the compiler happy, 
 that quickly becomes viral, whereas right now, I can just take 
 the address of a local and happily pass it around without any 
 complaints. I just have to mark the function as @trusted
 (...)
 But using DIP 1000 means that scope has to be used all over the 
 place.
No need! Even with dip1000, you can pass scope values to non-scope parameters in `@system` or `@trusted` functions:

```D
void f(char[] str) @safe; // parameter `str` is not marked `scope`

void g() @trusted // checking stack pointer lifetimes manually here
{
    char[32] str;
    f(str[]);
}
```

The scopeness of `str[]` ends then and there, no viral application of `scope` needed.
Aug 26
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Monday, August 26, 2024 12:39:54 PM MDT Dennis via Digitalmars-d wrote:
 On Monday, 26 August 2024 at 10:56:06 UTC, Jonathan M Davis wrote:
 and if I mark up any code as scope to make the compiler happy,
 that quickly becomes viral, whereas right now, I can just take
 the address of a local and happily pass it around without any
 complaints. I just have to mark the function as @trusted
 (...)
 But using DIP 1000 means that scope has to be used all over the
 place.
 No need! Even with dip1000, you can pass scope values to non-scope parameters in `@system` or `@trusted` functions:
 
 ```D
 void f(char[] str) @safe; // parameter `str` is not marked `scope`
 
 void g() @trusted // checking stack pointer lifetimes manually here
 {
     char[32] str;
     f(str[]);
 }
 ```
 
 The scopeness of `str[]` ends then and there, no viral application of `scope` needed.
Well, that does reduce the problem, but it still means that a function which is perfectly @safe as long as you don't pass it scoped types is either going to have to use scope for scoped types to work, or the caller is going to have to cast away scope. For my own libraries, I'd likely just tell them to cast away scope, but it does add yet another attributes issue where some libraries are going to support it and some won't. And the problem will be worse with templated code - particularly if user-defined types get involved, and they have scope member functions. Attribute inference will both reduce the problem by making some stuff just work and increase the problem by introducing scope in places that wouldn't otherwise have it.

As long as you're doing some really basic stuff with pointers and arrays, maybe what DIP 1000 does with scope is restricted enough to be sane, but the problem pretty much has to balloon once you bring user-defined types into the mix - particularly with templated code. Certainly, it was issues with user-defined types and member functions that made me give up on going to the effort of making dxml use DIP 1000 and instead hope that we'd eventually decide to do something other than DIP 1000 which wasn't so hard to figure out and use.

And honestly, if @system and @trusted functions can accept scope data without having to cast away scope, that will probably just increase how much I use @system instead of @safe and overall result in more memory safety issues in my code, whereas right now, the issues that DIP 1000 is looking to help with are pretty rare in my code, since the cases where I take the address of a local or slice a static array are pretty restricted. But I'm sure that I'll figure out how to make things work with code that I write for myself or other code where I'm dealing with a relatively small number of people who can all get on the same page about what to do about attributes in a particular code base.
Ultimately, I expect that the bigger problem is going to be libraries, since they have to deal with users' choices with regards to scope and memory safety. - Jonathan M Davis
Aug 26
parent reply Dennis <dkorpel gmail.com> writes:
On Monday, 26 August 2024 at 23:37:36 UTC, Jonathan M Davis wrote:
 Well, that does reduce the problem, but it still means that a 
 function which is perfectly  safe as long as you don't pass it 
 scoped types is either going to have to use scope for scoped 
 types to work, or the caller is going to have to cast away 
 scope.
As my example showed, there is no cast involved. The cast operator can't even be used for this purpose. All you need is the caller to be `@system` or `@trusted`, which without dip1000 must be the case anyway since you're taking the address of a local.
 And honestly, if @system and @trusted functions can accept 
 scope data without having to cast away scope
It's about whether the call site is `@system` or `@trusted`, not the called function.
 Attribute inferrence will both reduce the problem by making 
 some stuff just work and increase the problem by introducing 
 scope in places that wouldn't otherwise have it.
(...)
 that will probably just increase how much I use @system instead 
 of @safe and overall result in more memory safety issues in my 
 code,
Are you aware that DIP1000 strictly reduces the amount of `@system` / `@trusted` code you need? And also that adding / inferring `scope` to a parameter is always a strict improvement for the caller of the function with that parameter? If you can find a counterexample, where enabling dip1000 would require more `@system` code without relying on the current bug that slicing locals is allowed in `@safe` functions, or an example where inferring / adding `scope` to a parameter adds restrictions to the call site, please post it here or on a bugzilla.
Aug 27
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, August 27, 2024 4:51:26 AM MDT Dennis via Digitalmars-d wrote:
 On Monday, 26 August 2024 at 23:37:36 UTC, Jonathan M Davis wrote:
 that will probably just increase how much I use @system instead 
 of @safe and overall result in more memory safety issues in my 
 code,
 Are you aware that DIP1000 strictly reduces the amount of `@system` / `@trusted` code you need?
It only reduces the amount of @system or @trusted code you need if you're willing to mark up code with scope, which I am not. I'm talking about needing @system and @trusted more in order to avoid the compiler inferring anything as scope or giving me any errors that relate to scope. If DIP 1000 becomes the default, then the amount of @system code in my programs will strictly increase, because I will use @system and @trusted to avoid it entirely. I am perfectly fine with the status quo of taking the address of local variables being treated as @system, and I don't want to have to mark up any code with scope in order to try to make it @safe.

When I did try to deal with DIP 1000, it was too hard to figure out what was going on, and it's simply not solving a problem that I feel needs to be solved. Right now, taking the address of a local variable results in a very small amount of code being affected, whereas any attempt to have the compiler verify with scope that it's not escaping then requires using scope in all of the places where that pointer or slice is used. It's trading @system for annotations, and I'm utterly sick of the number of annotations in typical D code even when they're not hard to figure out, and with the changes that DIP 1000 adds, scope is way too hard to figure out. Since you've been spending a lot of time and effort working on it, you probably do understand it quite well, but the time that I spent working on it just highlighted for me that it was too much pain for too little gain, and I gave up.
 And also that adding / inferring `scope` to a parameter is always
 a strict improvement for the caller of the function with that
 parameter?
I find that hard to believe, because once you start wrapping types with scope member functions, you're going to end up with cases where scope gets inferred, and then the code won't compile if you pass it to something that isn't scope. If that just gets inferred as @system, and the calling code isn't explicitly marked as @safe, then maybe there won't be compilation errors, but it was scope inference that made me give up on figuring out how to make dxml compile with DIP 1000. It doesn't use scope anywhere and IMHO shouldn't need it, but scope was being inferred on some basis (presumably due to some tests that slice static arrays, since the XML parsing itself doesn't do anything of the sort). It was not obvious how to fix it, and it was complaining about stuff that wasn't actually a problem.
 If you can find a counterexample, where enabling dip1000 would 
 require more `@system` code without relying on the current bug 
 that slicing locals is allowed in `@safe` functions, or an 
 example where inferring / adding `scope` to a parameter adds 
 restrictions to the call site, please post it here or on a 
 bugzilla.
Honestly, I found DIP 1000 hard enough to understand that I gave up on it, and I'm just not willing to spend the time and energy to figure it out at this point. Even if the basic idea is simple, the sum total of it is not even vaguely simple, and it's trying to solve a problem that I simply don't have. So, maybe I could point out places where it doesn't do what it's supposed to if I spent a bunch of time trying to figure it out, and it's likely that some of what I do understand about it is wrong, but it seems to me that my time will be much better served focusing on other things as long as DIP 1000 is not the default and then figuring out how to completely avoid it if it ever does become the default. It's just too much complexity in an already complex language. Honestly, if I were making annotation-related changes to D, I would be removing some of the attributes that we already have in order to simplify things. I would not be looking to add more, which is essentially what DIP 1000 is doing even if scope technically already exists in the language. - Jonathan M Davis
Aug 27
parent Dennis <dkorpel gmail.com> writes:
On Tuesday, 27 August 2024 at 12:16:23 UTC, Jonathan M Davis 
wrote:
 It only reduces the amount of @system or @trusted code you need 
 if you're willing to mark up code with scope, which I am not.
No it doesn't. Look at my code example again, there's no `scope` anywhere in there! Consider taking the address of a local in all 3 safety scenarios, both before and after enabling dip1000:

| | No dip1000 | dip1000 |
| -------- | ----------- | ------------ |
| @system | No checks | No checks |
| @trusted | No checks | No checks |
| @safe | Not allowed | Scope checks |

Nowhere is there an increase of checks from the left column to the right column. Either you were doing something not allowed to begin with (taking the address of a local in `@safe` code), or you were already in `@system` or `@trusted` code where `scope` isn't checked.
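The table can be sketched in code (a hypothetical illustration, not from the thread; the dip1000 behavior noted in the comments is as described above):

```d
@safe int* addrOfLocalSafe()
{
    int x;
    return &x; // without -preview=dip1000: not allowed in @safe at all
               // with -preview=dip1000: &x is `scope`-checked, and returning
               // it is rejected because `x` is a local
}

@system int* addrOfLocalSystem()
{
    int x;
    return &x; // no checks in @system, with or without dip1000
               // (still a dangling pointer, but that's on the programmer)
}
```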
 I'm talking about needing @system and @trusted more in order to 
 avoid the compiler inferring anything as scope or giving me any 
 errors that relate to scope.
If you get an error with DIP1000 because of `scope`, you would have gotten an error without DIP1000 either way because you were taking the address of a local. DIP1000 is strictly not a breaking change.
 If DIP 1000 becomes the default, then the amount of @system 
 code in my programs will strictly increase, because I will use 
 @system and @trusted to avoid it entirely.
Again, DIP1000 doesn't do scope checks in `@system`/`@trusted` code, and won't introduce errors in code that was correctly `@safe` prior to DIP1000.
 It's trading @system for annotations
Not trading, but adding an alternative. You can still use `@system` to avoid DIP1000 checks.
 and I'm utterly sick of the number of annotations in typical D 
 code even when they're not hard to figure out
 (...) the time that I spent working on it just highlighted for 
 me that it was too much pain for too little gain, and I gave up.
I agree, and fully encourage you to not write `scope` attributes yourself. (I've spent a lot of time fixing wrong applications of `return` / `scope` attributes in druntime, Phobos, and certain dub projects.)
 I find that hard to believe, because once you start wrapping 
 types with scope member functions, you're going to end up with 
 cases where scope gets inferred, and then the code won't 
 compile if you pass it to something that isn't scope.
This is backwards. A `scope` parameter doesn't turn the argument into a `scope` value; if anything, it removes `scope` from a value. (E.g. calling .dup on a scope array turns it into a non-scope array, because dup's array parameter is `scope`.) Here's another table of scenarios of passing arguments to a (scope) parameter:

| Parameter storage class: | non-`scope` | `scope` |
| ---------------------------- | ----------- | --------- |
| any argument in @system | No error | No error |
| non-scope argument in @safe | No error | No error |
| scope argument in @safe | Error | No error |

A strict decrease in errors going to the right column.
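The second table can likewise be sketched (hypothetical functions, assuming `-preview=dip1000` semantics as described above):

```d
@safe void takesScope(scope int* p) {} // `scope`: promises not to escape `p`
@safe void takesPlain(int* p) {}       // non-`scope` parameter

@safe void caller(scope int* sp, int* np)
{
    takesScope(sp); // scope argument to scope parameter: no error
    takesScope(np); // non-scope argument to scope parameter: no error
    takesPlain(np); // non-scope argument to non-scope parameter: no error
    // takesPlain(sp); // scope argument to non-scope parameter: error
}
```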
 So, maybe I could point out places where it doesn't do what 
 it's supposed to if I spent a bunch of time trying to figure it 
 out, and it's likely that some of what I do understand about it 
 is wrong, but it seems to me that my time will be much better 
 served focusing on other things as long as DIP 1000 is not the 
 default
I'm completely sympathetic to you not wanting to spend effort reducing code to create DIP1000 bug reports, or refactoring your code to work with `scope` values. By all means, keep using `@trusted` / `@system` code for the few parts of your code that use stack-allocated buffers. But please understand that unless you consider the current hole in `@safe` a feature, DIP1000 by default will not break your code. If we disallow slicing local variables in `@safe` under `-preview=RobertsSimpleSafeD`, then that will break your codebase for real. Once you fix your code to compile with `-preview=RobertsSimpleSafeD`, switching to `-preview=dip1000` will not cause any breakage; if it does, that is a bug.
Aug 27
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
I hear you. Let me think about it.
Aug 26
prev sibling parent Dukc <ajieskola gmail.com> writes:
Walter Bright kirjoitti 26.8.2024 klo 9.20:
 Perhaps that error check on a trusted function call should be suppressed.
No, no, no! `@trusted` means the function body is trusted, not its call sites. I'd be in favour if this compiled, though, or alternatively reported an error in `bar`, not in `foo`:

```d
@safe int* foo(int i) { return bar(&i); }

// unsafe function that intentionally escapes the reference
@trusted int* bar(scope int* p) { return p; }
```

In real code a `bar` like this has no business being `@trusted`, of course, but I have [complained before](https://forum.dlang.org/post/edtbjavjzkwogvutxpho forum.dlang.org) about non-`@safe` functions inferring DIP1000 attributes themselves.
Aug 26
prev sibling next sibling parent reply Dukc <ajieskola gmail.com> writes:
Bruce Carneal kirjoitti 25.8.2024 klo 20.55:
 The lesson I take from the DIP 1000 history is that we need something 
 that is simpler to explain, something that is much easier to use 
 correctly, something that models the problem more clearly.
You mean Robert's Simple Safe D. The thing is, Simple Safe D will/would break just as much existing code, because it's the same things - static array slicing and taking addresses of struct/class fields - that you need to change.

The question is, what are we really annoyed about with DIP1000? If it's because we have too many hotshots who overuse DIP1000 attributes and then expect others to understand them, Simple Safe D would indeed put a stop to that. But if it's simply dealing with breakage (my impression), it's going to be the same either way, so better to just improve the experience of dealing with it. That probably means reining in the `scope` autoinference a bit and making the DIP1000 error messages suggest GC-using alternatives.

As an oversimplification, imagine a DMD that understands `scope` / `return scope` when you write them, but always suggests Simple Safe D workarounds when you forget to apply the attributes.
Aug 26
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Monday, 26 August 2024 at 15:42:28 UTC, Dukc wrote:
 Bruce Carneal kirjoitti 25.8.2024 klo 20.55:
 The lesson I take from the DIP 1000 history is that we need 
 something that is simpler to explain, something that is much 
 easier to use correctly, something that models the problem 
 more clearly.
You mean Robert's Simple Safe D.
...
 The question is, what are we really annoyed about with DIP1000?
...

My observation was/is that DIP1000 is overly complex for the value provided. This manifests in several ways. There are the, seemingly never-ending, holes that get patched. There's the difficulty in explaining how it can be used to full benefit (apart from Timon, and maybe Paul, I don't trust anyone's explanation of what's going on in an even moderately complex DIP1000 scenario and, frankly, I'd rather rewrite code than trust even those gurus). There's the methodology being employed, wherein we apparently are trying to "prove" *safety* correctness by observing a fall-off in bug reports (as opposed to attributing any fall-off to people just moving on). There's the ...

And finally, on a more positive note, there's the belief that we can do much better with a clean-sheet design, something with a different model.

There are three paths forward. In my order of preference these are:

1) rethink the whole thing from scratch
2) drop DIP1000 and just live with the GC/`@trusted`/...
3) keep patching and whacking and trying to convince the D community that DIP1000 is worth it.
Aug 26
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Mon, Aug 26, 2024 at 08:27:31PM +0000, Bruce Carneal via Digitalmars-d wrote:
[...]
 My observation was/is that DIP1000 is overly complex for the value
 provided.
[...]
 There are three paths forward.  In my order of preference these are:  1)
 rethink the whole thing from scratch  2) drop DIP1000 and just live with
 the GC/`@trusted`/... and 3) keep patching and whacking and trying to convince the
 D community that DIP1000 is worth it.
[...]

My initial reaction to dip1000 was reservedly positive. Then it became disappointment, as the number of discovered loopholes and unhandled cases grew. Finally it settled into indifference, since I don't need any of it in my own code.

It feels like D is trying too hard to be what it isn't. The original design with GC was clean, ergonomic, and productive. This is the core of D that still constitutes the major reason why I'm still using it. Then the `@nogc` crowd showed up, and we bent over backwards to please them. As a result, the language was bent out of shape with attribute soup and half-solutions to the wrong problems that did little to improve the experience of existing D users, while still failing to please the GC objectioners.

The past few years of language extension efforts have felt like a lone wolf clawing at the rest of the universe as it sinks deeper and deeper into a hole it never needed to fall into, while its primary strengths were left stagnating. The parts of D that are good have not improved much, in spite of gaps and corner cases remaining unfixed over the past decade, while new features that were important only to a minority of users have been tacked on one after another, never really succeeding at making the splash they were intended to make, only adding more mental load to users who don't need them, and over time fading into the corner of obscurity of Yet Another Incomplete D Feature.

I feel like saying that it's time for D to admit that it can't be all things to everyone, and to take a stand and decide what it wants to be: a GC-based, `@safe`, ergonomic language with features geared towards modern GC algorithms, or a low-level, Rust-style, manage-your-own-memory, bare-minimum C replacement. But unfortunately I lost confidence that the leadership would be able to take on such a decision effectively, so this is probably all I'll say on this subject.


T

-- 
People tell me I'm stubborn, but I refuse to accept it!
Aug 26
parent reply Nick Treleaven <nick geany.org> writes:
On Monday, 26 August 2024 at 21:53:25 UTC, H. S. Teoh wrote:
 It feels like D is trying too hard to be what it isn't. The 
 original design with GC was clean, ergonomic, and productive. 
 This is the core of D that still constitutes the major reason 
 why I'm still using it.  Then the `@nogc` crowd showed up, and we 
 bent over backwards to please them. As a result, the language 
 was bent out of shape with attribute soup and half-solutions to 
 the wrong problems that did little to improve the experience of 
 existing D users, while still failing to please the GC 
 objectioners.
The goal of DIP1000 is to make pointers to stack memory safe, not really to make non-GC heap allocation safe (though it may help with that, and it is a consideration). Strong support for memory-safe stack allocation is absolutely what D needs, because its aim is to produce efficient code that is not bug-prone. There can be debate about what kinds of interface can be safe, but the core of DIP1000 is doing what it should.
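As a rough illustration of what that buys (my own sketch, assuming `-preview=dip1000`): slicing a stack array can be `@safe` when the callee promises, via `scope`, not to retain the slice:

```d
@safe int sum(scope const int[] xs) // `scope`: xs will not be retained
{
    int total;
    foreach (x; xs)
        total += x;
    return total;
}

@safe int demo()
{
    int[3] buf = [1, 2, 3];
    return sum(buf[]); // slicing a local is fine: sum's parameter is scope
}
```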
Aug 27
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Tue, Aug 27, 2024 at 05:50:11PM +0000, Nick Treleaven via Digitalmars-d
wrote:
 On Monday, 26 August 2024 at 21:53:25 UTC, H. S. Teoh wrote:
 It feels like D is trying too hard to be what it isn't. The original
 design with GC was clean, ergonomic, and productive. This is the
 core of D that still constitutes the major reason why I'm still
 using it.  Then the `@nogc` crowd showed up, and we bent over
 backwards to please them. As a result, the language was bent out of
 shape with attribute soup and half-solutions to the wrong problems
 that did little to improve the experience of existing D users, while
 still failing to please the GC objectioners.
The goal of DIP1000 is to make pointers to stack memory safe, not really to make non-GC heap allocation safe (though it may help with that, and it is a consideration). Strong support for memory-safe stack allocation is absolutely what D needs, because its aim is to produce efficient code that is not bug-prone. There can be debate about what kinds of interface can be safe, but the core of DIP1000 is doing what it should.
This analysis could have been done in the compiler, transparently to the user. The user should not have to care about this. The fact that it resulted in attribute soup that almost nobody fully understands indicates that something is wrong with its design.


T

-- 
Life is complex. It consists of real and imaginary parts. -- YHL
Aug 27
next sibling parent reply Nick Treleaven <nick geany.org> writes:
On Tuesday, 27 August 2024 at 18:35:06 UTC, H. S. Teoh wrote:
 This analysis could have been done in compiler, transparently 
 to the user.
No - it would slow down the compiler if inference was done everywhere. It could also cause link errors because attributes are part of the mangled symbol name.
 The user should not have to care about this.  The fact that it 
 resulted in attribute soup that almost nobody fully 
 understands, indicates that something is wrong with its design.
It could just be `scope` on function parameters (for functions that aren't inferred). But then it would be useful to allow `scope` parameters to escape in documented ways, so that those functions can be safe too. Hence the `return` attributes.
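Roughly (a sketch of my own, assuming `-preview=dip1000`): `scope` alone forbids all escape, and `return scope` documents the single sanctioned route, out through the return value:

```d
@safe int* identity(return scope int* p)
{
    return p; // allowed only because p is `return scope`
}

@safe void user()
{
    int x;
    scope int* q = identity(&x); // q inherits x's lifetime restrictions
}
```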
Aug 28
parent reply Adam Wilson <flyboynw gmail.com> writes:
On Wednesday, 28 August 2024 at 15:28:47 UTC, Nick Treleaven 
wrote:
 On Tuesday, 27 August 2024 at 18:35:06 UTC, H. S. Teoh wrote:
 This analysis could have been done in compiler, transparently 
 to the user.
No - it would slow down the compiler if inference was done everywhere. It could also cause link errors because attributes are part of the mangled symbol name.
Even though I know it's going to cause Atila to have a seizure, I'm going to say it anyway. We need to stop caring about compiler speed so much. If adding an entire second to the compile time can save a company from a multi-billion dollar lawsuit, the company is going to tell you to shut up and eat the extra second. Compile time is a feature, but it is not, and cannot be, the most important feature. Sorry Atila, but it had to be said. :)
Aug 31
parent "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 01/09/2024 8:45 AM, Adam Wilson wrote:
 On Wednesday, 28 August 2024 at 15:28:47 UTC, Nick Treleaven wrote:
 On Tuesday, 27 August 2024 at 18:35:06 UTC, H. S. Teoh wrote:
 This analysis could have been done in compiler, transparently to the 
 user.
No - it would slow down the compiler if inference was done everywhere. It could also cause link errors because attributes are part of the mangled symbol name.
Even though I know it's going to cause Atila to have a seizure, I'm going to say it anyway. We need to stop caring about compiler speed so much. If adding an entire second to the compile time can save a company from a multi-billion dollar lawsuit, the company is going to tell you to shut up and eat the extra second. Compile time is a feature, but it is not, and cannot be, the most important feature. Sorry Atila, but it had to be said. :)
To follow up on this: specialized use cases like compiler-as-a-daemon should not dictate how normal compilation works. If you need to turn off memory-safety analysis (which can be handled by a JIT using read barriers, for example) to make something like compiler-as-a-daemon fast, then that is okay. This is not an either/or situation. You can have both.
Aug 31
prev sibling parent reply Donald Charles Allen <donaldcallen gmail.com> writes:
 This analysis could have been done in compiler, transparently 
 to the user.  The user should not have to care about this.  The 
 fact that it resulted in attribute soup that almost nobody 
 fully understands, indicates that something is wrong with its 
 design.
I hope the powers that be on this project pay careful attention to this comment, because I believe "attribute soup" is absolutely right.

In my opinion, D's strength is that it is a big *incremental* improvement over C/C++. It is not a revolutionary language like Rust. Section 20.24 of the Language Reference reads to me like an effort to somehow, desperately, contort this language into something it is not. I'm sorry, but this thing reads like part of the US tax code, or the legalese you need to agree to when installing commercial software. (Any of you ever read that stuff carefully? No? I haven't either.) The mind-boggling complexity of that section is a clear indication that you are on the wrong track. I have not read DIP1000, but I get the clear impression from reading this and other threads that it is more of the same.

It is not difficult to write memory-safe code in D. I've done it. You just need to know where the traps are and avoid them. A good, clear document explaining this would be a great start. I'm sure there are things that could be done in the compiler and/or dscanner to warn people that they are doing something legal but dangerous.

But it makes no sense to me to try to turn D into Rust, because you don't have the luxury of starting with a blank sheet of paper as the Rust project did. D's multiple memory-management methods, some inherited from C, make it inherently memory-unsafe, so trying to provide memory-safety guarantees is very difficult and will almost certainly make a mess of the language. Section 20.24 says to me that a start has been made on that mess.

I think the D project should focus on increasing the distance between D and C/C++ and forget about competing with Rust. Those who want guaranteed memory safety are likely to just use Rust. I would also note that Zig has gotten a lot of attention despite not having been released yet, and Zig is not a memory-safe language.
Sep 01
parent reply Nick Treleaven <nick geany.org> writes:
On Sunday, 1 September 2024 at 20:42:14 UTC, Donald Charles Allen 
wrote:
 But it makes no sense to me to try to turn D into Rust, because 
 you don't have the luxury of starting with a blank sheet of 
 paper as the Rust project did. D's multiple memory-management 
 methods, some inherited from C, make it inherently 
 memory-unsafe, so trying to provide memory-safety guarantees is 
 very difficult and will almost certainly make a mess of the 
 language. Section 20.24 says to me that a start has been made 
 on that mess.
Safe Rust is too restrictive about mutability. DIP1000 is about extending the amount of code that can be `@safe`. DIP1000 removes restrictions.
 I think the D project should focus on increasing the distance 
 between D and C/C++ and forget about competing with Rust. Those 
 who want guaranteed memory-safety are likely to just use Rust.
D supports GC, so heap allocation does not have to restrict safe operations. DIP1000 shows you can often use safe pointers to stack memory without Rust's mutability restrictions.
Sep 02
next sibling parent reply Sergey <kornburn yandex.ru> writes:
On Monday, 2 September 2024 at 09:48:44 UTC, Nick Treleaven wrote:
 Safe Rust is too restrictive about mutability.
 DIP1000 is about extending the amount of code that can be 
 `@safe`. DIP1000 removes restrictions.
I think the question is not about the technical ability or inability to achieve something, but more about the intent of the community to do that and write code in that way. It seems the D community doesn't want to do these things, and the Rust community does. And that is fine.

So the question is only to achieve some kind of recognition from the authorities, so they will consider D a "safe language" (nobody really knows what that is, or at which level of safety they can attach that kind of label to a language - I think the current language selection was kinda random). Just to make D available to hypothetical developers with government contracts (has the DLF ever gotten any requests about that? really...)
Sep 02
parent Nick Treleaven <nick geany.org> writes:
On Monday, 2 September 2024 at 10:07:30 UTC, Sergey wrote:
 So the question is only to achieve some kind of recognition 
 from authorities, so they will consider D as a "safe language" 
 (nobody really knows what is it and at which level of safety 
 they can mark this kind of label to the language - I think 
 current language selection was kinda random).
Can you give an example of a memory-safety violation allowed in `@safe` that there is no plan to fix? There are violations in bugzilla, but I think we have solutions for each of them, that can be implemented. For a few, that will require breaking existing code, but we can do that with editions.
Sep 02
prev sibling parent reply Donald Charles Allen <donaldcallen gmail.com> writes:
On Monday, 2 September 2024 at 09:48:44 UTC, Nick Treleaven wrote:
 On Sunday, 1 September 2024 at 20:42:14 UTC, Donald Charles 
 Allen wrote:
 But it makes no sense to me to try to turn D into Rust, 
 because you don't have the luxury of starting with a blank 
 sheet of paper as the Rust project did. D's multiple 
 memory-management methods, some inherited from C, make it 
 inherently memory-unsafe, so trying to provide memory-safety 
 guarantees is very difficult and will almost certainly make a 
 mess of the language. Section 20.24 says to me that a start 
 has been made on that mess.
Safe Rust is too restrictive about mutability. DIP1000 is about extending the amount of code that can be `@safe`. DIP1000 removes restrictions.
 I think the D project should focus on increasing the distance 
 between D and C/C++ and forget about competing with Rust. 
 Those who want guaranteed memory-safety are likely to just use 
 Rust.
D supports GC, so heap allocation does not have to restrict safe operations. DIP1000 shows you can often use safe pointers to stack memory without Rust's mutability restrictions.
While I think it is beside the point, I agree with you about Rust's mutability restrictions. They assume everything you write is multi-threaded. There is no way, other than using "unsafe", to say to the compiler "just relax -- this code is single-threaded". What Rust does or does not do is not relevant to turning D into a language that is incomprehensible except to lawyers, which is what appears to be happening.
Sep 02
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 03/09/2024 1:05 AM, Donald Charles Allen wrote:
 On Monday, 2 September 2024 at 09:48:44 UTC, Nick Treleaven wrote:
 On Sunday, 1 September 2024 at 20:42:14 UTC, Donald Charles Allen wrote:
 But it makes no sense to me to try to turn D into Rust, because you 
 don't have the luxury of starting with a blank sheet of paper as the 
 Rust project did. D's multiple memory-management methods, some 
 inherited from C, make it inherently memory-unsafe, so trying to 
 provide memory-safety guarantees is very difficult and will almost 
 certainly make a mess of the language. Section 20.24 says to me that 
 a start has been made on that mess.
Safe Rust is too restrictive about mutability. DIP1000 is about extending the amount of code that can be `@safe`. DIP1000 removes restrictions.
 I think the D project should focus on increasing the distance between 
 D and C/C++ and forget about competing with Rust. Those who want 
 guaranteed memory-safety are likely to just use Rust.
D supports GC, so heap allocation does not have to restrict safe operations. DIP1000 shows you can often use safe pointers to stack memory without Rust's mutability restrictions.
While I think it is beside the point, I agree with you about Rust's mutability restrictions. They assume everything you write is multi-threaded. There is no way, other than using "unsafe", to say to the compiler "just relax -- this code is single-threaded". What Rust does or does not do is not relevant to turning D into a language that is incomprehensible except to lawyers, which is what appears to be happening.
A lot of the issues surrounding Rust and its lifetime stuff are not related to the borrow checker. If you read any of the complaints, they pretty much all center around trying to solve aliasing. This includes requiring you to use the ownership-transfer system. That would never be the default in D. For D, even with owner escape analysis, there is no reason for GC memory not to work in a graph.

The core issue with DIP1000 is that it is not trying to solve escape analysis or owner escape analysis for heap memory. It is being misused to try to model heap memory. I do not believe that it is fixable; it's simply solving a problem in a way that directly negatively affects most code, and it does so without respecting the problem domain or the literature on the topic.

- Someone who has well over ~120k LOC of DIP1000-annotated code
Sep 02
next sibling parent Paul Backus <snarwin gmail.com> writes:
On Monday, 2 September 2024 at 13:15:51 UTC, Richard (Rikki) 
Andrew Cattermole wrote:
 The core issue with using DIP1000 is that it is not trying to 
 solve escape analysis or owner escape analysis for heap memory. 
 It is being misused to try to model heap memory. I do not 
 believe that it is fixable, it's simply solving a problem in a 
 way that directly negatively affects most code, and it does so 
 by not respecting the problem domain or the literature on the 
 topic.
As someone who has written a bunch of code using DIP 1000 to try and model ownership of heap memory, I can confirm that this is 100% correct. The way it's done is by essentially lying to the compiler, and telling it to treat a pointer to the heap *as though* it's a pointer to the stack. It's kind of like template metaprogramming in C++. It's a clever hack, and the fact that it works at all is kind of impressive. But we shouldn't *have* to resort to clever hacks like this to solve simple problems.
Sep 02
prev sibling parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Monday, 2 September 2024 at 13:15:51 UTC, Richard (Rikki) 
Andrew Cattermole wrote:
 The core issue with using DIP1000 is that it is not trying to 
 solve escape analysis or owner escape analysis for heap memory. 
 It is being misused to try to model heap memory.
Don't forget there are people using it as intended :)
Sep 02
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 03/09/2024 6:26 AM, Sebastiaan Koppe wrote:
 On Monday, 2 September 2024 at 13:15:51 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 The core issue with using DIP1000 is that it is not trying to solve 
 escape analysis or owner escape analysis for heap memory. It is being 
 misused to try to model heap memory.
Don't forget there are people using it as intended :)
Thanks to your comment I realized that DIP1000's attributes and my proposal's could live side by side during a transition period. And yes, there absolutely should be a transition period. No way I want to hurt anyone doing just that! It would be great if you would comment on my proposal; my suspicion is that, at least for things that are not virtual, it should "just work" for you.
Sep 02
parent Sebastiaan Koppe <mail skoppe.eu> writes:
On Tuesday, 3 September 2024 at 03:03:51 UTC, Richard (Rikki) 
Andrew Cattermole wrote:
 On 03/09/2024 6:26 AM, Sebastiaan Koppe wrote:
 On Monday, 2 September 2024 at 13:15:51 UTC, Richard (Rikki) 
 Andrew Cattermole wrote:
 The core issue with using DIP1000 is that it is not trying to 
 solve escape analysis or owner escape analysis for heap 
 memory. It is being misused to try to model heap memory.
Don't forget there are people using it as intended :)
Thanks to your comment I realized that DIP1000's attributes and my proposal's could live side by side during a transition period. And yes, there absolutely should be a transition period. No way I want to hurt anyone doing just that! It would be great if you would comment on my proposal; my suspicion is that, at least for things that are not virtual, it should "just work" for you.
Been swamped and didn't want to do a shallow one. Your coroutines one is still in the queue as well.
Sep 03
prev sibling parent Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Sunday, 25 August 2024 at 17:55:04 UTC, Bruce Carneal wrote:
 The lesson I take from the DIP 1000 history is that we need 
 something that is simpler to explain, something that is much 
 easier to use correctly, something that models the problem more 
 clearly.

 Bug reporting/fixing is great, but sometimes the bug pattern 
 indicates a rethink is in order.  I believe this is one of 
 those times.
Reminds me of: “Everything should be made as simple as possible, but not simpler.” ― Albert Einstein
Sep 05