digitalmars.D - @system blocks and safer @trusted (ST) functions
- Bruce Carneal (33/33) Jul 24 2021 At beerconf I committed to putting forward a DIP regarding a new
- jfondren (63/65) Jul 24 2021 This has an unfortunate result: if in maintenance you edit a
- Bruce Carneal (41/91) Jul 25 2021 As opposed to never being checked for @safe in legacy @trusted?
- jfondren (29/48) Jul 25 2021 I think it's a fair reading of
- Bruce Carneal (25/74) Jul 25 2021 It may well have been a fair reading of the past. I reported on
- Paul Backus (6/12) Jul 25 2021 Memory safety is a global property. If even a single line of your
- Bruce Carneal (4/16) Jul 25 2021 I do not know of any competent programmer who would say otherwise.
- Paul Backus (6/24) Jul 25 2021 It is a response to the claim that "the compiler's assertions
- Bruce Carneal (11/38) Jul 25 2021 They don't mean the same thing to me. In one case @safe
- Walter Bright (9/12) Jul 25 2021 That's right. I did not think of the case of removing the @system block ...
- ShadoLight (2/8) Jul 26 2021 ...or rather simply marked as a @safe function then?
- Walter Bright (3/9) Jul 26 2021 It's still possible that the programmer wants to keep it @trusted becaus...
- Steven Schveighoffer (20/57) Jul 25 2021 Yes, but there is no point in a `@trusted` function without
- jfondren (68/79) Jul 25 2021 OK. I'll argue the opposite position for a bit, then.
- Steven Schveighoffer (12/39) Jul 26 2021 Yep. This function today is overly trusted (meaning that parts
- Dominikus Dittes Scherkl (6/12) Jul 26 2021 Yes, I agree.
- Steven Schveighoffer (16/33) Jul 26 2021 One thing that is incredibly important to get right here is the
- Bruce Carneal (13/47) Jul 26 2021 Yes. You have argued, persuasively, that the language should
- Bruce Carneal (3/15) Jul 26 2021 Rather, "if this compiler can't prove that the property holds,
- Paul Backus (25/34) Jul 25 2021 Both before and after this proposal, there are 3 kinds of code:
- Bruce Carneal (10/42) Jul 25 2021 There is no getting away from manually checking the @system
- Paul Backus (10/14) Jul 25 2021 We already have this ability: simply avoid writing `@trusted`
- Bruce Carneal (7/21) Jul 25 2021 I'd like to have assistance from the compiler to the maximum
- Paul Backus (3/9) Jul 25 2021 Under your proposal, the proportion of code that must be
- Bruce Carneal (34/44) Jul 25 2021 The direct translation of an embedded lambda @safe function to
- Paul Backus (49/53) Jul 25 2021 I have already demonstrated that this is false. Here's my example
- claptrap (11/39) Jul 25 2021 So...
- jfondren (13/19) Jul 25 2021 The questioner is what's at issue there, not the question.
- Paul Backus (19/23) Jul 25 2021 Strictly speaking, you're right; it is the `@system` block that
- claptrap (8/21) Jul 25 2021 Im sorry but it's nonsense.
- Paul Backus (6/13) Jul 25 2021 If the bug is "already there", you should be able to write a
- claptrap (16/34) Jul 25 2021 Consider this...
- Paul Backus (4/10) Jul 25 2021 Yes; I agree completely. :)
- Timon Gehr (8/20) Jul 25 2021 The original claim was that the new feature is a tool that allows the
- Paul Backus (7/12) Jul 26 2021 @trusted code is correct if and only if it cannot possibly allow
- claptrap (8/21) Jul 26 2021 Your example doesn't invoke undefined behaviour in safe code, it
- Paul Backus (9/17) Jul 26 2021 Well, it is in a `@trusted` function, which is callable from
- claptrap (28/46) Jul 26 2021 Its a pointless exercise because your example is a red herring,
- jfondren (11/20) Jul 26 2021 It's a response to overly strong claims about what this DIP will
- claptrap (22/42) Jul 28 2021 If you're saying the proposed "system blocks inside trusted
- Paul Backus (23/36) Jul 28 2021 If I understand correctly, there are two problems being diagnosed
- claptrap (8/32) Jul 28 2021 Agree 100%.
- Bruce Carneal (17/56) Jul 28 2021 I see this as two problems with a common solution, rather than a
- claptrap (4/23) Jul 28 2021 Do you have ideas on how to stop unsafe blocks accessing the
- Bruce Carneal (3/8) Jul 28 2021 Yes. The form and scope of the unsafe block(s) is under
- Joseph Rushton Wakeling (18/21) Jul 29 2021 I'm not sure it necessarily is. Consider the following example
- Bruce Carneal (6/27) Jul 29 2021 Yes, I was a bit sloppy earlier. Full "stopping" is a non-goal.
- claptrap (16/37) Jul 29 2021 Not exactly, obviously if they cant access variables from the
- Bruce Carneal (14/60) Jul 29 2021 A design tension that Joseph and I are working with is that
- Walter Bright (2/4) Jul 26 2021 https://github.com/dlang/dlang.org/pull/3077
- claptrap (4/15) Jul 26 2021 You could probably come up with an example bug that wouldn't be
- claptrap (9/13) Jul 25 2021 I think the problem is you're conflating memory safety and
- Bruce Carneal (23/62) Jul 25 2021 OK, I think I see where we're diverging. I never thought that
- jfondren (23/25) Jul 25 2021 Well, thanks, and good luck. I think if your DIP makes strong
- ag0aep6g (14/20) Jul 25 2021 The language doesn't have @trusted blocks. @trusted function
- Timon Gehr (4/10) Jul 25 2021 The documentation no longer says much about that.
- ag0aep6g (6/15) Jul 26 2021 That's just memory-safe-d.dd, which is an odd page that should be merged...
- Walter Bright (2/6) Jul 26 2021 https://github.com/dlang/dlang.org/pull/3076
- jfondren (42/54) Jul 25 2021 favoriteElement(), all on its own, has an unchecked type error:
- ag0aep6g (19/45) Jul 25 2021 What if favoriteNumber originally returns a ubyte, and
- Dominikus Dittes Scherkl (15/22) Jul 26 2021 No, it's not. You use something that is not a literal, so it may
- Paul Backus (8/20) Jul 26 2021 I agree that *future versions* of the code may not be memory-safe
- Steven Schveighoffer (29/50) Jul 26 2021 And I have [already
- Paul Backus (10/17) Jul 26 2021 If your theory of memory safety leads you to conclude that the
- Steven Schveighoffer (29/48) Jul 26 2021 It's not a comment, it's a specification. Whereby I can conclude
- Paul Backus (10/22) Jul 26 2021 The difference between POSIX `read` and `favoriteNumber` is that
- Steven Schveighoffer (9/17) Jul 28 2021 If you consider the source to be the spec, then that contradicts
- Paul Backus (17/34) Jul 28 2021 Again, I am in 100% in agreement with you that `favoriteElement`
- Steven Schveighoffer (28/62) Jul 25 2021 Split this into:
- Joseph Rushton Wakeling (8/13) Jul 25 2021 I'm very happy to hear that. I think this proposal is an
- Ali Çehreli (29/37) Jul 25 2021 In a recent discussion, I learned from you an idea of Steven
At beerconf I committed to putting forward a DIP regarding a new syntactic element to be made available within @trusted functions, the @system block. The presence of one or more @system blocks would enable @safe checking elsewhere in the enclosing @trusted function.

I also committed at beerconf to starting the formal DIP on this in two weeks, so I read the DIP docs just now in order to get a jump on things. Boy, that was an eye-opener. From the incessantly cautionary language there I'd say that I dramatically underestimated the effort required to bring a DIP over the finish line (or to see it clearly rejected).

Long story short, I'll still do the DIP unless the ideas are discredited in subsequent discussion, but it will have to be quite a bit later in the year as I don't have weeks of time to spend on it in the near future. In the meantime I'd invite others to comment on the ST (safer @trusted) idea as sketched in the first paragraph. For starters, we might come up with a better name...

A few notes:

1) Since @system blocks would be a new syntactic element, I believe ST would be a backward-compatible addition.

2) The problematic @trusted lambda escapes creeping into "@safe" code could be replaced going forward by a more honestly named form: @trusted code with @system blocks. Best practices could evolve to the point that @safe actually meant safe again.

3) I believe quite a bit of @trusted code is safe but cannot currently benefit from automated checking. The proposal would allow a transition of such code to a safer form while reducing the amount of code that requires intense manual checking.

4) @safe blocks within @trusted code were considered, but they left things defaulting in a less useful direction (and they're just plain ugly).

Questions and improvements are very welcome but, in the dlang tradition, I also welcome your help in destroying this proposal.
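To make the idea concrete, here is a rough sketch of the shape I have in mind (hypothetical future syntax, names purely illustrative; exact scoping and expression forms are TBD):

```d
// Hypothetical syntax: the body of this @trusted function would be checked
// as if it were @safe everywhere except inside the explicit @system block.
void zeroFill(int[] dst) @trusted
{
    if (dst.length == 0)        // checked as @safe under the proposal
        return;

    @system
    {
        // The only code a reviewer must hand-verify: a raw memory write,
        // justified by the length taken from the slice itself.
        import core.stdc.string : memset;
        memset(dst.ptr, 0, dst.length * int.sizeof);
    }
}
```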
Jul 24 2021
On Sunday, 25 July 2021 at 05:05:44 UTC, Bruce Carneal wrote:
> The presence of one or more @system blocks would enable @safe checking elsewhere in the enclosing @trusted function.

This has an unfortunate result: if in maintenance you edit a @trusted function to remove its @system blocks, it'll quietly no longer be checked as @safe.

Of the current behavior, Walter's said that he doesn't want @trusted blocks because they should be discouraged in @safe code. With this change, we'll have exactly what he doesn't want with different names: s/@trusted/@system/, s/@safe/@trusted/, and the exact same behavior: @system blocks are just what @trusted blocks would've been, and @trusted code with @system blocks in it is just @safe code with a different name.

Instead of people accepting that @safe "isn't really safe" in the presence of @trusted blocks, and that the whole body of the function has to be audited, with this change we'll have s/@safe/@trusted/ functions that aren't really safe in the presence of @system blocks, and that the whole body of the function has to be audited. The "you have to audit this" signifier is the same, an internal lower-protection block, and all that's gained is that the function attribute's spelled differently. Is this really worth it?

One way to avoid the "unfortunate result" above is to permit @trusted blocks in @safe code. Which results in Rust-equivalent functionality: @safe code is checked as safe, @safe code with internal @trusted blocks is still checked as safe but people know to audit it, and @system code isn't checked as safe. People like Rust's unsafe system, the current trusted-lambda abuse is a simulation of unsafe, Phobos already uses @trusted lambdas in exactly the same way as unsafe blocks are used in Rust, and this proposed behavior is 99.9% identical to unsafe except it has this bonus "you can silently, accidentally remove @safe checks from your code now" feature.

I appreciate that there's also a vision to @safe/@trusted/@system, but it doesn't seem to have stuck, with Phobos having more than twice as many @trusted lambdas as @trusted functions:

```
phobos$ grep -rP '\W\(\) @trusted' --include '*.d'|wc -l
238
phobos$ grep -rP '\w\(\) @trusted' --include '*.d'|wc -l
111
```

I don't think that Rust has everything right. And, I don't pay attention to the Rust community at all; maybe they've a lot of gripes about how they're using unsafe blocks and unsafe functions. But, just look at all those @trusted lambdas. If you run the first command without the "|wc -l" on the end you'll see they're almost all single statements or expressions.

Adding a @trusted block to @safe code doesn't discard the @safe/@trusted/@system vision, it just lets people follow the unsafe vision that they're already following without so many complaints about how ugly the workaround is, when D's good looks are one of its advantages over Rust. This proposal also doesn't immediately discard the @safe/@trusted/@system vision, but it introduces a minor footgun because of a subtle conflict with that vision, and as people adopt it they'll also want another --preview switch to deal with the footgun, and that switch will break all current @trusted code that's currently assuming no @safe checks, and so there will be a long deprecation cycle...

@trusted blocks win this. Or rather--not to be rude--but if you came out and said that this DIP was just some triangulation to get people to accept @trusted blocks, I would say: "good job! It got me thinking!" If not, I'm sorry.
Jul 24 2021
On Sunday, 25 July 2021 at 06:13:41 UTC, jfondren wrote:
> On Sunday, 25 July 2021 at 05:05:44 UTC, Bruce Carneal wrote:
>> The presence of one or more @system blocks would enable @safe checking elsewhere in the enclosing @trusted function.
> This has an unfortunate result: if in maintenance you edit a @trusted function to remove its @system blocks, it'll quietly no longer be checked as @safe.

As opposed to never being checked for @safe in legacy @trusted? Or always requiring manual vigilance in an "@safe" routine corrupted by a @trusted lambda?

More directly, IIUC, someone explicitly downgrades safety by removing a @system block encapsulation from within a recently coded (non-legacy) @trusted function and we're supposed to second-guess them? They don't actually want to revert to legacy behavior, they just happened to remove the @system block in a @trusted function? I think it would be great if we can get to the point where warnings against legacy @trusted reversions are welcome, but in the meantime I believe the danger of not protecting against explicit safety-degradation maintenance errors is modest when compared to the other issues.

> Of the current behavior, Walter's said that he doesn't want @trusted blocks because they should be discouraged in @safe code. With this change, we'll have exactly what he doesn't want with different names: s/@trusted/@system/, s/@safe/@trusted/, and the exact same behavior: @system blocks are just what @trusted blocks would've been, and @trusted code with @system blocks in it is just @safe code with a different name.

I think you're misreading Walter on this. He was the one who recommended that I pursue this DIP at beerconf (it was just an idea that I'd thrown out up to that point).

> Instead of people accepting that @safe "isn't really safe" in the presence of @trusted blocks, and that the whole body of the function has to be audited, with this change we'll have s/@safe/@trusted/ functions that aren't really safe in the presence of @system blocks, and that the whole body of the function has to be audited. The "you have to audit this" signifier is the same, an internal lower-protection block, and all that's gained is that the function attribute's spelled differently. Is this really worth it?

Definitely. We shouldn't "accept" contradictions, cognitive load, manual auditing, ... unless we have no alternative. The proposal opens the door to consistent nesting and naming and additional automated checking. Less "unsprung weight".

> I appreciate that there's also a vision to @safe/@trusted/@system, but it doesn't seem to have stuck, with Phobos having more than twice as many @trusted lambdas as @trusted functions:
>
> ```
> phobos$ grep -rP '\W\(\) @trusted' --include '*.d'|wc -l
> 238
> phobos$ grep -rP '\w\(\) @trusted' --include '*.d'|wc -l
> 111
> ```

If the lambdas are mostly found within @safe code then these stats would support the proposal. Also, the @trusted functions could benefit from @system block upgrades.

> ... Adding a @trusted block to @safe code doesn't discard the @safe/@trusted/@system vision, it just lets people follow the unsafe vision that they're already following without so many complaints about how ugly the workaround is, when D's good looks are one of its advantages over Rust.

I'd like to localize @system code and minimize un-automated checking. A @trusted lambda in an @safe function goes in the other direction. It's not the two- or three-liner workaround itself that is the main problem, it's the silent expansion of programmer responsibility to the surrounds.

> This proposal also doesn't immediately discard the @safe/@trusted/@system vision, but it introduces a minor footgun because of a subtle conflict with that vision, and as people adopt it they'll also want another --preview switch to deal with the footgun, and that switch will break all current @trusted code that's currently assuming no @safe checks, and so there will be a long deprecation cycle...

I don't see it that way. I see a simple path to safer code. Your "footgun" is, in my opinion and mixing metaphors, small potatoes compared to the additional automated coverage evident in the proposal. As I hope you'll agree, there is a huge gap between "this isn't a big deal, most competent programmers would not mess up here very often..." and "no programmers will ever mess up here because they can't".

> @trusted blocks win this. Or rather--not to be rude--but if you came out and said that this DIP was just some triangulation to get people to accept @trusted blocks, I would say: "good job! It got me thinking!" If not, I'm sorry.

No need to be sorry at all. If, broadly, people are happy about the current state of affairs and see no significant benefit to truth-in-naming, consistent nesting, and @safe checking within @trusted functions then we'll stick with what we've got.
Jul 25 2021
On Sunday, 25 July 2021 at 08:16:41 UTC, Bruce Carneal wrote:
>> Of the current behavior, Walter's said that he doesn't want @trusted blocks because they should be discouraged in @safe code. With this change, we'll have exactly what he doesn't want with different names: s/@trusted/@system/, s/@safe/@trusted/, and the exact same behavior: @system blocks are just what @trusted blocks would've been, and @trusted code with @system blocks in it is just @safe code with a different name.
> I think you're misreading Walter on this. He was the one who recommended that I pursue this DIP at beerconf (it was just an idea that I'd thrown out up to that point).

I think it's a fair reading of https://www.youtube.com/watch?v=nGX75xNeW_Y&t=379s

From "With this change" &c that's all in my voice.

> I'd like to localize @system code and minimize un-automated checking. A @trusted lambda in an @safe function goes in the other direction.

Presently, when you write a function that you want to be mostly @safe but that needs to break those restrictions in part, you call it @safe and you put the violation in a @trusted lambda. With your proposal, when you write a function that you want to be mostly @safe but that needs to break those restrictions in part, you call it @trusted and you put the violation in a @system block. These seem to me to be so identical that I don't see how they can be moving in opposite directions. You need to audit the same amount of code, and you find that code with the same exertion of effort.

> No need to be sorry at all. If, broadly, people are happy about the current state of affairs and see no significant benefit to truth-in-naming, consistent nesting, and @safe checking within @trusted functions then we'll stick with what we've got.

I feel like you've started with this problem, "@trusted functions don't benefit from @safe checking at all", and you've found what seems like a good solution to it, but you haven't stepped back from your solution to realize that you've worked your way back to the existing state of affairs.

If someone shows you some code with a long @trusted function, and you would like @safe checks in it, you can change it to a @safe function with @trusted lambdas. You can do that right now, and that completely solves the problem of @trusted functions not getting checked. And this is the 2-to-1 choice of @trusted in Phobos. A new option of "you can leave it as @trusted and add @system blocks" is just the same option that you already have.

(Although if you skip back from the timestamp above, Walter points out that @trusted blocks might not get inlined in some cases. A deliberate language feature wouldn't have that problem.)
Jul 25 2021
On Sunday, 25 July 2021 at 09:04:17 UTC, jfondren wrote:
>>> Of the current behavior, Walter's said that he doesn't want @trusted blocks because they should be discouraged in @safe code. With this change, we'll have exactly what he doesn't want with different names: s/@trusted/@system/, s/@safe/@trusted/, and the exact same behavior: @system blocks are just what @trusted blocks would've been, and @trusted code with @system blocks in it is just @safe code with a different name.
>> I think you're misreading Walter on this. He was the one who recommended that I pursue this DIP at beerconf (it was just an idea that I'd thrown out up to that point).
> I think it's a fair reading of https://www.youtube.com/watch?v=nGX75xNeW_Y&t=379s From "With this change" &c that's all in my voice.

It may well have been a fair reading of the past. I reported on the present (yesterday).

>> I'd like to localize @system code and minimize un-automated checking. A @trusted lambda in an @safe function goes in the other direction.
> Presently, when you write a function that you want to be mostly @safe but that needs to break those restrictions in part, you call it @safe and you put the violation in a @trusted lambda. With your proposal, when you write a function that you want to be mostly @safe but that needs to break those restrictions in part, you call it @trusted and you put the violation in a @system block. These seem to me to be so identical that I don't see how they can be moving in opposite directions. You need to audit the same amount of code, and you find that code with the same exertion of effort.

They are similar in some regards, but if @trusted lambdas are the only practical option for this type of code going forward, then @safe will require manual checking in perpetuity. I'm not saying mitigating tooling or procedures to cover this cannot be devised/employed. I am saying that there is a qualitative difference between "the compiler asserts" and "my super-duper-xtra-linguistic-wonder-procedure asserts". Moving towards the former is a good idea. Moving towards the latter is not.

>> No need to be sorry at all. If, broadly, people are happy about the current state of affairs and see no significant benefit to truth-in-naming, consistent nesting, and @safe checking within @trusted functions then we'll stick with what we've got.
> I feel like you've started with this problem, "@trusted functions don't benefit from @safe checking at all", and you've found what seems like a good solution to it, but you haven't stepped back from your solution to realize that you've worked your way back to the existing state of affairs.

The initial motivation was concern over @safe practically transitioning from "compiler checkable" to "not checkable (reported) by the compiler".

> If someone shows you some code with a long @trusted function, and you would like @safe checks in it, you can change it to a @safe function with @trusted lambdas. You can do that right now, and that completely solves the problem of @trusted functions not getting checked. And this is the 2-to-1 choice of @trusted in Phobos. A new option of "you can leave it as @trusted and add @system blocks" is just the same option that you already have.

As hopefully understood from my earlier comments, these are, qualitatively, not the same thing. You will still have to check a conversion to a new-style @trusted function manually of course, no work savings there, but you'd gain something pretty important: the compiler's assertions regarding your remaining @safe code might actually mean something.

Also, I'm not sure how choices (2-to-1 or any N-to-1) made prior to the availability of an alternative are to be interpreted. What am I missing?...

Finally, thanks for engaging.
Jul 25 2021
On Sunday, 25 July 2021 at 12:56:33 UTC, Bruce Carneal wrote:
> As hopefully understood from my earlier comments, these are, qualitatively, not the same thing. You will still have to check a conversion to a new-style @trusted function manually of course, no work savings there, but you'd gain something pretty important: the compiler's assertions regarding your remaining @safe code might actually mean something.

Memory safety is a global property. If even a single line of your new-style `@system` block (or old-style `@trusted` lambda) causes undefined behavior, it does not matter one bit what the compiler asserts about the `@safe` code in your program: the entire process is corrupted.
Jul 25 2021
On Sunday, 25 July 2021 at 13:42:52 UTC, Paul Backus wrote:
> On Sunday, 25 July 2021 at 12:56:33 UTC, Bruce Carneal wrote:
>> As hopefully understood from my earlier comments, these are, qualitatively, not the same thing. You will still have to check a conversion to a new-style @trusted function manually of course, no work savings there, but you'd gain something pretty important: the compiler's assertions regarding your remaining @safe code might actually mean something.
> Memory safety is a global property. If even a single line of your new-style `@system` block (or old-style `@trusted` lambda) causes undefined behavior, it does not matter one bit what the compiler asserts about the `@safe` code in your program: the entire process is corrupted.

I do not know of any competent programmer who would say otherwise. I also do not know what this has to do with a discussion regarding debasing/improving @safe. What am I missing?
Jul 25 2021
On Sunday, 25 July 2021 at 14:19:47 UTC, Bruce Carneal wrote:
> On Sunday, 25 July 2021 at 13:42:52 UTC, Paul Backus wrote:
>> On Sunday, 25 July 2021 at 12:56:33 UTC, Bruce Carneal wrote:
>>> As hopefully understood from my earlier comments, these are, qualitatively, not the same thing. You will still have to check a conversion to a new-style @trusted function manually of course, no work savings there, but you'd gain something pretty important: the compiler's assertions regarding your remaining @safe code might actually mean something.
>> Memory safety is a global property. If even a single line of your new-style `@system` block (or old-style `@trusted` lambda) causes undefined behavior, it does not matter one bit what the compiler asserts about the `@safe` code in your program: the entire process is corrupted.
> I do not know of any competent programmer who would say otherwise. I also do not know what this has to do with a discussion regarding debasing/improving @safe. What am I missing?

It is a response to the claim that "the compiler's assertions regarding your remaining @safe code might actually mean something." They mean exactly the same thing with your proposal as they do without it: that the `@safe` portion of the program does not violate the language's memory-safety invariants directly.
Jul 25 2021
On Sunday, 25 July 2021 at 14:36:27 UTC, Paul Backus wrote:
> On Sunday, 25 July 2021 at 14:19:47 UTC, Bruce Carneal wrote:
>> On Sunday, 25 July 2021 at 13:42:52 UTC, Paul Backus wrote:
>>> On Sunday, 25 July 2021 at 12:56:33 UTC, Bruce Carneal wrote:
>>>> As hopefully understood from my earlier comments, these are, qualitatively, not the same thing. You will still have to check a conversion to a new-style @trusted function manually of course, no work savings there, but you'd gain something pretty important: the compiler's assertions regarding your remaining @safe code might actually mean something.
>>> Memory safety is a global property. If even a single line of your new-style `@system` block (or old-style `@trusted` lambda) causes undefined behavior, it does not matter one bit what the compiler asserts about the `@safe` code in your program: the entire process is corrupted.
>> I do not know of any competent programmer who would say otherwise. I also do not know what this has to do with a discussion regarding debasing/improving @safe. What am I missing?
> It is a response to the claim that "the compiler's assertions regarding your remaining @safe code might actually mean something." They mean exactly the same thing with your proposal as they do without it: that the `@safe` portion of the program does not violate the language's memory-safety invariants directly.

They don't mean the same thing to me. In one case @safe invariants are asserted with a process that limits human error to group-visible forms (compiler errors in the code handling @safe). In the other case we add in a more direct form of human error, the failure to correctly review "@safe" code that contains @trusted lambdas.

In one case, your @safe code comes in one flavor, machine checkable. In the other case your @safe code comes in two flavors, machine checkable and needs-human-intervention, since you have no opportunity to segregate to the new @trusted form.
Jul 25 2021
On 7/25/2021 1:16 AM, Bruce Carneal wrote:
> He was the one who recommended that I pursue this DIP at beerconf (it was just an idea that I'd thrown out up to that point).

That's right. I did not think of the case of removing the @system block from a @trusted function and then silently losing the safety checking in the rest of the @trusted function. Thanks for pointing it out.

One solution would be for the compiler to check such @trusted functions for @safe-ty anyway, and issue an error for @trusted functions with no @system { } blocks and no @safe errors. The user could silence this error by adding an empty @system block: `@system { }`
Jul 25 2021
On Sunday, 25 July 2021 at 12:01:01 UTC, Walter Bright wrote:
> One solution would be for the compiler to check such @trusted functions for @safe-ty anyway, and issue an error for @trusted functions with no @system { } blocks and no @safe errors. The user could silence this error by adding an empty @system block: `@system { }`

...or rather simply marked as a @safe function then?
Jul 26 2021
On 7/26/2021 1:21 AM, ShadoLight wrote:
> On Sunday, 25 July 2021 at 12:01:01 UTC, Walter Bright wrote:
>> The user could silence this error by adding an empty @system block: `@system { }`
> ...or rather simply marked as a @safe function then?

It's still possible that the programmer wants to keep it @trusted because it's doing things not catchable by the safety checks.
Jul 26 2021
On Sunday, 25 July 2021 at 06:13:41 UTC, jfondren wrote:
> On Sunday, 25 July 2021 at 05:05:44 UTC, Bruce Carneal wrote:
>> The presence of one or more @system blocks would enable @safe checking elsewhere in the enclosing @trusted function.
> This has an unfortunate result: if in maintenance you edit a @trusted function to remove its @system blocks, it'll quietly no longer be checked as @safe.

Yes, but there is no point in a `@trusted` function without `@system` blocks. The compiler should warn about it so it gets changed (but should not ever be deprecated).

> Of the current behavior, Walter's said that he doesn't want @trusted blocks because they should be discouraged in @safe code. With this change, we'll have exactly what he doesn't want with different names: s/@trusted/@system/, s/@safe/@trusted/, and the exact same behavior: @system blocks are just what @trusted blocks would've been, and @trusted code with @system blocks in it is just @safe code with a different name.

The name is the important change. A `@safe` function shouldn't require manual verification. A `@trusted` function does. The `@safe` function with a `@trusted` escape gives the impression that the `@safe` parts don't need checking, but they do. It's also difficult to use tools to check which parts need review when focusing on memory safety.

> Instead of people accepting that @safe "isn't really safe" in the presence of @trusted blocks, and that the whole body of the function has to be audited, with this change we'll have s/@safe/@trusted/ functions that aren't really safe in the presence of @system blocks, and that the whole body of the function has to be audited. The "you have to audit this" signifier is the same, an internal lower-protection block, and all that's gained is that the function attribute's spelled differently. Is this really worth it?

Yes.

> One way to avoid the "unfortunate result" above is to permit @trusted blocks in @safe code. Which results in Rust-equivalent functionality: @safe code is checked as safe, @safe code with internal @trusted blocks is still checked as safe but people know to audit it, and @system code isn't checked as safe.

We already have that with `@trusted` lambdas today. This just changes the name to identify which parts are actually trusted and need review.

> I appreciate that there's also a vision to @safe/@trusted/@system, but it doesn't seem to have stuck, with Phobos having more than twice as many @trusted lambdas as @trusted functions:
>
> ```
> phobos$ grep -rP '\W\(\) @trusted' --include '*.d'|wc -l
> 238
> phobos$ grep -rP '\w\(\) @trusted' --include '*.d'|wc -l
> 111
> ```

`@trusted` lambdas are required inside templates where you want code to be `@safe` when the type parameters allow it. The resulting function itself needs to be marked `@trusted` with the lambdas replaced with `@system` blocks to achieve the same effect, as memory safety review is still needed.

-Steve
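A sketch of the template situation described above (made-up helper, not taken from Phobos): the enclosing function cannot simply be marked `@trusted`, because whether the rest of the body is safe depends on the type and alias parameters and should still be inferred per instantiation.

```d
// Only the pointer computation is manually verified; the call to pred still
// drives inference, so anyMatch!(...) is @safe only when pred is.
bool anyMatch(alias pred, T)(const(T)[] arr)
{
    foreach (i; 0 .. arr.length)
    {
        // In bounds: i < arr.length by construction of the loop.
        const(T)* p = () @trusted { return arr.ptr + i; } ();
        if (pred(*p))
            return true;
    }
    return false;
}
```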
Jul 25 2021
On Sunday, 25 July 2021 at 12:05:10 UTC, Steven Schveighoffer wrote:
> On Sunday, 25 July 2021 at 06:13:41 UTC, jfondren wrote:
>> Instead of people accepting that @safe "isn't really safe" in the presence of @trusted blocks, and that the whole body of the function has to be audited, with this change we'll have s/@safe/@trusted/ functions that aren't really safe in the presence of @system blocks, and that the whole body of the function has to be audited. The "you have to audit this" signifier is the same, an internal lower-protection block, and all that's gained is that the function attribute's spelled differently. Is this really worth it?
> Yes.

OK. I'll argue the opposite position for a bit, then.

Here's a @trusted function with a non-@safe component:

```d
ulong getAvailableDiskSpace(scope const(char)[] path) @trusted
{
    ULARGE_INTEGER freeBytesAvailable;
    auto err = GetDiskFreeSpaceExW(path.tempCStringW(), &freeBytesAvailable, null, null);
    cenforce(err != 0, "Cannot get available disk space");
    return freeBytesAvailable.QuadPart;
}
```

With this proposal, I imagine:

```d
ulong getAvailableDiskSpace(scope const(char)[] path) @trusted
{
    ULARGE_INTEGER freeBytesAvailable;
    auto err = @system GetDiskFreeSpaceExW(path.tempCStringW(), &freeBytesAvailable, null, null); // expression usage?
    @system {
        auto err = GetDiskFreeSpaceExW(path.tempCStringW(), &freeBytesAvailable, null, null);
    } // scopeless block?
    cenforce(err != 0, "Cannot get available disk space");
    return freeBytesAvailable.QuadPart;
}
```

And in current practice:

```d
ulong getAvailableDiskSpace(scope const(char)[] path) @safe
{
    ULARGE_INTEGER freeBytesAvailable;
    auto err = () @trusted {
        return GetDiskFreeSpaceExW(path.tempCStringW(), &freeBytesAvailable, null, null);
    } ();
    cenforce(err != 0, "Cannot get available disk space");
    return freeBytesAvailable.QuadPart;
}
```

So a naive take is "the last two versions are literally the same", but they become distinct when all three versions are framed by how they came to be, respectively:

1. a @trusted function as written by someone trying to properly use the language as described by https://dlang.org/spec/memory-safe-d.html
2. a @trusted function as written after that document is updated to reflect this DIP.
3. a @trusted function in the current "@trusted functions are bad because they don't check anything, avoid them" fail state, which is ugly because it's a late adaptation to a fault in the language rather than a result of deliberate design.

In the first case, @safe/@trusted/@system functions are dutifully written and then bugs are later found in the @trusted functions. @trusted functions are an attractive nuisance. In the second case, @safe/@trusted/@system functions are written and that doesn't happen. @trusted functions have been rehabilitated and have their intended role. In the last case, only @safe/@system functions are written, and some of the @safe functions are secretly, really, @trusted functions that have to be written in a weird way to work. @trusted functions are either a mistake or a shorthand for a @safe function whose entire body may as well be in a @trusted block.

Rather than bless the failure state by giving it a better syntax (and continuing to have @trusted as a problem for new programmers), we'd like to fix @trusted so that @trusted functions are worth writing again.

Does that sound about right?
Jul 25 2021
On Sunday, 25 July 2021 at 13:14:20 UTC, jfondren wrote:
> OK. I'll argue the opposite position for a bit, then.
>
> Here's a @trusted function with a non-@safe component:
>
> ```d
> ulong getAvailableDiskSpace(scope const(char)[] path) @trusted
> {
>     ULARGE_INTEGER freeBytesAvailable;
>     auto err = GetDiskFreeSpaceExW(path.tempCStringW(), &freeBytesAvailable, null, null);
>     cenforce(err != 0, "Cannot get available disk space");
>     return freeBytesAvailable.QuadPart;
> }
> ```

Yep. This function today is overly trusted (meaning that parts that could be at least partly mechanically checked are not allowed to be checked).

> With this proposal, I imagine:
>
> ```d
> ulong getAvailableDiskSpace(scope const(char)[] path) @trusted
> {
>     ULARGE_INTEGER freeBytesAvailable;
>     auto err = @system GetDiskFreeSpaceExW(path.tempCStringW(), &freeBytesAvailable, null, null); // expression usage?
>     @system {
>         auto err = GetDiskFreeSpaceExW(path.tempCStringW(), &freeBytesAvailable, null, null);
>     } // scopeless block?
>     cenforce(err != 0, "Cannot get available disk space");
>     return freeBytesAvailable.QuadPart;
> }
> ```

Yes, that's about right. The exact semantics are TBD (scope or no scope, expressions or statements, etc.).

[snip]

> Does that sound about right?

I think all of what you are saying is along the same lines as what I'm thinking (though I look at it more as pragmatic reasoning for how to write such functions rather than some "blessed" way to do things).

-Steve
Jul 26 2021
On Sunday, 25 July 2021 at 12:05:10 UTC, Steven Schveighoffer wrote:
> The name is the important change. A `@safe` function shouldn't require manual verification. A `@trusted` function does. The `@safe` function with a `@trusted` escape gives the impression that the `@safe` parts don't need checking, but they do. It's also difficult to use tools to check which parts need review when focusing on memory safety.

Yes, I agree. First I was skeptical about this DIP, but that, I think, is the important advantage over @trusted blocks. So I would support this DIP.
Jul 26 2021
On Sunday, 25 July 2021 at 12:05:10 UTC, Steven Schveighoffer wrote:
> On Sunday, 25 July 2021 at 06:13:41 UTC, jfondren wrote:
>> I appreciate that there's also a vision to @safe/@trusted/@system, but it doesn't seem to have stuck, with Phobos having more than twice as many @trusted lambdas as @trusted functions:
>>
>> ```
>> phobos$ grep -rP '\W\(\) @trusted' --include '*.d'|wc -l
>> 238
>> phobos$ grep -rP '\w\(\) @trusted' --include '*.d'|wc -l
>> 111
>> ```
> `@trusted` lambdas are required inside templates where you want code to be `@safe` when the type parameters allow it. The resulting function itself needs to be marked `@trusted` with the lambdas replaced with `@system` blocks to achieve the same effect, as memory safety review is still needed.

One thing that is incredibly important to get right here is the way to specify what to do with templates. I've been thinking about this a bit, and even this isn't the right way to do things (specify the function as `@trusted` and mark the actual trusted part as `@system`), because you want the compiler to infer `@system` for certain instantiations, and `@trusted` for others.

What we need is a way to say "this function should be `@trusted` iff certain sections of code are inferred `@safe`". Perhaps the correct mechanism here is to encapsulate the `@trusted` parts into separate functions, and make sure the API is `@safe` (e.g. static inner functions). I think it would be too confusing to allow `@system` blocks inside inferred templates.

-Steve
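A sketch of that static-inner-function shape (reusing the made-up `anyMatch` example from earlier): the unverifiable part becomes a `static` nested function with a @safe-callable signature, so it cannot silently capture enclosing locals, while the call to `pred` in the template body remains subject to inference.

```d
bool anyMatch(alias pred, T)(const(T)[] arr)
{
    // static: no access to the enclosing scope, so it can be reviewed from
    // its own signature and body alone.
    static const(T)* elementPtr(const(T)[] a, size_t i) @trusted
    {
        if (i >= a.length) assert(0);   // enforce the precondition locally
        return a.ptr + i;               // provably in bounds after the check
    }

    foreach (i; 0 .. arr.length)
        if (pred(*elementPtr(arr, i)))  // pred's attributes are still inferred
            return true;
    return false;
}
```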
Jul 26 2021
On Monday, 26 July 2021 at 13:26:56 UTC, Steven Schveighoffer wrote:
> On Sunday, 25 July 2021 at 12:05:10 UTC, Steven Schveighoffer wrote:
>> On Sunday, 25 July 2021 at 06:13:41 UTC, jfondren wrote:
>>> I appreciate that there's also a vision to @safe/@trusted/@system, but it doesn't seem to have stuck, with Phobos having more than twice as many @trusted lambdas as @trusted functions:
>>>
>>> ```
>>> phobos$ grep -rP '\W\(\) @trusted' --include '*.d'|wc -l
>>> 238
>>> phobos$ grep -rP '\w\(\) @trusted' --include '*.d'|wc -l
>>> 111
>>> ```
>> `@trusted` lambdas are required inside templates where you want code to be `@safe` when the type parameters allow it. The resulting function itself needs to be marked `@trusted` with the lambdas replaced with `@system` blocks to achieve the same effect, as memory safety review is still needed.
> One thing that is incredibly important to get right here is the way to specify what to do with templates. I've been thinking about this a bit, and even this isn't the right way to do things (specify the function as `@trusted` and mark the actual trusted part as `@system`), because you want the compiler to infer `@system` for certain instantiations, and `@trusted` for others.

Yes. You have argued, persuasively, that the language should infer these properties and others. IIUC, in a pervasive inference scenario, @safe/@trusted/@nogc/... annotations would function as programmer assertions: "if this property doesn't hold, error out". Couple this with Ali's trusted-by-default proposal and you'd be living in a wonderful world (he says before walking off into the unknown! :-) ).

> What we need is a way to say "this function should be `@trusted` iff certain sections of code are inferred `@safe`". Perhaps the correct mechanism here is to encapsulate the `@trusted` parts into separate functions, and make sure the API is `@safe` (e.g. static inner functions). I think it would be too confusing to allow `@system` blocks inside inferred templates.

Definitely more to work out, but I now think that we have an opportunity to achieve a much bigger win than that implied by the initial proposal. It feels like there is a simple path forward patiently awaiting discovery.
Jul 26 2021
On Monday, 26 July 2021 at 14:18:29 UTC, Bruce Carneal wrote:
> On Monday, 26 July 2021 at 13:26:56 UTC, Steven Schveighoffer wrote:
>> On Sunday, 25 July 2021 at 12:05:10 UTC, Steven Schveighoffer wrote: ...
> Yes. You have argued, persuasively, that the language should infer these properties and others. IIUC, in a pervasive inference scenario, @safe/@trusted/@nogc/... annotations would function as programmer assertions: "if this property doesn't hold, error out". Couple this with Ali's trusted-by-default proposal and you'd be living in a wonderful world (he says before walking off into the unknown! :-) ).

Rather, "if this compiler can't prove that the property holds, error out".
Jul 26 2021
On Sunday, 25 July 2021 at 05:05:44 UTC, Bruce Carneal wrote:
> At beerconf I committed to putting forward a DIP regarding a new syntactic element to be made available within @trusted functions, the @system block. The presence of one or more @system blocks would enable @safe checking elsewhere in the enclosing @trusted function.

Both before and after this proposal, there are 3 kinds of code:

1. Code that is automatically checked for memory safety.
2. Code that is assumed by the compiler to be safe, and must be manually checked for memory safety.
3. Code that is not checked for memory safety, and is assumed to be unsafe.

Currently, (1) is marked `@safe`, (2) is marked `@trusted`, and (3) is marked `@system`. Under this proposal, (1) would be marked either `@safe` or `@trusted`, (2) would be marked either `@trusted` or `@system`, and (3) would be marked `@system`. I do not think this is an improvement relative to the status quo.

> The problematic @trusted lambda escapes creeping into "@safe" code could be replaced going forward by a more honestly named form, @trusted code with @system blocks. Best practices could evolve to the point that @safe actually meant safe again.

What makes `@trusted` lambdas problematic is that they implicitly depend on everything in their enclosing scope, which makes it easy for a change in the `@safe` portion of the code to accidentally violate an assumption that the `@trusted` portion depends on.

If we want to address this issue at the language level, the most direct solution is to require that `@trusted` lambdas make their dependencies explicit. This can be done by requiring all `@trusted` nested functions (of which lambdas are a special case) to be `static`. Of course, this is a breaking change, so it would require a deprecation process.
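For illustration (hypothetical code), the kind of implicit dependency being described, and what the `static` requirement would force instead:

```d
void clearLast(int[] buf) @safe
{
    // Today: the lambda silently captures buf, and its safety argument quietly
    // relies on the surrounding @safe code having checked buf.length first.
    if (buf.length)
        () @trusted { buf.ptr[buf.length - 1] = 0; } ();

    // Under the suggested rule: static forbids capture, so everything the
    // unchecked code relies on must arrive through its parameter list.
    static void zeroAt(int[] b, size_t i) @trusted
    {
        if (i >= b.length) assert(0);   // the precondition travels with the code
        b.ptr[i] = 0;
    }
    if (buf.length)
        zeroAt(buf, buf.length - 1);
}
```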
Jul 25 2021
On Sunday, 25 July 2021 at 13:19:55 UTC, Paul Backus wrote:
> On Sunday, 25 July 2021 at 05:05:44 UTC, Bruce Carneal wrote:
>> At beerconf I committed to putting forward a DIP regarding a new syntactic element to be made available within @trusted functions, the @system block. The presence of one or more @system blocks would enable @safe checking elsewhere in the enclosing @trusted function.
> Both before and after this proposal, there are 3 kinds of code:
>
> 1. Code that is automatically checked for memory safety.
> 2. Code that is assumed by the compiler to be safe, and must be manually checked for memory safety.
> 3. Code that is not checked for memory safety, and is assumed to be unsafe.
>
> Currently, (1) is marked `@safe`, (2) is marked `@trusted`, and (3) is marked `@system`. Under this proposal, (1) would be marked either `@safe` or `@trusted`, (2) would be marked either `@trusted` or `@system`, and (3) would be marked `@system`. I do not think this is an improvement relative to the status quo.

There is no getting away from manually checking the @system block(s), so (2) would be, IIUC, just as it is now wrt compiler assumptions. The improvements on the status quo include the ability to easily delimit "should check *very* closely" code and the corresponding ability to engage safety checking on any remainder.

>> The problematic @trusted lambda escapes creeping into "@safe" code could be replaced going forward by a more honestly named form, @trusted code with @system blocks. Best practices could evolve to the point that @safe actually meant safe again.
> What makes `@trusted` lambdas problematic is that they implicitly depend on everything in their enclosing scope, which makes it easy for a change in the `@safe` portion of the code to accidentally violate an assumption that the `@trusted` portion depends on.

Yes.

> If we want to address this issue at the language level, the most direct solution is to require that `@trusted` lambdas make their dependencies explicit. This can be done by requiring all `@trusted` nested functions (of which lambdas are a special case) to be `static`.

Yes. As Walter has put this, we'd be forcing args/operands to "come in through the front door" rather than sneaking in the back.
Jul 25 2021
On Sunday, 25 July 2021 at 13:55:14 UTC, Bruce Carneal wrote:
> The improvements on the status quo include the ability to easily delimit "should check *very* closely" code and the corresponding ability to engage safety checking on any remainder.

We already have this ability: simply avoid writing `@trusted` code whose safety depends on out-of-band knowledge about `@safe` code, and enforce this practice via code review. As I've discussed previously [1], there is no way to enforce this at the language level, because the language does not (and cannot possibly) know what knowledge `@trusted` code depends on for its memory safety.

[1] https://forum.dlang.org/post/auqcjtbbamviembvcaps@forum.dlang.org
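As a sketch of that practice (POSIX-only, illustrative wrapper name): a `@trusted` function whose safety argument rests entirely on its own parameters and on `read(2)`'s documented contract, so no caller's `@safe` code ever has to be consulted when auditing it.

```d
// read() writes at most buf.length bytes into buf, so no @safe caller can
// cause memory corruption through this interface; a bad fd just returns -1.
auto readInto(int fd, ubyte[] buf) @trusted
{
    import core.sys.posix.unistd : read;

    return read(fd, buf.ptr, buf.length);
}
```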
Jul 25 2021
On Sunday, 25 July 2021 at 14:13:45 UTC, Paul Backus wrote:
> On Sunday, 25 July 2021 at 13:55:14 UTC, Bruce Carneal wrote:
>> The improvements on the status quo include the ability to easily delimit "should check *very* closely" code and the corresponding ability to engage safety checking on any remainder.
> We already have this ability: simply avoid writing `@trusted` code whose safety depends on out-of-band knowledge about `@safe` code, and enforce this practice via code review. As I've discussed previously [1], there is no way to enforce this at the language level, because the language does not (and cannot possibly) know what knowledge `@trusted` code depends on for its memory safety.
>
> [1] https://forum.dlang.org/post/auqcjtbbamviembvcaps@forum.dlang.org

I'd like to have assistance from the compiler to the maximum extent possible and then conduct the code review(s). Assuming low (near zero) false positives out of the compiler, I'm not sure why one would prefer manual checking when compiler checking was available, but that option is certainly available in both the current and proposed environments.
Jul 25 2021
On Sunday, 25 July 2021 at 14:34:27 UTC, Bruce Carneal wrote:
> I'd like to have assistance from the compiler to the maximum extent possible and then conduct the code review(s). Assuming low (near zero) false positives out of the compiler, I'm not sure why one would prefer manual checking when compiler checking was available, but that option is certainly available in both the current and proposed environments.

Under your proposal, the proportion of code that must be manually-checked vs. compiler-checked does not change at all.
Jul 25 2021
On Sunday, 25 July 2021 at 14:38:18 UTC, Paul Backus wrote:
> On Sunday, 25 July 2021 at 14:34:27 UTC, Bruce Carneal wrote:
>> I'd like to have assistance from the compiler to the maximum extent possible and then conduct the code review(s). Assuming low (near zero) false positives out of the compiler, I'm not sure why one would prefer manual checking when compiler checking was available, but that option is certainly available in both the current and proposed environments.
> Under your proposal, the proportion of code that must be manually-checked vs. compiler-checked does not change at all.

The direct translation of an @safe function with an embedded @trusted lambda to the corresponding @trusted/@system form leaves you with the exact same amount of work at the point of translation wrt that function, without dispute. Machine advantage comes in other forms.

Firstly, we now have a properly segregated code base. @safe always means 'machine checkable'. Zero procedural @trusted code review errors in that now easily identified class.

Another possible benefit (hypothesis only here) is the ease with which existing @trusted code can be made safer. External clients already believe that you're @trusted, so at any time you can become safer, incrementally (shrink the, initially large, @system block(s)). More machine checking available along a reduced-friction path. TBD.

Another possible benefit (bigger leap of faith here) is that the continuously-variable-safety mechanism will encourage safer coding generally by reducing the friction of moving functions the other way, from @system code to safer @trusted. More machine checking available along a reduced-friction path. Again, TBD.

Another possible benefit is that all new @trusted code would start with a form that defaults to safe (you opt out). I think that defaults matter and that this one would lead to more machine checkability passively, or "by default". No need to wait for the "right" time to move to @safe with @trusted escapes, just evolve the code. Architecture matters more here than the exact forms, most would say, and I'd mostly agree, but I believe the forms are important.

I also believe that there are other benefits, primarily in terms of human visuals (more easily discerned and more easily localizable @system regions), but the above are what come to mind wrt machine checkability.

I'm still thinking about it. It's less work if this is destroyed, so keep those sharp critiques coming! :-)
Jul 25 2021
On Sunday, 25 July 2021 at 16:29:38 UTC, Bruce Carneal wrote:
> Machine advantage comes in other forms. Firstly, we now have a properly segregated code base. @safe always means 'machine checkable'. Zero procedural @trusted code review errors in that now easily identified class.

I have already demonstrated that this is false. Here's my example from the previous thread on this topic, rewritten slightly to use the new-style `@trusted`/`@system` syntax from your proposal:

```d
module example;

size_t favoriteNumber() @safe { return 42; }

int favoriteElement(ref int[50] array) @trusted {
    // This is memory safe because we know favoriteNumber returns 42
    @system {
        return array.ptr[favoriteNumber()];
    }
}
```

I make the following claims:

1. This code is memory-safe. No matter how you call `favoriteElement`, it will not result in undefined behavior, or allow undefined behavior to occur in `@safe` code.
2. `favoriteNumber` is 100% machine-checkable `@safe` code.
3. Changes to `favoriteNumber` must be manually reviewed in order to ensure they do not result in memory-safety violations.

The only way to ensure that `@safe` code never requires manual review is to enforce coding standards that forbid functions like `favoriteElement` from ever being merged in the first place. One example of a set of coding standards that would reject `favoriteElement` is:

1. Every `@system` block must be accompanied by a comment containing a proof (formal or informal) of memory safety.
2. A memory-safety proof for a `@system` block may not rely on any knowledge about symbols defined outside the block other than
   * knowledge that is implied by their type signatures.
   * knowledge that is implied by their documentation and/or any standards they are known to conform to.

`favoriteElement` satisfies condition (1), but not condition (2)--there is nothing in `favoriteNumber`'s documentation or type signature that guarantees it will return 42.

I believe (though I cannot prove) that any set of coding standards capable of rejecting `favoriteElement` under your proposal is also (mutatis mutandis) capable of rejecting `favoriteElement` in the current language. If this were true, then the proposal would be of no value--we could simply implement those coding standards and reap their benefits right away, with no language changes required.

If you can give an example of a set of coding standards that rejects `favoriteElement` under your proposal but fails to reject `favoriteElement` in the D language as it currently exists, I would be very interested in seeing it.
Jul 25 2021
On Sunday, 25 July 2021 at 17:47:40 UTC, Paul Backus wrote:
> On Sunday, 25 July 2021 at 16:29:38 UTC, Bruce Carneal wrote:
>> Machine advantage comes in other forms. Firstly, we now have a properly segregated code base. @safe always means 'machine checkable'. Zero procedural @trusted code review errors in that now easily identified class.
> I have already demonstrated that this is false. Here's my example from the previous thread on this topic, rewritten slightly to use the new-style `@trusted`/`@system` syntax from your proposal:
>
> ```d
> module example;
>
> size_t favoriteNumber() @safe { return 42; }
>
> int favoriteElement(ref int[50] array) @trusted {
>     // This is memory safe because we know favoriteNumber returns 42
>     @system {
>         return array.ptr[favoriteNumber()];
>     }
> }
> ```
>
> I make the following claims:
>
> 1. This code is memory-safe. No matter how you call `favoriteElement`, it will not result in undefined behavior, or allow undefined behavior to occur in `@safe` code.
> 2. `favoriteNumber` is 100% machine-checkable `@safe` code.
> 3. Changes to `favoriteNumber` must be manually reviewed in order to ensure they do not result in memory-safety violations.

So...

Call a @safe function and get an int. Use that int to index into an array with bounds checking turned off. Where's the memory safety bug? In the function that returns the int or in the @system code that bypasses bounds checking?

So no, that doesn't prove what you say it does. It doesn't mean favoriteNumber needs checking, it means the @system block needs checking. favoriteNumber knows nothing about the array length; to assume it does or it should is bad design.
Jul 25 2021
On Sunday, 25 July 2021 at 20:36:09 UTC, claptrap wrote:
> So...
>
> Call a @safe function and get an int. Use that int to index into an array with bounds checking turned off. Where's the memory safety bug? In the function that returns the int or in the @system code that bypasses bounds checking?

The questioner is what's at issue there, not the question.

Consider a reviewer of a patch that only changes a @safe function. This reviewer cannot determine from just the changes, to only @safe functions, that they won't cause the patched program to develop an out-of-bounds access. @safe/@trusted/@system is not really helping this questioner.

Consider instead someone debugging an actual out-of-bounds access. This reviewer doesn't have to examine @safe functions except when tracing the inputs to a @system block/function where they isolated the error. @safe/@trusted/@system has potentially saved this questioner a lot of time with getting an answer to "Where's the memory safety bug?"
Jul 25 2021
On Sunday, 25 July 2021 at 20:36:09 UTC, claptrap wrote:
> So no, that doesn't prove what you say it does. It doesn't mean favoriteNumber needs checking, it means the @system block needs checking. favoriteNumber knows nothing about the array length; to assume it does or it should is bad design.

Strictly speaking, you're right; it is the `@system` block that needs checking, not `favoriteNumber`. However, any time you change `favoriteNumber`, you have to *re-check* the `@system` block. From a maintenance perspective, this is no different from `favoriteNumber` itself requiring manual checking--if someone submits a PR that changes `favoriteNumber`, and you accept it without any manual review, you risk introducing a memory-safety bug.

The same logic applies to `@trusted` lambdas. Strictly speaking, it is the lambda that requires checking, not the surrounding `@safe` code. However, any changes to the surrounding code require you to *re-check* the lambda, so from a maintenance perspective, you must review changes to the `@safe` part just as carefully as changes to the `@trusted` part.

The underlying problem in both cases is that the memory safety of the manually-checked code (`@system` block/`@trusted` lambda) depends on details of the automatically-checked code that are not robust against change.
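Concretely, the kind of purely-`@safe` edit being warned about here (hypothetical diff, building on the example above):

```d
// Before: the @system block in favoriteElement was verified against this.
size_t favoriteNumber() @safe { return 42; }

// After an apparently harmless, fully machine-checked change:
//     size_t favoriteNumber() @safe { return 99; }
// favoriteElement still compiles unchanged, but its unchecked access becomes
// array.ptr[99] on a 50-element array; the manual proof has silently rotted.
```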
Jul 25 2021
On Sunday, 25 July 2021 at 21:32:00 UTC, Paul Backus wrote:On Sunday, 25 July 2021 at 20:36:09 UTC, claptrap wrote:Im sorry but it's nonsense. You get an OOB error, it points you at the system block, you add bounds checking, job done. Changing favouriteNumber doesnt introduce a bug, the bug was *already* there in the system block. You cant expect favouriteNumber to be responsible for other code doing stupid things with its result.So no that doesn't prove what you say it does, it doesn't mean favouriteNumber needs checking, it means the system block needs checking. favouriteNumber knows nothing about the array length, to assume it does or it should is bad design.Strictly speaking, you're right; it is the ` system` block that needs checking, not `favoriteNumber`. However, any time you change `favoriteNumber`, you have to *re-check* the ` system` block. From a maintenance perspective, this is no different from `favoriteNumber` itself requiring manual checking--if someone submits a PR that changes `favoriteNumber`, and you accept it without any manual review, you risk introducing a memory-safety bug.
Jul 25 2021
On Sunday, 25 July 2021 at 22:05:26 UTC, claptrap wrote:Im sorry but it's nonsense. You get an OOB error, it points you at the system block, you add bounds checking, job done. Changing favouriteNumber doesnt introduce a bug, the bug was *already* there in the system block. You cant expect favouriteNumber to be responsible for other code doing stupid things with its result.If the bug is "already there", you should be able to write a program that uses the unmodified versions of `favoriteNumber` and `favoriteElement` to cause undefined behavior in ` safe` code. If you cannot, then you must admit that `favoriteElement` is memory safe as-written.
Jul 25 2021
On Sunday, 25 July 2021 at 22:43:26 UTC, Paul Backus wrote:
On Sunday, 25 July 2021 at 22:05:26 UTC, claptrap wrote:
I'm sorry but it's nonsense. You get an OOB error, it points you at the @system block, you add bounds checking, job done. Changing favoriteNumber doesn't introduce a bug, the bug was *already* there in the @system block. You can't expect favoriteNumber to be responsible for other code doing stupid things with its result.
If the bug is "already there", you should be able to write a program that uses the unmodified versions of `favoriteNumber` and `favoriteElement` to cause undefined behavior in `@safe` code. If you cannot, then you must admit that `favoriteElement` is memory safe as-written.
Consider this...

int foo() { return 42; }

void bar() {
    int[2] what;
    if (foo() == 24) {
        what.ptr[2] = 100; // BUG
    }
}

Your argument is the same as saying that bar() is memory safe as written. True, but it's not bug free. The bug is just waiting for the right set of circumstances to come to life and eat your face :)

IE. Memory safe as written != bug free
Jul 25 2021
On Sunday, 25 July 2021 at 23:34:35 UTC, claptrap wrote:Your argument the same as saying that bar() is memory safe as written. True, but it's not bug free. The bug is just waiting for the right set of circumstances to come to life and eat your face :) IE. Memory safe as written != bug freeYes; I agree completely. :) The point of the example is to show that the proposal advanced in this thread does not prevent this type of bug from occurring.
Jul 25 2021
On 26.07.21 01:50, Paul Backus wrote:On Sunday, 25 July 2021 at 23:34:35 UTC, claptrap wrote:The original claim was that the new feature is a tool that allows the code base to be properly segregated more easily, not that you can't still write incorrect trusted code. If you have to review safe code to ensure memory safety of your trusted code, your trusted code is incorrect. Note that the trusted lambda idiom is _basically always_ incorrect trusted code. Some people do it anyway, because it's convenient. The new feature allows combined convenience and correctness.Your argument the same as saying that bar() is memory safe as written. True, but it's not bug free. The bug is just waiting for the right set of circumstances to come to life and eat your face :) IE. Memory safe as written != bug freeYes; I agree completely. :) The point of the example is to show that the proposal advanced in this thread does not prevent this type of bug from occurring.
Jul 25 2021
On Monday, 26 July 2021 at 03:40:55 UTC, Timon Gehr wrote:The original claim was that the new feature is a tool that allows the code base to be properly segregated more easily, not that you can't still write incorrect trusted code. If you have to review safe code to ensure memory safety of your trusted code, your trusted code is incorrect.trusted code is correct if and only if it cannot possibly allow undefined behavior to be invoked in safe code. If my example is incorrect as-written, then you should be able to write a program that uses it, without modification, to cause undefined behavior in safe code. Same for any given trusted lambda.
Jul 26 2021
On Monday, 26 July 2021 at 07:32:24 UTC, Paul Backus wrote:On Monday, 26 July 2021 at 03:40:55 UTC, Timon Gehr wrote:Your example doesn't invoke undefined behaviour in safe code, it invokes undefined behaviour in system code. The UB is in the system block. The memory corruption happens in the system block. After that all bets are off. There's no way around that, which makes your example moot.The original claim was that the new feature is a tool that allows the code base to be properly segregated more easily, not that you can't still write incorrect trusted code. If you have to review safe code to ensure memory safety of your trusted code, your trusted code is incorrect.trusted code is correct if and only if it cannot possibly allow undefined behavior to be invoked in safe code.If my example is incorrect as-written, then you should be able to write a program that uses it, without modification, to cause undefined behavior in safe code. Same for any given trusted lambda.And that proves what? That you can write buggy system code that doesn't cause memory errors in some circumstances?
Jul 26 2021
On Monday, 26 July 2021 at 09:39:57 UTC, claptrap wrote:On Monday, 26 July 2021 at 07:32:24 UTC, Paul Backus wrote:Well, it is in a ` trusted` function, which is callable from ` safe` code, so any undefined behavior in the ` system` block is also possible undefined behavior in ` safe` code. If you can write a call to `favoriteElement` from ` safe` code that causes UB, that would be sufficient to demonstrate that it is not memory safe. Of course, it only counts as a mistake in my example if you use the version I wrote, not your own modified version. :)trusted code is correct if and only if it cannot possibly allow undefined behavior to be invoked in safe code.Your example doesn't invoke undefined behaviour in safe code, it invokes undefined behaviour in system code. The UB is in the system block. The memory corruption happens in the system block. After that all bets are off.
Jul 26 2021
On Monday, 26 July 2021 at 13:58:46 UTC, Paul Backus wrote:
On Monday, 26 July 2021 at 09:39:57 UTC, claptrap wrote:
On Monday, 26 July 2021 at 07:32:24 UTC, Paul Backus wrote:
@trusted code is correct if and only if it cannot possibly allow undefined behavior to be invoked in @safe code.
Your example doesn't invoke undefined behaviour in @safe code, it invokes undefined behaviour in @system code. The UB is in the @system block. The memory corruption happens in the @system block. After that all bets are off.
Well, it is in a `@trusted` function, which is callable from `@safe` code, so any undefined behavior in the `@system` block is also possible undefined behavior in `@safe` code. If you can write a call to `favoriteElement` from `@safe` code that causes UB, that would be sufficient to demonstrate that it is not memory safe. Of course, it only counts as a mistake in my example if you use the version I wrote, not your own modified version. :)
It's a pointless exercise because your example is a red herring, but this breaks it:

```d
ref int[50] ohYeah() @trusted
{
    return *(cast(int[50]*) 123456);
}

int redHerring() @safe
{
    return favoriteElement(ohYeah());
}
```

See what we've learnt? That @trusted / @system etc. can break safety. Top down, or bottom up. It's a house of cards, you can't escape that, unless you get rid of @trusted and @system completely. Who knew? :)

Another way to think about it is that you are conflating "memory safe by convention" with "memory safe by compiler". Your example is memory safe by convention, i.e. it is only so because the programmer has checked that favoriteNumber doesn't break favoriteElement. So when you change favoriteNumber to 52 you are violating "safe by convention". You are not violating "safe by compiler". That is, when you change favoriteNumber it doesn't change anything in terms of what is checked by the compiler, or in terms of runtime checks. At least AFAIK.

I wonder what it is you expect the compiler to do. And TBH I'm not even sure what your overall point is.
Jul 26 2021
On Monday, 26 July 2021 at 16:26:53 UTC, claptrap wrote:Its a pointless exercise because your example is a red herring, but this breaks it....And TBH I'm not even sure what your overall point is.It's a response to overly strong claims about what this DIP will achieve: https://forum.dlang.org/post/fnhvydmbguyagcmaepih forum.dlang.orgIt is a response to the claim that "the compiler's assertions regarding your remaining safe code might actually mean something." They mean exactly the same thing with your proposal as they do without it: that the safe portion of the program does not violate the language's memory-safety invariants directly.And it's again from the perspective of someone reviewing a patch rather than someone troubleshooting a bug. *Once there is a memory error in your program*, then safe helps you by telling you to look elsewhere for the direct violation. If you are reviewing patches with an eye towards not including a memory error in your program, then safe matters a lot less.
Jul 26 2021
On Monday, 26 July 2021 at 16:45:07 UTC, jfondren wrote:On Monday, 26 July 2021 at 16:26:53 UTC, claptrap wrote:If you're saying the proposed "system blocks inside trusted functions" provide no advantage over teh current "trusted lambdas inside safe functions" yes thats true. But I think the point is trusted functions get more checking. Even if you say well you can achieve the same by just using a trusted lambda inside a safe function its not the same once you consider what people actually do. If you have just one trusted function in your app, then switching to this new regime would automatically give you more checking. You have to take into account how people will actually behave, even if you can technically achieve the same thing with the current system.Its a pointless exercise because your example is a red herring, but this breaks it....And TBH I'm not even sure what your overall point is.It's a response to overly strong claims about what this DIP will achieve: https://forum.dlang.org/post/fnhvydmbguyagcmaepih forum.dlang.orgIt is a response to the claim that "the compiler's assertions regarding your remaining safe code might actually mean something." They mean exactly the same thing with your proposal as they do without it: that the safe portion of the program does not violate the language's memory-safety invariants directly.And it's again from the perspective of someone reviewing a patch rather than someone troubleshooting a bug. *Once there is a memory error in your program*, then safe helps you by telling you to look elsewhere for the direct violation. If you are reviewing patches with an eye towards not including a memory error in your program, then safe matters a lot less.This is still mischaracterizing the problem, if you add safe code and it causes a memory error to occur in trusted or system code, the problem was already there. You're not adding a memory error, just changing the conditions so it is triggered. It would be nice to have a system that would help with problems like that but I think its actually unreasonable, and probably impossible. And it's probably counterproductive to use examples like that to guide the design process. You're designing for constraints that cant be met.
Jul 28 2021
On Wednesday, 28 July 2021 at 08:40:47 UTC, claptrap wrote:
If you're saying the proposed "@system blocks inside @trusted functions" provide no advantage over the current "@trusted lambdas inside @safe functions", yes that's true. But I think the point is @trusted functions get more checking. Even if you say you can achieve the same by just using a @trusted lambda inside a @safe function, it's not the same once you consider what people actually do. If you have just one @trusted function in your app, then switching to this new regime would automatically give you more checking. You have to take into account how people will actually behave, even if you can technically achieve the same thing with the current system.
If I understand correctly, there are two problems being diagnosed in this discussion (overall--not just in your post), and two solutions being proposed:

1. Problem: `@trusted` functions make it too easy to make mistakes, and allow too much code to go without automatic checks. Solution: do away with `@trusted` as it currently exists. Instead, automatic safety checks will only be disabled in specially-marked blocks (like in Rust). This will encourage programmers to disable automatic checking only for the specific lines of code where it is actually necessary, rather than for the entire function.

2. Problem: `@trusted` lambdas are difficult to review and maintain because they have implicit dependencies on the surrounding `@safe` code. Solution: encourage (or require) programmers to put their `@trusted` code into separate functions, which communicate with `@safe` code explicitly via arguments and return values. (See: [Walter's PR][1].)

There is logic to both of these proposals, but their solutions conflict: one pushes for less use of function-level `@trusted`, the other for more use of it.

[1]: https://github.com/dlang/dlang.org/pull/3077
Jul 28 2021
On Wednesday, 28 July 2021 at 12:49:28 UTC, Paul Backus wrote:On Wednesday, 28 July 2021 at 08:40:47 UTC, claptrap wrote:If I understand correctly, there are two problems being diagnosed in this discussion (overall--not just in your post), and two solutions being proposed: mistakes, and allow too much code to go without automatic checks. ` trusted` as it currently exists. Instead, automatic safety checks will only be disabled in specially-marked blocks (like in Rust). This will encourage programmers to disable automatic checking only for the specific lines of code where it is actually necessary, rather than for the entire function. and maintain because they have implicit dependencies on the surrounding ` safe` code. programmers to put their ` trusted` code into separate functions, which communicate with ` safe` code explicitly via arguments and return values. (See: [Walter's PR][1]). There is logic to both of these proposals, but their solutions conflict: one pushes for less use of function-level ` trusted`, the other for more use of it. [1]: https://github.com/dlang/dlang.org/pull/3077Agree 100%. Maybe you can just ban trusted lambdas from having access to the surrounding scope? Or ban unsafe blocks from having access to the surrounding scope, but then you need to have a way to pass stuff in, maybe like... system (int[] foo, float bar) { ... } unsafe blocks could be lowered to a trusted lambda maybe?
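For illustration, a minimal sketch of that lowering idea expressed in today's D: an immediately invoked @trusted function literal whose body is written, by convention, to touch only what is explicitly passed in. The parameterized block syntax sketched above is the proposal, not current D, and all names below are made up:

```d
void process(int[] data, float scale) @safe
{
    // Hypothetical surface syntax from the post:
    //     @system (int[] foo, float bar) { ... }
    // could lower to something like this immediately invoked
    // @trusted literal, called with the chosen arguments:
    (int[] foo, float bar) @trusted {
        // the unchecked code below is written to touch only the
        // explicitly passed arguments
        if (foo.length != 0)
            foo.ptr[0] = cast(int) bar;
    }(data, scale);
}
```

Note that nothing in current D stops the literal from also referencing enclosing variables (it would simply become a capturing delegate), which is presumably why enforced isolation would need language support.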
Jul 28 2021
On Wednesday, 28 July 2021 at 12:49:28 UTC, Paul Backus wrote:On Wednesday, 28 July 2021 at 08:40:47 UTC, claptrap wrote:I see this as two problems with a common solution, rather than a conflict. Problem 1: trusted lambdas within safe functions are very convenient but tricky. Problem 2: we want more practical safety in the evolving dlang code base with as close to zero additional programming load as we can manage. Rather than post some half-baked evolution of the original proposal sketch here, I'll wait til the back and forth with Joe converges. The current candidates there are, in my view, both simpler and more powerful than anticipated in the forum discussion. The biggest open question is how things should evolve to reduce/eliminate meta programming friction (thanks for the caution Steven). Unless we unexpectedly find ourselves with a lot of time on our hands, ETA on the DIP is still the end of the year.If you're saying the proposed "system blocks inside trusted functions" provide no advantage over teh current "trusted lambdas inside safe functions" yes thats true. But I think the point is trusted functions get more checking. Even if you say well you can achieve the same by just using a trusted lambda inside a safe function its not the same once you consider what people actually do. If you have just one trusted function in your app, then switching to this new regime would automatically give you more checking. You have to take into account how people will actually behave, even if you can technically achieve the same thing with the current system.If I understand correctly, there are two problems being diagnosed in this discussion (overall--not just in your post), and two solutions being proposed: mistakes, and allow too much code to go without automatic checks. ` trusted` as it currently exists. Instead, automatic safety checks will only be disabled in specially-marked blocks (like in Rust). This will encourage programmers to disable automatic checking only for the specific lines of code where it is actually necessary, rather than for the entire function. and maintain because they have implicit dependencies on the surrounding ` safe` code. programmers to put their ` trusted` code into separate functions, which communicate with ` safe` code explicitly via arguments and return values. (See: [Walter's PR][1]). There is logic to both of these proposals, but their solutions conflict: one pushes for less use of function-level ` trusted`, the other for more use of it. [1]: https://github.com/dlang/dlang.org/pull/3077
Jul 28 2021
On Wednesday, 28 July 2021 at 16:57:41 UTC, Bruce Carneal wrote:On Wednesday, 28 July 2021 at 12:49:28 UTC, Paul Backus wrote:Do you have ideas on how to stop unsafe blocks accessing the variables from the surrounding scope? Is that even a goal for the DIP?On Wednesday, 28 July 2021 at 08:40:47 UTC, claptrap wrote:I see this as two problems with a common solution, rather than a conflict. Problem 1: trusted lambdas within safe functions are very convenient but tricky. Problem 2: we want more practical safety in the evolving dlang code base with as close to zero additional programming load as we can manage. Rather than post some half-baked evolution of the original proposal sketch here, I'll wait til the back and forth with Joe converges. The current candidates there are, in my view, both simpler and more powerful than anticipated in the forum discussion. The biggest open question is how things should evolve to reduce/eliminate meta programming friction (thanks for the caution Steven). Unless we unexpectedly find ourselves with a lot of time on our hands, ETA on the DIP is still the end of the year.
Jul 28 2021
On Wednesday, 28 July 2021 at 17:25:18 UTC, claptrap wrote:On Wednesday, 28 July 2021 at 16:57:41 UTC, Bruce Carneal wrote:Yes. The form and scope of the unsafe block(s) is under discussion, with safety and readability in play.[...]Do you have ideas on how to stop unsafe blocks accessing the variables from the surrounding scope? Is that even a goal for the DIP?
Jul 28 2021
On Wednesday, 28 July 2021 at 17:25:18 UTC, claptrap wrote:
Do you have ideas on how to stop unsafe blocks accessing the variables from the surrounding scope? Is that even a goal for the DIP?
I'm not sure it necessarily is. Consider the following example (using the proposed @trusted-with-@system-blocks syntax):

```D
/// Writes something into the provided buffer, e.g. filling the
/// buffer with random bytes
extern(C) void writeIntoCBuffer (int* ptr, size_t len) @system;

void writeIntoDBuffer (ref int[] buf) @trusted
{
    @system {
        writeIntoCBuffer(buf.ptr, buf.length);
    }
}
```

That seems like a reasonable use-case for a @trusted wrapper of an underlying @system function, but if the @system block was forbidden from accessing variables from the surrounding scope, it wouldn't be possible.

Does that make sense, or have I misunderstood what you had in mind?
Jul 29 2021
On Thursday, 29 July 2021 at 08:16:08 UTC, Joseph Rushton Wakeling wrote:On Wednesday, 28 July 2021 at 17:25:18 UTC, claptrap wrote:Yes, I was a bit sloppy earlier. Full "stopping" is a non-goal. There are, however, various restrictions and syntactic forms to be considered that usefully differ from the full access scopeless variant.Do you have ideas on how to stop unsafe blocks accessing the variables from the surrounding scope? Is that even a goal for the DIP?I'm not sure it necessarily is. Consider the following example (using the proposed trusted-with- system-blocks syntax): ```D /// Writes something into the provided buffer, e.g. filling the /// buffer with random bytes extern(C) void writeIntoCBuffer (int* ptr, size_t len) system; void writeIntoDBuffer (ref int[] buf) trusted { system { writeIntoCBuffer(buf.ptr, buf.length); } } ``` That seems like a reasonable use-case for a trusted wrapper of an underlying system function, but if the system block was forbidden from accessing variables from the surrounding scope, it wouldn't be possible. Does that make sense, or have I misunderstood what you had in mind?
Jul 29 2021
On Thursday, 29 July 2021 at 08:16:08 UTC, Joseph Rushton Wakeling wrote:On Wednesday, 28 July 2021 at 17:25:18 UTC, claptrap wrote:Not exactly, obviously if they cant access variables from the surround scope they'd be useless. But i think the idea (not something i knew about until this thread) is to have a safe api between trusted and system. So there's controlled / restricted access. Otherwise if you have a system block inside a trusted function, the system code could just trash anything it wants from the enclosing scope, which makes any guarantees you have from the code being checked a bit pointless. So it's not just about narrowing down the amount of code that is marked system, but also the amount of state that it can access. IIUC that was the original reason for limiting safe/ trusted/ system to only apply on functions. So it forces you think about API between them. But I guess it didn't work out as expected.Do you have ideas on how to stop unsafe blocks accessing the variables from the surrounding scope? Is that even a goal for the DIP?I'm not sure it necessarily is. Consider the following example (using the proposed trusted-with- system-blocks syntax): ```D /// Writes something into the provided buffer, e.g. filling the /// buffer with random bytes extern(C) void writeIntoCBuffer (int* ptr, size_t len) system; void writeIntoDBuffer (ref int[] buf) trusted { system { writeIntoCBuffer(buf.ptr, buf.length); } } ``` That seems like a reasonable use-case for a trusted wrapper of an underlying system function, but if the system block was forbidden from accessing variables from the surrounding scope, it wouldn't be possible. Does that make sense, or have I misunderstood what you had in mind?
Jul 29 2021
On Thursday, 29 July 2021 at 13:20:41 UTC, claptrap wrote:On Thursday, 29 July 2021 at 08:16:08 UTC, Joseph Rushton Wakeling wrote:A design tension that Joseph and I are working with is that between local readability and practical safety. One of the ideas being batted around is to allow parameterless read-only access to the preceding scope from within scoped ncbc (not checked by compiler) blocks. There are other ideas in this area. The DIP ideas are still, quite clearly, in the half-baked category. We (Joseph and I) are considering beerconf updates where we can solicit input on 3/4 baked ideas and open the door to new possibilities in a more productive setting. That said, if lightning strikes and you can't wait til beerconf please feel free to drop me an email. Finally, I've not tightly coordinated with Joseph on this response, so he may have additional/better information to impart.On Wednesday, 28 July 2021 at 17:25:18 UTC, claptrap wrote:Not exactly, obviously if they cant access variables from the surround scope they'd be useless. But i think the idea (not something i knew about until this thread) is to have a safe api between trusted and system. So there's controlled / restricted access. Otherwise if you have a system block inside a trusted function, the system code could just trash anything it wants from the enclosing scope, which makes any guarantees you have from the code being checked a bit pointless. So it's not just about narrowing down the amount of code that is marked system, but also the amount of state that it can access. IIUC that was the original reason for limiting safe/ trusted/ system to only apply on functions. So it forces you think about API between them. But I guess it didn't work out as expected.Do you have ideas on how to stop unsafe blocks accessing the variables from the surrounding scope? Is that even a goal for the DIP?I'm not sure it necessarily is. Consider the following example (using the proposed trusted-with- system-blocks syntax): ```D /// Writes something into the provided buffer, e.g. filling the /// buffer with random bytes extern(C) void writeIntoCBuffer (int* ptr, size_t len) system; void writeIntoDBuffer (ref int[] buf) trusted { system { writeIntoCBuffer(buf.ptr, buf.length); } } ``` That seems like a reasonable use-case for a trusted wrapper of an underlying system function, but if the system block was forbidden from accessing variables from the surrounding scope, it wouldn't be possible. Does that make sense, or have I misunderstood what you had in mind?
Jul 29 2021
On 7/25/2021 8:40 PM, Timon Gehr wrote:Note that the trusted lambda idiom is _basically always_ incorrect trusted code. Some people do it anyway, because it's convenient.https://github.com/dlang/dlang.org/pull/3077
Jul 26 2021
On Sunday, 25 July 2021 at 23:50:12 UTC, Paul Backus wrote:
On Sunday, 25 July 2021 at 23:34:35 UTC, claptrap wrote:
Your argument is the same as saying that bar() is memory safe as written. True, but it's not bug free. The bug is just waiting for the right set of circumstances to come to life and eat your face :) IE. Memory safe as written != bug free
Yes; I agree completely. :) The point of the example is to show that the proposal advanced in this thread does not prevent this type of bug from occurring.
You could probably come up with an example bug that wouldn't be caught by any system you can think of, as long as you have a @system escape hatch. It doesn't really prove anything IMO.
Jul 26 2021
On Sunday, 25 July 2021 at 21:32:00 UTC, Paul Backus wrote:
The underlying problem in both cases is that the memory safety of the manually-checked code (`@system` block/`@trusted` lambda) depends on details of the automatically-checked code that are not robust against change.
I think the problem is you're conflating memory safety and validity. When you slap @safe on favoriteNumber you're not telling the world that it will always return a valid result, you're telling the world that it won't corrupt memory. That is, @safe doesn't mean you can blindly use the result of a function. E.g. if you had a @safe version of getchar(), would you blindly use the result of that?
Jul 25 2021
On Sunday, 25 July 2021 at 17:47:40 UTC, Paul Backus wrote:On Sunday, 25 July 2021 at 16:29:38 UTC, Bruce Carneal wrote:OK, I think I see where we're diverging. I never thought that any automated checking could absolve the programmer from actually understanding the code.Machine advantage comes in other forms. Firstly, we now have a properly segregated code base. safe always means 'machine checkable'. Zero procedural trusted code review errors in that now easily identified class.I have already demonstrated that this is false. Here's my example from the previous thread on this topic, rewritten slightly to use the new-style ` trusted`/` system` syntax from your proposal: ```d module example; size_t favoriteNumber() safe { return 42; } int favoriteElement(ref int[50] array) trusted { // This is memory safe because we know favoriteNumber returns 42 system { return array.ptr[favoriteNumber()]; } } ``` I make the following claims: 1. This code is memory-safe. No matter how you call `favoriteElement`, it will not result in undefined behavior, or allow undefined behavior to occur in ` safe` code. 2. `favoriteNumber` is 100% machine-checkable ` safe` code. 3. Changes to `favoriteNumber` must be manually reviewed in order to ensure they do not result in memory-safety violations.The only way to ensure that ` safe` code never requires manual review is to enforce coding standards that forbid functions like `favoriteElement` from ever being merged in the first place.There is no way to ensure/prove anything if you are ignorant of the code so per such a framing we can assert, trivially again, that the only way of avoiding "manual review" (reading) is to disallow the code.One example of a set of coding standards that would reject `favoriteElement` is: ... If you can give an example of a set of coding standards that rejects `favoriteElement` under your proposal but fails to reject `favoriteElement` in the D language as it currently exists, I would be very interested in seeing it.Unless I'm really missing something, nothing interesting is being proven or disproven here beyond the importance of definitions when "proving" things. I'd like to conclude with two points from the beerconf discussion on this proposal, first your excellant observation that trusted might be viewed as the safe <==> system programming analog of "weakly pure": it's where some good degree of localization might occur but where interesting stuff also gets done. That safe actually might be profitably "purified" of the dreaded trusted lambda! :-) Secondly, there was a history-of-the-ideas-embodied-in-the-DIP put together nicely by Joe Wakeling that indicates several possible "origination points" for the ideas. Further posts are welcome but Joe and I are going mostly quiet now, aiming for a concentrated DIP effort very late in the year.
Jul 25 2021
On Sunday, 25 July 2021 at 21:26:27 UTC, Bruce Carneal wrote:Further posts are welcome but Joe and I are going mostly quiet now, aiming for a concentrated DIP effort very late in the year.Well, thanks, and good luck. I think if your DIP makes strong promises about how it will make the language much safer it is currently, that you'll see these same criticisms again about how potential safety actually remains the same. If you make much more restrained claims, that "The name is the important change.", then any anticipated inconvenience with the rollout will seem to outweigh the benefit. I recommend this justification for the DIP: what you are doing is *rehabilitating trusted functions*, which are currently (for newbies) a bug-filled trap, and (for experts) disused in favor of safe functions containing trusted blocks. The language documentation tells people to use trusted functions but the language in practice doesn't really have them. That's simultaneously a humble change and one that substantially improves the language, vs. alternate proposal like "conform to Rust" that would add to D but leave it still having this safe/ trusted/ system framework that doesn't work as intended. People complain about "half-baked features" and here you would be proposing to bake this feature the rest of the way. That's a lot easier to support, and a more coherent language is something people can think about when they're having to touch a bunch of code as a result of the DIP changing the language.
Jul 25 2021
On Sunday, 25 July 2021 at 23:16:16 UTC, jfondren wrote:
I recommend this justification for the DIP: what you are doing is *rehabilitating @trusted functions*, which are currently (for newbies) a bug-filled trap, and (for experts) disused in favor of @safe functions containing @trusted blocks. The language documentation tells people to use @trusted functions but the language in practice doesn't really have them.
The language doesn't have @trusted blocks. @trusted function literals (lambdas) are still functions. Everything the documentation says about @trusted functions applies to function literals.

People do like to treat @trusted function literals as if they were the proposed @system blocks. But then they're, strictly speaking, writing invalid code. They're cheating. And why not? It has significant advantages for the low price of (1) tainting some @safe code and (2) having a technically invalid program that works just fine in practice.

What the proposal does is turn the common cheat into an official part of the language, a best practice even. And the syntax gets a bit nicer.
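For readers who haven't seen the idiom being discussed, a minimal sketch of the @trusted function-literal "cheat" as it is commonly written today (the function name and the memcpy call are just for illustration):

```d
void copyInto(int[] dst, const(int)[] src) @safe
{
    assert(dst.length >= src.length);
    // An immediately invoked @trusted lambda inside an otherwise
    // @safe function; the lambda implicitly sees (and relies on)
    // everything in the enclosing scope, including the assert above.
    () @trusted {
        import core.stdc.string : memcpy;
        memcpy(dst.ptr, src.ptr, src.length * int.sizeof);
    }();
}
```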
Jul 25 2021
On 26.07.21 02:21, ag0aep6g wrote:On Sunday, 25 July 2021 at 23:16:16 UTC, jfondren wrote:The documentation no longer says much about that. https://github.com/dlang/dlang.org/pull/2453#commitcomment-53328593 Probably we should fix that soon....The language doesn't have trusted blocks. trusted function literals (lambdas) are still functions. Everything the documentation says about trusted functions applies to function literals.
Jul 25 2021
On 26.07.21 05:46, Timon Gehr wrote:On 26.07.21 02:21, ag0aep6g wrote:[...]That's just memory-safe-d.dd, which is an odd page that should be merged with function.dd. function.dd still says that trusted functions must have safe interfaces. https://dlang.org/spec/function.html#trusted-functionsThe language doesn't have trusted blocks. trusted function literals (lambdas) are still functions. Everything the documentation says about trusted functions applies to function literals.The documentation no longer says much about that. https://github.com/dlang/dlang.org/pull/2453#commitcomment-53328593 Probably we should fix that soon.
Jul 26 2021
On 7/25/2021 8:46 PM, Timon Gehr wrote:The documentation no longer says much about that. https://github.com/dlang/dlang.org/pull/2453#commitcomment-53328593 Probably we should fix that soon.https://github.com/dlang/dlang.org/pull/3076
Jul 26 2021
On Sunday, 25 July 2021 at 17:47:40 UTC, Paul Backus wrote:```d module example; size_t favoriteNumber() safe { return 42; } int favoriteElement(ref int[50] array) trusted { // This is memory safe because we know favoriteNumber returns 42 system { return array.ptr[favoriteNumber()]; } } ```favoriteElement(), all on its own, has an unchecked type error: array is indexed by the int return value of favoriteNumber(), but int has a range outside of the 0..49 type of array's indices. In Ada, array's index type would be specified in the code and you'd have to either perform a checked type conversion to use favoriteNumber() there, or you'd have to change favoriteNumber() to return the index type rather than an int. ```ada with Ada.Text_IO; use Ada.Text_IO; procedure FavElm is subtype Index is Integer range 0 .. 49; type Arg is array (Index) of Integer; function favoriteNumber return Index is (142); function favoriteElement(A : Arg) return Integer is begin return A (favoriteNumber); end favoriteElement; MyArray : Arg := (42 => 5, others => 0); begin Put_Line (Natural'Image (favoriteElement (MyArray))); end FavElm; ``` which compiles with a warning, and fails at runtime as promised: ``` favelm.adb:7:45: warning: value not in range of type "Index" defined at line 4 favelm.adb:7:45: warning: "Constraint_Error" will be raised at run time ``` In a language with dependent types (or the theorem-proving variant of Ada, SPARK) you could get a compile time error, including from functions that return statically unknown values that still have the wrong type, like getchar() as mentioned elsewhere. In such languages the thing you *should* be doing, testing an int's range before using it as an index for an int[50] array, is a compile-time error to not do. You're not forced to check it at every index, but you have to check it at some point. This kind of precision with types isn't so pleasant in D but the class of error is the same and it's something a reviewer could spot when initially checking this code in.
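For comparison, a rough D approximation of the same idea, using a hand-rolled bounded index type whose range is checked at run time rather than proven at compile time (all names here are illustrative, not part of any proposal):

```d
struct Index50
{
    private size_t value;

    this(size_t v) @safe
    {
        assert(v < 50, "index out of range");  // runtime range check
        value = v;
    }

    size_t get() const @safe { return value; }
}

Index50 favoriteNumber() @safe { return Index50(42); }

int favoriteElement(ref int[50] array) @trusted
{
    // Still @trusted, but the manually reviewed assumption is now
    // "an Index50 always holds a value below 50", not "favoriteNumber
    // happens to return 42".
    return array.ptr[favoriteNumber().get()];
}
```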
Jul 25 2021
On Monday, 26 July 2021 at 00:08:51 UTC, jfondren wrote:On Sunday, 25 July 2021 at 17:47:40 UTC, Paul Backus wrote:[...]```d module example; size_t favoriteNumber() safe { return 42; } int favoriteElement(ref int[50] array) trusted { // This is memory safe because we know favoriteNumber returns 42 system { return array.ptr[favoriteNumber()]; } } ```favoriteElement(), all on its own, has an unchecked type error: array is indexed by the int return value of favoriteNumber(), but int has a range outside of the 0..49 type of array's indices. In Ada, array's index type would be specified in the code and you'd have to either perform a checked type conversion to use favoriteNumber() there, or you'd have to change favoriteNumber() to return the index type rather than an int.This kind of precision with types isn't so pleasant in D but the class of error is the same and it's something a reviewer could spot when initially checking this code in.What if favoriteNumber originally returns a ubyte, and favoriteElement takes an int[256]? ```d ubyte favoriteNumber() safe { return 42; } int favoriteElement(ref int[256] array) trusted { return array.ptr[favoriteNumber()]; } ``` To your reviewer, there's nothing wrong with favoriteElement, right? But later someone might change the return type of favoriteNumber to size_t and let it return 300. Badaboom: undefined behavior after touching safe code. As far as I can tell, there's no way to truly make it impossible. Maybe disallowing calls to safe functions from trusted and system code would do the trick, but that's impractical.
Jul 25 2021
On Monday, 26 July 2021 at 00:37:36 UTC, ag0aep6g wrote:What if favoriteNumber originally returns a ubyte, and favoriteElement takes an int[256]? ```d ubyte favoriteNumber() safe { return 42; } int favoriteElement(ref int[256] array) trusted { return array.ptr[favoriteNumber()]; } ``` To your reviewer, there's nothing wrong with favoriteElement, right? But later someone might change the return type of favoriteNumber to size_t and let it return 300. Badaboom: undefined behavior after touching safe code.That's a much more obviously program-affecting change though, you're changing a function signature. It wouldn't make as compelling an example of someone being surprised that they have to review more than just a safe function when that only that function is changed. If you do name the index type then you can do something like this Nim translation of the Ada: ```nim type Array = array[50, int] Index = range[0..49] var myarray: Array myarray[42] = 5 func favoriteNumber: Index = 42 func favoriteElement(arg: Array): int = let i: Index = favoriteNumber() return arg[i] echo favoriteElement(myarray) ``` (But Nim disappoints here: if you change favoriteNumber to return an int, and then change the number to 142, then Nim doesn't complain at all about this code that assigns an int to a Index variable.)
Jul 25 2021
On Monday, 26 July 2021 at 00:50:17 UTC, jfondren wrote:That's a much more obviously program-affecting change though, you're changing a function signature. It wouldn't make as compelling an example of someone being surprised that they have to review more than just a safe function when that only that function is changed.The point stands: Changes to safe code can compromise memory safety. Bruce claimed we would get "a properly segregated code base", and that safe code would be entirely "machine checkable". But reviewers still have to be on the lookout for safety issues, even when no trusted or system code is touched.
Jul 25 2021
On Monday, 26 July 2021 at 00:50:17 UTC, jfondren wrote:
(But Nim disappoints here: if you change favoriteNumber to return an int, and then change the number to 142, then Nim doesn't complain at all about this code that assigns an int to a Index variable.)
Well, it still checks the indexing at runtime, just like Ada. Catching it at compile time is possible, but then you would complain about the annotation effort and implementation complexity, so it would never stop "disappointing"...
Jul 25 2021
On Sunday, 25 July 2021 at 17:47:40 UTC, Paul Backus wrote:

```d
@system {
    return array.ptr[favoriteNumber()];
}
```

I make the following claims:
1. This code is memory-safe.
No, it's not. You use something that is not a literal, so it may change. Even for a constant this should be checked for each new compile. I would do that even if it actually were a literal, but here we have a function call! The definition of the called function may be far away. How would this ever pass a review?

The only way to make this memory safe is to actually test it:

```d
@system {
    assert(array.length > favoriteNumber());
    return array.ptr[favoriteNumber()];
}
```
Jul 26 2021
On Monday, 26 July 2021 at 09:08:05 UTC, Dominikus Dittes Scherkl wrote:On Sunday, 25 July 2021 at 17:47:40 UTC, Paul Backus wrote:I agree that *future versions* of the code may not be memory-safe if `favoriteNumber` is changed, but that does not mean the *current version* is unsafe. If you believe that the current version is unsafe, you should be able to demonstrate this unsafety by writing a ` safe` program that uses the current version to cause undefined behavior.```d system { return array.ptr[favoriteNumber()]; } ``` I make the following claims: 1. This code is memory-safe.No, it's not. You use something that is not a literal, so it may change.
Jul 26 2021
On Sunday, 25 July 2021 at 17:47:40 UTC, Paul Backus wrote:On Sunday, 25 July 2021 at 16:29:38 UTC, Bruce Carneal wrote:And I have [already demonstrated](https://forum.dlang.org/post/sag7fp$18oj$1 digitalmars.com), `favoriteElement` is invalid. You need to have a definition of what `favoriteNumber` does, and this needs to be reconciled with the definition of `favoriteElement`. In this case, `favoriteElement` is invalid, because `favoriteNumber` has no formal specification. When reviewing `favoriteElement`, one must look at it like a black box: ```d /// Returns: a valid size_t size_t favoriteNumber() safe; int favoriteElement(ref int[50] array) trusted { /* code to review */ } ``` However, with a specification of `favoriteNumber`, `favoriteElement` can be reviewed as correct: ```d /// Returns: a size_t between 0 and 49, inclusive size_t favoriteNumber() safe; ... ``` Now, when reviewing `favoriteElement`, I can prove that it's a valid ` trusted` function given the formal definition of `favoriteNumber`. If reviewing `favoriteNumber`, I can reconcile the implementation changes against the spec for it, and know that as long as I follow the spec, I will not mess up any other code. If the spec changes, now you have to re-review anything that uses it (including other ` safe` code) for correctness. As I said before, code reviews of ` safe` functions are still required for correctness, just not for memory safety. -SteveMachine advantage comes in other forms. Firstly, we now have a properly segregated code base. safe always means 'machine checkable'. Zero procedural trusted code review errors in that now easily identified class.I have already demonstrated that this is false. Here's my example from the previous thread on this topic, rewritten slightly to use the new-style ` trusted`/` system` syntax from your proposal: ```d module example; size_t favoriteNumber() safe { return 42; } int favoriteElement(ref int[50] array) trusted { // This is memory safe because we know favoriteNumber returns 42 system { return array.ptr[favoriteNumber()]; } } ```
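One way to make that kind of specification machine-visible is D's contract syntax; a minimal sketch (contracts are checked at run time in non-release builds only, so this documents the spec rather than proving it):

```d
/// Returns: a size_t between 0 and 49, inclusive
size_t favoriteNumber() @safe
out (r; r < 50)   // the spec, stated where the compiler can see it
{
    return 42;
}
```

With that in place, a change that makes `favoriteNumber` return 52 would at least fail the contract at run time, instead of silently invalidating the assumption the `@trusted` reviewer signed off on.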
Jul 26 2021
On Monday, 26 July 2021 at 11:02:48 UTC, Steven Schveighoffer wrote:However, with a specification of `favoriteNumber`, `favoriteElement` can be reviewed as correct: ```d /// Returns: a size_t between 0 and 49, inclusive size_t favoriteNumber() safe; ... ```If your theory of memory safety leads you to conclude that the presence or absence of a comment can make otherwise-unsafe code memory safe, you have taken a wrong turn somewhere in your reasoning. I agree with you that the version with the comment is better, more maintainable code, and that we should hold our code to such standards in code review. But bad and hard-to-maintain code can still be memory safe (that is: free from possible UB).
Jul 26 2021
On Monday, 26 July 2021 at 13:54:33 UTC, Paul Backus wrote:On Monday, 26 July 2021 at 11:02:48 UTC, Steven Schveighoffer wrote:It's not a comment, it's a specification. Whereby I can conclude that if in review `favoriteNumber` returns something other than 0 to 49, it is in error. Without specifications, exactly zero trusted lines of code can be written.However, with a specification of `favoriteNumber`, `favoriteElement` can be reviewed as correct: ```d /// Returns: a size_t between 0 and 49, inclusive size_t favoriteNumber() safe; ... ```If your theory of memory safety leads you to conclude that the presence or absence of a comment can make otherwise-unsafe code memory safe, you have taken a wrong turn somewhere in your reasoning.I agree with you that the version with the comment is better, more maintainable code, and that we should hold our code to such standards in code review. But bad and hard-to-maintain code can still be memory safe (that is: free from possible UB).Consider that the posix function `read` has the specification that it will read data from a file descriptor, put the data into a passed-in buffer, *up to* the amount of bytes indicated in the third parameter. Its prototype is: ```d system extern(C) int read(int fd, void *ptr, size_t nBytes); ``` Without reading the code of `read`, you must conclude from the specification that it does what it says it should do, and not say, ignore `nBytes` and just use the pointed-at data for as many bytes as it wants. Without the specification, just relying on the types, you can conclude nothing, and can never use `read` from ` trusted` code, ever. It's no different from `favoriteNumber`. In order to use it and make the assumption that it always will return 42, the author of that function must agree that that is what it's going to do. Otherwise (as you rightly say), anyone can change it to return something else *legitimately* and that might be beyond the bounds of the array you are using it for an index. So without the specification for `favoriteNumber`, I must conclude that `favoriteElement` is invalid as written. With a specification I can reason about what does and does not constitute validity. -Steve
Jul 26 2021
On Monday, 26 July 2021 at 15:54:06 UTC, Steven Schveighoffer wrote:Consider that the posix function `read` has the specification that it will read data from a file descriptor, put the data into a passed-in buffer, *up to* the amount of bytes indicated in the third parameter. Its prototype is: ```d system extern(C) int read(int fd, void *ptr, size_t nBytes); ``` Without reading the code of `read`, you must conclude from the specification that it does what it says it should do, and not say, ignore `nBytes` and just use the pointed-at data for as many bytes as it wants. [...] It's no different from `favoriteNumber`.The difference between POSIX `read` and `favoriteNumber` is that you *can* read the source code of `favoriteNumber`. It's literally right there, in the same module. That's the entire reason why you can be certain it returns `42`. If `favoriteNumber` and `favoriteElement` were in different modules, your argument would be correct, because `favoriteElement` could no longer be certain about which version of `favoriteNumber` it was calling.
Jul 26 2021
On Monday, 26 July 2021 at 18:59:45 UTC, Paul Backus wrote:The difference between POSIX `read` and `favoriteNumber` is that you *can* read the source code of `favoriteNumber`. It's literally right there, in the same module. That's the entire reason why you can be certain it returns `42`. If `favoriteNumber` and `favoriteElement` were in different modules, your argument would be correct, because `favoriteElement` could no longer be certain about which version of `favoriteNumber` it was calling.If you consider the source to be the spec, then that contradicts your earlier suggestion that `favoriteNumber` can be changed -- its source is the spec, so changing the source to return something other than 42 will violate the spec. If you consider the source to be the spec *and* you think changing the spec at will is OK, then we have different philosophies on what a code review and "good software" means. -Steve
Jul 28 2021
On Wednesday, 28 July 2021 at 11:12:14 UTC, Steven Schveighoffer wrote:On Monday, 26 July 2021 at 18:59:45 UTC, Paul Backus wrote:Again, I am in 100% in agreement with you that `favoriteElement` is not "good software" and should not pass code review. I have never claimed otherwise. (In fact, I spent the rest of that post talking about how we can reject functions like `favoriteElement` in code review!) That does not change the fact that in the *current version* of my example module, `favoriteElement` is memory safe--meaning, it cannot possibly cause undefined behavior when called with [safe values][1]. It is possible for bad software to be memory safe. It seems to me like we have been talking past each other for most of this discussion, with differences in background assumptions obscured by words like "correct", "valid", "safe", etc. In the future, I will try to be much more careful about defining my terms. [1]: https://dlang.org/spec/function.html#safe-valuesThe difference between POSIX `read` and `favoriteNumber` is that you *can* read the source code of `favoriteNumber`. It's literally right there, in the same module. That's the entire reason why you can be certain it returns `42`. If `favoriteNumber` and `favoriteElement` were in different modules, your argument would be correct, because `favoriteElement` could no longer be certain about which version of `favoriteNumber` it was calling.If you consider the source to be the spec, then that contradicts your earlier suggestion that `favoriteNumber` can be changed -- its source is the spec, so changing the source to return something other than 42 will violate the spec. If you consider the source to be the spec *and* you think changing the spec at will is OK, then we have different philosophies on what a code review and "good software" means.
Jul 28 2021
On Sunday, 25 July 2021 at 13:19:55 UTC, Paul Backus wrote:On Sunday, 25 July 2021 at 05:05:44 UTC, Bruce Carneal wrote:4 kinds.At beerconf I committed to putting forward a DIP regarding a new syntactic element to be made available within trusted functions, the system block. The presence of one or more system blocks would enable safe checking elsewhere in the enclosing trusted function.Both before and after this proposal, there are 3 kinds of code:1. Code that is automatically checked for memory safety.Split this into: 1. Code that is automatically checked for memory safety. 1a. Code that has portions that have mechanical checks, but overall still needs manual checking for memory safety.2. Code that is assumed by the compiler to be safe, and must be manually checked for memory safety. 3. Code that is not checked for memory safety, and is assumed to be unsafe. Currently, (1) is marked ` safe`, (2) is marked ` trusted`, and (3) is marked ` system`.(1) and (1a) are marked ` safe`. The latter have ` trusted` lambdas but require manual verification. The difference between 1 and 1a is pretty subtle.Under this proposal, (1) would be marked either ` safe` or ` trusted`, (2) would be marked either ` trusted` or ` system`, and (3) would be marked ` system`. I do not think this is an improvement relative to the status quo.Under this proposal (though I haven't seen exactly the proposal, but I think I've conversed with Bruce enough to have a good understanding), 1 becomes ` safe` *exclusively* (which is the main benefit), 1a becomes ` trusted`, 2 becomes legacy (marked ` trusted` but with no ` system` escapes) and IMO should be warned about to the user, 3 is still ` system`.This doesn't solve the problem exactly. A ` trusted` lambda is a localized piece of code, and is prone to be abused with the justification "well, it's just for this one time, and I know what this function does". The compiler cannot enforce that the code inside (current) ` trusted` functions actually obeys the ` safe` API.The problematic trusted lambda escapes creeping in to " safe" code could be replaced going forward by a more honestly named form, trusted code with system blocks. Best practices could evolve to the point that safe actually meant safe again.What makes ` trusted` lambdas problematic is that they implicitly depend on everything in their enclosing scope, which makes it easy for a change in the ` safe` portion of the code to accidentally violate an assumption that the ` trusted` portion depends on. If we want to address this issue at the language level, the most direct solution is to require that ` trusted` lambdas make their dependencies explicit. This can be done by requiring all ` trusted` nested functions (of which lambdas are a special case) to be `static`.Of course, this is a breaking change, so it would require a deprecation process.No, it shouldn't be. The idea is that ` trusted` code without ` system` escapes is still accepted as legacy code that works as before. I would recommend a message from the compiler though. ` trusted` lambdas should be required to be static, though that still doesn't solve the abuse problem. Perhaps they should just be disallowed? -Steve
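To make the difference between kind 1 and kind 1a concrete, a small illustrative sketch (function names invented here):

```d
// Kind 1: @safe with mechanical checks only; no manual review needed
// for memory safety.
int sumSquares(const(int)[] a) @safe
{
    int s;
    foreach (x; a) s += x * x;
    return s;
}

// Kind 1a: also marked @safe today, but the embedded @trusted lambda
// means the function as a whole still needs a human reviewer.
int firstOrZero(const(int)[] a) @safe
{
    if (a.length == 0) return 0;
    return () @trusted { return a.ptr[0]; }();
}
```

Both compile as `@safe` today; only the second is the kind of function the proposal would push toward an explicitly `@trusted` signature with a `@system` block.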
Jul 25 2021
On Sunday, 25 July 2021 at 05:05:44 UTC, Bruce Carneal wrote:At beerconf I committed to putting forward a DIP regarding a new syntactic element to be made available within trusted functions, the system block. The presence of one or more system blocks would enable safe checking elsewhere in the enclosing trusted function.I'm very happy to hear that. I think this proposal is an important and useful one, and I have been thinking of volunteering to write a DIP myself on the topic. I'll be dropping into tonight's BeerConf in a little while, so hopefully we can touch base then and see if we can find an opportunity for collaboration (maybe two heads and two people's spare time can overcome that burden of effort more readily than one).
Jul 25 2021
On 7/25/21 10:43 AM, Joseph Rushton Wakeling wrote:
On Sunday, 25 July 2021 at 05:05:44 UTC, Bruce Carneal wrote:
At beerconf I committed to putting forward a DIP regarding a new syntactic element to be made available within @trusted functions, the @system block. The presence of one or more @system blocks would enable @safe checking elsewhere in the enclosing @trusted function.
I'm very happy to hear that. I think this proposal is an important and useful one, and I have been thinking of volunteering to write a DIP myself on the topic.
In a recent discussion, I learned from you an idea of Steven Schveighoffer. Add to that the relayed opinion of an academic to the effect of "it is not safe if you can escape out of it." And add to that our failed attempt at "@safe by default". The following thought formed in my mind. This thought may be exactly what Steven Schveighoffer or Bruce Carneal are bringing up anyway. If so, sorry for only now understanding it. :)

The problems:

1) @system by default provides no checking by default
2) @trusted is not checked either
3) @safe is not safe because you can escape easily

How about:

1) Make @trusted the default
2) Change @trusted's semantics to be safe (i.e. make it the same as today's @safe)
3) Allow @system inside @trusted
4) Strengthen @safe: no @trusted or @system allowed

Result:

1) We have safe by default because now @trusted is the default and @trusted is checked by default
2) @safe is actually safe; no embarrassment

Existing code:

1) Replace @safe keywords with @trusted where the compiler complains. (No safety lost because @trusted will be behaving exactly like today's @safe.)
2) Add @system where the compiler complains. This is because all code is @trusted by default and @trusted is safe.

Ali
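A purely illustrative sketch of how code might look under that hypothetical scheme (this is not valid current D; the attribute meanings are the ones proposed in the post above):

```d
// No attribute: would default to @trusted, i.e. checked like today's
// @safe, but with an escape hatch available on demand.
int firstElement(int[] a)
{
    @system {                 // unchecked only inside this block
        return a.ptr[0];
    }
}

// @safe: would become the strongest guarantee -- fully checked, with
// no @trusted or @system escapes allowed anywhere inside.
int sum(const(int)[] a) @safe
{
    int s;
    foreach (x; a) s += x;
    return s;
}
```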
Jul 25 2021
On Sunday, 25 July 2021 at 21:42:16 UTC, Ali Çehreli wrote:[snip] Existing code: 1) Replace safe keywords with trusted where the compiler complains. (No safety lost because trusted will be behaving exactly like today's safe.) 2) Add system where the compiler complains. This is because all code is trusted by default and trusted is safe. AliI'd much rather add new attributes for new behavior than break existing code. If you want safe to not be able to call trusted functions, then introduce a new attribute with that behavior. I think too much is trying to get shoehorned into the safe/ trusted/ system dichotomy. I have favored the whitelist/blacklist approach that had been discussed on the forums previously (ignoring bikeshedding on names). whitelist would only allow operations known to be memory safe (so no trusted escape hatch). blacklist would be the opposite of what is currently safe, in that things that are not allowed in safe would have to be in a blacklist function (or block). This would contrast with system, which basically allows everything in them. Unmarked system functions would be assumed to be blacklist, unless they can be inferred to be whitelist. Whether whitelist is orthogonal to blacklist (such that everything that is not whitelist must be blacklist), I am not 100% sure on, but I would lean to putting things in blacklist only if they are not allowed in safe currently, which may allow for a middle ground. Regardless, a whitelist function can only call whitelist functions. A blacklist function can call any kind of function it wants. safe functions would be able to call whitelist functions, but nothing would stop them from calling trusted blacklist functions. trusted or system functions could call either whitelist or blacklist functions (the use of these might make it easier to narrow down where safety issues are). However, if either safe/ trusted/ system functions call blacklist functions (or include blocks) then they become blacklist (either explicitly or inferred). So this would distinguish an safe function that is whitelist from one that is blacklist. An safe function that is whitelist is verified by the compiler not to only use operations that the compiler verifies are memory safe, while one that is blacklist will have at some point called trusted functions that use blacklist behavior and needs to be manually verified. This approach doesn't break any code and allows for a lot of the flexibility that I think people want with the safe system.
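To make the classification concrete, a purely illustrative sketch using the post's placeholder names (these attributes do not exist in D today; the behavior shown is the one described above, ignoring bikeshedding on names):

```d
// whitelist: only operations the compiler can verify as memory safe;
// no @trusted escape hatch anywhere in the call graph.
int sum(const(int)[] a) @whitelist
{
    int s;
    foreach (x; a) s += x;
    return s;
}

// @safe today; under the suggestion it would be inferred blacklist,
// because it reaches unchecked operations through a @trusted call,
// and so would remain a candidate for manual review.
int firstOrZero(const(int)[] a) @safe
{
    if (a.length == 0) return 0;
    return () @trusted { return a.ptr[0]; }();
}
```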
Jul 27 2021
On Tuesday, 27 July 2021 at 16:36:48 UTC, jmh530 wrote:
> On Sunday, 25 July 2021 at 21:42:16 UTC, Ali Çehreli wrote:
>> [snip]
>>
>> Existing code:
>>
>> 1) Replace @safe keywords with @trusted where the compiler complains.
>> (No safety is lost because @trusted will behave exactly like today's
>> @safe.)
>> 2) Add @system where the compiler complains. This is because all code
>> is @trusted by default and @trusted is safe.
>>
>> Ali
>
> I'd much rather add new attributes for new behavior than break
> existing code. If you want @safe to not be able to call @trusted
> functions, then introduce a new attribute with that behavior. I think
> too much is being shoehorned into the @safe/@trusted/@system
> trichotomy.

Presently D is shoehorning two different usages into @safe/@trusted/@system:

1. the documented usage: @safe for checked safe functions, @system for unrestricted functions, and @trusted for a manually-reviewed interface between the two. (complaint: @trusted functions check too little)

2. a usage people have moved to, where the manually-reviewed interfaces are now @trusted function literals embedded inside @safe functions. (complaint: "@trusted blocks" are too hard to review, and their containing @safe functions are misleadingly "safe") Both usages are sketched in the example below.

So that's already two ways to write SafeD code, and both are found wanting. With @whitelist/@blacklist there'd be a third way. At that point, why not a fourth? I'm sure some people would like a single @unsafe attribute, and maybe it could be justified by a late discovery of a serious complaint with @whitelist/@blacklist. And then we could have a function signature containing `@trusted @whitelist @unsafe`, and then we could have a "The Nightmare of Memory Safety in D" video at cppcon.

Opposite the Scylla of "breaking code" isn't safe open water, but the Charybdis of "having to explain SafeD to a skeptical newbie." Why five attributes? Because three wasn't enough.

The immediate result of this DIP would also be three SafeD styles:

3. the new way of @safe functions without compromise, @trusted functions containing @system blocks, and @system functions

The farther-off result is just one SafeD again, this time also with the documentation matching what experts actually do.
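A minimal sketch of the two current styles, in today's D. The function names and bodies are invented purely for illustration.

```d
import core.stdc.string : memcpy;

// Usage 1, as documented: a standalone @trusted function. The whole body is
// unchecked and must be reviewed by hand, even the parts that need nothing unsafe.
void fillAll(ubyte[] dst, ubyte value) @trusted
{
    foreach (i; 0 .. dst.length)
        dst[i] = value;          // nothing unsafe here, yet it goes unchecked anyway
}

// Usage 2, the idiom people have moved to: an @safe function that confines the
// manually-reviewed part to an @trusted function literal (an "@trusted block").
void copyInto(ubyte[] dst, const(ubyte)[] src) @safe
{
    assert(dst.length >= src.length);
    () @trusted { memcpy(dst.ptr, src.ptr, src.length); }();
}
```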
Jul 27 2021
On Tuesday, 27 July 2021 at 21:05:52 UTC, jfondren wrote:
> [snip]
>
> Presently D is shoehorning two different usages into
> @safe/@trusted/@system:
>
> 1. the documented usage: @safe for checked safe functions, @system for
> unrestricted functions, and @trusted for a manually-reviewed interface
> between the two. (complaint: @trusted functions check too little)
>
> 2. a usage people have moved to, where the manually-reviewed
> interfaces are now @trusted function literals embedded inside @safe
> functions. (complaint: "@trusted blocks" are too hard to review, and
> their containing @safe functions are misleadingly "safe")

I don't use @trusted function literals, and I don't have a sense of how popular this usage is. In the context of my suggestion, a @trusted literal that is @blacklist would make the enclosing @safe function @blacklist, which would make it a candidate for review.

> So that's already two ways to write SafeD code, and both are found
> wanting. With @whitelist/@blacklist there'd be a third way. At that
> point, why not a fourth? I'm sure some people would like a single
> @unsafe attribute, and maybe it could be justified by a late discovery
> of a serious complaint with @whitelist/@blacklist. And then we could
> have a function signature containing `@trusted @whitelist @unsafe`,
> and then we could have a "The Nightmare of Memory Safety in D" video
> at cppcon.
> [snip]

The underlying problem is how to fix the design issues with @safe/@trusted/@system while minimizing code breakage. Only the bare minimum needed to address those issues makes sense. Adding an @unsafe that overlaps with existing or proposed functionality wouldn't.
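For concreteness, a sketch of that inference under the suggestion. This is hypothetical: @blacklist is not a real D attribute, and the function name is made up.

```d
// Hypothetical sketch: @blacklist does not exist in today's D.
int firstOrZero(int[] a) @safe   // would be inferred @blacklist under the suggestion,
{                                // flagging it as a candidate for manual review
    if (a.length == 0) return 0;
    // the literal dereferences a pointer, so it would be @blacklist,
    // and that taints the enclosing @safe function
    return () @trusted { return *a.ptr; }();
}
```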
Jul 28 2021