
digitalmars.D - Greenwashing

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
 From https://www.investopedia.com/terms/g/greenwashing.asp, the top 
Google search result:

Greenwashing is the process of conveying a false impression or providing 
misleading information about how a company's products are more 
environmentally sound. Greenwashing is considered an unsubstantiated 
claim to deceive consumers into believing that a company's products are 
environmentally friendly.

Paraphrasing for our context:

Greenwashing is the process of conveying a false impression or providing 
misleading information about how a codebase is more memory-safe. 
Greenwashing is considered an unsubstantiated claim to deceive readers 
into believing that a codebase is memory-safe.

If this is greenwashing, then DIP 1028 is doing it.
May 27 2020
next sibling parent Max Haughton <maxhaton gmail.com> writes:
On Wednesday, 27 May 2020 at 11:37:17 UTC, Andrei Alexandrescu 
wrote:
 From https://www.investopedia.com/terms/g/greenwashing.asp, the 
 top Google search result:

 Greenwashing is the process of conveying a false impression or 
 providing misleading information about how a company's products 
 are more environmentally sound. Greenwashing is considered an 
 unsubstantiated claim to deceive consumers into believing that 
 a company's products are environmentally friendly.

 Paraphrasing for our context:

 Greenwashing is the process of conveying a false impression or 
 providing misleading information about how a codebase is more 
 memory-safe. Greenwashing is considered an unsubstantiated 
 claim to deceive readers into believing that a codebase is 
 memory-safe.

 If this is greenwashing, then DIP 1028 is doing it.
I think you're right - unless D has a thorough, rigorously specified static analysis, this will be the case (at zero cost, at least). The current efforts to add that seem to be going in the right direction, but I suspect they'll require a breaking semantic/syntax change at some point (e.g. the way Rust handles references seems much more decidable). FWIW: there needs to be a thorough design first; the current approach to safety seems to spread through dmd like an octopus's tentacles. However, talk is cheap, so I'm not complaining about what we have so far.
May 27 2020
prev sibling next sibling parent reply welkam <wwwelkam gmail.com> writes:
On Wednesday, 27 May 2020 at 11:37:17 UTC, Andrei Alexandrescu 
wrote:
 Paraphrasing for our context:

 Greenwashing <...>
I propose safewashing
May 27 2020
next sibling parent David Gileadi <gileadisNOSPM gmail.com> writes:
On 5/27/20 5:25 AM, welkam wrote:
 On Wednesday, 27 May 2020 at 11:37:17 UTC, Andrei Alexandrescu wrote:
 Paraphrasing for our context:

 Greenwashing <...>
I propose safewashing
The bikeshed should clearly be green! ;)
May 27 2020
prev sibling parent user1234 <user1234 12.de> writes:
On Wednesday, 27 May 2020 at 12:25:48 UTC, welkam wrote:
 On Wednesday, 27 May 2020 at 11:37:17 UTC, Andrei Alexandrescu 
 wrote:
 Paraphrasing for our context:

 Greenwashing <...>
I propose safewashing
There's also what I call "digital washing": the process whereby a tech company advertises how respectful it is toward privacy and consumer data, or how friendly it is toward open source software. Digital washing also includes greenwashing, e.g. when the company advertises that its datacenters use solar energy.
May 27 2020
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, May 27, 2020 5:37:17 AM MDT Andrei Alexandrescu via Digitalmars-
d wrote:
  From https://www.investopedia.com/terms/g/greenwashing.asp, the top
 Google search result:

 Greenwashing is the process of conveying a false impression or providing
 misleading information about how a company's products are more
 environmentally sound. Greenwashing is considered an unsubstantiated
 claim to deceive consumers into believing that a company's products are
 environmentally friendly.

 Paraphrasing for our context:

 Greenwashing is the process of conveying a false impression or providing
 misleading information about how a codebase is more memory-safe.
 Greenwashing is considered an unsubstantiated claim to deceive readers
 into believing that a codebase is memory-safe.

 If this is greenwashing, then DIP 1028 is doing it.
Indeed. A number of us have argued with Walter that DIP 1028 turns safe into a lie and that the compiler should _never_ treat anything as safe unless it can mechanically verify it or the programmer has explicitly marked it with trusted. But for some reason, he thinks that a "special rule" exempting non-extern(D) function declarations from being treated as safe by default (unlike the functions whose bodies the compiler can actually mechanically verify) adds too much complexity to the language. Making safe the default isn't necessarily a problem, but safe shouldn't have a huge hole blown in it in the process. Based on some of Walter's comments, it also sounds like he intends to make nothrow the default in another DIP, which is also a terrible idea. I'm increasingly worried about the future of D with some of where these DIPs are going. - Jonathan M Davis
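A small sketch of the objection above (illustrative only; abs is a real C library function, but any non-extern(D) declaration behaves the same way):

```d
// The D compiler never sees the body of an extern(C) declaration, so
// nothing about it can be mechanically verified. Yet under DIP 1028's
// @safe-by-default rule, such a declaration would be treated as safe.
extern(C) int abs(int x);   // C library function; body invisible to D

int distance(int a, int b)  // would be safe by default under DIP 1028
{
    return abs(a - b);      // accepted as "safe" with no checking performed
}
```

Today, without the DIP, both the declaration and the caller simply default to system, so nothing is claimed that wasn't verified.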
May 27 2020
next sibling parent <a a.com> writes:
On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis wrote:

I'm increasingly worried about the future
 of D with some of where these DIPs are going.

 - Jonathan M Davis
I foresee three things that could happen:

1. People get annoyed and stop using D.
2. Tools and editor features (think dfmt) are added that slap system everywhere unless safe or trusted is specified (the most probable, in my opinion).
3. DMD gets forked and/or third-party compilers keep system as the default.

No matter what ends up happening, I think there is reason to worry...
May 27 2020
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/27/20 2:50 PM, Jonathan M Davis wrote:

 Based on some of Walter's comments, it also sounds like he intends to make
 nothrow the default in another DIP, which is also a terrible idea. I'm
 increasingly worried about the future of D with some of where these DIPs are
 going.
In general, I think safe, nothrow, nogc, and pure by default have some benefits for code that is actually compiled by a D compiler, simply because most of the time the code you write falls into these categories but isn't marked as such.

Imagine that instead of any of this, you simply rewrote every unannotated function as a no-arg template. This would change nothing (only one symbol is generated anyway, Stefan), but it would provide a much better experience for users who care about certain attributes.

Perhaps instead of making these things the default, we instead made *inference* the default. Then we could have something like noinfer (please, this is not a proposal, don't focus on the name) to declare that a function is one whose source code will not be available. Perhaps the focus on "X by default" is just looking at the wrong aspect.

-Steve
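For reference, the no-arg-template trick works today; a minimal sketch:

```d
// D already performs attribute inference for templates. Adding an empty
// template parameter list "()" to an ordinary function makes the compiler
// infer @safe, pure, nothrow, and @nogc from the body, instead of the
// function defaulting to @system.
int square()(int x) { return x * x; }   // no-arg template: attributes inferred

@safe pure nothrow @nogc unittest
{
    assert(square(4) == 16);   // callable even from maximally attributed code
}
```

The same function without the "()" would be rejected by that unittest, because an unannotated non-template function is @system and impure by default.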
May 27 2020
next sibling parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Wednesday, 27 May 2020 at 21:01:51 UTC, Steven Schveighoffer 
wrote:
 Perhaps instead of making these things the default, we instead 
 made *inference* the default.
That's a brilliant idea; however, the issue of extern still remains, with the additional complications of name mangling due to attributes and of separate compilation/.di files. Separate compilation may be a lot slower if inference must be done for function bodies. For anyone who experiences significant slowdowns as a result, I see no harm in a switch to disable it.
May 27 2020
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 28 May 2020 at 00:51:52 UTC, Nicholas Wilson wrote:
 Thats a brilliant idea, however the issue of extern still 
 remains with the additional complication of mangling due to 
 attributes and also separate compilation/.di files.
extern ones (including probably abstract/interface methods) require an annotation; otherwise it is inferred. Then you can explicitly put an attribute on if you want a guarantee of it (as a user) or to provide a guarantee of it (as a lib designer / interface publisher). It really is a case where everyone wins... and this pattern can be extended to other functions as well. And we have precedent with templates.
 Separate compilation may be a lot slower if inference must be 
 done for function bodies.
Extremely unlikely - it already does those checks to confirm the annotation anyway!
May 27 2020
next sibling parent Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Thursday, 28 May 2020 at 00:59:30 UTC, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 00:51:52 UTC, Nicholas Wilson wrote:
 Thats a brilliant idea, however the issue of extern still 
 remains with the additional complication of mangling due to 
 attributes and also separate compilation/.di files.
extern ones (including probably abstract/interface methods) require an annotation, otherwise it is inferred.
I meant that they still require a default of _something_, and given recent times, that contention is unlikely to go away. I hadn't thought about inheritance at all; that's tricky.
 Extremely unlikely - it already does those checks to confirm 
 the annotation anyway!
Ah, I didn't know that! All the more reason to do it all at once (or at least a package at a time)!
May 27 2020
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/27/20 8:59 PM, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 00:51:52 UTC, Nicholas Wilson wrote:
 Separate compilation may be a lot slower if inference must be done for 
 function bodies.
Extremely unlikely - it already does those checks to confirm the annotation anyway!
This isn't something I had thought of. I would be surprised if the compiler semantically analyzes only-imported functions. In fact, I'm almost sure this doesn't happen (based on D's track record with unittests). But in practice, much of D is actually templates already, so I doubt this would have a tremendous impact. In any case, this COULD be mitigated via some sort of intermediate form, or .di files. -Steve
May 27 2020
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 28 May 2020 at 01:20:31 UTC, Steven Schveighoffer 
wrote:
 This isn't something I thought of. I am surprised if the 
 compiler semantically analyzes only-imported functions. In 
 fact, I'm almost sure this doesn't happen (based on D's track 
 record with unittests).
aaaargh, I'm sorry, you're right. I forgot I habitually use -i now which brings it in for full analysis. It only does this for auto functions and templates in imported modules right now.
May 27 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/27/20 9:32 PM, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 01:20:31 UTC, Steven Schveighoffer wrote:
 This isn't something I thought of. I am surprised if the compiler 
 semantically analyzes only-imported functions. In fact, I'm almost 
 sure this doesn't happen (based on D's track record with unittests).
aaaargh, I'm sorry, you're right. I forgot I habitually use -i now which brings it in for full analysis. It only does this for auto functions and templates in imported modules right now.
Even so, things like DUB could automatically build .di files if this was an issue. -Steve
May 27 2020
prev sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 28/05/2020 12:51 PM, Nicholas Wilson wrote:
 
 Thats a brilliant idea, however the issue of extern still remains with 
 the additional complication of mangling due to attributes and also 
 separate compilation/.di files.
.di files are not an issue. When the compiler writes them out, it can just include the attributes with them... Under such a scheme, a later compilation may not even know an attribute was inferred.
May 28 2020
prev sibling parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Wednesday, 27 May 2020 at 21:01:51 UTC, Steven Schveighoffer 
wrote:
 On 5/27/20 2:50 PM, Jonathan M Davis wrote:

 [...]
In general, I think safe, nothrow, nogc, and pure by default have some benefits on code that is actually compiled by a D compiler. Simply because most of the time, code you write is generally in these categories, but isn't marked as such. [...]
+1 ....
May 28 2020
prev sibling parent reply Meta <jared771 gmail.com> writes:
On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis wrote:
 Based on some of Walter's comments, it also sounds like he 
 intends to make nothrow the default in another DIP, which is 
 also a terrible idea. I'm increasingly worried about the future 
 of D with some of where these DIPs are going.

 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
May 27 2020
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, May 27, 2020 5:57:00 PM MDT Meta via Digitalmars-d wrote:
 On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis wrote:
 Based on some of Walter's comments, it also sounds like he
 intends to make nothrow the default in another DIP, which is
 also a terrible idea. I'm increasingly worried about the future
 of D with some of where these DIPs are going.

 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
If anything, I would say the opposite. In general, exceptions are by far the best way we have to deal with error conditions. They clean up code considerably and make it much harder for error conditions to be ignored or eaten. Alternatives such as error codes are terrible in comparison.

We do end up with far less code being able to be nothrow in D than we should because of how autodecoding works, but it's still the case that _way_ more than 97% of the code out there is going to potentially throw exceptions - especially in applications that have to deal with users and/or the network. Propagating error conditions without exceptions gets disgusting fast. There's no question that there is code that cannot afford the extra cost of exception handling and has to do something else, but that approach is much more error-prone and isn't the norm for programs in general. And of course, in some cases, exceptions just plain don't make sense (e.g. the fact that std.utf.validate throws instead of returning bool is terrible), so it's not like they're always the appropriate solution or always used correctly. However, in general, they most definitely should be the default for how your typical application deals with error conditions. Anything else leads to far more bugs. I would have thought that the experience of the programming community at large would make that pretty clear by now.

Exceptions are by no means perfect, but for the general case, they're the best way that we have to cleanly propagate error conditions and avoid having them be ignored or lost. Having nothrow be the default would encourage people not to use exceptions (leading to worse code in general) and would result in a _lot_ more attributes being needed on the code that does use exceptions.
Certainly, I know that for the libraries and applications that I've worked on for most of my career (be it in D or some other language), if nothrow were the default, those libraries and applications would have to be marking throw all over the place (which is in fact basically what happens in Java, though they have the extra annoyance of having to mark _which_ exception types can be thrown, which has proven to be a terrible idea and which we fortunately have avoided in D). I think that it's great that we have nothrow for those cases where it's appropriate, but I don't think that it makes sense at all to assume that that's the _normal_ situation.

In any case, when the topic has come up before, plenty of people have raised similar complaints to mine, so if/when Walter puts forth a DIP proposing nothrow as the default, as I think he's planning to do, I expect that he'll get quite a few dissenting opinions on it - likely far more than for making safe the default. Of course, since he won't listen when pretty much everyone else disagrees with him on what to do with extern(C) with safe being the default, who knows how well _that_ will go...

- Jonathan M Davis
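A small sketch of the contrast being drawn (the function names here are hypothetical, not from any library):

```d
import std.conv : ConvException, to;

// With exceptions, the failure propagates automatically and is handled
// once, wherever it makes sense up the call stack.
int parsePort(string s)
{
    return s.to!int;   // throws ConvException on bad input
}

// An error-code style forces every caller to thread the failure state
// through by hand, at every level.
bool tryParsePort(string s, out int port)
{
    try { port = s.to!int; return true; }
    catch (ConvException) { return false; }
}
```

Under nothrow-by-default, only the second style compiles unannotated; the first would need an explicit attribute at every level of the call chain.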
May 27 2020
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/27/20 8:31 PM, Jonathan M Davis wrote:
 On Wednesday, May 27, 2020 5:57:00 PM MDT Meta via Digitalmars-d wrote:
 On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis wrote:
 Based on some of Walter's comments, it also sounds like he
 intends to make nothrow the default in another DIP, which is
 also a terrible idea. I'm increasingly worried about the future
 of D with some of where these DIPs are going.

 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
If anything, I would say the opposite.
It actually doesn't matter what's more common (and I agree with Jonathan, there's actually a lot of throwing calls because of the calls that you make into other functions). What matters is that there are functions that are actually nothrow that aren't marked nothrow. Hence the desire that these functions should actually be marked nothrow implicitly so people who care about that can just use the functions without issue. -Steve
May 27 2020
parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 28 May 2020 at 01:23:31 UTC, Steven Schveighoffer 
wrote:
 On 5/27/20 8:31 PM, Jonathan M Davis wrote:
 On Wednesday, May 27, 2020 5:57:00 PM MDT Meta via 
 Digitalmars-d wrote:
 On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis 
 wrote:
 Based on some of Walter's comments, it also sounds like he
 intends to make nothrow the default in another DIP, which is
 also a terrible idea. I'm increasingly worried about the 
 future
 of D with some of where these DIPs are going.

 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
If anything, I would say the opposite.
It actually doesn't matter what's more common (and I agree with Jonathan, there's actually a lot of throwing calls because of the calls that you make into other functions). What matters is that there are functions that are actually nothrow that aren't marked nothrow. Hence the desire that these functions should actually be marked nothrow implicitly so people who care about that can just use the functions without issue. -Steve
What makes me go "mhmm" is that the motivation is always "because nothrow is fastest, so it should be the default"... While I'm OK with the "pay as you go" concept, I still think a sane default for writing good code is preferable: tuning the hot path is still the way to go if you care about speed. The same goes for switching from "virtual by default" to "final by default": the motivation should not be "because the code is faster", but because it's the best way to promote encapsulation of a class's public API (as W&A agreed years ago...). The same goes for safety: if you don't care, well, slap system everywhere, or trusted if you want; either way you are not caring, and you can code quickly that way. BUT if you care, you should have all the aid the compiler can give to achieve it, because writing safe code is a much harder task, so safe should apply only to compiler-checked code.
May 28 2020
next sibling parent reply welkam <wwwelkam gmail.com> writes:
On Thursday, 28 May 2020 at 07:36:05 UTC, Paolo Invernizzi wrote:

 tuning the hot path is still the way to go if you care for 
 speed.
You haven't done many optimizations, have you? The "few hot spots" situation happens when you have a simple program or when no attempts were made to optimize the code. If you profile, say, the DMD code, you will find that there are no hot spots. If you want the code to be fast, you need to care about all of it. Some parts need more attention than others, but you still need to care about all of it.
May 28 2020
parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 28 May 2020 at 12:21:31 UTC, welkam wrote:
 On Thursday, 28 May 2020 at 07:36:05 UTC, Paolo Invernizzi 
 wrote:

 tuning the hot path is still the way to go if you care for 
 speed.
You havent done many optimizations have you?
For sure I have ...
 The few hot spots happens when you have a simple program or 
 there were no attempts made to optimize the code.
Granted
 If you profile say DMD code you will find that there are no hot 
 spots. If you want the code to be fast you need to care about 
 all of it. Some parts need more attention than others but you 
 still need to care about it.
I'm in that boat, but I think this is not always the state of affairs; it really depends on the domain of the application. The DMD codebase is well known, has been written over a couple of decades, stands (for the backend, for example) on the shoulders of DMC... and was written by one of the most brilliant programmers in the world... Walter Bright: I would be surprised if there were a hot path left to squeeze in it. Anyway, I think you are right in the general case; you still need to care about the whole... And, as it happens, we work in the real-time domain, as most of our customers are medical companies, so I understand very well what you are referring to... :-P
May 28 2020
parent welkam <wwwelkam gmail.com> writes:
On Thursday, 28 May 2020 at 15:23:49 UTC, Paolo Invernizzi wrote:
 On Thursday, 28 May 2020 at 12:21:31 UTC, welkam wrote:
 On Thursday, 28 May 2020 at 07:36:05 UTC, Paolo Invernizzi 
 wrote:

 tuning the hot path is still the way to go if you care for 
 speed.
You havent done many optimizations have you?
For sure I have ...
Am I the only one who finds the common saying that 20% of the code is responsible for 80% of the runtime to be untrue? In my experience, either 5% is eating up 90% of the runtime (a stupid mistake) or the slowness is spread throughout the codebase. Have I not seen enough cases?
May 28 2020
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/28/20 3:36 AM, Paolo Invernizzi wrote:
 On Thursday, 28 May 2020 at 01:23:31 UTC, Steven Schveighoffer wrote:
 On 5/27/20 8:31 PM, Jonathan M Davis wrote:
 On Wednesday, May 27, 2020 5:57:00 PM MDT Meta via Digitalmars-d wrote:
 On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis wrote:
 Based on some of Walter's comments, it also sounds like he
 intends to make nothrow the default in another DIP, which is
 also a terrible idea. I'm increasingly worried about the future
 of D with some of where these DIPs are going.

 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
If anything, I would say the opposite.
It actually doesn't matter what's more common (and I agree with Jonathan, there's actually a lot of throwing calls because of the calls that you make into other functions). What matters is that there are functions that are actually nothrow that aren't marked nothrow. Hence the desire that these functions should actually be marked nothrow implicitly so people who care about that can just use the functions without issue.
What make me feel "mhmm" is that the motivation is always "because no throw is speediest, so should be the default" ...
That's not the motivation for the default. Throwing code can call nothrow code; nothrow code cannot call throwing code (without doing something about the exceptions). If code is actually nothrow (meaning it never throws any exceptions) but not marked as nothrow, then it's one attribute away from being more useful. If we make nothrow the default, then code that already is nothrow, but simply not marked, becomes more useful. The same goes for safe, nogc, and pure. This is about properly marking code, not about speed.

But I'm thinking we are approaching this wrong. We should simply make inference the default, and opt out by using an attribute or some other mechanism (pragma?). It would have the same effect but not break all code in existence.

-Steve
May 28 2020
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 29/05/2020 3:00 AM, Steven Schveighoffer wrote:
 But I'm thinking we are approaching this wrong. We should simply make 
 inference the default, and opt out by using an attribute or some other 
 mechanism (pragma?). It would have the same effect but not break all 
 code in existence.
I had the same realization a few days ago. The fact that you have to type safe and system at all is the real problem. Apart from function pointers and explicit overriding, you should not normally be writing them. .di files would be generated with the annotations regardless of whether they were supplied by the user or not. The trick to getting this to work well AND to surface false positives (i.e. a function that is system but appears safe, and vice versa) is to do the inferring as early as possible. This sounds crazy, but it would force people to consider whether something should be trusted or whether it is just something that needs fixing. The goal would be to make safe transitively poison the call stack so that problems must be fixed. But if most D code already is safe, what is there to worry about?
May 28 2020
prev sibling parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 28 May 2020 at 15:00:11 UTC, Steven Schveighoffer 
wrote:
 On 5/28/20 3:36 AM, Paolo Invernizzi wrote:
 On Thursday, 28 May 2020 at 01:23:31 UTC, Steven Schveighoffer 
 wrote:
 On 5/27/20 8:31 PM, Jonathan M Davis wrote:
 On Wednesday, May 27, 2020 5:57:00 PM MDT Meta via 
 Digitalmars-d wrote:
 On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis 
 wrote:
 Based on some of Walter's comments, it also sounds like he
 intends to make nothrow the default in another DIP, which 
 is
 also a terrible idea. I'm increasingly worried about the 
 future
 of D with some of where these DIPs are going.

 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
If anything, I would say the opposite.
It actually doesn't matter what's more common (and I agree with Jonathan, there's actually a lot of throwing calls because of the calls that you make into other functions). What matters is that there are functions that are actually nothrow that aren't marked nothrow. Hence the desire that these functions should actually be marked nothrow implicitly so people who care about that can just use the functions without issue.
What make me feel "mhmm" is that the motivation is always "because no throw is speediest, so should be the default" ...
That's not the motivation for the default.
DIP 1029, Rationale: "The problem is that exceptions are not cost-free, even in code that never throws. Exceptions should therefore be opt-in, not opt-out. Although this DIP does not propose making exceptions opt-in, the throw attribute is a key requirement for it. The attribute also serves well as documentation that yes, a function indeed can throw." Maybe I'm wrong, but when Walter says "not cost-free", he is seldom referring to anything other than... speed.
May 28 2020
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/28/20 11:16 AM, Paolo Invernizzi wrote:
 On Thursday, 28 May 2020 at 15:00:11 UTC, Steven Schveighoffer wrote:
 On 5/28/20 3:36 AM, Paolo Invernizzi wrote:
 What make me feel "mhmm" is that the motivation is always "because no 
 throw is speediest, so should be the default" ...
That's not the motivation for the default.
DIP 1029, Rationale: "The problem is that exceptions are not cost-free, even in code that never throws. Exceptions should therefore be opt-in, not opt-out. Although this DIP does not propose making exceptions opt-in, the throw attribute is a key requirement for it. The attribute also serves well as documentation that yes, a function indeed can throw." Maybe I'm wrong, but when Walter uses "not cost-free" he seldom refers to something else than ... speed.
I should be clearer: MY motivation for having these things be the default is so that more code can be used in more situations. Walter's motivation may differ.

The fact that this function is not nogc safe pure nothrow is a failure of the language:

int multiply(int x, int y) { return x * y; }

I shouldn't have to have attribute soup everywhere, and most likely I'm not going to bother. The motivation for the existence of nothrow in general is to avoid the cost of exception handling. But the motivation for making it the *default* is that people just don't mark their nothrow functions nothrow. The easier default is nothrow because it is callable from either situation. In other words, it enables more code.

-Steve
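For comparison, here is what the fully annotated version of that function looks like, the "attribute soup" in question:

```d
// The same function carrying every guarantee by hand. Nothing about the
// body changed; only the annotations did, and all four are things the
// compiler could trivially have verified on its own.
int multiply(int x, int y) @safe pure nothrow @nogc
{
    return x * y;
}
```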
May 28 2020
parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 28 May 2020 at 15:28:02 UTC, Steven Schveighoffer 
wrote:
 On 5/28/20 11:16 AM, Paolo Invernizzi wrote:
 [...]
I should be clearer that MY motivation for having these things be the default is so that more code can be used in more situations. Walter's motivation may differ. The fact that this function is not nogc safe pure nothrow is a failure of the language: int multiply(int x, int y) { return x * y; } I shouldn't have to have attribute soup everywhere, and most likely I'm not going to bother. The motivation for the existence of nothrow in general is to avoid the cost of exception handling. But the motivation of making it the *default* is because people just don't mark their nothrow functions nothrow. The easier default is nothrow because it is callable from either situation. In other words, it enables more code. -Steve
I can agree with you, but I would like to see a solid rationale for that kind of switch. I guess we should just wait for the discussion on the future DIP about that.
May 28 2020
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/28/20 11:16 AM, Paolo Invernizzi wrote:
 On Thursday, 28 May 2020 at 15:00:11 UTC, Steven Schveighoffer wrote:
 On 5/28/20 3:36 AM, Paolo Invernizzi wrote:
 On Thursday, 28 May 2020 at 01:23:31 UTC, Steven Schveighoffer wrote:
 On 5/27/20 8:31 PM, Jonathan M Davis wrote:
 On Wednesday, May 27, 2020 5:57:00 PM MDT Meta via Digitalmars-d 
 wrote:
 On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis wrote:
 Based on some of Walter's comments, it also sounds like he
 intends to make nothrow the default in another DIP, which is
 also a terrible idea. I'm increasingly worried about the future
 of D with some of where these DIPs are going.

 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
If anything, I would say the opposite.
It actually doesn't matter what's more common (and I agree with Jonathan, there's actually a lot of throwing calls because of the calls that you make into other functions). What matters is that there are functions that are actually nothrow that aren't marked nothrow. Hence the desire that these functions should actually be marked nothrow implicitly so people who care about that can just use the functions without issue.
What make me feel "mhmm" is that the motivation is always "because no throw is speediest, so should be the default" ...
That's not the motivation for the default.
DIP 1029, Rationale: "The problem is that exceptions are not cost-free, even in code that never throws. Exceptions should therefore be opt-in, not opt-out. Although this DIP does not propose making exceptions opt-in, the throw attribute is a key requirement for it. The attribute also serves well as documentation that yes, a function indeed can throw." Maybe I'm wrong, but when Walter uses "not cost-free" he seldom refers to something else than ... speed.
There's cognitive cost, too. Coding with functions that throw is a lot more difficult.
May 28 2020
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/27/2020 5:31 PM, Jonathan M Davis wrote:
  since he won't listen
More accurately "does not agree".
May 28 2020
next sibling parent Max Samukha <maxsamukha gmail.com> writes:
On Thursday, 28 May 2020 at 09:33:26 UTC, Walter Bright wrote:
 On 5/27/2020 5:31 PM, Jonathan M Davis wrote:
  since he won't listen
More accurately "does not agree".
...and fails to present sound arguments in defense of his position. The problem is not disagreeing.
May 28 2020
prev sibling parent welkam <wwwelkam gmail.com> writes:
On Thursday, 28 May 2020 at 09:33:26 UTC, Walter Bright wrote:
 On 5/27/2020 5:31 PM, Jonathan M Davis wrote:
  since he won't listen
More accurately "does not agree".
1. D's @safe is not powerful enough to be used with manual memory management.

2. Because of 1, people don't see enough value in using @safe in their code, especially if it interfaces with C libraries.

3. Because of 2, when @safe becomes the default, people will use the least-effort way to shut up the compiler, meaning `@safe:` at the top level of a module.

4. Because of 3, you think the compiler should mark extern(C) functions as @safe by default.

Most of the talk in this forum has focused on 4. I think people missed the forest for the trees. The real problem is 1. As long as there is no way to mechanically verify that uses of malloc and free are safe, or at least to wrap them in easy-to-use @safe constructs, I don't see any way DIP 1028 can be changed to not be problematic. As long as @safe is not useful enough, all potential outcomes of DIP 1028 are bad. But I'm just a random guy on the internet.
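Point 1 is partly about wrapping, which is already expressible today. A minimal sketch (hypothetical names; a real implementation needs much more care around copy semantics, escaping, growth, and alignment):

```d
// Wrapping malloc/free in a construct that @safe code can use.
import core.stdc.stdlib : free, malloc;

struct Buffer
{
    private ubyte[] data;
    @disable this(this); // no copies, so free() runs exactly once

    this(size_t n) @trusted // we vouch for the unsafe calls inside
    {
        auto p = cast(ubyte*) malloc(n);
        assert(p !is null, "allocation failed");
        data = p[0 .. n];
    }

    ~this() @trusted { free(data.ptr); }

    // Bounds-checked access; the raw pointer never escapes.
    ref ubyte opIndex(size_t i) @safe { return data[i]; }
}

void use() @safe
{
    auto b = Buffer(16);
    b[0] = 42; // usable from @safe code without touching malloc directly
}
```

The catch, as the post says, is that the compiler cannot mechanically verify the @trusted parts; that burden stays on the wrapper's author.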
May 28 2020
prev sibling next sibling parent welkam <wwwelkam gmail.com> writes:
On Thursday, 28 May 2020 at 00:31:09 UTC, Jonathan M Davis wrote:
 There's no question that there is code that cannot afford the 
 extra cost of exception handling and has to do something else, 
 but it's much more error-prone and isn't the norm for programs 
 in general
I think you misunderstood why some people, for example in game engine development, are against the use of exceptions. When you have a complex system and you have to verify that the right things are done through all possible control-flow paths, anything that complicates this is frowned upon. What exceptions do is inflate the control-flow graph, making it almost impossible to verify that your system does the right thing in all situations, and that would increase the number of bugs in your system.
May 28 2020
prev sibling parent reply Meta <jared771 gmail.com> writes:
On Thursday, 28 May 2020 at 00:31:09 UTC, Jonathan M Davis wrote:
 On Wednesday, May 27, 2020 5:57:00 PM MDT Meta via 
 Digitalmars-d wrote:
 On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis 
 wrote:
 Based on some of Walter's comments, it also sounds like he 
 intends to make nothrow the default in another DIP, which is 
 also a terrible idea. I'm increasingly worried about the 
 future of D with some of where these DIPs are going.

 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
If anything, I would say the opposite. <snip>
I find that response surprising, given that you used to use Haskell (do you still?), which gets along fine without exceptions.
May 28 2020
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 28.05.20 22:41, Meta wrote:
 On Thursday, 28 May 2020 at 00:31:09 UTC, Jonathan M Davis wrote:
 On Wednesday, May 27, 2020 5:57:00 PM MDT Meta via Digitalmars-d wrote:
 On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis wrote:
 Based on some of Walter's comments, it also sounds like he > 
intends to make nothrow the default in another DIP, which is > also a terrible idea. I'm increasingly worried about the > future of D with some of where these DIPs are going.
 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
If anything, I would say the opposite. <snip>
I find that response surprising, given that you used to use Haskell (do you still?), which gets along fine without exceptions.
http://hackage.haskell.org/package/base-4.14.0.0/docs/Control-Exception.html Furthermore, imperative-style code in Haskell is based on monads, which are a generalization of exceptions.
May 29 2020
parent reply Meta <jared771 gmail.com> writes:
On Friday, 29 May 2020 at 08:52:14 UTC, Timon Gehr wrote:
 I find that response surprising, given that you used to use 
 Haskell (do you still?), which gets along fine without 
 exceptions.
http://hackage.haskell.org/package/base-4.14.0.0/docs/Control-Exception.html Furthermore, imperative-style code in Haskell is based on monads, which are a generalization of exceptions.
I'm going to step up the pedantry and say that this proves my assertion; Haskell (the language) gets along fine without exceptions, but they're there for you to use as a library, only if you want to.
May 29 2020
next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Friday, 29 May 2020 at 12:40:45 UTC, Meta wrote:
 On Friday, 29 May 2020 at 08:52:14 UTC, Timon Gehr wrote:
 I find that response surprising, given that you used to use 
 Haskell (do you still?), which gets along fine without 
 exceptions.
http://hackage.haskell.org/package/base-4.14.0.0/docs/Control-Exception.html Furthermore, imperative-style code in Haskell is based on monads, which are a generalization of exceptions.
I'm going to step up the pedantry and say that this proves my assertion; Haskell (the language) gets along fine without exceptions, but they're there for you to use as a library, only if you want to.
Forgoing exceptions in D would be a lot more palatable if we had a way to mark a return value as must-use. Currently, if you want to ensure that callers of your function can't ignore errors by accident, your only option in D is to throw an exception.
May 29 2020
parent Meta <jared771 gmail.com> writes:
On Friday, 29 May 2020 at 13:20:52 UTC, Paul Backus wrote:
 On Friday, 29 May 2020 at 12:40:45 UTC, Meta wrote:
 I'm going to step up the pedantry and say that this proves my 
 assertion; Haskell (the language) gets along fine without 
 exceptions, but they're there for you to use as a library, 
 only if you want to.
Forgoing exceptions in D would be a lot more palatable if we had a way to mark a return value as must-use. Currently, if you want to ensure that callers of your function can't ignore errors by accident, your only option in D is to throw an exception.
I agree, it would be nice to be able to express that in D. You can sort of emulate it at runtime:

struct MustUse(T)
{
    T payload;
    bool used;

    ref T use()
    {
        used = true;
        return payload;
    }

    alias use this; // Or, probably better, call use() manually

    ~this()
    {
        assert(used, MustUse.stringof ~ " must be used, but it wasn't");
    }
}

MustUse!string read(File f) { ... }
May 29 2020
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 29.05.20 14:40, Meta wrote:
 On Friday, 29 May 2020 at 08:52:14 UTC, Timon Gehr wrote:
 I find that response surprising, given that you used to use Haskell 
 (do you still?), which gets along fine without exceptions.
http://hackage.haskell.org/package/base-4.14.0.0/docs/Control-Exception.html Furthermore, imperative-style code in Haskell is based on monads, which are a generalization of exceptions.
I'm going to step up the pedantry and say that this proves my assertion; Haskell (the language) gets along fine without exceptions, but they're there for you to use as a library, only if you want to.
Prelude> case (1,1) of (2,2) -> 0
*** Exception: <interactive>:4:1-24: Non-exhaustive patterns in case
May 29 2020
parent Meta <jared771 gmail.com> writes:
On Friday, 29 May 2020 at 20:02:37 UTC, Timon Gehr wrote:
 On 29.05.20 14:40, Meta wrote:
 On Friday, 29 May 2020 at 08:52:14 UTC, Timon Gehr wrote:
 I find that response surprising, given that you used to use 
 Haskell (do you still?), which gets along fine without 
 exceptions.
http://hackage.haskell.org/package/base-4.14.0.0/docs/Control-Exception.html Furthermore, imperative-style code in Haskell is based on monads, which are a generalization of exceptions.
I'm going to step up the pedantry and say that this proves my assertion; Haskell (the language) gets along fine without exceptions, but they're there for you to use as a library, only if you want to.
Prelude> case (1,1) of (2,2) -> 0
*** Exception: <interactive>:4:1-24: Non-exhaustive patterns in case
Okay, I'll concede that one. I forgot that non-exhaustive patterns throw exceptions; I thought I remembered them raising errors. Let's talk about Rust instead, then, which 100% absolutely does not have exceptions. They seem to make do with Option/Result/try (though I guess since they're monads you'd consider those equivalent to exceptions, but they don't behave like exceptions as they're just sugar over pattern matching).
May 29 2020
prev sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, May 28, 2020 2:41:43 PM MDT Meta via Digitalmars-d wrote:
 On Thursday, 28 May 2020 at 00:31:09 UTC, Jonathan M Davis wrote:
 On Wednesday, May 27, 2020 5:57:00 PM MDT Meta via

 Digitalmars-d wrote:
 On Wednesday, 27 May 2020 at 18:50:50 UTC, Jonathan M Davis

 wrote:
 Based on some of Walter's comments, it also sounds like he
 intends to make nothrow the default in another DIP, which is
 also a terrible idea. I'm increasingly worried about the
 future of D with some of where these DIPs are going.

 - Jonathan M Davis
What's wrong with nothrow by default? Probably 97% of code doesn't need to throw exceptions.
If anything, I would say the opposite. <snip>
I find that response surprising, given that you used to use Haskell (do you still?), which gets along fine without exceptions.
Functional languages are a very different beast from imperative or multi-paradigm languages. So, to a great extent, we're talking apples and oranges when comparing them.

Regardless, Haskell avoids some of the pitfalls of not using exceptions while being pretty firmly stuck with others. Because it's purely functional, you can't easily ignore the results of functions, which makes error codes (or monads with error reporting) less error-prone than they are in languages like C++, Java, or D, but the result still clutters the code considerably. It's very typical in Haskell that you're stuck passing monads in one form or another well up the call stack, so the error handling effectively infects the whole program instead of being segregated to the portions where it's most appropriate to deal with it. And of course, it gets that much more fun when you need to be passing multiple things up the call stack via monads.

Ultimately though, Haskell is so different from D that it's hard to really talk about best practices from one applying to the other. Personally, I think that it's great to spend time using a functional language as your main language for a while, because it forces you to get better at functional programming practices such as recursion - but it forces it by not letting you have the full toolbox like a multi-paradigm language does. I'm _much_ more comfortable with stuff like templates and range-based code than I would have been had I not spent a fair bit of time programming in Haskell previously, but honestly, I hate functional languages. They're far too restrictive, and I don't understand how anyone can seriously program in them professionally. Debugging Haskell is a disgusting, unpleasant process in comparison to an imperative or OO language. I highly recommend that programmers spend some time in functional land to improve their skills, but I would never want to program with such tools for a living.
As such, exceptions are by far the cleanest and least error-prone way to deal with error conditions in general. They definitely aren't always appropriate, but for your average program, they're how I'd expect most error handling to be done.

- Jonathan M Davis
May 29 2020
parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Friday, 29 May 2020 at 22:19:37 UTC, Jonathan M Davis wrote:

 Personally, I think that it's great to spend time using a 
 functional language as your main language for a while, because 
 it forces you to get better at functional programming practices 
 such as recursion - but it forces it by not letting you have 
 the full toolbox like a multi-paradigm language does. I'm 
 _much_ more comfortable with stuff like templates and 
 range-based code than I would have been had I not spent a fair 
 bit of time programming in Haskell previously, but honestly, I 
 hate functional languages. They're far too restrictive, and I 
 don't understand how anyone can seriously program in them 
 professionally. Debugging Haskell is a disgusting, unpleasant 
 process in comparison to an imperative or OO language. I highly 
 recommend that programmers spend some time in functional land 
 to improve their skills, but I would never want to program with 
 such tools for a living.
If someone wants to practise or have fun with a functional language, and at the same time have a concrete, good tool for real work, I suggest giving Elm a try. It's rock solid and much easier than Haskell (its compiler is written in Haskell), and you can learn it and become productive in a really short time. Bonus point: fantastic error messages, and ... well, you can avoid JavaScript!
May 30 2020
prev sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Wednesday, 27 May 2020 at 23:57:00 UTC, Meta wrote:
 What's wrong with nothrow by default? Probably 97% of code 
 doesn't need to throw exceptions.
One point of view is to consider the consequences of fixing code that incorrectly uses the default. If we have throws-by-default, then marking an existing non-templated method `nothrow` is not a breaking change. If we have nothrow-by-default, then marking an existing non-templated method `throws` is a breaking change. Another point of view could be that there's a benefit to being permissive by default in terms of what the developer can do.
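The asymmetry can be sketched like this (using `throws` as hypothetical opt-out syntax, since D has no such attribute today; function names are made up):

```d
// Throws-by-default: tightening later is non-breaking.
int parse(string s);          // v1: assumed to throw
int parse(string s) nothrow;  // v2: every existing caller still compiles

// Nothrow-by-default: loosening later breaks callers.
int fetch(string url);        // v1: implicitly nothrow
// int fetch(string url) throws; // v2: any caller that relied on nothrow
//                               // (e.g. inside a nothrow function) breaks
```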
May 28 2020
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 28 May 2020 at 13:42:24 UTC, Joseph Rushton Wakeling 
wrote:
 One point of view can be to consider the consequences of fixing 
 code that incorrectly uses the default.
The beauty of inferred-by-default is that neither is a breaking change. ...and both are breaking changes, but an inferred default is explicitly not guaranteed to remain the same across version updates, so people should be more prepared for it to change.
May 28 2020
parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Thursday, 28 May 2020 at 13:53:39 UTC, Adam D. Ruppe wrote:
 The beauty of inferred by default is neither are breaking 
 changes.
Not sure I agree about that TBH. Inferred-by-default just means the function signature doesn't by default tell you anything about what properties you can rely on, and any change to the implementation may alter the available properties, without any external sign that this has happened.
May 28 2020
next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Thursday, 28 May 2020 at 14:53:26 UTC, Joseph Rushton Wakeling 
wrote:
 On Thursday, 28 May 2020 at 13:53:39 UTC, Adam D. Ruppe wrote:
 The beauty of inferred by default is neither are breaking 
 changes.
Not sure I agree about that TBH. Inferred-by-default just means the function signature doesn't by default tell you anything about what properties you can rely on, and any change to the implementation may alter the available properties, without any external sign that this has happened.
... which OK, you explicitly did acknowledge, but the point is, this is exactly why I don't like inferred-by-default :-)
May 28 2020
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/28/20 10:53 AM, Joseph Rushton Wakeling wrote:
 On Thursday, 28 May 2020 at 13:53:39 UTC, Adam D. Ruppe wrote:
 The beauty of inferred by default is neither are breaking changes.
Not sure I agree about that TBH.  Inferred-by-default just means the function signature doesn't by default tell you anything about what properties you can rely on, and any change to the implementation may alter the available properties, without any external sign that this has happened.
1. Templates already do this, and it has not been a problem (much of Phobos and much of what I've written is generally templates).

2. If we went to an "inferred-by-default" regime, there would have to be a way to opt out of it, to allow for crafting attributes of public extern functions.

3. You would still need to specify exact attributes for virtual functions.

4. Documentation should show the inferred attributes IMO (not sure if this already happens for auto functions, for example).

5. Yes, inferred attributes might change. This would be a breaking change. It might be a breaking change for others where it is not for the library/function in question. But it would still be something that IMO would require a deprecation period. For things outside our control, it's very possible that these changes would be done anyway even if they were actual attributes.

6. One might also take the view that a lack of attributes means the function may or may not have those attributes inferred in the future (i.e. it's not part of the API). I think much code is already written this way.

-Steve
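Point 3 can be sketched as follows (class and method names are illustrative only): the base declaration fixes the contract for every override, so virtual functions can't be left to inference.

```d
// Virtual functions fix the attribute contract up front.
class Base
{
    // Must be spelled out; every override must be at least this strict.
    void run() @safe nothrow { }
}

class Derived : Base
{
    override void run() @safe nothrow { } // OK: same (or tighter) attributes
    // An override that throws or is @system would not compile,
    // because it would loosen the contract callers of Base rely on.
}
```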
May 28 2020
next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Thursday, 28 May 2020 at 15:12:56 UTC, Steven Schveighoffer 
wrote:
 Not sure I agree about that TBH.  Inferred-by-default just 
 means the function signature doesn't by default tell you 
 anything about what properties you can rely on, and any change 
 to the implementation may alter the available properties, 
 without any external sign that this has happened.
1. Templates already do this, and it has not been a problem (much of Phobos and much of what I've written is generally templates).
Yes, but the user tends to have a lot of control there in practice, by what template arguments they pass. So the template is less of a black box of surprise. I also think there's a bit of an implicit motivation for an author of templated code to try and _make_ the template args the dominant factor there (because that makes the template more usable).
May 28 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/28/20 11:30 AM, Joseph Rushton Wakeling wrote:
 On Thursday, 28 May 2020 at 15:12:56 UTC, Steven Schveighoffer wrote:
 Not sure I agree about that TBH.  Inferred-by-default just means the 
 function signature doesn't by default tell you anything about what 
 properties you can rely on, and any change to the implementation may 
 alter the available properties, without any external sign that this 
 has happened.
1. Templates already do this, and it has not been a problem (much of Phobos and much of what I've written is generally templates).
Yes, but the user tends to have a lot of control there in practice, by what template arguments they pass.  So the template is less of a black box of surprise.  I also think there's a bit of an implicit motivation for an author of templated code to try and _make_ the template args the dominant factor there (because that makes the template more usable).
I think a ton of templates get written without considering attribute inference at all. It just happens to work because the compiler is guaranteed to have all the source. I've seen people change a function into a no-arg template function not even for inference, but to ensure the compiler only generates code if called. All of a sudden, it gets a new set of attributes, and nobody complains.

The most pleasant thing about template inference is that it generally works out well because it provides the most restrictive attributes it can. It doesn't get in the way of the author who doesn't care about attributes or the user who does care. But it can also blow up if you can't figure out why some inference is happening and you expect something different.

I think, in addition to such a change to inference by default, the compiler should provide a mechanism to explain how it infers things so you can root out the cause of it. I'd like to have this feature regardless of any defaults.

-Steve
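The "no-arg template" trick mentioned above can be sketched like this (function names are hypothetical): turning a function into a zero-parameter template opts it into attribute inference.

```d
// Plain function: no inference; assumed impure, throwing, @system, GC-using.
int len1(string s) { return cast(int) s.length; }

// Zero-parameter template: attributes are inferred automatically
// (here: pure nothrow @safe @nogc).
int len2()(string s) { return cast(int) s.length; }

void caller() pure nothrow @safe @nogc
{
    // auto a = len1("hi"); // error: len1 carries none of these attributes
    auto b = len2("hi");    // fine: the attributes were inferred
}
```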
May 28 2020
prev sibling next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 28 May 2020 at 15:12:56 UTC, Steven Schveighoffer 
wrote:
 2. If we went to an "inferred-by-default" regime, there would 
 have to be a way to opt-out of it, to allow for crafting 
 attributes of public extern functions.
You'd just have to write them out there.
 4. Documentation should show the inferred attributes IMO (not 
 sure if this already happens for auto functions for example).
Eeeeeeh, I'd be ok with that but it would need to actually point out that it was inferred - that this is NOT a promise of forward compatibility, it just happens to be so in this version
 6. One might also take the view that a lack of attributes means 
 the function may or may not have those attributes inferred in 
 the future (i.e. it's not part of the API). I think much code 
 is already written this way.
yes, i tend to explicitly write it out if im making a point about it.
May 28 2020
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 5/28/20 12:39 PM, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 15:12:56 UTC, Steven Schveighoffer wrote:
 2. If we went to an "inferred-by-default" regime, there would have to 
 be a way to opt-out of it, to allow for crafting attributes of public 
 extern functions.
You'd just have to write them out there.
Not possible in some cases (there is no `throws`, `@gc`, or `impure` attribute to write)
 
 4. Documentation should show the inferred attributes IMO (not sure if 
 this already happens for auto functions for example).
Eeeeeeh, I'd be ok with that but it would need to actually point out that it was inferred - that this is NOT a promise of forward compatibility, it just happens to be so in this version
Sure, that's a reasonable expectation. -steve
May 28 2020
next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 28 May 2020 at 16:47:01 UTC, Steven Schveighoffer 
wrote:
 On 5/28/20 12:39 PM, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 15:12:56 UTC, Steven Schveighoffer 
 wrote:
 2. If we went to an "inferred-by-default" regime, there would 
 have to be a way to opt-out of it, to allow for crafting 
 attributes of public extern functions.
You'd just have to write them out there.
Not possible in some cases (there is no `throws`, `@gc`, or `impure` attribute to write)
 
 4. Documentation should show the inferred attributes IMO (not 
 sure if this already happens for auto functions for example).
Eeeeeeh, I'd be ok with that but it would need to actually point out that it was inferred - that this is NOT a promise of forward compatibility, it just happens to be so in this version
Sure, that's a reasonable expectation. -steve
I am all for nonothrow and nononothrow (which does not allow asserts :))
May 28 2020
parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Thursday, 28 May 2020 at 18:09:51 UTC, Stefan Koch wrote:
 I am all for nonothrow and nononothrow (which does not allow 
 asserts :))
nonononononotheresnolimits ;-)
May 28 2020
parent H. S. Teoh <hsteoh quickfur.ath.cx> writes:
On Thursday, 28 May 2020 at 19:08:18 UTC, Joseph Rushton Wakeling 
wrote:
 On Thursday, 28 May 2020 at 18:09:51 UTC, Stefan Koch wrote:
 I am all for nonothrow and nononothrow (which does not allow 
 asserts :))
nonononononotheresnolimits ;-)
In the context of DIP 1028, I really want an attribute called @nojustno that I can put at the top of my source file to make extern(C) declarations @system by default. ;-)
May 28 2020
prev sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 28 May 2020 at 16:47:01 UTC, Steven Schveighoffer 
wrote:
 On 5/28/20 12:39 PM, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 15:12:56 UTC, Steven Schveighoffer 
 wrote:
 2. If we went to an "inferred-by-default" regime, there would 
 have to be a way to opt-out of it, to allow for crafting 
 attributes of public extern functions.
You'd just have to write them out there.
Not possible in some cases (there is no `throws`, `@gc`, or `impure` attribute to write)
 
 4. Documentation should show the inferred attributes IMO (not 
 sure if this already happens for auto functions for example).
Eeeeeeh, I'd be ok with that but it would need to actually point out that it was inferred - that this is NOT a promise of forward compatibility, it just happens to be so in this version
Sure, that's a reasonable expectation. -steve
I'm a fan of your auto-inference or inferred-by-default or whatever you'd like to call it ideas. Wider utility and less "annotation soup". If you've got the time, I'd suggest starting a new thread. Now that Walter has withdrawn DIP 1028, we could really use something to amp up the utility of @safe and other attributes.
May 28 2020
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Thu, 28 May 2020 16:39:21 +0000 schrieb Adam D. Ruppe:

 On Thursday, 28 May 2020 at 15:12:56 UTC, Steven Schveighoffer wrote:
 2. If we went to an "inferred-by-default" regime, there would have to
 be a way to opt-out of it, to allow for crafting attributes of public
 extern functions.
You'd just have to write them out there.
I haven't thought about it a lot, but I think I'd prefer to have stable library ABIs by default. So at least all exported, non-templated functions should not have inference. Otherwise we'd have to add a best-practice 'fully annotate all exported functions to avoid ABI breakage' rule, which then again is some sort of programming by convention... -- Johannes
May 28 2020
parent Bruce Carneal <bcarneal gmail.com> writes:
On Friday, 29 May 2020 at 06:26:29 UTC, Johannes Pfau wrote:
 Am Thu, 28 May 2020 16:39:21 +0000 schrieb Adam D. Ruppe:

 On Thursday, 28 May 2020 at 15:12:56 UTC, Steven Schveighoffer 
 wrote:
 2. If we went to an "inferred-by-default" regime, there would 
 have to be a way to opt-out of it, to allow for crafting 
 attributes of public extern functions.
You'd just have to write them out there.
I haven't thought about it a lot, but I think I'd prefer to have stable library ABIs by default. So at least all exported, non-templated functions should not have inference. Otherwise we'd have to add a best-practice 'fully annotate all exported functions to avoid ABI breakage' rule, which then again is some sort of programming by convention...
Unless it's believed that the separate compilation capability is both essential and plausibly intractable, experimenting with the "I have all the source code" subset first seems like a good idea. For one, I'm pretty sure that we can find a way to store/retrieve compiler inferences.
May 28 2020
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/28/20 11:12 AM, Steven Schveighoffer wrote:
 On 5/28/20 10:53 AM, Joseph Rushton Wakeling wrote:
 On Thursday, 28 May 2020 at 13:53:39 UTC, Adam D. Ruppe wrote:
 The beauty of inferred by default is neither are breaking changes.
Not sure I agree about that TBH.  Inferred-by-default just means the function signature doesn't by default tell you anything about what properties you can rely on, and any change to the implementation may alter the available properties, without any external sign that this has happened.
1. Templates already do this, and it has not been a problem (much of Phobos and much of what I've written is generally templates). 2. If we went to an "inferred-by-default" regime, there would have to be a way to opt-out of it, to allow for crafting attributes of public extern functions. 3. You would still need to specify exact attributes for virtual functions. 4. Documentation should show the inferred attributes IMO (not sure if this already happens for auto functions for example). 5. Yes, inferred attributes might change. This would be a breaking change. It might be a breaking change for others where it is not for the library/function in question. But it would still be something that IMO would require a deprecation period. For things outside our control, it's very possible that these changes would be done anyway even if they were actual attributes. 6. One might also take the view that a lack of attributes means the function may or may not have those attributes inferred in the future (i.e. it's not part of the API). I think much code is already written this way.
Large projects separate compilation inter-procedural analysis does not scale yadda yadda yadda.
May 28 2020
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 28 May 2020 at 22:56:15 UTC, Andrei Alexandrescu 
wrote:
 Large projects separate compilation inter-procedural analysis 
 does not scale yadda yadda yadda.
We need to stop making assertions without measurements. This may be true, but it needs to be based on empirical fact - the one thing we should notice is compile speed is sensitive to surprising things and not to other surprising things....
May 28 2020
next sibling parent Johannes Pfau <nospam example.com> writes:
Am Thu, 28 May 2020 23:35:48 +0000 schrieb Adam D. Ruppe:

 On Thursday, 28 May 2020 at 22:56:15 UTC, Andrei Alexandrescu wrote:
 Large projects separate compilation inter-procedural analysis does not
 scale yadda yadda yadda.
We need to stop making assertions without measurements. This may be true, but it needs to be based on empirical fact - the one thing we should notice is compile speed is sensitive to surprising things and not to other surprising things....
Unfortunately, what really limits us here is the C/C++ toolchain ecosystem (linker, library loader...). Ideally we'd do an attribute inference / code generation / inline information generation (function 'classification') pass exactly once per module, then store the information in some sort of compiled intermediary file. For (cross-module) inlining, this is essentially what LTO does. But I don't know whether you could include enough other information, such as function prototypes for all analyzed functions... Maybe it'd make sense to store this information in separate files, but then it's again more difficult to integrate into current toolflows. -- Johannes
May 28 2020
prev sibling next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 28 May 2020 at 23:35:48 UTC, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 22:56:15 UTC, Andrei Alexandrescu 
 wrote:
 Large projects separate compilation inter-procedural analysis 
 does not scale yadda yadda yadda.
We need to stop making assertions without measurements. This may be true, but it needs to be based on empirical fact - the one thing we should notice is compile speed is sensitive to surprising things and not to other surprising things....
It's not surprising that copying a CompoundStatement all over the place and then discarding most of it is a waste.
May 29 2020
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/28/20 7:35 PM, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 22:56:15 UTC, Andrei Alexandrescu wrote:
 Large projects separate compilation inter-procedural analysis does not 
 scale yadda yadda yadda.
We need to stop making assertions without measurements. This may be true, but it needs to be based on empirical fact - the one thing we should notice is compile speed is sensitive to surprising things and not to other surprising things....
Needs to be based on sound analysis, not measurements, as a quadratic is not scalable regardless of the constant. When I was in the field there was no escaping this fact, and all interprocedural analyses that did anything interesting were limited to programs a few KB in size. Perhaps things have improved since, and it would be great if they did. Timon should know about this.
May 29 2020
next sibling parent Arun Chandrasekaran <aruncxy gmail.com> writes:
On Friday, 29 May 2020 at 12:52:06 UTC, Andrei Alexandrescu wrote:
 On 5/28/20 7:35 PM, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 22:56:15 UTC, Andrei Alexandrescu 
 wrote:
 Large projects separate compilation inter-procedural analysis 
 does not scale yadda yadda yadda.
We need to stop making assertions without measurements. This may be true, but it needs to be based on empirical fact: the one thing we should notice is that compile speed is sensitive to some surprising things and insensitive to others....
Needs to be based on sound analysis, not measurements, as a quadratic is not scalable regardless of the constant. When I was in the field there was no escaping this fact, and all interprocedural analyses that did anything interesting were limited to programs a few KB in size. Perhaps things have improved since; it would be great if they have. Timon should know about this.
Where I work we typically use a decision matrix. In its current state, the DIP process would benefit if a decision matrix were introduced for controversial proposals. It would help summarize the proposed options and counter-options and the weight given to each option (based on sound reasoning). https://en.wikipedia.org/wiki/Decision_matrix
May 29 2020
prev sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Friday, 29 May 2020 at 12:52:06 UTC, Andrei Alexandrescu wrote:
 On 5/28/20 7:35 PM, Adam D. Ruppe wrote:
 On Thursday, 28 May 2020 at 22:56:15 UTC, Andrei Alexandrescu 
 wrote:
 Large projects separate compilation inter-procedural analysis 
 does not scale yadda yadda yadda.
We need to stop making assertions without measurements. This may be true, but it needs to be based on empirical fact: the one thing we should notice is that compile speed is sensitive to some surprising things and insensitive to others....
Needs to be based on sound analysis, not measurements, as a quadratic is not scalable regardless of the constant. When I was in the field there was no escaping this fact, and all interprocedural analyses that did anything interesting were limited to programs a few KB in size. Perhaps things have improved since; it would be great if they have. Timon should know about this.
Of course, we need both theory and the empirical. O(N lg N) may be the upper limit of what is countenanced for performance work, a useful filter, but the constants, as Andrei knows better than most, can really get you. Big-O is helpful, but it is too loose to be the final word. Additionally, the empirical can help you catch big-O analysis errors.
May 29 2020
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/28/20 9:42 AM, Joseph Rushton Wakeling wrote:
 On Wednesday, 27 May 2020 at 23:57:00 UTC, Meta wrote:
 What's wrong with nothrow by default? Probably 97% of code doesn't 
 need to throw exceptions.
One point of view is to consider the consequences of fixing code that incorrectly uses the default. If we have throws-by-default, then marking an existing non-templated method `nothrow` is not a breaking change. If we have nothrow-by-default, then marking an existing non-templated method `throws` is a breaking change. Another point of view is that there's a benefit to being permissive by default in terms of what the developer can do.
And that should be a breaking change. So all is good. Changing the regime of a function from nothrow to throwing is major. I'm sympathetic to making nothrow the default for functions. Having the reader and the compiler account for the possibility of throwing is a large upfront tax, and it should not be paid without necessity.
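The breaking-change asymmetry can be sketched in current D (function names invented for illustration):

```d
// Why removing `nothrow` from an existing function breaks callers.
nothrow int parseFast(string s) { return cast(int) s.length; }

nothrow int caller(string s)
{
    // Legal today, because parseFast promises not to throw.
    return parseFast(s) + 1;
}

// If parseFast's regime later changes from nothrow to throwing,
// caller no longer compiles: a nothrow function may not call a
// throwing one outside a try/catch. Under nothrow-by-default, every
// function that merely forgot an explicit throwing annotation would
// inflict this breakage on its clients when the omission is fixed.
```

Under throws-by-default, the same correction (adding `nothrow` where it was forgotten) only strengthens the guarantee and breaks no caller.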
May 28 2020
parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Thursday, 28 May 2020 at 22:43:29 UTC, Andrei Alexandrescu 
wrote:
 And that should be a breaking change. So all is good.

 Changing the regime of a function from nothrow to throw is 
 major.
Of course changing the regime of a function from nothrow to throw is major. That will be the case whatever the default regime is. But the point I was making was about how the choice of default regime interacts with the requirement for breaking change.

The scenario imagined here is: what happens if a function is using the default regime by accident, because the developer forgot to add an attribute to change it? If the default regime is 'throw', then one can switch a function from the default regime to its opposite without breaking change. If the default regime is 'nothrow', then changing from the default regime to its opposite is inherently a breaking change.

We can reasonably expect it to be a common scenario that developers just use the default and only later realize that wasn't what they intended. So when picking a default regime, it might be a good idea to pick the one that allows switching away without breakage. (The same argument has been advanced for final-by-default class methods, as you may recall.)
May 30 2020
prev sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
On Wednesday, 27 May 2020 at 11:37:17 UTC, Andrei Alexandrescu 
wrote:
(...)
 If this is greenwashing, then DIP 1028 is doing it.
Using the greenwashing phrase against Walter's case for DIP 1028 is good rhetoric, but a weak argument, and I believe it does little to further this conversation. At least you did bring a definition into the mix; almost everyone else seems to have misunderstood what greenwashing even means!

It's exactly Walter's argument that *not* having the compiler mark external declarations as @safe will lead to greenwashing by humans in practice: some programmer (not the compiler!) putting an @safe annotation there just to shut the compiler up. That act cannot be easily distinguished from careful analysis, thus misleading others about the memory safety of the codebase. The compiler simply follows the rules without an agenda, but a human has more complex intentions. Thus, there is a very big difference between something that has been 'calculated' and something that has been 'created', and anyone reading the code who is up to date on the rules will recognize that difference. This is a matter of coder psychology, not language lawyering. At least, this is how I understand Walter's argument.

The way DIP 1028 itself would mislead coders about the memory safety of a codebase is quite different. I think it boils down to two kinds of deception:

1. @safe is a sham: you'd expect @safe to let only mechanically verifiable code compile, but it doesn't. This is a PR problem. It's the same as with pure: you'd expect it to verify the referential transparency of a function, but it actually doesn't. I've seen people invent the terms 'weak purity' and 'strong purity' to cope with that. In practice it's not really a problem, but the PR around it is not nice.

2. Relaxing @safe to allow unannotated externs will make it less useful, because in order to assess memory safety in any reasonable way, every part of the codebase, every dependency, and all transitive dependencies will need to be (manually) checked for unannotated externs, and this needs to be done every time you update your dependencies.

Only 1 could be thought of as greenwashing.
I don't find that particularly convincing, but it also depends on how safe-by-default is presented. There should be a huge caveat that it only applies to code the compiler can verify, and that any unannotated code external to the compiler is assumed to be safe as well.

As for 2: in a lot of cases it will only be reasonable if the whole codebase, including transitive dependencies, can be mechanically checked for uses of unannotated externs. Those should be banned by a quality check. I know Walter hates warnings and thinks this belongs in a linter, but it might be something the compiler could warn about and optionally treat as an error. The warning would also combat the false impression of wholesale greenwashing with a big 'told you so'.
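The concern about unannotated externs can be sketched in code (the C function name is invented for illustration), assuming DIP 1028's treatment of declarations without a body:

```d
// Unannotated prototype for a C function. The compiler cannot see,
// let alone verify, its body; under DIP 1028's safe-by-default it
// would nevertheless be treated as @safe.
extern(C) int c_frobnicate(int x);

@safe int use(int x)
{
    // Would compile under safe-by-default even though nothing ever
    // checked the C side. In current D, this call is rejected unless
    // someone takes responsibility via @trusted.
    return c_frobnicate(x);
}
```

Mechanically flagging every such unannotated extern, as suggested above, would make the audit tractable: the tool lists the declarations, and a human reviews only those.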
May 27 2020