
digitalmars.D - assert semantic change proposal

reply "David Bregman" <drb sfu.ca> writes:
I am creating this thread because I believe the other ones [1,6] 
have gotten too bogged down in minutiae and the big picture has 
gotten lost.

Walter has proposed a change to D's assert function as follows 
[1]:
"The compiler can make use of assert expressions to improve 
optimization, even in -release mode."

I would like to raise a series of questions, comments, and 
potential objections to this proposal which I hope will help 
clarify the big picture.

1. Who and Why? What is the impetus behind this proposal? What is 
the case for it? Walter made strong statements such as "there is 
inexorable pressure for this", and "this will happen", and I am 
wondering where this is coming from. Is it just Walter? If not, 
who or what is pushing this idea? (the 'yea' side, referred to 
below)

2. Semantic change.
The proposal changes the meaning of assert(), which will result 
in breaking existing code. Regardless of philosophizing about 
whether or not the code was "already broken" according to some 
definition of assert, the fact is that shipping programs that 
worked perfectly well before may no longer work after this change.
Q2a. In other areas, code breakage has recently been anathema. 
Why is this case different?
Q2b. Has any attempt been made to estimate the impact of this 
change on existing code? Has code breakage been considered in 
making this proposal?
2c. I note that the proposal also breaks with (at least) one of 
D's stated "Major Design Goals".[2] ("Where D code looks the same 
as C code, have it either behave the same or issue an error.")

3. Undefined behavior.
The purpose of the proposal is to improve code generation, and 
this is accomplished by allowing the compiler to generate code 
with arbitrary (undefined) behavior in the case that the 
assertion does not hold. Undefined behavior is well known to be a 
source of severe problems, such as security exploits[3,4], and 
so-called "heisenbugs"[5].
3a. An alternate statement of the proposal is literally "in 
release mode, assert expressions introduce undefined behavior 
into your code if the expression is false".
3b. Since assert is such a widely used feature (with the original 
semantics, "more asserts never hurt"), the proposal will inject a 
massive amount of undefined behavior into existing code bases, 
greatly increasing the probability of experiencing problems 
related to undefined behavior.
Q3c. Have the implications of so much additional undefined 
behavior been sufficiently considered and weighed with the 
performance benefits of the proposal?
Q3d. How can the addition of large amounts of undefined behavior 
be reconciled with D's Major Design Goals?

3f. I note that it has been demonstrated in the other threads 
that the proposal as it stands can even break the memory safety 
guarantee of @safe code.

4. Performance.
Q4a. What level of performance increases are expected of this 
proposal, for a representative sample of D programs?
Q4b. Is there any threshold level of expected performance 
required to justify this proposal? For example, if a study 
determined that the average program could expect a speedup of 
0.01% or less, would that still be considered a good tradeoff 
against the negatives?
Q4c. Have any works or studies, empirical or otherwise, been done 
to estimate the expected performance benefit? Is there any 
evidence at all for a speedup sufficient to justify this proposal?
Q4d. When evaluating the potential negative effects of the 
proposal on their codebase, D users may decide it is now too 
risky to compile with -release. (Even if their own code has been 
constructed with the new assert semantics in mind, the libraries 
they use might not). Thus the effect of the proposal would 
actually be to decrease the performance of their program instead 
of increase it. Has this been considered in the evaluation of 
tradeoffs?

5. High level goals
The feedback so far demonstrates that the proposal is 
controversial at least. While I do not endorse democratic or 
design-by-committee approaches to language design, I do think it 
is relevant if a large subset of users have issues with a 
proposal. Note that this is not bikeshedding, I believe it has 
now been sufficiently demonstrated there are real concerns about 
real negative effects of the proposal.
5a. Is this proposal the best way to go or is there an 
alternative that would achieve the same goals while satisfying 
both sides?
5b. Has the 'yea' side been sufficiently involved in this 
discussion? Are they aware of the tradeoffs? Mostly what I've 
seen is Walter defending the yea side from the perspective that 
the decision has already been made. Maybe if the yea side was 
consulted, they might easily agree to an alternative way of 
achieving the improved optimization goal, such as creating a new 
function that has the proposed semantics.

References:
[1]: http://forum.dlang.org/thread/lrbpvj$mih$1@digitalmars.com
[2]: http://dlang.org/overview.html
[3]: 
http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html
[4]: http://blog.regehr.org/archives/213
[5]: http://en.wikipedia.org/wiki/Heisenbug
[6]: 
http://forum.dlang.org/thread/jrxrmcmeksxwlyuitzqp@forum.dlang.org
Aug 03 2014
next sibling parent reply "bachmeier" <no spam.net> writes:
Thanks for the summary. I apologize for the uninformed question, 
but is it possible to explain how the change wrt assert will 
break existing code? Those details are probably buried in the 
extensive threads you've referenced. I ask because my 
understanding of assert has always been that you should use it to 
test your programs but not rely on it at runtime.


Aug 03 2014
next sibling parent "David Bregman" <drb sfu.ca> writes:
On Sunday, 3 August 2014 at 20:05:22 UTC, bachmeier wrote:
 Thanks for the summary. I apologize for the uninformed 
 question, but is it possible to explain how the change wrt 
 assert will break existing code? Those details are probably 
 buried in the extensive threads you've referenced. I ask 
 because my understanding of assert has always been that you 
 should use it to test your programs but not rely on it at 
 runtime.
Yes, it was discussed in the threads. The basic way it happens is 
something like this:

    assert(x);
    if(!x) {
        // some side effect on the program
        // the optimizer will remove this path under the proposal
    }

It's much more insidious if the assert and the if are separated 
by some distance, such as being in different functions or even 
modules. For example:

    assert(x < 1000); // this assert is wrong, but has never been
                      // hit during testing. Unfortunate, but real
                      // life programs are not bug free.
    someFunction(x);

Now suppose someFunction is a library sort of function, coded in 
"defensive programming" style. It does something important, so it 
validates its input first to make sure nothing bad happens. But 
now someFunction is inlined, and code is generated with the 
(wrong) assumption that x < 1000. The input validation checks are 
removed by the optimizer. As a result, someFunction runs with 
invalid input, and [user's hard drive is formatted, hacker gains 
root access, etc.].

There are other ways too. The code does not explicitly need to 
have an if statement checking for !x to be broken by this - any 
implicit checks, any kind of control flow structure can be broken 
just the same.
Aug 03 2014
prev sibling next sibling parent Daniel Gibson <metalcaedes gmail.com> writes:
Am 03.08.2014 22:05, schrieb bachmeier:
 Thanks for the summary. I apologize for the uninformed question, but is
 it possible to explain how the change wrt assert will break existing
 code? Those details are probably buried in the extensive threads you've
 referenced. I ask because my understanding of assert has always been
 that you should use it to test your programs but not rely on it at runtime.
Example:

    assert(x !is null);
    if(x !is null) {
        x.foo = 42;
    }

In release mode, the assert() will (as per Walter's plans) tell 
the compiler that x cannot be null, so it will optimize the 
"if(x !is null)" away. Currently this would not happen and this 
code won't segfault; with that optimization it would, in case x 
is null in release mode after all, in some circumstance that 
hasn't occurred during testing.

Walter's stance on this is that if x is null under some 
circumstance, the program is buggy anyway and in an undefined 
state. Furthermore, he interprets "assert()" as "the programmer 
asserts (as in promises) that the given condition is true".

Other people say that assert(), as it's implemented in many 
programming languages (including D up to now), is just a runtime 
check with a funny name that can be deactivated (e.g. for/in 
release builds).

Some have proposed to have an assume() that does what Walter 
wants assert() to do.

Cheers,
Daniel
Aug 03 2014
prev sibling parent reply "John Carter" <john.carter taitradio.com> writes:
On Sunday, 3 August 2014 at 20:05:22 UTC, bachmeier wrote:

 3. Undefined behavior.
Actually I have had an extensive battle within my own workplace 
on this subject, and I think I have a reasonable insight into 
both points of view.

It comes down to two opposing views of what we use asserts for.

My view, which I think corresponds with Walter's and Bertrand 
Meyer's, is that asserts define what correct behaviour is. If an 
assert fires, your program is fundamentally defective in a manner 
that can only be corrected by a new version of the program. And 
the sooner you know that, preferably at compile time, the better. 
Continuing past such an assert inevitably results in defective, 
possibly catastrophic, possibly flaky behaviour.

In the opposing view, an assert statement is a debug aid, in the 
same category as a logging printf. If it fires, it's "Huh. That's 
interesting. I didn't think that would happen, but OK, it does. 
Cool."

Alas, these two uses have been given the same name: assert.

One resolution would be to create two assert interfaces: one that 
the compiler pays attention to, and one that is just a "Huh. 
That's interesting, I didn't expect that."
Aug 03 2014
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 3 August 2014 at 22:18:29 UTC, John Carter wrote:
 My view, which I think corresponds with Walter's and Betrand 
 Meyer's, is that asserts define what correct behaviour is.
No. The propositions describe what the correct behaviour ought to be. The asserts request them to be proved.
 And the sooner you know that, preferably at compile time, the 
 better.
And to do that you need a theorem prover capable of solving NP-Hard problems. So you need a veeeery intelligent programmer to write provably correct code without any special tools.
 Continuing past such an assert inevitably results in defective, 
 possibly catastrophic, possibly flaky behaviour.
And Walter thinks it would be a great idea to make that 
catastrophic behaviour occur with a much greater probability, and 
also every time you execute your program, undetected, not only in 
the select few cases where a slight overlap in conditions was 
detected.

So basically, if your program contains an assert that states that 
the program should stop working 30 years from now, it is a good 
idea to make it fail randomly right away. That's the view that 
Andrei, Don and Walter have expressed very explicitly.

People who think this is a great idea defy reason. They only 
learn from failure.

You have to realize that a deduction engine cannot tolerate a 
single contradiction in its axioms. If there is a single 
contradiction, it can basically deduce anything, possibly 
undetected. Turning asserts in programs+libraries into globally 
available axioms is insane.
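The "single contradiction" point is the classical principle of 
explosion (ex falso quodlibet). As a minimal illustration, stated 
in Lean purely to pin down the logic (it says nothing about D's 
implementation):

    -- Principle of explosion: from one contradiction, any
    -- proposition Q whatsoever can be derived. A deduction engine
    -- fed a single false axiom can therefore "prove" anything.
    theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
      absurd hp hnp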
Aug 03 2014
next sibling parent "David Bregman" <drb sfu.ca> writes:
On Sunday, 3 August 2014 at 22:57:24 UTC, Ola Fosheim Grøstad 
wrote:
John proposes a separate function, so I think you two are in agreement on what really matters. Let's try to avoid going too deep into tangents, unlike the other threads - it didn't work well last time.
Aug 03 2014
prev sibling parent reply "John Carter" <john.carter taitradio.com> writes:
On Sunday, 3 August 2014 at 22:57:24 UTC, Ola Fosheim Grøstad 
wrote:

 Turning asserts in program+libaries into globally available 
 axioms is insane.
I guess this is the heart of the difference in the way DbC 
programmers program.

I know that program proving is impossibly hard, so my asserts are 
a kind of shortcut on it.

I use asserts to define my architecture. So that I can say, "In 
my architecture, I know, by design, that certain eventualities 
will never occur. I don't expect any compiler to be able to prove 
that for me (although it would be nice if it could), but I 
certainly will be relying on these facets of the architecture."

When I assert, I'm stating "In my architecture, as I designed it, 
this will always be true, and everything in the code downstream 
of here can AND DOES rely on this. My code explicitly relies on 
these simplifying assumptions, and will go hideously wrong if 
those assumptions are false.... So why can't the compiler rely on 
them too? Of course it can, as every single line I write after 
the assert is absolutely relying on the assert being true."

My asserts are never "I believe this is true". They are _always_ 
"In this design, the following must be true, as I'm about to code 
absolutely relying on this fact." And if the compiler chooses to 
rely on it too... I can absolutely guarantee you that differing 
optimizations will be the least of my worries if the expression 
is false.

However, that said, it is very clear to me that this is a very 
different usage of "assert" to what many of my colleagues do. 
Hence my suggestion that we make explicit, by differing names, 
which usage we mean.

Oh, and I will just throw this happy observation into the mix... 
in case you think this sort of optimization is too 
revolutionary... http://www.airs.com/blog/archives/120
Aug 03 2014
parent "Ola Fosheim Grøstad" writes:
On Sunday, 3 August 2014 at 23:54:46 UTC, John Carter wrote:
 I know that program proving is impossibly hard, so my asserts 
 are a kind of short cut on it.
Yes, but the dual is that writing a correct program is impossibly 
hard. A correct program works as specified for all improbable 
input and configurations. No shipped programs are correct.

However, if you turn asserts into assumes, you let the compiler 
use any defect in the program or the specification to prove 
"true==false". And after that, all bets are off.

With asserts on, you can tell where the flaw is. With asserts off 
and logging on, you can figure out where the flaw is not. With 
asserts turned into assumes, no amount of logging can help you; 
you just know there is a flaw. Worse, an improbably occurring bug 
can now become a probable one.
 When I assert, I'm stating "In my architecture, as I designed 
 it, this will always be true, and everything in the code 
 downstream of here can AND DOES rely on this.
But it does not matter if it holds. The deduction engine in the compiler is not required to limit itself to the location of the "assert turned into axiom". It can move it upstream and downstream. It is also not only a problem of mismatch between two axioms, but between any derivable theorems.
 My code explicitly relies on these simplifying assumptions, and 
 will go hideously wrong if those assumptions are false.... So 
 why can't the compiler rely on them too?
Because the compiler can move them around and will assume all improbable configurations and input sequences.
 Of course it can, as every single line I write after the assert 
 is absolutely relying on the assert being true."
Yes, but the axioms can move anywhere. And any contradiction derivable from any set of axioms can lead to boolean expressions turned to random values anywhere in your program. Not only near the flawed assert turned into an axiom.
 My asserts are never "I believe this is true".

 They are _always_ "In this design, the following must be true, 
 as I'm about to code absolutely relying on this fact."
Yes, but if you state it differently elsewhere, indirectly 
(through a series of axioms), you may have a contradiction from 
which you can deduce "true==false".

Please note that any potentially reachable code will be included 
in the axiom database. Not only the code that will execute, but 
also branches that will never execute in a running program. Those 
can now propagate upwards, since they are taken as true.

Almost no shipped programs are correct. They are all wrong, but 
we take them as "working" because we don't push them to extremes 
very often. Let me quote from the CompCert webpage: 
http://compcert.inria.fr/motivations.html

<<More recently, Yang et al generalized their testing of C 
compilers and, again, found many instances of miscompilation:

We created a tool that generates random C programs, and then 
spent two and a half years using it to find compiler bugs. So 
far, we have reported more than 325 previously unknown bugs to 
compiler developers. Moreover, every compiler that we tested has 
been found to crash and also to silently generate wrong code when 
presented with valid inputs. (PLDI 2011)

For non-critical, "everyday" software, miscompilation is an 
annoyance but not a major issue: bugs introduced by the compiler 
are negligible compared to those already present in the source 
program.>>
Aug 04 2014
prev sibling next sibling parent reply "John Carter" <john.carter taitradio.com> writes:
On Sunday, 3 August 2014 at 19:47:27 UTC, David Bregman wrote:

 Walter has proposed a change to D's assert function as follows 
 [1]:
 "The compiler can make use of assert expressions to improve 
 optimization, even in -release mode."
Hmm. I really really do like that idea.

I suspect it is one of those ideas of Walter's that has 
consequences that reach further than anyone foresees..... but 
that's OK, because it is fundamentally the correct course of 
action; its implications, foreseen and unforeseen, will be 
correct.

One "near term" implication is to permit deeper static checking 
of the code. Both in terms of "Well, actually, there is a code 
path in which the assert expression could be false; flag it with 
a warning", and in terms of "There is a code path which is unused 
/ incorrect / erroneous if the assert expression is true; flag it 
as an error/warning".

Furthermore, in the presence of deeper compile time function 
evaluation, I suspect we will get deeper and even more surprising 
consequences from this decision. Suddenly we have, at compile 
time, an expression we know to be true, always, at run time. 
Thus, where possible, the compiler can infer as much as it can 
from this. The implications of that will be very, very 
interesting and far reaching.

As I said, this choice will have very far reaching, unforeseen 
and unforeseeable consequences.... but that's OK: since it is 
fundamentally the correct choice, those consequences will be 
correct too.
Aug 03 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 2:57 PM, John Carter wrote:
 On Sunday, 3 August 2014 at 19:47:27 UTC, David Bregman wrote:

 Walter has proposed a change to D's assert function as follows [1]:
 "The compiler can make use of assert expressions to improve
 optimization, even in -release mode."
Hmm. I really really do like that idea. I suspect it is one of 
those ideas of Walter's that has consequences that reach further 
than anyone foresees..... but that's OK, because it is 
fundamentally the correct course of action; its implications, 
foreseen and unforeseen, will be correct.
Agreed.

One related point that has been discussed only a little is the 
competitive aspect of it all. Generating fast code is of 
paramount importance for D's survival and thriving in the market. 
Competition in language design and implementation is acerbic and 
only getting more cutthroat. In the foreseeable future, 
efficiency will become more important at scale, seeing as data is 
growing and frequency scaling has stalled.

Availing ourselves of a built-in "assert" that has a meaning and 
informativeness unachievable to e.g. a C/C++ macro is a very 
important and attractive competitive advantage compared to these 
and other languages.

Walter has always meant assert the way he discusses it today. Has 
he (and subsequently he and I) been imprecise in documenting it? 
Of course, but that just means it's Tuesday.

That said, should we proceed carefully about realizing this 
advantage? Of course; that's a given. But I think it's very 
important to fully understand the advantages of gaining an edge 
over the competition.

Andrei
Aug 03 2014
next sibling parent reply "David Bregman" <drb sfu.ca> writes:
On Sunday, 3 August 2014 at 22:15:52 UTC, Andrei Alexandrescu 
wrote:
 On 8/3/14, 2:57 PM, John Carter wrote:
 On Sunday, 3 August 2014 at 19:47:27 UTC, David Bregman wrote:

 Walter has proposed a change to D's assert function as 
 follows [1]:
 "The compiler can make use of assert expressions to improve
 optimization, even in -release mode."
Hmm. I really really do like that idea. I suspect it is one of 
those ideas of Walter's that has consequences that reach further 
than anyone foresees..... but that's OK, because it is 
fundamentally the correct course of action; its implications, 
foreseen and unforeseen, will be correct.
Agreed.
I hope that "Agreed" was referring to some bits from the other 
paragraphs, and that you don't seriously agree with that 
blatantly self-contradictory statement about unforeseeable 
unforeseens :p
 One related point that has been discussed only a little is the 
 competitive aspect of it all. Generating fast code is of 
 paramount importance for D's survival and thriving in the 
 market. Competition in language design and implementation is 
 acerbic and only getting more cutthroat. In the foreseeable 
 future efficiency will become more important at scale seeing as 
 data is growing and frequency scaling has stalled.
Would you care to address the questions about performance raised in the OP?
 Availing ourselves of a built-in "assert" that has a meaning 
 and informativeness unachievable to e.g. a C/C++ macro is a 
 very important and attractive competitive advantage compared to 
 these and other languages.
Not really, you can redefine the C macro to behave exactly as 
proposed, using compiler-specific commands to invoke undefined 
behavior. Didn't you say in the other thread that you tried 
exactly that?
 Walter has always meant assert the way he discusses it today. 
 Has he (and subsequently he and I) been imprecise in 
 documenting it? Of course, but that just means it's Tuesday.

 That said, should we proceed carefully about realizing this 
 advantage? Of course; that's a given. But I think it's very 
 important to fully understand the advantages of gaining an edge 
 over the competition.
Please comment on the concerns raised by the OP.
Aug 03 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 3:26 PM, David Bregman wrote:
 On Sunday, 3 August 2014 at 22:15:52 UTC, Andrei Alexandrescu wrote:
 One related point that has been discussed only a little is the
 competitive aspect of it all. Generating fast code is of paramount
 importance for D's survival and thriving in the market. Competition in
 language design and implementation is acerbic and only getting more
 cutthroat. In the foreseeable future efficiency will become more
 important at scale seeing as data is growing and frequency scaling has
 stalled.
Would you care to address the questions about performance raised in the OP?
I thought I just did.
 Availing ourselves of a built-in "assert" that has a meaning and
 informativeness unachievable to e.g. a C/C++ macro is a very important
 and attractive competitive advantage compared to these and other
 languages.
Not really, you can redefine the C macro to behave exactly as proposed, using compiler specific commands to invoke undefined behavior. Didn't you say in the other thread that you tried exactly that?
That might be possible, but one thing I was discussing with Walter (reverse flow analysis) may be more difficult with the traditional definition of assert. Also I'm not sure whether the C and C++ standards require assert to do nothing in NDEBUG builds.
 Walter has always meant assert the way he discusses it today. Has he
 (and subsequently he and I) been imprecise in documenting it? Of
 course, but that just means it's Tuesday.

 That said, should we proceed carefully about realizing this advantage?
 Of course; that's a given. But I think it's very important to fully
 understand the advantages of gaining an edge over the competition.
Please comment on the concerns raised by the OP.
Probably not - there's little motivation to do so. The original 
post is little else than a self-important rehash of a matter in 
which everybody has stated their opinion, several times, in an 
exchange that has long since run its course. Having everyone 
paste their thoughts once again seems counterproductive.

Andrei
Aug 03 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Monday, 4 August 2014 at 00:24:19 UTC, Andrei Alexandrescu 
wrote:
 On 8/3/14, 3:26 PM, David Bregman wrote:
 On Sunday, 3 August 2014 at 22:15:52 UTC, Andrei Alexandrescu 
 wrote:
 One related point that has been discussed only a little is the
 competitive aspect of it all. Generating fast code is of paramount
 importance for D's survival and thriving in the market. Competition in
 language design and implementation is acerbic and only getting more
 cutthroat. In the foreseeable future efficiency will become more
 important at scale seeing as data is growing and frequency scaling has
 stalled.
Would you care to address the questions about performance raised in the OP?
I thought I just did.
You made some generic statements about performance being good. This is obvious and undisputed. You did not answer any concerns raised in the OP. I am left to wonder if you even read it.
 Availing ourselves of a built-in "assert" that has a meaning and
 informativeness unachievable to e.g. a C/C++ macro is a very important
 and attractive competitive advantage compared to these and other
 languages.
Not really, you can redefine the C macro to behave exactly as proposed, using compiler specific commands to invoke undefined behavior. Didn't you say in the other thread that you tried exactly that?
That might be possible, but one thing I was discussing with Walter (reverse flow analysis) may be more difficult with the traditional definition of assert. Also I'm not sure whether the C and C++ standards require assert to do nothing in NDEBUG builds.
 Walter has always meant assert the way he discusses it today. Has he
 (and subsequently he and I) been imprecise in documenting it? Of
 course, but that just means it's Tuesday.

 That said, should we proceed carefully about realizing this advantage?
 Of course; that's a given. But I think it's very important to fully
 understand the advantages of gaining an edge over the competition.
Please comment on the concerns raised by the OP.
Probably not - there's little motivation to do so. The original post is little more than a self-important rehash of a matter in which everybody has stated their opinion, several times, in an exchange that has long run its course. Having everyone paste their thoughts once again seems counterproductive.
Wow. Don't pretend the questions are all "asked and answered". The concerns are legitimate, but the responses so far have been mostly arrogant handwaving. The fact that you believe you answered the performance concerns merely by stating "performance is important to make D competitive" is a case in point. There has been no evidence presented that there are any nontrivial performance gains to be had by reusing information from asserts. There has been no case made that the performance gains (if they exist) justify code breakage and other issues. There has been no effort to determine if there are alternate ways to achieve the goals which satisfy all groups. I could go on and on, but I refer you back to the OP. I really believe this whole thing could be handled much better; it doesn't have to be a zero-sum game between the two sides of this issue. That's why I bothered to write the post, to try to achieve that.
Aug 03 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 5:57 PM, David Bregman wrote:
 On Monday, 4 August 2014 at 00:24:19 UTC, Andrei Alexandrescu wrote:
 On 8/3/14, 3:26 PM, David Bregman wrote:
 On Sunday, 3 August 2014 at 22:15:52 UTC, Andrei Alexandrescu wrote:
 One related point that has been discussed only a little is the
 competitive aspect of it all. Generating fast code is of paramount
 importance for D's survival and thriving in the market. Competition in
 language design and implementation is acerbic and only getting more
 cutthroat. In the foreseeable future efficiency will become more
 important at scale seeing as data is growing and frequency scaling has
 stalled.
Would you care to address the questions about performance raised in the OP?
I thought I just did.
You made some generic statements about performance being good. This is obvious and undisputed. You did not answer any concerns raised in the OP. I am left to wonder if you even read it.
I did read it. Forgive me, but I don't have much new to answer to it. It seems you consider the lack of a long answer accompanied by research and measurements offensive, and you also find my previous answers arrogant. This, to continue what I was mentioning in another post, is the kind of stuff I find difficult to answer meaningfully. Andrei
Aug 03 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Monday, 4 August 2014 at 01:17:36 UTC, Andrei Alexandrescu 
wrote:
 On 8/3/14, 5:57 PM, David Bregman wrote:
 On Monday, 4 August 2014 at 00:24:19 UTC, Andrei Alexandrescu 
 wrote:
 On 8/3/14, 3:26 PM, David Bregman wrote:
 On Sunday, 3 August 2014 at 22:15:52 UTC, Andrei Alexandrescu wrote:
 One related point that has been discussed only a little is the
 competitive aspect of it all. Generating fast code is of paramount
 importance for D's survival and thriving in the market. Competition in
 language design and implementation is acerbic and only getting more
 cutthroat. In the foreseeable future efficiency will become more
 important at scale seeing as data is growing and frequency scaling has
 stalled.
Would you care to address the questions about performance raised in the OP?
I thought I just did.
You made some generic statements about performance being good. This is obvious and undisputed. You did not answer any concerns raised in the OP. I am left to wonder if you even read it.
I did read it. Forgive me, but I don't have much new to answer to it. It seems you consider the lack of a long answer accompanied by research and measurements offensive, and you also find my previous answers arrogant. This, to continue what I was mentioning in another post, is the kind of stuff I find difficult to answer meaningfully.
Well, I don't want this to devolve to the ad hominem level. I never used the word offensive by the way, though I will admit to being temporarily offended by your description of my carefully constructed post as a self-important rehash :) Basically, I didn't find your reply useful because, as I said, you were simply stating a generality about performance (which I agree with), and not addressing any concerns at all. If you don't have time to address this stuff right now, I completely understand, you are an important and busy person. But please don't give a generality or dodge the question, and then pretend the issue is addressed. This is what I call arrogant and it is worse than no reply at all. w.r.t the one question about performance justification: I'm not necessarily asking for research papers and measurements, but based on these threads I'm not aware that there is any justification at all. For all I know this is all based on a wild guess that it will help performance "a lot", like someone who optimizes without profiling first. That certainly isn't enough to justify code breakage and massive UB injection, is it? I hope we can agree on that much at least!
Aug 03 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 6:59 PM, David Bregman wrote:
 w.r.t the one question about performance justification: I'm not
 necessarily asking for research papers and measurements, but based on
 these threads I'm not aware that there is any justification at all. For
 all I know this is all based on a wild guess that it will help
 performance "a lot", like someone who optimizes without profiling first.
 That certainly isn't enough to justify code breakage and massive UB
 injection, is it? I hope we can agree on that much at least!
I think at this point (without more data) a bit of trust in one's experience would be needed. I've worked on performance on and off for years, and so has Walter. We have plenty of war stories that inform our expertise in the matter, including weird stuff like "swap these two enum values and you'll get a massive performance regression although the code is correct either way". I draw from numerous concrete cases that the right/wrong optimization at the right/wrong place may well be the difference between winning and losing. Consider the recent php engine that gets within 20% of hhvm; heck, I know where to go to make hhvm 20% slower with 50 lines of code (compare at 2M+). Conversely, gaining those 20% took months multiplied by Facebook's best engineers. Efficiency is hard to come by and easy to waste. I consider Walter's take on "assert" a modern, refreshing take on an old pattern that nicely preserves its spirit, and a good opportunity and differential advantage for D. If anything, these long threads have strengthened that belief. They have also clarified to me that: (a) We must make sure we don't transform safe code into unsafe code; in the first approximation that may simply mean assert() has no special meaning in release mode. Also, bounds checking would probably need to not be elided by assert. I consider these challenging but in good, gainful ways. (b) Deployment of optimizations must be carefully staggered and documented. Andrei
Aug 03 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 8:22 PM, Andrei Alexandrescu wrote:
 (a) We must make sure we don't transform  safe code into unsafe code; in
 the first approximation that may simply mean assert() has no special
 meaning in release mode.
... in safe code! -- Andrei
Aug 03 2014
prev sibling next sibling parent "David Bregman" <drb sfu.ca> writes:
On Monday, 4 August 2014 at 03:22:51 UTC, Andrei Alexandrescu 
wrote:
 On 8/3/14, 6:59 PM, David Bregman wrote:
 w.r.t the one question about performance justification: I'm not
 necessarily asking for research papers and measurements, but based on
 these threads I'm not aware that there is any justification at all. For
 all I know this is all based on a wild guess that it will help
 performance "a lot", like someone who optimizes without profiling first.
 That certainly isn't enough to justify code breakage and massive UB
 injection, is it? I hope we can agree on that much at least!
I think at this point (without more data) a bit of trust in one's experience would be needed. I've worked on performance on and off for years, and so has Walter. We have plenty of war stories that inform our expertise in the matter, including weird stuff like "swap these two enum values and you'll get a massive performance regression although the code is correct either way". I draw from numerous concrete cases that the right/wrong optimization at the right/wrong place may well be the difference between winning and losing. Consider the recent php engine that gets within 20% of hhvm; heck, I know where to go to make hhvm 20% slower with 50 lines of code (compare at 2M+). Conversely, gaining those 20% took months multiplied by Facebook's best engineers. Efficiency is hard to come by and easy to waste. I consider Walter's take on "assert" a modern, refreshing take on an old pattern that nicely preserves its spirit, and a good opportunity and differential advantage for D. If anything, these long threads have strengthened that belief. They have also clarified to me that: (a) We must make sure we don't transform safe code into unsafe code; in the first approximation that may simply mean assert() has no special meaning in release mode. Also, bounds checking would probably need to not be elided by assert. I consider these challenging but in good, gainful ways. (b) Deployment of optimizations must be carefully staggered and documented. Andrei
First of all, thank you for the reply. I agree with nearly everything you say. I also have significant experience with code optimization. I greatly enjoyed the talk you gave on C++ optimization, partly because it validated what I've spent so much of my own efforts doing. I think we reach different conclusions from our experience though: my feeling is that typical asserts are unlikely to contain much info that can give a speedup. This is not to say that the compiler can't be helped by extra information; on the contrary, I wholeheartedly believe it can. However I would guess this will usually require the asserts to be specifically written for that purpose, using inside knowledge about the kinds of information the optimizer is capable of using. In the end there isn't a substitute for measurement, so if we rely on experience we're both just guessing. Is it really justified to say that we're going to break stuff on a hunch it'll help performance? Considering the downsides to reusing existing asserts, what if you're wrong about performance? If new, specialized asserts need to be written anyway, we might as well use a new keyword and avoid all the downsides, essentially giving the best of both worlds. Also, I'm still curious about how you are evaluating the performance tradeoff in the first place, or do you even see it as a tradeoff? Is your estimation of the downside so small that any performance increase at all is sufficient to justify semantic change, UB injection and code breakage? If so then I see why you treat it as a foregone conclusion; certainly in a large enough codebase there will be some asserts here and there that allow you to shave off some instructions.
Aug 03 2014
prev sibling parent "Kagamin" <spam here.lot> writes:
On Monday, 4 August 2014 at 03:22:51 UTC, Andrei Alexandrescu 
wrote:
 Efficiency is hard to come by and easy to waste.
It's perfectly understandable why one would want unsafe optimizations, and D already has a way to provide unsafe features: safe is the default, and unsafe is possible when requested explicitly. I'd say an -Ounsafe switch would be good; it would enable various unsafe optimizations (other examples are fast math and the no-signed-overflow assumption). They just shouldn't be forced on everyone; then you will get the upside you want and others won't get the downside they didn't want. Efficiency is not very hard to find elsewhere: `assert` is currently not the most promising direction for optimizations. I'd recommend improving inlining and deep inlining of ranges so that they become as fast as hand-written loops or even faster. That would be much more beneficial. Or ask bearophile, who may have more ideas for very beneficial optimizations, or ask everyone.
Aug 05 2014
prev sibling next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
On 04.08.2014 00:15, Andrei Alexandrescu wrote:
 That said, should we proceed carefully about realizing this advantage?
 Of course; that's a given. But I think it's very important to fully
 understand the advantages of gaining an edge over the competition.
Gaining an edge over the competition? "A new DMD release broke my code in a totally unexpected way and people tell me it's because I'm using assert() wrong. I've been using it like this in C/C++/Java/Python and D since forever and *now*, after >10 years, they changed D's meaning of assert() to somehow imply assume() and optimize my safety-checks away.. they pretend it was always planned like this but they never got around to telling anyone until recently. It took me a week to find this freaking bug!" Doesn't really sound like the kind of advantage over the competition that many people would appreciate. If some rant like this (from Reddit or whatever) is the first impression someone gets of D, he's unlikely to ever take a closer look, regardless of the many merits the language actually has. (Yes, Johannes Pfau actually brought up a similar argument somewhere in the depths of one of the other threads) D could still get this kind of edge over the competition by doing these kinds of optimizations when another keyword (like assume()) is used - without breaking any code. Cheers, Daniel
Aug 03 2014
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
04-Aug-2014 02:35, Daniel Gibson writes:
 On 04.08.2014 00:15, Andrei Alexandrescu wrote:
 That said, should we proceed carefully about realizing this advantage?
 Of course; that's a given. But I think it's very important to fully
 understand the advantages of gaining an edge over the competition.
Gaining an edge over the competition? "A new DMD release broke my code in a totally unexpected way and people tell me it's because I'm using assert() wrong. I've been using it like this in C/C++/Java/Python and D since forever and *now*, after >10 years, they changed D's meaning of assert() to somehow imply assume() and optimize my safety-checks away.. they pretend it was always planned like this but they never got around to telling anyone until recently. It took me a week to find this freaking bug!"
Wait a sec - it's not any better even today, it already strips them away. So unless debugging "-release -O -inline" programs is considered the norm, nothing changes. -- Dmitry Olshansky
Aug 03 2014
parent Daniel Gibson <metalcaedes gmail.com> writes:
On 04.08.2014 00:45, Dmitry Olshansky wrote:
 04-Aug-2014 02:35, Daniel Gibson writes:
 On 04.08.2014 00:15, Andrei Alexandrescu wrote:
 That said, should we proceed carefully about realizing this advantage?
 Of course; that's a given. But I think it's very important to fully
 understand the advantages of gaining an edge over the competition.
Gaining an edge over the competition? "A new DMD release broke my code in a totally unexpected way and people tell me it's because I'm using assert() wrong. I've been using it like this in C/C++/Java/Python and D since forever and *now*, after >10 years, they changed D's meaning of assert() to somehow imply assume() and optimize my safety-checks away.. they pretend it was always planned like this but they never got around to telling anyone until recently. It took me a week to find this freaking bug!"
Wait a sec - it's not any better even today, it already strips them away. So unless debugging "-release -O -inline" programs is considered the norm, nothing changes.
It strips them away, but it doesn't eliminate other code based on the (removed) assertion. See my other post for an example.
Aug 03 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 3:35 PM, Daniel Gibson wrote:
 On 04.08.2014 00:15, Andrei Alexandrescu wrote:
 That said, should we proceed carefully about realizing this advantage?
 Of course; that's a given. But I think it's very important to fully
 understand the advantages of gaining an edge over the competition.
Gaining an edge over the competition?
Yes, as I explained.
 "A new DMD release broke my code in a totally unexpected way and people
 tell me it's because I'm using assert() wrong.
This has been discussed several times, and I agree there's a downside. All I want to do is raise awareness of the upside, which is solid but probably less obvious to some. There's no need to trot out again the downside that has been mentioned many times already.
 I've been using it like this in C/C++/Java/Python and D since forever
 and *now*, after >10years, they changed D's meaning of assert() to
 somehow imply assume() and optimize my safety-checks away.. they pretend
 it was always planned like this but they never got around to tell anyone
 until recently.
 It took me a week to find this freaking bug!"

 Doesn't really sound like the kind of advantage over the competition
 that many people would appreciate.

 If some rant like this (from Reddit or whatever) is the first impression
 someone gets of D, he's unlikely to ever take a closer look, regardless
 of the many merits the language actually has.
From what I remember there has been good reception (both on reddit and at work) of increased aggressiveness of compiler optimizations.
 (Yes, Johhannes Pfau actually brought a similar  argument somewhere in
 the depth of one of the other threads)

 D could still get this kind of edge over the competition by doing these
 kind of optimizations when another keyword (like assume()) is used -
 without breaking any code.
I don't think D will add assume(). Andrei
Aug 03 2014
parent reply Daniel Gibson <metalcaedes gmail.com> writes:
On 04.08.2014 02:28, Andrei Alexandrescu wrote:
 On 8/3/14, 3:35 PM, Daniel Gibson wrote:
 On 04.08.2014 00:15, Andrei Alexandrescu wrote:
 That said, should we proceed carefully about realizing this advantage?
 Of course; that's a given. But I think it's very important to fully
 understand the advantages of gaining an edge over the competition.
Gaining an edge over the competition?
Yes, as I explained.
 "A new DMD release broke my code in a totally unexpected way and people
 tell me it's because I'm using assert() wrong.
This has been discussed several times, and I agree there's a downside. All I want to do is raise awareness of the upside, which is solid but probably less obvious to some. There's no need to trot out again the downside that has been mentioned many times already.
Ok, so you agree that there's a downside and code (that you consider incorrect, but that most probably exists and works ok so far) will *silently* break (as in: the compiler won't tell you about it). So when should this change be introduced? In 2.x or in D3? I don't really like the idea of introducing a silently breaking change in a minor version - and it destroys trust in future decisions for D. Cheers, Daniel
Aug 03 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 5:40 PM, Daniel Gibson wrote:
 Ok, so you agree that there's a downside and code (that you consider
 incorrect, but that most probably exists and works ok so far) will
 *silently* break (as in: the compiler won't tell you about it).
Yes, I agree there's a downside. I missed the part where you agreed there's an upside :o).
 So when should this change be introduced? In 2.x or in D3?
More aggressive optimizations should be introduced gradually in future releases of the D compilers. I think your perception of the downside is greater, and that of the upside is lesser, than mine.
 I don't really like the idea of introducing a silently breaking change
 in a minor version - and it destroys trust in future decisions for D.
I understand. At some point there are judgment calls to be made that aren't going to please everybody. Andrei
Aug 03 2014
parent Daniel Gibson <metalcaedes gmail.com> writes:
On 04.08.2014 03:02, Andrei Alexandrescu wrote:
 On 8/3/14, 5:40 PM, Daniel Gibson wrote:
 Ok, so you agree that there's a downside and code (that you consider
 incorrect, but that most probably exists and works ok so far) will
 *silently* break (as in: the compiler won't tell you about it).
Yes, I agree there's a downside. I missed the part where you agreed there's an upside :o).
I see a small upside in the concept of "syntax that tells the compiler it can take something for granted for optimization and that causes an error in debug mode". For me this kind of optimization is similar to GCC's __builtin_expect() to aid branch-prediction: probably useful to get even more performance, but I guess I wouldn't use it in everyday code. However, I see no upside in redefining an existent keyword (that had a different meaning.. or at least behavior.. before and in most programming languages) to achieve this. /Maybe/ an attribute for assert() would also be ok, so we don't need a new keyword: optimize assert(x); or hint, promise, always, for_real, whatever. Cheers, Daniel
Aug 03 2014
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/04/2014 12:15 AM, Andrei Alexandrescu wrote:
 I suspect it is one of those ideas of Walter's that has consequences
 that reach further than anyone foresees..... but that's OK, because it
 is fundamentally the correct course of action, it's implications
 foreseen and unforeseen will be correct.
Agreed.
No, please hold on. Walter is not a supernatural being.
 Walter has always meant assert the way he discusses it today.
This argument has no merit. Please stop bringing it up.
 That said, should we proceed carefully about realizing this advantage? Of
course; that's a given.
That is reasonable.
But I think it's very important to fully understand
 the advantages of gaining an edge over the competition.
Note that this is achievable without claiming the Humpty Dumpty privilege once again. Furthermore the potential for the development of concepts is actually usually larger if concepts stay properly separated from the beginning. E.g. the current proposal already has the issue that an assumption of unreachability cannot be expressed in the straightforward way:

switch(x){
    // ...
    default: assert(0); // cannot be optimized away in -release
}
Aug 03 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 4:01 PM, Timon Gehr wrote:
 On 08/04/2014 12:15 AM, Andrei Alexandrescu wrote:
 I suspect it is one of those ideas of Walter's that has consequences
 that reach further than anyone foresees..... but that's OK, because it
 is fundamentally the correct course of action, it's implications
 foreseen and unforeseen will be correct.
Agreed.
No, please hold on. Walter is not a supernatural being.
There's something to be said about vision and perspective.
 Walter has always meant assert the way he discusses it today.
This argument has no merit. Please stop bringing it up.
Actually it does offer value: for a large fragment of the discussion, Walter has been confused that people have a very different understanding of assert than his. Andrei
Aug 03 2014
parent Daniel Gibson <metalcaedes gmail.com> writes:
On 04.08.2014 02:30, Andrei Alexandrescu wrote:
 On 8/3/14, 4:01 PM, Timon Gehr wrote:
 On 08/04/2014 12:15 AM, Andrei Alexandrescu wrote:
 I suspect it is one of those ideas of Walter's that has consequences
 that reach further than anyone foresees..... but that's OK, because it
 is fundamentally the correct course of action, it's implications
 foreseen and unforeseen will be correct.
Agreed.
No, please hold on. Walter is not a supernatural being.
There's something to be said about vision and perspective.
 Walter has always meant assert the way he discusses it today.
This argument has no merit. Please stop bringing it up.
Actually it does offer value: for a large fragment of the discussion, Walter has been confused that people have a very different understanding of assert than his.
Yes, this kinda helps understanding Walter's point. But as his point has only been communicated to *you*, not D users in general, you (and Walter) could be more understanding towards them being surprised and confused by this change of assert()'s semantics. Instead you insist that your interpretation of what assert() should *mean* is the ultimate truth, even though assert() doesn't *behave* that way in current D or any popular programming language I know of. BTW, TCPL ("K&R") (Second Edition) defines assert as: "The assert macro is used to add diagnostics to programs: void assert(int expression) If expression is zero when assert(expression) is executed, the assert macro will print on stderr a message, (...) It then calls abort to terminate execution. (...) If NDEBUG is defined at the time <assert.h> is included, the assert macro is ignored." Of course K&R is not the ultimate truth, but it shows that there have been definitions (by clever people!) of assert() that contradict yours for a long time. Cheers, Daniel
Aug 03 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/3/2014 4:01 PM, Timon Gehr wrote:
 Walter is not a supernatural being.
Yes, I am. I demand tribute.
Aug 05 2014
prev sibling next sibling parent "David Bregman" <drb sfu.ca> writes:
On Sunday, 3 August 2014 at 21:57:08 UTC, John Carter wrote:

 One "near term" implication is to permit deeper static checking 
 of the code.
 Both in terms of "Well, actually there is a code path in which 
 the assert expression could be false, flag it with a warning" 
 and in terms of "There is a code path which is unused / 
 incorrect / erroneous if the assert expression is true", flag 
 as an error/warning.

 Furthermore, in the presence of the deeper compile time 
 function evaluation, I suspect we will get deeper and even more 
 suprising consequences from this decision.

 Suddenly we have, at compile time, an expression we know to be 
 true, always, at run time. Thus where possible, the compiler 
 can infer as much as it can from this.

 The implications of that will be very very interesting and far 
 reaching.
I totally agree, static analysis tools should consider information contained in asserts. In the case of C/C++, I believe many of the analysis tools already do this. That doesn't mean it's a good idea for this information to be used for optimization though, for reasons explained in the OP.
 As I said, this choice will have very far reaching and 
 unforeseen and unforeseeable consequences.... but that's OK, 
 since it is fundamentally the correct choice, those 
 consequences will be correct too.
This is mystical-sounding gibberish. If the consequences are unforeseen and unforeseeable, then by definition you cannot foresee that they are correct.
Aug 03 2014
prev sibling next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 3 August 2014 at 21:57:08 UTC, John Carter wrote:
 consequences that reach further than anyone foresees..... but 
 that's OK, because it is fundamentally the correct course of 
 action, it's implications foreseen and unforeseen will be 
 correct.
The implications are foreseen. Any assert that depends on any kind of notion of progress over time risks blowing up random logic undetected with a decent optimizer (> dmd). But go ahead. This will lead to a fork.
Aug 03 2014
parent reply "John Carter" <john.carter taitradio.com> writes:
On Sunday, 3 August 2014 at 22:19:16 UTC, Ola Fosheim Grøstad 
wrote:

 But go ahead. This will lead to a fork.
What should fork is the two opposing intentions for assert. They should have two different names and different consequences. On Sunday, 3 August 2014 at 22:18:29 UTC, John Carter wrote:
 It comes down to two opposing view of what we use asserts for.
To give a more concrete example of this... in the team I work with we have the following issue. When the DbC type programmers turn "asserts are fatal" on, we get asserts firing all over the place in non-DbC programmers' code. On closer inspection these come down to things like "stupid factory didn't connect cable A to device B, the installation instructions are clear, the cable should always be attached in the production model". And the solution is one of...
* Find a Device B and plug cable A into it.
* There is a bug somewhere in the firmware.
* There is a bug in the firmware of device B.
* You have a debugger in the entrails of device B, so the heartbeat stopped.
* Something somewhere increased the latency so the timeout fired; maybe increase the timeout.
Whereas for DbC programmers a pre-condition assert firing means _very_ directly that the function that _directly_ invoked me is clearly defective in this manner. The bug is undoubtedly there; there may be a bug elsewhere as well, but it is undoubtedly a bug in my calling routine if it let defective values propagate as far as me. Or if a postcondition assert fired, it means _this_ function is defective in this manner. The simple harsh fact is that DbC type programmers mean something completely different by "assert" than non-DbC type programmers do, yet unfortunately both meanings are mapped onto the same thing with the same name. The obvious simple correct resolution is to give them two different names.
Aug 03 2014
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/04/2014 12:51 AM, John Carter wrote:
 On Sunday, 3 August 2014 at 22:19:16 UTC, Ola Fosheim Grøstad wrote:

 But go ahead. This will lead to a fork.
What should fork is the two opposing intentions for assert. They should have two different names and different consequences.
Yes. :)
Aug 03 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 3 August 2014 at 23:05:23 UTC, Timon Gehr wrote:
 On 08/04/2014 12:51 AM, John Carter wrote:
 But go ahead. This will lead to a fork.
What should fork is the two opposing intentions for assert. They should have two different names and different consequences.
Yes. :)
If "assert" ends up having assume semantics then it basically means that you will have to rewrite all libraries. Switching the names of "assert" and "assume" is comparable to asking me to drive a car where the accelerator and brake pedals have switched positions. Adjusting the compiler is less work…
Aug 04 2014
prev sibling parent "David Bregman" <drb sfu.ca> writes:
On Sunday, 3 August 2014 at 22:51:58 UTC, John Carter wrote:
 The obvious simple correct resolution is to give them two 
 different names.
Agreed, but it remains to be seen whether those in favor of the original proposal will agree as well.
Aug 03 2014
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 3 August 2014 at 21:57:08 UTC, John Carter wrote:
 On Sunday, 3 August 2014 at 19:47:27 UTC, David Bregman wrote:

 Walter has proposed a change to D's assert function as follows 
 [1]:
 "The compiler can make use of assert expressions to improve 
 optimization, even in -release mode."
Hmm. I really, really do like that idea. I suspect it is one of those ideas of Walter's whose consequences reach further than anyone foresees... but that's OK, because it is fundamentally the correct course of action, and its implications, foreseen and unforeseen, will be correct. One "near term" implication is to permit deeper static checking of the code.
Allow me to chime in. I don't have much time to follow the whole thread, but I have had this in mind for quite a while. First things first: the proposed behavior is what I had in mind for SDC since pretty much day 1. It already uses hints to tell the optimizer a branch won't be taken, but I definitely want to go further. By definition, when an assert that would have failed in debug has been removed in release, you are in undefined-behavior land already. So there is no reason not to optimize.
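The kind of optimizer hint mentioned here can be sketched in C with GCC's __builtin_expect (the likely/unlikely macro names are a common convention, not part of the C standard; how SDC does this internally is not shown in the thread):

```c
#include <assert.h>

/* Branch-prediction hints in the style of GCC's __builtin_expect:
 * the condition is still tested at run time; the compiler merely
 * lays out the code so the expected path falls through. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int checked_div(int num, int den)
{
    if (unlikely(den == 0))   /* hinted cold, but never removed */
        return 0;
    return num / den;
}
```

Note that this is only a hint: unlike the proposed assume semantics, the check is never deleted, so a false condition cannot introduce undefined behavior.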
Aug 03 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Monday, 4 August 2014 at 02:40:49 UTC, deadalnix wrote:
 Allow me to chime in. I don't have much time to follow the 
 whole thread, but I have had this in mind for quite a while.

 First things first: the proposed behavior is what I had in mind 
 for SDC since pretty much day 1. It already uses hints to tell 
 the optimizer a branch won't be taken, but I definitely want to 
 go further.
Not everyone had that definition in mind when writing their asserts.
 By definition, when an assert has been removed in release that 
 would have failed in debug, you are in undefined behavior land 
 already. So there is no reason not to optimize.
By the new definition, yes. But is it reasonable to change the definition, and then retroactively declare previous code broken? Maybe the ends justify the means in this case, but it certainly isn't obvious that they do. I don't understand why breaking code is sacrilege one time, yet the next time can be done without any justification.
Aug 03 2014
parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 4 August 2014 at 02:56:35 UTC, David Bregman wrote:
 On Monday, 4 August 2014 at 02:40:49 UTC, deadalnix wrote:
 Allow me to chime in. I don't have much time to follow the 
 whole thread, but I have had this in mind for quite a while.

 First things first: the proposed behavior is what I had in mind 
 for SDC since pretty much day 1. It already uses hints to tell 
 the optimizer a branch won't be taken, but I definitely want to 
 go further.
Not everyone had that definition in mind when writing their asserts.
 By definition, when an assert has been removed in release that 
 would have failed in debug, you are in undefined behavior land 
 already. So there is no reason not to optimize.
By the new definition, yes. But is it reasonable to change the definition, and then retroactively declare previous code broken? Maybe the ends justify the means in this case, but it certainly isn't obvious that they do. I don't understand why breaking code is sacrilege one time, yet the next time can be done without any justification.
The fact that the compiler can optimize based on assert is not new in the D world. Maybe it wasn't advertised properly, but it was always an option. If one wants to make sure a check is done, one can use expect.
Aug 03 2014
prev sibling next sibling parent reply Martin Krejcirik <mk-junk i-line.cz> writes:
On 3.8.2014 21:47, David Bregman wrote:
 Walter has proposed a change to D's assert function as follows [1]:
 "The compiler can make use of assert expressions to improve
 optimization, even in -release mode."
Couldn't this new assert behaviour be introduced as a new optimization switch? Say -Oassert? It would be off by default and would work both in debug and release mode. -- mk
Aug 03 2014
next sibling parent "David Bregman" <drb sfu.ca> writes:
On Sunday, 3 August 2014 at 23:24:08 UTC, Martin Krejcirik wrote:
 On 3.8.2014 21:47, David Bregman wrote:
 Walter has proposed a change to D's assert function as follows 
 [1]:
 "The compiler can make use of assert expressions to improve
 optimization, even in -release mode."
Couldn't this new assert behaviour be introduced as a new optimization switch ? Say -Oassert ? It would be off by default and would work both in debug and release mode.
That would be an improvement over the current proposal in my opinion, but I see some issues. One is the general argument against more compiler switches: complexity, and people will always enable whatever seems like it might give the fastest code. Another is: how do you mix and match code which is meant to be compiled with or without the switch? I suppose it could also be used as a complement to a new function that has the proposed behavior regardless of switches. Regardless, it goes to show there exists a design space of possible alternatives to the proposal.
Aug 03 2014
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 4:24 PM, Martin Krejcirik wrote:
 On 3.8.2014 21:47, David Bregman wrote:
 Walter has proposed a change to D's assert function as follows [1]:
 "The compiler can make use of assert expressions to improve
 optimization, even in -release mode."
Couldn't this new assert behaviour be introduced as a new optimization switch ? Say -Oassert ? It would be off by default and would work both in debug and release mode.
That sounds like a good idea for careful introducing of assert-driven optimizations. -- Andrei
Aug 03 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/3/2014 4:24 PM, Martin Krejcirik wrote:
 Couldn't this new assert behaviour be introduced as a new optimization
 switch ? Say -Oassert ? It would be off by default and would work both
 in debug and release mode.
It could, but every one of these:

1. doubles the time it takes to test dmd; it doesn't take many of these to render dmd untestable
2. adds confusion for most programmers as to what switch does what
3. adds complexity, i.e. bugs
4. interactions between optimization switches often exhibit emergent behavior, i.e. they are extremely hard to test for
Aug 05 2014
parent "Kagamin" <spam here.lot> writes:
On Wednesday, 6 August 2014 at 00:50:07 UTC, Walter Bright wrote:
 On 8/3/2014 4:24 PM, Martin Krejcirik wrote:
 Couldn't this new assert behaviour be introduced as a new 
 optimization
 switch ? Say -Oassert ? It would be off by default and would 
 work both
 in debug and release mode.
It could, but every one of these:

1. doubles the time it takes to test dmd; it doesn't take many of these to render dmd untestable
2. adds confusion for most programmers as to what switch does what
3. adds complexity, i.e. bugs
If performance is not worth associated complexity, then it doesn't pull its weight.
 4. interactions between optimization switches often exhibits 
 emergent behavior - i.e. extremely hard to test for
Why do you think the emergent behavior is caused by interactions between switches? Do you think the optimizations themselves don't interact? You said it's factorial: is that the number of permutations of switches, or the number of permutations of optimizations? Switches should not be sensitive to permutations. On Wednesday, 6 August 2014 at 08:25:38 UTC, Walter Bright wrote:
 1. it's long with an unappealing hackish look
It's an established practice in D that unsafe features should look unappealing. Example: __gshared.
 3. users will be faced with two kinds of asserts, with a subtle 
 difference that is hard to explain, hard to remember which is 
 which, and will most likely use inappropriately
Andrei already proposed `debug assert` - a safe flavor of the (dangerous by default) assert. So two kinds of assert are inevitable and should be documented, because the user should be informed about dangerous optimizations. But frankly, "the compiler will break your code" is not a subtle difference, and it can be easily explained in just five words. And by the way, why must the safe behavior be invoked with extra syntax? That goes against best D practices and is hence surprising, confusing and counterintuitive.
 I'll sum up with the old saw that any programming problem can 
 be solved with another level of indirection. I submit a 
 corollary that any language issue can be solved by adding 
 another keyword or compiler flag. The (much) harder thing is to 
 solve a problem with an elegant solution that does not involve 
 new keywords/flags, and fits in naturally.
So what is this elegant solution? To break code silently, in the worst possible way, selectively and exactly at the most critical points? Another downside of assert being dangerous is that you can't add it to code freely. Since every assert can order the compiler to break the code, adding an assert becomes a difficult task: the asserted expression (being assumed) begins to require a thorough, future-proof, environment-independent proof, which is very hard to come by. It needs to be an actual proof rather than just a guess, because the tradeoff is to be literally punished by death. Unittests don't provide such proof.
Aug 06 2014
prev sibling next sibling parent reply "Mike Farnsworth" <mike.farnsworth gmail.com> writes:
This all seems to have a very simple solution, to use something 
like: expect()

GCC for example has an intrinsic, __builtin_expect() that is used 
to notify the compiler of a data constraint it can use in 
optimization for branches.  Why not make something like this a 
first-class citizen in D (and even expand the concept to more 
than just branch prediction)?

That way you don't have to hijack the meaning of assert(), but 
optimizations can be made based on the condition.  
__builtin_expect() in gcc usually just figures the expected 
condition is fulfilled the vast majority of the time, but it 
could be expanded to make a lack of fulfillment trigger an 
exception (in non-release mode).  And the compiler is always free 
to optimize with the assumption the expectation is met.

On Sunday, 3 August 2014 at 19:47:27 UTC, David Bregman wrote:
 I am creating this thread because I believe the other ones 
 [1,6] have gotten too bogged down in minutiae and the big 
 picture has gotten lost.

 ...

 References:
 [1]: http://forum.dlang.org/thread/lrbpvj$mih$1 digitalmars.com
 [2]: http://dlang.org/overview.html
 [3]: 
 http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html
 [4]: http://blog.regehr.org/archives/213
 [5]: http://en.wikipedia.org/wiki/Heisenbug
 [6]: 
 http://forum.dlang.org/thread/jrxrmcmeksxwlyuitzqp forum.dlang.org
Aug 03 2014
next sibling parent "David Bregman" <drb sfu.ca> writes:
On Sunday, 3 August 2014 at 23:51:30 UTC, Mike Farnsworth wrote:
 This all seems to have a very simple solution, to use something 
 like: expect()

 GCC for example has an intrinsic, __builtin_expect() that is 
 used to notify the compiler of a data constraint it can use in 
 optimization for branches.  Why not make something like this a 
 first-class citizen in D (and even expand the concept to more 
 than just branch prediction)?

 That way you don't have to hijack the meaning of assert(), but 
 optimizations can be made based on the condition.  
 __builtin_expect() in gcc usually just figures the expected 
 condition is fulfilled the vast majority of the time, but it 
 could be expanded to make a lack of fulfillment trigger an 
 exception (in non-release mode).  And the compiler is always 
 free to optimize with the assumption the expectation is met.
Indeed, having a new function instead of hijacking assert would seem to be the obvious solution. That's really interesting about the possibility of conveying probabilistic information to the compiler. Of course, we'd need different functions for constant axioms and probabilistic ones: we could use (for example) assume() for constants and expect() for probabilities.
Aug 03 2014
prev sibling next sibling parent Daniel Gibson <metalcaedes gmail.com> writes:
Well, this is not just about branch prediction, but about "let the 
compiler assume the condition is always true and let it eliminate code 
that handles other cases".

I /think/ that in C with GCC assume() could be implemented (for release 
mode, otherwise it's just like assert()) like
   #define assume(cond) if(!(cond)) __builtin_unreachable()
I'm not sure what kind of optimizations GCC does based on "unreachable", 
though.
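Fleshed out, the macro sketched above might look like this (a sketch assuming GCC/Clang builtins; assume() is an illustrative name, not a standard C function):

```c
#include <assert.h>

/* In release builds (NDEBUG) the condition becomes a promise to the
 * optimizer and violating it is undefined behavior; in debug builds
 * it is checked like an ordinary assert. */
#ifdef NDEBUG
#  define assume(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
#else
#  define assume(cond) assert(cond)
#endif

int last_element(const int *a, int n)
{
    assume(n > 0);   /* lets the compiler drop any n <= 0 handling */
    return a[n - 1];
}
```

With NDEBUG defined, a compiler is free to delete any code path that could only be reached when n <= 0, which is exactly the release-mode optimization being debated.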

However, something like expect() /might/ be a useful addition to the 
language as well. Maybe as an annotation for if()/else?

Cheers,
Daniel

Am 04.08.2014 01:51, schrieb Mike Farnsworth:
 This all seems to have a very simple solution, to use something like:
 expect()

 GCC for example has an intrinsic, __builtin_expect() that is used to
 notify the compiler of a data constraint it can use in optimization for
 branches.  Why not make something like this a first-class citizen in D
 (and even expand the concept to more than just branch prediction)?

 That way you don't have to hijack the meaning of assert(), but
 optimizations can be made based on the condition. __builtin_expect() in
 gcc usually just figures the expected condition is fulfilled the vast
 majority of the time, but it could be expanded to make a lack of
 fulfillment trigger an exception (in non-release mode).  And the
 compiler is always free to optimize with the assumption the expectation
 is met.

 On Sunday, 3 August 2014 at 19:47:27 UTC, David Bregman wrote:
 I am creating this thread because I believe the other ones [1,6] have
 gotten too bogged down in minutiae and the big picture has gotten lost.

 ...

 References:
 [1]: http://forum.dlang.org/thread/lrbpvj$mih$1 digitalmars.com
 [2]: http://dlang.org/overview.html
 [3]:
 http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html
 [4]: http://blog.regehr.org/archives/213
 [5]: http://en.wikipedia.org/wiki/Heisenbug
 [6]: http://forum.dlang.org/thread/jrxrmcmeksxwlyuitzqp forum.dlang.org
Aug 03 2014
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 4:51 PM, Mike Farnsworth wrote:
 This all seems to have a very simple solution, to use something like:
 expect()

 GCC for example has an intrinsic, __builtin_expect() that is used to
 notify the compiler of a data constraint it can use in optimization for
 branches.  Why not make something like this a first-class citizen in D
 (and even expand the concept to more than just branch prediction)?
__builtin_expect is actually not that. It still generates code when the expression is false. It simply uses the static assumption to minimize jumps and maximize straight execution for the true case. -- Andrei
Aug 03 2014
parent "Mike Farnsworth" <mike.farnsworth gmail.com> writes:
On Monday, 4 August 2014 at 00:34:30 UTC, Andrei Alexandrescu 
wrote:
 On 8/3/14, 4:51 PM, Mike Farnsworth wrote:
 This all seems to have a very simple solution, to use 
 something like:
 expect()

 GCC for example has an intrinsic, __builtin_expect() that is 
 used to
 notify the compiler of a data constraint it can use in 
 optimization for
 branches.  Why not make something like this a first-class 
 citizen in D
 (and even expand the concept to more than just branch 
 prediction)?
__builtin_expect is actually not that. It still generates code when the expression is false. It simply uses the static assumption to minimize jumps and maximize straight execution for the true case. -- Andrei
Yes, that's why I pointed out expanding it to actually throw an exception when the expectation isn't met. I guess that's really more like the assume() that has been mentioned? At EA we used two versions of an assertion: assert(), which compiled out in non-debug builds, etc.; and verify(), which was kept in non-debug builds but just boiled back down to the condition. The latter was for when we relied on the side effects of the logic (used the condition in a real runtime branch), but really we wanted to know if it ever took the else, so to speak, as that was an error we never wanted to ship with. FWIW, at my current job, in C++ we use an assert() that compiles out for final builds (very performance-driven code, so even the conditions tested have to scat), and we also have likely() and unlikely() macros that take advantage of __builtin_expect(). There are only a few places where we do both, where the assertion may be violated and we still want to recover nicely from it, but still don't want the performance suck of the else-case code polluting the instruction cache.
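The two-macro convention described above might be sketched like this (ASSERT/VERIFY and parse_port are illustrative names, not the actual EA or production code):

```c
#include <assert.h>

/* - ASSERT: pure sanity check, compiled out entirely in release;
 * - VERIFY: a real run-time branch whose condition survives release
 *   builds, but which also traps in debug builds when false. */
#ifdef NDEBUG
#  define ASSERT(cond) ((void)0)
#  define VERIFY(cond) (cond)
#else
#  define ASSERT(cond) assert(cond)
#  define VERIFY(cond) ((cond) ? 1 : (assert(0 && #cond), 0))
#endif

int parse_port(int raw)
{
    ASSERT(raw >= 0);           /* gone in release builds */
    if (!VERIFY(raw < 65536))   /* kept as a branch in release too */
        return -1;
    return raw;
}
```

In debug builds a failed VERIFY aborts, so "taking the else" never ships unnoticed; in release builds it degrades gracefully to the plain condition.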
Aug 03 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/3/2014 4:51 PM, Mike Farnsworth wrote:
 This all seems to have a very simple solution, to use something like: expect()
I see code coming that looks like:

   expect(x > 2); // must be true
   assert(x > 2); // check that it is true

All I can think of is, shoot me now :-)
Aug 05 2014
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Wednesday, 6 August 2014 at 00:52:32 UTC, Walter Bright wrote:
 On 8/3/2014 4:51 PM, Mike Farnsworth wrote:
 This all seems to have a very simple solution, to use 
 something like: expect()
I see code coming that looks like:

   expect(x > 2); // must be true
   assert(x > 2); // check that it is true

All I can think of is, shoot me now :-)
How about something like

   expected assert(x > 2);

or

   assumed assert(x > 2);

It wouldn't introduce a new keyword, but still introduces the expected/assumed semantics. You should keep in mind that you might have to make a compromise, regardless of your feelings on the subject. Also, I am going to try to say this in as respectful a way as I can... Please stop responding in such a dismissive way; I think it is already pretty obvious that some are getting frustrated by these threads. Responding in a dismissive way makes it seem like you don't take the arguments seriously.
Aug 05 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/5/14, 11:28 PM, Tofu Ninja wrote:
 On Wednesday, 6 August 2014 at 00:52:32 UTC, Walter Bright wrote:
 On 8/3/2014 4:51 PM, Mike Farnsworth wrote:
 This all seems to have a very simple solution, to use something like:
 expect()
I see code coming that looks like:

   expect(x > 2); // must be true
   assert(x > 2); // check that it is true

All I can think of is, shoot me now :-)
How about something like

   expected assert(x > 2);

or

   assumed assert(x > 2);

It wouldn't introduce a new keyword, but still introduces the expected/assumed semantics. You should keep in mind that you might have to make a compromise, regardless of your feelings on the subject.
I think "assert" is good to use for optimization, and "debug assert" would be a good choice for soft assertions. Care must be exercised with tying new optimizations to build flags.
 Also, I am going to try to say this in as respectful a way as I
 can...

 Please stop responding in such a dismissive way, I think it is
 already pretty obvious that some are getting frustrated by these
 threads. Responding in a dismissive way makes it seem like you
 don't take the arguments seriously.
I have difficulty figuring out how such answers can be considered dismissive. The quoted code is considered an antipattern at least e.g. at my workplace. (It wouldn't pass review, and disproportionate insistence on such might curb one's career.) Even though some might not agree with Walter's opinion, it's entirely reasonable to express dislike of that code; I don't quite get why that would be considered dismissive. I think we're at the point where everybody understands one another, and there must be a way to express polite but definite disagreement. What would that be? Thanks, Andrei
Aug 06 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 07:19:21 UTC, Andrei Alexandrescu 
wrote:
 The quoted code is considered an antipattern at least e.g. at 
 my workplace.
What about:

   if (x == 0) { …free of x… }
   …free of x…
   assume(x != 0);

being equivalent to:

   assume(x != 0);
   if (x == 0) { …free of x… }
   …free of x…
 I think we're at the point where everybody understands one 
 another
Really? I am at the point where I realize that a significant portion of programmers have gullible expectations of their own ability to produce provably correct code, and a very sloppy understanding of what computing is. So now we don't have Design by Contract, but Design by Gullible Assumptions. Great…
Aug 06 2014
prev sibling parent "eles" <eles eles.com> writes:
On Wednesday, 6 August 2014 at 07:19:21 UTC, Andrei Alexandrescu 
wrote:
 On 8/5/14, 11:28 PM, Tofu Ninja wrote:
 On Wednesday, 6 August 2014 at 00:52:32 UTC, Walter Bright 
 wrote:
 On 8/3/2014 4:51 PM, Mike Farnsworth wrote:
 I think "assert" is good to use for optimization, and "debug 
 assert" would be a good choice for soft assertions. Care must
Conceptually, this means a "release assert" (active in both debug and release builds) and a "debug assert" (active only in debug builds). Thus the question: is it acceptable to optimize a (release) build based on code that is present only in another (debug) build?
Aug 06 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/5/2014 11:28 PM, Tofu Ninja wrote:
 Please stop responding in such a dismissive way, I think it is
 already pretty obvious that some are getting frustrated by these
 threads. Responding in a dismissive way makes it seem like you
 don't take the arguments seriously.
I responded to the equivalent design proposal several times already, with detailed answers. This one is shorter, but the essential aspects are there. I know those negative aspects came across because they are addressed with your counter:
 How about something like
  expected assert(x > 2); or  assumed assert(x > 2);
 It wouldn't introduce a new keyword, but still introduces the
expected/assumed semantics.

The riposte:

1. it's long with an unappealing hackish look
2. it follows in the C++ tradition of the best practice being the long ugly way, while the deprecated practice is the straightforward way (see arrays in C++)
3. users will be faced with two kinds of asserts, with a subtle difference that is hard to explain and hard to remember which is which, and they will most likely be used inappropriately
4. everyone who wants faster assert optimizations will have to rewrite their (possibly extensive) use of asserts that we'd told them was best practice. I know I'd be unhappy about having to do such to my D code.
 You should keep in mind that you might have to make a compromise, regardless 
 of your feelings on the subject.
This is not about my feelings, other than my desire to find the best design based on a number of tradeoffs. I'll sum up with the old saw that any programming problem can be solved with another level of indirection. I submit a corollary that any language issue can be solved by adding another keyword or compiler flag. The (much) harder thing is to solve a problem with an elegant solution that does not involve new keywords/flags, and fits in naturally.
Aug 06 2014
next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Wednesday, 6 August 2014 at 08:25:38 UTC, Walter Bright wrote:

Thank you for the well thought out response.

 I responded to the equivalent design proposal several times 
 already, with detailed answers. This one is shorter, but the 
 essential aspects are there. I know those negative aspects came 
 across because they are addressed with your counter:
I was not trying to attack you, but rather inform you. I think that is all I want to say on the subject so I am going to stop talking about it.
 1. it's long with an unappealing hackish look
I find this a positive: the proposed assert has quite dangerous implications, so a slightly "hackish" look will serve to discourage using it when it is not obviously needed.
 2. it follows in the C++ tradition of the best practice being 
 the long ugly way, and the deprecated practice is the 
 straightforward way (see arrays in C++)
That implies that the best practice is your version of assert. As it can introduce undefined behavior, I find that hard to believe.
 3. users will be faced with two kinds of asserts, with a subtle 
 difference that is hard to explain, hard to remember which is 
 which, and will most likely use inappropriately
This is why I think it would be best added as an annotation on assert. People who don't care enough to look up what the extra annotation does will just use assert by itself, have no risk of undefined behavior, and it will behave as expected (similar to how it behaves in other languages).
 4. everyone who wants faster assert optimizations will have to 
 rewrite their (possibly extensive) use of asserts that we'd 
 told them was best practice. I know I'd be unhappy about having 
 to do such to my D code.
This is a fair point, but it would happen the same either way. If someone used assert expecting it to act like the C assert and found out it did not any more, they would be forced to go through their code and change all the asserts. As this is a change, I am inclined to say we should favor the old behavior and not force users of the old version to update.
 This is not about my feelings, other than my desire to find the 
 best design based on a number of tradeoffs.
I was just trying to say that it is a possibility that should not be forgotten. I think I may have said it in a harsher way than I meant to; I apologize. The annotated assert was my attempt at a possible compromise.
Aug 06 2014
prev sibling next sibling parent "David Bregman" <drb sfu.ca> writes:
On Wednesday, 6 August 2014 at 08:25:38 UTC, Walter Bright wrote:
 On 8/5/2014 11:28 PM, Tofu Ninja wrote:
 Please stop responding in such a dismissive way, I think it is
 already pretty obvious that some are getting frustrated by 
 these
 threads. Responding in a dismissive way makes it seem like you
 don't take the arguments seriously.
I responded to the equivalent design proposal several times already, with detailed answers. This one is shorter, but the essential aspects are there. I know those negative aspects came across because they are addressed with your counter:
I don't think I've seen your arguments against adding assume(); last time this was discussed you were still claiming that there was no difference between the two, so we couldn't even get to discussing it!
 How about something like
  expected assert(x > 2); or  assumed assert(x > 2);
 It wouldn't introduce a new keyword, but still introduces the
expected/assumed semantics.

The riposte:

1. it's long with an unappealing hackish look
2. it follows in the C++ tradition of the best practice being the long ugly way, while the deprecated practice is the straightforward way (see arrays in C++)
3. users will be faced with two kinds of asserts, with a subtle difference that is hard to explain and hard to remember which is which, and they will most likely be used inappropriately
4. everyone who wants faster assert optimizations will have to rewrite their (possibly extensive) use of asserts that we'd told them was best practice. I know I'd be unhappy about having to do such to my D code.
Nice, this is progress. You're giving us some actual reasoning! :) Is this the full list of your reasons, or just a subset?

1, 2: Those don't apply to assume() without the annotations, right? I agree that it looks hacky with annotations.

3: If the separation is not made official, a lot of people will end up having to roll their own, potentially with all kinds of names. This is much more fragmenting than just having official assert and assume: myAssert, debugCheck, smurfAssert, etc. Ugh. As for the difference being subtle, hard to remember, understand, or explain: this totally supports my argument. If people don't even understand the subtlety, there's no way they are ready to be inserting undefined behavior everywhere in their code. assume (or your assert) is a dangerous power tool; joe coder probably shouldn't be using it at all. Yet you want it used everywhere by default?

4: This is a valid argument. The flip side is that everyone else has to invent their own function and rewrite their code with it, unless they have 100% faith their code is not buggy, or don't care about undefined behavior (or don't know they care about it until they get bitten by it later). Or if they go for the quick fix by disabling -release, then you just pessimized their code instead of optimizing it. Plus, what about people who miss the change, or unmaintained code? Isn't the default way of doing things to err on the side of not breaking people's code? Why is this time different? I know you hate more switches (for good reason), but this seems like a good case for -Oassert or -Ounsafe, for the elite few people whose code is actually suitable for such dangerous transformation.

I would like to note some additional benefits of separating the two concepts, and hopefully get your thoughts on these. Some of these are repeated from previous posts.

- assert(0) is defined as a special case, so it can't express unreachability. This makes it less powerful for optimizing than it should be. assume(0) does not have this problem, e.g.

   switch (x) { /* ... */ default: assume(0); }

- the proposed assert is not @safe, which creates complications since the current assert is. assume() does not have this problem, as it can be defined as @system from the start.
- with the functions separate, it can be made clear that assume() is a dangerous tool that should be used sparingly. It will mean that code is more searchable and easier to audit: if there is a heisenbug, try a search for @trusted and assume() constructs.
- it retains C compatibility, which is supposed to be a goal of D.
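The switch/default point can be illustrated in C, with GCC's __builtin_unreachable standing in for the hypothetical assume(0) (step_delta and the 0..3 encoding are made-up examples):

```c
#include <assert.h>

/* Marking the default arm unreachable tells the optimizer that dir
 * is always 0..3, so it may emit a bare jump table with no range
 * check. Passing any other value is undefined behavior. */
int step_delta(int dir)  /* caller promises dir is 0..3 */
{
    switch (dir) {
    case 0: return  1;   /* east  */
    case 1: return -1;   /* west  */
    case 2: return  8;   /* south */
    case 3: return -8;   /* north */
    default: __builtin_unreachable();  /* assume(0): never taken */
    }
}
```

An assert(0) in the default arm could not enable this, precisely because its behavior on reaching it is defined.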
Aug 06 2014
prev sibling parent "Kagamin" <spam here.lot> writes:
On Wednesday, 6 August 2014 at 08:25:38 UTC, Walter Bright wrote:
 4. everyone who wants faster assert optimizations will have to 
 rewrite their (possibly extensive) use of asserts that we'd 
 told them was best practice. I know I'd be unhappy about having 
 to do such to my D code.
Also, having the same syntax for both kinds of assert makes it easier to try unsafe optimizations: the code can be written safe, then unsafe optimizations can be tried effortlessly and the performance gains evaluated. In C one would only need to edit the macro to do this. Whether we want to allow such experiments is debatable, but I find it at least reasonable. One may also want the optimizations to propagate backwards for even more performance; this would be a different kind of optimization, which may or may not require yet another syntax.
Aug 06 2014
prev sibling next sibling parent reply "John Carter" <john.carter taitradio.com> writes:
On Sunday, 3 August 2014 at 19:47:27 UTC, David Bregman wrote:

 2. Semantic change.
 The proposal changes the meaning of assert(), which will result 
 in breaking existing code. Regardless of philosophizing about 
 whether or not the code was "already broken" according to some 
 definition of assert, the fact is that shipping programs that 
 worked perfectly well before may no longer work after this 
 change.
Subject to the caveat suggesting having two asserts with different names and different meanings, I am in a position to comment on this one from experience.

So assuming we do have a "hard assert" that is used within the standard libraries and a "soft assert" in user code (unless they explicitly choose to use the "hard assert"....) What happens?

Well, I'm the dogsbody who has the job of upgrading the toolchain and handling the fallout of doing so. So I have been walking multimegaline code bases through every gcc version in the last 15 years.

This is relevant because on every new version they have added stricter warnings and, more importantly, deeper optimizations. It's especially the deeper optimizations that are interesting here. They often come from better data flow analysis, which results in more "insightful" warnings.

So given I'm taking megalines of C/C++ code from a warnings-free state on gcc version N to warnings-free on version N+1, I'll make some empirical observations.

* They have _always_ highlighted dodgy / non-portable / non-standard-compliant code.
* They have quite often highlighted existing defects in the code.
* They have quite often highlighted error handling code as "unreachable", because it is... and the only sane thing to do is delete it.
* They have often highlighted the error handling code of "defensive programmers" as opposed to DbC programmers. Why? Because around 30% of the code of a defensive programmer is error handling crud that has never been executed, not even in development, and hence is untested and unmaintained.

The clean-up effort was often fairly largish, maybe a week or two, but always resulted in better code.

Customer-impacting defects introduced by the new optimizations have been....

a) Very very rare.
b) Invariably from really bad code that was blatantly defective, non-standard-compliant and non-portable.

So what do I expect, from experience, from Walter's proposed change? 
Another guy in this thread complained about the compiler suddenly relying on thousands of global axioms from the core and standard libraries. Yup. Exactly what is going to happen.

As you get...

* more and more optimization passes that rely on asserts,
* in particular pre and post condition asserts within the standard libraries,
* you are going to have flocks of user code that used to compile without warning
* and ran without any known defect...

...suddenly spewing error messages and warnings.

But that's OK. Because I bet 99.999% of those warnings will be pointing straight at bona fide defects.

And yes, this will be a regular feature of life. New version of compiler, new optimization passes, new warnings... That's OK, clean 'em up, and a bunch of latent defects won't come back as customer complaints.
Aug 03 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/3/14, 6:17 PM, John Carter wrote:
 Well, I'm the dogsbody who has the job of upgrading the toolchain and
 handling the fallout of doing so.

 So I have been walking multimegaline code bases through every gcc
 version in the last 15 years.
Truth. This man speaks it. Great post, thanks! Andrei
Aug 03 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Monday, 4 August 2014 at 01:19:28 UTC, Andrei Alexandrescu 
wrote:
 On 8/3/14, 6:17 PM, John Carter wrote:
 Well, I'm the dogsbody who has the job of upgrading the 
 toolchain and
 handling the fallout of doing so.

 So I have been walking multimegaline code bases through every 
 gcc
 version in the last 15 years.
Truth. This man speaks it. Great post, thanks! Andrei
His post basically says that his real life experience leads him to believe that a static analyzer based on using information from asserts will very likely generate a ton of warnings/errors, because real life code is imperfect. In other words, if you use that information to optimize instead, you are going to get a ton of bugs, because the asserts are inconsistent with the code. So his post completely supports the conclusion that you've disagreed with, unless this has convinced you and you're switching sides now (could it be?) :)
Aug 03 2014
parent reply "John Carter" <john.carter taitradio.com> writes:
On Monday, 4 August 2014 at 02:18:12 UTC, David Bregman wrote:
 His post basically says that his real life experience leads him 
 to believe that a static analyzer based on using information 
 from asserts will very likely generate a ton of 
 warnings/errors, because real life code is imperfect.

 In other words, if you use that information to optimize 
 instead, you are going to get a ton of bugs, because the 
 asserts are inconsistent with the code.
No. My experience says deeper optimization comes from deeper understanding of the dataflow, and with deeper understanding of the dataflow come stricter warnings about defective usage.

ie. Good compiler writers, as Walter and the gcc guys clearly are, don't just slap in an optimization pass out of nowhere. They are all too painfully aware that if their optimization pass breaks anything, they will be fending off thousands of complaints that "Optimization X broke....". Compiler users always blame the optimizer long before they blame their crappy code.

Watching the gcc mailing list over the years, those guys bend over backwards to prevent that happening. But since an optimization has to be based on additional hard information, they have, with every new version of gcc, used that information both for warnings and optimization.
Aug 03 2014
next sibling parent "John Carter" <john.carter taitradio.com> writes:
On Monday, 4 August 2014 at 02:31:36 UTC, John Carter wrote:

 But since an optimization has to be based on additional hard 
 information, they have, with every new version of gcc, used 
 that information both for warnings and optimization.
Hmm. Not sure I made that clear. ie. Yes, it is possible that a defect may be injected by an optimization that assumes an assert is true when it isn't. However, experience suggests that many (maybe two full orders of magnitude) more defects will be flagged. ie. In terms of defect reduction it's a big win rather than a loss. The tragedy of C optimization and static analysis is that the language is so loosely defined in terms of how it is used, the compiler has very little to go on. This proposal looks to me to be a Big Win, because it gifts the compiler (and any analysis tools) with a huge amount of eminently usable information.
Aug 03 2014
prev sibling next sibling parent "David Bregman" <drb sfu.ca> writes:
On Monday, 4 August 2014 at 02:31:36 UTC, John Carter wrote:
 On Monday, 4 August 2014 at 02:18:12 UTC, David Bregman wrote:
 His post basically says that his real life experience leads 
 him to believe that a static analyzer based on using 
 information from asserts will very likely generate a ton of 
 warnings/errors, because real life code is imperfect.

 In other words, if you use that information to optimize 
 instead, you are going to get a ton of bugs, because the 
 asserts are inconsistent with the code.
No. My experience says deeper optimization comes from deeper understanding of the dataflow, with deeper understanding of the dataflow comes stricter warnings about defective usage.
Yes, that isn't what is being proposed though. This is about optimization, not warnings or errors.
 ie. A Good compiler writer, as Walter and the gcc guys clearly 
 are, don't just slap in an optimization pass out of nowhere.

 They are all too painfully aware that if their optimization 
 pass breaks anything, they will be fending off thousands of 
 complaints that "Optimization X broke....".
If you read the earlier threads, you will see Walter freely admits this will break code. Actually he says that such code is already broken. This doesn't involve new warnings, it will just break silently. It would be very difficult to do otherwise (see Daniel Gibson's reply to your post).
Aug 03 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/3/2014 7:31 PM, John Carter wrote:
 Compiler users always blame the optimizer long before they blame their crappy
code.

 Watching the gcc mailing list over the years, those guys bend over backwards to
 prevent that happening.

 But since an optimization has to be based on additional hard information, they
 have, with every new version of gcc, used that information both for warnings
and
 optimization.
Recent optimization improvements in gcc and clang have also broken existing code that has worked fine for decades. In particular, overflow checks often now get optimized out, as the check relied on, pedantically, undefined behavior. This is why D has added the core.checkedint module, to have overflow checks that are guaranteed to work.

Another optimization that has broken existing code is removal of dead assignments. This has broken crypto code that would overwrite passwords after using them. It's also why D now has volatileStore() and volatileLoad(), if only someone will pull them.

I.e. silent breakage of existing, working code is hardly unknown in the C/C++ world.
Aug 05 2014
prev sibling next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 04.08.2014 03:17, schrieb John Carter:
 As you get...

 * more and more optimization passes that rely on asserts,
 * in particular pre and post condition asserts within the standard
 libraries,
 * you are going to have flocks of user code that used to compile without
 warning
 * and ran without any known defect...

 ...suddenly spewing error messages and warnings.

 But that's OK.

 Because I bet 99.999% of those warnings will be pointing straight at
 bone fide defects.
Well, that would make the problem more acceptable.

However, it has been argued that it's very hard to warn about code that will be eliminated, because that code often only becomes dead or redundant due to inlining, template instantiation, mixin, ... and you can't warn in those cases. So I doubt that the compiler will warn every time it removes checks that are considered superfluous because of a preceding assert().

Cheers,
Daniel
Aug 03 2014
parent reply "Tove" <tove fransson.se> writes:
On Monday, 4 August 2014 at 01:26:10 UTC, Daniel Gibson wrote:
 Am 04.08.2014 03:17, schrieb John Carter:
 But that's OK.

 Because I bet 99.999% of those warnings will be pointing 
 straight at
 bone fide defects.
Well, that would make the problem more acceptable.. However, it has been argued that it's very hard to warn about code that will be eliminated, because that code often only become dead or redundant due to inlining, template instantiation, mixin, ... and you can't warn in those cases. So I doubt that the compiler will warn every time it removes checks that are considered superfluous because of a preceding assert(). Cheers, Daniel
It is possible, just not as a default-enabled warning.

Some compilers offer optimization diagnostics which can be enabled by a switch. I'm quite fond of those, as it's a much faster way to go through a list of compiler-highlighted failed/successful optimizations than being forced to check the asm output after every new compiler version or minor code refactoring.

In my experience, it actually works fine in huge projects; even if there are false positives, you can analyse what changed from the previous version, as well as ignoring modules which you know are not performance critical.
Aug 03 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/3/2014 7:26 PM, Tove wrote:
 It is possible, just not as a default enabled warning.

 Some compilers offers optimization diagnostics which can be enabled by a
switch,
 I'm quite fond of those as it's a much faster way to go through a list of
 compiler highlighted failed/successful optimizations rather than being forced
to
 check the asm output after every new compiler version or minor code
refactoring.

 In my experience, it actually works fine in huge projects, even if there are
 false positives you can analyse what changes from the previous version as well
 as ignoring modules which you know is not performance critical.
If you build dmd in debug mode, and then run it with -O --c, it will give you a list of all the data flow transformations it does. But the list is a blizzard on non-trivial programs.
Aug 05 2014
next sibling parent "Tove" <tove fransson.se> writes:
On Wednesday, 6 August 2014 at 00:47:28 UTC, Walter Bright wrote:
 If you build dmd in debug mode, and then run it with -O --c, it 
 will give you a list of all the data flow transformations it 
 does.

 But the list is a blizzard on non-trivial programs.
Awesome, thanks! Will give it a whirl, as soon as my vacation is over.
Aug 05 2014
prev sibling parent "Sebastiaan Koppe" <mail skoppe.eu> writes:
On Wednesday, 6 August 2014 at 00:47:28 UTC, Walter Bright wrote:
 On 8/3/2014 7:26 PM, Tove wrote:
 It is possible, just not as a default enabled warning.

 Some compilers offers optimization diagnostics which can be 
 enabled by a switch,
 I'm quite fond of those as it's a much faster way to go 
 through a list of
 compiler highlighted failed/successful optimizations rather 
 than being forced to
 check the asm output after every new compiler version or minor 
 code refactoring.

 In my experience, it actually works fine in huge projects, 
 even if there are
 false positives you can analyse what changes from the previous 
 version as well
 as ignoring modules which you know is not performance critical.
If you build dmd in debug mode, and then run it with -O --c, it will give you a list of all the data flow transformations it does. But the list is a blizzard on non-trivial programs.
What about making a diff file? Or are the transformations so deep that it would be impossible to link them to lines in the source code?
Aug 08 2014
prev sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
This. 1000x this.

Atila

On Monday, 4 August 2014 at 01:17:23 UTC, John Carter wrote:
[...]
Aug 04 2014
parent "David Bregman" <drb sfu.ca> writes:
On Monday, 4 August 2014 at 09:38:26 UTC, Atila Neves wrote:
 This. 1000x this.

 Atila
Yeah, I don't think anyone disagrees with getting better warning and error messages. Static analysis rocks. Anyways I just want to point out that this isn't what's being proposed, so it's kind of off topic. It's not an argument either for or against the proposal, just in case that wasn't clear.
Aug 04 2014
prev sibling next sibling parent reply "Marc Schütz" <schuetzm gmx.net> writes:
On Sunday, 3 August 2014 at 19:47:27 UTC, David Bregman wrote:
 4. Performance.
 Q4a. What level of performance increases are expected of this 
 proposal, for a representative sample of D programs?
 Q4b. Is there any threshold level of expected performance 
 required to justify this proposal? For example, if a study 
 determined that the average program could expect a speedup of 
 0.01% or less, would that still be considered a good tradeoff 
 against the negatives?
 Q4c. Have any works or studies, empirical or otherwise, been 
 done to estimate the expected performance benefit? Is there any 
 evidence at all for a speedup sufficient to justify this 
 proposal?
 Q4d. When evaluating the potential negative effects of the 
 proposal on their codebase, D users may decide it is now too 
 risky to compile with -release. (Even if their own code has 
 been constructed with the new assert semantics in mind, the 
 libraries they use might not). Thus the effect of the proposal 
 would actually be to decrease the performance of their program 
 instead of increase it. Has this been considered in the 
 evaluation of tradeoffs?
I'd like to add: Q4e: Have other alternatives been taken into consideration that could achieve the same performance gains, but in a safer way? I'm particularly thinking about whole program optimization. I suspect that WPO can prove most of what can only be assumed with asserts.
Aug 04 2014
parent reply "Matthias Bentrup" <matthias.bentrup googlemail.com> writes:
Should this semantics extend to array bounds checking, i.e. after 
the statement

foo[5] := 0;

can the optimizer assume that foo.length >= 6 ?
Aug 04 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 8/4/14, 7:27 AM, Matthias Bentrup wrote:
 Should this semantics extend to array bounds checking, i.e. after the
 statement

 foo[5] := 0;

 can the optimizer assume that foo.length >= 6 ?
Yes, definitely. -- Andrei
Aug 04 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/4/2014 8:01 AM, Andrei Alexandrescu wrote:
 On 8/4/14, 7:27 AM, Matthias Bentrup wrote:
 Should this semantics extend to array bounds checking, i.e. after the
 statement

 foo[5] := 0;

 can the optimizer assume that foo.length >= 6 ?
Yes, definitely. -- Andrei
Yes, after all, bounds checking is just another form of asserts.
Aug 05 2014
prev sibling next sibling parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
This has already been stated by others, but I just wanted to pile on - I
agree with Walter's definition of assert.

2. Semantic change.
 The proposal changes the meaning of assert(), which will result in
 breaking existing code.  Regardless of philosophizing about whether or not
 the code was "already broken" according to some definition of assert, the
 fact is that shipping programs that worked perfectly well before may no
 longer work after this change.
Disagree. Assert (as I always understood it) means 'this must be true, or my program is broken.' In -release builds the explicit explosion on a triggered assert is skipped, but any code after a non-true assert is, by definition, broken. And by broken I mean the fundamental constraints of the program are violated, and so all bets are off on it working properly.

A shipping program that 'worked perfectly well' but has unfulfilled asserts is broken - either the asserts are not actually true constraints, or the broken path just hasn't been hit yet.

Looking at the 'breaking' example:

assert(x!=1);
if (x==1) { ... }

If the if is optimized out, this will change from existing behaviour. But it is also obviously (to me at least) broken code already. The assert says that x cannot be 1 at this point in the program; if it ever is, then there is an error in the program.... and then it continues as if the program were still valid. If x could be one, then the assert is invalid here. And this code will already behave differently between -release and non-release builds, which is another kind of broken.

 3a. An alternate statement of the proposal is literally "in release mode,
 assert expressions introduce undefined behavior into your code if the 
 expression is false".
This statement seems fundamentally true to me of asserts already, regardless of whether they are used for optimizations. If your assert fails, and you have turned off 'blow up on assert' then your program is in an undefined state. It is not that the assert introduces the undefined behaviour, it is that the assert makes plain an expectation of the code and if that expectation is false the code will have undefined behaviour. 3b. Since assert is such a widely used feature (with the original
 semantics, "more asserts never hurt"), the proposal will inject a massive
 amount of undefined behavior into existing code bases, greatly increasing
 the probability of experiencing problems related to undefined behavior.
I actually disagree with the 'more asserts never hurt' statement. Exactly because asserts get compiled out in release builds, I do not find them very useful/desirable. If they worked as optimization hints I might actually use them more. And there will be no injection of undefined behaviour - the undefined behaviour is already there if the asserted constraints are not valid. Maybe if the yea side was consulted, they might easily agree to an
 alternative way of achieving the improved optimization goal, such as
 creating a new function that has the proposed semantics.
Prior to this (incredibly long) discussion, I was not aware people had a different interpretation of assert. To me, this 'new semantics' is precisely what I always thought assert was, and the proposal is just leveraging it for some additional optimizations. So from my standpoint, adding a new function would make sense to support this 'existing' behaviour that others seem to rely on - assert is fine as is, if the definition of 'is' is what I think it is.
Aug 05 2014
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/05/2014 08:18 PM, Jeremy Powers via Digitalmars-d wrote:
 And there will be no injection of undefined behaviour - the undefined
 behaviour is already there if the asserted constraints are not valid.
Well, no. http://en.wikipedia.org/wiki/Undefined_behavior
Aug 05 2014
parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
 And there will be no injection of undefined behaviour - the undefined
 behaviour is already there if the asserted constraints are not valid.
Well, no. http://en.wikipedia.org/wiki/Undefined_behavior
Well, yes: Undefined behaviour in the sense the writer of the program has not defined it. A program is written with certain assumptions about the state at certain points. An assert can be used to explicitly state those assumptions, and halt the program (in non-release) if the assumptions are invalid. If the state does not match what the assert assumes it to be, then any code relying on that state is invalid, and what it does has no definition given by the programmer. (And here I've circled back to assert==assume... all because I assume what assert means) If the state that is being checked could actually ever be valid, then it is not valid for an assert - use some other validation.
Aug 05 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/05/2014 08:59 PM, Jeremy Powers via Digitalmars-d wrote:
 ...

 Well, yes: Undefined behaviour in the sense
"And there will be no injection of undefined behaviour ^~~~~~~~~~~~~~~~~~~ conventional sense - the undefined behaviour is already there if the asserted constraints ^~~~~~~~~~~~~~~~~~~ altered sense are not valid."
Aug 05 2014
parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
 Well, yes: Undefined behaviour in the sense
"And there will be no injection of undefined behaviour ^~~~~~~~~~~~~~~~~~~ conventional sense - the undefined behaviour is already there if the asserted constraints ^~~~~~~~~~~~~~~~~~~ altered sense are not valid."
I still don't quite see your point. Perhaps I should have said: In the case where an asserted constraint is not met, the program is invalid. Being invalid it has undefined behaviour if it continues.
From another:
 There is a difference between invalid and undefined: A program is invalid
 ("buggy"), if it doesn't do what it's programmer intended, while
 "undefined" is a matter of the language specification. The (wrong)
 behaviour of an invalid program need not be undefined, and often isn't in
 practice.
I disagree with this characterization. Something can be buggy, not doing what the programmer intended, while also a perfectly valid program. You can make wrong logic that is valid/reasonable in the context of the program. Invalid in this case means the programmer has explicitly stated certain constraints must hold, and such constraints do not hold. So if you continue with the program in the face of invalid constraints, you have no guarantee what will happen - this is what I mean by 'undefined'.
Aug 05 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Tuesday, 5 August 2014 at 20:50:06 UTC, Jeremy Powers via 
Digitalmars-d wrote:
 Well, yes: Undefined behaviour in the sense
"And there will be no injection of undefined behaviour ^~~~~~~~~~~~~~~~~~~ conventional sense - the undefined behaviour is already there if the asserted constraints ^~~~~~~~~~~~~~~~~~~ altered sense are not valid."
I still don't quite see your point. Perhaps I should have said: In the case where an asserted constraint is not met, the program is invalid. Being invalid it has undefined behaviour if it continues.
From another:
 There is a difference between invalid and undefined: A program 
 is invalid
 ("buggy"), if it doesn't do what it's programmer intended, 
 while
 "undefined" is a matter of the language specification. The 
 (wrong)
 behaviour of an invalid program need not be undefined, and 
 often isn't in
 practice.
I disagree with this characterization. Something can be buggy, not doing what the programmer intended, while also a perfectly valid program. You can make wrong logic that is valid/reasonable in the context of the program. Invalid in this case means the programmer has explicitly stated certain constraints must hold, and such constraints do not hold. So if you continue with the program in the face of invalid constraints, you have no guarantee what will happen - this is what I mean by 'undefined'.
You're using a nonstandard definition of undefined behavior. Undefined behavior has a precise meaning, that's why Timon linked the wiki article for you. The regular definition of assert does not involve any undefined behavior, only the newly proposed one.
Aug 05 2014
parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
 You're using a nonstandard definition of undefined behavior. Undefined
 behavior has a precise meaning, that's why Timon linked the wiki article
 for you.

 The regular definition of assert does not involve any undefined behavior,
 only the newly proposed one.
But the 'newly proposed one' is the definition that I have been using all along. The 'regular' definition of assert that you claim is what I see as the redefinition - it is a definition based on the particular implementation of assert in other languages, not on the conceptual idea of assert as I understand it (and as it appears to be intended in D). This appears to be the root of the argument, and has been circled repeatedly... it's not my intent to restart another round of discussion on that well traveled ground, I just wanted to state my support for the definition as I understand it.
Aug 05 2014
next sibling parent "bachmeier" <no spam.net> writes:
 But the 'newly proposed one' is the definition that I have been 
 using all
 along.
+1. Until this came up, I didn't know another definition existed.
 The 'regular' definition of assert that you claim is what I see 
 as
 the redefinition - it is a definition based on the particular
 implementation of assert in other languages, not on the 
 conceptual idea of
 assert as I understand it (and as it appears to be intended in 
 D).
In my view, it's also a redefinition of -release. My view is influenced by Common Lisp. If you want speed, you test your program, and then when you feel comfortable, set the optimization levels to get as much speed as possible. If you want safety and debugging, set the optimization levels accordingly. I was always under the impression that -release was a statement to the compiler that "I've tested the program, make it run as fast as possible, and let me worry about any remaining bugs."
Aug 05 2014
prev sibling parent reply "David Bregman" <drb sfu.ca> writes:
On Tuesday, 5 August 2014 at 22:25:59 UTC, Jeremy Powers via 
Digitalmars-d wrote:
 You're using a nonstandard definition of undefined behavior. 
 Undefined
 behavior has a precise meaning, that's why Timon linked the 
 wiki article
 for you.

 The regular definition of assert does not involve any 
 undefined behavior,
 only the newly proposed one.
But the 'newly proposed one' is the definition that I have been using all along.
OK, but my point was you were using a different definition of undefined behavior. We can't communicate if we aren't using the same meanings of words.
 The 'regular' definition of assert that you claim is what I see 
 as
 the redefinition - it is a definition based on the particular
 implementation of assert in other languages, not on the 
 conceptual idea of
 assert as I understand it (and as it appears to be intended in 
 D).
The 'regular' definition of assert is the one used in C, C++, and, for the last >10 years (afaik), in D. If you want to change it you need a good justification. I'm not saying such justification doesn't exist either; maybe it does, but I have not seen it.
 This appears to be the root of the argument, and has been 
 circled
 repeatedly... it's not my intent to restart another round of 
 discussion on
 that well traveled ground, I just wanted to state my support 
 for the
 definition as I understand it.
I disagree. I don't think the fact that some people already had the new definition in mind before is really all that relevant. That's in the past. This is all about the pros and cons of changing it now and for the future.
Aug 05 2014
next sibling parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
 That's in the past. This is all about the pros and cons of changing it now
 and for the future.
The main argument seems to revolve around whether this is actually a change or not. In my (and others') view, the treatment of assert as 'assume' is not a change at all. It was in there all along, we just needed the wizard to tell us.

The below can safely be ignored, as I just continue the pedantic discussions....

 OK, but my point was you were using a different definition of undefined
 behavior. We can't communicate if we aren't using the same meanings of
 words.
Yes, very true. My definition of undefined in this case hinges on my definition of what assert means. If a failed assert means all code after it is invalid, then by definition (as I interpret the definition) that code is invalid and can be said to have undefined behaviour.

That is, it makes sense to me that it is specified as undefined, even though the spec is incredibly unclear. I may be reading too much into it here, but this follows the strict definition of undefined: it is undefined because it is defined to be undefined. This is the 'because I said so' defense.
  The 'regular' definition of assert that you claim is what I see as
 the redefinition - it is a definition based on the particular
 implementation of assert in other languages, not on the conceptual idea of
 assert as I understand it (and as it appears to be intended in D).
The 'regular' definition of assert is used in C, C++ and for the last
>10 years (afaik), in D. If you want to change it you need a good
justification. I'm not saying such a justification doesn't exist; maybe it does, but I have not seen it.
This 'regular' definition is a somewhat strict interpretation of the definition to only match how languages have implemented it. I have always interpreted the intent of assert to be 'here is something that must be true, if it is not then my program is in an invalid state' - the fact it is only a debug halting tool in practice means it falls short of its potential. And in fact I very rarely use it in practice for this reason, as I find the way it works almost useless and definitely dangerous.
 This appears to be the root of the argument, and has been circled
 repeatedly... it's not my intent to restart another round of discussion on
 that well traveled ground, I just wanted to state my support for the
 definition as I understand it.
I disagree. I don't think the fact that some people already had the new definition in mind before is really all that relevant.
It comes back to whether the new definition is actually new. If it is a new definition, then we can argue about whether it is good or not. If it is the old definition (which slightly differs from how assert works in practice in other languages), then we can argue about whether D should conform to other languages or leverage the existing definition... I contend that it is not new, and is simply an extension of the actual definition. Some people agree, some people don't... very lengthy and esoteric discussions have already spiraled through this repeatedly, so us arguing about it again probably won't get anywhere.

My stance is that this new/old definition is a good thing, as it matches how I thought things were already, and any code that surfaces as broken because of it was already broken in my definition. Therefore this 'change' is good, does not introduce breaking changes, and arguments about such should be redirected towards mitigation and fixing of expectations.

In an attempt to return this discussion to something useful, a question: If assert means what I think it means (and assuming we agree on what the actual two interpretations are), how do we allay concerns about it? Is there anything fundamentally/irretrievably bad if we use this new/old definition? (Can we programmatically identify and flag/resolve issues that occur from a mismatch of expectations?)
Aug 05 2014
next sibling parent reply "David Bregman" <drb sfu.ca> writes:
On Wednesday, 6 August 2014 at 01:11:55 UTC, Jeremy Powers via 
Digitalmars-d wrote:
 That's in the past. This is all about the pros and cons of 
 changing it now
 and for the future.
The main argument seems to revolve around whether this is actually a change or not. In my (and others') view, the treatment of assert as 'assume' is not a change at all. It was in there all along, we just needed the wizard to tell us.
How can there be any question? This is a change in the compiler, a change in the docs, a change in what your program does, a change of the very bytes in the executable. If my program worked before and doesn't now, how is that not a change? This must be considered a change by any reasonable definition of the word change.

I don't think I can take seriously this idea that someone's unstated, unmanifested intentions define change more so than things that are .. you know.. actual real changes. Much of the rest of your post seems to be predicated on this, so I don't think I can respond to it. Let me know if I missed something.
 In an attempt to return this discussion to something useful, 
 question:

 If assert means what I think it means (and assuming we agree on 
 what the
 actual two interpretations are), how do we allay concerns about 
 it?  Is
 there anything fundamentally/irretrievably bad if we use this 
 new/old
 definition?
Well, I think I outlined the issues in the OP. As for solutions, there have been some suggestions in this thread; the simplest is to leave things as is and introduce the optimizer hint as a separate function, assume(). I don't think there was any argument presented against a separate function besides that Walter couldn't see any difference between the two behaviors, or the intention thing, which doesn't really help us here.

I guess the only real argument against it is that pre-existing asserts contain significant optimization information that we can't afford to not reuse. But this is a claim I'm pretty skeptical of. Andrei admitted it's just a hunch at this point. Try looking through your code base to see how many asserts would be useful for optimizing. For me, it's mostly default: assert(false); in switch statements, though ironically that is defined to produce a halt even in release, so the compiler won't optimize away the branch like it should.

Heh, I just realized, that particular special case is another argument for a separate function, because assert(false) can't express unreachability. assume(false) could.
 (Can we programmatically (sp?) identify and flag/resolve
 issues that occur from a mismatch of expectations?)
I'm not an expert on this, but my guess is it's possible in theory but would never happen in practice. Such things are very complex to implement; if Walter won't agree to a simple and easy solution, I'm pretty sure there's no way in hell he would agree to a complex one that takes a massive amount of effort.
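The switch-default point above can be sketched in C, with GCC/Clang's `__builtin_unreachable()` standing in for the hypothetical `assume(false)` (the enum and function names are illustrative, not from the thread):

```c
#include <stdlib.h>

enum color { RED, GREEN, BLUE };

/* assert(false)-style default: defined to halt even in release builds,
 * so the halt is observable and the compiler must keep the branch. */
int code_checked(enum color c) {
    switch (c) {
        case RED:   return 1;
        case GREEN: return 2;
        case BLUE:  return 3;
        default:    abort();
    }
}

/* assume(false)-style default: expresses unreachability, letting the
 * compiler drop the range check entirely -- at the cost of undefined
 * behavior if the promise is ever wrong. */
int code_assumed(enum color c) {
    switch (c) {
        case RED:   return 1;
        case GREEN: return 2;
        case BLUE:  return 3;
        default:    __builtin_unreachable();
    }
}
```

This is exactly the distinction the post draws: `assert(false)` cannot express "this branch never happens", while an `assume(false)` could.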
Aug 05 2014
next sibling parent reply "eles" <eles eles.com> writes:
On Wednesday, 6 August 2014 at 04:05:41 UTC, David Bregman wrote:
 On Wednesday, 6 August 2014 at 01:11:55 UTC, Jeremy Powers via 
 Digitalmars-d wrote:
 How can there be any question? This is a change in the 
 compiler, a change in the docs, change in what your program 
 does, change of the very bytes in the executable. If my program
I feel that, at this stage, this is only about how a compiler flag, specifically "-release", works. For other configurations, there is no problem: even if the optimizer optimizes based on asserts, the asserts themselves are part of the code: the code is there, and the assertion will fail before execution enters the optimized path. This is just like any other optimization, nothing special about it.

The problem with "-release" might be formulated as: it optimizes based on code that is no longer present (the asserts are wiped out). It keeps the optimization, but it dismisses the guardrails.
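The point can be sketched in C, since the thread repeatedly compares D's -release with C's NDEBUG (the function here is illustrative):

```c
#include <assert.h>

/* With the check compiled in, optimizing based on the assert is safe:
 * the fall-through path is guarded by a real runtime test, and the
 * program halts before entering the "optimized" path if it is wrong. */
int half(int x) {
    assert(x % 2 == 0);  /* real check in debug builds */
    return x / 2;        /* optimizer may assume x is even here */
}

/* Building with -DNDEBUG erases the check. If the compiler were still
 * allowed to optimize as if the assertion held, the optimization would
 * survive while the guard is gone -- the objection raised above. */
```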
Aug 05 2014
next sibling parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
 I feel that, at this stage, this is only about how a compiler flag,
 specifically "-release", works. For other configurations, there is no
 problem: even if the optimizer optimizes based on asserts, the asserts
 themselves are part of the code: the code is there, and the assertion will fail
 before execution enters the optimized path. This is just like any other
 optimization, nothing special about it.

 The problem with "-release" might be formulated as: it optimizes based
 on code that is no longer present (the asserts are wiped out). It keeps
 the optimization, but it dismisses the guardrails.
Yes! [lengthy and imprecise rambling about assert definition omitted]
Aug 06 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 06:56:40 UTC, eles wrote:
 I feel that, at this stage, this is only about how a compiler flag, 
 specifically "-release", works. For other configurations, there 
 is no problem: even if the optimizer optimizes based on 
 asserts, the asserts themselves are part of the code: the code is 
 there, and the assertion will fail before execution enters the 
 optimized path. This is just like any other optimization, 
 nothing special about it.
Not right:

b = a+1
assume(b>C)

implies

assume(a+1>C)
b = a+1
Aug 06 2014
parent reply "eles" <eles eles.com> writes:
On Wednesday, 6 August 2014 at 07:29:02 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 6 August 2014 at 06:56:40 UTC, eles wrote:
 Not right:

 b = a+1
 assume(b>C)

 implies

 assume(a+1>C)
 b = a+1
b = a+1
if(C<=b) exit(1);

implies

if(C<=a+1) exit(1);
b = a+1

Is it not the same? Still, one would allow the optimizer to exit before executing the b=a+1 line (in the first version) based on that condition (considering no volatile variables).

I stick to my point: as long as the code is there, optimization based on it is acceptable (as long as the functionality of the program is not changed, of course). The sole sticking point is what to do when the code that is used for optimization is discarded.

Would you accept optimization of a C program based on code that is guarded by:

#ifndef NDEBUG
//code that could be used for optimization
#endif

in the -DNDEBUG version? (I am not convinced 100% that for D it is the same, but it should help with the concepts)
Aug 06 2014
parent reply Artur Skawina via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 08/06/14 10:28, eles via Digitalmars-d wrote:
 On Wednesday, 6 August 2014 at 07:29:02 UTC, Ola Fosheim Grøstad wrote:
 On Wednesday, 6 August 2014 at 06:56:40 UTC, eles wrote:
 Not right:

 b = a+1
 assume(b>C)

 implies

 assume(a+1>C)
 b = a+1
b = a+1
if(C<=b) exit(1);

implies

if(C<=a+1) exit(1);
b = a+1

Is it not the same? Still, one would allow the optimizer to exit before executing the b=a+1 line (in the first version) based on that condition (considering no volatile variables).
It is *very* different. In the `exit` case this kind of transformation is only valid if that assignment has no externally observable effects. `assume` makes the transformation *unconditionally* valid, even when executing 'b=a+1' would affect the output of the program.

The compiler can /assume/ that the condition never fails. Hence, it does not need to generate any code to check that the assumption is valid. It does not need to emit any code for any path that would only be taken if the condition would be false. If that ever happens the CPU can start executing any random instructions. That's ok, since you explicitly told the compiler that this situation will never occur.

Hence, it can also skip generating any code for any path that unconditionally leads to a failing `assume` directive. Because it is all dead code that will never be executed, as long as all `assume` conditions are true.

artur
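The distinction drawn above can be sketched in C, using GCC/Clang's `__builtin_unreachable()` as the closest available stand-in for the proposed `assume` (the functions are illustrative):

```c
#include <stdlib.h>

/* Guarded version: the check always runs. Hoisting the exit above the
 * assignment is only legal if the assignment has no externally
 * observable effect. */
int guarded(int a, int c) {
    int b = a + 1;
    if (c <= b)
        exit(1);
    return b;
}

/* Assumed version: the compiler is told c <= b never happens, so it
 * emits no check and no code for the failing path. If the promise is
 * false, behavior is undefined -- the transformation is
 * unconditionally "valid". */
int assumed(int a, int c) {
    int b = a + 1;
    if (c <= b)
        __builtin_unreachable();
    return b;
}
```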
Aug 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 12:41:16 UTC, Artur Skawina via 
Digitalmars-d wrote:
 The compiler can /assume/ that the condition never fails. 
 Hence, it does
 not need to generate any code to check that the assumption is 
 valid.
Exactly, worse example using a coroutine:

«
label:
  …
  while(running) { // forevah!
    …
    yield
    …
  }
  …
  assume(anything local not assigned below label) // reachable, but never executed
»

is equivalent to:

«
assume(anything local not assigned below label) // optimize based on this
label:
  …
  while(running) {
    …
    yield
    …
  }
»

Woops?
Aug 06 2014
parent reply "Matthias Bentrup" <matthias.bentrup googlemail.com> writes:
On Wednesday, 6 August 2014 at 13:31:54 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 6 August 2014 at 12:41:16 UTC, Artur Skawina via 
 Digitalmars-d wrote:
 The compiler can /assume/ that the condition never fails. 
 Hence, it does
 not need to generate any code to check that the assumption is 
 valid.
Exactly, worse example using a coroutine:

«
label:
  …
  while(running) { // forevah!
    …
    yield
    …
  }
  …
  assume(anything local not assigned below label) // reachable, but never executed
»

is equivalent to:

«
assume(anything local not assigned below label) // optimize based on this
label:
  …
  while(running) {
    …
    yield
    …
  }
»

Woops?
But even if there is no explicit assert()/assume() given by the developer, I guess the optimizer is free to insert assumes that are provably correct, e.g.

while(running) {
  ...don't assign to running, don't break...
}

is equivalent to

while(running) {
  ...don't assign to running, don't break...
}
assume(!running);

is equivalent to

assume(!running);
while(running) {
  ...don't assign to running, don't break...
}

is equivalent to

assume(!running);

So I take it the compiler is allowed to throw away code without any asserts already?
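A minimal C sketch of that derivation, with the derived assumes shown as comments (standard C has no `assume` primitive; the function and its body are illustrative):

```c
/* The compiler may derive facts like assume(!running) from the loop
 * condition itself: on any path that exits the loop, the condition is
 * false. It can then delete code those facts make dead. */
int countdown(int running) {
    while (running) {
        running = running - 1;   /* body visibly terminates the loop */
    }
    /* derived: assume(!running); */
    if (running)                  /* dead branch under the derived fact: */
        return -1;                /* a compiler is free to delete it */
    return 0;
}
```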
Aug 06 2014
next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Wednesday, 6 August 2014 at 15:02:10 UTC, Matthias Bentrup 
wrote:
 So I take the compiler is allowed to throw away code without 
 any asserts already ?
Yes it can, but only in the cases where it can prove it is the same. The main difference is that assume does not need to be proved; it is stated to always be true, so no proof is needed. Some have said that the compiler can still try to prove what it can and issue warnings if it is provably untrue, but it would actually be a little silly on the part of the compiler to waste its time trying to disprove things it "knows" are true. If it "knows" it is true then why even bother checking?
Aug 06 2014
prev sibling next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 15:02:10 UTC, Matthias Bentrup 
wrote:
 is equivalent to

   while(running) {
     ...don't assign to running, don't break...
   }
   assume(!running);
You have to prove termination to get there? Plain Hoare logic cannot deal with non-terminating loops. Otherwise you risk proving any theorem below the loop to hold? But I was talking about explicit assumptions made by the programmer, who does not know the rules… It was obviously wrong to assume anything to hold if the loop never terminates.

The rule for proving loop termination is quite involved: you basically prove that every time you move through the loop you get something "less" than you had before (ensured by t and D). This is no fun, and a good reason to hate reasoning about correctness. If < is a well-founded ordering on the set D, and this holds:

[P ∧ B ∧ t ∈ D ∧ t = z] S [P ∧ t ∈ D ∧ t < z]

then this holds:

[P ∧ t ∈ D] while B do S done [¬B ∧ P ∧ t ∈ D]

I think… or maybe not. Who can tell from just glancing at it?
Aug 06 2014
next sibling parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Wednesday, 6 August 2014 at 15:29:13 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 6 August 2014 at 15:02:10 UTC, Matthias Bentrup 
 wrote:
 is equivalent to

  while(running) {
    ...don't assign to running, don't break...
  }
  assume(!running);
You have to prove termination to get there? Plain Hoare logic cannot deal with non-terminating loops. Otherwise you risk proving any theorem below the loop to hold?
This is an assume, not an assert. The user tells the compiler that `running` is false after the loop, it can thus conclude that the loop will never run. So there is no non-terminating loop in this case.
Aug 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 15:36:52 UTC, Marc Schütz wrote:
 This is an assume, not an assert.
Not sure what you mean. An assume is an assert proven (or defined) to hold. It can be a proven theorem which has been given the status of an axiom. It is known to keep the algebraic system sound. If you claim that something unsound is proven then all bets are off everywhere?

I am quite confident that assume(false) anywhere in your program is basically stating that the program is unsound (true==false) and should not be compiled and run at all. If true==false anywhere in your program then it surely holds everywhere in your program?

The only reason c-style assert(false) "works" is because you delay the verification until the last moment, at which point the system says "woopsie", gotta terminate this because this program should never have compiled in the first place.

Sounds consistent to me?
Aug 06 2014
next sibling parent reply "Artur Skawina" <art.08.09 gmail.com> writes:
On 08/06/14 18:00, via Digitalmars-d wrote:
 I am quite confident that assume(false) anywhere in your 
 program is basically stating that the program is unsound 
 (true==false) and should not be compiled and run at all.
No, an assume(false) in a program only means that every _path_ _leading_to_that_statement is 'unsound'. For practical purposes it's better to treat 'unsound' as impossible and unreachable. IOW

import std.stdio, std.array;
int main(string[] argv) {
   if (argv.length<2)
      assume(0);
   if (argv.length==1)
      writeln("help text");
   return argv.empty;
}

=>

0000000000403890 <_Dmain>:
  403890: 31 c0                   xor %eax,%eax
  403892: c3                      retq

The alternatives would be to make it either: a) always a compile error, or b) always a runtime error. The former would add little value (`static assert` already exists); the latter is already available as `assert(0)`. The above example after "s/assume(0)/assert(0)/" becomes:

0000000000403890 <_Dmain>:
  403890: 48 83 ff 01             cmp $0x1,%rdi
  403894: 76 03                   jbe 403899 <_Dmain+0x9>
  403896: 31 c0                   xor %eax,%eax
  403898: c3                      retq
  403899: 50                      push %rax
  40389a: e8 71 e7 ff ff          callq 402010 <abort@plt>

IOW the compiler can still optimize based on the (un)reachability, but the behavior is no longer undefined.

artur
Aug 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 16:59:18 UTC, Artur Skawina wrote:
 No, an assume(false) in a program only means that every _path_
 _leading_to_that_statement is 'unsound'. For practical purposes
 it's better to treat 'unsound' as impossible and unreachable.
I don't think so. Because a program is only correct if all axioms and proposed theorems are proven true. All axioms are by definition true, so if you assume(false==true) anywhere it has to hold everywhere. Neither "false" or "true" are variables. Just like assume(PI==3.14…) has to hold everywhere. false, true and PI are constant throughout every path in the program. Not really sure how it is possible to disagree with that?
Aug 06 2014
next sibling parent reply "Artur Skawina" <art.08.09 gmail.com> writes:
On Wednesday, 6 August 2014 at 17:03:45 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 6 August 2014 at 16:59:18 UTC, Artur Skawina 
 wrote:
 No, an assume(false) in a program only means that every _path_
 _leading_to_that_statement is 'unsound'. For practical purposes
 it's better to treat 'unsound' as impossible and unreachable.
I don't think so. Because a program is only correct if all axioms and proposed theorems are proven true. All axioms are by definition true, so if you assume(false==true) anywhere it has to hold everywhere. Neither "false" or "true" are variables. Just like assume(PI==3.14…) has to hold everywhere. false, true and PI are constant throughout every path in the program. Not really sure how it is possible to disagree with that?
These threads have proven that it's apparently possible to disagree even about the definition of words like "disagreement" and "definition"... But, in a practical programming language definition context, consider:

   if (0)
      assume(0);

Yes, `assume` could be defined in a way that would make this always a compile error. But then `assume(0)` would be useless. If it means 'unreachable' instead, then the behavior is still consistent - it becomes a way to tell the compiler: "this can never happen, optimize accordingly".

[gmail smtp servers seem to be broken right now; had to post this via the webform; probably won't be posting until the mailing list i/f starts working for me again]

artur
Aug 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 17:36:00 UTC, Artur Skawina wrote:
    if (0)
       assume(0);

 Yes, `assume` could be defined in a way that would make this
 always a compile error. But then `assume(0)` would be useless.
I don't think so. It makes a lot of sense to halt compilation if you, from templates or other constructs, emit assume(true==false) or assert(true==false) or any other verification-annotation that contradicts predefined language axioms in a way that the compiler can detect at compile time. Otherwise you risk having "unreachable" unintended in generic code.

It also basically gives you a very limited version of a theorem prover, and thus the capability of doing real program verification. This should be extended later so that the language supports program verification within its own syntax.

I also dislike this kind of special casing in general. Semantics should be consistent. assert(false) and assume(false) are aberrations; I totally agree with what bearophile has previously stated. Making formal program verification impossible by odd special casing (C mal-practice) is counterproductive in the long run.
Aug 06 2014
parent reply Artur Skawina via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 08/06/14 22:57, via Digitalmars-d wrote:
 On Wednesday, 6 August 2014 at 17:36:00 UTC, Artur Skawina wrote:
    if (0)
       assume(0);

 Yes, `assume` could be defined in a way that would make this
 always a compile error. But then `assume(0)` would be useless.
I don't think so. It makes a lot of sense to halt compilation if you from templates or other constructs emit assume(true==false) or assert(true==false) or any other verification-annotation that contradicts predefined language axioms in a way that the compiler can detect at compile time. Otherwise you risk having "unreachable" unintended in generic code.
D already has `static assert`, which can be used for compile-time checks. Eliminating unreachable paths based on the extra information from (a combination of) `assume` statements is desirable, and exactly what this thread is about. I get what you're saying, but it's not really relevant in the context of `D`; such fundamental changes are not exactly likely to happen soon... OTOH the extremely dangerous assert->assume redefinition seems to be seriously considered, despite the grave consequences... artur
Aug 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 22:28:08 UTC, Artur Skawina via 
Digitalmars-d wrote:
 D already has `static assert`, which can be used for 
 compile-time checks.
The static modifier would just be a modifier to the regular c-style assert that basically tells the compiler to refuse to add run-time checks.

* static assert: prove this or stop compilation
* assert: prove this or emit runtime check
* assume: define this to hold
 OTOH the extremely dangerous assert->assume redefinition seems 
 to be seriously considered, despite the grave consequences...
Well, I don't worry. It just means that some people, who believe in the concept, are going to spend a lot of effort on something that probably does not pay off. It would be a lot better if they rather spent some effort on adding sound annotations, like bearophile argues for, though. It could be put to good use by annotating foreign functions.

Keep in mind that this is an issue that is easy to fix in a private fork. It is a small modification to turn "d-assert" into "assume" and add a "c-assert". It requires no changes to D code?
Aug 06 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
I'll also add that perhaps a source of confusion is that 
assert(X) is presumed to take a bool at compile time. That is not 
the case, there are 3 outcomes:

X==true : this has been proved to hold

X==false: this has been proved to not hold

X not computable: too weak theorem prover, check at runtime
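The three outcomes map loosely onto existing C constructs (a rough analogy, not the D semantics under discussion; the index function is illustrative):

```c
#include <assert.h>

/* Provable at compile time: static_assert either passes silently
 * (proved to hold) or stops compilation (proved to not hold). */
static_assert(sizeof(int) >= 2, "would be a compile error if false");

/* Not computable at compile time: the "prover" is too weak because the
 * values are only known at runtime, so a runtime check is emitted. */
int checked_index(const int *arr, int n, int i) {
    assert(0 <= i && i < n);   /* checked when the program runs */
    return arr[i];
}
```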
Aug 06 2014
prev sibling parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Wednesday, 6 August 2014 at 17:03:45 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 6 August 2014 at 16:59:18 UTC, Artur Skawina 
 wrote:
 No, an assume(false) in a program only means that every _path_
 _leading_to_that_statement is 'unsound'. For practical purposes
 it's better to treat 'unsound' as impossible and unreachable.
I don't think so. Because a program is only correct if all axioms and proposed theorems are proven true. All axioms are by definition true, so if you assume(false==true) anywhere it has to hold everywhere. Neither "false" or "true" are variables. Just like assume(PI==3.14…) has to hold everywhere. false, true and PI are constant throughout every path in the program. Not really sure how it is possible to disagree with that?
The crux is the axiom that is being defined. When you write `assume(condition)`, the axiom is not: "`condition` is true." but: "When control flow reaches this point, then `condition` is true." It's not an unconditionally true condition :-P
Aug 06 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 17:59:06 UTC, Marc Schütz wrote:
 The crux is the axiom that is being defined. When you write 
 `assume(condition)`, the axiom is not:

     "`condition` is true."

 but:

     "When control flow reaches this point, then `condition` is 
 true."

 It's not an unconditionally true condition :-P
Assume(X) does not provide a condition. It defines a proposition/relation to be true. If you define a relation between two constants then it has to hold globally. Otherwise they are not constant…

Assume(P) defines that the proposition holds. If it does not involve variables, then it will be free to move anywhere by the rules of Hoare logic (and propositional logic)? If assume(PI==3.14…) can move everywhere, by the rules, then so can assume(true==false). You surely don't want to special-case "true==false" in the Hoare logic rules?

Please note that it is perfectly fine to define "true==false", but you then have to live by it; that is, you no longer have a boolean algebra, you have an unsound algebra. And that holds globally. So, please don't think that assume(true==false) is a test of a condition; it is just a definition that you impose on the equality relation (which also, incidentally, happens to make the algebra unsound).

If you special-case assume(false) and assert(false) then you can no longer use deduction properly, and it essentially becomes an aberration, syntactical sugar, that needs special casing everywhere.

It would be much better to just halt compilation if you can evaluate X==false at compile time for assume(X) or assert(X). Why let this buggy code live? If it failed at compile time then let the compilation fail! This would work better with CTFE.

"@unreachable" is an alternative if you want to be explicit about it and matches __builtin_unreachable et al.
Aug 06 2014
next sibling parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Wednesday, 6 August 2014 at 18:51:05 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 6 August 2014 at 17:59:06 UTC, Marc Schütz wrote:
 The crux is the axiom that is being defined. When you write 
 `assume(condition)`, the axiom is not:

    "`condition` is true."

 but:

    "When control flow reaches this point, then `condition` is 
 true."

 It's not an unconditionally true condition :-P
Assume(X) does not provide a condition. It defines a proposition/relation to be true.
Well, let's call it proposition then...
 If you define a relation between two constants then it has to 
 hold globally. Otherwise they are not constant…
Yes, but as I wrote above, we're not defining a relation between two constants ("true == false"), we define the following axiom: "When control flow reaches this point, then true == false." Because true is not equal to false, it follows that the first part of the proposition cannot be true, meaning control flow will not reach this point.
 Assume(P) defines that the proposition holds. If it does not 
 involve variables, then it will be free to move anywhere by the 
 rules of Hoare-logic (and propositional logic)? If 
 assume(PI==3.14…) can move everywhere, by the rules, then so 
 can assume(true==false).
See above. Of course we could define our `assume(P)` to define `P` as an axiom directly, but this would be much less useful, because it would have exactly the consequences you write about. And it would be a lot more consistent, too. Surely you wouldn't want `assume(x == 42)` to mean "x is always and everywhere equal to 42", but just "x is equal to 42 when control flow passes this point".
 You surely don't want to special case "true==false" in the 
 hoare logic rules?

 Please note that it is perfectly fine to define "true==false", 
 but you then have to live by it, that is, you no longer have a 
 boolean algebra, you have an unsound algebra.  And that holds 
 globally. So, please don't think that assume(true==false) is a 
 test of a condition, it is just a definition that you impose on 
 the equality relation (that also, by incident, happens to make 
 the algebra unsound)

 If you special case assume(false) and assert(false) then you 
 can no longer use deduction properly and it essentially becomes 
 an aberration, syntactical sugar, that needs special casing 
 everywhere.

 It would be much better to just halt compilation if you at 
 compile time can evaluate X==false for assume(X) or assert(X). 
 Why let this buggy code live? If it failed at compile time then 
 let the compilation fail! This would work better with CTFE.

 " unreachable" is an alternative if you want to be explicit 
 about it and matches __builtin_unreachable et al.
See above, no special treatment is necessary.
Aug 06 2014
next sibling parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Wednesday, 6 August 2014 at 19:18:30 UTC, Marc Schütz wrote:
 Assume(P) defines that the proposition holds. If it does not 
 involve variables, then it will be free to move anywhere by 
 the rules of Hoare-logic (and propositional logic)? If 
 assume(PI==3.14…) can move everywhere, by the rules, then so 
 can assume(true==false).
See above. Of course we could define our `assume(P)` to define `P` as an axiom directly, but this would be much less useful, because it would have exactly the consequences you write about. And it would be a lot more consistent, too. Surely you wouldn't want `assume(x == 42)` to mean "x is always and everywhere equal to 42", but just "x is equal to 42 when control flow passes this point".
This is unclear. What I wanted to write is: "It would be more consistent to use the definition of `assume` that I propose.".
Aug 06 2014
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 6 August 2014 at 19:18:30 UTC, Marc Schütz wrote:
 On Wednesday, 6 August 2014 at 18:51:05 UTC, Ola Fosheim
 Yes, but as I wrote above, we're not defining a relation 
 between two constants ("true == false"), we define the 
 following axiom:

     "When control flow reaches this point, then true == false."

 Because true is not equal to false, it follows that the first 
 part of the proposition cannot be true, meaning control flow 
 will not reach this point.
But you are defining an axiom, not evaluating an expression; 
that is how the optimizer can use it for deduction. Think of how 
Prolog works.
 See above. Of course we could define our `assume(P)` to define 
 `P` as an axiom directly, but this would be much less useful, 
 because it would have exactly the consequences you write about. 
 And it would be a lot more consistent, too. Surely you wouldn't 
 want `assume(x == 42)` to mean "x is always and everywhere 
 equal to 42", but just "x is equal to 42 when control flow 
 passes this point".
I think there is a misunderstanding here. If you have:

uint x=0;

then that invokes many axioms defined by the language:

uint x=0;
assume(x exists from here)
assume(any value assigned to x has to be in the range [0,0xffffffff])
assume(x==0)
assume(type(x)=='uint')
assume(…etc…)

So if you then do:

x=x+1
assert(x==1)

you have to move the assert upwards (3+ steps) and see if it is 
satisfied by any of the axioms you run into:

3: assert(x==0) // covered by axiom(x==0), x==1 is satisfied
2: assert(x+1-1==1-1)
1: assert(x+1==1)
x=x+1
assert(x==1)

Without the "uint x=0;" you would then have moved all the way up 
to the global scope (conceptually speaking) and not been able to 
find any axioms matching 'x'.

If you assume(x==0), on the other hand, then you don't need the 
other axioms at all. It is instantly satisfied. No need to prove 
anything.
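The substitution step described above can be run directly. This is a minimal sketch in Python (used only for illustration; the thread's language is D, and the `assume`s above are modelled here by plain asserts):

```python
# Moving the assertion upward through x = x + 1: substituting
# x + 1 for x turns assert(x == 1) into assert(x == 0), which the
# axiom introduced by "uint x = 0;" satisfies directly.
x = 0              # language-defined axiom: x == 0
assert x + 1 == 1  # assert(x==1) moved above the update, with x := x + 1 substituted
x = x + 1
assert x == 1      # the original assertion, now known to hold
```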
 See above, no special treatment is necessary.
I don't follow. If you use assert(false) to signify anything 
special beyond requesting a proof, or assume(false) for anything 
beyond defining an axiom, then you are special casing.
Aug 06 2014
parent reply "Ola Fosheim Grøstad" writes:
Please also remember that they are all "constants" in math, but 
variables are constants that are automatically renamed for 
convenience…

uint x=0;
x=x+1;
x=x*2;

is…

uint x0=0;
x1=x0+1;
x2=x1*2;

perhaps that makes it more clear.
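The renaming can be checked executably; a small sketch (Python for illustration) showing that the imperative updates and the "every variable is a constant" form compute the same value:

```python
# Imperative form: x is repeatedly reassigned.
x = 0
x = x + 1
x = x * 2

# Renamed ("SSA-style") form: each assignment introduces a fresh constant.
x0 = 0
x1 = x0 + 1
x2 = x1 * 2

assert x == x2  # both forms agree
```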
Aug 06 2014
parent "Ola Fosheim Grøstad" writes:
On Wednesday, 6 August 2014 at 20:01:04 UTC, Ola Fosheim Grøstad 
wrote:
 Please also remember that they are all "constants" in math, but 
 variables are constants that are automatically renamed for 
 convenience…

 uint x=0;
 x=x+1;
 x=x*2;

 is…

 uint x0=0;
 x1=x0+1;
 x2=x1*2;

 perhaps that makes it more clear.
As you may recall, this is also the common representation in compilers: http://en.wikipedia.org/wiki/Static_single_assignment_form
Aug 06 2014
prev sibling parent reply "Matthias Bentrup" <matthias.bentrup googlemail.com> writes:
On Wednesday, 6 August 2014 at 18:51:05 UTC, Ola Fosheim Grøstad
wrote:
 On Wednesday, 6 August 2014 at 17:59:06 UTC, Marc Schütz wrote:
 The crux is the axiom that is being defined. When you write 
 `assume(condition)`, the axiom is not:

    "`condition` is true."

 but:

    "When control flow reaches this point, then `condition` is 
 true."

 It's not an unconditionally true condition :-P
 Assume(X) does not provide a condition. It defines a 
 proposition/relation to be true.

 If you define a relation between two constants then it has to 
 hold globally. Otherwise they are not constants…

 Assume(P) defines that the proposition holds. If it does not 
 involve variables, then it will be free to move anywhere by the 
 rules of Hoare logic (and propositional logic)? If 
 assume(PI==3.14…) can move everywhere, by the rules, then so 
 can assume(true==false).
Ah, I had a different understanding of assume, i.e. placing an 
assume(A) at some point in code adds not the axiom A, but rather 
the axiom "control flow reaches this spot => A".

So if there were a different operator, assume_here() or so, with 
the semantics that I incorrectly assumed for assume(), could the 
optimizer insert assume_here() for assert()?
Aug 06 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/06/2014 11:18 PM, Matthias Bentrup wrote:
 Ah, I had a different understanding of assume, i.e. placing an
 assume(A) at some point in code adds not the axiom A, but rather
 the axiom "control flow reaches this spot => A".
(Your understanding is the conventional one.)
Aug 06 2014
next sibling parent reply "David Bregman" <drb sfu.ca> writes:
On Wednesday, 6 August 2014 at 21:33:35 UTC, Timon Gehr wrote:
 On 08/06/2014 11:18 PM, Matthias Bentrup wrote:
 Ah, I had a different understanding of assume, i.e. placing an
 assume(A) at some point in code adds not the axiom A, but 
 rather
 the axiom "control flow reaches this spot => A".
(Your understanding is the conventional one.)
Yeah, the one with control flow is really the only useful way to 
define it, at least for use in an imperative language. If you can 
only write assumes which are globally valid, then you can't refer 
to local variables in the expression. That makes it pretty 
useless.

As for assume(0), in the control flow interpretation it makes 
perfect sense: assuming control flow reaches here, false = true. 
So by contradiction, "control flow reaches here" is false, i.e. 
that point is unreachable.

See again the __assume extension in the Microsoft C++ compiler:
http://msdn.microsoft.com/en-us/library/1b3fsfxw.aspx
Aug 06 2014
parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 6 August 2014 at 22:03:11 UTC, David Bregman wrote:
 As for assume(0), in the control flow interpretation it makes 
 perfect sense: Assuming control flow reaches here, false = 
 true. So by contradiction, "control flow reaches here" is 
 false, i.e. that point is unreachable.

 see again the __assume extension in the Microsoft c++ compiler: 
 http://msdn.microsoft.com/en-us/library/1b3fsfxw.aspx
From the docs: «The __assume(0) statement is a special case.» So, it does not make perfect sense. If it did, it would not be a special case?
Aug 06 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Thursday, 7 August 2014 at 03:54:12 UTC, Ola Fosheim Grøstad 
wrote:
 «The __assume(0) statement is a special case.»

 So, it does not make perfect sense. If it did, it would not be 
 a special case?
It doesn't have to be a special case if you define it in the 
right way: in terms of control flow. Then the interpretation of 
assume(false) as unreachable follows quite naturally.

Instead of defining assume(x) to mean that x is an axiom, define 
assume(x) to mean that P=>x is an axiom, where P is the 
proposition that control flow reaches the assume statement.

So assume(false) actually means P=>false, or simply !P. And !P 
means !(control flow reaches the assume), as desired.
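This equivalence can be verified as a two-row truth table. A sketch in Python (illustration only; `implies` is a hypothetical helper, since Python has no implication operator):

```python
# With assume(x) read as the axiom "P => x", where P is
# "control flow reaches this point", assume(false) declares
# P => false, which is logically equivalent to not P,
# i.e. "this point is unreachable".
def implies(a, b):
    return (not a) or b

for P in (False, True):
    assert implies(P, False) == (not P)  # P => false  is  !P
```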
Aug 06 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 06:04:38 UTC, David Bregman wrote:
 On Thursday, 7 August 2014 at 03:54:12 UTC, Ola Fosheim Grøstad 
 wrote:
 «The __assume(0) statement is a special case.»

 So, it does not make perfect sense. If it did, it would not be 
 a special case?
 It doesn't have to be a special case if you define it in the 
 right way - in terms of control flow. Then the interpretation 
 of assume(false) as unreachable follows quite naturally:

 instead of defining assume(x) to mean that x is an axiom, 
 define assume(x) to mean that P=>x is an axiom, where P is the 
 proposition that control flow reaches the assume statement.

 so assume(false) actually means P=>false, or simply !P, and !P 
 means !(control flow reaches the assume), as desired.
Let's try the opposite way instead:

assume(Q)
if(B1){
  if(B2){
  }
}

implies:

assume(Q)
if(B1){
  assume(B1&&Q)
  if(B2){
    assume(B1&&B2&&Q)
  }
}

So your P in the inner if statement is B1&&B2. However, 
assume(P&&false) is still a fallacy… Or?
Aug 07 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Thursday, 7 August 2014 at 07:53:44 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 7 August 2014 at 06:04:38 UTC, David Bregman wrote:
 On Thursday, 7 August 2014 at 03:54:12 UTC, Ola Fosheim 
 Grøstad wrote:
 «The __assume(0) statement is a special case.»

 So, it does not make perfect sense. If it did, it would not 
 be a special case?
 It doesn't have to be a special case if you define it in the 
 right way - in terms of control flow. Then the interpretation 
 of assume(false) as unreachable follows quite naturally:

 instead of defining assume(x) to mean that x is an axiom, 
 define assume(x) to mean that P=>x is an axiom, where P is the 
 proposition that control flow reaches the assume statement.

 so assume(false) actually means P=>false, or simply !P, and !P 
 means !(control flow reaches the assume), as desired.
 Let's try the opposite way instead:

 assume(Q)
 if(B1){
   if(B2){
   }
 }

 implies:

 assume(Q)
 if(B1){
   assume(B1&&Q)
   if(B2){
     assume(B1&&B2&&Q)
   }
 }

 So your P in the inner if statement is B1&&B2. However, 
 assume(P&&false) is still a fallacy… Or?
If you use the definition of assume that I gave, assume(P && false) declares the axiom P => (P && false) which is again equivalent to !P.
Aug 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 14:09:26 UTC, David Bregman wrote:
 If you use the definition of assume that I gave, assume(P && 
 false) declares the axiom

 P => (P && false)

 which is again equivalent to !P.
Well, P=>(P&&X) is equivalent to P=>X.

But you are allowed to have "false" in the preconditions, since 
we are only interested in:

preconditions => postconditions

assume(input!=0) is ok… it does not say unreachable. It says the 
postconditions are not required to hold for input==0…
Aug 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 15:35:30 UTC, Ola Fosheim Grøstad 
wrote:
 assume(input!=0) is ok… it does not say unreachable.
 It says, postconditions are not required to hold for input==0…
And… assume(false) is a precondition that says that you don't have to care about the postconditions at all… Basically, "assume anything"…
Aug 07 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Thursday, 7 August 2014 at 15:35:30 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 7 August 2014 at 14:09:26 UTC, David Bregman wrote:
 If you use the definition of assume that I gave, assume(P && 
 false) declares the axiom

 P => (P && false)

 which is again equivalent to !P.
Well, P=>(P&&X) is equivalent to P=>X.
Right. So if X is false, the axiom we declare is !P, not "false" or "a fallacy".
 But you are allowed to have "false" in the preconditions, since 
 we only are interested in

 preconditions => postconditions
That's ok. There is no contradiction if P is false. False => X is true for any X, so the axiom we declare is just a tautology. So is the control flow definition / unreachability argument clear now?
 assume(input!=0) is ok… it does not say unreachable.
 It says, postconditions are not required to hold for input==0…
 And… assume(false) is a precondition that says that you don't 
 have to care about the postconditions at all… Basically, 
 "assume anything"…
I don't get this part, maybe you can reword it if I haven't already answered the question.
Aug 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 16:36:09 UTC, David Bregman wrote:
 Right. So if X is false, the axiom we declare is !P, not 
 "false" or "a fallacy".
But Pn is always true where you assume…?
 False => X is true for any X, so the axiom we declare is just a 
 tautology.
I don't follow. If any assumption is false then anything goes…
 So is the control flow definition / unreachability argument 
 clear now?
Nope.
 assume(input!=0) is ok… it does not say unreachable.
 It says, postconditions are not required to hold for 
 input==0…
 And… assume(false) is a precondition that says that you don't 
 have to care about the postconditions at all… Basically, 
 "assume anything"…
I don't get this part, maybe you can reword it if I haven't already answered the question.
assume(true)
r = calc()
assert(specification_test(r)) // required to hold for any situation

assume(false)
r = calc() // r can be anything

// input != 0
assume(input!=0) // true
r = calc(input)
assert(specification(input)==r) // required to hold for input!=0

// input == 0
assume(input!=0) // false
r = calc(input)
assert(specification(input)==r) // may or may not hold
Aug 07 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Thursday, 7 August 2014 at 16:56:01 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 7 August 2014 at 16:36:09 UTC, David Bregman wrote:
 Right. So if X is false, the axiom we declare is !P, not 
 "false" or "a fallacy".
But Pn is always true where you assume…?
No. Remember P is "control flow can reach this assume". If the 
assume is unreachable, then P is false. The compiler doesn't 
always know that P is false though, so we can use assume(false) 
to supply that as an axiom. Here's an example:

x = rand() & 3; // x is 0, 1, 2 or 3
switch(x) {
  case 0: foo(); break;
  case 1: bar(); break;
  case 2: baz(); break;
  case 3: qux(); break;
  default: assume(false); // hopefully redundant with a decent compiler
}

Now the compiler can optimize this down to

{foo, bar, baz, qux}[x]();

or, even better, a computed jump.
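The optimized form can be sketched executably. A Python illustration (the thread's code is D-style pseudocode; foo/bar/baz/qux here are placeholder handlers, and the dispatch table models the "{foo, bar, baz, qux}[x]()" form, with no default or bounds check needed):

```python
import random

def foo(): return "foo"
def bar(): return "bar"
def baz(): return "baz"
def qux(): return "qux"

x = random.getrandbits(2)    # x is 0, 1, 2 or 3, like rand() & 3
dispatch = (foo, bar, baz, qux)
result = dispatch[x]()       # the default case is declared unreachable,
                             # so indexing needs no range check
assert result in ("foo", "bar", "baz", "qux")
```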
 False => X is true for any X, so the axiom we declare is just 
 a tautology.
I don't follow. If any assumption is false then anything goes…
Using my definition of assume, the axiom declared is P=>x. If P is false, then the axiom declared is false => x, which is true for any x. http://en.wikipedia.org/wiki/Truth_table#Logical_implication
 So is the control flow definition / unreachability argument 
 clear now?
Nope.
Ok, is it clear now? If not, which parts are not clear?
Aug 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 17:38:22 UTC, David Bregman wrote:
 Using my definition of assume, the axiom declared is P=>x. If P 
 is false, then the axiom declared is false => x, which is true 
 for any x.
I don't see how this will work out. If it is truly unreachable 
then you don't need any assume, because then you can assert 
anything. Therefore you shouldn't.

valid:

assume(true)
while(true){}
assert(angels_love_pixiedust) // proven true now…

But there are alternatives to HL that I don't know much about. 
You might want to look at "Matching Logic Reachability". Looks 
complicated:

http://fsl.cs.illinois.edu/pubs/rosu-stefanescu-2012-fm.pdf
 Ok, is it clear now? If not, which parts are not clear?
It doesn't make any sense to me since I don't see how you would 
discriminate between your Ps.

{P1}
while(true) skip; endwhile
{P2}

How do you define P1 and P2 to be different?
Aug 07 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Thursday, 7 August 2014 at 18:02:08 UTC, Ola Fosheim Grøstad 
wrote:
 I don't see how this will work out. If it is truly unreachable 
 then you don't need any assume
I just gave you a concrete example of where assume(false) might be useful for optimizing a switch statement. Just because it is truly unreachable doesn't mean the compiler is aware of that.
 It doesn't make any sense to me since I don't see how you would 
 discriminate between your Ps.

 {P1}
 while(true) skip; endwhile
 {P2}

 How do you define P1 and P2 to be different?
I'm not sure how you'd define them to be the same. They're different lines of code, for one thing.
Aug 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 18:20:20 UTC, David Bregman wrote:
 On Thursday, 7 August 2014 at 18:02:08 UTC, Ola Fosheim Grøstad 
 wrote:
 I just gave you a concrete example of where assume(false) might 
 be useful for optimizing a switch statement.
But to me it sounds dangerous if it actually is reachable, or if 
the compiler somehow thinks it is. So you should use halt() if 
it should be unreachable, and use unreachable() if you know 100% 
that it holds. Encouraging assume(false) sounds like an 
aberration to me.

You need to provide proofs if you attempt using assume(X) and 
assert(false). This is valid, but it is derived, not postulated:

assume(P)
while(false){
  assume(false) // anything goes is acceptable over S here
  S…            // since it never happens…
  assert(P)     // valid since the assumption is false
}
assert(P) // valid since the content is just a noop

When you postulate without proving, you open a can of worms:

assume(P)
while(B){
  assume(false) // woops! Should have been assume(B&&P)
  S…            // ohoh, we just stated that anything goes in here…
  assert(P)     // invalid
}
assert(P) // invalid since the loop invariant doesn't hold
 How do you define P1 and P2 to be different?
I'm not sure how you'd define them to be the same. They're different lines of code, for one thing.
I don't believe there "is" such a thing as lines of code. We are 
doing math. Everything just "is". We don't have code, we have 
mathematical dependencies/relations.

P1 and P2 are propositions, so what are the contents of those? 
The ones you use for your P1=>X and P2=>Y? You would have to 
define newassume(P,X) === assume(P=>X). But what is P?

Anyway, you don't need those… You can use a regular 
assume(false), but not for indicating unreachable. You indicate 
that the postcondition does not have to hold. Otherwise you will 
run into trouble.

But if you think about while() as a shorthand that is expanded 
into pure math, then it makes sense. If you have while(true){} 
then the loop cannot generate new mathematical constants… Thus 
it cannot be a source of new unique propositions that let you 
discriminate between before and after entering the loop? Or?
Aug 07 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Thursday, 7 August 2014 at 19:41:44 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 7 August 2014 at 18:20:20 UTC, David Bregman wrote:
 On Thursday, 7 August 2014 at 18:02:08 UTC, Ola Fosheim 
 Grøstad wrote:
 I just gave you a concrete example of where assume(false) 
 might be useful for optimizing a switch statement.
But to me it sounds dangerous if it actually is reachable or if the compiler somehow think it is.
It is dangerous, and shouldn't be used lightly. That's what started this whole thread.
 So you should use halt() if it should be unreachable,
Not if your goal is to completely optimize away any unreachable code.
 and use unreachable() if you know 100% it holds.
This is just another name for the same thing, it isn't any more or less dangerous.
 Encouraging assume(false) sounds like an aberration to me.
If you accept my definition of assume there is no problem with it, it just means unreachable. I showed how this follows naturally from my definition.
 You need to provide proofs if you attempt using assume(X) and 
 assert(false).
There is no mechanism for providing proofs or checking proofs in D.
 How do you define P1 and P2 to be different?
I'm not sure how you'd define them to be the same. They're different lines of code, for one thing.
I don't believe there "is" such a thing as lines of code. We are doing math. Everything just "is". We don't have code, we have mathematical dependencies/relations.
Well, any correct mathematical formalism of the code must preserve its semantics, so it doesn't really matter which one we talk about. Different lines (or basic blocks, I guess is the right term here) can have different reachability, so you could just give each one its own P.
 P1 and P2 are propositions, so what are the content of those? 
 The ones you use for your P1=>X and P2=>Y?
This is just an implementation detail of the compiler. Probably each basic block has its own P, which is conservatively approximated with some data flow analysis. (Disclaimer: I am not a compiler expert)
 Anyway, you don't need those… You can use a regular 
 assume(false), but not for indicating unreachable. You indicate 
 that the postcondition does not have to hold. Otherwise you 
 will run into trouble.

 But if you think about while() as a short-hand that is expanded 
 into pure math, then it makes sense. If you have while(true){} 
 then the loop cannot generate new mathematical constants… Thus 
 it cannot be a source for new unique propositions that let you 
 discriminate between before and after entering the loop? Or?
Sorry, I don't understand much of this. Anyways, I don't have much more time to keep justifying my definition of assume. If you still don't like mine, maybe you can give your definition of assume, and demonstrate how that definition is more useful in the context of D.
Aug 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 23:13:15 UTC, David Bregman wrote:
 and use unreachable() if you know 100% it holds.
This is just another name for the same thing, it isn't any more or less dangerous.
Of course it is. unreachable() could lead to this:

f(){ unreachable }
g(){ ... }

f:
g:
    machinecode

assume(false) just states that the postcondition has not been 
proved/tested/verified yet. That does not signify that no code 
will reach it. It just means that the returned value is 
uncertain. It does not mean that the code will never be executed.

It is also dangerous to mix them up, since CTFE can lead to 
assume(false). If you want safety and consistency you need a 
designated notation for unreachable.
 Encouraging assume(false) sounds like an aberration to me.
If you accept my definition of assume there is no problem with it, it just means unreachable. I showed how this follows naturally from my definition.
Your "definition" is lacking. It mixes up two entirely different perspectives, the deductive mode and the imperative mode of reasoning.
 You need to provide proofs if you attempt using assume(X) and 
 assert(false).
There is no mechanism for providing proofs or checking proofs in D.
Actually, unit tests do. D just lacks the semantics to state that the input accepted by the precondition has been fully covered in unit tests.
 Well, any correct mathematical formalism of the code must 
 preserve its semantics, so it doesn't really matter which one 
 we talk about.
Preserve the semantics of the language, yes. For pure functions there is no difference. And that is generally what is covered by plain HL. The code isn't modified, but you only prove calculations, not output.
 Different lines (or basic blocks, I guess is the right term 
 here) can have different reachability, so you could just give 
 each one its own P.
"Basic blocks" are an artifact of a specific intermediate 
representation. They can cease to exist. For instance, MIPS and 
ARM have quite different conditionals than x86. With a MIPS 
instruction set a comparison is an expression returning 1 or 0, 
and you can get away from conditionals by multiplying successive 
calculations with it to avoid branch penalties. This is also a 
common trick for speeding up SIMD…

Hoare logic deals with triplets: precondition, statement, 
postcondition (assume, code, assert). You start with 
assume(false), then widen it over the code until the assert is 
covered for the input you care about.
 P1 and P2 are propositions, so what are the content of those? 
 The ones you use for your P1=>X and P2=>Y?
This is just an implementation detail of the compiler. Probably each basic block has its own P, which is conservatively approximated with some data flow analysis. (Disclaimer: I am not a compiler expert)
Oh, there is of course no problem with inserting an unreachable 
barrier-like construct when you have proven assume(false), but 
that is a special case. It is not an annotation for deduction.
 Anyways, I don't have much more time to keep justifying my 
 definition of assume. If you still don't like mine, maybe you 
 can give your definition of assume, and demonstrate how that 
 definition is more useful in the context of D.
assume(x) is defined by the preconditions in Hoare logic 
triplets. This is what you should stick to for in(){} 
preconditions.

A backend_assume(x) is very much implementation defined; it is a 
precondition where the postcondition is:

program(input) == executable(input), for all input except those 
excluded by backend_assume

However, an assume tied to defining a function's degree of 
verification is not a backend_assume. That is where D risks 
going seriously wrong: conflating postconditions (assert) with 
preconditions (assume), and conflating local guarantees (by unit 
tests or proofs) with global optimization and backend mechanisms 
(backend_assume).
Aug 09 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/09/2014 09:26 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Thursday, 7 August 2014 at 23:13:15 UTC, David Bregman wrote:
 and use unreachable() if you know 100% it holds.
This is just another name for the same thing, it isn't any more or less dangerous.
 Of course it is. unreachable() could lead to this:

 f(){ unreachable }
 g(){ ... }

 f:
 g:
     machinecode

 assume(false) just states that the postcondition has not been 
 proved/tested/verified yet. That does not signify that no code 
 will reach it.
Formally, that's what it assumes to be the case. If you can 
prove 'false', this means that the code is unreachable.

if(Q){
    ⦃ Q ⦄
    if(¬Q){
        ⦃ Q ∧ ¬Q ⦄
        ⦃ false ⦄      // this means 'unreachable'
        assert(false); // goes through
    }
}

After assuming 'false' you can prove false. ⦃ false ⦄ still 
means 'unreachable' if it is 'assume'd.

(Yes, of course it can also be read as 'inconsistent', if the 
code is actually reachable, but proving code 'unreachable' which 
can actually be reached also just means that the reasoning and 
the assumptions were 'inconsistent'. You can also read 
'inconsistent' as: 'this program will never actually be 
executed', etc. David's interpretation of the formalism is 
adequate.)
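The nested-if derivation can be exercised directly; a small Python sketch (illustration only, with `assert False` standing in for assert(false)):

```python
# The branch guarded by Q and then not Q can never run, for either
# value of Q, so the assert placed inside it never fires: the
# contradiction Q and (not Q) marks the spot as unreachable.
def f(Q):
    if Q:
        if not Q:
            assert False  # unreachable: Q and (not Q) is contradictory
    return True

assert f(True) and f(False)
```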
 Encouraging assume(false) sounds like an aberration to me.
If you accept my definition of assume there is no problem with it, it just means unreachable. I showed how this follows naturally from my definition.
Your "definition" is lacking. It mixes up two entirely different perspectives, the deductive mode and the imperative mode of reasoning.
I think you should elaborate on this a little if you want to make a point. E.g. I don't think that such an opposition of 'reasoning modes' even exists.
 Hoare Logic deals with triplets: precondition, statement, postcondition. 
(assume,code,assert)
'assume' and 'assert' are "statement"s:

⦃ P ∧ Q ⦄  // ← precondition
assert(Q); // ← statement
⦃ P ⦄      // ← postcondition

'assume' is just a dual concept:

⦃ P ⦄      // ← precondition
assume(Q); // ← statement
⦃ P ∧ Q ⦄  // ← postcondition

There is nothing 'lacking' here, right?
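Both triples can be checked mechanically with the standard weakest-precondition definitions. A sketch in Python (illustration only; wp(assert(Q), R) = Q ∧ R and wp(assume(Q), R) = Q ⇒ R are the textbook definitions, not D semantics):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Standard weakest preconditions for the two statements.
def wp_assert(Q, post): return Q and post
def wp_assume(Q, post): return implies(Q, post)

for P, Q in product((False, True), repeat=2):
    # ⦃ P ∧ Q ⦄ assert(Q); ⦃ P ⦄ holds: P∧Q implies wp(assert(Q), P)
    assert implies(P and Q, wp_assert(Q, P))
    # ⦃ P ⦄ assume(Q); ⦃ P ∧ Q ⦄ holds: P implies wp(assume(Q), P∧Q)
    assert implies(P, wp_assume(Q, P and Q))
```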
Aug 09 2014
parent reply "Ola Fosheim Grøstad" writes:
On Saturday, 9 August 2014 at 20:22:36 UTC, Timon Gehr wrote:
 Formally, that's what it assumes to be the case. If you can 
 prove 'false', this means that the code is unreachable.
No, if you prove false that means either that it is 
nonterminating, or that it cannot be proved from the 
assumptions, hence the assumptions need strengthening.
         ⦃ Q ∧ ¬Q ⦄
         ⦃ false ⦄     // this means 'unreachable'
         assert(false); // goes through
implication is not equivalence
 After assuming 'false' you can prove false.
 ⦃ false ⦄ still means 'unreachable' if it is 'assume'd.
False only means that the postcondition does not hold. If the precondition is false then the triplet is valid i.e. correct.
 (Yes, of course it can also be read as 'inconsistent', if the 
 code is actually reachable, but proving code 'unreachable'
No, you can have this:

main(){
  Precondition(false)
  Stuff()
  Postcondition(anything)
}

This states that Stuff has not yet been verified for any input. 
It does not state that Stuff will never execute.
 Your "definition" is lacking. It mixes up two entirely 
 different perspectives,
 the deductive mode and the imperative mode of reasoning.
I think you should elaborate on this a little if you want to make a point. E.g. I don't think that such an opposition of 'reasoning modes' even exists.
Of course it does, that is why Hoare Logic and SSA exist. Deduction lacks a notion of time.
 ⦃ P ∧ Q ⦄  // ← precondition
 assert(Q); // ← statement
 ⦃ P ⦄      // ← postcondition
Makes no sense. Assert is not an imperative statement, it is an annotation. It is meta.
Aug 09 2014
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/10/2014 04:40 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Saturday, 9 August 2014 at 20:22:36 UTC, Timon Gehr wrote:
 Formally, that's what it assumes to be the case. If you can prove
 'false', this means that the code is unreachable.
No, if you prove false that means either that it is nonterminating
I.e. everything afterwards is unreachable.
         ⦃ Q ∧ ¬Q ⦄
         ⦃ false ⦄     // this means 'unreachable'
         assert(false); // goes through
implication is not equivalence
Do you agree we indeed have equivalence for those programs with an empty precondition?
 or that it cannot be proved by the the assumptions, hence the assumptions
 need strengthening.
 ...
'false' is the strongest statement. Why would you need to strengthen assumptions?
 (Yes, of course it can also be read as 'inconsistent', if the code is
 actually reachable, but proving code 'unreachable'
 No, you can have this:

 main(){
   Precondition(false)
   Stuff()
   Postcondition(anything)
 }

 This states that Stuff has not yet been verified for any input. 
 It does not state that Stuff will never execute.
 ...
If the precondition is actually assumed to hold for all 
executions of the program, a precondition of 'false' means that 
the program never executes.

Maybe you don't assume this, but internally to the logic, this 
is what happens: you verify the postcondition under the 
assumption of the precondition. I.e. in this case you verify the 
postcondition of the program under the assumption that it never 
executes. Of course you may still execute such a program, but 
the ⦃ ⦄ assertions become irrelevant to this execution.

Note that an optimizer needs to be able to use deduced facts for 
program transformations, or it is useless. To an optimizer, a 
contradiction means 'unreachable'. There are no non-trivial 
preconditions (in your sense, i.e. the program still may have 
interesting behaviour if they are violated) on the program used 
to deduce those facts.

Do you agree that _if_ 'assume' is to be seen as an optimizer 
hint, then 'assume(false)' means 'assume unreachable'?
 Your "definition" is lacking. It mixes up two entirely different
 perspectives,
 the deductive mode and the imperative mode of reasoning.
I think you should elaborate on this a little if you want to make a point. E.g. I don't think that such an opposition of 'reasoning modes' even exists.
Of course it does, that is why Hoare Logic and SSA exist.
They exist for the virtue of being convenient formalisms for some applications, but they can be embedded into e.g. set theory just fine.
 Deduction lacks a notion of time.
 ...
What is 'a notion of time' and how does HL provide it?
 ⦃ P ∧ Q ⦄  // ← precondition
 assert(Q); // ← statement
 ⦃ P ⦄      // ← postcondition
Makes no sense.
It is a definition. If you actually want to understand David's 
point you should digest it. Does

⦃ false ⦄
abort;
⦃ P ⦄

make sense to you? The assert(Q); statement is equivalent to

if(¬Q) abort;
 Assert is not an imperative statement,
Yes, it may be seen to be.
 it is an annotation. It is meta.
If it is part of the program it is not 'meta'. Here, there is a 
distinction between:

assert(P) // statement

and

⦃ P ⦄ // "meta"

even though both are commonly called 'assertions'. assert(P); is 
a statement and hence it needs to be given a semantics, e.g. an 
axiomatic semantics or by rewriting, as above.

(BTW: This is not about being right or wrong. It is about 
understanding each other's statements in the right context.)
Aug 10 2014
parent "Ola Fosheim Grøstad" writes:
On Sunday, 10 August 2014 at 13:47:45 UTC, Timon Gehr wrote:
 No, if you prove false that means either that it is 
 nonterminating
I.e. everything afterwards is unreachable.
Depends on your take on it. If you support non-deterministic 
computations (e.g. Ada's "select") then non-termination is 
treated like bottom. So you can have

infinite_recursion() or terminates() => terminates()

So unreachable in terms of machine language, sure, but not in 
terms of execution flow.
 Do you agree we indeed have equivalence for those programs with 
 an empty precondition?
You mean "false" or "true"?
 or that it cannot be proved by the the assumptions, hence the 
 assumptions
 need strengthening.
 ...
'false' is the strongest statement. Why would you need to strengthen assumptions?
If you succeed in proving the postcondition to be false then the postcondition does not hold and you have to strengthen the precondition or else the triplet is invalid/incorrect. If the precondition is defined false then the postcondition is not required to hold. It is not covered by the proof/verification/specification and the triplet is valid/provably correct. So if the precondition is defined false then the program is a priori proved correct to the specification, i.e. the specification does not require any guarantees for the result.
 If the precondition is actually assumed to hold for all 
 executions of the program, a precondition of 'false' means that 
 the program never executes.
That is the responsibility of the caller. It is the caller's responsibility to uphold the precondition in order to get guaranteed results. But you could have multiple pairs of preconditions/postconditions. E.g. you could for a generic function have different pairs for int/float/double or have a separate pair for termination, etc. Or have one set that specifies what is covered by unit-tests, what is vetted by review, what is totally unknown… You could have different types of guarantees for a library and users could take advantage of it based on the requirements for the application (high security vs computer game).
 Maybe you don't assume this, but internally to the logic, this 
 is what happens: You verify the postcondition under the 
 assumption of the precondition. I.e. in this case you verify 
 the postcondition of the program under the assumption that it 
 never executes.
It is not always possible to prove termination, so that would be unfortunate. It would be more useful to be able to restrict the precondition to what you cover by unit-tests. Or to have multiple sets of pre/postconditions (proven-by-machine vs vetted-by-review). Then if you can prove that the caller fulfills the precondition you are able to optimize based on the pre/post condition and assume the post-condition if it is machine-verified. That would give new optimization possibilities. That does not mean that you should never be able to use a function outside the verified range (by machine proof), but then you need to assert the results at runtime if you want verified correctness.
 Of course you may still execute such a program, but the ⦃ ⦄ 
 assertions become irrelevant to this execution.
Not really, they could still trap.
 Note that an optimizer needs to be able to use deduced facts 
 for program transformations, or it is useless. To an optimizer, 
 a contradiction means 'unreachable'.
No, to an optimizer a contradiction means "all bets are off for the postcondition to hold":

program(x) != executable(x)

That is why __assume(P) must always hold on any reachable path (e.g. any path that the optimizer considers to be a target for optimization). But __assume is not a regular precondition, it has a counterpart in the optimization invariant (postcondition):

program(x) == executable(x), x not creating contradictions by __assume(…)

The optimizer is presuming that anything goes as long as this optimization invariant can be upheld. This is not a postcondition specified by the programmer at the meta-level. It is implicit. And the optimizer does not check the implicit precondition, at all…
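That asymmetry (the optimizer consumes the hint but never checks it) can be modeled with a toy sketch; all names here are invented for illustration. `assume` records an unchecked fact, and a toy "optimizer" folds a branch based on it, silently miscompiling when the fact is wrong:

```python
facts = set()

def assume(fact):
    # Optimizer hint: record `fact` as true WITHOUT any runtime check.
    facts.add(fact)

def fold_branch(cond_label, cond_value, then_val, else_val):
    # Toy optimizer: if the condition was assumed, fold the branch to the
    # then-value; the actual condition value is never even consulted.
    if cond_label in facts:
        return then_val
    return then_val if cond_value else else_val

assume("x > 0")
# The condition is actually False here, but the assumed fact wins:
# program(x) != executable(x), exactly the broken invariant above.
miscompiled = fold_branch("x > 0", False, "fast path", "slow path")
```

Without the `assume`, the same call would have taken the slow path; the hint, not the data, decided the outcome.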
 to deduce those facts. Do you agree that _if_ 'assume' is to be 
 seen as an optimizer hint, then 'assume(false)' means 'assume 
 unreachable'?
No. I think the optimizer should only use assumptions when they are guaranteed to hold by the caller or the language axioms. Or in the rare case by other machine verifiable axioms (specifications for hardware registers etc).
 Of course it does, that is why Hoare Logic and SSA exist.
They exist for the virtue of being convenient formalisms for some applications, but they can be embedded into e.g. set theory just fine.
Not sure what you mean by this and how this relates to imperative programming.
 Deduction lacks a notion of time.
 ...
What is 'a notion of time' and how does HL provide it?
It doesn't. Which is why it trips when you don't prove termination. But HL does not do program transformations.
 The assert(Q); statement is equivalent to

 if(¬Q) abort;
Not if assert() is considered meta-level. However, since a triplet is tied to an entity/function/module, it would be ok to throw an exception as long as it isn't caught in the module covered by the pre/postcondition triplet if it does not have any side effects. If you don't reach the postcondition it does not matter.
 Assert is not an imperative statement,
Yes, it may be seen to be.
That would be an aberration, because the specification should either be independent of the implementation or generate the implementation… If the specification is considered part of the program then you will run into problems if there are contradictions.
 assert(P); is a statement and hence it needs to be given a 
 semantics, e.g. an axiomatic semantics or by rewriting, as 
 above.
Well, in C it is a statement by implementation, but it is always meant to not have side effects and one should be able to compile with NDEBUG with no difference in program behaviour. So, even though it is implemented as a macro it has always had the status of an annotation.
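Python's assert has the same annotation status: `python -O` strips assert statements just as NDEBUG strips C's, which is why the asserted condition must be side-effect-free. A small sketch (function names invented):

```python
def pop_and_check(stack):
    # BAD: the asserted expression has a side effect (it pops the stack).
    # With assertions disabled (python -O, like C's NDEBUG) the pop
    # disappears and program behaviour changes.
    assert stack.pop() == 0
    return stack

def pop_and_check_safe(stack):
    # GOOD: perform the side effect unconditionally, assert on the result;
    # behaviour is identical whether assertions are enabled or not.
    top = stack.pop()
    assert top == 0
    return stack
```

With assertions enabled both return the popped stack; under `-O` only the safe version still pops.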
 (BTW: This is not about being right or wrong. It is about 
 understanding each other's statements in the right context.)
I think the whole game changes when Walter wants to optimize based on the annotations… Then you need to be very careful unless you want to amplify bugs rather than improve the overall performance of the program (speed + correct behaviour). So then the distinction between specification and implementation, proven and vetted, becomes even more important.

Since D has unit tests as a formalism, it would be a shame not to use them to verify contracts in a way that could provide safe optimization possibilities, rather than relying on programmers being able to write provably correct code in a predictable manner, which just ain't gonna happen.
Aug 10 2014
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Sunday, 10 August 2014 at 02:40:31 UTC, Ola Fosheim Grøstad 
wrote:
 Of course it does, that is why Hoare Logic and SSA exist. 
 Deduction lacks a notion of time.
Logic is ordered, and we have a notion of order because we know time, which is the only obviously ordered thing in nature. So in a sense any logic has time in its foundation and math can do the reverse: represent time in declarative manner.
Aug 11 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 12 August 2014 at 04:50:15 UTC, Kagamin wrote:
 Logic is ordered, and we have a notion of order because we know 
 time, which is the only obviously ordered thing in nature. So 
 in a sense any logic has time in its foundation and math can do 
 the reverse: represent time in declarative manner.
No, there is no order to boolean expressions. Deduction can be performed bottom up. Imperative programs rely on top down execution due to side effects. Recall that all pure functions over finite types can be implemented as tables. A table lookup is O(1). No time.
Aug 11 2014
parent reply "Kagamin" <spam here.lot> writes:
On Tuesday, 12 August 2014 at 05:47:22 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 12 August 2014 at 04:50:15 UTC, Kagamin wrote:
 Logic is ordered, and we have a notion of order because we 
 know time, which is the only obviously ordered thing in 
 nature. So in a sense any logic has time in its foundation and 
 math can do the reverse: represent time in declarative manner.
No, there is no order to boolean expressions. Deduction can be performed bottom up.
A true reversal would be when preconditions are derived from postconditions.
 Recall that all pure functions over finite types can be 
 implemented as tables. A table lookup is O(1). No time.
That's different logic, and algorithmic complexity is time.
Aug 12 2014
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 12 August 2014 at 08:07:01 UTC, Kagamin wrote:
 A true reversal would be when preconditions are derived from 
 postconditions.
That's how you conduct a proof…
 Recall that all pure functions over finite types can be 
 implemented as tables. A table lookup is O(1). No time.
That's different logic, and algorithmic complexity is time.
Not at all. You can create a boolean expression for every bit of output and evaluate it only using NAND gates (or NOR). Not practical, but doable…
Aug 12 2014
parent reply "Kagamin" <spam here.lot> writes:
On Tuesday, 12 August 2014 at 09:17:54 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 12 August 2014 at 08:07:01 UTC, Kagamin wrote:
 A true reversal would be when preconditions are derived from 
 postconditions.
That's how you conduct a proof…
A proof usually flows from causes to consequences, the reverse (induction) is ambiguous. Ordering is more fundamental: you can define or redefine it, but it will remain, one can't think outside of this formalism.
 Not at all. You can create a boolean expression for every bit 
 of output and evaluate it only using NAND gates (or NOR). Not 
 practical, but doable…
Mathematics is pretty aware of differences between algorithms, and proving their equivalence is non-trivial.
Aug 12 2014
parent "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Tuesday, 12 August 2014 at 14:43:23 UTC, Kagamin wrote:
 A proof usually flows from causes to consequences, the reverse 
 (induction) is ambiguous.
If a deterministic imperative program is ambiguous then there is something wrong. So no, ambiguity is not the problem. The size of the search space is. The preconditions are "the ambiguity" which the theorem prover tries to find...
 Ordering is more fundamental: you can define or redefine it, 
 but it will remain, one can't think outside of this formalism.
Explain.
 Not at all. You can create a boolean expression for every bit 
 of output and evaluate it only using NAND gates (or NOR). Not 
 practical, but doable…
Mathematics is pretty aware of differences between algorithms, and proving their equivalence is non-trivial.
Define non-trivial. We were talking about the nature of finite computations. They are all trivial. Algorithms are just compression of space.
Aug 12 2014
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 6 August 2014 at 21:33:35 UTC, Timon Gehr wrote:
 On 08/06/2014 11:18 PM, Matthias Bentrup wrote:
 Ah, I had a different understanding of assume, i.e. placing an
 assume(A) at some point in code adds not the axiom A, but 
 rather
 the axiom "control flow reaches this spot => A".
(Your understanding is the conventional one.)
Not right. You need to take into account that assume() provides a mathematical definition, it isn't an imperative. That would make no sense. So there is no control flow, only mathematical dependencies. You only deal with constants. "Variables" are just a tool for automatic labeling of constants. You would not be able to use logic otherwise. "Control flow" is embedded in the dependencies and relationships between the constants.

j = 0
for(i=0; i<3; ++i){ j += i }

Is just a short-hand for:

j0 = 0
i0 = 0
j1 = j0 + i0
i1 = 1
j2 = j1 + i1
i2 = 2
j3 = j2 + i2

If you map this out as a graph you get the dependencies (mirroring "control flow"). Hoare logic provides means to reason about the short-hand notation and make transformations on it. What is unconventional about this?
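That renaming can be executed and checked directly; a small Python rendering, for illustration, of the same loop and its SSA-style expansion:

```python
# Imperative short-hand:
j = 0
for i in range(3):
    j += i

# SSA-style renaming: each assignment introduces a fresh constant, and
# "control flow" survives only as data dependencies between the constants.
j0 = 0
i0 = 0
j1 = j0 + i0
i1 = 1
j2 = j1 + i1
i2 = 2
j3 = j2 + i2

assert j == j3  # both renderings denote the same value: 0 + 1 + 2
```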
Aug 06 2014
parent reply "Matthias Bentrup" <matthias.bentrup googlemail.com> writes:
On Thursday, 7 August 2014 at 03:51:02 UTC, Ola Fosheim Grøstad
wrote:
 On Wednesday, 6 August 2014 at 21:33:35 UTC, Timon Gehr wrote:
 On 08/06/2014 11:18 PM, Matthias Bentrup wrote:
 Ah, I had a different understanding of assume, i.e. placing an
 assume(A) at some point in code adds not the axiom A, but 
 rather
 the axiom "control flow reaches this spot => A".
(Your understanding is the conventional one.)
Not right. You need to take into account that assume() provides a mathematical definition, it isn't an imperative. That would make no sense. So there is no control flow, only mathematical dependencies.
But control flow can introduce mathematical dependencies. E.g. for assert the code

if(a) {
  assert(b);
  ...
}

is not equivalent to

assert(b);
if(a) {
  ...
}

but not because the assertion depends on a different incarnation of "b", but because it also depends on "a", so it is like

assert(!a || b);
if(a) {
  ...
}

If assume is independent of control flow, then clearly this cannot be related to assert. But I think the proposal was not to replace assert with the textbook definition of assume, rather to use the information in asserts to optimize the code. So if assume doesn't work, what about assume_here()?
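The claimed equivalence can be checked exhaustively over both booleans; a small sketch (helper names invented), where "the assertion fails" is rendered as the function returning False:

```python
def guarded(a, b):
    # if (a) { assert(b); }  -- True iff execution passes with no failure
    if a:
        return bool(b)
    return True

def hoisted(a, b):
    # assert(!a || b);  -- the hoisted form that keeps the dependency on a
    return (not a) or bool(b)

# The two forms agree on all four boolean combinations:
for a in (False, True):
    for b in (False, True):
        assert guarded(a, b) == hoisted(a, b)
```

Note that the naive hoisting `assert(b)` would disagree at a=False, b=False, which is exactly the control-flow dependency being discussed.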
Aug 06 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 05:41:22 UTC, Matthias Bentrup 
wrote:
 If assume is independent of control flow, then clearly this
 cannot be related to assert.
Well, both assume and assert are "independent" of control flow since you don't have execution or control flow in mathematics. The dependency is tied to the "renaming of variables into constants" which allows Hoare logic to work on variables, not only constants. However, for proper constants, you don't need that limit for either assert or assume:

if(B){
  assume(B && PI==3.14)
} else {
  assume(!B && PI==3.14) // must hold here too since it is a constant
}

Apply the Hoare logic rule for if:

if this holds: {B ∧ P} S {Q} , {¬B ∧ P} T {Q}
then this holds: {P} if B then S else T endif {Q}

which gives:

assume(PI==3.14)
if(B){
  assume(PI==3.14)
} else {
  assume(PI==3.14)
}
assume(PI==3.14)

(Usually you only have assume at the top in the preconditions, since you otherwise don't get a proof :-).
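The if-rule itself can be checked over a finite state space; a toy sketch (all helper names invented) with B ≡ x >= 0, S ≡ y = x, T ≡ y = -x, P ≡ true and Q ≡ y >= 0:

```python
def triple(pre, stmt, post, states):
    """Checks the Hoare triple {pre} stmt {post} over a finite state space."""
    return all(post(stmt(dict(s))) for s in states if pre(s))

states = [{"x": x} for x in range(-3, 4)]

B = lambda s: s["x"] >= 0
S = lambda s: {**s, "y": s["x"]}    # then-branch: y = x
T = lambda s: {**s, "y": -s["x"]}   # else-branch: y = -x
P = lambda s: True                  # precondition
Q = lambda s: s["y"] >= 0           # postcondition: y == |x|, so y >= 0

# Premises of the rule: {B ∧ P} S {Q} and {¬B ∧ P} T {Q}
assert triple(lambda s: B(s) and P(s), S, Q, states)
assert triple(lambda s: (not B(s)) and P(s), T, Q, states)

# Conclusion: {P} if B then S else T endif {Q}
ite = lambda s: S(s) if B(s) else T(s)
assert triple(P, ite, Q, states)
```

Note that the premises only have to hold on the states their guard admits: T alone does not satisfy {P} T {Q} (it fails for positive x), yet the combined conditional does.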
Aug 06 2014
parent reply "Matthias Bentrup" <matthias.bentrup googlemail.com> writes:
On Thursday, 7 August 2014 at 06:32:18 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 7 August 2014 at 05:41:22 UTC, Matthias Bentrup 
 wrote:
 If assume is independent of control flow, then clearly this
 cannot be related to assert.
 Well, both assume and assert are "independent" of control flow since you don't have execution or control flow in mathematics. The dependency is tied to the "renaming of variables into constants" which allows Hoare logic to work on variables, not only constants. However, for proper constants, you don't need that limit for either assert or assume:
 
 if(B){
   assume(B && PI==3.14)
 } else {
   assume(!B && PI==3.14) // must hold here too since it is a constant
 }
 
 Apply the Hoare logic rule for if:
 
 if this holds: {B ∧ P} S {Q} , {¬B ∧ P} T {Q}
 then this holds: {P} if B then S else T endif {Q}
 
 which gives:
 
 assume(PI==3.14)
 if(B){
   assume(PI==3.14)
 } else {
   assume(PI==3.14)
 }
 assume(PI==3.14)
 
 (Usually you only have assume at the top in the preconditions, since you otherwise don't get a proof :-).
So the D function (note that "a" is constant)

int silly() {
  enum int a = 1;
  if( a == 2 ) {
    assert( a == 2 );
  }
  return a;
}

has undefined semantics (at least in debug mode), because it contains a false assertion? That is not my understanding of assert.
Aug 07 2014
parent reply "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 08:12:23 UTC, Matthias Bentrup 
wrote:
 So the D function (note that "a" is constant)

 int silly() {
   enum int a = 1;
   if( a == 2 ) {
     assert( a == 2 );
   }
   return a;
 }

 has undefined semantics (at least in debug mode), because it 
 contains a false assertion ?
It isn't reachable so it is not part of the program? If you support generic programming you probably should limit proofs to reachable portions of library code. However the specification or the implementation of it appears to be flawed, so perhaps it should be considered incorrect if this was in application code. You usually try to prove post conditions in terms of pre conditions. The invariants, variants and asserts are just tools to get there. So this is not a typical issue.
Aug 07 2014
parent reply "Matthias Bentrup" <matthias.bentrup googlemail.com> writes:
On Thursday, 7 August 2014 at 08:35:21 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 7 August 2014 at 08:12:23 UTC, Matthias Bentrup 
 wrote:
 So the D function (note that "a" is constant)

 int silly() {
  enum int a = 1;
  if( a == 2 ) {
    assert( a == 2 );
  }
  return a;
 }

 has undefined semantics (at least in debug mode), because it 
 contains a false assertion ?
It isn't reachable so it is not part of the program? If you support generic programming you probably should limit proofs to reachable portions of library code. However the specification or the implementation of it appears to be flawed, so perhaps it should be considered incorrect if this was in application code. You usually try to prove post conditions in terms of pre conditions. The invariants, variants and asserts are just tools to get there. So this is not a typical issue.
So reachability has an influence, but control flow hasn't ?
Aug 07 2014
parent "Ola Fosheim Grøstad" writes:
On Thursday, 7 August 2014 at 08:52:43 UTC, Matthias Bentrup 
wrote:
 So reachability has an influence, but control flow hasn't ?
You have to define what you want to prove:

1. The postcondition to hold?
2. A sound specification to be fully implemented?
3. That executed code is tested when it executes, ignoring that important code should have been executed instead.

(1) and (2) are what program verification aims for. (1) is what design by contract tries to establish. (3) is weak and is what C asserts do.
Aug 07 2014
prev sibling parent reply "Marc Schütz" <schuetzm gmx.net> writes:
On Wednesday, 6 August 2014 at 16:00:33 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 6 August 2014 at 15:36:52 UTC, Marc Schütz wrote:
 This is an assume, not an assert.
Not sure what you mean. An assume is an assert proven (or defined) to hold. It can be a proven theorem which has been given the status of an axiom. It is known to keep the algebraic system sound. If you claim that something unsound is proven then all bets are off everywhere?
I guess we're talking past each other. You were saying that Hoare logic doesn't work with non-terminating loops, and I was responding that there was no non-terminating loop in the example. That's all there is to it. To clarify: The assume condition wasn't `false`, it was `!running`. There is a situation in which this is true, namely when `running` is false at the point where the assume is. Because the variable isn't changed in the loop, it follows that it's also true before the loop. I see no contradiction here.
Aug 06 2014
parent reply "Ola Fosheim Grøstad" writes:
On Wednesday, 6 August 2014 at 17:03:44 UTC, Marc Schütz wrote:
 I guess we're talking past each other. You were saying that 
 Hoare logic doesn't work with non-terminating loops, and I was 
 responding that there was no non-terminating loop in the 
 example. That's all there is to it.
Oh well, but a terminating loop that does not change the condition is just an empty statement. So then you have basically proved that it isn't a loop… ;)
Aug 06 2014
parent reply "Matthias Bentrup" <matthias.bentrup googlemail.com> writes:
On Wednesday, 6 August 2014 at 17:08:18 UTC, Ola Fosheim Grøstad
wrote:
 On Wednesday, 6 August 2014 at 17:03:44 UTC, Marc Schütz wrote:
 I guess we're talking past each other. You were saying that 
 Hoare logic doesn't work with non-terminating loops, and I was 
 responding that there was no non-terminating loop in the 
 example. That's all there is to it.
Oh well, but a terminating loop that does not change the condition is just an empty statement. So then you have basically proved that it isn't a loop… ;)
I still can't see how you can infer that the assume(!running) which clearly holds after the loop also holds before the loop.
Aug 06 2014
parent "Ola Fosheim Grøstad" writes:
On Wednesday, 6 August 2014 at 17:19:45 UTC, Matthias Bentrup 
wrote:
 I still can't see how you can infer that the assume(!running)
 which clearly holds after the loop also holds before the loop.
It holds if the loop does not change running, trivially? But then you no longer have a loop. Note also that you can implement an if construct from while this way:

while(B){
  S;
  B = false;
}

If that was the case, then you could not infer anything about B before the while.
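The construction is easy to demonstrate; a small sketch (helper name invented) showing that the while-form runs its body at most once, i.e. behaves like `if (B) S;`:

```python
def if_from_while(b, body):
    """while(B){ S; B = false; } -- executes `body` at most once, so it
    behaves like `if (B) S;`. Because the loop assigns to B, nothing about
    B's value *before* the loop can be inferred from B being false after it."""
    runs = 0
    while b:
        body()
        runs += 1
        b = False  # loop exits after the first iteration
    return runs

assert if_from_while(True, lambda: None) == 1   # body runs exactly once
assert if_from_while(False, lambda: None) == 0  # body never runs
```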
Aug 06 2014
prev sibling parent reply "Matthias Bentrup" <matthias.bentrup googlemail.com> writes:
On Wednesday, 6 August 2014 at 15:29:13 UTC, Ola Fosheim Grøstad
wrote:
 On Wednesday, 6 August 2014 at 15:02:10 UTC, Matthias Bentrup 
 wrote:
 is equivalent to

  while(running) {
    ...don't assign to running, don't break...
  }
  assume(!running);
You have to prove termination to get there? Plain Hoare logic cannot deal with non-terminating loops. Otherwise you risk proving any theorem below the loop to hold?
I'm pretty sure that running is false when the program reaches the end of the while loop, whether it is assigned in the loop or not. I only added the "don't assign to running" to make the example match your pattern.
Aug 06 2014
parent "Ola Fosheim Grøstad" writes:
On Wednesday, 6 August 2014 at 16:42:57 UTC, Matthias Bentrup 
wrote:
 I'm pretty sure that running is false when the program reaches
 the end of the while loop, whether it is assigned in the loop or
 not. I only added the "don't assign to running" to make the
 example match your pattern.
Well, if you don't assign to it in the loop, and it is known to terminate, then "running" is provably false before the loop for sure? But then it isn't a loop… It is an empty statement: skip, NOP…
Aug 06 2014
prev sibling parent Artur Skawina via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 08/06/14 17:02, Matthias Bentrup via Digitalmars-d wrote:
 But even if there is no explicit assert()/assume() given by the developer, I
guess the optimizer is free to insert assumes that are provably correct, e.g.
 
   while(running) {
     ...don't assign to running, don't break...
   }
 
 is equivalent to
 
   while(running) {
     ...don't assign to running, don't break...
   }
   assume(!running);
In the "running==true" case that 'assume' will never be reached...
 is equivalent to
 
   assume(!running);
   while(running) {
     ...don't assign to running, don't break...
   }
... so, no, this transformation is not a valid one.
 So I take the compiler is allowed to throw away code without any asserts
already ?
It can do whatever it wants, as long as the observable behavior does not change. `Assume` removes that restriction as soon as a failing assert is reached. artur
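Artur's point, that the post-loop `assume(!running)` is only ever reached when the loop exits, can be illustrated with a bounded sketch (names invented; the fuel counter stands in for genuine non-termination):

```python
def reaches_post_loop_assume(running, fuel=100):
    """Models `while(running){ ... }` with no assignment to `running`.
    Returns True iff execution reaches the point after the loop, where
    assume(!running) would sit. With running True the loop spins forever
    (cut off here by `fuel`), so that point is never reached -- which is
    why hoisting the assume above the loop is an invalid transformation."""
    while running:
        fuel -= 1
        if fuel == 0:
            return False  # stand-in for "loops forever"
    assert not running    # the post-loop fact: trivially true when reached
    return True

assert reaches_post_loop_assume(False) is True
assert reaches_post_loop_assume(True) is False
```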
Aug 06 2014
prev sibling parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
 The main argument seems to revolve around whether this is actually a
change or not. In my (and others') view, the treatment of assert as 'assume' is not a change at all. It was in there all along, we just needed the wizard to tell us.
How can there be any question? This is a change in the compiler, a change in the docs, change in what your program does, change of the very bytes in the executable. If my program worked before and doesn't now, how is that not a change? This must be considered a change by any reasonable definition of the word change. I don't think I can take seriously this idea that someone's unstated, unmanifested intentions define change more so than things that are .. you know.. actual real changes.
Yes, sorry, there will be actual consequences if the optimizations are implemented. What I meant with the somewhat facetious statement was that there is no semantic change - broken code will still be broken, it will just be broken in a different way. If you subscribe to the idea that a failed assertion indicates all subsequent code is invalid, then subsequent code is broken (and undefined, if the spec says it is undefined). The change would be clarifying this in the spec, and dealing with the fallout of previously broken-but-still-working code behaving differently under optimization.

At least, that's how I understand it... I hope I am not mischaracterizing others' positions here (let me know, and I'll shut up).
 Well I think I outlined the issues in the OP. As for solutions, there
have been some suggestions in this thread, the simplest is to leave things as is and introduce the optimizer hint as a separate function, assume(). I don't think there was any argument presented against a separate function besides that Walter couldn't see any difference between the two behaviors, or the intention thing which doesn't really help us here.
An argument against the separate function: we already have a function, called assert. Those that want the nonoptimizing version (a disable-able 'if false throw' with no wider meaning) should get their own method, darnit.
 I guess the only real argument against it is that pre-existing asserts 
 contain significant optimization information that we can't afford to not 
 reuse.
Yessss. This gets to the argument - asserts contain information about the program. Specifically, a statement about the valid program state at a given point. So we should treat them as such.
 But this is a claim I'm pretty skeptical of.
Ah man, thought I had you.
 Andrei admitted it's just a hunch at this point. Try looking through your
 code base to see how many asserts would be useful for optimizing.
Ironically, I don't tend to use asserts at all in my code. I do not want code that will throw or not throw based on a compiler flag. Why am I even arguing about this stuff? If asserts were used for optimizing, I would look at actually using them more.
 (Can we programmatically (sp?) identify and flag/resolve 
 issues that occur from a mismatch of expectations?)
I'm not an expert on this, but my guess is it's possible in theory but would never happen in practice. Such things are very complex to implement; if Walter won't agree to a simple and easy solution, I'm pretty sure there's no way in hell he would agree to a complex one that takes a massive amount of effort.
If gcc et all do similar optimizations, how do they handle messaging?
Aug 06 2014
prev sibling parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Wednesday, 6 August 2014 at 01:11:55 UTC, Jeremy Powers via 
Digitalmars-d wrote:
 That's in the past. This is all about the pros and cons of 
 changing it now
 and for the future.
The main argument seems to revolve around whether this is actually a change or not. In my (and others') view, the treatment of assert as 'assume' is not a change at all. It was in there all along, we just needed the wizard to tell us.
This is already the first misunderstanding: The argument is about whether it's a good idea, not whether it's newly introduced or has been the intended meaning since assert's conception.
 The below can safely be ignored, as I just continue the pedantic
 discussions....


 OK, but my point was you were using a different definition of 
 undefined
 behavior. We can't communicate if we aren't using the same 
 meanings of
 words.
Yes, very true. My definition of undefined in this case hinges on my definition of what assert means. If a failed assert means all code after it is invalid, then by definition (as I interpret the definition) that code is invalid and can be said to have undefined behaviour. That is, it makes sense to me that it is specified as undefined, by the spec that is incredibly unclear. I may be reading too much into it here, but this follows the strict definition of undefined - it is undefined because it is defined to be undefined. This is the 'because I said so' defense.
Of course you can define your own concept and call it "undefined", but I don't see how it matters. The concept described by the usual definition of "undefined" still exists, and it still has very different implications than your concept has.

To give a more practical example: You're writing an authentication function. It takes a username and a password, and returns true or false, depending on whether the password is correct for this username. Unfortunately, the verification algorithm is wrong: due to an integer overflow in the hash calculation, it rejects some valid passwords, but never accepts invalid ones. The program is clearly incorrect, but its behaviour is still well-defined and predictable (overflow is not undefined in D). Now, if the flaw in the algorithm were due to an actual undefined operation, everything could happen, including the function accepting invalid passwords. I hope it's clear that this is a very different class of brokenness.
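The example can be made concrete; in the sketch below (all function names and the hash are invented for illustration), the login path recomputes the hash with wrapping 32-bit arithmetic, emulating fixed-width overflow, while the stored reference was computed without wrapping. For long passwords the two disagree, so the verifier rejects the valid password and never accepts any password for that user: wrong, but every execution path is fully defined.

```python
MASK = 0xFFFFFFFF

def stored_hash(password):
    # Reference hash computed with unbounded integers (no overflow).
    h = 0
    for ch in password:
        h = h * 131 + ord(ch)
    return h

def login_hash(password):
    # BUG: the login path wraps every step at 32 bits, emulating
    # fixed-width integer overflow; for long passwords it can never
    # equal the (much larger) stored reference value.
    h = 0
    for ch in password:
        h = (h * 131 + ord(ch)) & MASK
    return h

USERS = {"alice": stored_hash("correct horse battery staple")}

def authenticate(user, password):
    # Incorrect (it rejects alice's real password) but never undefined:
    # wrapping arithmetic is fully specified, so the behaviour is
    # predictable -- a different class of brokenness than UB.
    return USERS.get(user) == login_hash(password)
```

Since `login_hash` is always below 2^32 and alice's stored value is far above it, the comparison is deterministically false for any input, valid or not.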
 My stance is that this new/old definition is a good thing, as 
 it matches
 how I thought things were already, and any code that surfaces 
 as broken
 because of it was already broken in my definition.  Therefore 
 this 'change'
 is good, does not introduce breaking changes, and arguments 
 about such
 should be redirected towards mitigation and fixing of 
 expectations.

 In an attempt to return this discussion to something useful, 
 question:

 If assert means what I think it means (and assuming we agree on 
 what the
 actual two interpretations are), how do we allay concerns about 
 it?  Is
 there anything fundamentally/irretrievably bad if we use this 
 new/old
 definition?  (Can we programmatically (sp?) identify and 
 flag/resolve
 issues that occur from a mismatch of expectations?)
My understanding (which is probably the same as that of most people participating in the discussion, because as I said above, I _don't_ think the argument is about a misunderstanding on this level):

Walter's assert:

* Expresses a statement that the programmer intends to be true. It is only checked in non-release mode.
* The compiler can assume that it is true - even in release mode - because the programmer explicitly said so, and the compiler may not have figured it out by itself (similar to casts, which also express assumptions by the programmer that the compiler cannot know otherwise).
* Asserting a condition that is false is undefined behaviour.

The other assert:

* Expresses a statement that the programmer intends to be true. It is only checked in non-release mode.
* Because it is unlikely that the programmer has proved the correctness in the general case, the compiler must not assume it is true unless it can prove it to be, either at compile time, or with a runtime check. Release mode disables the runtime checks.
* Asserting a condition that is false either raises an error at runtime, aborts compilation, or doesn't do anything. It never causes undefined behaviour by itself.

As I already wrote elsewhere, an assert with the suggested/intended behaviour is a very dangerous tool that should not be used as widely as it is today. If the asserted condition is wrong (for whatever reason), it would create not just wrong behaviour, but undefined behaviour (as described above, not your concept).

H.S. Teoh however suggested to extend compile time checking for assertions. I believe this is the way to go forward, and it has great potential. What I don't agree with, of course, is to just believe anything in the assertions to be true without verifying it.
Aug 06 2014
prev sibling parent reply Mike Parker <aldacron gmail.com> writes:
On 8/6/2014 8:18 AM, David Bregman wrote:

 This appears to be the root of the argument, and has been circled
 repeatedly... it's not my intent to restart another round of
 discussion on
 that well traveled ground, I just wanted to state my support for the
 definition as I understand it.
I disagree. I don't think the fact that some people already had the new definition in mind before is really all that relevant. That's in the past. This is all about the pros and cons of changing it now and for the future.
You keep going on the premise that your definition is the intended definition. I completely disagree. My understanding of assert has always been as Walter has described it. To me, *that* is the existing definition and yours is completely new. Nothing is being changed. He's just talking about starting to take advantage of it as he has always intended to.
Aug 05 2014
next sibling parent "David Bregman" <drb sfu.ca> writes:
On Wednesday, 6 August 2014 at 01:39:25 UTC, Mike Parker wrote:
 On 8/6/2014 8:18 AM, David Bregman wrote:

 This appears to be the root of the argument, and has been 
 circled
 repeatedly... it's not my intent to restart another round of
 discussion on
 that well traveled ground, I just wanted to state my support 
 for the
 definition as I understand it.
I disagree. I don't think the fact that some people already had the new definition in mind before is really all that relevant. That's in the past. This is all about the pros and cons of changing it now and for the future.
You keep going on the premise that your definition is the intended definition. I completely disagree. My understanding of assert has always been as Walter has described it. To me, *that* is the existing definition and yours is completely new. Nothing is being changed. He's just talking about starting to take advantage of it as he has always intended to.
No, intention is orthogonal to this. Again, this is all about the pros and cons of changing the *actual* semantics of assert.
Aug 05 2014
prev sibling parent reply "eles" <eles215 gzk.dot> writes:
On Wednesday, 6 August 2014 at 01:39:25 UTC, Mike Parker wrote:
 On 8/6/2014 8:18 AM, David Bregman wrote:
 You keep going on the premise that your definition is the 
 intended definition. I completely disagree. My understanding of 
 assert has always been as Walter has described it.
I did not use to think the same, but once Walter stated his vision of assert(), it was like a kind of revelation to me: why wouldn't the optimizer make use of such obvious information as assert() provides, just like a simple:

if(x<5) {
  // you can safely optimize this code knowing (*assuming*) that always x<5
}

But what started to bother me lately, and is, I think, the root of the problem: having programmer code disabled by a compiler flag. I do not speak about bounds checking, where the code is never explicitly written by the programmer, but of real programmer code.

Until now, versioning (or, in C/C++, the #ifdef) was the sole acceptable way to disable programmer code. The C assert slipped through as being based on an #ifdef or #if (I know good compilers will also optimize out an if(0), but that is just because it happens to be obvious). OTOH, the D assert is no longer based (directly) on versioning, so having it disabled by a flag is not easily grasped. This, combined with the sudden revelation of the optimizer messing with it, produced a shock, and this thread illustrates it.

People are just too used to its secondary meaning from C, that is, besides testing conditions: "easily obtain a log of what and where was wrong". So it was an assertion, but also a logging feature (albeit a fatal one). People got used to assert() becoming no-op code in release mode, just as they would disable the logs for release.

The more I think about it, the more I feel that assert would more naturally be an annotation or a kind of versioning. Still, I cannot come up with a clear-cut proposition, as my mind is also entangled in old habits. On one hand, it feels natural as an instruction; on the other hand, being disable-able, maybe even ignorable (in release builds), and an invalidating point for the program logic*, it should belong somewhere else.

*I think this is important: it is not only a tested condition that is then handled, but a tested condition exposing a "does not work as intended".
It looks more like a kind of "this code should be unreachable".
Aug 05 2014
parent "eles" <eles215 gzk.dot> writes:
On Wednesday, 6 August 2014 at 02:43:17 UTC, eles wrote:
 On Wednesday, 6 August 2014 at 01:39:25 UTC, Mike Parker wrote:
 On 8/6/2014 8:18 AM, David Bregman wrote:
 *I think this is important: is not only a tested condition that 
 is then handled, but a tested condition exposing a "does not 
 work as intended". It looks more like a kind of "this code 
 should be unreachable".
nitpicking: it is not "my program does not behave as expected (for example, because the config file is broken)" but "my code wasn't supposed to say that and I did not mean to write such code"
Aug 05 2014
prev sibling parent "David Bregman" <drb sfu.ca> writes:
On Tuesday, 5 August 2014 at 18:19:00 UTC, Jeremy Powers via 
Digitalmars-d wrote:
 This has already been stated by others, but I just wanted to 
 pile on - I
 agree with Walter's definition of assert.

 2. Semantic change.
 The proposal changes the meaning of assert(), which will 
 result in
 breaking existing code.  Regardless of philosophizing about 
 whether or not
 the code was "already broken" according to some definition of 
 assert, the
 fact is that shipping programs that worked perfectly well 
 before may no
 longer work after this change.
Disagree. Assert (as I always understood it) means 'this must be true, or my program is broken.' In -release builds the explicit explosion on a triggered assert is skipped, but any code after a non-true assert is, by definition, broken. And by broken I mean the fundamental constraints of the program are violated, and so all bets are off on it working properly. A shipping program that 'worked perfectly well' but has unfulfilled asserts is broken - either the asserts are not actually true constraints, or the broken path just hasn't been hit yet.
This is the 'already broken' argument, which I mentioned in the quote above. This kind of change could never be made in C or C++, because there is too much legacy code that depends on it. Perhaps D can still afford to break this because it's still a young language. That is a strength of young languages. If you believe in this case that the upside justifies the breakage, by all means, just say so and accept the consequences. Don't try to escape responsibility by retroactively redefining previously working code as broken :)
 Looking at the 'breaking' example:

 assert(x!=1);
 if (x==1) {
  ...
 }

 If the if is optimized out, this will change from existing 
 behaviour.  But
 it is also obviously (to me at least) broken code already.  The 
 assert says
 that x cannot be 1 at this point in the program, if it ever is 
 then there
 is an error in the program.... and then it continues as if the 
 program were
 still valid.  If x could be one, then the assert is invalid 
 here.  And this
 code will already behave differently between -release and 
 non-release
 builds, which is another kind of broken.
Not everything that breaks will be so obvious as that. It can get much more hairy when assumptions propagate across multiple levels of inlining. Also, some code purposely uses that pattern. It is (or rather, was) valid for a different use case of assert.
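A minimal C sketch of how such an assumption can propagate across inlining (hypothetical function names; the `ASSUME` macro is the GCC/Clang `__builtin_unreachable` idiom):

```c
/* Assume-style assert: the optimizer may treat the condition as
 * true; a false condition is undefined behavior. */
#define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)

static inline int clamp_small(int i) {
    ASSUME(i >= 0 && i < 4);   /* the caller "promises" 0 <= i < 4 */
    return i;
}

int lookup(const int *table, int i) {
    i = clamp_small(i);
    /* After inlining, the optimizer knows 0 <= i < 4 and is entitled
     * to delete this guard entirely -- the defensive check silently
     * disappears, even though it lives in a different function than
     * the assumption that kills it. */
    if (i < 0 || i >= 4)
        return -1;
    return table[i];
}
```

The point is that the deleted check and the assumption need not be adjacent in the source, which is what makes the breakage hard to spot by inspection.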
 3a. An alternate statement of the proposal is literally "in 
 release mode,
 assert expressions introduce undefined behavior into your code 
 if the
 expression is false".
This statement seems fundamentally true to me of asserts already, regardless of whether they are used for optimizations. If your assert fails, and you have turned off 'blow up on assert' then your program is in an undefined state. It is not that the assert introduces the undefined behaviour, it is that the assert makes plain an expectation of the code and if that expectation is false the code will have undefined behaviour.
This is not the standard definition of undefined behavior. With regular assert, execution is still well defined. If you want to know what happens in release mode when the assert condition is not satisfied, all you need to do is read the source code to find out. With assume, if the condition is not satisfied, there is no way to know what will happen. _anything_ can happen, it can even format your hard drive. That's true undefined behavior.
 3b. Since assert is such a widely used feature (with the 
 original
 semantics, "more asserts never hurt"), the proposal will 
 inject a massive
 amount of undefined behavior into existing code bases, greatly 
 increasing
 the probability of experiencing problems related to undefined 
 behavior.
I actually disagree with the 'more asserts never hurt' statement. Exactly because asserts get compiled out in release builds, I do not find them very useful/desirable. If they worked as optimization hints I might actually use them more. And there will be no injection of undefined behaviour - the undefined behaviour is already there if the asserted constraints are not valid.
This uses your own definition of UB again, it isn't true for the regular definition.
  Maybe if the yea side was consulted, they might easily agree 
 to an
 alternative way of achieving the improved optimization goal, 
 such as
 creating a new function that has the proposed semantics.
Prior to this (incredibly long) discussion, I was not aware people had a different interpretation of assert. To me, this 'new semantics' is precisely what I always thought assert was, and the proposal is just leveraging it for some additional optimizations. So from my standpoint, adding a new function would make sense to support this 'existing' behaviour that others seem to rely on - assert is fine as is, if the definition of 'is' is what I think it is.
That doesn't mean you don't have buggy asserts in your code, or buggy asserts in libraries you use (or the standard library), so you are not immune from undefined behavior.
Aug 05 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
(limited connectivity for me)

For some perspective, recently gcc and clang have introduced 
optimizations based on undefined behavior in C/C++. The undefined 
behavior has been interpreted by modern optimizers as "these cases will 
never happen". This has wound up breaking a significant amount of 
existing code. There have been a number of articles about these, with 
detailed explanations about how they come about and the new, more 
correct, way to write code.

The emerging consensus is that the code breakage is worth it for the 
performance gains. That said, I do hear what people are saying about 
potential code breakage and agree that we need to address this properly.
Aug 05 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Aug 05, 2014 at 11:35:14AM -0700, Walter Bright via Digitalmars-d wrote:
 (limited connectivity for me)
 
 For some perspective, recently gcc and clang have introduced
 optimizations based on undefined behavior in C/C++. The undefined
 behavior has been interpreted by modern optimizers as "these cases
 will never happen". This has wound up breaking a significant amount of
 existing code. There have been a number of articles about these, with
 detailed explanations about how they come about and the new, more
 correct, way to write code.
And I'd like to emphasize that code *should* have been written in this new, more correct way in the first place. Yes, it's a pain to have to update legacy code, but where would progress be if we're continually hampered by the fear of breaking what was *already* broken to begin with?
 The emerging consensus is that the code breakage is worth it for the
 performance gains. That said, I do hear what people are saying about
 potential code breakage and agree that we need to address this
 properly.
The way I see it, we need to educate D users to use 'assert' with the proper meaning, and to replace all other usages with alternatives (perhaps a Phobos function that does what they want without the full implications of assert -- i.e., "breaking" behaviour like influencing the optimizer, etc.). Once reasonable notice and time has been given, I'm all for introducing optimizer hinting with asserts. I think in the long run, this will turn out to be an important, revolutionary development not just in D, but in programming languages in general. T -- Ignorance is bliss... until you suffer the consequences!
Aug 05 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 5 August 2014 at 19:14:57 UTC, H. S. Teoh via 
Digitalmars-d wrote:
T

--
Ignorance is bliss... until you suffer the consequences!
(sic!)
Aug 05 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/5/2014 12:13 PM, H. S. Teoh via Digitalmars-d wrote:
 The way I see it, we need to educate D users to use 'assert' with the
 proper meaning,
I agree. We're starting with improving the spec wording.
 I think in the
 long run, this will turn out to be an important, revolutionary
 development not just in D, but in programming languages in general.
I agree. It also opens the door for programmers providing simple, checkable hints to the optimizer. I don't know how far we can go with that, but I suspect significant opportunity.
Aug 05 2014
prev sibling parent "David Bregman" <drb sfu.ca> writes:
On Tuesday, 5 August 2014 at 18:35:32 UTC, Walter Bright wrote:
 (limited connectivity for me)

 For some perspective, recently gcc and clang have introduced 
 optimizations based on undefined behavior in C/C++. The 
 undefined behavior has been interpreted by modern optimizers as 
 "these cases will never happen". This has wound up breaking a 
 significant amount of existing code. There have been a number 
 of articles about these, with detailed explanations about how 
 they come about and the new, more correct, way to write code.

 The emerging consensus is that the code breakage is worth it 
 for the performance gains.

 That said, I do hear what people are saying about potential 
 code breakage and agree that we need to address this properly.
Well, then at least we agree there is some kind of tradeoff being made here if the definition of assert is changed: potential performance vs a bunch of negatives. In my estimation, the upside is small and the tradeoff is not close to being justified.

If performance is a top goal, there are many other things that could be done which have lesser (or zero) downsides. Just to give one example, addition of a forceinline attribute would be of great assistance to those attempting to micro optimize their code.

And of course, adding this as a new function instead of redefining an existing one would eliminate the code breakage and C compatibility issues. The undefined behavior issue would remain, but at least defining assume as @system would alleviate the @safe issue somewhat (there could still be leaks due to inlining), and make it more clear to users that it's a dangerous feature. It would also make it more searchable, for code auditing purposes.

Anyways, if I have at least made you and others aware of all the downsides, and all the contradictions of this proposal with D's stated goals, then I guess I have done my part for this issue.
Aug 05 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d wrote:
 This has already been stated by others, but I just wanted to pile on -
 I agree with Walter's definition of assert.
 
 2. Semantic change.
 The proposal changes the meaning of assert(), which will result in
 breaking existing code.  Regardless of philosophizing about whether
 or not the code was "already broken" according to some definition of
 assert, the fact is that shipping programs that worked perfectly
 well before may no longer work after this change.
Disagree. Assert (as I always understood it) means 'this must be true, or my program is broken.' In -release builds the explicit explosion on a triggered assert is skipped, but any code after a non-true assert is, by definition, broken. And by broken I mean the fundamental constraints of the program are violated, and so all bets are off on it working properly. A shipping program that 'worked perfectly well' but has unfulfilled asserts is broken - either the asserts are not actually true constraints, or the broken path just hasn't been hit yet.
Exactly. I think part of the problem is that people have been using assert with the wrong meaning. In my mind, 'assert(x)' doesn't mean "abort if x is false in debug mode but silently ignore in release mode", as some people apparently think it means. To me, it means "at this point in the program, x is true". It's that simple.

Now if it turns out that x actually *isn't* true, then you have a contradiction in your program logic, and therefore, by definition, your program is invalid, which means any subsequent behaviour is undefined. If you start with an axiomatic system where the axioms contain a contradiction, then any results you derive from the system will be meaningless, since a contradiction vacuously proves everything. Similarly, any program behaviour that follows a false assertion is undefined, because one of the "axioms" (i.e., assertions) introduces a contradiction to the program logic.
 Looking at the 'breaking' example:
 
 assert(x!=1);
 if (x==1) {
  ...
 }
 
 If the if is optimized out, this will change from existing behaviour.
 But it is also obviously (to me at least) broken code already.  The
 assert says that x cannot be 1 at this point in the program, if it
 ever is then there is an error in the program.... and then it
 continues as if the program were still valid.  If x could be one, then
 the assert is invalid here.  And this code will already behave
 differently between -release and non-release builds, which is another
 kind of broken.
Which is what Walter has been saying: the code is *already* broken, and is invalid by definition, so it makes no difference what the optimizer does or doesn't do. If your program has an array overrun bug that writes garbage to an unrelated variable, then you can't blame the optimizer for producing a program where the unrelated variable acquires a different garbage value from before. The optimizer only guarantees (in theory) consistent program behaviour if the program is valid to begin with. If the program is invalid, all bets are off as to what its "optimized" version does.
 3a. An alternate statement of the proposal is literally "in release
 mode, assert expressions introduce undefined behavior into your code
 if the expression is false".
This statement seems fundamentally true to me of asserts already, regardless of whether they are used for optimizations. If your assert fails, and you have turned off 'blow up on assert' then your program is in an undefined state. It is not that the assert introduces the undefined behaviour, it is that the assert makes plain an expectation of the code and if that expectation is false the code will have undefined behaviour.
I agree.
 3b. Since assert is such a widely used feature (with the original
 semantics, "more asserts never hurt"), the proposal will inject a
 massive amount of undefined behavior into existing code bases,
 greatly increasing the probability of experiencing problems related
 to undefined behavior.
I actually disagree with the 'more asserts never hurt' statement. Exactly because asserts get compiled out in release builds, I do not find them very useful/desirable. If they worked as optimization hints I might actually use them more. And there will be no injection of undefined behaviour - the undefined behaviour is already there if the asserted constraints are not valid.
And if people are using asserts in ways that are different from what it's intended to be (expressions that must be true if the program logic has been correctly implemented), then their programs are already invalid by definition. Why should it be the compiler's responsibility to guarantee consistent behaviour of invalid code?
 Maybe if the yea side was consulted, they might easily agree to an
 alternative way of achieving the improved optimization goal, such as
 creating a new function that has the proposed semantics.
Prior to this (incredibly long) discussion, I was not aware people had a different interpretation of assert. To me, this 'new semantics' is precisely what I always thought assert was, and the proposal is just leveraging it for some additional optimizations. So from my standpoint, adding a new function would make sense to support this 'existing' behaviour that others seem to rely on - assert is fine as is, if the definition of 'is' is what I think it is.
Yes, the people using assert as a kind of "check in debug mode but ignore in release mode" should really be using something else instead, since that's not what assert means. I'm honestly astounded that people would actually use assert as some kind of non-release-mode-check instead of the statement of truth that it was meant to be.

Furthermore, I think Walter's idea to use asserts as a source of optimizer hints is a very powerful concept that may turn out to be a revolutionary feature in D. It could very well develop into the answer to my long search for a way of declaring identities in user-defined types that allow high-level optimizations by the optimizer, thus allowing user-defined types to be on par with built-in types in optimizability.

Currently, the compiler is able to optimize x+x+x+x into 4*x if x is an int, for example, but it can't if x is a user-defined type (e.g. BigInt), because it can't know if opBinary was defined in a way that obeys this identity. But if we can assert that this holds for the user-defined type, e.g., BigInt, then the compiler can make use of that axiom to perform such an optimization. This would then allow code to be written in more human-readable forms, and still maintain optimal performance, even where user-defined types are involved.

While manually-written code generally doesn't need this kind of optimization (instead of writing x+x+x+x, just write 4*x to begin with), this becomes an important issue with generic code and metaprogramming. The generic version of the code may very well be w+x+y+z, which cannot be reduced to n*x, so when you instantiate that code for the case where w==x==y==z, you have to pay the penalty of genericity. But we can eliminate this cost if we can tell the compiler that when w==x==y==z, then w+x+y+z == 4*x. Then we don't have to separately implement this special case in order to achieve optimal performance, but we will be able to continue using the generic, more maintainable code.
Something like this will require much more development of Walter's core concept than currently proposed, of course, but the current proposal is an important step in this direction, and I fully support it. T -- There are 10 kinds of people in the world: those who can count in binary, and those who can't.
Aug 05 2014
next sibling parent reply "Araq" <rumpf_a web.de> writes:
 Furthermore, I think Walter's idea to use asserts as a source of
 optimizer hints is a very powerful concept that may turn out to 
 be a
 revolutionary feature in D. It could very well develop into the 
 answer
 to my long search for a way of declaring identities in 
 user-defined
 types that allow high-level optimizations by the optimizer, thus
 allowing user-defined types to be on par with built-in types in
 optimizability.
The answer to your search is "term rewriting macros (with sideeffect and alias analysis)" as introduced by Nimrod. Watch my talk. ;-) 'assume' is not nearly powerful enough for this and in no way "revolutionary".
Aug 05 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/5/2014 12:25 PM, Araq wrote:
 'assume' is not nearly powerful enough for this and in no way "revolutionary".
More in the near-term realm of possibility are asserts that constrain the range of values for a variable, which can subsequently eliminate the extra code needed to handle the full range of the type. One case is the one that started off this whole discussion - constraining the range of values so that an overflow-checking-multiply need not actually check the overflow, because an overflow might be impossible. This kind of situation can come up in generic code, where the generic code is written conservatively and defensively, then relying on the caller to provide a few asserts which will in effect "customize" the generic code. What's exciting about this is it'll give us a lever we can use to generate better code than other languages are capable of.
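The overflow-check case can be sketched in C (hypothetical names; a simplified check assuming non-negative operands, with the plain C `assert` standing in for a range-constraining assert):

```c
#include <assert.h>
#include <limits.h>

/* Generic, defensive multiply: pays for an overflow check on every
 * call (simplified: assumes both operands are non-negative). */
static int checked_mul(int a, int b, int *out) {
    if (a != 0 && b > INT_MAX / a)
        return 0;               /* would overflow */
    *out = a * b;
    return 1;
}

int scale(int n) {
    /* Under assume semantics, this range constraint lets the
     * optimizer prove 1000 * n cannot overflow, so the overflow
     * branch in the inlined checked_mul can be deleted -- the
     * caller's assert "customizes" the generic code. */
    assert(n >= 0 && n < 1000);
    int r = 0;
    checked_mul(1000, n, &r);
    return r;
}
```

The generic routine stays conservative; only the call site that can supply a tighter range gets the cheaper code.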
Aug 05 2014
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote:
 On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d
wrote:
 Furthermore, I think Walter's idea to use asserts as a source of
 optimizer hints is a very powerful concept that may turn out to be a
 revolutionary feature in D.
LLVM already has it. It's not revolutionary: http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic By the way, I think Walter said "assert can be potentially used to make optimizations" not "Oh, I just had an idea! We could use assert to optimize code". I think the code already does this. Of course, we would have to look at the source code to find out... By the way, most of the time in this list I hear "We could use this and that feature to allow better optimizations" and no optimizations are ever implemented. Look at all those pure nosafe nothrow const that you have to put and yet you don't get any performance boost from that.
Aug 05 2014
next sibling parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Tuesday, 5 August 2014 at 20:09:44 UTC, Ary Borenszweig wrote:
 By the way, most of the time in this list I hear "We could use 
 this and that feature to allow better optimizations" and no 
 optimizations are ever implemented. Look at all those  pure 
 nosafe nothrow const that you have to put and yet you don't get 
 any performance boost from that.
Hmm... I've never seen these annotations as a performance feature. They're there to help writing correct programs. If this allows some performance gains, great, but IMHO it's not their primary purpose.
Aug 05 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Aug 05, 2014 at 05:09:43PM -0300, Ary Borenszweig via Digitalmars-d
wrote:
 On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote:
On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d wrote:
Furthermore, I think Walter's idea to use asserts as a source of
optimizer hints is a very powerful concept that may turn out to be a
revolutionary feature in D.
LLVM already has it. It's not revolutionary: http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic
Even better, so there's precedent for this. Even if it's only exposed at the LLVM level, rather than the source language. Introducing this at the source language level (like proposed in D) is a good step forward IMO.
 By the way, I think Walter said "assert can be potentially used to
 make optimizations" not "Oh, I just had an idea! We could use assert
 to optimize code". I think the code already does this. Of course, we
 would have to look at the source code to find out...
If the code already does this, then what are we arguing about?
 By the way, most of the time in this list I hear "We could use this
 and that feature to allow better optimizations" and no optimizations
 are ever implemented. Look at all those  pure nosafe nothrow const
 that you have to put and yet you don't get any performance boost from
 that.
Automatic attribute inference is the way to go. More and more, I'm beginning to be convinced that manually-specified attributes are a dead end. Having said that, though, I'm pretty sure the compiler could (and should) do more with pure/nothrow/const, etc.. I think it may already be taking advantage of nothrow/const (nothrow elides throw/catch scaffolding, e.g., which could be important in high-performance inner loops). With pure the current situation could be improved, since it currently only has effect when you call the same pure function multiple times within a single expression. But there are lots more things that could be done with it, e.g., memoization across different statements in the function body. PR's would be welcome. ;-) T -- What do you mean the Internet isn't filled with subliminal messages? What about all those buttons marked "submit"??
Aug 05 2014
parent Ary Borenszweig <ary esperanto.org.ar> writes:
On 8/5/14, 5:26 PM, H. S. Teoh via Digitalmars-d wrote:
 On Tue, Aug 05, 2014 at 05:09:43PM -0300, Ary Borenszweig via Digitalmars-d
wrote:
 On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote:
 On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d
wrote:
 Furthermore, I think Walter's idea to use asserts as a source of
 optimizer hints is a very powerful concept that may turn out to be a
 revolutionary feature in D.
LLVM already has it. It's not revolutionary: http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic
Even better, so there's precedent for this. Even if it's only exposed at the LLVM level, rather than the source language. Introducing this at the source language level (like proposed in D) is a good step forward IMO.
 By the way, I think Walter said "assert can be potentially used to
 make optimizations" not "Oh, I just had an idea! We could use assert
 to optimize code". I think the code already does this. Of course, we
 would have to look at the source code to find out...
If the code already does this, then what are we arguing about?
Exactly. I think the OP doesn't know that Walter wasn't proposing any semantic change in assert. Walter was just stating how assert works for him (or should work, but probably some optimizations are not implemented). We should ask Walter, but I think he's offline...
Aug 05 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/5/2014 1:09 PM, Ary Borenszweig wrote:
 On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote:
 On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d
wrote:
 Furthermore, I think Walter's idea to use asserts as a source of
 optimizer hints is a very powerful concept that may turn out to be a
 revolutionary feature in D.
LLVM already has it. It's not revolutionary: http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic
That's a language extension. A language extension is not a language feature. But it is strong evidence supporting my assertion that these sorts of things are inexorably coming. As bearophile posted, Microsoft also has such an intrinsic for their C++.
 By the way, I think Walter said "assert can be potentially used to make
 optimizations" not "Oh, I just had an idea! We could use assert to optimize
 code". I think the code already does this. Of course, we would have to look at
 the source code to find out...
It is hardly a new idea, or my idea. I got it from this 1981 book: http://www.amazon.com/Program-Flow-Analysis-Application-Prentice-Hall/dp/0137296819/ which I've had a copy of since '82 or so. My notions on asserts being contracts, regardless of switch settings, date to a similar time, see "Object Oriented Software Construction", a 1985 book.
 By the way, most of the time in this list I hear "We could use this and that
 feature to allow better optimizations" and no optimizations are ever
 implemented. Look at all those  pure nosafe nothrow const that you have to put
 and yet you don't get any performance boost from that.
Not only is that currently quite incorrect, don't put the chicken before the egg. The 'nothrow', etc., feature must exist before it can be taken advantage of.
Aug 05 2014
prev sibling parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Tuesday, 5 August 2014 at 18:57:40 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 Exactly. I think part of the problem is that people have been 
 using
 assert with the wrong meaning. In my mind, 'assert(x)' doesn't 
 mean
 "abort if x is false in debug mode but silently ignore in 
 release mode",
 as some people apparently think it means. To me, it means "at 
 this point
 in the program, x is true".  It's that simple.
A language construct with such a meaning is useless as a safety feature. If I first have to prove that the condition is true before I can safely use an assert, I don't need the assert anymore, because I've already proved it. If it is intended to be an optimization hint, it should be implemented as a pragma, not as a prominent feature meant to be widely used. (But I see that you have a different use case, see my comment below.)
 The optimizer only guarantees (in theory)
 consistent program behaviour if the program is valid to begin 
 with. If
 the program is invalid, all bets are off as to what its 
 "optimized"
 version does.
There is a difference between invalid and undefined: A program is invalid ("buggy") if it doesn't do what its programmer intended, while "undefined" is a matter of the language specification. The (wrong) behaviour of an invalid program need not be undefined, and often isn't in practice. An optimizer may only transform code in a way that keeps the resulting code semantically equivalent. This means that if the original "unoptimized" program is well-defined, the optimized one will be too.
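The practical stakes of this invalid-vs-undefined distinction show up in a classic C++ example (a sketch; `first_or_minus1` is a made-up function): a merely buggy program still behaves deterministically, but once behaviour is undefined the optimizer may delete code that looks load-bearing.

```cpp
#include <cstddef>

// Dereferencing a null pointer is undefined behaviour, so after the
// dereference the compiler may infer p != nullptr and treat the later
// null check as dead code, eligible for removal.
int first_or_minus1(const int* p) {
    int v = *p;            // UB if p == nullptr
    if (p == nullptr)      // may be optimized away entirely
        return -1;
    return v;
}
```

For valid (non-null) arguments the function is well-defined and the optimizer must preserve its result; only the undefined case licenses the surprising transformation.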
 Yes, the people using assert as a kind of "check in debug mode 
 but
 ignore in release mode" should really be using something else 
 instead,
 since that's not what assert means. I'm honestly astounded that 
 people
 would actually use assert as some kind of 
 non-release-mode-check instead
 of the statement of truth that it was meant to be.
Well, when this "something else" is introduced, it will need to replace almost every existing instance of "assert", as the latter must only be used if it is proven that the condition is always true. To name just one example, it cannot be used in range `front` and `popFront` methods to assert that the range is not empty, unless there is an additional non-assert check directly before it.
 Furthermore, I think Walter's idea to use asserts as a source of
 optimizer hints is a very powerful concept that may turn out to 
 be a
 revolutionary feature in D. It could very well develop into the 
 answer
 to my long search for a way of declaring identities in 
 user-defined
 types that allow high-level optimizations by the optimizer, thus
 allowing user-defined types to be on par with built-in types in
 optimizability. Currently, the compiler is able to optimize 
 x+x+x+x into
 4*x if x is an int, for example, but it can't if x is a 
 user-defined
 type (e.g. BigInt), because it can't know if opBinary was 
 defined in a
 way that obeys this identity. But if we can assert that this 
 holds for
 the user-defined type, e.g., BigInt, then the compiler can make 
 use of
 that axiom to perform such an optimization.  This would then 
 allow code
 to be written in more human-readable forms, and still maintain 
 optimal
 performance, even where user-defined types are involved.
This is a very powerful feature indeed, but to be used safely, the compiler needs to be able to detect invalid uses reliably at compile time. This is currently not the case:

    void onlyOddNumbersPlease(int n) {
        assert(n % 2);
    }

    void foo() {
        onlyOddNumbersPlease(42); // shouldn't compile, but does
    }

It would be great if this were possible. In the example of `front` and `popFront`, programs that call these methods on a range that could theoretically be empty wouldn't compile. This might be useful for optimization, but above that it's useful for verifying correctness. Unfortunately this is not what has been suggested (and was evidently intended from the beginning)...
Aug 05 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Aug 05, 2014 at 08:11:16PM +0000, via Digitalmars-d wrote:
 On Tuesday, 5 August 2014 at 18:57:40 UTC, H. S. Teoh via Digitalmars-d
 wrote:
Exactly. I think part of the problem is that people have been using
assert with the wrong meaning. In my mind, 'assert(x)' doesn't mean
"abort if x is false in debug mode but silently ignore in release
mode", as some people apparently think it means. To me, it means "at
this point in the program, x is true".  It's that simple.
A language construct with such a meaning is useless as a safety feature.
I don't see it as a safety feature at all.
 If I first have to prove that the condition is true before I can
 safely use an assert, I don't need the assert anymore, because I've
 already proved it.
I see it as future proofing: I may have proven the condition for *this* version of the program, but all software will change (unless it's dead), and change means the original proof may no longer be valid, but this part of the code is still written under the assumption that the condition holds. In most cases, it *does* still hold, so in general you're OK, but sometimes a change invalidates an axiom that, in consequence, invalidates the assertion. Then the assertion will trip (in non-release mode, of course), telling me that my program logic has become invalid due to the change I made. So I'll have to fix the problem so that the condition holds again.
 If it is intended to be an optimization hint, it should be implemented
 as a pragma, not as a prominent feature meant to be widely used. (But
 I see that you have a different use case, see my comment below.)
And here is the beauty of the idea: rather than polluting my code with optimization hints, which are out-of-band (and which are generally unverified and may be outright wrong after the code undergoes several revisions), I am stating *facts* about my program logic that must hold -- which therefore fits in very logically with the code itself. It even self-documents the code, to some extent. Then as an added benefit, the compiler is able to use these facts to emit more efficient code. So to me, it *should* be a prominent, widely-used feature. I would use it, and use it a *lot*.
The optimizer only guarantees (in theory) consistent program
behaviour if the program is valid to begin with. If the program is
invalid, all bets are off as to what its "optimized" version does.
There is a difference between invalid and undefined: A program is invalid ("buggy") if it doesn't do what its programmer intended, while "undefined" is a matter of the language specification. The (wrong) behaviour of an invalid program need not be undefined, and often isn't in practice.
To me, this distinction doesn't matter in practice, because in practice, an invalid program produces a wrong result, and a program with undefined behaviour also produces a wrong result. I don't care what kind of wrong result it is; what I care is to fix the program to *not* produce a wrong result.
 An optimizer may only transform code in a way that keeps the resulting
 code semantically equivalent. This means that if the original
 "unoptimized" program is well-defined, the optimized one will be too.
That's a nice property to have, but again, if my program produces a wrong result, then my program produces a wrong result. As a language user, I don't care that the optimizer may change one wrong result to a different wrong result. What I care about is to fix the code so that the program produces the *correct* result. To me, it only matters that the optimizer does the Right Thing when the program is correct to begin with. If the program was wrong, then it doesn't matter if the optimizer makes it a different kind of wrong; the program should be fixed so that it stops being wrong.
Yes, the people using assert as a kind of "check in debug mode but
ignore in release mode" should really be using something else
instead, since that's not what assert means. I'm honestly astounded
that people would actually use assert as some kind of
non-release-mode-check instead of the statement of truth that it was
meant to be.
Well, when this "something else" is introduced, it will need to replace almost every existing instance of "assert", as the latter must only be used if it is proven that the condition is always true. To name just one example, it cannot be used in range `front` and `popFront` methods to assert that the range is not empty, unless there is an additional non-assert check directly before it.
I don't follow this reasoning. For .front and .popFront to assert that the range is non-empty simply means that user code that attempts to do otherwise is wrong by definition, and must be fixed. I don't care if it's wrong as in invalid, or wrong as in undefined; the bottom line is that code that calls .front or .popFront on an empty range is incorrectly written, and therefore must be fixed.

If I were to implement ranges in assembly language, for example, I would not write .front or .popFront to check for emptiness before performing operations, because that introduces needless overhead in correct programs. In incorrect programs, it will crash (horribly), but that's just the consequence of the program being incorrect. Adding a check for emptiness does not help fix incorrectness -- you will still get a wrong result. The only difference is an exception instead of a crash -- nicer, I'll concede, but either way, you're not getting the correct result.

Now, during development, I might add emptiness checks to my assembly language range program for the purpose of locating incorrectly-written parts of the program. But before I ship the product, I'd better make darned sure that I've eliminated every bug that I can find, and convinced myself that the program will operate correctly. If I can't even convince myself that the program is correct, I have no business shipping it to customers. But if I've convinced myself that it is correct, then I might as well disable the emptiness checks so that my product will deliver top performance -- since that wouldn't be a problem in a correct program.
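The contract-style range being described can be sketched in C++ (a hypothetical `IntSlice` type, not D's actual range API): `front` and `popFront` assert rather than check, so correct callers pay nothing once assertions are disabled, while the emptiness check remains the caller's responsibility.

```cpp
#include <cassert>

// Hypothetical minimal range mirroring D's empty/front/popFront contract.
struct IntSlice {
    const int* cur;
    const int* end;
    bool empty() const { return cur == end; }
    int front() const {
        assert(!empty());  // contract: caller guarantees non-emptiness
        return *cur;
    }
    void popFront() {
        assert(!empty());  // same contract
        ++cur;
    }
};

// A correct caller performs the emptiness check itself, so the asserted
// contract always holds at the call sites.
int sum(IntSlice r) {
    int s = 0;
    for (; !r.empty(); r.popFront())
        s += r.front();
    return s;
}
```

Calling `front()` on an empty `IntSlice` is a caller bug by this contract; with assertions compiled out, it reads past the end, which is exactly the trade-off under discussion.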
Furthermore, I think Walter's idea to use asserts as a source of
optimizer hints is a very powerful concept that may turn out to be a
revolutionary feature in D. It could very well develop into the
answer to my long search for a way of declaring identities in
user-defined types that allow high-level optimizations by the
optimizer, thus allowing user-defined types to be on par with
built-in types in optimizability. Currently, the compiler is able to
optimize x+x+x+x into 4*x if x is an int, for example, but it can't
if x is a user-defined type (e.g. BigInt), because it can't know if
opBinary was defined in a way that obeys this identity. But if we can
assert that this holds for the user-defined type, e.g., BigInt, then
the compiler can make use of that axiom to perform such an
optimization.  This would then allow code to be written in more
human-readable forms, and still maintain optimal performance, even
where user-defined types are involved.
This is a very powerful feature indeed, but to be used safely, the compiler needs to be able to detect invalid uses reliably at compile time. This is currently not the case:

    void onlyOddNumbersPlease(int n) {
        assert(n % 2);
    }

    void foo() {
        onlyOddNumbersPlease(42); // shouldn't compile, but does
    }
In theory, the optimizer could use CTFE to reduce the function call, and thereby discover that the code is invalid. We don't have that today, but conceivably, we can achieve that one day. But taking a step back, there's only so much the compiler can do at compile-time. You can't stop *every* unsafe usage of something without also making it useless. While the manufacturer of a sharp cutting tool will presumably do their best to ensure the tool is safe to use, it's impossible to prevent *every* possible unsafe usage of said tool. If the user points the chainsaw to his foot, he will lose his foot, and there's nothing the manufacturer can do to prevent this except shipping a non-functional chainsaw. If the user insists on asserting things that are untrue, there will always be a way to bypass the compiler's static checks and end up with undefined behaviour at runtime.
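C++'s `constexpr` machinery gives a taste of what such CTFE checking could look like: a failing assert during constant evaluation is a compile-time error. This is a sketch assuming C++14-or-later semantics with assertions enabled (`NDEBUG` not defined); `onlyOdd` mirrors the thread's made-up example.

```cpp
#include <cassert>

// During constant evaluation, a failing assert makes the call ill-formed,
// so the bad argument is rejected at compile time instead of at runtime.
constexpr int onlyOdd(int n) {
    assert(n % 2 != 0);
    return n;
}

constexpr int ok = onlyOdd(41);     // compiles: the assertion provably holds
// constexpr int bad = onlyOdd(42); // would be rejected at compile time
```

The limitation is the same one discussed above: this only catches call sites the compiler actually evaluates at compile time.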
 It would be great if this were possible. In the example of `front` and
 `popFront`, programs that call these methods on a range that could
 theoretically be empty wouldn't compile. This might be useful for
 optimization, but above that it's useful for verifying correctness.
A sufficiently-aggressive optimizer might be able to verify this at compile-time by static analysis. But even that has its limits... for example:

    MyRange range;
    assert(range.empty);
    if (solveRiemannHypothesis()) // <-- don't know if this is true
        range.addElements(...);
    range.popFront(); // <-- should this compile or not?
 Unfortunately this is not what has been suggested (and was evidently
 intended from the beginning)...
I don't think static analysis is *excluded* by the current proposal. I can see it as a possible future enhancement. But the fact that we don't have it today doesn't mean we shouldn't step in that direction. T -- I've been around long enough to have seen an endless parade of magic new techniques du jour, most of which purport to remove the necessity of thought about your programming problem. In the end they wind up contributing one or two pieces to the collective wisdom, and fade away in the rearview mirror. -- Walter Bright
Aug 05 2014
parent reply "Marc Schütz" <schuetzm gmx.net> writes:
On Tuesday, 5 August 2014 at 21:17:14 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Tue, Aug 05, 2014 at 08:11:16PM +0000, via Digitalmars-d 
 wrote:
 On Tuesday, 5 August 2014 at 18:57:40 UTC, H. S. Teoh via 
 Digitalmars-d
 wrote:
Exactly. I think part of the problem is that people have been 
using
assert with the wrong meaning. In my mind, 'assert(x)' 
doesn't mean
"abort if x is false in debug mode but silently ignore in 
release
mode", as some people apparently think it means. To me, it 
means "at
this point in the program, x is true".  It's that simple.
A language construct with such a meaning is useless as a safety feature.
I don't see it as a safety feature at all.
Sorry, I should have written "correctness feature". I agree that it's not very useful for safety per se. (But of course, safety and correctness are not unrelated.)
 If I first have to prove that the condition is true before I 
 can
 safely use an assert, I don't need the assert anymore, because 
 I've
 already proved it.
I see it as future proofing: I may have proven the condition for *this* version of the program, but all software will change (unless it's dead), and change means the original proof may no longer be valid, but this part of the code is still written under the assumption that the condition holds. In most cases, it *does* still hold, so in general you're OK, but sometimes a change invalidates an axiom that, in consequence, invalidates the assertion. Then the assertion will trip (in non-release mode, of course), telling me that my program logic has become invalid due to the change I made. So I'll have to fix the problem so that the condition holds again.
Well, I think it's unlikely that you actually did prove the assert condition, except in trivial situations. This is related to the discussion about the ranges example, so I'll respond there.
 If it is intended to be an optimization hint, it should be 
 implemented
 as a pragma, not as a prominent feature meant to be widely 
 used. (But
 I see that you have a different use case, see my comment 
 below.)
And here is the beauty of the idea: rather than polluting my code with optimization hints, which are out-of-band (and which are generally unverified and may be outright wrong after the code undergoes several revisions), I am stating *facts* about my program logic that must hold -- which therefore fits in very logically with the code itself. It even self-documents the code, to some extent. Then as an added benefit, the compiler is able to use these facts to emit more efficient code. So to me, it *should* be a prominent, widely-used feature. I would use it, and use it a *lot*.
I think this is where we disagree mainly: What you call facts is something I see as intentions that *should* be true, but are not *proven* to be so. Again, see below.
 
The optimizer only guarantees (in theory) consistent program
behaviour if the program is valid to begin with. If the 
program is
invalid, all bets are off as to what its "optimized" version 
does.
There is a difference between invalid and undefined: A program is invalid ("buggy") if it doesn't do what its programmer intended, while "undefined" is a matter of the language specification. The (wrong) behaviour of an invalid program need not be undefined, and often isn't in practice.
To me, this distinction doesn't matter in practice, because in practice, an invalid program produces a wrong result, and a program with undefined behaviour also produces a wrong result. I don't care what kind of wrong result it is; what I care is to fix the program to *not* produce a wrong result.
Please see my response to Jeremy; the distinction is important: http://forum.dlang.org/thread/hqxoldeyugkazolllsna forum.dlang.org?page=11#post-eqlyruvwmzbpemvnrebw:40forum.dlang.org
 An optimizer may only transform code in a way that keeps the 
 resulting
 code semantically equivalent. This means that if the original
 "unoptimized" program is well-defined, the optimized one will 
 be too.
That's a nice property to have, but again, if my program produces a wrong result, then my program produces a wrong result. As a language user, I don't care that the optimizer may change one wrong result to a different wrong result. What I care about is to fix the code so that the program produces the *correct* result. To me, it only matters that the optimizer does the Right Thing when the program is correct to begin with. If the program was wrong, then it doesn't matter if the optimizer makes it a different kind of wrong; the program should be fixed so that it stops being wrong.
We're not living in an ideal world, unfortunately. It is bad enough that programs are wrong as they are written, we don't need the compiler to transform these programs to do something that is still wrong, but also completely different. This would make your goal of fixing the program very hard to achieve. In an extreme case, a small error in several million lines of code could manifest at a completely different place, because you cannot rely on any determinism once undefined behaviour is involved.
Yes, the people using assert as a kind of "check in debug 
mode but
ignore in release mode" should really be using something else
instead, since that's not what assert means. I'm honestly 
astounded
that people would actually use assert as some kind of
non-release-mode-check instead of the statement of truth that 
it was
meant to be.
Well, when this "something else" is introduced, it will need to replace almost every existing instance of "assert", as the latter must only be used if it is proven that the condition is always true. To name just one example, it cannot be used in range `front` and `popFront` methods to assert that the range is not empty, unless there is an additional non-assert check directly before it.
I don't follow this reasoning. For .front and .popFront to assert that the range is non-empty, simply means that user code that attempts to do otherwise is wrong by definition, and must be fixed. I don't care if it's wrong as in invalid, or wrong as in undefined, the bottom line is that code that calls .front or .popFront on an empty range is incorrectly written, and therefore must be fixed.
Just above you wrote that you "may have proven the condition". But in code like the following, there cannot be a proof:

    @property T front() {
        assert(!empty);
        return _other_range.front;
    }

This is in the standard library. The authors of this piece of code cannot have proven that the user of the library only calls `front` on a non-empty range.

Now consider the following example (mostly made up, but not unrealistic) that parses a text file (this could be a simple text-based data format):

    // ...
    // some processing
    // ...
    input.popFront();
    // end of line? => swallow and process next line
    if(input.front == '\n') { // <- this is wrong
        input.popFront();
        continue;
    }
    // ...
    // more code that doesn't call `input.popFront`
    // ...
    // more processing of input
    if(!input.empty) { // <- HERE
        // use input.front
    }

With the above definition of `front`, the second check marked "HERE" can be removed by the compiler. Even worse, if you insert `writeln(input.empty)` before the check for debugging, it might also output "false" (depending on how far the compiler goes).

Yes, this code is wrong. But it's an easy mistake to make, it might not be detected during testing because you only use correctly formatted input files, and it might also not lead to crashes (the buffer is unlikely to end at a boundary to unmapped memory). Now the assert - which is supposed to be helping the programmer write correct code - has made it _harder_ to detect the cause of an error.

What's worse is that it also removed a check that was necessary. This check could have been inserted by the programmer because the section of the code is security relevant, and they didn't want to rely on the input file being correct. The compiler has thereby turned a rather harmless mistake that would under normal circumstances only lead to incorrect output into a potentially exploitable security bug.
 -- snip --
 But if I've convinced myself that it is
 correct, then I might as well disable the emptiness checks so 
 that my
 product will deliver top performance -- since that wouldn't be 
 a problem
 in a correct program.
The problem is, as I explained above, that it doesn't just disable the emptiness checks where the asserts are. A simple mistake can have subtle and hard to debug effects all over your program.
 In theory, the optimizer could use CTFE to reduce the function 
 call, and
 thereby discover that the code is invalid. We don't have that 
 today, but
 conceivably, we can achieve that one day.

 But taking a step back, there's only so much the compiler can 
 do at
 compile-time. You can't stop *every* unsafe usage of something 
 without
 also making it useless. While the manufacturer of a sharp 
 cutting tool
 will presumably do their best to ensure the tool is safe to 
 use, it's
 impossible to prevent *every* possible unsafe usage of said 
 tool. If the
 user points the chainsaw to his foot, he will lose his foot, 
 and there's
 nothing the manufacturer can do to prevent this except shipping 
 a
 non-functional chainsaw. If the user insists on asserting 
 things that
 are untrue, there will always be a way to bypass the compiler's 
 static
 checks and end up with undefined behaviour at runtime.
I wouldn't be so pessimistic ;-) I guess most assert conditions are simple, mostly just comparisons or equality checks of one value with a constant. This should be relatively easy to verify with some control/data flow analysis (which Walter avoided until now, understandably). But CTFE is on the wrong level. It could only detect some of the failed conditions. It needs to be checked on a higher level, as real correctness proofs. If an assert condition cannot be proved - because it's always wrong, or just sometimes, or because the knowledge available to the compiler is not enough - it must be rejected. Think of it like an extension of type and const checking.
 It would be great if this were possible. In the example of 
 `front` and
 `popFront`, programs that call these methods on a range that 
 could
 theoretically be empty wouldn't compile. This might be useful 
 for
 optimization, but above that it's useful for verifying 
 correctness.
A sufficiently-aggressive optimizer might be able to verify this at compile-time by static analysis. But even that has its limits... for example:

    MyRange range;
    assert(range.empty);
    if (solveRiemannHypothesis()) // <-- don't know if this is true
        range.addElements(...);
    range.popFront(); // <-- should this compile or not?
It shouldn't, because it's not provable. However, most asserts are far less involved. There could be a specification of what is guaranteed to work, and what all compilers must therefore support.
 Unfortunately this is not what has been suggested (and was 
 evidently
 intended from the beginning)...
I don't think static analysis is *excluded* by the current proposal. I can see it as a possible future enhancement. But the fact that we don't have it today doesn't mean we shouldn't step in that direction.
I just don't see how we're stepping into that direction at all. It seems like the opposite: instead of trying to prove the assertions statically, they're going to be believed without verification.
Aug 06 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/6/2014 5:14 AM, "Marc Schütz" <schuetzm gmx.net> wrote:
 We're not living in an ideal world, unfortunately. It is bad enough that
 programs are wrong as they are written, we don't need the compiler to transform
 these programs to do something that is still wrong, but also completely
 different. This would make your goal of fixing the program very hard to
achieve.
 In an extreme case, a small error in several million lines of code could
 manifest at a completely different place, because you cannot rely on any
 determinism once undefined behaviour is involved.
You are technically correct, and I used to worry about that. But after using assert()s for 30 years, I can only think of this happening once. assert()s tend to trip very shortly after the actual error occurred. Of course, there's a bit of an art to appropriate placement of those assert()s.
 It seems like
 the opposite: instead of trying to prove the assertions statically, they're
 going to be believed without verification.
The point of an assert is it is something the programmer says must be true, not the compiler.
Aug 06 2014
parent "Marc Schütz" <schuetzm gmx.net> writes:
On Wednesday, 6 August 2014 at 22:31:00 UTC, Walter Bright wrote:
On 8/6/2014 5:14 AM, "Marc Schütz" <schuetzm gmx.net> wrote:
 We're not living in an ideal world, unfortunately. It is bad 
 enough that
 programs are wrong as they are written, we don't need the 
 compiler to transform
 these programs to do something that is still wrong, but also 
 completely
 different. This would make your goal of fixing the program 
 very hard to achieve.
 In an extreme case, a small error in several million lines of 
 code could
 manifest at a completely different place, because you cannot 
 rely on any
 determinism once undefined behaviour is involved.
You are technically correct, and I used to worry about that. But after using assert()s for 30 years, I can only think of this happening once. assert()s tend to trip very shortly after the actual error occurred. Of course, there's a bit of an art to appropriate placement of those assert()s.
But for those 30 years you only used asserts with the semantics they have in C, not with the semantics you want for D. I don't see how you can come to the conclusion that the same is true for the "new style" assertions. The problems with finding errors that I talk about are not because the cause of the error is far from the assert. They stem from the fact that - with the proposed semantics - the asserts themselves can influence code in different parts of the program, far away from both the cause of the error and the failed assert.
Aug 07 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
So from a functional perspective, assuming this is a 
multithreaded app, what should the procedure be when an 
AssertError is thrown in some thread?
Aug 06 2014
next sibling parent reply "David Bregman" <drb sfu.ca> writes:
On Wednesday, 6 August 2014 at 17:18:19 UTC, Sean Kelly wrote:
 So from a functional perspective, assuming this is a 
 multithreaded app, what should the procedure be when an 
 AssertError is thrown in some thread?
afaik, AssertError works the same as any other exception. If it reaches the top without being handled, then the entire app will be terminated including all the other threads.
Aug 06 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 6 August 2014 at 18:15:41 UTC, David Bregman wrote:
 On Wednesday, 6 August 2014 at 17:18:19 UTC, Sean Kelly wrote:
 So from a functional perspective, assuming this is a 
 multithreaded app, what should the procedure be when an 
 AssertError is thrown in some thread?
afaik, AssertError works the same as any other exception. If it reaches the top without being handled, then the entire app will be terminated including all the other threads.
Except that's not really how unhandled exceptions are treated in threads. They're stored by the thread and (optionally) re-thrown in the context of whatever thread calls join, if any thread calls join at all. The assumption at the time was that the joining thread cared about the state of the thread being joined and so re-throwing the exception was a way to communicate the error to the most interested party.

So if the main thread joins a thread that terminated with an unhandled AssertError, the error will be re-thrown in the context of the main thread, which will in turn join all remaining running threads and wait for them to complete. This is the same behavior as if an AssertError is generated during normal processing of the main thread. But in neither case is the application forcibly terminated when an assertion failure occurs.

So again, my question is twofold: what *should* happen, given this new treatment for assertions, and then *how* will we accomplish this? A multithreaded process is really pretty much equivalent to a bunch of standalone processes with shared memory. There is no facility for forcing a clean termination of another thread. The best I can think of would be to assume that std.concurrency is being used for multithreading and sort things out similar to the existing LinkTerminated signaling, but looking for a reasonable solution within Druntime would be tricky.
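The join-time re-throw behaviour described here has a close C++ analog in `std::async` (a sketch for illustration, not druntime's actual mechanism): the worker's unhandled exception is stored in the shared state and resurfaces in whichever thread calls `get()`.

```cpp
#include <future>
#include <stdexcept>
#include <string>

// The worker thread "dies" with an unhandled exception; the exception is
// stored rather than terminating the process, and is re-thrown in the
// joining thread when it collects the result.
std::string run_worker() {
    auto fut = std::async(std::launch::async, []() -> int {
        throw std::runtime_error("worker failed");
    });
    try {
        fut.get();  // the stored exception is re-thrown here, in the joiner
    } catch (const std::runtime_error& e) {
        return e.what();
    }
    return "no error";
}
```

As with druntime threads, nothing here forcibly terminates the process; if no one ever calls `get()`, the error simply sits in the shared state.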
Aug 06 2014
next sibling parent reply "David Bregman" <drb sfu.ca> writes:
On Wednesday, 6 August 2014 at 19:34:48 UTC, Sean Kelly wrote:
 On Wednesday, 6 August 2014 at 18:15:41 UTC, David Bregman 
 wrote:
 On Wednesday, 6 August 2014 at 17:18:19 UTC, Sean Kelly wrote:
 So from a functional perspective, assuming this is a 
 multithreaded app, what should the procedure be when an 
 AssertError is thrown in some thread?
afaik, AssertError works the same as any other exception. If it reaches the top without being handled, then the entire app will be terminated including all the other threads.
Except that's not really how unhandled exceptions are treated in threads. They're stored by the thread and (optionally) re-thrown in the context of whatever thread calls join, if any thread calls join at all. The assumption at the time was that the joining thread cared about the state of the thread being joined and so re-throwing the exception was a way to communicate the error to the most interested party. So if the main thread joins a thread that terminated with an unhandled AssertError, the error will be re-thrown in the context of the main thread, which will in turn join all remaining running threads and wait for them to complete. This is the same behavior as if an AssertError is generated during normal processing of the main thread. But in neither case is the application forcibly terminated when an assertion failure occurs.
Ah, I see. Then I apologize for giving incorrect information. I am not that familiar with threads in D yet and I assumed unhandled exceptions worked like C++. But if you already knew the answer then why were you asking?
 So again, my question is twofold: what *should* happen, given 
 this new treatment for assertions, and then *how* will we 
 accomplish this?  A multithreaded process is really pretty much 
 equivalent to a bunch of standalone processes with shared 
 memory.
  There is no facility for forcing a clean termination of 
 another thread.  The best I can think of would be to assume 
 that std.concurrency is being used for multithreading and sort 
 things out similar to the existing LinkTerminated signaling, 
 but looking for a reasonable solution within Druntime would be 
 tricky.
I think there is some kind of misunderstanding here, I have not proposed any new treatment for assertions. Walter's proposal only affects release mode, so it's not related to exceptions.
Aug 06 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 6 August 2014 at 20:22:58 UTC, David Bregman wrote:
 On Wednesday, 6 August 2014 at 19:34:48 UTC, Sean Kelly wrote:
 On Wednesday, 6 August 2014 at 18:15:41 UTC, David Bregman 
 wrote:
 On Wednesday, 6 August 2014 at 17:18:19 UTC, Sean Kelly wrote:
 So from a functional perspective, assuming this is a 
 multithreaded app, what should the procedure be when an 
 AssertError is thrown in some thread?
afaik, AssertError works the same as any other exception. If it reaches the top without being handled, then the entire app will be terminated including all the other threads.
Except that's not really how unhandled exceptions are treated in threads. They're stored by the thread and (optionally) re-thrown in the context of whatever thread calls join, if any thread calls join at all. The assumption at the time was that the joining thread cared about the state of the thread being joined and so re-throwing the exception was a way to communicate the error to the most interested party. So if the main thread joins a thread that terminated with an unhandled AssertError, the error will be re-thrown in the context of the main thread, which will in turn join all remaining running threads and wait for them to complete. This is the same behavior as if an AssertError is generated during normal processing of the main thread. But in neither case is the application forcibly terminated when an assertion failure occurs.
Ah, I see. Then I apologize for giving incorrect information. I am not that familiar with threads in D yet and I assumed unhandled exceptions worked like C++. But if you already knew the answer then why were you asking?
It seems like this change in treatment of assertion failures might require a change in runtime behavior as well. But I don't know exactly what people have in mind. Also, assuming that a change is expected, depending on what the desired effect is, I'm not sure I'll know how to do it.
 So again, my question is twofold: what *should* happen, given 
 this new treatment for assertions, and then *how* will we 
 accomplish this?  A multithreaded process is really pretty 
 much equivalent to a bunch of standalone processes with shared 
 memory.
 There is no facility for forcing a clean termination of 
 another thread.  The best I can think of would be to assume 
 that std.concurrency is being used for multithreading and sort 
 things out similar to the existing LinkTerminated signaling, 
 but looking for a reasonable solution within Druntime would be 
 tricky.
I think there is some kind of misunderstanding here, I have not proposed any new treatment for assertions. Walter's proposal only affects release mode, so it's not related to exceptions.
Well, it seems like the underlying idea is to treat assertion failures more strongly than we do now. So changes in how these are handled by the runtime might be a side effect of this proposal.
Aug 06 2014
parent reply "David Bregman" <drb sfu.ca> writes:
On Wednesday, 6 August 2014 at 20:27:45 UTC, Sean Kelly wrote:
 Well, it seems like the underlying idea is to treat assertion 
 failures more strongly than we do now.  So changes in how these 
 are handled by the runtime might be a side effect of this 
 proposal.
Again, since Walter's proposal only affects release mode, I don't see how exceptions or the runtime are related. What am I missing? You agree assertion failures (AssertError) are only thrown in debug mode, right?
Aug 06 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 6 August 2014 at 21:16:42 UTC, David Bregman wrote:
 On Wednesday, 6 August 2014 at 20:27:45 UTC, Sean Kelly wrote:
 Well, it seems like the underlying idea is to treat assertion 
 failures more strongly than we do now.  So changes in how 
 these are handled by the runtime might be a side effect of 
 this proposal.
Again, since Walter's proposal only affects release mode, I don't see how exceptions or the runtime are related. What am I missing? You agree assertion failures (AssertError) are only thrown in debug mode, right?
In non-release mode, but yes. However:

4. An assert failure is a non-recoverable error. The compiler may assume that execution does not proceed after one is tripped. The language does allow attempts to shut a program down gracefully after one is tripped, but that must not be misconstrued as assuming that the program is in a valid state at that point.

5. assert(0); is equivalent to a halt, and the compiler won't remove it.

Clearly, the intention is for assertion failures to terminate the program, but that isn't done now. Or at least this isn't done in the multithreaded case. So I was asking for clarification on what should be done on that end, and whether that behavior should inform how assert is treated from a language perspective.
Aug 06 2014
parent "David Bregman" <drb sfu.ca> writes:
On Wednesday, 6 August 2014 at 21:35:26 UTC, Sean Kelly wrote:
 On Wednesday, 6 August 2014 at 21:16:42 UTC, David Bregman 
 wrote:
 On Wednesday, 6 August 2014 at 20:27:45 UTC, Sean Kelly wrote:
 Well, it seems like the underlying idea is to treat assertion 
 failures more strongly than we do now.  So changes in how 
 these are handled by the runtime might be a side effect of 
 this proposal.
Again, since Walter's proposal only affects release mode, I don't see how exceptions or the runtime are related. What am I missing? You agree assertion failures (AssertError) are only thrown in debug mode, right?
In non-release mode, but yes. However: 4. An assert failure is a non-recoverable error. The compiler may assume that execution does not proceed after one is tripped. The language does allow attempts to shut a program down gracefully after one is tripped, but that must not be misconstrued as assuming that the program is in a valid state at that point. 5. assert(0); is equivalent to a halt, and the compiler won't remove it. Clearly, the intention is for assertion failures to terminate the program, but that isn't done now. Or at least this isn't done in the multithreaded case. So I was asking for clarification on what should be done on that end, and whether that behavior should inform how assert is treated from a language perspective.
Ah, ok. So it's an issue with the existing implementation not matching some other expectations, not related to the main topic of this thread. Sorry for the confusion. Anyway, Walter just replied to your first post, I'll defer to him on this.
Aug 06 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/6/2014 12:34 PM, Sean Kelly wrote:
 There is no facility for forcing a clean termination of another
 thread.
Understood - so the only option is to force an unclean termination. Let me emphasize that when an assert trips, the program has failed. It is in an invalid, unknown, indeterminate state from that moment forward. Use enforce() for errors that are anticipated and cleanly recoverable, not assert().
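[Editor's note: the assert/enforce split Walter describes can be summarized in code. std.exception.enforce is the real function; the surrounding function names and conditions are illustrative only.]

```d
import std.exception : enforce;
import std.conv : to;

// Anticipated, cleanly recoverable error: bad input from the outside
// world. enforce() remains in -release builds and throws a catchable
// Exception that callers are expected to handle.
int parsePort(string s)
{
    auto port = s.to!int;
    enforce(port > 0 && port < 65536, "port out of range");
    return port;
}

// Internal invariant: if this ever fails, the program itself has a bug
// and is in an invalid, unknown state. assert() documents the
// assumption; tripping it is not meant to be recoverable.
int midpoint(int lo, int hi)
{
    assert(lo <= hi, "caller violated midpoint's precondition");
    return lo + (hi - lo) / 2;
}
```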
Aug 06 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 6 August 2014 at 22:01:47 UTC, Walter Bright wrote:
 On 8/6/2014 12:34 PM, Sean Kelly wrote:
 There is no facility for forcing a clean termination of another
 thread.
Understood - so the only option is to force an unclean termination.
So in all cases, it seems like what we should be doing is call exit(1) or the equivalent and forego any attempt at cleanup. Otherwise an assertion failure in a spawned thread would be handled even more strongly than in the main thread. Alternately, if we build on std.concurrency we could have the next call to receive throw a FatalError or something to force it to terminate. But there's no telling how long this would be. I assume you prefer the forceful route.
 Let me emphasize that when an assert trips, the program has 
 failed. It is in an invalid, unknown, indeterminate state from 
 that moment forward.
I think in many cases the scope of possible invalid states can actually be determined given a reasonable understanding of the program. For example, if a thread within a safe app encounters an error, it might be possible to assert that the damage does not extend beyond the limits of data referenced by the thread. But I do appreciate the desire to not deal lightly with detected logic errors so I won't push the point.
 Use enforce() for errors that are anticipated and cleanly 
 recoverable, not assert().
Given what I said above, I'm inclined to say that enforce() should be used by 'in' contracts and the like rather than assert(), and that assert() should be saved for the situations where I'm certain there's absolutely nothing to be done. However, I think of enforce() as being more for run-time problems that might be corrected by trying again later with the same input. I'll have to give this some thought.
Aug 06 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/6/2014 5:22 PM, Sean Kelly wrote:
 On Wednesday, 6 August 2014 at 22:01:47 UTC, Walter Bright wrote:
 On 8/6/2014 12:34 PM, Sean Kelly wrote:
 There is no facility for forcing a clean termination of another
 thread.
Understood - so the only option is to force an unclean termination.
So in all cases, it seems like what we should be doing is call exit(1) or the equivalent and forego any attempt at cleanup. Otherwise an assertion failure in a spawned thread would be handled even more strongly than in the main thread. Alternately, if we build on std.concurrency we could have the next call to receive throw a FatalError or something to force it to terminate. But there's no telling how long this would be. I assume you prefer the forceful route.
Yes, I prefer that.
 Let me emphasize that when an assert trips, the program has failed. It is in
 an invalid, unknown, indeterminate state from that moment forward.
I think in many cases the scope of possible invalid states can actually be determined given a reasonable understanding of the program. For example, if a thread within a safe app encounters an error, it might be possible to assert that the damage does not extend beyond the limits of data referenced by the thread.
I can't agree with that, especially not for the standard library.
 Use enforce() for errors that are anticipated and cleanly recoverable, not
 assert().
Given what I said above, I'm inclined to say that enforce() should be used by in contracts and the like rather than assert(), and to save assert() for the situations where I'm certain there's absolutely nothing to be done. However, I think of enforce() to be more for run-time problems that might be corrected by trying again later with the same input. I'll have to give this some thought.
Aug 06 2014
parent Johannes Pfau <nospam example.com> writes:
Am Wed, 06 Aug 2014 20:19:33 -0700
schrieb Walter Bright <newshound2 digitalmars.com>:

 On 8/6/2014 5:22 PM, Sean Kelly wrote:
 On Wednesday, 6 August 2014 at 22:01:47 UTC, Walter Bright wrote:
 On 8/6/2014 12:34 PM, Sean Kelly wrote:
 There is no facility for forcing a clean termination of another
 thread.
Understood - so the only option is to force an unclean termination.
So in all cases, it seems like what we should be doing is call exit(1) or the equivalent and forego any attempt at cleanup. Otherwise an assertion failure in a spawned thread would be handled even more strongly than in the main thread. Alternately, if we build on std.concurrency we could have the next call to receive throw a FatalError or something to force it to terminate. But there's no telling how long this would be. I assume you prefer the forceful route.
Yes, I prefer that.
I can see some benefits here, but how does this interact with asserts in unittests? You might special-case these, but it's also common practice to use these asserts not only in the main unittest function but also in helper functions.
Aug 07 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/6/2014 10:18 AM, Sean Kelly wrote:
 So from a functional perspective, assuming this is a multithreaded app, what
 should the procedure be when an AssertError is thrown in some thread?
Print message, stop the process and all its threads. Rationale: the program is in an unknown, invalid state, which will include any thread (or process) that has access to memory shared with that failed thread. Attempting to continue operating is irresponsible if the program is doing or can do anything important.
Aug 06 2014
next sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Wed, 06 Aug 2014 14:57:50 -0700
schrieb Walter Bright <newshound2 digitalmars.com>:

 On 8/6/2014 10:18 AM, Sean Kelly wrote:
 So from a functional perspective, assuming this is a multithreaded
 app, what should the procedure be when an AssertError is thrown in
 some thread?
Print message, stop the process and all its threads. Rationale: the program is in an unknown, invalid state, which will include any thread (or process) that has access to memory shared with that failed thread. Attempting to continue operating is irresponsible if the program is doing or can do anything important.
Again: unittests?
Aug 07 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/7/2014 12:54 AM, Johannes Pfau wrote:
 Again: unittests?
Asserts in unittests are not the same as asserts elsewhere. I wasn't a big fan of this, but I was probably the only one :-)
Aug 07 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Walter Bright"  wrote in message news:lrvglk$25dt$1 digitalmars.com...

 Asserts in unittests are not the same as asserts elsewhere. I wasn't a big 
 fan of this, but I was probably the only one :-)
This has the fun side-effect that asserts in unittest helper functions are fatal, while ones in the unittest body are not! And we've also got asserts in pre-conditions, which are recoverable by definition.
Aug 07 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Aug 07, 2014 at 07:29:43PM +1000, Daniel Murphy via Digitalmars-d wrote:
 "Walter Bright"  wrote in message news:lrvglk$25dt$1 digitalmars.com...
 
Asserts in unittests are not the same as asserts elsewhere. I wasn't
a big fan of this, but I was probably the only one :-)
This has the fun side-effect that asserts in unittest helper functions are fatal, while ones in the unittest body are not! And we've also got asserts in pre-conditions, which are recoverable by definition.
Huh, what? I thought asserts in pre-conditions are non-recoverable, because they imply that user code has broken the contract governing the use of that function. T -- Answer: Because it breaks the logical sequence of discussion. Question: Why is top posting bad?
Aug 07 2014
next sibling parent "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 7 August 2014 at 15:21:14 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 Huh, what? I thought asserts in pre-conditions are 
 non-recoverable,
 because they imply that user code has broken the contract 
 governing the use of that function.
I suspect this new treatment of assert will affect the way that people check for errors. For example, in the server apps I write, I don't always want a hard stop of the entire process if I detect a logic error, because it provides a potential attack vector for a user to bring down an entire service. As soon as someone determines that a particular request kills all transactions on a process...

This isn't just paranoia in a new direction either. The servers I work on are literally under constant attack by hackers. Largely botnets trying to crack passwords, but not exclusively that. And so I might want to at least drain out all pending requests before halting the process, or I might want to just kill the transaction that detected the logic error and leave everything running, assuming I can be sure that the potential collateral damage is local to the transaction (which is typically the case).

Longer term, this new treatment of assert will probably encourage better program design which guarantees memory isolation by multiprocessing. I favor this direction, but messaging has a way to go first. And by extension, I do think that memory sharing in a language that allows direct memory access is the real risk. That a logic error might cause a thread / process to generate an invalid message is not an issue any more than it would be an issue to call a function with bad parameters. In both cases, the implication is that the caller is in an invalid state, not the callee. So long as there is no vector through which one context can alter the data accessed by another context without proper validation, the corruption cannot spread.
Aug 07 2014
prev sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"H. S. Teoh via Digitalmars-d"  wrote in message 
news:mailman.674.1407424873.16021.digitalmars-d puremagic.com...

 And we've also got asserts in pre-conditions, which are recoverable by
 definition.
Huh, what? I thought asserts in pre-conditions are non-recoverable, because they imply that user code has broken the contract governing the use of that function.
I meant asserts in pre-conditions when used with inheritance. It's a pass if any of the preconditions pass, so the compiler runs them in turn and catches all but the last.
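[Editor's note: a minimal sketch of OR'd in-contracts under inheritance. The call is valid if *either* contract passes; class and method names here are illustrative.]

```d
class Base
{
    void put(int x)
    in { assert(x > 0); }
    body { }
}

class Derived : Base
{
    // A derived method may widen its precondition: it accepts
    // everything Base accepts, plus x == 0.
    override void put(int x)
    in { assert(x >= 0); }
    body { }
}

void main()
{
    auto d = new Derived;
    // Base's in-contract fails, Derived's passes => the call is OK.
    // The runtime simulates the OR by catching the AssertError thrown
    // from Base's contract -- which is the mechanism being criticized
    // in this subthread.
    d.put(0);
}
```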
Aug 07 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Aug 08, 2014 at 03:44:06AM +1000, Daniel Murphy via Digitalmars-d wrote:
 "H. S. Teoh via Digitalmars-d"  wrote in message
 news:mailman.674.1407424873.16021.digitalmars-d puremagic.com...
 
 And we've also got asserts in pre-conditions, which are recoverable
 by definition.
Huh, what? I thought asserts in pre-conditions are non-recoverable, because they imply that user code has broken the contract governing the use of that function.
I meant asserts in pre-conditions when used with inheritance. It's a pass if any of the preconditions pass, so the compiler runs them in turn and catches all but the last.
Oh, I see it now.

Hmph. Now it's making me wonder if preconditions should be treated as a *single* assert outside the body of 'in{}', rather than allowing individual asserts inside. Perhaps the body of 'in{}' should return a boolean where false indicates a failed precondition, so that instead of writing:

    auto func(...)
    in {
        assert(cond1);
        assert(cond2);
        assert(cond3);
    }
    body { ... }

you'd write instead:

    auto func(...)
    in { cond1 && cond2 && cond3 }
    body { ... }

and the compiler translates this to:

    bool __func_in(...) {
        return cond1 && cond2 && cond3;
    }

    auto func(...) {
        assert(__func_in(...));
        // body goes here
    }

Well, technically it should be the caller that invokes __func_in(), but that's a different issue.

T

-- 
We are in class, we are supposed to be learning, we have a teacher... Is it too much that I expect him to teach me??? -- RL
Aug 07 2014
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"H. S. Teoh via Digitalmars-d"  wrote in message 
news:mailman.684.1407434193.16021.digitalmars-d puremagic.com...

 Hmph. Now it's making me wonder if preconditions should be treated as a
 *single* assert outside the body of 'in{}', rather than allowing
 individual asserts inside. Perhaps the body of 'in{}' should return a
 boolean where false indicates a failed precondition,
I think that ship has sailed.
Aug 08 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Aug 08, 2014 at 07:15:22PM +1000, Daniel Murphy via Digitalmars-d wrote:
 "H. S. Teoh via Digitalmars-d"  wrote in message
 news:mailman.684.1407434193.16021.digitalmars-d puremagic.com...
 
Hmph. Now it's making me wonder if preconditions should be treated as
a *single* assert outside the body of 'in{}', rather than allowing
individual asserts inside. Perhaps the body of 'in{}' should return a
boolean where false indicates a failed precondition,
I think that ship has sailed.
I know. It was more of a wish than anything. T -- Don't modify spaghetti code unless you can eat the consequences.
Aug 08 2014
parent reply "Vlad Levenfeld" <vlevenfeld gmail.com> writes:
On Friday, 8 August 2014 at 14:27:56 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Fri, Aug 08, 2014 at 07:15:22PM +1000, Daniel Murphy via 
 Digitalmars-d wrote:
 "H. S. Teoh via Digitalmars-d"  wrote in message
 news:mailman.684.1407434193.16021.digitalmars-d puremagic.com...
 
Hmph. Now it's making me wonder if preconditions should be 
treated as
a *single* assert outside the body of 'in{}', rather than 
allowing
individual asserts inside. Perhaps the body of 'in{}' should 
return a
boolean where false indicates a failed precondition,
I think that ship has sailed.
I know. It was more of a wish than anything. T
Since the ship HAS sailed... why must contracts be elided with -release? Seems to me that if asserts go away, then eliding in and out blocks is redundant. It'd be nice if I could put all my recoverable pre- and post-conditions inside those blocks as well, just for organization's sake.
Aug 08 2014
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 8 August 2014 at 18:43:22 UTC, Vlad Levenfeld wrote:
 Since the ship HAS sailed... why must contracts be elided with 
 --release? Seems to me if asserts go away, then eliding in and 
 out is redundant. It'd be nice if I could put all my 
 recoverable pre and post conditions inside those blocks as 
 well. Just for organization's sake.
An in or out block could have other code which is doing work in preparation for asserting rather than just assertions, so leaving them in wouldn't make sense. - Jonathan M Davis
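[Editor's note: a sketch of Jonathan's point that an in block may do real work in preparation for the assertion, not just assert directly; this is exactly why the whole block must be elided in -release. The function and names are illustrative.]

```d
int sumSorted(int[] values)
in
{
    // Preparatory work: compute a flag before asserting on it.
    // Leaving this code in a -release build would waste the scan
    // even though the assert itself is gone.
    bool sorted = true;
    foreach (i; 1 .. values.length)
        if (values[i - 1] > values[i])
            sorted = false;
    assert(sorted, "input must be sorted");
}
body
{
    int total = 0;
    foreach (v; values)
        total += v;
    return total;
}
```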
Aug 08 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Aug 07, 2014 at 10:54:53AM -0700, H. S. Teoh via Digitalmars-d wrote:
 On Fri, Aug 08, 2014 at 03:44:06AM +1000, Daniel Murphy via Digitalmars-d
wrote:
 "H. S. Teoh via Digitalmars-d"  wrote in message
 news:mailman.674.1407424873.16021.digitalmars-d puremagic.com...
 
 And we've also got asserts in pre-conditions, which are recoverable
 by definition.
Huh, what? I thought asserts in pre-conditions are non-recoverable, because they imply that user code has broken the contract governing the use of that function.
I meant asserts in pre-conditions when used with inheritance. It's a pass if any of the preconditions pass, so the compiler runs them in turn and catches all but the last.
Oh, I see it now.
[...]

P.S. The current implementation also does not distinguish between a broken contract vs. a bug or problem encountered by the contract code. For example:

    auto func(T...)(T args)
    in {
        assert(checkConsistency(args));
    }
    body { ... }

    bool checkConsistency(T...)(T args) {
        int i;
        for (i = 0; i < args.length; i++) {
            ...
            i = 1000; // suppose an inadvertent typo causes this
        }
        assert(i == args.length); // so this will fail
        ...
        return result;
    }

Suppose the i=1000 line is a mistake by the programmer, so it's a genuine bug in checkConsistency. This would trip the assert in checkConsistency, which would throw an AssertError. But if it was called from the in-contract of func(), and func() is in a derived class for which the base class version of func() has an in-contract that passes, then we're basically swallowing the AssertError triggered by that genuine bug, and thereby causing the program to continue running in an invalid state!

Basically, the problem is that the in-contract can't tell the difference between a precondition failure (triggered by the outer assert) and a genuine program bug (triggered by the inner assert). This makes DbC in D really shaky. :-(

T

-- 
Freedom: (n.) Man's self-given right to be enslaved by his own depravity.
Aug 07 2014
parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"H. S. Teoh via Digitalmars-d"  wrote in message 
news:mailman.689.1407436311.16021.digitalmars-d puremagic.com...

 P.S. The current implementation also does not distinguish between a
 broken contract vs. a bug or problem encountered by the contract code.
 For example:
Last time I checked, it also treated OOM and access violation (on Windows) as contract failure too. Any Throwable.
Aug 08 2014
prev sibling next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 7 August 2014 at 17:44:02 UTC, Daniel Murphy wrote:
 "H. S. Teoh via Digitalmars-d"  wrote in message 
 news:mailman.674.1407424873.16021.digitalmars-d puremagic.com...

 And we've also got asserts in pre-conditions, which are 
 recoverable by
 definition.
Huh, what? I thought asserts in pre-conditions are non-recoverable, because they imply that user code has broken the contract governing the use of that function.
I meant asserts in pre-conditions when used with inheritance. It's a pass if any of the preconditions pass, so the compiler runs them in turn and catches all but the last.
Oh man, I forgot about this. I wonder how this works from a codegen perspective. Is precondition inheritance properly implemented yet?
Aug 07 2014
parent "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Sean Kelly"  wrote in message news:trnhymzwfkflgotxykmb forum.dlang.org...

 Oh man, I forgot about this.  I wonder how this works from a codegen 
 perspective.  Is precondition inheritance properly implemented yet?
Properly is a bit strong. It is somewhat implemented. I think there are still corner cases when some functions have no preconditions that people are still arguing over, but it does work for simple inheritance examples.
Aug 08 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/7/2014 10:44 AM, Daniel Murphy wrote:
 I meant asserts in pre-conditions when used with inheritance.  It's a pass if
 any of the preconditions pass, so the compiler runs them in turn and catches
all
 but the last.
That's right, they are OR'd together.
Aug 09 2014
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Aug 09, 2014 at 05:54:21PM -0700, Walter Bright via Digitalmars-d wrote:
 On 8/7/2014 10:44 AM, Daniel Murphy wrote:
I meant asserts in pre-conditions when used with inheritance.  It's a
pass if any of the preconditions pass, so the compiler runs them in
turn and catches all but the last.
That's right, they are OR'd together.
Which is fine if they are evaluated as a single expression. The problem right now is that the OR is simulated by catching AssertErrors, which by definition are only thrown when the program is in an invalid state, so the fact that this works at all is only a "lucky coincidence". :-( If you're "not so lucky", you end up catching a genuine program bug but ignoring it because the other condition in the simulated OR turned out to pass. This unfortunately puts DbC in D on rather shaky ground.

T

-- 
Кто везде - тот нигде.
Aug 09 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/7/2014 2:29 AM, Daniel Murphy wrote:
 And we've also got asserts in pre-conditions, which are recoverable by
definition.
Pre-condition asserts are not recoverable.
Aug 09 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/9/2014 5:52 PM, Walter Bright wrote:
 On 8/7/2014 2:29 AM, Daniel Murphy wrote:
 And we've also got asserts in pre-conditions, which are recoverable by
 definition.
Pre-condition asserts are not recoverable.
Eh, ignore that.
Aug 09 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 7 August 2014 at 07:57:14 UTC, Johannes Pfau wrote:
 
 Attempting to continue operating is irresponsible if the 
 program is doing or can do anything important.
Again: unittests?
unittests shouldn't use assert. Or alternately, we can have onAssertError (the assertion failure callback) do something different if unit testing is taking place than when the program is actually running.
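[Editor's note: a hedged sketch of the callback approach Sean mentions. druntime's core.exception exposes an assert handler; when one is installed, an assert failure invokes it instead of throwing AssertError. The setter's exact name and signature have varied across druntime versions, so treat the details as illustrative.]

```d
import core.exception;

shared static this()
{
    // Install a custom assert handler: a test harness could record the
    // failure and keep running the remaining tests, while a production
    // build would leave the default (throw AssertError / terminate).
    assertHandler = function(string file, size_t line, string msg)
    {
        // (illustrative) record file/line/msg for the unittest report
    };
}
```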
Aug 07 2014
parent David Gileadi <gileadis NSPMgmail.com> writes:
On 8/7/14, 9:51 AM, Sean Kelly wrote:
 On Thursday, 7 August 2014 at 07:57:14 UTC, Johannes Pfau wrote:
 Attempting to continue operating is irresponsible if the program is
 doing or can do anything important.
Again: unittests?
unittests shouldn't use assert. Or alternately, we can have onAssertError (the assertion failure callback) do something different if unit testing is taking place than when the program is actually running.
Considering that TDPL, Ali's Book[1] and dlang.org[2] all use assert when introducing unittest, I think the latter solution is probably the right one. [1] http://ddili.org/ders/d.en/unit_testing.html [2] http://dlang.org/unittest.html
Aug 07 2014
prev sibling parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Wednesday, 6 August 2014 at 21:57:50 UTC, Walter Bright wrote:
 On 8/6/2014 10:18 AM, Sean Kelly wrote:
 So from a functional perspective, assuming this is a 
 multithreaded app, what
 should the procedure be when an AssertError is thrown in some 
 thread?
Print message, stop the process and all its threads. Rationale: the program is in an unknown, invalid state, which will include any thread (or process) that has access to memory shared with that failed thread.
Strictly speaking we would need to kill only those threads. But I guess it wouldn't be worth the effort, because the remaining threads are most likely not expecting them to disappear and couldn't recover from this.
Aug 07 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/7/2014 2:10 AM, "Marc Schütz" <schuetzm gmx.net>" wrote:
 On Wednesday, 6 August 2014 at 21:57:50 UTC, Walter Bright wrote:
 On 8/6/2014 10:18 AM, Sean Kelly wrote:
 So from a functional perspective, assuming this is a multithreaded app, what
 should the procedure be when an AssertError is thrown in some thread?
Print message, stop the process and all its threads. Rationale: the program is in an unknown, invalid state, which will include any thread (or process) that has access to memory shared with that failed thread.
Strictly speaking we would need to kill only those threads.
No, all the threads. Threads share memory, so corruption can spread from one to the next.
Aug 07 2014
parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Thursday, 7 August 2014 at 09:26:01 UTC, Walter Bright wrote:
 On 8/7/2014 2:10 AM, "Marc Schütz" <schuetzm gmx.net>" wrote:
 On Wednesday, 6 August 2014 at 21:57:50 UTC, Walter Bright 
 wrote:
 On 8/6/2014 10:18 AM, Sean Kelly wrote:
 So from a functional perspective, assuming this is a 
 multithreaded app, what
 should the procedure be when an AssertError is thrown in 
 some thread?
Print message, stop the process and all its threads. Rationale: the program is in an unknown, invalid state, which will include any thread (or process) that has access to memory shared with that failed thread.
Strictly speaking we would need to kill only those threads.
No, all the threads. Threads share memory, so corruption can spread from one to the next.
Yes, that's what I was saying: "those threads ... that ha[ve] access to memory shared with that failed thread". But even without real sharing of memory, a failed assertion in one thread might result from invalid data being sent to that thread from another one (via spawn or send). Better to just kill all threads.
Aug 07 2014