
digitalmars.D - Sutter's ISO C++ Trip Report - The best compliment is when someone

reply John Carter <john.carter taitradio.com> writes:
https://herbsutter.com/2018/07/02/trip-report-summer-iso-c-standards-meeting-rapperswil/

This looks to me like a huge step forward for C++....

 * You get to install your own violation handler and ship a 
 release build with the option of turning on enforcement at run 
 time.
 * You get to express audit to distinguish expensive checks to 
 be run only when explicitly requested.
 * You get to express axiom contracts that are intended to never 
 generate run-time code but are available to static analysis 
 tools.
 * Finally, you will likely get better performance, because 
 contracts should enable compilers to perform more 
 optimizations, more easily, than expressing them using 
 assertions.
The last point looks very important to me. I have been looking closely at what the compiler (and splint) does with asserts in our code: https://stackoverflow.com/questions/50165291/how-can-one-implement-assert-to-make-use-of-gccs-optimizers-static-dataflo

I found that, counter-intuitively (in C at least), asserts weakened gcc's static analysis abilities!
 Step 2 is to (gradually) migrate std:: standard library 
 precondition violations in particular from exceptions (or error 
 codes) to contracts. The programming world now broadly 
 recognizes that programming bugs (e.g., out-of-bounds access, 
 null dereference, and in general all pre/post/assert-condition 
 violations) cause a corrupted state that cannot be recovered 
 from programmatically, and so they should never be reported to 
 the calling code as exceptions or error codes that code could 
 somehow handle.
Ah, that's a really nice statement.
Jul 02 2018
next sibling parent reply Ali <fakeemail example.com> writes:
Well, D is not exactly known for contract-oriented programming or DbC (Design by Contract); we have to thank Bertrand Meyer and his language Eiffel for that.
Jul 02 2018
next sibling parent John Carter <john.carter taitradio.com> writes:
On Tuesday, 3 July 2018 at 03:27:06 UTC, Ali wrote:

 we have to thank Bertrand Meyer and his language Eiffel, for 
 that
True. I was referring to the ideas in Walter's proposal https://forum.dlang.org/thread/lrbpvj$mih$1 digitalmars.com
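For context, D's built-in Design-by-Contract syntax (which Walter's proposal builds on) already expresses preconditions and postconditions directly; a minimal sketch, where `isqrt` is just an illustrative function:

```d
// D's Design-by-Contract syntax: `in` states the caller's obligation,
// `out` the callee's guarantee. Both compile away in -release builds.
int isqrt(int n)
in
{
    assert(n >= 0, "precondition: caller must pass a non-negative value");
}
out (r)
{
    assert(r * r <= n && (r + 1) * (r + 1) > n,
           "postcondition: r is the integer square root of n");
}
do
{
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        ++r;
    return r;
}
```

With -release the checks vanish, matching the C++ proposal's idea that contract enforcement is a build-time choice.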
Jul 02 2018
prev sibling parent Ecstatic Coder <ecstatic.coder gmail.com> writes:
On Tuesday, 3 July 2018 at 03:27:06 UTC, Ali wrote:
 Well, D is not exactly known for contract oriented programming 
 or DbC (Design by Contract)
 we have to thank Bertrand Meyer and his language Eiffel, for 
 that
Thanks for pointing this out!

His book "Object-Oriented Software Construction" is an absolute MUST-READ for any decent programmer. Contracts, large-scale object-oriented architecture, how to assign responsibilities to the right class, etc.

Even something as seemingly insignificant as using uppercase type names is a complete life changer, as this way you can immediately see the role of a single-word identifier just by its case. It's after reading his book, almost three decades ago, that I decided to use the following conventions for my personal code:

- PLAYER : type
- Player : member variable
- player : local variable

I still don't understand why people keep adding silly prefixes or suffixes ("m_", "_", "this->", etc.) to differentiate local variables from member variables:

- Player : type
- player_, _player, m_player, this->player, etc. : member variable
- player : local variable

Using the identifier's case gets the job done in a simpler and more readable way.

IMO reading this book should be mandatory for any second-year student learning professional software development.
Jul 23 2018
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/2/2018 7:53 PM, John Carter wrote:
 Step 2 is to (gradually) migrate std:: standard library precondition 
 violations in particular from exceptions (or error codes) to contracts. The 
 programming world now broadly recognizes that programming bugs (e.g., 
 out-of-bounds access, null dereference, and in general all 
 pre/post/assert-condition violations) cause a corrupted state that cannot be 
 recovered from programmatically, and so they should never be reported to the 
 calling code as exceptions or error codes that code could somehow handle.
Ah, that's a really nice statement.
So, I have finally convinced the C++ world about that! Now if I can only convince the D world :-) (I'm referring to the repeated and endless threads here where people argue that yes, they can recover from programming bugs!)
Jul 02 2018
next sibling parent ixid <nuaccount gmail.com> writes:
On Tuesday, 3 July 2018 at 04:54:46 UTC, Walter Bright wrote:
 So, I have finally convinced the C++ world about that! Now if I 
 can only convince the D world :-)

 (I'm referring to the repeated and endless threads here where 
 people argue that yes, they can recover from programming bugs!)
It seemed like you were perhaps talking at cross purposes a little on that. You seemed to advocate blowing up the whole world when anything went wrong, while some people were making the point that, in some contexts, one part might explode but you could need to keep other things going.
Jul 03 2018
prev sibling next sibling parent reply John Carter <john.carter taitradio.com> writes:
On Tuesday, 3 July 2018 at 04:54:46 UTC, Walter Bright wrote:
 On 7/2/2018 7:53 PM, John Carter wrote:
 In general all pre/post/assert-condition violations) cause a 
 corrupted state that cannot be recovered from 
 programmatically, and so they should never be reported to the 
 calling code as exceptions or error codes that code could 
 somehow handle.
Ah, that's a really nice statement.
So, I have finally convinced the C++ world about that! Now if I can only convince the D world :-)
Oh, I'm convinced. At work here I have emerged from a long, dark debate on the subject within the team.

The ultimate solution was to realize there are actually TWO facilities with TWO entirely different purposes that have been overloaded with the same name. Alas, the word "assert" is now inextricably mired in this confusion.

Half our team used asserts to mean: "This mustn't _ever_ happen in unit tests (unless we set up a specific test case for it), and it will never happen if the incoming signal is standards compliant, but it may happen (due to RF noise, a competitor violating the standard, proprietary additions to the data, or an attack), so we _must_ fall through the assert at run time and handle that case somehow, preferably making a note that something unexpected happened."

The other half of the team meant: "If the expression is false, the code on this line, or on the execution path prior to it, is definitely defective and must be fixed immediately. There is absolutely no hope the code after it will work; continuing on will make the system flaky and unreliable, and the only path back to reliable function is a reset. All code after this line may explicitly assume, and depend on, the expression being true. Any attempt to handle the possibility of the expression being false is buggy, useless, and will probably be removed by the optimizer. Any urge to handle that possibility after the assert should be replaced by the inclination to review the code on the execution path prior to the assert, to ensure that the expression will always be true."

Both sides were using the same word "assert" to mean these different things, resulting in conversations that went around and around in meaningless circles. We resolved the debate by identifying the two different meanings, giving the facilities implementing them two different names, and documenting the difference in meaning and intent.
Jul 05 2018
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jul 05, 2018 at 10:05:44PM +0000, John Carter via Digitalmars-d wrote:
[...]
 At work here I have emerged from a long, dark, debate on the subject
 within the team.
 
 The ultimately solution was to realize there are actually TWO
 facilities with TWO entirely different purposes that have been
 overloaded with the same name.
 
 Alas, the word "assert" now is inextricably mired in this confusion.
 
 Half our team used asserts to mean "This mustn't _ever_ happen in unit
 tests (unless we set up a specific test case for that), and it will
 never happen if the incoming signal is standards compliant, but it may
 happen (due to RF noise, and/or competitor violating the standard
 and/or adding in proprietary stuff into the data and/or we're being
 attacked) so we _must_ fall through the assert at run time, and handle
 that case somehow, but preferably make a note that something
 unexpected happened."
 
 The other half of the team meant, "If the expression is false, it
 means the code on this line or on the execution path prior to it is
 definitely defective and must be fixed immediately.
[...] In D, I believe the first meaning is assigned to "enforce" (i.e., std.exception.enforce), and the second meaning is assigned to "assert" (the built-in assert). Unfortunately, the word "assert" has been used outside the context of D to mean either thing, so people keep tripping over the terminology and using assert for the wrong thing. And it doesn't help that "enforce" is a library function rather than a built-in construct, which some people wrongly interpret as "secondary, therefore somehow inferior, and probably not what I intend". T -- "I suspect the best way to deal with procrastination is to put off the procrastination itself until later. I've been meaning to try this, but haven't gotten around to it yet. " -- swr
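A small sketch of the split described above, in D terms (`parseChannel` and `nextChannel` are hypothetical names, for illustration only):

```d
// enforce: run-time input validation. The condition CAN be false for
// legitimate reasons (noise, hostile input), and the failure is reported
// as a recoverable Exception.
// assert: internal sanity check. If it fires, the code itself is
// defective; it compiles away with -release and is not meant to be handled.
import std.exception : enforce;

uint parseChannel(uint raw)
{
    enforce(raw < 64, "channel outside the protocol's range"); // throws Exception
    return raw;
}

uint nextChannel(uint current)
{
    assert(current < 64, "caller violated parseChannel's guarantee"); // bug if false
    return (current + 1) % 64;
}
```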
Jul 05 2018
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/5/2018 3:26 PM, H. S. Teoh wrote:
 people keep tripping over the terminology
Some people do. However, the long threads of debate on this topic were with people who were clear on what the terminology meant.
Jul 06 2018
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/6/18 4:21 AM, Walter Bright wrote:
 On 7/5/2018 3:26 PM, H. S. Teoh wrote:
 people keep tripping over the terminology
Some people do. However, the long threads of debate on this topic was with people who were clear on what the terminology meant.
My question has never been about the difference between programming errors and input errors. It has always been about how the language ascribes programming-error status to things that could, depending on context, easily be checkable and recoverable. For instance, array bounds checks: D makes this choice for you, and it's unfortunate to have to check things twice, or to be unable to rely on the compiler to insert those checks. But luckily D is powerful enough to add types that do the right thing :)

-Steve
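As a sketch of the kind of type alluded to here (`CheckedArray` is a made-up name), indexing can return an empty `Nullable` instead of throwing a fatal `RangeError`, so callers dealing with external input can recover:

```d
// A "type that does the right thing": an out-of-range index becomes a
// checkable result rather than a process-killing RangeError.
import std.typecons : Nullable, nullable;

struct CheckedArray(T)
{
    T[] data;

    Nullable!T at(size_t i)
    {
        if (i >= data.length)
            return Nullable!T.init; // empty: the caller decides how to recover
        return nullable(data[i]);
    }
}
```

For example, `CheckedArray!int([1, 2, 3]).at(5)` is simply empty, while plain `[1, 2, 3][5]` would be fatal.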
Jul 06 2018
prev sibling next sibling parent reply wjoe <none example.com> writes:
On Tuesday, 3 July 2018 at 04:54:46 UTC, Walter Bright wrote:
 On 7/2/2018 7:53 PM, John Carter wrote:
 Step 2 is to (gradually) migrate std:: standard library 
 precondition violations in particular from exceptions (or 
 error codes) to contracts. The programming world now broadly 
 recognizes that programming bugs (e.g., out-of-bounds access, 
 null dereference, and in general all 
 pre/post/assert-condition violations) cause a corrupted state 
 that cannot be recovered from programmatically, and so they 
 should never be reported to the calling code as exceptions or 
 error codes that code could somehow handle.
So, I have finally convinced the C++ world about that! Now if I can only convince the D world :-)
But that's not how D works. It throws an Error, which can be caught. If people are allowed to do something, they assume it's legitimate. It should be a compile-time error to catch an Error, but it doesn't even emit a warning, and it seems to work well, which is probably proof enough for people to assume it's good.

In my opinion it shouldn't throw an Error but abort at once after printing the stack trace, which would have the nice side effect of stopping almost exactly on the offending line in a debugger. An out-of-bounds range error, for instance, prints the assertion message and then gracefully exits (exit code 1, which indicates a condition the calling process can handle, rather than a return code of -6 for abnormal termination).

Also there is, or at least was a few months ago, a gotcha in concurrency: when a thread throws an Error, it goes silent, the thread simply gone. The only way I could finally get to the root cause was to catch everything in that thread. Aborting would be preferable, so the debugger can catch such things; that makes it a matter of rerunning the program instead of hours of bug hunting and guesswork.

In spite of the above, a bug in a thread should probably bring down the whole program, since if a thread is in an unrecoverable, corrupt state, it follows that the entire program is in a corrupt state.
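A minimal illustration of the complaint: the compiler accepts catching an Error without so much as a warning (`swallowRangeError` is an illustrative name):

```d
// Catching an Error compiles cleanly, even though an Error means the
// program is already in an invalid state.
bool swallowRangeError(size_t i)
{
    int[3] a = [10, 20, 30];
    try
    {
        auto x = a[i]; // bounds check: druntime throws RangeError for i >= 3
        return false;  // index was fine, nothing swallowed
    }
    catch (Error e)    // accepted silently; cleanup may already have been skipped
    {
        return true;   // we "handled" an unrecoverable condition
    }
}
```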
Jul 06 2018
parent reply John Carter <john.carter taitradio.com> writes:
On Saturday, 7 July 2018 at 01:18:21 UTC, wjoe wrote:
 But that's not how D works. It throws an Error which can be 
 caught.

 If people are allowed to do something they assume it's 
 legitimate.

 It should be a compile time error to catch an Error, but it 
 doesn't even emit a warning and it seems to work well which is 
 probably proof enough for people to assume it's good.
I got myself so tangled up in knots with the equivalent in Ruby. You can "rescue" the base Exception class, which initially I did everywhere to try to give better error messages. More often than not, this would result in everything going weird and insane instead of useful. Eventually I replaced _every_ "rescue Exception" with "rescue StandardError" and life improved majorly.

Seriously folks, trying to "catch and handle" a programming bug leads to the very dark side of life, especially in a C/C++/D-like language where the exception is concrete evidence that the system is _already_ in an undefined and unreliable state.
Jul 08 2018
next sibling parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Sunday, 8 July 2018 14:55:15 MDT John Carter via Digitalmars-d wrote:
 On Saturday, 7 July 2018 at 01:18:21 UTC, wjoe wrote:
 But that's not how D works. It throws an Error which can be
 caught.

 If people are allowed to do something they assume it's
 legitimate.

 It should be a compile time error to catch an Error, but it
 doesn't even emit a warning and it seems to work well which is
 probably proof enough for people to assume it's good.
I got myself so tangled up in knots with the equivalent in Ruby.... You can "rescue" the base Exception class... which initially I did everywhere to try give better error messages... Which more often than not would result in everything going weird and insane instead of useful. Eventually I replaced _every_ "rescue Exception" with "rescue StandardError" and life improved majorly. Seriously folks, trying to "catch and handle" a programming bug leads to the very dark side of life. Especially in a 'C/C++/D" like language where the exception is concrete evidence that the system is _already_ in an undefined and unreliable state.
I agree, though I'm increasingly of the opinion that we would have been better off just printing out the debug information at the call site and then killing the program with a HLT (or whatever the appropriate instruction would be) so that a core dump gets generated right there. It makes the program far more debuggable in situations where the problem is not easily reproduced, and it avoids the entire issue of whether it's okay to catch Error or Throwable. What we have instead is a situation where someone can catch something that they shouldn't be catching (thus putting their program in an even more invalid state), and when the program exits, we've lost the program state from the point of the failure. - Jonathan M Davis
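Something close to this is reachable today via druntime's assertHandler hook; a sketch (treat the exact signature as an assumption to verify against your druntime version; `installAbortingAssertHandler` is an illustrative name):

```d
// Rather than unwinding with an AssertError, stop at the failure point
// so the OS writes a core dump right there.
import core.exception : assertHandler;
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

void installAbortingAssertHandler()
{
    assertHandler = (string file, size_t line, string msg) {
        fprintf(stderr, "assert failed at %.*s:%zu: %.*s\n",
                cast(int) file.length, file.ptr, line,
                cast(int) msg.length, msg.ptr);
        abort(); // SIGABRT: a debugger stops on the faulty line, and the
                 // core dump preserves the program state at the failure
    };
}
```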
Jul 08 2018
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2018-07-08 22:55, John Carter wrote:
 On Saturday, 7 July 2018 at 01:18:21 UTC, wjoe wrote:
 But that's not how D works. It throws an Error which can be caught.

 If people are allowed to do something they assume it's legitimate.

 It should be a compile time error to catch an Error, but it doesn't 
 even emit a warning and it seems to work well which is probably proof 
 enough for people to assume it's good.
I got myself so tangled up in knots with the equivalent in Ruby.... You can "rescue" the base Exception class... which initially I did everywhere to try give better error messages... Which more often than not would result in everything going weird and insane instead of useful. Eventually I replaced _every_ "rescue Exception" with "rescue StandardError" and life improved majorly.
There's even a SyntaxError exception class (inheriting from Exception), which you do not want to catch. -- /Jacob Carlborg
Jul 09 2018
prev sibling parent wjoe <none example.com> writes:
On Sunday, 8 July 2018 at 20:55:15 UTC, John Carter wrote:
 On Saturday, 7 July 2018 at 01:18:21 UTC, wjoe wrote:
 But that's not how D works. It throws an Error which can be 
 caught.

 If people are allowed to do something they assume it's 
 legitimate.

 It should be a compile time error to catch an Error, but it 
 doesn't even emit a warning and it seems to work well which is 
 probably proof enough for people to assume it's good.
I got myself so tangled up in knots with the equivalent in Ruby.... You can "rescue" the base Exception class... which initially I did everywhere to try give better error messages... Which more often than not would result in everything going weird and insane instead of useful. Eventually I replaced _every_ "rescue Exception" with "rescue StandardError" and life improved majorly. Seriously folks, trying to "catch and handle" a programming bug leads to the very dark side of life. Especially in a 'C/C++/D" like language where the exception is concrete evidence that the system is _already_ in an undefined and unreliable state.
I'll keep that advice in mind.

In the case of the program I made with std.concurrency, I did not catch Throwable to try to recover, but to be able to debug the cause, because there was no stack trace printed out, or any indication at all that something went wrong. And because the debugger wouldn't break on anything either, and a catch block at least allowed setting a breakpoint, it seemed like an idea. A catch-all for Exception didn't trigger, so I broadened it to Throwable to rule out that case too, and sure enough an Error was caught. It would have been a lot easier to debug if that Error had brought down the whole thing with it; then there wouldn't have been a reason to catch Throwable in the first place.

Except, a D program which is terminated by the D runtime via the Error mechanism, at least on Linux, prints an Error with a stack trace and then exits with code 1. That's a normal exit, since positive exit codes are supposed to be handled by the caller, something akin to success or file-not-found. In my opinion this behavior is a defect in the D runtime. It should abort abnormally and indicate that fact with an exit code of -6, or the OS equivalent of SIGABRT. It is really annoying if you run the program in a debugger, too, which simply tells you the program exited normally, and you can't walk the stack trace, print variables, etc. The behavior on segfault is identical to a C program (exit code -11, SIGSEGV), but this is probably because the OS terminates the program before the runtime gets any opportunity to throw an Error. That means you don't get a stack trace, but a debugger breaks exactly at the offending line in the source code.

Before Jonathan's explanation of how the Error mechanism works, I had considered abusing it to do some cleanup of secrets in memory (cached passwords or some such) in case of an abnormal termination, the reasoning being that it's possible, and if it can reliably print a stack trace, why couldn't it do some zeroing of RAM prior to that, right? But thinking about it some more, all the things I would have considered doing in a catch-Error block can be solved without catching. If catching an Error were a compile-time error, I would have just accepted the fact and come up with alternatives all the same, saving him a lot of nerves in the process. So, thanks again, Jonathan, for bearing with me :) Much appreciated!

Considering the implications, it really baffles me that there isn't so much as a warning when the compiler encounters: catch *Error...
Jul 10 2018
prev sibling next sibling parent reply Mr.Bingo <Bingo Namo.com> writes:
On Tuesday, 3 July 2018 at 04:54:46 UTC, Walter Bright wrote:
 On 7/2/2018 7:53 PM, John Carter wrote:
 Step 2 is to (gradually) migrate std:: standard library 
 precondition violations in particular from exceptions (or 
 error codes) to contracts. The programming world now broadly 
 recognizes that programming bugs (e.g., out-of-bounds access, 
 null dereference, and in general all 
 pre/post/assert-condition violations) cause a corrupted state 
 that cannot be recovered from programmatically, and so they 
 should never be reported to the calling code as exceptions or 
 error codes that code could somehow handle.
Ah, that's a really nice statement.
So, I have finally convinced the C++ world about that! Now if I can only convince the D world :-) (I'm referring to the repeated and endless threads here where people argue that yes, they can recover from programming bugs!)
If this is the case, then why do we need a reboot switch? Never say never! If you really believe this, then why do you print out only minimal debug information when an error occurs? If programming bugs were essentially fatal, then wouldn't it be important to give as much information as possible when they occur, so they can easily be fixed and do not happen again? Having too much information is a good thing!
Jul 09 2018
parent reply John Carter <john.carter taitradio.com> writes:
On Monday, 9 July 2018 at 22:50:07 UTC, Mr.Bingo wrote:
 On Tuesday, 3 July 2018 at 04:54:46 UTC, Walter Bright wrote:
 On 7/2/2018 7:53 PM, John Carter wrote:
 Step 2 is to (gradually) migrate std:: standard library 
 precondition violations in particular from exceptions (or 
 error codes) to contracts. The programming world now broadly 
 recognizes that programming bugs (e.g., out-of-bounds 
 access, null dereference, and in general all 
 pre/post/assert-condition violations) cause a corrupted 
 state that cannot be recovered from programmatically, and so 
 they should never be reported to the calling code as 
 exceptions or error codes that code could somehow handle.
Ah, that's a really nice statement.
So, I have finally convinced the C++ world about that! Now if I can only convince the D world :-) (I'm referring to the repeated and endless threads here where people argue that yes, they can recover from programming bugs!)
If this is the case then why do we need a reboot switch? Never say never! If you really believe this then why do you print out minimal debug information when an error occurs? If programming bugs were essentially fatal, then wouldn't be important to give as much information when they occur so they can easily be fixed so they do not happen again? Having too much information is a good thing!
I have learnt some very hard and painful lessons over the last few years of working on an embedded device without an MMU. The chief one is that relying on corrupted services, which are in an undefined state, is a startlingly Bad Thing when extracting and recording crash information. It's a toss-up whether the information-extraction routine will crash, loop, or produce garbage, and whether the routine that records the crash information crashes, loops, or records garbage.

The solution is to extract and stash only information obtained through services you can verify line by line. I.e., if it is possible that it may be corrupted (e.g. heap, RTOS services), don't use it. Then reboot to put the system into a defined state, and then persist the information.

With an MMU life is easier: you can rely on the kernel to take a coredump and persist that for you. But again, that is "outside" the run time of the program.
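The "verify line by line" rule can be sketched as a crash recorder that touches only a pre-allocated static buffer and a handful of C calls; all names here are illustrative:

```d
// Record crash information using only services that can be verified line
// by line: a static buffer reserved up front, plus snprintf/fputs/abort
// from the C runtime. No GC heap, no RTOS services.
import core.stdc.stdio : snprintf, fputs, stderr;
import core.stdc.stdlib : abort;

__gshared char[256] gCrashNote; // reserved at load time, never allocated during a crash

int formatNote(const(char)* what, const(char)* file, int line) nothrow @nogc
{
    return snprintf(gCrashNote.ptr, gCrashNote.length,
                    "%s at %s:%d\n", what, file, line);
}

void fatal(const(char)* what, const(char)* file, int line) nothrow @nogc
{
    formatNote(what, file, line);
    fputs(gCrashNote.ptr, stderr); // last resort; even stderr is a judgment call
    abort(); // back to a defined state via reset/coredump
}
```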
 Having too much information is a good thing!
Not if it is garbage, or crashes, or freezes the system because the services it uses are corrupt. Then it's a Very Very Bad Thing. The best approach I have found is to "crash early and often". Seriously. The earlier in the execution path you find the defect and fix it, the more robust your system will be. Nothing creates flaky and unreliable systems more than allowing them to wobble on past the first point where you already know that things are wrong.
Jul 09 2018
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/9/2018 6:50 PM, John Carter wrote:
 Nothing creates flaky and unreliable systems more than allowing them to wobble 
 on past the first point where you already know that things are wrong.
Things got so bad with real mode DOS development that I rebooted the system every time my program crashed, making for rather painfully slow development. Salvation came in the form of OS/2 (!). Although OS/2 was a tiny market, it was a godsend for me. I developed all the 16 bit code on OS/2, which had memory protection. Only the final step was recompiling it for real mode DOS.
Jul 09 2018
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Jul 09, 2018 at 11:37:59PM -0700, Walter Bright via Digitalmars-d wrote:
 On 7/9/2018 6:50 PM, John Carter wrote:
 Nothing creates flaky and unreliable systems more than allowing them
 to wobble on past the first point where you already know that things
 are wrong.
Things got so bad with real mode DOS development that I rebooted the system every time my program crashed, making for rather painfully slow development.
[...] The saving grace to real mode DOS was that rebooting was so fast. Sad to say, modern OSes are horribly slow and inefficient at booting up. So much for progress. T -- No! I'm not in denial!
Jul 10 2018
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/10/2018 8:39 AM, H. S. Teoh wrote:
 The saving grace to real mode DOS was that rebooting was so fast.
I beg to differ. Boot time has been about the same for the last 40 years :-)
Jul 10 2018
parent reply Jacob Carlborg <doob me.com> writes:
On 2018-07-11 03:50, Walter Bright wrote:
 On 7/10/2018 8:39 AM, H. S. Teoh wrote:
 The saving grace to real mode DOS was that rebooting was so fast.
I beg to differ. Boot time has been about the same for the last 40 years :-)
The boot time of my computer was reduced from several minutes to around 30 seconds when I switched to SSD disks. -- /Jacob Carlborg
Jul 11 2018
parent jmh530 <john.michael.hall gmail.com> writes:
On Wednesday, 11 July 2018 at 16:17:30 UTC, Jacob Carlborg wrote:
 The boot time of my computer was reduced from several minutes 
 to around 30 seconds when I switch to SSD disks.
My NVMe ssd is very fast.
Jul 11 2018
prev sibling parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/10/18 2:37 AM, Walter Bright wrote:
 On 7/9/2018 6:50 PM, John Carter wrote:
 Nothing creates flaky and unreliable systems more than allowing them 
 to wobble on past the first point where you already know that things 
 are wrong.
Things got so bad with real mode DOS development that I rebooted the system every time my program crashed, making for rather painfully slow development. Salvation came in the form of OS/2 (!). Although OS/2 was a tiny market, it was a godsend for me. I developed all the 16 bit code on OS/2, which had memory protection. Only the final step was recompiling it for real mode DOS.
All this talk about DOS, I also saw this in the news recently:https://kotaku.com/in-2018-a-pc-game-is-being-made-in-dos-1827463766 -Steve
Jul 12 2018
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03.07.2018 06:54, Walter Bright wrote:
 ...
 
 (I'm referring to the repeated and endless threads here where people 
 argue that yes, they can recover from programming bugs!)
Which threads are those?
Jul 10 2018
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, 10 July 2018 13:21:28 MDT Timon Gehr via Digitalmars-d wrote:
 On 03.07.2018 06:54, Walter Bright wrote:
 ...

 (I'm referring to the repeated and endless threads here where people
 argue that yes, they can recover from programming bugs!)
Which threads are those?
Pretty much any thread arguing for having clean-up done when an Error is thrown instead of terminating ASAP. Usually, folks don't try to claim that trying to fully continue the program in spite of the Error is a good idea, but even that gets suggested sometimes (e.g. trying to catch and recover from a RangeError comes up periodically). - Jonathan M Davis
Jul 10 2018
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/10/18 6:26 PM, Jonathan M Davis wrote:
 On Tuesday, 10 July 2018 13:21:28 MDT Timon Gehr via Digitalmars-d wrote:
 On 03.07.2018 06:54, Walter Bright wrote:
 ...

 (I'm referring to the repeated and endless threads here where people
 argue that yes, they can recover from programming bugs!)
Which threads are those?
Pretty much any thread arguing for having clean-up done when an Error is thrown instead of terminating ASAP. Usually, folks don't try to claim that trying to fully continue the program in spite of the Error is a good idea, but even that gets suggested sometimes (e.g. trying to catch and recover from a RangeError comes up periodically).
Or, aside from that strawman, that RangeError shouldn't even be an Error...

-Steve
Jul 10 2018
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, 10 July 2018 16:48:41 MDT Steven Schveighoffer via Digitalmars-d 
wrote:
 On 7/10/18 6:26 PM, Jonathan M Davis wrote:
 On Tuesday, 10 July 2018 13:21:28 MDT Timon Gehr via Digitalmars-d 
wrote:
 On 03.07.2018 06:54, Walter Bright wrote:
 ...

 (I'm referring to the repeated and endless threads here where people
 argue that yes, they can recover from programming bugs!)
Which threads are those?
Pretty much any thread arguing for having clean-up done when an Error is thrown instead of terminating ASAP. Usually, folks don't try to claim that trying to fully continue the program in spite of the Error is a good idea, but even that gets suggested sometimes (e.g. trying to catch and recover from a RangeError comes up periodically).
Or aside from that strawman that RangeError shouldn't be an Error even...
I suspect that we're going to have to agree to disagree on that one. In the vast majority of cases, indices do not come from program input, and in the cases where they do, they can be checked by the programmer to ensure that they don't violate the contract of indexing dynamic arrays. And when you consider that the alternative would be for it to be a RangeException, having it be anything other than an Error would quickly mean that pretty much no code using arrays could be nothrow.

Regardless, there are cases where the programmer decides what the contract of an API is (whether that be the creator of the language for something standard like dynamic arrays, or the author of a function in a stray programmer's personal library), and any time that contract is violated, it's a bug in the program, at which point the logic is faulty and continuing to execute the program is risky by definition. Whether a particular contract was the right choice can of course be debated, but as long as it's the contract for that particular API, anyone using it needs to obey it, or they'll have bugs in their program with potentially fatal consequences. - Jonathan M Davis
Jul 10 2018
next sibling parent reply crimaniak <crimaniak gmail.com> writes:
On Tuesday, 10 July 2018 at 22:59:08 UTC, Jonathan M Davis wrote:

 Or aside from that strawman that RangeError shouldn't be an 
 Error even...
I suspect that we're going to have to agree to disagree on that one. ... ... continuing to execute the program is risky by definition. ...
This error handling policy makes D inapplicable for creating web applications and long-running services in general. I think anyone who has worked in the enterprise sector will confirm that any complex web service contains some number of errors that were not detected during testing. These errors are detected at random during operation, and the greatest probability of detecting them is during the site's peak traffic.

Do you kill the whole application, even when memory is undamaged, on a mere suspicion of a logic error? At the peak of attendance? To prevent a potential catastrophe that could theoretically arise from this error? Congratulations! The catastrophe is already here.

In the case of services, the strategy for responding to errors must be exactly the opposite. The error should be localized as much as possible, and the programmer should be able to respond to any type of error. The very nature of web applications lends itself to this: as a rule, queries are handled by short-lived tasks that work with thread-local memory, and killing only the task that caused the error, with the exception transferred to the calling task, would radically improve the situation.

And yes, RangeError shouldn't be an Error.
Jul 11 2018
next sibling parent reply Joakim <dlang joakim.fea.st> writes:
On Wednesday, 11 July 2018 at 12:45:40 UTC, crimaniak wrote:
 On Tuesday, 10 July 2018 at 22:59:08 UTC, Jonathan M Davis 
 wrote:

 Or aside from that strawman that RangeError shouldn't be an 
 Error even...
I suspect that we're going to have to agree to disagree on that one. ... ... continuing to execute the program is risky by definition. ...
This error handling policy makes D not applicable for creating WEB applications and generally long-running services. I think anyone who has worked in the enterprise sector will confirm that any complex WEB service contains some number of errors that were not detected during the tests. These errors are detected randomly during operation. And the greatest probability of their detection - during the peak traffic of the site. Do you kill the whole application even in the case of undisturbed memory, with one suspicion of a logical error? At the peak of attendance? To prevent a potential catastrophe, which could theoretically arise as a result of this error? Congratulations! The catastrophe is already here. And in the case of services, the strategy for responding to errors must be exactly the opposite. The error should be maximally localized, and the programmer should be able to respond to any type of errors. The very nature of the work of WEB applications contributes to this. As a rule, queries are handled by short-lived tasks that work with thread-local memory, and killing only the task that caused the error, with the transfer of the exception to the calling task, would radically improve the situation. And yes, RangeError shouldn't be an Error.
Sounds like you're describing the "Let it crash" philosophy of Erlang: https://ferd.ca/the-zen-of-erlang.html The crucial point is whether you can depend on the error being isolated, as in Erlang's lightweight processes. I guess D assumes it isn't.
Jul 11 2018
parent reply crimaniak <crimaniak gmail.com> writes:
On Wednesday, 11 July 2018 at 13:19:01 UTC, Joakim wrote:
 ...
 Sounds like you're describing the "Let it crash" philosophy of 
 Erlang:

 https://ferd.ca/the-zen-of-erlang.html
I've never programmed in Erlang, but yes, I think it's something like this. The people who developed Erlang definitely have a lot of experience developing services.
 The crucial point is whether you can depend on the error being 
 isolated, as in Erlang's lightweight processes. I guess D 
 assumes it isn't.
I think if we have a task with safe code only, and communication via message passing, it's isolated well enough that an error should kill only that task. In any case, I can still bring down the whole application myself if I think that's the safer way to deal with errors. So the paranoid lose nothing with this approach.
Jul 11 2018
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 11 July 2018 at 22:35:06 UTC, crimaniak wrote:
 The people who developed Erlang definitely have a lot of 
 experience developing services.
Yes, it was created for telephone exchanges. You don't want a phone exchange to go completely dead just because there is a bug in the code. That would be a disaster, and very dangerous too (think emergency calls).
 The crucial point is whether you can depend on the error being 
 isolated, as in Erlang's lightweight processes. I guess D 
 assumes it isn't.
I think if we have a task with safe code only, and communication with message passing, it's isolated good enough to make error kill this task only. In any case, I still can drop the whole application myself if I think it will be the more safe way to deal with errors. So paranoids do not lose anything in the case of this approach.
Yup, keep critical code that rarely changes, such as committing transactions, in a completely separate service, and keep constantly changing code, where bugs will be present, separate from it.

Anyway, it is completely idiotic to terminate a productivity application because an optional editing function (like a filter in a sound editor) generates a division-by-zero. End users would be very unhappy. If people want access to a low-level programming language, then they should also be able to control error handling and make tradeoffs regarding denial-of-service attack vectors versus 100% correctness (think servers for entertainment services like game servers).

What people completely fail to understand is that if an assert trips, then it isn't sufficient to reboot the program. If an assert should always lead to shutdown, then by the same line of argument it should also prevent the program from being run again: the bug is still there. That would mean every bug leads to complete service shutdown until the bug has been fixed, which would make for a very shitty entertainment experience and many customers lost.
Jul 15 2018
prev sibling next sibling parent reply Brad Roberts <braddr puremagic.com> writes:
On 7/11/2018 5:45 AM, crimaniak via Digitalmars-d wrote:
 On Tuesday, 10 July 2018 at 22:59:08 UTC, Jonathan M Davis wrote:

 Or aside from that strawman that RangeError shouldn't be an Error 
 even...
I suspect that we're going to have to agree to disagree on that one. ... ... continuing to execute the program is risky by definition. ...
This error handling policy makes D not applicable for creating WEB applications and generally long-running services. I think anyone who has worked in the enterprise sector will confirm that any complex WEB service contains some number of errors that were not detected during the tests. These errors are detected randomly during operation. And the greatest probability of their detection - during the peak traffic of the site. Do you kill the whole application even in the case of undisturbed memory, with one suspicion of a logical error? At the peak of attendance? To prevent a potential catastrophe, which could theoretically arise as a result of this error? Congratulations! The catastrophe is already here. And in the case of services, the strategy for responding to errors must be exactly the opposite. The error should be maximally localized, and the programmer should be able to respond to any type of errors. The very nature of the work of WEB applications contributes to this. As a rule, queries are handled by short-lived tasks that work with thread-local memory, and killing only the task that caused the error, with the transfer of the exception to the calling task, would radically improve the situation. And yes, RangeError shouldn't be an Error.
From experience, on multiple teams with extremely large fleets of machines running some of the largest websites and services, one of the most powerful tools that helped us go from good uptimes (2-4 9's) to excellent uptimes (4-6 9's) was using application-exiting asserts in production.  Yes, you kill the app.  You exit as fast and as often as the errors occur.  You know what happens?  You find the bugs faster, you fix them even faster, and the result is solid software.

One caveat that affects this is delivered vs. managed software. The rules and patterns are drastically different if you're burning software onto CDs and selling it through stores with no path to make updates, but that's less and less the case every day.

When you're afraid of your software and afraid to make changes to it, you make bad choices.  Embrace every strategy you can find to help you find problems as quickly as possible.
Jul 11 2018
next sibling parent reply crimaniak <crimaniak gmail.com> writes:
On Wednesday, 11 July 2018 at 18:27:33 UTC, Brad Roberts wrote:

 ... application exiting asserts in production.  Yes, you kill 
 the app.  You exit as fast and often as the errors occur.  You 
 know what happens?  You find the bugs faster, you fix them even 
 faster, and the result is solid software.
You mean that more serious consequences of errors better motivate programmers? Then I have an idea: wire up the developers' chairs so that with each failed assert the programmer responsible for that part receives an electric shock, and the code will surely become even more reliable. But I want an error found in production not to bring the service down, affecting all the users who are currently on the site, and that is a slightly different aspect.
 When you're afraid of your software and afraid to make changes 
 to it, you make bad choices.  Embrace every strategy you can 
 find to help you find problems as quickly as possible.
Sorry, but I'm not sure I understand how this relates to the topic. Still, I do not think that a failed-assert message in the log lets you find an error any faster than a similar message about an exception.
Jul 11 2018
parent reply Brad Roberts <braddr puremagic.com> writes:
On 7/11/2018 3:24 PM, crimaniak via Digitalmars-d wrote:
 On Wednesday, 11 July 2018 at 18:27:33 UTC, Brad Roberts wrote:
 
 ... application exiting asserts in production.  Yes, you kill the 
 app.  You exit as fast and often as the errors occur.  You know what 
 happens?  You find the bugs faster, you fix them even faster, and the 
 result is solid software.
You mean that the serious consequences of errors better motivate programmers? Then I have an idea. If you connect the current to the chairs of the developers, and with each failed assert the programmer responsible for this part will receive an electrical discharge, the code will surely become even more reliable. But I want the error found in the production not to lead to a drop in the service, affecting all the users who are currently on the site, and this is a slightly different aspect.
Motivation is a part of it, to be sure, but only a tiny part. Asserts, and heavy use of them, change how you think about validating system state. Yes, you can do that without asserts, but I've found that when you tend towards system-recovery and error-mitigation style thinking, you tend to think about getting out of a bad state, not about never getting into it.

As for applying punishments for errors, that tends to be a bad motivator too. It encourages hiding problems rather than preventing them. All in all, I'm mostly presenting anecdotal evidence that embracing the style of programming you're arguing against has produced very good results, repeatably, in my work experience.

There's a big topic / discussion area in here about fault isolation. If you really want things to be able to fail independently, then they need to be separate enough that faults in one cannot affect the other. Most languages today don't provide the barriers within a process to have multiple fault domains. None in the C family of languages does. Erlang is a good example of one that does. Given the industry and userbase that uses the language, it's not at all shocking that it too embraces the concept of fail fast, don't try to recover.

Anyway, this is one of the areas where people clearly have different philosophies, and changing minds is unlikely to happen.
Jul 11 2018
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2018 6:54 PM, Brad Roberts wrote:
 Anyway, this is one of the areas where people clearly have different 
 philosophies and changing minds is unlikely to happen.
True, but that doesn't mean each philosophy is equally valid. Some ideas are better than others :-) BTW, the "fail fast with asserts" is one I was pretty much forced into with DOS real mode programming, and it has served me well for a very long time. It is also based on my experience with Boeing engineering philosophy - and that has resulted in incredibly safe airliners.
Jul 11 2018
parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, 11 July 2018 23:39:49 MDT Walter Bright via Digitalmars-d 
wrote:
 On 7/11/2018 6:54 PM, Brad Roberts wrote:
 Anyway, this is one of the areas where people clearly have different
 philosophies and changing minds is unlikely to happen.
True, but that doesn't mean each philosophy is equally valid. Some ideas are better than others :-) BTW, the "fail fast with asserts" is one I was pretty much forced into with DOS real mode programming, and it has served me well for a very long time. It is also based on my experience with Boeing engineering philosophy - and that has resulted in incredibly safe airliners.
This discussion reminds me of an interview with Bryan Cantrill from a couple of years back, where he was complaining about how Linus was talking about turning all of the BUG_ONs in the Linux kernel into WARN_ONs because they were getting too many crashes, since that's just hiding bugs rather than shoving them in your face so that you can find them and fix them.

Yes, it really sucks when your program crashes, but if you have a check for invalid state in your program, you want it to fail fast so that it does not continue to execute and do who knows what (which is particularly bad for an OS kernel), and so that you know that it's happening - and preferably have it provide a core dump so that you hopefully have the information you need to debug it. Then the problem can be fixed and stop being a problem, reducing the number of bugs in your program and increasing its stability, whereas if you try to hide bugs and continue, then you never even find out that it's a problem, and it doesn't get fixed.

So, there are definitely programmers out there who agree with you, even if there are also plenty out there who don't. - Jonathan M Davis
Jul 12 2018
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2018 11:27 AM, Brad Roberts wrote:
 When you're afraid of your software and afraid to make changes to it, you make 
 bad choices.  Embrace every strategy you can find to help you find problems
as 
 quickly as possible.
It's good to hear my opinions on the subject backed by major experience! Thanks for posting.
Jul 11 2018
prev sibling next sibling parent reply John Carter <john.carter taitradio.com> writes:
On Wednesday, 11 July 2018 at 12:45:40 UTC, crimaniak wrote:
 The error should be maximally localized, and the programmer 
 should be able to respond to any type of errors. The very 
 nature of the work of WEB applications contributes to this. As 
 a rule, queries are handled by short-lived tasks that work with 
 thread-local memory, and killing only the task that caused the 
 error, with the transfer of the exception to the calling task, 
 would radically improve the situation.
Hmm. The fun, fun, fun thing about undefined behaviour in the absence of MMUs is that its effects are maximally _unlocalized_. That is, it can corrupt _any_ part of the system. A use after free, for example, or an index out of bounds on the heap, can corrupt any and all subsystems sharing the same virtual address space. That's part of the reason why Walter is pushing so hard for memory safety.

Memory safety is truly a huge step away from the world of pain that is C/C++... it removes a truly huge class of defects. However, it also removes a common terminology. Odds on you know what I mean when I say "use after free" or "index out of bounds". Now, in the levels above the language and the library, humans are equally capable of screwing up and corrupting our own work... except the language can no longer help you. Above the language and the library, we no longer have a common terminology for describing the myriad ways you can shoot yourself in the foot. The language can, through encapsulation, "minimize the blast radius", but it can't stop you.

I disagree with Bjarne Stroustrup on many things... but in this article he is absolutely spot on: https://www.artima.com/intv/goldilocks3.html Please read it; it's probably the most important article on object-oriented design you'll find.

Now, the problem with "unexpected" exceptions is that odds on you are left with a broken invariant. ie. Odds on you are left with an object you now cannot reasonably expect to function. ie. Odds on that object is part of a larger object or subsystem you now cannot reasonably expect to function. ie. You're left with a system that will progressively become flakier and flakier, less responsive and less reliable.

The only sane response, really, is to reset to a defined state as quickly as possible. ie. Capture a backtrace, exit the process, and restart. Your effort in trying to catch and handle unexpected events to achieve uptime is misplaced; you are much better served by Chaos Monkeys. ie. Deliberately "hard kill" your running systems at random moments, and spend your efforts on designing for no resulting corruption and rapid, reliable reset. I certainly wouldn't unleash Chaos Monkeys on a production system until I was really comfortable with the behaviour on a test system....
Jul 11 2018
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/11/2018 4:56 PM, John Carter wrote:
 I disagree with Bjarne Stroustrup on many things.... but in this article he is 
 absolutely spot on. https://www.artima.com/intv/goldilocks3.html
It's a great article, and a quick read.
Jul 11 2018
prev sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Wednesday, 11 July 2018 at 12:45:40 UTC, crimaniak wrote:
 This error handling policy makes D not applicable for creating 
 WEB applications and generally long-running services.
You use process isolation so it is easy to restart part of it without disrupting others. Then it can crash without bringing the system down. This is doable with segfaults and range errors, same as with exceptions. This is one of the most important systems engineering principles: expect failure from any part, but keep the system as a whole running anyway.
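That restart loop can be sketched in a few lines. This is a hypothetical Python illustration of the principle (the task numbering and the simulated crash are made up; a child's non-zero exit stands in for a segfault or failed assert):

```python
import subprocess
import sys

def run_task(n: int, attempt: int) -> int:
    # Run the task in its own OS process. A hard crash (simulated here
    # by a non-zero exit) kills only this child; it cannot corrupt the
    # supervisor's memory.
    code = f"import sys; sys.exit(1 if ({n} == 2 and {attempt} == 0) else 0)"
    return subprocess.run([sys.executable, "-c", code]).returncode

def supervise(tasks, max_attempts=3):
    # Restart any task whose process died, up to max_attempts, instead
    # of taking the whole system down with it.
    results = {}
    for n in tasks:
        results[n] = "gave up"
        for attempt in range(max_attempts):
            if run_task(n, attempt) == 0:
                results[n] = "ok"
                break
    return results
```

Here task 2 crashes on its first run and succeeds when restarted, while the other tasks are never disrupted. Erlang's supervisors and systemd's `Restart=on-failure` are production-grade versions of the same loop.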
Jul 13 2018
next sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/13/18 8:55 AM, Adam D. Ruppe wrote:
 On Wednesday, 11 July 2018 at 12:45:40 UTC, crimaniak wrote:
 This error handling policy makes D not applicable for creating WEB 
 applications and generally long-running services.
You use process isolation so it is easy to restart part of it without disrupting others. Then it can crash without bringing the system down. This is doable with segfaults and range errors, same as with exceptions. This is one of the most important systems engineering principles: expect failure from any part, but keep the system as a whole running anyway.
But it doesn't scale if you use OS processes, it's too heavyweight. Of course, it depends on the application. If you only need 100 concurrent connections, processes might be OK. -Steve
Jul 13 2018
next sibling parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Friday, 13 July 2018 at 13:15:39 UTC, Steven Schveighoffer 
wrote:
 On 7/13/18 8:55 AM, Adam D. Ruppe wrote:
 On Wednesday, 11 July 2018 at 12:45:40 UTC, crimaniak wrote:
 This error handling policy makes D not applicable for 
 creating WEB applications and generally long-running services.
You use process isolation so it is easy to restart part of it without disrupting others. Then it can crash without bringing the system down. This is doable with segfaults and range errors, same as with exceptions. This is one of the most important systems engineering principles: expect failure from any part, but keep the system as a whole running anyway.
But it doesn't scale if you use OS processes, it's too heavyweight. Of course, it depends on the application. If you only need 100 concurrent connections, processes might be OK. -Steve
Come on, Steve... 100 concurrent connections? /P
Jul 13 2018
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 7/13/18 3:53 PM, Paolo Invernizzi wrote:
 On Friday, 13 July 2018 at 13:15:39 UTC, Steven Schveighoffer wrote:
 On 7/13/18 8:55 AM, Adam D. Ruppe wrote:
 On Wednesday, 11 July 2018 at 12:45:40 UTC, crimaniak wrote:
 This error handling policy makes D not applicable for creating WEB 
 applications and generally long-running services.
You use process isolation so it is easy to restart part of it without disrupting others. Then it can crash without bringing the system down. This is doable with segfaults and range errors, same as with exceptions. This is one of the most important systems engineering principles: expect failure from any part, but keep the system as a whole running anyway.
But it doesn't scale if you use OS processes, it's too heavyweight. Of course, it depends on the application. If you only need 100 concurrent connections, processes might be OK.
Come on, Steve...  100 concurrent connections?
Huh? What'd I say? -Steve
Jul 13 2018
parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Friday, 13 July 2018 at 20:12:36 UTC, Steven Schveighoffer 
wrote:
 On 7/13/18 3:53 PM, Paolo Invernizzi wrote:
 On Friday, 13 July 2018 at 13:15:39 UTC, Steven Schveighoffer 
 wrote:
 On 7/13/18 8:55 AM, Adam D. Ruppe wrote:
 [...]
But it doesn't scale if you use OS processes, it's too heavyweight. Of course, it depends on the application. If you only need 100 concurrent connections, processes might be OK.
Come on, Steve...  100 concurrent connections?
Huh? What'd I say?
Orders of magnitude too small. 100 concurrent connections you can handle with a sleeping Arduino... :-)
Jul 13 2018
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 7/13/18 4:47 PM, Patrick Schluter wrote:
 On Friday, 13 July 2018 at 20:12:36 UTC, Steven Schveighoffer wrote:
 On 7/13/18 3:53 PM, Paolo Invernizzi wrote:
 On Friday, 13 July 2018 at 13:15:39 UTC, Steven Schveighoffer wrote:
 On 7/13/18 8:55 AM, Adam D. Ruppe wrote:
 [...]
But it doesn't scale if you use OS processes, it's too heavyweight. Of course, it depends on the application. If you only need 100 concurrent connections, processes might be OK.
Come on, Steve...  100 concurrent connections?
Huh? What'd I say?
Orders of magnitude too small. 100 concurrent connections you can handle with a sleeping Arduino... :-)
Meh, I admit I don't know the specifics. I just know that there is a reason async I/O is used for web services. Let's say there is some number N of concurrent connections that processes can handle acceptably. Then you can scale to 100xN if you use something better. -Steve
Jul 14 2018
prev sibling parent reply John Carter <john.carter taitradio.com> writes:
On Friday, 13 July 2018 at 13:15:39 UTC, Steven Schveighoffer 
wrote:

 But it doesn't scale if you use OS processes, it's too 
 heavyweight. Of course, it depends on the application. If you 
 only need 100 concurrent connections, processes might be OK.
I think you may have fallen for Microsoft FUD. In the Early Days of Windows, Microsoft was appallingly Bad at multiple processes.... Rather than fix their OS, they cranked up their Marketing machine and Hyped threads as "Light Weight Processes". Unixy land has had "COW" (Copy on Write) page handling for years and years, and process creation and processes are light weight. There are very, very few Good reasons for threads, but threads being "light weight processes" is definitely not one of them.
Jul 17 2018
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 7/18/18 1:58 AM, John Carter wrote:
 On Friday, 13 July 2018 at 13:15:39 UTC, Steven Schveighoffer wrote:
 
 But it doesn't scale if you use OS processes, it's too heavyweight. Of 
 course, it depends on the application. If you only need 100 concurrent 
 connections, processes might be OK.
I think you may have fallen for Microsoft FUD. In the Early Days of Windows Microsoft was appalling Bad at multiple processes.... Rather than fix their OS, they cranked up their Marketing machine and Hyped threads as "Light Weight Processes".
Wikipedia [1] seems to have a lot of references to "Light weight processes" from Unixy sources. Seems more like a good definition of thread than FUD.
 Unixy land has had "COW" (Copy on Write) page handling for years and 
 years and process creation and processes are light weight.
That depends on how much memory has to be marked as COW. It's definitely more heavyweight than thread creation, which does none of that.
 There are very very few Good reasons for threads, but threads being 
 "light weight processes" is definitely not one of them
Interesting, but I wasn't talking about using threads, vibe.d uses fibers, and can scale much better than using processes or threads alone. See dconf presentations from Vladimir Panteleev [2] and Ali Chereli [3] to see why I was drawn to this conclusion. Besides, using processes if you are ONLY going to read from the shared state makes some sense, but as soon as you need to change the shared state, you need to devise some mechanism to communicate that back to the main process. With threads/fibers, it's trivial. With web services, most of the time the shared state you want elsewhere anyway (to make it persistent), so it's a better fit for processes than most program domains. -Steve [1] https://en.wikipedia.org/wiki/Light-weight_process [2] https://www.youtube.com/watch?v=Zs8O7MVmlfw [3] https://www.youtube.com/watch?v=7FJYc0Ydewo
Jul 19 2018
parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 19 July 2018 at 09:44:16 UTC, Steven Schveighoffer 
wrote:
 On 7/18/18 1:58 AM, John Carter wrote:
 With web services, most of the time the shared state you want 
 elsewhere anyway (to make it persistent), so it's a better fit 
 for processes than most program domains.
Sharing _complex_ state with thousands of users is a job for an ACID database, and that couples very well with separate processes. If the shared state is trivial, there's a plethora of libraries that abstract away whether you are sending messages to a process, a thread, or a coroutine: well, it seems that was the goal of std.concurrency, also! :-P /Paolo
Jul 19 2018
prev sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 13 July 2018 at 12:55:33 UTC, Adam D. Ruppe wrote:
 You use process isolation so it is easy to restart part of it 
 without disrupting others. Then it can crash without bringing 
 the system down. This is doable with segfaults and range 
 errors, same as with exceptions.

 This is one of the most important systems engineering 
 principles: expect failure from any part, but keep the system 
 as a whole running anyway.
If we are talking about something application-specific and in probabilistic terms, then yes, certainly. But that is not the absolutist position, where any failure should lead to a shutdown (and consequently a ban on reboot, as the failed assert might happen hours after the actual buggy code executed).

The absolutist position would also have to assume that all communicated state is corrupted, so a separate process does not improve the situation. Since you don't know with 100% certainty what the bug consists of, you should not retain any state from any source after the _earliest_ time at which the buggy logic could in theory have been involved. All databases should be assumed corrupted, no messages should be accepted, etc. (messages and databases are no different from memory in this regard).

In reality, absolutist positions are usually not possible to uphold, so you have to move to a probabilistic position. And the compiler cannot make probabilistic assumptions; you need a lot of contextual understanding to make those probabilistic assessments (e.g. the architect or programmer has to be involved). Fully reactive systems do not retain state, of course, and those would change the argument somewhat, but they are very rare... mostly limited to control systems (cars, airplanes, etc.).

The idea behind actor-based programming (e.g. Erlang) isn't that bugs don't occur or that the overall system will exhibit correct behaviour, but that it should be able to correct or adapt to situations despite bugs being present. That is really not available to us with the very "crisp" logic we use in current languages (true/false, all or nothing). Maybe something better will come out of probabilistic programming paradigms and software synthesis some time in the future. Within the current paradigms, we are stuck with the judgment of the humans involved.

Interestingly, biological systems are much better at robustness, fault tolerance and self-healing, but that involves a lot of overhead and also assumes that some failures are acceptable as long as the overall system can recover from them. Actor programming is based on the same assumption: the health of the overall (big) system.
Jul 15 2018
prev sibling parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 7/10/18 6:59 PM, Jonathan M Davis wrote:
 On Tuesday, 10 July 2018 16:48:41 MDT Steven Schveighoffer via Digitalmars-d
 wrote:
 On 7/10/18 6:26 PM, Jonathan M Davis wrote:
 On Tuesday, 10 July 2018 13:21:28 MDT Timon Gehr via Digitalmars-d
wrote:
 On 03.07.2018 06:54, Walter Bright wrote:
 ...

 (I'm referring to the repeated and endless threads here where people
 argue that yes, they can recover from programming bugs!)
Which threads are those?
Pretty much any thread arguing for having clean-up done when an Error is thrown instead of terminating ASAP. Usually, folks don't try to claim that trying to fully continue the program in spite of the Error is a good idea, but even that gets suggested sometimes (e.g. trying to catch and recover from a RangeError comes up periodically).
Or aside from that strawman that RangeError shouldn't be an Error even...
I suspect that we're going to have to agree to disagree on that one. In the vast majority of cases, indices do not come from program input, and in the cases where they do, they can be checked by the programmer to ensure that they don't violate the contract of indexing dynamic arrays. And when you consider that the alternative would be for it to be a RangeException, having it be anything other than an error would quickly mean that pretty much no code using arrays could be nothrow.
It's all wishful thinking on my part. At this point, there's no way we can make a non-opt-in change from RangeError to RangeException, because so much code would break.

But to be honest, I don't really think RangeException makes much sense either. It really is a programming error, but one that is eminently recoverable in some cases (it depends completely on the program). The check stops memory corruption from happening, and as long as you unwind the stack out to a place where you can report the issue and continue on, it's not going to affect other parts of the program.

The classic example is a fiber- or thread-based service, where the tasks run are independent of each other. It makes no sense to kill all the tasks just because one has an off-by-one indexing problem that was properly prevented from causing any issues.
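To illustrate that fiber-service scenario with a sketch (hypothetical Python, using an ordinary exception as a stand-in for a recoverable bounds failure; in D the controversial part is precisely that Error is not meant to be caught like this):

```python
from concurrent.futures import ThreadPoolExecutor

DATA = ["x", "y", "z"]

def handle(task_id: int) -> str:
    # Hypothetical request handler: task 7 has an off-by-one style bug
    # and indexes out of bounds. The bounds check turns that into an
    # exception before any memory corruption can happen.
    index = 7 if task_id == 7 else task_id % len(DATA)
    return DATA[index]

def serve(task_ids):
    # Run independent tasks on a pool; a failure in one task is caught
    # and reported for that task only, and the rest keep running.
    results = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {tid: pool.submit(handle, tid) for tid in task_ids}
        for tid, fut in futures.items():
            try:
                results[tid] = fut.result()
            except IndexError as e:
                results[tid] = f"failed: {e}"
    return results
```

Only the buggy task reports a failure; every other task completes normally, which is the behaviour being argued for here.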
 Regardless, there are sometimes cases where the programmer decides what the
 contract of an API is (whether that be the creator of the language for
 something standard like dynamic arrays or for a function in a stray
 programmer's personal library), and any time that that contract is violated,
 it's a bug in the program, at which point, the logic is faulty, and
 continuing to execute the program is risky by definition. Whether a
 particular contract was the right choice can of course be debated, but as
 long as it's the contract for that particular API, anyone using it needs to
obey it, or they'll have bugs in their program with potentially fatal
 consequences.
We are not so much in disagreement on this; I don't think it makes any sense for a RangeError not to be a programming error. But the problem I have with the choice is that an Error *necessarily* makes the entire program unusable. In other words, the scope of the problem is expanded by the language to include more than it should.

And really, it's not so much the throwing of the Error; it's the choice by the language to have nothrow functions not properly clean up when an Error is thrown. Without that "feature", this would be a philosophical discussion, not a real problem.

-Steve
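The clean-up hazard Steve describes can be sketched as follows (an illustrative example, not code from the thread; whether the `scope(exit)` actually runs when an Error escapes a nothrow function is up to the implementation, which is precisely the complaint):

```d
import core.stdc.stdio : printf; // printf is itself nothrow, so usable here

// nothrow only excludes Exceptions: the bounds check below can still
// throw RangeError (an Error). Because the function is nothrow, the
// compiler is permitted to omit unwinding code, so the scope(exit)
// clean-up is not guaranteed to run if that Error escapes.
// `cleaned` just makes the clean-up observable on the normal path.
nothrow void useResource(int[] data, size_t i, ref bool cleaned)
{
    scope(exit) cleaned = true;  // may be skipped if an Error escapes
    printf("got %d\n", data[i]); // bounds check may throw RangeError
}

void main()
{
    bool cleaned = false;
    useResource([1, 2, 3], 0, cleaned); // normal path: clean-up runs
    assert(cleaned);
}
```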
Jul 12 2018
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/10/2018 12:21 PM, Timon Gehr wrote:
 On 03.07.2018 06:54, Walter Bright wrote:
 (I'm referring to the repeated and endless threads here where people argue 
 that yes, they can recover from programming bugs!)
Which threads are those?
I'd have to google for them. Maybe try searching for "assert terminate program logic bug"
Jul 10 2018
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/10/2018 12:21 PM, Timon Gehr wrote:
 Which threads are those?
Here's one:

https://digitalmars.com/d/archives/digitalmars/D/Program_logic_bugs_vs_input_environmental_errors_244143.html

Have fun, it may take upwards of a week to read that one!
Jul 10 2018