
digitalmars.D - Creator of LLVM, Clang, and Swift Says To Not Write Security Critical Software In C/C++

reply "Jack Stouffer" <jack jackstouffer.com> writes:
http://article.gmane.org/gmane.comp.compilers.llvm.devel/87749

Safety is one of the more important things that D offers over 
C++, even though people keep saying C++11/14 makes D unimportant.
Jul 13 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 July 2015 at 03:35:08 UTC, Jack Stouffer wrote:
 http://article.gmane.org/gmane.comp.compilers.llvm.devel/87749

 Safety is one of the more important things that D offers over 
 C++, even though people keep saying C++11/14 makes D 
 unimportant.
Uhm, no. The linked page concludes that security-oriented software should be written in languages that trap on integer overflow by default.

D is not better off for having modular arithmetic: it means you cannot even catch overflow-related issues by semantic analysis, since overflow does not exist as an error condition. There are C-like languages that ensure overflow is impossible at compile time (by putting limits on loop iterations and doing heavy-duty proofs).
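To make the distinction concrete, here is a minimal D sketch, assuming druntime's core.checkedint module (its adds helper returns the wrapped sum and sets a flag on overflow):

    import core.checkedint : adds;

    void main()
    {
        int a = int.max;
        assert(a + 1 == int.min); // wraparound is defined behaviour; nothing is raised

        bool overflow;
        int sum = adds(a, 1, overflow); // checked add: sets the flag instead of trapping
        assert(overflow);               // the caller must test it explicitly
    }

Nothing traps by default; detection is opt-in, per call.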
Jul 14 2015
next sibling parent reply "Laeeth Isharc" <laeethnospam nospamlaeeth.com> writes:
On Tuesday, 14 July 2015 at 07:43:27 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 14 July 2015 at 03:35:08 UTC, Jack Stouffer wrote:
 http://article.gmane.org/gmane.comp.compilers.llvm.devel/87749

 Safety is one of the more important things that D offers over 
 C++, even though people keep saying C++11/14 makes D 
 unimportant.
Uhm, no. The linked page concludes that security-oriented software should be written in languages that trap on integer overflow by default. D is not better off for having modular arithmetic: it means you cannot even catch overflow-related issues by semantic analysis, since overflow does not exist as an error condition. There are C-like languages that ensure overflow is impossible at compile time (by putting limits on loop iterations and doing heavy-duty proofs).
The article concludes: "There are many more modern and much safer languages that either eliminate the UB entirely through language design (e.g. using a garbage collector to eliminate an entire class of memory safety issues, completely disallowing pointer casts to enable TBAA safely, etc), or by intentionally spending a bit of performance to provide a safe and correct programming model (e.g. by guaranteeing that integers will trap if they overflow). My hope is that the industry will eventually move to better systems programming languages, but that will take a very very long time..."

__e.g. using a garbage collector to eliminate an entire class of memory safety issues__

Now one may say that this isn't all he was saying, that the GC in D can be improved, that D could be safer, and so on. But it's hardly fair to suggest the original poster is not right about one of the advantages of D vs C and C++. Or at least you ought to make that argument rather than just pick on one fragment of the linked piece, without considering the overall point.
Jul 14 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 July 2015 at 08:54:42 UTC, Laeeth Isharc wrote:
 Now one may say that this isn't all he was saying, that the GC 
 in D can be improved, that D could be safer, and so on.  But 
 it's hardly fair to suggest the original poster is not right 
 about one of the advantages of D vs C and C++.
The linked webpage explicitly states that C's UB for integer overflow gives 2x performance in some scenarios. Something these forums have tried to play down many times.

You have compiler options/libraries/tools for C++ that allow you to write code as safe as or safer than D (which lacks sanitizers), at the cost of performance and convenience. Just as D's defaults sacrifice performance for the same, though D tries to give priority to convenience. Which C++ does not have much of (convenience, that is). C++ has an impressive range of options that beats all alternatives in the flexibility department, but also high complexity levels and tedious syntax.

It is just that if you use C++ over Java, you do it because you need performance, and so you deliberately, through and through, avoid those features/libraries/compiler options/tools and resort to writing code that is less robust.

My C++ libraries do void casts and lots of other low-level stuff, not because the C++ ecosystem does not provide robust alternatives, but in order to work with raw memory. I would never dream of doing that in a language like Go or Java. I do it because I _need_ the performance that C/C++ brings and also need to tailor low-level constructs to the hardware/OS. Which I could not do at all in Go/Java.

If you do the same in D, you are in the same boat as C++, except C++ is more tedious and C++ provides sanitizers.

But there are restricted and annotated versions of C that offer provable safety at the cost of development time, but with C performance. That is much better than D for _secure_, performant system-level programming, since you also have both termination and run-time guarantees. Of course, nobody uses D for critical system-level programming, so that is not really an issue in the foreseeable future.

Good enough rebuttal? Slam-dunk C/C++ for the right reasons: complexity and tedium. Complexity and tedium make people more likely to make fatal mistakes... but that's ergonomics, not semantics.
Jul 14 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Omitted from your comments are any mention of D's language support for purity 
and transitive const/immutability and @safe/@trusted/@system. These are critical 
for writing robust, encapsulated code. I know of no C++ sanitizers that attempt 
to address these.

Also, the Warp project shows that D can be every bit as performant as C++, 
while being a lot easier to get there.

And lastly, writing code in C++ does not get you performance. Performance comes 
from careful balancing of algorithms, data structures, memory layout, cache 
issues, threading, and having intimate knowledge of how your particular 
compiler generates code, and then using a profiler.

D provides access to tuning every one of these issues. Anyone familiar with 
those issues at the level needed to write performant C++ will also find it 
straightforward to avoid performance issues with D and the GC.

Have you used a profiler for your performance critical C++ code? You don't have 
to answer, but very, very few C++ programmers do. And I can guarantee you that 
those that don't use a profiler are not writing performant code (though they 
certainly may believe they are).
Jul 14 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 July 2015 at 10:32:35 UTC, Walter Bright wrote:
 Omitted from your comments are any mention of D's language 
 support for purity and transitive const/immutability and 
 @safe/@trusted/@system. These are critical for writing robust, 
 encapsulated code. I know of no C++ sanitizers that attempt to 
 address these.
Yes, if your codebase benefits from transitive const/immutability, then that can simplify reasoning about the program. I usually don't write code that way in C-like programming; I prefer more fine-grained const, to protect myself against encapsulation-related mistakes more than against multi-threading issues etc. In an ideal world I would want both. In the real world, complexity is already higher from having const in the first place... so I understand the trade-off.
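To illustrate the difference, a minimal sketch (the Node type is made up): in C++, const on the parameter below would protect only the head node, because const is shallow there; in D it protects the whole list.

    struct Node { int value; Node* next; }

    void inspect(const(Node)* n)
    {
        // D's const is transitive: n.next is typed const(Node)*,
        // so everything reachable through n is read-only.
        // n.next.value = 1;  // error: cannot modify const expression
    }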
 And lastly, writing code in C++ does not get you performance. 
 Performance comes from careful balancing of algorithms, data 
 structures, memory layout, cache issues, threading, and having 
 intimate knowledge of how your particular compiler generates 
 code, and then using a profiler.
Indeed. But you would not have written DMD in C++ if you did not need performance? That was more my point. C++ is not an attractive language if you can afford slower execution.

Languages like D and Go can stay interesting alternatives even when you can accept slower execution but want strict typing and GC. So for people who only want one language, D has an advantage over C++ right there. (I don't really care, since I already deal with many languages every month.)
 Have you used a profiler for your performance critical C++ 
 code? You don't have to answer, but very, very few C++ 
 programmers do. And I can guarantee you that those that don't 
 use a profiler are not writing performant code (though they 
 certainly may believe they are).
I basically don't care about raw throughput, but about latency and meeting real-time deadlines. Instrumentation can be useful… but I consider that "debugging".

My C/C++ bottlenecks are in tight loops with real-time constraints on a single thread. Meaning, if it is too slow I will get dropped audio/video frames. So I write the code as fast as I conveniently can, allowing for the possibility that I might have to go even lower level... (and hoping I don't have to). So I don't write optimal code from the get-go. I try to write maintainable code first, using "CPU-independent SIMD" and a decent cache-friendly layout.

Of course, a compiler is different, since there are no latency issues, only throughput issues. Then you need to look at the whole call tree using perf counters etc. But I think many situations where you need C/C++/D/Rust/Go are dominated by responsiveness/latency.
Jul 14 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/14/2015 4:01 AM, "Ola Fosheim Grøstad 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 But you would not have written DMD in C++ if you did not need
 performance?
I wrote DMD in C++ because I did not have a D compiler. The next version of DMD will be in D.
 Languages like D and Go can stay interesting alternatives even when you can
 accept slower execution,
You don't have to accept slower execution with D. If you know how to write fast code in C++, you don't have to try any harder to do it with D. All the knobs to turn are there.
 I basically don't care about raw throughput, but latency and meeting real time
 deadlines. Instrumentation can be useful… but I consider that "debugging".
I infer from that that you aren't using profilers. I've said before and many times that if you're not using a profiler, you aren't getting top performance. You just aren't. Just like you aren't going to get an efficient airplane shape without wind tunnel tests. Too many variables.

BTW, take a look at "streamlined" car designs from the 1930s, where they did not do any wind tunnel testing. It was all based on intuition. Compare with car designs today, that are all wind tunnel tested. They're very different (and why car designs today have a sameness about them).
Jul 14 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Tue, Jul 14, 2015 at 01:11:51PM -0700, Walter Bright via Digitalmars-d wrote:
 On 7/14/2015 4:01 AM, "Ola Fosheim Grøstad
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
[...]
I basically don't care about raw throughput, but latency and meeting
real time deadlines. Instrumentation can be useful… but I consider
that "debugging".
I infer from that that you aren't using profilers. I've said before and many times that if you're not using a profiler, you aren't getting top performance. You just aren't. Just like you aren't going to get an efficient airplane shape without wind tunnel tests. Too many variables.
[...]

+1. On the same level of evil as premature optimization is uninformed optimization. I have experienced this first-hand, where I was absolutely confident that a particular part of a program was responsible for its slowness, and that I had already optimized it to death and it just could not possibly go any faster. Until I ran a profiler on it... and then discovered that that part of the program wasn't even a blip on the radar (after all, I *did* optimize it to death already). The *real* bottleneck was somewhere else completely unexpected -- a leftover debugging printf that had been overlooked and left in an inner loop. A 1-line change sped up the program (which I thought was already maxed out) by at least 20-30%, probably more.

Moral: Use a profiler. Don't believe in what you think is the bottleneck, because you're almost always wrong. Don't believe in your absolute confidence that the bottleneck must be that ugly bit of code in object.d. Don't believe what your Finger of Blame tells you is the bottleneck (in somebody else's code). Only a profiler will tell you the true bottleneck. The true bottleneck is almost always somewhere completely unexpected, and nowhere near where you thought it would be.

The sooner you learn to use a profiler, the less time you'll waste optimizing things that don't matter while the real bottlenecks continue to plague your performance.

T

-- 
"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell.
"How come he didn't put 'I think' at the end of it?" -- Anonymous
Jul 14 2015
next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Tuesday, 14 July 2015 at 21:18:03 UTC, H. S. Teoh wrote:
 [...]

 The sooner you learn to use a profiler, the less time you'll 
 waste optimizing things that don't matter while the real 
 bottlenecks continue to plague your performance.


 T
Profilers are great <3 I love that dmd has built-in profiler support; it saves me so much time.
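For reference, a minimal invocation (the file name app.d is made up; the -profile switch and the trace.log output are standard dmd behaviour):

    dmd -profile app.d
    ./app

On exit, the instrumented program writes trace.log with call counts and per-function timings.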
Jul 14 2015
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 14 July 2015 at 21:18:03 UTC, H. S. Teoh wrote:
 Moral: Use a profiler. Don't believe in what you think is the 
 bottleneck, because you're almost always wrong. Don't believe 
 in your absolute confidence that the bottleneck must be that 
 ugly bit of code in object.d. Don't believe what your Finger of 
 Blame tells you is the bottleneck (in somebody else's code).
The only "somebody else's" code I call into in real time sections are FFT routines. I don't do system calls, I don't do allocations, I lock memory to RAM, I use my own lock free queues, I lay out data to avoid cache misses (at the expense of RAM). I do as little as possible in time critical sections... and as much as possible outside them, and yes those are "premature optimizations" that are difficult to work in later because then you essentially have to redesign the guts of the program...
 Only a profiler will tell you the true bottleneck. The true 
 bottleneck is almost always somewhere completely unexpected, 
 and nowhere near where you thought it would be.
Argh, no. The time spent is usually _exactly_ where I thought it would be. And the typical profiler won't help me all that much; for that I would need something more advanced (like VTune etc.).
 The sooner you learn to use a profiler, the less time you'll 
 waste optimizing things that don't matter while the real 
 bottlenecks continue to plague your performance.
Whhyyyyy are you guys assuming that people don't know how to use a profiler? It isn't exactly rocket science. I use them when I need them, and have since the early 90s, but I usually don't need them, and if I do, I most likely need a specialized one that says something about hardware utilization. Regular profilers tell me very little. I know where my loops are!

I don't do allocations, I don't do system calls, and I have algorithms that can only be improved by putting more work into them at the expense of higher complexity... which I don't want until I need it. I am happy if I see that I have consistent 20% headroom, and I am not going to be happier by making my program faster...
Jul 14 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/14/2015 5:28 PM, "Ola Fosheim Grøstad 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 Whhyyyyy are you guys assuming that people don't know how to use a profiler?
Experience. Whenever I work with someone who tells me they don't need to profile because they know where their bottlenecks are, and I badger them into using a profiler, they turn out to be wrong. 100% of the time. Myself included.

---------

But that's all beside the point, which is that a programmer who is capable of writing top shelf performant programs in C++ can match or exceed that using D. All of the low level features of C++ are available in D.
Jul 14 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 15 July 2015 at 04:39:13 UTC, Walter Bright wrote:
 Experience. Whenever I work with someone who tells me they 
 don't need to profile because they know where their bottlenecks 
 are, and I badger them into using a profiler, they turn out to 
 be wrong.

 100% of the time.
Well, I don't do batch programming in C++. I only do interactive applications in C++, and most glitches are usually not about tuning performance, but about how things interact. Given that many interactive applications are ~90% idle, you can improve responsiveness to a large extent by doing as little work as possible in the time-critical, hand-optimized region and pushing work over to the idle region (in the background).
 But that's all beside the point, which is that a programmer who 
 is capable of writing top shelf performant programs in C++ can 
 match or exceed that using D. All of the low level features of 
 C++ are available in D.
I use the GCC extensions/compiler hints… I think many include those in their C++ usage, since Clang also supports them. I think D would be better off by incorporating the same feature set (with a nicer syntax).
Jul 15 2015
parent reply "rsw0x" <anonymous anonymous.com> writes:
On Wednesday, 15 July 2015 at 07:22:36 UTC, Ola Fosheim Grøstad 
wrote:
 I use the GCC extensions/compiler hints… I think many include 
 those in their C++ usage since Clang also support them. I think 
 D would be better off by incorporating the same feature set 
 (with a nicer syntax).
The only thing I've found lacking in D compared to C++ is the extremely fragmented SIMD support. Some compiler hints that GCC offers are provided as language features in D (function attributes), and the others are offered as vendor-specific features (LDC and GDC both offer the equivalent of __builtin_expect, for example).

GDC and LDC each offer just as many tunables as their C++ counterparts, IMO. I'm not even sure most people realize that e.g. PGO works with D, or LTO, or the new LLVM/GCC sanitizers, etc.
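For example, a minimal sketch of the __builtin_expect equivalent, assuming LDC's ldc.intrinsics.llvm_expect (the process function is invented for illustration):

    int process(int x)
    {
        version (LDC)
        {
            import ldc.intrinsics : llvm_expect;
            // Hint to the optimizer that the non-negative case is hot.
            immutable hot = llvm_expect(cast(int)(x >= 0), 1) != 0;
        }
        else
        {
            immutable hot = x >= 0; // plain code on other compilers
        }
        if (hot)
            return x * 2; // laid out as the fall-through path
        return -x;
    }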
Jul 15 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 15 July 2015 at 09:15:46 UTC, rsw0x wrote:
 GDC and LDC both respectively offer just as many tunables as 
 their C++ counterparts, IMO. I'm not even sure if most people 
 realize e.g, PGO works with D, or LTO, or the new LLVM/GCC 
 sanitizers, etc.
That's interesting. I really don't have much incentive to look into it, as I don't use D for anything commercial ATM, but if this is true, then D really would benefit from a visible tools-overview tour on the website.
Jul 15 2015
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Wednesday, 15 July 2015 at 00:28:47 UTC, Ola Fosheim Grøstad 
wrote:
 I am happy if I see that I have consistent 20% headroom, and I 
 am not going to be happier by making my program faster...
But then why would optimizations matter? If the program is fast, you won't improve it by improving performance by 2 times: it will remain fast, and if it's slow, it's probably an algorithmic complexity. You said it yourself that to get performance from C you need extensions, it's not provided by C semantics itself.
Jul 15 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 15 July 2015 at 08:32:19 UTC, Kagamin wrote:
 On Wednesday, 15 July 2015 at 00:28:47 UTC, Ola Fosheim Grøstad 
 wrote:
 I am happy if I see that I have consistent 20% headroom, and I 
 am not going to be happier by making my program faster...
But then why would optimizations matter? If the program is fast, you won't improve it by improving performance by 2 times: it will remain fast, and if it's slow, it's probably an algorithmic complexity.
Ability to optimize later matters because I need to keep to the deadline, or else the real-time thread will be killed by the OS. If I miss the deadline occasionally, maybe I only need a 10% improvement. I have many options: I can reduce fidelity in an audio application; I can put more work into merging two loops into one and keeping values in SIMD registers, based on the number of SIMD registers the particular CPU supports, etc.

What I don't want to do is restructure the entire dataset, so I put more work into memory layout than into the initial loop. If the loop completes in time, I'm good; if not, I put more work into it (or reluctantly reduce fidelity).
 You said it yourself that to get performance from C you need 
 extensions, it's not provided by C semantics itself.
No? I said I write "CPU-independent SIMD" in my core performance-oriented loop as a starting point. Whether I need to do that is debatable... but it ensures that data structures are designed to be SIMD-friendly from the start.
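Roughly what I mean, as a minimal sketch using D's core.simd (assuming a target where float4 is available, e.g. x86-64):

    import core.simd;

    // Multiply a buffer by a constant, four float lanes at a time.
    // float4 maps onto whatever 128-bit vector unit the target has,
    // so the source itself stays CPU-independent.
    void scale(float4[] data, float k)
    {
        const float4 kv = k; // the scalar is splatted across all lanes
        foreach (ref v; data)
            v *= kv;
    }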
Jul 15 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2015 2:36 AM, "Ola Fosheim Grøstad 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 You said it yourself that to get performance from C you need extensions, it's
 not provided by C semantics itself.
No? I said I write "CPU-independent SIMD" in my core performance-oriented loop as a starting point. Whether I need to do that is debatable... but it ensures that data structures are designed to be SIMD-friendly from the start.
You also said: "I use the GCC extensions/compiler hints…" Being extensions, they aren't part of C itself.
Jul 15 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 15 July 2015 at 09:52:28 UTC, Walter Bright wrote:
 You also said:

 "I use the GCC extensions/compiler hints…"

 Being extensions, they aren't part of C itself.
Yes, I use them in hot spots, because I think it makes sense to explicitly vectorize a loop if I make it a requirement. The annotations document what I want from the loop. I don't have to; I could recompile until the asm looks right. But that is more tedious, and code changes down the road can affect it.
Jul 15 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
See http://llvm.org/docs/Vectorizers.html#diagnostics

-Rpass=loop-vectorize identifies loops that were successfully 
vectorized.

-Rpass-missed=loop-vectorize identifies loops that failed 
vectorization and indicates if vectorization was specified.

-Rpass-analysis=loop-vectorize identifies the statements that 
caused vectorization to fail.
Jul 15 2015
parent reply "rsw0x" <anonymous anonymous.com> writes:
On Wednesday, 15 July 2015 at 10:28:10 UTC, Ola Fosheim Grøstad 
wrote:
 See http://llvm.org/docs/Vectorizers.html#diagnostics

 -Rpass=loop-vectorize identifies loops that were successfully 
 vectorized.

 -Rpass-missed=loop-vectorize identifies loops that failed 
 vectorization and indicates if vectorization was specified.

 -Rpass-analysis=loop-vectorize identifies the statements that 
 caused vectorization to fail.
-pass-remarks=<pattern> - Enable optimization remarks from passes 
whose name match the given regular expression

-pass-remarks-analysis=<pattern> - Enable optimization analysis 
remarks from passes whose name match the given regular expression

-pass-remarks-missed=<pattern> - Enable missed optimization 
remarks from passes whose name match the given regular expression

from LDC's help output
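So, for example, to list the loops that failed to vectorize in some app.d (file name made up, flag spelling per the help text above):

    ldc2 -O3 -pass-remarks-missed=loop-vectorize app.d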
Jul 15 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 15 July 2015 at 10:55:19 UTC, rsw0x wrote:
   -pass-remarks-missed=<pattern>                  - Enable 
 missed optimization remarks from passes whose name match the 
 given regular expression

 from LDC's help output
Good, that will encourage Walter to extend the D syntax with clean annotations! ;^) Something less ugly-looking than this: http://clang.llvm.org/docs/LanguageExtensions.html
Jul 15 2015
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Wednesday, 15 July 2015 at 09:36:01 UTC, Ola Fosheim Grøstad 
wrote:
 You said it yourself that to get performance from C you need 
 extensions, it's not provided by C semantics itself.
No? I said I write "CPU-independent SIMD" in my core performance-oriented loop as a starting point.
I mean this: On Monday, 13 July 2015 at 07:11:35 UTC, Ola Fosheim Grøstad wrote:
 Here's the deal: there is no such thing as a general-purpose 
 (system) language in the empirical sense. We might have been 
 led to believe that C or C++ were general purpose, but that 
 only happened because there were no visible viable alternatives. 
 C is more and more becoming a kernel/embedded language; C++ is 
 more and more becoming a legacy/niche language. C++ is only a 
 game dev language after you add various extensions (e.g. SIMD). 
 It is only a number-crunching language after you add some other 
 extensions.
Jul 15 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 15 July 2015 at 13:10:02 UTC, Kagamin wrote:
 I mean this:

 On Monday, 13 July 2015 at 07:11:35 UTC, Ola Fosheim Grøstad 
 wrote:
 Here's the deal: there is no such thing as a general-purpose 
 (system) language in the empirical sense. We might have been 
 led to believe that C or C++ were general purpose, but that 
 only happened because there were no visible viable 
 alternatives. C is more and more becoming a kernel/embedded 
 language; C++ is more and more becoming a legacy/niche 
 language. C++ is only a game dev language after you add 
 various extensions (e.g. SIMD). It is only a number-crunching 
 language after you add some other extensions.
Yes, that is what I believe is about to happen, as LLVM has lowered the threshold for new languages... People slowly gravitate towards the most comfortable language for their application domain as they establish themselves.
Jul 15 2015
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 July 2015 at 20:11:53 UTC, Walter Bright wrote:
 On 7/14/2015 4:01 AM, "Ola Fosheim Grøstad 
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 I basically don't care about raw throughput, but latency and 
 meeting real time
 deadlines. Instrumentation can be useful… but I consider that 
 "debugging".
I infer from that that you aren't using profilers. I've said before and many times that if you're not using a profiler, you aren't getting top performance. You just aren't. Just like you aren't going to get an efficient airplane shape without wind tunnel tests. Too many variables.
No… You infer way too much. I use a profiler IF I have a performance issue, but not otherwise. There is no point. I know exactly where time is spent.

I don't care about average performance, I care about _worst-case_ consumption within a real-time thread (think IRQ). And I know exactly what it does. Performance here means evening out the load over many frames so that I don't get spikes on a single frame...

Why would I care about whether I have 40% or 50% idle CPU? I don't. I care about not having spikes, and that basically takes planning, not profiling.
Jul 14 2015
prev sibling parent reply "Laeeth Isharc" <laeethnospam nospamlaeeth.com> writes:
On Tuesday, 14 July 2015 at 09:44:08 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 14 July 2015 at 08:54:42 UTC, Laeeth Isharc wrote:
 Now one may say that this isn't all he was saying, that the GC 
 in D can be improved, that D could be safer, and so on.  But 
 it's hardly fair to suggest the original poster is not right 
 about one of the advantages of D vs C and C++.
The linked webpage explicitly states that C's UB for integer overflow gives 2x performance in some scenarios. Something these forums have tried to play down many times.
Perhaps - I haven't measured these things myself, and don't recall downplaying of this in the short time I have been here, but I have no reason to think that's wrong.

But my point was one of good form. You seemed to slam the original poster, who made no stronger or more concrete assertion than that safety is a benefit of D [we surely know by now that linking to a piece implies endorsement of it only as being interesting, not more]. Perhaps that's a point well-founded in reality, but I didn't see you argue for it.

I have no interest in pursuing controversy further, but it really is a mystery to me as to why you don't do something constructive towards shaping the world as you would like to see it. Code wins debates, and a DIP is less work than implementing the whole thing oneself. It is of course possible there is background to which I am not privy, and that that is why it is mysterious.
 You have compiler options/libraries/tools for C++ that allow 
 you to write code as safe as or safer than D (which lacks 
 sanitizers), at the cost of performance and convenience. Just 
 as D's defaults sacrifice performance for the same, though D 
 tries to give priority to convenience. Which C++ does not have 
 much of (convenience, that is). C++ has an impressive range of 
 options that beats all alternatives in the flexibility 
 department, but also high complexity levels and tedious syntax.
I'll certainly take programmer productivity if I don't need to pay too much for it, and if the aesthetic experience of getting my stuff done can be made something pleasurable. I am not sure, however, if you are making an argument to justify your implicit position, since it seems we both have similar assessments of C++ as it's used, or easy to use. I do not think it is controversial either to observe that C++ has more options!

Life in the commercial world involves pragmatic choices. So the missing piece in your argument would be to demonstrate that programmer productivity ('convenience') and non-tedious syntax are not all that important. Conventional wisdom seems to be that productivity matters more than efficiency (personally I don't mind trading off a little, but there are limits!)
 It is just that if you use C++ over Java, you do it because you 
 need performance, and so you deliberately, through and through, 
 avoid those features/libraries/compiler options/tools and 
 resort to writing code that is less robust.
I don't know - I think there are many factors that go into such a decision. It's a big world, and I wouldn't pretend to know what shapes problems in different domains. What is discussed in the media is not necessarily representative of what actually goes on amongst people quietly getting their work done.

I am told that Java is often faster than C++ and needn't be a memory hog - maybe that is right, but in practice stuff written in C, C++, or D in a sensible, mature, thoughtful fashion but without too much special effort given to optimisation seems to just run fast without a need for tuning or any dark magic. I am probably not tuning it right, but even in 2015 one doesn't seem to be impressed by the performance and memory efficiency of Java apps.
 My C++ libraries do void casts and lots of other low-level 
 stuff, not because the C++ ecosystem does not provide robust 
 alternatives, but in order to work with raw memory. I would 
 never dream of doing that in a language like Go or Java. I do 
 it because I _need_ the performance that C/C++ brings and also 
 need to tailor low-level constructs to the hardware/OS. Which 
 I could not do at all in Go/Java.

 If you do the same in D, you are in the same boat as C++, 
 except C++ is more tedious and C++ provides sanitizers.
Do you need to do that in the whole of what you are writing, and if you do, are you in the same boat?
 But there are restricted and annotated versions of C that 
 offer provable safety at the cost of development time, but 
 with C performance. That is much better than D for _secure_, 
 performant system-level programming, since you also have both 
 termination and run-time guarantees.
But commercial life is about trade-offs and pragmatic choices, and the Pareto principle applies here too. I.e., I should think the subset of reasonably secure, reasonably efficient systems-level programming is rather larger than the narrow domain you speak of above.
 Of course, nobody uses D for critical system-level programming, 
 so that is not really an issue in the foreseeable future.
I don't know what "systems programming" means any more, only what it meant 30 years ago. Wouldn't you say that Weka, Sociomantic, and possibly the guys doing the Norwegian subway announcement system have at least aspects of systems-level programming and can be described as critical? The hedge fund about which Andy Smith spoke - their application may or may not be systems-level, but it strikes me as not so far from that.
 Good enough rebuttal? Slam-dunk C/C++ for the right reasons: 
 complexity and tedium. Complexity and tedium make people more 
 likely to make fatal mistakes... but that's ergonomics, not 
 semantics.
A rebuttal would have been to demonstrate that the OP was making a silly point that cannot be justified. You have yourself suggested that if you want to use C and C++ in a safe way then it comes at quite a price. Commercial adoption is after all driven by these sorts of things.
Jul 14 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 July 2015 at 11:58:08 UTC, Laeeth Isharc wrote:
 hog - maybe that is right, but in practice stuff written in 
 C,C++,D in a sensible, mature, thoughtful fashion but without 
 too much special effort given to optimisation seems to just run 
 fast without a need for tuning or any dark magic.  I am 
 probably not tuning it right, but even in 2015 one doesn't seem 
 to be impressed by the performance and memory efficiency of 
 Java apps.
Well, programs either run fast enough or they don't. If it runs fast enough, or you run out of money, then you're done ;). But "objectively fast" has to be measured against theoretical peak throughput (which you can calculate for a CPU). Most programs are nowhere close to that, since you need to be very careful with the size and layout of the working set in order to preload the cache, stay within cache level 1, store full cache lines, and keep all the "math units" (ALU ports) in the CPU busy. The more abstraction levels you have, the more difficult it is to understand what will happen in the CPU (assuming you fully understand the internals of the _specific_ CPU, which changes from generation to generation).
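A back-of-the-envelope illustration with made-up but typical numbers: one core at 3 GHz with two 8-wide single-precision FMA ports peaks at

    3e9 cycles/s x 2 FMA ports x 8 lanes x 2 flops/FMA = 96 GFLOP/s

per core, while a scalar non-FMA loop on the same core tops out at 3 GFLOP/s: a factor of 32 below peak before a single cache miss is counted.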
 But there are restricted and annotated versions of C that 
 offer provable safety at the cost of development time, but 
 with C performance. That is much better than D for _secure_, 
 performant system-level programming, since you also have both 
 termination and run-time guarantees.
But commercial life is about trade-offs and pragmatic choices, and the Pareto principle applies here too. I.e., I should think the subset of reasonably secure, reasonably efficient systems-level programming is rather larger than the narrow domain you speak of above.
Well, either the program is correct or it isn't. The question is whether you want to detect it (trap on overflow), pretend it didn't happen (D-style wraparound), assume it does not happen (gcc/clang at high optimization levels), or prevent compilation until it is guaranteed not to happen.

In some domains it is best to halt when something wrong happens (before you sell all your stock at the wrong price?), in other domains you should keep going (serving ads), and in yet other domains a rare crash is OK but shoddy performance isn't (a computer game with real-time ray tracing).
 You have yourself suggested that if you want to use C and C++ 
 in a safe way then it comes at quite a price.
No, I suggested that if you pick C++ over Java for rational reasons, you probably would get upset if you were told to use overflow-trapping ints and GC by default in C++. The C++ defaults are based on what most people use C++ for.
Jul 14 2015
parent "Márcio Martins" <marcioapm gmail.com> writes:
On Tuesday, 14 July 2015 at 13:28:44 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 14 July 2015 at 11:58:08 UTC, Laeeth Isharc wrote:
 hog - maybe that is right, but in practice stuff written in 
 C,C++,D in a sensible, mature, thoughtful fashion but without 
 too much special effort given to optimisation seems to just 
 run fast without a need for tuning or any dark magic.  I am 
 probably not tuning it right, but even in 2015 one doesn't 
 seem to be impressed by the performance and memory efficiency 
 of Java apps.
Well, programs either run fast enough or they don't. If it runs fast enough, or you run out of money, then you're done ;). But "objectively fast" has to be measured against theoretical peak throughput (which you can calculate for a CPU). Most programs are nowhere close to that, since you need to be very careful with the size and layout of the working set in order to preload the cache, stay within cache level 1, store full cache lines, and keep all the "math units" (ALU ports) in the CPU busy. The more abstraction levels you have, the more difficult it is to understand what will happen in the CPU (assuming you fully understand the internals of the _specific_ CPU, which changes from generation to generation).
 But there are restricted and annotated versions of C that 
 offer provable safety at the cost of development time, but 
 with C performance. That is much better than D for _secure_, 
 performant system-level programming, since you also have both 
 termination and run-time guarantees.
But commercial life is about trade-offs and pragmatic choices, and the Pareto principle applies here too. I.e., I should think the subset of reasonably secure, reasonably efficient systems-level programming is rather larger than the narrow domain you speak of above.
Well, either the program is correct or it isn't. The question is whether you want to detect it (trap on overflow), pretend it didn't happen (D-style wraparound), assume it does not happen (gcc/clang at high optimization levels), or prevent compilation until it is guaranteed not to happen.

In some domains it is best to halt when something wrong happens (before you sell all your stock at the wrong price?), in other domains you should keep going (serving ads), and in yet other domains a rare crash is OK but shoddy performance isn't (a computer game with real-time ray tracing).
 You have yourself suggested that if you want to use C and C++ 
 in a safe way then it comes at quite a price.
No, I suggested that if you pick C++ over Java for rational reasons, you probably would get upset if you were told to use overflow-trapping ints and GC by default in C++. The C++ defaults are based on what most people use C++ for.
I also think UB is acceptable as long as the triggering conditions are clear and well understood, and it has a raison d'être. Often that is allowing more aggressive optimizations.

I don't think you can always trade these sorts of abstractions or compiler aid for performance. Sometimes you need to ensure security, performance, and readability/maintainability all by yourself, the hard way, by trading in developer time. Many people are willing to do so, and many companies depend on this, because that is the only way to do it in the present, with the current hardware and/or budget constraints. Sometimes even with decades-old hardware constraints - anyone familiar with the demoscene? :)

My point is that there are also many developers out there for whom a language with no undefined behavior and theoretically sound semantics is not appealing at all if those features put any sort of barrier on getting the most out of the hardware. I do have the feeling, perhaps wrongly, that there aren't many on these forums, though, given the direction of most discussions.
Jul 14 2015
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Tuesday, 14 July 2015 at 07:43:27 UTC, Ola Fosheim Grøstad 
wrote:
 Uhm, no. The linked page concludes that security-oriented 
 software should be written in languages that trap on integer 
 overflow by default.

 D is not better off for having modular arithmetic: it means 
 you cannot even catch overflow-related issues by semantic 
 analysis, since overflow does not exist as an error condition. 
 There are C-like languages that ensure overflow is impossible 
 at compile time (by putting limits on loop iterations and 
 doing heavy-duty proofs).
Correct software can't be written in C because of UB; that's why safer languages are praised for eliminating UB.
Jul 14 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 July 2015 at 09:29:03 UTC, Kagamin wrote:
 On Tuesday, 14 July 2015 at 07:43:27 UTC, Ola Fosheim Grøstad 
 wrote:
 Uhm, no. The linked page concludes that security-oriented 
 software should be written in languages that trap on integer 
 overflow by default.

 D is not better off for having modular arithmetic: it means 
 you cannot even catch overflow-related issues by semantic 
 analysis, since overflow does not exist as an error condition. 
 There are C-like languages that ensure overflow is impossible 
 at compile time (by putting limits on loop iterations and 
 doing heavy-duty proofs).
Correct software can't be written in C because of UB; that's why safer languages are praised for eliminating UB.
This is 100% wrong. UB only happens in the code gen for programs that are illegal (by definition, incorrect source code). If your program is correct, then the code cannot trigger UB.
Jul 14 2015
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
Please also note that C is a low level language geared towards 
supporting all kinds of reasonable ALUs. A language like D or 
Rust cannot efficiently compile to a CPU that is hardwired to 
trap on overflow. C can. A language that requires detection of 
overflow cannot efficiently compile to an ALU that cannot detect 
overflow directly (like some SIMD instructions). C can.

In C, undefined behaviour just means that overflow is defined as 
an illegal situation and is underspecified in order to allow 
efficient code gen for a wide variety of hardware (like trapping 
or spilling over into a different "simd" register). It does not 
mean that the compiler MUST do something weird; it means that the 
compiler isn't absolutely required to provide sensible output for 
incorrect programs.

You are free to use a C/C++ compiler that provides a switch where 
overflow leads either to an arbitrary value (Rust semantics) or 
wraps around (D code gen).

At the cost of performance or portability.
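Concretely (these are standard GCC/Clang switches, not something 
from the linked page): -fwrapv defines signed overflow as 
two's-complement wraparound, i.e. the D code gen; -ftrapv and 
-fsanitize=signed-integer-overflow trap or report on overflow 
instead. All of them give up some optimizations relative to the 
UB default.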

Making unpleasant choices for "undefined behaviour" is not a 
language feature. It is a compiler vendor customer-relations 
strategy or an RTFM issue...
Jul 14 2015
parent reply "Kagamin" <spam here.lot> writes:
On Tuesday, 14 July 2015 at 10:22:51 UTC, Ola Fosheim Grøstad 
wrote:
 You are free to use a C/C++ compiler that provides a switch 
 where overflow leads either to an abitrary value (Rust 
 semantics) or the wrap around (D code gen).
That's the whole point: use a language without UB and the situation will be better.
Jul 14 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 July 2015 at 12:18:23 UTC, Kagamin wrote:
 On Tuesday, 14 July 2015 at 10:22:51 UTC, Ola Fosheim Grøstad 
 wrote:
 You are free to use a C/C++ compiler that provides a switch 
 where overflow leads either to an arbitrary value (Rust 
 semantics) or wraps around (D code gen).
That's the whole point: use a language without UB and the situation will be better.
My point is that C's UB for overflow on signed int does not preclude having the same code gen as D has. So it is essentially not a language problem per se.

The "problem" is cultural. C programmers have this idea that they should compile everything with the compiler/compiler settings that give the absolutely highest performance, no matter the quality of the code. The same thing would happen if LDC added a switch named "-FAST_AND_RISKY" ;-).
Jul 14 2015
parent reply "Kagamin" <spam here.lot> writes:
On Tuesday, 14 July 2015 at 12:59:34 UTC, Ola Fosheim Grøstad 
wrote:
 My point is that C's UB for overflow on signed int does not 
 preclude having the same code gen as D has. So it is 
 essentially not a language problem per se.
UB implies anything. Yes, it's not a problem: safer languages based on C are possible, and have been built.
 The "problem" is cultural. C programmers have this idea that 
 they should compile everything with the compiler/compiler 
 setting that gives the absolutely highest performance no matter 
 what the quality the code.
It's commonly believed that there's no problem with optimized code and that optimizations don't change behavior.
 The same thing would happen if LDC added a switch named 
 "-FAST_AND_RISKY" ;-).
I proposed -Ounsafe; it can actually help with correctness, because it clearly states the trade-off and keeps it opt-in instead of being the default, as it is in C compilers, and it also fits well into D's approach to unsafety.
Jul 14 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 14 July 2015 at 15:09:55 UTC, Kagamin wrote:
 UB implies anything. Yes, it's not a problem: safer languages 
 based on C are possible, and have been built.
I'd rather say it implies what you set your compiler switches to, and if you use separate compilation you can have different settings for different files (e.g. only have aggressive optimization for the files you have vetted thoroughly).
 I proposed -Ounsafe; it can actually help with correctness, 
 because it clearly states the trade-off and keeps it opt-in 
 instead of being the default, as it is in C compilers, and it 
 also fits well into D's approach to unsafety.
Yes, perhaps you could set it per file. Perhaps even some annotation in the source that says that the file is free of overflow issues? Why not?
Jul 14 2015
prev sibling parent "jmh530" <john.michael.hall gmail.com> writes:
On Tuesday, 14 July 2015 at 15:09:55 UTC, Kagamin wrote:
 On Tuesday, 14 July 2015 at 12:59:34 UTC, Ola Fosheim Grøstad 
 wrote:
 The "problem" is cultural. C programmers have this idea that 
 they should compile everything with the compiler/compiler 
 setting that gives the absolutely highest performance no 
 matter what the quality the code.
It's commonly believed that there's no problem with optimized code and that optimizations don't change behavior.
I thought this was an interesting illustration of your point. https://www.reddit.com/r/programming/comments/3d7xxn/crazy_performance_deviations_after_replacing/
Jul 14 2015
prev sibling parent "Kagamin" <spam here.lot> writes:
On Tuesday, 14 July 2015 at 09:47:00 UTC, Ola Fosheim Grøstad 
wrote:
 This is 100% wrong. UB only happens in the code gen for 
 programs that are illegal (by definition, incorrect source 
 code).

 If your program is correct, then the code cannot trigger UB.
How does successful compilation of incorrect source code help with writing correct software?
Jul 14 2015