
digitalmars.D - Bug in ^^

Brett <Brett gmail.com> writes:
10^^16 = 1874919424	???

10L^^16 is valid, but

enum x = 10^^16 gives wrong value.

I didn't catch this ;/
Sep 16 2019
jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
 10^^16 = 1874919424	???

 10L^^16 is valid, but

 enum x = 10^^16 gives wrong value.

 I didn't catch this ;/
10 and 16 are ints. The largest int is 2147483647, which is several orders of magnitude below 1e16. So you can think of it as wrapping around multiple times, and 1874919424 is the remainder:

1E16 - (2147483647 + 1) * 4656612 = 1874919424

Probably more appropriate for the Learn forum.
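A quick sketch of that wraparound (my own example, not from the original post):

import std.stdio;

void main()
{
    writeln(10 ^^ 16);             // 1874919424: int math wraps modulo 2^32
    writeln(10L ^^ 16 % 2L ^^ 32); // same value, computed explicitly in long
}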
Sep 16 2019
Brett <Brett gmail.com> writes:
On Tuesday, 17 September 2019 at 02:38:03 UTC, jmh530 wrote:
 [snip]
10 and 16 are ints. The largest int is 2147483647, which is several orders of magnitude below 1e16. So you can think of it as wrapping around multiple times, and 1874919424 is the remainder:

1E16 - (2147483647 + 1) * 4656612 = 1874919424

Probably more appropriate for the Learn forum.
Um, duh, but the problem is: why are they ints? It is a compile-time constant, it doesn't matter the size, there are no limitations on type size at compile time (in theory). For it to wrap around silently is error prone and can introduce bugs into programs. The compiler should always use the largest value possible and, if appropriate, cast down; an enum is not appropriate to cast down to int.

The issue is not how 32-bit math works BUT that it is using 32-bit math by default (and my app is 64-bit). Even if I use ulong as the type it still computes it in 32-bit. It should not do that, that is the point. It's wrong and bad behavior. Else, what is the difference between first calculating in L and then casting down and wrapping silently? It's the same problem, yet if I do that in a program it will complain about precision, yet it does not do that here.

Again, just so it is clear, it has nothing to do with 32-bit arithmetic but that 32-bit arithmetic is used instead of 64-bit. I could potentially go with it in a 32-bit program but not in 64-bit, but even then it would be difficult because it is a constant... it's shorthand for writing out the long version, it shouldn't silently wrap. If I write out the long version it craps out, so why not the computation itself?

Of course I imagine you still don't get it or believe me, so I can prove it:

enum x = 100000000000000000;
enum y = 10^^17;

void main()
{
    ulong a = x;
    ulong b = y;
}

What do you think a and b are, do you think they are the same or different? Do you think they *should* be the same or different?
Sep 17 2019
jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:
 [snip]


 Um, duh, but the problem why are they ints?
 [snip]
They are ints because that is how enums work in the D language. See 17.3 [1].

[1] https://dlang.org/spec/enum.html#named_enums
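A minimal sketch of that rule (my example): the enum simply takes the type of its initializing expression.

enum a = 10 ^^ 16;   // int expression -> int enum (stores the wrapped value)
enum b = 10L ^^ 16;  // long expression -> long enum

static assert(is(typeof(a) == int));
static assert(is(typeof(b) == long));
static assert(a == 1874919424);
static assert(b == 10_000_000_000_000_000);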
Sep 17 2019
Brett <Brett gmail.com> writes:
On Tuesday, 17 September 2019 at 13:59:54 UTC, jmh530 wrote:
 On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:
 [snip]


 Um, duh, but the problem why are they ints?
 [snip]
They are ints because that is how enums work in the D language. See 17.3 [1].

[1] https://dlang.org/spec/enum.html#named_enums
Then why does
 enum x = 100000000000000000;
 enum y = 10^^17;
x store 100000000000000000? If it were an int then it would wrap, but it doesn't. Did you try the code?

import std.stdio;

enum x = 100000000000000000;
enum y = 10^^17;

void main()
{
    ulong xx = x;
    ulong yy = y;
    writeln(x);
    writeln(y);
    writeln(xx);
    writeln(yy);
}

100000000000000000
1569325056
100000000000000000
1569325056

You seem to either make stuff up, misunderstand the compiler, or trust the docs too much. I have code that proves I'm right, why is it so hard for you to accept it? You can make your claims, but they are meaningless if they are not true.
Sep 17 2019
Dominikus Dittes Scherkl <dominikus.scherkl continental-corporation.com> writes:
On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:

 enum x = 100000000000000000;
this is of type long, because the literal is too large to be an int
 enum y = 10^^17;
this is of type int (the default). The exponentiation operator (like any other operator) produces a result of the same type as its input, so still an int. If you want long, you should write

enum y = 10L^^17

You should have a look at the language specification. D inherits C's bad behaviour of defaulting to int (not even uint), and even large literals are by default signed (sigh!)

Anyway, nothing can always prevent you from overflow. What should be the result of

enum z = 10L ^^ 122

automatically import bigInt from a library? And even then, how about 1000_000_000 ^^ 1000_000_000 --> try it and throw some outOfMemory error?
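As a sketch of those defaults (my example; pragma(msg, ...) prints the inferred type at compile time):

pragma(msg, typeof(10));                 // int: small literal defaults to int
pragma(msg, typeof(100000000000000000)); // long: literal too large for int
pragma(msg, typeof(10 ^^ 17));           // int: int ^^ int stays int, wraps
pragma(msg, typeof(10L ^^ 17));          // long: the long operand promotes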
Sep 17 2019
John Colvin <john.loughran.colvin gmail.com> writes:
On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:
 [snip]
Integer literals without any suffixes (e.g. L) are typed int or long based on their size. Any arithmetic done after that is done according to the same rules as at runtime.

Roughly speaking: The process is not: we have an enum, let's work out any and all calculations leading to it with arbitrary-size integers and then infer the type of the enum as the smallest that fits it. The process is: we have an enum, let's calculate its value using the same logic as at runtime, and then the type of the enum is the type of the answer.
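A sketch of the two models (my labels; only the second matches what the compiler does):

enum computedWide = 10L ^^ 17; // forcing wide math by hand: no wrapping
enum asAtRuntime  = 10 ^^ 17;  // int math throughout, as at runtime: wraps

static assert(computedWide == 100_000_000_000_000_000);
static assert(asAtRuntime == 1569325056);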
Sep 17 2019
Adam D. Ruppe <destructionator gmail.com> writes:
On Tuesday, 17 September 2019 at 14:21:33 UTC, John Colvin wrote:
 The process is:
It might be a good idea to change that process. It hasn't worked as well in practice as we hoped earlier - leading to all kinds of weird stuff.
Sep 17 2019
John Colvin <john.loughran.colvin gmail.com> writes:
On Tuesday, 17 September 2019 at 14:29:32 UTC, Adam D. Ruppe 
wrote:
 On Tuesday, 17 September 2019 at 14:21:33 UTC, John Colvin 
 wrote:
 The process is:
It might be a good idea to change that process. It hasn't worked as well in practice as we hoped earlier - leading to all kinds of weird stuff.
It would lead to a strange difference between CTFE and runtime, or a strange difference between the evaluation of some constants and the rest of CTFE.
Sep 17 2019
Brett <Brett gmail.com> writes:
On Tuesday, 17 September 2019 at 14:21:33 UTC, John Colvin wrote:
 [snip]
Integer literals without any suffixes (e.g. L) are typed int or long based on their size. Any arithmetic done after that is done according to the same rules as at runtime. Roughly speaking: The process is not: we have an enum, let's work out any and all calculations leading to it with arbitrary-size integers and then infer the type of the enum as the smallest that fits it. The process is: we have an enum, let's calculate its value using the same logic as at runtime, and then the type of the enum is the type of the answer.
It doesn't matter, I've already proved that the same mathematical equivalence gives two different results... your claim that it is an int is unfounded... did you look at the code I gave? You can make claims about whatever you want, but facts are facts.
 enum x = 100000000000000000;
 enum y = 10^^17;
Those two should have x == y, no ifs, ands, or buts to justify the difference. No matter how you want to justify the compiler's behavior, it is wrong. It is ok to accept it; it actually makes the world a better place to accept when something is wrong, since that is the only way things can get fixed.
Sep 17 2019
bachmeier <no spam.net> writes:
On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:

 it's shorthand for writing out the long version, it shouldn't 
 silently wrap, If I write out the long version it craps out so 
 why not the computation itself?
I think you should be using https://dlang.org/phobos/std_experimental_checkedint.html rather than getting into the weeds of language design choices made long ago.

My thought is that it's relatively easy to work with long if that's what I want:

10L^^16
long(10)^^16

I have to be explicit, but it's not Java levels of verbosity. Using long doesn't solve overflow problems. A different default would be better in your example, but it's not clear to me why it would always be better - the proper default would be checkedint.
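A minimal sketch of that suggestion (my example, going by the std.experimental.checkedint docs; the Warn hook is meant to report the overflow on stderr instead of failing silently):

import std.experimental.checkedint : Checked, Warn;
import std.stdio;

void main()
{
    Checked!(int, Warn) n = 10;
    auto m = n * 1_000_000_000; // overflows int: the hook reports it
    writeln(m.get);             // still the wrapped value, but it was reported
}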
Sep 17 2019
Brett <Brett gmail.com> writes:
On Tuesday, 17 September 2019 at 16:16:44 UTC, bachmeier wrote:
 [snip]
I think you should be using https://dlang.org/phobos/std_experimental_checkedint.html rather than getting into the weeds of language design choices made long ago. My thought is that it's relatively easy to work with long if that's what I want: 10L^^16 or long(10)^^16. I have to be explicit, but it's not Java levels of verbosity. Using long doesn't solve overflow problems. A different default would be better in your example, but it's not clear to me why it would always be better - the proper default would be checkedint.
Wrong:

import std.stdio;

enum x = 100000000000000000;
enum y = 10^^17;

void main()
{
    ulong xx = x;
    ulong yy = y;
    writeln(x);
    writeln(y);
    writeln(xx);
    writeln(yy);
}

100000000000000000
1569325056
100000000000000000
1569325056

I gave code to prove that I was right, why is it so difficult for people to accept? All I see is people trying to justify the compiler's current behavior rather than think for themselves and realize something is wrong!

This is not a difficult issue.
Sep 17 2019
bachmeier <no spam.net> writes:
On Tuesday, 17 September 2019 at 16:50:29 UTC, Brett wrote:

 [snip]
That output looks correct to me.
Sep 17 2019
Brett <Brett gmail.com> writes:
On Tuesday, 17 September 2019 at 17:05:33 UTC, bachmeier wrote:
 [snip]
That output looks correct to me.
enum x = 100000000000000000;
enum y = 10^^17;

Why do you think 10^^17 and 100000000000000000 should be different?

First I'm told that enums are ints and so 10^^17 should wrap... yet 100000000000000000 is not wrapped (yet you say it looks correct)... then I'm told I have to use L's to get it to not wrap, yet 100000000000000000 does not have an L... and it doesn't wrap (so the L is implicit). So which is it?

Do you not understand that something is going on that makes no sense, and this creates problems? It doesn't make sense... even if you think it does. Either the compiler needs to warn or there has to be a consistent behavior, and there clearly is not consistent behavior... Just because it makes sense to you only means that you are choosing the behavior the compiler uses, but the compiler can be wrong, and hence that means you would be wrong too.
Sep 17 2019
Johan Engelen <j j.nl> writes:
Calm down Brett :-)
People are only trying to help here, and as far as I can tell 
they fully understood what you wrote.

On Tuesday, 17 September 2019 at 17:23:06 UTC, Brett wrote:
 
 enum x = 100000000000000000;
 enum y = 10^^17;

 Why do you think 10^^17 and 100000000000000000

 should be different?

 First I'm told that enums are ints
That is not what was meant. Enums are not always ints. The type of the initializing expression determines the type of the enum. Numbers are by default `int`, unless it must be another type.

10 ---> is an `int`
17 ---> is an `int`
100000000000000000 --> cannot be an `int`, so is a larger type
 and so 10^^17 should wrap...
10^^17 is equal to "number ^^ number". What's the first number? 10. So that's an `int`. What's the second number? 17, also an `int`. `int ^^ int` results in another `int`. Thus the type of the expression "10^^17" is `int` --> the enum that is initialized by 10^^17 will also be an `int`.

The wrapping that you see is not the enum wrapping. It is the wrapping of the calculation `int ^^ int`. That wrapped calculation result is then used as the initializer for the enum. Again, the fact that 10^^17 is wrapping has nothing to do with enum.

-Johan
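The same chain of reasoning as code (my example):

static assert(is(typeof(10) == int));
static assert(is(typeof(17) == int));
static assert(is(typeof(10 ^^ 17) == int)); // int ^^ int -> int
static assert(10 ^^ 17 == 1569325056);      // already wrapped here
enum y = 10 ^^ 17;                          // the enum just stores that int
static assert(y == 1569325056);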
Sep 17 2019
Ali Çehreli <acehreli yahoo.com> writes:
On 09/17/2019 10:23 AM, Brett wrote:

 First I'm told that enums are ints
Enums can be ints, in which case the following rather lengthy rules apply (listed after the grammar spec):

https://dlang.org/spec/lex.html#integerliteral

Ali
Sep 17 2019
Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
 10^^16 = 1874919424	???

 10L^^16 is valid, but

 enum x = 10^^16 gives wrong value.

 I didn't catch this ;/
The same can be observed with multiplication:

// This compiles, but the result is "non-sensical" due to overflow.
enum n = 1_000_000 * 1_000_000;

The same can happen with C:

static const int n = 1000000 * 1000000;

However, C compilers warn about this:

gcc:

test.c:1:30: warning: integer overflow in expression of type ‘int’ results in ‘-727379968’ [-Woverflow]
    1 | static const int n = 1000000 * 1000000;
      |                              ^

clang:

test.c:1:30: warning: overflow in expression; result is -727379968 with type 'int' [-Winteger-overflow]
static const int n = 1000000 * 1000000;
                             ^
1 warning generated.

I think D should warn about any overflows which happen at compile-time too.
Sep 17 2019
Brett <Brett gmail.com> writes:
On Tuesday, 17 September 2019 at 16:49:46 UTC, Vladimir Panteleev 
wrote:
 [snip]

 I think D should warn about any overflows which happen at compile-time too.
I have no problem with warnings, at least it would then be detected rather than a silent fall-through that can make things unsafe.

What's more concerning to me is how many people defend the compiler's behavior. Why

enum x = 100000000000000000;
enum y = 10^^17;

should produce two different results is moronic to me. I realize that 10^^17 is a computation, but at compile time the compiler should use the maximum precision to compute values, since it actually can do this without issue (up to a point). If enums actually are supposed to be ints then it should give an error about overflow. If enums can scale depending on what the compiler sees fit, then it should use L here, and when the values are used in the program it should then error because they will be too large when stuck into ints.

Regardless of the behavior, it shouldn't produce silent undetectable errors, which is what I have seen at least 4 people advocate in here right off the bat, rather than have a sane solution that prevents those errors. That is very concerning... why would anyone think allowing undetectable errors to be reasonable behavior?

I actually don't care how it works, as long as I know how it works. If it forces me to add an L, so be it, not a big deal. If it causes crashes in my application and I have to spend hours trying to figure out why, because I made a logical assumption and the compiler made a different logical assumption, but both are equally viable, then that is a problem and it should be understood as a problem: not my problem, but the compiler's problem. Compilers are supposed to make our lives easier, not harder.
Sep 17 2019
Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Tuesday, 17 September 2019 at 17:34:18 UTC, Brett wrote:
 Why

 enum x = 100000000000000000;
 enum y = 10^^17;

 should produce two different results
I think the biggest argument would be that computing an expression at runtime and compile-time should produce the same result, because CTFE is expected to only simulate the effect of running something at run-time.
 Regardless of the behavior, it shouldn't produce silent 
 undetectable errors,
I agree, a warning or error for overflows at compile-time would be appropriate. We already have a precedent for a similar diagnostic - out-of-bounds array access where the index is known at compile-time. I suggest filing an enhancement request, if one isn't filed for this already.
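For reference, a sketch of that precedent (my example):

void main()
{
    int[4] a;
    //a[5] = 0; // rejected at compile time: constant index 5 is out of bounds
}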
Sep 17 2019
Johan Engelen <j j.nl> writes:
On Tuesday, 17 September 2019 at 17:41:23 UTC, Vladimir Panteleev 
wrote:
 I agree, a warning or error for overflows at compile-time would 
 be appropriate.
Do you have a suggestion for the syntax to write overflowing CTFE code without triggering the warning? What I mean is: how can the programmer tell the compiler that overflow is acceptable in a particular case. I briefly looked for it, but couldn't find how to do that with GCC/clang (other than #pragma diagnostic push/pop). -Johan
Sep 17 2019
Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Tuesday, 17 September 2019 at 17:51:59 UTC, Johan Engelen 
wrote:
 On Tuesday, 17 September 2019 at 17:41:23 UTC, Vladimir 
 Panteleev wrote:
 I agree, a warning or error for overflows at compile-time 
 would be appropriate.
Do you have a suggestion for the syntax to write overflowing CTFE code without triggering the warning?
When a bigger type exists which fits the non-overflown result, the obvious solution is to make one of the operands of that type, then explicitly cast the result back to the smaller one.

When a bigger type does not exist, explicit overflow could be indicated by using binary-and with the type's full bit mask, i.e. `(1_000_000 * 1_000_000) & 0xFFFFFFFF`. This fits with D's existing range propagation logic, i.e. the following is not an error:

uint i = void;
ubyte b = i & 0xFF;
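A sketch of both squelches (my values):

// A bigger type exists: compute in long, cast back explicitly.
enum m = cast(int)(1_000_000L * 1_000_000L);      // -727379968, on purpose

// Mask form: the binary-and makes the intended wrap explicit.
enum n = (1_000_000L * 1_000_000L) & 0xFFFF_FFFF; // low 32 bits: 3567587328

static assert(m == -727379968);
static assert(n == 3567587328);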
Sep 17 2019
lithium iodate <whatdoiknow doesntexist.net> writes:
On Tuesday, 17 September 2019 at 17:51:59 UTC, Johan Engelen 
wrote:
 I briefly looked for it, but couldn't find how to do that with 
 GCC/clang (other than #pragma diagnostic push/pop).
It does not appear to me that either GCC* or Clang warns about wrapping/overflow unless you're directly invoking undefined behavior. In that case, of course, the proper solution is to fix the broken code. *compiling C code
Sep 17 2019
Timon Gehr <timon.gehr gmx.ch> writes:
On 17.09.19 19:34, Brett wrote:
 
 What's more concerning to me is how many people defend the compilers 
 behavior.
 ...
What you apparently fail to understand is that there are trade-offs to be considered, and your use case is not the only one supported by the language. Clearly, any wraparound behavior in an "integer" type is stupid, but the hardware has a fixed word size, programmers are lazy, compilers are imperfect and efficiency of the generated code matters.
 Why
 
 enum x = 100000000000000000;
 enum y = 10^^17;
 
 should produce two different results is moronic to me. I realize that 
 10^^17 is a computation but at the compile time the compiler should use 
 the maximum precision to compute values since it actually can do this 
 without issue(up to the a point).
The reason why you get different results is that someone argued, not unlike you, that the compiler should be "smart" and implicitly promote the 100000000000000000 literal to type 'long'. This is why you now observe this apparently inconsistent behavior. If we really care about the inconsistency you are complaining about, the right fix is to remove 'long' literals without suffix L. Trying to address it by introducing additional inconsistencies in how code is interpreted in CTFE and at runtime is plain stupid. (D currently does things like this with floating point types, and it is annoying.)
Sep 17 2019
Brett <Brett gmail.com> writes:
On Tuesday, 17 September 2019 at 19:19:46 UTC, Timon Gehr wrote:
 [snip]
What you apparently fail to understand is that there are trade-offs to be considered, and your use case is not the only one supported by the language. Clearly, any wraparound behavior in an "integer" type is stupid, but the hardware has a fixed word size, programmers are lazy, compilers are imperfect and efficiency of the generated code matters.
And this is why compilers should do everything they can to reduce problems... it doesn't just affect one person but everyone that uses the compiler. If the onus is on the programmer then it means that a very large percentage of people (thousands, tens of thousands, millions) are going to have to deal with it, and as you've already said, they are lazy, so they won't.
 [snip]
The reason why you get different results is that someone argued, not unlike you, that the compiler should be "smart" and implicitly promote the 100000000000000000 literal to type 'long'. This is why you now observe this apparently inconsistent behavior. If we really care about the inconsistency you are complaining about, the right fix is to remove 'long' literals without suffix L. Trying to address it by introducing additional inconsistencies in how code is interpreted in CTFE and at runtime is plain stupid. (D currently does things like this with floating point types, and it is annoying.)
No, that is not the right behavior, because you've already said that wrapping is *defined* behavior... and it is not! What if we multiply two numbers together that may be generated at CTFE using mixins, or by using a complex constant expression that may be near the upper bound, and it happens to overflow? Then what?

You are saying it is ok for undefined behavior to exist in a program, and that is never true! Undefined behavior accounts for 100% of all program bugs. Even a perfectly written program is undefined behavior if it doesn't do what the user/programmer wants.

The compiler can warn us at compile time for ambiguous cases, that is the best solution. To say it is not, because wrapping is "defined behavior", is the thing that creates inconsistencies.
Sep 17 2019
Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Tuesday, 17 September 2019 at 19:31:49 UTC, Brett wrote:
 [snip]
Brett, read the fine manual. The promotion rules [1] and the usual arithmetic conversions [2] are explained in detail.

The reason why the grammar is as it is has to do with the fact that the language was not defined in a void. One of the goals of the development of D is to be a successor of C. To reach that goal, the language has to balance between fixing what is wrong with its predecessor and maintaining its legacy (i.e. not estranging developers coming from it by modifying rules willy-nilly).

The thing with integer promotion and arithmetic conversions is that there is NO absolutely right or wrong approach to it. The C developers chose to privilege the approach that tended to maintain the sign when mixing signed and unsigned types; other languages took other choices. One of the stated goals of the D language, which Walter has stated several times, is that D expressions that are also valid in C behave like C, to minimize the surprise for people coming from C (or C++).

[1]: https://dlang.org/spec/type.html#integer-promotions
[2]: https://dlang.org/spec/type.html#usual-arithmetic-conversions
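A sketch of those rules in practice (my example):

void main()
{
    byte a = 100, b = 100;
    auto c = a + b;                      // byte + byte promotes to int
    static assert(is(typeof(c) == int));
    assert(c == 200);                    // no wrap at byte width

    int  i = -1;
    uint u = 1;
    static assert(is(typeof(i + u) == uint)); // usual arithmetic conversions
}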
Sep 17 2019
John Colvin <john.loughran.colvin gmail.com> writes:
On Tuesday, 17 September 2019 at 19:31:49 UTC, Brett wrote:
 [snip]
And this is why compilers should do everything they can to reduce problems... it doesn't just affect one person but everyone that uses the compiler. If the onus is on the programmer then it means that a very large percentage of people (thousands, tens of thousands, millions) are going to have to deal with it, and as you've already said, they are lazy, so they won't.
Carelessly doing everything you can to reduce problems is a good way to create lots of problems. For example, there can be a trade-off between consistently (and therefore predictably) wrong and inconsistently right.
 No, that is not the right behavior, because you've already said
 that wrapping is *defined* behavior... and it is not! What if we
 multiply two numbers together that may be generated at CTFE using
 mixins, or by using a complex constant expression that may be near
 the upper bound, and it happens to overflow? Then what?

 You are saying it is ok for undefined behavior to exist in a
 program, and that is never true! Undefined behavior accounts for
 100% of all program bugs. Even a perfectly written program is
 undefined behavior if it doesn't do what the user/programmer wants.

 The compiler can warn us at compile time for ambiguous cases,
 that is the best solution. To say it is not, because wrapping is
 "defined behavior", is the thing that creates inconsistencies.
Just to make sure you don't misunderstand: For better or worse, integer overflow is defined behaviour in D; the reality of the overwhelming majority of CPU hardware is encoded in the language. That is using the meaning of the term "defined" as it is used in e.g. the C standard.
Sep 18 2019
Brett <Brett gmail.com> writes:
On Wednesday, 18 September 2019 at 09:52:34 UTC, John Colvin 
wrote:
 [snip]
Carelessly doing everything you can to reduce problems is a good way to create lots of problems. For example, there can be a trade-off between consistently (and therefore predictably) wrong and inconsistently right.
 [snip]
Just to make sure you don't misunderstand: For better or worse, integer overflow is defined behaviour in D; the reality of the overwhelming majority of CPU hardware is encoded in the language. That is using the meaning of the term "defined" as it is used in e.g. the C standard.
I do not care if it is defined, it is wrong. Things that are wrong should be righted... Few seem to get that here. You can claim that it is right because that is how it is done, but you fail to realize that the logic you used to come to that conclusion is wrong. Two wrongs do not make a right, no matter how hard you try.

See, at worst we get a warning. You want that warning to be a surprise, I want it to be explicit. You want obscure errors to exist, I do not. You are wrong, I'm right. You can huff and puff and try to blow the house down, but you still will be wrong.
Sep 18 2019
Timon Gehr <timon.gehr gmx.ch> writes:
On 17.09.19 18:49, Vladimir Panteleev wrote:
 [snip]

 I think D should warn about any overflows which happen at compile-time too.
It's not the same. C compilers warn about overflows that are UB. They don't complain about overflows that have defined behavior:

static const int n = 1000000u * 1000000u; // no warning

In D, all overflows in operations on basic integer types have defined behavior, not just those operating on unsigned integers.
Sep 17 2019
Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Tuesday, 17 September 2019 at 19:22:44 UTC, Timon Gehr wrote:
 It's not the same. C compilers warn about overflows that are 
 UB. They don't complain about overflows that have defined 
 behavior:
I'm not so sure that's the actual distinction. The error messages do not mention undefined behavior.

The GCC source code for this does not mention undefined behavior:
https://github.com/gcc-mirror/gcc/blob/5fe20025f581fb0c215611434d76696161d4cbd3/gcc/c-family/c-warn.c#L70

The clang source code does not mention anything about undefined behavior:
https://github.com/CyberShadow/llvm-project/blob/6e4932ebe9448b9bab922b225a8012669972ff0c/clang/lib/AST/ExprConstant.cpp#L2310

It seems to me that the more likely explanation is that making the operands unsigned is a method of squelching the warning.
 In D, all overflows in operations on basic integer types have 
 defined behavior, not just those operating on unsigned integers.
Regardless of what other languages do, or the pedantic details involved, it seems to me that warning on detectable overflows would simply be more useful for D users (provided there is a way to squelch the warning). Therefore, D should do it.
Sep 17 2019
lithium iodate <whatdoiknow doesntexist.net> writes:
On Tuesday, 17 September 2019 at 19:36:14 UTC, Vladimir Panteleev 
wrote:
 I'm not so sure that's the actual distinction.

 The error messages do not mention undefined behavior.
Formally, operations with unsigned integers can never overflow in C and you can therefore not warn about overflow. Since the warning can then only occur for signed integers (as observed), any such warning directly implies undefined behavior as per the C standard.
Sep 17 2019
Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Tuesday, 17 September 2019 at 20:13:21 UTC, lithium iodate 
wrote:
 Formally, operations with unsigned integers can never overflow 
 in C and you can therefore not warn about overflow. Since the 
 warning can then only occur for signed integers (as observed), 
 any such warning directly implies undefined behavior as per the 
 C standard.
No, you're implying causation from a correlation. In any case, compiler warnings are not governed by what's defined behavior or not. Compilers can and do warn about many code fragments which are fully defined, and the world is a better place for that. A warning here would be useful, so there should be one.
Sep 17 2019