
digitalmars.D - issue 7006 - std.math.pow (integral, integral) crashes on negative

reply berni44 <dlang d-ecke.de> writes:
A few hours ago I closed issue 7006 [1] as a WONTFIX. Now 
timon.gehr gmx.ch opened it again, without any explanation. As I 
don't want to start an edit war, I prefer to get some other 
opinions from the community.

It's about the integer overload of pow() in std.math and the 
issue asks for adding support for negative exponents. IMHO there 
are two possibilities and none of them makes sense to me:

a) Result type integral: There is no use case, because in almost 
all cases the result is a fraction which cannot be expressed as 
an integral type. Even when looking at this as a division with 
remainder, the value would almost always be 0. Again not very 
useful.

b) Result type floating: This would be a breaking change. If the 
user wishes this behaviour, he could convert the base to a 
floating type first and then call pow. Additionally, it would 
occasionally produce wrong results, as I pointed out in my 
closing message.

What do you think about this?

[1] https://issues.dlang.org/show_bug.cgi?id=7006
Dec 15 2019
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 15.12.19 19:22, berni44 wrote:
 A few hours ago I closed issue 7006 [1] as a WONTFIX. Now 
 timon.gehr gmx.ch opened it again, without any explanation.
You didn't provide an explanation for closing the issue, so I assumed it was an accident. -1^^-1 leading to "division by zero" instead of the correct answer of -1 makes no sense at all.
 As I don't 
 want to start an edit war, I prefer to get some other opinions from the 
 community.
 
 It's about the integer overload of pow() in std.math and the issue asks 
 for adding support for negative exponents. IMHO there are two 
 possibilities and none of them makes sense to me:
 
 a) Result type integral: There is no use case,
Nonsense, e.g., (-1)^^i.
 because in almost all 
 cases the result is a fraction which cannot be expressed as an integral 
 type. Even when looking at this as a division with remainder, the value 
 would almost always be 0. Again not very useful.
 
 b) Result type floating: This would be a breaking change. If the user 
 wishes this behaviour, he could convert the base to a floating type first 
 and then call pow. Additionally, it would occasionally produce wrong 
 results, as I pointed out in my closing message.
 ...
This is not an option.
 What do you think about this?
 
 [1] https://issues.dlang.org/show_bug.cgi?id=7006
A negative exponent should behave like a negative exponent. I.e., a^^-1 = 1/a. There's no good reason to do anything else.
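A minimal sketch of these semantics, using a hypothetical `intPow` helper (not the actual Phobos implementation):

```d
// Hypothetical helper: integer pow where a negative exponent means
// integer division, i.e. a ^^ -n == 1 / (a ^^ n), truncated toward zero.
int intPow(int base, int exp)
{
    if (exp >= 0)
    {
        int result = 1;
        foreach (_; 0 .. exp)
            result *= base; // may overflow for large exponents, as today
        return result;
    }
    // Negative exponent: only |base| == 1 gives a nonzero result.
    if (base == 1) return 1;
    if (base == -1) return (exp & 1) ? -1 : 1;
    if (base == 0) throw new Exception("0 ^^ negative: actual division by zero");
    return 0; // |base| >= 2: the exact result lies strictly between -1 and 1
}

void main()
{
    assert(intPow(-1, -1) == -1); // the case from issue 7006
    assert(intPow(2, -1) == 0);   // 1/2 truncates to 0
    assert(intPow(-1, -2) == 1);
}
```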
Dec 15 2019
next sibling parent reply berni44 <dlang d-ecke.de> writes:
On Sunday, 15 December 2019 at 18:31:14 UTC, Timon Gehr wrote:
 You didn't provide an explanation for closing the issue, so I 
 assumed it was an accident.
I wrote something about the floating result stuff, which was meant as an explanation. Sorry if that wasn't clear.
 -1^^-1 leading to "division by zero" instead of the correct 
 answer of -1 makes no sense at all.
I agree that "division by zero" is not the best here. I guess the original programmer wanted to avoid throwing an exception.
 a) Result type integral: There is no usecase,
Nonsense, e.g., (-1)^^i.
Yeah, but what do you want to do with that? If the base is something other than -1, 0 or 1, the result is a fraction, which cannot be represented by an integral type. What should be the outcome of the function in that case?
 b) Result type floating: This would be a breaking change. If 
 the user wishes this behaviour he could convert the base to a 
 floating type first and then call pow. Additionally it 
 occasionally would produce wrong results as I pointed out in 
 my closing message.
 ...
This is not an option.
OK. Here we have the same opinion. :-)
 A negative exponent should behave like a negative exponent. 
 I.e., a^^-1 = 1/a. There's no good reason to do anything else.
Meanwhile I think I understand what you want. That would mean that the answer to my question above would be 0 for all fractions. Correct? I'm just not sure if this is the best alternative... I'll file a PR with that and we'll see what the reviewers say.
Dec 15 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 15.12.19 20:52, berni44 wrote:
 On Sunday, 15 December 2019 at 18:31:14 UTC, Timon Gehr wrote:
 You didn't provide an explanation for closing the issue, so I assumed 
 it was an accident.
I wrote something about the floating result stuff, which was meant as an explanation. Sorry if that wasn't clear.
 -1^^-1 leading to "division by zero" instead of the correct answer of 
 -1 makes no sense at all.
I agree that "division by zero" is not the best here. I guess the original programmer wanted to avoid throwing an exception. ...
What I wanted you to agree to is that computing the correct result is the correct thing to do. Division by zero and throwing an exception both make no sense here.
 a) Result type integral: There is no usecase,
Nonsense, e.g., (-1)^^i.
Yeah, but what do you want to do with that?
Arithmetic. (I think I was solving some combinatorics task involving (-1)^^i, but it has been a few years and I don't remember the specifics.) I wrote a correct program and instead of getting the correct answer I got a floating point exception. I then filed a bug report.

Anyway, do I really have to argue that arithmetic is useful, or that D should compute the correct result for arithmetic expressions? The whole line of reasoning where I am somehow required to justify my point of view just makes no sense to me. E.g., what's the use of 37637663*3 evaluating to 112912989? You can't think of a concrete use off the top of your head? Well, then maybe that expression should cause a divide by zero error, because it is obviously indicative of a bug if such a thing occurred in your program?
 If the base is something 
 other than -1, 0 or 1 the result is a fraction, which cannot be 
 represented by an integral type. What should be the outcome of the 
 function in that case?
 ...
Ideally 0, but that's mostly about consistency with integer division. I hope we can all agree that there is really no justification for not computing the correct result if it actually fits into the return type.
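For reference, the integer-division convention this appeals to is easy to check in D today (the ^^ results named in the comment are the proposed semantics, not current behavior):

```d
void main()
{
    // D's integer division truncates toward zero:
    assert(1 / 2 == 0);
    assert(1 / -2 == 0);
    assert(-1 / -1 == 1);
    // By the same convention, 2 ^^ -1 would be 0 and (-1) ^^ -1 would be -1.
}
```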
... I'm just not sure if this is the best alternative...
What's in Phobos now is plain broken and that's why the bug report exists.
 
 I'll file a PR with that and we'll see, what the reviewers will say.
 
Thanks.
Dec 15 2019
parent reply mipri <mipri minimaltype.com> writes:
On Monday, 16 December 2019 at 02:05:46 UTC, Timon Gehr wrote:
 Anyway, do I really have to argue that arithmetic is useful, or
 that D should compute the correct result for arithmetic
 expressions? The whole line of reasoning where I am somehow
 required to justify my point of view just makes no sense to me.
It really is the case that *someone* needs to do this, because

1. the author of the code didn't share your view;
2. several other people responding to the ticket didn't share your view; and
3. this condition has persisted for 8 years after the bug was reported.

But you don't need to justify it that far. Just one level back. You expect some specific behavior of pow(), but what do you expect of functions like pow in general? What principle does pow() violate by having this weirdly implemented domain restriction? Would you be satisfied if the line were changed to

   enforce(n >= 0, "only natural powers are supported");

or

   assert(n >= 0);

or

   deprecated("use floating point pow instead")

?

BEGIN Long digression

Suppose D had a filesystem interface without fine-grained error handling, so that a SomethingHappenedException is all you get from trying and failing to open a file. The proper importance level in the ticket system for this would have to be 'shameful' or 'disqualifying': it would mean that Phobos's file handling was only suitable for the most trivial of applications, and that for anything else the first thing you'd have to do is fall back to C's or some other interface. And the condition itself would serve as a red flag in general: don't even think about D for system administration, because clearly nobody involved with it takes those tasks seriously, so even after you work around this problem you'll certainly run into some other obnoxious fault that was less obvious.

Real example of that: Lua doesn't come with a regex library. It's too good for regex, and has some NIH pattern matcher that's less powerful and at least as slow as libc regex() in practice (so, a dozen times slower than PCRE). This is an obvious problem. Should you work around it and still try to use Lua for system administration? No!

There's a much more dangerous problem that isn't obvious at all: the normal 'lua' binary that you'd use to run Lua scripts will, on startup, silently eval() the contents of a specific environment variable. This is a 'feature', and also an instant privilege escalation exploit for any setuid Lua script. "setuid scripts are bad"? Sure, but languages that system administrators actually use take some pains to make them less bad, and sysadmins are not all top-tier. People make mistakes. Quick fixes are applied. If people started circulating completely safe only-to-be-run-by-root Lua scripts, then one day someone would set one of them setuid and someone else would notice and gain root on a box because of it. So it's best just not to use Lua for this entire class of application from the very beginning, and the earliest hint of that is Lua's pattern-matching eccentricity.

END Long digression

I can say all that because I've worked as a sysadmin and have written and maintained a lot of sysadmin code. I can't say much at all about std.math. The easy path, "just do what some other languages do", leads me to libc, which doesn't have an integer pow, or to scripting languages that eagerly return a floating point result. Or to https://gmplib.org/manual/Integer-Exponentiation.html
Dec 15 2019
parent reply Dominikus Dittes Scherkl <dominikus.scherkl continental-corporation.com> writes:
On Monday, 16 December 2019 at 03:20:32 UTC, mipri wrote:
 You expect some specific behavior of pow(), but what do you
 expect of functions like pow in general? What principle does
 pow() violate by having this weirdly implemented domain
 restriction? Would you be satisfied if the line were changed to

   enforce(n >= 0, "only natural powers are supported");

 or

   assert(n >= 0);

 or

    deprecated("use floating point pow instead")
In this special case, I would think changing the function signature to take only "uint" as exponent would be sufficient. If you need negative exponents, using the floating point pow is more useful anyway. Of course -1^^x is a useful function, but if you need it, I would still think using floating point makes more sense. And if it's time critical and you need only integers, there are much faster solutions than using pow (e.g. odd(x) ? -1 : 1).
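A sketch of that fast alternative (`odd` is not a Phobos function, so the parity test is spelled out; this is illustration, not library code):

```d
// (-1) ^^ x without pow: the sign depends only on the parity of x.
int minusOnePow(int x)
{
    return (x & 1) ? -1 : 1;
}

void main()
{
    assert(minusOnePow(0) == 1);
    assert(minusOnePow(3) == -1);
    assert(minusOnePow(-2) == 1); // also correct for negative exponents
}
```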
Dec 15 2019
parent reply Johannes Loher <johannes.loher fg4f.de> writes:
On Monday, 16 December 2019 at 06:44:31 UTC, Dominikus Dittes 
Scherkl wrote:
 [...] And if it's time critical and you need only integers, 
 there are much faster solutions than using pow (e.g. 
 odd(x) ? -1 : 1)
The thing is: In math, it is extremely common to write this as (-1)^^x. I don't understand why we should not allow this notation. As Timon explained, the restriction just makes no sense and was not what he had expected (I am also very surprised by this). I admit that Timon's explanations sounded a bit harsh, but that does not make them any less true. This is simply an arbitrary limitation. The definition of pow in math can be perfectly adapted to integers by simply using integer division, i.e. 2^^(-1) == 0, so this is how it should work.
Dec 16 2019
next sibling parent reply M.M. <matus email.cz> writes:
On Monday, 16 December 2019 at 12:25:11 UTC, Johannes Loher wrote:
 On Monday, 16 December 2019 at 06:44:31 UTC, Dominikus Dittes 
 Scherkl wrote:
 [...] And if it's time critical and you need only integers, 
 there are much faster solutions than using pow (e.g. 
 odd(x) ? -1 : 1)
The thing is: In math, it is extremely common to write this as (-1)^^x. I don't understand why we should not allow this notation. As Timon explained, the restriction just makes no sense and was not what he had expected (I am also very surprised by this). I admit that Timon's explanations sounded a bit harsh, but that does not make them any less true. This is simply an arbitrary limitation. The definition of pow in math can be perfectly adapted to integers by simply using integer division, i.e. 2^^(-1) == 0, so this is how it should work.
As mentioned by both Timon and Johannes, in mathematics, and especially in combinatorics, the usage of (-1)^^i for a natural number i is extremely common. For example, the definition of the determinant of a matrix uses this concept. Also, the binomial expansion of (a-b)^n can be elegantly expressed using (-1)^i. In general, any computation where the sign depends on the parity of a number i can be expressed using (-1)^i. (Another example would be the inclusion-exclusion principle for expressing the size of the union of k sets.) As such, (-1)^^i is _extremely_ useful and common, and changing how (common) math works in a programming language is asking for trouble.
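For illustration, one such alternating sum already works in D with non-negative exponents (a sketch; `binom` is an ad-hoc helper, not a Phobos function): the alternating binomial sum vanishes for n >= 1.

```d
import std.algorithm : map, sum;
import std.range : iota;

// Binomial coefficient C(n, k), computed incrementally to stay exact.
long binom(long n, long k)
{
    long r = 1;
    foreach (i; 0 .. k)
        r = r * (n - i) / (i + 1);
    return r;
}

void main()
{
    enum n = 5;
    // sum_{i=0}^{n} (-1)^^i * C(n, i) == 0 for n >= 1
    auto total = iota(0, n + 1).map!(i => (-1L) ^^ i * binom(n, i)).sum;
    assert(total == 0);
}
```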
Dec 16 2019
next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 16 December 2019 at 12:39:11 UTC, M.M. wrote:
 [snip]

 As such, (-1)^^i is _extremely_ useful and common, and changing 
 how (common) math works in a programming language is asking for 
 troubles.
I'm a little confused by this thread...

1) The bug report is about a function taking an int and an int. Just focusing on the math, the result of the function with a negative exponent would be a float. So it makes perfect sense for the function to prevent this when dealing with ints only. Just cast part of it to float and you get the right result.

2) I'm so confused by why everyone is bringing up (-1)^^i. a) i normally represents sqrt(-1) in math, so that's a complex number... the overload deals with ints. You should be concerned with an overload for complex numbers. b) This conversation would make a little more sense if we consider i as a member of the set (0, 1, 2, 3, ...). In which case, the value is still a complex number most of the time. IMO, people concerned with this case should cast -1 to complex and then call pow.

3) In math, we have different types of numbers: natural numbers, rational numbers, real numbers. Just because a formula has a defined result for every value of complex numbers doesn't mean that is the case for the other types of numbers. Computer science types, like int and float, have some analog in math numbers, but they aren't the same thing. I think you'll get confused when trying to think they are the exact same thing.
Dec 16 2019
parent reply M.M. <matus email.cz> writes:
On Monday, 16 December 2019 at 13:20:58 UTC, jmh530 wrote:
 On Monday, 16 December 2019 at 12:39:11 UTC, M.M. wrote:
 [snip]

 As such, (-1)^^i is _extremely_ useful and common, and 
 changing how (common) math works in a programming language is 
 asking for troubles.
I'm a little confused by this thread...

1) The bug report is about a function taking an int and an int. Just focusing on the math, the result of the function with a negative exponent would be a float. So it makes perfect sense for the function to prevent this when dealing with ints only. Just cast part of it to float and you get the right result.

2) I'm so confused by why everyone is bringing up (-1)^^i. a) i normally represents sqrt(-1) in math, so that's a complex number... the overload deals with ints. You should be concerned with an overload for complex numbers. b) This conversation would make a little more sense if we consider i as a member of the set (0, 1, 2, 3, ...). In which case, the value is still a complex number most of the time. IMO, people concerned with this case should cast -1 to complex and then call pow.

3) In math, we have different types of numbers: natural numbers, rational numbers, real numbers. Just because a formula has a defined result for every value of complex numbers doesn't mean that is the case for the other types of numbers. Computer science types, like int and float, have some analog in math numbers, but they aren't the same thing. I think you'll get confused when trying to think they are the exact same thing.
In my comment, "i" stands for an iterator (in a for-loop, for example), and not for a complex number. But I also see that my answer stressed the usefulness of (-1)^i for the cases where i is a _positive_ integer, which is not the part of this discussion. So, while determinants or binomial coefficients do not naturally use (-1)^i with a negative value of i, there are other cases (which Timon probably referred to) in combinatorics where using (-1)^i with a negative value comes naturally.
Dec 16 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 16 December 2019 at 13:53:06 UTC, M.M. wrote:
 So, while determinants or binomial coefficients do not 
 naturally use (-1)^i with a negative value of i, there are other 
 cases (which Timon probably referred to) in combinatorics, 
 where using (-1)^i with a negative value comes naturally.
It is easy to implement, isn't it? But what would you optimize for? Would you make x^i = 0 a bit faster for i < 0 in the general case? Or would you test for -1 and special case it for significant speedups? Without special casing it will be slow:

(-1)^(-1) = 1/((-1)^1) = -1
(-1)^(-2) = 1/((-1)^2) = 1
(-1)^(-3) = 1/((-1)^3) = -1

With special casing you get: 1 - ((i&1)<<1) or something like that. (1-2 and 1-0)
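The branch-free expression can be checked against the ternary form (a quick sketch):

```d
void main()
{
    // 1 - ((i & 1) << 1) is 1 for even i and -1 for odd i,
    // matching (-1) ^^ i for positive and negative i alike.
    foreach (i; -4 .. 5)
        assert(1 - ((i & 1) << 1) == ((i & 1) ? -1 : 1));
}
```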
Dec 16 2019
prev sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 16 December 2019 at 13:53:06 UTC, M.M. wrote:
 [snip]

 In my comment, "i" stands for an iterator (in a for-loop, for 
 example), and not for a complex number.

 But I also see that my answer stressed the usefulness of (-1)^i 
 for the cases where i is a _positive_ integer, which is not the 
 part of this discussion.

 So, while determinants or binomial coefficients do not 
 naturally use (-1)^i with a negative value of i, there are other 
 cases (which Timon probably referred to) in combinatorics, 
 where using (-1)^i with a negative value comes naturally.
I don't doubt that (-1)^i, where i is an iterator, is useful in many, many cases. However, you would need for pow(int, int) to return a complex number all the time, if you want to handle that use case, not just when i<0. That will make a lot of other code a lot more complicated. It is much simpler to convert -1 to complex and then call pow(complex, int) and return a complex number. That means that whether i=1 or i=2, the result is complex. That will make your life a lot less complicated, particularly if you pass that result to other functions.
Dec 16 2019
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 16 December 2019 at 15:00:47 UTC, jmh530 wrote:
 

 I don't doubt that (-1)^i, where i is an iterator, is useful in 
 many, many cases. However, you would need for pow(int, int) to 
 return a complex number all the time, if you want to handle 
 that use case, not just when i<0. That will make a lot of other 
 code a lot more complicated. It is much simpler to convert -1 
 to complex and then call pow(complex, int) and return a complex 
 number. That means that whether i=1 or i=2, the result is 
 complex. That will make your life a lot less complicated, 
 particularly if you pass that result to other functions.
What I mean is that if i is 0.5, then you have to return a complex. So you have to add a special case for that.
Dec 16 2019
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 16 December 2019 at 15:13:44 UTC, jmh530 wrote:
 [snip]

 What I mean is that if i is 0.5, then you have to return a 
 complex. So you have to add a special case for that.
Simple work-around for the (-1)^^i:

    import std;

    void main()
    {
        auto x = iota(10).map!(a => a % 2 ? -1 : 1);
    }
Dec 16 2019
next sibling parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Monday, 16 December 2019 at 15:33:33 UTC, jmh530 wrote:
 On Monday, 16 December 2019 at 15:13:44 UTC, jmh530 wrote:
 [snip]

 What I mean is that if i is 0.5, then you have to return a 
 complex. So you have to add a special case for that.
Simple work-around for the (-1)^^i:

    import std;

    void main()
    {
        auto x = iota(10).map!(a => a % 2 ? -1 : 1);
    }
Why can't that pow(int, int) function implement that workaround and return 0 on all negative exponents and not crash otherwise? That a function balks at mathematically nonsensical values like 1/0 or 0^^0 is okay; it's expected. That it does so on a function call that mathematically has valid parameters (even if the result cannot be represented) is not normal. Nobody expects 2^^70 to crash with a divide by zero error unless explicitly requesting checked integers that catch overflows.
Dec 16 2019
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 16 December 2019 at 15:49:35 UTC, Patrick Schluter 
wrote:
 [snip]

 Why can't that pow(int, int) function implement that workaround 
 and return 0 on all negative exponents and not crash otherwise?
 That a function balks at mathematically nonsensical values like 
 1/0 or 0^^0 is okay; it's expected. That it does so on a function 
 call that mathematically has valid parameters (even if the result 
 cannot be represented) is not normal. Nobody expects 2^^70 to 
 crash with a divide by zero error unless explicitly requesting 
 checked integers that catch overflows.
I think I made some mistakes earlier because I kept mixing stuff up, particularly because an exponent of 0.5 wouldn't matter with ints. It wouldn't return 0 on all negative exponents. For exponents of -2 and below, it would return 0 unless the base is -1 or 1. For an exponent of -1, it would return 0 if x is -2 or smaller or 2 or larger, and otherwise 1 or -1 depending on whether the base is 1 or -1.
Dec 16 2019
parent reply Dominikus Dittes Scherkl <dominikus.scherkl continental-corporation.com> writes:
On Monday, 16 December 2019 at 15:57:13 UTC, jmh530 wrote:
 On Monday, 16 December 2019 at 15:49:35 UTC, Patrick Schluter 
 wrote:
 [snip]

 Why can't that pow(int, int) function implement that 
 workaround and return 0 on all negative exponents and not 
 crash otherwise?
I still don't understand why anybody wants a function with signature

  int pow(int, int)

I think there are only two interesting cases:

  int pow(int, ubyte)

and

  complex pow(complex, complex)

both working correctly for any possible input, but of course the integer version is likely to overflow even for relatively small exponents. To make this more prominent for the user, I would only allow 8-bit exponents anyway. If you call it with something sensible, the exponent should fit that; else the result will be some garbage anyway. Same with negative exponents - rounding away every bit of information from the result is just an offense. Why offer such nonsense? Anybody interested in using pow with negative exponents cannot be interested in an integer result.

Maybe (for performance reasons?)

  real pow(real, real)

would also be of interest, but there are (strange) constraints on when the result is real, so I cannot really recommend this. In fact, for a negative integral exponent n there are n solutions of which only one is real (or two if n is even). So the signature would better be

  real pow(real x, real e) in(e >= 0)

But I would always prefer the two cases without constraints.
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 13:03, Dominikus Dittes Scherkl wrote:
 On Monday, 16 December 2019 at 15:57:13 UTC, jmh530 wrote:
 On Monday, 16 December 2019 at 15:49:35 UTC, Patrick Schluter wrote:
 [snip]

 Why can't that pow(int, int) function implement that workaround and 
 return 0 on all negative exponents and not crash otherwise?
I still don't understand
Please read the entire thread.
 why anybody want a function with signature
 
 int pow(int, int)
 ...
You are apparently not aware that std.math.pow is the implementation of the built-in `^^` operator. Removing it is not even on the table, it should just work correctly.
 I think
You are wrong.
 there are only two interesting cases:
 
 int pow(int, ubyte)
  > and
 
 complex pow(complex, complex)
 ...
Those cases already work fine, and they are off-topic. The bug report is for an existing feature that does not work correctly.
 both working correct for any possible input, but of course the integer 
 version is likely to overflow even for relatively small exponents.
As I am stating for the third time now, there are x such that `pow(x,int.max)` and/or `pow(x,int.min)` neither overflow nor cause any rounding.
 To 
 make this more prominent for the user, I would only allow 8bit exponents 
 anyway.
I am the user in question and I am not a moron. Thanks a lot.
 If you call it with something sensible, the exponent should fit 
 that, else the result will be some garbage anyway. Same with negative 
 exponents - rounding away any bit of information from the result is just 
 an offense. Why offer such nonsense? Anybody interested in using pow 
 with negative exponents cannot be interested in an interger result.
 ...
I really don't understand the source of those weird `pow`-related prejudices. You are the one who is spewing nonsense. This is such a trivial issue. This shouldn't be this hard.
Dec 17 2019
parent reply Dominikus Dittes Scherkl <dominikus.scherkl continental-corporation.com> writes:
On Tuesday, 17 December 2019 at 12:31:15 UTC, Timon Gehr wrote:
 I still don't understand
 why anybody want a function with signature
 
 int pow(int, int)
 ...
You are apparently not aware that std.math.pow is the implementation of the built-in `^^` operator. Removing it is not even on the table, it should just work correctly.
Did I suggest to remove it? NO. But I'm of the opinion that having each built-in operator return the same type as the given types is not always useful. ^^ should result in a real if the exponent is outside ubyte range. Is this wrong? Am I crazy? Ok, this would be a huge language change, so I agree: giving x^^negative_exp == 0 is something we could do.
 I think
You are wrong.
Please no ad hominem attacks!
 there are only two interesting cases:
 
 int pow(int, ubyte)
  > and
 
 complex pow(complex, complex)
 ...
Those cases already work fine, and they are off-topic.
No, they do not work fine, because an implausible input range for the exponent is defined. I still think this should be fixed.
 As I am stating for the third time now, there are x such that 
 `pow(x,int.max)` and/or `pow(x,int.min)` neither overflow nor 
 cause any rounding.
Yes, the cases with x == -1, 0 or 1. And only those. Maybe it should be fixed for these cases, for those who insist on using the operator instead of some very fast bit-twiddling if they need a toggling function.
 To make this more prominent for the user, I would only allow 
 8bit exponents anyway.
I am the user in question and I am not a moron. Thanks a lot.
I didn't say that. Don't put words in my mouth.
 This is such a trivial issue. This shouldn't be this hard.
It's not hard. But there are a lot of cases where giving a negative exponent is NOT intended, and maybe it is a better idea to throw to indicate misuse instead of rounding away all information by returning 0? (Except for the three special cases where I already agreed treating them correctly would be a win.) To summarize: If we need to stay with int ^^ int == int, I vote NOT to return 0 for a negative exponent and to still throw, except for the three special cases, where the correct result should be given.
Dec 17 2019
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 14:39, Dominikus Dittes Scherkl wrote:
 I think
You are wrong.
Please no ad hominem attacks!
 there are only two interesting cases: 
This is not an ad hominem. An ad hominem is dismissing your argument because you were the one who made it. (As a hypothetical example, if I said you were wrong because I did not like the shape of your nose or something like that.) Aside from not being an ad hominem, the statement above is not even a personal attack. You simply made a wrong statement. There are other interesting cases, therefore your statement was wrong. I have made wrong statements before and I usually just apologize if it happens.
Dec 17 2019
prev sibling next sibling parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Tuesday, 17 December 2019 at 13:39:13 UTC, Dominikus Dittes 
Scherkl wrote:
 On Tuesday, 17 December 2019 at 12:31:15 UTC, Timon Gehr wrote:
 I still don't understand
 why anybody want a function with signature
 
 int pow(int, int)
 ...
You are apparently not aware that std.math.pow is the implementation of the built-in `^^` operator. Removing it is not even on the table, it should just work correctly.
Did I suggest to remove it? NO. But I'm of the opinion that having each built-in operator return the same type as the given types is not always useful. ^^ should result in a real if the exponent is outside ubyte range. Is this wrong? Am I crazy? Ok, this would be a huge language change, so I agree: giving x^^negative_exp == 0 is something we could do.
 I think
You are wrong.
Please no ad hominem attacks!
That's not an ad hominem. That's just a statement of opinion. It would be ad hominem if something about you were the justification of the wrongness: you're wrong because your name is too long, or because you have a big nose, etc.
 there are only two interesting cases:
 
 int pow(int, ubyte)
  > and
 
 complex pow(complex, complex)
 ...
Those cases already work fine, and they are off-topic.
No, they do not work fine, because an implausible input range for the exponent is defined. I still think this should be fixed.
 As I am stating for the third time now, there are x such that 
 `pow(x,int.max)` and/or `pow(x,int.min)` neither overflow nor 
 cause any rounding.
Yes, the cases with x == -1, 0 or 1. And only those. Maybe it should be fixed for these cases, for those who insist on using the operator instead of some very fast bit-twiddling if they need a toggling function.
It is better that the library function correctly takes care of all cases than requiring every user of the function to insert boilerplate code to handle special cases that are well defined.
 To make this more prominent for the user, I would only allow 
 8bit exponents anyway.
I am the user in question and I am not a moron. Thanks a lot.
I didn't say that. Don't lay words into my mouth.
 This is such a trivial issue. This shouldn't be this hard.
It's not hard. But there are a lot of cases where giving a negative exponent is NOT intended, and maybe it is a better idea to throw to indicate misuse instead of rounding away all information by returning 0? (Except for the three special cases where I already agreed treating them correctly would be a win.) To summarize: If we need to stay with int ^^ int == int, I vote NOT to return 0 for a negative exponent and to still throw, except for the three special cases, where the correct result should be given.
Except for 0^^0 there is no reason to throw or crap out. int.max + 1 also doesn't throw, and it is also generally a probable bug in the user code.
Dec 17 2019
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 15:02, Patrick Schluter wrote:
 
 Except for 0^^0 there is no reason to throw or crap out.
0^^0 = 1 and D gets this right already.
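This is easy to verify with the built-in operator:

```d
void main()
{
    assert(0 ^^ 0 == 1); // the discrete convention: x ^^ 0 == 1 for every x
    assert(5 ^^ 0 == 1);
    assert(0 ^^ 1 == 0);
}
```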
Dec 17 2019
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 15:29, Timon Gehr wrote:
 On 17.12.19 15:02, Patrick Schluter wrote:
 Except for 0^^0 there is no reason to throw or crap out.
0^^0 = 1 and D gets this right already.
Also, 0^^x for x<0 is _actually_ a division by zero.
Dec 17 2019
prev sibling parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Tuesday, 17 December 2019 at 14:29:58 UTC, Timon Gehr wrote:
 On 17.12.19 15:02, Patrick Schluter wrote:
 
 Except for 0^^0 there is no reason to throw or crap out.
0^^0 = 1 and D gets this right already.
Not as clear-cut as you say, but it is generally agreed to be 1: https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero

But to clarify what I meant: except for 0^0, where an exception could be justified, all other cases have no reason to even contemplate throwing an exception. It is not mathematically justified. If D were a managed language where integer overflows are handled by default, it would be another story. That's why I consider your position the right one.
Dec 17 2019
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 16:43, Patrick Schluter wrote:
 On Tuesday, 17 December 2019 at 14:29:58 UTC, Timon Gehr wrote:
 On 17.12.19 15:02, Patrick Schluter wrote:
 Except for 0^^0 there is no reason to throw or crap out.
0^^0 = 1 and D gets this right already.
Not as clear cut as you say, but generally it is agreed upon being set to 1. https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero ...
- Anyone can edit Wikipedia, and laymen like to preserve outdated conventions from 200 years ago that are sometimes taught in primary school.
- That article actually explains why a computer scientist must consider the value to be 1 if your domain of exponents models a discrete set. (Knuth says so!)
- Many other modern programming languages also get it right, even C99.
- D is sometimes proud to fix design mistakes in C++. Add this to the list.
 but to clarify what I meant
 
 except for 0^0 where an exception could be justified,
It can't be justified. The reason IEEE 754 supports multiple conventions, with 1 being the default, is that floats are often used to approximately model continuous functions and an exact value of 0 could be the result of a rounding error.
 all other cases have no reason to even contemplate throwing an exception.
0^^-1.
Dec 17 2019
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 15:02, Patrick Schluter wrote:

 You are wrong.
Please no ad hominem attacks!
That's not an ad hominem. That's just a statement of opinion. It would be ad hominem if something about you were the justification of the wrongness: "You're wrong because your name is too long", or "because you have a big nose", etc.
Although some people do count disagreeing with mathematically wrong statements as a personal attack:
https://github.com/pandas-dev/pandas/issues/9422#issuecomment-343550192

I think this is not a useful way to run a community. Note that the provided "proof" is not entirely right; the mathematical justification for sum([])=0 is actually this:
https://en.wikipedia.org/wiki/Free_monoid#Morphisms
Dec 17 2019
parent reply Jab <jab_293 gmall.com> writes:
On Tuesday, 17 December 2019 at 15:08:31 UTC, Timon Gehr wrote:
 On 17.12.19 15:02, Patrick Schluter wrote:

 You are wrong.
Please no ad hominem attacks!
That's not an ad hominem. That's just a statement of opinion. It would be ad hominem if something about you was the justification of the wrongness. You're wrong because your name is too long or because you have a big nose etc.
Although some people do count disagreeing with mathematically wrong statements as a personal attack:
Just quoting "I think" with a reply of "you are wrong" isn't any better a way to run a community. It's not so much that you are disagreeing with it as that you don't know how to convey your disagreement like a human being.
 https://github.com/pandas-dev/pandas/issues/9422#issuecomment-343550192
So you had to pull an example from close to 3 years ago?
Dec 17 2019
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 16:44, Jab wrote:
 On Tuesday, 17 December 2019 at 15:08:31 UTC, Timon Gehr wrote:
 On 17.12.19 15:02, Patrick Schluter wrote:

 You are wrong.
Please no ad hominem attacks!
That's not an ad hominem. That's just a statement of opinion. It would be ad hominem if something about you was the justification of the wrongness. You're wrong because your name is too long or because you have a big nose etc.
Although some people do count disagreeing with mathematically wrong statements as a personal attack:
Just quoting "I think" with a reply "you are wrong" isn't any better way to run a community. It's not so much that you are disagreeing with it, so much as you don't know how to convey that you are disagreeing with it like a human being. ...
You are setting a great example.
 https://github.com/pandas-dev/pandas/issues/9422#issuecomment-343550192
So you had to pull an example from close to 3 years ago?
It's the one I am familiar with. I'm not very active in other communities.
Dec 17 2019
prev sibling parent reply Martin Tschierschke <mt smartdolphin.de> writes:
On Tuesday, 17 December 2019 at 14:02:53 UTC, Patrick Schluter 
wrote:
[...]
 To summarize:
 If we need to stay with int ^^ int == int, I vote NOT to 
 return 0 for negative exponent and still throw, except for the 
 three special cases, where the correct result should be given.
Except for 0^^0 there is no reason to throw or crap out. int.max+1 also doesn't throw and it is also generally a probable bug in the user code.
But 0^^0 is in general very often replaced by lim x->0 of x^^x, which is 1.

The question is answered here:
https://stackoverflow.com/questions/19955968/why-is-math-pow0-0-1

...long story, but maybe this is the point:

"as a general rule, native functions to any language should work as described in the language specification. Sometimes this includes explicitly "undefined behavior" where it's up to the implementer to determine what the result should be, however this is not a case of undefined behavior."

If you look at pow in the cppreference language definition, you realize how 'complex' the pow issue becomes...

https://en.cppreference.com/w/cpp/numeric/math/pow
Dec 17 2019
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 17:48, Martin Tschierschke wrote:
 On Tuesday, 17 December 2019 at 14:02:53 UTC, Patrick Schluter wrote:
 [...]
 To summarize:
 If we need to stay with int ^^ int == int, I vote NOT to return 0 for 
 negative exponent and still throw, except for the three special 
 cases, where the correct result should be given.
Except for 0^^0 there is no reason to throw or crap out. int.max+1 also doesn't throw and it is also generally a probable bug in the user code.
But 0^^0 in general, is very often replaced by lim x-> 0 x^^x witch is 1. ...
0^^0 is 1 because there is one canonical function mapping the empty set to itself. Also, if one really wants a justification involving calculus look no further than Taylor series.
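The Taylor-series justification can be made explicit: the standard series for the exponential function, evaluated at x = 0, only gives the correct value if the 0^0 term is taken to be 1.

```latex
e^{x} = \sum_{n=0}^{\infty} \frac{x^{n}}{n!}
\qquad\Longrightarrow\qquad
1 = e^{0} = \frac{0^{0}}{0!} + \frac{0^{1}}{1!} + \frac{0^{2}}{2!} + \cdots = 0^{0}.
```

Any other convention would break the standard way power series are written, since every series of the form Σ aₙ xⁿ relies on x⁰ = 1 even at x = 0.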
 The question is answered here:
 
 https://stackoverflow.com/questions/19955968/why-is-math-pow0-0-1
 ...
This is probably the most useful answer: https://stackoverflow.com/a/20376146
 ...long story but may be this is the point:
 
 "as a general rule, native functions to any language should work as 
 described in the language specification. Sometimes this includes 
 explicitly "undefined behavior" where it's up to the implementer to 
 determine what the result should be, however this is not a case of 
 undefined behavior."
 
 If you look at pow in the cpp.reference language definition, you realize 
 how 'complex' the pow issue becomes...
 
 https://en.cppreference.com/w/cpp/numeric/math/pow
This function does not handle integers separately. 0^^0=1 is completely mundane.
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 18:09, Timon Gehr wrote:
 On 17.12.19 17:48, Martin Tschierschke wrote:
 ...

 https://en.cppreference.com/w/cpp/numeric/math/pow
...
Haha. pow(0.0,0.0) is either 1.0 or NaN, but pow(1.0,∞) is guaranteed to be 1.0. So I guess the C++ standards committee considers the base a pristine value from ℝ while the exponent is rounded floating-point garbage. I wonder how much thought they actually put into this. I also wonder if there are any implementations that indeed choose to deviate from the floating-point standard. (GNU C++ evaluates pow(0.0,0.0) as 1.0.)
Dec 17 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 17 December 2019 at 17:31:41 UTC, Timon Gehr wrote:
 Haha. pow(0.0,0.0) is either 1.0 or NaN, but pow(1.0,∞) is 
 guaranteed to be 1.0.
The limit for 0^0 does not exist, and floating point does not represent exactly zero, but approximately 0.
Dec 17 2019
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 17 December 2019 at 18:41:01 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 17 December 2019 at 17:31:41 UTC, Timon Gehr wrote:
 Haha. pow(0.0,0.0) is either 1.0 or NaN, but pow(1.0,∞) is 
 guaranteed to be 1.0.
Besides, that is not what it said on the page. It said that 0^0 may lead to a domain error, which is reasonable for implementors, but it also referred to IEC 60559, which sets x^0 to 1.0 even if the base is NaN.

The goal of ISO standards is to codify and streamline existing practices for better interoperability; C++ is an ISO standard.
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On Tuesday, 17 December 2019 at 19:12:00 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 17 December 2019 at 18:41:01 UTC, Ola Fosheim 
 Grøstad wrote:
 On Tuesday, 17 December 2019 at 17:31:41 UTC, Timon Gehr wrote:
 Haha. pow(0.0,0.0) is either 1.0 or NaN, but pow(1.0,∞) is 
 guaranteed to be 1.0.
Besides, that is not what it said on the page.
Yes, this is precisely what it says on the page. It may not be what you read, and that is because you cut corners and didn't read the entire page. I take a lot of care to validate my own statements. Please do the same.
 It said that 0^0 may lead to a domain error. [...]
"If a domain error occurs, an implementation-defined value is returned (NaN where supported)"
Dec 17 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 17 December 2019 at 20:35:33 UTC, Timon Gehr wrote:
 Besides, that is not what it said on the page.
Yes, this is precisely what is says on the page.
Er... No.

As I said, it is an ISO standard, and thus exists to codify existing practice. That means that some representatives from countries can block decisions.

So first the webpage says that you may get a domain error. Then it refers to an IEC standard from 1989. The "may" part is usually there so as not to make life difficult for existing implementations. So the foundation is IEC, but to bring everyone on board they probably put in openings that _MAY_ be used.

This is what you get from standardization. The purpose of ISO standardization is not to create something new and pretty, but to reduce tendencies towards diverging ad hoc or proprietary standards. It is basically there to support international markets and fair competition... not to create beautiful objects. The process isn't really suited for programming language design; I think C++ is an outlier.
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On Tuesday, 17 December 2019 at 20:55:07 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 17 December 2019 at 20:35:33 UTC, Timon Gehr wrote:
 Besides, that is not what it said on the page.
Yes, this is precisely what is says on the page.
Er.. No.
It says implementations that support NaN may choose to return NaN instead of 1. If we agree, as I intended, to not consider implementations with no NaN support, how exactly is this not what it says on the page? (Please no more pointless elaborations on what common terms mean, ideally just tell me what, in your opinion, is a concise description of what results an implementation is _allowed_ to give me for std::pow(0.0,0.0) on x86. For instance, if I added a single special case for std::pow(0.0,0.0) to a standards-compliant C++17 implementation for x86-64 with floating-point support, which values could I return without breaking C++17 standard compliance?)
 As I said, it is an ISO standard, and thus exists to codify 
 existing practice. That means that  some representatives from 
 countries can block decisions.
(I'm aware.)
 So first the webpage say that you may get a domain error. Then 
 it refers to an IEC standard from 1989.
 ...
They don't say that C++ `std::pow` itself is supposed to satisfy the constraints of `pow` in that standard, and as far as I can tell either that is not the case, or that constraint was not in the floating-point standard at the time, as this article states that C++ leaves the result unspecified:

https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero#Programming_languages

If you think this is inaccurate, you should probably take the fight to whoever wrote that article, as this is where I double-checked my claim, and the article had been linked in this thread before I made that claim. But it seems like it is right, as it also says that the C99 standard was explicitly amended to require pow(0.0,0.0)==1.0.
 The may part is usually there to not make life difficult for 
 existing implementations. So the foundation is IEC, but to 
 bring all on board they probably put in openings that _MAY_ be 
 used.

 This is what you get from standardization. The purpose of ISO 
 standardization is not create something new and pretty, but to 
 reduce tendencies towards diverging ad hoc or proprietary 
 standards. It is basically there to support international 
 markets and fair competition... Not to create beautiful objects.
 ...
I didn't say that the result was _supposed_ to be beautiful, just that on face value it is ugly and funny. In any case, you will probably agree that it's not a place to draw inspiration from for the subject matter of this thread.
 The process isn't really suited for programming language 
 design, I think C++ is an outlier.
Indeed, however it is still somewhat common for very popular languages: https://en.wikipedia.org/wiki/Category:Programming_languages_with_an_ISO_standard
Dec 17 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 17 December 2019 at 23:29:53 UTC, Timon Gehr wrote:
 what it says on the page? (Please no more pointless 
 elaborations on what common terms mean,
Well, «may» has other connotations in standard texts than in ordinary language, so I read such texts differently than you, obviously.
 on x86. For instance, if I added a single special case for 
 std::pow(0.0,0.0) to a standards-compliant C++17 implementation 
 for x86-64 with floating-point support, which values could I 
 return without breaking C++17 standard compliance?)
Whatever you like. It is implementation-defined. That does not mean it is encouraged to return something random.

According to the standard, x^y is defined as:

exp(y * log(x))

The problem with floating point is that what you want depends on the application. If you want to be (more) certain that you don't return inaccurate calculations, then you want NaN or some other "exception" for all inaccurate operations, so that you can switch to a different algorithm. If you do something real-time, you probably just want something "reasonable".
 Indeed, however it is still somewhat common for very popular 
 languages:
Yes, but some have built up the standard under less demanding regimes like ECMA, then improved on it under ISO. I am quite impressed that ISO C++ moves anywhere (and mostly in the right direction) given how hard it is to reach consensus on anything related to language design and changes! :-)
Dec 17 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 18 December 2019 at 00:03:14 UTC, Ola Fosheim 
Grøstad wrote:
 On Tuesday, 17 December 2019 at 23:29:53 UTC, Timon Gehr wrote:
 what it says on the page? (Please no more pointless 
 elaborations on what common terms mean,
Well, «may» have other connotations in standard texts that in oridinary language, so I read such texts differently than you, obviously.
 on x86. For instance, if I added a single special case for 
 std::pow(0.0,0.0) to a standards-compliant C++17 
 implementation for x86-64 with floating-point support, which 
 values could I return without breaking C++17 standard 
 compliance?)
Please note that you can test for IEC 60559 conformance at compile time using:

static constexpr bool is_iec559;
// true if and only if the type adheres to ISO/IEC/IEEE 60559
// Meaningful for all floating-point types.

Which gives guarantees:

For the pown function (integral exponents only):
pown(x, 0) is 1 for any x (even a zero, quiet NaN, or infinity)
pown(±0, n) is ±∞ and signals the divideByZero exception for odd integral n<0
pown(±0, n) is +∞ and signals the divideByZero exception for even integral n<0
pown(±0, n) is +0 for even integral n>0
pown(±0, n) is ±0 for odd integral n>0.

For the pow function (integral exponents get special treatment):
pow(x, ±0) is 1 for any x (even a zero, quiet NaN, or infinity)
pow(±0, y) is ±∞ and signals the divideByZero exception for y an odd integer <0
pow(±0, −∞) is +∞ with no exception
pow(±0, +∞) is +0 with no exception
pow(±0, y) is +∞ and signals the divideByZero exception for finite y<0 and not an odd integer
pow(±0, y) is ±0 for finite y>0 an odd integer
pow(±0, y) is +0 for finite y>0 and not an odd integer
pow(−1, ±∞) is 1 with no exception
pow(+1, y) is 1 for any y (even a quiet NaN)
pow(x, y) signals the invalid operation exception for finite x<0 and finite non-integer y.

For the powr function (derived by considering only exp(y×log(x))):
powr(x, ±0) is 1 for finite x>0
powr(±0, y) is +∞ and signals the divideByZero exception for finite y<0
powr(±0, −∞) is +∞
powr(±0, y) is +0 for y>0
powr(+1, y) is 1 for finite y
powr(x, y) signals the invalid operation exception for x<0
powr(±0, ±0) signals the invalid operation exception
powr(+∞, ±0) signals the invalid operation exception
powr(+1, ±∞) signals the invalid operation exception
powr(x, qNaN) is qNaN for x≥0
powr(qNaN, y) is qNaN.
Dec 17 2019
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On Wednesday, 18 December 2019 at 00:03:14 UTC, Ola Fosheim 
Grøstad wrote:
 On Tuesday, 17 December 2019 at 23:29:53 UTC, Timon Gehr wrote:
 ...
 on x86. For instance, if I added a single special case for 
 std::pow(0.0,0.0) to a standards-compliant C++17 
 implementation for x86-64 with floating-point support, which 
 values could I return without breaking C++17 standard 
 compliance?)
Whatever you like. It is implementation defined. That does not mean it is encouraged to return something random. ...
"If a domain error occurs, an implementation-defined value is returned (NaN where supported)" I.e., what you are saying is that even if the implementation supports NaN, it may return non-NaN, the above statement notwithstanding?
 According to the standard x^y  is defined as:

 exp(y * log(x))
 ...
Well, that's pretty lazy. Also, it can't be true simultaneously with your claim that pow(0.0,0.0) can be modified to return _anything_, as it would then need to be consistent with exp(0.0*log(0.0)).

Also:

$ cat test.cpp
#include <cmath>
#include <iostream>
using namespace std;
int main(){
    cout<<pow(0.0,0.0)<<endl;              // 1
    cout<<exp(0.0*log(0.0))<<endl;         // -nan
    double x=328.78732, y=36.3;            // (random values I entered)
    cout<<(pow(x,y)==exp(y*log(x)))<<endl; // 0
}
$ g++ -std=c++11 -m64 -pedantic test.cpp && ./a.out
1
-nan
0

I may soon just go back to ignoring all your posts (like Walter also does).
Dec 17 2019
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 18.12.19 01:37, Timon Gehr wrote:
 ...
 Also:
 
 ...
Oh, and not to forget, of course exp((1.0/0.0)*log(1.0)) is NaN while pow(1.0,1.0/0.0) is 1, also invalidating the claim that allowing the implementation of pow(x,y) as exp(y*log(x)) was a goal somehow aided by treating pow(0.0,0.0) and pow(1.0,1.0/0.0) differently.
Dec 17 2019
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 18.12.19 01:37, Timon Gehr wrote:
 On Wednesday, 18 December 2019 at 00:03:14 UTC, Ola Fosheim Grøstad wrote:
 On Tuesday, 17 December 2019 at 23:29:53 UTC, Timon Gehr wrote:
 ...
 on x86. For instance, if I added a single special case for 
 std::pow(0.0,0.0) to a standards-compliant C++17 implementation for 
 x86-64 with floating-point support, which values could I return 
 without breaking C++17 standard compliance?)
Whatever you like.  It is implementation defined.  That does not mean it is encouraged to return something random. ...
"If a domain error occurs, an implementation-defined value is returned (NaN where supported)" I.e., what you are saying is that even if the implementation supports NaN, it may return non-NaN, the above statement notwithstanding?
Perhaps what you mean to say is that the C++ standard is understood to be so lax that it doesn't actually define the expected result of pow for anything but the listed special cases, such that pedantically speaking, pow could return NaN (or, usually, any other value) for all other pairs of arguments (usually, without raising a domain error)?

The webpage says that the function raises the first argument to the power of the second. For floating point, this usually means it returns the correct result rounded according to the current rounding mode. However, if it is indeed true that in the context of the C++ standard this instead means absolutely nothing, this would successfully refute my claim that the webpage (means to) state(s) _precisely_ that pow(0.0,0.0) may return 1 or NaN, after you claimed that the webpage does not say that 1 or NaN are both allowed values. It can't be true that the standard does not allow one of those two values, as NaN is explicitly allowed and actual implementations return 1.

In this case, pow(0.0,0.0) being unspecified would be exactly as significant as pow(2.0,2.0) being unspecified, and it would have exactly as much bearing on the topic of this thread. The Wikipedia article could then perhaps also be updated to explain that pow being unspecified is something that holds for most arguments, including (0.0,0.0).
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 18.12.19 03:14, Timon Gehr wrote:
 
 Perhaps what you mean to say is that the C++ standard is understood to 
 be so lax that it doesn't actually define the expected result of pow for 
 anything but the listed special cases, such that pedantically speaking, 
 pow could return NaN (or, usually, any other value) for all other pairs 
 of arguments (usually, without raising a domain error)?
Reviewing this page, this does not appear to be the case either: https://en.cppreference.com/w/cpp/numeric/fenv/FE_round So I guess I still don't understand why you think an implementation could return an arbitrary value.
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 18.12.19 03:30, Timon Gehr wrote:
 On 18.12.19 03:14, Timon Gehr wrote:
 Perhaps what you mean to say is that the C++ standard is understood to 
 be so lax that it doesn't actually define the expected result of pow 
 for anything but the listed special cases, such that pedantically 
 speaking, pow could return NaN (or, usually, any other value) for all 
 other pairs of arguments (usually, without raising a domain error)?
Reviewing this page, this does not appear to be the case either: https://en.cppreference.com/w/cpp/numeric/fenv/FE_round So I guess I still don't understand why you think an implementation could return an arbitrary value.
The following simple test that would have been able to refute my interpretation of the standard failed to do so:

#include <cmath>
#include <cfenv>
#include <iostream>
using namespace std;
int main(){
    #pragma STDC FENV_ACCESS ON
    double x=328.78732,y=36.3;
    fesetround(FE_DOWNWARD);
    double r1=pow(x,y);
    fesetround(FE_UPWARD);
    double r2=pow(x,y);
    cout<<*(unsigned long long*)&r1<<endl; // 5973659313751886762
    cout<<*(unsigned long long*)&r2<<endl; // 5973659313751886763
}

(Of course, this is not by itself enough to show that my interpretation is right.)
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 18.12.19 03:44, Timon Gehr wrote:
 On 18.12.19 03:30, Timon Gehr wrote:
 On 18.12.19 03:14, Timon Gehr wrote:
 Perhaps what you mean to say is that the C++ standard is understood 
 to be so lax that it doesn't actually define the expected result of 
 pow for anything but the listed special cases, such that pedantically 
 speaking, pow could return NaN (or, usually, any other value) for all 
 other pairs of arguments (usually, without raising a domain error)?
Reviewing this page, this does not appear to be the case either: https://en.cppreference.com/w/cpp/numeric/fenv/FE_round So I guess I still don't understand why you think an implementation could return an arbitrary value.
The following simple test that would have been able to refute my interpretation of the standard failed to do so: ...
The following D code shows that Phobos's floating point pow is a worse implementation than the one in glibc++:

import std.math, std.stdio;
void main(){
    FloatingPointControl fpctrl;
    double x=328.78732,y=36.2;
    fpctrl.rounding = FloatingPointControl.roundDown;
    double r1=x^^y;
    fpctrl.rounding = FloatingPointControl.roundUp;
    double r2=x^^y;
    writeln(*cast(ulong*)&r1); // 5969924476430611442
    writeln(*cast(ulong*)&r2); // 5969924476430611444
}

With glibc++, the two values differ by a single ulp.
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 18.12.19 03:58, Timon Gehr wrote:
 On 18.12.19 03:44, Timon Gehr wrote:
 On 18.12.19 03:30, Timon Gehr wrote:
 On 18.12.19 03:14, Timon Gehr wrote:
 Perhaps what you mean to say is that the C++ standard is understood 
 to be so lax that it doesn't actually define the expected result of 
 pow for anything but the listed special cases, such that 
 pedantically speaking, pow could return NaN (or, usually, any other 
 value) for all other pairs of arguments (usually, without raising a 
 domain error)?
Reviewing this page, this does not appear to be the case either: https://en.cppreference.com/w/cpp/numeric/fenv/FE_round So I guess I still don't understand why you think an implementation could return an arbitrary value.
The following simple test that would have been able to refute my interpretation of the standard failed to do so: ...
Ok, I found a case where glibc++ computes a wrong result:

#include <cmath>
#include <cfenv>
#include <iostream>
using namespace std;
int main(){
    #pragma STDC FENV_ACCESS ON
    double x=193513.887169782;
    double y=44414.97148164646;
    fesetround(FE_DOWNWARD);
    double r1=atan2(y,x);
    fesetround(FE_UPWARD);
    double r2=atan2(y,x);
    cout<<*(unsigned long long*)&r1<<endl; // 4597296506280443981
    cout<<*(unsigned long long*)&r2<<endl; // 4597296506280443981
}

If I use `long double` instead for intermediate calculations, the upper bound is the correct 4597296506280443982. So I guess the C++ standard (like IEEE floating point) does not require exact rounding for some transcendental functions, most likely including pow. Unfortunately, I haven't found any details about precision requirements for C++ floating point library functions using a cursory Google search, so it may indeed be the case that there are absolutely none. Do you have any source that says as much?
Dec 17 2019
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 18.12.19 04:29, Timon Gehr wrote:
 So I guess the C++ standard (like IEEE floating point) does not require 
 exact rounding for some transcendental functions, most likely including pow.
It seems IEEE 754 recommends exactly rounded `pow`, but it is not required for conformance (though `pow(0.0,0.0)==1.0` is required).

Also, I finally found this:
https://en.cppreference.com/w/cpp/numeric/math/sqrt

"Notes
std::sqrt is required by the IEEE standard to be exact. The only other operations required to be exact are the arithmetic operators and the function std::fma. After rounding to the return type (using default rounding mode), the result of std::sqrt is indistinguishable from the infinitely precise result. In other words, the error is less than 0.5 ulp. Other functions, including std::pow, are not so constrained."

What a weird place to hide this information. I guess I was wrong about C++11 `pow(0.0,0.0)` being _required_ to return either 1.0 or NaN. Sorry about that. (However, any implementation that chooses to implement `pow` conforming to IEEE 754 will return `1.0`.)

Unfortunately, none of this is actually relevant for the case of D's `pow(int,int)`.
Dec 17 2019
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 18.12.19 01:37, Timon Gehr wrote:
 
 According to the standard x^y  is defined as:

 exp(y * log(x))
 ...
Well, that's pretty lazy. Also, it can't be true simultaneously with your claim that pow(0.0,0.0) can be modified to return _anything_, as it would then need to be consistent with exp(0.0*log(0.0)).
I guess what's going on is that exp and log in your expression, as it occurs in the standard, are the actual mathematical functions, and `pow` is defined to approximate this exact result for arguments where it is defined, while for other arguments there is an explicit definition.

Another thing I noticed is that https://en.cppreference.com/w/cpp/numeric/math/pow says:

"except where specified above, if any argument is NaN, NaN is returned"

As pow(NaN,0.0) is not "specified above", this seems to say that pow(0.0/0.0,0.0) should be NaN. However, g++ gives me 1. I'm not sure what's going on here. I guess either the documentation is wrong or g++ violates the C++ standard in order to satisfy recommendations in IEEE 754-2008.
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 18.12.19 05:23, Timon Gehr wrote:
 
 As pow(NaN,0.0) is not "specified above", this seems to say that 
 pow(0.0/0.0,0.0) should be NaN. However, g++ gives me 1.
https://en.cppreference.com/w/cpp/numeric/math/pow

"pow(base, ±0) returns 1 for any base, even when base is NaN"

Rofl. Ola somehow managed to gaslight me into thinking that wasn't there after I had already used it to draw conclusions. Therefore, the page _really_ states that the result must be either 1 or NaN (in case the implementation supports NaN; otherwise it can be anything), as I originally claimed, and the only thing I was wrong about tonight was my being wrong. Sorry for the noise.

Also, that does it. Ola, you are back in my kill file.
Dec 17 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 18 December 2019 at 04:58:46 UTC, Timon Gehr wrote:
 On 18.12.19 05:23, Timon Gehr wrote:
 
 As pow(NaN,0.0) is not "specified above", this seems to say 
 that pow(0.0/0.0,0.0) should be NaN. However, g++ gives me 1.
https://en.cppreference.com/w/cpp/numeric/math/pow "pow(base, ±0) returns 1 for any base, even when base is NaN" Rofl. Ola somehow managed to gaslight me into thinking that wasn't there after I had already used it to draw conclusions.
Nah, you are gaslighting yourself. You are also being generally hostile in this thread and keep going ad hominem repeatedly for no good reason. That is not healthy.

cppreference.com is a user manual, written by C++ users. g++ is irrelevant. You asked me (for God knows what reason) what the C++17 ISO STANDARD says. Why can't you look it up yourself? Oh... I get it. You are perfection, a priori… There is only one correct view, and that is yours. I get it.

ISO standards are generally only available for free as drafts. I gave you what a standard draft from 2017 says. I assume (perhaps wrongly) that the final standard is close to this.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4713.pdf

PAGE 949, full quote:

«
template<class T> complex<T> pow(const complex<T>& x, const complex<T>& y);
template<class T> complex<T> pow(const complex<T>& x, const T& y);
template<class T> complex<T> pow(const T& x, const complex<T>& y);

Returns: The complex power of base x raised to the y th power, defined as exp(y * log(x)). The value returned for pow(0, 0) is implementation-defined.

Remarks: The branch cuts are along the negative real axis.
»
 Also, that does it. Ola, you are back in my kill file.
And that is something no sane person would make a point of 
announcing. If you choose to be ignorant, keep it to yourself. Welcome 
to the kindergarten... actually most kids behave better, to be honest.
Dec 18 2019
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On Tuesday, 17 December 2019 at 18:41:01 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 17 December 2019 at 17:31:41 UTC, Timon Gehr wrote:
 Haha. pow(0.0,0.0) is either 1.0 or NaN, but pow(1.0,∞) is 
 guaranteed to be 1.0.
The limit for 0^0 does not exist, and floating point does not represent exactly zero, but approximately 0.
That's precisely why it is funny that the two cases are handled differently!
Dec 17 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 17 December 2019 at 19:41:22 UTC, Timon Gehr wrote:
 That's precisely why it is funny that the two cases are handled 
 differently!
I wish I could see the humour in this. I want to laugh as well... :-/ But all I see there is pragmatism. Anyway, for numeric programming one should in general stay away from 0.0. Some people add noise to their calculations just to avoid issues that arise close to 0.0.
Dec 17 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On Tuesday, 17 December 2019 at 19:53:06 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 17 December 2019 at 19:41:22 UTC, Timon Gehr wrote:
 That's precisely why it is funny that the two cases are 
 handled differently!
I wish I could see the humour in this. I want to laugh as well... :-/ ...
pow(1-ε,∞) is 0. pow(1+ε,∞) is ∞. pow is unstable at ∞ as much as at 0. It's plain weird to think 0.0 is rounded garbage but 1.0 is not, as 1.0+0.0 = 1.0.
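A quick numeric illustration of that instability (sketched in Python 
rather than D, since Python's `**` follows the same IEEE pow 
conventions for infinite exponents):

```python
inf = float("inf")

# IEEE pow pins pow(1, inf) to exactly 1 ...
assert 1.0 ** inf == 1.0

# ... but any representable value just below or above 1 flips
# the result to 0 or to infinity
assert (1.0 - 1e-15) ** inf == 0.0
assert (1.0 + 1e-15) ** inf == inf
```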
Dec 17 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 17 December 2019 at 20:43:20 UTC, Timon Gehr wrote:
 pow is unstable at ∞ as much as at 0. It's plain weird to think 
 0.0 is rounded garbage but 1.0 is not, as 1.0+0.0 = 1.0.
You need to look at this from the standardization POV, for instance: 
what do existing machine language instructions produce? There are many 
angles to this; some implementors will use hardware instructions that 
trap on low-accuracy results and then switch to a software 
implementation.

However, in practice, infinity is much less of an issue and relatively 
easy to avoid, while low accuracy around 0.0 that leads to instability 
is much more frequent. But there are various tricks that can be used 
to increase accuracy. For instance you can convert a*b*c*… to 
log(a)+log(b)+log(c)+… and so on.
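A hedged sketch of that last trick (in Python; the factor values are 
made up for illustration): summing logs keeps the magnitude 
representable where the direct product underflows:

```python
import math

factors = [1e-60] * 10   # hypothetical small positive factors

direct = 1.0
for f in factors:
    direct *= f          # true product is 1e-600, below double range: underflows to 0.0

# summing log10 values stays comfortably in range
log10_sum = sum(math.log10(f) for f in factors)

assert direct == 0.0
assert -600.1 < log10_sum < -599.9
```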
Dec 17 2019
prev sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 17 December 2019 at 16:48:42 UTC, Martin Tschierschke 
wrote:
 But 0^^0 in general, is very often replaced by lim x-> 0 x^^x
Well, but if you do the lim of x^^y you either get 1 or 0 depending on how you approach it.
Dec 17 2019
parent Timon Gehr <timon.gehr gmx.ch> writes:
On Tuesday, 17 December 2019 at 18:49:37 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 17 December 2019 at 16:48:42 UTC, Martin 
 Tschierschke wrote:
 But 0^^0 in general, is very often replaced by lim x-> 0 x^^x
Well, but if you do the lim of x^^y you either get 1 or 0 depending on how you approach it.
No, you can get any real value at all. Anything you want:

For x>0, lim[t→0⁺] (x^(-1/t))^(-t) = x.
lim[t→0⁺] 0^t = 0.
For x<0, lim[n→∞] (x^(-(2·n+1)))^(-1/(2·n+1)) = x.

You can also get infinity or negative infinity. pow for real arguments 
is maximally discontinuous at (0,0) (and it does not matter at all). 
The following wikipedia article, which was helpfully pasted earlier 
and you clearly did not read, clearly states this:

https://en.wikipedia.org/wiki/Zero_to_the_power_of_zero

It also says that if you restrict yourself to analytic functions 
f, g: ℝ_{≥0} → ℝ with f(0)=g(0)=0 and f(x)≠0 for x in some 
neighbourhood around 0, then we actually do have 
lim[t→0⁺] f(t)^g(t) = 1. I.e., while possible, in many cases it is 
actually unlikely that your computation does not want a result of 1, 
even if you are using floating point operations. There are multiple 
functions defined in the floating-point standard that use different 
conventions, and 1 is the default, for good reason.
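A numeric spot check of the first limit (a Python sketch; x=5 is an 
arbitrary choice):

```python
x = 5.0
for t in (0.1, 0.02, 0.01):
    base = x ** (-1.0 / t)   # tends to 0 from above as t → 0⁺
    expo = -t                # tends to 0 from below as t → 0⁺
    # base and expo both approach 0, yet base ** expo stays at x
    assert abs(base ** expo - x) < 1e-9
```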
Dec 17 2019
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 14:39, Dominikus Dittes Scherkl wrote:
 On Tuesday, 17 December 2019 at 12:31:15 UTC, Timon Gehr wrote:
 I still don't understand
 why anybody want a function with signature

 int pow(int, int)
 ...
You are apparently not aware that std.math.pow is the implementation of the built-in `^^` operator. Removing it is not even on the table, it should just work correctly.
Did I suggest removing it? NO. But I'm of the opinion that having each internal operator return the same type as the given types is not always useful. ^^ should result in a real if the exponent is outside ubyte range. Is this wrong?
Yes.
 Am I crazy?
 
Probably not, but note that I am wasting hours of my finite life on what should be a trivial issue with a trivial fix, so I hope you do understand my slight frustration. This is not efficient.
 ...
 there are only two interesting cases:

 int pow(int, ubyte)
  > and

 complex pow(complex, complex)
 ...
Those cases already work fine, and they are off-topic.
No, they do not work fine,
pow(int,ubyte) works. pow(complex,complex) works.
 because an implausible input range for the 
 exponent is defined.
There are reasonable outputs for the entire range of each individual argument.
 I still think this should be fixed.
 ...
I will argue against this and we will both lose some more time, but feel free to open an enhancement request on bugzilla if you think it is worth it.
 As I am stating for the third time now, there are x such that 
 `pow(x,int.max)` and/or `pow(x,int.min)` neither overflow nor cause 
 any rounding.
Yes, the cases with x == -1, 0 or 1. And only them. Maybe it should be fixed for these cases, for those who insist on using the operator instead of some very fast bit-muggling if they need a toggling function. ...
I don't insist on doing that. I only insist on the function producing correct outputs _when_ it is used. There is no reason to expect it not to work. Also, "very fast bit-muggling" often just does not matter in practice (especially because the compiler developers also know it); it is more typing and it is less readable, especially if there is some analogous domain-specific notation.
 To make this more prominent for the user, I would only allow 8bit 
 exponents anyway.
I am the user in question and I am not a moron. Thanks a lot.
I didn't say that. Don't lay words into my mouth. ...
I didn't do that. In the context of the thread it is however reasonable to interpret your statement as suggesting that I need hand-holding. This is further reinforced by your statement about bit-muggling. (There is a certain kind of smugness that is somewhat common with mediocre programmers, and your statements will resonate with the afflicted, even if you didn't intend them that way.)
 This is such a trivial issue. This shouldn't be this hard.
It's not hard. But there are a lot of cases where giving a negative exponent is NOT intended, and maybe it is a better idea to throw to indicate misuse instead of rounding away all information by returning 0?
I'd highly prefer it to consistently return 0 rather than kill my program without a stack trace. However, I agree that this case is slightly less clear-cut.
 (except for the three special cases where I already agreed treating them 
 correct would be a win).
 ...
(After I repeated my explanation for the third time.)
Dec 17 2019
parent reply Dominikus Dittes Scherkl <dominikus.scherkl continental-corporation.com> writes:
On Tuesday, 17 December 2019 at 14:26:42 UTC, Timon Gehr wrote:
 On 17.12.19 14:39, Dominikus Dittes Scherkl wrote:
 ^^ should result in a real if the exponent is outside ubyte 
 range.
 Is this wrong?
Yes.
Why? It's highly likely that the result is out of range or is a fraction. A floating-point value would give much more useful information.
 There are reasonable outputs for the entire range of each 
 individual argument.
If you look at it from a mathematical point of view:
pow is defined as a mapping

ℕ × ℕ → ℕ

but if we extend it to

ℤ × ℤ → ℤ

we have undefined points, because 1/n is not in ℤ for all n>1. Maybe 
it is convenient to return 0 in such cases, but not correct. Throwing 
is more plausible, if restricting to ℤ × ℕ → ℤ or extending to 
ℤ × ℤ → ℚ is wrong as you state above.
 I'd highly prefer it to consistently return 0 rather than kill 
 my program without a stack trace. However, I agree that this 
 case is slightly less clear-cut.
At least it is not what I would call "consistent". Meanwhile I looked at the implementation and it has more places to optimize. E.g. it should give 1 for any value x^^0, not only for 0^^0 (nothing to calc here). Also, all comments talk about n for the exponent, while in fact m is used in the code.
Dec 17 2019
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 17.12.19 16:07, Dominikus Dittes Scherkl wrote:
 On Tuesday, 17 December 2019 at 14:26:42 UTC, Timon Gehr wrote:
 On 17.12.19 14:39, Dominikus Dittes Scherkl wrote:
 ^^ should result in a real if the exponent is outside ubyte range.
 Is this wrong?
Yes.
Why?
It's neither sound nor complete, you can't even enforce it in a 
precise manner and it defies expectations. Let's say I write:

auto foo(T...)(T args){
    // ...
    return a^^(b?2:3);
}

Someone comes along that has a dislike for ternary operators and my 
formatting preferences and refactors the code to the obviously 
equivalent

auto foo(T...)(T args)
{
    // ... (rename "a" to "base")
    int exponent; // typeof(2) = typeof(3) = int
    if (b)
    {
        exponent = 2;
    }
    else
    {
        exponent = 3;
    }
    return base^^exponent;
}

Now suddenly my program starts to implicitly use `real` computations 
all over the place, potentially creating wrong outputs and producing 
different results on different machines. Thanks, but no. Emphatically. 
This is a terrible idea.

 It's highly likely that the result is out of range or is a fraction.
 A floating-point value would give much more useful information.
 
 There are reasonable outputs for the entire range of each individual 
 argument.
If you look at it from mathematical point of view: pow is defined as a mapping ℕ × ℕ → ℕ but if we extend it to ℤ × ℤ → ℤ
There's no such thing.
 we have undefined points, because 1/n is not in ℤ for all n>1.
Or for n=0.
 Maybe it is convenient to return 0 in such cases, but not correct.
 Throwing is more plausible, if restricting to ℤ × ℕ → ℤ or
 extending to ℤ × ℤ → ℚ is wrong as you state above.
 ...
uint is not ℕ, int is not ℤ, real is not ℚ, and unsigned types don't help prevent programming errors (the opposite is the case). Also, you can't extend xʸ to ℤ×ℤ→ℚ.
 I'd highly prefer it to consistently return 0 rather than kill my 
 program without a stack trace. However, I agree that this case is 
 slightly less clear-cut.
At least it is not what I would call "consistent". ...
x⁻¹ = 1/x. 1/2 = 0. 2^^-1 = 0. Consistent.

The interpretation ⟦.⟧ of integer division admits:

⟦a/b⟧ = truncate(⟦a⟧/⟦b⟧) in case (⟦a⟧,⟦b⟧) and truncate(⟦a⟧/⟦b⟧) are in range.

Therefore it is consistent to say:

⟦a^^b⟧ = truncate(⟦a⟧^^⟦b⟧) in case (⟦a⟧,⟦b⟧) and truncate(⟦a⟧^^⟦b⟧) are in range.
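That truncating convention can be sketched as follows (a Python 
sketch, not D; `ipow` is a hypothetical helper, not the Phobos 
implementation):

```python
def ipow(base: int, exp: int) -> int:
    """a ^^ b with the truncating convention: truncate(base ** exp)."""
    if exp >= 0:
        return base ** exp          # (a fixed-width int could overflow here)
    if base == 0:
        raise ZeroDivisionError("0 ^^ negative exponent")
    p = base ** (-exp)
    # truncate 1/p toward zero, exactly like truncating integer division
    if p == 1:
        return 1
    if p == -1:
        return -1
    return 0                        # |p| > 1, so |1/p| < 1 truncates to 0

assert ipow(-1, -1) == -1   # the case from issue 7006
assert ipow(-1, -2) == 1
assert ipow(2, -1) == 0
assert ipow(-2, -3) == 0
assert ipow(3, 4) == 81
```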
Dec 17 2019
prev sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 17 December 2019 at 15:07:40 UTC, Dominikus Dittes 
Scherkl wrote:
 If you look at it from mathematical point of view:
 pow is defined as a mapping
 ℕ × ℕ → ℕ
 but if we extend it to
 ℤ × ℤ → ℤ
 we have undefined points, because 1/n is not in ℤ for all n>1.
Not sure what you mean. You can construct any algebra you want. D has 
"defined" x/y for integer values as:

(x/y)*y = x - x%y

So integer division in D is defined through modulo y. That holds for 
x=1 and y=n, doesn't it?

(In typical math you would only use multiplication and modulo on 
integers and simply not define a division operator.)
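The identity can be checked numerically (a Python sketch; note that D, 
like C, truncates integer division toward zero, while Python's `//` 
floors, so the D behavior is emulated explicitly):

```python
import math

def d_div(x, y):
    return math.trunc(x / y)    # D-style truncating integer division

def d_mod(x, y):
    return x - d_div(x, y) * y  # D-style remainder, paired with d_div

# spot checks, including the x=1 case from the post
assert d_div(-7, 2) == -3 and d_mod(-7, 2) == -1
assert d_div(1, 5) == 0 and d_mod(1, 5) == 1

# the identity (x/y)*y == x - x%y holds across sign combinations
for x in (7, -7, 1, 0):
    for y in (2, -2, 5):
        assert d_div(x, y) * y == x - d_mod(x, y)
```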
Dec 17 2019
prev sibling next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Monday, 16 December 2019 at 15:33:33 UTC, jmh530 wrote:
 void main() {
     auto x = iota(10).map!(a => a % 2 ? 1 : -1);
 }
xor or unrolling would be faster. I wouldn't trust a compiler on that one.
Dec 16 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Monday, 16 December 2019 at 17:07:42 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 16 December 2019 at 15:33:33 UTC, jmh530 wrote:
 void main() {
     auto x = iota(10).map!(a => a % 2 ? 1 : -1);
 }
xor or unrolling would be faster. I wouldn't trust a compiler on that one.
To explain, in case that was a bit brief (8-bit case):

-1 = 11111111
 1 = 00000001

So all you have to do is xor with ~1 = 11111110
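The toggle looks like this (a Python sketch; Python ints behave as 
two's complement for bitwise operators, so the same trick applies):

```python
s = 1
seq = []
for _ in range(6):
    seq.append(s)
    s ^= ~1        # ~1 == ...11111110: flips 1 <-> -1 in a single xor

assert seq == [1, -1, 1, -1, 1, -1]
```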
Dec 16 2019
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 16.12.19 16:33, jmh530 wrote:
 On Monday, 16 December 2019 at 15:13:44 UTC, jmh530 wrote:
 [snip]
 ...
Simple work-around for the (-1)^^i: import std; void main() {     auto x = iota(10).map!(a => a % 2 ? 1 : -1); }
Yes, of course. The expression (-1)^^i is even special-cased by DMD so 
it can actually be used. Does this mean pow(-1,-1) should cause a 
divide by zero error? Of course not. Just to elaborate on how 
ridiculous the current behavior is:

import std.math;
import std.stdio;

void main(){
    int x=-1;
    writeln((-1)^^(-1)); // ok, writes -1
    writeln(pow(-1,-1)); // divide by zero error
    writeln((-1)^^(-2)); // ok writes 1
    writeln(pow(-1,-2)); // divide by zero error
    writeln(x^^(-1));    // compile error
    writeln(pow(x,-1));  // divide by zero error
    writeln((-1)^^x);    // ok, writes -1
    writeln(pow(-1,x));  // divide by zero error
    writeln(x^^x);       // divide by zero error
    writeln(pow(x,x));   // divide by zero error
}
Dec 17 2019
parent jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 17 December 2019 at 13:30:48 UTC, Timon Gehr wrote:
 [snip]

 Note that in existing compiler releases, the specific notation 
 (-1)^^x is currently supported as a special rewrite rule in the 
 frontend. This is going away though: 
 https://github.com/dlang/dmd/commit/0f2889c3aa9fba5534e754dade0cae574b636d55

 I.e., I will raise the severity of the issue to regression.
On Tuesday, 17 December 2019 at 13:11:35 UTC, Timon Gehr wrote:
 [snip]

 void main(){
     int x=-1;
     writeln((-1)^^(-1)); // ok, writes -1
     writeln(pow(-1,-1)); // divide by zero error
     writeln((-1)^^(-2)); // ok writes 1
     writeln(pow(-1,-2)); // divide by zero error
     writeln(x^^(-1));    // compile error
     writeln(pow(x,-1));  // divide by zero error
     writeln((-1)^^x);    // ok, writes -1
     writeln(pow(-1,x));  // divide by zero error
     writeln(x^^x);       // divide by zero error
     writeln(pow(x,x));   // divide by zero error
 }
I think these are both really good points*. If constant folding 
weren't being removed, I wouldn't have a strong feeling on this, but I 
think it is probably important to prevent the regression. The current 
behavior clearly has some compiler magic going on before this recent 
change to constant folding. In some sense, you would be enshrining 
this "special" behavior so that

writeln((2)^^(-1));   //compiler error
writeln((2)^^(-2));   //compiler error
writeln((-2)^^(-1));  //compiler error
writeln((-2)^^(-2));  //compiler error

would no longer be errors and would print 0. After thinking on it, it 
probably makes sense to make these changes for language consistency. 
I'm certainly the type of person who could get tripped up by these 
changes, but it still is starting to make sense to me. Can anyone 
remind me again why ^^ depends on a phobos function?

* Your example with x^^-1 is perhaps overblown as an issue because 
^^-1 works through constant folding; if you make it enum int x = -1, 
then it's not an error.
Dec 17 2019
prev sibling parent Martin Tschierschke <mt smartdolphin.de> writes:
On Monday, 16 December 2019 at 12:39:11 UTC, M.M. wrote:
[...]
 As such, (-1)^^i is _extremely_ useful and common, and changing 
 how (common) math works in a programming language is asking for 
 troubles.
!!! +1
Dec 16 2019
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 16.12.19 13:25, Johannes Loher wrote:
 I admit that Timon’s explanations sounded a bit harsh
I reported an obvious issue. It's really a no-brainer, but some people 
somehow feel the need to intervene and argue in favour of the 
trivially wrong behavior (this is an issue of basic arithmetic), 
calling my precious code nonsensical or trying to argue that I must 
not be aware that built-in integer types have finite precision, etc.

I think a bit of harshness is well-justified: if those people had 
spent just 5 to 10 minutes actually thinking, they wouldn't find 
themselves on the wrong side of this argument and wouldn't feel the 
need to further defend an opinion born out of some strange irrational 
bias.
Dec 17 2019
parent René Heldmaier <rene.heldmaier gmail.com> writes:
On Tuesday, 17 December 2019 at 12:40:10 UTC, Timon Gehr wrote:
 On 16.12.19 13:25, Johannes Loher wrote:
 I admit that Timon’s explanations sounded a bit harsh
I reported an obvious issue. It's really a no-brainer, but some people somehow feel the need to intervene and argue in favour of the trivially wrong behavior (this is an issue of basic arithmetic), calling my precious code nonsensical or trying to argue that I must not be aware that built-in integer types have finite precision, etc.
+1

It's integer arithmetic, so it should just return 0 if the 
mathematically correct result would be less than 1. A short example:

---
int a = 5;
int b = 25;
int c = a / b;
---

Should this throw an exception or cause a divide by zero error? If 
not, why should the "^^" operator behave differently?
Dec 18 2019
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 16.12.19 13:25, Johannes Loher wrote:
 
 The thing is: In math, it is extremely common to write this as (-1)^^x. 
 I don‘t understand why we should not allow this notation.
Note that in existing compiler releases, the specific notation (-1)^^x is currently supported as a special rewrite rule in the frontend. This is going away though: https://github.com/dlang/dmd/commit/0f2889c3aa9fba5534e754dade0cae574b636d55 I.e., I will raise the severity of the issue to regression.
Dec 17 2019
prev sibling parent NaN <divide by.zero> writes:
On Sunday, 15 December 2019 at 18:31:14 UTC, Timon Gehr wrote:
 On 15.12.19 19:22, berni44 wrote:
 What do you think about this?
 
 [1] https://issues.dlang.org/show_bug.cgi?id=7006
A negative exponent should behave like a negative exponent. I.e., a^^-1 = 1/a. There's no good reason to do anything else.
+1

There's no grey area IMO. Mathematically pow is defined for negative 
exponents; even if almost all the time the result would be truncated 
to zero when computing an integer result, that's what it should do.
Dec 17 2019
prev sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Sunday, 15 December 2019 at 18:22:28 UTC, berni44 wrote:
 What do you think about this?
To desire such a result is so useless that it's probably indicative of a logic error, aka a bug. Or the user expected a FP return type, and as such it should crash and warn the user.
Dec 15 2019
next sibling parent mipri <mipri minimaltype.com> writes:
On Monday, 16 December 2019 at 00:51:55 UTC, Guillaume Piolat 
wrote:
 On Sunday, 15 December 2019 at 18:22:28 UTC, berni44 wrote:
 What do you think about this?
 To desire such a result is so useless that it's probably 
 indicative of a logic error, aka a bug. Or the user expected a FP 
 return type, and as such it should crash and warn the user.
It can be "probably a logical error" for subtraction to return a negative result. It can be "probably a logic error" for integer division to have a remainder. Mathematical functions should do as they're told and let the caller sort out what meaning a calculation has.
Dec 15 2019
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 16.12.19 01:51, Guillaume Piolat wrote:
 On Sunday, 15 December 2019 at 18:22:28 UTC, berni44 wrote:
 What do you think about this?
 To desire such a result is so useless ...
(-1)^^i is not useless.
Dec 15 2019
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 16.12.19 01:51, Guillaume Piolat wrote:
 it's probably indicative of a logic error aka bug. Or the user expected 
 a FP return type and as such it should crash and warn the user.
Also, please rest assured that I wrote a perfectly fine program that expected an integer result prior to filing that bug report, and all the crash did was delay me and create painful and pointless discussions on bugzilla and on the forums.
Dec 15 2019