digitalmars.D - std.math.TAU
- James Fisher (21/21) Jul 05 2011 Hopefully this won't be taken as frivolous. I (and possibly some of you...
- Steven Schveighoffer (28/50) Jul 05 2011 I read an article about this recently, it's definitely interesting...
- James Fisher (18/57) Jul 05 2011 Sorry, I didn't state this very clearly. Multiplying the approximation...
- James Fisher (4/9) Jul 05 2011 (I think this is why the constants in
- Don (14/26) Jul 05 2011 I understand what you're getting at, but actually multiplication by
- James Fisher (8/35) Jul 05 2011 Great explanation, thanks.
- Don (6/57) Jul 05 2011 The ones defined in decimal are obsolete, they haven't had a conversion
- Walter Bright (7/10) Jul 05 2011 The ones in hex I got out of a book that helpfully printed them as octal...
- KennyTM~ (2/16) Jul 05 2011 http://www.wolframalpha.com/input/?i=pi+in+hexadecimal
- Walter Bright (2/6) Jul 05 2011 sweet!
- James Fisher (7/13) Jul 05 2011 It embarrasses me to say that, after many years, working with radians...
- Nick Sabalausky (4/7) Jul 05 2011 He had me at "TAU == 2PI"
Hopefully this won't be taken as frivolous. I (and possibly some of you) have
been convinced by the argument at http://tauday.com/. It's very convincing,
and I won't rehash it here.

The use of τ instead of π will only become really convenient when one does
not have to preface everything with "let τ = 2π". For example, in D, in order
to think in terms of τ instead of π, one must define
`enum real TAU = std.math.PI * 2;`, and possibly also TAU_2, TAU_4, etc. As
well as being a typing inconvenience, I also think things are not that easy
due to loss of precision (though I'm far from an expert on intricacies of
floating point).

There is an initiative to add TAU to the Python standard library:
http://www.python.org/dev/peps/pep-0628/

To this end, I suggest adding the constant TAU to std.math, and possibly also
TAU_2 as an alias for PI, TAU_4 as an alias for PI_2, TAU_8 as PI_4.

In any case, I'd like to know what's necessary in order for me to define
these constants without loss of precision.
Jul 05 2011
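A minimal sketch of the proposed additions, in D (the names TAU, TAU_2, TAU_4
and TAU_8 are only the suggestion above; none of them exist in std.math):

    // Hypothetical additions to std.math, following the naming suggested above.
    import std.math : PI, PI_2, PI_4;

    enum real TAU   = 2.0L * PI;  // full turn, 2*pi
    enum real TAU_2 = PI;         // half turn    (same value as PI)
    enum real TAU_4 = PI_2;       // quarter turn (same value as PI_2)
    enum real TAU_8 = PI_4;       // eighth turn  (same value as PI_4)

Whether `2.0L * PI` loses precision is exactly the question raised above; the
rest of the thread answers it.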
On Tue, 05 Jul 2011 04:31:09 -0400, James Fisher <jameshfisher gmail.com> wrote:

> Hopefully this won't be taken as frivolous. I (and possibly some of you)
> have been convinced by the argument at http://tauday.com/. It's very
> convincing, and I won't rehash it here.
> [...]
> In any case, I'd like to know what's necessary in order for me to define
> these constants without loss of precision.

I read an article about this recently, it's definitely interesting. The one
place where I haven't seen it mentioned is what happens when you want the
area of a circle, since that necessarily involves the radius. I'd guess you'd
have to use τ/2 * r^2, but even then, that's one formula vs. the rest. It's
probably a good tradeoff. I can definitely see the advantage when using
radians. Never thought I'd have to re-learn trig again...

One thing I like about Pi vs Tau is that it cannot be mistaken for a normal
character.

I'm not a floating point expert, but I would expect since floating point is
stored in binary, dividing or multiplying by 2 loses no precision at all. But
I could be wrong...

-Steve
Jul 05 2011
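For the circle-area point, a small sketch in D (TAU is assumed to be defined
as 2 * PI as proposed earlier; it is not part of std.math):

    import std.math : PI;

    enum real TAU = 2.0L * PI;  // assumed definition; not in std.math

    // Area of a circle of radius r: (tau/2) * r^2, the same value as pi * r^2.
    real circleArea(real r)
    {
        return TAU / 2 * r * r;
    }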
On Tue, Jul 5, 2011 at 12:15 PM, Steven Schveighoffer <schveiguy yahoo.com> wrote:

> [...]
> I'm not a floating point expert, but I would expect since floating point
> is stored in binary, dividing or multiplying by 2 loses no precision at
> all. But I could be wrong...

Sorry, I didn't state this very clearly. Multiplying the approximation of PI
in std.math should yield the exact double of that approximation, as it should
just involve increasing the exponent by 1. However, [double the approximation
of the constant] is not necessarily equal to [the approximation of double the
constant]. Does that make sense?
Jul 05 2011
On Tue, Jul 5, 2011 at 12:31 PM, James Fisher <jameshfisher gmail.com> wrote:

> Sorry, I didn't state this very clearly. Multiplying the approximation of
> PI in std.math should yield the exact double of that approximation, as it
> should just involve increasing the exponent by 1. However, [double the
> approximation of the constant] is not necessarily equal to [the
> approximation of double the constant]. Does that make sense?

(I think this is why the constants in math.d
<https://github.com/D-Programming-Language/phobos/blob/master/std/math.d#L206>
are each defined separately rather than in terms of each other.)
Jul 05 2011
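To illustrate the two styles being contrasted here (the hex digits are
illustrative, not copied from math.d):

    import std.math : PI;

    // Style 1: each constant written out as its own literal
    // (this is what "defined separately" refers to; digits illustrative).
    enum real PI_2_separate = 0x1.921fb54442d1846ap+0L;

    // Style 2: derived from PI. For division by a power of two this turns
    // out to be exact, as explained in the next message.
    enum real PI_2_derived = PI / 2;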
James Fisher wrote:
> On Tue, Jul 5, 2011 at 12:31 PM, James Fisher <jameshfisher gmail.com> wrote:
>
>> Sorry, I didn't state this very clearly. Multiplying the approximation of
>> PI in std.math should yield the exact double of that approximation, as it
>> should just involve increasing the exponent by 1. However, [double the
>> approximation of the constant] is not necessarily equal to [the
>> approximation of double the constant]. Does that make sense?

I understand what you're getting at, but actually multiplication by powers of
2 is always exact for binary floating point numbers. The reason is that the
rounding is based on the values after the lowest bit of the _significand_.
The exponent plays no role. Multiplication or division by two doesn't change
the significand at all, only the exponent, so if the rounding was correct
before, it is still correct after the multiplication.

Or to put it another way: PI in binary is an infinitely long string of 1s and
0s. Multiplying it by two only shifts the string left and right; it doesn't
change any of the 1s to 0s, etc, so the approximation doesn't change either.

> (I think this is why the constants in math.d
> <https://github.com/D-Programming-Language/phobos/blob/master/std/math.d#L206>
> are each defined separately rather than in terms of each other.)

Hmm. I'm not sure why PI_2 and PI_4 are there. They should be defined in
terms of PI. Probably should fix that.
Jul 05 2011
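A quick way to see this on an x86 80-bit real: print both values as hex float
literals with %a; the significand digits stay the same and only the exponent
changes. The commented output is what the argument above predicts, not
captured output.

    import std.math : PI;
    import std.stdio : writefln;

    void main()
    {
        enum real TAU = 2.0L * PI;   // exact: doubling only increments the exponent

        // %a prints the exact stored bit pattern as a hex float literal.
        writefln("PI  = %a", PI);    // expected form: 0x1.921fb54442d1846ap+1
        writefln("TAU = %a", TAU);   // expected form: 0x1.921fb54442d1846ap+2
    }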
On Tue, Jul 5, 2011 at 8:49 PM, Don <nospam nospam.com> wrote:

> I understand what you're getting at, but actually multiplication by powers
> of 2 is always exact for binary floating point numbers. The reason is that
> the rounding is based on the values after the lowest bit of the
> _significand_. The exponent plays no role. Multiplication or division by
> two doesn't change the significand at all, only the exponent, so if the
> rounding was correct before, it is still correct after the multiplication.
>
> Or to put it another way: PI in binary is an infinitely long string of 1s
> and 0s. Multiplying it by two only shifts the string left and right; it
> doesn't change any of the 1s to 0s, etc, so the approximation doesn't
> change either.

Great explanation, thanks.

> Hmm. I'm not sure why PI_2 and PI_4 are there. They should be defined in
> terms of PI. Probably should fix that.

Another thing -- why are some constants defined in decimal, others in hex,
and one (E) with the long 'L' suffix? And is there a significance to the
number of decimal/hexadecimal places -- e.g., is this the minimum places
required to ensure the closest floating point value for all common hardware
accuracies?
Jul 05 2011
James Fisher wrote:
> On Tue, Jul 5, 2011 at 8:49 PM, Don <nospam nospam.com> wrote:
> [...]
> Great explanation, thanks.
>
> Another thing -- why are some constants defined in decimal, others in hex,
> and one (E) with the long 'L' suffix?

The ones defined in decimal are obsolete, they haven't had a conversion to
hex yet.

> And is there a significance to the number of decimal/hexadecimal places --
> e.g., is this the minimum places required to ensure the closest floating
> point value for all common hardware accuracies?

Yes, it's 80 bit. Currently there's a problem with DMC's floating-point
parser; all those numbers should really be 128 bit (we should be ready for
128-bit quads).
Jul 05 2011
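For reference, the two spellings under discussion look like this in D. The
digit strings are illustrative (believed to be the correctly rounded 80-bit
value of π), not quotes of the Phobos source:

    // Decimal spelling: relies on the compiler's decimal-to-binary conversion.
    enum real PI_DEC = 3.14159265358979323846L;

    // Hex float spelling: gives the significand bit for bit, so the lexer does
    // no decimal rounding; the 'L' suffix makes it a real literal (80-bit on x86).
    enum real PI_HEX = 0x1.921fb54442d1846ap+1L;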
On 7/5/2011 3:45 PM, Don wrote:
>> Another thing -- why are some constants defined in decimal, others in
>> hex, and one (E) with the long 'L' suffix?
>
> The ones defined in decimal are obsolete, they haven't had a conversion
> to hex yet.

The ones in hex I got out of a book that helpfully printed them as octal
values. I wanted exact bit patterns, not decimal conversions that might
suffer if there's a flaw in the lexer.

It's hard to come by textbook values for some of these that are high
precision. It's definitely not good enough to just write some simple fp
program to generate them.
Jul 05 2011
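One hedged way to guard against that kind of lexer flaw is a compile-time
cross-check between the two spellings. This is only a sketch: the hex digits
are the same illustrative value as above, and the check assumes the compiler
rounds the decimal form correctly.

    import std.math : PI;

    // If either spelling were converted incorrectly, compilation would fail here.
    static assert(PI == 0x1.921fb54442d1846ap+1L);  // hex spelling (illustrative)
    static assert(PI == 3.14159265358979323846L);   // decimal spelling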
On Jul 6, 11 06:59, Walter Bright wrote:
> It's hard to come by textbook values for some of these that are high
> precision. It's definitely not good enough to just write some simple fp
> program to generate them.

http://www.wolframalpha.com/input/?i=pi+in+hexadecimal
Jul 05 2011
On 7/5/2011 11:12 PM, KennyTM~ wrote:
> http://www.wolframalpha.com/input/?i=pi+in+hexadecimal

sweet!
Jul 05 2011
On Tue, Jul 5, 2011 at 12:15 PM, Steven Schveighoffer <schveiguy yahoo.com> wrote:

> I read an article about this recently, it's definitely interesting. The
> one place where I haven't seen it mentioned is what happens when you want
> the area of a circle, since that necessarily involves the radius. I'd
> guess you'd have to use τ/2 * r^2, but even then, that's one formula vs.
> the rest. It's probably a good tradeoff. I can definitely see the
> advantage when using radians. Never thought I'd have to re-learn trig
> again...

It embarrasses me to say that, after many years, working with radians and pi
still makes my head hurt. "So I have to multiply -- no wait, divide -- no
wait, multiply that by 2 ..."
Jul 05 2011
"James Fisher" <jameshfisher gmail.com> wrote in message news:mailman.1426.1309854678.14074.digitalmars-d puremagic.com...Hopefully this won't be taken as frivolous. I (and possibly some of you) have been convinced by the argument at http://tauday.com/. It's very convincing, and I won't rehash it here.He had me at "TAU == 2PI" I'm sold.
Jul 05 2011