digitalmars.D - You do WHAT with floating point numbers?
- Tim Starling (7/7) Sep 26 2004 I wasn't sure how closely you guys watch your wiki, so I thought I'd bet...
- Sean Kelly (5/11) Sep 27 2004 Makes sense. Though I have to wonder why Intel has "effectively depreca...
- Dave (30/40) Sep 27 2004 In the context of the Intel doc., it looks like what they are suggesting...
- Ben Hinkle (10/50) Sep 27 2004 well said. I just want to add that Java's original requirement that a...
- Walter (1/1) Sep 30 2004 You should add that to the wiki!
- Dave (2/3) Oct 01 2004 Done..
I wasn't sure how closely you guys watch your wiki, so I thought I'd better post this here as well. http://www.prowiki.org/wiki4d/wiki.cgi?DocComments/Float Using large types for intermediate values will make floating point calculations much slower, especially on a Pentium 4 or later. It's a really bad idea. You should use the minimum by default. -- Tim Starling
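(Illustrative sketch only, not part of the original post; the dot function and arrays are hypothetical.) In D, "using the minimum" means letting the accumulator's type set the intermediate precision, for example:

    // Minimal sketch: the accumulator's type decides the intermediate precision.
    // dot!float stays in 32 bits (cheap, vectorizable);
    // dot!real forces 80-bit x87 arithmetic on x86 (slower on a P4).
    F dot(F)(const F[] a, const F[] b)
    {
        F sum = 0;              // intermediate value has the caller's type F
        foreach (i, x; a)
            sum += x * b[i];
        return sum;
    }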
Sep 26 2004
In article <cj886h$16gn$1@digitaldaemon.com>, Tim Starling says...

> I wasn't sure how closely you guys watch your wiki, so I thought I'd better
> post this here as well.
>
> http://www.prowiki.org/wiki4d/wiki.cgi?DocComments/Float
>
> Using large types for intermediate values will make floating point
> calculations much slower, especially on a Pentium 4 or later. It's a really
> bad idea. You should use the minimum by default.

Makes sense. Though I have to wonder why Intel has "effectively deprecated" 80-bit floating point. I haven't taken the time to read their docs. I don't suppose it says?

Sean
Sep 27 2004
Tim Starling wrote:

> I wasn't sure how closely you guys watch your wiki, so I thought I'd better
> post this here as well.
>
> http://www.prowiki.org/wiki4d/wiki.cgi?DocComments/Float
>
> Using large types for intermediate values will make floating point
> calculations much slower, especially on a Pentium 4 or later. It's a really
> bad idea. You should use the minimum by default.
>
> -- Tim Starling

In the context of the Intel doc., it looks like what they are suggesting is that the /programmer/ (not the compiler developer) use single precision when double precision is not needed. AFAIK, it's always been recommended that the developer use single precision (floats) rather than doubles if the extra precision is not needed and there is a lot of fp data moving around, because it is often faster.

On Intel (including the P4) the fp registers are 80 bit. All the author is suggesting in the context of the D language is that compiler developers shouldn't have to limit precision to 32 bits (floats) or 64 bits (doubles) if keeping 80-bit precision results in faster code. D allows for this where other languages may specify a maximum precision regardless of what is best for the hardware.

GCC/G++, MSVC and the latest Intel compiler all use 80-bit precision to/from the fp registers for intermediate data, and all have a switch to "improve floating point consistency" by rounding/truncating intermediate values, which is often a speed "deoptimization". This includes the P4 and AMD64 chips. D, on the other hand, follows the IEEE 754 minimum precision guidelines for floats and doubles, doesn't specify a maximum precision, and also offers the real (80-bit floating point) type for code that would benefit from it.

I don't see anywhere in that Intel doc. where it says that 80-bit fp register operations are "deprecated".

It is a different ballgame when you are talking about vectorization with SIMD instructions. For operations (and compilers) that take advantage of that, it is probably best to stick to 32- or 64-bit floating point for code that can be vectorized. From what I've seen, even Intel's compilers don't do a great job of vectorizing, though, and fall back to using the 80-bit fp general register operations.

- Dave
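(Sketch added for illustration, not part of Dave's post; assumes a current D compiler and std.stdio.) The three types Dave describes can be inspected directly with D's built-in floating-point properties; on x86, real reports the 64-bit mantissa of the 80-bit extended format:

    import std.stdio;

    void main()
    {
        // mant_dig = mantissa bits, dig = decimal digits of precision
        writefln("float : %s mantissa bits, %s decimal digits, %s bytes",
                 float.mant_dig, float.dig, float.sizeof);
        writefln("double: %s mantissa bits, %s decimal digits, %s bytes",
                 double.mant_dig, double.dig, double.sizeof);
        writefln("real  : %s mantissa bits, %s decimal digits, %s bytes",
                 real.mant_dig, real.dig, real.sizeof);
    }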
Sep 27 2004
"Dave" <Dave_member pathlink.com> wrote in message news:cj9s52$1893$1 digitaldaemon.com...Tim Starling wrote:isI wasn't sure how closely you guys watch your wiki, so I thought I'd better post this here as well. http://www.prowiki.org/wiki4d/wiki.cgi?DocComments/Float Using large types for intermediate values will make floating point calculations much slower, especially on a Pentium 4 or later. It's a really bad idea. You should use the minimum by default. -- Tim StarlingIn the context of the Intel doc., it looks like what they are suggestingthat the /programmer/ (not the compiler developer) use single precision when double precision is not needed. AFAIK, it's always been recommended that the developer use single precision (floats) rather than doubles iftheextra precision is not needed and there is a lot of fp data moving around, because it is often faster. On Intel (including the P4) the fp registers are 80 bit. All the author is suggesting in the context of the D language is that compiler developers shouldn't have to limit precision to 32 bits (floats) or 64 bits (doubles) if keeping 80 bit precision results in faster code. D is allowing for this where other languages may specify a maximum precision regardless of the what is best for the hardware. GCC/++, MSVC and the latest Intel compiler all use 80 bit precisionto/fromthe fp registers for intermediate data, and all have a switch to "improve floating point consistency" by rounding/truncating intermediate values, which is often a speed "deoptimization". This includes the P4 and AMD64 chips. D on the other hand follows IEEE 754 minimum precision guidelines for floats and doubles, doesn't specify a maximum precision and alsooffersthe real (80 bit floating point) type for code that would benefit from that. I don't see anywhere in that Intel doc. where it says that 80 bit fp register operations are "deprecated". It is a different ballgame when you are talking vectorization with SIMD instructions. For operations (and compilers) that take advantage of that, then it is probably best to stick to 32 or 64 bit floating-point for code that can be vectorized. From what I've seen, even Intel compilers don't do a great job of vectorizing though, and fall back to using the 80 bit fp general register operations. - Davewell said. I just want to add that Java's original requirement that all fp operations happen in exactly single or double precision IEEE 754 and no more or less meant performance went down the tubes. D generally seems to have learned from Java's mistakes in this regard.
Sep 27 2004
Walter wrote:

> You should add that to the wiki!

Done..
Oct 01 2004