
c++ - 80 bit long doubles, loss of precision

John Jameson <John_member pathlink.com> writes:
I've run into a problem with using long doubles in our SPICE simulator. Clearly,
the whole point of using long doubles is to get 80-bit precision in the maths.
However, the first observable symptom is that adding a very small increment
delta to a time t produces an unchanged value of t, even though 80 bits should
provide enough resolution for the addition to 'work'.

A simple test case using the same binary encoding of the numbers works OK.
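
For reference, the symptom boils down to something like this minimal sketch
(the delta value is illustrative - it's picked to sit between long double's
epsilon of roughly 1.1e-19 and double's epsilon of roughly 2.2e-16 at 1.0, so
the result depends entirely on the FPU's precision-control setting):

    #include <stdio.h>

    int main(void)
    {
        volatile long double t     = 1.0L;
        volatile long double delta = 1e-18L; /* above long double's ulp at 1.0,
                                                below double's */
        if (t + delta == t)
            printf("sum unchanged: precision control is limiting the add\n");
        else
            printf("sum changed: full 64-bit significand in effect\n");
        return 0;
    }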

A lot of head scratching and research has led me to establish that in the full
application, the x87 control word has the value 0x027F - and if I read the docs
correctly, this means it is running at 53-bit double precision (the 64-bit
format) at this time.

But in the simple test case, the value is 0x137F, implying the full 64-bit
significand of the 80-bit format.
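
For what it's worth, this is the kind of probe I've been using to read the
control word - a minimal sketch assuming a 32-bit x86 target and MSVC-style
inline assembly (the function name is mine, not from any header); the
precision-control field is bits 8-9 of the word:

    #include <stdio.h>

    /* Read the x87 control word without disturbing the FPU state. */
    unsigned short read_x87_cw(void)
    {
        unsigned short cw;
        __asm fnstcw cw   /* store the control word into cw */
        return cw;
    }

    int main(void)
    {
        static const char *pc_names[] = {
            "24-bit (float)", "reserved",
            "53-bit (double)", "64-bit (long double)"
        };
        unsigned short cw = read_x87_cw();
        printf("control word = 0x%04X, precision control = %s\n",
               cw, pc_names[(cw >> 8) & 3]);
        return 0;
    }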

Question 1: where does DM set the x87 control word? Clearly not that often,
because I've inspected the assembly language quite a lot in the course of
debugging this!

Question 2: what happens when the application is running a mix of code from SC
7.5, VC++ and DM - who is responsible for (re)setting the x87 control word, and
when does this happen?

Cheers
John Jameson.
Jul 18 2005
"Walter" <newshound digitalmars.com> writes:
"John Jameson" <John_member pathlink.com> wrote in message
news:dbghlc$2pm$1 digitaldaemon.com...
> I've run into a problem with using long doubles in our SPICE simulator.
> Clearly, the whole point of using long doubles is to get 80-bit precision
> in the maths. However, the first observable symptom is that adding a very
> small increment delta to a time t produces an unchanged value of t, even
> though 80 bits should provide enough resolution for the addition to 'work'.
>
> A simple test case using the same binary encoding of the numbers works OK.
>
> A lot of head scratching and research has led me to establish that in the
> full application, the x87 control word has the value 0x027F - and if I read
> the docs correctly, this means it is running at 53-bit double precision
> (the 64-bit format) at this time.
>
> But in the simple test case, the value is 0x137F, implying the full 64-bit
> significand of the 80-bit format.
>
> Question 1: where does DM set the x87 control word? Clearly not that often,
> because I've inspected the assembly language quite a lot in the course of
> debugging this!
It sets it in the startup code \dm\src\win32\_8087.asm, where it executes an 'finit' instruction.
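
For reference, finit loads the documented x87 defaults: control word 0x037F,
meaning all exceptions masked, round to nearest, and precision control at the
full 64-bit significand. A minimal sketch of re-establishing that word from C,
assuming MSVC-style inline assembly (the function name is illustrative):

    /* Reload the control word that DM's startup finit establishes. */
    void restore_dm_default_cw(void)
    {
        unsigned short cw = 0x037F;  /* the word finit loads */
        __asm fldcw cw
    }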
> Question 2: what happens when the application is running a mix of code from
> SC 7.5, VC++ and DM - who is responsible for (re)setting the x87 control
> word, and when does this happen?
DM doesn't go around resetting it, except for temporary uses. It's entirely possible that VC is setting it to double precision (the 64-bit format), since VC does not support 80-bit reals in any way, shape, or form. In fact, it's likely that VC does this in order to mimic the behavior of the newer 64-bit FP instructions, which compute doubles directly. I suggest that you either abandon VC for doing floating point work, or set/reset the FPU control word when calling into VC code and upon its return.
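
A minimal sketch of that bracketing approach, again assuming MSVC-style inline
assembly; some_vc_function stands in for whatever VC entry point the
application calls, and the names are illustrative rather than from any DM or
VC header:

    /* Save DM's control word across a call into VC-compiled code, then
       restore it so long double arithmetic keeps its 64-bit significand. */
    extern void some_vc_function(void);

    static unsigned short dm_cw;

    void call_into_vc(void)
    {
        __asm fnstcw dm_cw   /* save DM's control word (extended precision) */
        some_vc_function();  /* may quietly drop precision to double */
        __asm fldcw dm_cw    /* put extended precision back on return */
    }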
Jul 20 2005