c++ - 80 bit long doubles, loss of precision
I've run into a problem with using long doubles in our SPICE simulator. Clearly, the whole point of using long doubles is to get 80-bit precision in the maths. However, the initially observable effect of my problem is that an addition of a time t plus a very small increment delta is producing an unchanged value of t, whereas there should be enough resolution at 80 bits for the addition to 'work'. Tests with a simple test case and the same binary encoding of the numbers seem to work OK.

A lot of head scratching and research has led me to establish that in the full application, the x87 control word has the value 0x027F - and if I read the docs correctly, this means it is running with only 64-bit (double) precision at that point. But in the simple test case, the value is 0x137F, implying 80-bit (extended) precision.

Question 1: where does DM set the x87 control word? Clearly not that often, because I've inspected the assembly language quite a lot in the course of debugging this!

Question 2: when the application is running a mix of code from SC 7.5, VC++ and DM, who is responsible for (re)setting the x87 control word, and when does this happen?

Cheers
John Jameson.

Jul 18 2005
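[Editor's note: a minimal sketch, not from the original post, of how the x87 control word can be read and its precision-control bits raised back to a 64-bit mantissa. It assumes GCC/Clang-style inline assembly on x86; on DMC or MSVC the _control87() call in <float.h> serves the same purpose.]

#include <cstdio>
#include <cstdint>

// Read the current x87 control word (fnstcw stores it to memory).
static uint16_t read_cw()
{
    uint16_t cw;
    __asm__ __volatile__("fnstcw %0" : "=m"(cw));
    return cw;
}

// Load a new control word into the FPU.
static void write_cw(uint16_t cw)
{
    __asm__ __volatile__("fldcw %0" : : "m"(cw));
}

int main()
{
    uint16_t cw = read_cw();
    std::printf("x87 control word: 0x%04X\n", cw);

    // Bits 8-9 (the PC field) select the mantissa width:
    //   00 = 24-bit (float), 10 = 53-bit (double), 11 = 64-bit (long double).
    // So 0x027F means rounding to 53 bits; 0x137F means rounding to 64 bits.
    write_cw(static_cast<uint16_t>(cw | 0x0300));   // force 64-bit mantissa

    // A delta well below the double epsilon (~2.2e-16) vanishes when the FPU
    // rounds to 53 bits, but survives at 64 bits.
    volatile long double t     = 1.0L;
    volatile long double delta = 1.0e-18L;
    std::printf("t + delta %s t\n", (t + delta == t) ? "==" : "!=");
    return 0;
}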
"John Jameson" <John_member pathlink.com> wrote in message news:dbghlc$2pm$1 digitaldaemon.com...I've run into a problem with using long doubles in our SPICE simulator. Jul 20 2005