
digitalmars.D.learn - Calculation differences between Debug and Release mode

reply "Jeremy DeHaan" <dehaan.jeremiah gmail.com> writes:
I have a function that will calculate a random point on a circle 
based on a specified radius and total number of points. The only 
point in question is the first point. I get different values when 
the code compiles in Release and Debug mode.

Here is some code:

Vector2f getPoint(uint index)
{
		
	static const(float) pi = 3.141592654f;
		
	float angle = index * 2 * pi / m_pointCount - pi / 2;

		
	float x = cos(angle) * m_radius;
	float y = sin(angle) * m_radius;
		

	return Vector2f(m_radius + x, m_radius + y);
}

Vector2f is simply a struct that has 2 floats.

In debug mode this works as expected. Let's say the radius is 50.
getPoint(0) returns a vector that prints X: 50 Y: 0. For some reason,
in Release mode the same function returns a vector that prints X: 50
Y: 4.77673e-14. Now, 4.77673e-14 is a crazy small number that might
as well be 0, but why the difference?

Also, consider the following change in the function:

Vector2f getPoint(uint index)
{
		
	static const(float) pi = 3.141592654f;
		
	float angle = index * 2 * pi / m_pointCount - pi / 2;

	
	float x = cos(angle) * m_radius;
	float y = sin(angle) * m_radius;

	Vector2f temp = Vector2f(m_radius + x, m_radius + y);
		
	return temp;
}

Surprisingly, when calling getPoint(0), this version returns a 
Vector2f that prints X: 50 Y: 0 even in Release mode.

Again, it's probably not that big of a deal since 4.77673e-14 is 
so small, but I'm curious about this. Anyone wanna shed some 
light on this?
Apr 12 2013
next sibling parent reply Alexandr Druzhinin <drug2004 bk.ru> writes:
I'm not sure, but I suspect this is because of an 80-bit intermediate 
floating point result. Its precision is excessive and gives us this 
unexpected result. But when you use an intermediate variable, this 
excessive intermediate result is rounded properly and you get what you 
expect. See here: http://d.puremagic.com/issues/show_bug.cgi?id=6531. 
The reason is:

float a, b, c, d, foo;
foo = a + b / c;
if (d < foo) {  // you compare two float values
} else {
}

but:

if (d < (a + b / c)) {  // you compare a float value with an 80-bit value,
                        // and sometimes the result won't be what you expect
} else {
}

In your case, in release mode the compiler may do some optimization and 
not round the value properly, but by using temp you force proper rounding.
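
To make that concrete, here is a minimal runnable sketch (the values are 
arbitrary, and real merely stands in for the 80-bit intermediate the 
hardware may keep around):

    import std.stdio : writeln;

    void main()
    {
        float a = 1.0f, b = 1.0f, c = 3.0f;

        // Rounded to float when it is assigned:
        float foo = a + b / c;

        // Standing in for the wider intermediate an optimized build
        // may keep in an 80-bit register:
        real wide = a + cast(real) b / c;

        // The two are not bit-identical, so a comparison against some
        // float d can go either way depending on which form the
        // compiler keeps around:
        writeln(foo - wide);   // a small non-zero difference
    }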
Apr 13 2013
parent "bearophile" <bearophileHUGS lycos.com> writes:
Alexandr Druzhinin:

 I'm not sure, but I suspect this is because of 80-bit 
 intermediary float point operation result.
Maybe D uses higher precision FP values in some of those intermediate 
computations.

In general float is useful if you have to store many of them, for storage 
reasons (reduce memory use, reduce cache pressure), but otherwise for 
computations inside a function it is better to use doubles.

Bye,
bearophile
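
As a sketch of that suggestion (the struct and the parameter names here 
are stand-ins for the code in the original post, not any library's actual 
API): do the intermediate math in double and keep only the stored fields 
as float.

    import std.math : sin, cos, PI;
    import std.stdio : writeln;

    struct Vector2f { float x, y; }

    // Compute in double, store the result as float.
    Vector2f getPoint(uint index, uint pointCount, double radius)
    {
        immutable double pi = PI;   // narrowed from real
        double angle = index * 2 * pi / pointCount - pi / 2;
        double x = cos(angle) * radius;
        double y = sin(angle) * radius;
        return Vector2f(cast(float)(radius + x), cast(float)(radius + y));
    }

    void main()
    {
        writeln(getPoint(0, 25, 50));   // prints Vector2f(50, 0)
    }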
Apr 13 2013
prev sibling next sibling parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Sat, 13 Apr 2013 08:07:39 +0200, Jeremy DeHaan  
<dehaan.jeremiah gmail.com> wrote:

 In debug mode this works as expected. Let's say the radius is 50.  
 getPoint(0) returns a vector that prints X: 50 Y: 0. For some reason,  
 the same function will return a vector that prints X: 50 Y: 4.77673e-14.  
 Now, 4.77673e-14 is a crazy small number that might as well be 0, but  
 why the difference?
Sounds to me like a bug. I've tried recreating the problem on my machine
(Win7, dmd 2.062 32-bit, no flags other than debug/release), but can't
see it happen here.

My (perceived) version of your code:

import std.stdio : writeln;
import std.math : sin, cos;

struct Vector2f { float x, y; }

int m_pointCount = 25;
float m_radius = 50;

Vector2f getPoint(uint index)
{
    static const(float) pi = 3.141592654f;

    float angle = index * 2 * pi / m_pointCount - pi / 2;

    float x = cos(angle) * m_radius;
    float y = sin(angle) * m_radius;

    return Vector2f(m_radius + x, m_radius + y);
}

void main( string[] args )
{
    writeln( getPoint( 0 ) );
}

Could you please post here the minimum code necessary to get the behavior
you describe, as well as the platform and compiler flags you're using?

-- 
Simen
Apr 13 2013
parent reply "Jeremy DeHaan" <dehaan.jeremiah gmail.com> writes:
On Saturday, 13 April 2013 at 11:59:12 UTC, Simen Kjaeraas wrote:
 On Sat, 13 Apr 2013 08:07:39 +0200, Jeremy DeHaan 
 <dehaan.jeremiah gmail.com> wrote:

 In debug mode this works as expected. Let's say the radius is 
 50. getPoint(0) returns a vector that prints X: 50 Y: 0. For 
 some reason, the same function will return a vector that 
 prints X: 50 Y: 4.77673e-14. Now, 4.77673e-14 is a crazy small 
 number that might as well be 0, but why the difference?
 Sounds to me like a bug. I've tried recreating the problem on my machine
 (Win7, dmd 2.062 32-bit, no flags other than debug/release), but can't
 see it happen here.

 My (perceived) version of your code:

 import std.stdio : writeln;
 import std.math : sin, cos;

 struct Vector2f { float x, y; }

 int m_pointCount = 25;
 float m_radius = 50;

 Vector2f getPoint(uint index)
 {
     static const(float) pi = 3.141592654f;

     float angle = index * 2 * pi / m_pointCount - pi / 2;

     float x = cos(angle) * m_radius;
     float y = sin(angle) * m_radius;

     return Vector2f(m_radius + x, m_radius + y);
 }

 void main( string[] args )
 {
     writeln( getPoint( 0 ) );
 }

 Could you please post here the minimum code necessary to get the behavior
 you describe, as well as the platform and compiler flags you're using?
After playing around I discovered that Mono-D automatically uses -O for 
release builds, and it looks like that is what is causing this. After 
compiling using that switch just from the command line I reproduced the 
problem. I'm on Windows, and my compilation was nothing more than 
"dmd -O -release main.d" to get the issue I described.
Apr 13 2013
parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Sat, 13 Apr 2013 18:36:21 +0200, Jeremy DeHaan  
<dehaan.jeremiah gmail.com> wrote:

 I'm on Windows, and I my compilation was nothing more than "dmd -O  
 -release main.d" to get the issue I described.
Turns out, the problem starts here:

    static const(float) pi = 3.141592654f;

If we compare that to std.math.PI, we see that they're different:

    >> writeln( 3.141592654f - std.math.PI );
    4.10207e-10

If, however, we assign these values to some temporary floats, we see that
they're equal:

    >> float a = 3.141592654f;
    >> float b = std.math.PI;
    >> writeln( a - b );
    0

Replace float with double or real in the above, and the difference
reappears.

So, we have established that 3.141592654f is a valid approximation to pi
for a float. The problem thus has to be one of precision. I'm not sure if
it's a valid optimization for the compiler to use doubles instead of
floats (it certainly seems innocuous enough). I'd say file a bug on it.
Worst case, it gets closed as invalid.

-- 
Simen
Apr 20 2013
parent reply Ali Çehreli <acehreli yahoo.com> writes:
Thanks for the analysis.

On 04/20/2013 05:30 AM, Simen Kjaeraas wrote:

      static const(float) pi = 3.141592654f;

 If we compare that to std.math.PI, we see that they're different:

      >> writeln( 3.141592654f - std.math.PI );
      4.10207e-10
std.math.PI is a 'real'. According to the language definition, the 
calculation above must be done as 'real'. It is described under "Usual 
Arithmetic Conversions" here:

  http://dlang.org/type.html

Quoting: "If either operand is real, the other operand is converted to 
real."

Unfortunately, the variable 'pi' above cannot be as good a representation 
of pi as std.math.PI.
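
A small sketch of that rule (the variable names are mine): the float 
literal is promoted to real as soon as it meets std.math.PI, so the 
subtraction is carried out at real precision and the literal's error 
becomes visible.

    import std.math : PI;          // PI is a real
    import std.stdio : writeln;

    void main()
    {
        // float op real => the float operand is converted to real:
        auto diff = 3.141592654f - PI;
        writeln(typeof(diff).stringof);   // real
        writeln(diff);                    // ~4.1e-10, the literal's error

        // Round both sides to float first and the difference vanishes:
        float a = 3.141592654f;
        float b = PI;                     // lossy real -> float conversion
        writeln(a - b);                   // 0
    }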
 If, however, we assign these values to some temporary floats, we see that
 they're equal:

      >> float a = 3.141592654f;
      >> float b = std.math.PI;
It is arguable whether that lossy conversion from real to float should be allowed. Such rules have been inherited all the way from C. They will not change at this point for D. :/
      >> writeln( a - b );
      0
 Replace float with double or real in the above, and the difference
 reappears.

 So, we have established that 3.141592654f is a valid approximation to pi
 for a
 float. The problem thus has to be one of precision. I'm not sure if it's
 a valid
 optimization for the compiler to use doubles instead of floats (it
 certainly
 seem innocuous enough). I'd say file a bug on it. Worst case, it gets
 closed as
 invalid.
Unfortunately, it is not a bug in D or dmd.

Ali
Apr 20 2013
parent reply "Casper =?UTF-8?B?RsOmcmdlbWFuZCI=?= <shorttail gmail.com> writes:
The D book has a diagram that shows implicit conversions. All 
implicit conversions from integral types to floating point go to 
real, not double or float.
Apr 20 2013
parent Ali Çehreli <acehreli yahoo.com> writes:
On 04/20/2013 11:04 AM, "Casper Færgemand" <shorttail gmail.com> wrote:

 The D book has a diagram that shows implicit conversions.
It is Figure 2.3 on page 44 of my copy of TDPL.
 All implicit
 conversions from integral types to floating point go to real, not double
 or float.
Yes. The figure shows that implicit conversions are possible between 
float <--> double and double <--> real (in both directions). It indicates 
that float <--> real is possible as well.

Ali
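
A tiny sketch of those conversions (my own example, not taken from TDPL): 
all of the following initializations compile without a cast.

    import std.math : PI;   // a real
    import std.stdio : writeln;

    void main()
    {
        real   r  = PI;
        double d  = r;    // real   -> double : implicit (lossy)
        float  f  = d;    // double -> float  : implicit (lossy)
        double d2 = f;    // float  -> double : implicit
        real   r2 = d2;   // double -> real   : implicit
        writeln(f, " ", r2);
    }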
Apr 20 2013
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 13 Apr 2013 02:07:39 -0400, Jeremy DeHaan  
<dehaan.jeremiah gmail.com> wrote:

 I have a function that will calculate a random point on a circle based  
 on a specified radius and total number of points. The only point in  
 question is the first point. I get different values when the code  
 compiles in Release and Debug mode.

 Here is some code:

 Vector2f getPoint(uint index)
 {
 		
 	static const(float) pi = 3.141592654f;
 		
 	float angle = index * 2 * pi / m_pointCount - pi / 2;

 		
 	float x = cos(angle) * m_radius;
 	float y = sin(angle) * m_radius;
 		

 	return Vector2f(m_radius + x, m_radius + y);
 }

 Vector2f is simply a struct that has 2 floats.

 In debug mode this works as expected. Let's say the radius is 50.  
 getPoint(0) returns a vector that prints X: 50 Y: 0. For some reason,  
 the same function will return a vector that prints X: 50 Y: 4.77673e-14.  
 Now, 4.77673e-14 is a crazy small number that might as well be 0, but  
 why the difference?
I would suspect that the issue is floating point error. On certain 
hardware, the CPU uses higher-precision 80-bit floating points. When you 
store those back to doubles, the extra precision is truncated.

In debug mode, without optimization, the compiler will do exactly as you 
say, storing intermediate calculations and using the truncated stored 
data for the next line. But with optimizations turned on, the compiler 
might take shortcuts which allow it to use the higher-precision data 
still in the registers in the next line.

-Steve
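
A sketch of that effect (my own reconstruction; real stands in for the 
80-bit register, and whether an actual build behaves this way depends on 
the compiler and on -O): keeping the intermediate wide leaves a residue 
of the same magnitude as the Y: 4.77673e-14 in the original post, while 
rounding to float at each step gives exactly 0.

    import std.math : sin;
    import std.stdio : writeln;

    void main()
    {
        float radius = 50;
        float angle  = -3.141592654f / 2;   // first point, index 0

        // Roughly what the optimizer may do: keep the product at
        // 80-bit precision (modelled here with an explicit real):
        real wide = sin(cast(real) angle) * radius;
        writeln(radius + wide);     // tiny non-zero residue, ~5e-14

        // Roughly what the unoptimized build does: round to float at
        // each assignment before the next operation:
        float narrow = cast(float)(sin(angle) * radius);
        writeln(radius + narrow);   // exactly 0
    }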
Apr 15 2013
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 15 Apr 2013 11:51:07 -0400, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 I would suspect that the issue is floating point error.  On certain  
 hardware, the CPU uses higher-precision 80-bit floating points.  When  
 you store those back to doubles, the extra precision is truncated.
I see you use float not double, but the point is still valid. -Steve
Apr 15 2013
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Mon, 15 Apr 2013 11:51:43 -0400
schrieb "Steven Schveighoffer" <schveiguy yahoo.com>:

 On Mon, 15 Apr 2013 11:51:07 -0400, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:
 
 I would suspect that the issue is floating point error.  On certain  
 hardware, the CPU uses higher-precision 80-bit floating points.  When  
 you store those back to doubles, the extra precision is truncated.
I see you use float not double, but the point is still valid. -Steve
What worries me is that this jeopardizes the efforts put into C to make 
floating point calculations the same under all circumstances. GCC, for 
example, has the -ffast-math switch if you really want just the fastest 
way to get a result, but otherwise follows strict evaluation and 
assignment order of floating point math even in const-folding, so 
situations like these don't occur.

I completely agree with your explanation of why it happens, and I think 
it is a bug that must be reported to the respective compiler developers.

-- 
Marco
Apr 28 2013
parent Ali Çehreli <acehreli yahoo.com> writes:
On 04/28/2013 12:39 PM, Marco Leise wrote:

 What worries me is that this jeopardizes the efforts put into
 C to make floating point calculations the same under all
 circumstances.
That is news to me. I remember knowing this problem from C. Perhaps something new in the C standard that I haven't been following?
 GCC for example has the fast-math switch if you
 really want just the fastest way to get a result, but
 otherwise follows strict evaluation and assignment order of
 floating point math even in const-folding, so situations like
 these don't occur.
That may be gcc's attempt at bringing sanity to this aspect of C.

Ali
Apr 28 2013