
D - cast(int) real BUG

reply "Dario" <supdar yahoo.com> writes:
int main()
{
    real a = 0.9;
    int b = cast(int) a;
    printf("%i", b);
    return 0;
}

This prints -2147483648...
The compiler cast 'a' to uint.
If I change it to "printf("%u", b);" it correctly prints 0.

Anyway, does casting a floating-point value to int always floor it?
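
For reference, a minimal sketch of what a conforming compiler should produce (this assumes the usual truncating float-to-int conversion, that printf's %i is just a synonym for %d as in C, and that core.stdc.stdio supplies the printf binding in current D runtimes; expected outputs are in the comments):

import core.stdc.stdio : printf;

int main()
{
    real a = 0.9;
    int b = cast(int) a;    // the fractional part should simply be dropped

    printf("%i\n", b);      // expected: 0
    printf("%d\n", b);      // expected: 0 (%d and %i mean the same thing to printf)
    printf("%u\n", b);      // b reinterpreted as unsigned, also 0 here
    return 0;
}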
Jun 13 2003
"Dario" <supdar yahoo.com> writes:
int main()
{
    real a = 0.9;
    ulong b = cast(ulong) a;
    printf("%u", cast(uint) b);
    return 0;
}

Prints: 1717987328
In hex it would be: 0x66666800
I have no idea what's happening here.

ulong b = cast(ulong)cast(float) a; <- this fails as well
ulong b = cast(ulong)cast(double) a; <- OK
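
One way to dig into a result like this is to dump the raw bit pattern of the intermediate float; the sketch below assumes std.stdio's writefln, and the 0x3F666666 expectation is simply the IEEE-754 single-precision encoding of 0.9:

import std.stdio;

void main()
{
    real  a = 0.9;
    float f = cast(float) a;

    // Reinterpret the float's storage as a uint to see its raw encoding.
    uint bits = *cast(uint*) &f;

    writefln("float value : %s", f);
    writefln("raw bits    : 0x%08X", bits);       // 0x3F666666 for 0.9f
    writefln("cast(ulong) : %s", cast(ulong) f);  // expected: 0 on a correct compiler
}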
Jun 13 2003
"Walter" <walter digitalmars.com> writes:
"Dario" <supdar yahoo.com> wrote in message
news:bccpsa$14je$1 digitaldaemon.com...
 int main()
 {
     real a = 0.9;
     int b = cast(int) a;
     printf("%i", b);
     return 0;
 }

 This prints -2147483648...
 The compiler cast 'a' to uint.
 If I change it to "printf("%u", b);" it correctly prints 0.
This looks like a bug with %i.
 Anyway, does casting a floating-point value to int always floor it?
Yes.
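
A small sketch of those semantics for reference (under D's rules, as in C, the fractional part is dropped, i.e. the value is rounded toward zero, which coincides with floor only for non-negative inputs):

import std.stdio;

void main()
{
    writeln(cast(int)  0.9);    // 0
    writeln(cast(int)  1.9);    // 1
    writeln(cast(int) -1.9);    // -1: rounded toward zero, not floor(-1.9) == -2
}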
Jun 14 2003