digitalmars.D - Portability of uint over/underflow behavior
- dsimcha (14/14) Jan 03 2009 Is it portable to rely on unsigned integer types wrapping to their max v...
- Don (3/21) Jan 03 2009 Not so. You can rely on this. It's a fundamental mathematical property.
- bearophile (4/5) Jan 03 2009 If integral overflow checks are introduced, they will become compile/run...
- Don (3/10) Jan 03 2009 The question was about incrementing uint, not int. Preventing wraparound...
- bearophile (5/7) Jan 05 2009 If optional runtime overflow controls are added to integral values, then...
- Don (15/24) Jan 05 2009 But uints HAVE no overflow! In the case of an int, you are approximating...
- Nick Sabalausky (6/21) Jan 05 2009 A uint is an int with the domain of possible values shifted by +uint.max...
- dsimcha (4/27) Jan 05 2009 Mostly, I was thinking that relying on this behavior wouldn't work if so...
- Don (16/38) Jan 05 2009 I suspect that in most of the cases you're thinking of, you actually
- Nick Sabalausky (9/49) Jan 05 2009 I was referring to the detection of wraparounds regardless of what CPU f...
- Don (12/65) Jan 06 2009 It's been present since Pentium MMX. eg here's the instruction for
- Andrei Alexandrescu (3/21) Jan 03 2009 Walter's intent is to put that guarantee in the language.
- bearophile (9/12) Jan 07 2009 I may need to work with numbers that are always >= 0 that can't fit in 6...
Is it portable to rely on unsigned integer types wrapping to their max value when they are subtracted from too many times, i.e.

uint foo = 0;
foo--;
assert(foo == uint.max);

ulong bar = 0;
bar--;
assert(bar == ulong.max);

ubyte baz = 0;
baz--;
assert(baz == ubyte.max);

I assume that in theory this is hardware-specific behavior and not guaranteed by the spec, but is it universal enough that it can be considered portable in practice?
Jan 03 2009
dsimcha wrote:
> Is it portable to rely on unsigned integer types wrapping to their max value when they are subtracted from too many times, i.e.
>
> uint foo = 0;
> foo--;
> assert(foo == uint.max);
>
> ulong bar = 0;
> bar--;
> assert(bar == ulong.max);
>
> ubyte baz = 0;
> baz--;
> assert(baz == ubyte.max);
>
> I assume that in theory this is hardware-specific behavior and not guaranteed by the spec,

Not so. You can rely on this. It's a fundamental mathematical property.

> but is it universal enough that it can be considered portable in practice?
Jan 03 2009
Don:
> Not so. You can rely on this. It's a fundamental mathematical property.

If integral overflow checks are introduced, they will become compile/runtime errors.

Bye,
bearophile
Jan 03 2009
bearophile wrote:
> Don:
>> Not so. You can rely on this. It's a fundamental mathematical property.
>
> If integral overflow checks are introduced, they will become compile/runtime errors.

The question was about incrementing uint, not int. Preventing wraparound on uints would break everything!
Jan 03 2009
Don:
> The question was about incrementing uint, not int. Preventing wraparound on uints would break everything!

If optional runtime overflow controls are added to integral values, then they are performed on ubyte/ushort/uint/ulong/ucent too, because leaving a hole in that safety net is very bad and useless. In modules where you need wraparounds, you can tell the compiler to disable such controls (recently I have suggested a syntax that works locally: safe(...) {...}, but Walter seems to prefer a module-level syntax for this).

Bye,
bearophile
Jan 05 2009
bearophile wrote:
> Don:
>> The question was about incrementing uint, not int. Preventing wraparound on uints would break everything!
>
> If optional runtime overflow controls are added to integral values, then they are performed on ubyte/ushort/uint/ulong/ucent too, because leaving a hole in that safety net is very bad and useless.

But uints HAVE no overflow! In the case of an int, you are approximating a mathematical infinite-precision integer. An overflow means you went outside the available precision. A uint is quite different. uint arithmetic is perfectly standard modulo 2^32 arithmetic. Don't be confused by the fact that many people use them as approximations to infinite-precision positive integers. That's _not_ what they are.

Consider for example machine addresses on a 32-bit address bus. Given pointers p and q, p + q - p is perfectly well defined, and is always equal to q. It makes no difference whether p is greater than or less than q. q - p is definitely not an int (q could be uint.max, and p could be 0). Likewise, p + 1 is always defined, even if p is uint.max (p + 1 will then be 0).

> In modules where you need wraparounds, you can tell the compiler to disable such controls (recently I have suggested a syntax that works locally: safe(...) {...}, but Walter seems to prefer a module-level syntax for this).
Jan 05 2009
"Don" <nospam@nospam.com> wrote in message news:gjsnf2$26g4$1@digitalmars.com...
> But uints HAVE no overflow! In the case of an int, you are approximating a mathematical infinite-precision integer. An overflow means you went outside the available precision. A uint is quite different. uint arithmetic is perfectly standard modulo 2^32 arithmetic. Don't be confused by the fact that many people use them as approximations to infinite-precision positive integers. That's _not_ what they are.

A uint is an int with the domain of possible values shifted by +uint.max/2 (while retaining binary compatibility with the overlapping values, of course). Modulo 2^32 arithmetic is just one possible use for them. For other uses, detecting overflow can be useful.
Jan 05 2009
== Quote from Nick Sabalausky (a@a.a)'s article
> A uint is an int with the domain of possible values shifted by +uint.max/2 (while retaining binary compatibility with the overlapping values, of course). Modulo 2^32 arithmetic is just one possible use for them. For other uses, detecting overflow can be useful.

Mostly, I was thinking that relying on this behavior wouldn't work if some DS9K architecture used a signed int representation other than two's complement or used saturation arithmetic. Am I wrong here, too?
Jan 05 2009
Nick Sabalausky wrote:
> A uint is an int with the domain of possible values shifted by +uint.max/2 (while retaining binary compatibility with the overlapping values, of course). Modulo 2^32 arithmetic is just one possible use for them. For other uses, detecting overflow can be useful.

I suspect that in most of the cases you're thinking of, you actually want to detect when the result is greater than int.max, not when it exceeds uint.max?

What you're calling 'overflow' in unsigned operations is actually the carry flag. The CPU also has an overflow flag, which applies to signed operations. When it's set, it means the result was so big that the sign was corrupted (eg int.max + int.max gives a negative result). An overflow is always an error, I think. (And if you were using, say, a sign-magnitude representation instead of two's complement, int.max + int.max would be a _different_ wrong number.) But a carry is not an error. It's expected, and indicates that a wraparound occurred.

By the way, there are other forms of integer which _are_ supported in x86 hardware. Integers which saturate to a maximum value can be useful, ie (int.max + 1 == int.max).
Jan 05 2009
"Don" <nospam@nospam.com> wrote in message news:gjt7cb$1jn$1@digitalmars.com...
> What you're calling 'overflow' in unsigned operations is actually the carry flag. The CPU also has an overflow flag, which applies to signed operations. [...] But a carry is not an error. It's expected, and indicates that a wraparound occurred.

I was referring to the detection of wraparounds regardless of what CPU flag is used to indicate that a wraparound occurred. I'd say the vast majority of the time you're working above the asm level, you care much more about variables potentially exceeding their limits than "overflow flag" vs "carry flag".

> By the way, there are other forms of integer which _are_ supported in x86 hardware. Integers which saturate to a maximum value can be useful, ie (int.max + 1 == int.max).

You're kidding me! The x86 has a native capability for that? Since when? How is it used? (I'd love to see D support for it ;) )
Jan 05 2009
Nick Sabalausky wrote:
>> By the way, there are other forms of integer which _are_ supported in x86 hardware. Integers which saturate to a maximum value can be useful, ie (int.max + 1 == int.max).
>
> You're kidding me! The x86 has a native capability for that? Since when? How is it used? (I'd love to see D support for it ;) )

It's been present since the Pentium MMX. eg, here's the instruction for addition of shorts in MMX:

PADDSW mmx1, mmx2/mem64
"Packed add signed with saturation words"

For each packed value in the destination, if the value is larger than 0x7FFF it is saturated to 0x7FFF, and if it is less than -0x8000 it is saturated to 0x8000. There are signed and unsigned versions for add and subtract.

There are also similar instructions in SSE2, but they use the 128 bit registers, obviously. There are some other cool instructions in SSE, such as PAVGB, which does byte-by-byte averaging using an extra bit for the intermediate calculation; and there are MIN and MAX instructions as well.
Jan 06 2009
dsimcha wrote:
> Is it portable to rely on unsigned integer types wrapping to their max value when they are subtracted from too many times? [...] I assume that in theory this is hardware-specific behavior and not guaranteed by the spec, but is it universal enough that it can be considered portable in practice?

Walter's intent is to put that guarantee in the language.

Andrei
Jan 03 2009
Don:
> What you're calling 'overflow' in unsigned operations is actually the carry flag. The CPU also has an overflow flag, which applies to signed operations.

I may need to work with numbers that are always >= 0 that can't fit in 63 bits. This has actually happened to me once.

A third possible syntax, even more localized (to be used with or instead of the other two possible syntaxes):

unsafe uint x = uint.max;
uint y = uint.max;
x++; // ==> OK, now x == 0
y++; // ==> throws runtime overflow exception

Bye,
bearophile
Jan 07 2009