February 23, 2011
written by Walter Bright
Systems programming languages come with a menagerie of integer types. There's usually one for each small power-of-2 byte size, in signed and unsigned flavors. Two's-complement arithmetic has some inherent gotchas - overflow, truncation, and signed vs. unsigned mixups. Different languages deal with these gotchas in different ways.
The D programming language chose to follow the C (and C++) rules for such, because experience shows they work and they are very familiar to programmers. The gotchas are also familiar, and programmers usually take them in stride. But can we improve on them and still be compatible with the C rules?
A Quick Review
The C rules are called the value preserving rules, and come from C99 6.3.1.1 (the integer promotions) and 6.3.1.8 (the usual arithmetic conversions). Summarizing, for the operands of a binary expression:
- Is any operand of an integral type smaller than int? Convert it to int.
- Is any operand unsigned long? Convert the other to unsigned long.
- (else) Is any operand signed long? Convert the other to signed long.
- (else) Is any operand unsigned int? Convert the other to unsigned int.
This means that:
unsigned char c1 = 200;
unsigned char c2 = 200;
int i = c1 + c2;
will produce 400 as the answer, not 144 which would happen if the types were not promoted (!). That gives us what we want, and we're happy.
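Under those promotion rules, the example really does compute 400. Here's a minimal sketch (the function name is mine, purely for illustration):

```c
#include <assert.h>

/* Both unsigned char operands are promoted to int before the add,
   so the arithmetic happens at int width and nothing wraps. */
int promoted_sum(unsigned char a, unsigned char b) {
    return a + b;   /* computed as int: 200 + 200 == 400 */
}
```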
But there's a dark side to this. C will happily implicitly convert an integer to a smaller type, resulting in silent truncation:
unsigned char c3 = i; // 400 is truncated to 144
This is not so nice. Many compilers offer a warning about this, unless an explicit cast is also inserted:
unsigned char c3 = (unsigned char)i;
So far, so good. But this starts to get annoying with expressions like:
unsigned char c3 = i & 0x3F;
Cripes, we programmers know there is no problem with this statement, no bits are lost. But the compiler keeps complaining that we should write it as:
unsigned char c3 = (unsigned char)(i & 0x3F);
Ugh. But wait, there's more:
short s = c3 + 1;
We know that'll fit in a short. Guaranteed. But the compiler complains anyway, and we have to rewrite it as:
short s = (short)(c3 + 1);
Can we do better? Yes, with “Value Range Propagation”, a historically obscure compiler optimization that became a handy feature in the D programming language.
Value Range Propagation
The idea behind this is simple. For each expression, a maximum and minimum value it can have is computed statically (during compilation). When exact values are not possible, a conservative estimate is computed. When it comes time for an implicit conversion, if the max and min value can fit in the converted type, then no complaint is issued. If not, a cast is still necessary.
For example, for the integer literal 100, the min and max values are (100, 100). For unsigned char c, the min and max values are (0, 255). The min and max values of (c + 100) will then be (100, 355). This fits into, say, a short:
short s = c + 100;
and no error is issued. The min and max values of (i & 0x3F) are (0, 0x3F), which also fit in an unsigned char, so:
unsigned char c3 = i & 0x3F;
compiles without complaint. But,
unsigned char c3 = i & 0x14A;
does not compile. Similarly,
unsigned char c1, c2;
unsigned char c3 = c1 + c2;
also will not compile, although this may initially be surprising. The min and max values of the (c1 + c2) expression are (0, 510), which will not fit back into an unsigned char.
The complexity of doing value range propagation stems from figuring out all the rules for computing min and max values for each of the arithmetic operations, and all the various integer types the operands can be.
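Those rules amount to interval arithmetic over the expression tree. A toy sketch of two of them (the names and the non-negative simplification for & are mine; a real compiler handles the full signed cases):

```c
#include <assert.h>

/* The statically known (min, max) of a subexpression. */
typedef struct { long long min, max; } Range;

/* Addition: the bounds simply add. */
Range range_add(Range a, Range b) {
    Range r = { a.min + b.min, a.max + b.max };
    return r;
}

/* Bitwise AND, assuming the maxes are non-negative: the result is
   never negative and never exceeds either operand's max.  This is a
   conservative estimate - the true reachable set may be smaller. */
Range range_and(Range a, Range b) {
    Range r = { 0, a.max < b.max ? a.max : b.max };
    return r;
}

/* Can every value in r be represented in a type spanning [lo, hi]? */
int fits(Range r, long long lo, long long hi) {
    return r.min >= lo && r.max <= hi;
}
```

With these, (c + 100) gets the range (100, 355) and passes the short check, while (c1 + c2) gets (0, 510) and fails the unsigned char check.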
It's Not Perfect
unsigned char c1, c2;
...
unsigned char c3 = c1 + c2;
The compiler complains about this, because an int (due to the integral promotion rules) is being assigned to an unsigned char. Ok, but for:
unsigned u1, u2;
...
unsigned u3 = u1 + u2;
no error is issued. Despite the fact that overflow can occur, according to the language rules an unsigned is being assigned to an unsigned, and that's allowed. Similarly,
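A quick demonstration of why no diagnostic is possible here: unsigned addition wraps modulo 2^N, and the C standard defines that wraparound as valid behavior, not an error (the function name is mine):

```c
#include <assert.h>
#include <limits.h>

/* Unsigned arithmetic is defined to wrap modulo 2^N, so this
   "overflow" is silent, well-defined, and undetectable at compile time. */
unsigned wrap_sum(unsigned a, unsigned b) {
    return a + b;   /* may wrap around to a small value */
}
```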
int i;
...
unsigned u = i;
is allowed because the types are the same size. Forcing casts onto those expressions, while pedantically pure, would be excessively annoying.
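For the curious, that same-size conversion is value-changing for negative values, yet perfectly legal - a negative int converts to unsigned by wrapping modulo 2^N (again, a hypothetical helper for illustration):

```c
#include <assert.h>
#include <limits.h>

/* int to unsigned needs no cast in C; a negative value converts
   modulo 2^N, so -1 becomes UINT_MAX.  No diagnostic is required. */
unsigned to_unsigned(int i) {
    return i;
}
```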
I don't think there's any way to have fixed-size ints that have fast implementations and don't suffer from some sort of problem with sign mixups, overflow, etc. Value range propagation doesn't solve all these problems either, but it does make protections against truncation errors more palatable by getting rid of a slew of annoying false positives.
Java, for example, gets rid of signed/unsigned mixup issues by eliminating unsigned types. This goes down a little hard for me, although some argue that Java has demonstrated that unsigned types really aren't necessary. Unsigned types are nice for implementing memory management for objects that span more than half of the address space, for primitives implementing multiprecision arithmetic, etc.
Go gets rid of implicit conversion truncation errors by getting rid of implicit integer conversions. But that doesn't really solve the issue, as overflow is not detected. The c3 = c1 + c2 problem, where the variables are byte-sized, remains.
Python avoids the problems by using infinite precision arithmetic, but pays a very steep penalty in runtime performance for this.
Value range propagation can be used for other things, too, like eliminating run time array bounds checking. If the value of the array index expression can be proved to be within the range of the array, the bounds check is not necessary. Other optimizations can as well be done with this, but the current D compiler does not attempt them.
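A sketch of how that could look, using a masked index whose range provably fits the array (the code and names are mine - the D compiler's analysis applies to D array indexing, not this C rendering):

```c
#include <assert.h>

enum { N = 64 };

/* The masked index j has the statically known range (0, N-1), so a
   bounds check like `if (j >= N) abort();` is provably dead here and
   value range propagation could eliminate it. */
int get(const int table[N], unsigned i) {
    unsigned j = i & (N - 1);   /* range: (0, 63), within the array */
    return table[j];
}
```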
The D programming language has had value range propagation for a while, and it's an unequivocal win. The neat thing about it is that people don't notice it's there. It's like a headache that goes away - you weren't really aware of when it faded. The errors on implicit conversions that the D compiler does kick out tend to be real areas of concern, rather than nagging false positives.
C, being the lingua franca of the programming world, is used to illustrate the examples. The corresponding D types are: unsigned char → ubyte, short → short, int → int, unsigned → uint.
Here is Andrei Alexandrescu's original description of the technique.
Thanks to Andrei Alexandrescu, Brad Roberts, Bartosz Milewski and David Held for their most helpful comments on a draft of this.