
digitalmars.D.learn - Is int faster than byte/short?

reply Mariusz Gliwiński <alienballance gmail.com> writes:
Hello,
I'm trying to learn high-performance real-time programming.

One of my questions is:
Should I use int/uint for all standard arithmetic operations, or 
int/short/byte (depending on the actual case)?
I believe this question has the following subquestions:
* Arithmetic computation performance
* Memory access time

My current compiler is DMD, but I'm interested in GDC as well.

Lastly, one more question:
Could someone recommend any books/resources with this kind of 
information and tips that could be applied to D? I'd like to defer my 
own experiments with generated assembly and profiling, since I suppose 
people have already published general rules that I could apply to my 
programming.

Thanks,
Mariusz Gliwiński
Apr 30 2011
next sibling parent Sean Cavanaugh <WorksOnMyMachine gmail.com> writes:
On 4/30/2011 10:34 AM, Mariusz Gliwiński wrote:
 [...]
My experience with this pattern of thinking is to use the largest data type that makes sense, unless you have a profiler saying you need to do something different.

However, if you get obsessive-compulsive about having 'the perfectly sized integer types' for the code, it is possible to fall into the trap of over-using unsigned types 'because the value can never be negative'. Unsigned 8- and 16-bit values usually have a good reason to be unsigned, but when you get to 32- and 64-bit values it makes a lot less sense most of the time.

When working with non-x86 platforms, other problems are usually much more severe: more expensive thread synchronization primitives, lack of efficient variable bit-shifting (a run-time determined number of bits shifted), non-existent branch prediction, or floating-point code silently being promoted to emulated double-precision code on hardware that can only do single-precision floating point, etc.
Apr 30 2011
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 30.04.2011 19:34, Mariusz Gliwiński wrote:
 [...]
I find Agner Fog's guides on optimization for x86 the best source on 
such architecture-specific matters: http://www.agner.org/optimize/

Citing the relevant part from the C++ optimization guide (on integers):

 Integers of smaller sizes (char, short int) are only slightly less
 efficient. In most cases, the compiler will convert these types to
 integers of the default size when doing calculations, and then use
 only the lower 8 or 16 bits of the result. You can assume that the
 type conversion takes zero or one clock cycle. In 64-bit systems,
 there is only a minimal difference between the efficiency of 32-bit
 integers and 64-bit integers, as long as you are not doing divisions.
-- Dmitry Olshansky
May 01 2011
parent Steven Wawryk <stevenw acres.com.au> writes:
This is a good point.  Further to that, keep in mind locality of 
reference, i.e. the performance impact of data getting pushed out of the 
caches.  While using machine-word-sized variables for a small number of 
variables that really need high performance can give a small speed-up, 
using them extensively can increase the program's data size, increasing 
the frequency of cache misses and resulting in large slow-downs.


On 01/05/11 19:28, Dmitry Olshansky wrote:
 [...]
May 01 2011