digitalmars.D - Proposal: c_int
- Thomas Kuehne (31/54) Jan 06 2006 Problem: On 64-bit Windows systems "int" is defined as "std.stdint.int32_t".
- Anders F Björklund (21/24) Jan 06 2006 For what it is worth, on PPC64 "int" is 32 bits and "long" is 64 bits.
- Sean Kelly (8/16) Jan 06 2006 Hrm... I think that's actually correct. I don't know of any platforms
- Thomas Kuehne (13/28) Jan 06 2006 Right, please replace "int" with "long" in my posting.
- Anders F Björklund (5/10) Jan 06 2006 You got it... 2 bytes "wasted" on X86, and 6 bytes on X64.
- Anders F Björklund (6/13) Jan 06 2006 Wonder if it makes a difference, though? It doesn't on PowerPC,
- Sean Kelly (4/11) Jan 06 2006 *sigh* I'd forgotten about this. I'd be happy to update the C headers
Problem:

On 64-bit Windows systems "int" is defined as "std.stdint.int32_t".
On 64-bit Unix systems "int" is defined as "std.stdint.int64_t"
(64-bit mode) or "std.stdint.int32_t" (32-bit mode).

All C bindings would have to write their own kludges to evade those
problems. The current problem is that those kludges can't be written
in D. No, "version(X86_64)" isn't sufficient.

The situation is the same for "unsigned int", "long", "unsigned long",
"long long" and "unsigned long long".

Implementing at least one of the suggestions below would allow a
"pure D" solution.

Suggestion 1/3:
Define size_t and ptrdiff_t as true D types directly in the compiler
and add the following snippet to std.stdint:

   version(Win32){
      alias int32_t c_int;
   }else version(Win64){
      alias int32_t c_int;
   }else version(Windows){
      static assert(0);
   }else{
      alias ptrdiff_t c_int;
   }

Suggestion 2/3:
Define D_BITS_PER_C_INT_32 or D_BITS_PER_C_INT_64 as predefined
versions in the compiler and add the following to std.stdint:

   version(D_BITS_PER_C_INT_32){
      alias int32_t c_int;
   }else version(D_BITS_PER_C_INT_64){
      alias int64_t c_int;
   }else{
      static assert(0);
   }

Suggestion 3/3:
Enhance "version" so that a version can have a numerical value, define
"version(D_BITS_PER_C_INT=32)" or "version(D_BITS_PER_C_INT=64)" inside
the compiler and add the following to std.stdint:

   version(D_BITS_PER_C_INT == 32){
      alias int32_t c_int;
   }else version(D_BITS_PER_C_INT == 64){
      alias int64_t c_int;
   }else{
      static assert(0);
   }

Suggestion 3 seems to be the most powerful one and might solve other
common implementation problems too.

Thomas
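To make the intent concrete, here is a minimal sketch of a binding
written against the proposed alias (the C function "abs" is only an
example; "c_int" is whichever alias one of the suggestions above
would provide):

   import std.stdint;  // assumed to provide c_int per one suggestion

   // C prototype: int abs(int);
   extern (C) c_int abs(c_int n);

   void main()
   {
       c_int r = abs(-42);
       assert(r == 42);
   }

The point is that this source stays identical on every target; only
the alias behind c_int changes.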
Jan 06 2006
Thomas Kuehne wrote:

> On 64-bit Windows systems "int" is defined as "std.stdint.int32_t".
> On 64-bit Unix systems "int" is defined as "std.stdint.int64_t"
> (64-bit mode) or "std.stdint.int32_t" (32-bit mode).

For what it is worth, on PPC64 "int" is 32 bits and "long" is 64 bits.
I believe the same is true in the AMD64 arch ABI for Linux, as well?
(I think the programming model is called LP64?)

See http://developer.apple.com/macosx/64bit.html

This still means that ints can no longer hold pointers, for instance...
(nor anything pointer-sized, like an array length; you need size_t
for that)

Side note: I think it would be nice if there was a short D name to use
for "largest hardware implemented integer size", for loops and such?
(since "int" is always 32 bits and "size_t" is hard to pronounce)
Like the current variable-size "real" type for floating point?
Too bad that the natural name for such a thing is already taken...
Then again I haven't benchmarked, maybe 32-bit is just as "fast"?

Just thinking about something that would be 32 bits on a 32-bit
machine and 64 bits on a 64-bit machine (i.e. in 64-bit mode)?

--anders

PS. Another fun change is that "bool" is now back at 1 byte, from 4
    (that was a change in the Mac OS X ppc ABI, from 32-bit to 64-bit).
    On the X86->X64 side, one noteworthy change is that "long double"
    is now 16 bytes in size instead of 12 (but still *using* just 10).
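A small sketch of the size_t point (nothing here is platform-specific;
the static assert holds by definition on both 32- and 64-bit targets):

   void main()
   {
       int[] arr = new int[1000];

       // size_t follows the pointer width, so it can index the whole
       // address space; a 32-bit int cannot on a 64-bit target.
       for (size_t i = 0; i < arr.length; i++)
           arr[i] = 1;
   }

   // size_t is pointer-sized on every supported target:
   static assert(size_t.sizeof == (void*).sizeof);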
Jan 06 2006
Anders F Björklund wrote:

> Thomas Kuehne wrote:
>
>> On 64-bit Windows systems "int" is defined as "std.stdint.int32_t".
>> On 64-bit Unix systems "int" is defined as "std.stdint.int64_t"
>> (64-bit mode) or "std.stdint.int32_t" (32-bit mode).
>
> For what it is worth, on PPC64 "int" is 32 bits and "long" is 64
> bits. I believe the same is true in the AMD64 arch ABI for Linux,
> as well?

Hrm... I think that's actually correct. I don't know of any platforms
offhand where the size of 'int' is changing--it was 'long' that was
the issue. This should improve things considerably, though it will
mean more work modifying the headers, as a search-replace isn't
possible. I'm of a mind to just create an alias for each size type to
address this: c_int, c_long, etc. I wish I'd done that at the outset.

Sean
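A sketch of what such an alias set could look like, assuming the usual
LP64 model on 64-bit Unix and LLP64 on 64-bit Windows, and glossing
over the targeting subtleties Thomas raised (the names and the use of
version(X86_64) are illustrative, not an existing module):

   // C "int" is 32 bits on every platform discussed in this thread.
   alias int  c_int;
   alias uint c_uint;

   // C "long" is where the models diverge.
   version (X86_64)
   {
       version (Windows)
       {
           alias int  c_long;   // LLP64: long stays 32 bits
           alias uint c_ulong;
       }
       else
       {
           alias long  c_long;  // LP64: long widens to 64 bits
           alias ulong c_ulong;
       }
   }
   else
   {
       alias int  c_long;       // 32-bit targets: long is 32 bits
       alias uint c_ulong;
   }

   // C "long long" is 64 bits on all of these targets.
   alias long  c_longlong;
   alias ulong c_ulonglong;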
Jan 06 2006
Anders F Björklund wrote on 2006-01-06:

> Thomas Kuehne wrote:
>
>> On 64-bit Windows systems "int" is defined as "std.stdint.int32_t".
>> On 64-bit Unix systems "int" is defined as "std.stdint.int64_t"
>> (64-bit mode) or "std.stdint.int32_t" (32-bit mode).
>
> For what it is worth, on PPC64 "int" is 32 bits and "long" is 64
> bits. I believe the same is true in the AMD64 arch ABI for Linux,
> as well? (I think the programming model is called LP64?)
>
> See http://developer.apple.com/macosx/64bit.html

Right, please replace "int" with "long" in my posting.

> Side note: I think it would be nice if there was a short D name to
> use for "largest hardware implemented integer size", for loops and
> such? (since "int" is always 32 bits and "size_t" is hard to
> pronounce) Like the current variable-size "real" type for floating
> point?

Sounds useful.

> On the X86->X64 side, one noteworthy change is that "long double"
> is now 16 bytes in size instead of 12 (but still *using* just 10)

Don't tell me they wasted 3/8 of the storage to be compatible with
old binaries and have a decent alignment...

Thomas
Jan 06 2006
Thomas Kuehne wrote:

>> On the X86->X64 side, one noteworthy change is that "long double"
>> is now 16 bytes in size instead of 12 (but still *using* just 10)
>
> Don't tell me they wasted 3/8 of the storage to be compatible with
> old binaries and have a decent alignment...

You got it... 2 bytes "wasted" on X86, and 6 bytes on X64.

Read about it in:
http://gcc.fyxm.net/summit/2003/Porting%20to%2064%20bit.pdf

--anders
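The padding is visible from D, since "real" maps to the C "long
double" (the x87 80-bit format) on x86 targets. A sketch; the printed
value follows each platform's C ABI, e.g. 12 on 32-bit Linux and 16
on X86-64, while the format itself only occupies 10 bytes:

   import std.stdio;

   void main()
   {
       // .sizeof includes the ABI's alignment padding.
       writefln("real.sizeof = %s", real.sizeof);
   }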
Jan 06 2006
Thomas Kuehne wrote:

>> I think it would be nice if there was a short D name to use for
>> "largest hardware implemented integer size", for loops and such?
>> (since "int" is always 32 bits and "size_t" is hard to pronounce)
>> Like the current variable-size "real" type for floating point?
>
> Sounds useful.

Wonder if it makes a difference, though? It doesn't on PowerPC,
since the PPC is a 64-bit architecture with a 32-bit subset...

Perhaps it does on X86, but I would imagine that being able to
quickly handle 32-bit integers would be a pretty high priority?

--anders
Jan 06 2006
Thomas Kuehne wrote:

> On 64-bit Windows systems "int" is defined as "std.stdint.int32_t".
> On 64-bit Unix systems "int" is defined as "std.stdint.int64_t"
> (64-bit mode) or "std.stdint.int32_t" (32-bit mode).
>
> All C bindings would have to write their own kludges to evade those
> problems. The current problem is that those kludges can't be
> written in D.

*sigh* I'd forgotten about this. I'd be happy to update the C headers
in Ares with whatever method seems appropriate.

Sean
Jan 06 2006