digitalmars.D - cent and ucent?
- Alex Rønne Petersen (8/8) Jan 28 2012 Hi,
- Daniel Murphy (7/15) Jan 28 2012 There are no current plans that I'm aware of. Implementing cent/ucent w...
- bearophile (5/7) Jan 28 2012 Integer numbers have some properties that compilers use with built-in f...
- Daniel Murphy (6/14) Jan 28 2012 Yes, but the advantages in implementation ease and portability currently...
- Alex Rønne Petersen (6/21) Jan 28 2012 Can't speak for GCC, but LLVM allows arbitrary-size integers. SDC maps
- Daniel Murphy (3/10) Jan 28 2012 That's good news. I can't find any information about int128_t in 32 bit...
- Walter Bright (3/6) Jan 28 2012 There is some support for 128 bit ints already in the backend, but it is...
- Jonathan M Davis (3/5) Jan 28 2012 Gotta love the pun there, intended or otherwise... :)
- Daniel Murphy (2/5) Jan 28 2012
- Daniel Murphy (3/5) Jan 28 2012 No rush. The backend is still a mystery to me.
- Walter Bright (2/3) Jan 28 2012 Better call Nancy Drew!
- Jonathan M Davis (5/16) Jan 28 2012 gcc does on 64-bit systems. long long is 128-bit on 64-bit Linux. I don'...
- Timon Gehr (2/18) Jan 29 2012 long long is 64-bit on 64-bit linux.
- Jonathan M Davis (20/21) Jan 29 2012 Are you sure? I'm _certain_ that we looked at this at work when we were
- Alex Rønne Petersen (5/26) Jan 29 2012 Well, with LLVM and GCC supporting it, there shouldn't be any problems
- Walter Bright (2/3) Jan 29 2012 Sort of. It's 80 bits of useful data with 48 bits of unused padding.
- H. S. Teoh (7/11) Jan 29 2012 Really?! Ugh. Hopefully D handles it better?
- Daniel Murphy (3/5) Jan 29 2012 No. D has to be abi compatible.
- Timon Gehr (2/9) Jan 30 2012 It is what the x86 hardware supports.
- H. S. Teoh (7/21) Jan 30 2012 I know, I was referring to the 48 bits of padding. Seems like such a
- Stewart Gordon (4/15) Jan 31 2012 As I try it, real.sizeof == 10. And according to DMC 8.42n (where is...
- Marco Leise (5/21) Jan 31 2012 pragma(msg, real.sizeof);
- Stewart Gordon (7/11) Jan 31 2012 Prints 10u for me (2.057, Win32).
- Walter Bright (2/6) Jan 31 2012 Both the alignment and padding of reals changes from platform to platfor...
- Iain Buclaw (7/37) Feb 01 2012 It varies from platform to platform, and depending on what target
- Marco Leise (6/13) Jan 30 2012 From Wikipedia:
- Don Clugston (4/20) Jan 30 2012 Not quite all. An 80-bit double, padded with zeros to 128 bits, is
- Walter Bright (4/8) Jan 31 2012 10 bytes on Windows.
- H. S. Teoh (20/30) Jan 29 2012 IMNSHO, you need both, and I can't say I'm 100% satisfied with how D
- Walter Bright (5/10) Jan 29 2012 size_t does have a C99 Standard official format %z. The trouble is,
- Jonathan M Davis (10/25) Jan 29 2012 It's even worse with code which you're trying to have be cross-platform
- H. S. Teoh (19/34) Jan 29 2012 And C++ doesn't officially support C99. Prior to C++11 anyway, but I
- Iain Buclaw (9/30) Jan 29 2012 Can be turned on via compiler switch:
- Iain Buclaw (6/37) Jan 29 2012 Oh wait... I've just re-read that and realised it's to do with reals
- Jonathan M Davis (20/40) Jan 29 2012 In an ideal language, I'd probably go with an integer type with an unspe...
- Walter Bright (9/13) Jan 29 2012 I believe the notion of "most efficient integer type" was obsolete 10 ye...
- H. S. Teoh (17/24) Jan 29 2012 I agree. It's not perfect, but it definitely beats the C system.
- Walter Bright (7/15) Jan 29 2012 I've had people tell me this was an advantage because there are some chi...
- H. S. Teoh (17/37) Jan 29 2012 I can just see all those string malloc()'s screaming in pain as buffer
- Walter Bright (3/10) Jan 29 2012 Yes. Those chips exist, and there are Standard C compilers for them. But...
- H. S. Teoh (8/20) Jan 29 2012 Interesting. How would D fare in that kind of environment, I wonder? I
- Walter Bright (2/5) Jan 30 2012 You could write a custom D compiler for it.
- Stewart Gordon (6/8) Jan 31 2012
- ponce (5/6) Mar 29 2012 I implemented cent and ucent as a library, using division algorithm from...
Hi, Are there any current plans to implement cent and ucent? I realize no current processors support 128-bit integers natively, but I figure they could be implemented the same way 64-bit integers are on 32-bit machines. I know I could use std.bigint, but there's no good way to declare a bigint as fixed-size... -- - Alex
Jan 28 2012
"Alex Rønne Petersen" <xtzgzorex gmail.com> wrote in message news:jg26nr$29bh$1 digitalmars.com...Hi, Are there any current plans to implement cent and ucent? I realize no current processors support 128-bit integers natively, but I figure they could be implemented the same way 64-bit integers are on 32-bit machines. I know I could use std.bigint, but there's no good way to declare a bigint as fixed-size... -- - AlexThere are no current plans that I'm aware of. Implementing cent/ucent would probably require adding support for the type to the backend, and there are a limited number of people that can do that. It's much more likely that phobos will get something like Fixed!128 in addition to BigInt.
Jan 28 2012
Daniel Murphy:It's much more likely that phobos will get something like Fixed!128 in addition to BigInt.Integer numbers have some properties that compilers use with built-in fixed-size numbers to optimize code. I think such optimizations are not performed on library-defined numbers like a Fixed!128 or BigInt. This means there are advantages to having cent/ucent/BigInt as built-ins. Alternatively, in theory, special annotations could tell the compiler that a user-defined type shares some of the characteristics of integer numbers, allowing the compiler to optimize better at compile-time. But I think not even the Scala compiler is so powerful. Bye, bearophile
Jan 28 2012
"bearophile" <bearophileHUGS lycos.com> wrote in message news:jg2cku$2ljk$1 digitalmars.com...Integer numbers have some proprieties that compilers use with built-in fixed-size numbers to optimize code. I think such optimizations are not performed on library-defined numbers like a Fixed!128 or BigInt. This means there are advantages of having cent/ucent/BigInt as built-ins.Yes, but the advantages in implementation ease and portability currently favour a library solution. Do the gcc or llvm backends support 128 bit integers?Alternatively in theory special annotations are able to tell the compiler that a user-defined type shares some of the characteristics of integer numbers, allowing the compiler to optimize better at compile-time. But I think not even the Scala compiler is so powerful.This would still require backend support for many things.
Jan 28 2012
On 29-01-2012 04:38, Daniel Murphy wrote:"bearophile"<bearophileHUGS lycos.com> wrote in message news:jg2cku$2ljk$1 digitalmars.com...Can't speak for GCC, but LLVM allows arbitrary-size integers. SDC maps cent/ucent to i128.Integer numbers have some proprieties that compilers use with built-in fixed-size numbers to optimize code. I think such optimizations are not performed on library-defined numbers like a Fixed!128 or BigInt. This means there are advantages of having cent/ucent/BigInt as built-ins.Yes, but the advantages in implementation ease and portability currently favour a library solution. Do the gcc or llvm backends support 128 bit integers?Most of LLVM's optimizers work on arbitrary-size ints. -- - AlexAlternatively in theory special annotations are able to tell the compiler that a user-defined type shares some of the characteristics of integer numbers, allowing the compiler to optimize better at compile-time. But I think not even the Scala compiler is so powerful.This would still require backend support for many things.
Jan 28 2012
gcc does on 64-bit systems. long long is 128-bit on 64-bit Linux. I don't know about llvm, but it's supposed to be gcc-compatible, so I assume that it's the same. - Jonathan M DavisCan't speak for GCC, but LLVM allows arbitrary-size integers. SDC maps cent/ucent to i128. - AlexThat's good news. I can't find any information about int128_t in 32 bit gcc, but if the support is already there then it's just the dmd backend that needs to be upgraded.
Jan 28 2012
On 1/28/2012 8:24 PM, Daniel Murphy wrote:That's good news. I can't find any information about int128_t in 32 bit gcc, but if the support is already there then it's just the dmd backend that need to be upgraded.There is some support for 128 bit ints already in the backend, but it is incomplete. It's a bit low on the priority list.
Jan 28 2012
On Saturday, January 28, 2012 20:41:38 Walter Bright wrote:There is some support for 128 bit ints already in the backend, but it is incomplete. It's a bit low on the priority list.Gotta love the pun there, intended or otherwise... :) - Jonathan M Davis
Jan 28 2012
"Walter Bright" <newshound2 digitalmars.com> wrote in message news:jg2im4$30qi$1 digitalmars.com...There is some support for 128 bit ints already in the backend, but it is incomplete. It's a bit low on the priority list.
Jan 28 2012
"Walter Bright" <newshound2 digitalmars.com> wrote in message news:jg2im4$30qi$1 digitalmars.com...There is some support for 128 bit ints already in the backend, but it is incomplete. It's a bit low on the priority list.No rush. The backend is still a mystery to me.
Jan 28 2012
On 1/28/2012 9:20 PM, Daniel Murphy wrote:The backend is still a mysteryBetter call Nancy Drew!
Jan 28 2012
On Sunday, January 29, 2012 14:38:41 Daniel Murphy wrote:"bearophile" <bearophileHUGS lycos.com> wrote in message news:jg2cku$2ljk$1 digitalmars.com...gcc does on 64-bit systems. long long is 128-bit on 64-bit Linux. I don't know about llvm, but it's supposed to be gcc-compatible, so I assume that it's the same. - Jonathan M DavisInteger numbers have some proprieties that compilers use with built-in fixed-size numbers to optimize code. I think such optimizations are not performed on library-defined numbers like a Fixed!128 or BigInt. This means there are advantages of having cent/ucent/BigInt as built-ins.Yes, but the advantages in implementation ease and portability currently favour a library solution. Do the gcc or llvm backends support 128 bit integers?
Jan 28 2012
On 01/29/2012 04:56 AM, Jonathan M Davis wrote:On Sunday, January 29, 2012 14:38:41 Daniel Murphy wrote:long long is 64-bit on 64-bit linux."bearophile"<bearophileHUGS lycos.com> wrote in message news:jg2cku$2ljk$1 digitalmars.com...gcc does on 64-bit systems. long long is 128-bit on 64-bit Linux. I don't know about llvm, but it's supposed to be gcc-compatible, so I assume that it's the same. - Jonathan M DavisInteger numbers have some proprieties that compilers use with built-in fixed-size numbers to optimize code. I think such optimizations are not performed on library-defined numbers like a Fixed!128 or BigInt. This means there are advantages of having cent/ucent/BigInt as built-ins.Yes, but the advantages in implementation ease and portability currently favour a library solution. Do the gcc or llvm backends support 128 bit integers?
Jan 29 2012
On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:long long is 64-bit on 64-bit linux.Are you sure? I'm _certain_ that we looked at this at work when we were sorting out issues with moving some of our products to 64-bit and found that long long was 128 bits. Checking... Well, you're right. Now I'm seriously confused. Hmmm... long double is 128-bit. Maybe that's what threw me off. Well, thanks for correcting me in either case. I thought that I'd had all of that figured out. This is one of the many reasons why I think that any language that defined integers according to their relative size instead of their _absolute_ size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake. C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer size. Bleh. Enough of my whining. In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128 bit integers, even if long long isn't 128 bits on 64-bit systems. - Jonathan M Davis
Jan 29 2012
On 29-01-2012 23:26, Jonathan M Davis wrote:On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:Well, with LLVM and GCC supporting it, there shouldn't be any problems with implementing it today, I guess. -- - Alexlong long is 64-bit on 64-bit linux.Are you sure? I'm _certain_ that we looked at this at work when we were sorting issue with moving some of our products to 64-bit and found that long long was 128 bits. Checking... Well, you're right. Now I'm seriously confused. Hmmm... long double is 128-bit. Maybe that's what threw me off. Well, thanks for correcting me in either case. I thought that I'd had all of that figured out. This is one of the many reasons why I think that any language which didn't define integers according to their _absolute_ size instead of relative size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake. C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer size. Bleh. Enough of my whining. In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128 bit integers, even if long long isn't 128 bits on 64-bit systems. - Jonathan M Davis
Jan 29 2012
On 1/29/2012 2:26 PM, Jonathan M Davis wrote:long double is 128-bit.Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Jan 29 2012
On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:On 1/29/2012 2:26 PM, Jonathan M Davis wrote:Really?! Ugh. Hopefully D handles it better? T -- One disk to rule them all, One disk to find them. One disk to bring them all and in the darkness grind them. In the Land of Redmond where the shadows lie. -- The Silicon Valley Tarotlong double is 128-bit.Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Jan 29 2012
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote in message news:mailman.172.1327892267.25230.digitalmars-d puremagic.com...No. D has to be abi compatible.Sort of. It's 80 bits of useful data with 48 bits of unused padding.Really?! Ugh. Hopefully D handles it better?
Jan 29 2012
On 01/30/2012 03:59 AM, H. S. Teoh wrote:On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:It is what the x86 hardware supports.On 1/29/2012 2:26 PM, Jonathan M Davis wrote:Really?! Ugh. Hopefully D handles it better? Tlong double is 128-bit.Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Jan 30 2012
On Mon, Jan 30, 2012 at 05:00:22PM +0100, Timon Gehr wrote:On 01/30/2012 03:59 AM, H. S. Teoh wrote:I know, I was referring to the 48 bits of padding. Seems like such a waste. T -- What do you mean the Internet isn't filled with subliminal messages? What about all those buttons marked "submit"??On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:It is what the x86 hardware supports.On 1/29/2012 2:26 PM, Jonathan M Davis wrote:Really?! Ugh. Hopefully D handles it better? Tlong double is 128-bit.Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Jan 30 2012
On 30/01/2012 16:00, Timon Gehr wrote:On 01/30/2012 03:59 AM, H. S. Teoh wrote:As I try it, real.sizeof == 10. And according to DMC 8.42n (where is 8.52?), sizeof(long double) == 10 as well. Stewart.On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:It is what the x86 hardware supports.On 1/29/2012 2:26 PM, Jonathan M Davis wrote:Really?! Ugh. Hopefully D handles it better? Tlong double is 128-bit.Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Jan 31 2012
Am 31.01.2012, 16:07 Uhr, schrieb Stewart Gordon <smjg_1998 yahoo.com>:On 30/01/2012 16:00, Timon Gehr wrote:pragma(msg, real.sizeof); Prints the expected platform alignment for me: DMD64 / GDC64: 16LU DMD32: 12LUOn 01/30/2012 03:59 AM, H. S. Teoh wrote:As I try it, real.sizeof == 10. And by According to DMC 8.42n (where is 8.52?), sizeof(long double) == 10 as well. Stewart.On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:It is what the x86 hardware supports.On 1/29/2012 2:26 PM, Jonathan M Davis wrote:Really?! Ugh. Hopefully D handles it better? Tlong double is 128-bit.Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Jan 31 2012
On 31/01/2012 18:47, Marco Leise wrote: <snip>pragma(msg, real.sizeof);Prints 10u for me (2.057, Win32).Prints the expected platform alignment for me: DMD64 / GDC64: 16LU DMD32: 12LUThat isn't alignment, that's padding built into the type. I assume you're testing on Linux. I've heard before that long double/real is 12 bytes under Linux because it includes 2 bytes of padding. I don't know why Linux does it that way, but there you go. Stewart.
Jan 31 2012
On 1/31/2012 4:28 PM, Stewart Gordon wrote:That isn't alignment, that's padding built into the type. I assume you're testing on Linux. I've heard before that long double/real is 12 bytes under Linux because it includes 2 bytes of padding. I don't know why Linux does it that way, but there you go.Both the alignment and padding of reals changes from platform to platform.
Jan 31 2012
On 31 January 2012 18:47, Marco Leise <Marco.Leise gmx.de> wrote:Am 31.01.2012, 16:07 Uhr, schrieb Stewart Gordon <smjg_1998 yahoo.com>:On 30/01/2012 16:00, Timon Gehr wrote:On 01/30/2012 03:59 AM, H. S. Teoh wrote:On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:It is what the x86 hardware supports.On 1/29/2012 2:26 PM, Jonathan M Davis wrote:Really?! Ugh. Hopefully D handles it better? Tlong double is 128-bit.Sort of. It's 80 bits of useful data with 48 bits of unused padding.As I try it, real.sizeof == 10. And according to DMC 8.42n (where is 8.52?), sizeof(long double) == 10 as well. Stewart.pragma(msg, real.sizeof); Prints the expected platform alignment for me: DMD64 / GDC64: 16LUIt varies from platform to platform, and depending on what target flags you pass to GDC. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Feb 01 2012
Am 30.01.2012, 03:59 Uhr, schrieb H. S. Teoh <hsteoh quickfur.ath.cx>:On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:From Wikipedia: "On the x86 architecture, most compilers implement long double as the 80-bit extended precision type supported by that hardware (sometimes stored as 12 or 16 bytes to maintain data structure alignment)." That's all there is to know I think.On 1/29/2012 2:26 PM, Jonathan M Davis wrote:Really?! Ugh. Hopefully D handles it better? Tlong double is 128-bit.Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Jan 30 2012
On 30/01/12 18:06, Marco Leise wrote:Am 30.01.2012, 03:59 Uhr, schrieb H. S. Teoh <hsteoh quickfur.ath.cx>:Not quite all. An 80-bit double, padded with zeros to 128 bits, is binary compatible with a quadruple real. (Not much use in practice, as far as I know).On Sun, Jan 29, 2012 at 05:48:40PM -0800, Walter Bright wrote:From Wikipedia: "On the x86 architecture, most compilers implement long double as the 80-bit extended precision type supported by that hardware (sometimes stored as 12 or 16 bytes to maintain data structure alignment)." That's all there is to know I think.On 1/29/2012 2:26 PM, Jonathan M Davis wrote:Really?! Ugh. Hopefully D handles it better? Tlong double is 128-bit.Sort of. It's 80 bits of useful data with 48 bits of unused padding.
Jan 30 2012
On 1/30/2012 9:06 AM, Marco Leise wrote:"On the x86 architecture, most compilers implement long double as the 80-bit extended precision type supported by that hardware (sometimes stored as 12 or 16 bytes to maintain data structure alignment)." That's all there is to know I think.10 bytes on Windows. Anyhow, as far as the C ABI goes (which is what this is), "Ours is not to Reason Why, Ours is to Implement or Fail."
Jan 31 2012
On Sun, Jan 29, 2012 at 02:26:55PM -0800, Jonathan M Davis wrote: [...]This is one of the many reasons why I think that any language which didn't define integers according to their _absolute_ size instead of relative size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake.IMNSHO, you need both, and I can't say I'm 100% satisfied with how D uses 'int' to mean 32-bit integer no matter what. The problem with C is that there's no built-in type for guaranteeing 32-bits (stdint.h came a bit too late into the picture--by then, people had already formed too many bad habits). There's a time when code needs to be able to say "please give me the default fastest int type on the machine", and a time for code to say "I want the int type with exactly n bits 'cos I'm assuming specific properties of n-bit numbers".C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer size. Bleh. Enough of my whining.[...] Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu? I think either gcc or C99 actually has a dedicated printf format for size_t, except that C++ doesn't include parts of C99, so you end up with format string #ifdef nightmare no matter what you do. I'm so glad that %s takes care of it all in D. Yet another thing D has done right. T -- MSDOS = MicroSoft's Denial Of Service
Jan 29 2012
On 1/29/2012 3:31 PM, H. S. Teoh wrote:Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu? I think either gcc or C99 actually has a dedicated printf format for size_t, except that C++ doesn't include parts of C99, so you end up with format string #ifdef nightmare no matter what you do. I'm so glad that %s takes care of it all in D. Yet another thing D has done right.size_t does have a C99 Standard official format %z. The trouble is, 1. many compilers *still* don't implement it. 2. that doesn't do you any good for any other typedef's that change size. printf is the single biggest nuisance in porting code between 32 and 64 bits.
Jan 29 2012
On Sunday, January 29, 2012 17:57:39 Walter Bright wrote:On 1/29/2012 3:31 PM, H. S. Teoh wrote:It's even worse with code which you're trying to have be cross-platform between 32-bit and 64-bit. Microsoft added I32 and I64. which helps, but then you still need to add a wrapper to printf for Posix to handle them unless you want to ifdef all of your printf calls. About the only positive thing that I can say about that whole mess is that it's because of that that I learned that string literals are unaffected by macros in C/C++. The fact that I can just do %s with writefln in D and not worry about it is so fantastic it's not even funny. - Jonathan M DavisYeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu? I think either gcc or C99 actually has a dedicated printf format for size_t, except that C++ doesn't include parts of C99, so you end up with format string #ifdef nightmare no matter what you do. I'm so glad that %s takes care of it all in D. Yet another thing D has done right.size_t does have a C99 Standard official format %z. The trouble is, 1. many compilers *still* don't implement it. 2. that doesn't do you any good for any other typedef's that change size. printf is the single biggest nuisance in porting code between 32 and 64 bits.
Jan 29 2012
On Sun, Jan 29, 2012 at 05:57:39PM -0800, Walter Bright wrote:On 1/29/2012 3:31 PM, H. S. Teoh wrote:And C++ doesn't officially support C99. Prior to C++11 anyway, but I don't foresee myself doing any major projects in C++11 now that I have something better, i.e., D. I just can't see myself doing any more personal projects in C++, and at my day job we actually migrated from C++ to C a few years ago, and we're still happy we did so. (Don't ask, you don't want to know. When a single function call requires 6 layers of needless abstraction including a layer involving fwrite, fork, and exec, and when dtors do useful work other than cleanup, it's time to call it quits.)Yeah, size_t especially drives me up the wall. Is it %u, %lu, or %llu? I think either gcc or C99 actually has a dedicated printf format for size_t, except that C++ doesn't include parts of C99, so you end up with format string #ifdef nightmare no matter what you do. I'm so glad that %s takes care of it all in D. Yet another thing D has done right.size_t does have a C99 Standard official format %z. The trouble is, 1. many compilers *still* don't implement it.2. that doesn't do you any good for any other typedef's that change size. printf is the single biggest nuisance in porting code between 32 and 64 bits.[...] It could've been worse, though. We're lucky (most) compiler vendors decided not to make int 64 bits. That alone would've broken 90% of existing C code out there, some in obvious ways and others in subtle ways that you only find out after it's deployed on your client's production system. T -- Two wrongs don't make a right; but three rights do make a left...
Jan 29 2012
On 29 January 2012 22:26, Jonathan M Davis <jmdavisProg gmx.com> wrote:On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:Can be turned on via compiler switch: -m128bit-long-double or set at the configure stage: --with-long-double-128 Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';long long is 64-bit on 64-bit linux.Are you sure? I'm _certain_ that we looked at this at work when we were sorting issue with moving some of our products to 64-bit and found that long long was 128 bits. Checking... Well, you're right. Now I'm seriously confused. Hmmm... long double is 128-bit. Maybe that's what threw me off. Well, thanks for correcting me in either case. I thought that I'd had all of that figured out. This is one of the many reasons why I think that any language which didn't define integers according to their _absolute_ size instead of relative size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake. C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer size. Bleh. Enough of my whining. In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128 bit integers, even if long long isn't 128 bits on 64-bit systems. - Jonathan M Davis
Jan 29 2012
On 30 January 2012 03:17, Iain Buclaw <ibuclaw ubuntu.com> wrote:On 29 January 2012 22:26, Jonathan M Davis <jmdavisProg gmx.com> wrote:Oh wait... I've just re-read that and realised it's to do with reals (must be 3am in the morning here). -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';On Sunday, January 29, 2012 16:26:02 Timon Gehr wrote:Can be turned on via compiler switch: -m128bit-long-double or set at the configure stage: --with-long-double-128long long is 64-bit on 64-bit linux.Are you sure? I'm _certain_ that we looked at this at work when we were sorting issue with moving some of our products to 64-bit and found that long long was 128 bits. Checking... Well, you're right. Now I'm seriously confused. Hmmm... long double is 128-bit. Maybe that's what threw me off. Well, thanks for correcting me in either case. I thought that I'd had all of that figured out. This is one of the many reasons why I think that any language which didn't define integers according to their _absolute_ size instead of relative size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake. C's type scheme is nothing but trouble as far as integral sizes go IMHO. printf in particular is one of the more annoying things to worry about with cross-platform development thanks to varying integer size. Bleh. Enough of my whining. In any case, gcc _does_ define __int128 ( http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html ), so as far as the question goes, gcc _does_ have 128 bit integers, even if long long isn't 128 bits on 64-bit systems. - Jonathan M Davis
Jan 29 2012
On Sunday, January 29, 2012 15:31:57 H. S. Teoh wrote:On Sun, Jan 29, 2012 at 02:26:55PM -0800, Jonathan M Davis wrote: [...]In an ideal language, I'd probably go with an integer type with an unspecified number of bits which is used when you don't care about the size of the integer. It'll be whatever is fastest for the particular architecture that it's compiled on, and it'll probably be guaranteed to be _at least_ a particular size (probably 32 bits at this point) so that you don't have to worry about average-sized numbers not fitting. Also, you should probably have a type like size_t that deals with the differing sizes of address spaces. But _all_ other types have a fixed size. So, you don't get this nonsense of int is this on that machine, and long is that, and long long is something else, etc. So, you use them when you need a variable to be a particular size or when you need a guarantee that a larger value will fit in it. The way that C did it with _everything_ varying is horrific IMHO. D's approach, where the built-in integral types are fixed in size, is _far_ better IMHO. So, if the choice is between the C/C++ approach and D's, I'd take D's. But there are definitely arguments for having an integral type which is the most efficient for whatever machine that it's compiled on, and D doesn't really have that. You'd probably have to use something like c_long if you really wanted that. - Jonathan M DavisThis is one of the many reasons why I think that any language which didn't define integers according to their _absolute_ size instead of relative size (with the possible exception of some types which vary based on the machine so that you're using the most efficient integer for that machine or are able to index the full memory space) made a huge mistake.IMNSHO, you need both, and I can't say I'm 100% satisfied with how D uses 'int' to mean 32-bit integer no matter what. The problem with C is that there's no built-in type for guaranteeing 32-bits (stdint.h came a bit too late into the picture--by then, people had already formed too many bad habits).
There's a time when code needs to be able to say "please give me the default fastest int type on the machine", and a time for code to say "I want the int type with exactly n bits 'cos I'm assuming specific properties of n-bit numbers".
Jan 29 2012
On 1/29/2012 4:30 PM, Jonathan M Davis wrote:But there are definitely arguments for having an integral type which is the most efficient for whatever machine that it's compiled on, and D doesn't really have that. You'd probably have to use something like c_long if you really wanted that.I believe the notion of "most efficient integer type" was obsolete 10 years ago. In any case, D is hardly deficient even if such is valid. Just use an alias. C has varying size for builtin types and fixed size for aliases. D is just the reverse - fixed builtin sizes and varying alias sizes. My experience with both languages is that D's approach is far superior. C's varying sizes make it clumsy to write portable numeric code, and the varying size of wchar_t is such a disaster that it is completely useless - C++11 had to come up with completely new basic types to support UTF.
Jan 29 2012
On Sun, Jan 29, 2012 at 06:23:33PM -0800, Walter Bright wrote: [...]C has varying size for builtin types and fixed size for aliases. D is just the reverse - fixed builtin sizes and varying alias sizes. My experience with both languages is that D's approach is far superior.I agree. It's not perfect, but it definitely beats the C system.C's varying sizes makes it clumsy to write portable numeric code, and the varying size of wchar_t is such a disaster that it is completely useless - the C++11 had to come up with completely new basic types to support UTF.Not to mention the totally non-commital way the specs were written about wchar_t: it *could* be UTF-16, or it *could* be UTF-32, or it *could* be a non-unicode encoding, we don't guarantee anything. Oh, you want Unicode, right? Well for that you need to consult your OS-specific documentation on how to set up 15 different environment variables, all of which have non-commital descriptions, and any of which may or may not switch the system into/out of unicode mode. Oh, you want a function to guarantee unicode mode? We're sorry, that's not our department. Yeah. Useless is just about right. It's almost as bad as certain parts of the IPMI spec, which I had the misfortune to be given a project to code for at my day job once. T -- Amateurs built the Ark; professionals built the Titanic.
Jan 29 2012
On 1/29/2012 6:46 PM, H. S. Teoh wrote:
> Not to mention the totally non-committal way the specs were written
> about wchar_t: it *could* be UTF-16, or it *could* be UTF-32, or it
> *could* be a non-Unicode encoding; we don't guarantee anything. Oh, you
> want Unicode, right? Well, for that you need to consult your
> OS-specific documentation on how to set up 15 different environment
> variables, all of which have non-committal descriptions, and any of
> which may or may not switch the system into/out of Unicode mode. Oh,
> you want a function to guarantee Unicode mode? We're sorry, that's not
> our department.

I've had people tell me this was an advantage, because there are some chips where chars, shorts, ints, and wchars are all 32 bits. Isn't it awesome that the C standard supports that?

The only problem is that while the C standard supports it, I can't think of a single C program that would work on such a system without a major, and I mean major, rewrite. It's a useless facet of the standard.
Jan 29 2012
On Sun, Jan 29, 2012 at 07:47:26PM -0800, Walter Bright wrote:
> On 1/29/2012 6:46 PM, H. S. Teoh wrote:
>> Not to mention the totally non-committal way the specs were written
>> about wchar_t: it *could* be UTF-16, or it *could* be UTF-32, or it
>> *could* be a non-Unicode encoding; we don't guarantee anything. [...]
>
> I've had people tell me this was an advantage because there are some
> chips where chars, shorts, ints, and wchars are all 32 bits. Isn't it
> awesome that the C standard supports that?
>
> The only problem with that is that while the C standard supports it, I
> can't think of a single C program that would work on such a system
> without a major, and I mean major, rewrite. It's a useless facet of the
> standard.

I can just see all those string malloc()'s screaming in pain as buffer overflows trample them to their miserable deaths:

    void f(int length) {
        char *p = (char *)malloc(length);  /* yikes! */
        int i;
        for (i = 0; i < length; i++) {
            /* do something with p[i] ... */
        }
        ...
    }

Is there an actual, real, working C compiler that has char sized as anything but 8 bits?? This one thing alone would kill, oh, 99% of all C code?

T

--
Klein bottle for rent ... inquire within. -- Stephen Mulraney
Jan 29 2012
On 1/29/2012 8:21 PM, H. S. Teoh wrote:
> On Sun, Jan 29, 2012 at 07:47:26PM -0800, Walter Bright wrote:
>> I've had people tell me this was an advantage because there are some
>> chips where chars, shorts, ints, and wchars are all 32 bits. Isn't it
>> awesome that the C standard supports that?
>
> Is there an actual, real, working C compiler that has char sized as
> anything but 8 bits?? This one thing alone would kill, oh, 99% of all C
> code?

Yes. Those chips exist, and there are Standard C compilers for them. But every bit of C code compiled for them has to be custom rewritten for it.
Jan 29 2012
On Sun, Jan 29, 2012 at 09:24:48PM -0800, Walter Bright wrote:
> On 1/29/2012 8:21 PM, H. S. Teoh wrote:
>> Is there an actual, real, working C compiler that has char sized as
>> anything but 8 bits?? This one thing alone would kill, oh, 99% of all
>> C code?
>
> Yes. Those chips exist, and there are Standard C compilers for them.
> But every bit of C code compiled for them has to be custom rewritten
> for it.

Interesting. How would D fare in that kind of environment, I wonder? I suppose it shouldn't be a big deal, since you have to custom rewrite everything anyways -- just use int32 throughout.

T

--
Lawyer: (n.) An innocence-vending machine, the effectiveness of which depends on how much money is inserted.
Jan 29 2012
On 1/29/2012 10:39 PM, H. S. Teoh wrote:
> Interesting. How would D fare in that kind of environment, I wonder? I
> suppose it shouldn't be a big deal, since you have to custom rewrite
> everything anyways -- just use int32 throughout.

You could write a custom D compiler for it.
Jan 30 2012
On 29/01/2012 01:17, Alex Rønne Petersen wrote:
> Hi,
>
> Are there any current plans to implement cent and ucent?
<snip>

Whether it's implemented any time soon or not, it's high time the _syntax_ allowed their use as basic types, for forward/backward compatibility's sake.

http://d.puremagic.com/issues/show_bug.cgi?id=785

Stewart.
Jan 31 2012
On 29/01/2012 02:17, Alex Rønne Petersen wrote:
> Are there any current plans to implement cent and ucent?

I implemented cent and ucent as a library, using the division algorithm from Ian Kaplan.

https://github.com/p0nce/gfm/blob/master/math/softcent.d

Suggestions welcome.
Mar 29 2012