
digitalmars.D - Integer names should be renamed

reply Brian Bober <netdemonz yahoo.com> writes:
I think it's time to re-hash an old discussion from a couple years ago. 
I propose replacing type names for integers 
(http://www.digitalmars.com/d/type.html) with the following:

bit
byte
ubyte
int16
uint16
int32
uint32
int64
uint64
int128
uint128

... etc

Why?
1) It's more logical and easier to remember.
2) It better reflects the integer size, so people are less likely to make
mistakes when moving from C. It'll also make people more careful when
converting, and ensure no type names overlap when doing automatic
conversions with tools like sed or awk.
3) cent and ucent are not very good names for 128-bit
variables. "Cent" means 100; int128 makes more sense.



1) Instead of using names
like long, short, and int, it would be better to use names that show the
number of bits each variable has, and whether it is unsigned. This is the
convention used in the Mozilla project, and it works very well. It will
also have the advantage of making people more careful when porting C/C++
applications to D, and will mean that people migrating to D won't be
caught up in the old definition of long, which differs between Alpha and
PC systems. It also means there won't be a proliferation of new type names
when 128- and 256-bit systems come along, which would get too complicated.
It would even help system designers who want, say, 24-bit integers, which
might be the case on embedded systems: they could just define int24 and
uint24, and no one would be confused.

A temporary standard header could supply the current names from
http://www.digitalmars.com/d/type.html as aliases until people have
migrated to the new system suggested here.
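In today's D that interim mapping can be sketched the other way around, as plain aliases of the existing built-ins. A minimal sketch only — the module name is made up, not a real Phobos module:

```d
// Hypothetical module: maps the proposed size-based names onto
// today's built-in D types. Once a rename actually happened, the
// transitional header would simply contain the reverse mapping.
module sizednames;

alias short  int16;
alias ushort uint16;
alias int    int32;
alias uint   uint32;
alias long   int64;
alias ulong  uint64;

// the whole point of the proposal: widths are spelled out and checkable
static assert(int16.sizeof == 2);
static assert(int32.sizeof == 4);
static assert(int64.sizeof == 8);
```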

This method is a lot more logical in my opinion, and I'm sure a lot will
agree.

3) cent and ucent are not good names for a 128-bit variable. First of all,
they might too easily be mixed up with a simple structure for representing
currency. Second, 128 is not 100. In fact, a 128-bit integer simply
backs up what I said in 1: naming data types this way is getting
ridiculous. What is longer than long? I guess it could be 'extended' or
'stretch', but seriously... Let's make things a bit less complicated.
Sep 22 2004
next sibling parent reply Deja Augustine <Deja_member pathlink.com> writes:
I have to say that I fully agree with Brian on this one.  Even .NET has moved to
this style of nomenclature (Byte, Int16, Int32, UInt64, Single, Double, etc...)
and it makes things a lot less confusing.

For me, the most compelling of the reasons listed below is talking about people
coming from a C/++ background.  If you have a conversion chart with entries like
these:

long double        = real  
unsigned long long = ulong  
long long          = long  
unsigned long      = uint
long               = int

that's bad news.  It's confusing enough to say aloud: "A long long is a long and
a long is an int"

-Deja

In article <cisodp$2vok$1 digitaldaemon.com>, Brian Bober says...
I think it's time to re-hash an old discussion from a couple years ago. 
I propose replacing type names for integers 
(http://www.digitalmars.com/d/type.html) with the following:

bit
byte
ubyte
int16
uint16
int32
uint32
int64
uint64
int128
uint128

... etc

Why?
1) It's more logical and easier to remember.
2) It better reflects the integer size, so people are less likely to make
mistakes when moving from C. It'll also make people more careful when
converting, and ensure no type names overlap when doing automatic
conversions with tools like sed or awk.
3) cent and ucent are not very good names for 128-bit
variables. "Cent" means 100; int128 makes more sense.



1) Instead of using names
like long, short, and int, it would be better to use names that show the
number of bits each variable has, and whether it is unsigned. This is the
convention used in the Mozilla project, and it works very well. This will
have the advantage, also, of making people more careful when porting C/C++
applications to D. It will also mean that people migrating to D won't be
caught up in the old definition of long, which is different on Alpha and
PC systems. This will also mean there won't be a lot of different types when
128 and 256 bit systems come along. It'll get too complicated. It'll also
be easier for strange system designers who want to do, say, 24-bit
integers, which might be the case on integrated systems. Then they could
just do an int24 and uint24, and no one would be confused.

You can
provide a temporary standard header that will provide the alternate names
you provided on http://www.digitalmars.com/d/type.html until people have
migrated to the new system I suggested here.

This method is a lot more logical in my opinion, and I'm sure a lot will
agree.

3) cent and ucent are not good names for a 128-bit variable. First of all,
it might be too easily mixed up with a simple structure for representing
currency. Second of all, 128 is not 100. In fact, a 128-bit integer simply
backs up what I said in 1. Naming data types is getting ridiculous. What
is longer than long? I guess it could be 'extended' or 'stretch', but
seriously... Let's make things a bit less complicated.
Sep 22 2004
parent reply Ant <duitoolkit yahoo.ca> writes:
On Wed, 22 Sep 2004 21:46:17 +0000, Deja Augustine wrote:

 I have to say that I fully agree with Brian on this one.
Me too. This was suggested before.
 Even .NET has moved to
 this style of nomenclature (Byte, Int16, Int32, UInt64, Single, Double, etc...)
But something has to be done to float, single, double, real also.

Ant
Sep 22 2004
next sibling parent reply Brian Bober <netdemonz yahoo.com> writes:
So you suggest something like float32, float64, float80,
imag32, imag64, imag80, and complex32, complex64, and complex80?

On Wed, 22 Sep 2004 18:20:23 -0400, Ant wrote:
 but something has to be done to float, single, double, real also.
<snip>
Sep 22 2004
parent "Tony" <talktotony email.com> writes:
I strongly agree with this suggestion (for types other than integer as
well).

When types are constrained to a particular size, it just seems intuitive to
include that size in the type name.

More ambiguous names such as "int" could be used to indicate a type
corresponding to the natural word size of a particular machine architecture
(pretty sure someone else suggested this ages ago).

Tony

"Brian Bober" <netdemonz yahoo.com> wrote in message
news:cit099$289$1 digitaldaemon.com...
 So you suggest something like float32, float64, float80,
 imag32, imag64, imag80, and complex32, complex64, and complex80?

 On Wed, 22 Sep 2004 18:20:23 -0400, Ant wrote:
 but something has to be done to float, single, double, real also.
<snip>
Sep 23 2004
prev sibling parent Arcane Jill <Arcane_member pathlink.com> writes:
In article <pan.2004.09.22.22.20.21.173705 yahoo.ca>, Ant says...

but something has to be done to float, single, double, real also.
Well, the /logical/ names would have to be: rational32, rational64 and
rational80. But maybe they could be abbreviated to rat32, rat64 and rat80
(in the same way that "integer" is abbreviated to "int").

What's left? How about ieee32, ieee64 or ieee80?

(Of course, as I'm sure you all realise, IEEE floats can only represent
rationals, not reals. One cannot represent /any/ irrational number as an
IEEE float).

Jill
Sep 23 2004
prev sibling next sibling parent reply Brian Bober <netdemonz yahoo.com> writes:
On Thu, 23 Sep 2004 09:21:52 +1200, Regan Heath wrote:

 Why not go all the way...
 
 "byte"   -> "int8"
 "float"  -> "float32"
 "double" -> "float64"
 "real"   -> "float80" (intel only?)
 "char"   -> "char8"
 "wchar"  -> "char16"
 "dchar"  -> "char32"
 
 Regan
I thought about byte being int8, but figured people would like byte
better, since bit and byte are standard on all platforms while anything
above that is not. For instance, "word" means the native word length on a
given system. int8 would be more consistent, but I assumed that in
general people would prefer byte over int8.

char8/16/32 sounds good, and could also be utf8/16/32.
Sep 22 2004
next sibling parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <cit6j3$4sp$1 digitaldaemon.com>, Brian Bober says...
On Thu, 23 Sep 2004 09:21:52 +1200, Regan Heath wrote:

 Why not go all the way...
 
 "char"   -> "char8"
 "wchar"  -> "char16"
 "dchar"  -> "char32"
 
 Regan
What the hell. Let's go /all/ the way...

char  -> utf8
wchar -> utf16
dchar -> utf32

Then we'll have none of that nonsense of people confusing D's chars with
C's chars, which should /in fact/ be mapped to int8 and uint8.

Jill

PS. I also suggest float32, float64, float80, ifloat32, ifloat64,
ifloat80, cfloat64, cfloat128 and cfloat160 for the float types.
Sep 23 2004
parent Charles Hixson <charleshixsn earthlink.net> writes:
Arcane Jill wrote:
 In article <cit6j3$4sp$1 digitaldaemon.com>, Brian Bober says...
 
On Thu, 23 Sep 2004 09:21:52 +1200, Regan Heath wrote:


Why not go all the way...

"char"   -> "char8"
"wchar"  -> "char16"
"dchar"  -> "char32"

Regan
What the hell. Let's go /all/ the way...

char  -> utf8
wchar -> utf16
dchar -> utf32

Then we'll have none of that nonsense of people confusing D's chars with
C's chars, which should /in fact/ be mapped to int8 and uint8.

Jill

PS. I also suggest float32, float64, float80, ifloat32, ifloat64,
ifloat80, cfloat64, cfloat128 and cfloat160 for the float types.
If 'twere done, 'twere best done quickly! (I see a lot to like about those
names, but perhaps the current names should be kept, for a while, as
aliases? And, of course, deprecated.)
Sep 23 2004
prev sibling parent reply Sjoerd van Leent <svanleent wanadoo.nl> writes:
Brian Bober wrote:
 On Thu, 23 Sep 2004 09:21:52 +1200, Regan Heath wrote:
 
 
Why not go all the way...

"byte"   -> "int8"
"float"  -> "float32"
"double" -> "float64"
"real"   -> "float80" (intel only?)
"char"   -> "char8"
"wchar"  -> "char16"
"dchar"  -> "char32"

Regan
I thought about byte being int8, but figured people would like byte
better, since bit and byte are standard on all platforms while anything
above that is not. For instance, "word" means the native word length on a
given system. int8 would be more consistent, but I assumed that in
general people would prefer byte over int8.

char8/16/32 sounds good, and could also be utf8/16/32.
Agreed with this point of view. I always have trouble with the words
"int", "long", etc. Many languages and platforms have their own
interpretation of the length of each of these.

By providing the bit length for each type (except bit and byte), it is
possible for each platform to have its own lengths. In future, you might
want to have int128 or utf64, or even int4 (a nibble). These
representations seem much more logical to me. (Besides, an int128 couldn't
possibly be named "long long long".)

Regards,
Sjoerd
Sep 23 2004
parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <ciu8rr$sik$1 digitaldaemon.com>, Sjoerd van Leent says...

 I thought about byte being int8, but thought that people would like byte
 better since bit and byte are standard on all platforms
Nah - if "byte" were well-defined, nobody would ever have needed to invent the word "octet". In the definition of "byte" according to http://compnetworking.about.com/cs/basicnetworking/g/bldef_byte.htm it says:
In all modern network protocols, a byte contains eight bits. A few
(generally obsolete) computers may use bytes of different sizes for other
purposes.
So, strictly, it would have to be int8 and uint8 - which I would be quite
happy with.

Arcane Jill
Sep 23 2004
parent reply Pragma <Pragma_member pathlink.com> writes:
In article <ciuh8l$17j3$1 digitaldaemon.com>, Arcane Jill says...
Nah - if "byte" were well-defined, nobody would ever have needed to invent the
word "octet". In the definition of "byte" according to
http://compnetworking.about.com/cs/basicnetworking/g/bldef_byte.htm it says:

In all modern network protocols, a byte contains eight bits. A few
(generally obsolete) computers may use bytes of different sizes for other
purposes.
So, strictly, it would have to be int8 and uint8 - which I would be quite happy with.
Also, Wikipedia has the details on this topic:
http://en.wikipedia.org/wiki/Byte

"C, for example, defines byte as a storage unit capable of at least being
large enough to hold any character of the execution environment (clause
3.5 of the C standard)."

So even though I doubt anyone will be backporting D to legacy systems with
non-8-bit bytes, it makes sense to define byte as uint8 or something
similar.

pragma(EricAnderton,"at","yahoo");
Sep 23 2004
parent reply Charles Hixson <charleshixsn earthlink.net> writes:
Pragma wrote:
 In article <ciuh8l$17j3$1 digitaldaemon.com>, Arcane Jill says...
 
...
... Also, Wikipedia has the details on this topic:
http://en.wikipedia.org/wiki/Byte

"C, for example, defines byte as a storage unit capable of at least being
large enough to hold any character of the execution environment (clause
3.5 of the C standard)."

So even though I doubt anyone will be backporting D to legacy systems with
non-8-bit bytes, it makes sense to define byte as uint8 or something
similar.

pragma(EricAnderton,"at","yahoo");
So if the current environment includes utf32 characters, then...
Sep 23 2004
parent Sean Kelly <sean f4.ca> writes:
In article <cive9e$2401$2 digitaldaemon.com>, Charles Hixson says...
Pragma wrote:
 In article <ciuh8l$17j3$1 digitaldaemon.com>, Arcane Jill says...
 
...
... Also, Wikipedia has the details on this topic:
http://en.wikipedia.org/wiki/Byte

"C, for example, defines byte as a storage unit capable of at least being
large enough to hold any character of the execution environment (clause
3.5 of the C standard)."

So even though I doubt anyone will be backporting D to legacy systems with
non-8-bit bytes, it makes sense to define byte as uint8 or something
similar.

pragma(EricAnderton,"at","yahoo");
So if the current environment includes utf32 characters, then...
Yeah, the wording could probably be refined a bit. I'm pretty sure the
current expectation is for char/uchar to always occupy one byte, otherwise
they'd have to go and define a new type name. Also, the wording is such
that uchar is basically a building-block for everything, so it almost has
to be one byte in size.

Sean
Sep 23 2004
prev sibling next sibling parent Arcane Jill <Arcane_member pathlink.com> writes:
In article <cisodp$2vok$1 digitaldaemon.com>, Brian Bober says...
I think it's time to re-hash an old discussion from a couple years ago. 
I propose replacing type names for integers 
(http://www.digitalmars.com/d/type.html) with the following:

bit
byte
ubyte
int16
uint16
int32
uint32
int64
uint64
int128
uint128

... etc

Why?
1) It's more logical and easier to remember
Though not /quite/ as logical as:

int8  (instead of byte)
uint8 (instead of ubyte)

Arcane Jill
Sep 23 2004
prev sibling parent reply Helmut Leitner <helmut.leitner wikiservice.at> writes:
Brian Bober wrote:
 
 I think it's time to re-hash an old discussion from a couple years ago.
 I propose replacing type names for integers
 (http://www.digitalmars.com/d/type.html) with the following:
 
 bit
 byte
 ubyte
 int16
 uint16
 int32
 uint32
 int64
 uint64
 int128
 uint128
As no-one else contradicts, I do.

The problem is that this change would break all existing D code.
Developers world-wide would have to put thousands of hours into their
code to update. This is unfun. There are surely other issues we can put
this time into.

All programming languages require some effort to learn their basic data
types. It's so simple and so fundamental that I think there is no need to
make it simpler.

It would also increase the distance to languages like C and Java and add
work to translations.

This is not a logical argument, it's a pragmatic one. The suggestion is
valid and should maybe be used for another language designed from
scratch, but not at this stage of D development, where we are longing for
a 1.00 release. This would put us back 3-4 months.

--
Helmut Leitner    leitner hls.via.at
Graz, Austria   www.hls-software.com
Sep 26 2004
next sibling parent reply "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Helmut Leitner" <helmut.leitner wikiservice.at> wrote in message
news:41568D76.A0536642 wikiservice.at...
 Brian Bober wrote:
 I think it's time to re-hash an old discussion from a couple years ago.
 I propose replacing type names for integers
 (http://www.digitalmars.com/d/type.html) with the following:

 bit
 byte
 ubyte
 int16
 uint16
 int32
 uint32
 int64
 uint64
 int128
 uint128
As no-one else contradicts, I do. The problem is that this change would break all existing D code.
Why should it break all code if we keep aliases to the current types?
Like: alias int32 int;
 Developers world-wide would have to put thousands of hours into
 their code to update. This is unfun. There are surely other
 issues where we can put this time into.

 All programming languages require some effort to learn their
 basic data types. It's so simple and so fundamental that I think
 there is no need to make it simpler.

 It would also increase the distance to languages like C and Java
 and add work to translations.

 This is not a logical argument, it's pragmatic. The suggestion
 is valid and should maybe be used for another language designed from
 scratch. But not at this stage of D development where we are
 longing for a 1.00 release. This would put us back 3-4 months.

 --
 Helmut Leitner    leitner hls.via.at
 Graz, Austria   www.hls-software.com
Sep 26 2004
next sibling parent reply Helmut Leitner <helmut.leitner wikiservice.at> writes:
Ivan Senji wrote:
 
 "Helmut Leitner" <helmut.leitner wikiservice.at> wrote in message
 news:41568D76.A0536642 wikiservice.at...
 Brian Bober wrote:
 I think it's time to re-hash an old discussion from a couple years ago.
 I propose replacing type names for integers
 (http://www.digitalmars.com/d/type.html) with the following:

 bit
 byte
 ubyte
 int16
 uint16
 int32
 uint32
 int64
 uint64
 int128
 uint128
As no-one else contradicts, I do. The problem is that this change would break all existing D code.
Why should it break all code if we keep aliases to the current types?
Like: alias int32 int;
Then it is not a rename.
Then create aliases for the suggested types.

--
Helmut Leitner    leitner hls.via.at
Graz, Austria   www.hls-software.com
Sep 26 2004
parent "Carlos Santander B." <carlos8294 msn.com> writes:
"Helmut Leitner" <helmut.leitner wikiservice.at> escribió en el mensaje 
news:4156ABF5.494C80F5 wikiservice.at...
| Then it is not a rename.
| Then create aliases for the suggested types.
|
| -- 
| Helmut Leitner    leitner hls.via.at
| Graz, Austria   www.hls-software.com

Those aliases exist: std.stdint
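For anyone who hasn't looked at that module, a quick usage sketch (assuming a D compiler where std.stdint is importable; each alias is exactly the built-in type it names):

```d
// The fixed-width stdint aliases in use: same types as the keyword
// names, but the width is spelled out in the identifier.
import std.stdint;

void main()
{
    int16_t  s   = 30_000;            // alias for short
    uint32_t u   = 4_000_000_000;     // alias for uint
    int64_t  big = 1_000_000_000_000; // alias for long

    assert(int16_t.sizeof  == 2);
    assert(uint32_t.sizeof == 4);
    assert(int64_t.sizeof  == 8);
}
```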

-----------------------
Carlos Santander Bernal 
Sep 27 2004
prev sibling parent Charles Hixson <charleshixsn earthlink.net> writes:
Ivan Senji wrote:
 "Helmut Leitner" <helmut.leitner wikiservice.at> wrote in message
 news:41568D76.A0536642 wikiservice.at...
 
Brian Bober wrote:

I think it's time to re-hash an old discussion from a couple years ago.
I propose replacing type names for integers
(http://www.digitalmars.com/d/type.html) with the following:

bit
byte
ubyte
int16
uint16
int32
uint32
int64
uint64
int128
uint128
As no-one else contradicts, I do. The problem is that this change would break all existing D code.
Why should it break all code if we keep aliases to the current types?
Like: alias int32 int;
This is good, but be sure to deprecate the aliases... if not immediately,
then DECIDE that they will be deprecated in, say, 6 months.
 
 
...scratch. But not at this stage of D development where we are
longing for a 1.00 release. This would put us back 3-4 months.

--
Helmut Leitner    leitner hls.via.at
Graz, Austria   www.hls-software.com 
Sep 26 2004
prev sibling next sibling parent reply J C Calvarese <jcc7 cox.net> writes:
Helmut Leitner wrote:
 
 Brian Bober wrote:
 
I think it's time to re-hash an old discussion from a couple years ago.
I propose replacing type names for integers
(http://www.digitalmars.com/d/type.html) with the following:

bit
byte
ubyte
int16
uint16
int32
uint32
int64
uint64
int128
uint128
As no-one else contradicts, I do. The problem is that this change would break all existing D code. Developers world-wide would have to put thousands of hours into their code to update. This is unfun. There are surely other issues where we can put this time into.
Also, I think it should be mentioned that similar suggestions have been
made numerous times in the past:

17 Aug 2001: http://www.digitalmars.com/d/archives/200.html
2 May 2002:  http://www.digitalmars.com/d/archives/4842.html
28 Aug 2002: http://www.digitalmars.com/d/archives/7996.html
22 Jan 2003: http://www.digitalmars.com/drn-bin/wwwnews?D/10321
14 Jan 2004: http://www.digitalmars.com/d/archives/9954.html
3 Mar 2004:  http://www.digitalmars.com/d/archives/25095.html

I think if Walter was interested in doing this, he would have done it last
year or the year before rather than now (when so much has stabilized). It
doesn't make much sense to me to /change/ the names now, though I can
understand the argument to /add/ the new names. You might be able to
convince Walter to alias the new names in something like object.d, but
I'm pretty sure the "old" names are here to stay.
 
 All programming languages require some effort to learn their
 basic data types. It's so simple and so fundamental that I think
 there is no need to make it simpler. 
 
 It would also increase the distance to languages like C and Java
 and add work to translations. 
 
 This is not a logical argument, it's pragmatic. The suggestion
 is valid and should maybe be used for another language designed from
 scratch. But not at this stage of D development where we are 
 longing for a 1.00 release. This would put us back 3-4 months.
 
-- Justin (a/k/a jcc7) http://jcc_7.tripod.com/d/
Sep 26 2004
parent reply "Bent Rasmussen" <exo bent-rasmussen.info> writes:
Perhaps this is the essence

http://www.digitalmars.com/drn-bin/wwwnews?D/8100
Sep 26 2004
next sibling parent "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Bent Rasmussen" <exo bent-rasmussen.info> wrote in message
news:cj73a1$j5r$1 digitaldaemon.com...
 Perhaps this is the essence
Or maybe this little part from lexer.h: :)

#define CASE_BASIC_TYPES \
    case TOKwchar: case TOKdchar: \
    case TOKbit: case TOKchar: \
    case TOKint8: case TOKuns8: \
    case TOKint16: case TOKuns16: \
    case TOKint32: case TOKuns32: \
    case TOKint64: case TOKuns64: \
    case TOKfloat32: case TOKfloat64: case TOKfloat80: \
    case TOKimaginary32: case TOKimaginary64: case TOKimaginary80: \
    case TOKcomplex32: case TOKcomplex64: case TOKcomplex80: \
    case TOKvoid

:)
 http://www.digitalmars.com/drn-bin/wwwnews?D/8100
Sep 26 2004
prev sibling parent reply Antti =?iso-8859-1?Q?Syk=E4ri?= <jsykari cc.hut.fi> writes:
(This article speaks mostly about ints, although I definitely like utf8, utf16
and utf32 as well.)

In article <cj73a1$j5r$1 digitaldaemon.com>, Bent Rasmussen wrote:
 Perhaps this is the essence
 
 http://www.digitalmars.com/drn-bin/wwwnews?D/8100
Walter's answer here (to my proposal, actually) practically states that
int16 and other identifiers with numbers appended don't look good and are
visually confusing in algebraic expressions.

I don't remember whether I replied to that article two years ago, but now
I don't think it's such a big deal. In common programming, one usually
just uses the venerable 32-bit integer type called "int", since that's
pretty good for most uses. The other integer types are mostly for systems
and embedded programming, squeezing out the last bit of performance, or
interfacing with libraries that are designed for one of the
aforementioned purposes. Or indeed for situations where you actually need
those extra bits.

If I really wanted to shoot down his answer and the reasoning behind it,
though, I'd reuse the previous point (that types other than "int" are
quite rarely used, so them having numbers doesn't matter that much) and
additionally point out that type names are not used in arithmetic
expressions -- variable names are. (And those are even more rarely used,
especially in arithmetic contexts.) However, it is Walter's opinion, and
you don't shoot down opinions. Not that easily, at least. <evil grin>

(By the way - I don't think that type names with integers appended are
ugly, yet I still loathe^H^H^H^H^H^H am very reserved about other
syntactic baggage like ITheHorribleInterfacePrefix or
pszawflHungarianNotation. Or even long identifiers if you can say it
shorter. It would be difficult to think objectively if I were deciding
whether to include one of them in the language conventions. Behold the
power of opinions.)

-Antti

P.S. Is there a list for long-standing feature/change proposals in some
kind of Wiki, Bugzilla or something? I'll add one argument:

* The names are self-documenting
Oct 27 2004
parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Antti Sykäri wrote:

 (This article speaks mostly about ints, although I definitely like utf8, utf16
 and utf32 as well.)
I'm not sure I understand all of this post, just wanted to point out that
stdint and stdbool aliases *have* now been added to the language...

module std.stdint;

/* Exact sizes */

alias  byte   int8_t;
alias ubyte  uint8_t;
alias  short  int16_t;
alias ushort uint16_t;
alias  int    int32_t;
alias uint   uint32_t;
alias  long   int64_t;
alias ulong  uint64_t;

module object;

// there's a long rant about whether this alias should go in std.stdbool
// instead, and why true and false should not be keywords if bool isn't
alias bit bool;
 P.S.  Is there a list for long-standing feature/change proposals in some kind
 of Wiki, Bugzilla or something? I'll add one argument:
 
 * The names are self-documenting
http://www.prowiki.org/wiki4d/wiki.cgi?FeatureRequestList

I posted the suggestion of std/stdutf.d to the item list:

module std.stdutf;

/* Code units */

alias  char utf8_t;
alias wchar utf16_t;
alias dchar utf32_t;

Unfortunately I mixed it up with the "string" and "ustring", which
confused people. It should be split into two separate...

It would be neat if some "voting" procedure could be installed; right now
it's usually down to a final veto from the One Vote.

--anders
Oct 28 2004
next sibling parent reply Kevin Bealer <Kevin_member pathlink.com> writes:
alias  byte   int8_t;
alias ubyte  uint8_t;
alias  short  int16_t;
alias ushort uint16_t;
alias  int    int32_t;
alias uint   uint32_t;
alias  long   int64_t;
alias ulong  uint64_t;
I personally find these ugly, but there is an additional annoyance with
this kind of naming: What is an int8? Is it equivalent to byte, or long?

About half of the code I see that uses this scheme uses int32 to
represent the "int", and the other half uses "int4". There is no
standardization on whether the number means bytes or bits. Or rather,
there are several standardizations.

Kevin
Oct 28 2004
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Kevin Bealer wrote:

alias  byte   int8_t;
alias ubyte  uint8_t;
alias  short  int16_t;
alias ushort uint16_t;
alias  int    int32_t;
alias uint   uint32_t;
alias  long   int64_t;
alias ulong  uint64_t;
I personally find these ugly, but there is an additional annoyance with
this kind of naming: What is an int8? Is it equivalent to byte, or long?

About half of the code I see that uses this scheme uses int32 to
represent the "int", and the other half uses "int4". There is no
standardization on whether the number means bytes or bits. Or rather,
there are several standardizations.
This is *the* standard. It's called "stdint", and it is available in:
C99, C++ - and D.

// C99 / C++
#include <stdint.h>

// D
import std.stdint;

It's a standard *just* to keep folks from inventing their own types like
"int4"...

--anders

PS. All new standards seem to be about bits (not bytes).
At least if you compare UTF-16 (new) vs UCS-2 (old) ?
Oct 29 2004
prev sibling parent reply Antti =?iso-8859-1?Q?Syk=E4ri?= <jsykari cc.hut.fi> writes:
In article <clq73e$1gln$1 digitaldaemon.com>, Anders F Björklund wrote:
 Antti Sykäri wrote:
 
 (This article speaks mostly about ints, although I definitely like
 utf8, utf16 and utf32 as well.)
I'm not sure I understand all of this post, just wanted to pointed out that stdint and stdbool aliases *have* now been added to the language... module std.stdint; /* Exact sizes */ alias byte int8_t; alias ubyte uint8_t; alias short int16_t; alias ushort uint16_t; alias int int32_t; alias uint uint32_t; alias long int64_t; alias ulong uint64_t;
(Sorry for the excessive quoting)

Ah, I see. I thought I remembered something like that.

So we have them. Hurray. Not that I will (at least initially) want to
touch anything equipped with the dreaded '_t' suffix from the C era, but
at least we have them.

And to think of the fact that the original reason for rejecting them was
that they are not "aesthetically pleasing". Heh.

I suppose that the _t suffix should be added to the D style guide as the
official suffix denoting ... umm, what does it denote actually? You don't
use it in every alias, do you?
 I posted the suggestion of std/stdutf.d to the item list:
 
 module std.stdutf;
 
 /* Code units */
 
 alias  char utf8_t;
 alias wchar utf16_t;
 alias dchar utf32_t;
Thumbs up for this. I don't remember what char, wchar and dchar mean, but
utf8(_t) makes it instantly crystal clear.

-Antti
Oct 29 2004
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Antti Sykäri wrote:

 So we have them. Hurray. Not that I will (at least initially) want to
 touch anything equipped with the dreaded '_t' suffix from the C era, but
 at least we have them.

 And to think of the fact that the original reason for rejecting them was
 that they are not "aesthetically pleasing". Heh.
 
 I suppose that the _t suffix should be added to the D style guide as the
 official suffix denoting ... umm, what does it denote actually? You
 don't use it in every alias, do you?
I think that it stands for "typedef" (the name of alias in C), but the
main reason is so that they would be totally different... Since people
already had all kinds of aliases defined locally, the new standard ones
had to stand out a little.

http://www.opengroup.org/onlinepubs/009695399/basedefs/stdint.h.html

It also gets rid of the "but it could be confused with a number"
objection, since the name no longer ends in any numeric characters... Or
something?

In D, you can also find the suffix in the standard aliases:
"size_t" (either 32 or 64 bits) and "ptrdiff_t" (either 32 or 64 bits)
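To illustrate the size_t / ptrdiff_t point: unlike the fixed-width stdint aliases, these two track the target's pointer width. A small sketch (the exact byte counts depend on whether you compile for a 32- or 64-bit target):

```d
// size_t and ptrdiff_t scale with the platform, while the stdint
// aliases stay fixed regardless of target.
void main()
{
    int[] arr = new int[10];

    size_t len = arr.length;              // 4 bytes on 32-bit, 8 on 64-bit
    ptrdiff_t diff = &arr[9] - &arr[0];   // element difference: 9

    assert(diff == 9);
    assert(size_t.sizeof == (void*).sizeof);
    assert(ptrdiff_t.sizeof == size_t.sizeof);
}
```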
I posted the suggestion of std/stdutf.d to the item list:

module std.stdutf;

/* Code units */

alias  char utf8_t;
alias wchar utf16_t;
alias dchar utf32_t;
Thumbs up for this. I don't remember what char, wchar and dchar mean, but utf8(_t) makes it instantly crystal clear.
I'm willing to put it in the Public Domain :-)

Since they are optional modules, you can still use the built-in types...
But at least this should standardize the "alternative" names, I think ?

--anders
Oct 29 2004
prev sibling parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <41568D76.A0536642 wikiservice.at>, Helmut Leitner says...

As no-one else contradicts, I do.
<humor>There's always one, isn't there! (rolls eyes)</humor>

Trouble is, of course, you're basically right. But let me argue with you
anyway...
The problem is that this change would break all existing D code.
Developers world-wide would have to put thousands of hours into 
their code to update. This is unfun. There are surely other
issues where we can put this time into.
It doesn't /actually/ take thousands of hours to do a simple search-and-replace, however. And the old names could be deprecated during the transition.
All programming languages require some effort to learn their
basic data types. It's so simple and so fundamental that I think
there is no need to make it simpler. 
You'd think so, wouldn't you? In the case of D, however, it seems that
the basic types still have not been learned by everyone, and even the D
manual gets confused sometimes.

Is a D char the same thing as a C char? Or is it the same thing as a C
byte or ubyte? Does D's "bool" count as a type? Technically it's just an
alias, but since that alias is declared in object.d, it's pretty
difficult to ignore, and most of us treat it as if it were a basic type.
But then, why does Object.opEquals() return int, not bool (which is an
alias for bit, not int)? Walter says it's more efficient - in which case,
shouldn't bool be int? And what does the "d" in "dchar" stand for anyway?
Double-wide?

So long as people write programs in C and D at the same time and expect
them to interoperate, confusion over the meanings of "char", "bool
array" and so on will continue, and discussions like this will come back,
over and over again.
It would also increase the distance to languages like C and Java
and add work to translations. 
On the contrary, the fact that you can store Chinese characters in a
"char" array in D but not in C is already a great "distance". The fact
that an array of sixteen bools is sixteen bytes in C but only two bytes
in D (and you can't take the address of its elements) is more "distance".

Using different words for different concepts would, in my view, decrease
the distance, not increase it. In fact, the only existing overlaps
between C and D are "short", "int", "float" and "double" (and even then
not on all platforms).
But not at this stage of D development where we are 
longing for a 1.00 release. This would put us back 3-4 months.
Many people have said (and I agree with them) that getting it /right/ is
more important than getting it /soon/. I can wait 3-4 months, if it means
D gets it right. Hurrying things along out of impatience is a recipe for
disaster.

In my humble opinion,
Jill
Sep 27 2004
parent Deja Augustine <deja scratch-ware.net> writes:
Arcane Jill wrote:
 In article <41568D76.A0536642 wikiservice.at>, Helmut Leitner says...
<snip>
 
 
But not at this stage of D development where we are 
longing for a 1.00 release. This would put us back 3-4 months.
Many people have said (and I agree with them) that getting it /right/ is more important than getting it /soon/. I can wait 3-4 months, if it means D gets it right. Hurrying things along out of impatience is a recipe for disaster.
</snip>

To Walter's credit, if all we wanted to do is rename the data types, that
would take about 35 seconds. If we wanted to add aliases, that would take
maybe another minute and a half, depending on whether or not Walter knew
exactly where he wanted to put them.

I'm already considering renaming the data types for D.NET, and aliasing
code where a function returns an Int16 and the D code that receives it
uses a short, and vice versa. It's not at all a complex operation, so the
time it would take to actually implement is negligible.

Of much more concern, I would think, is the public's reaction, and I
think the evidence has been overwhelming to this point that such a change
would be welcomed by the majority of D users.

-Deja
Sep 27 2004