## digitalmars.D - D arithmetic problem

• davidl (24/24) Jun 02 2009 The following two pieces of code currently demonstrate different behaviors.
• Tim Matthews (11/40) Jun 02 2009 should be in d.learn but try this:
• davidl (7/41) Jun 02 2009 I mean the first behavior can cause problems. I doubt any coder would tr...
• Tim Matthews (11/16) Jun 02 2009 It is probably mostly used for dealing with binary data but D has no
• davidl (8/24) Jun 02 2009 Perhaps, it's because of my mistranslating all Byte to byte directly.
• Rainer Deyke (5/9) Jun 02 2009 Bitwise operations rarely make sense for signed types. Would it make
• Don (9/38) Jun 02 2009 Yes. Yet there's an issue here -- EVERYONE expects 'byte' to be
• Walter Bright (16/20) Jun 03 2009 Any two's complement arithmetic system, with types of different sizes
• davidl (10/30) Jun 03 2009 I highly whether the code by utilizing this kind of semantic can be call...
• Walter Bright (10/18) Jun 03 2009 It's not about whether it is well-written or not. It is about whether it...
• Kagamin (3/7) Jun 03 2009 It uses only unsigned types.
• Don (13/40) Jun 03 2009 Really, the problem in this case isn't two's complement, but rather C's
• Kagamin (2/4) Jun 03 2009 yeah, signed byte sucks. Signed 1-byte int is tiny int in fact.
• bearophile (10/16) Jun 03 2009 I agree a lot.
• Daniel Keep (4/25) Jun 03 2009 *Every* integral type is signed by default. Why should byte be an
• bearophile (8/11) Jun 03 2009 It's not a fault of D, it's a fault of mine, I'm just used to think of b...
• Kagamin (5/7) Jun 03 2009 there are two different families of integers
• Paul D. Anderson (5/51) Jun 02 2009 The behavior is consistent with the specification -- see http://www.digi...
• Tim Matthews (5/9) Jun 02 2009 You are slightly missing the point. The point is byte should be unsigned...
• Steven Schveighoffer (20/29) Jun 03 2009 Count me as one of the people who sees byte as signed. In fact, C#
• Tim Matthews (10/13) Jun 03 2009 It would make it very messy agreed. I would hate for it to be changed
• Don (13/48) Jun 03 2009 Yes, but we need to recognize that that convention is not intuitive when...
• neob (7/12) Jun 04 2009 Then maybe byte and ubyte should be replaced by tiny and utiny. It would...
• bearophile (4/5) Jun 04 2009 For me tiny/utiny too are acceptable. In this thread I have seen several...
• Steven Schveighoffer (15/45) Jun 04 2009 I'm afraid you are thinking way deeper than I do when looking at it :)
• Denis Koroskin (2/24) Jun 03 2009 Shouldn't bitwise operations be disallowed on signed types at all?
• bearophile (4/5) Jun 03 2009 It sounds OK to me, do Don & Walter agree?
• Don (13/20) Jun 03 2009 Hmm. If you are doing bitwise operations, you are treating the number as...
• Denis Koroskin (4/20) Jun 04 2009 I see no problem adding implicit cast to unsigned counterparts here.
• bearophile (53/56) Jun 04 2009 I agree that it's common idiom in C code, but in a tidy language I may w...
• Kagamin (3/12) Jun 04 2009 How would you write this?
• Steven Schveighoffer (15/31) Jun 04 2009 Easy:
• Don (5/46) Jun 09 2009 Yes. The Java argument is a stronger one. (Anyway, you could treat the
• bearophile (4/5) Jun 10 2009 Before introducing special cases like that I strongly suggest to think a...
• Don (3/10) Jun 11 2009 I'm not proposing it. Just saying that my argument based around -1 was
• Nick Sabalausky (3/4) Jun 03 2009 Or at least prohibited on different-sized operands.
• Kagamin (2/3) Jun 04 2009 bit operations on signed types are ok as long as implicit conversions do...
davidl <davidl nospam.org> writes:
```The following two pieces of code currently demonstrate different behaviors.

import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= t;
writefln(v);
}

Output:4294967295

import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= cast(ubyte)t;
writefln(v);
}

Output:31487

I would always want the second result. What's your opinion?

--

```
Jun 02 2009
Tim Matthews <tim.matthews7 gmail.com> writes:
```davidl wrote:
Following two piece of code demonstrate different behaviors currently.

import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= t;
writefln(v);
}

Output:4294967295

import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= cast(ubyte)t;
writefln(v);
}

Output:31487

I would always want the second result. What's your opinion?

should be in d.learn but try this:

import std.stdio;
void main()
{
uint v;
v=31234;
ubyte t= -1;
v |= t;
writefln(v);
}
```
Jun 02 2009
davidl <davidl nospam.org> writes:
```On Wed, 03 Jun 2009 10:48:02 +0800, Tim Matthews <tim.matthews7 gmail.com> wrote:

davidl wrote:
Following two piece of code demonstrate different behaviors currently.
import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= t;
writefln(v);
}
Output:4294967295
import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= cast(ubyte)t;
writefln(v);
}
Output:31487
I would always want the second result. What's your opinion?

should be in d.learn but try this:

import std.stdio;
void main()
{
uint v;
v=31234;
ubyte t= -1;
v |= t;
writefln(v);
}

I mean that the first behavior can cause problems. I doubt any coder would
try to get that result by writing that piece of code. I ported some C# source
to D, and I ran into this semantic difference.

--

```
Jun 02 2009
Tim Matthews <tim.matthews7 gmail.com> writes:
```davidl wrote:

I mean the first behavior can cause problems.

Ohh sorry.

I doubt any coder would
try to get that result by writing that piece of code. I ported some C#
source to D, and I got this semantic different issue.

It is probably mostly used for dealing with binary data, but D has no
other way of specifying a number that is only 1 byte, except char, and
using that would be a misuse of the type system unless the code was
ported from C/C++. Just a few gotchas off the top of my head:

D     C#
------------
ubyte   Byte
byte    SByte
wchar   Char
```
Jun 02 2009
davidl <davidl nospam.org> writes:
```On Wed, 03 Jun 2009 10:58:32 +0800, Tim Matthews <tim.matthews7 gmail.com> wrote:

davidl wrote:

I mean the first behavior can cause problems.

Ohh sorry.

I doubt any coder would try to get that result by writing that piece of
code. I ported some C# source to D, and I got this semantic different
issue.

It is probably mostly used for dealing with binary data but D has no
other way of specifying a number that is only 1 byte except the char but
that would be a misuse of the type system unless ported from c/c++. Just
a few gotchas off the top of my head:

D     C#
------------
ubyte   Byte
byte    SByte
wchar   Char

Perhaps it's because I mistranslated every Byte to byte directly.
Nonetheless, the first behavior is unpleasant; I innately dislike it.
When I try to work with a one-byte type using the OR operator, it gets
promoted to int. That implicit cast sounds awful.

--

```
Jun 02 2009
Rainer Deyke <rainerd eldwood.com> writes:
```davidl wrote:
Perhaps, it's because of my mistranslating all Byte to byte directly.
Nonetheless, the first behavior is not impressive. I innately dislike
that. Because I try to deal with a type with one byte by using Or
operator, I get it promoted to int. That implicitly cast sounds awful.

Bitwise operations rarely make sense for signed types.  Would it make
sense to disallow them entirely?

--
Rainer Deyke - rainerd eldwood.com
```
Jun 02 2009
Don <nospam nospam.com> writes:
```davidl wrote:
On Wed, 03 Jun 2009 10:58:32 +0800, Tim Matthews
<tim.matthews7 gmail.com> wrote:

davidl wrote:

I mean the first behavior can cause problems.

Ohh sorry.

I doubt any coder would try to get that result by writing that piece
of code. I ported some C# source to D, and I got this semantic
different issue.

It is probably mostly used for dealing with binary data but D has no
other way of specifying a number that is only 1 byte except the char
but that would be a misuse of the type system unless ported from
c/c++. Just a few gotchas off the top of my head:

D     C#
------------
ubyte   Byte
byte    SByte
wchar   Char

Perhaps, it's because of my mistranslating all Byte to byte directly.

Yes. Yet there's an issue here -- EVERYONE expects 'byte' to be
unsigned. Walter even forgot it when he did the first release of htod.
The uses for signed bytes are so rare that any use of 'byte' is highly
likely to be an error.

Nonetheless, the first behavior is not impressive. I innately dislike
that. Because I try to deal with a type with one byte by using Or
operator, I get it promoted to int. That implicitly cast sounds awful.

I agree, it's bug-prone.
Ideally, we'd disallow implicit widening casts for signed types in
logical operations. But can that be done without creating too many
language quirks?
```
Jun 02 2009
Walter Bright <newshound1 digitalmars.com> writes:
```Don wrote:
I agree, it's bug-prone.

Any two's complement arithmetic system, with types of different sizes
and signed-ness, is going to have quirks. It's inescapable. Back when C
was standardized in the 80's, there was a huge debate about whether to
use signed-preserving rules or value-preserving rules. After much
debate, it came down to pick set A of problems or set B of problems. The
committee picked one (value preserving) and moved on.

Ideally, we'd disallow implicit widening casts for signed types in
logical operations. But can that be done without creating too many
language quirks?

The problem with changing the rules is that the value preserving rules
are now deeply ingrained into how C (and C++) code is written. Changing
them would mean that translating complex code from C to D may produce
silent changes in behavior. I believe this would be very bad for D
because it means people would not be able to translate such code to D.

(After all, it's one thing to translate, say, an encryption program from
C to D. It's quite another thing to understand it well enough to be able
to debug it, or even verify that it is working correctly.)
```
Jun 03 2009
davidl <davidl nospam.org> writes:
```On Wed, 03 Jun 2009 15:23:45 +0800, Walter Bright
<newshound1 digitalmars.com> wrote:

Don wrote:
I agree, it's bug-prone.

Any two's complement arithmetic system, with types of different sizes
and signed-ness, is going to have quirks. It's inescapable. Back when C
was standardized in the 80's, there was a huge debate about whether to
use signed-preserving rules or value-preserving rules. After much
debate, it came down to pick set A of problems or set B of problems. The
committee picked one (value preserving) and moved on.

Ideally, we'd disallow implicit widening casts for signed types in
logical operations. But can that be done without creating too many
language quirks?

The problem with changing the rules is that the value preserving rules
are now deeply ingrained into how C (and C++) code is written. Changing
them would mean that translating complex code from C to D may produce
silent changes in behavior. I believe this would be very bad for D
because it means people would not be able to translate such code to D.

(After all, it's one thing to translate, say, an encryption program from
C to D. It's quite another thing to understand it well enough to be able
to debug it, or even verify that it is working correctly.)

I highly doubt that code relying on this kind of semantics can be called
well-written. Porting such code to D doesn't make D any better, and it
requires a lot of effort to port. I don't know of any notable C-to-D or
C++-to-D converter. I do, however, have a working C#-to-D porting tool that
can port some specific code, built by hacking on top of SharpDevelop in only
a few weeks of effort. Porting C# code to D might be more attractive.

--

```
Jun 03 2009
Walter Bright <newshound1 digitalmars.com> writes:
```davidl wrote:
I highly doubt that code relying on this kind of semantics can be
called well-written.

It's not about whether it is well-written or not. It is about whether it
works correctly in D if it is legal C, or if it silently produces
different answers. I believe the latter is unacceptable. It should
either work the same, or issue an error.

Porting such code to D doesn't make D any better,
also it requires a lot of effort to port.

Not sure what you mean here, as the current rules make it easy to port C
expressions to D, because the rules are the same for C and D.

I don't know any famous C to D
or C++ to D converter. Instead, I have a working C# to D portting tool
which can port some specific code by hacking on top of SharpDevelop in
only few weeks effort. Porting C# code to D might be more attractive.

I have ported C code to D - see std.md5. I frankly have no idea how it
works, but I was able to port it despite it doing some inscrutable
integer arithmetic.
```
Jun 03 2009
Kagamin <spam here.lot> writes:
```Walter Bright Wrote:

It should either work the same, or issue an error.

As I understand, this is a request to issue an error.

I have ported C code to D - see std.md5. I frankly have no idea how it
works, but I was able to port it despite it doing some inscrutable
integer arithmetic.

It uses only unsigned types.
```
Jun 03 2009
Don <nospam nospam.com> writes:
```Walter Bright wrote:
Don wrote:
I agree, it's bug-prone.

Any two's complement arithmetic system, with types of different sizes
and signed-ness, is going to have quirks. It's inescapable. Back when C
was standardized in the 80's, there was a huge debate about whether to
use signed-preserving rules or value-preserving rules. After much
debate, it came down to pick set A of problems or set B of problems. The
committee picked one (value preserving) and moved on.

Really, the problem in this case isn't two's complement, but rather C's
cavalier attitude to implicit casting.
In this case, it's possible to isolate the implicit casts which are
bug-prone, without affecting useful behaviour at all.

Specifically, the rule would be:
* implicit widening casts of unsigned integral types may occur in
arithmetic operations, bitwise logical operations, and function calls;
* implicit widening casts of signed types may occur in arithmetic
operations and function calls. They do NOT occur in bitwise logical
operations.

Ideally, we'd disallow implicit widening casts for signed types in
logical operations. But can that be done without creating too many
language quirks?

The problem with changing the rules is that the value preserving rules
are now deeply ingrained into how C (and C++) code is written. Changing
them would mean that translating complex code from C to D may produce
silent changes in behavior. I believe this would be very bad for D
because it means people would not be able to translate such code to D.

No, (in contrast to the original poster) my proposed rule change would
just make it an error. There'd be nothing silent.

(After all, it's one thing to translate, say, an encryption program from
C to D. It's quite another thing to understand it well enough to be able
to debug it, or even verify that it is working correctly.)

```
Jun 03 2009
Kagamin <spam here.lot> writes:
```Don Wrote:

Yes. Yet there's an issue here -- EVERYONE expects 'byte' to be
unsigned.

yeah, signed byte sucks. Signed 1-byte int is tiny int in fact.
```
Jun 03 2009
bearophile <bearophileHUGS lycos.com> writes:
```Don:
Yes. Yet there's an issue here -- EVERYONE expects 'byte' to be
unsigned. Walter even forgot it when he did the first release of htod.
The uses for signed bytes are so rare, that any use of 'byte' is highly
likely to be an error.

[...]

I agree, it's bug-prone.

I agree a lot.
I too have put one or two bugs in my D programs caused by not remembering (even
if I know it) that byte is signed in D. Signed byte is not intuitive at all.

Even if it looks a bit dirty, I think that by doing the following two things
some bugs in D2 code can be avoided:
- make byte unsigned;
- deprecate ubyte from the language.

This doesn't give problems in porting C code to D.

Bye,
bearophile
```
Jun 03 2009
Daniel Keep <daniel.keep.lists gmail.com> writes:
```bearophile wrote:
Don:
Yes. Yet there's an issue here -- EVERYONE expects 'byte' to be
unsigned. Walter even forgot it when he did the first release of htod.
The uses for signed bytes are so rare, that any use of 'byte' is highly
likely to be an error.

[...]

I agree, it's bug-prone.

I agree a lot.
I too have put one or two bugs in my D programs caused by not remembering
(even if I know it) that byte is signed in D. Signed byte is not intuitive at
all.

Even if it looks a bit dirty, I think that by doing the following two things
some bugs in D2 code can be avoided:
- make byte unsigned;
- deprecate ubyte from the language.

This doesn't give problems in porting C code to D.

Bye,
bearophile

*Every* integral type is signed by default.  Why should byte be an
exception?

If it's really that much of a concern, deprecate byte and add sbyte.
```
Jun 03 2009
bearophile <bearophileHUGS lycos.com> writes:
```Daniel Keep:
*Every* integral type is signed by default.  Why should byte be an
exception?

It's not a fault of D, it's a fault of mine: I'm just used to thinking of
bytes as plain 8 bits, so to me they are unsigned.
It seems I am not the only one with such an idea of a byte. So I have
suggested something to avoid some bugs, even if it makes the language a bit
less uniform, breaking symmetry a bit.
D has deprecated literals such as 10l to avoid bugs, even though this too has
broken the symmetry a bit (allowing only the upper-case L).

If it's really that much of a concern, deprecate byte and add sbyte.

I like that, this too solves the problem, and it's a solution simpler than mine.
(This doesn't solve the problem of silent conversions in D copied from C, it's
not a general solution, but it's better than nothing).

Bye,
bearophile
```
Jun 03 2009
Kagamin <spam here.lot> writes:
```Daniel Keep Wrote:

*Every* integral type is signed by default.  Why should byte be an
exception?

There are two different families of integers:
1) tiny, short, int, long - they're signed by default.
2) byte, word, dword, qword - they're unsigned.

D just mixed the two up.
```
Jun 03 2009
Paul D. Anderson <paul.d.removethis.anderson comcast.andthis.net> writes:
```The behavior is consistent with the specification -- see
http://www.digitalmars.com/d/1.0/type.html.

Most C-based languages will do the same.

Paul

----------

davidl Wrote:

On Wed, 03 Jun 2009 10:48:02 +0800, Tim Matthews <tim.matthews7 gmail.com>
wrote:

davidl wrote:
Following two piece of code demonstrate different behaviors currently.
import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= t;
writefln(v);
}
Output:4294967295
import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= cast(ubyte)t;
writefln(v);
}
Output:31487
I would always want the second result. What's your opinion?

should be in d.learn but try this:

import std.stdio;
void main()
{
uint v;
v=31234;
ubyte t= -1;
v |= t;
writefln(v);
}

I mean the first behavior can cause problems. I doubt any coder would try
to get that result by writing that piece of code. I ported some C# source
to D, and I got this semantic different issue.

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

```
Jun 02 2009
Tim Matthews <tim.matthews7 gmail.com> writes:
```Paul D. Anderson wrote:
The behavior is consistent with the specification -- see
http://www.digitalmars.com/d/1.0/type.html.

Most C-based languages will do the same.

You are slightly missing the point. The point is that byte should be unsigned,
with a separate name for the signed version. .NET actually has Byte for
unsigned and SByte for signed, so it's harder to make that error. Too late to
change now, though.
```
Jun 02 2009
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
```On Wed, 03 Jun 2009 02:20:33 -0400, Tim Matthews <tim.matthews7 gmail.com>
wrote:

Paul D. Anderson wrote:
The behavior is consistent with the specification -- see
http://www.digitalmars.com/d/1.0/type.html.
Most C-based languages will do the same.

You are slightly missing the point. The point is byte should be unsigned
and a separate name for signed. Dot net actually has Byte for unsigned
and SByte for signed so its harder to make that error. Too late to
change now though.

Count me as one of the people who sees byte as signed.  In fact, C#
confused me when I wanted to use an unsigned byte, and couldn't find
ubyte.  I think it all depends on what you learned first.

Making byte unsigned and introducing sbyte would go against the current
convention of "without a u is signed, with a u is unsigned".

BTW, with C#, the following code results in an error:

static void Main(string[] args)
{
uint v;
v = 31234;
sbyte t = -1;
v |= t;
Console.Out.WriteLine(v);
}

Error: Cannot implicitly convert type 'sbyte' to 'uint'. An explicit
conversion exists (are you missing a cast?)

I would be in favor of a similar behavior in D.

-Steve
```
Jun 03 2009
Tim Matthews <tim.matthews7 gmail.com> writes:
```Steven Schveighoffer wrote:

I think it all depends on what you learned first.

I learnt C first, C++ second, and some similar others before D.

Making byte unsigned and introducing sbyte would go against the current
convention of "without a u is signed, with a u is unsigned".

It would make it very messy, agreed. I would hate for it to be changed
now and break everything, but adding explicitly signed aliases for the
existing types, without removing the old ones, is maybe not a bad idea:

sbyte
sshort
sint
slong
scent
```
Jun 03 2009
Don <nospam nospam.com> writes:
```Steven Schveighoffer wrote:
On Wed, 03 Jun 2009 02:20:33 -0400, Tim Matthews
<tim.matthews7 gmail.com> wrote:

Paul D. Anderson wrote:
The behavior is consistent with the specification -- see
http://www.digitalmars.com/d/1.0/type.html.
Most C-based languages will do the same.

You are slightly missing the point. The point is byte should be
unsigned and a separate name for signed. Dot net actually has Byte for
unsigned and SByte for signed so its harder to make that error. Too
late to change now though.

Count me as one of the people who sees byte as signed.  In fact, C#
confused me when I wanted to use an unsigned byte, and couldn't find
ubyte.  I think it all depends on what you learned first.

Making byte unsigned and introducing sbyte would go against the current
convention of "without a u is signed, with a u is unsigned".

Yes, but we need to recognize that that convention is not intuitive when
applied to 'byte'. The integral types originate in C, and they are
abbreviations.

short -> short int
int   ->   int
long  -> long int

In mathematical usage, an 'integer' is signed. So the 'u' prefix makes
sense. However, 'byte' is not 'byte int'. And the word 'byte' does NOT
have an implied signed-ness in popular usage (in fact, it doesn't have
terribly much implied connotation of being a number, merely a set of 8
bits).

BTW, with C#, the following code results in an error:

static void Main(string[] args)
{
uint v;
v = 31234;
sbyte t = -1;
v |= t;
Console.Out.WriteLine(v);
}

Error: Cannot implicitly convert type 'sbyte' to 'uint'. An explicit
conversion exists (are you missing a cast?)

I would be in favor of a similar behavior in D.

Great! That's exactly what I was proposing.
```
Jun 03 2009
neob <neobstojec gmail.com> writes:
``` In mathematical usage, an 'integer' is signed. So the 'u' prefix makes
sense. However, 'byte' is not 'byte int'. And the word 'byte' does NOT
have an implied signed-ness in popular usage (in fact, it doesn't have
terribly much implied connotation of being a number, merely a set of 8
bits).

Then maybe byte and ubyte should be replaced by tiny and utiny. It would
eliminate the confusion about byte, and it wouldn't go against the "without a
u is signed, with a u is unsigned" convention. You would only have to add

static if (is(utiny))
{
alias tiny byte;
alias utiny ubyte;
}

to older code. A byte_t alias for utiny could be added to the language.
```
Jun 04 2009
bearophile <bearophileHUGS lycos.com> writes:
```neob:
Then maybe byte and ubyte should be replaced by tiny and utiny. It would
eliminate confusion about byte and it wouldn't go against "without a u is
signed, with a u is unsigned" convention. You would only have to add<

For me tiny/utiny too are acceptable. In this thread I have seen several
acceptable solutions; the one I don't like is the one currently used by D.

Bye,
bearophile
```
Jun 04 2009
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
```On Thu, 04 Jun 2009 00:06:33 -0400, Don <nospam nospam.com> wrote:

Steven Schveighoffer wrote:
Count me as one of the people who sees byte as signed.  In fact, C#
confused me when I wanted to use an unsigned byte, and couldn't find
ubyte.  I think it all depends on what you learned first.
Making byte unsigned and introducing sbyte would go against the
current convention of "without a u is signed, with a u is unsigned".

Yes, but we need to recognize that that convention is not intuitive when
applied to 'byte'. The integral types originate in C, and they are
abbreviations.

short -> short int
int   ->   int
long  -> long int

In mathematical usage, an 'integer' is signed. So the 'u' prefix makes
sense. However, 'byte' is not 'byte int'. And the word 'byte' does NOT
have an implied signed-ness in popular usage (in fact, it doesn't have
terribly much implied connotation of being a number, merely a set of 8
bits).

I'm afraid you are thinking way deeper than I do when looking at it :)

I say "I want an 8-bit integer, what does D have? (browses to
http://www.digitalmars.com/d/1.0/type.html) Oh, it's byte! cool.  what's
the unsigned version?  Don't even have to look that up, follow the
convention: ubyte!"

Granted, if it had said sbyte instead of byte, I'd probably have some
misgivings about how it doesn't follow the convention of the other types,
but I'd have gotten over that pretty quick.

The O.P. though, probably didn't go through the same thought process you
did.  He was converting C# code, so he logically thought Byte -> byte.
Makes sense.  If C# had made Byte signed, then we wouldn't even be having
this discussion...

BTW, with C#, the following code results in an error:
static void Main(string[] args)
{
uint v;
v = 31234;
sbyte t = -1;
v |= t;
Console.Out.WriteLine(v);
}
Eror: Cannot implicitly convert type 'sbyte' to 'uint'. An explicit
conversion exists (are you missing a cast?)
I would be in favor of a similar behavior in D.

Great! That's exactly what I was proposing.

At least we agree there :)

-Steve
```
Jun 04 2009
"Denis Koroskin" <2korden gmail.com> writes:
```On Wed, 03 Jun 2009 06:45:10 +0400, davidl <davidl nospam.org> wrote:

Following two piece of code demonstrate different behaviors currently.

import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= t;
writefln(v);
}

Output:4294967295

import std.stdio;
void main()
{
uint v;
v=31234;
byte t= -1;
v |= cast(ubyte)t;
writefln(v);
}

Output:31487

I would always want the second result. What's your opinion?

Shouldn't bitwise operations be disallowed on signed types at all?
```
Jun 03 2009
bearophile <bearophileHUGS lycos.com> writes:
```Denis Koroskin:
Shouldn't bitwise operations be disallowed on signed types at all?

It sounds OK to me, do Don & Walter agree?

Bye,
bearophile
```
Jun 03 2009
Don <nospam nospam.com> writes:
```bearophile wrote:
Denis Koroskin:
Shouldn't bitwise operations be disallowed on signed types at all?

It sounds OK to me, do Don & Walter agree?

Hmm. If you are doing bitwise operations, you are treating the number as
unsigned, no doubt about it. Some observations:

* The use of -1 for "all bits set" is in widespread use and is important.

ushort x ^= -1;
uint y ^= -1;
ulong z ^= -1;
probably needs to remain valid.
But then consider
(x ^ -1) + y

What is the type of x^-1 ? Is it ushort? Or int?

* A lot of existing C code uses bitwise operations on ints.
* size_t had better be unsigned!

Bye,
bearophile

```
Jun 03 2009
"Denis Koroskin" <2korden gmail.com> writes:
```On Thu, 04 Jun 2009 10:00:27 +0400, Don <nospam nospam.com> wrote:

bearophile wrote:
Denis Koroskin:
Shouldn't bitwise operations be disallowed on signed types at all?

It sounds OK to me, do Don & Walter agree?

Hmm. If you are doing bitwise operations, you are treating the number as
unsigned, no doubt about it. Some observations:

* The use of -1 for "all bits set" is in widespread use and is important.

ushort x ^= -1;
uint y ^= -1;
ulong z ^= -1;
probably needs to remain valid.
But then consider
(x ^ -1) + y

What is the type of x^-1 ? Is it ushort? Or int?

I see no problem adding an implicit cast to the unsigned counterpart here.
In this case the type of (x ^ cast(uT)-1) would be unambiguous.

* A lot of existing C code uses bitwise operations on ints.

That's fine, raising an error in such cases is a good behavior.

* size_t had better be unsigned!

```
Jun 04 2009
bearophile <bearophileHUGS lycos.com> writes:
```Don:

* The use of -1 for "all bits set" is in widespread use and is important.<

I agree that it's a common idiom in C code, but in a tidy language I may want
to disallow the following:
uint x = -125;
because x can't represent -125, so it's an error.
Likewise, in D the following very common C idiom is an error ("cannot
implicitly convert expression (0) of type int to int*") because 0 and int*
are different types in D:
int *p = 0;
and you have to use null or nothing at all, because the variable is null by
default.

In the same way I may want to disallow the following:
uint x1 = -1;
ushort x2 = -1;
ubyte x3 = -1;

You can write this, that is clear and safe:
uint x1 = uint.max;
ushort x2 = ushort.max;
ubyte x3 = ubyte.max;

But all that type duplication isn't nice, it's not DRY.
You can write the following that works correctly:
auto x1 = uint.max;
auto x2 = ushort.max;
auto x3 = ubyte.max;

But I don't like that much because it's fragile: if later you change the code a
bit, for example like this:
auto x3 = ubyte.max + 100;
Now x3 is an int. Not good.

The following isn't a good solution because now you use the variable names
three times, so it's not DRY still:
uint x1;
// x1 = typeof(x1).max; // not necessary, it seems
x1 = x1.max;
ushort x2;
x2 = x2.max;
ubyte x3;
x3 = x3.max;

Few possible alternative syntaxes, I like none of them:

uint.max x1;
ushort.max x2;
ubyte.max x3;

uint x1 = .max;
ushort x2 = .max;
ubyte x3 = .max;

uint x1 = x1.max;
ushort x2 = x2.max;
ubyte x3 = x3.max;

uint x1 = self.max;
ushort x2 = self.max;
ubyte x3 = self.max;

uint x1.max;
ushort x2.max;
ubyte x3.max;

---------------------------

Denis Koroskin:

* A lot of existing C code uses bitwise operations on ints.<<

That's fine, raising an error in such cases is a good behavior.<

I tend to agree here.
(But from my coding experience I have seen that forcing the programmer to add
a lot of casts is bad: it's tedious, it makes the code noisier, and it's
unsafe, because if you need to add 100 casts you may by mistake add 101 of
them, and the 101st may produce a bug. So it's sometimes better to invent
more refined/localized language rules, as other people have suggested in this
thread.)

Bye,
bearophile
```
Jun 04 2009
Kagamin <spam here.lot> writes:
```bearophile Wrote:

uint x1 = -1;
ushort x2 = -1;
ubyte x3 = -1;

You can write this, that is clear and safe:
uint x1 = uint.max;
ushort x2 = ushort.max;
ubyte x3 = ubyte.max;

How would you write this?
```
Jun 04 2009
"Adam D. Ruppe" <destructionator gmail.com> writes:
```On Thu, Jun 04, 2009 at 09:04:45AM -0400, Kagamin wrote:
How would you write this?

uintptr_t pageMask = ~0x1000 + 1;

?

--
http://arsdnet.net
```
Jun 04 2009
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
```On Thu, 04 Jun 2009 02:00:27 -0400, Don <nospam nospam.com> wrote:

bearophile wrote:
Denis Koroskin:
Shouldn't bitwise operations be disallowed on signed types at all?

It sounds OK to me, do Don & Walter agree?

Hmm. If you are doing bitwise operations, you are treating the number as
unsigned, no doubt about it. Some observations:

* The use of -1 for "all bits set" is in widespread use and is important.

ushort x ^= -1;
uint y ^= -1;
ulong z ^= -1;
probably needs to remain valid.
But then consider
(x ^ -1) + y

What is the type of x^-1 ? Is it ushort? Or int?

* A lot of existing C code uses bitwise operations on ints.
* size_t had better be unsigned!

Easy:

~0.  It's not even extra characters :)  I'd say I use that way more than
-1 (which is probably never).

That being said, I readily type int WAY more than uint when keeping track
of any integral type, including using bitmasks.  If I don't ever use the
sign bit in my bitmask, it doesn't matter.

Also, think about porting code from Java which has no unsigned types.  You
don't want to have to sit and think about what each variable really means
(should it be converted to uint?).

So I think logic operations on signed types should be allowed, but being
able to use -1 as "all bits set" isn't a good argument for it.  I'd much
rather see an error for the uncommon widening operations without an
explicit cast.

-Steve
```
Jun 04 2009
Don <nospam nospam.com> writes:
```Steven Schveighoffer wrote:
On Thu, 04 Jun 2009 02:00:27 -0400, Don <nospam nospam.com> wrote:

bearophile wrote:
Denis Koroskin:
Shouldn't bitwise operations be disallowed on signed types at all?

It sounds OK to me, do Don & Walter agree?

Hmm. If you are doing bitwise operations, you are treating the number
as unsigned, no doubt about it. Some observations:

* The use of -1 for "all bits set" is in widespread use and is important.

ushort x ^= -1;
uint y ^= -1;
ulong z ^= -1;
probably needs to remain valid.
But then consider
(x ^ -1) + y

What is the type of x^-1 ? Is it ushort? Or int?

* A lot of existing C code uses bitwise operations on ints.
* size_t had better be unsigned!

Easy:

~0.  It's not even extra characters :)  I'd say I use that way more than
-1 (which is probably never).

That being said, I readily type int WAY more than uint when keeping
track of any integral type, including using bitmasks.  If I don't ever
use the sign bit in my bitmask, it doesn't matter.

Also, think about porting code from Java which has no unsigned types.
You don't want to have to sit and think about what each variable really
means (should it be converted to uint?).

So I think logic operations on signed types should be allowed, but being
able to use -1 as "all bits set" isn't a good argument for it.

Yes. The Java argument is a stronger one. (Anyway, you could treat the
literal -1 as a special case).

I'd much
rather see an error for the uncommon widening operations without an
explicit cast.

Agreed, I think that's the only case that's a problem.
```
Jun 09 2009
bearophile <bearophileHUGS lycos.com> writes:
```Don:
(Anyway, you could treat the literal -1 as a special case).<

Before introducing special cases like that I strongly suggest to think about it
six times in all different days.

Bye,
bearophile
```
Jun 10 2009
Don <nospam nospam.com> writes:
```bearophile wrote:
Don:
(Anyway, you could treat the literal -1 as a special case).<

Before introducing special cases like that I strongly suggest to think about
it six times in all different days.

I'm not proposing it. Just saying that my argument based around -1 was
not very strong.

Bye,
bearophile

```
Jun 11 2009
"Nick Sabalausky" <a a.a> writes:
```"Denis Koroskin" <2korden gmail.com> wrote in message
news:op.uuybhaeeo7cclz soldat.creatstudio.intranet...
Shouldn't bitwise operations be disallowed on signed types at all?

Or at least prohibited on different-sized operands.
```
Jun 03 2009
Kagamin <spam here.lot> writes:
```Denis Koroskin Wrote:

Shouldn't bitwise operations be disallowed on signed types at all?

Bit operations on signed types are OK as long as implicit conversions don't
change bit patterns.
```
Jun 04 2009