## digitalmars.D.learn - Implicit conversion rules

• Sigg (22/22) Oct 21 2015 I started reading "The D programming Language" earlier, and came
• anonymous (17/31) Oct 21 2015 The problem is of course that int and ulong have no common super type, a...
• Sigg (17/37) Oct 21 2015 Yes, I'm well aware of that. I was under the (wrongful)impression
• Ali Çehreli (9/10) Oct 21 2015 One of those side effects would be function calls binding silently to
• Marco Leise (21/36) Oct 21 2015 God forbid anyone implement such nonsense into D !
• Maxim Fomin (8/30) Oct 21 2015 AFAIK it was implemented long time ago and discussed last time
• Sigg (6/8) Oct 22 2015 Slight nitpick, but what I suggested for our hypothetical
• Maxim Fomin (3/13) Oct 21 2015 Actually 'a' is deduced to be int, so int version is called (as
Sigg <todorovicmilos89 gmail.com> writes:
```I started reading "The D programming Language" earlier, and came
to the "2.3.3 Typing of Numeric Operators" section which claims
that "if at least one participant has type ulong, the other is
implicitly converted to ulong prior to the application and the
result has type ulong.".

Now I understand the reasoning behind it, and I know that adding a
sufficiently negative value to a ulong/uint/ushort will cause
wraparound, as in the following example:

void func() {
    import std.stdio : writefln;

    int a = -10;
    ulong b = 0;
    ulong c = a + b;
    writefln("%s", c);
}

out: 18446744073709551606

But shouldn't declaring c as auto force the compiler to go the extra
step and "properly" deduce the result of the "a + b" expression, since,
as far as I understand, it's already doing magic in the background?
Basically, try to cast rvalues to the narrowest type that loses no
precision before evaluating the expression.

Or is there a proper way to do math with unsigned and signed
primitives that I'm not aware of?
```
Oct 21 2015
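[The conversion rule quoted from the book can be checked directly: even with `auto`, the type of `int + ulong` follows the operands' *types*, never their values, so the negative operand wraps. A minimal sketch:]

```
import std.stdio;

void main()
{
    int a = -10;
    ulong b = 0;
    auto c = a + b;  // result type is determined by the operand types
    static assert(is(typeof(c) == ulong));
    writeln(c);      // prints 18446744073709551606 (2^64 - 10)
}
```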
anonymous <anonymous example.com> writes:
```On Wednesday, October 21, 2015 07:53 PM, Sigg wrote:

> void func() {
>     int a = -10;
>     ulong b = 0;
>     ulong c = a + b;
>     writefln("%s", c);
> }
>
> out: 18446744073709551606
>
> But shouldn't declaring c as auto force compiler to go extra step
> and "properly" deduce result of the "a + b" expression, since its
> already as far as I understand doing magic in the background?
> Basically try to cast rvalues to narrowest type without losing
> precision before evaluating expression.

The problem is of course that int and ulong have no common super type, at
least not in the primitive integer types. int supports negative values,
ulong supports values greater than long.max.

As far as I understand, you'd like the compiler to see the values of `a` and
`b` (-10, 0), figure out that the result is negative, and then make `c`
signed based on that. That's not how D rolls. The same code must compile
when the values in `a` and `b` come from run time input. So the type of the
addition cannot depend on the values of the operands, only on their types.

Or maybe you'd expect an `auto` variable to be able to hold both negative
and very large values? But `auto` is not a special type, it's just a
shorthand for typeof(right-hand side). That means, `auto` variables still
get one specific static type, like int or ulong.

std.bigint and core.checkedint may be of interest to you, if you prefer
safer operations over faster ones.

http://dlang.org/phobos/std_bigint.html
http://dlang.org/phobos/core_checkedint.html
```
Oct 21 2015
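[The two modules recommended above take opposite approaches: core.checkedint keeps machine arithmetic but reports overflow through a flag, while std.bigint avoids overflow entirely. A small sketch of each, assuming current druntime/Phobos:]

```
import core.checkedint : addu;
import std.bigint : BigInt;
import std.stdio;

void main()
{
    // core.checkedint: same wrapping arithmetic, plus an overflow flag to test
    bool overflow = false;
    ulong c = addu(10UL, cast(ulong) -10, overflow); // 10 + (2^64 - 10) wraps to 0
    writeln(c, " overflowed: ", overflow);           // 0 overflowed: true

    // std.bigint: arbitrary precision, so the sign simply survives
    auto big = BigInt(0) + (-10);
    writeln(big);                                    // prints -10
}
```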
Sigg <todorovicmilos89 gmail.com> writes:
```On Wednesday, 21 October 2015 at 19:07:24 UTC, anonymous wrote:

> The problem is of course that int and ulong have no common
> super type, at least not in the primitive integer types. int
> supports negative values, ulong supports values greater than
> long.max.

Yes, I'm well aware of that. I was under the (wrongful) impression
that auto was doing much more under the hood and that it was more
safety oriented; I've probably mixed it up with something else.

> As far as I understand, you'd like the compiler to see the
> values of `a` and `b` (-10, 0), figure out that the result is
> negative, and then make `c` signed based on that. That's not
> how D rolls. The same code must compile when the values in `a`
> and `b` come from run time input. So the type of the addition
> cannot depend on the values of the operands, only on their
> types.
>
> Or maybe you'd expect an `auto` variable to be able to hold
> both negative and very large values? But `auto` is not a
> special type, it's just a shorthand for typeof(right-hand
> side). That means, `auto` variables still get one specific
> static type, like int or ulong.

Let me clarify what I expected, using my previous example:

ulong a = 0;
int b = -10;
auto c = a + b;

a gets cast to the narrowest primitive type that can hold its value,
in this case bool, since bool can hold the value 0, resulting in c
having the value -10. If a were bigger than long.max, I'd expect an
error/exception. Now, on the other hand, I can see why something
like this would not be implemented, since it would ignore the implicit
conversion table and probably cause at least a few more "fun" side
effects.

> std.bigint and core.checkedint may be of interest to you, if
> you prefer safer operations over faster ones.
>
> http://dlang.org/phobos/std_bigint.html
> http://dlang.org/phobos/core_checkedint.html

This is exactly what I was looking for. Thanks!
```
Oct 21 2015
=?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
```On 10/21/2015 12:37 PM, Sigg wrote:

> cause at least few more "fun" side effects.

One of those side effects would be function calls binding silently to
unintended overloads:

void foo(bool){/* ... */}
void foo(int) {/* ... */}

auto a = 0;  // If the type were deduced by the value,
foo(a);      // then this would be a call to foo(bool)...
// until someone changed the value to 2. :)

Ali
```
Oct 21 2015
Marco Leise <Marco.Leise gmx.de> writes:
On Wed, 21 Oct 2015 12:49:35 -0700, Ali Çehreli
<acehreli yahoo.com> wrote:

> On 10/21/2015 12:37 PM, Sigg wrote:
>
> > cause at least few more "fun" side effects.
>
> One of those side effects would be function calls binding silently to
>
> void foo(bool){/* ... */}
> void foo(int) {/* ... */}
>
> auto a = 0;  // If the type were deduced by the value,
> foo(a);      // then this would be a call to foo(bool)...
> // until someone changed the value to 2. :)
>
> Ali

God forbid anyone implement such nonsense into D!
That would be the last thing we need: not being able to rely on
overload resolution any more. It would be as if making 'a' const
changed the overload resolution:

import std.format;
import std.stdio;

string foo(bool b) { return format("That's a boolean %s!", b); }
string foo(uint u) { return format("That's an integral %s!", u); }

void main()
{
    int a = 2497420, b = 2497419;
    const int c = 2497420, d = 2497419;
    writeln(foo(a-b));
    writeln(foo(c-d));
    writeln("WAT?!");
}

--
Marco
```
Oct 21 2015
Maxim Fomin <mxfomin gmail.com> writes:
```On Wednesday, 21 October 2015 at 22:49:16 UTC, Marco Leise wrote:
> On Wed, 21 Oct 2015 12:49:35 -0700, Ali Çehreli
> <acehreli yahoo.com> wrote:
>
> > On 10/21/2015 12:37 PM, Sigg wrote:
> >
> > > cause at least few more "fun" side effects.
> >
> > One of those side effects would be function calls binding silently to
> >
> > void foo(bool){/* ... */}
> > void foo(int) {/* ... */}
> >
> > auto a = 0;  // If the type were deduced by the value,
> > foo(a);      // then this would be a call to foo(bool)...
> > // until someone changed the value to 2. :)
> >
> > Ali
>
> God forbid anyone implement such nonsense into D!
> That would be the last thing we need that we cannot rely on
> the overload resolution any more. It would be as if making 'a'
> const would change the overload resolution when none of the

AFAIK it was implemented a long time ago, and it was last discussed
a couple of years ago with an example similar to Ali's:

void foo(bool) {}
void foo(int) {}

foo(0); // bool
foo(1); // bool
foo(2); // int
```
Oct 21 2015
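[The value-range behaviour Maxim refers to can be seen even without overloads: value range propagation (VRP) lets an expression whose compile-time range provably fits a narrower type convert implicitly. A sketch:]

```
void main()
{
    int x = 1234;
    ubyte a = x & 0xFF;  // ok: the range of (x & 0xFF) provably fits a ubyte
    ubyte b = 200;       // ok: the literal's value fits
    // ubyte c = x;      // error: a general int does not fit in a ubyte
}
```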
Sigg <todorovicmilos89 gmail.com> writes:
```On Wednesday, 21 October 2015 at 22:49:16 UTC, Marco Leise wrote:

> God forbid anyone implement such nonsense into D !
> That would be the last thing we need

Slight nitpick, but what I suggested for our hypothetical
situation was only to apply to auto; once a variable was assigned
via auto and got its correct type, it would act like a normal
variable. The stuff you mentioned would only happen if it was part
of the rvalue expression.
```
Oct 22 2015
Maxim Fomin <mxfomin gmail.com> writes:
```On Wednesday, 21 October 2015 at 19:49:35 UTC, Ali Çehreli wrote:
> On 10/21/2015 12:37 PM, Sigg wrote:
>
> > cause at least few more "fun" side effects.
>
> One of those side effects would be function calls binding silently to
>
> void foo(bool){/* ... */}
> void foo(int) {/* ... */}
>
> auto a = 0;  // If the type were deduced by the value,
> foo(a);      // then this would be a call to foo(bool)...
> // until someone changed the value to 2. :)
>
> Ali

Actually, 'a' is deduced to be int, so the int version is called (as
expected?). See my example above for the VRP overload issue.
```
Oct 21 2015