
digitalmars.D - sqrt(2) must go

reply Don <nospam nospam.com> writes:
In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's 
ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also 
applies to _any_ function which has overloads for more than one floating 
point type.

In D2 between versions 2.049 and the present, sqrt(2) compiles due to 
the request of a small number of people (2-3, I think). But still, no 
other floating point function works with integer literals.

The "bug" being fixed was
Bugzilla 4455: Taking the sqrt of an integer shouldn't require an 
explicit cast.

This compiles only due to an awful, undocumented hack in std.math. It 
doesn't work for any function other than sqrt. I protested strongly 
against this, but accepted it only on the proviso that we would fix 
integer literal conversions to floating point in _all_ cases, so that 
the hack could be removed.

However, when I proposed the fix on the newsgroup (integer literals 
should have a preferred conversion to double), it was *unanimously* 
rejected. Those who had argued for the hack were conspicuously absent.

The hack must go.
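
To make the ambiguity concrete, here is a minimal sketch (stand-in overloads 
named foo, not the actual std.math declarations):

void foo(float x)  {}
void foo(double x) {}
void foo(real x)   {}

void main()
{
    foo(2.0f); // ok: exact match
    foo(2.0);  // ok: exact match
    foo(2);    // error: the integer literal converts equally well to
               // float, double and real, so the call is ambiguous
}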
Oct 19 2011
next sibling parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 19-10-2011 18:18, Don wrote:
 In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's
 ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also
 applies to _any_ function which has overloads for more than one floating
 point type.

 In D2 between versions 2.049 and the present, sqrt(2) compiles due to
 the request of a small number of people (2-3, I think). But still, no
 other floating point function works with integer literals.

 The "bug" being fixed was
 Bugzilla 4455: Taking the sqrt of an integer shouldn't require an
 explicit cast.

 This compiles only due to an awful, undocumented hack in std.math. It
 doesn't work for any function other than sqrt. I protested strongly
 against this, but accepted it only on the proviso that we would fix
 integer literal conversions to floating point in _all_ cases, so that
 the hack could be removed.

 However, when I proposed the fix on the newsgroup (integer literals
 should have a preferred conversion to double), it was *unanimously*
 rejected. Those who had argued for the hack were conspicuously absent.

 The hack must go.
What on earth does it matter? It's just a cast. And when typing out floating-point literals, it *really* does not hurt to type 2.0f instead of 2. In .NET land, people live with this just fine (see System.Math.Sqrt). Why can't we? I say kill the hack. - Alex
Oct 19 2011
parent reply Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 19-10-2011 18:22, Alex Rønne Petersen wrote:
 On 19-10-2011 18:18, Don wrote:
 In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's
 ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also
 applies to _any_ function which has overloads for more than one floating
 point type.

 In D2 between versions 2.049 and the present, sqrt(2) compiles due to
 the request of a small number of people (2-3, I think). But still, no
 other floating point function works with integer literals.

 The "bug" being fixed was
 Bugzilla 4455: Taking the sqrt of an integer shouldn't require an
 explicit cast.

 This compiles only due to an awful, undocumented hack in std.math. It
 doesn't work for any function other than sqrt. I protested strongly
 against this, but accepted it only on the proviso that we would fix
 integer literal conversions to floating point in _all_ cases, so that
 the hack could be removed.

 However, when I proposed the fix on the newsgroup (integer literals
 should have a preferred conversion to double), it was *unanimously*
 rejected. Those who had argued for the hack were conspicuously absent.

 The hack must go.
What on earth does it matter? It's just a cast. And when typing out floating-point literals, it *really* does not hurt to type 2.0f instead of 2. In .NET land, people live with this just fine (see System.Math.Sqrt). Why can't we? I say kill the hack. - Alex
PS: What's wrong with converting integer literals to double? It's a lossless conversion, no? - Alex
Oct 19 2011
parent "Marco Leise" <Marco.Leise gmx.de> writes:
On 19.10.2011 18:25, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:

 On 19-10-2011 18:22, Alex Rønne Petersen wrote:
 On 19-10-2011 18:18, Don wrote:
 In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's
 ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also
 applies to _any_ function which has overloads for more than one floating
 point type.

 In D2 between versions 2.049 and the present, sqrt(2) compiles due to
 the request of a small number of people (2-3, I think). But still, no
 other floating point function works with integer literals.

 The "bug" being fixed was
 Bugzilla 4455: Taking the sqrt of an integer shouldn't require an
 explicit cast.

 This compiles only due to an awful, undocumented hack in std.math. It
 doesn't work for any function other than sqrt. I protested strongly
 against this, but accepted it only on the proviso that we would fix
 integer literal conversions to floating point in _all_ cases, so that
 the hack could be removed.

 However, when I proposed the fix on the newsgroup (integer literals
 should have a preferred conversion to double), it was *unanimously*
 rejected. Those who had argued for the hack were conspicuously absent.

 The hack must go.
What on earth does it matter? It's just a cast. And when typing out floating-point literals, it *really* does not hurt to type 2.0f instead of 2. In .NET land, people live with this just fine (see System.Math.Sqrt). Why can't we? I say kill the hack. - Alex
PS: What's wrong with converting integer literals to double? It's a lossless conversion, no?

 - Alex
As long as it is not a 64-bit integer, yes.

More philosophically, I think integer math should be separate from FP math, because it has different semantics (integer division with remainder, overflow vs. infinity, fixed steps vs. exponent, shifting, ...). So I think .NET handles this correctly. But converting 32-bit ints to double shouldn't harm much either. JavaScript has no integers, only doubles. It works.
When you come across a for loop there and think to yourself "it increments a variant that stores a double there", you can laugh or cry. (Actually good JS engines optimize that case, of course.)
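
A small illustration of those different semantics (my own example, just using 
the built-in types):

import std.stdio;

void main()
{
    // Integer division truncates; floating point division does not.
    writeln(7 / 2);     // 3
    writeln(7.0 / 2.0); // 3.5

    // Integer arithmetic wraps around on overflow; floating point
    // arithmetic heads towards infinity instead.
    int i = int.max;
    writeln(i + 1);          // -2147483648 (wrap-around)
    writeln(double.max * 2); // inf
}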
Oct 19 2011
prev sibling next sibling parent reply Jesse Phillips <jessekphillips+D gmail.com> writes:
Don Wrote:

 However, when I proposed the fix on the newsgroup (integer literals 
 should have a preferred conversion to double), it was *unanimously* 
 rejected. Those who had argued for the hack were conspicuously absent.
 
 The hack must go.
 
I agree. We should have a proper fix, or no fix at all. If it isn't going to compile, then be sure it is documented. While it will be strange not having it compile, it does make sense that when dealing with floating point you should choose its size.
Oct 19 2011
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
You could maybe ease transition for people that rely on this behavior via:

import std.traits;

auto sqrt(F, T)(T x) { return sqrt(cast(F)x); }

auto sqrt(F)(F f)
    if (!isIntegral!F)
{
    return sqrt(f);
}

void main()
{
    int x = 1;
    sqrt(4);         // ng - doesn't compile, F can't be deduced
    sqrt!float(x);   // ok
    sqrt(4.5);       // ok
}

It sure is shorter to type than cast(float). Then again it's templated
so maybe that's not too nice for std.math.
Oct 19 2011
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
That call "return sqrt(f);" should have been "return sqrt!F(f)" that
forwards to sqrt functions that take floats/doubles/reals. Again these
would be templates.. it's a shame templates still can't overload
against functions.
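
For reference, a possible corrected version (just my sketch here, assuming
forwarding to std.math.sqrt is the intent; not Andrej's final code):

import std.traits : isIntegral, isFloatingPoint;
static import std.math;

// Integral arguments: convert to the explicitly requested floating
// point type F, then forward to std.math.sqrt.
auto sqrt(F, T)(T x)
    if (isFloatingPoint!F && isIntegral!T)
{
    return std.math.sqrt(cast(F) x);
}

// Floating point arguments: forward unchanged.
auto sqrt(F)(F x)
    if (isFloatingPoint!F)
{
    return std.math.sqrt(x);
}

void main()
{
    int x = 4;
    auto a = sqrt!float(x);  // 2.0f -- precision chosen by the caller
    auto b = sqrt!double(x); // 2.0
    auto c = sqrt(4.5);      // forwarded as a double
    // sqrt(4);              // still an error: F cannot be deduced
}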
Oct 19 2011
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
http://codepad.org/4g0hXOse

Too much?
Oct 19 2011
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Don:

 However, when I proposed the fix on the newsgroup (integer literals 
 should have a preferred conversion to double), it was *unanimously* 
 rejected.
I don't remember the rationale of those refusals. Do you have a link to the discussion? Bye, bearophile
Oct 19 2011
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Don (nospam nospam.com)'s article
 In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's
 ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also
 applies to _any_ function which has overloads for more than one floating
 point type.
 In D2 between versions 2.049 and the present, sqrt(2) compiles due to
 the request of a small number of people (2-3, I think). But still, no
 other floating point function works with integer literals.
 The "bug" being fixed was
 Bugzilla 4455: Taking the sqrt of an integer shouldn't require an
 explicit cast.
 This compiles only due to an awful, undocumented hack in std.math. It
 doesn't work for any function other than sqrt. I protested strongly
 against this, but accepted it only on the proviso that we would fix
 integer literal conversions to floating point in _all_ cases, so that
 the hack could be removed.
 However, when I proposed the fix on the newsgroup (integer literals
 should have a preferred conversion to double), it was *unanimously*
 rejected. Those who had argued for the hack were conspicuously absent.
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Oct 19 2011
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
http://www.digitalmars.com/d/archives/digitalmars/D/PROPOSAL_Implicit_conversions_of_integer_literals_to_floating_point_125539.html#N125539
Oct 19 2011
prev sibling next sibling parent reply "Marco Leise" <Marco.Leise gmx.de> writes:
On 19.10.2011 20:12, dsimcha <dsimcha yahoo.com> wrote:

 No.  Something as simple as sqrt(2) must work at all costs, period.  A
 language that adds a bunch of silly complications to something this
 simple is fundamentally broken.  I don't remember your post on implicit
 preferred conversions, but IMHO implicit conversions of integer to
 double is a no-brainer.  Requiring something this simple to be explicit
 is Java/Pascal-like overkill on explicitness.
Pascal (FreePascal 2.4.0) allows sqrt(2) just fine.
Oct 19 2011
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Marco Leise (Marco.Leise gmx.de)'s article
 Am 19.10.2011, 20:12 Uhr, schrieb dsimcha <dsimcha yahoo.com>:
 No.  Something as simple as sqrt(2) must work at all costs, period.  A
 language
 that adds a bunch of silly complications to something this simple is
 fundamentally
 broken.  I don't remember your post on implicit preferred conversions,
 but IMHO
 implicit conversions of integer to double is a no-brainer.  Requiring
 something
 this simple to be explicit is Java/Pascal-like overkill on explicitness.
Pascal (FreePascal 2.4.0) allows sqrt(2) just fine.
LOL and Pascal was my example of a bondage-and-discipline language. All the more reason why we need to allow it in D come Hell or high water.
Oct 19 2011
parent Russel Winder <russel russel.org.uk> writes:
On Wed, 2011-10-19 at 19:12 +0000, dsimcha wrote:
[ . . . ]
 LOL and Pascal was my example of a bondage-and-discipline language.  All
 the more reason why we need to allow it in D come Hell or high water.
Bondage.  Discipline.  Does this mean Lady Heather will take control?

--
Russel.
Oct 19 2011
prev sibling next sibling parent reply Alvaro <alvaro_segura gmail.com> writes:
On 19/10/2011 20:12, dsimcha wrote:
 == Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Oct 19 2011
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Alvaro:

 I call that uncluttered programming. No excessive explicitness should be 
 necessary when what you mean is obvious (under some simple conventions). 
 Leads to clearer code.
Explicitness usually means adding more annotations in the code, and this usually increases the visual noise inside the code. This noise masks the code and often leads to mistakes. On the other hand, too many implicit conversions cause problems of their own (D already disallows some of the ones available in C; the OCaml language disallows most of them). So the language designers must find some middle balancing point, that is somehow an optimum. Bye, bearophile
Oct 19 2011
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
On 10/19/2011 6:25 PM, Alvaro wrote:
 On 19/10/2011 20:12, dsimcha wrote:
 == Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Yes, and for the most part uncluttered programming is one of D's biggest strengths. Let's not ruin it by complicating sqrt(2).
Oct 19 2011
parent reply "Marco Leise" <Marco.Leise gmx.de> writes:
On 20.10.2011 02:46, dsimcha <dsimcha yahoo.com> wrote:

 On 10/19/2011 6:25 PM, Alvaro wrote:
 On 19/10/2011 20:12, dsimcha wrote:
 == Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Yes, and for the most part uncluttered programming is one of D's biggest strengths.  Let's not ruin it by complicating sqrt(2).
What is the compiler to do with sqrt(5_000_000_000)? It doesn't fit into an int, but it fits into a double.
Oct 19 2011
parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de> wrote:
 On 20.10.2011 02:46, dsimcha <dsimcha yahoo.com> wrote:

 On 10/19/2011 6:25 PM, Alvaro wrote:
 On 19/10/2011 20:12, dsimcha wrote:
 == Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Yes, and for the most part uncluttered programming is one of D's biggest strengths. Let's not ruin it by complicating sqrt(2).
What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit into an int, but it fits into a double.
Simple: 5_000_000_000 is a long, and longs convert to reals. Also, 5_000_000_000 does not fit exactly inside a double.
Oct 19 2011
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu>
wrote:

 On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de>
 wrote:
 On 20.10.2011 02:46, dsimcha <dsimcha yahoo.com> wrote:

 On 10/19/2011 6:25 PM, Alvaro wrote:
 On 19/10/2011 20:12, dsimcha wrote:
 == Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Yes, and for the most part uncluttered programming is one of D's biggest strengths.  Let's not ruin it by complicating sqrt(2).
What is the compiler to do with sqrt(5_000_000_000)? It doesn't fit into an int, but it fits into a double.
Simple: 5_000_000_000 is a long, and longs convert to reals. Also, 5_000_000_000 does not fit exactly inside a double.
It doesn't?  I thought double could do 53 bits?

Although I agree, long should map to real, because obviously not all longs fit into a double exactly.

-Steve
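
A quick check of the 53-bit point (my own illustration, nothing from std.math):

import std.stdio;

void main()
{
    // A double has a 53-bit significand, so every integer with an
    // absolute value up to 2^53 round-trips exactly.
    long fits   = 5_000_000_000;  // well below 2^53
    long limit  = 1L << 53;       // 9_007_199_254_740_992
    long beyond = limit + 1;      // first integer that cannot round-trip

    writeln((cast(long) cast(double) fits)   == fits);   // true
    writeln((cast(long) cast(double) limit)  == limit);  // true
    writeln((cast(long) cast(double) beyond) == beyond); // false: rounds to 2^53
}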
Oct 19 2011
next sibling parent reply "Marco Leise" <Marco.Leise gmx.de> writes:
On 20.10.2011 05:01, Steven Schveighoffer <schveiguy yahoo.com> wrote:

 On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu>
 wrote:

 On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de>
 wrote:
 On 20.10.2011 02:46, dsimcha <dsimcha yahoo.com> wrote:

 On 10/19/2011 6:25 PM, Alvaro wrote:
 On 19/10/2011 20:12, dsimcha wrote:
 == Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Yes, and for the most part uncluttered programming is one of D's biggest strengths.  Let's not ruin it by complicating sqrt(2).
What is the compiler to do with sqrt(5_000_000_000)? It doesn't fit into an int, but it fits into a double.
Simple: 5_000_000_000 is a long, and longs convert to reals. Also, 5_000_000_000 does not fit exactly inside a double.
It doesn't? I thought double could do 53 bits?

 Although I agree, long should map to real, because obviously not all
 longs fit into a double exactly.

 -Steve
And real can be used without portability problems on PowerPC or ARM?
Oct 19 2011
parent reply dsimcha <dsimcha yahoo.com> writes:
On 10/19/2011 11:27 PM, Marco Leise wrote:
 And real can be used without protability problems on PowerPC or ARM?
Yes, it's just that they may only give 64 bits of precision. Floating point is inexact anyhow, though. IMHO the fact that you may lose a little precision with very large longs is not a game changer.
Oct 19 2011
parent reply Russel Winder <russel russel.org.uk> writes:
On Wed, 2011-10-19 at 23:36 -0400, dsimcha wrote:
 On 10/19/2011 11:27 PM, Marco Leise wrote:
 And real can be used without protability problems on PowerPC or ARM?
 Yes, it's just that they may only give 64 bits of precision.  Floating
 point is inexact anyhow, though.  IMHO the fact that you may lose a
 little precision with very large longs is not a game changer.
This is not convincing.  One of the biggest problems in software development is that computers have two systems of hardware arithmetic that are mutually incompatible and have very different properties.  Humans are taught that there are abstract numbers that can be put into different sets: reals, integers, naturals, etc.  There are already far too many programmers out there who do not understand that computer numbers have representation problems and rounding errors.

Another issue:

    sqrt ( 2 )
    sqrt ( 2.0 )
    sqrt ( 2.0000000000000000000000000000000000000000 )

actually mean very different things.  The number of zeros carries information.

Summary, losing precision is a game changer.  This stuff matters.  This is a hard problem.

--
Russel.
Oct 19 2011
parent reply "Marco Leise" <Marco.Leise gmx.de> writes:
On 20.10.2011 08:02, Russel Winder <russel russel.org.uk> wrote:

 On Wed, 2011-10-19 at 23:36 -0400, dsimcha wrote:
 On 10/19/2011 11:27 PM, Marco Leise wrote:
 And real can be used without protability problems on PowerPC or ARM?
Yes, it's just that they may only give 64 bits of precision. Floating point is inexact anyhow, though. IMHO the fact that you may lose a little precision with very large longs is not a game changer.
This is not convincing. One of the biggest problem is software development is that computers have two systems of hardware arithmetic that are mutually incompatible and have very different properties. Humans are taught that there are abstract numbers that can be put into different sets: reals, integers, naturals, etc. There are already far too many programmers out there who do not understand that computer numbers have representation problems and rounding errors. Another issue: sqrt ( 2 ) sqrt ( 2.0 ) sqrt ( 2.0000000000000000000000000000000000000000 ) actually mean very different things. The number of zeros carries information. Summary, losing precision is a game changer. This stuff matters. This is a hard problem.
Sure it matters, but performance also matters. If I needed the precision of a real, I would make sure that I give the compiler the right hint. And adding zeros doesn't help. The representation is mantissa and exponent, and your three examples would all come out the same in that representation. :)

Is this really a real-life problem, or can we just go with any solution for sqrt(2) that works (int->double, long->real) and leave the details to the ones who care and would write sqrt(2.0f)?
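
For instance (a quick check of my own, with the literals read as doubles):

import std.stdio;

void main()
{
    // Both literals denote exactly the same double value; the extra
    // zeros in the source text add no precision.
    double a = 2.0;
    double b = 2.0000000000000000000000000000000000000000;
    writeln(a == b); // true
}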
Oct 20 2011
parent "Marco Leise" <Marco.Leise gmx.de> writes:
On 20.10.2011 22:37, Marco Leise <Marco.Leise gmx.de> wrote:

 On 20.10.2011 08:02, Russel Winder <russel russel.org.uk> wrote:

 On Wed, 2011-10-19 at 23:36 -0400, dsimcha wrote:
 On 10/19/2011 11:27 PM, Marco Leise wrote:
 And real can be used without protability problems on PowerPC or ARM?
Yes, it's just that they may only give 64 bits of precision. Floating point is inexact anyhow, though. IMHO the fact that you may lose a little precision with very large longs is not a game changer.
This is not convincing. One of the biggest problem is software development is that computers have two systems of hardware arithmetic that are mutually incompatible and have very different properties. Humans are taught that there are abstract numbers that can be put into different sets: reals, integers, naturals, etc. There are already far too many programmers out there who do not understand that computer numbers have representation problems and rounding errors. Another issue: sqrt ( 2 ) sqrt ( 2.0 ) sqrt ( 2.0000000000000000000000000000000000000000 ) actually mean very different things. The number of zeros carries information. Summary, losing precision is a game changer. This stuff matters. This is a hard problem.
Sure it matters, but performance also matters. If I needed the precision of a real, I would make sure that I give the compile the right hint. And adding zeros doesn't help. The representation is mantissa and exponent and your three examples would all come out the same in that representation. :) Is this really a real life problem, or can we just go with any solution for sqrt(2) that works (int->double, long->real) and leave the details to the ones who care and would write sqrt(2.0f) ?
Forget what I said. If a 64-bit mantissa floating point type doesn't exist on all systems, this doesn't work.
Oct 20 2011
prev sibling next sibling parent "Robert Jacques" <sandford jhu.edu> writes:
On Wed, 19 Oct 2011 23:01:34 -0400, Steven Schveighoffer <schveiguy yahoo.com>
wrote:
 On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu>
 wrote:

 On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de>
 wrote:
 On 20.10.2011 02:46, dsimcha <dsimcha yahoo.com> wrote:

 On 10/19/2011 6:25 PM, Alvaro wrote:
 On 19/10/2011 20:12, dsimcha wrote:
 == Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Yes, and for the most part uncluttered programming is one of D's biggest strengths. Let's not ruin it by complicating sqrt(2).
What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit into an int, but it fits into a double.
Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.
It doesn't? I thought double could do 53 bits?
Yes. You're right. Sorry, my brain automatically skipped forward to 5_000_000_000 => long => real.
 Although I agree, long should map to real, because obviously not all longs
 fit into a double exactly.

 -Steve
Oct 19 2011
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
Many architectures do not support real, and therefore it should never be
used implicitly by the language.

Precision problems aside, I would personally insist that implicit
conversion from any sized int always be to float, not double, for
performance reasons (the whole point of a compiled language trying
to supersede C/C++).
Amusingly, 5_000_000_000 IS actually precisely representable with a float
;) .. But let's take 5_000_000_001, it'll lose a few bits, but that's more
than precise enough for me.

Naturally the majority would make solid arguments against this preference,
and I would agree with them in their argument, therefore should it not just
be a compiler flag/option to explicitly specify the implicit int->float
conversion precision?

Though that leads to a problem with standard library, since it links a
pre-compiled binary... I can't afford to have the standard library messing
around with doubles because that was the flag it was compiled with...
This leads inevitably to my pointlessly rewriting the standard library
functions in my own code, just as in C where the CRT uses doubles, for the
same reasons...


On 20 October 2011 06:01, Steven Schveighoffer <schveiguy yahoo.com> wrote:

  On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu>
  wrote:

  On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de>
  wrote:

  On 20.10.2011 02:46, dsimcha <dsimcha yahoo.com> wrote:

   On 10/19/2011 6:25 PM, Alvaro wrote:
   On 19/10/2011 20:12, dsimcha wrote:
   == Quote from Don (nospam nospam.com)'s article
   The hack must go.
  No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
  Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
 Yes, and for the most part uncluttered programming is one of D's biggest strengths.  Let's not ruin it by complicating sqrt(2).
 What is the compiler to do with sqrt(5_000_000_000)? It doesn't fit into an int, but it fits into a double.
 Simple: 5_000_000_000 is a long, and longs convert to reals. Also, 5_000_000_000 does not fit exactly inside a double.
 It doesn't? I thought double could do 53 bits?

 Although I agree, long should map to real, because obviously not all longs
 fit into a double exactly.

 -Steve
Oct 20 2011
parent reply Don <nospam nospam.com> writes:
On 20.10.2011 09:47, Manu wrote:
 Many architectures do not support real, and therefore it should never be
 used implicitly by the language.

 Precision problems aside, I would personally insist that implicit
 conversation from any sized int always be to float, not double, for
 performance reasons (the whole point of a compiled language trying
 to supersede C/C++).
On almost all platforms, float and double are the same speed. Note that what we're discussing here is parameter passing of single values; if it's part of an aggregate (array or struct), the issue doesn't arise.
Oct 20 2011
parent reply Manu <turkeyman gmail.com> writes:
On 20 October 2011 11:02, Don <nospam nospam.com> wrote:

 On 20.10.2011 09:47, Manu wrote:

 Many architectures do not support real, and therefore it should never be
 used implicitly by the language.

 Precision problems aside, I would personally insist that implicit
 conversation from any sized int always be to float, not double, for
 performance reasons (the whole point of a compiled language trying
 to supersede C/C++).
On almost all platforms, float and double are the same speed.
This isn't true. Consider ARM; it is hard to say this isn't a vitally important architecture these days, and there are plenty of embedded architectures that don't support doubles at all. I would say it's a really bad idea to invent a systems programming language that excludes many architectures by its design... Atmel AVR is another important architecture.

I maintain that implicit conversion of integers of any length should always target the same precision float, and that should be a compiler flag to specify the desired precision throughout the app (possibly defaulting to double).
If you choose 'float' you may lose some precision obviously, but you expected that when you chose the options, and did the cast...
 Note that what we're discussing here is parameter passing of single
 values; if it's part of an aggregate (array or struct), the issue doesn't
 arise.
Are we? I thought we were discussing implicit conversion of ints to floats? This may be parameter passing, but also assignment I expect?
Oct 20 2011
parent reply Don <nospam nospam.com> writes:
On 20.10.2011 13:12, Manu wrote:
 On 20 October 2011 11:02, Don <nospam nospam.com
 <mailto:nospam nospam.com>> wrote:

     On 20.10.2011 09:47, Manu wrote:

         Many architectures do not support real, and therefore it should
         never be
         used implicitly by the language.

         Precision problems aside, I would personally insist that implicit
         conversation from any sized int always be to float, not double, for
         performance reasons (the whole point of a compiled language trying
         to supersede C/C++).


     On almost all platforms, float and double are the same speed.


 This isn't true. Consider ARM, hard to say this isn't a vitally
 important architecture these days, and there are plenty of embedded
 architectures that don't support doubles at all, I would say it's a
 really bad idea to invent a systems programming language that excludes
 many architectures by its design... Atmel AVR is another important
 architecture.
It doesn't exclude anything. What we're talking about as desirable behaviour is exactly what C does. If you care about performance on ARM, you'll type sqrt(2.0f).

Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me.
 I maintain that implicit conversion of integers of any length should
 always target the same precision float, and that should be a compiler
 flag to specify the desired precision throughout the app (possibly
 defaulting to double).
I can't believe that you'd ever write an app without that being an upfront decision. Casually flipping it with a compiler flag?? Remember that it affects very few things (as discussed below).
 If you choose 'float' you may lose some precision obviously, but you
 expected that when you chose the options, and did the cast...
Explicit casts are not affected in any way.
     Note that what we're discussing here is parameter passing of single
     values; if it's part of an aggregate (array or struct), the issue
     doesn't arise.


 Are we? I thought we were discussing implicit conversion of ints to
 floats? This may be parameter passing, but also assignment I expect?
There's no problem with assignment, it's never ambiguous.

There seems to be some confusion about what the issue is.
To reiterate:

void foo(float x) {}
void foo(double x) {}

void bar(float x) {}

void baz(double x) {}

void main()
{
   bar(2); // OK -- 2 becomes 2.0f
   baz(2); // OK -- 2 becomes 2.0
   foo(2); // fails -- ambiguous.
}

My proposal was effectively: if it's ambiguous, choose double.
That's all.
Oct 20 2011
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 20 October 2011 15:13, Don <nospam nospam.com> wrote:

 On 20.10.2011 13:12, Manu wrote:

 On 20 October 2011 11:02, Don <nospam nospam.com
 <mailto:nospam nospam.com>> wrote:

    On 20.10.2011 09:47, Manu wrote:

        Many architectures do not support real, and therefore it should
        never be
        used implicitly by the language.

        Precision problems aside, I would personally insist that implicit
        conversation from any sized int always be to float, not double, for
        performance reasons (the whole point of a compiled language trying
        to supersede C/C++).


    On almost all platforms, float and double are the same speed.


 This isn't true. Consider ARM, hard to say this isn't a vitally
 important architecture these days, and there are plenty of embedded
 architectures that don't support doubles at all, I would say it's a
 really bad idea to invent a systems programming language that excludes
 many architectures by its design... Atmel AVR is another important
 architecture.
It doesn't exclude anything. What we're talking about as desirable behaviour, is exactly what C does. If you care about performance on ARM, you'll type sqrt(2.0f). Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me. I maintain that implicit conversion of integers of any length should
 always target the same precision float, and that should be a compiler
 flag to specify the desired precision throughout the app (possibly
 defaulting to double).
I can't believe that you'd ever write an app without that being an upfront decision. Casually flipping it with a compiler flag?? Remember that it affects very few things (as discussed below). If you choose 'float' you may lose some precision obviously, but you
 expected that when you chose the options, and did the cast...
Explicit casts are not affected in any way. Note that what we're discussing here is parameter passing of single
    values; if it's part of an aggregate (array or struct), the issue
    doesn't arise.


 Are we? I thought we were discussing implicit conversion of ints to
 floats? This may be parameter passing, but also assignment I expect?
There's no problem with assignment, it's never ambiguous. There seems to be some confusion about what the issue is. To reiterate: void foo(float x) {} void foo(double x) {} void bar(float x) {} void baz(double x) {} void main() { bar(2); // OK -- 2 becomes 2.0f baz(2); // OK -- 2 becomes 2.0 foo(2); // fails -- ambiguous. } My proposal was effectively: if it's ambiguous, choose double. That's all.
Yeah sorry, I think you're right, the discussion got slightly lost in the noise here...

Just to clarify, where you advocate eliminating implicit casting, do you now refer to ALL implicit casting? Or just implicit casting to an ambiguous target?

Let me reposition myself to suit what it would seem is actually being discussed... :)

void sqrt(float x);
void sqrt(double x);
void sqrt(real x);

{
   sqrt(2);
}

Surely this produces some error: "Ambiguous call to overloaded function", and then there is no implicit cast rule to talk about... end of discussion?

But you speak of "eliminating implicit casting" as if this may also refer to:

void NotOverloaded(float x);

{
   NotOverloaded(2); // not ambiguous... so what's the problem?
}

or:

float x = 10;

Which I can imagine why most would feel this is undesirable...

I'm not clear now where you intend to draw the lines.
If you're advocating banning ALL implicit casting between float/int outright, I actually feel really good about that idea. I can just imagine the number of hours saved while optimising where junior/ignorant programmers cast back and forth with no real regard to or awareness of what they're doing.

Or are we only drawing the distinction for literals?
I don't mind a compile error if I incorrectly state the literal a function expects. That sort of thing becomes second nature in no time.
Oct 20 2011
parent reply Don <nospam nospam.com> writes:
On 20.10.2011 14:48, Manu wrote:
 On 20 October 2011 15:13, Don <nospam nospam.com
 <mailto:nospam nospam.com>> wrote:

     On 20.10.2011 13:12, Manu wrote:

         On 20 October 2011 11:02, Don <nospam nospam.com
         <mailto:nospam nospam.com>
         <mailto:nospam nospam.com <mailto:nospam nospam.com>>> wrote:

             On 20.10.2011 09:47, Manu wrote:

                 Many architectures do not support real, and therefore it
         should
                 never be
                 used implicitly by the language.

                 Precision problems aside, I would personally insist that
         implicit
                 conversation from any sized int always be to float, not
         double, for
                 performance reasons (the whole point of a compiled
         language trying
                 to supersede C/C++).


             On almost all platforms, float and double are the same speed.


         This isn't true. Consider ARM, hard to say this isn't a vitally
         important architecture these days, and there are plenty of embedded
         architectures that don't support doubles at all, I would say it's a
         really bad idea to invent a systems programming language that
         excludes
         many architectures by its design... Atmel AVR is another important
         architecture.


     It doesn't exclude anything. What we're talking about as desirable
     behaviour, is exactly what C does. If you care about performance on
     ARM, you'll type sqrt(2.0f).

     Personally, I'd rather completely eliminate implicit conversions
     between integers and floating point types. But that's just me.


         I maintain that implicit conversion of integers of any length should
         always target the same precision float, and that should be a
         compiler
         flag to specify the desired precision throughout the app (possibly
         defaulting to double).


     I can't believe that you'd ever write an app without that being an
     upfront decision. Casually flipping it with a compiler flag??
     Remember that it affects very few things (as discussed below).


         If you choose 'float' you may lose some precision obviously, but you
         expected that when you chose the options, and did the cast...


     Explicit casts are not affected in any way.

             Note that what we're discussing here is parameter passing of
         single
             values; if it's part of an aggregate (array or struct), the
         issue
             doesn't arise.


         Are we? I thought we were discussing implicit conversion of ints to
         floats? This may be parameter passing, but also assignment I expect?


     There's no problem with assignment, it's never ambiguous.

     There seems to be some confusion about what the issue is.
     To reiterate:

     void foo(float x) {}
     void foo(double x) {}

     void bar(float x) {}

     void baz(double x) {}

     void main()
     {
        bar(2); // OK -- 2 becomes 2.0f
        baz(2); // OK -- 2 becomes 2.0
        foo(2); // fails -- ambiguous.
     }

     My proposal was effectively: if it's ambiguous, choose double.
     That's all.


 Yeah sorry, I think you're right, the discussion got slightly lost in
 the noise here...

 Just to clarify, where you advocate eliminating implicit casting, do you
 now refer to ALL implicit casting? Or just implicit casting to an
 ambiguous target?

 Let me reposition myself to suit what it would seem is actually being
 discussed... :)

 void sqrt(float x);
 void sqrt(double x);
 void sqrt(real x);

 {
    sqrt(2);
 }

 Surely this produces some error: "Ambiguous call to overloaded
 function", and then there is no implicit cast rule to talk about... end
 of discussion?

 But you speak of "eliminating implicit casting" as if this may also
 refer to:

 void NotOverloaded(float x);

 {
    NotOverloaded(2); // not ambiguous... so what's the problem?
 }
Actually there is a problem there, I think. If someone later on adds NotOverloaded(double x), that call will suddenly stop compiling.

That isn't just a theoretical problem. Currently log(2) will compile, but only because in std.math there is log(real), but not yet log(double) or log(float). So once we add those overloads, people's code will break. I'd like to get to the situation where those overloads can be added without breaking people's code.

The draconian possibility is to disallow them in all cases: integer types never match floating point function parameters.
The second possibility is to introduce a tie-breaker rule: when there's an ambiguity, choose double.
And a third possibility is to only apply that tie-breaker rule to literals.
And the fourth possibility is to keep the language as it is now, and allow code to break when overloads get added.

The one I really, really don't want, is the situation we have now:
 or:

 float x = 10;

 Which I can imagine why most would feel this is undesirable...

 I'm not clear now where you intend to draw the lines.
 If you're advocating banning ALL implicit casting between float/int
 outright, I actually feel really good about that idea. I can just
 imagine the number of hours saved while optimising where junior/ignorant
 programmers cast back and fourth with no real regard to or awareness of
 what they're doing.

 Or are we only drawing the distinction for literals?
 I don't mind a compile error if I incorrectly state the literal a
 function expects. That sort of thing becomes second nature in no time.
more generally popular.
Oct 20 2011
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com> wrote:

 Actually there is a problem there, I think. If someone later on adds  
 NotOverload(double x), that call will suddenly stop compiling.

 That isn't just a theoretical problem.
 Currently log(2) will compile, but only because in std.math there is  
 log(real), but not yet log(double) or log(float).
 So once we add those overloads, peoples code will break.
Should there be a concern over silently changing the code path? For instance, log(2) currently binds to log(real), but with the addition of log(double) will bind to that. I'm not saying I found any problems with this, but I'm wondering if it can possibly harm anything. I don't have enough experience with floating point types to come up with a use case that would be affected. -Steve
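
A stand-in sketch of that rebinding concern (local functions imitating the 
std.math situation, not the library itself):

import std.stdio;

// Stand-in for std.math: today only the real overload exists.
real log(real x) { return 0.0L; }

// Uncommenting this overload makes log(2) ambiguous under the current
// rules; under the proposed tie-breaker it would instead silently start
// binding here rather than to log(real).
// double log(double x) { return 0.0; }

void main()
{
    auto y = log(2);             // today: binds to log(real)
    writeln(typeof(y).stringof); // prints "real"
}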
Oct 20 2011
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 20 October 2011 16:11, Don <nospam nospam.com> wrote:

 On 20.10.2011 14:48, Manu wrote:

 On 20 October 2011 15:13, Don <nospam nospam.com

 <mailto:nospam nospam.com>> wrote:

    On 20.10.2011 13:12, Manu wrote:

        On 20 October 2011 11:02, Don <nospam nospam.com
        <mailto:nospam nospam.com>
        <mailto:nospam nospam.com <mailto:nospam nospam.com>>> wrote:

            On 20.10.2011 09:47, Manu wrote:

                Many architectures do not support real, and therefore it
        should
                never be
                used implicitly by the language.

                Precision problems aside, I would personally insist that
        implicit
                conversation from any sized int always be to float, not
        double, for
                performance reasons (the whole point of a compiled
        language trying
                to supersede C/C++).


            On almost all platforms, float and double are the same speed.


        This isn't true. Consider ARM, hard to say this isn't a vitally
        important architecture these days, and there are plenty of embedded
        architectures that don't support doubles at all, I would say it's a
        really bad idea to invent a systems programming language that
        excludes
        many architectures by its design... Atmel AVR is another important
        architecture.


    It doesn't exclude anything. What we're talking about as desirable
    behaviour, is exactly what C does. If you care about performance on
    ARM, you'll type sqrt(2.0f).

    Personally, I'd rather completely eliminate implicit conversions
    between integers and floating point types. But that's just me.


        I maintain that implicit conversion of integers of any length
 should
        always target the same precision float, and that should be a
        compiler
        flag to specify the desired precision throughout the app (possibly
        defaulting to double).


    I can't believe that you'd ever write an app without that being an
    upfront decision. Casually flipping it with a compiler flag??
    Remember that it affects very few things (as discussed below).


        If you choose 'float' you may lose some precision obviously, but
 you
        expected that when you chose the options, and did the cast...


    Explicit casts are not affected in any way.

            Note that what we're discussing here is parameter passing of
        single
            values; if it's part of an aggregate (array or struct), the
        issue
            doesn't arise.


        Are we? I thought we were discussing implicit conversion of ints to
        floats? This may be parameter passing, but also assignment I
 expect?


    There's no problem with assignment, it's never ambiguous.

    There seems to be some confusion about what the issue is.
    To reiterate:

    void foo(float x) {}
    void foo(double x) {}

    void bar(float x) {}

    void baz(double x) {}

    void main()
    {
       bar(2); // OK -- 2 becomes 2.0f
       baz(2); // OK -- 2 becomes 2.0
       foo(2); // fails -- ambiguous.
    }

    My proposal was effectively: if it's ambiguous, choose double.
    That's all.


 Yeah sorry, I think you're right, the discussion got slightly lost in
 the noise here...

 Just to clarify, where you advocate eliminating implicit casting, do you
 now refer to ALL implicit casting? Or just implicit casting to an
 ambiguous target?

 Let me reposition myself to suit what it would seem is actually being
 discussed... :)

 void sqrt(float x);
 void sqrt(double x);
 void sqrt(real x);

 {
   sqrt(2);
 }

 Surely this produces some error: "Ambiguous call to overloaded

 function", and then there is no implicit cast rule to talk about... end
 of discussion?

 But you speak of "eliminating implicit casting" as if this may also
 refer to:

 void NotOverloaded(float x);

 {
   NotOverloaded(2); // not ambiguous... so what's the problem?
 }
Actually there is a problem there, I think. If someone later on adds NotOverload(double x), that call will suddenly stop compiling. That isn't just a theoretical problem. Currently log(2) will compile, but only because in std.math there is log(real), but not yet log(double) or log(float). So once we add those overloads, peoples code will break. I'd like to get to the situation where those overloads can be added without breaking peoples code. The draconian possibility is to disallow them in all cases: integer types never match floating point function parameters. The second possibility is to introduce a tie-breaker rule: when there's an ambiguity, choose double. And a third possibility is to only apply that tie-breaker rule to literals. And the fourth possibility is to keep the language as it is now, and allow code to break when overloads get added. The one I really, really don't want, is the situation we have now: or:
 float x = 10;

 Which I can imagine why most would feel this is undesirable...

 I'm not clear now where you intend to draw the lines.
 If you're advocating banning ALL implicit casting between float/int
 outright, I actually feel really good about that idea. I can just
 imagine the number of hours saved while optimising where junior/ignorant
 programmers cast back and fourth with no real regard to or awareness of
 what they're doing.

 Or are we only drawing the distinction for literals?
 I don't mind a compile error if I incorrectly state the literal a
 function expects. That sort of thing becomes second nature in no time.
generally popular.
Hmmm. between integer and floating types (something that far too many programmers lack) never have agreement, and perhaps more importantly, it's not instinctively obvious which it will choose and why... you just have to know, and most people won't know, they'll just use it blind (see my point about finding+fixing code by junior/ignorant programmers) the culprit committing the crime, and less obvious to the culprit that they are committing a crime. solving that problem. In order of preference: 1, 4, 2, 3, 5 I could only support 2 if it chooses 'float', the highest performance version on all architectures AND actually available on all architectures; given this is meant to be a systems programming language, and supporting as many architectures as possible? If the argument is made to favour precision, then it should surely be real, not double... the counter argument is obviously that real is not supported on many architectures, true, but the same is true for double... so I think it's a hollow argument. double just seem like a weak compromise; more precise than float, and more likely to work on more architectures than real (but still not guaranteed), but it doesn't really satisfy either criteria neatly. The logic transposed: double is less precise than real, and potentially slower than float. Is the justification purely that it IS a compromise, since the choice at the end of the day is totally subjective :)
Oct 20 2011
parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Thu, 20 Oct 2011 15:54:48 +0200, Manu <turkeyman gmail.com> wrote:

 I could only support 2 if it chooses 'float', the highest performance
 version on all architectures AND actually available on all architectures;
 given this is meant to be a systems programming language, and supporting  
 as
 many architectures as possible?
D specifically supports double (as a 64-bit float), regardless of the actual hardware. Also, the D way is to make the correct way simple, the fast way possible. This is clearly in favor of not using float, which *would* lead to precision loss. As for double vs real, a 32-bit int neatly fits in any double, making it good enough. short and byte should convert to float, and I'm not sure about long/ulong, since real may not have enough bits to accurately represent it. -- Simen
Oct 20 2011
next sibling parent Manu <turkeyman gmail.com> writes:
On 20 October 2011 17:28, Simen Kjaeraas <simen.kjaras gmail.com> wrote:

 On Thu, 20 Oct 2011 15:54:48 +0200, Manu <turkeyman gmail.com> wrote:

  I could only support 2 if it chooses 'float', the highest performance
 version on all architectures AND actually available on all architectures;
 given this is meant to be a systems programming language, and supporting
 as
 many architectures as possible?
D specifically supports double (as a 64-bit float), regardless of the actual hardware. Also, the D way is to make the correct way simple, the fast way possible. This is clearly in favor of not using float, which *would* lead to precision loss.
Correct, on all architectures I'm aware of that don't have hardware double support, double is emulated, and that is EXTREMELY slow. I can't imagine any case where causing implicit (hidden) emulation of unsupported hardware should be considered 'correct', and therefore made easy.

The reason I'm so concerned about this, is not for what I may or may not do in my code, I'm likely to be careful, but imagine some cool library that I want to make use of... some programmer has gone and written 'x = sqrt(2)' in this library somewhere; they don't require double precision, but it was implicitly used regardless of their intent. Now I can't use that library in my project.
Any library that wasn't written with the intent of use in embedded systems in mind, that happens to omit all of 2 characters from their float literal, can no longer be used in my project. This makes me sad.

I'd also like you to ask yourself realistically, of all the integers you've EVER cast to/from a float, how many have ever been a big/huge number? And if/when that occurred, what did you do with it? Was the precision important? Was it important enough to you to explicitly state the cast?
The moment you use it in a mathematical operation you are likely throwing away a bunch of precision anyway, especially for the complex functions like sqrt/log/etc in question.
Oct 20 2011
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, October 20, 2011 21:52:32 Manu wrote:
 On 20 October 2011 17:28, Simen Kjaeraas <simen.kjaras gmail.com> wrote:
 On Thu, 20 Oct 2011 15:54:48 +0200, Manu <turkeyman gmail.com> wrote:
  I could only support 2 if it chooses 'float', the highest performance
  
 version on all architectures AND actually available on all
 architectures; given this is meant to be a systems programming
 language, and supporting as
 many architectures as possible?
D specifically supports double (as a 64-bit float), regardless of the actual hardware. Also, the D way is to make the correct way simple, the fast way possible. This is clearly in favor of not using float, which *would* lead to precision loss.
Correct, on all architectures I'm aware of that don't have hardware double support, double is emulated, and that is EXTREMELY slow. I can't imagine any case where causing implicit (hidden) emulation of unsupported hardware should be considered 'correct', and therefore made easy.
Correctness has _nothing_ to do with efficiency. It has to do with the result that you get. Losing precision means that your code is less correct.
 The reason I'm so concerned about this, is not for what I may or may not do
 in my code, I'm likely to be careful, but imagine some cool library that I
 want to make use of... some programmer has gone and written 'x = sqrt(2)'
 in this library somewhere; they don't require double precision, but it was
 implicitly used regardless of their intent. Now I can't use that library in
 my project.
 Any library that wasn't written with the intent of use in embedded systems
 in mind, that happens to omit all of 2 characters from their float literal,
 can no longer be used in my project. This makes me sad.
 
 I'd also like you to ask yourself realistically, of all the integers you've
 EVER cast to/from a float, how many have ever been a big/huge number? And
 if/when that occurred, what did you do with it? Was the precision
 important? Was it important enough to you to explicitly state the cast?
 The moment you use it in a mathematical operation you are likely throwing
 away a bunch of precision anyway, especially for the complex functions like
 sqrt/log/etc in question.
When dealing with math functions like this, it doesn't really matter whether the number being passed in is a large one or not. It matters what you want for the return type. And the higher the precision, the more correct the result, so there are a lot of people who would want the result to be real, rather than float or double. It's when your concern is efficiency that you start worrying about whether a float would be better. And yes, efficiency matters, but if efficiency matters, then you can always tell it 2.0f instead of 2. Don's suggestion results in the code being more correct in the general case and yet still lets you easily make it more efficient if you want. That's very much the D way of doing things.

Personally, I'm very leery of making an int literal implicitly convert to a double when there's ambiguity (e.g. if the function could also take a float), because then the compiler is resolving ambiguity for you rather than letting you do it. It risks function hijacking (at least in the sense that you don't necessarily end up calling the function that you mean to; it's not an issue for sqrt, but it could matter a lot for a function that has different behavior for float and double). And that sort of thing is very much _not_ the D way.

So, I'm all for integers implicitly converting to double so long as there's no ambiguity. But in any case where there's ambiguity or a narrowing conversion, a cast should be required.

- Jonathan M Davis
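A small sketch of the worry in the second paragraph; the overloads and their behaviours are invented for illustration:

// Two overloads that differ in more than precision; with a double tie-breaker
// the commented-out call would silently take the emulated path.
void draw(float scale)  { /* fast, hardware-supported path */ }
void draw(double scale) { /* software-emulated path */ }

void main()
{
    draw(2.0f);  // unambiguous: the author explicitly chose the float overload
    // draw(2);  // ambiguous today; a tie-breaker would quietly pick double
}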
Oct 20 2011
prev sibling parent reply Manu <turkeyman gmail.com> writes:
I think you just brushed over my entire concern with respect to libraries,
and very likely the standard library its self.
I've also made what I consider to be reasonable counter arguments to those
points in earlier posts, so I won't repeat myself.

I think it's fairly safe to say though, with respect to Don's question,
using a tie-breaker is extremely controversial. I can't see any way that
could be unanimously considered a good idea.
I stand by the call to ban implicit conversion between float/int. Some
might consider that a minor annoyance, but it also has so many potential
advantages and time savers down the line too.

On 20 October 2011 22:21, Jonathan M Davis <jmdavisProg gmx.com> wrote:

 On Thursday, October 20, 2011 21:52:32 Manu wrote:
 On 20 October 2011 17:28, Simen Kjaeraas <simen.kjaras gmail.com> wrote:
 On Thu, 20 Oct 2011 15:54:48 +0200, Manu <turkeyman gmail.com> wrote:
  I could only support 2 if it chooses 'float', the highest performance

 version on all architectures AND actually available on all
 architectures; given this is meant to be a systems programming
 language, and supporting as
 many architectures as possible?
D specifically supports double (as a 64-bit float), regardless of the actual hardware. Also, the D way is to make the correct way simple, the fast way possible. This is clearly in favor of not using float, which *would* lead to precision loss.
Correct, on all architectures I'm aware of that don't have hardware
double
 support, double is emulated, and that is EXTREMELY slow.
 I can't imagine any case where causing implicit (hidden) emulation of
 unsupported hardware should be considered 'correct', and therefore made
 easy.
Correctness has _nothing_ to do with efficiency. It has to do with the result that you get. Losing precision means that your code is less correct.
 The reason I'm so concerned about this, is not for what I may or may not
do
 in my code, I'm likely to be careful, but imagine some cool library that
I
 want to make use of... some programmer has gone and written 'x = sqrt(2)'
 in this library somewhere; they don't require double precision, but it
was
 implicitly used regardless of their intent. Now I can't use that library
in
 my project.
 Any library that wasn't written with the intent of use in embedded
systems
 in mind, that happens to omit all of 2 characters from their float
literal,
 can no longer be used in my project. This makes me sad.

 I'd also like you to ask yourself realistically, of all the integers
you've
 EVER cast to/from a float, how many have ever been a big/huge number? And
 if/when that occurred, what did you do with it? Was the precision
 important? Was it important enough to you to explicitly state the cast?
 The moment you use it in a mathematical operation you are likely throwing
 away a bunch of precision anyway, especially for the complex functions
like
 sqrt/log/etc in question.
When dealing with math functions like this, it doesn't really matter whether the number being passed in is a large one or not. It matters what you want for the return type. And the higher the precision, the more correct the result, so there are a lot of people who would want the result to be real, rather than float or double. It's when your concern is efficiency that you start worrying about whether a float would be better. And yes, efficiency matters, but if efficiency matters, then you can always tell it 2.0f instead of 2. Don's suggestion results in the code being more correct in the general case and yet still lets you easily make it more efficient if you want. That's very much the D way of doing things. Personally, I'm very leery of making an int literal implicitly convert to a double when there's ambiguity (e.g. if the function could also take a float), because then the compiler is resolving ambiguity for you rather than letting you do it. It risks function hijacking (at least in the sense that you don't necessarily end up calling the function that you mean to; it's not an issue for sqrt, but it could matter a lot for a function that has different behavior for float and double). And that sort of thing is very much _not_ the D way. So, I'm all for integers implicitly converting to double so long as there's no ambiguity. But in any case where there's ambiguity or a narrowing conversion, a cast should be required. - Jonathan M Davis
Oct 20 2011
parent "Marco Leise" <Marco.Leise gmx.de> writes:
Am 20.10.2011, 22:09 Uhr, schrieb Manu <turkeyman gmail.com>:

 I think you just brushed over my entire concern with respect to  
 libraries,
 and very likely the standard library its self.
 I've also made what I consider to be reasonable counter arguments to  
 those
 points in earlier posts, so I won't repeat myself.

 I think it's fairly safe to say though, with respect to Don's question,
 using a tie-breaker is extremely controversial. I can't see any way that
 could be unanimously considered a good idea.
 I stand by the call to ban implicit conversion between float/int. Some
 might consider that a minor annoyance, but it also has so many potential
 advantages and time savers down the line too.
I start to understand the problems with implicit conversions, and I think now that the size of the integer has no relation at all to the size of the float in a conversion. For a large integer you may lose precision immediately when it is stored to a float instead of a double, but when you run sqrt() on it you introduce some more error anyway.

So I'd vote for no implicit conversion unless the situation is unambiguous and the precision is sufficient, like in an assignment or calling a method with no overload. At least code would not break silently if overloads for sqrt were provided later.
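The two unambiguous cases mentioned above, as a tiny sketch (the single-overload function name is invented):

void only(double x) {}  // exactly one floating-point overload today

void main()
{
    double d = 2;  // assignment: the target type is fixed, nothing to break
    only(2);       // unambiguous now; adding only(float) later is the risk case
}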
Oct 20 2011
prev sibling parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com> wrote:
[snip]
 I'd like to get to the situation where those overloads can be added
 without breaking peoples code. The draconian possibility is to disallow
 them in all cases: integer types never match floating point function
 parameters.
 The second possibility is to introduce a tie-breaker rule: when there's
 an ambiguity, choose double.
 And a third possibility is to only apply that tie-breaker rule to literals.
 And the fourth possibility is to keep the language as it is now, and
 allow code to break when overloads get added.

 The one I really, really don't want, is the situation we have now:

CUDA/GPU programming, so I live in a world of floats and ints. So changing the rules does worry me, but mainly because most people don't use floats on a daily basis, which introduces bias into the discussion.

Thinking it over, here are my suggestions, though I'm not sure if 2a or 2b would be best:

1) Integer literals and expressions should use range propagation to use the thinnest loss-less conversion. If no loss-less conversion exists, then an error is raised. Choosing double as a default is always the wrong choice for GPUs and most embedded systems.
2a) Lossy variable conversions are disallowed.
2b) Lossy variable conversions undergo bounds checking when asserts are turned on.

The idea behind 2b) would be:

int i = 1;
float f = i; // assert(true);
i = int.max;
f = i;       // assert(false);
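A rough sketch of what 2b could look like as library code (the helper name is invented; nothing like it exists in Phobos). The round-trip comparison is done in real, which on x86 can hold both an int and a float value exactly:

F checkedTo(F, I)(I value)
{
    F result = cast(F) value;
    assert(cast(real) result == cast(real) value,
           "lossy integer -> floating point conversion");
    return result;
}

void main()
{
    float ok = checkedTo!float(1);           // passes
    // float bad = checkedTo!float(int.max); // would trip the assert
}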
Oct 20 2011
parent reply Don <nospam nospam.com> writes:
On 21.10.2011 05:24, Robert Jacques wrote:
 On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com> wrote:
 [snip]
 I'd like to get to the situation where those overloads can be added
 without breaking peoples code. The draconian possibility is to disallow
 them in all cases: integer types never match floating point function
 parameters.
 The second possibility is to introduce a tie-breaker rule: when there's
 an ambiguity, choose double.
 And a third possibility is to only apply that tie-breaker rule to
 literals.
 And the fourth possibility is to keep the language as it is now, and
 allow code to break when overloads get added.

 The one I really, really don't want, is the situation we have now:

 function...
CUDA/GPU programming, so I live in a world of floats and ints. So changing the rules does worries me, but mainly because most people don't use floats on a daily basis, which introduces bias into the discussion.
Yeah, that's a valuable perspective. sqrt(2) is "I don't care what the precision is". What I get from you and Manu is: if you're working in a float world, you want float to be the tiebreaker. Otherwise, you want double (or possibly real!) to be the tiebreaker. And therefore, the
 Thinking it over, here are my suggestions, though I'm not sure if 2a or
 2b would be best:

 1) Integer literals and expressions should use range propagation to use
 the thinnest loss-less conversion. If no loss-less conversion exists,
 then an error is raised. Choosing double as a default is always the
 wrong choice for GPUs and most embedded systems.
 2a) Lossy variable conversions are disallowed.
 2b) Lossy variable conversions undergo bounds checking when asserts are
 turned on.
The spec says: "Integer values cannot be implicitly converted to another type that cannot represent the integer bit pattern after integral promotion." Now although that was intended to only apply to integers, it reads as if it should apply to floating point as well.
 The idea behind 2b) would be:

 int i = 1;
 float f = i; // assert(true);
 i = int.max;
 f = i; // assert(false);
That would be catastrophically slow. I wonder how painful disallowing lossy conversions would be.
Oct 20 2011
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 21 October 2011 09:00, Don <nospam nospam.com> wrote:

 On 21.10.2011 05:24, Robert Jacques wrote:

 On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com> wrote:
 [snip]

 I'd like to get to the situation where those overloads can be added
 without breaking peoples code. The draconian possibility is to disallow
 them in all cases: integer types never match floating point function
 parameters.
 The second possibility is to introduce a tie-breaker rule: when there's
 an ambiguity, choose double.
 And a third possibility is to only apply that tie-breaker rule to
 literals.
 And the fourth possibility is to keep the language as it is now, and
 allow code to break when overloads get added.

 The one I really, really don't want, is the situation we have now:

 function...
CUDA/GPU programming, so I live in a world of floats and ints. So changing the rules does worries me, but mainly because most people don't use floats on a daily basis, which introduces bias into the discussion.
Yeah, that's a valuable perspective. sqrt(2) is "I don't care what the precision is". What I get from you and Manu is: if you're working in a float world, you want float to be the tiebreaker. Otherwise, you want double (or possibly real!) to be the tiebreaker. And therefore, the
 Thinking it over, here are my suggestions, though I'm not sure if 2a or
 2b would be best:

 1) Integer literals and expressions should use range propagation to use
 the thinnest loss-less conversion. If no loss-less conversion exists,
 then an error is raised. Choosing double as a default is always the
 wrong choice for GPUs and most embedded systems.
 2a) Lossy variable conversions are disallowed.
 2b) Lossy variable conversions undergo bounds checking when asserts are
 turned on.
The spec says: "Integer values cannot be implicitly converted to another type that cannot represent the integer bit pattern after integral promotion." Now although that was intended to only apply to integers, it reads as if it should apply to floating point as well. The idea behind 2b) would be:
 int i = 1;
 float f = i; // assert(true);
 i = int.max;
 f = i; // assert(false);
That would be catastrophically slow. I wonder how painful disallowing lossy conversions would be.
1: Seems reasonable for literals; "Integer literals and expressions should use range propagation to use the thinnest loss-less conversion"... but can you clarify what you mean by 'expressions'? I assume we're talking strictly literal expressions?

2b: Does runtime bounds checking actually address the question of which of an ambiguous set of functions to choose? If I read you correctly, 2b suggests bounds checking the implicit cast for data loss at runtime, but which to choose? float/double/real? We'll still be arguing that question even with this proposal taken into consideration... :/ Perhaps I missed something?

Naturally all this complexity assumes we go with the tie-breaker approach, which I'm becoming more and more convinced is a bad plan...
Oct 21 2011
parent reply Don <nospam nospam.com> writes:
On 21.10.2011 09:53, Manu wrote:
 On 21 October 2011 09:00, Don <nospam nospam.com
 <mailto:nospam nospam.com>> wrote:

     On 21.10.2011 05:24, Robert Jacques wrote:

         On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com
         <mailto:nospam nospam.com>> wrote:
         [snip]

             I'd like to get to the situation where those overloads can
             be added
             without breaking peoples code. The draconian possibility is
             to disallow
             them in all cases: integer types never match floating point
             function
             parameters.
             The second possibility is to introduce a tie-breaker rule:
             when there's
             an ambiguity, choose double.
             And a third possibility is to only apply that tie-breaker
             rule to
             literals.
             And the fourth possibility is to keep the language as it is
             now, and
             allow code to break when overloads get added.

             The one I really, really don't want, is the situation we
             have now:

             function...



         CUDA/GPU programming, so I live in a world of floats and ints. So
         changing the rules does worries me, but mainly because most
         people don't
         use floats on a daily basis, which introduces bias into the
         discussion.


     Yeah, that's a valuable perspective.
     sqrt(2) is "I don't care what the precision is".
     What I get from you and Manu is:
     if you're working in a float world, you want float to be the tiebreaker.
     Otherwise, you want double (or possibly real!) to be the tiebreaker.

     And therefore, the


         Thinking it over, here are my suggestions, though I'm not sure
         if 2a or
         2b would be best:

         1) Integer literals and expressions should use range propagation
         to use
         the thinnest loss-less conversion. If no loss-less conversion
         exists,
         then an error is raised. Choosing double as a default is always the
         wrong choice for GPUs and most embedded systems.
         2a) Lossy variable conversions are disallowed.
         2b) Lossy variable conversions undergo bounds checking when
         asserts are
         turned on.


     The spec says: "Integer values cannot be implicitly converted to
     another type that cannot represent the integer bit pattern after
     integral promotion."
     Now although that was intended to only apply to integers, it reads
     as if it should apply to floating point as well.


         The idea behind 2b) would be:

         int i = 1;
         float f = i; // assert(true);
         i = int.max;
         f = i; // assert(false);


     That would be catastrophically slow.

     I wonder how painful disallowing lossy conversions would be.


 1: Seems reasonable for literals; "Integer literals and expressions
 should use range propagation to use
 the thinnest loss-less conversion"... but can you clarify what you mean
 by 'expressions'? I assume we're talking strictly literal expressions?
Any expression. Just as right now, long converts to int only if the long expression is guaranteed to fit into 32 bits. Of course, if it's a literal, this is very easy.
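For reference, a minimal sketch of the existing integer rule being pointed at here (value range propagation); the variable names are just illustrative:

void main()
{
    long n = 1234;
    int low = n & 0xFF;  // ok: the compiler can prove the result fits in 32 bits
    // int all = n;      // error: a general long might not fit
}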
 2b: Does runtime bounds checking actually addresses the question; which
 of an ambiguous function to choose?
 If I read you correctly, 2b suggests bounds checking the implicit cast
 for data loss at runtime, but which to choose? float/double/real? We'll
 still arguing that question even with this proposal taken into
 consideration... :/
It's an independent issue.
 Perhaps I missed something?

 Naturally all this complexity assumes we go with the tie-breaker
 approach, which I'm becoming more and more convinced is a bad plan...
No, it doesn't. As I said, this is independent. Except that it does mean that some existing int->float conversions would be disallowed. E.g.,

float foo(int x) { return x; }

wouldn't compile, because x might not fit into a float without loss of accuracy.
Oct 23 2011
parent Manu <turkeyman gmail.com> writes:
Okay so we're thinking of allowing implicit casting now ONLY if it can be
guaranteed that the cast is lossless?

I don't see how this addresses the original question though, which was how
to resolve an ambiguous function selection? It selects the smallest one it
will fit into?
Does that mean it will always choose the double version if you pass an
int32 without any extra info narrowing its possible bounds?

On 23 October 2011 22:36, Don <nospam nospam.com> wrote:

 On 21.10.2011 09:53, Manu wrote:

 On 21 October 2011 09:00, Don <nospam nospam.com

 <mailto:nospam nospam.com>> wrote:

    On 21.10.2011 05:24, Robert Jacques wrote:

        On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com
        <mailto:nospam nospam.com>> wrote:
        [snip]

            I'd like to get to the situation where those overloads can
            be added
            without breaking peoples code. The draconian possibility is
            to disallow
            them in all cases: integer types never match floating point
            function
            parameters.
            The second possibility is to introduce a tie-breaker rule:
            when there's
            an ambiguity, choose double.
            And a third possibility is to only apply that tie-breaker
            rule to
            literals.
            And the fourth possibility is to keep the language as it is
            now, and
            allow code to break when overloads get added.

            The one I really, really don't want, is the situation we
            have now:

            function...



        CUDA/GPU programming, so I live in a world of floats and ints. So
        changing the rules does worries me, but mainly because most
        people don't
        use floats on a daily basis, which introduces bias into the
        discussion.


    Yeah, that's a valuable perspective.
    sqrt(2) is "I don't care what the precision is".
    What I get from you and Manu is:
    if you're working in a float world, you want float to be the
 tiebreaker.
    Otherwise, you want double (or possibly real!) to be the tiebreaker.

    And therefore, the


        Thinking it over, here are my suggestions, though I'm not sure
        if 2a or
        2b would be best:

        1) Integer literals and expressions should use range propagation
        to use
        the thinnest loss-less conversion. If no loss-less conversion
        exists,
        then an error is raised. Choosing double as a default is always the
        wrong choice for GPUs and most embedded systems.
        2a) Lossy variable conversions are disallowed.
        2b) Lossy variable conversions undergo bounds checking when
        asserts are
        turned on.


    The spec says: "Integer values cannot be implicitly converted to
    another type that cannot represent the integer bit pattern after
    integral promotion."
    Now although that was intended to only apply to integers, it reads
    as if it should apply to floating point as well.


        The idea behind 2b) would be:

        int i = 1;
        float f = i; // assert(true);
        i = int.max;
        f = i; // assert(false);


    That would be catastrophically slow.

    I wonder how painful disallowing lossy conversions would be.


 1: Seems reasonable for literals; "Integer literals and expressions

 should use range propagation to use
 the thinnest loss-less conversion"... but can you clarify what you mean
 by 'expressions'? I assume we're talking strictly literal expressions?
Any expression. Just as right now, long converts to int only if the long expression is guaranteed to fit into 32 bits. Of course, if it's a literal, this is very easy. 2b: Does runtime bounds checking actually addresses the question; which
 of an ambiguous function to choose?
 If I read you correctly, 2b suggests bounds checking the implicit cast
 for data loss at runtime, but which to choose? float/double/real? We'll
 still arguing that question even with this proposal taken into
 consideration... :/
It's an independent issue. Perhaps I missed something?
 Naturally all this complexity assumes we go with the tie-breaker
 approach, which I'm becoming more and more convinced is a bad plan...
No, it doesn't. As I said, this is independent. Except that it does mean that some existing int->float conversions would be disallowed. EG, float foo(int x) { return x; } wouldn't compile, because x might not fit into a float without loss of accuracy.
Oct 23 2011
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 21 October 2011 10:53, Manu <turkeyman gmail.com> wrote:

 On 21 October 2011 09:00, Don <nospam nospam.com> wrote:

 On 21.10.2011 05:24, Robert Jacques wrote:

 On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com> wrote:
 [snip]

 I'd like to get to the situation where those overloads can be added
 without breaking peoples code. The draconian possibility is to disallow
 them in all cases: integer types never match floating point function
 parameters.
 The second possibility is to introduce a tie-breaker rule: when there's
 an ambiguity, choose double.
 And a third possibility is to only apply that tie-breaker rule to
 literals.
 And the fourth possibility is to keep the language as it is now, and
 allow code to break when overloads get added.

 The one I really, really don't want, is the situation we have now:

 function...
CUDA/GPU programming, so I live in a world of floats and ints. So changing the rules does worries me, but mainly because most people don't use floats on a daily basis, which introduces bias into the discussion.
Yeah, that's a valuable perspective. sqrt(2) is "I don't care what the precision is". What I get from you and Manu is: if you're working in a float world, you want float to be the tiebreaker. Otherwise, you want double (or possibly real!) to be the tiebreaker. And therefore, the
 Thinking it over, here are my suggestions, though I'm not sure if 2a or
 2b would be best:

 1) Integer literals and expressions should use range propagation to use
 the thinnest loss-less conversion. If no loss-less conversion exists,
 then an error is raised. Choosing double as a default is always the
 wrong choice for GPUs and most embedded systems.
 2a) Lossy variable conversions are disallowed.
 2b) Lossy variable conversions undergo bounds checking when asserts are
 turned on.
The spec says: "Integer values cannot be implicitly converted to another type that cannot represent the integer bit pattern after integral promotion." Now although that was intended to only apply to integers, it reads as if it should apply to floating point as well. The idea behind 2b) would be:
 int i = 1;
 float f = i; // assert(true);
 i = int.max;
 f = i; // assert(false);
That would be catastrophically slow. I wonder how painful disallowing lossy conversions would be.
1: Seems reasonable for literals; "Integer literals and expressions should use range propagation to use the thinnest loss-less conversion"... but can you clarify what you mean by 'expressions'? I assume we're talking strictly literal expressions? 2b: Does runtime bounds checking actually addresses the question; which of an ambiguous function to choose? If I read you correctly, 2b suggests bounds checking the implicit cast for data loss at runtime, but which to choose? float/double/real? We'll still arguing that question even with this proposal taken into consideration... :/ Perhaps I missed something? Naturally all this complexity assumes we go with the tie-breaker approach, which I'm becoming more and more convinced is a bad plan...
Then again, with regards to 1, the function chosen will depend on the magnitude of the int, perhaps a foreign constant; you might not clearly be able to know which one is called... What if the ambiguous overloads don't actually perform identical functionality with just different precision? I don't like the idea of it being uncertain.

And one more thing to ponder: is the return type telling here?

float x = sqrt(2);

Obviously this may only work for these pure maths functions where the return type is matched to the args, but maybe it's an element worth considering. I.e., if the function parameter is ambiguous, check for disambiguation via the return type...? Sounds pretty nasty! :)
Oct 21 2011
parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Fri, 21 Oct 2011 09:00:48 -0400, Manu <turkeyman gmail.com> wrote:
 On 21 October 2011 10:53, Manu <turkeyman gmail.com> wrote:
 On 21 October 2011 09:00, Don <nospam nospam.com> wrote:
[snip]
 1: Seems reasonable for literals; "Integer literals and expressions should
 use range propagation to use
 the thinnest loss-less conversion"... but can you clarify what you mean by
 'expressions'? I assume we're talking strictly literal expressions?
Consider sqrt(i % 10). No matter what i is, the range of i % 10 is 0-9. I was more thinking of whether plain old assignment would be allowed:

float f = myshort;

Of course, if we deny implicit conversion, shouldn't the following fail to compile?

float position = index * resolution;
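D already applies this kind of range propagation to integer narrowing, which is what proposal 1 would extend to floating-point parameters; a small sketch with invented variable names:

void main()
{
    uint i = 1234;
    ubyte last = i % 10;  // ok: the range of i % 10 is provably 0 .. 9
    // ubyte all = i;     // error: a general uint does not fit in a ubyte
}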
 2b: Does runtime bounds checking actually addresses the question; which of
 an ambiguous function to choose?
 If I read you correctly, 2b suggests bounds checking the implicit cast for
 data loss at runtime, but which to choose? float/double/real? We'll still
 arguing that question even with this proposal taken into consideration... :/
 Perhaps I missed something?
Yes, but only because I didn't include it. I was thinking of float f = i; as opposed to func(i) for some reason. Bounds checking would only make sense if func(float) was the only overload.
 Naturally all this complexity assumes we go with the tie-breaker approach,
 which I'm becoming more and more convinced is a bad plan...
Then again, with regards to 1, the function chosen will depend on the magnitude of the int, perhaps a foreign constant, you might not clearly be able to know which one is called... What if the ambiguous overloads don't actually perform identical functionality with just different precision? ..
Then whoever wrote the library was Evil(tm). Given that these rules wouldn't interfere with function hijacking, I'm not sure of the practicality of this concern. Do you have an example?
 I don't like the idea of it being uncertain.

 And one more thing to ponder, is the return type telling here?
 float x = sqrt(2);
 Obviously this may only work for these pure maths functions where the
 return type is matched to the args, but maybe it's an element worth
 considering.
 ie, if the function parameter is ambiguous, check for disambiguation via
 the return type...? Sounds pretty nasty! :)
Oct 21 2011
parent reply Manu <turkeyman gmail.com> writes:
It would still allow function hijacking.
void func(double v); exists...
func(2);
then someone comes along and adds func(float v); .. It will now hijack the
call.
That's what you mean right?

On Oct 22, 2011 1:45 AM, "Robert Jacques" <sandford jhu.edu> wrote:
 On Fri, 21 Oct 2011 09:00:48 -0400, Manu <turkeyman gmail.com> wrote:
 On 21 October 2011 10:53, Manu <turkeyman gmail.com> wrote:
 On 21 October 2011 09:00, Don <nospam nospam.com> wrote:
[snip]
 1: Seems reasonable for literals; "Integer literals and expressions
should
 use range propagation to use
 the thinnest loss-less conversion"... but can you clarify what you mean
by
 'expressions'? I assume we're talking strictly literal expressions?
Consider sqrt(i % 10). No matter what i is, the range of i % 10 is 0-9. I was more thinking of whether plain old assignment would be allowed: float f = myshort; Of course, if we deny implicit conversion, shouldn't the following fail
to compile?
 float position = index * resolution;


 2b: Does runtime bounds checking actually addresses the question; which
of
 an ambiguous function to choose?
 If I read you correctly, 2b suggests bounds checking the implicit cast
for
 data loss at runtime, but which to choose? float/double/real? We'll
still
 arguing that question even with this proposal taken into
consideration... :/
 Perhaps I missed something?
Yes, nut only because I didn't include it. I was thinking of float f = i; as opposed to func(i) for some reason. Bounds checking would only make sense if func(float) was
the only overload.
 Naturally all this complexity assumes we go with the tie-breaker
approach,
 which I'm becoming more and more convinced is a bad plan...
Then again, with regards to 1, the function chosen will depend on the magnitude of the int, perhaps a foreign constant, you might not clearly
be
 able to know which one is called... What if the ambiguous overloads don't
 actually perform identical functionality with just different precision?
..
 Then whoever wrote the library was Evil(tm). Given that these rules
wouldn't interfere with function hijacking, I'm not sure of the practicality of this concern. Do you have an example?
 I don't like the idea of it being uncertain.

 And one more thing to ponder, is the return type telling here?
 float x = sqrt(2);
 Obviously this may only work for these pure maths functions where the
 return type is matched to the args, but maybe it's an element worth
 considering.
 ie, if the function parameter is ambiguous, check for disambiguation via
 the return type...? Sounds pretty nasty! :)
Oct 21 2011
parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Fri, 21 Oct 2011 19:04:43 -0400, Manu <turkeyman gmail.com> wrote:
 It would still allow function hijacking.
 void func(double v); exists...
 func(2);
 then someone comes along and adds func(float v); .. It will now hijack the
 call.
 That's what you mean right?
Hijacking is what happens when someone adds func(float v); _in another module_. And that hijack would/should still be detected, etc. like any other hijack.
Oct 21 2011
parent reply Manu <turkeyman gmail.com> writes:
Sure, and hijacking is bound to happen under your proposal, no?
How would it be detected?

On 22 October 2011 06:51, Robert Jacques <sandford jhu.edu> wrote:

 On Fri, 21 Oct 2011 19:04:43 -0400, Manu <turkeyman gmail.com> wrote:

 It would still allow function hijacking.
 void func(double v); exists...
 func(2);
 then someone comes along and adds func(float v); .. It will now hijack the
 call.
 That's what you mean right?
Hijacking is what happens when someone adds func(float v); _in another module_. And that hijack would/should still be detected, etc. like any other hijack.
Oct 22 2011
parent "Robert Jacques" <sandford jhu.edu> writes:
On Sat, 22 Oct 2011 05:42:10 -0400, Manu <turkeyman gmail.com> wrote:
 Sure, and hijacking is bound to happen under your proposal, no?
 How would it be detected?

 On 22 October 2011 06:51, Robert Jacques <sandford jhu.edu> wrote:

 On Fri, 21 Oct 2011 19:04:43 -0400, Manu <turkeyman gmail.com> wrote:

 It would still allow function hijacking.
 void func(double v); exists...
 func(2);
 then someone comes along and adds func(float v); .. It will now hijack the
 call.
 That's what you mean right?
Hijacking is what happens when someone adds func(float v); _in another module_. And that hijack would/should still be detected, etc. like any other hijack.
Manu, I'm not sure you understand how function hijack detection works today. Let us say you have three modules

module a;
float func(float v) { return v; }

module b;
double func(double v) { return v; }

module c;
int func(int v) { return v*v; }

which all define a func method. Now, if you

import a;
import b;
void main(string[] args)
{
    assert(func(1.0f) == 1.0f); // Error
}

you'll get a function hijacking error because func(1.0f) matches func(float) and func(double). However, if you instead:

import a;
import c;
void main(string[] args)
{
    assert(func(1.0f) == 1.0f);
}

you won't get an error, because func(1.0f) doesn't match func(int). In short, the best overload is only selected _after_ the module name has been resolved. The proposal of myself and others only affects which overload is the best match; it has no possible effect on function hijacking.
Oct 22 2011
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
On 10/20/2011 8:13 AM, Don wrote:
 There's no problem with assignment, it's never ambiguous.

 There seems to be some confusion about what the issue is.
 To reiterate:

 void foo(float x) {}
 void foo(double x) {}

 void bar(float x) {}

 void baz(double x) {}

 void main()
 {
 bar(2); // OK -- 2 becomes 2.0f
 baz(2); // OK -- 2 becomes 2.0
 foo(2); // fails -- ambiguous.
 }

 My proposal was effectively: if it's ambiguous, choose double. That's all.
This proposal seems like a no-brainer to me. I sincerely apologize for not supporting it before. I looked at when it was created, and I realized that I was with my family for Christmas/New Years at the time, with little time to spend on the D newsgroup.
Oct 20 2011
parent Manu <turkeyman gmail.com> writes:
I vote for "Error: Ambiguous call to overloaded function". NOT implicit
conversion to arbitrary type 'double' :)


On 20 October 2011 15:49, dsimcha <dsimcha yahoo.com> wrote:

 On 10/20/2011 8:13 AM, Don wrote:

 There's no problem with assignment, it's never ambiguous.

 There seems to be some confusion about what the issue is.
 To reiterate:

 void foo(float x) {}
 void foo(double x) {}

 void bar(float x) {}

 void baz(double x) {}

 void main()
 {
 bar(2); // OK -- 2 becomes 2.0f
 baz(2); // OK -- 2 becomes 2.0
 foo(2); // fails -- ambiguous.
 }

 My proposal was effectively: if it's ambiguous, choose double. That's all.
This proposal seems like a no-brainer to me. I sincerely apologize for not supporting it before. I looked at when it was created, and I realized that I was with my family for Christmas/New Years at the time, with little time to spend on the D newsgroup.
Oct 20 2011
prev sibling next sibling parent reply "Eric Poggel (JoeCoder)" <dnewsgroup2 yage3d.net> writes:
On 10/20/2011 8:13 AM, Don wrote:
 Personally, I'd rather completely eliminate implicit conversions between
 integers and floating point types. But that's just me.
vote++
Oct 20 2011
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Eric Poggel (JoeCoder) (dnewsgroup2 yage3d.net)'s article
 On 10/20/2011 8:13 AM, Don wrote:
 Personally, I'd rather completely eliminate implicit conversions between
 integers and floating point types. But that's just me.
vote++
I would fork the language over this because it would break too much existing code. You can't be serious.
Oct 20 2011
parent "Eric Poggel (JoeCoder)" <dnewsgroup2 yage3d.net> writes:
On 10/20/2011 1:37 PM, dsimcha wrote:
 == Quote from Eric Poggel (JoeCoder) (dnewsgroup2 yage3d.net)'s article
 On 10/20/2011 8:13 AM, Don wrote:
 Personally, I'd rather completely eliminate implicit conversions between
 integers and floating point types. But that's just me.
vote++
I would fork the language over this because it would break too much existing code. You can't be serious.
Not saying it should be immediate. Maybe D3.
Oct 20 2011
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, October 20, 2011 05:13 Don wrote:
 Personally, I'd rather completely eliminate implicit conversions between
 integers and floating point types. But that's just me.
If it's a narrowing conversion, it should require a cast. If it's not, and there's no ambguity in the conversion, then I don't see any problem with allowing the conversion to be implicit. But then again, I deal with floating point values relatively rarely, so maybe there's something that I'm missing.
 My proposal was effectively: if it's ambiguous, choose double. That's all.
Are there _any_ cases in D right now where the compiler doesn't error out on ambiguity? In all of the cases that I can think of, D chooses to give an error on ambiguity rather than making a choice for you. I'm all for an int literal being implicitly converted to a double if the function call is unambiguous and there's no loss of precision. But if there's any ambiguity, then it's definitely against the D way to have the compiler pick for you. - Jonathan M Davis
Oct 20 2011
parent reply Don <nospam nospam.com> writes:
On 20.10.2011 19:28, Jonathan M Davis wrote:
 On Thursday, October 20, 2011 05:13 Don wrote:
 Personally, I'd rather completely eliminate implicit conversions between
 integers and floating point types. But that's just me.
If it's a narrowing conversion, it should require a cast. If it's not, and there's no ambguity in the conversion, then I don't see any problem with allowing the conversion to be implicit. But then again, I deal with floating point values relatively rarely, so maybe there's something that I'm missing.
 My proposal was effectively: if it's ambiguous, choose double. That's all.
Are there _any_ cases in D right now where the compiler doesn't error out on ambiguity? In all of the cases that I can think of, D chooses to give an error on ambiguity rather than making a choice for you. I'm all for an int literal being implicitly converted to a double if the function call is unambiguous and there's no loss of precision.
The problem is, the existing approach will break a lot of existing code. For example, std.math.log(2) currently compiles. But, once the overload log(double) is added, which *must* happen, that code will break. Note that there is no realistic deprecation option, either. When the overload is added, code will break immediately.

If we continue with this approach, we have to accept that EVERY TIME we add a floating point overload, existing code will break. So, we either accept that; or we make everything that will ever break, break now (accepting that some stuff _will_ break, that would never have broken); or we introduce a tie-breaker rule.

The question we face is really, which is the lesser evil?
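The breakage pattern, sketched with a user-level stand-in for std.math.log (mylog is an invented name; the commented-out overload is exactly the addition described above):

real mylog(real x) { return x; }  // today: only a real overload exists

void user()
{
    auto y = mylog(2);            // fine: the literal has a single match, real
}

// Adding this overload later makes mylog(2) ambiguous and silently breaks user():
// double mylog(double x) { return x; }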
 But if there's any ambiguity, then it's
 definitely against the D way to have the compiler pick for you.
Explain why this compiles:

void foo(ubyte x) {}
void foo(short x) {}
void foo(ushort x) {}
void foo(int x) {}
void foo(uint x) {}
void foo(long x) {}
void foo(ulong x) {}

void main()
{
    byte b = -1;
    foo(b);  // How ambiguous can you get?????
}
Oct 20 2011
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, October 20, 2011 21:44:05 Don wrote:
 On 20.10.2011 19:28, Jonathan M Davis wrote:
 On Thursday, October 20, 2011 05:13 Don wrote:
 Personally, I'd rather completely eliminate implicit conversions
 between
 integers and floating point types. But that's just me.
If it's a narrowing conversion, it should require a cast. If it's not, and there's no ambguity in the conversion, then I don't see any problem with allowing the conversion to be implicit. But then again, I deal with floating point values relatively rarely, so maybe there's something that I'm missing.>
 My proposal was effectively: if it's ambiguous, choose double. That's
 all.> 
Are there _any_ cases in D right now where the compiler doesn't error out on ambiguity? In all of the cases that I can think of, D chooses to give an error on ambiguity rather than making a choice for you. I'm all for an int literal being implicitly converted to a double if the function call is unambiguous and there's no loss of precision.
The problem is, the existing approach will break a lot of existing code. For example, std.math.log(2) currently compiles. But, once the overload log(double) is added, which *must* happen, that code will break. Note that there is no realistic deprecation option, either. When the overload is added, code will break immediately. If we continue with this approach, we have to accept that EVERY TIME we add a floating point overload, existing code will break. So, we either accept that; or we make everything that will ever break, break now (accepting that some stuff _will_ break, that would never have broken); or we introduce a tie-breaker rule. The question we face is really, which is the lesser evil?

> But if there's any ambiguity, then it's
> definitely against the D way to have the compiler pick for you.

Explain why this compiles:

void foo(ubyte x) {}
void foo(short x) {}
void foo(ushort x) {}
void foo(int x) {}
void foo(uint x) {}
void foo(long x) {}
void foo(ulong x) {}

void main()
{
    byte b = -1;
    foo(b);  // How ambiguous can you get?????
}
I wouldn't have expected that to compile. If we're already doing ambiguous implicit casts like this, then implicitly casting an int to a double isn't really going to make this much worse.

On the bright side, it's almost certainly bad practice to have a function which takes a float and a double do something drastically different, so the ambiguity isn't likely to cause problems. But since D usually doesn't compile with ambiguities (particularly with classes), I'm surprised that it's as lax as it is with integral values.

- Jonathan M Davis
Oct 20 2011
prev sibling parent =?UTF-8?B?QWxleCBSw7hubmUgUGV0ZXJzZW4=?= <xtzgzorex gmail.com> writes:
On 20-10-2011 14:13, Don wrote:
 On 20.10.2011 13:12, Manu wrote:
 On 20 October 2011 11:02, Don <nospam nospam.com
 <mailto:nospam nospam.com>> wrote:

 On 20.10.2011 09:47, Manu wrote:

 Many architectures do not support real, and therefore it should
 never be
 used implicitly by the language.

 Precision problems aside, I would personally insist that implicit
 conversation from any sized int always be to float, not double, for
 performance reasons (the whole point of a compiled language trying
 to supersede C/C++).


 On almost all platforms, float and double are the same speed.


 This isn't true. Consider ARM, hard to say this isn't a vitally
 important architecture these days, and there are plenty of embedded
 architectures that don't support doubles at all, I would say it's a
 really bad idea to invent a systems programming language that excludes
 many architectures by its design... Atmel AVR is another important
 architecture.
It doesn't exclude anything. What we're talking about as desirable behaviour is exactly what C does. If you care about performance on ARM, you'll type sqrt(2.0f).

Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me.
+1.
 I maintain that implicit conversion of integers of any length should
 always target the same precision float, and that should be a compiler
 flag to specify the desired precision throughout the app (possibly
 defaulting to double).
I can't believe that you'd ever write an app without that being an upfront decision. Casually flipping it with a compiler flag?? Remember that it affects very few things (as discussed below).
 If you choose 'float' you may lose some precision obviously, but you
 expected that when you chose the options, and did the cast...
Explicit casts are not affected in any way.
 Note that what we're discussing here is parameter passing of single
 values; if it's part of an aggregate (array or struct), the issue
 doesn't arise.


 Are we? I thought we were discussing implicit conversion of ints to
 floats? This may be parameter passing, but also assignment I expect?
There's no problem with assignment, it's never ambiguous.

There seems to be some confusion about what the issue is. To reiterate:

void foo(float x) {}
void foo(double x) {}

void bar(float x) {}

void baz(double x) {}

void main()
{
    bar(2); // OK -- 2 becomes 2.0f
    baz(2); // OK -- 2 becomes 2.0
    foo(2); // fails -- ambiguous.
}

My proposal was effectively: if it's ambiguous, choose double. That's all.
Oct 20 2011
prev sibling parent reply Don <nospam nospam.com> writes:
On 20.10.2011 05:01, Steven Schveighoffer wrote:
 On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu>
 wrote:

 On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de>
 wrote:
 Am 20.10.2011, 02:46 Uhr, schrieb dsimcha <dsimcha yahoo.com>:

 On 10/19/2011 6:25 PM, Alvaro wrote:
 El 19/10/2011 20:12, dsimcha escribió:
 == Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Yes, and for the most part uncluttered programming is one of D's biggest strengths. Let's not ruin it by complicating sqrt(2).
What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit into an int, but it fits into a double.
Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.
It doesn't? I thought double could do 53 bits? Although I agree, long should map to real, because obviously not all longs fit into a double exactly. -Steve
But ulong.max does NOT fit into an 80-bit real. And long won't fit into real on anything other than x86, 68K, and Itanium.

I don't think long and ulong should ever implicitly convert to floating point types. Note that you can just do *1.0 or *1.0L if you want to convert them. Currently long implicitly converts even to float. This seems quite bad, it loses 60% of its bits!!

Suppose we also banned implicit conversions int->float and uint->float (since float only has 24 bits, these are lossy conversions, losing 25% of the bits). Now that we've disallowed lossy integral conversions, it really seems that we should disallow these ones as well.

If that was all we did, it would also mean that things like short+short wouldn't convert to float either, because C converts everything to int whenever it gets an opportunity. But we could use range checking to restore this (and to allow small longs to fit into doubles: allow conversion to double if it's <= 53 bits, allow conversion to float if <= 24 bits).
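A quick demonstration of the precision loss being discussed; the implicit long -> float conversion below is what the current language allows:

import std.stdio;

void main()
{
    long big = long.max;
    float  f = big;   // allowed implicitly today; only ~24 of 63 bits survive
    double d = big;   // ~53 of 63 bits survive
    writefln("long.max  = %s", big);
    writefln("as float  = %f", f);
    writefln("as double = %f", d);
}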
Oct 20 2011
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 20 Oct 2011 03:55:51 -0400, Don <nospam nospam.com> wrote:

 On 20.10.2011 05:01, Steven Schveighoffer wrote:
 On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu> wrote:
 [snip]
 Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.
 It doesn't? I thought double could do 53 bits? Although I agree, long should map to real, because obviously not all longs fit into a double exactly.
 -Steve
 But ulong.max does NOT fit into an 80-bit real. And long won't fit into real on anything other than x86, 68K, and Itanium.
 I don't think long and ulong should ever implicitly convert to floating point types. Note that you can just do *1.0 or *1.0L if you want to convert them.
 Currently long implicitly converts even to float. This seems quite bad, it loses 60% of its bits!!
 Suppose we also banned implicit conversions int->float and uint->float (since float only has 24 bits, these are lossy conversions, losing 25% of the bits).
 Now that we've disallowed lossy integral conversions, it really seems that we should disallow these ones as well.
 If that was all we did, it would also mean that things like short+short wouldn't convert to float either, because C converts everything to int whenever it gets an opportunity. But we could use range checking to restore this (and to allow small longs to fit into doubles: allow conversion to double if it's <= 53 bits, allow conversion to float if <= 24 bits).

Would you disagree though, that if a literal can be accurately represented as a real or double, it should be allowed?

-Steve
Oct 20 2011
prev sibling parent "Martin Nowak" <dawg dawgfoto.de> writes:
On Thu, 20 Oct 2011 09:55:51 +0200, Don <nospam nospam.com> wrote:

 On 20.10.2011 05:01, Steven Schveighoffer wrote:
 On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu>=
 wrote:

 On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de>=
 wrote:
 Am 20.10.2011, 02:46 Uhr, schrieb dsimcha <dsimcha yahoo.com>:

 On 10/19/2011 6:25 PM, Alvaro wrote:
 El 19/10/2011 20:12, dsimcha escribi=C3=B3:
 =3D=3D Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, perio=
d. =
 A
 language
 that adds a bunch of silly complications to something this simpl=
e =
 is
 fundamentally
 broken. I don't remember your post on implicit preferred  =
 conversions,
 but IMHO
 implicit conversions of integer to double is a no-brainer.  =
 Requiring
 something
 this simple to be explicit is Java/Pascal-like overkill on
 explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Yes, and for the most part uncluttered programming is one of D's biggest strengths. Let's not ruin it by complicating sqrt(2).
What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fi=
t
 into
 an int, but it fits into a double.
Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.
It doesn't? I thought double could do 53 bits? Although I agree, long should map to real, because obviously not all longs fit into a double exactly. -Steve
But ulong.max does NOT fit into an 80-bit real. And long won't fit int=
o =
 real on anything other than x86, 68K, and Itanium.

 I don't think long and ulong should ever implicitly convert to floatin=
g =
 point types. Note that you can just do *1.0 or *1.0L if you want to  =
 convert them.

 Currently long implicitly converts even to float. This seems quite bad=
, =
 it loses 60% of its bits!!

 Suppose we also banned implicit conversions int->float and uint->float=
=
 (since float only has 24 bits, these are lossy conversions, losing 25%=
=
 of the bits).

 Now that we've disallowed lossy integral conversions, it really seems =
=
 that we should disallow these ones as well.

 If that was all we did, it would also mean that things like short+shor=
t =
 wouldn't convert to float either, because C converts everything to int=
=
 whenever it gets an opportunity. But we could use range checking to  =
 restore this (and to allow small longs to fit into doubles: allow  =
 conversion to double if it's <=3D 53 bits, allow conversion to float i=
f <=3D =
 24 bits).
I'd really like to see, that all conversion were based on value range = propagation instead of the strange C rules. Thankfully this discussion reminded reminded me of an ugly header file b= ug http://d.puremagic.com/issues/show_bug.cgi?id=3D6833. martin
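For contrast, a minimal sketch of the C-style promotion rule called strange above, next to the range-propagation behaviour D already has for integers:

void main()
{
    short a = 1, b = 2;
    // short c = a + b;           // error: a + b is promoted to int (the C rule)
    short c = cast(short)(a + b); // explicit truncation is needed today
    ubyte d = (a + b) & 0x7F;     // ok: range propagation proves the result fits
}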
Oct 20 2011
prev sibling next sibling parent "Robert Jacques" <sandford jhu.edu> writes:
On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu> wrote:

 On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de> wrote:
 Am 20.10.2011, 02:46 Uhr, schrieb dsimcha <dsimcha yahoo.com>:

 On 10/19/2011 6:25 PM, Alvaro wrote:
 El 19/10/2011 20:12, dsimcha escribió:
 == Quote from Don (nospam nospam.com)'s article
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.
Yes, and for the most part uncluttered programming is one of D's biggest strengths. Let's not ruin it by complicating sqrt(2).
What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit into an int, but it fits into a double.
Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.
Oops. That should be '5_000_000_000 is a long' not ' is a 5_000_000_000 long'
Oct 19 2011
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Robert Jacques:

 Simple, is a 5_000_000_000 long, and longs convert to reals. Also,
5_000_000_000 does not fit, exactly inside a double.
There is nothing "simple" here... Bye, bearophile
Oct 19 2011
parent Don <nospam nospam.com> writes:
On 20.10.2011 05:25, bearophile wrote:
 Robert Jacques:

 Simple, is a 5_000_000_000 long, and longs convert to reals. Also,
5_000_000_000 does not fit, exactly inside a double.
There is nothing "simple" here... Bye, bearophile
Yeah, but the problem isn't with ints, it's with integer literals, where there is no problem determining if it implicitly converts or not.
Oct 19 2011
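A quick illustration of the distinction Don is drawing, using behaviour D already has for integer targets: when the exact value is known at compile time (a literal, or a range the compiler can prove), value range propagation decides the conversion, and only genuinely unknown runtime values need a cast. The variable names are arbitrary.

void main()
{
    ubyte a = 255;       // accepted: the literal provably fits in a ubyte
    // ubyte b = 256;    // rejected: the literal provably does not fit

    int n = 300;
    // ubyte c = n;      // rejected: a runtime int might not fit
    ubyte d = n & 0xFF;  // accepted: range propagation proves the result fits
}

The argument in the thread is that an integer literal passed to sqrt could be handled the same way: its exact value is known, so the compiler can tell whether double (or float) represents it exactly.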
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
On 10/19/2011 10:57 PM, Robert Jacques wrote:
 Also, 5_000_000_000 does not fit, exactly inside a double.
Yes it does. Doubles can hold integers exactly up to 2 ^^ 53. http://en.wikipedia.org/wiki/Double_precision_floating-point_format
Oct 19 2011
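The 2^53 bound is easy to verify directly; a minimal check, assuming a standard IEEE 754 double:

void main()
{
    double limit = 9_007_199_254_740_992.0;  // 2.0 ^^ 53
    assert(limit + 1.0 == limit);  // 2^53 + 1 rounds back down: not representable
    assert(limit - 1.0 != limit);  // every integer below 2^53 is exact
}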
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 10/20/2011 05:34 AM, dsimcha wrote:
 On 10/19/2011 10:57 PM, Robert Jacques wrote:
 Also, 5_000_000_000 does not fit, exactly inside a double.
Yes it does. Doubles can hold integers exactly up to 2 ^^ 53. http://en.wikipedia.org/wiki/Double_precision_floating-point_format
5_000_000_000 even fits exactly into an IEEE 754 32-bit _float_.
Oct 20 2011
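That is right, and worth spelling out: 5_000_000_000 = 2^9 * 9_765_625, and 9_765_625 < 2^24, so the significand fits even in single precision. A quick check (the assignments below use exactly the implicit integer-to-floating-point conversion being debated in this thread):

void main()
{
    enum long n = 5_000_000_000;  // = 2^9 * 9_765_625

    float  f = n;  // 24-bit significand suffices: 9_765_625 < 2^24
    double d = n;  // exact as well; anything exact in float is exact in double

    assert(cast(long) f == n);
    assert(cast(long) d == n);
}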
prev sibling parent Don <nospam nospam.com> writes:
On 19.10.2011 20:12, dsimcha wrote:
 == Quote from Don (nospam nospam.com)'s article
 In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's
 ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also
 applies to _any_ function which has overloads for more than one floating
 point type.
 In D2 between versions 2.049 and the present, sqrt(2) compiles due to
 the request of a small number of people (2-3, I think). But still, no
 other floating point function works with integer literals.
 The "bug" being fixed was
 Bugzilla 4455: Taking the sqrt of an integer shouldn't require an
 explicit cast.
 This compiles only due to an awful, undocumented hack in std.math. It
 doesn't work for any function other than sqrt. I protested strongly
 against this, but accepted it only on the proviso that we would fix
 integer literal conversions to floating point in _all_ cases, so that
 the hack could be removed.
 However, when I proposed the fix on the newsgroup (integer literals
 should have a preferred conversion to double), it was *unanimously*
 rejected. Those who had argued for the hack were conspicuously absent.
 The hack must go.
No. Something as simple as sqrt(2) must work at all costs, period.
Where the hell were you when I made that proposal before? Frankly, I'm pissed off that you guys bullied me into putting an *awful* temporary hack into std.math, and then gave me no support when the idea got shouted down on the ng.
 A language
 that adds a bunch of silly complications to something this simple is
fundamentally
 broken.  I don't remember your post on implicit preferred conversions, but IMHO
 implicit conversions of integer to double is a no-brainer.  Requiring something
 this simple to be explicit is Java/Pascal-like overkill on explicitness.
The bottom line: the hack MUST go. Either we fix this properly, as I suggested, or else it must not compile.
Oct 19 2011
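For reference, the ambiguity at the heart of the thread is easy to reproduce with a user-defined overload set. The sketch below deliberately wraps std.math.sqrt in a made-up name, mySqrt, since whether sqrt(2) itself compiles depends on the compiler version and on the std.math hack being discussed.

import std.math : sqrt;

// An overload set shaped like the float/double/real pattern in std.math.
float  mySqrt(float x)  { return sqrt(x); }
double mySqrt(double x) { return sqrt(x); }
real   mySqrt(real x)   { return sqrt(x); }

void main()
{
    // auto a = mySqrt(2);  // error: the integer literal 2 converts equally
    //                      // well to float, double and real, so the call
    //                      // matches all three overloads ambiguously
    auto b = mySqrt(2.0);   // unambiguous: double overload
    auto c = mySqrt(2.0f);  // unambiguous: float overload
    auto d = mySqrt(2.0L);  // unambiguous: real overload
}

This is the dispute in miniature: either integer literals get a preferred floating-point conversion so the commented-out call resolves, or calls like it stay explicit.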