digitalmars.D - sqrt(2) must go
- Don (19/19) Oct 19 2011 In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's
- =?ISO-8859-1?Q?Alex_R=F8nne_Petersen?= (7/26) Oct 19 2011 What on earth does it matter? It's just a cast. And when typing out
- =?ISO-8859-1?Q?Alex_R=F8nne_Petersen?= (4/36) Oct 19 2011 PS: What's wrong with converting integer literals to double? It's a
- Marco Leise (24/64) Oct 19 2011 t's
- Jesse Phillips (2/8) Oct 19 2011 I agree. If we should have a proper fix, or no fix at all. If it isn't g...
- Andrej Mitrovic (17/17) Oct 19 2011 You could maybe ease transition for people that rely on this behavior vi...
- Andrej Mitrovic (4/4) Oct 19 2011 That call "return sqrt(f);" should have been "return sqrt!F(f)" that
- Andrej Mitrovic (2/2) Oct 19 2011 http://codepad.org/4g0hXOse
- bearophile (4/7) Oct 19 2011 I don't remember the rationale of those refusals. Do you have a link to ...
- dsimcha (6/25) Oct 19 2011 No. Something as simple as sqrt(2) must work at all costs, period. A l...
- Andrej Mitrovic (1/1) Oct 19 2011 http://www.digitalmars.com/d/archives/digitalmars/D/PROPOSAL_Implicit_co...
- Marco Leise (2/11) Oct 19 2011 Pascal (FreePascal 2.4.0) allows sqrt(2) just fine.
- dsimcha (3/14) Oct 19 2011 LOL and Pascal was my example of a bondage-and-discipline language. All...
- Russel Winder (15/17) Oct 19 2011 the more
- Alvaro (5/12) Oct 19 2011 Completely agree.
- bearophile (4/7) Oct 19 2011 Explicitness usually means adding more annotations in the code, and this...
- dsimcha (3/20) Oct 19 2011 Yes, and for the most part uncluttered programming is one of D's biggest...
- Marco Leise (8/32) Oct 19 2011 be
- Robert Jacques (2/30) Oct 19 2011 Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000...
- Steven Schveighoffer (13/51) Oct 19 2011 A
- Marco Leise (9/55) Oct 19 2011 =
- dsimcha (4/5) Oct 19 2011 Yes, it's just that they may only give 64 bits of precision. Floating
- Russel Winder (29/35) Oct 19 2011 This is not convincing.
- Marco Leise (8/32) Oct 20 2011 Sure it matters, but performance also matters. If I needed the precision...
- Marco Leise (3/43) Oct 20 2011 Forget what I said. If a 64-bit mantissa floating point type doesn't exi...
- Robert Jacques (2/47) Oct 19 2011
- Manu (24/76) Oct 20 2011 Many architectures do not support real, and therefore it should never be
- Don (5/11) Oct 20 2011 On almost all platforms, float and double are the same speed.
- Manu (14/27) Oct 20 2011 This isn't true. Consider ARM, hard to say this isn't a vitally importan...
- Don (24/52) Oct 20 2011 It doesn't exclude anything. What we're talking about as desirable
- Manu (34/99) Oct 20 2011 Yeah sorry, I think you're right, the discussion got slightly lost in th...
- Don (22/120) Oct 20 2011 Actually there is a problem there, I think. If someone later on adds
- Steven Schveighoffer (8/14) Oct 20 2011 Should there be a concern over silently changing the code path? For
- Manu (32/204) Oct 20 2011 Hmmm.
- Simen Kjaeraas (11/16) Oct 20 2011 D specifically supports double (as a 64-bit float), regardless of the
- Manu (22/33) Oct 20 2011 Correct, on all architectures I'm aware of that don't have hardware doub...
- Jonathan M Davis (24/60) Oct 20 2011 Correctness has _nothing_ to do with efficiency. It has to do with the r...
- Manu (11/93) Oct 20 2011 I think you just brushed over my entire concern with respect to librarie...
- Marco Leise (10/22) Oct 20 2011 I start to understand the problems with implicit conversions and I think...
- Robert Jacques (12/23) Oct 20 2011 I agree that #5 and #4 not acceptable longer term solutions. I do CUDA/G...
- Don (14/48) Oct 20 2011 Yeah, that's a valuable perspective.
- Manu (13/69) Oct 21 2011 1: Seems reasonable for literals; "Integer literals and expressions shou...
- Don (14/90) Oct 23 2011 Any expression. Just as right now, long converts to int only if the long...
- Manu (8/128) Oct 23 2011 Okay so we're thinking of allowing implicit casting now ONLY if it can b...
- Manu (13/93) Oct 21 2011 Then again, with regards to 1, the function chosen will depend on the
- Robert Jacques (13/40) Oct 21 2011 Consider sqrt(i % 10). No matter what i is, the range of i % 10 is 0-9.
- Manu (20/64) Oct 21 2011 It would still allow function hijacking.
- Robert Jacques (2/8) Oct 21 2011 Hijacking is what happends when someone adds func(float v); _in another ...
- Manu (3/14) Oct 22 2011 Sure, and hijacking is bound to happen under your proposal, no?
- Robert Jacques (22/39) Oct 22 2011 Manu, I'm not sure you understand how function hijack detection works to...
- dsimcha (5/19) Oct 20 2011 This proposal seems like a no-brainer to me. I sincerely apologize for
- Manu (3/29) Oct 20 2011 I vote for "Error: Ambiguous call to overloaded function". NOT implicit
- Eric Poggel (JoeCoder) (2/4) Oct 20 2011 vote++
- dsimcha (3/7) Oct 20 2011 I would fork the language over this because it would break too much exis...
- Eric Poggel (JoeCoder) (2/9) Oct 20 2011 Not saying it should be immediate. Maybe D3.
- Jonathan M Davis (12/15) Oct 20 2011 If it's a narrowing conversion, it should require a cast. If it's not, a...
- Don (25/40) Oct 20 2011 The problem is, the existing approach will break a lot of existing code....
- Jonathan M Davis (9/60) Oct 20 2011 I wouldn't have expected that to compile. If we're already doing ambiguo...
- =?UTF-8?B?QWxleCBSw7hubmUgUGV0ZXJzZW4=?= (2/63) Oct 20 2011
- Don (19/64) Oct 20 2011 But ulong.max does NOT fit into an 80-bit real. And long won't fit into
- Steven Schveighoffer (17/89) Oct 20 2011 e =
- Martin Nowak (20/92) Oct 20 2011 e =
- Robert Jacques (2/34) Oct 19 2011 Opps. That should be '5_000_000_000 is a long' not ' is a 5_000_000_000 ...
- bearophile (4/5) Oct 19 2011 There is nothing "simple" here...
- Don (3/8) Oct 19 2011 Yeah, but the problem isn't with ints, it's with integer literals, where...
- dsimcha (3/4) Oct 19 2011 Yes it does. Doubles can hold integers exactly up to 2 ^^ 53.
- Timon Gehr (2/6) Oct 20 2011 5_000_000_000 even fits exactly into a IEEE 734 32-bit _float_.
- Don (7/33) Oct 19 2011 Where the hell were you when I made that proposal before?
In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also applies to _any_ function which has overloads for more than one floating point type. In D2 between versions 2.049 and the present, sqrt(2) compiles due to the request of a small number of people (2-3, I think). But still, no other floating point function works with integer literals. The "bug" being fixed was Bugzilla 4455: Taking the sqrt of an integer shouldn't require an explicit cast. This compiles only due to an awful, undocumented hack in std.math. It doesn't work for any function other than sqrt. I protested strongly against this, but accepted it only on the proviso that we would fix integer literal conversions to floating point in _all_ cases, so that the hack could be removed. However, when I proposed the fix on the newsgroup (integer literals should have a preferred conversion to double), it was *unanimously* rejected. Those who had argued for the hack were conspicuously absent. The hack must go.
Oct 19 2011
On 19-10-2011 18:18, Don wrote:In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also applies to _any_ function which has overloads for more than one floating point type. In D2 between versions 2.049 and the present, sqrt(2) compiles due to the request of a small number of people (2-3, I think). But still, no other floating point function works with integer literals. The "bug" being fixed was Bugzilla 4455: Taking the sqrt of an integer shouldn't require an explicit cast. This compiles only due to an awful, undocumented hack in std.math. It doesn't work for any function other than sqrt. I protested strongly against this, but accepted it only on the proviso that we would fix integer literal conversions to floating point in _all_ cases, so that the hack could be removed. However, when I proposed the fix on the newsgroup (integer literals should have a preferred conversion to double), it was *unanimously* rejected. Those who had argued for the hack were conspicuously absent. The hack must go.What on earth does it matter? It's just a cast. And when typing out floating-point literals, it *really* does not hurt to type 2.0f instead of 2. In .NET land, people live with this just fine (see System.Math.Sqrt). Why can't we? I say kill the hack. - Alex
Oct 19 2011
On 19-10-2011 18:22, Alex Rønne Petersen wrote:On 19-10-2011 18:18, Don wrote:PS: What's wrong with converting integer literals to double? It's a lossless conversion, no? - AlexIn D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also applies to _any_ function which has overloads for more than one floating point type. In D2 between versions 2.049 and the present, sqrt(2) compiles due to the request of a small number of people (2-3, I think). But still, no other floating point function works with integer literals. The "bug" being fixed was Bugzilla 4455: Taking the sqrt of an integer shouldn't require an explicit cast. This compiles only due to an awful, undocumented hack in std.math. It doesn't work for any function other than sqrt. I protested strongly against this, but accepted it only on the proviso that we would fix integer literal conversions to floating point in _all_ cases, so that the hack could be removed. However, when I proposed the fix on the newsgroup (integer literals should have a preferred conversion to double), it was *unanimously* rejected. Those who had argued for the hack were conspicuously absent. The hack must go.What on earth does it matter? It's just a cast. And when typing out floating-point literals, it *really* does not hurt to type 2.0f instead of 2. In .NET land, people live with this just fine (see System.Math.Sqrt). Why can't we? I say kill the hack. - Alex
Oct 19 2011
Am 19.10.2011, 18:25 Uhr, schrieb Alex R=C3=B8nne Petersen = <xtzgzorex gmail.com>:On 19-10-2011 18:22, Alex R=C3=B8nne Petersen wrote:t'sOn 19-10-2011 18:18, Don wrote:In D2 prior to 2.048, sqrt(2) does not compile. The reason is that i=lsoambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This a=applies to _any_ function which has overloads for more than one =ofloating point type. In D2 between versions 2.049 and the present, sqrt(2) compiles due t=othe request of a small number of people (2-3, I think). But still, n=tother floating point function works with integer literals. The "bug" being fixed was Bugzilla 4455: Taking the sqrt of an integer shouldn't require an explicit cast. This compiles only due to an awful, undocumented hack in std.math. I=tdoesn't work for any function other than sqrt. I protested strongly against this, but accepted it only on the proviso that we would fix integer literal conversions to floating point in _all_ cases, so tha=t.the hack could be removed. However, when I proposed the fix on the newsgroup (integer literals should have a preferred conversion to double), it was *unanimously* rejected. Those who had argued for the hack were conspicuously absen=adThe hack must go.What on earth does it matter? It's just a cast. And when typing out floating-point literals, it *really* does not hurt to type 2.0f inste=of 2. In .NET land, people live with this just fine (see System.Math.Sqrt). Why can't we? I say kill the hack. - AlexPS: What's wrong with converting integer literals to double? It's a =lossless conversion, no? - AlexAs long as it is not a 64-bit integer, yes. More philosophical, I think integer math should be separate from FP math= , = because it has different semantics (integer division with remainder, = overflow vs. infinity, fixed steps vs. exponent, shifting, ...). So I = think .NET handles this correctly. But converting 32-bit ints to double = = shouldn't harm much either. JavaScript has no integers, only doubles. It= = works. When you come across a for loop there and think to yourself "it incremen= ts = a variant that stores a double there", you can laugh or cry. (Actually = good JS engines optimize that case, of course.)
Oct 19 2011
Don Wrote:However, when I proposed the fix on the newsgroup (integer literals should have a preferred conversion to double), it was *unanimously* rejected. Those who had argued for the hack were conspicuously absent. The hack must go.I agree. If we should have a proper fix, or no fix at all. If it isn't going to compile then be sure it is documented. While it will be strange not having it compile it does make sense that when dealing with floating point you should choose its size.
Oct 19 2011
You could maybe ease transition for people that rely on this behavior via: import std.traits; auto sqrt(F, T)(T x) { return sqrt(cast(F)x); } auto sqrt(F)(F f) if (!isIntegral!F) { return sqrt(f); } void main() { int x = 1; sqrt(4); // ng sqrt!float(x); // ok sqrt(4.5); // ok } It sure is shorter to type than cast(float). Then again it's templated so maybe that's not too nice for std.math.
Oct 19 2011
That call "return sqrt(f);" should have been "return sqrt!F(f)" that forwards to sqrt functions that take floats/doubles/reals. Again these would be templates.. it's a shame templates still can't overload against functions.
Oct 19 2011
http://codepad.org/4g0hXOse Too much?
Oct 19 2011
Don:However, when I proposed the fix on the newsgroup (integer literals should have a preferred conversion to double), it was *unanimously* rejected.I don't remember the rationale of those refusals. Do you have a link to the discussion? Bye, bearophile
Oct 19 2011
== Quote from Don (nospam nospam.com)'s articleIn D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also applies to _any_ function which has overloads for more than one floating point type. In D2 between versions 2.049 and the present, sqrt(2) compiles due to the request of a small number of people (2-3, I think). But still, no other floating point function works with integer literals. The "bug" being fixed was Bugzilla 4455: Taking the sqrt of an integer shouldn't require an explicit cast. This compiles only due to an awful, undocumented hack in std.math. It doesn't work for any function other than sqrt. I protested strongly against this, but accepted it only on the proviso that we would fix integer literal conversions to floating point in _all_ cases, so that the hack could be removed. However, when I proposed the fix on the newsgroup (integer literals should have a preferred conversion to double), it was *unanimously* rejected. Those who had argued for the hack were conspicuously absent. The hack must go.No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Oct 19 2011
http://www.digitalmars.com/d/archives/digitalmars/D/PROPOSAL_Implicit_conversions_of_integer_literals_to_floating_point_125539.html#N125539
Oct 19 2011
Am 19.10.2011, 20:12 Uhr, schrieb dsimcha <dsimcha yahoo.com>:No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.Pascal (FreePascal 2.4.0) allows sqrt(2) just fine.
Oct 19 2011
== Quote from Marco Leise (Marco.Leise gmx.de)'s articleAm 19.10.2011, 20:12 Uhr, schrieb dsimcha <dsimcha yahoo.com>:LOL and Pascal was my example of a bondage-and-discipline language. All the more reason why we need to allow it in D come Hell or high water.No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.Pascal (FreePascal 2.4.0) allows sqrt(2) just fine.
Oct 19 2011
On Wed, 2011-10-19 at 19:12 +0000, dsimcha wrote: [ . . . ]LOL and Pascal was my example of a bondage-and-discipline language. All =the morereason why we need to allow it in D come Hell or high water.Bondage. Discipline. Does this mean Lady Heather will take control? =20 --=20 Russel. =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.n= et 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Oct 19 2011
El 19/10/2011 20:12, dsimcha escribió:== Quote from Don (nospam nospam.com)'s articleCompletely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.The hack must go.No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Oct 19 2011
Alvaro:I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.Explicitness usually means adding more annotations in the code, and this usually increases the visual noise in inside the code. This noise masks the code and often leads to mistakes. On the other hand too many implicit disallow some of them available in C. OCaML language disallow most of them). So the language designers must find some middle balancing point, that is somehow an optimum. Bye, bearophile
Oct 19 2011
On 10/19/2011 6:25 PM, Alvaro wrote:El 19/10/2011 20:12, dsimcha escribió:Yes, and for the most part uncluttered programming is one of D's biggest strengths. Let's not ruin it by complicating sqrt(2).== Quote from Don (nospam nospam.com)'s articleCompletely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.The hack must go.No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Oct 19 2011
Am 20.10.2011, 02:46 Uhr, schrieb dsimcha <dsimcha yahoo.com>:On 10/19/2011 6:25 PM, Alvaro wrote:El 19/10/2011 20:12, dsimcha escribi=C3=B3:=3D=3D Quote from Don (nospam nospam.com)'s articleThe hack must go.No. Something as simple as sqrt(2) must work at all costs, period. A=language that adds a bunch of silly complications to something this simple is=,fundamentally broken. I don't remember your post on implicit preferred conversions=but IMHO implicit conversions of integer to double is a no-brainer. Requiring=something this simple to be explicit is Java/Pascal-like overkill on =beexplicitness.Completely agree. I call that uncluttered programming. No excessive explicitness should=s).necessary when what you mean is obvious (under some simple convention=st =Leads to clearer code.Yes, and for the most part uncluttered programming is one of D's bigge=strengths. Let's not ruin it by complicating sqrt(2).What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit int= o = an int, but it fits into a double.
Oct 19 2011
On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de> wrote:Am 20.10.2011, 02:46 Uhr, schrieb dsimcha <dsimcha yahoo.com>:Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.On 10/19/2011 6:25 PM, Alvaro wrote:What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit into an int, but it fits into a double.El 19/10/2011 20:12, dsimcha escribió:Yes, and for the most part uncluttered programming is one of D's biggest strengths. Let's not ruin it by complicating sqrt(2).== Quote from Don (nospam nospam.com)'s articleCompletely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.The hack must go.No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.
Oct 19 2011
On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu> = wrote:On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de> =wrote:AAm 20.10.2011, 02:46 Uhr, schrieb dsimcha <dsimcha yahoo.com>:On 10/19/2011 6:25 PM, Alvaro wrote:El 19/10/2011 20:12, dsimcha escribi=C3=B3:=3D=3D Quote from Don (nospam nospam.com)'s articleThe hack must go.No. Something as simple as sqrt(2) must work at all costs, period.=islanguage that adds a bunch of silly complications to something this simple =ns,fundamentally broken. I don't remember your post on implicit preferred conversio=ngbut IMHO implicit conversions of integer to double is a no-brainer. Requiri=ld =something this simple to be explicit is Java/Pascal-like overkill on explicitness.Completely agree. I call that uncluttered programming. No excessive explicitness shou=be necessary when what you mean is obvious (under some simple =conventions). Leads to clearer code.Yes, and for the most part uncluttered programming is one of D's ==biggest strengths. Let's not ruin it by complicating sqrt(2).What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit =into an int, but it fits into a double.Simple, is a 5_000_000_000 long, and longs convert to reals. Also, =5_000_000_000 does not fit, exactly inside a double.It doesn't? I thought double could do 53 bits? Although I agree, long should map to real, because obviously not all lon= gs = fit into a double exactly. -Steve
Oct 19 2011
Am 20.10.2011, 05:01 Uhr, schrieb Steven Schveighoffer = <schveiguy yahoo.com>:On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu> ==wrote:=On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de> =. Awrote:Am 20.10.2011, 02:46 Uhr, schrieb dsimcha <dsimcha yahoo.com>:On 10/19/2011 6:25 PM, Alvaro wrote:El 19/10/2011 20:12, dsimcha escribi=C3=B3:=3D=3D Quote from Don (nospam nospam.com)'s articleThe hack must go.No. Something as simple as sqrt(2) must work at all costs, period=islanguage that adds a bunch of silly complications to something this simple=fundamentally broken. I don't remember your post on implicit preferred =ingconversions, but IMHO implicit conversions of integer to double is a no-brainer. Requir=something this simple to be explicit is Java/Pascal-like overkill on explicitness.Completely agree. I call that uncluttered programming. No excessive explicitness =should be necessary when what you mean is obvious (under some simple =conventions). Leads to clearer code.Yes, and for the most part uncluttered programming is one of D's ==biggest strengths. Let's not ruin it by complicating sqrt(2).What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit=into an int, but it fits into a double.Simple, is a 5_000_000_000 long, and longs convert to reals. Also, =5_000_000_000 does not fit, exactly inside a double.It doesn't? I thought double could do 53 bits? Although I agree, long should map to real, because obviously not all =longs fit into a double exactly. -SteveAnd real can be used without protability problems on PowerPC or ARM?
Oct 19 2011
On 10/19/2011 11:27 PM, Marco Leise wrote:And real can be used without protability problems on PowerPC or ARM?Yes, it's just that they may only give 64 bits of precision. Floating point is inexact anyhow, though. IMHO the fact that you may lose a little precision with very large longs is not a game changer.
Oct 19 2011
On Wed, 2011-10-19 at 23:36 -0400, dsimcha wrote:On 10/19/2011 11:27 PM, Marco Leise wrote:=20And real can be used without protability problems on PowerPC or ARM?=20 Yes, it's just that they may only give 64 bits of precision. Floating=point is inexact anyhow, though. IMHO the fact that you may lose a=20 little precision with very large longs is not a game changer.This is not convincing. One of the biggest problem is software development is that computers have two systems of hardware arithmetic that are mutually incompatible and have very different properties. Humans are taught that there are abstract numbers that can be put into different sets: reals, integers, naturals, etc. There are already far too many programmers out there who do not understand that computer numbers have representation problems and rounding errors. Another issue: sqrt ( 2 ) sqrt ( 2.0 ) sqrt ( 2.0000000000000000000000000000000000000000 ) actually mean very different things. The number of zeros carries information. Summary, losing precision is a game changer. This stuff matters. This is a hard problem. --=20 Russel. =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.n= et 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Oct 19 2011
Am 20.10.2011, 08:02 Uhr, schrieb Russel Winder <russel russel.org.uk>:On Wed, 2011-10-19 at 23:36 -0400, dsimcha wrote:Sure it matters, but performance also matters. If I needed the precision of a real, I would make sure that I give the compile the right hint. And adding zeros doesn't help. The representation is mantissa and exponent and your three examples would all come out the same in that representation. :) Is this really a real life problem, or can we just go with any solution for sqrt(2) that works (int->double, long->real) and leave the details to the ones who care and would write sqrt(2.0f) ?On 10/19/2011 11:27 PM, Marco Leise wrote:This is not convincing. One of the biggest problem is software development is that computers have two systems of hardware arithmetic that are mutually incompatible and have very different properties. Humans are taught that there are abstract numbers that can be put into different sets: reals, integers, naturals, etc. There are already far too many programmers out there who do not understand that computer numbers have representation problems and rounding errors. Another issue: sqrt ( 2 ) sqrt ( 2.0 ) sqrt ( 2.0000000000000000000000000000000000000000 ) actually mean very different things. The number of zeros carries information. Summary, losing precision is a game changer. This stuff matters. This is a hard problem.And real can be used without protability problems on PowerPC or ARM?Yes, it's just that they may only give 64 bits of precision. Floating point is inexact anyhow, though. IMHO the fact that you may lose a little precision with very large longs is not a game changer.
Oct 20 2011
Am 20.10.2011, 22:37 Uhr, schrieb Marco Leise <Marco.Leise gmx.de>:Am 20.10.2011, 08:02 Uhr, schrieb Russel Winder <russel russel.org.uk>:Forget what I said. If a 64-bit mantissa floating point type doesn't exist on all systems this doesn't work.On Wed, 2011-10-19 at 23:36 -0400, dsimcha wrote:Sure it matters, but performance also matters. If I needed the precision of a real, I would make sure that I give the compile the right hint. And adding zeros doesn't help. The representation is mantissa and exponent and your three examples would all come out the same in that representation. :) Is this really a real life problem, or can we just go with any solution for sqrt(2) that works (int->double, long->real) and leave the details to the ones who care and would write sqrt(2.0f) ?On 10/19/2011 11:27 PM, Marco Leise wrote:This is not convincing. One of the biggest problem is software development is that computers have two systems of hardware arithmetic that are mutually incompatible and have very different properties. Humans are taught that there are abstract numbers that can be put into different sets: reals, integers, naturals, etc. There are already far too many programmers out there who do not understand that computer numbers have representation problems and rounding errors. Another issue: sqrt ( 2 ) sqrt ( 2.0 ) sqrt ( 2.0000000000000000000000000000000000000000 ) actually mean very different things. The number of zeros carries information. Summary, losing precision is a game changer. This stuff matters. This is a hard problem.And real can be used without protability problems on PowerPC or ARM?Yes, it's just that they may only give 64 bits of precision. Floating point is inexact anyhow, though. IMHO the fact that you may lose a little precision with very large longs is not a game changer.
Oct 20 2011
On Wed, 19 Oct 2011 23:01:34 -0400, Steven Schveighoffer <schveiguy yahoo.com> wrote:On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu> wrote:Yes. You're right. Sorry, my brain automatically skipped forward to 5_000_000_000 => long => real.On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de> wrote:It doesn't? I thought double could do 53 bits?Am 20.10.2011, 02:46 Uhr, schrieb dsimcha <dsimcha yahoo.com>:Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.On 10/19/2011 6:25 PM, Alvaro wrote:What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit into an int, but it fits into a double.El 19/10/2011 20:12, dsimcha escribió:Yes, and for the most part uncluttered programming is one of D's biggest strengths. Let's not ruin it by complicating sqrt(2).== Quote from Don (nospam nospam.com)'s articleCompletely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.The hack must go.No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.Although I agree, long should map to real, because obviously not all longs fit into a double exactly. -Steve
Oct 19 2011
Many architectures do not support real, and therefore it should never be used implicitly by the language. Precision problems aside, I would personally insist that implicit conversation from any sized int always be to float, not double, for performance reasons (the whole point of a compiled language trying to supersede C/C++). Amusingly, 5_000_000_000 IS actually precisely representable with a float ;) .. Bet let's take 5_000_000_001, it'll lose a few bits, but that's more than precise enough for me. Naturally the majority would make solid arguments against this preference, and I would agree with them in their argument, therefore should it not just be a compiler flag/option to explicitly specify the implicit int->float conversion precision? Though that leads to a problem with standard library, since it links a pre-compiled binary... I can't afford to have the standard library messing around with doubles because that was the flag it was compiled with... This leads inevitably to my pointlessly rewriting the standard library functions in my own code, just as in C where the CRT uses doubles, for the same reasons... On 20 October 2011 06:01, Steven Schveighoffer <schveiguy yahoo.com> wrote:On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu> wrote: On Wed, 19 Oct 2011 22:52:14 -0400, Marco Leise <Marco.Leise gmx.de>,wrote:Am 20.10.2011, 02:46 Uhr, schrieb dsimcha <dsimcha yahoo.com>: On 10/19/2011 6:25 PM, Alvaro wrote:El 19/10/2011 20:12, dsimcha escribi=C3=B3:=3D=3D Quote from Don (nospam nospam.com)'s articleThe hack must go.No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions=stYes, and for the most part uncluttered programming is one of D's bigge=but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.Completely agree. I call that uncluttered programming. No excessive explicitness should be necessary when what you mean is obvious (under some simple conventions). Leads to clearer code.tostrengths. Let's not ruin it by complicating sqrt(2).What is the compiler to do with sqrt(5_000_000_000) ? It doesn't fit in=sIt doesn't? I thought double could do 53 bits? Although I agree, long should map to real, because obviously not all long=an int, but it fits into a double.Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.fit into a double exactly. -Steve
Oct 20 2011
On 20.10.2011 09:47, Manu wrote:Many architectures do not support real, and therefore it should never be used implicitly by the language. Precision problems aside, I would personally insist that implicit conversation from any sized int always be to float, not double, for performance reasons (the whole point of a compiled language trying to supersede C/C++).On almost all platforms, float and double are the same speed. Note that what we're discussing here is parameter passing of single values; if it's part of an aggregate (array or struct), the issue doesn't arise.
Oct 20 2011
On 20 October 2011 11:02, Don <nospam nospam.com> wrote:On 20.10.2011 09:47, Manu wrote:This isn't true. Consider ARM, hard to say this isn't a vitally important architecture these days, and there are plenty of embedded architectures that don't support doubles at all, I would say it's a really bad idea to invent a systems programming language that excludes many architectures by its design... Atmel AVR is another important architecture. I maintain that implicit conversion of integers of any length should always target the same precision float, and that should be a compiler flag to specify the desired precision throughout the app (possibly defaulting to double). If you choose 'float' you may lose some precision obviously, but you expected that when you chose the options, and did the cast...Many architectures do not support real, and therefore it should never be used implicitly by the language. Precision problems aside, I would personally insist that implicit conversation from any sized int always be to float, not double, for performance reasons (the whole point of a compiled language trying to supersede C/C++).On almost all platforms, float and double are the same speed.Note that what we're discussing here is parameter passing of single values; if it's part of an aggregate (array or struct), the issue doesn't arise.Are we? I thought we were discussing implicit conversion of ints to floats? This may be parameter passing, but also assignment I expect?
Oct 20 2011
On 20.10.2011 13:12, Manu wrote:On 20 October 2011 11:02, Don <nospam nospam.com <mailto:nospam nospam.com>> wrote: On 20.10.2011 09:47, Manu wrote: Many architectures do not support real, and therefore it should never be used implicitly by the language. Precision problems aside, I would personally insist that implicit conversation from any sized int always be to float, not double, for performance reasons (the whole point of a compiled language trying to supersede C/C++). On almost all platforms, float and double are the same speed. This isn't true. Consider ARM, hard to say this isn't a vitally important architecture these days, and there are plenty of embedded architectures that don't support doubles at all, I would say it's a really bad idea to invent a systems programming language that excludes many architectures by its design... Atmel AVR is another important architecture.It doesn't exclude anything. What we're talking about as desirable behaviour, is exactly what C does. If you care about performance on ARM, you'll type sqrt(2.0f). Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me.I maintain that implicit conversion of integers of any length should always target the same precision float, and that should be a compiler flag to specify the desired precision throughout the app (possibly defaulting to double).I can't believe that you'd ever write an app without that being an upfront decision. Casually flipping it with a compiler flag?? Remember that it affects very few things (as discussed below).If you choose 'float' you may lose some precision obviously, but you expected that when you chose the options, and did the cast...Explicit casts are not affected in any way.Note that what we're discussing here is parameter passing of single values; if it's part of an aggregate (array or struct), the issue doesn't arise. Are we? I thought we were discussing implicit conversion of ints to floats? This may be parameter passing, but also assignment I expect?There's no problem with assignment, it's never ambiguous. There seems to be some confusion about what the issue is. To reiterate: void foo(float x) {} void foo(double x) {} void bar(float x) {} void baz(double x) {} void main() { bar(2); // OK -- 2 becomes 2.0f baz(2); // OK -- 2 becomes 2.0 foo(2); // fails -- ambiguous. } My proposal was effectively: if it's ambiguous, choose double. That's all.
Oct 20 2011
On 20 October 2011 15:13, Don <nospam nospam.com> wrote:On 20.10.2011 13:12, Manu wrote:Yeah sorry, I think you're right, the discussion got slightly lost in the noise here... Just to clarify, where you advocate eliminating implicit casting, do you now refer to ALL implicit casting? Or just implicit casting to an ambiguous target? Let me reposition myself to suit what it would seem is actually being discussed... :) void sqrt(float x); void sqrt(double x); void sqrt(real x); { sqrt(2); } Surely this produces some error: "Ambiguous call to overloaded function", and then there is no implicit cast rule to talk about... end of discussion? But you speak of "eliminating implicit casting" as if this may also refer to: void NotOverloaded(float x); { NotOverloaded(2); // not ambiguous... so what's the problem? } or: float x = 10; Which I can imagine why most would feel this is undesirable... I'm not clear now where you intend to draw the lines. If you're advocating banning ALL implicit casting between float/int outright, I actually feel really good about that idea. I can just imagine the number of hours saved while optimising where junior/ignorant programmers cast back and fourth with no real regard to or awareness of what they're doing. Or are we only drawing the distinction for literals? I don't mind a compile error if I incorrectly state the literal a function expects. That sort of thing becomes second nature in no time.On 20 October 2011 11:02, Don <nospam nospam.com <mailto:nospam nospam.com>> wrote: On 20.10.2011 09:47, Manu wrote: Many architectures do not support real, and therefore it should never be used implicitly by the language. Precision problems aside, I would personally insist that implicit conversation from any sized int always be to float, not double, for performance reasons (the whole point of a compiled language trying to supersede C/C++). On almost all platforms, float and double are the same speed. This isn't true. Consider ARM, hard to say this isn't a vitally important architecture these days, and there are plenty of embedded architectures that don't support doubles at all, I would say it's a really bad idea to invent a systems programming language that excludes many architectures by its design... Atmel AVR is another important architecture.It doesn't exclude anything. What we're talking about as desirable behaviour, is exactly what C does. If you care about performance on ARM, you'll type sqrt(2.0f). Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me. I maintain that implicit conversion of integers of any length shouldalways target the same precision float, and that should be a compiler flag to specify the desired precision throughout the app (possibly defaulting to double).I can't believe that you'd ever write an app without that being an upfront decision. Casually flipping it with a compiler flag?? Remember that it affects very few things (as discussed below). If you choose 'float' you may lose some precision obviously, but youexpected that when you chose the options, and did the cast...Explicit casts are not affected in any way. Note that what we're discussing here is parameter passing of singlevalues; if it's part of an aggregate (array or struct), the issue doesn't arise. Are we? I thought we were discussing implicit conversion of ints to floats? This may be parameter passing, but also assignment I expect?There's no problem with assignment, it's never ambiguous. 
There seems to be some confusion about what the issue is. To reiterate: void foo(float x) {} void foo(double x) {} void bar(float x) {} void baz(double x) {} void main() { bar(2); // OK -- 2 becomes 2.0f baz(2); // OK -- 2 becomes 2.0 foo(2); // fails -- ambiguous. } My proposal was effectively: if it's ambiguous, choose double. That's all.
Oct 20 2011
On 20.10.2011 14:48, Manu wrote:On 20 October 2011 15:13, Don <nospam nospam.com <mailto:nospam nospam.com>> wrote: On 20.10.2011 13:12, Manu wrote: On 20 October 2011 11:02, Don <nospam nospam.com <mailto:nospam nospam.com> <mailto:nospam nospam.com <mailto:nospam nospam.com>>> wrote: On 20.10.2011 09:47, Manu wrote: Many architectures do not support real, and therefore it should never be used implicitly by the language. Precision problems aside, I would personally insist that implicit conversation from any sized int always be to float, not double, for performance reasons (the whole point of a compiled language trying to supersede C/C++). On almost all platforms, float and double are the same speed. This isn't true. Consider ARM, hard to say this isn't a vitally important architecture these days, and there are plenty of embedded architectures that don't support doubles at all, I would say it's a really bad idea to invent a systems programming language that excludes many architectures by its design... Atmel AVR is another important architecture. It doesn't exclude anything. What we're talking about as desirable behaviour, is exactly what C does. If you care about performance on ARM, you'll type sqrt(2.0f). Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me. I maintain that implicit conversion of integers of any length should always target the same precision float, and that should be a compiler flag to specify the desired precision throughout the app (possibly defaulting to double). I can't believe that you'd ever write an app without that being an upfront decision. Casually flipping it with a compiler flag?? Remember that it affects very few things (as discussed below). If you choose 'float' you may lose some precision obviously, but you expected that when you chose the options, and did the cast... Explicit casts are not affected in any way. Note that what we're discussing here is parameter passing of single values; if it's part of an aggregate (array or struct), the issue doesn't arise. Are we? I thought we were discussing implicit conversion of ints to floats? This may be parameter passing, but also assignment I expect? There's no problem with assignment, it's never ambiguous. There seems to be some confusion about what the issue is. To reiterate: void foo(float x) {} void foo(double x) {} void bar(float x) {} void baz(double x) {} void main() { bar(2); // OK -- 2 becomes 2.0f baz(2); // OK -- 2 becomes 2.0 foo(2); // fails -- ambiguous. } My proposal was effectively: if it's ambiguous, choose double. That's all. Yeah sorry, I think you're right, the discussion got slightly lost in the noise here... Just to clarify, where you advocate eliminating implicit casting, do you now refer to ALL implicit casting? Or just implicit casting to an ambiguous target? Let me reposition myself to suit what it would seem is actually being discussed... :) void sqrt(float x); void sqrt(double x); void sqrt(real x); { sqrt(2); } Surely this produces some error: "Ambiguous call to overloaded function", and then there is no implicit cast rule to talk about... end of discussion? But you speak of "eliminating implicit casting" as if this may also refer to: void NotOverloaded(float x); { NotOverloaded(2); // not ambiguous... so what's the problem? }Actually there is a problem there, I think. If someone later on adds NotOverload(double x), that call will suddenly stop compiling. That isn't just a theoretical problem. 
Currently log(2) will compile, but only because in std.math there is log(real), but not yet log(double) or log(float). So once we add those overloads, peoples code will break. I'd like to get to the situation where those overloads can be added without breaking peoples code. The draconian possibility is to disallow them in all cases: integer types never match floating point function parameters. The second possibility is to introduce a tie-breaker rule: when there's an ambiguity, choose double. And a third possibility is to only apply that tie-breaker rule to literals. And the fourth possibility is to keep the language as it is now, and allow code to break when overloads get added. The one I really, really don't want, is the situation we have now:or: float x = 10; Which I can imagine why most would feel this is undesirable... I'm not clear now where you intend to draw the lines. If you're advocating banning ALL implicit casting between float/int outright, I actually feel really good about that idea. I can just imagine the number of hours saved while optimising where junior/ignorant programmers cast back and fourth with no real regard to or awareness of what they're doing. Or are we only drawing the distinction for literals? I don't mind a compile error if I incorrectly state the literal a function expects. That sort of thing becomes second nature in no time.more generally popular.
Oct 20 2011
On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com> wrote:Actually there is a problem there, I think. If someone later on adds NotOverload(double x), that call will suddenly stop compiling. That isn't just a theoretical problem. Currently log(2) will compile, but only because in std.math there is log(real), but not yet log(double) or log(float). So once we add those overloads, peoples code will break.Should there be a concern over silently changing the code path? For instance, log(2) currently binds to log(real), but with the addition of log(double) will bind to that. I'm not saying I found any problems with this, but I'm wondering if it can possibly harm anything. I don't have enough experience with floating point types to come up with a use case that would be affected. -Steve
Oct 20 2011
On 20 October 2011 16:11, Don <nospam nospam.com> wrote:On 20.10.2011 14:48, Manu wrote:Hmmm. between integer and floating types (something that far too many programmers lack) never have agreement, and perhaps more importantly, it's not instinctively obvious which it will choose and why... you just have to know, and most people won't know, they'll just use it blind (see my point about finding+fixing code by junior/ignorant programmers) the culprit committing the crime, and less obvious to the culprit that they are committing a crime. solving that problem. In order of preference: 1, 4, 2, 3, 5 I could only support 2 if it chooses 'float', the highest performance version on all architectures AND actually available on all architectures; given this is meant to be a systems programming language, and supporting as many architectures as possible? If the argument is made to favour precision, then it should surely be real, not double... the counter argument is obviously that real is not supported on many architectures, true, but the same is true for double... so I think it's a hollow argument. double just seem like a weak compromise; more precise than float, and more likely to work on more architectures than real (but still not guaranteed), but it doesn't really satisfy either criteria neatly. The logic transposed: double is less precise than real, and potentially slower than float. Is the justification purely that it IS a compromise, since the choice at the end of the day is totally subjective :)On 20 October 2011 15:13, Don <nospam nospam.com <mailto:nospam nospam.com>> wrote: On 20.10.2011 13:12, Manu wrote: On 20 October 2011 11:02, Don <nospam nospam.com <mailto:nospam nospam.com> <mailto:nospam nospam.com <mailto:nospam nospam.com>>> wrote: On 20.10.2011 09:47, Manu wrote: Many architectures do not support real, and therefore it should never be used implicitly by the language. Precision problems aside, I would personally insist that implicit conversation from any sized int always be to float, not double, for performance reasons (the whole point of a compiled language trying to supersede C/C++). On almost all platforms, float and double are the same speed. This isn't true. Consider ARM, hard to say this isn't a vitally important architecture these days, and there are plenty of embedded architectures that don't support doubles at all, I would say it's a really bad idea to invent a systems programming language that excludes many architectures by its design... Atmel AVR is another important architecture. It doesn't exclude anything. What we're talking about as desirable behaviour, is exactly what C does. If you care about performance on ARM, you'll type sqrt(2.0f). Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me. I maintain that implicit conversion of integers of any length should always target the same precision float, and that should be a compiler flag to specify the desired precision throughout the app (possibly defaulting to double). I can't believe that you'd ever write an app without that being an upfront decision. Casually flipping it with a compiler flag?? Remember that it affects very few things (as discussed below). If you choose 'float' you may lose some precision obviously, but you expected that when you chose the options, and did the cast... Explicit casts are not affected in any way. 
Note that what we're discussing here is parameter passing of single values; if it's part of an aggregate (array or struct), the issue doesn't arise. Are we? I thought we were discussing implicit conversion of ints to floats? This may be parameter passing, but also assignment I expect? There's no problem with assignment, it's never ambiguous. There seems to be some confusion about what the issue is. To reiterate: void foo(float x) {} void foo(double x) {} void bar(float x) {} void baz(double x) {} void main() { bar(2); // OK -- 2 becomes 2.0f baz(2); // OK -- 2 becomes 2.0 foo(2); // fails -- ambiguous. } My proposal was effectively: if it's ambiguous, choose double. That's all. Yeah sorry, I think you're right, the discussion got slightly lost in the noise here... Just to clarify, where you advocate eliminating implicit casting, do you now refer to ALL implicit casting? Or just implicit casting to an ambiguous target? Let me reposition myself to suit what it would seem is actually being discussed... :) void sqrt(float x); void sqrt(double x); void sqrt(real x); { sqrt(2); } Surely this produces some error: "Ambiguous call to overloaded function", and then there is no implicit cast rule to talk about... end of discussion? But you speak of "eliminating implicit casting" as if this may also refer to: void NotOverloaded(float x); { NotOverloaded(2); // not ambiguous... so what's the problem? }Actually there is a problem there, I think. If someone later on adds NotOverload(double x), that call will suddenly stop compiling. That isn't just a theoretical problem. Currently log(2) will compile, but only because in std.math there is log(real), but not yet log(double) or log(float). So once we add those overloads, peoples code will break. I'd like to get to the situation where those overloads can be added without breaking peoples code. The draconian possibility is to disallow them in all cases: integer types never match floating point function parameters. The second possibility is to introduce a tie-breaker rule: when there's an ambiguity, choose double. And a third possibility is to only apply that tie-breaker rule to literals. And the fourth possibility is to keep the language as it is now, and allow code to break when overloads get added. The one I really, really don't want, is the situation we have now: or:float x = 10; Which I can imagine why most would feel this is undesirable... I'm not clear now where you intend to draw the lines. If you're advocating banning ALL implicit casting between float/int outright, I actually feel really good about that idea. I can just imagine the number of hours saved while optimising where junior/ignorant programmers cast back and fourth with no real regard to or awareness of what they're doing. Or are we only drawing the distinction for literals? I don't mind a compile error if I incorrectly state the literal a function expects. That sort of thing becomes second nature in no time.generally popular.
Oct 20 2011
On Thu, 20 Oct 2011 15:54:48 +0200, Manu <turkeyman gmail.com> wrote:I could only support 2 if it chooses 'float', the highest performance version on all architectures AND actually available on all architectures; given this is meant to be a systems programming language, and supporting as many architectures as possible?D specifically supports double (as a 64-bit float), regardless of the actual hardware. Also, the D way is to make the correct way simple, the fast way possible. This is clearly in favor of not using float, which *would* lead to precision loss. As for double vs real, a 32-bit int neatly fits in any double, making it good enough. short and byte should convert to float, and I'm not sure about long/ulong, since real may not have enough bits to accurately represent it. -- Simen
Oct 20 2011
On 20 October 2011 17:28, Simen Kjaeraas <simen.kjaras gmail.com> wrote:On Thu, 20 Oct 2011 15:54:48 +0200, Manu <turkeyman gmail.com> wrote: I could only support 2 if it chooses 'float', the highest performanceCorrect, on all architectures I'm aware of that don't have hardware double support, double is emulated, and that is EXTREMELY slow. I can't imagine any case where causing implicit (hidden) emulation of unsupported hardware should be considered 'correct', and therefore made easy. The reason I'm so concerned about this, is not for what I may or may not do in my code, I'm likely to be careful, but imagine some cool library that I want to make use of... some programmer has gone and written 'x = sqrt(2)' in this library somewhere; they don't require double precision, but it was implicitly used regardless of their intent. Now I can't use that library in my project. Any library that wasn't written with the intent of use in embedded systems in mind, that happens to omit all of 2 characters from their float literal, can no longer be used in my project. This makes me sad. I'd also like you to ask yourself realistically, of all the integers you've EVER cast to/from a float, how many have ever been a big/huge number? And if/when that occurred, what did you do with it? Was the precision important? Was it important enough to you to explicitly state the cast? The moment you use it in a mathematical operation you are likely throwing away a bunch of precision anyway, especially for the complex functions like sqrt/log/etc in question.version on all architectures AND actually available on all architectures; given this is meant to be a systems programming language, and supporting as many architectures as possible?D specifically supports double (as a 64-bit float), regardless of the actual hardware. Also, the D way is to make the correct way simple, the fast way possible. This is clearly in favor of not using float, which *would* lead to precision loss.
Oct 20 2011
On Thursday, October 20, 2011 21:52:32 Manu wrote:
> Correct: on all architectures I'm aware of that don't have hardware double support, double is emulated, and that is EXTREMELY slow. I can't imagine any case where causing implicit (hidden) emulation of unsupported hardware should be considered 'correct', and therefore made easy.

Correctness has _nothing_ to do with efficiency. It has to do with the result that you get. Losing precision means that your code is less correct.

> [snip]

When dealing with math functions like this, it doesn't really matter whether the number being passed in is a large one or not. It matters what you want for the return type. And the higher the precision, the more correct the result, so there are a lot of people who would want the result to be real, rather than float or double. It's when your concern is efficiency that you start worrying about whether a float would be better. And yes, efficiency matters, but if efficiency matters, then you can always tell it 2.0f instead of 2. Don's suggestion results in the code being more correct in the general case and yet still lets you easily make it more efficient if you want. That's very much the D way of doing things.

Personally, I'm very leery of making an int literal implicitly convert to a double when there's ambiguity (e.g. if the function could also take a float), because then the compiler is resolving ambiguity for you rather than letting you do it. It risks function hijacking (at least in the sense that you don't necessarily end up calling the function that you mean to; it's not an issue for sqrt, but it could matter a lot for a function that has different behavior for float and double). And that sort of thing is very much _not_ the D way. So, I'm all for integers implicitly converting to double so long as there's no ambiguity.
But in any case where there's ambiguity or a narrowing conversion, a cast should be required. - Jonathan M Davis
Oct 20 2011
I think you just brushed over my entire concern with respect to libraries, and very likely the standard library itself. I've also made what I consider to be reasonable counter-arguments to those points in earlier posts, so I won't repeat myself.

I think it's fairly safe to say, though, with respect to Don's question, that using a tie-breaker is extremely controversial. I can't see any way that could be unanimously considered a good idea. I stand by the call to ban implicit conversion between float/int. Some might consider that a minor annoyance, but it also has so many potential advantages and time savers down the line too.

On 20 October 2011 22:21, Jonathan M Davis <jmdavisProg gmx.com> wrote:
> [snip]
Oct 20 2011
On 20.10.2011, 22:09, Manu <turkeyman gmail.com> wrote:
> I stand by the call to ban implicit conversion between float/int. Some might consider that a minor annoyance, but it also has so many potential advantages and time savers down the line too.

I start to understand the problems with implicit conversions, and I think now that the size of the integer has no relation at all to the size of the float in a conversion. For a large integer you may lose precision immediately when it is stored to a float instead of a double, but when you run sqrt() on it you introduce some more error anyway. So I'd vote for no implicit conversion unless the situation is unambiguous and the precision is sufficient, like in an assignment or calling a method with no overload. At least code would not break silently if overloads for sqrt were provided later.
Oct 20 2011
On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com> wrote:
[snip]
> I'd like to get to the situation where those overloads can be added without breaking people's code. The draconian possibility is to disallow them in all cases: integer types never match floating point function parameters. The second possibility is to introduce a tie-breaker rule: when there's an ambiguity, choose double. And a third possibility is to only apply that tie-breaker rule to literals. And the fourth possibility is to keep the language as it is now, and allow code to break when overloads get added. The one I really, really don't want is the situation we have now.

I do CUDA/GPU programming, so I live in a world of floats and ints. So changing the rules does worry me, but mainly because most people don't use floats on a daily basis, which introduces bias into the discussion. Thinking it over, here are my suggestions, though I'm not sure if 2a or 2b would be best:

1) Integer literals and expressions should use range propagation to use the thinnest loss-less conversion. If no loss-less conversion exists, then an error is raised. Choosing double as a default is always the wrong choice for GPUs and most embedded systems.

2a) Lossy variable conversions are disallowed.

2b) Lossy variable conversions undergo bounds checking when asserts are turned on. The idea behind 2b) would be:

int i = 1;
float f = i;  // assert(true);
i = int.max;
f = i;        // assert(false);
Oct 20 2011
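The bounds check Robert sketches in 2b could be expressed as a hypothetical library helper rather than compiler magic (checkedToFloat is an invented name; the round trip is compared in long to avoid an out-of-range float-to-int cast):

float checkedToFloat(int i)
{
    float f = i;                // the possibly-lossy conversion
    assert(cast(long) f == i,   // round-trip check: exact for 1, fails for int.max
           "lossy int -> float conversion");
    return f;
}

unittest
{
    auto a = checkedToFloat(1);          // passes: 1 is exact in float
    // auto b = checkedToFloat(int.max); // would trip the assert
}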
On 21.10.2011 05:24, Robert Jacques wrote:
> On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam nospam.com> wrote:
> [snip]
> I do CUDA/GPU programming, so I live in a world of floats and ints. So changing the rules does worry me, but mainly because most people don't use floats on a daily basis, which introduces bias into the discussion.

Yeah, that's a valuable perspective. sqrt(2) is "I don't care what the precision is". What I get from you and Manu is: if you're working in a float world, you want float to be the tiebreaker. Otherwise, you want double (or possibly real!) to be the tiebreaker. And therefore, the [...]

> 1) Integer literals and expressions should use range propagation to use the thinnest loss-less conversion. If no loss-less conversion exists, then an error is raised. Choosing double as a default is always the wrong choice for GPUs and most embedded systems.
> 2a) Lossy variable conversions are disallowed.
> 2b) Lossy variable conversions undergo bounds checking when asserts are turned on.

The spec says: "Integer values cannot be implicitly converted to another type that cannot represent the integer bit pattern after integral promotion." Now although that was intended to only apply to integers, it reads as if it should apply to floating point as well.

> The idea behind 2b) would be:
> int i = 1;
> float f = i;  // assert(true);
> i = int.max;
> f = i;        // assert(false);

That would be catastrophically slow. I wonder how painful disallowing lossy conversions would be.
Oct 20 2011
On 21 October 2011 09:00, Don <nospam nospam.com> wrote:
> [snip]
> The spec says: "Integer values cannot be implicitly converted to another type that cannot represent the integer bit pattern after integral promotion." Now although that was intended to only apply to integers, it reads as if it should apply to floating point as well.
>
> That would be catastrophically slow. I wonder how painful disallowing lossy conversions would be.

1: Seems reasonable for literals; "Integer literals and expressions should use range propagation to use the thinnest loss-less conversion"... but can you clarify what you mean by 'expressions'? I assume we're talking strictly literal expressions?

2b: Does runtime bounds checking actually address the question, which of an ambiguous function to choose? If I read you correctly, 2b suggests bounds checking the implicit cast for data loss at runtime, but which to choose? float/double/real? We'd still be arguing that question even with this proposal taken into consideration... :/ Perhaps I missed something?

Naturally all this complexity assumes we go with the tie-breaker approach, which I'm becoming more and more convinced is a bad plan...
Oct 21 2011
On 21.10.2011 09:53, Manu wrote:
> 1: Seems reasonable for literals; "Integer literals and expressions should use range propagation to use the thinnest loss-less conversion"... but can you clarify what you mean by 'expressions'? I assume we're talking strictly literal expressions?

Any expression. Just as right now, long converts to int only if the long expression is guaranteed to fit into 32 bits. Of course, if it's a literal, this is very easy.

> 2b: Does runtime bounds checking actually address the question, which of an ambiguous function to choose? If I read you correctly, 2b suggests bounds checking the implicit cast for data loss at runtime, but which to choose? float/double/real? We'd still be arguing that question even with this proposal taken into consideration... :/

It's an independent issue.

> Perhaps I missed something? Naturally all this complexity assumes we go with the tie-breaker approach, which I'm becoming more and more convinced is a bad plan...

No, it doesn't. As I said, this is independent. Except that it does mean that some existing int->float conversions would be disallowed. E.g.

float foo(int x) { return x; }

wouldn't compile, because x might not fit into a float without loss of accuracy.
Oct 23 2011
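The integer rule Don cites above already exists in the language: value range propagation is per-expression, not per-type. A small illustration of the current behaviour, which proposal 1 would extend to floating-point targets:

void main()
{
    long a = 1000;
    int x = a & 0xFF; // compiles: the result's range provably fits in 32 bits
    // int y = a;     // error: cannot implicitly convert long to int
}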
Okay, so we're thinking of allowing implicit casting now ONLY if it can be guaranteed that the cast is lossless? I don't see how this addresses the original question though, which was how to resolve an ambiguous function selection. It selects the smallest one it will fit into? Does that mean it will always choose the double version if you pass an int32 without any extra info narrowing its possible bounds?

On 23 October 2011 22:36, Don <nospam nospam.com> wrote:
> [snip]
> No, it doesn't. As I said, this is independent. Except that it does mean that some existing int->float conversions would be disallowed. E.g. float foo(int x) { return x; } wouldn't compile, because x might not fit into a float without loss of accuracy.
Oct 23 2011
On 21 October 2011 10:53, Manu <turkeyman gmail.com> wrote:
> [snip]
> Naturally all this complexity assumes we go with the tie-breaker approach, which I'm becoming more and more convinced is a bad plan...

Then again, with regards to 1, the function chosen will depend on the magnitude of the int, perhaps a foreign constant, so you might not clearly be able to know which one is called... What if the ambiguous overloads don't actually perform identical functionality with just different precision? I don't like the idea of it being uncertain.

And one more thing to ponder: is the return type telling here? float x = sqrt(2); Obviously this may only work for these pure maths functions where the return type is matched to the args, but maybe it's an element worth considering. I.e., if the function parameter is ambiguous, check for disambiguation via the return type...? Sounds pretty nasty! :)
Oct 21 2011
On Fri, 21 Oct 2011 09:00:48 -0400, Manu <turkeyman gmail.com> wrote:
> 1: Seems reasonable for literals... but can you clarify what you mean by 'expressions'? I assume we're talking strictly literal expressions?

Consider sqrt(i % 10). No matter what i is, the range of i % 10 is 0-9. I was more thinking of whether plain old assignment would be allowed: float f = myshort; Of course, if we deny implicit conversion, shouldn't the following fail to compile? float position = index * resolution;

> 2b: Does runtime bounds checking actually address the question, which of an ambiguous function to choose? [...] Perhaps I missed something?

Yes, but only because I didn't include it. I was thinking of float f = i; as opposed to func(i) for some reason. Bounds checking would only make sense if func(float) was the only overload.

> What if the ambiguous overloads don't actually perform identical functionality with just different precision?

Then whoever wrote the library was Evil(tm). Given that these rules wouldn't interfere with function hijacking, I'm not sure of the practicality of this concern. Do you have an example?
Oct 21 2011
It would still allow function hijacking. void func(double v); exists... func(2); then someone comes along and adds func(float v)... It will now hijack the call. That's what you mean, right?

On Oct 22, 2011 1:45 AM, "Robert Jacques" <sandford jhu.edu> wrote:
> [snip]
> Then whoever wrote the library was Evil(tm). Given that these rules wouldn't interfere with function hijacking, I'm not sure of the practicality of this concern. Do you have an example?
Oct 21 2011
On Fri, 21 Oct 2011 19:04:43 -0400, Manu <turkeyman gmail.com> wrote:
> It would still allow function hijacking. void func(double v); exists... func(2); then someone comes along and adds func(float v)... It will now hijack the call. That's what you mean, right?

Hijacking is what happens when someone adds func(float v) _in another module_. And that hijack would/should still be detected, etc. like any other hijack.
Oct 21 2011
Sure, and hijacking is bound to happen under your proposal, no? How would it be detected?

On 22 October 2011 06:51, Robert Jacques <sandford jhu.edu> wrote:
> Hijacking is what happens when someone adds func(float v) _in another module_. And that hijack would/should still be detected, etc. like any other hijack.
Oct 22 2011
On Sat, 22 Oct 2011 05:42:10 -0400, Manu <turkeyman gmail.com> wrote:
> Sure, and hijacking is bound to happen under your proposal, no? How would it be detected?

Manu, I'm not sure you understand how function hijack detection works today. Let us say you have three modules which all define a func method:

module a;
float func(float v) { return v; }

module b;
double func(double v) { return v; }

module c;
int func(int v) { return v*v; }

Now, if you write

import a;
import b;
void main(string[] args) {
    assert(func(1.0f) == 1.0f); // Error
}

you'll get a function hijacking error, because func(1.0f) matches both func(float) and func(double). However, if you instead write

import a;
import c;
void main(string[] args) {
    assert(func(1.0f) == 1.0f); // OK
}

you won't get an error, because func(1.0f) doesn't match func(int). In short, the best overload is only selected _after_ the module name has been resolved. The proposal of myself and others only affects which overload is the best match; it has no possible effect on function hijacking.
Oct 22 2011
On 10/20/2011 8:13 AM, Don wrote:
> There's no problem with assignment; it's never ambiguous. There seems to be some confusion about what the issue is. To reiterate:
>
> void foo(float x) {}
> void foo(double x) {}
> void bar(float x) {}
> void baz(double x) {}
> void main() {
>     bar(2); // OK -- 2 becomes 2.0f
>     baz(2); // OK -- 2 becomes 2.0
>     foo(2); // fails -- ambiguous
> }
>
> My proposal was effectively: if it's ambiguous, choose double. That's all.

This proposal seems like a no-brainer to me. I sincerely apologize for not supporting it before. I looked at when it was created, and I realized that I was with my family for Christmas/New Years at the time, with little time to spend on the D newsgroup.
Oct 20 2011
I vote for "Error: Ambiguous call to overloaded function". NOT implicit conversion to arbitrary type 'double' :)

On 20 October 2011 15:49, dsimcha <dsimcha yahoo.com> wrote:
> [snip]
> This proposal seems like a no-brainer to me. I sincerely apologize for not supporting it before.
Oct 20 2011
On 10/20/2011 8:13 AM, Don wrote:
> Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me.

vote++
Oct 20 2011
== Quote from Eric Poggel (JoeCoder) (dnewsgroup2 yage3d.net)'s article:
> On 10/20/2011 8:13 AM, Don wrote:
>> Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me.
> vote++

I would fork the language over this because it would break too much existing code. You can't be serious.
Oct 20 2011
On 10/20/2011 1:37 PM, dsimcha wrote:
> I would fork the language over this because it would break too much existing code. You can't be serious.

Not saying it should be immediate. Maybe D3.
Oct 20 2011
On Thursday, October 20, 2011 05:13 Don wrote:
> Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me.

If it's a narrowing conversion, it should require a cast. If it's not, and there's no ambiguity in the conversion, then I don't see any problem with allowing the conversion to be implicit. But then again, I deal with floating point values relatively rarely, so maybe there's something that I'm missing.

> My proposal was effectively: if it's ambiguous, choose double. That's all.

Are there _any_ cases in D right now where the compiler doesn't error out on ambiguity? In all of the cases that I can think of, D chooses to give an error on ambiguity rather than making a choice for you. I'm all for an int literal being implicitly converted to a double if the function call is unambiguous and there's no loss of precision. But if there's any ambiguity, then it's definitely against the D way to have the compiler pick for you.

- Jonathan M Davis
Oct 20 2011
On 20.10.2011 19:28, Jonathan M Davis wrote:
> If it's a narrowing conversion, it should require a cast. If it's not, and there's no ambiguity in the conversion, then I don't see any problem with allowing the conversion to be implicit.

The problem is, the existing approach will break a lot of existing code. For example, std.math.log(2) currently compiles. But, once the overload log(double) is added, which *must* happen, that code will break. Note that there is no realistic deprecation option, either. When the overload is added, code will break immediately. If we continue with this approach, we have to accept that EVERY TIME we add a floating point overload, existing code will break.

So, we either accept that; or we make everything that will ever break, break now (accepting that some stuff _will_ break that would never have broken); or we introduce a tie-breaker rule. The question we face is really: which is the lesser evil?

> But if there's any ambiguity, then it's definitely against the D way to have the compiler pick for you.

Explain why this compiles:

void foo(ubyte x) {}
void foo(short x) {}
void foo(ushort x) {}
void foo(int x) {}
void foo(uint x) {}
void foo(long x) {}
void foo(ulong x) {}

void main() {
    byte b = -1;
    foo(b); // How ambiguous can you get?????
}
Oct 20 2011
On Thursday, October 20, 2011 21:44:05 Don wrote:
> [snip]
> Explain why this compiles: [the foo(ubyte)/.../foo(ulong) example above]

I wouldn't have expected that to compile. If we're already doing ambiguous implicit casts like this, then implicitly casting an int to a double isn't really going to make this much worse. On the bright side, it's almost certainly bad practice to have a function which takes a float and a double do something drastically different, so the ambiguity isn't likely to cause problems. But since D usually doesn't compile with ambiguities (particularly with classes), I'm surprised that it's as lax as it is with integral values.

- Jonathan M Davis
Oct 20 2011
On 20-10-2011 14:13, in the exchange between Manu and Don:

Manu wrote:
> Many architectures do not support real, and therefore it should never be used implicitly by the language. Precision problems aside, I would personally insist that implicit conversion from any sized int always be to float, not double, for performance reasons (the whole point of a compiled language trying to supersede C/C++).

Don:
> On almost all platforms, float and double are the same speed.

Manu:
> This isn't true. Consider ARM (hard to say this isn't a vitally important architecture these days), and there are plenty of embedded architectures that don't support doubles at all. I would say it's a really bad idea to invent a systems programming language that excludes many architectures by its design... Atmel AVR is another important architecture.

Don:
> It doesn't exclude anything. What we're talking about as desirable behaviour is exactly what C does. If you care about performance on ARM, you'll type sqrt(2.0f). Personally, I'd rather completely eliminate implicit conversions between integers and floating point types. But that's just me.

Manu:
> I maintain that implicit conversion of integers of any length should always target the same precision float, and that there should be a compiler flag to specify the desired precision throughout the app (possibly defaulting to double).

Don:
> I can't believe that you'd ever write an app without that being an upfront decision. Casually flipping it with a compiler flag?? Remember that it affects very few things (as discussed below).

Manu:
> If you choose 'float' you may lose some precision obviously, but you expected that when you chose the options, and did the cast...

Don:
> Explicit casts are not affected in any way. [...] My proposal was effectively: if it's ambiguous, choose double. That's all.

+1.
Oct 20 2011
On 20.10.2011 05:01, Steven Schveighoffer wrote:
> On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu> wrote:
>> 5_000_000_000 is a long, and longs convert to reals. Also, 5_000_000_000 does not fit exactly inside a double.
> It doesn't? I thought double could do 53 bits? Although I agree, long should map to real, because obviously not all longs fit into a double exactly. -Steve

But ulong.max does NOT fit into an 80-bit real. And long won't fit into real on anything other than x86, 68K, and Itanium. I don't think long and ulong should ever implicitly convert to floating point types. Note that you can just do *1.0 or *1.0L if you want to convert them.

Currently long implicitly converts even to float. This seems quite bad; it loses 60% of its bits!! Suppose we also banned implicit conversions int->float and uint->float (since float only has 24 bits, these are lossy conversions, losing 25% of the bits). Now that we've disallowed lossy integral conversions, it really seems that we should disallow these ones as well.

If that was all we did, it would also mean that things like short+short wouldn't convert to float either, because C converts everything to int whenever it gets an opportunity. But we could use range checking to restore this (and to allow small longs to fit into doubles): allow conversion to double if it's <= 53 bits, allow conversion to float if <= 24 bits.
Oct 20 2011
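The 53-bit and 24-bit thresholds Don gives are the significand widths of double and float. A sketch of the fit test as a compile-time predicate (fitsExactly is an invented helper, not part of any proposal):

import std.traits : isSigned;

// True if every value of integral type I is exactly representable in
// floating type F. F.mant_dig is the significand width including the
// implicit bit: 24 for float, 53 for double, 64 for x87 real.
template fitsExactly(I, F)
{
    enum fitsExactly = (I.sizeof * 8 - (isSigned!I ? 1 : 0)) <= F.mant_dig;
}

static assert( fitsExactly!(int,   double)); // 31 value bits <= 53
static assert(!fitsExactly!(int,   float));  // 31 > 24: lossy
static assert(!fitsExactly!(long,  double)); // 63 > 53: lossy
static assert( fitsExactly!(short, float));  // 15 <= 24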
On Thu, 20 Oct 2011 03:55:51 -0400, Don <nospam nospam.com> wrote:
> [snip]
> But we could use range checking to restore this (and to allow small longs to fit into doubles): allow conversion to double if it's <= 53 bits, allow conversion to float if <= 24 bits.

Would you disagree, though, that if a literal can be accurately represented as a real or double, it should be allowed?

-Steve
Oct 20 2011
On Thu, 20 Oct 2011 09:55:51 +0200, Don <nospam nospam.com> wrote:
> [snip]

I'd really like to see all conversions based on value range propagation instead of the strange C rules.

Thankfully this discussion reminded me of an ugly header file bug: http://d.puremagic.com/issues/show_bug.cgi?id=6833

martin
Oct 20 2011
On Wed, 19 Oct 2011 22:57:48 -0400, Robert Jacques <sandford jhu.edu> wrote:
> Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.

Oops. That should be "5_000_000_000 is a long", not "is a 5_000_000_000 long".
Oct 19 2011
Robert Jacques:
> Simple, is a 5_000_000_000 long, and longs convert to reals. Also, 5_000_000_000 does not fit, exactly inside a double.

There is nothing "simple" here...

Bye,
bearophile
Oct 19 2011
On 20.10.2011 05:25, bearophile wrote:
> There is nothing "simple" here...

Yeah, but the problem isn't with ints, it's with integer literals, where there is no problem determining if it implicitly converts or not.
Oct 19 2011
On 10/19/2011 10:57 PM, Robert Jacques wrote:
> Also, 5_000_000_000 does not fit, exactly inside a double.

Yes it does. Doubles can hold integers exactly up to 2 ^^ 53.

http://en.wikipedia.org/wiki/Double_precision_floating-point_format
Oct 19 2011
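The 2 ^^ 53 boundary is easy to verify empirically; a small test, with values chosen to sit exactly at the edge:

void main()
{
    long n = 1L << 53;
    assert(cast(double) n == n);       // 2^53 is exactly representable
    assert(cast(double)(n + 1) == n);  // 2^53 + 1 rounds back down to 2^53
    assert(cast(double) 5_000_000_000L == 5_000_000_000L); // well under 2^53
}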
On 10/20/2011 05:34 AM, dsimcha wrote:
> Yes it does. Doubles can hold integers exactly up to 2 ^^ 53.

5_000_000_000 even fits exactly into an IEEE 754 32-bit _float_ (it's 9765625 * 2 ^^ 9, and 9765625 needs only 24 significand bits).
Oct 20 2011
On 19.10.2011 20:12, dsimcha wrote:
> == Quote from Don (nospam nospam.com)'s article:
>> In D2 prior to 2.048, sqrt(2) does not compile. The reason is that it's ambiguous whether it is sqrt(2.0), sqrt(2.0L), or sqrt(2.0f). This also applies to _any_ function which has overloads for more than one floating point type. In D2 between versions 2.049 and the present, sqrt(2) compiles due to the request of a small number of people (2-3, I think). But still, no other floating point function works with integer literals. The "bug" being fixed was Bugzilla 4455: "Taking the sqrt of an integer shouldn't require an explicit cast." This compiles only due to an awful, undocumented hack in std.math. It doesn't work for any function other than sqrt. I protested strongly against this, but accepted it only on the proviso that we would fix integer literal conversions to floating point in _all_ cases, so that the hack could be removed. However, when I proposed the fix on the newsgroup (integer literals should have a preferred conversion to double), it was *unanimously* rejected. Those who had argued for the hack were conspicuously absent. The hack must go.
>
> No. Something as simple as sqrt(2) must work at all costs, period. A language that adds a bunch of silly complications to something this simple is fundamentally broken. I don't remember your post on implicit preferred conversions, but IMHO implicit conversions of integer to double is a no-brainer. Requiring something this simple to be explicit is Java/Pascal-like overkill on explicitness.

Where the hell were you when I made that proposal before? Frankly, I'm pissed off that you guys bullied me into putting an *awful* temporary hack into std.math, and then gave me no support when the idea got shouted down on the ng.

The bottom line: the hack MUST go. Either we fix this properly, as I suggested, or else it must not compile.
Oct 19 2011