digitalmars.D - Deprecate implicit `int` to `bool` conversion for integer literals
- Michael V. Franklin (7/7) Nov 11 2017 What's the official word on this:
- Jonathan M Davis (17/22) Nov 11 2017 It probably needs a DIP, since it's a language change, and based on what
- Dmitry Olshansky (8/28) Nov 12 2017 Yeah, this is bad.
- Michael V. Franklin (6/11) Nov 12 2017 I don't think the proposal to deprecate integer literal
- Michael V. Franklin (4/7) Nov 12 2017 Nevermind. I see what you mean now.
- Andrei Alexandrescu (15/24) Nov 12 2017 A DIP could be formulated to only address the problem at hand. BTW,
- Michael V. Franklin (17/31) Nov 12 2017 As I understand it, the case above can be solved by changing the
- Nick Treleaven (7/11) Nov 14 2017 A very similar problem exists for int and char overloads:
- Andrei Alexandrescu (2/15) Nov 14 2017 Thanks. Addressing this should be part of the DIP as well. -- Andrei
- Michael V. Franklin (9/17) Nov 14 2017 It doesn't appear to be related to the implicit conversion of
- Michael V. Franklin (7/21) Nov 14 2017 Well, of course it's not related; it's a char not a bool. But
- Nick Treleaven (4/5) Nov 14 2017 Sure:
- Steven Schveighoffer (5/21) Nov 14 2017 IMO, no character types should implicitly convert from integer types. In...
- Michael V. Franklin (5/9) Nov 14 2017 Is everyone in general agreement on this? Can anyone think of a
- H. S. Teoh (22/30) Nov 14 2017 [...]
- Steven Schveighoffer (19/50) Nov 14 2017 I couldn't believe that this is the case so I tested it:
- H. S. Teoh (19/53) Nov 14 2017 [...]
- Michael V. Franklin (4/19) Nov 14 2017 If you haven't already, can you please submit this to bugzilla.
- Walter Bright (16/28) Nov 14 2017 I just tried:
- Michael V. Franklin (4/18) Nov 14 2017 The code posted was incorrect. See
- Andrei Alexandrescu (3/11) Nov 14 2017 No, that would be too large a change of the rules. FWIW 'a' has type
- Steven Schveighoffer (7/21) Nov 14 2017 All it means is that when VRP allows it, you still have to cast. It's
- Steven Schveighoffer (6/14) Nov 14 2017 I would think this is another DIP from the one you are looking at, as it...
- Michael V. Franklin (8/13) Nov 14 2017 Wait a minute! This doesn't appear to be a casting or overload
- Steven Schveighoffer (11/27) Nov 14 2017 In fact, I'm surprised you can alias to an expression like that. Usually...
- Michael V. Franklin (10/23) Nov 14 2017 Boy did I "step in it" with my original post: Started out with
- Steven Schveighoffer (10/23) Nov 15 2017 I don't think we can prevent the aliasing in the first place, because if...
- Jonathan M Davis (23/45) Nov 15 2017 wrote:
- Steven Schveighoffer (20/70) Nov 15 2017 alias statements and alias parameters have differences, so I don't know
- Andrea Fontana (3/5) Nov 15 2017 What?
- Steven Schveighoffer (5/10) Nov 15 2017 Yep. Would never have tried that in a million years before seeing this
- Petar Kirov [ZombineDev] (13/24) Nov 16 2017 I guess you guys haven't been keeping up with language changes :P
- Timon Gehr (6/10) Nov 18 2017 There is essentially no merit to the symbol/no symbol distinction. It's
- Timon Gehr (2/5) Nov 18 2017 No. It should just overload properly.
- Timon Gehr (19/32) Nov 18 2017 Yes.
- Walter Bright (2/9) Nov 14 2017 I cannot reproduce this error.
- Michael V. Franklin (4/13) Nov 14 2017 Try it here: https://run.dlang.io/is/nfMGfG
- Andrei Alexandrescu (3/17) Nov 15 2017 Cool, thanks. That seems to be an unrelated bug. Have you added it to
- Michael V. Franklin (5/9) Nov 15 2017 Bugzilla Issue is here:
- Andrei Alexandrescu (2/13) Nov 16 2017 Gracias! -- Andrei
- Ola Fosheim Grøstad (4/8) Nov 12 2017 Just change the typing of the if-conditional to:
- Temtaime (4/15) Nov 12 2017 There's no force change.
- Ola Fosheim Grøstad (2/4) Nov 12 2017 Yes, but that is a flaw IMO. E.g. NaN will convert to true.
- Dmitry Olshansky (8/19) Nov 12 2017 Rather I recall that:
- Jonathan M Davis (21/43) Nov 12 2017 Yes. In conditional expressions, you get an implicitly inserted cast to
- Andrei Alexandrescu (7/17) Nov 11 2017 Hi Mike, thanks for your inquiry. A DIP is necessary for all language
- Michael V. Franklin (35/38) Nov 13 2017 Subject issues:
- Steven Schveighoffer (7/48) Nov 14 2017 My vote would be for 1. It's disruptive, but not that disruptive.
- Michael V. Franklin (6/11) Nov 15 2017 Thanks for chiming in. `bool[] boolValues =
What's the official word on this: https://github.com/dlang/dmd/pull/6404 Does it need a DIP? If I revive it will it go anywhere? What needs to be done to move it forward? Thanks, Mike
Nov 11 2017
On Saturday, November 11, 2017 13:40:23 Michael V. Franklin via Digitalmars-d wrote:What's the official word on this: https://github.com/dlang/dmd/pull/6404 Does it need a DIP? If I revive it will it go anywhere? What needs to be done to move it forward?It probably needs a DIP, since it's a language change, and based on what Walter has said in the past about this topic, I don't know how convincible he is. I think that most everyone else thought that it was terrible when code like this auto foo(bool) {...} auto foo(long) {...} foo(1); ends up with the bool overload being called, but Walter's answer was just to add an int overload if you didn't want 1 to call the bool overload. He may be more amenable to deprecating the implicit conversion now than he was then, but it's the sort of thing where I would expect there to have to be a DIP rather than it simply being done in a PR, since it's a definite semantic change and not one that Walter previously agreed should be made. I have no idea what Andrei's opinion on the topic is. - Jonathan M Davis
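A minimal sketch of the workaround described above (adding an int overload so the literal no longer lands on the bool one); the commented behavior assumes DMD's current value-range-propagation-based matching:

```d
import std.stdio;

void foo(bool) { writeln("bool"); }
void foo(long) { writeln("long"); }
void foo(int)  { writeln("int"); } // the suggested extra overload

void main()
{
    foo(1); // exact match on int; without the int overload, 1 fits in
            // bool via value range propagation, so foo(bool) would win
    foo(2); // int; 2 never fits in bool, so even without the int
            // overload this would have called foo(long)
}
```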
Nov 11 2017
On Saturday, 11 November 2017 at 14:54:42 UTC, Jonathan M Davis wrote:On Saturday, November 11, 2017 13:40:23 Michael V. Franklin via Digitalmars-d wrote:Yeah, this is bad. However, I’d hate to rewrite things like: if (a & (flag1 | flag2)) to if ((a & (flag1 | flag2)) != 0) When the first is quite obvious.What's the official word on this: https://github.com/dlang/dmd/pull/6404 Does it need a DIP? If I revive it will it go anywhere? What needs to be done to move it forward?It probably needs a DIP, since it's a language change, and based on what Walter has said in the past about this topic, I don't know how convincible he is. I think that most everyone else thought that it was terrible when code like this auto foo(bool) {...} auto foo(long) {...} foo(1); ends up with the bool overload being called, but Walter's answer was just to add an int overload if you didn't want 1 to call the bool overload.
Nov 12 2017
On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky wrote:However, I’d hate to rewrite things like: if (a & (flag1 | flag2)) to if ((a & (flag1 | flag2)) != 0) When the first is quite obvious.I don't think the proposal to deprecate integer literal conversions to `bool` would affect that as there doesn't appear to be an integer literal in the code. Mike
Nov 12 2017
On Sunday, 12 November 2017 at 13:49:51 UTC, Michael V. Franklin wrote:I don't think the proposal to deprecate integer literal conversions to `bool` would affect that as there doesn't appear to be an integer literal in the code.Nevermind. I see what you mean now. Mike
Nov 12 2017
On 11/12/2017 08:54 AM, Michael V. Franklin wrote:On Sunday, 12 November 2017 at 13:49:51 UTC, Michael V. Franklin wrote:A DIP could be formulated to only address the problem at hand. BTW, here's a really fun example: void fun(long) { assert(0); } void fun(bool) {} enum int a = 2; enum int b = 1; void main() { fun(a - b); } The overload being called depends on (a) whether a - b can be computed during compilation or not, and (b) the actual value of a - b. Clearly a big problem for modular code. This is the smoking gun motivating the DIP. AndreiI don't think the proposal to deprecate integer literal conversions to `bool` would affect that as there doesn't appear to be an integer literal in the code.Nevermind. I see what you mean now. Mike
Nov 12 2017
On Sunday, 12 November 2017 at 16:57:05 UTC, Andrei Alexandrescu wrote:A DIP could be formulated to only address the problem at hand. BTW, here's a really fun example: void fun(long) { assert(0); } void fun(bool) {} enum int a = 2; enum int b = 1; void main() { fun(a - b); } The overload being called depends on (a) whether a - b can be computed during compilation or not, and (b) the actual value of a - b. Clearly a big problem for modular code. This is the smoking gun motivating the DIP.As I understand it, the case above can be solved by changing the overload resolution rules without deprecating the implicit conversion to bool. A PR for such a fix was submitted here https://github.com/dlang/dmd/pull/1942. I fear a proposal to deprecate the implicit conversion to bool based solely on the example above could be refused in favor of overload resolution changes. IMO, the example above, while certainly a smoking gun, is actually just a symptom of the deeper problem, so I tried to make that case in the DIP. The DIP has been submitted here https://github.com/dlang/DIPs/pull/99 Perhaps I'm not the right person to be formulating these arguments, but given that the issue has been in Bugzilla for 4 years, I'm probably all you've got. Sorry. Mike
Nov 12 2017
On Sunday, 12 November 2017 at 16:57:05 UTC, Andrei Alexandrescu wrote:The overload being called depends on (a) whether a - b can be computed during compilation or not, and (b) the actual value of a - b. Clearly a big problem for modular code. This is the smoking gun motivating the DIP.A very similar problem exists for int and char overloads: alias foo = (char c) => 1; alias foo = (int i) => 4; enum int e = 7; static assert(foo(e) == 4); // fails
Nov 14 2017
On 11/14/17 8:20 AM, Nick Treleaven wrote:On Sunday, 12 November 2017 at 16:57:05 UTC, Andrei Alexandrescu wrote:Thanks. Addressing this should be part of the DIP as well. -- AndreiThe overload being called depends on (a) whether a - b can be computed during compilation or not, and (b) the actual value of a - b. Clearly a big problem for modular code. This is the smoking gun motivating the DIP.An very similar problem exists for int and char overloads: alias foo = (char c) => 1; alias foo = (int i) => 4; enum int e = 7; static assert(foo(e) == 4); // fails
Nov 14 2017
On Tuesday, 14 November 2017 at 13:32:52 UTC, Andrei Alexandrescu wrote:It doesn't appear to be related to the implicit conversion of integer literals to bool. While Andrei's example is fixed by deprecating implicit conversion of integral literals to bool (at least using this implementation: https://github.com/dlang/dmd/pull/7310), Nick's example isn't. Nick, if it's not in bugzilla already, can you please add it? MikeA very similar problem exists for int and char overloads: alias foo = (char c) => 1; alias foo = (int i) => 4; enum int e = 7; static assert(foo(e) == 4); // failsThanks. Addressing this should be part of the DIP as well. --
Nov 14 2017
On Tuesday, 14 November 2017 at 13:43:32 UTC, Michael V. Franklin wrote:Well, of course it's not related; it's a char not a bool. But there do seem to be some systematic problems in D's implicit conversion rules. I'll have to investigate this and perhaps I can address them both in one DIP. MikeIt doesn't appear to be related to the implicit conversion of integer literals to bool. While Andrei's example is fixed by deprecating implicit conversion of integral literals to bool (at least using this implementation: https://github.com/dlang/dmd/pull/7310), Nick's example isn't.A very similar problem exists for int and char overloads: alias foo = (char c) => 1; alias foo = (int i) => 4; enum int e = 7; static assert(foo(e) == 4); // failsThanks. Addressing this should be part of the DIP as well. --
Nov 14 2017
On Tuesday, 14 November 2017 at 13:43:32 UTC, Michael V. Franklin wrote:Nick, if it's not in bugzilla already, can you please add it?Sure: https://issues.dlang.org/show_bug.cgi?id=17983
Nov 14 2017
On 11/14/17 8:32 AM, Andrei Alexandrescu wrote:On 11/14/17 8:20 AM, Nick Treleaven wrote:IMO, no character types should implicitly convert from integer types. In fact, character types shouldn't convert from ANYTHING (even other character types). We have so many problems with this. -SteveOn Sunday, 12 November 2017 at 16:57:05 UTC, Andrei Alexandrescu wrote:Thanks. Addressing this should be part of the DIP as well. -- AndreiThe overload being called depends on (a) whether a - b can be computed during compilation or not, and (b) the actual value of a - b. Clearly a big problem for modular code. This is the smoking gun motivating the DIP.An very similar problem exists for int and char overloads: alias foo = (char c) => 1; alias foo = (int i) => 4; enum int e = 7; static assert(foo(e) == 4); // fails
Nov 14 2017
On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:IMO, no character types should implicitly convert from integer types. In fact, character types shouldn't convert from ANYTHING (even other character types). We have so many problems with this.Is everyone in general agreement on this? Can anyone think of a compelling use case? Mike
Nov 14 2017
On Tue, Nov 14, 2017 at 11:05:51PM +0000, Michael V. Franklin via Digitalmars-d wrote:On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:[...] I am 100% for this change. I've been bitten before by things like this: void myfunc(char ch) { ... } void myfunc(int i) { ... } char c; int i; myfunc(c); // calls first overload myfunc('a'); // calls second overload (WAT) myfunc(i); // calls second overload myfunc(1); // calls second overload There is no compelling use case for implicitly converting char types to int. If you want to directly manipulate ASCII values / Unicode code point values, a direct cast is warranted (clearer code intent). Converting char to wchar (or dchar, or vice versa, etc.) implicitly is also fraught with peril: if the char happens to be an upper byte of a multibyte sequence, you *implicitly* get a garbage value. Not useful at all. Needing to write an explicit cast will remind you to think twice, which is a good thing. T -- Famous last words: I wonder what will happen if I do *this*...IMO, no character types should implicitly convert from integer types. In fact, character types shouldn't convert from ANYTHING (even other character types). We have so many problems with this.Is everyone in general agreement on this? Can anyone think of a compelling use case?
Nov 14 2017
On 11/14/17 6:09 PM, H. S. Teoh wrote:On Tue, Nov 14, 2017 at 11:05:51PM +0000, Michael V. Franklin via Digitalmars-d wrote:I couldn't believe that this is the case so I tested it: https://run.dlang.io/is/AHQYtA for those who don't want to look, it does indeed call the first overload for a character literal, so this is not a problem (maybe you were thinking of something else?)On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:[...] I am 100% for this change. I've been bitten before by things like this: void myfunc(char ch) { ... } void myfunc(int i) { ... } char c; int i; myfunc(c); // calls first overload myfunc('a'); // calls second overload (WAT) myfunc(i); // calls second overload myfunc(1); // calls second overloadIMO, no character types should implicitly convert from integer types. In fact, character types shouldn't convert from ANYTHING (even other character types). We have so many problems with this.Is everyone in general agreement on this? Can anyone think of a compelling use case?There is no compelling use case for implicitly converting char types to int. If you want to directly manipulate ASCII values / Unicode code point values, a direct cast is warranted (clearer code intent).I think you misunderstand the problem. It's fine for chars to promote to int, or even bools for that matter. It's the other way around that is problematic. To put it another way, if you make this require a cast, you will have some angry coders :) if(c >= '0' && c <= '9') value = c - '0';Converting char to wchar (or dchar, or vice versa, etc.) implicitly is also fraught with peril: if the char happens to be an upper byte of a multibyte sequence, you *implicitly* get a garbage value. Not useful at all. Needing to write an explicit cast will remind you to think twice, which is a good thing.Agree, these should require casts, since the resulting type is probably not what you want in all cases. Where this continually comes up is char ranges. 
Other than actual char[] arrays, the following code doesn't do the right thing at all: foreach(dchar d; charRange) If we made it require a cast, this would find such problems easily. -Steve
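A sketch of the char-range pitfall described above: over a string, foreach with a dchar loop variable decodes UTF-8, but over a generic range of char it merely converts each code unit one at a time (std.utf.byChar is used here just as a convenient way to obtain such a range):

```d
import std.stdio;
import std.utf : byChar;

void main()
{
    string s = "é"; // encoded as two UTF-8 code units: 0xC3 0xA9

    // Over a string, foreach (dchar ...) decodes: one code point, U+00E9.
    foreach (dchar c; s)
        writefln("%04x", c);

    // Over a range of char, each char is implicitly converted to dchar
    // with no decoding: two bogus "code points", U+00C3 and U+00A9.
    foreach (dchar c; s.byChar)
        writefln("%04x", c);
}
```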
Nov 14 2017
On Tue, Nov 14, 2017 at 06:53:43PM -0500, Steven Schveighoffer via Digitalmars-d wrote:On 11/14/17 6:09 PM, H. S. Teoh wrote:[...] Argh, should've checked before I posted. What I meant was more something like this: import std.stdio; void f(dchar) { writeln("dchar overload"); } void f(ubyte) { writeln("ubyte overload"); } void main() { f(1); f('a'); } Output: ubyte overload ubyte overload It "makes sense" from the POV of C/C++-compatible integer promotion rules, but in the context of D, it's just very WAT-worthy. T -- Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. -- Brian W. KernighanOn Tue, Nov 14, 2017 at 11:05:51PM +0000, Michael V. Franklin via Digitalmars-d wrote:I couldn't believe that this is the case so I tested it: https://run.dlang.io/is/AHQYtA for those who don't want to look, it does indeed call the first overload for a character literal, so this is not a problem (maybe you were thinking of something else?)On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:[...] I am 100% for this change. I've been bitten before by things like this: void myfunc(char ch) { ... } void myfunc(int i) { ... } char c; int i; myfunc(c); // calls first overload myfunc('a'); // calls second overload (WAT) myfunc(i); // calls second overload myfunc(1); // calls second overloadIMO, no character types should implicitly convert from integer types. In fact, character types shouldn't convert from ANYTHING (even other character types). We have so many problems with this.Is everyone in general agreement on this? Can anyone think of a compelling use case?
Nov 14 2017
On Tuesday, 14 November 2017 at 23:53:49 UTC, H. S. Teoh wrote:Argh, should've checked before I posted. What I meant was more something like this: import std.stdio; void f(dchar) { writeln("dchar overload"); } void f(ubyte) { writeln("ubyte overload"); } void main() { f(1); f('a'); } Output: ubyte overload ubyte overload It "makes sense" from the POV of C/C++-compatible integer promotion rules, but in the context of D, it's just very WAT-worthy.If you haven't already, can you please submit this to bugzilla. Thanks, Mike
Nov 14 2017
On 11/14/2017 3:09 PM, H. S. Teoh wrote:I've been bitten before by things like this: void myfunc(char ch) { ... } void myfunc(int i) { ... } char c; int i; myfunc(c); // calls first overload myfunc('a'); // calls second overload (WAT) myfunc(i); // calls second overload myfunc(1); // calls second overloadI just tried: import core.stdc.stdio; void foo(char c) { printf("char\n"); } void foo(int c) { printf("int\n"); } void main() { enum int e = 1; foo(e); foo(1); foo('c'); } and it prints: int int char I cannot reproduce your or Nick's error.
Nov 14 2017
On Wednesday, 15 November 2017 at 04:30:32 UTC, Walter Bright wrote:I just tried: import core.stdc.stdio; void foo(char c) { printf("char\n"); } void foo(int c) { printf("int\n"); } void main() { enum int e = 1; foo(e); foo(1); foo('c'); } and it prints: int int char The code posted was incorrect. See http://forum.dlang.org/post/mailman.154.1510704335.9493.digitalmars-d@puremagic.com
Nov 14 2017
On 11/14/2017 06:05 PM, Michael V. Franklin wrote:On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:No, that would be too large a change of the rules. FWIW 'a' has type dchar, not char. -- AndreiIMO, no character types should implicitly convert from integer types. In fact, character types shouldn't convert from ANYTHING (even other character types). We have so many problems with this.Is everyone in general agreement on this? Can anyone think of a compelling use case?
Nov 14 2017
On 11/14/17 6:48 PM, Andrei Alexandrescu wrote:On 11/14/2017 06:05 PM, Michael V. Franklin wrote:All it means is that when VRP allows it, you still have to cast. It's not that large a change actually, but I can see how it might be too disruptive to be worth it.On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:No, that would be too large a change of the rules.IMO, no character types should implicitly convert from integer types. In fact, character types shouldn't convert from ANYTHING (even other character types). We have so many problems with this.Is everyone in general agreement on this? Can anyone think of a compelling use case?FWIW 'a' has type dchar, not char. -- AndreiNo: pragma(msg, typeof('a')); // char -Steve
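For reference on the 'a'-is-char point above: per my reading of the spec, an undecorated character literal gets the narrowest character type that holds it, while \u and \U escapes are typed by their width. A quick check:

```d
pragma(msg, typeof('a'));          // char  -- plain ASCII literal
pragma(msg, typeof('\u00e9'));     // wchar -- \u escapes are 16-bit
pragma(msg, typeof('\U0001F34E')); // dchar -- \U escapes are 32-bit
```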
Nov 14 2017
On 11/14/17 6:05 PM, Michael V. Franklin wrote:On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:I would think this is another DIP from the one you are looking at, as it is more far-reaching than just overload problems. They are real problems, but this makes the DIP scope broader than it should be, and lessen the chance of acceptance. -SteveIMO, no character types should implicitly convert from integer types. In fact, character types shouldn't convert from ANYTHING (even other character types). We have so many problems with this.Is everyone in general agreement on this? Can anyone think of a compelling use case?
Nov 14 2017
On Tuesday, 14 November 2017 at 13:20:22 UTC, Nick Treleaven wrote:An very similar problem exists for int and char overloads: alias foo = (char c) => 1; alias foo = (int i) => 4; enum int e = 7; static assert(foo(e) == 4); // failsWait a minute! This doesn't appear to be a casting or overload problem. Can you really overload aliases in D? I would expect the compiler to throw an error as `foo` is being redefined. Or for `foo` to be replaced by the most recent assignment in lexical order. Am I missing something? Mike
Nov 14 2017
On 11/14/17 6:14 PM, Michael V. Franklin wrote:On Tuesday, 14 November 2017 at 13:20:22 UTC, Nick Treleaven wrote:In fact, I'm surprised you can alias to an expression like that. Usually you need a symbol. It's probably due to how this is lowered. Indeed, this is a completely different problem: enum int e = 500; static assert(foo(e) == 4); // fails to compile (can't call char with 500) If you define foo as an actual overloaded function set, it works as expected.An very similar problem exists for int and char overloads: alias foo = (char c) => 1; alias foo = (int i) => 4; enum int e = 7; static assert(foo(e) == 4); // failsWait a minute! This doesn't appear to be a casting or overload problem. Can you really overload aliases in D?I would expect the compiler to throw an error as `foo` is being redefined. Or for `foo` to be replaced by the most recent assignment in lexical order. Am I missing something?In this case, the compiler simply *ignores* the newest definition. It should throw an error IMO. -Steve
Nov 14 2017
On Tuesday, 14 November 2017 at 23:41:39 UTC, Steven Schveighoffer wrote:Boy did I "step in it" with my original post: Started out with one issue and ended up with 3. I looked at what the compiler is doing, and it is generating a new symbol (e.g. `__lambda4`). I suspect this is not intended. My question now is, should the compiler actually be treating the lambda as an expression instead of a new symbol, thus disallowing it altogether? (sigh! more breakage)? MikeIn fact, I'm surprised you can alias to an expression like that. Usually you need a symbol. It's probably due to how this is lowered.A very similar problem exists for int and char overloads: alias foo = (char c) => 1; alias foo = (int i) => 4; enum int e = 7; static assert(foo(e) == 4); // failsWait a minute! This doesn't appear to be a casting or overload problem. Can you really overload aliases in D?
Nov 14 2017
On 11/14/17 8:56 PM, Michael V. Franklin wrote:On Tuesday, 14 November 2017 at 23:41:39 UTC, Steven Schveighoffer wrote:I don't think we can prevent the aliasing in the first place, because if this is possible, I guarantee people use it, and it looks quite handy actually. Much less verbose than templates: alias mul = (a, b) => a * b; vs. auto mul(A, B)(A a, B b) { return a * b; } However, it would be good to prevent the second alias which effectively does nothing. -SteveIn fact, I'm surprised you can alias to an expression like that. Usually you need a symbol. It's probably due to how this is lowered.Boy did I "step in it" with my original post: Started out with one issue and ended up with 3. I looked at what the compiler is doing, and it is generated a new symbol (e.g. `__lambda4`). I suspect this is not intended. My question now is, should the compiler actually be treating the lambda as an expression instead of a new symbol, thus disallowing it altogether? (sigh! more breakage)?
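A quick usage sketch of the alias-to-lambda shorthand mentioned above: the untyped lambda behaves like a template, instantiated per argument types:

```d
alias mul = (a, b) => a * b;

void main()
{
    assert(mul(3, 4) == 12);     // instantiated with (int, int)
    assert(mul(2.5, 4) == 10.0); // instantiated with (double, int)
    static assert(is(typeof(mul(1, 2)) == int));
}
```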
Nov 15 2017
On Wednesday, November 15, 2017 07:28:02 Steven Schveighoffer via Digitalmars-d wrote:On 11/14/17 8:56 PM, Michael V. Franklin wrote:On Tuesday, 14 November 2017 at 23:41:39 UTC, Steven Schveighoffer wrote:In general, alias aliases symbols, whereas a lambda isn't a symbol. They're essentially the rvalue equivalent of functions. So, in that sense, it's pretty weird that it works. However, we _do_ use it all the time with alias template parameters. So, regardless of what would make sense for other aliases, if we just made it so that alias in general didn't work with lambdas, we'd be in big trouble. It wouldn't surprise me if the fact that aliases like this work with lambdas is related to the fact that it works with alias template parameters, but I don't know. It may simply be that because of how the compiler generates lambdas, it ends up with a name for them (even if you don't see it), and it just naturally came out that those were aliasable.I don't think we can prevent the aliasing in the first place, because if this is possible, I guarantee people use it, and it looks quite handy actually. Much less verbose than templates: alias mul = (a, b) => a * b; vs. auto mul(A, B)(A a, B b) { return a * b; }In fact, I'm surprised you can alias to an expression like that. Usually you need a symbol. It's probably due to how this is lowered.Boy did I "step in it" with my original post: Started out with one issue and ended up with 3. I looked at what the compiler is doing, and it is generated a new symbol (e.g. `__lambda4`). I suspect this is not intended. My question now is, should the compiler actually be treating the lambda as an expression instead of a new symbol, thus disallowing it altogether? (sigh! more breakage)?However, it would be good to prevent the second alias which effectively does nothing.As far as I'm concerned, in principle, it's identical to declaring a variable with the same name in the same scope, and I'm very surprised that it works.
Interestingly, this code alias foo = int; alias foo = float; _does_ produce an error. So, it looks like it's a problem related to lambdas specifically. - Jonathan M Davis
Nov 15 2017
On 11/15/17 9:07 AM, Jonathan M Davis wrote:On Wednesday, November 15, 2017 07:28:02 Steven Schveighoffer via Digitalmars-d wrote:alias statements and alias parameters have differences, so I don't know if there is any real relation here. For example, int cannot bind to a template alias. Some really weird stuff is going on with aliasing and function overloads in general. If we change them from anonymous lambdas to actual functions: auto lambda1(char c) { return 1; } auto lambda2(int i) { return 4; } alias foo = lambda1; alias foo = lambda2; void main() { assert(foo('a') == 1); assert(foo(1) == 4); } Hey look, it all works! Even if lambda1 and lambda2 are turned into templates, it works. I seriously did not expect this to work. -SteveOn 11/14/17 8:56 PM, Michael V. Franklin wrote:wrote:On Tuesday, 14 November 2017 at 23:41:39 UTC, Steven SchveighofferIn general, alias aliases symbols, whereas a lambda isn't a symbol. They're essentially the rvalue equivalent of functions. So, in that sense, it's pretty weird that it works. However, we _do_ use it all the time with alias template parameters. So, regardless of what would make sense for other aliases, if we just made it so that alias in general didn't work with lambdas, we'd be in big trouble. It wouldn't surprise me if the fact that aliases like this works with lambdas is related to the fact that it works with alias template parameters, but I don't know. It may simply be that because of how the compiler generates lambdas, it ends up with a name for them (even if you don't see it), and it just naturally came out that those were aliasable.I don't think we can prevent the aliasing in the first place, because if this is possible, I guarantee people use it, and it looks quite handy actually. Much less verbose than templates: alias mul = (a, b) => a * b; vs. auto mul(A, B)(A a, B b) { return a * b; }In fact, I'm surprised you can alias to an expression like that. Usually you need a symbol. 
It's probably due to how this is lowered.Boy did I "step in it" with my original post: Started out with one issue and ended up with 3. I looked at what the compiler is doing, and it is generated a new symbol (e.g. `__lambda4`). I suspect this is not intended. My question now is, should the compiler actually be treating the lambda as an expression instead of a new symbol, thus disallowing it altogether? (sigh! more breakage)?However, it would be good to prevent the second alias which effectively does nothing.As far as I'm concerned, in principal, it's identical to declaring a variable with the same name in the same scope, and I'm very surprised that it works. Interestingly, this code alias foo = int; alias foo = float;
Nov 15 2017
On Wednesday, 15 November 2017 at 15:25:06 UTC, Steven Schveighoffer wrote:alias foo = lambda1; alias foo = lambda2;What?
Nov 15 2017
On 11/15/17 11:59 AM, Andrea Fontana wrote:On Wednesday, 15 November 2017 at 15:25:06 UTC, Steven Schveighoffer wrote:Yep. Would never have tried that in a million years before seeing this thread :) But it does work. Tested with dmd 2.076.1 and 2.066. So it's been there a while. -Stevealias foo = lambda1; alias foo = lambda2;What?
Nov 15 2017
On Wednesday, 15 November 2017 at 19:29:29 UTC, Steven Schveighoffer wrote:On 11/15/17 11:59 AM, Andrea Fontana wrote:I guess you guys haven't been keeping up with language changes :P https://dlang.org/changelog/2.070.0.html#alias-funclit And yes, you can use 'alias' to capture overload sets. See also: https://github.com/dlang/dmd/pull/1660/files https://github.com/dlang/dmd/pull/2125/files#diff-51d0a1ca6214e6a916212fcbf93d7e40 https://github.com/dlang/dmd/pull/2417/files https://github.com/dlang/dmd/pull/4826/files https://github.com/dlang/dmd/pull/5162/files https://github.com/dlang/dmd/pull/5202 https://github.com/dlang/phobos/pull/5818/filesOn Wednesday, 15 November 2017 at 15:25:06 UTC, Steven Schveighoffer wrote:Yep. Would never have tried that in a million years before seeing this thread :) But it does work. Tested with dmd 2.076.1 and 2.066. So it's been there a while. -Stevealias foo = lambda1; alias foo = lambda2;What?
Nov 16 2017
On Thursday, 16 November 2017 at 13:05:51 UTC, Petar Kirov [ZombineDev] wrote:On Wednesday, 15 November 2017 at 19:29:29 UTC, Steven Schveighoffer wrote:Yes, as far as I understand this is just the normal way that you add a symbol to an existing overload set, except now it also interacts with the functionality of using an alias to create a named function literal. Kind of interesting because I don't think it was possible to do this before, e.g.: int function(int) f1 = (int n) => n; int function(int) f2 = (char c) => c; Would obviously be rejected by the compiler. However, using the alias syntax we can create an overload set from function literals in addition to regular functions.On 11/15/17 11:59 AM, Andrea Fontana wrote:I guess you guys haven't been keeping up with language changes :P https://dlang.org/changelog/2.070.0.html#alias-funclit And yes, you can use 'alias' to capture overload sets. See also: https://github.com/dlang/dmd/pull/1660/files https://github.com/dlang/dmd/pull/2125/files#diff-51d0a1ca6214e6a916212fcbf93d7e40 https://github.com/dlang/dmd/pull/2417/files https://github.com/dlang/dmd/pull/4826/files https://github.com/dlang/dmd/pull/5162/files https://github.com/dlang/dmd/pull/5202 https://github.com/dlang/phobos/pull/5818/filesOn Wednesday, 15 November 2017 at 15:25:06 UTC, Steven Schveighoffer wrote:Yep. Would never have tried that in a million years before seeing this thread :) But it does work. Tested with dmd 2.076.1 and 2.066. So it's been there a while. -Stevealias foo = lambda1; alias foo = lambda2;What?
Nov 16 2017
On Thursday, 16 November 2017 at 16:10:50 UTC, Meta wrote:

int function(int) f1 = (int n) => n;
int function(int) f2 = (char c) => c;

Should be int function(char).
Nov 16 2017
On 15.11.2017 15:07, Jonathan M Davis wrote:

In general, alias aliases symbols, whereas a lambda isn't a symbol. ...

There is essentially no merit to the symbol/no-symbol distinction. It's just a DMD implementation detail resulting in weird inconsistencies between alias declarations and alias template parameters that are being fixed one by one.

... alias foo = int; alias foo = float;

Case in point. Neither int nor float is a symbol.
Nov 18 2017
On 15.11.2017 13:28, Steven Schveighoffer wrote:

However, it would be good to prevent the second alias, which effectively does nothing.

No. It should just overload properly.
Nov 18 2017
On Tuesday, 14 November 2017 at 13:20:22 UTC, Nick Treleaven wrote:

A very similar problem exists for int and char overloads:

alias foo = (char c) => 1;
alias foo = (int i) => 4;
enum int e = 7;
static assert(foo(e) == 4); // fails

On 15.11.2017 00:14, Michael V. Franklin wrote:

Wait a minute! This doesn't appear to be a casting or overload problem. Can you really overload aliases in D? ...

Yes.

auto foo(int x){ return x; }
auto bar(double x){ return x+1; }
alias qux=foo;
alias qux=bar;
void main(){
    import std.stdio;
    writeln(qux(1)," ",qux(1.0)); // 1 2
}

This is by design. The fact that the following does not work is just a (known) compiler bug:

alias qux=(int x)=>x;
alias qux=(double x)=>x+1;
void main(){
    import std.stdio;
    writeln(qux(1)," ",qux(1.0)); // error
}

https://issues.dlang.org/show_bug.cgi?id=16099
Nov 18 2017
On 11/14/2017 5:20 AM, Nick Treleaven wrote:

A very similar problem exists for int and char overloads:

alias foo = (char c) => 1;
alias foo = (int i) => 4;
enum int e = 7;
static assert(foo(e) == 4); // fails

I cannot reproduce this error.
Nov 14 2017
On 11/14/2017 5:20 AM, Nick Treleaven wrote:

A very similar problem exists for int and char overloads:

alias foo = (char c) => 1;
alias foo = (int i) => 4;
enum int e = 7;
static assert(foo(e) == 4); // fails

On Wednesday, 15 November 2017 at 04:24:58 UTC, Walter Bright wrote:

I cannot reproduce this error.

Try it here: https://run.dlang.io/is/nfMGfG (DMD-nightly)
Nov 14 2017
On 11/14/2017 5:20 AM, Nick Treleaven wrote:

A very similar problem exists for int and char overloads:

alias foo = (char c) => 1;
alias foo = (int i) => 4;
enum int e = 7;
static assert(foo(e) == 4); // fails

On Wednesday, 15 November 2017 at 04:24:58 UTC, Walter Bright wrote:

I cannot reproduce this error.

On 11/14/17 11:33 PM, Michael V. Franklin wrote:

Try it here: https://run.dlang.io/is/nfMGfG (DMD-nightly)

Cool, thanks. That seems to be an unrelated bug. Have you added it to bugzilla? Thanks! -- Andrei
Nov 15 2017
On Thursday, 16 November 2017 at 07:24:44 UTC, Andrei Alexandrescu wrote:

Try it here: https://run.dlang.io/is/nfMGfG (DMD-nightly)

Cool, thanks. That seems to be an unrelated bug. Have you added it to bugzilla? Thanks! -- Andrei

Bugzilla issue is here: https://issues.dlang.org/show_bug.cgi?id=17983

Mike
Nov 15 2017
On 11/16/2017 02:29 AM, Michael V. Franklin wrote:
On Thursday, 16 November 2017 at 07:24:44 UTC, Andrei Alexandrescu wrote:

Try it here: https://run.dlang.io/is/nfMGfG (DMD-nightly)

Cool, thanks. That seems to be an unrelated bug. Have you added it to bugzilla? Thanks! -- Andrei

Bugzilla issue is here: https://issues.dlang.org/show_bug.cgi?id=17983

Mike

Gracias! -- Andrei
Nov 16 2017
On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky wrote:

if (a & (flag1 | flag2))

to

if ((a & (flag1 | flag2)) != 0)

When the first is quite obvious.

Just change the typing of the if-conditional to: if (boolean|integral) {…}
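Dmitry's flags example in compilable form — a small sketch (the flag names are illustrative) showing that both spellings test the same thing, since `if (expr)` is treated as `if (cast(bool) expr)`:

```d
enum uint flag1 = 1 << 0;
enum uint flag2 = 1 << 1;

void main()
{
    uint a = flag2;

    // Short form: the integral result of the masking is
    // converted to bool by the if-condition itself.
    if (a & (flag1 | flag2)) { }

    // Explicit form: spells out the comparison against zero.
    if ((a & (flag1 | flag2)) != 0) { }

    // Both conditions agree.
    assert((a & (flag1 | flag2)) != 0);
}
```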
Nov 12 2017
On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky wrote:

if (a & (flag1 | flag2))

to

if ((a & (flag1 | flag2)) != 0)

When the first is quite obvious.

On Sunday, 12 November 2017 at 16:00:28 UTC, Ola Fosheim Grøstad wrote:

Just change the typing of the if-conditional to: if (boolean|integral) {…}

There's no forced change. if explicitly converts the condition to bool.
Nov 12 2017
On Sunday, 12 November 2017 at 16:04:59 UTC, Temtaime wrote:

There's no forced change. if explicitly converts the condition to bool.

Yes, but that is a flaw IMO. E.g. NaN will convert to true.
Nov 12 2017
On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky wrote:

if (a & (flag1 | flag2))

to

if ((a & (flag1 | flag2)) != 0)

When the first is quite obvious.

On Sunday, 12 November 2017 at 16:00:28 UTC, Ola Fosheim Grøstad wrote:

Just change the typing of the if-conditional to: if (boolean|integral) {…}

Rather, I recall that if(expr) is considered to be if(cast(bool)expr), the latter to support user-defined types. So we are good.
Nov 12 2017
On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky wrote:

if (a & (flag1 | flag2))

to

if ((a & (flag1 | flag2)) != 0)

When the first is quite obvious.

On Sunday, 12 November 2017 at 16:00:28 UTC, Ola Fosheim Grøstad wrote:

Just change the typing of the if-conditional to: if (boolean|integral) {…}

On Sunday, November 12, 2017 19:13:00 Dmitry Olshansky via Digitalmars-d wrote:

Rather, I recall that if(expr) is considered to be if(cast(bool)expr), the latter to support user-defined types. So we are good.

Yes. In conditional expressions, you get an implicitly inserted cast to bool. So, you have an implicit, explicit cast to bool (weird as that sounds). If the implicit cast of integers to bool were removed (meaning neither integer literals nor VRP allowed the conversion), then it would have no effect on if statements or loops and whatnot. It would affect overloading and other expressions. So, something like

bool a = 2 - 1;

or

auto foo(bool) {...}
foo(1);

wouldn't compile anymore. But something like if(1) would compile just fine, just like if("str") compiles just fine, but

auto foo(bool) {...}
foo("str");

does not.

- Jonathan M Davis
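The user-defined-types point can be illustrated with `opCast` — a sketch (the `Flags` type is made up for illustration) of why `if (expr)` being treated as `if (cast(bool) expr)` matters:

```d
// `if (expr)` is treated as `if (cast(bool) expr)`, which is what
// lets a user-defined type appear directly in a condition.
struct Flags
{
    uint bits;
    bool opCast(T : bool)() const { return bits != 0; }
}

void main()
{
    auto f = Flags(0b10);
    bool taken;
    if (f)          // invokes f.opCast!bool
        taken = true;
    assert(taken);

    if (1) { }      // conditions would be unaffected by the proposal
}
```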
Nov 12 2017
Hi Mike, thanks for your inquiry. A DIP is necessary for all language changes. In this case a short and well-argued DIP seems to be the ticket. Walter and I spoke and such a proposal has a good chance to be successful.

Thanks, Andrei

On 11/11/2017 08:40 AM, Michael V. Franklin wrote:

What's the official word on this: https://github.com/dlang/dmd/pull/6404

Does it need a DIP? If I revive it, will it go anywhere? What needs to be done to move it forward?

Thanks, Mike
Nov 11 2017
On Saturday, 11 November 2017 at 23:30:18 UTC, Andrei Alexandrescu wrote:A DIP is necessary for all language changes. In this case a short and well-argued DIP seems to be the ticket. Walter and I spoke and such a proposal has a good chance to be successful.Subject issues: https://issues.dlang.org/show_bug.cgi?id=9999 https://issues.dlang.org/show_bug.cgi?id=10560 Spec in question: https://dlang.org/spec/type.html#bool DIP: https://github.com/dlang/DIPs/pull/99 I need some feedback from the community before I move forward with the DIP. I'm torn between a few ideas and not sure how to proceed. 1. Deprecate implicit conversion of integer literals to bool 2. Allow implicit conversion of integer literals to bool if a function is not overloaded, but disallow it if the function is overloaded. 3. Change the overload resolution rules as illustrated in https://github.com/dlang/dmd/pull/1942 If I had to choose one I would go with 1, simply because the implicit conversion is janky and circumvents the type system for a mild-at-best convenience. But, it will cause breakage that needs to be managed. 2 would solve the issues in question, but keep breakage at a minimum, and would probably be preferred if users wish to maintain the status quo. Disadvantage is it's a special case to document, consider, and explain. 3 is similar to 2, and like 2, is a special case. I don't even really have a dog in this fight, but the demonstration of the problem in the bugzilla issues is simply embarrassing, and I'm tired of seeing issues languish for so long in bugzilla without any resolution. Is there any general consensus in the community on this issue so I can be sure I'm fulfilling the community's preference? Thanks, Mike
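For reference, the kind of surprise driving issues 9999 and 10560 can be shown in a few lines. As I understand the current rules, the literal 1 fits in bool via value-range propagation and bool counts as the better (smaller) match, so the bool overload wins over long. This is a sketch of the reported behavior, with string returns instead of printing so the result is checkable:

```d
// Two overloads differing only in bool vs. long.
string f(bool b) { return "bool"; }
string f(long l) { return "long"; }

void main()
{
    // The literal 1 implicitly converts to bool via VRP, and bool
    // is considered the better match: the behavior the bugzilla
    // issues complain about.
    assert(f(1) == "bool");

    // 2 cannot implicitly convert to bool, so long wins.
    assert(f(2) == "long");
}
```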
Nov 13 2017
On 11/13/17 8:01 PM, Michael V. Franklin wrote:
On Saturday, 11 November 2017 at 23:30:18 UTC, Andrei Alexandrescu wrote:

A DIP is necessary for all language changes. In this case a short and well-argued DIP seems to be the ticket. Walter and I spoke and such a proposal has a good chance to be successful.

My vote would be for 1. It's disruptive, but not that disruptive. I almost always initialize a bool with true or false, not with 1 or 0.

The array handling is probably the only part that would be painful, but we could handle that the same way we deprecated octal numbers:

bools!"01001101"; => [false, true, false, false, true, true, false, true];

-Steve
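Steve's hypothetical `bools!` helper (the name and its existence are his proposal, not an existing Phobos symbol) could presumably be built with CTFE along these lines, analogous to how std.conv.octal replaced octal literals:

```d
// Hypothetical helper: turn a string of '0'/'1' characters into a
// bool[] at compile time.
template bools(string s)
{
    private bool[] make()
    {
        bool[] r;
        foreach (c; s)
        {
            assert(c == '0' || c == '1', "expected only '0' or '1'");
            r ~= c == '1';
        }
        return r;
    }
    enum bool[] bools = make();
}

static assert(bools!"01001101"
    == [false, true, false, false, true, true, false, true]);
```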
Nov 14 2017
On Tuesday, 14 November 2017 at 13:17:22 UTC, Steven Schveighoffer wrote:

The array handling is probably the only part that would be painful, but we could handle that the same way we deprecated octal numbers: bools!"01001101"; => [false, true, false, false, true, true, false, true];

Thanks for chiming in. `bool[] boolValues = cast(bool[])[0,1,0,1]` will still work fine under option 1. It's only *implicit* casting that's being proposed for deprecation.

Mike
Nov 15 2017