
digitalmars.D - Rant after trying Rust a bit

reply "simendsjo" <simendsjo gmail.com> writes:
Long rant ahead - a bit dipsy..

TL;DR: Rust has momentum, manpower and tooling. Tooling matters.
Safe defaults. Ergonomics like expressions and destructuring
rock.

I'm reluctantly confessing that I've had a small affair with Rust 
recently. While I think D is a really good language, there are 
several things I don't agree with. Some are caused by historical 
reasons, some by the lack of peer review. Let's look at a couple 
of things I think look exciting about Rust after a bit of 
experimentation.

Manpower
--------
It's silly to state this as a reason for playing with Rust, but 
it's probably one of the main ones. Rust has momentum and a lot 
of manpower as well as some big companies behind it. It should be 
of secondary importance, but this is something that really shows 
in the quality of Rust.

I've been using D on and off since 2007, and the lack of manpower 
shows in every aspect of the language, design and ecosystem. Rust 
has a pretty nice ecosystem and tools given its very young age.

Cargo
-----
Rust has a default package manager much like Dub. The main 
difference is that Cargo has been endorsed by the Rust team and 
is an official product. This means it works very well with the 
compiler and feels like an integrated part of the language. Dub, 
on the other hand, is a product from the outside, and 
unfortunately, it feels this way too. Having Dub become the 
endorsed package manager for D sounds like a very good idea to 
me. In order to do this, some breaking changes might be necessary 
(I haven't used D or Dub much recently though..). If so, a good 
time to introduce breaking changes would be before integrating it 
into the "core" D ecosystem.

The effect of Cargo is very visible when comparing the number of 
libraries on https://crates.io with 
http://code.dlang.org.

While code.dlang.org has 530 packages, crates.io has 2610 
packages, even though Rust is very new. Dub's repository website 
is a lot better than Rust's though :)

Traits
------
I think the ability to express an interface without buying into 
inheritance is the right move. The alternative in D is specifying 
the behavior as a template and verifying the contract in a 
unittest for the type.
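
For illustration, a rough D sketch of that alternative - the names 
(isDuck, Mallard, makeNoise) are made up here and the code is 
untested:

    // A compile-time "interface" expressed as a template predicate...
    enum isDuck(T) = is(typeof((T t) {
        t.quack();              // anything with a callable quack() qualifies
    }));

    void makeNoise(T)(T t) if (isDuck!T)
    {
        t.quack();
    }

    struct Mallard
    {
        void quack() {}
    }

    // ...and a unittest that verifies the contract for a concrete type.
    unittest
    {
        static assert(isDuck!Mallard);
        makeNoise(Mallard());
    }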

Algebraic data types
--------------------
Haven't looked into `Algebraic!`, so I won't start bashing D here 
:) But given the lack of pattern matching, I doubt it will be as 
pretty as Rust.
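
For reference, a rough sketch of what the std.variant route looks 
like (untested; one handler per alternative, no destructuring of 
nested data):

    import std.variant : Algebraic, visit;

    alias Value = Algebraic!(int, string);

    string describe(Value v)
    {
        // visit plays the role of a match over the alternatives
        return v.visit!(
            (int i)    => "an int",
            (string s) => "the string " ~ s
        );
    }

    unittest
    {
        assert(describe(Value(42)) == "an int");
        assert(describe(Value("hi")) == "the string hi");
    }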

Macros
------
I haven't written more than extremely simple macros in Rust, but 
having macros that are possible for tools to understand is a win. 
Templates and string mixins are often used for this purpose, but 
trying to build tools when string mixins exist is probably 
extremely hard. If D had hygienic macros, I expect several 
features could be expressed with them instead of string mixins, 
making tooling easier to implement.
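
A tiny, made-up example of the tooling problem (untested): the 
fields below only exist after CTFE has produced the string, so a 
tool more or less has to run the compiler to know about them.

    // Sketch: code generation via a string mixin.
    string makeFields(string[] names)
    {
        string code;
        foreach (n; names)
            code ~= "int " ~ n ~ ";\n";   // arbitrary code built as text
        return code;
    }

    struct Point
    {
        mixin(makeFields(["x", "y"]));    // declares int x; int y;
    }

    unittest
    {
        Point p;
        p.x = 1;
        p.y = 2;
    }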

Safe by default
---------------
D is often said to be safe by default, but D still has nullable 
references and mutability by default. I don't see this being 
possible to change at this stage, but expressing when I want to 
be unsafe rather than the opposite is very nice. I end up typing 
a lot more in D than in Rust because of this.
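
A minimal, untested sketch of the two defaults I mean - nullable 
references and mutability - with the safe part being what you 
have to opt into:

    class C { int x; }

    void main()
    {
        C c;               // references are nullable by default...
        // c.x = 1;        // ...so this would compile and crash at runtime

        int counter = 0;   // mutable by default
        ++counter;

        immutable limit = 10;  // immutability is the part you must spell out
        // limit = 11;         // error: cannot modify immutable
    }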

Pattern matching
----------------
Ooooh... I don't know what to say.. D should definitely look into 
implementing some pattern matching! final switch is good for 
making sure all values are handled, but destructuring is just so 
ergonomic.
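
For comparison, the final switch side of that in D (a rough, 
untested sketch): it checks exhaustiveness over an enum, but there 
is nothing to destructure, and the result still has to be assigned 
in every branch.

    enum Color { red, green, blue }

    string name(Color c)
    {
        string s;
        final switch (c)   // compile error if a Color member is not handled
        {
            case Color.red:   s = "red";   break;
            case Color.green: s = "green"; break;
            case Color.blue:  s = "blue";  break;
        }
        return s;
    }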

Expressions
-----------
This probably also falls in the "too late" category, but 
statements-as-expressions is really nice. `auto a = if ...` <- 
why not?

Borrowing
---------
This is probably the big thing that makes Rust really different. 
Everything is a resource, and resources have an owner and a 
lifetime. As a part of this, you can either have multiple 
read-only references, or a single writeable reference. I won't 
say I have a lot of experience with this, but it doesn't seem 
like an extremely unergonomic trade-off. I cannot even remotely 
imagine the amount of compiler optimizations this feature makes 
possible.

----

Why have I been looking at Rust?

I haven't used D in production since 2008, which was an utter 
disaster. I did some work on the native mysql client, but that 
was mostly making the code more D than C. Some small experiments 
thereafter.

The constant breakage in the language and standard library 
hasn't been a real issue for me as I haven't used it in 
production - the problem when I used it was partly the phobos vs 
tango split with incompatible runtimes, together with an 
extremely buggy compiler. On the breaking part, the real issue is 
the "We're not going to break any code!" stance, while each 
release still breaks every codebase. The effect is that a lot of 
necessary long-term breaking changes are never accepted - the 
only breaking changes are the unintended ones! I'm in the 
"break-it" camp. If you don't mind living with historical baggage 
and want an expressive language, you can use C++. Why should D 
try to stay with the baggage it has only recently acquired 
because of the lack of manpower? I've recently been to job 
interviews, and none of them had ever heard of D...

After following Rust for some time (instead of following D!) and 
now spending some time playing with it, I have to say that Rust 
has several things going for it: manpower, momentum, tooling.  
There are also really nice things like "everything" being an 
expression, pattern matching and the compiler as a library.

But again... After playing a bit with Rust, I feel it lacks a lot 
in expressive power. D has templates, template mixins, alias 
this, string mixins, opDispatch etc. In my little time with Rust, 
I've seen several pages of generic constraints that are 
expressible in a couple of lines of D. I've seen copy/pasted code 
that just isn't necessary when you code in D.

Anyways - my little ramblings after trying the Rust programming 
language while I haven't used D in a long, long while (But I'm 
still here now, as I'm not sure Rust is able to express 
everything that is possible with D). Looking forward to following 
D again :)
Jul 22 2015
next sibling parent reply "Jack Stouffer" <jack jackstouffer.com> writes:
On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 I've been using D on and off since 2007, and the lack of 
 manpower shows in every aspect of the language, design and 
 ecosystem. Rust has a pretty nice ecosystem and tools given its 
 very young age.
Only one way to fix this. Volunteer.
 Rust has a default package manager much like Dub. The main 
 difference is that Cargo has been endorsed by the Rust team and 
 is an official product. This means it works very well with the 
 compiler and feels like an integrated part of the language. 
 Dub, on the other hand, is a product from the outside, and 
 unfortunately, it feels this way too. Having Dub become the 
 endorsed package manager for D sounds like a very good idea for 
 me.
Dub is endorsed by the leadership and is included in the same Github organization as the compiler and the standard library.
 While code.dlang.org has 530 packages, crates.io has 2610 
 packages, and this is even if Rust is very new. Dubs repository 
 website is a lot better than Rusts though :)
I attribute this to hype. Also, while dub may feel a little bare at times, I always think of the worse alternative in node: so many packages are crap that they crowd out anything of value.
 Traits
 ------
 I think the ability to express an interface without buying into 
 inheritance is the right move. The alternative in D is 
 specifying the behavior as a template and verifying the 
 contract in a unittest for the type.
I don't know enough about rust to comment.
 Macros
 ------
 I haven't written more than extremely simple macros in Rust, 
 but having macros that is possible for tools to understand is a 
 win.  Templates and string mixins is often used for this 
 purpose, but trying to build tools when string mixins exists is 
 probably extremely hard. If D had hygenic macros, I expect 
 several features could be expressed with this instead of string 
 mixins, making tooling easier to implement.
Maybe tooling would become easier to write, but in my personal experience, macros are much harder for programmers to understand than mixins, and much easier to abuse.
 Safe by default
 ---------------
 D is often said being safe by default, but D still has default 
 nullable references and mutable by default. I don't see it 
 being possible to change at this stage, but expressing when I 
 want to be unsafe rather than the opposite is very nice. I end 
 up typing a lot more in D than Rust because of this.
This would break so much code it's not even funny. I agree that immutable by default is the best paradigm, but as far as breaking changes go, you can only have so many before people abandon the language.
 Expressions
 -----------
 This probably also falls in the "too late" category, but 
 statements-as-expressions is really nice. `auto a = if ...` <- 
 why not?
Don't quite know what you mean here.
 Borrowing
 ---------
 This is probably the big thing that makes Rust really 
 different.  Everything is a resource, and resources have an 
 owner and a lifetime. As a part of this, you can either have 
 multiple aliases with read-only references, or a single 
 reference with a writeable reference. I won't say I have a lot 
 of experience with this, but it seems like it's not an 
 extremely unergonomic trade-off. I cannot even remotely imagine 
 the amount of possible compiler optimizations possible with 
 this feature.
I think that someone was working on this, but I think it got sidelined (as it should be) in favor of fixing RefCounted and other things in the std lib.
 The constant breakage in the language and standard library 
 haven't been an real issue for me as I haven't used it in 
 production - the problem when I used it was partly the phobos 
 vs tango with incompatible runtimes together with an extremely 
 buggy compiler.
The tango vs Phobos issue has been mostly settled after the transition from d1 to d2, and the compiler is a lot better now than it was.
 On the breaking part, the real issue is the "We're not going to 
 break any code!" stance,
Who, in the leadership or among the contributors, has ever said this?
Jul 22 2015
next sibling parent reply "simendsjo" <simendsjo gmail.com> writes:
On Wednesday, 22 July 2015 at 19:41:16 UTC, Jack Stouffer wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 I've been using D on and off since 2007, and the lack of 
 manpower shows in every aspect of the language, design and 
 ecosystem. Rust has a pretty nice ecosystem and tools given 
 its very young age.
Only one way to fix this. Volunteer.
Yes, I agree. I haven't done much good for the D community, although I believe some of the code I've written for mysql native is in production.

Let me add a somewhat related point for Rust :)

Community
---------
The community is nice, helpful and doesn't condescend to people.

On Wednesday, 22 July 2015 at 19:41:16 UTC, Jack Stouffer wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Expressions
 -----------
 This probably also falls in the "too late" category, but 
 statements-as-expressions is really nice. `auto a = if ...` <- 
 why not?
Don't quite know what you mean here.
When "everything" is an expressions, you can write things like auto a = if(e) c else d; In D you have to write type a = invalid_value; if(e) a = c; else a = d; assert(a != invalid_value); On Wednesday, 22 July 2015 at 19:41:16 UTC, Jack Stouffer wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 (... snip [regarding phobos/tango runtime incompatibility] ...)
The tango vs Phobos issue has been mostly settled after the transition from d1 to d2, and the complier is a lot better now than it was.
There is probably nothing in recent times that can be blamed for as much pain as the incompatible runtimes of phobos and tango, but I've still encountered a lot of compiler and phobos bugs. Yes, it's just one every 500 or so lines now, but I've only encountered library bugs in several years - again credited to a lot more manpower being available.

On Wednesday, 22 July 2015 at 19:41:16 UTC, Jack Stouffer wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 On the breaking part, the real issue is the "We're not going 
 to break any code!" stance,
Who, in the leadership or a contributor, has ever said this.
I'm pretty sure Walter Bright has said this on a lot of occasions. He seemed to believe a failure of adoption of D was due to the constant breaking changes, which might be true.

On Wednesday, 22 July 2015 at 19:41:16 UTC, Jack Stouffer wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Rust has a default package manager much like Dub. The main 
 difference is that Cargo has been endorsed by the Rust team 
 and is an official product. (... snip ...)
Dub is endorsed by the leadership and is included in the same Github organization as the complier and the standard library
I didn't know that. Very nice. I hope it's in a clean state when it gets pulled in.

On Wednesday, 22 July 2015 at 19:41:16 UTC, Jack Stouffer wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 While code.dlang.org has 530 packages, crates.io has 2610 
 packages, (... snip ...)
I attribute this to hype. (... snip ...)
Yes, but the hype is probably driving even more developers to the language, and I believe a large user-base is of paramount importance for a language to become good.

On Wednesday, 22 July 2015 at 19:41:16 UTC, Jack Stouffer wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Macros
 ------
 (... snip ...)
Maybe tooling would become easier to write, but in my personal experience, macros are much harder for programmers to understand than mixins, and much easier to abuse.
I disagree. String mixins are much easier to abuse than hygienic macros. String mixins allow anything, and while they offer infinite possibilities, they also encourage abuse.
Jul 22 2015
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 22 July 2015 at 20:43:04 UTC, simendsjo wrote:
 When "everything" is an expressions, you can write things like
     auto a = if(e) c else d;

 In D you have to write
     type a = invalid_value;
     if(e) a = c;
     else  a = d;
     assert(a != invalid_value);
That's what the ternary expression is for:

    auto a = e ? c : d;

Though the ternary is unnecessary with statements as expressions, common cases like this are handled.
Jul 22 2015
parent reply "simendsjo" <simendsjo gmail.com> writes:
On Wednesday, 22 July 2015 at 20:59:11 UTC, Adam D. Ruppe wrote:
 On Wednesday, 22 July 2015 at 20:43:04 UTC, simendsjo wrote:
 When "everything" is an expressions, you can write things like
     auto a = if(e) c else d;

 In D you have to write
     type a = invalid_value;
     if(e) a = c;
     else  a = d;
     assert(a != invalid_value);
That's what the ternary expression is for: auto a = e ? c : d; Though the ternary is unnecessary with statements as expressions, common cases like this are handled.
:) The example was written to save space. I reckon you understand what I mean.
Jul 22 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 22 July 2015 at 21:04:57 UTC, simendsjo wrote:
 :) The example was written to save space. I recon you 
 understand what I mean.
Yeah, but the if/else is one of the most useful examples of it, and is covered by ?:, so the whole thing becomes less compelling then. The other places where I've used it in languages that support it are little blocks crammed into a line and sometimes exception grabbing... but still, the value isn't that great.
Jul 23 2015
parent reply "ixid" <adamsibson hotmail.com> writes:
On Thursday, 23 July 2015 at 13:33:43 UTC, Adam D. Ruppe wrote:
 On Wednesday, 22 July 2015 at 21:04:57 UTC, simendsjo wrote:
 :) The example was written to save space. I recon you 
 understand what I mean.
Yeah, but the if/else is one of the most useful examples of it, and is covered by ?:, so the whole thing becomes less compelling then. The other places where I've used it in languages that support it are little blocks crammed into a line and sometimes exception grabbing... but still, the value isn't that great.
If we had a clean sheet wouldn't it be better to have if return a value and ditch ternary?
Jul 23 2015
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, 23 July 2015 at 14:49:55 UTC, ixid wrote:
 On Thursday, 23 July 2015 at 13:33:43 UTC, Adam D. Ruppe wrote:
 On Wednesday, 22 July 2015 at 21:04:57 UTC, simendsjo wrote:
 :) The example was written to save space. I recon you 
 understand what I mean.
Yeah, but the if/else is one of the most useful examples of it, and is covered by ?:, so the whole thing becomes less compelling then. The other places where I've used it in languages that support it are little blocks crammed into a line and sometimes exception grabbing... but still, the value isn't that great.
If we had a clean sheet wouldn't it be better to have if return a value and ditch ternary?
Maybe, but the ternary operator is a lot less verbose, and from some other comments in this thread, it sounds like the way they implemented it in Rust forces you to use braces for single line statements, which would be a _huge_ downside IMHO. I'm inclined to think that it would need a use case that's a lot more compelling than if-else chains to be worth it. - Jonathan M Davis
Jul 23 2015
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-07-23 17:41, Jonathan M Davis wrote:

 Maybe, but the ternary operator is a lot less verbose, and from some
 other comments in this thread, it sounds like the way they implemented
 it in Rust forces you to use braces for single line statements, which
 would be a _huge_ downside IMHO.
Scala implements it without those requirements. It looks exactly like in D, just that it also returns a value. Also, I think it gets a lot more interesting when you combine it with automatic returns in a method and optional braces for methods with a single expression:

    def returnType =
      if (node.isConstructor)
        None
      else
        Some(Type.translate(binding.getReturnType))

Or if you're using pattern matching:

    def fromModifier(value: Modifier) = value match {
      case Modifier.ABSTRACT => ABSTRACT
      case Modifier.STATIC => STATIC
      case Modifier.FINAL => FINAL
      case _ => NONE
    }

BTW, Ruby supports both the ternary operator and statements as expressions.

-- 
/Jacob Carlborg
Jul 23 2015
prev sibling parent reply Ziad Hatahet via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, Jul 23, 2015 at 8:41 AM, Jonathan M Davis via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 Maybe, but the ternary operator is a lot less verbose
The ternary operator becomes much harder to read the moment you have more than the simple if/else case. As it was mentioned elsewhere on this thread, you can do the following in Scala:

    val x =
      if (condition_1) 1
      else if (condition_2) 2
      else if (condition_3) 3
      else 4

Having expressions be "built-in" extends beyond the simple if/else case, which can be emulated with the ternary operator as you said. You can assign the result of match expressions for instance, or the result of scoped blocks, e.g.

    val x = {
        val ys = foo()
        ys.map(...).filter(...).exists(...)
    }

 , and from some other comments in this thread, it sounds like the way they
 implemented it in Rust forces you to use braces for single line statements,
 which would be a _huge_ downside IMHO.
On the other hand, Rust does not require parentheses around if conditions:

    let x = if some_condition { 1 } else { 2 }
 I'm inclined to think that it would need a use case that's a lot more
 compelling than if-else chains to be worth it.
I provided examples above. -- Ziad
Jul 23 2015
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 23 July 2015 at 20:48:34 UTC, Ziad Hatahet wrote:
 The ternary operator becomes much harder to read the moment you 
 have more than the simple if/else case.
I think it is actually kinda pretty:

    auto x =
        (condition_1) ? 1 :
        (condition_2) ? 2 :
        (condition_3) ? 3 :
        4;
 val x = {
     val ys = foo()
     ys.map(...).filter(...).exists(...)
 }
    auto x = {
        auto ys = foo();
        return ys.map(...).filter(...).exists(...);
    }();

Before you get too worried about the (), I'd point out that this is a very common pattern in Javascript (for like everything...) and while everybody hates JS, most everyone uses it too; this pattern is good enough for usefulness.

(or for that you could prolly just write foo().map()... directly but i get your point)
Jul 23 2015
parent reply Ziad Hatahet via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, Jul 23, 2015 at 2:00 PM, Adam D. Ruppe via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 I think it is actually kinda pretty:
What about:

    int median(int a, int b, int c) {
        return (a<b) ? (b<c) ? b : (a<c) ? c : a : (a<c) ? a : (b<c) ? c : b;
    }

vs.

    def median(a: Int, b: Int, c: Int) =
      if (a < b) {
        if (b < c) b
        else if (a < c) c
        else a
      }
      else if (a < c) a
      else if (b < c) c
      else b

 Before you get too worried about the (), I'd point out that this is a very
 common pattern in Javascript (for like everything...) and while everybody
 hates JS, most every uses it too; this patten is good enough for usefulness.
Is the compiler always able to optimize out the function call by inlining it, as would be the case with a scope?
Jul 23 2015
next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 23 July 2015 at 21:27:17 UTC, Ziad Hatahet wrote:
 I think it is actually kinda pretty:
What about: int median(int a, int b, int c) { return (a<b) ? (b<c) ? b : (a<c) ? c : a : (a<c) ? a : (b<c) ? c : b; } vs. def median(a: Int, b: Int, c: Int) = if (a < b) { if (b < c) b else if (a < c) c else a } else if (a < c) a else if (b < c) c else b
Not really a spaces-to-spaces comparison... to be honest, I'd probably just write that as:

    int median(int a, int b, int c) {
        if (a < b) {
            if (b < c) return b;
            else if (a < c) return c;
            else return a;
        }
        else if (a < c) return a;
        else if (b < c) return c;
        else return b;
    }

You don't need it to be an expression since it is a function, you can simply write return statements (which I kinda like since then it is obvious that that value is a terminating condition and not just the middle of some other calculation).

But if we were using a ternary, some newlines can help with it:

    return (a < b) ? (
            (b < c) ? b :
            (a < c) ? c :
            a
        ) :
        (a < c) ? a :
        (b < c) ? c :
        b;

Indeed, there I just took your if/else thing and swapped out the else keyword for the : symbol, then replaced if(cond) with (cond) ?, then changed out the {} for (). It still works the same way.
 Is the compiler always able to always optimize out the function 
 call by inlining it, as would be the case with
 a scope?
It should be.
Jul 23 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 5:26 PM, Ziad Hatahet via Digitalmars-d wrote:
 On Thu, Jul 23, 2015 at 2:00 PM, Adam D. Ruppe via Digitalmars-d
 <digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

     I think it is actually kinda pretty:


 What about:

 int median(int a, int b, int c) {
      return (a<b) ? (b<c) ? b : (a<c) ? c : a : (a<c) ? a : (b<c) ? c : b;
 }

 vs.

 def median(a: Int, b: Int, c: Int) =
    if (a < b) {
      if (b < c) b
      else if (a < c) c
      else a
    }
    else if (a < c) a
    else if (b < c) c
    else b
This is a wash. If we want to discuss good things in Rust we could get inspiration from, we need relevant examples. -- Andrei
Jul 25 2015
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/25/2015 02:19 PM, Andrei Alexandrescu wrote:
 On 7/23/15 5:26 PM, Ziad Hatahet via Digitalmars-d wrote:
 On Thu, Jul 23, 2015 at 2:00 PM, Adam D. Ruppe via Digitalmars-d
 <digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

     I think it is actually kinda pretty:


 What about:

 int median(int a, int b, int c) {
      return (a<b) ? (b<c) ? b : (a<c) ? c : a : (a<c) ? a : (b<c) ? c
 : b;
 }

 vs.

 def median(a: Int, b: Int, c: Int) =
    if (a < b) {
      if (b < c) b
      else if (a < c) c
      else a
    }
    else if (a < c) a
    else if (b < c) c
    else b
This is a wash. If we want to discuss good things in Rust
(The quoted bit is Scala code.)
 we could get inspiration from, we need relevant examples. -- Andrei
What do you mean? I think it is pretty obvious that 'if'/'else' is "better" syntax than '?:'. It e.g. does not leave the separation of context and condition up to operator precedence rules and is hence easier to parse by a human. Not that I'd care much, but it is inconvenient to be asked not to use the ternary operator in team projects just because it has a badly engineered syntax.

Also, we have (int x){ return r; }, auto foo(int x){ return r; }, (int x)=>r, but not auto foo(int x)=>r. It's an arbitrary restriction.

(BTW: To all the people who like to put the ternary operator condition into parens in order to imitate if: A convention that makes more sense here is to put the entire (chained) ternary expression in parentheses.)
Jul 26 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/26/15 11:18 AM, Timon Gehr wrote:
 I think it is pretty obvious that 'if'/'else' is "better" syntax than '?:'.
It may as well be the case but the whole deal is marginal. Yeah, we could have slightly better tactical tools for expressing conditionals, but really we're fine. -- Andrei
Jul 26 2015
prev sibling next sibling parent dennis luehring <dl.soluz gmx.net> writes:
Am 23.07.2015 um 22:47 schrieb Ziad Hatahet via Digitalmars-d:
 Having expressions be "built-in" extends beyond the simple if/else case
and allowes const correctness without functions
Jul 24 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 4:47 PM, Ziad Hatahet via Digitalmars-d wrote:
 On the other hand, Rust does not require parenthesis around if
 conditions:
Yet it requires braces around the arms. Rust taketh away, Rust requireth back :o). -- Andrei
Jul 25 2015
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/25/2015 02:18 PM, Andrei Alexandrescu wrote:
 On 7/23/15 4:47 PM, Ziad Hatahet via Digitalmars-d wrote:
 On the other hand, Rust does not require parenthesis around if
 conditions:
Yet it requires braces around the arms. Rust taketh away, Rust requireth back :o). -- Andrei
It's arguably a better trade-off.
Jul 26 2015
parent "Enamex" <enamex+d outlook.com> writes:
On Sunday, 26 July 2015 at 15:21:32 UTC, Timon Gehr wrote:
 On 07/25/2015 02:18 PM, Andrei Alexandrescu wrote:
 On 7/23/15 4:47 PM, Ziad Hatahet via Digitalmars-d wrote:
 On the other hand, Rust does not require parenthesis around if
 conditions:
Yet it requires braces around the arms. Rust taketh away, Rust requireth back :o). -- Andrei
It's arguably a better trade-off.
Yeah. Instead of sometimes requiring just parentheses and sometimes them and braces. It's also less error-inducing. I rather like it though it's not exactly a functionality thing.
Jul 26 2015
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 10:49 AM, ixid wrote:
 On Thursday, 23 July 2015 at 13:33:43 UTC, Adam D. Ruppe wrote:
 On Wednesday, 22 July 2015 at 21:04:57 UTC, simendsjo wrote:
 :) The example was written to save space. I recon you understand what
 I mean.
Yeah, but the if/else is one of the most useful examples of it, and is covered by ?:, so the whole thing becomes less compelling then. The other places where I've used it in languages that support it are little blocks crammed into a line and sometimes exception grabbing... but still, the value isn't that great.
If we had a clean sheet wouldn't it be better to have if return a value and ditch ternary?
Possibly, but then you'd need to have while return a value. -- Andrei
Jul 23 2015
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/23/2015 06:28 PM, Andrei Alexandrescu wrote:
 On 7/23/15 10:49 AM, ixid wrote:
 On Thursday, 23 July 2015 at 13:33:43 UTC, Adam D. Ruppe wrote:
 On Wednesday, 22 July 2015 at 21:04:57 UTC, simendsjo wrote:
 :) The example was written to save space. I recon you understand what
 I mean.
Yeah, but the if/else is one of the most useful examples of it, and is covered by ?:, so the whole thing becomes less compelling then. The other places where I've used it in languages that support it are little blocks crammed into a line and sometimes exception grabbing... but still, the value isn't that great.
If we had a clean sheet wouldn't it be better to have if return a value and ditch ternary?
Possibly, but then you'd need to have while return a value. -- Andrei
https://en.wikipedia.org/wiki/Unit_type
Jul 23 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 1:51 PM, Timon Gehr wrote:
 On 07/23/2015 06:28 PM, Andrei Alexandrescu wrote:
 On 7/23/15 10:49 AM, ixid wrote:
 On Thursday, 23 July 2015 at 13:33:43 UTC, Adam D. Ruppe wrote:
 On Wednesday, 22 July 2015 at 21:04:57 UTC, simendsjo wrote:
 :) The example was written to save space. I recon you understand what
 I mean.
Yeah, but the if/else is one of the most useful examples of it, and is covered by ?:, so the whole thing becomes less compelling then. The other places where I've used it in languages that support it are little blocks crammed into a line and sometimes exception grabbing... but still, the value isn't that great.
If we had a clean sheet wouldn't it be better to have if return a value and ditch ternary?
Possibly, but then you'd need to have while return a value. -- Andrei
https://en.wikipedia.org/wiki/Unit_type
I said awkward, not impossible. -- Andrei
Jul 23 2015
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/23/2015 09:18 PM, Andrei Alexandrescu wrote:
 On 7/23/15 1:51 PM, Timon Gehr wrote:
 On 07/23/2015 06:28 PM, Andrei Alexandrescu wrote:
 On 7/23/15 10:49 AM, ixid wrote:
 On Thursday, 23 July 2015 at 13:33:43 UTC, Adam D. Ruppe wrote:
 On Wednesday, 22 July 2015 at 21:04:57 UTC, simendsjo wrote:
 :) The example was written to save space. I recon you understand what
 I mean.
Yeah, but the if/else is one of the most useful examples of it, and is covered by ?:, so the whole thing becomes less compelling then. The other places where I've used it in languages that support it are little blocks crammed into a line and sometimes exception grabbing... but still, the value isn't that great.
If we had a clean sheet wouldn't it be better to have if return a value and ditch ternary?
Possibly, but then you'd need to have while return a value. -- Andrei
https://en.wikipedia.org/wiki/Unit_type
I said awkward, not impossible. -- Andrei
It's not awkward.
Jul 24 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 7:49 AM, ixid wrote:
 If we had a clean sheet wouldn't it be better to have if return a value and
 ditch ternary?
Then we'd start seeing code like:

    x = 45 + if (y == 10) {
            while (i--)
                z += call(i);
            z;
        } else {
            switch (x) {
                case 6: foo(); y;
            }
        } + tan(z);

I.e. the embedding of arbitrary statements within expressions. We already have some of this with embedded anonymous lambda support, and I've discovered one needs to be very careful in formatting it to not wind up with an awful unreadable mess. So I'd be really reluctant to continue down that path.

Now, if you want to disallow { } within the embedded if statement, then the proposal becomes nothing more than:

    ? => if
    : => else

which is a potayto potahto thing. I agree that trivial syntax issues actually do matter, but having used ?: a lot, I have a hard time seeing embeddable if-else as a real improvement, in fact I find it more than a little jarring to see.
Jul 23 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Thursday, 23 July 2015 at 20:09:34 UTC, Walter Bright wrote:
 On 7/23/2015 7:49 AM, ixid wrote:
 If we had a clean sheet wouldn't it be better to have if 
 return a value and
 ditch ternary?
Then we'd start seeing code like: x = 45 + if (y == 10) { while (i--) z += call(i); z; } else { switch (x) { case 6: foo(); y; } + tan(z); I.e. the embedding of arbitrary statements within expressions. We already have some of this with embedded anonymous lambda support, and I've discovered one needs to be very careful in formatting it to not wind up with an awful unreadable mess. So I'd be really reluctant to continue down that path. Now, if you want to disallow { } within the embedded if statement, then the proposal becomes nothing more than: ? => if : => else which is a potayto potahto thing. I agree that trivial syntax issues actually do matter, but having used ?: a lot, I have a hard time seeing embeddable if-else as a real improvement, in fact I find it more than a little jarring to see.
I think I agree on the if else issue, seems arbitrary as we already have ?:. Other statements as expressions have less obvious meanings. The only part is that I wish you could have blocks as expressions. The thing is, with UFCS it really should be possible.

For example the following does not compile:

    int a = {return 4;};

but the following does:

    int a = {return 4;}();

I know it's a really small difference, but with UFCS, I would expect you to be able to omit the () and have the function literal called automatically. Though I can see that this would have problems with auto and knowing if it should be a function pointer or to call the function.

I guess what I would expect is "auto a = {return 4;};" to type a to a function pointer, but if you explicitly type a to int then the literal should be called.

Does UFCS even apply to function pointers? I guess it is a problem, it does not seem to be obvious when to call and when to copy the pointer. I don't really know what should happen. I think I read a dip a little while ago that might have addressed this, but I don't really remember. I don't know, now that I have written this, it seems to have more problems than I originally thought.
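
For reference, a small untested sketch of the current behaviour - a function literal evaluates to a function pointer, and the call has to be explicit:

    void main()
    {
        auto f = { return 4; };                  // f is a function pointer
        static assert(is(typeof(f()) == int));   // calling it yields an int

        int a = f();         // or in one go: int a = { return 4; }();
        assert(a == 4);
    }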
Jul 23 2015
next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 24 July 2015 at 00:55:35 UTC, Tofu Ninja wrote:
 On Thursday, 23 July 2015 at 20:09:34 UTC, Walter Bright wrote:
 On 7/23/2015 7:49 AM, ixid wrote:
 If we had a clean sheet wouldn't it be better to have if 
 return a value and
 ditch ternary?
Then we'd start seeing code like: x = 45 + if (y == 10) { while (i--) z += call(i); z; } else { switch (x) { case 6: foo(); y; } + tan(z); I.e. the embedding of arbitrary statements within expressions. We already have some of this with embedded anonymous lambda support, and I've discovered one needs to be very careful in formatting it to not wind up with an awful unreadable mess. So I'd be really reluctant to continue down that path. Now, if you want to disallow { } within the embedded if statement, then the proposal becomes nothing more than: ? => if : => else which is a potayto potahto thing. I agree that trivial syntax issues actually do matter, but having used ?: a lot, I have a hard time seeing embeddable if-else as a real improvement, in fact I find it more than a little jarring to see.
I think I agree on the if else issue, seems arbitrary as we already have ?:. Other statements as expressions have less obvious meanings. The only part is that I wish you could have blocks as expressions. The thing is with ufcs, it really should be possible. For example the following does not compile: int a = {return 4;}; but the following does: int a = {return 4;}(); I know it's a really small difference, but with UFCS, I would expect you the be able to omit the () and have the function literal called automatically. Though I can see that this would have problems with auto and knowing if it should be a function pointer or to call the function. I guess what I would expect is "auto a = {return 4;};" to type a to a function pointer, but if you explicitly type a to int then the literal should be called. Does UFCS even apply to function pointers? I guess it is a problem, it does not seem to be obvious when to call and when to copy the pointer. I don't really know what should happen. I think I read a dip a little while ago that might have addressed this, but I don't really remember. I dont know, now that I have written this, it seems to have more problems than I originally thought.
Actually now that I think about it, I think I would expect

    auto a = { return 4;};

to type a to an int and call the function literal, and

    auto a = &{ return 4;};

to type a to a function pointer. I think that makes sense. Then if a is a function pointer,

    auto b = a;

would type b to a function pointer as well. I suppose UFCS really does not make sense for function pointers, but does make sense for function literals. I expect {return 4;} to just be an anonymous function, not a pointer to an anonymous function. That way you can write

    alias f = {return 4;};

which would just be an alias to a function, which makes sense. I haven't thought about how this would apply to delegates.
Jul 23 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-07-24 02:55, Tofu Ninja wrote:

 I think I agree on the if else issue, seems arbitrary as we already have
 ?:. Other statements as expressions have less obvious meanings. The only
 part is that I wish you could have blocks as expressions. The thing is
 with ufcs, it really should be possible.

 For example the following does not compile:
 int a = {return 4;};

 but the following does:
 int a = {return 4;}();

 I know it's a really small difference, but with UFCS, I would expect you
 the be able to omit the () and have the function literal called
 automatically. Though I can see that this would have problems with auto
 and knowing if it should be a function pointer or to call the function.

 I guess what I would expect is "auto a = {return 4;};" to type a to a
 function pointer, but if you explicitly type a to int then the literal
 should be called.

 Does UFCS even apply to function pointers? I guess it is a problem, it
 does not seem to be obvious when to call and when to copy the pointer. I
 don't really know what should happen. I think I read a dip a little
 while ago that might have addressed this, but I don't really remember. I
 dont know, now that I have written this, it seems to have more problems
 than I originally thought.
How does UFCS apply here? There isn't even a dot in the code. -- /Jacob Carlborg
Jul 24 2015
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 24 July 2015 at 13:22:34 UTC, Jacob Carlborg wrote:
 On 2015-07-24 02:55, Tofu Ninja wrote:

 I think I agree on the if else issue, seems arbitrary as we 
 already have
 ?:. Other statements as expressions have less obvious 
 meanings. The only
 part is that I wish you could have blocks as expressions. The 
 thing is
 with ufcs, it really should be possible.

 For example the following does not compile:
 int a = {return 4;};

 but the following does:
 int a = {return 4;}();

 I know it's a really small difference, but with UFCS, I would 
 expect you
 the be able to omit the () and have the function literal called
 automatically. Though I can see that this would have problems 
 with auto
 and knowing if it should be a function pointer or to call the 
 function.

 I guess what I would expect is "auto a = {return 4;};" to type 
 a to a
 function pointer, but if you explicitly type a to int then the 
 literal
 should be called.

 Does UFCS even apply to function pointers? I guess it is a 
 problem, it
 does not seem to be obvious when to call and when to copy the 
 pointer. I
 don't really know what should happen. I think I read a dip a 
 little
 while ago that might have addressed this, but I don't really 
 remember. I
 dont know, now that I have written this, it seems to have more 
 problems
 than I originally thought.
How does UFCS apply here? There isn't even a dot in the code.
Is omitting the () not part of UFCS? Or does it have some other name? I can never remember.
Jul 24 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 24 July 2015 at 20:46:07 UTC, Tofu Ninja wrote:
 Is omitting the () not part of ufcs? Or does it have some other 
 name, I can never remember.
No. That's simply omitting the parens from a function call that has no arguments. If it has a name, it's just "optional parens." Universal Function Call Syntax is the syntax that allows you to call a free function as if it were a member function, which is why stuff like

    auto result = myRange.find(bar);

works. It _does_ mean that you can drop even more parens, because the first function argument is now to the left of the dot, and the parens are then empty if there was only one function argument, but being able to drop the parens has nothing to do with UFCS. We could still have UFCS and yet never be able to drop parens.

- Jonathan M Davis
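
A small sketch of the two features side by side (untested; answer is just a made-up helper):

    import std.algorithm : find;

    int answer() { return 42; }

    void main()
    {
        auto arr = [10, 20, 30];

        // UFCS: find is a free function, called here with member syntax.
        auto tail = arr.find(20);    // identical to find(arr, 20)
        assert(tail == [20, 30]);

        // Optional parens: a separate feature - a no-argument call
        // can drop its parens, member syntax or not.
        int x = answer;              // same as answer()
        assert(x == 42);
    }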
Jul 24 2015
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 24 July 2015 at 21:29:56 UTC, Jonathan M Davis wrote:
 On Friday, 24 July 2015 at 20:46:07 UTC, Tofu Ninja wrote:
 Is omitting the () not part of ufcs? Or does it have some 
 other name, I can never remember.
No. That's simply omitting the parens from a function call that has no arguments. If it has a name, it's just "optional parens." Universal Function Call Syntax is the syntax that allows you to call a free function as if it were a member function, which is why stuff like auto result = myRange.find(bar); works. It _does_ mean that you can drop even more parens, because the first function argument is now to the left of the dot, and the parens are then empty if there was only one function argument, but being able to drop the parens has nothing to do with UFCS. We could still have UFCS and yet never be able to drop parens. - Jonathan M Davis
Ok, I kinda assumed they were both included in the concept of UFCS; just replace "UFCS" in what I said before with "optional parens". Basically I was just saying that I suppose optional parens do not really make sense on function pointers. Aka if "a" is a function pointer then

    auto b = a;

should type "b" to a function pointer as well, which is how it currently works, afaik. But the part that I don't think makes sense is for

    auto a = {return 4;};

to type "a" to a function pointer. I would expect {return 4;} to be treated as a function (not a function pointer). With it being treated as a function, I would expect it to be called with optional parens and type "a" to an int. I would expect

    auto a = &{return 4;};

to type "a" to a function pointer, which makes much more sense to me. But that's not how function literals work right now. Treating {return 4;} as a function (not a function pointer) makes a lot more sense and allows

    alias a = {return 4;};

to work as well, which is simply a function declaration.
Jul 24 2015
parent "Enamex" <enamex+d outlook.com> writes:
On Friday, 24 July 2015 at 21:44:42 UTC, Tofu Ninja wrote:
 But the part that I don't think makes sense for

      auto a = {return 4;};

 to type "a" to a function pointer. I would expect {return 4;} 
 to be treated as a function(not a function pointer). With it 
 being treated as a function, I would expect it to be called 
 with optional parens and type "a" to an int. I would expect auto

       a = &{return 4;};

 to type "a" to a function pointer, which makes much more sense 
 to me. But that's not how function literals work right now. 
 Treating {return 4;} as a function(not a function pointer) 
 makes a lot more sense and allows

      alias a = {return 4;};

 to work as well, which is simply a function declaration.
Not crazy about your last point, TBH. Personally I really dislike function literals being _just_ `{ ... }` and as a matter of principle only write `(){...}` when I need one. `{}` to me can only mean blocks that are part of the current scope, but they're sometimes that and sometimes lambdas, depending on whether they had any `return`s and are in a place to be assigned a name or immediately called :/

A related thing is having _some way_ to quickly return a value from inside an invoked function literal without `return`. Some stuff can't be done in a one-liner and needs _two_(ish) lines, so you have to write, say, `{ Type myval, myres; res_by_ref(myval, myres); return myres; }()` instead of `{ Type myval, myres; res_by_ref(myval, myres); myres }` (expression-oriented) or `(){ ...; => myres; }()` (hypothetically).

Point is, writing `return` in the middle of a function and having it return _only_ from a lambda breaks the flow, I believe, same as in C++.
Aug 05 2015
prev sibling next sibling parent reply "ixid" <adamsibson hotmail.com> writes:
On Thursday, 23 July 2015 at 20:09:34 UTC, Walter Bright wrote:
 On 7/23/2015 7:49 AM, ixid wrote:
 If we had a clean sheet wouldn't it be better to have if 
 return a value and
 ditch ternary?
Then we'd start seeing code like: x = 45 + if (y == 10) { while (i--) z += call(i); z; } else { switch (x) { case 6: foo(); y; } + tan(z); I.e. the embedding of arbitrary statements within expressions. We already have some of this with embedded anonymous lambda support, and I've discovered one needs to be very careful in formatting it to not wind up with an awful unreadable mess. So I'd be really reluctant to continue down that path.
As opposed to:

    auto n = {
        if (y == 10) {
            return {
                while (i--)
                    z += call(i);
                return z;
            }();
        } else {
            return {
                switch (x) {
                    case 6: return foo();
                    default: return y;
                }
            }();
        }
    }() + tan(z);

You can already do that, it's even uglier.
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 3:06 AM, ixid wrote:
 On Thursday, 23 July 2015 at 20:09:34 UTC, Walter Bright wrote:
 On 7/23/2015 7:49 AM, ixid wrote:
 If we had a clean sheet wouldn't it be better to have if return a value and
 ditch ternary?
Then we'd start seeing code like: x = 45 + if (y == 10) { while (i--) z += call(i); z; } else { switch (x) { case 6: foo(); y; } + tan(z); I.e. the embedding of arbitrary statements within expressions. We already have some of this with embedded anonymous lambda support, and I've discovered one needs to be very careful in formatting it to not wind up with an awful unreadable mess. So I'd be really reluctant to continue down that path.
As opposed to: auto n = { if (y == 10) { return { while (i--) z += call(i); return z; }(); } else { return { switch (x) { case 6: return foo(); default: return y; } }(); } }() + tan(z); You can already do that, it's even uglier.
Nope. As opposed to:

    int r;
    if (y == 10) {
            while (i--)
                z += call(i);
            r = z;
    } else {
            switch (x) {
                case 6:
                    r = foo();
                    break;
                default:
                    r = y;
                    break;
            }
    }

    x = 45 + r + tan(z);
Jul 24 2015
parent "ixid" <adamsibson hotmail.com> writes:
On Friday, 24 July 2015 at 10:15:36 UTC, Walter Bright wrote:
 Nope. As opposed to:

     int r;
     if (y == 10) {
             while (i--)
                 z += call(i);
             r = z;
     } else {
             switch (x) {
                 case 6:
                     r = foo();
 		    break;
                 default:
 		    r = y;
 	            break;
             }
     }

     x = 45 + r + tan(z);
My point was that you can effectively do the ugly thing already in a worse way. I didn't say there aren't neater ways of getting the same functionality in this particular case. Doesn't it demonstrate that expressions returning values can make a given piece of code tidier?
Jul 24 2015
prev sibling next sibling parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Thursday, 23 July 2015 at 20:09:34 UTC, Walter Bright wrote:
 I agree that trivial syntax issues actually do matter, but 
 having used ?: a lot, I have a hard time seeing embeddable 
 if-else as a real improvement, in fact I find it more than a 
 little jarring to see.
Agreed. It makes more sense for switch/match, but D's switch syntax is already quite heavy, an assignment in each branch doesn't add much additional weight.
Jul 24 2015
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/23/2015 10:09 PM, Walter Bright wrote:
 Now, if you want to disallow { } within the embedded if statement, then
 the proposal becomes nothing more than:

      ? => if
      : => else
This is obviously not true: a?b:c => a if b else c // ..?
 which is a potayto potahto thing.
Some people have trouble with the precedence of ?: . Fun fact: D and C++ have different precedence for ?: . :-)
Jul 24 2015
prev sibling next sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Wednesday, 22 July 2015 at 20:43:04 UTC, simendsjo wrote:
 When "everything" is an expressions, you can write things like
     auto a = if(e) c else d;

 In D you have to write
     type a = invalid_value;
     if(e) a = c;
     else  a = d;
     assert(a != invalid_value);
I prefer this example from one of the various Rust tutorials:

    let foo = if x == 5 {
        "five"
    } else if x == 6 {
        "six"
    } else {
        "neither"
    }

You're basically using a conditional expression as an rvalue. You can do the same thing with a { } block.
Jul 22 2015
next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Wednesday, 22 July 2015 at 21:36:58 UTC, jmh530 wrote:
 On Wednesday, 22 July 2015 at 20:43:04 UTC, simendsjo wrote:
 When "everything" is an expressions, you can write things like
     auto a = if(e) c else d;

 In D you have to write
     type a = invalid_value;
     if(e) a = c;
     else  a = d;
     assert(a != invalid_value);
I prefer this example from one of the various Rust tutorials let foo = if x == 5 { "five" } else if x == 6 { "six" } else { "neither" } You're basically using a conditional expression as an rvalue. You can do the same thing with a { } block.
Admittedly nowhere near as clean, but if you can bear to see the "return"s, function literals can turn any bunch of code into an expression:

    auto foo = {
        if(x == 5) return "five";
        else if(x == 6) return "six";
        else return "neither";
    }();

or of course there's the perhaps overly terse (brackets optional, i like them to visually group the condition with the ? ):

    auto foo = (x == 5)? "five"
             : (x == 6)? "six"
             : "neither";
Jul 22 2015
next sibling parent "jmh530" <john.michael.hall gmail.com> writes:
On Wednesday, 22 July 2015 at 22:35:54 UTC, John Colvin wrote:
 Admittedly nowhere near as clean, but if you can bear to see 
 the "return"s, function literals can turn any bunch of code in 
 to an expression:
Honestly, I don't mind doing things the D way (and I probably wouldn't have even done that your way). I just think it's interesting when another programming language has a cool feature.
Jul 22 2015
prev sibling parent Andre Kostur <andre kostur.net> writes:
On 2015-07-22 3:35 PM, John Colvin wrote:
 On Wednesday, 22 July 2015 at 21:36:58 UTC, jmh530 wrote:
 On Wednesday, 22 July 2015 at 20:43:04 UTC, simendsjo wrote:
 When "everything" is an expressions, you can write things like
     auto a = if(e) c else d;

 In D you have to write
     type a = invalid_value;
     if(e) a = c;
     else  a = d;
     assert(a != invalid_value);
I prefer this example from one of the various Rust tutorials let foo = if x == 5 { "five" } else if x == 6 { "six" } else { "neither" } You're basically using a conditional expression as an rvalue. You can do the same thing with a { } block.
Admittedly nowhere near as clean, but if you can bear to see the "return"s, function literals can turn any bunch of code in to an expression: auto foo = { if(x == 5) return "five"; else if(x == 6) return "six"; else return "neither"; }(); or of course there's the perhaps overly terse (brackets optional, i like them to visually group the condition with the ? ): auto foo = (x == 5)? "five" : (x == 6)? "six" : "neither";
Shouldn't that be its own function anyway? If you needed it in one place, you'll probably need it elsewhere. And, in this case, it can even be marked as pure.
Jul 23 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/22/15 5:36 PM, jmh530 wrote:
 On Wednesday, 22 July 2015 at 20:43:04 UTC, simendsjo wrote:
 When "everything" is an expressions, you can write things like
     auto a = if(e) c else d;

 In D you have to write
     type a = invalid_value;
     if(e) a = c;
     else  a = d;
     assert(a != invalid_value);
I prefer this example from one of the various Rust tutorials let foo = if x == 5 { "five" } else if x == 6 { "six" } else { "neither" } You're basically using a conditional expression as an rvalue. You can do the same thing with a { } block.
I used to be quite jazzed about the everything-is-an-expression mantra, but it's not all great.

1. Inferring function return types when everything is an expression (i.e. the last expression there is the return type) may yield WAT results.

2. Defining a result type for loops is awkward.

At the end of the day everything-is-an-expression is natural for functional languages, but it doesn't seem to make a large difference to an imperative language.

To OP: thanks for your rant! Instead of getting defensive we'd do good to derive action items from it.

Andrei
Jul 23 2015
next sibling parent "jmh530" <john.michael.hall gmail.com> writes:
On Thursday, 23 July 2015 at 13:30:49 UTC, Andrei Alexandrescu 
wrote:
 At the end of the day everything-is-an-expression is natural 
 for functional languages, but doesn't seem it makes a large 
 difference to an imperative language.
Good points.
Jul 23 2015
prev sibling next sibling parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Thursday, 23 July 2015 at 13:30:49 UTC, Andrei Alexandrescu 
wrote:
 I used to be quite jazzed about the everything-is-an-expression 
 mantra, but it's not all great.

 1. Inferring function return types when everything is an 
 expression (i.e. last expression there is the return type) may 
 yield WAT results.

 2. Defining a result type for loops is awkward.

 At the end of the day everything-is-an-expression is natural 
 for functional languages, but doesn't seem it makes a large 
 difference to an imperative language.
It also works well for Ruby with its dynamic typing. Function return types don't matter, of course, and for loops you just use whatever the last executed expression happens to be. Anyway, in D we have delegate literals, for the rare cases where it's useful.
Jul 23 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-07-23 15:30, Andrei Alexandrescu wrote:

 1. Inferring function return types when everything is an expression
 (i.e. last expression there is the return type) may yield WAT results.
I have not had that problem with Scala. Either I want to return something and let it be inferred, or I don't and declare it as Unit (void). -- /Jacob Carlborg
Jul 23 2015
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 23 July 2015 at 19:20:20 UTC, Jacob Carlborg wrote:
 On 2015-07-23 15:30, Andrei Alexandrescu wrote:

 1. Inferring function return types when everything is an 
 expression
 (i.e. last expression there is the return type) may yield WAT 
 results.
I have not had that problem with Scala. Either I want to return some thing and let it be inferred, or I don't and declare it as Unit (void).
I had a lot of frustration with that (mis)feature in Rust and find it very unreadable. Because of that, so far I have always used explicit returns in Rust code even if they are not necessary - that allows you to quickly see all the main exit points of the function.

That is mostly a matter of programming culture and hard to reasonably justify in any way. Ironically, that would feel more "at home" in D than in Rust, because normally the latter is much more restrictive and explicit in its code style; such implicit functional syntax sugar feels very alien in typically verbose and detailed code.
Jul 23 2015
parent Jacob Carlborg <doob me.com> writes:
On 2015-07-23 21:26, Dicebot wrote:

 I had a lot of frustration with that (mis)feature and Rust and find it
 very unreadable. Because of that, so far I always used explicit returns
 in Rust code even if it is not necessary - that allows to quickly
 oversee all main exit points of the function.
Perhaps the two languages implement it differently.
 That is mostly a matter of programming culture and hard to reasonably
 justify in any way. Ironically, that would feel more "at home" in D than
 in Rust, because normally the latter is much more restrictive and explicit in
 its code style; such implicit functional syntax sugar feels very alien
 in typically verbose and detailed code.
Yeah, I'm used to Ruby where it's implemented as well. -- /Jacob Carlborg
Jul 24 2015
prev sibling next sibling parent reply "shannon mackey" <refaQtor gmail.com> writes:
"> On Wednesday, 22 July 2015 at 19:41:16 UTC, Jack Stouffer 
wrote:
On Wednesday, 22 July 2015 at 20:43:04 UTC, simendsjo wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Expressions
 -----------
 This probably also falls in the "too late" category, but 
 statements-as-expressions is really nice. `auto a = if ...` 
 <- why not?
Don't quite know what you mean here.
When "everything" is an expressions, you can write things like auto a = if(e) c else d; In D you have to write type a = invalid_value; if(e) a = c; else a = d; assert(a != invalid_value); "
I frequently do things like this in D:

   auto good = ( true == true ) ? "true" : "false";

which looks a lot like your Rust example? Is that not what you were looking for?
Jul 23 2015
parent "shannon mackey" <refaQtor gmail.com> writes:
On Thursday, 23 July 2015 at 15:37:24 UTC, shannon mackey wrote:
 "> On Wednesday, 22 July 2015 at 19:41:16 UTC, Jack Stouffer 
 wrote:
 On Wednesday, 22 July 2015 at 20:43:04 UTC, simendsjo wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 [...]
Don't quite know what you mean here.
When "everything" is an expressions, you can write things like auto a = if(e) c else d; In D you have to write type a = invalid_value; if(e) a = c; else a = d; assert(a != invalid_value); "
I frequently do things like this in D: auto good = ( true == true ) ? "true" : "false"; which looks a lot like your Rust example? Is that not what you were looking for?
Sorry, I didn't read to the end of the thread, and this has been covered many times.
Jul 23 2015
prev sibling parent "bachmeier" <no spam.com> writes:
On Wednesday, 22 July 2015 at 20:43:04 UTC, simendsjo wrote:

 Let me add a point for Rust somewhat related :)

 Community
 ---------
 The community is nice, helpful and doesn't condescend to people.
I'd challenge you to write something other than 100% praise for Rust and see how nice the community is. Not that their community is bad, but they won't be winning any awards. Disclaimer: I gave up on Rust quite a while ago.
 I disagree. String mixins are much easier to abuse than hygienic
 macros. String mixins allow anything, and while they offer
 infinite possibilities, they also encourage abuse.
As someone that used to spend a lot of time with Lisp, I find it funny that macros are promoted as a way to avoid abuse of language features. On the issue of hygienic macros, there's a reason the major Scheme implementations still offer Common Lisp-style macros, in spite of the obvious theoretical advantages.
Jul 23 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-07-22 21:41, Jack Stouffer wrote:

 Dub is endorsed by the leadership and is included in the same Github
 organization as the compiler and the standard library
It's not available in the Downloads, Compiler & Tools or Resources sections on dlang.org, so I disagree. -- /Jacob Carlborg
Jul 22 2015
parent reply "Joakim" <dlang joakim.fea.st> writes:
On Thursday, 23 July 2015 at 06:50:44 UTC, Jacob Carlborg wrote:
 On 2015-07-22 21:41, Jack Stouffer wrote:

 Dub is endorsed by the leadership and is included in the same 
 Github
 organization as the compiler and the standard library
It's not available in the Downloads, Compiler & Tools or Resources sections on dlang.org, so I disagree.
That's because it has its own top-level link in the sidebar, More libraries. You could argue that's not the best description of dub, but it's certainly prominently placed.
Jul 23 2015
parent Jacob Carlborg <doob me.com> writes:
On 2015-07-23 09:45, Joakim wrote:

 That's because it has its own top-level link in the sidebar, More
 libraries.  You could argue that's not the best description of dub, but
 it's certainly prominently placed.
Ah, right, I completely forgot about that. But one could expect the actual tool to be available for download from dlang.org as well. -- /Jacob Carlborg
Jul 23 2015
prev sibling next sibling parent reply "Alex Parrill" <initrd.gz gmail.com> writes:
I'm not at all familiar with Rust, so forgive me if I'm 
misinterpreting something.

On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Cargo
 -----
 Rust has a default package manager much like Dub. The main 
 difference is that Cargo has been endorsed by the Rust team and 
 is an official product.
I think I read that this may happen soon.
 Traits
 ------
 ...
You can make a `conformsToSomeInterface!T` template, and use `static assert`. D ranges, and the upcoming std.allocator, already use this sort of 'interfaces without polymorphism'. Ex. `static assert(isInputRange!(MyCoolRange));`
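For concreteness, a minimal sketch of that pattern (MyCoolRange is a made-up type, not anything from Phobos):

    import std.range.primitives : isInputRange;

    struct MyCoolRange
    {
        int[] data;
        bool empty() const { return data.length == 0; }
        int front() const { return data[0]; }
        void popFront() { data = data[1 .. $]; }
    }

    // the "contract" check sits next to the type (or in a unittest):
    static assert(isInputRange!MyCoolRange);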
 Macros
 ------
 ...
Most of what macros in C were used for are now done with templates, static if, etc. (I don't know how Rust's macros work). Tools could theoretically execute `mixin`, but it effectively requires a D interpreter. A library to do that would be really nice.
 Borrowing
 ---------
 ...
Look into `std.typecons.Unique`, though I've seen people posting that they don't like it (I haven't used it much; I had one use case for it, which was sending it through `std.concurrency.send`, but it doesn't work with that function). Yes, D's community is pretty small. It's not something you can just code; you have to market the language. And it's the community that creates the many tools and packages.
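For reference, a small sketch of how std.typecons.Unique is meant to be used, mirroring the Phobos documentation example rather than the std.concurrency case mentioned above (Resource is a made-up type):

    import std.typecons : Unique;

    struct Resource
    {
        int id;
        this(int id) { this.id = id; }
    }

    Unique!Resource produce()
    {
        Unique!Resource u = new Resource(42); // sole owner of the heap allocation
        return u;                             // ownership moves to the caller
    }

    void main()
    {
        auto r = produce();
        // copying a Unique is disabled; hand ownership off explicitly with r.release()
    }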
Jul 22 2015
next sibling parent reply "simendsjo" <simendsjo gmail.com> writes:
On Wednesday, 22 July 2015 at 19:52:34 UTC, Alex Parrill wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Traits
 ------
 ...
You can make a `conformsToSomeInterface!T` template, and use `static assert`. D ranges, and the upcoming std.allocator, already use this sort of 'interfaces without polymorphism'. Ex. `static assert(isInputRange!(MyCoolRange));`
Exactly. D is a lot more flexible, but looking just at MyCoolRange, you cannot actually see it conforms to InputRange without looking at the unittests (or wherever else the static assert is inserted).

On Wednesday, 22 July 2015 at 19:52:34 UTC, Alex Parrill wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Borrowing
 ---------
 ...
Look into `std.typecons.Unique`, though I've seen people posting that they don't like it (I haven't used it much; I had one use case for it, which was sending it through `std.concurrency.send`, but it doesn't work with that function).
I haven't actually tried to recreate the stuff that Rust does with types in D. I expect some of it is possible, but it requires additions like Unique (which I haven't tried). And this escalates: you end up with `Unique!(NonNull!(MyClass))` and probably even more type decorators.

On Wednesday, 22 July 2015 at 19:52:34 UTC, Alex Parrill wrote:
 Yes, D's community is pretty small. It's not something you can 
 just code; you have to market the language. And it's the 
 community that creates the many tools and packages.
If I'm not mistaken, people of the D community have tried to market the language quite heavily. I don't know why more people haven't joined, and it's even more baffling to see the comments on Reddit calling D related posts spam and speaking negatively of the marketing on a site where upvotes dictate the ranking on the front page.
Jul 22 2015
parent reply "John" <john.joyus gmail.com> writes:
On Wednesday, 22 July 2015 at 20:50:53 UTC, simendsjo wrote:
 If I'm not mistaken, people of the D community have tried to
 market the language quite heavily. I don't know why more people
 haven't joined, and it's even more baffling to see the comments
 on Reddit calling D related posts spam and speaking negatively
 of the marketing on a site where upvotes dictate the ranking on
 the front page.
I wish all the D related posts went under the sub-reddit https://www.reddit.com/r/dlang

dlang is a familiar name due to dlang.org itself. Also, the pattern is easy to guess, like golang.

There may be tens of sub-reddits for D, but they all look non-standard and confusing at best.
Jul 28 2015
parent "John" <john.joyus gmail.com> writes:
On Tuesday, 28 July 2015 at 15:39:13 UTC, John wrote:
 I wish all the D related posts went under the sub-reddit
 https://www.reddit.com/r/dlang

 dlang is a familiar name due to dlang.org itself. Also, the 
 pattern is easy to guess, like golang.

 There may be tens of sub-reddits for D, but they all look 
 non-standard and confusing at best.
To make my point more clear: the other language groups post their announcements to their respective sub-reddits like r/rust, r/golang etc., while the D group tries to post *everything* directly to r/programming. This is what makes them call it spam.
Jul 29 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-07-22 21:52, Alex Parrill wrote:

 You can make a `conformsToSomeInterface!T` template, and use `static
 assert`. D ranges, and the upcoming std.allocator, already use this sort
 of 'interfaces without polymorphism'.

 Ex. `static assert(isInputRange!(MyCoolRange));`
It would be a lot cleaner to be able to do this:

void foo (T : MyCoolRange) (T range);

Or just:

void foo (MyCoolRange range);
 Most of what macros in C were used for are now done with templates,
 static if, etc. (I don't know how Rust's macros work). Tools could
 theoretically execute `mixin`, but it effectively requires a D
 interpreter. A library to do that would be really nice.
Macros in C have nothing to do with macros in Rust, which are called AST macros or syntax macros. It's unfortunate that they use the same word, "macro". If D had AST macros you could do something like this:

auto person = Person.where(e => e.name == "Foo");

where the lambda would be translated to an SQL string and used to perform a query against a database.

-- /Jacob Carlborg
Jul 22 2015
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
Traits system is awesome and pure win. Pattern matching is not 
that game changing but helps often enough to feel important. 
Borrowship system is damn smart but totally impractical for most 
real-world cases. Macros are utterly horrible and pretty much 
unusable outside of advanced library internals.

Recently I attended local Rust meetup for curious newcomers - it 
was very interesting to observe reaction of unbiased devs not 
familiar with D at all. General reaction was "this is awesome 
interesting language that I would never use for any production 
system unless I am crazy or can throw away money like crazy". 
Because, well, productivity.

D has done many things wrong, but there is one right thing that 
totally outshines it all - it is cost-effective and pragmatical 
tool for a very wide scope of applications.
Jul 22 2015
next sibling parent reply "simendsjo" <simendsjo gmail.com> writes:
On Wednesday, 22 July 2015 at 19:54:05 UTC, Dicebot wrote:
 Traits system is awesome and pure win.
Agreed.
 Pattern matching is not > that game changing but helps often
 enough to feel important.
The fact that you can use pattern matching in many places makes it very much a win.

if let Some(InnerClass::SomeType(value)) = some_call() {
    // here you can use `value`
}
 Borrowship system is damn smart but totally impractical for 
 most real-world cases.
I haven't used Rust enough to really have a voice on the subject. It looks like a paradigm shift, and it might only take some getting used to, but it might also be very difficult to use. There is some big stuff written in Rust though - the Rust compiler and the Servo browser engine. The fact that it makes a lot of errors impossible is the exciting thing for me.
 Macros are utterly horrible and pretty much unusable outside
 of advanced library internals.
Not sure what you are referencing here. Macros expand to code. If you compare this to string mixins, macros are a lot easier for tool writers, but a lot less powerful.
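A tiny made-up illustration of the difference: the mixed-in method below only exists once CTFE has produced the string, which is exactly what makes string mixins hard for tools to follow.

    string makeGetter(string name)
    {
        return "int " ~ name ~ "() { return _" ~ name ~ "; }";
    }

    struct Point
    {
        private int _x;
        mixin(makeGetter("x"));   // expands to: int x() { return _x; }
    }

    void main()
    {
        auto p = Point(3);
        assert(p.x == 3);
    }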
 Recently I attended local Rust meetup for curious newcomers - 
 it was very interesting to observe reaction of unbiased devs 
 not familiar with D at all. General reaction was "this is 
 awesome interesting language that I would never use for any 
 production system unless I am crazy or can throw away money 
 like crazy". Because, well, productivity.
I'm having some problems interpreting this. This is people at a Rust meetup - in other words, early adopters. And they think D is crazy "because productivity"? I don't understand what you mean.
 D has done many things wrong, but there is one right thing that 
 totally outshines it all - it is cost-effective and pragmatical 
 tool for a very wide scope of applications.
Yes, D is pragmatic and extremely powerful and meldable to every usecase.
Jul 22 2015
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, 22 July 2015 at 21:03:52 UTC, simendsjo wrote:
 On Wednesday, 22 July 2015 at 19:54:05 UTC, Dicebot wrote:
 Recently I attended local Rust meetup for curious newcomers - 
 it was very interesting to observe reaction of unbiased devs 
 not familiar with D at all. General reaction was "this is 
 awesome interesting language that I would never use for any 
 production system unless I am crazy or can throw away money 
 like crazy". Because, well, productivity.
I'm having some problems interpreting this. This is people at a Rust meetup - in other words, early adopters. And they think D is crazy "because productivity"? I don't understand what you mean.
What I understood him to mean was that it was interesting to see the reaction of folks who have nothing to do with D when they learned about Rust. So, they weren't likely colored by whatever preconceptions you'd get out of someone who's already invested in D. They were reacting to Rust without D being in the picture at all. And their reaction to Rust was that it was cool and interesting, but that it would be crazy to use it in production, because it's not productive (or at least, they didn't think that it was) - presumably because some of its features make it too hard to use, but Dicebot would have to elaborate. - Jonathan M Davis
Jul 22 2015
prev sibling next sibling parent "Dicebot" <public dicebot.lv> writes:
On Wednesday, 22 July 2015 at 21:03:52 UTC, simendsjo wrote:
 Macros are utterly horrible and pretty much unusable outside
 of advanced library internals.
Not sure what you are referencing here. Macros expand to code. If you compare this to string mixins, they are a lot easier for tool writers, but a lot less powerful.
Compared to string mixins they require much, much more effort to learn and reason about. In our meetup group no one was able to figure out the meaning of any small sample macro without a detailed explanation. Compared to string mixins they are much harder to abuse because of their inherent hygiene, but also much harder to start using, while in D the concept comes quickly and naturally but requires discipline to be used in a maintainable fashion. And they definitely don't feel any easier for library writers, from my personal experience. More maintainable - yes, but not easier at all.
 Recently I attended local Rust meetup for curious newcomers - 
 it was very interesting to observe reaction of unbiased devs 
 not familiar with D at all. General reaction was "this is 
 awesome interesting language that I would never use for any 
 production system unless I am crazy or can throw away money 
 like crazy". Because, well, productivity.
I'm having some problems interpreting this. This is people at a Rust meetup - in other words, early adopters. And they think D is crazy "because productivity"? I don't understand what you mean.
Not early adopters, more like a curious group learning new stuff together. And "crazy" was about Rust and how impractical it is for the real business needs they have. No one ever mentioned D there.
Jul 22 2015
prev sibling parent "Brad Anderson" <eco gnuk.net> writes:
On Wednesday, 22 July 2015 at 21:03:52 UTC, simendsjo wrote:
 On Wednesday, 22 July 2015 at 19:54:05 UTC, Dicebot wrote:
 Macros are utterly horrible and pretty much unusable outside
 of advanced library internals.
Not sure what you are referencing here. Macros expand to code. If you compare this to string mixins, they are a lot easier for tool writers, but a lot less powerful.
I've read that someone managed to implement compile-time regex using macros in Rust. The author of it noted that D was the only other language he knew of that had pulled that off. The fact that it's expressive enough to pull off one of Phobos' coolest tricks is impressive, I think. I don't know enough about it to have my own opinion of how it fares against D in general. It could very well be that doing stuff like that is far beyond anyone but the most advanced users, though I don't think many D users could pull off what Dmitry has done either.
Jul 23 2015
prev sibling next sibling parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Wednesday, 22 July 2015 at 19:54:05 UTC, Dicebot wrote:
 D has done many things wrong, but there is one right thing that 
 totally outshines it all - it is cost-effective and pragmatical 
 tool for a very wide scope of applications.
+1, from a business D user! --- Paolo
Jul 23 2015
prev sibling parent reply "Laeeth Isharc" <laeethnospam nospamlaeeth.com> writes:
Dicebot:
  D has done many things wrong, but there is one right thing that
 totally outshines it all - it is cost-effective and pragmatical 
 tool for a very wide scope of applications.
Would you care to write more on this as a blog post, making it vivid and setting out some examples? That's my tentative judgement too, and why I am here, but I lean very heavily on my intuition and that's not enough to persuade other people when I don't yet have mastery of the relevant domain knowledge, having returned to programming quite recently. I am talking to another decent-sized hf that might be open to exploring the use of D, but the more vivid people are able to make the case, the better.
Jul 23 2015
parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 23 July 2015 at 16:46:01 UTC, Laeeth Isharc wrote:
 Dicebot:
  D has done many things wrong, but there is one right thing that
 totally outshines it all - it is cost-effective and 
 pragmatical tool for a very wide scope of applications.
would you care to write more on this as a blog post, making it vivid and setting out some examples? that's my tentative judgement too, and why I am here, but I lean very heavily on my intuition and that's not enough to persuade other people when I don't yet have mastery of the relevant domain knowledge, having returned to programming quite recently. I am talking to another decent-sized hf that might be open to exploring the use of D, but the more vivid people are able to make the case the better.
AFAIR Don had quite a good summary at DConf 2013 (http://dconf.org/2013/talks/clugston.html) of how it applies to our business. I guess that presentation can still be used as a selling point.

I like to put it this way: only very few Sociomantic developers knew at least something about D before joining the company. They were mostly C++/Java/whatever developers who learned everything on the spot. We haven't even had any special training courses - just giving people one book (Learn to Tango with D) and a few weeks of time to experiment was usually enough to start writing some production code, learning more advanced stuff later on a per-need basis from reviewer comments.

Considering the growing deficit of skilled programmers in the industry in general, being able to kickstart people into a new language like that is a very big deal for business. C-style syntax brings familiarity, and being able to write working apps with simple concepts only (even if they are un-idiomatic in "modern D") greatly improves the learning curve.

No matter how much effort is put into tooling or documentation, I simply can't see Rust ever being used like that. Well, unless it gets studied commonly as part of a computer science BSc and most new devs are at least familiar with it. Writing a simple number-guessing app (like the one presented in the Rust book) can easily take half an hour even for an experienced (but new to Rust) developer - it simply won't compile until you get every single smallest bit _right_. In a small to medium business you simply can't afford investments like that.

I see Rust as a possible language of choice for a very small (but important) subset of applications of big enough size that maintenance costs are much greater than development costs AND both performance and safety matter. AAA games, life-critical real-time software, software monsters like your next Firefox or Photoshop. That is a big enough share of the market in terms of money for the language to stay in demand, but it is a tiny minority of applications in terms of pure project count. It is clearly not your average next project.
Jul 23 2015
prev sibling next sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Traits
 ------
 I think the ability to express an interface without buying into 
 inheritance is the right move. The alternative in D is 
 specifying the behavior as a template and verifying the 
 contract in a unittest for the type.
Rust only has structs. I'm not as familiar with them, so it's not as clear how they overlap with D's structs and classes. It seems like you can put Rust structs on the stack or heap.

Others have already commented on how you can basically accomplish the same thing with templates, static ifs, and assert. That was my first thought as well. Nevertheless, I would add that Rust's ability to use multiple traits in template constraints is a positive.

However, it seems like the essence of a trait in Rust is forcing some user-defined type to implement some functionality. That sounds a lot like an interface in D to me. In fact, Rust's traits seem different from Scala and PHP traits in that I don't see any examples with them using implementation. The two biggest differences between Rust traits and D interfaces are 1) inheritance of Rust traits (to my knowledge D does not have inheritance for interfaces, am I wrong?), 2) Rust traits can be easily used in template constraints. Those both seem like positives to me, though I'm not sure if they can be easily changed in D.

If you look at the Rust book's main example on traits, they use them to implement a print_area function that can only be called with structs that have a trait HasArea defined. Classes in pretty much any OOP language could use inheritance to implement a simpler version where print_area is a method instead of a separate function.

To play devil's advocate, writing a bunch of code with templates/static ifs/assert is definitely more confusing for someone new to the language than Rust's traits are. A lot of the D code makes quite a bit of sense when you understand how the different ideas combine together, but it takes a while to figure it out.

If traits were deemed important enough to add to D, I would suggest just extending interfaces so that they have inheritance and can be easily used in template constraints. I would be more interested in seeing some of these OOP features come to structs, but I have no idea how much work that requires. I feel like alias this is hacky compared to real inheritance.
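As a hedged sketch of the comparison, here is a D analogue of the Rust book's HasArea/print_area example using an interface in a template constraint (the names come from that example; the D code is only illustrative):

    import std.stdio : writeln;

    interface HasArea
    {
        double area();
    }

    class Circle : HasArea
    {
        double radius;
        this(double r) { radius = r; }
        double area() { return 3.141592653589793 * radius * radius; }
    }

    // only types convertible to HasArea are accepted
    void printArea(T : HasArea)(T shape)
    {
        writeln("This shape has an area of ", shape.area());
    }

    void main()
    {
        printArea(new Circle(1.0));
    }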
 Algebraic data types
 --------------------
 Haven't looked into `Algebraic!`, so I won't start bashing D 
 here :) But given the lack of pattern matching, I doubt it will 
 be as pretty as Rust.
It seems like D's Algebraic does not allow for recursive Algebraic Types currently, whereas Rust's does. I'm honestly not that familiar with the concept. Seems cool, but I'm not sure I've ever really needed it.
 Borrowing
 ---------
 This is probably the big thing that makes Rust really 
 different.  Everything is a resource, and resources have an 
 owner and a lifetime. As a part of this, you can either have 
 multiple aliases with read-only references, or a single 
 reference with a writeable reference. I won't say I have a lot 
 of experience with this, but it seems like it's not an 
 extremely unergonomic trade-off. I cannot even remotely imagine 
 the amount of possible compiler optimizations possible with 
 this feature.
I feel like it's hard to separate borrowing from Rust's variety of pointers (& is borrowed pointer, ~ is for unique pointer, @ is for managed pointer). Nevertheless, I think having them part of the language with relatively straightforward syntax is a positive for Rust. Also, my understanding is that they are checked at compile time for safety (at least unique and managed).

However, I feel like borrowing and lifetimes have a bit of a learning curve to them. I've read a few tutorials without getting a good sense of them. I would probably need to program a bit in it to grok it. I think part of it is that Rust has = potentially meaning copy or move depending on whether it is defined. Perhaps they need a separate move assignment, like <- and -> or something (to steal syntax from R, even though that's not what <- and -> do in R).
Jul 22 2015
next sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Wednesday, 22 July 2015 at 23:25:56 UTC, jmh530 wrote:
 If traits were deemed important enough to add to D, I would 
 suggest just extending interfaces so that they have inheritance 
 and can be easily used in template constraints.
I think I had a few misconceptions about D interfaces. The documentation does not say anything about interface inheritance, but it seems to work when I tried it. It definitely seems weird that this behavior isn't mentioned anywhere.

Also, the use of static* or final methods in the interface can allow default implementations of methods. This is something that can be done with Scala traits. I feel like this isn't documented as well as it could be.

A weirder thing is that when I tried to test that static and final methods could not be overridden (which is allowed in Scala), it seemed like they were getting overridden. The tricky part is that I had been using something like final foo() { } instead of final void foo() { }. For some reason you can override the first, but not the second.

So really, I guess D interfaces are more powerful than I had thought. The only downside is that they only work for classes.

* I feel like these also aren't particularly well described in the documentation either.
Jul 23 2015
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 23 July 2015 at 15:20:36 UTC, jmh530 wrote:
 It definitely seems weird that this behavior isn't
 mentioned anywhere.
It isn't really expanded upon, but the ": Interfaces" at the top here tells you that you can inherit them:

http://dlang.org/interface.html

It doesn't show an example in the text of an interface extending another interface, but it does say "interfaces can be inherited".
 Also, the use of static* or final methods in the interface can 
 allow default implementations of methods.
Eh, sort of, but not really. static and final methods cannot be overridden in the virtual sense.

interface Foo {
    final void foo() { writeln("foo"); }
}

class Bar : Foo {
    override void foo() { writeln("foo from bar"); }
}

b.d(8): Error: function b.Bar.foo does not override any function, did you mean to override 'b.Foo.foo'?
b.d(8): Error: function b.Bar.foo cannot override final function Foo.b.Foo.foo

Static is similar; change those finals to static and you get:

b.d(8): Error: function b.Bar.foo cannot override a non-virtual function

Take override off and it compiles... but is statically dispatched:

Foo bar = new Bar();
bar.foo(); // calls the version from the interface

Bar bar = new Bar();
bar.foo(); // calls the version from the class

That's the main thing with interfaces: you are supposed to get the overridden method when you call it through the base, but that only happens if it is virtual - neither final nor static. And those cannot have default implementations in a D interface, only in a class.
 The tricky part is that I had been using something like final 
 foo() { } instead of final void foo() { }. For some reason you 
 can override the first, but not the second.
You just defined a template by skipping the return type... templates are allowed in D interfaces and classes, but are always implicitly final, and thus get the behavior above - if you call it through the interface, you get a different func than calling it through the class.

BTW you can also check interface conversion for objects in a template function and get static dispatch that way... sort of:

void something(T : Foo)(T bar) {
        bar.foo();
}
void main() {
        Foo bar = new Bar();  // explicitly using the interface
        something(bar); // leads to this calling "foo"

        // but if you did "auto bar = new Bar();" or "Bar bar"
        // it would call "foo from bar"
}
Jul 23 2015
parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Thursday, 23 July 2015 at 15:38:53 UTC, Adam D. Ruppe wrote:
 It isn't really expanded upon, but the ": Interfaces" at the top 
 here tells you that you can inherit them:

 http://dlang.org/interface.html

 It doesn't show an example in the text of an interface 
 extending another interface, but it does say "interfaces can be 
 inherited".
I see that now. It's there, but it's practically a throw-a-way line. I've looked at that page a bunch of times and skipped over it each time. This is a broader problem with D's reference pages.

Compare
http://dlang.org/interface.html
with
https://doc.rust-lang.org/book/traits.html
which is better?

Rust's documentation uses clear examples to show how something should be used and the most important features. By contrast, there are many parts of D's documentation that someone with less-than-expert programming knowledge will find quite difficult to understand.

For instance, look at http://dlang.org/function.html - you have to scroll down for quite a while before you even get to anything useful. Then, what's the first thing that comes up? Something about contracts, return values, bodies, purity. Not "this is what a D function is".

I'm all for a complete reference of every D feature, but there needs to be some step up in terms of difficulty of the material. Start with the basics, then work up into all the specifics. I'd say at a minimum some of this documentation needs to be broken up into several pages.

There seem to be a lot of places where I could improve the D documentation. However, I feel like if I start going crazy with changes I might get some push-back...
 BTW you can also check interface conversion for objects in a 
 template function and get static dispatch that way... sort of:

 void something(T : Foo)(T bar) {
         bar.foo();
 }
 void main() {
         Foo bar = new Bar();  // explicitly using the interface
         something(bar); // leads to this calling "foo"

         // but if you did "auto bar = new Bar();" or "Bar bar"
        // it would call "foo from bar"
 }
This is cool and I wasn't aware that you could do this. I was trying to use std.traits' InterfacesTuple to test that a class had a particular interface.

However, I also find it a bit confusing. I don't understand precisely what it means when the interface is on the left. It's like you're using the class to create an object of a type the same as the interface... Nevertheless, the interface page says you can't do this:

D d = new D();

where D is an interface. However, it also says that you can do:

D d = cast(D) b;

where b is an object of some other class B that inherits the interface D. Your example does not fit either of these cases.

Then, if you change Foo to not be final and Bar to not override and run it, it will call "foo from bar" regardless of whether you do

Foo bar = new Bar();

or

Bar bar = new Bar();
Jul 23 2015
next sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Jul 23, 2015 at 04:54:50PM +0000, jmh530 via Digitalmars-d wrote:
[...]
 I see that now. It's there, but it's practically a throw-a-way line.
 I've looked at that page a bunch of times and skipped over it each
 time. This is a broader problem with D's reference pages.
 
 Compare
 http://dlang.org/interface.html
 with
 https://doc.rust-lang.org/book/traits.html
 which is better?
 
 Rust's documentation uses clear examples to show how something should
 be used and the most important features. By contrast, there are many
 parts of D's documentation that someone with less-than-expert
 programming knowledge will find quite difficult to understand.
 
 For instance, look at http://dlang.org/function.html you have to
 scroll down for quite a while before you even get to anything useful.
 Then, what's the first thing that comes up? Something about contracts,
 return values, bodies, purity. Not "this is what a D function is".
 
 I'm all for a complete reference of every D feature, but there needs
 to be some step up in terms of difficulty of the material. Start with
 the basics, then work up into all the specifics. I'd say at a minimum
 some of this documentation needs to be broken up into several pages.
 
 There seem to be a lot of places where I could improve the D
 documentation.  However, I feel like if I start going crazy with
 changes I might get some push-back...
[...] Please bring on the PR's, we need all the help we can get! As for push-back... as long as each PR is focused on one thing, e.g., improving the prose, or adding a beginner-friendly example to the top of the page, I think it will be acceptable. Just don't put too many changes into one PR (reformat the source, fix whitespace, rewrite 5 disparate paragraphs, etc., you know what I mean). T -- Notwithstanding the eloquent discontent that you have just respectfully expressed at length against my verbal capabilities, I am afraid that I must unfortunately bring it to your attention that I am, in fact, NOT verbose.
Jul 23 2015
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 12:54 PM, jmh530 wrote:
 There seem to be a lot of places where I could improve the D
 documentation. However, I feel like if I start going crazy with changes
 I might get some push-back...
Focused, specialized, and clear pull requests are the way to go here. -- Andrei
Jul 23 2015
prev sibling next sibling parent "jmh530" <john.michael.hall gmail.com> writes:
On Thursday, 23 July 2015 at 16:54:52 UTC, jmh530 wrote:
 This is cool and I wasn't aware that you could do this. I was 
 trying to use std.traits' InterfacesTuple to test that a class 
 had a particular interface.
Again, I just found a reference to this behavior in the template section where it talks about Argument Deduction. No idea it was there. There were two things that I thought Rust's traits could do, but D's interfaces couldn't do. Now it seems like D's interfaces could do both of them.
Jul 23 2015
prev sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 23 July 2015 at 16:54:52 UTC, jmh530 wrote:
 Rust's documentation uses clear examples to show how something 
 should be used and the most important features. By contrast, 
 there are many parts of D's documentation that someone with 
 less-than-expert programming knowledge will find quite 
 difficult to understand.
Yeah, I agree. Here's what I've been trying to write:

A template for tutorials:
        show how to do it
        link to features as they are discussed

Feature template:
        What it is
                so like a mixin template is a list of declarations with names
        Core concept in usage
                mixin template example, basic use
                use with templates
                use with a name to mix in overloads
        Where it is used in tutorials and other places IRL

That's the skeleton outline for how I want to do docs. The "what it is" we kinda have now, though it could often be clearer, but we don't have the nice examples (well, outside of things like my book) and we certainly don't have the top-level how to do X in the whole ecosystem, and the interlinking to learn more.

Of course, this goes slowly because answering questions is kinda fast, but making it archivable and presentable in depth is a slow process and I just have a hundred other things to do too :(

speaking of which, work meeting in 30 mins, and my presentation isn't ready yet, i shouldn't be on this forum at all right now!
 I'm all for a complete reference of every D feature, but there 
 needs to be some step up in terms of difficulty of the 
 material. Start with the basics, then work up into all the 
 specifics. I'd say at a minimum some of this documentation 
 needs to be broken up into several pages.
That would be good. I also think a top-level "the boss wants you to do X and needs it by end of day, follow these steps to learn how and read these links to dive deeper" would be super useful. They should feed into each other.
 However, I also find it a bit confusing. I don't understand 
 precisely what it means when the interface is on the left.
You always create objects, but you can work with interfaces.

Ideally, in OOP theory, your functions always work with just interfaces and they don't care what specific class was constructed in the user code. This makes those functions most reusable in the paradigm.

So for the local variable, using an interface doesn't make as much sense, but it is common to write:

void doSomething(MyInterface i) { /* work with i */ }
void main() {
        Foo bar = new Foo();
        doSomething(bar); // bar becomes a MyInterface for this call
}

which would work the same way. Objects will implicitly convert to their interfaces, so you can pass them around to those generic functions. D also has templates which do something similar at first glance, but work on an entirely different principle...

For the interface/class thing, you can read about it in Java tutorials or documentation. D's system is very similar to Java's, so if you understand that foundation a lot more of it will make sense. Then the next level is the final things and the templates and how they interact, which I'm touching the surface of here.
 cases. Then, if you change Foo to not be final and Bar to not 
 override and run it, it will call "foo from bar" regardless of 
 whether you do
 Foo bar = new Bar();
 or
 Bar bar = new Bar();
yeah, if it is non-final, the child implementation gets called. This allows you to substitute a derived class for the base class or interface while getting new behavior.
Jul 23 2015
parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Thursday, 23 July 2015 at 17:34:45 UTC, Adam D. Ruppe wrote:
 Feature template:
         What it is
                 so like a mixin template is a list of 
 declarations with names
         Core concept in usage
                 mixin template example, basic use
                 use with templates
                 use with a name to mix in overloads
         Where it is used in tutorials and other places IRL




 That's the skeleton outline for how I want to do docs. The 
 "what it is" we kinda have now, though it could often be 
 clearer, but we don't have the nice examples (well, outside of 
 things like my book) and we certainly don't have the top-level 
 how to do X in the whole ecosystem, and the interlinking to 
 learn more.
Definitely sounds interesting.
 That would be good. I also think a top-level "the boss wants 
 you to do X and needs it by end of day, follow these steps to 
 learn how and read these links to dive deeper" would be super 
 useful. They should feed into each other.
I actually just had your D Cookbook delivered a few days ago. Beyond a few cases of extra semi-colons, one of the things that really stood out was where you took something I would find very complicated, described the few key steps to accomplish the complicated task, and then followed up with a bunch of code that does those steps. I like that teaching method.
Jul 23 2015
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thursday, 23 July 2015 at 17:43:49 UTC, jmh530 wrote:
 I actually just had your D Cookbook delivered a few days ago. 
 Beyond a few cases of extra semi-colons
That gets better! I didn't realize how the editing process worked when I turned in chapter one, so it is embarrassingly bad in places, an absolutely horrible start, but I figured it out after that, so the code gets a lot prettier.
 one of the things that really stood out was where you took 
 something I would find very complicated, described the few key 
 steps to accomplish the complicated task, and then followed up 
 with a bunch of code that does those steps. I like that 
 teaching method.
I think a strong foundation - knowing the basics really well - is the key to being a good programmer. Most hard things can be broken down into a combination of a few basic things. Andrei's "design by introspection" hits this same principle: looking at pieces and just working with them, knowing what they can and can't do one part at a time, is easier than trying to name and master all the various combinations of them.

So that was what I wanted to do in my book: start with something bigger, but focus on the building blocks with the hope that you'll be able to reassemble them into something new for yourself in the end.

A concrete example: consider std.typecons.Typedef. I think that is totally useless, because if you understand what it does, it is trivial to do your own with various customizations specific to you (like allowing or disallowing implicit conversions, disabling individual operators, etc). Or just doing the basics is easy if you know how it is built (it is really just a plain struct!).

Whereas if you don't understand the building blocks, you're lost when you need to customize it somehow (and there are too many options to reasonably name each of them in Phobos)... and you are also likely to be surprised by bugs when they come up, like with the Typedef cookie parameter. So knowing structs is IMO far more valuable knowledge than knowing Typedef.

But at the same time, having pre-made kits is helpful too, especially as starting points, so when I get to my tutorial series, I want to get more of that.
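To make the "it is really just a plain struct" point concrete, a minimal hand-rolled sketch (UserId is a made-up name, and this only covers the basics):

    struct UserId
    {
        private int value;
        this(int v) { value = v; }
        int get() const { return value; }
        // no `alias this`, so a UserId never implicitly converts back to int
    }

    void main()
    {
        auto id = UserId(42);
        // int plain = id;   // does not compile - and that's the point
        assert(id.get() == 42);
    }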
Jul 23 2015
prev sibling next sibling parent Tobias Müller <troplin bluewin.ch> writes:
"jmh530" <john.michael.hall gmail.com> wrote: 
 I feel like it's hard to separate borrowing from Rust's variety of
 pointers (& is borrowed pointer, ~ is for unique pointer, @ is for managed
 pointer).
That's actually very outdated information. Two of the four pointer types (&, @, ~ and *) were ditched in favor of library solutions.

*T: raw pointers are still the same
&T: is now called a reference, not a borrowed pointer
~T: is now Box<T>
@T: is now Rc<T> (and possibly Gc<T>, although a GC was never implemented)

Tobi
Jul 23 2015
prev sibling next sibling parent Ziad Hatahet via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, Jul 22, 2015 at 4:25 PM, jmh530 via Digitalmars-d <
digitalmars-d puremagic.com> wrote:
 I feel like it's hard to separate borrowing from Rust's variety of
 pointers (& is borrowed pointer, ~ is for unique pointer, @ is for managed
 pointer).
Rust ditched the ~ and @ syntax for pointers a long time ago. For unique pointers they use Box<T>, and for managed pointers it is either Rc<T>, Arc<T>, or Gc<T>, depending on the desired behavior. The last one is currently still under development, I believe.
Jul 23 2015
prev sibling parent "Bienlein" <jeti789 web.de> writes:
 I think the ability to express an interface without buying into 
 inheritance is the right move. The alternative in D is 
 specifying the behavior as a template and verifying the 
 contract in a unittest for the type.
 Rust only has structs. I'm not as familiar with them so it's 
 not as clear how they overlap with D's structs and classes. 
 It seems like you can put Rust structs on the stack or heap.
What some people call "interface inheritance" in Rust is not inheritance in the sense of OOP whewre an inherited method can be overwritten. Rust is here the same as Go. Some people sometimes get confused and don't see that without the possibility of overwriting an inherited method delegation applies and not inheritance. This is also explained in this blog: http://objectscape.blogspot.de/search/label/Go See the section titled "Inheritance". -- Bienlein
Jul 26 2015
prev sibling next sibling parent reply "rsw0x" <anonymous anonymous.com> writes:
On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 ...
I think rust makes the ugliness of D's "push everything into phobos for simplicity" become very visible. D and Rust share many equal constructs, but D's is almost always uglier.
Jul 22 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/22/15 7:47 PM, rsw0x wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 ...
I think rust makes the ugliness of D's "push everything into phobos for simplicity" become very visible. D and Rust share many equal constructs, but D's is almost always uglier.
Care for a list? Thanks! -- Andrei
Jul 23 2015
parent reply "QAston" <qastonx gmail.com> writes:
On Thursday, 23 July 2015 at 13:33:48 UTC, Andrei Alexandrescu 
wrote:
 On 7/22/15 7:47 PM, rsw0x wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 ...
I think rust makes the ugliness of D's "push everything into phobos for simplicity" become very visible. D and Rust share many equal constructs, but D's is almost always uglier.
Care for a list? Thanks! -- Andrei
Ugliness is in the eye of the beholder; for example, I find it kind of funny when people call D's syntax better than Rust's - to me they are nearly the same (I do know Lisp).

Back to topic: Rust's compiler is aware of some traits and attributes: http://doc.rust-lang.org/std/clone/trait.Clone.html which disables move semantics, or other traits grouped here: http://doc.rust-lang.org/std/marker/ This works sort of the way the D compiler is aware of input ranges, or druntime is for foreach, and makes it possible to create alternative standard libraries like https://github.com/carllerche/mio

D has equivalent capabilities in most cases, but it's way less consistent in the way they're provided. As an example: D has __traits + is() expressions + template constraints + Phobos wrappers, where Rust has a trait: http://doc.rust-lang.org/std/marker/trait.Reflect.html + template constraints. D has some rules about which types have a size at compile time, Rust has http://doc.rust-lang.org/std/marker/trait.Sized.html D has rules for sharing certain types, Rust has http://doc.rust-lang.org/std/marker/trait.Sync.html D has operator overloading, while Rust has that too, but using traits: http://rustbyexample.com/trait/ops.html D has the special druntime Object type, Rust has a hash trait and a cmp trait http://doc.rust-lang.org/core/cmp/index.html http://doc.rust-lang.org/core/hash/index.html which are more elastic.

This is in no way a deal breaker, because the things are there in D. On the other hand, learning is much easier when things are consistent. The first example did hurt me badly: juggling 6 docs files for type introspection.

That's what I think simendsjo meant about "ugliness". I think the reason for this "mess" in D is that D is developed in a very ad hoc way, while for Rust things really get peer reviewed and tested over a long period of time.
Jul 23 2015
parent "simendsjo" <simendsjo gmail.com> writes:
On Thursday, 23 July 2015 at 16:22:30 UTC, QAston wrote:
(... snip ...)
 That's what I think simendsjo meant about "ugliness".
Someone else wrote about the "ugliness", not me.
Jul 23 2015
prev sibling next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
I've not used Rust, but don't plan to.

On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 While code.dlang.org has 530 packages, crates.io has 2610 
 packages,
I think this tells something foremost about the size of the community. More people leads to more code.
 Traits
 ------
 I think the ability to express an interface without buying into 
 inheritance is the right move. The alternative in D is 
 specifying the behavior as a template and verifying the 
 contract in a unittest for the type.
Traits can't do Design by Introspection, aka compile-time duck typing. C++, on the other hand, can.
 Algebraic data types
 --------------------
 Haven't looked into `Algebraic!`, so I won't start bashing D 
 here :) But given the lack of pattern matching, I doubt it will 
 be as pretty as Rust.
You can pattern-match with visit.
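A small sketch of what that looks like with std.variant (illustrative only):

    import std.variant : Algebraic, visit;
    import std.conv : to;

    alias Value = Algebraic!(int, string);

    string describe(Value v)
    {
        // one handler per member type; visit rejects non-exhaustive handler lists at compile time
        return v.visit!(
            (int i)    => "an int: " ~ i.to!string,
            (string s) => "a string: " ~ s
        );
    }

    void main()
    {
        assert(describe(Value(3)) == "an int: 3");
        assert(describe(Value("hi")) == "a string: hi");
    }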
 Macros
 ------
 I haven't written more than extremely simple macros in Rust, 
 but having macros that is possible for tools to understand is a 
 win.  Templates and string mixins is often used for this 
 purpose, but trying to build tools when string mixins exists is 
 probably extremely hard. If D had hygenic macros, I expect 
 several features could be expressed with this instead of string 
 mixins, making tooling easier to implement.
There is a difference though: Rust forces macros on you from the get-go, while in D string mixins are quite a rare occurrence thanks to the other meta facilities, and they don't have a weird separate syntax. Regular templates + tuple foreach + static if is just easier to debug.
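A rough sketch of that style, with a made-up helper (not from Phobos): the loop is unrolled at compile time and static if runs per field, with no separate macro language involved.

    import std.conv : to;

    string describeFields(T)(T value)
    {
        string result = T.stringof ~ "(";
        foreach (i, member; value.tupleof)   // unrolled over the struct's fields
        {
            static if (i > 0)
                result ~= ", ";
            result ~= member.to!string;
        }
        return result ~ ")";
    }

    struct Point { int x; int y; }

    void main()
    {
        assert(describeFields(Point(1, 2)) == "Point(1, 2)");
    }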
 Borrowing
 ---------
 This is probably the big thing that makes Rust really 
 different.  Everything is a resource, and resources have an 
 owner and a lifetime. As a part of this, you can either have 
 multiple aliases with read-only references, or a single 
 reference with a writeable reference. I won't say I have a lot 
 of experience with this, but it seems like it's not an 
 extremely unergonomic trade-off. I cannot even remotely imagine 
 the amount of possible compiler optimizations possible with 
 this feature.
The problem I see with this is that it is exactly like C++ scoped ownership, except enforced. Even in Rust meetings they said they were converging on C++. It remotely feels like the same language to me.

I've worked in fully scoped-ownership codebases; it's very nice and consistent, and you don't feel like you would need anything else while doing it. But you must train everyone to do it. Enforcing it in the language? I'm not sure about this. There are times when you are debugging and have to comment stuff out wildly. Also if you are going to replace C++, it is a given to at least
Jul 22 2015
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-07-23 08:46, ponce wrote:

 Algebraic data types
 --------------------
 Haven't looked into `Algebraic!`, so I won't start bashing D here :)
 But given the lack of pattern matching, I doubt it will be as pretty
 as Rust.
You can pattern-match with visit.
I had a look at the example for "visit" and looked at the implementation. I would hardly call that "pattern matching". -- /Jacob Carlborg
Jul 23 2015
prev sibling parent "Nick B" <nick.barbalich gmail.com> writes:
On Thursday, 23 July 2015 at 06:46:14 UTC, ponce wrote:
 I've not used Rust, but don't plan to.

 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 While code.dlang.org has 530 packages, crates.io has 2610 
 packages,
I think this tells something foremost about the size of the community. More people leads to more code.
But does it reflect the size of the community? Look at these numbers below: D is ranked no. 26, Rust is not in the top 50!! http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html Nick
Jul 23 2015
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-07-22 20:47, simendsjo wrote:

 Traits
 ------
 I think the ability to express an interface without buying into
 inheritance is the right move. The alternative in D is specifying the
 behavior as a template and verifying the contract in a unittest for the
 type.
I completely agree and don't really like the approach D has implemented template constraints. Yeah I know, Andrei will destroy this :)
 Macros
 ------
 I haven't written more than extremely simple macros in Rust, but having
 macros that is possible for tools to understand is a win. Templates and
 string mixins is often used for this purpose, but trying to build tools
 when string mixins exists is probably extremely hard. If D had hygenic
 macros, I expect several features could be expressed with this instead
 of string mixins, making tooling easier to implement.
I completely agree. String mixins are one of the ugliest features.
 Safe by default
 ---------------
 D is often said being safe by default, but D still has default nullable
 references and mutable by default. I don't see it being possible to
 change at this stage, but expressing when I want to be unsafe rather
 than the opposite is very nice. I end up typing a lot more in D than
 Rust because of this.
Agree.
 Pattern matching
 ----------------
 Ooooh... I don't know what to say.. D should definitely look into
 implementing some pattern matching! final switch is good for making sure
 all values are handled, but deconstructing is just so ergonomic.
Yeah, pattern matching is soooo nice. I've been trying to implement something similar in D as a library, something like this:

auto foo = Foo();

match!(foo,
    Foo, (int a, int b) => writeln(a, b)
);

Which kind of works for the deconstructing-variable pattern. But then you want to have a pattern where "a" is 1, and it becomes much harder:

match!(foo,
    Foo, (value!(1), int b) => writeln(b)
);

Or

match!(foo,
    Foo, 1, (int b) => writeln(b)
);

But now you need different syntax for the value pattern and the variable pattern, and soon everything becomes a big mess. It will never be as good as proper language support.
 Expressions
 -----------
 This probably also falls in the "too late" category, but
 statements-as-expressions is really nice. `auto a = if ...` <- why not?
I really like it; I use it a lot in Scala. Combined with automatically returning the last expression in a method, and methods not needing curly braces for single-expression methods, it's really nice:

def foo(a: String) =
  if (a == "foo") 1
  else if (a == "bar") 2
  else 3
 On the breaking part, the
 real issue is the "We're not going to break any code!" stance, while
 each release still breaks every codebase. The effect is that a lot of
 really long-term necessary breaking changes are never accepted - the only
 breaking changes are the unintended breaking changes!
Agree. -- /Jacob Carlborg
Jul 22 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/22/2015 11:47 PM, Jacob Carlborg wrote:
 On 2015-07-22 20:47, simendsjo wrote:

 Traits
 ------
 I think the ability to express an interface without buying into
 inheritance is the right move. The alternative in D is specifying the
 behavior as a template
I completely agree and don't really like the approach D has implemented template constraints. Yeah I know, Andrei will destroy this :)
Consider that template constraints can be arbitrarily complex, and can even check behavior, not just a list of function signatures ANDed together. Turns out many constraints in Phobos are of the form (A || B), not just (A && B).
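For concreteness, a minimal sketch of an (A || B)-shaped constraint - the names are illustrative, not an actual Phobos signature:

    import std.traits : isNumeric, isSomeString;
    import std.conv : to;

    // accepts any numeric type or any string type, nothing else
    string show(T)(T value)
        if (isNumeric!T || isSomeString!T)
    {
        return "got: " ~ value.to!string;
    }

    void main()
    {
        assert(show(3.5) == "got: 3.5");
        assert(show("hi") == "got: hi");
        // show([1, 2, 3]);   // would fail the constraint and not compile
    }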
 and verifying the contract in a unittest for the type.
I am a bit puzzled by the notion of shipping template code that has never been instantiated as being a positive thing. This has also turned up in the C++ static_if discussions.
Jul 23 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 5:07 AM, Walter Bright wrote:
 On 7/22/2015 11:47 PM, Jacob Carlborg wrote:
 On 2015-07-22 20:47, simendsjo wrote:

 Traits
 ------
 I think the ability to express an interface without buying into
 inheritance is the right move. The alternative in D is specifying the
 behavior as a template
I completely agree and don't really like the approach D has implemented template constraints. Yeah I know, Andrei will destroy this :)
Consider that template constraints can be arbitrarily complex, and can even check behavior, not just a list of function signatures ANDed together. Turns out many constraints in Phobos are of the form (A || B), not just (A && B).
Agreed. And that's just scratching the surface. Serious question: how do you express in Rust that a type implements one trait or another, then figure out statically which?
  >> and verifying the contract in a unittest for the type.

 I am a bit puzzled by the notion of shipping template code that has
 never been instantiated as being a positive thing. This has also turned
 up in the C++ static_if discussions.
This is easy to understand. Weeding out uncovered code during compilation is a central feature of C++ concepts. Admitting you actually never want to do that would be a major blow. Andrei
Jul 23 2015
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 23 July 2015 at 14:15:30 UTC, Andrei Alexandrescu 
wrote:
 On 7/23/15 5:07 AM, Walter Bright wrote:
 On 7/22/2015 11:47 PM, Jacob Carlborg wrote:
 On 2015-07-22 20:47, simendsjo wrote:

 Traits
 ------
 I think the ability to express an interface without buying 
 into
 inheritance is the right move. The alternative in D is 
 specifying the
 behavior as a template
I completely agree and don't really like the approach D has taken with template constraints. Yeah I know, Andrei will destroy this :)
Consider that template constraints can be arbitrarily complex, and can even check behavior, not just a list of function signatures ANDed together. Turns out many constraints in Phobos are of the form (A || B), not just (A && B).
Agreed. And that's just scratching the surface.
It is definitely a big issue for designing more advanced generic libraries and one of my major issues with Rust, but you need to realize that the vast majority of application-domain usage of such constraints is simply ensuring a list of methods. You may be biased by too much standard library development ;)

At the same time, one HUGE deal breaker with Rust traits that rarely gets mentioned is the fact that they are both constraints and interfaces at the same time:

    // this is a template constraint, it will generate a new `foo` symbol for each new T
    fn foo <T : InputRange> (range : T)

    // this uses the very same trait definition but creates a "fat pointer" on demand
    // with a simplistic dispatch table
    fn foo (range : InputRange)

It kills all the necessity for hacks like RangeObject and is quite a salvation once you get to defining dynamic shared libraries with a stable ABI.

This is probably my most loved feature of Rust.
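For comparison, in D those are two separate entities and each function has to commit to one of them up front - roughly like this (`sum`/`sumDyn` are made-up names, just for illustration):

    import std.range : isInputRange, InputRange, inputRangeObject;

    // template constraint: a new `sum` is generated for every concrete range type
    int sum(R)(R r) if (isInputRange!R)
    {
        int total = 0;
        foreach (x; r) total += x;
        return total;
    }

    // runtime interface: one `sumDyn` for any wrapped range of ints
    int sumDyn(InputRange!int r)
    {
        int total = 0;
        foreach (x; r) total += x;
        return total;
    }

    unittest
    {
        auto a = [1, 2, 3];
        assert(sum(a) == 6);
        assert(sumDyn(inputRangeObject(a)) == 6);
    }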
 Serious question: how do you express in Rust that a type 
 implements one trait or another, then figure out statically 
 which?
As far as I understand current idiomatics, you don't. Code that tries to use functions that are not guaranteed by the trait will simply not compile, and for any complicated generic programming one is supposed to use macros. Rust does not have templates, only trait-restricted generics. I find it terribly unproductive, but it seems to appeal to a certain developer mindset, primarily the ones that associate "templates" with "C++ templates".
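In D the same question is answered with a constraint plus `static if`, roughly like this (made-up function, illustration only):

    import std.range;   // isInputRange, isForwardRange, walkLength, array primitives

    // accepts both kinds of ranges, picks the strategy statically
    size_t countTwice(R)(R r) if (isInputRange!R)
    {
        static if (isForwardRange!R)
        {
            auto copy = r.save;                      // forward ranges can be forked
            return walkLength(r) + walkLength(copy);
        }
        else
        {
            return 2 * walkLength(r);                // input ranges: count once, double it
        }
    }

    unittest
    {
        assert(countTwice([1, 2, 3]) == 6);          // arrays take the forward-range path
    }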
Jul 23 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 8:03 AM, Dicebot wrote:
 At the same time one HUGE deal breaker with rust traits that rarely gets
 mentioned is the fact that they are both constraints and interfaces at the same
 time:

 // this is template constraint, it will generate new `foo` symbol for each new
T
 fn foo <T : InputRange> (range : T)

 // this use the very same trait definition but creates "fat pointer" on demand
 with simplistic dispatch table
 fn foo (range : InputRange)

 It kills all the necessity for hacks like RangeObject and is quite a salvation
 once you get to defining dynamic shared libraries with stable ABI.

 This is probably my most loved feature of Rust.
D interface types also produce the simplistic dispatch table, and if you make them extern(C++) they don't need a RangeObject. I know it isn't as convenient as what you describe above, but it can be pressed into service.
Jul 23 2015
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 23 July 2015 at 19:55:30 UTC, Walter Bright wrote:
 On 7/23/2015 8:03 AM, Dicebot wrote:
 At the same time one HUGE deal breaker with rust traits that 
 rarely gets
 mentioned is the fact that they are both constraints and 
 interfaces at the same
 time:

 // this is template constraint, it will generate new `foo` 
 symbol for each new T
 fn foo <T : InputRange> (range : T)

 // this use the very same trait definition but creates "fat 
 pointer" on demand
 with simplistic dispatch table
 fn foo (range : InputRange)

 It kills all the necessity for hacks like RangeObject and is 
 quite a salvation
 once you get to defining dynamic shared libraries with stable 
 ABI.

 This is probably my most loved feature of Rust.
D interface types also produce the simplistic dispatch table, and if you make them extern(C++) they don't need a RangeObject. I know it isn't as convenient as what you describe above, but it can be pressed into service.
I am not sure how it applies. My point was about the fact that `isInputRange` and `InputRangeObject` are the same entities in Rust, simply interpreted differently by compiler depending on usage context.

This is important because you normally want to design your application in terms of template constraints and structs to get most out of inlining and optimization. However, to define stable ABI for shared libraries, the very same interfaces need to be wrapped in runtime polymorphism.

Closest thing in D would be to define traits as interfaces and use code like this:

    void foo(T)()
        if (  (is(T == struct) || is(T == class))
           && Matches!(T, Interface)
        )
    { }

where `Matches` is a template helper that statically iterates method list of interface and looks for matching methods in T. However, making it built-in feels really convenient in Rust:

- considerably less function declaration visual noise
- much better error messages: trying to use methods of T not defined by a trait will result in compile-time error even without instantiating the template
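For the record, `Matches` itself is only a few lines - a rough sketch (it only checks member names and ignores signatures and overloads):

    template Matches(T, I)
    {
        import std.meta : allSatisfy;

        enum hasIt(string name) = __traits(hasMember, T, name);
        enum Matches = allSatisfy!(hasIt, __traits(allMembers, I));
    }

    // usage
    interface Drawable { void draw(); void resize(int w, int h); }
    struct Square { void draw() {} void resize(int w, int h) {} }
    struct Point  { }

    static assert( Matches!(Square, Drawable));
    static assert(!Matches!(Point, Drawable));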
Jul 23 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 1:08 PM, Dicebot wrote:
 I am not sure how it applies.
D interfaces (defined with the 'interface' keyword) are simple dispatch types, they don't require an Object. Such interfaces can also have default implementations.
 My point was about the fact that `isInputRange`
 and `InputRangeObject` are the same entities in Rust, simply interpreted
 differently by compiler depending on usage context.
I understand.
 This is important because you normally want to design your application in terms
 of template constraints and structs to get most out of inlining and
 optimization. However, to define stable ABI for shared libraries, the very same
 interfaces need to be wrapped in runtime polymorphism.

 Closest thing in D would be to define traits as interfaces and use code like 
this:
 void foo(T)()
      if (  (is(T == struct) || is(T == class))
         && Matches!(T, Interface)
      )
 { }

 where `Matches` is a template helper that statically iterates method list of
 interface and looks for matching methods in T.
I don't think the test for struct and class is necessary. It can be just:

    void foo(T)() if (Matches!(T, Interface)) { ... }

as opposed to:

    void foo(T : Interface)() { ... }
 However, making it built-in feels
 really convenient in Rust:

 - considerably less function declaration visual noise
It's less noise, sure, and perhaps we can do some syntactical sugar to improve that. But I don't think this is a fundamental shortcoming for D as it stands now (although nobody has written a Matches template, perhaps that should be a priority).
 - much better error messages: trying to use methods of T not defined by a trait
 will result in compile-time error even without instantiating the template
The error messages will occur at compile time and will be the same if you write a unit test to instantiate the template. As I wrote earlier, I don't really understand the need to ship template source code that has never been instantiated.
Jul 23 2015
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 23 July 2015 at 20:52:46 UTC, Walter Bright wrote:
 My point was about the fact that `isInputRange`
 and `InputRangeObject` are the same entities in Rust, simply
interpreted
 differently by compiler depending on usage context.
I understand.
Ok, sorry, it wasn't clear from the response context :)
 I don't think the test for struct and class is necessary. It 
 can be just:

     void foo(T)() if (Matches!(T, Interface)) { ... }

 as opposed to:

     void foo(T : Interface)() { ... }
Correct indeed, though I don't feel it is much of a difference considering how common such code is (remember that you endorse ranges as The Way for designing APIs!)
 - much better error messages: trying to use methods of T not
defined by a trait
 will result in compile-time error even without instantiating
 the template

The error messages will occur at compile time and will be the same if you write a unit test to instantiate the template. As I wrote earlier, I don't really understand the need to ship template source code that has never been instantiated.
1) It does not protect from errors in the definition:

    void foo (Range) (Range r)
        if (isInputRange!Range)
    { r.save(); }

    unittest
    {
        SomeForwardRange r;
        foo(r);
    }

This will compile and show 100% test coverage. Yet when a user tries using it with a real input range, it will fail.

2) There is quite a notable difference in clarity between an error message coming from some arcane part of the function body and referring to wrong usage (or even totally misleading because of UFCS), and a simple and straightforward "Your type X does not implement method X necessary for trait Y".

3) Coverage does not work with conditional compilation:

    void foo (T) ()
    {
        import std.stdio;
        static if (is(T == int))
            writeln("1");
        else
            writeln("2");
    }

    unittest
    {
        foo!int();
    }

    $ dmd -cov=100 -unittest -main ./sample.d
Jul 23 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 2:08 PM, Dicebot wrote:
 It does not protect from errors in definition

 void foo (Range) (Range r)
      if (isInputRange!Range)
 { r.save(); }

 unittest
 {
      SomeForwardRange r;
      foo(r);
 }

 This will compile and show 100% test coverage. Yet when user will try using it
 with real input range, it will fail.
That is correct. Some care must be taken that the mock types used in the unit tests actually match what the constraint is, rather than being a superset of them.
 There is quite a notable difference in clarity between error message coming
from
 some arcane part of function body and referring to wrong usage (or even totally
 misleading because of UFCS) and simple and straightforward "Your type X does
not
 implement method X necessary for trait Y"
I believe they are the same. "method X does not exist for type Y".
 Coverage does not work with conditional compilation:

 void foo (T) ()
 {
      import std.stdio;
      static if (is(T == int))
          writeln("1");
      else
          writeln("2");
 }

 unittest
 {
      foo!int();
 }

 $ dmd -cov=100 -unittest -main ./sample.d
Let's look at the actual coverage report:

    ===============================
           |void foo (T) ()
           |{
           |    import std.stdio;
           |    static if (is(T == int))
          1|        writeln("1");
           |    else
           |        writeln("2");
           |}
           |
           |unittest
           |{
          1|    foo!int();
           |}
    foo.d is 100% covered
    ============================

I look at these all the time. It's pretty obvious that the second writeln is not being compiled in.

Now, if I make a mistake in the second writeln such that it is syntactically correct yet semantically wrong, and I ship it, and it blows up when the customer actually instantiates that line of code,

    -- where is the advantage to me? --

How am I, the developer, better off? How does "well, it looks syntactically like D code, so ship it!" pass any sort of professional quality assurance?
Jul 23 2015
parent reply "Dicebot" <public dicebot.lv> writes:
Sorry for somewhat delayed answer - not sure if anyone has 
answered to your questions in the meanwhile.

On Friday, 24 July 2015 at 00:19:50 UTC, Walter Bright wrote:
 On 7/23/2015 2:08 PM, Dicebot wrote:
 It does not protect from errors in definition

 void foo (Range) (Range r)
      if (isInputRange!Range)
 { r.save(); }

 unittest
 {
      SomeForwardRange r;
      foo(r);
 }

 This will compile and show 100% test coverage. Yet when user 
 will try using it
 with real input range, it will fail.
That is correct. Some care must be taken that the mock types used in the unit tests actually match what the constraint is, rather than being a superset of them.
This is absolutely impractical. I will never even consider such an attitude as a solution for production projects. If test coverage can't be verified automatically, it is garbage, period. No one will ever manually verify thousands of lines of code after some trivial refactoring just to make sure the compiler does its job.

By your attitude, `-cov` is not necessary at all - you can do the same manually anyway, with some help from a 3rd-party tool. Yet you advertise it as a crucial D feature (and are totally right about it).
 There is quite a notable difference in clarity between error 
 message coming from
 some arcane part of function body and referring to wrong usage 
 (or even totally
 misleading because of UFCS) and simple and straightforward 
 "Your type X does not
 implement method X necessary for trait Y"
I believe they are the same. "method X does not exist for type Y".
Well, the difference is that you "believe" and I actually write code and read those error messages. They are not the same at all. In D the error message gets generated in the context of the function body and is likely to be completely misleading in all but the most trivial methods. For example, if there is a global UFCS function available with the same name but a different argument list, you will get an error about wrong arguments and not about missing methods.
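A toy example of the kind of thing I mean (all names made up):

    void process(int x, int y) {}          // unrelated free function with the same name

    void callProcess(T)(T t)
    {
        t.process();                       // intended as a member call
    }

    struct NoProcess {}

    // Uncommenting this line produces an error about `process(int, int)` not being
    // callable with a NoProcess argument -- nothing about a missing member:
    // callProcess(NoProcess());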
 Coverage does not work with conditional compilation:

 void foo (T) ()
 {
      import std.stdio;
      static if (is(T == int))
          writeln("1");
      else
          writeln("2");
 }

 unittest
 {
      foo!int();
 }

 $ dmd -cov=100 -unittest -main ./sample.d
Let's look at the actual coverage report:

    ===============================
           |void foo (T) ()
           |{
           |    import std.stdio;
           |    static if (is(T == int))
          1|        writeln("1");
           |    else
           |        writeln("2");
           |}
           |
           |unittest
           |{
          1|    foo!int();
           |}
    foo.d is 100% covered
    ============================

I look at these all the time. It's pretty obvious that the second writeln is not being compiled in.
Again, this is impractical. You may be capable of reading at the speed of light, but that is not the normal industry case. Programs are huge, changesets are big, time pressure is real. If something can't be verified in an automated way, at least for basic sanity, it is simply not good enough. This is the whole point of the CI revolution.

In practice I will only look into .cov files when working on adding new tests to improve coverage, and will never be able to do it more often (unless the compiler notifies me to do so). This is a real-world constraint one needs to deal with, no matter what your personal preferences about a good development process are.
 Now, if I make a mistake in the second writeln such that it is 
 syntactically correct yet semantically wrong, and I ship it, 
 and it blows up when the customer actually instantiates that 
 line of code,

    -- where is the advantage to me? --

 How am I, the developer, better off? How does "well, it looks 
 syntactically like D code, so ship it!" pass any sort of 
 professional quality assurance?
If the compiler would actually show 0 coverage for non-instantiated lines, then the automatic coverage check in CI would complain and the code would never be shipped unless it gets covered with tests (which check the semantics). You are putting it totally backwards.
Jul 25 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/15 9:35 AM, Dicebot wrote:
 This is absolutely impractical. I will never even consider such attitude
 as a solution for production projects. If test coverage can't be
 verified automatically, it is garbage, period. No one will ever manually
 verify thousands lines of code after some trivial refactoring just to
 make sure compiler does its job.
Test coverage shouldn't totter up and down as application code is written - it should be established by the unittests. And yes, one does need to examine coverage output while writing unittests.

I do agree more automation is better here (as always). For example, if a template is followed by one or more unittests, the compiler might issue an error if the unittests don't cover the template.

Andrei
Jul 25 2015
parent "Dicebot" <public dicebot.lv> writes:
On Saturday, 25 July 2015 at 14:28:31 UTC, Andrei Alexandrescu 
wrote:
 On 7/25/15 9:35 AM, Dicebot wrote:
 This is absolutely impractical. I will never even consider 
 such attitude
 as a solution for production projects. If test coverage can't 
 be
 verified automatically, it is garbage, period. No one will 
 ever manually
 verify thousands lines of code after some trivial refactoring 
 just to
 make sure compiler does its job.
Test coverage shouldn't totter up and down as application code is written - it should be established by the unittests. And yes one does need to examine coverage output while writing unittests.
Does word "refactoring" or "adding new features" ring a bell? In the first case no one manually checks coverage of all affected code because simply too much code is affected. Yet it can become reduced by an accident. In the second case developer is likely to check coverage for actual functionality he has written - and yet coverage can become reduced in different (but related) parts of code because that is how templates work. You will have a very hard time selling this approach. If official position of language authors is that one must manually check test coverage all the time over and over again, pragmatical people will look into other languages.
 I do agree more automation is better here (as is always). For 
 example, if a template is followed by one or more unittests, 
 the compiler might issue an error if the unittests don't cover 
 the template.
This isn't "better". This is bare minimum for me to call that functionality effectively testable. Manual approach to testing doesn't work, I thought everyone has figured that out by 2015. It works better than no tests at all, sure, but this is not considered enough anymore.
Jul 25 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 6:35 AM, Dicebot wrote:
 If compiler would actually show 0 coverage for non-instantiated lines, than
 automatic coverage control check in CI would complain and code would never be
 shipped unless it gets covered with tests (which check the semantics). Your are
 putting it totally backwards.
A good case. https://issues.dlang.org/show_bug.cgi?id=14825
Jul 25 2015
parent "Dicebot" <public dicebot.lv> writes:
On Saturday, 25 July 2015 at 21:20:28 UTC, Walter Bright wrote:
 On 7/25/2015 6:35 AM, Dicebot wrote:
 If compiler would actually show 0 coverage for 
 non-instantiated lines, than
 automatic coverage control check in CI would complain and code 
 would never be
 shipped unless it gets covered with tests (which check the 
 semantics). Your are
 putting it totally backwards.
A good case. https://issues.dlang.org/show_bug.cgi?id=14825
Thanks!
Jul 26 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 4:52 PM, Walter Bright wrote:
 On 7/23/2015 1:08 PM, Dicebot wrote:
  > I am not sure how it applies.

 D interfaces (defined with the 'interface' keyword) are simple dispatch
 types, they don't require an Object. Such interfaces can also have
 default implementations.
Is this new? I agree we should allow it, but I don't think it was added to the language yet. Andrei
Jul 25 2015
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 12:15:04 UTC, Andrei Alexandrescu 
wrote:
 On 7/23/15 4:52 PM, Walter Bright wrote:
 On 7/23/2015 1:08 PM, Dicebot wrote:
  > I am not sure how it applies.

 D interfaces (defined with the 'interface' keyword) are simple 
 dispatch
 types, they don't require an Object. Such interfaces can also 
 have
 default implementations.
Is this new? I agree we should allow it, but I don't think it was added to the language yet. Andrei
This is not in the language and should not be added lightly. There are all kinds of collisions that could happen, and they need proper disambiguation rules. Scala traits (a different beast from Rust traits) are a successful implementation of this.
Jul 25 2015
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/25/2015 02:15 PM, Andrei Alexandrescu wrote:
 On 7/23/15 4:52 PM, Walter Bright wrote:
 On 7/23/2015 1:08 PM, Dicebot wrote:
  > I am not sure how it applies.

 D interfaces (defined with the 'interface' keyword) are simple dispatch
 types, they don't require an Object. Such interfaces can also have
 default implementations.
Is this new? I agree we should allow it, but I don't think it was added to the language yet. Andrei
In any case, the painful thing about the RangeObject interfaces is that they transform value types into reference types.
Jul 26 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 4:08 PM, Dicebot wrote:
 On Thursday, 23 July 2015 at 19:55:30 UTC, Walter Bright wrote:
 On 7/23/2015 8:03 AM, Dicebot wrote:
 At the same time one HUGE deal breaker with rust traits that rarely gets
 mentioned is the fact that they are both constraints and interfaces
 at the same
 time:

 // this is template constraint, it will generate new `foo` symbol for
 each new T
 fn foo <T : InputRange> (range : T)

 // this use the very same trait definition but creates "fat pointer"
 on demand
 with simplistic dispatch table
 fn foo (range : InputRange)

 It kills all the necessity for hacks like RangeObject and is quite a
 salvation
 once you get to defining dynamic shared libraries with stable ABI.

 This is probably my most loved feature of Rust.
D interface types also produce the simplistic dispatch table, and if you make them extern(C++) they don't need a RangeObject. I know it isn't as convenient as what you describe above, but it can be pressed into service.
I am not sure how it applies. My point was about the fact that `isInputRange` and `InputRangeObject` are the same entities in Rust, simply interpreted differently by compiler depending on usage context. This is important because you normally want to design your application in terms of template constraints and structs to get most out of inlining and optimization. However, to define stable ABI for shared libraries, the very same interfaces need to be wrapped in runtime polymorphism. Closest thing in D would be to define traits as interfaces and use code like this: void foo(T)() if ( (is(T == struct) || is(T == class)) && Matches!(T, Interface) ) { } where `Matches` is a template helper that statically iterates method list of interface and looks for matching methods in T.
I could have sworn we have implementsInterface in std.traits.
 However, making
 it built-in feels really convenient in Rust:

 - considerably less function declaration visual noise
 - much better error messages: trying to use methods of T not defined by
 a trait will result in compile-time error even without instantiating the
 template
Yah, building stuff in does have its advantages. Andrei
Jul 25 2015
parent "Dicebot" <public dicebot.lv> writes:
On Saturday, 25 July 2015 at 12:09:34 UTC, Andrei Alexandrescu 
wrote:
 However, making
 it built-in feels really convenient in Rust:

 - considerably less function declaration visual noise
 - much better error messages: trying to use methods of T not 
 defined by
 a trait will result in compile-time error even without 
 instantiating the
 template
Yah, building stuff in does have its advantages.
I feel it is not so much about "built-in vs library" as about "generics vs templates" - a somewhat deeper ideological difference that consequently calls for different tools. Metaprogramming with traits in Rust is inconvenient to the point of being almost impossible, but generics have very strong static API verification. In D you can get destroyed by a flow of deeply nested error messages, but the magic you can do with templates with minimal effort is beyond comparison. Different values, different trade-offs.
Jul 25 2015
prev sibling parent reply "Enamex" <enamex+d outlook.com> writes:
On Thursday, 23 July 2015 at 15:03:56 UTC, Dicebot wrote:
 At the same time one HUGE deal breaker with rust traits that 
 rarely gets mentioned is the fact that they are both 
 constraints and interfaces at the same time:

 ...

 It kills all the necessity for hacks like RangeObject and is 
 quite a salvation once you get to defining dynamic shared 
 libraries with stable ABI.

 This is probably my most loved feature of Rust.
Sorry, I don't quite get this. How is the most loved feature of Rust (that interfaces are also constraints for generics), a *deal breaker*?
Jul 25 2015
parent "Dicebot" <public dicebot.lv> writes:
On Sunday, 26 July 2015 at 01:33:09 UTC, Enamex wrote:
 On Thursday, 23 July 2015 at 15:03:56 UTC, Dicebot wrote:
 At the same time one HUGE deal breaker with rust traits that 
 rarely gets mentioned is the fact that they are both 
 constraints and interfaces at the same time:

 ...

 It kills all the necessity for hacks like RangeObject and is 
 quite a salvation once you get to defining dynamic shared 
 libraries with stable ABI.

 This is probably my most loved feature of Rust.
Sorry, I don't quite get this. How is the most loved feature of Rust (that interfaces are also constraints for generics), a *deal breaker*?
I have just checked the dictionary and it is simply a matter of me having terrible English and using this phrase wrong all the time :) It was supposed to mean something like "a feature that makes a crucial (positive) difference".
Jul 26 2015
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 7:15 AM, Andrei Alexandrescu wrote:
 I am a bit puzzled by the notion of shipping template code that has
 never been instantiated as being a positive thing. This has also turned
 up in the C++ static_if discussions.
This is easy to understand. Weeding out uncovered code during compilation is a central feature of C++ concepts. Admitting you actually never want to do that would be a major blow.
But if a unit test fails at instantiating it, it fails at compile time.
Jul 23 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Jul 23, 2015 at 12:49:29PM -0700, Walter Bright via Digitalmars-d wrote:
 On 7/23/2015 7:15 AM, Andrei Alexandrescu wrote:
I am a bit puzzled by the notion of shipping template code that has
never been instantiated as being a positive thing. This has also
turned up in the C++ static_if discussions.
This is easy to understand. Weeding out uncovered code during compilation is a central feature of C++ concepts. Admitting you actually never want to do that would be a major blow.
But if a unit test fails at instantiating it, it fails at compile time.
That assumes the template author is diligent (foolhardy?) enough to write unittests that cover all possible instantiations... T -- People say I'm indecisive, but I'm not sure about that. -- YHL, CONLANG
Jul 23 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 12:50 PM, H. S. Teoh via Digitalmars-d wrote:
 That assumes the template author is diligent (foolhardy?) enough to
 write unittests that cover all possible instantiations...
No, only each branch of the template code must be instantiated, not every possible instantiation. And we have a tool to help with that: -cov

Does anyone believe it is a good practice to ship template code that has never been instantiated?
Jul 23 2015
next sibling parent reply "Vlad Levenfeld" <vlevenfeld gmail.com> writes:
On Thursday, 23 July 2015 at 20:40:17 UTC, Walter Bright wrote:
 On 7/23/2015 12:50 PM, H. S. Teoh via Digitalmars-d wrote:
 That assumes the template author is diligent (foolhardy?) 
 enough to
 write unittests that cover all possible instantiations...
No, only each branch of the template code must be instantiated, not every possible instantiation. And we have a tool to help with that: -cov Does anyone believe it is a good practice to ship template code that has never been instantiated?
I dunno about good practices but I have some use cases.

I write a bunch of zero-parameter template methods and then pass them into a Match template which attempts to instantiate each of them in turn, settling on the first one which does compile. So the methods basically form a list of "preferred implementations of functionality X". All but one wind up uninstantiated.

I also use a pattern where I mix zero-parameter template methods into a struct - they don't necessarily work for that struct, but they won't stop compilation unless they are instantiated. A complete interface is generated, but only the subset which the context actually supports can be successfully instantiated - and anything the caller doesn't need doesn't get compiled.

Again, not sure if this is a bad or good thing. But I have found these patterns useful.
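Stripped to its bones, the first pattern is roughly this (hypothetical names, not my real code):

    // pick the first candidate template that compiles for T
    template Pick(T, candidates...)
    {
        static if (__traits(compiles, candidates[0]!T()))
            alias Pick = candidates[0]!T;
        else
            alias Pick = Pick!(T, candidates[1 .. $]);
    }

    size_t viaLength(T)() { return T.init.length; }  // only compiles if T has .length
    size_t fallback(T)()  { return 0; }              // always compiles

    unittest
    {
        assert(Pick!(int[4], viaLength, fallback)() == 4);  // preferred implementation
        assert(Pick!(int,    viaLength, fallback)() == 0);  // falls back
    }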
Jul 23 2015
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Jul 23, 2015 at 08:50:55PM +0000, Vlad Levenfeld via Digitalmars-d
wrote:
 On Thursday, 23 July 2015 at 20:40:17 UTC, Walter Bright wrote:
On 7/23/2015 12:50 PM, H. S. Teoh via Digitalmars-d wrote:
That assumes the template author is diligent (foolhardy?) enough to
write unittests that cover all possible instantiations...
No, only each branch of the template code must be instantiated, not every possible instantiation. And we have a tool to help with that: -cov Does anyone believe it is a good practice to ship template code that has never been instantiated?
I dunno about good practices but I have some use cases. I write a bunch of zero-parameter template methods and then pass them into a Match template which attempts to instantiate each of them in turn, settling on the first one which does compile. So the methods basically form a list of "preferred implementation of functionality X". All but one winds up uninstantiated.
[...] But don't you still have to test each template, to make sure they compile when they're supposed to? T -- Without geometry, life would be pointless. -- VS
Jul 23 2015
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Jul 23, 2015 at 01:40:17PM -0700, Walter Bright via Digitalmars-d wrote:
 On 7/23/2015 12:50 PM, H. S. Teoh via Digitalmars-d wrote:
That assumes the template author is diligent (foolhardy?) enough to
write unittests that cover all possible instantiations...
No, only each branch of the template code must be instantiated, not every possible instantiation. And we have a tool to help with that: -cov Does anyone believe it is a good practice to ship template code that has never been instantiated?
OK, I jumped into the middle of this discussion so probably I'm speaking totally out of context... but anyway, with regards to template code, I agree that it ought to be thoroughly tested by at least instantiating the most typical use cases (as well as some not-so-typical use cases). An uninstantiated template path is worse than a branch that's never taken, because the compiler can't help you find obvious problems before you ship it to the customer. A lot of Phobos bugs lurk in rarely-used template branches that are not covered by the unittests.

Instantiating all branches is only part of the solution, though. A lot of Phobos bugs also arise from undetected dependencies of the template code on the specifics of the concrete types used to test it in the unittests. The template passes the unittest but when you instantiate it with a type not used in the unittests, it breaks.

For instance, a lot of range-based templates are tested with arrays in the unittests. Some of these templates wrongly depend on array behaviour (as opposed to being confined only to range API operations) while their signature constraints indicate only the generic range API. As a result, when non-array ranges are used, it breaks. Sometimes bugs like this can lurk undetected for a long time before somebody one day happens to instantiate it with a range type that violates the hidden assumption in the template code.

If we had a Concepts-like construct in D, where template code is statically constrained to only use, e.g., range API when manipulating an incoming type, a lot of these bugs would've been caught. In fact, I'd argue that this should be done for *all* templates -- for example, a function like this ought to be statically rejected:

    auto myFunc(T)(T t) { return t + 1; }

because it assumes the validity of the + operation on T, but T is not constrained in any way, so it can be *any* type, most of which, arguably, do not support the + operation. Instead, templates ought to be required to explicitly declare up-front all operations that it will perform on incoming types, so that (1) its assumptions are obvious, and (2) the compiler will reject attempts to instantiate it with an incompatible type.

    auto myFunc(T)(T t)
        if (is(typeof(T.init + 1)))
    {
        return t + 1;
    }

The current syntax is ugly, of course, but that's easily remedied. The more fundamental problem is that the compiler does not restrict operations on T in any way, even when the sig constraint specifies how T ought to be used. Someone could easily introduce a bug:

    auto myFunc(T)(T t)
        if (is(typeof(T.init + 1)))
    {
        /* Oops, we checked that +1 is a valid operation on T,
         * but here we're doing -1 instead, which may or may not
         * be valid: */
        return t - 1;
    }

The compiler still accepts this code as long as the unittests use types that support both + and -. So this dependency on the incidental characteristics of T remains as a latent bug. If the compiler outright rejected any operation on T that hasn't been explicitly tested for, *then* we will have eliminated a whole class of template bugs. Wrong code like the last example above would be caught as soon as the compiler compiles the body of myFunc.

T

-- 
Elegant or ugly code as well as fine or rude sentences have something in common: they don't depend on the language. -- Luca De Vitis
Jul 23 2015
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
 OK, I jumped into the middle of this discussion so probably I'm 
 speaking totally out of context...
This is exactly one major advantage of Rust traits I have been trying to explain, thanks for putting it up in much more understandable way :)
Jul 23 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 3:12 PM, Dicebot wrote:
 On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
 OK, I jumped into the middle of this discussion so probably I'm speaking
 totally out of context...
This is exactly one major advantage of Rust traits I have been trying to explain, thanks for putting it up in much more understandable way :)
Consider the following:

    int foo(T: hasPrefix)(T t) {
        t.prefix();   // ok
        bar(t);       // error, hasColor was not specified for T
    }

    void bar(T: hasColor)(T t) {
        t.color();
    }

Now consider a deeply nested chain of function calls like this. At the bottom, one adds a call to 'color', and now every function in the chain has to add 'hasColor' even though it has nothing to do with the logic in that function. This is the pit that Exception Specifications fell into.

I can see these possibilities:

1. Require adding the constraint annotations all the way up the call tree. I believe that this will not only become highly annoying, it might make generic code impractical to write (consider if bar was passed as an alias).

2. Do the checking only for 1 level, i.e. don't consider what bar() requires. This winds up just pulling the teeth of the point of the constraint annotations.

3. Do inference of the constraints. I think that is indistinguishable from not having annotations as being exclusive.

Anyone know how Rust traits and C++ concepts deal with this?
Jul 23 2015
next sibling parent reply =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 On 7/23/2015 3:12 PM, Dicebot wrote:
 On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
 OK, I jumped into the middle of this discussion so probably I'm speaking
 totally out of context...
This is exactly one major advantage of Rust traits I have been trying to explain, thanks for putting it up in much more understandable way :)
Consider the following: int foo(T: hasPrefix)(T t) { t.prefix(); // ok bar(t); // error, hasColor was not specified for T } void bar(T: hasColor)(T t) { t.color(); } Now consider a deeply nested chain of function calls like this. At the bottom, one adds a call to 'color', and now every function in the chain has to add 'hasColor' even though it has nothing to do with the logic in that function. This is the pit that Exception Specifications fell into. I can see these possibilities: 1. Require adding the constraint annotations all the way up the call tree. I believe that this will not only become highly annoying, it might make generic code impractical to write (consider if bar was passed as an alias). 2. Do the checking only for 1 level, i.e. don't consider what bar() requires. This winds up just pulling the teeth of the point of the constraint annotations. 3. Do inference of the constraints. I think that is indistinguishable from not having annotations as being exclusive. Anyone know how Rust traits and C++ concepts deal with this?
You may as well ask "How do interfaces in OO programming deal with this?". Frankly, I've never had an issue with that, or it's a hint of design problems. Traits (and interfaces) are mostly not that fine grained, i.e. you don't have a trait/interface for every method. They should ideally define an abstraction/entity with a semantic meaning.

If your constraint "hasColor(x)" just means "x has a method color()", and you then implement it for every class that has this method, you may just as well omit constraints and use duck typing.

Tobi
Jul 23 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Jul 24, 2015 at 05:39:35AM +0000, Tobias Mller via Digitalmars-d wrote:
 Walter Bright <newshound2 digitalmars.com> wrote:
 On 7/23/2015 3:12 PM, Dicebot wrote:
 On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
 OK, I jumped into the middle of this discussion so probably I'm
 speaking totally out of context...
This is exactly one major advantage of Rust traits I have been trying to explain, thanks for putting it up in much more understandable way :)
Consider the following: int foo(T: hasPrefix)(T t) { t.prefix(); // ok bar(t); // error, hasColor was not specified for T } void bar(T: hasColor)(T t) { t.color(); } Now consider a deeply nested chain of function calls like this. At the bottom, one adds a call to 'color', and now every function in the chain has to add 'hasColor' even though it has nothing to do with the logic in that function. This is the pit that Exception Specifications fell into. I can see these possibilities: 1. Require adding the constraint annotations all the way up the call tree. I believe that this will not only become highly annoying, it might make generic code impractical to write (consider if bar was passed as an alias). 2. Do the checking only for 1 level, i.e. don't consider what bar() requires. This winds up just pulling the teeth of the point of the constraint annotations. 3. Do inference of the constraints. I think that is indistinguishable from not having annotations as being exclusive.
Well, this is where the whole idea of Concepts comes from. Rather than specify the nitty-gritty of exactly which operations the type must support, you introduce an abstraction called a Concept, which basically is a group of related traits that a type must satisfy. You can think of it as a prototypical "type" that supports all the required operations.

For example, an input range would be a Concept that supports the operations .front, .empty, and .popFront. A forward range would be a larger Concept derived from the input range Concept, that adds the operation .save.

Your range functions don't have to specify explicitly that "type T must have methods called .empty, .front, .popFront", they simply say "type T must conform to the InputRange Concept".

    // Hypothetical syntax
    concept InputRange(ElementType)
    {
        bool empty;
        ElementType front();
        void popFront();
    }

    void myRangeFunc(R : InputRange)(R range)
    {
        // freely use .empty, .front, .popFront here
    }

This also solves your objection in the other post that specifying constraints will become too onerous because you have to keep listing every individual operation the function needs, and changing a function far down the call chain will bubble up and require updating all functions that call it. With Concepts, you don't have to do this, because any change can be done in the definition of the Concept itself. If not, the function that requires more than what the current Concept provides actually needs a larger Concept than it's asking for, in which case all its callers *need* to be updated anyway. It's no different from deciding that a function that used to take struct S1 now needs to take struct S2 instead -- there's no way to avoid having to update all code that calls the function so that they pass in the new type.

Concepts can derive from other Concepts too, so a ForwardRange concept need not repeat the traits specified by the InputRange concept; it can simply specify that a forward range supports all InputRange traits, plus the .save method. If you like, think of Concepts as compile-time interface definitions.
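FWIW, the closest library approximation in today's D is an eponymous boolean template used as a constraint - essentially what std.range.isInputRange already does. A sketch with made-up names:

    import std.range.primitives : empty, front, popFront;  // UFCS primitives for arrays

    // roughly what an "InputRange concept" check looks like as a library template
    enum isMyInputRange(R) = __traits(compiles, (R r)
    {
        if (!r.empty)           // can test for emptiness
        {
            auto h = r.front;   // can read the current element
            r.popFront();       // can advance
        }
    });

    void myRangeFunc(R)(R range) if (isMyInputRange!R)
    {
        // freely use .empty, .front, .popFront here
    }

    static assert( isMyInputRange!(int[]));
    static assert(!isMyInputRange!int);

Of course this only gates entry into the template; it does not stop the body from using operations beyond the checked set, which is exactly the gap described above.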
 Anyone know how Rust traits and C++ concepts deal with this?
You may as well ask "How do interfaces in OO programming deal with this?". Frankly, I've never had an issue with that. Or it's a hint for design problems. Traits (and interfaces) are mostly not that fine grained, i.e. you don't have a trait/interface for every method. They should ideally define an abstraction/entity with a semantic meaning. If your constraint "hasColor(x)" just means "x has method color()", and then implement it for every class that has this method, you can just as well omit constraints and use duck typing.
[...] Yes, the value of Concepts mostly comes from the, um, concepts that group together a set of traits that characterize a particular category of types. Like input range, forward range, or output range. It's of more limited utility for testing traits individually. You're not really thinking in terms of individual traits, at least not directly, when you're using Concepts; you're thinking in terms of the conceptual abstraction that the Concept represents. The compiler does the checking of individual traits for you. T -- The richest man is not he who has the most, but he who needs the least.
Jul 23 2015
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 11:05 PM, H. S. Teoh via Digitalmars-d wrote:
 Yes, the value of Concepts mostly comes from the, um, concepts that
 group together a set of traits that characterize a particular category
 of types. Like input range, forward range, or output range.  It's of more
 limited utility for testing traits individually.  You're not really
 thinking in terms of individual traits, at least not directly, when
 you're using Concepts; you're thinking in terms of the conceptual
 abstraction that the Concept represents. The compiler does the checking
 of individual traits for you.
That's true but it changes nothing about what I wrote. Just replace "hasPrefix" with "hasInterfaceA". The points I brought up remain.
Jul 23 2015
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/15 2:05 AM, H. S. Teoh via Digitalmars-d wrote:
 Well, this is where the whole idea of Concepts comes from. Rather than
 specify the nitty-gritty of exactly which operations the type must
 support, you introduce an abstraction called a Concept, which basically
 is a group of related traits that a type must satisfy. You can think of
 it as a prototypical "type" that supports all the required operations.

 For example, an input range would be a Concept that supports the
 operations .front, .empty, and .popFront. A forward range would be a
 larger Concept derived from the input range Concept, that adds the
 operation .save.

 Your range functions don't have to specify explicitly that "type T must
 have methods called .empty, .front, .popFront", they simply say "type T
 must conform to the InputRange Concept".
As I argued in "Generic Programming Must Go", this does work in a scarce-vocabulary domain. The moment you start talking about ranges that may or may not support cross-cutting primitives, it all comes unglued. -- Andrei
Jul 25 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 10:39 PM, Tobias Müller wrote:
 You may aus well ask "How do interfaces in OO programming deal with this?".
It's a good question. And the answer is, the top level function does not list every interface used by the call tree. Nested function calls test at runtime if a particular interface is supported by an object, using dynamic casting or QueryInterface() calls. It's fundamentally different from traits and concepts.
Jul 23 2015
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-07-24 08:42, Walter Bright wrote:

 It's a good question. And the answer is, the top level function does not
 list every interface used by the call tree. Nested function calls test
 at runtime if a particular interface is supported by an object, using
 dynamic casting or QueryInterface() calls. It's fundamentally different
 from traits and concepts.
If you have an interface and then do a dynamic cast, you're doing it wrong. Yes, I know that there is code that uses this; yes, I have done that too.

-- 
/Jacob Carlborg
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 11:35 AM, Jacob Carlborg wrote:
 On 2015-07-24 08:42, Walter Bright wrote:

 It's a good question. And the answer is, the top level function does not
 list every interface used by the call tree. Nested function calls test
 at runtime if a particular interface is supported by an object, using
 dynamic casting or QueryInterface() calls. It's fundamentally different
 from traits and concepts.
If you have an interface and then doing a dynamic cast then you're doing it wrong. Yes, I know that there are code that uses this, yes I have done that too.
Dynamic cast is no different from QueryInterface(), which is how it's done, and the reason is the point of all this - avoiding needing to enumerate every interface needed by the leaves at the root of the call tree.
Jul 24 2015
parent reply Jacob Carlborg <doob me.com> writes:
On 2015-07-24 21:04, Walter Bright wrote:

 Dynamic cast is no different from QueryInterface(), which is how it's
 done, and the reason is the point of all this - avoiding needing to
 enumerate every interface needed by the leaves at the root of the call
 tree.
I'm not familiar with QueryInterface(): -- /Jacob Carlborg
Jul 25 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 09:44:30 UTC, Jacob Carlborg wrote:
 On 2015-07-24 21:04, Walter Bright wrote:

 Dynamic cast is no different from QueryInterface(), which is 
 how it's
 done, and the reason is the point of all this - avoiding 
 needing to
 enumerate every interface needed by the leaves at the root of 
 the call
 tree.
I'm not familiar with QueryInterface():
It's part of COM. Basically, it's equivalent to doing something like

    auto f = cast(MyInterface)myObj;
    if(f is null) { /+ you can't convert it to MyInterface +/ }
    else { /+ you _can_ convert it +/ }

except that with the way that QueryInterface works, it's actually possible to get a completely different object out of it. It doesn't have to have any relation to the original object, and the new type doesn't have to be in an inheritance hierarchy with the original type. So, you can do wacky stuff like have an interface which relates to one object hierarchy and convert it to an interface from a completely unrelated object hierarchy, just so long as the underlying type knows how to give you the type you're asking for (be it because it implements both interfaces or because it's designed to give you an object that implements the interface you're asking for). So, QueryInterface isn't actually guaranteed to do anything like a cast at all. It just guarantees that you'll either get an object of the type you ask for - somehow - or that the call will fail.

And the way it's used often seems to be the equivalent of something like

    interface A {}
    interface B : A {}

    interface Foo {}
    interface Bar : Foo {}

    class MyClass : B, Bar {}

    Foo f = getFoo(); // returns what is actually a MyClass
    A a = cast(A)f;

It happens to work, because the underlying type implements both, but why on earth would you expect that a Foo would be related to an A when they're completely unrelated? And yet that seems to be the sort of thing that gets done with QueryInterface - at least in the code that I've worked with that does COM. Personally, I think that it's nuts to even be attempting to cast from one interface to an entirely unrelated one and expect it to work - or to write code in a way that that's a normal way to do things.

- Jonathan M Davis
Jul 25 2015
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Saturday, 25 July 2015 at 10:12:15 UTC, Jonathan M Davis wrote:
 On Saturday, 25 July 2015 at 09:44:30 UTC, Jacob Carlborg wrote:
 On 2015-07-24 21:04, Walter Bright wrote:

 Dynamic cast is no different from QueryInterface(), which is 
 how it's
 done, and the reason is the point of all this - avoiding 
 needing to
 enumerate every interface needed by the leaves at the root of 
 the call
 tree.
I'm not familiar with QueryInterface():
It's part of COM. Basically, it's equivalent to doing something like auto f = cast(MyInterface)myObj; if(f is null) { /+ you can't convert it to MyInterface +/ } else { /+ you _can_ convert it +/ } except that with the way that QueryInterface works, it's actually possible to get a completely different object out of it. It doesn't have to have any relation to the original object, and the new type doesn't have to be in an inheritance hierarchy with the original type. So, you can do wacky stuff like have an interface which relates to one object hierarchy and convert it to an interface from a completely unrelated object hierarchy just so long as the underlying type knows how to give you the type you're asking for (be it because it implements both interfaces or because it's designed to give you an object that implements the interface you're asking for). So, QueryInterface isn't actually guaranteed to do anything like a cast at all. It just guarantees that you'll either get an object of the type you ask for - somehow - or that the call will fail. And the way it's used often seems to be the equivalent of something like interface A {} interface A : B {} interface Foo {} interface Bar : Foo {} class MyClass : B, Bar {} Foo f = getFoo(); // returns what it is actually a MyClass A a = cast(A)f; It happens to work, because the underlying type implements both, but why on earth would you expect that a Foo would be related to an A when they're completely unrelated? And yet that seems to be the sort of thing that gets done with QueryInterface - at least in the code that I've worked with that does COM. Personally, I think that it's nuts to even be attempting to cast from one interface to an entirely unrelated one and expect it to work - or to write code in a way that that's a normal way to do things. - Jonathan M Davis
It makes sense when one thinks about pure interfaces and component oriented programming, as preached by one of the Component Pascal guys. Or for that matter how Go uses interfaces. The problem is when people try to model classical OOP on top of COM. This is one reason why on WinRT all classes implementing COM are required to be final.
Jul 25 2015
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 24 July 2015 at 06:42:47 UTC, Walter Bright wrote:
 On 7/23/2015 10:39 PM, Tobias Müller wrote:
 You may aus well ask "How do interfaces in OO programming deal 
 with this?".
It's a good question. And the answer is, the top level function does not list every interface used by the call tree. Nested function calls test at runtime if a particular interface is supported by an object, using dynamic casting or QueryInterface() calls. It's fundamentally different from traits and concepts.
It is not required and probably shouldn't be, or at least not in many cases. This problem is the exact same one as strongly typed vs dynamically typed languages, except at compile time. And the solutions are the same: interfaces (concepts) or duck typing and hope for the best. The end goal is pretty much the same: writing reusable code. The implementation differs, performance as well, but that is pretty much it.
Jul 24 2015
prev sibling parent =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 On 7/23/2015 10:39 PM, Tobias Müller wrote:
 You may aus well ask "How do interfaces in OO programming deal with this?".
It's a good question. And the answer is, the top level function does not list every interface used by the call tree. Nested function calls test at runtime if a particular interface is supported by an object, using dynamic casting or QueryInterface() calls. It's fundamentally different from traits and concepts.
IMO dynamic casting or QueryInterface() is a sign of bad design. But then again, I also like exception specifications, at least the way Java does them. In C++ they're pointless, that's true.

Tobi
Jul 25 2015
prev sibling next sibling parent "vitus" <vitus vitus.vitus> writes:
On Friday, 24 July 2015 at 04:42:59 UTC, Walter Bright wrote:
 On 7/23/2015 3:12 PM, Dicebot wrote:
 On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
 OK, I jumped into the middle of this discussion so probably 
 I'm speaking
 totally out of context...
This is exactly one major advantage of Rust traits I have been trying to explain, thanks for putting it up in much more understandable way :)
Consider the following: int foo(T: hasPrefix)(T t) { t.prefix(); // ok bar(t); // error, hasColor was not specified for T } void bar(T: hasColor)(T t) { t.color(); } Now consider a deeply nested chain of function calls like this. At the bottom, one adds a call to 'color', and now every function in the chain has to add 'hasColor' even though it has nothing to do with the logic in that function. This is the pit that Exception Specifications fell into. I can see these possibilities: 1. Require adding the constraint annotations all the way up the call tree. I believe that this will not only become highly annoying, it might make generic code impractical to write (consider if bar was passed as an alias). 2. Do the checking only for 1 level, i.e. don't consider what bar() requires. This winds up just pulling the teeth of the point of the constraint annotations. 3. Do inference of the constraints. I think that is indistinguishable from not having annotations as being exclusive.
Fun begins:

    void foo(T)(T t) if (hasPrefix!T || hasSuffix!T)
    {
        static if (...) t.prefix();
        else t.suffix();

        mixin("t.bar();");
    }
Jul 23 2015
prev sibling next sibling parent reply Artur Skawina via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 07/24/15 06:43, Walter Bright via Digitalmars-d wrote:
 On 7/23/2015 3:12 PM, Dicebot wrote:
 On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
 OK, I jumped into the middle of this discussion so probably I'm speaking
 totally out of context...
This is exactly one major advantage of Rust traits I have been trying to explain, thanks for putting it up in much more understandable way :)
Consider the following: int foo(T: hasPrefix)(T t) { t.prefix(); // ok bar(t); // error, hasColor was not specified for T
The fact that some other concept/trait implementations got it wrong is not really an argument against a sane implementation.

Basically, it can work like this:

- traits, like your 'hasPrefix', check that 'T' implements an interface (defined in hasPrefix). (this does the job that template constraints do right now, and more [1])

- the compiler instantiates the template with a mock (defined inside 'hasPrefix'). (this immediately catches every illegal access, like 't.suffix', and results in a clear and informative error message)

- the 'T' inside foo is still the original type that it was called with, so 'bar(t)' will succeed. But it needs to be conditionally enabled for just the types that implement 'hasColor' -- and this is exactly what you'd want from traits. So guard it, for example, `static if (is(T:hasColor)) bar(t);`; note that when `bar(t)` is an alias or mixin, this can be done inside the aliased or mixed-in code.

There are some syntax sugar possibilities here (aot there should be a way to access other traits without introducing a named function). http://forum.dlang.org/post/mailman.4484.1434139778.7663.digitalmars-d puremagic.com has one example, using a slightly different syntax (the 'idiomatic D' way would be an is-expression inside static-if introducing the alias, but `is()` makes code extremely ugly and unreadable).

[1] "and more": it allows for overloading on traits, something that can not be (cleanly) done with constraints. When there is more than one candidate template, the compiler can easily determine the most specialized one, e.g. if both 'InputRange' and 'ForwardRange' match, it just needs to try to instantiate IR with the mock from the FW range trait (and vice versa); if that fails then it means that FW should be chosen. IOW it works similarly to "normal" template overload resolution. Note that this only needs to be done (lazily/on-demand) once per trait-set.
     }
 
     void bar(T: hasColor)(T t) {
        t.color();
     }
 
 Now consider a deeply nested chain of function calls like this. At the bottom,
one adds a call to 'color', and now every function in the chain has to add
'hasColor' even though it has nothing to do with the logic in that function.
No, as long as the extra functionality is optional, no changes to callers are required -- at least not for the statically dispatched code that we're talking about here. If the new code /requires/ extra functionality then it needs to be explicitly requested.

This is no different from how D classes work -- you either have to request a subclass to use it, or check with `cast(SubClass)` at run-time. Traits work at compile-time, that's all.

artur
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 4:55 AM, Artur Skawina via Digitalmars-d wrote:
 Basically, it can work like this:

 - traits, like your 'hasPrefix', check that 'T' implements an
    interface (defined in hasPrefix).
    (this does the job that template constraints do right now, and
    more [1])

 - the compiler instantiates the template with a mock (defined
    inside 'hasPrefix').
    (this immediately catches every illegal access, like 't.suffix'
    and results in a clear and informative error message)
That really isn't any different from a user calling it with a 'hasPrefix' that doesn't have a 'suffix'.
 - the 'T' inside foo is still the original type that it was called
    with, so 'bar(t)' will succeed. But it needs to be conditionally
    enabled for just the types that implement 'hasColor' -- and this
    is exactly what you'd want from traits. So guard it, for example,
    `static if (is(T:hasColor)) bar(t);`; note that when 'bar(t)` is
    an alias or mixin, this can be done inside the aliased or mixed-in
    code.
As I mentioned in the antecedent, this pulls the teeth of the point of the trait, because what the function needs is 'hasPrefix && hasSuffix', and your position is that only 'hasPrefix' is required. This leaves us exactly where D is now.
    There are some syntax sugar possibilities here (aot there should
    be a way to access other traits without introducing a named function).
    http://forum.dlang.org/post/mailman.4484.1434139778.7663.digitalmars-d puremagic.com
    has one example, using a slightly different syntax (the 'idiomatic D'
    way would be an is-expression inside static-if introducing the alias,
    but `is()` makes code extremely ugly and unreadable).
As I mentioned to Dicebot, syntactical improvements are possible (though I don't think we should rush into things here).
 [1] "and more": it allows for overloading on traits, something
      that can not be (cleanly) done with constraints.
Overloading with constraints is commonplace in Phobos. Haven't really had any trouble with it.
 Now consider a deeply nested chain of function calls like this. At the bottom,
one adds a call to 'color', and now every function in the chain has to add
'hasColor' even though it has nothing to do with the logic in that function.
No, as long as the extra functionality is optional no changes to callers are required, at least not for statically dispatched code that we're talking about here.
That's how D works today.
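For reference, the way this is typically expressed in current D is an ordinary compile-time capability check; a sketch only, where hasColorCapability is a hypothetical helper template, not an existing library trait:

    enum hasColorCapability(T) = __traits(compiles, (T t) { t.color(); });

    void foo(T)(T t)
    {
        t.prefix();                         // the capability foo actually requires
        static if (hasColorCapability!T)
            bar(t);                         // only compiled in for types that support it
    }

    void bar(T)(T t)
    {
        t.color();
    }

Callers of foo don't change either way; only types that actually have color() get the extra call.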
Jul 24 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Jul 24, 2015 at 11:29:28AM -0700, Walter Bright via Digitalmars-d wrote:
[...]
 Overloading with constraints is commonplace in Phobos. Haven't really
 had any trouble with it.
[...] Actually, I've had trouble with overloading with constraints. Unlike C++, D does not allow multiple possible template instantiations in overload sets. While this is ostensibly a good thing, it makes writing fallbacks extremely cumbersome.

For example, suppose I have a set of overloads of myFunc(), each of which specifies some set of constraints defining which subset of types they are implemented for:

    void myFunc(T)(T t) if (someSetOfConditions!T) { ... }
    void myFunc(T)(T t) if (someOtherSetOfConditions!T) { ... }
    void myFunc(T)(T t) if (yetAnotherSetOfConditions!T) { ... }

As long as the conditions are mutually exclusive, everything is OK. But suppose these conditions describe specializations of the function for specific concrete types (e.g. I want to take advantage of extra properties of the concrete types to implement faster algorithms), but I want a fallback function containing a generic implementation that works for all types. Unfortunately, I can't just write another overload of myFunc without constraints, because it causes conflicts with the preceding overloads. The only way to achieve this is to explicitly negate every condition in all other overloads:

    // generic fallback
    void myFunc(T)(T t)
        if (!someSetOfConditions!T &&
            !someOtherSetOfConditions!T &&
            !yetAnotherSetOfConditions!T)
    { ... }

This isn't too bad at first glance -- a little extra typing never hurt nobody, right? Unfortunately, this doesn't work if, say, all these overloads are part of some module M, but the user wishes to extend the functionality of myFunc by providing his own specialization for his own user-defined type, say, in a different module. That is prohibited because the generic myFunc above doesn't have the negation of whatever conditions the user placed in his specialization, so it will cause an overload conflict.

It is also a maintenance issue: whenever somebody adds or removes a specialization of myFunc (even within module M), the sig constraints of the generic fallback must be updated accordingly. It gets worse if the original sig constraints were buggy -- then you have to fix them in both the specialization and the generic fallback -- and hope you didn't miss any conditions (e.g. a typo causes the generic fallback not to pick up something that the specialization now declines).

If you think this is a contrived scenario, you should take a look at std.conv, where this particular problem has become a maintenance headache. Among the several dozen overloads of toImpl, there are all kinds of sig constraints where, at first glance, it isn't obvious whether or not they cover all the necessary cases, and whether the various fallbacks (yes, there are multiple! the above scenario is a simplified description) correctly catch all the cases they ought to catch. When there is a bug in one of the toImpl overloads, it's a nightmare to find out which one it is -- because you have to parse and evaluate all the sig constraints of every overload just to locate the offending function.

Maybe as a Phobos *user* you perceive that overloading with sig constraints is nice and clean... But as someone who was foolhardy enough once to attempt to sort out the tangled mess that is the sig constraints of toImpl overloads, I'm getting a rather different perception of the situation.

T

-- 
What do you mean the Internet isn't filled with subliminal messages? What about all those buttons marked "submit"??
Jul 24 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 11:50 AM, H. S. Teoh via Digitalmars-d wrote:
 The only way to achieve this is to explicitly
 negate every condition in all other overloads:
Another way is to list everything you accept in the constraint, and then separate out the various implementations in the template body using static if. It's a lot easier making the documentation for that, too.
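A minimal concrete sketch of that layout, with range traits standing in as the conditions (the names below are illustrative only, not code from the post):

    import std.range.primitives : isInputRange, hasLength;

    // One documented overload; the constraint lists everything accepted.
    size_t elementCount(R)(R r) if (isInputRange!R)
    {
        static if (hasLength!R)
        {
            return r.length;          // fast path for ranges that know their length
        }
        else
        {
            size_t n = 0;
            foreach (e; r) ++n;       // generic fallback: walk the range
            return n;
        }
    }

The user-facing signature shows a single constraint; the specialization choice stays an implementation detail.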
Jul 24 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 05:27:48 UTC, Walter Bright wrote:
 On 7/24/2015 11:50 AM, H. S. Teoh via Digitalmars-d wrote:
 The only way to achieve this is to explicitly
 negate every condition in all other overloads:
Another way is to list everything you accept in the constraint, and then separate out the various implementations in the template body using static if. It's a lot easier making the documentation for that, too.
I've considered off and on arguing that a function like find should have a top level template that has the constraints that cover all of the overloads, and then either putting each of the individual functions with their own constraints internally or using separate static ifs within a single function (or some combination of the two). That way, you end up with a simple template constraint that the user sees rather than the huge mess that you get now - though if you still have individual functions within that outer template, then that doesn't really fix the overloading problem except insomuch as the common portion of their template constraints (which is then in the outer template's constraint) would then not have to be repeated.

However, when anyone has brought up anything like this, Andrei has argued against it, though I think that those arguments had to do primarily with the documentation, because the person suggesting the change was looking for simplified documentation, and Andrei thought that the ddoc generation should be smart enough to be able to combine things for you. So, maybe it wouldn't be that hard to convince him of what I'm suggesting, but I don't know. I haven't tried yet. It's just something that's occurred to me from time to time, and I've wondered if we should change how we go about things in a manner along those lines. It could help with the documentation and understanding the template constraint as well as help reduce the pain with the overloads.

Andrei has definitely been against overloading via static if though whenever that suggestion has been made. I think that he thinks that if you do that, it's a failure of template constraints - though if you use an outer template and then overload the function internally, then you're still using template constraints rather than static if, and you get simplified template constraints anyway. So, maybe we should look at something along those lines rather than proliferating the top-level function overloading like we're doing now.

- Jonathan M Davis
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 11:10 PM, Jonathan M Davis wrote:
 So, maybe we should look at something along those lines rather than
 proliferating the top-level function overloading like we're doing now.
Consider the following pattern, which I see often in Phobos:

    void foo(T)(T t) if (A) { ... }
    void foo(T)(T t) if (!A && B) { ... }

from a documentation (i.e. user) perspective. Now consider:

    void foo(T)(T t) if (A || B)
    {
        static if (A) { ... }
        else static if (B) { ... }
        else static assert(0);
    }

Makes a lot more sense to the user, who just sees one function that needs A or B, and doesn't see the internal logic.
Jul 25 2015
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 08:48:40 UTC, Walter Bright wrote:
 On 7/24/2015 11:10 PM, Jonathan M Davis wrote:
 So, maybe we should look at something along those lines rather 
 than
 proliferating the top-level function overloading like we're 
 doing now.
Consider the following pattern, which I see often in Phobos:

    void foo(T)(T t) if (A) { ... }
    void foo(T)(T t) if (!A && B) { ... }

from a documentation (i.e. user) perspective. Now consider:

    void foo(T)(T t) if (A || B)
    {
        static if (A) { ... }
        else static if (B) { ... }
        else static assert(0);
    }

Makes a lot more sense to the user, who just sees one function that needs A or B, and doesn't see the internal logic.
Yeah, though, I believe that Andrei has argued against that every time that someone suggests doing that. IIRC, he wants ddoc to do that for you somehow rather than requiring that we write code that way.

And from a test perspective, it's actually a bit ugly to take function overloads and turn them into static ifs, because instead of having separate functions that you can put unittest blocks under, you have to put all of those tests in a single unittest block or put the unittest blocks in a row with comments on them to indicate which static if branch they go with. It also has the problem that the function can get _way_ too long (e.g. putting all of the overloads of find in one function would be a really bad idea).

Alternatively, you could do something like

    template foo(T) if(A || B)
    {
        void foo()(T t) if(A) {}
        void foo()(T t) if(B) {}
    }

which gives you the simplified template constraint for the documentation, though for better or worse, you'd still get the individual template constraints listed separately for each overload - though given how often each overload needs an explanation, that's not necessarily bad.

And in many cases, what you really have is overlapping constraints rather than truly distinct ones. So, you'd have something like

    auto foo(alias pred, R)(R r)
        if(testPred!pred && isInputRange!R && !isForwardRange!R)
    {}

    auto foo(alias pred, R)(R r)
        if(testPred!pred && isForwardRange!R)
    {}

and be turning it into something like

    template foo(alias pred)
        if(testPred!pred)
    {
        auto foo(R)(R r)
            if(isInputRange!R && !isForwardRange!R)
        {}

        auto foo(R)(R r)
            if(isForwardRange!R)
        {}
    }

So, part of the template constraint gets factored out completely. And if you want to factor it out more than that but still don't want to use static if because of how it affects the unit tests, or because you don't want the function to get overly large, then you can just forward it to another function. e.g.

    auto foo(alias pred, R)(R r)
        if(testPred!pred && isInputRange!R)
    {
        return _foo!pred(r);
    }

    auto _foo(alias pred, R)(R r) if(!isForwardRange!R) {}
    auto _foo(alias pred, R)(R r) if(isForwardRange!R) {}

or go for both the outer template and forwarding, and do

    template foo(alias pred)
        if(testPred!pred)
    {
        auto foo(R)(R r)
            if(isInputRange!R)
        {
            return _foo(r);
        }

        auto _foo(R)(R r) if(!isForwardRange!R) {}
        auto _foo(R)(R r) if(isForwardRange!R) {}
    }

We've already created wrapper templates for at least some of the functions in Phobos so that you can partially instantiate them - e.g.

    alias myMap = map!(a => a.func());

So, we're already partially moving stuff up a level in some cases. We just haven't used it as a method to simplify the main template constraint that the user sees or to simplify overloads.

I do think that it can make sense to put very similar overloads in a single function with static if branches like you're suggesting, but I do think that it's a bit of a maintenance issue to do it for completely distinct overloads - especially if there are several of them rather than just a couple. But it's still possible to combine their template constraints at a higher level and have overloaded functions rather than simply using static ifs.

- Jonathan M Davis
Jul 25 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 2:14 AM, Jonathan M Davis wrote:
 I do think that it can make sense to put very similar overloads in a single
 function with static if branches like you're suggesting, but I do think that
 it's a bit of a maintenance issue to do it for completely distinct overloads -
 especially if there are several of them rather than just a couple. But it's
 still possible to combine their template constraints at a higher level and have
 overloaded functions rather than simply using static ifs.
I also sometimes see:

    void foo(T)(T t) if (A && B) { ... }
    void foo(T)(T t) if (A && !B) { ... }

The user should never have to see the B constraint in the documentation. This should be handled internally with static if.
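A sketch of the "handled internally" version, with isInputRange standing in for A and isForwardRange for B (stand-ins only; the real conditions are whatever the overloads actually use):

    import std.range.primitives : isInputRange, isForwardRange;

    void foo(T)(T t) if (isInputRange!T)
    {
        static if (isForwardRange!T)
        {
            // take advantage of save() here
            auto copy = t.save;
        }
        else
        {
            // single-pass fallback
        }
    }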
Jul 25 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/15 5:34 AM, Walter Bright wrote:
 On 7/25/2015 2:14 AM, Jonathan M Davis wrote:
 I do think that it can make sense to put very similar overloads in a
 single
 function with static if branches like you're suggesting, but I do
 think that
 it's a bit of a maintenance issue to do it for completely distinct
 overloads -
 especially if there are several of them rather than just a couple. But
 it's
 still possible to combine their template constraints at a higher level
 and have
 overloaded functions rather than simply using static ifs.
I also sometimes see:

    void foo(T)(T t) if (A && B) { ... }
    void foo(T)(T t) if (A && !B) { ... }

The user should never have to see the B constraint in the documentation. This should be handled internally with static if.
There are problems with that but I do agree with the sentiment. -- Andrei
Jul 25 2015
prev sibling parent reply "Enamex" <enamex+d outlook.com> writes:
On Saturday, 25 July 2015 at 09:14:04 UTC, Jonathan M Davis wrote:
 . . .
 auto foo(alias pred, R)(R r)
     if(testPred!pred && isInputRange!R && !isForwardRange!R)
 {}

 auto foo(alias pred, R)(R r)
     if(testPred!pred && isForwardRange!R)
 {}

 and be turning it into something like

 template foo(alias pred)
     if(testPred!pred)
 {
     auto foo(R)(R r)
         if(isInputRange!R && !isForwardRange!R)
     {}

     auto foo(R)(R r)
         if(isForwardRange!R)
     {}
 }
 . . .
 - Jonathan M Davis
The example(s) is confusing me. `foo!(first)(second);` isn't really an alternative to `foo(first, second);`. Am I misreading something?
Jul 25 2015
parent "Nicholas Wilson" <iamthewilsonator hotmail.com> writes:
On Sunday, 26 July 2015 at 01:55:12 UTC, Enamex wrote:
 On Saturday, 25 July 2015 at 09:14:04 UTC, Jonathan M Davis 
 wrote:
 . . .
 auto foo(alias pred, R)(R r)
     if(testPred!pred && isInputRange!R && !isForwardRange!R)
 {}

 auto foo(alias pred, R)(R r)
     if(testPred!pred && isForwardRange!R)
 {}

 and be turning it into something like

 template foo(alias pred)
     if(testPred!pred)
 {
     auto foo(R)(R r)
         if(isInputRange!R && !isForwardRange!R)
     {}

     auto foo(R)(R r)
         if(isForwardRange!R)
     {}
 }
 . . .
 - Jonathan M Davis
The example(s) is confusing me. `foo!(first)(second);` isn't really an alternative to `foo(first, second);`_?_. Am I misreading something?
No. Yes. `first` is a compile-time parameter and can be anything. In this case it defines a type (in the examples, T is used for a generic type, and R for a range), but it can be anything: a type, a value parameter, i.e. foo!(size_t a)(Bar b); or another symbol (a function, a class, or even another template with its own set of arguments). It can be variadic as well, i.e. foo!(T...)(some_args). `second` is the set of runtime parameters, which can be of types defined by the compile-time arguments.
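A small, made-up illustration of the two parameter lists:

    // 'pred' is a compile-time (template) argument; 'r' is a runtime argument.
    auto applyPred(alias pred, R)(R r)
    {
        return pred(r);
    }

    unittest
    {
        // Called as applyPred!(first)(second):
        // the lambda is baked in at compile time, "hello" is passed at run time.
        assert(applyPred!(s => s.length)("hello") == 5);
    }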
Jul 25 2015
prev sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 25 July 2015 at 18:48, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 7/24/2015 11:10 PM, Jonathan M Davis wrote:
 So, maybe we should look at something along those lines rather than
 proliferating the top-level function overloading like we're doing now.
Consider the following pattern, which I see often in Phobos:

    void foo(T)(T t) if (A) { ... }
    void foo(T)(T t) if (!A && B) { ... }

from a documentation (i.e. user) perspective. Now consider:

    void foo(T)(T t) if (A || B)
    {
        static if (A) { ... }
        else static if (B) { ... }
        else static assert(0);
    }

Makes a lot more sense to the user, who just sees one function that needs A or B, and doesn't see the internal logic.
This! I've felt this way with phobos in particular for ages. I've argued this exact case before, and it's been rejected. I much prefer static if inside functions rather than polluting the namespace (and docs) with a bunch of overloads. Also, these symbols with lots of constraints can get really long!
Jul 25 2015
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/15 2:50 PM, H. S. Teoh via Digitalmars-d wrote:
 Maybe as a Phobos *user* you perceive that overloading with sig
 constraints is nice and clean... But as someone who was foolhardy enough
 once to attempt to sort out the tangled mess that is the sig constraints
 of toImpl overloads, I'm getting a rather different perception of the
 situation.
I think we're in good shape there. -- Andrei
Jul 25 2015
prev sibling parent reply Artur Skawina via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 07/24/15 20:29, Walter Bright via Digitalmars-d wrote:
 On 7/24/2015 4:55 AM, Artur Skawina via Digitalmars-d wrote:
 Basically, it can work like this:

 - traits, like your 'hasPrefix', check that 'T' implements an
    interface (defined in hasPrefix).
    (this does the job that template constraints do right now, and
    more [1])

 - the compiler instantiates the template with a mock (defined
    inside 'hasPrefix').
    (this immediately catches every illegal access, like 't.suffix'
    and results in a clear and informative error message)
That really isn't any different from a user calling it with a 'hasPrefix' that doesn't have a 'suffix'.
The difference is that right now the developer has to write a unit-test per function that uses `hasPrefix`, otherwise the code might not even be verified to compile. 100% unit-test coverage is not going to happen in practice, and just like with docs, making things easier and reducing boilerplate to a minimum would improve the situation dramatically.
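For concreteness, the kind of per-function check being described can be hand-written in today's D roughly like this (PrefixMock is a hypothetical stand-in type, not an existing facility):

    // A mock that provides only what 'hasPrefix' promises.
    struct PrefixMock
    {
        void prefix() {}
    }

    void foo(T)(T t)
    {
        t.prefix();
        // t.suffix();   // would make the check below fail: the mock has no suffix()
    }

    // Verifies that foo compiles when given nothing beyond the hasPrefix interface.
    static assert(__traits(compiles, foo(PrefixMock.init)));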
 - the 'T' inside foo is still the original type that it was called
    with, so 'bar(t)' will succeed. But it needs to be conditionally
    enabled for just the types that implement 'hasColor' -- and this
    is exactly what you'd want from traits. So guard it, for example,
    `static if (is(T:hasColor)) bar(t);`; note that when 'bar(t)` is
    an alias or mixin, this can be done inside the aliased or mixed-in
    code.
As I mentioned in the antecedent, this pulls the teeth of the point of the trait, because what the function needs is 'hasPrefix && hasSuffix', and your position is that only 'hasPrefix' is required.
The function only needs what it uses, ie the i/f defined by `hasPrefix`. It can optionally access other i/fs like `hasSuffix` if the type supports them, and it can do so w/o affecting callers (or callees). Iff you modify the function to /require/ both traits, then, yes, you need to also update the traits, eg create and use a `hasAdfixes` trait instead. This is a feature and not a problem; traits would of course be opt-in and using them only makes sense when the interfaces are very stable, like D's ranges -- if a required primitive is added, removed or modified then the range-traits *should* be updated.
 This leaves us exactly where D is now.
No, right now D does not provide any (built-in) functionality for restricting (and automatically documenting) non-dynamic interfaces.
 [1] "and more": it allows for overloading on traits, something
      that can not be (cleanly) done with constraints.
Overloading with constraints is commonplace in Phobos. Haven't really had any trouble with it.
It doesn't work (i.e. does not scale) when the constraints are non-exclusive. At some point one reaches for:

    template _pick_f(T) {
        static if (is(T:ForwardRange)/*&&smth&&(!smth_else||wever)*/)
            enum _pick_f = 1;
        else static if (is(T:ForwardRange)/*&&!(smth&&(!smth_else||wever))*/)
            enum _pick_f = 2;
        else static if (is(T:InputRange))
            enum _pick_f = 3;
        /*...*/
    }

    auto f(T)(T a) if (_pick_f!T==1) {/*...*/}
    auto f(T)(T a) if (_pick_f!T==2) {/*...*/}
    auto f(T)(T a) if (_pick_f!T==3) {/*...*/}

Ugly as it is, it's still better than the alternative (where the ugliness isn't contained, but spread out over all `f` definitions and not guaranteed to be coherent).

artur
Jul 24 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 24 July 2015 at 20:57:34 UTC, Artur Skawina wrote:
 The difference is that right now the developer has to write a 
 unit-test per function that uses `hasPrefix`, otherwise the 
 code might not even be verified to compile. 100% unit-test 
 coverage is not going to happen in practice, and just like with 
 docs, making things easier and reducing boilerplate to a 
 minimum would improve the situation dramatically.
But you see, this is exactly the wrong attitude. Why on earth should we make life easier for folks who don't bother to get 100% unit test coverage? A feature has to add more value than simply making it so that you're slightly less screwed if you don't write unit tests.

- Jonathan M Davis
Jul 24 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 24 July 2015 at 21:32:19 UTC, Jonathan M Davis wrote:
 This is exactly wrong attitude. Why on earth should we make 
 life easier for folks who don't bother to get 100% unit test 
 coverage?
Because that is 99% of D users...
Jul 24 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 24 July 2015 at 21:48:23 UTC, Tofu Ninja wrote:
 On Friday, 24 July 2015 at 21:32:19 UTC, Jonathan M Davis wrote:
 This is exactly wrong attitude. Why on earth should we make 
 life easier for folks who don't bother to get 100% unit test 
 coverage?
Because that is 99% of D users...
If so, they have no excuse. D has made it ridiculously easy to unit test your code. And I very much doubt that 99% of D users don't unit test their code.

There are cases where 100% isn't possible - e.g. because of an assert(0) or because you're dealing with UI code or the like where it simply isn't usable without running the program - but even then, the test coverage should be as close to 100% as can be achieved, which isn't usually going to be all that far from 100%.

We should be ashamed when our code is not as close to 100% code coverage as is feasible (which is usually 100%).

- Jonathan M Davis
Jul 24 2015
next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 24 July 2015 at 22:07:14 UTC, Jonathan M Davis wrote:
 On Friday, 24 July 2015 at 21:48:23 UTC, Tofu Ninja wrote:
 On Friday, 24 July 2015 at 21:32:19 UTC, Jonathan M Davis 
 wrote:
 This is exactly wrong attitude. Why on earth should we make 
 life easier for folks who don't bother to get 100% unit test 
 coverage?
Because that is 99% of D users...
If so, they have no excuse. D has made it ridiculously easy to unit test your code. And I very much doubt that 99% of D users don't unit test their code. There are cases where 100% isn't possible - e.g. because of an assert(0) or because you're dealing with UI code or the like where it simply isn't usable without running the program - but even then, the test coverage should be as close to 100% as can be achieved, which isn't usually going to be all that far from 100%. We should be ashamed when our code is not as close to 100% code coverage as is feasible (which is usually 100%). - Jonathan M Davis
I meant that 99% don't have 100% unit test coverage, but even close to 100% is still probably not that common. Most D users are hobbyists I think (though I could be wrong), and hobbyists are lazy.
Jul 24 2015
prev sibling next sibling parent reply Justin Whear <justin economicmodeling.com> writes:
On Fri, 24 Jul 2015 22:07:12 +0000, Jonathan M Davis wrote:

 On Friday, 24 July 2015 at 21:48:23 UTC, Tofu Ninja wrote:
 On Friday, 24 July 2015 at 21:32:19 UTC, Jonathan M Davis wrote:
 This is exactly wrong attitude. Why on earth should we make life
 easier for folks who don't bother to get 100% unit test coverage?
Because that is 99% of D users...
If so, they have no excuse. D has made it ridiculously easy to unit test your code. And I very much doubt that 99% of D users don't unit test their code. There are cases where 100% isn't possible - e.g. because of an assert(0) or because you're dealing with UI code or the like where it simply isn't usable without running the program - but even then, the test coverage should be as close to 100% as can be achieved, which isn't usually going to be all that far from 100%. We should be ashamed when our code is not as close to 100% code coverage as is feasible (which is usually 100%). - Jonathan M Davis
Commercial (though in-house) D library and tools writer here. We run code coverage as part of our CI process and report results back to GitLab (our self-hosted GitHub-like). Merge requests all report the code coverage of the pull (haven't figured out how to do a delta against the old coverage yet).

I regularly test code to 100% of coverable lines, where coverable lines are all but:
 - assert(0, ...)
 - test case lines that aren't supposed to execute (e.g. lambdas in a predSwitch)

I agree that there's really no excuse and think we ought to orient the language towards serious professionals who will produce quality code. Bad code is bad code, regardless of the language.
Jul 24 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/15 6:58 PM, Justin Whear wrote:
 I agree that there's really no excuse and think we ought to orient the
 language towards serious professionals who will produce quality code.
 Bad code is bad code, regardless of the language.
YES! Amen to that. -- Andrei
Jul 25 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 3:07 PM, Jonathan M Davis wrote:
 If so, they have no excuse. D has made it ridiculously easy to unit test your
 code. And I very much doubt that 99% of D users don't unit test their code.
D has done a great job of making unit tests the rule, rather than the exception.
 There are cases where 100% isn't possible - e.g. because of an assert(0) or
 because you're dealing with UI code or the like where it simply isn't usable
 without running the program - but even then, the test coverage should be as
 close to 100% as can be achieved, which isn't usually going to be all that far
 from 100%.

 We should be ashamed when our code is not as close to 100% code coverage as is
 feasible (which is usually 100%).
Right on, Jonathan!
Jul 24 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 00:28:19 UTC, Walter Bright wrote:
 On 7/24/2015 3:07 PM, Jonathan M Davis wrote:
 D has done a great job of making unit tests the rule, rather 
 than the exception.
Yeah. I wonder what would happen with some of the folks that I've worked with who were anti-unit testing if they were programming in D. It would be more or less shoved in their face at that point rather than having it in a separate set of code somewhere that they could ignore, and it would be so easy to put them in there that it would have to be embarrassing on some level at least if they didn't write them. But they'd probably still argue against them and argue that D was stupid for making them so prominent... :( I do think that our built-in unit testing facilities are a huge win for us though. It actually seems kind of silly at this point that most other languages don't have something similar given how critical they are to high quality, maintainable code.
 We should be ashamed when our code is not as close to 100% 
 code coverage as is
 feasible (which is usually 100%).
Right on, Jonathan!
I must say that this is a rather odd argument to be having though, since normally I'm having to argue that 100% test coverage isn't enough rather than that code needs to have 100% (e.g. how range-based algorithms need to be tested with both value type ranges and reference type ranges, which doesn't increase the code coverage at all but does catch bugs with how save is used, and without that, those bugs won't be caught). So, having to argue that all code should have 100% code coverage (or as close to it as is possible anyway) is kind of surreal. I would have thought that that was a given at this point. The real question is how far you need to go past that to ensure that your code works correctly. - Jonathan M Davis
Jul 25 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 12:08 AM, Jonathan M Davis wrote:
 I must say that this is a rather odd argument to be having though, since
 normally I'm having to argue that 100% test coverage isn't enough rather than
 that code needs to have 100% (e.g. how range-based algorithms need to be tested
 with both value type ranges and reference type ranges, which doesn't increase
 the code coverage at all but does catch bugs with how save is used, and without
 that, those bugs won't be caught). So, having to argue that all code should
have
 100% code coverage (or as close to it as is possible anyway) is kind of
surreal.
 I would have thought that that was a given at this point. The real question is
 how far you need to go past that to ensure that your code works correctly.
It's still unusual to have 100% coverage in Phobos, and this is not because it is hard. Most of the time, it is easy to do. It's just that nobody checks it. Although we have succeeded in making unit tests part of the culture, the next step is 100% coverage.

I know that 100% unit test coverage hardly guarantees code correctness. However, since I started using code coverage analyzers in the 1980s, the results are surprising - code with 100% test coverage has at LEAST an order of magnitude fewer bugs showing up in the field. It's surprisingly effective.

I would have had a LOT more trouble shipping the Warp project if I hadn't gone with 100% coverage from the ground up. Nearly all the bugs it had in the field were due to my misunderstandings of the peculiarities of gpp - the code had worked as I designed it.

This is a huge reason why I want to switch to ddmd. I want to improve the quality of the compiler with unit tests. The various unit test schemes I've tried for C++ are all ugly, inconvenient, and simply a bitch. It's like trying to use a slide rule after you've been given a calculator.

(I remember the calculator revolution. It happened my freshman year at college. September 1975 had $125 slide rules in the campus bookstore. December they were at $5 cutout prices, and were gone by January. I never saw anyone use a slide rule again. I've never seen a technological switchover happen so fast, before or since.)
Jul 25 2015
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 08:58:26 UTC, Walter Bright wrote:
 It's still unusual to have 100% coverage in Phobos, and this is 
 not because it is hard. Most of the time, it is easy to do. 
 It's just that nobody checks it.
Yeah. I thought there had been a change to make it so that the coverage got printed out as part of the build, but I don't see it right now, at least on FreeBSD. Folks would still have to pay attention to those numbers though.

Writing the tests is easy, and if someone is conscientious about their tests, I'd expect them to typically hit 100% without having to check (though you can still miss the occasional branch even then - especially with templated code), but frequently folks just write a few tests to make sure that the most basic functionality works and then call it a day. I know that I'm too often guilty of assuming that I hit 100%, because I was thorough with my testing (since I tend to be _very_ thorough with my tests), and I should do better at verifying that I didn't miss something.

I did put an effort a while back into making sure that std.datetime was as close to 100% as was possible, but I haven't checked it in a while... Well, that's embarrassing. Almost all of the uncovered lines are lines that should never be run - e.g. assert(0) - but it does look like there are some lines which aren't covered which should be (not many in comparison to the whole, but there shouldn't be any).

Interestingly though, the coverage is worse than it should be because it was generated from the release build of the unit tests, and stuff like invariants doesn't get run. I'll have to figure out how to get it to give me the coverage for the debug run of the tests, since that would be more accurate - though std.datetime can never actually hit 100% thanks to all of the assert(0) lines in it and the scope(failure) lines for printing out extra information when a test does fail. I suspect that I'm all of a percentage point off of what the max is.

Interestingly enough, @disable this() {} counts as a 0 too, even though the compiler should know that it's impossible for that to run even if the code is wrong - unlike assert(0). It _would_ be nice though if the assert(0) lines at least weren't counted, and it is kind of weird that the unit test lines count (though aside from scope(failure) lines, those should all run; since I tend to put scope(failure) lines in unit tests for better output on failures, that's going to hurt my code coverage). So, _actually_ hitting 100% is likely to be annoyingly elusive for a lot of code, even if it's actually fully tested. But while std.datetime is almost as close as it can get, it's still _almost_ as close as it can get rather than all the way. :( Clearly, I have a PR or two to write...
 Although we have succeeded in making unit tests part of the 
 culture, the next step is 100% coverage.
Agreed.
 I know that 100% unit test coverage hardly guarantees code 
 correctness. However, since I started using code coverage 
 analyzers in the 1980s, the results are surprising - code with 
 100% test coverage has at LEAST an order of magnitude fewer 
 bugs showing up in the field. It's surprisingly effective.
Oh, definitely. But while 100% unit test coverage is a huge step forward, I also think that for truly solid code, you want to go beyond that and make sure that you test corner cases and the like, test with a large enough variety of types with templates to catch behavioral bugs, etc. So, I don't think that we want to stop at 100% code coverage, but we do need to make sure that we're at 100% first and foremost.
 This is a huge reason why I want to switch to ddmd. I want to 
 improve the quality of the compiler with unit tests.
That would definitely be cool.
 The various unit tests schemes I've tried for C++ are all ugly, 
 inconvenient, and simply a bitch. It's like trying to use a 
 slide rule after you've been given a calculator.
Yeah. It's not that hard to write one, and the tests themselves generally end up being pretty much the same as what you'd have in D, but there's still an annoying amount of boilerplate in getting them declared and set up - which is annoying enough in and of itself, but yeah, once you're used to D's unit tests, it seems particularly onerous.
 (I remember the calculator revolution. It happened my freshman 
 year at college. September 1975 had $125 slide rules in the 
 campus bookstore. December they were at $5 cutout prices, and 
 were gone by January. I never saw anyone use a slide rule 
 again. I've never seen a technological switchover happen so 
 fast, before or since.)
If only folks thought that D's advantages over C++ were that obvious. ;) - Jonathan M Davis
Jul 25 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 2:53 AM, Jonathan M Davis wrote:
 Oh, definitely. But while 100% unit test coverage is a huge step forward, I
also
 think that for truly solid code, you want to go beyond that and make sure that
 you test corner cases and the like, test with a large enough variety of types
 with templates to catch behavioral bugs, etc. So, I don't think that we want to
 stop at 100% code coverage, but we do need to make sure that we're at 100%
first
 and foremost.
There's another thing I discovered. If functions are broken up into smaller logical units, the unit testing gets easier and there are fewer bugs.

For example, the dmd code that reads the dmd.conf file was one function that read the files, allocated memory, did the parsing, built the data structures, etc. By splitting all these things up, suddenly it gets a lot easier to test! For example, just use the normal file I/O functions to read the files, which are tested elsewhere. Boom, don't need to construct test files. The parsing logic can be easily handled by its own unit tests. And so on.

I think there's also a learned skill to having the fewest number of orthogonal unit tests that give 100% coverage, rather than a blizzard of tests that overlap each other.
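As a rough illustration of the idea (not the actual dmd code), splitting the parsing out of the I/O lets the parser be unit tested on in-memory strings:

    import std.algorithm : findSplit;
    import std.string : strip, splitLines;

    // Pure parsing step: no file I/O, easy to unit test.
    string[string] parseConf(string text)
    {
        string[string] settings;
        foreach (line; text.splitLines)
        {
            line = line.strip;
            if (line.length == 0 || line[0] == ';')   // skip blanks and comments
                continue;
            if (auto parts = line.findSplit("="))
                settings[parts[0].strip] = parts[2].strip;
        }
        return settings;
    }

    unittest
    {
        auto s = parseConf("; comment\nDFLAGS = -I%@P%/../import\n");
        assert(s["DFLAGS"] == "-I%@P%/../import");
    }

    // The I/O step just reuses std.file.readText, which is tested elsewhere.
    string[string] readConf(string path)
    {
        import std.file : readText;
        return parseConf(readText(path));
    }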
 (I remember the calculator revolution. It happened my freshman year at
 college. September 1975 had $125 slide rules in the campus bookstore. December
 they were at $5 cutout prices, and were gone by January. I never saw anyone
 use a slide rule again. I've never seen a technological switchover happen so
 fast, before or since.)
If only folks thought that D's advantages over C++ were that obvious. ;)
Unfortunately, D's advantages only become more than a grab-bag of features after you've used it for a while. We are also still discovering the right way to use them.
Jul 25 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 08:58:26 UTC, Walter Bright wrote:
 I would have had a LOT more trouble shipping the Warp project 
 if I hadn't gone with 100% coverage from the ground up. Nearly 
 all the bugs it had in the field were due to my 
 misunderstandings of the peculiarities of gpp - the code had 
 worked as I designed it.

 This is a huge reason why I want to switch to ddmd. I want to 
 improve the quality of the compiler with unit tests. The 
 various unit tests schemes I've tried for C++ are all ugly, 
 inconvenient, and simply a bitch. It's like trying to use a 
 slide rule after you've been given a calculator.
LOL. I finally got some bugs sorted out on the project that I'm working on at work (in C++), which means that I can get back to what I was working on implementing before, and about all I recall for sure is that I was working on the unit tests for it. I don't know where I was with them. I find myself wishing that I had -cov so that I could figure out what I had left to test... :( It often seems like the advantages of some of D's features are more obvious when you have to go back to another language like C++ which doesn't have them. - Jonathan M Davis
Aug 04 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 4 August 2015 at 20:47:00 UTC, Jonathan M Davis wrote:
 LOL. I finally got some bugs sorted out on the project that I'm 
 working on at work (in C++), which means that I can get back to 
 what I was working on implementing before, and about all I 
 recall for sure is that I was working on the unit tests for it. 
 I don't know where I was with them. I find myself wishing that 
 I had -cov so that I could figure out what I had left to 
 test... :(

 It often seems like the advantages of some of D's features are 
 more obvious when you have to go back to another language like 
 C++ which doesn't have them.
What do you dislike about C++ coverage tooling in comparison with D's?
Aug 04 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, 4 August 2015 at 22:42:50 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 4 August 2015 at 20:47:00 UTC, Jonathan M Davis 
 wrote:
 LOL. I finally got some bugs sorted out on the project that 
 I'm working on at work (in C++), which means that I can get 
 back to what I was working on implementing before, and about 
 all I recall for sure is that I was working on the unit tests 
 for it. I don't know where I was with them. I find myself 
 wishing that I had -cov so that I could figure out what I had 
 left to test... :(

 It often seems like the advantages of some of D's features are 
 more obvious when you have to go back to another language like 
 C++ which doesn't have them.
What do you dislike about C++ coverage tooling in comparison with D's?
To get code coverage in C++, I'd have to go track down a tool to do it. There is none which is used as part of our normal build process at work. As it is, we only have unit tests because I went and added what was needed to write them and have been writing them. No one else has been writing them, and if I want any kind of code coverage stuff set up, I'd have to go spend the time to figure it out. With D, it's all built-in, and I don't have to figure out which tools to use or write any of them myself - either for unit testing or code coverage. They're just there and ready to go. - Jonathan M Davis
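For reference, the whole built-in workflow is just a unittest block next to the code plus a couple of compiler switches; a trivial sketch:

    // example.d -- compile and run with: dmd -unittest -cov -run example.d
    int square(int x)
    {
        return x * x;
    }

    unittest
    {
        assert(square(3) == 9);
        assert(square(-2) == 4);
    }

    void main() {}   // -cov writes example.lst with per-line execution counts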
Aug 04 2015
next sibling parent "rsw0x" <anonymous anonymous.com> writes:
On Wednesday, 5 August 2015 at 04:10:22 UTC, Jonathan M Davis 
wrote:
 On Tuesday, 4 August 2015 at 22:42:50 UTC, Ola Fosheim Grøstad 
 wrote:
 On Tuesday, 4 August 2015 at 20:47:00 UTC, Jonathan M Davis 
 wrote:
 [...]
What do you dislike about C++ coverage tooling in comparison with D's?
To get code coverage in C++, I'd have to go track down a tool to do it. There is none which is used as part of our normal build process at work. As it is, we only have unit tests because I went and added what was needed to write them and have been writing them. No one else has been writing them, and if I want any kind of code coverage stuff set up, I'd have to go spend the time to figure it out. With D, it's all built-in, and I don't have to figure out which tools to use or write any of them myself - either for unit testing or code coverage. They're just there and ready to go. - Jonathan M Davis
This is nonsense; what major C++ compiler doesn't provide code coverage? I feel like 99% of C++ vs D arguments on this forum are comparing C++98 to D.
Aug 04 2015
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 August 2015 at 04:10:22 UTC, Jonathan M Davis 
wrote:
 To get code coverage in C++, I'd have to go track down a tool 
 to do it. There is none which is used as part of our normal 
 build process at work. As it is, we only have unit tests 
 because I went and added what was needed to write them and have 
 been writing them.
I also tend to use the features that are available directly via compiler switches more than external programs. I tend to look at "--help" first. Maybe one should also list the programs that are distributed with the compiler in the compiler's "--help" listing.
Aug 04 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-07-25 09:08, Jonathan M Davis wrote:

 I do think that our built-in unit testing facilities are a huge win for
 us though. It actually seems kind of silly at this point that most other
 languages don't have something similar given how critical they are to
 high quality, maintainable code.
Most modern languages are capable of implementing something similar or better purely in library code.

-- 
/Jacob Carlborg
Jul 28 2015
prev sibling parent reply Artur Skawina via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 07/24/15 23:32, Jonathan M Davis via Digitalmars-d wrote:
 On Friday, 24 July 2015 at 20:57:34 UTC, Artur Skawina wrote:
 The difference is that right now the developer has to write a unit-test per
function that uses `hasPrefix`, otherwise the code might not even be verified
to compile. 100% unit-test coverage is not going to happen in practice, and
just like with docs, making things easier and reducing boilerplate to a minimum
would improve the situation dramatically.
But you see. This is exactly wrong attitude. Why on earth should we make life easier for folks who don't bother to get 100% unit test coverage?
How exactly does making it harder to write tests translate into having better coverage? Why is requiring the programmer to write unnecessary, redundant, and potentially buggy tests preferable? artur
Jul 24 2015
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 24 July 2015 at 22:09:24 UTC, Artur Skawina wrote:
 On 07/24/15 23:32, Jonathan M Davis via Digitalmars-d wrote:
 On Friday, 24 July 2015 at 20:57:34 UTC, Artur Skawina wrote:
 The difference is that right now the developer has to write a 
 unit-test per function that uses `hasPrefix`, otherwise the 
 code might not even be verified to compile. 100% unit-test 
 coverage is not going to happen in practice, and just like 
 with docs, making things easier and reducing boilerplate to a 
 minimum would improve the situation dramatically.
But you see. This is exactly wrong attitude. Why on earth should we make life easier for folks who don't bother to get 100% unit test coverage?
How exactly does making it harder to write tests translate into having better coverage? Why is requiring the programmer to write unnecessary, redundant, and potentially buggy tests preferable?
And how are we making it harder to write tests? We're merely saying that you have to actually instantiate your template and test those instantiations. If someone doesn't catch a bug in their template because they didn't try the various combinations of stuff that it supports (and potentially verify that it doesn't compile with stuff that it's not supposed to support), then they didn't test it enough. Having the compiler tell you that you're using a function that you didn't require in your template constraint might be nice, but if the programmer didn't catch that anyway, then they didn't test enough. And if you don't test enough, you're bound to have other bugs. So, the folks this helps are the folks that aren't testing their code sufficiently and thus likely have buggy code anyway.

- Jonathan M Davis
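For example (a made-up template, not from the thread), "testing the instantiations" just means exercising the template with each kind of type it claims to accept, and checking that unsupported types are rejected:

    import std.range.primitives : isInputRange, ElementType;

    // Sums any input range of integers.
    auto sumOf(R)(R r) if (isInputRange!R && is(ElementType!R : long))
    {
        long total = 0;
        foreach (e; r) total += e;
        return total;
    }

    unittest
    {
        import std.algorithm : filter;

        assert(sumOf([1, 2, 3]) == 6);                        // plain array
        assert(sumOf([1, 2, 3].filter!(x => x > 1)) == 5);    // lazy range
        static assert(!__traits(compiles, sumOf(["a", "b"]))); // rejected type
    }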
Jul 24 2015
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/15 6:09 PM, Artur Skawina via Digitalmars-d wrote:
 On 07/24/15 23:32, Jonathan M Davis via Digitalmars-d wrote:
 On Friday, 24 July 2015 at 20:57:34 UTC, Artur Skawina wrote:
 The difference is that right now the developer has to write a unit-test per
function that uses `hasPrefix`, otherwise the code might not even be verified
to compile. 100% unit-test coverage is not going to happen in practice, and
just like with docs, making things easier and reducing boilerplate to a minimum
would improve the situation dramatically.
But you see. This is exactly wrong attitude. Why on earth should we make life easier for folks who don't bother to get 100% unit test coverage?
How exactly does making it harder to write tests translate into having better coverage? Why is requiring the programmer to write unnecessary, redundant, and potentially buggy tests preferable?
False choice. -- Andrei
Jul 25 2015
prev sibling next sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Friday, 24 July 2015 at 04:42:59 UTC, Walter Bright wrote:
 Consider the following:

     int foo(T: hasPrefix)(T t) {
        t.prefix();    // ok
        bar(t);        // error, hasColor was not specified for T
     }

     void bar(T: hasColor)(T t) {
        t.color();
     }

 Now consider a deeply nested chain of function calls like this. 
 At the bottom, one adds a call to 'color', and now every 
 function in the chain has to add 'hasColor' even though it has 
 nothing to do with the logic in that function. This is the pit 
 that Exception Specifications fell into.
I'm a little confused here. I seem to be of the belief that D's interfaces can accomplish virtually the same thing as Rust's traits. In your example, if the type you pass to foo also inherits from hasColor, then it shouldn't be a problem.

I fleshed out what you said a bit more with respect to D's interfaces, adding another part to the chain as well. Obviously baz in my example can't be called with objects of classes A and B because they don't inherit from hasAlt. Isn't this the behavior you would want? Another alternative is to have hasAlt inherit from hasColor and hasPrefix.

    import std.stdio : writeln;

    interface hasColor {
        final void color() { writeln("calling color"); }
    }

    interface hasPrefix {
        final void prefix() { writeln("calling prefix"); }
    }

    interface hasAlt {
        final void alt() { writeln("calling alt"); }
    }

    class A : hasColor { }
    class B : A, hasPrefix { }
    class C : B, hasAlt { }

    void foo(T: hasColor)(T t) {
        t.color();
    }

    void bar(T: hasPrefix)(T t) {
        t.prefix();
        foo(t);
    }

    void baz(T: hasAlt)(T t) {
        t.alt();
        bar(t);
    }

    void main() {
        auto a = new A;
        foo(a);

        auto b = new B;
        foo(b);
        bar(b);

        auto c = new C;
        foo(c);
        bar(c);
        baz(c);
    }
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 7:50 AM, jmh530 wrote:
 I'm a little confused here. I seem to be of the belief that D's interfaces can
 accomplish virtually the same thing as Rust's traits. In your example, if the
 type you pass to foo also inherits from hasColor, then it shouldn't be a
problem.
As I replied earlier, "It's a good question. And the answer is, the top level function does not list every interface used by the call tree. Nested function calls test at runtime if a particular interface is supported by an object, using dynamic casting or QueryInterface() calls. It's fundamentally different from traits and concepts."
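In D, that run-time check looks roughly like the following sketch (reusing the hasColor interface from the example above):

    void useColorIfAvailable(Object obj)
    {
        // The cast yields null when obj's class doesn't implement hasColor.
        if (auto colored = cast(hasColor) obj)
            colored.color();
        // otherwise carry on without the optional capability
    }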
Jul 24 2015
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/24/2015 08:30 PM, Walter Bright wrote:
 On 7/24/2015 7:50 AM, jmh530 wrote:
 I'm a little confused here. I seem to be of the belief that D's
 interfaces can
 accomplish virtually the same thing as Rust's traits. In your example,
 if the
 type you pass to foo also inherits from hasColor, then it shouldn't be
 a problem.
As I replied earlier, "It's a good question. And the answer is, the top level function does not list every interface used by the call tree. Nested function calls test at runtime if a particular interface is supported by an object, using dynamic casting or QueryInterface() calls. It's fundamentally different from traits and concepts."
This kind of testing for conformance can easily be allowed for traits/concepts at compile time. (E.g. by default, or add a special Meta trait.)
Jul 24 2015
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-07-24 06:43, Walter Bright wrote:

 Consider the following:

      int foo(T: hasPrefix)(T t) {
         t.prefix();    // ok
         bar(t);        // error, hasColor was not specified for T
      }

      void bar(T: hasColor)(T t) {
         t.color();
      }

 Now consider a deeply nested chain of function calls like this. At the
 bottom, one adds a call to 'color', and now every function in the chain
 has to add 'hasColor' even though it has nothing to do with the logic in
 that function. This is the pit that Exception Specifications fell into.
I don't see the difference compared to a regular parameter. If you don't specify any constraints/traits/whatever, it's like using "Object" for all your parameter types in Java.

-- 
/Jacob Carlborg
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 11:42 AM, Jacob Carlborg wrote:
 I don't see the difference compared to a regular parameter. If you don't
specify
 any constraints/traits/whatever it like using "Object" for all your parameter
 types in Java.
So constraints then will be an all-or-nothing proposition? I believe that would make them essentially useless.

I suspect I am not getting across the essential point. If I have a call tree, and at the bottom I add a call to interface X, then I have to add a constraint that additionally specifies X on each function up the call tree to the root. That is antithetical to writing generic code, and will prove to be more of a nuisance than an asset. Exactly what sunk Exception Specifications.
Jul 24 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 24 July 2015 at 19:10:33 UTC, Walter Bright wrote:
 On 7/24/2015 11:42 AM, Jacob Carlborg wrote:
 I don't see the difference compared to a regular parameter. If 
 you don't specify
 any constraints/traits/whatever it like using "Object" for all 
 your parameter
 types in Java.
So constraints then will be an all-or-nothing proposition? I believe that would make them essentially useless. I suspect I am not getting across the essential point. If I have a call tree, and at the bottom I add a call to interface X, then I have to add a constraint that additionally specifies X on each function up the call tree to the root. That is antithetical to writing generic code, and will prove to be more of a nuisance than an asset. Exactly what sunk Exception Specifications.
But that's exactly how normal interfaces work... e.g.:

    interface Iface{ void foo(){} }

    void func1(Iface x){ func2(x); }
    void func2(Iface x){ func3(x); }
    void func3(Iface x){ x.bar(); } // ERROR no bar in Iface

Only options here are A: update Iface to have bar(), or B: make a new interface and change it on the whole tree. The same "problem" would exist for the concepts, but it's the reason why people want it.
Jul 24 2015
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 24 July 2015 at 21:27:09 UTC, Tofu Ninja wrote:
 On Friday, 24 July 2015 at 19:10:33 UTC, Walter Bright wrote:
 On 7/24/2015 11:42 AM, Jacob Carlborg wrote:
 I don't see the difference compared to a regular parameter. 
 If you don't specify
 any constraints/traits/whatever it like using "Object" for 
 all your parameter
 types in Java.
So constraints then will be an all-or-nothing proposition? I believe that would make them essentially useless. I suspect I am not getting across the essential point. If I have a call tree, and at the bottom I add a call to interface X, then I have to add a constraint that additionally specifies X on each function up the call tree to the root. That is antiethical to writing generic code, and will prove to be more of a nuisance than an asset. Exactly what sunk Exception Specifications.
But that's exactly how normal interfaces work... e.g.:

interface Iface { void foo(); }

void func1(Iface x) { func2(x); }
void func2(Iface x) { func3(x); }
void func3(Iface x) { x.bar(); } // ERROR no bar in Iface

Only options here are A: update Iface to have bar() or B: make a new interface and change it on the whole tree. The same "problem" would exist for the concepts, but it's the reason why people want it.
C: do a runtime check or downcast.
Jul 24 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 2:27 PM, Tofu Ninja wrote:
 But that's exactly how normal interfaces work...
No it isn't. Google QueryInterface(). Nobody lists all the interfaces at the top level functions, which is what Rust traits and C++ concepts require.
 eg:
 interface Iface { void foo(); }

 void func1(Iface x){ func2(x); }
 void func2(Iface x){ func3(x); }
 void func3(Iface x){ x.bar(); } // ERROR no bar in Iface

 Only options here are A: update Iface to have bar() or B: make a new interface
 and change it on the whole tree. The same "problem" would exist for the
 concepts, but it's the reason why people want it.
Sigh. Nothing I post here is understood.
Jul 24 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 25 July 2015 at 00:49:38 UTC, Walter Bright wrote:
 On 7/24/2015 2:27 PM, Tofu Ninja wrote:
 No it isn't. Google QueryInterface(). Nobody lists all the 
 interfaces at the top level functions, which is what Rust 
 traits and C++ concepts require.
The only time you don't use the right interface for your needs is if you plan on casting somewhere down the line. But certainly there are people who don't do that; I for one feel it's bad practice to need to use casts to circumvent the type system like that.
Jul 24 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 01:22:15 UTC, Tofu Ninja wrote:
 On Saturday, 25 July 2015 at 00:49:38 UTC, Walter Bright wrote:
 On 7/24/2015 2:27 PM, Tofu Ninja wrote:
 No it isn't. Google QueryInterface(). Nobody lists all the 
 interfaces at the top level functions, which is what Rust 
 traits and C++ concepts require.
The only time you don't use the right interface for your needs is if you plan on casting somewhere down the line. But certainly there are people who don't do that; I for one feel it's bad practice to need to use casts to circumvent the type system like that.
I confess that I've always thought that QueryInterface was a _horrible_ idea, and that if you need to cast your type to something else like that, you're doing something wrong. *shudder* I really have nothing good to say about COM actually... - Jonathan M Davis
Jul 24 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 7:28 PM, Jonathan M Davis wrote:
 I confess that I've always thought that QueryInterface was a _horrible_ idea,
Specifying every interface that a type must support at the top of the hierarchy is worse. Once again, Exception Specifications. I suspect that 3 or 4 years after concepts and traits go into wide use, there's going to be a quiet backlash against them. Where, once again, they'll be implementing D's semantics. Heck, C++17 could just as well be renamed C++D :-) given all the D semantics they're quietly adopting.
Jul 24 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 25 July 2015 at 03:11:59 UTC, Walter Bright wrote:
 On 7/24/2015 7:28 PM, Jonathan M Davis wrote:
 I confess that I've always thought that QueryInterface was a 
 _horrible_ idea,
Specifying every interface that a type must support at the top of the hierarchy is worse. Once again, Exception Specifications.
But that is what everyone does... you are really making me "wut" right now? I never use casts and everything works out fine. And generally that is what is considered good OOP practice. It's not like there is a list at the top of the tree of all the interfaces; there is just a type at the top of the tree that implements them all. It's not like Exception Specifications.
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 8:43 PM, Tofu Ninja wrote:
 there is just a type at the top of the tree that
 implements them all
The one type that encompasses everything defeats the whole point of type checking, traits, concepts, etc.
Jul 24 2015
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-07-25 07:25, Walter Bright wrote:

 The one type that encompasses everything defeats the whole point of type
 checking, traits, concepts, etc.
Most methods only operate on a very specific set of data. -- /Jacob Carlborg
Jul 25 2015
prev sibling parent =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 On 7/24/2015 8:43 PM, Tofu Ninja wrote:
 there is just a type at the top of the tree that
 implements them all
The one type that encompasses everything defeats the whole point of type checking, traits, concepts, etc.
That's exactly my feeling with dynamic casts / QueryInterface... Pass 'object' everywhere and then cast it to the desired interface on demand. Tobi
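For concreteness, a minimal D sketch of that pattern; the names (HasColor, Widget, describe) are invented for illustration. It compiles, but the capability check only happens at run time:

import std.stdio;

interface HasColor { string color(); }

class Widget : HasColor
{
    string color() { return "red"; }
}

// QueryInterface-style usage: take the broad base type and probe for the
// capability at run time instead of stating it in the signature.
void describe(Object o)
{
    if (auto c = cast(HasColor) o)     // dynamic downcast, null if unsupported
        writeln("color: ", c.color());
    else
        writeln("no color available"); // the mismatch only shows up at run time
}

void main()
{
    describe(new Widget);   // prints "color: red"
    describe(new Object);   // prints "no color available"
}

Nothing in describe's signature says it needs color(), so the compiler cannot reject a caller that passes the wrong thing.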
Jul 25 2015
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 03:11:59 UTC, Walter Bright wrote:
 On 7/24/2015 7:28 PM, Jonathan M Davis wrote:
 I confess that I've always thought that QueryInterface was a 
 _horrible_ idea,
Specifying every interface that a type must support at the top of the hierarchy is worse. Once again, Exception Specifications.
Well, in most code, I would think that you should be getting the actual object with its full type and converting that to an interface to use for whatever you're using rather than trying to convert something that's one interface into another. It usually doesn't even make sense to attempt it. So, the only place that has all of the functions is the actual class which _has_ to have every function that it implements.

I certainly wouldn't argue for trying to combine the interfaces themselves in most cases, since they're usually distinct, and combining them wouldn't make sense. But similarly, it doesn't usually make sense to convert one interface to a totally distinct one and have any expectation that that conversion is going to work, because they're distinct. Most code I've dealt with that is at all clean doesn't cast from a base type to a derived type except in rare circumstances, and converting across the interface hierarchy never happens.

I've never seen QueryInterface used in a way that I wouldn't have considered messy or simply an annoyance that you have to deal with because you're dealing with COM and can't use pure C++ and just implicitly cast from the derived type to the interface/abstract type that a particular section of code is using. But maybe I'm just not looking at the problem the right way. I don't know.
 I suspect that 3 or 4 years after concepts and traits go into 
 wide use, there's going to be a quiet backlash against them. 
 Where, once again, they'll be implementing D's semantics. Heck, 
 C++17 could just as well be renamed C++D :-) given all the D 
 semantics they're quietly adopting.
Well, even if concepts _were_ where it was at, at least D basically lets you implement them or do something far more lax or ad hoc, because template constraints and static if give you a _lot_ of flexibility. We're not tied down in how we go about writing template constraints or even in using the function level to separate out functionality, because we can do that internally with static if where appropriate. So, essentially, we're in a great place regardless.

On the other hand, if we had built template constraints around concepts or interfaces or anything like that, then our hands would be tied. By making them accept any boolean expression that's known at compile time, we have maximum flexibility. What we do with them then becomes a matter of best practices. The downside is that it's sometimes hard to decipher why a template constraint is failing, and the compiler is less able to help us with stuff like that, since it's not rigid like it would be with a concept supported directly by the language, but the sheer simplicity and flexibility of it makes it a major win anyway IMHO.

- Jonathan M Davis
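As a small illustration of that flexibility (a sketch only; the function process and the particular checks are made up for the example), the constraint is just a boolean compile-time expression, and static if branches inside one body:

import std.range.primitives;
import std.stdio;

// The constraint is an ordinary boolean expression evaluated at compile time.
void process(R)(R r)
    if (isInputRange!R)
{
    // static if lets one body adapt to extra capabilities instead of
    // needing a separate overload per "concept".
    static if (isRandomAccessRange!R && hasLength!R)
        writeln("random access, last element: ", r[r.length - 1]);
    else
        writeln("plain input range, first element: ", r.front);
}

void main()
{
    process([1, 2, 3]);   // arrays take the random-access branch
    // process(42);       // would be rejected at the call site by the constraint
}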
Jul 24 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 25 July 2015 at 05:29:39 UTC, Jonathan M Davis wrote:
 Well, even if concepts _were_ where it was at, at least D 
 basically lets you implement them or do something far more lax 
 or ad hoc, because template constraints and static if give you 
 a _lot_ flexibility. We're not tied down in how we go about 
 writing template constraints or even in using the function 
 level to separate out functionality, because we can do that 
 internally with static if where appropriate. So, essentially, 
 we're in a great place regardless.
The point of having a type system is to catch as many mistakes at compile time as possible. The primary purpose of a type system is to reduce flexibility.
Jul 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 12:19 AM, Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 The point of having a type system is to catch as many mistakes at compile time
 as possible. The primary purpose of a type system is to reduce flexibility.
Again, the D constraint system *is* a compile time system, and if the template body uses an interface not present in the type and not checked for in the constraint, you will *still* get a compile time error. The idea that Rust traits check at compile time and D does not is a total misunderstanding. BTW, you might want to remove the UTF-8 characters from your user name. Evidently, NNTP doesn't do well with them.
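A minimal sketch of what that means in practice, with invented names (hasColor, paintChecked, paintUnchecked); both failure modes are compile-time errors, they just get reported in different places:

struct Plain { }                                    // has no color()
struct Shiny { string color() { return "gold"; } }

enum hasColor(T) = __traits(compiles, T.init.color());

// Capability named in the constraint: a bad argument is rejected at the call site.
void paintChecked(T)(T t)
    if (hasColor!T)
{
    auto c = t.color();
}

// Capability not named: still a compile-time error, but it is reported
// from inside the template body instead.
void paintUnchecked(T)(T t)
{
    auto c = t.color();
}

void main()
{
    paintChecked(Shiny());      // fine
    paintUnchecked(Shiny());    // fine
    // paintChecked(Plain());   // error at the call site
    // paintUnchecked(Plain()); // error inside the body, at instantiation
}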
Jul 25 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 if the template body uses an interface not present in the type 
 and not checked for in the constraint, you will *still* get a 
 compile time error.
But only if the template gets instantiated with a bad type. Unit tests don't catch everything and have to be written properly. A proper type system should catch it.
Jul 25 2015
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/15 6:05 AM, Tofu Ninja wrote:
 On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 if the template body uses an interface not present in the type and not
 checked for in the constraint, you will *still* get a compile time error.
But only if the template gets instantiated with a bad type. Unit tests don't catch everything and have to be written properly. A proper type system should catch it.
I disagree. -- Andrei
Jul 25 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 10:05:35 UTC, Tofu Ninja wrote:
 On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 if the template body uses an interface not present in the type 
 and not checked for in the constraint, you will *still* get a 
 compile time error.
But only if the template gets instantiated with a bad type. Unit tests don't catch everything and have to be written properly. A proper type system should catch it.
This unittest argument is becoming ridiculous. Unless some strong argument is brought to the table that this differs from the "dynamic typing is not a problem if you write unittests" which we all should know is bogus at this point, it can't be taken seriously.
Jul 25 2015
next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 25 July 2015 at 22:54:07 UTC, deadalnix wrote:
 This unittest argument is becoming ridiculous. Unless some 
 strong argument is brought to the table that this differs from 
 the "dynamic typing is not a problem if you write unittests" 
 which we all should know is bogus at this point, it can't be taken 
 seriously.
+1000
Jul 25 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/15 6:54 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 10:05:35 UTC, Tofu Ninja wrote:
 On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 if the template body uses an interface not present in the type and
 not checked for in the constraint, you will *still* get a compile
 time error.
But only if the template gets instantiated with a bad type. Unit tests don't catch everything and have to be written properly. A proper type system should catch it.
This unittest argument is becoming ridiculous. Unless some strong argument is brought to the table that this differs from the "dynamic typing is not a problem if you write unittests" which we all should know is bogus at this point, it can't be taken seriously.
To me that's self understood. Run time is fundamentally different from everything preceding it. -- Andrei
Jul 26 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/26/2015 8:20 AM, Andrei Alexandrescu wrote:
 On 7/25/15 6:54 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 10:05:35 UTC, Tofu Ninja wrote:
 On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 if the template body uses an interface not present in the type and
 not checked for in the constraint, you will *still* get a compile
 time error.
But only if the template gets instantiated with a bad type. Unit tests don't catch everything and have to be written properly. A proper type system should catch it.
This unittest argument is becoming ridiculous. Unless some strong argument is brought to the table that this differs from the "dynamic typing is not a problem if you write unittests" which we all should know is bogus at this point, it can't be taken seriously.
To me that's self understood. Run time is fundamentally different from everything preceding it. -- Andrei
Unit tests are also not exclusively about runtime. Using a unit test to instantiate a template is a compile time test.
Jul 26 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 26 July 2015 at 19:54:28 UTC, Walter Bright wrote:
 Unit tests are also not exclusively about runtime. Using a unit 
 test to instantiate a template is a compile time test.
Yes, it tests the instantiation to some extent. It tests that the instantiation works granted you pass it what is expected. It does not test that the instantiation will fail if you pass it anything else, or worse, do random unexpected stuff.

The problem is the exact same as for dynamic typing and unittests. A dynamically typed function that expects a string can be tested exhaustively with various strings passed as arguments. Still, no one knows what happens when passed an int, float, array, object or whatever. Worse, it is either going to blow up in some unexpected way or not explode and do random stuff.

The same way, instantiate your template with something it doesn't expect and you get absurdly complex errors (and it is getting worse as Phobos gets more and more "enterprise" with Foo forwarding to FooImpl and 25 different helpers). The problem is the same: will it fail at call site/instantiation site with "I expected X, you gave me Y", or will it fail randomly somewhere down the road in some unrelated part of the code, or worse, not fail at all when it should have?
Jul 26 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/26/2015 3:53 PM, deadalnix wrote:
 On Sunday, 26 July 2015 at 19:54:28 UTC, Walter Bright wrote:
 Unit tests are also not exclusively about runtime. Using a unit test to
 instantiate a template is a compile time test.
Yes, it tests the instantiation to some extent. It tests that the instantiation works granted you pass it what is expected. It does not test that the instantiation will fail if you pass it anything else, or worse, do random unexpected stuff.
If the template constraint is 'isInputRange', and you pass it an 'InputRange' that is nothing beyond an input range, and it compiles, it is JUST AS GOOD as Rust traits, without needing to add 'isInputRange' to every template up the call tree. In fact, I do just this in my D dev projects. I have a set of mock ranges that match each of the range types.
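A sketch of that practice, using invented names (MockInputRange, total); the mock offers only the input-range primitives, and the unittest instantiation pushes the whole template body through the compiler:

import std.range.primitives;

// A mock that models *only* the input-range API: empty, front, popFront.
struct MockInputRange
{
    int[] data;
    @property bool empty() const { return data.length == 0; }
    @property int front() const { return data[0]; }
    void popFront() { data = data[1 .. $]; }
}

static assert(isInputRange!MockInputRange);

// Example algorithm constrained on the input-range concept only; anything
// stronger used in the body would not compile against the mock below.
int total(R)(R r)
    if (isInputRange!R)
{
    int sum = 0;
    for (; !r.empty; r.popFront())
        sum += r.front;
    return sum;
}

// Instantiating with the mock forces the body through semantic analysis.
unittest
{
    assert(total(MockInputRange([1, 2, 3])) == 6);
}

If the body of total reached for anything the mock does not provide, this unittest would fail to compile, which is the compile-time check being described.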
 The problem is the exact same as for dynamic typing and unittests. A dynamically
 typed function that expects a string can be tested exhaustively with various strings
 passed as arguments. Still, no one knows what happens when passed an int, float,
 array, object or whatever. Worse, it is either going to blow up in some
 unexpected way or not explode and do random stuff.
Flatly no, it is not at all the same. Dynamic typing systems do not have constraints. Furthermore, dynamically typed languages tend to do random s**t when presented with the wrong type rather than fail. (Such as concatenating strings when the code was intended to do an arithmetic add.) D does not; it presents the user with a compilation error.
 The same way, instantiate your template with something it doesn't expect and
you
 get absurdly complex errors (and it is getting worse as Phobos gets more and
more
 "enterprise" with Foo forwarding to FooImpl and 25 different helpers).
This is incorrect. In my D project development, when I send the wrong thing, I get a list of the template instantiation stack. The bottom gives the method not found, and the stack gives how it got there. I find it adequate in that it doesn't take much effort to figure out where things went wrong. BTW, when you discover that a constraint is wrong on a Phobos template, please file a bug report on it.
 The problem is the same, will it fail at call site/instanciation site with "I
 expected X you gave me Y" or will it fail randomly somewhere down the road in
 some unrelated part of the code, or worse, not fail at all when it should have
?
If the constraint is InputRange, and the body assumes ForwardRange, and I pass it a ForwardRange, and it works, how is that 'worse'?
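For illustration, a sketch of that exact situation (countTwice is an invented name): the constraint only asks for an input range, the body happens to call save, and a forward range such as an array goes straight through:

import std.range.primitives;

// The constraint only promises an input range, but the body quietly calls
// save(), a forward-range primitive.
size_t countTwice(R)(R r)
    if (isInputRange!R)
{
    size_t n = 0;
    for (auto copy = r.save; !copy.empty; copy.popFront())
        ++n;
    for (; !r.empty; r.popFront())
        ++n;
    return n;
}

unittest
{
    int[] a = [1, 2, 3];
    static assert(isForwardRange!(int[]));
    assert(countTwice(a) == 6);   // arrays are forward ranges, so this works
    // A range with no save() would pass the constraint but be rejected,
    // at compile time, inside the body.
}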
Jul 26 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 26 July 2015 at 23:21:50 UTC, Walter Bright wrote:
 If the template constraint is 'isInputRange', and you pass it 
 an 'InputRange' that is nothing beyond an input range, and it 
 compiles, it is JUST AS GOOD as Rust traits, without needing to 
 add 'isInputRange' to every template up the call tree.

 In fact, I do just this in my D dev projects. I have a set of 
 mock ranges that match each of the range types.
That is not enough; it has been covered. It is clear that this is what needs to be done for good testing. It is also clear and explained in this thread that this is not enough.
 The problem is the exact same as for dynamic typing and 
 unittests. A dynamically
 typed function that expects a string can be tested exhaustively 
 with various strings
 passed as arguments. Still, no one knows what happens when passed 
 an int, float,
 array, object or whatever. Worse, it is either going to blow 
 up in some
 unexpected way or not explode and do random stuff.
Flatly no, it is not at all the same. Dynamic typing systems do not have constraints. Furthermore, dynamically typed languages tend to do random s**t when presented with the wrong type rather than fail. (Such as concatenating strings when the code was intended to do an arithmetic add.) D does not; it presents the user with a compilation error.
That is blatantly false and shows more ignorance than anything else. For the precise example, concatenating with + is known to be bad in dynamic typing for this very reason. Even PHP got around that trap, and pretty much only JavaScript went down that road. You'll find no one defending this outside the Node.js crowd, but once you start thinking that a single-threaded event loop is the definition of scalable, you are already lost for science anyway.

But more generally there is a point. Dynamic languages tend to do random shit rather than failing. Hopefully, some won't. For instance Python will fail hard rather than do random stuff when passed the wrong type. If that was the problem faced, then using a type system would be pointless. Using something like Python is sufficient.
 The same way, instantiate your template with something it 
 doesn't expect and you
 get absurdly complex errors (and it is getting worse as Phobos 
 gets more and more
 "enterprise" with Foo forwarding to FooImpl and 25 different 
 helpers).
This is incorrect. In my D project development, when I send the wrong thing, I get a list of the template instantiation stack.
Which is often 3 pages long.
 The bottom gives the method not found, and the stack gives how 
 it got there. I find it adequate in that it doesn't take much 
 effort to figure out where things went wrong.
That is learned helplessness. This is bad and we should feel bad to propose such a user experience. There are reasons why templates are frowned upon by many devs, and this is one of them. I even heard Andrei suggest at a conf to pipe the output of the compiler to head (he was talking about C++, but D suffers from the same problem in that regard). Yes it works, and yes you can eventually make sense of the error, but that is a terrible user experience.
 BTW, when you discover that a constraint is wrong on a Phobos 
 template, please file a bug report on it.
It is not just Phobos, but everybody's code. Why does all this need to be done manually when the compiler could do it for us? Isn't that why we use computers in the first place? If I follow your post, one has to maintain a set of minimal mocks to test instantiations, a constraint for the template, and a body, all three of which need to be kept in sync over time while nothing checks that they are. That is completely unmaintainable.
 The problem is the same, will it fail at call 
 site/instantiation site with "I
 expected X you gave me Y" or will it fail randomly somewhere 
 down the road in
 some unrelated part of the code, or worse, not fail at all 
 when it should have ?
If the constraint is InputRange, and the body assumes ForwardRange, and I pass it a ForwardRange, and it works, how is that 'worse'?
Maybe now my complexity is quadratic? Maybe it doesn't do what it is supposed to do anymore, or who knows what else?
Jul 27 2015
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Monday, 27 July 2015 at 19:11:53 UTC, deadalnix wrote:
 That is completely unmaintainable.
I really don't get how the mess of unittests, mock data types, template constraints, and type interfaces that are just convention (ranges are just a convention, they don't exist anywhere) is supposed to somehow be more maintainable than a single system that can serve the function of all of them and forces them all to be in sync 100% of the time. Seriously...
Jul 27 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/27/15 3:35 PM, Tofu Ninja wrote:
 On Monday, 27 July 2015 at 19:11:53 UTC, deadalnix wrote:
 That is completely unmaintainable.
I really don't get how the mess of unittests, mock data types, template constraints, and type interfaces that are just convention (ranges are just a convention, they don't exist anywhere) is supposed to somehow be more maintainable than a single system that can serve the function of all of them and forces them all to be in sync 100% of the time. Seriously...
Clearly adding more language features to D would have a certain payoff. But we should not just go ahead and pull today's hot topic, which really does change very often. As the language's main proponents, we should be more inclined toward using the language that we have creatively for solving interesting tasks, instead of keeping on wishing that just one more feature would make it perfect.

The lure of the eternally-open design stage is very powerful (I basked in its glow a number of times), but it inevitably becomes an adversary to productivity - instead of getting work done, there's always contemplating how the language design could be tweaked to better accommodate the task at hand. The withdrawal is unpleasant, I know. But we must acknowledge that the design of D is done. The large stones and pillars we have in place are there to stay, and D shall not be radically different five years from now. We need to acknowledge that a lot of grass seems greener on the other side (and some really is), but we need to make do with the gardening tools we got.

I'll do my best to limit my participation in emotional debates, and suggest other D luminaries to do the same. We should put strong emphasis on finalizing the definition and implementation of the language we have (those imprecisions and fuzzy corners are really counterproductive), forge forward to do great work with D, and show others how it's done.

Thanks, Andrei
Jul 27 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, 27 July 2015 at 20:49:54 UTC, Andrei Alexandrescu 
wrote:
 I'll do my best to limit my participation in emotional debates, 
 and suggest other D luminaries to do the same.
LOL. That's why I was originally planning to not say anything in this thread... - Jonathan M Davis
Jul 27 2015
parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Monday, 27 July 2015 at 21:54:23 UTC, Jonathan M Davis wrote:
 On Monday, 27 July 2015 at 20:49:54 UTC, Andrei Alexandrescu 
 wrote:
 I'll do my best to limit my participation in emotional 
 debates, and suggest other D luminaries to do the same.
LOL. That's why I was originally planning to not say anything in this thread... - Jonathan M Davis
Your comments were very clear and much appreciated, but I see the point.
Jul 27 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, 27 July 2015 at 22:47:05 UTC, jmh530 wrote:
 On Monday, 27 July 2015 at 21:54:23 UTC, Jonathan M Davis wrote:
 On Monday, 27 July 2015 at 20:49:54 UTC, Andrei Alexandrescu 
 wrote:
 I'll do my best to limit my participation in emotional 
 debates, and suggest other D luminaries to do the same.
LOL. That's why I was originally planning to not say anything in this thread... - Jonathan M Davis
Your comments were very clear and much appreciated, but I see the point.
Well, I ended up commenting, because there were some very important points relevant to what we do with D that needed clarifying. What I wanted to avoid (and mostly did) was arguing over Rust vs D. For instance, I'd hate to lose the ternary operator in favor of the expression if-else blocks that were being suggested, but there's no point in arguing over it, because we're not going to lose the ternary operator, and we're not going to make it so that if-else blocks can be used as expressions. Arguing about it at this point just creates contention. And it's far too easy to come at that sort of discussion from an emotional point of view that D is better because I really like it and am invested in it, and what's being suggested is alien to me or doesn't fit with my aesthetics or whatever. Sometimes what another language has _is_ better, but often, it's a trade-off or even completely subjective, and regardless, it generally isn't going to have any effect on D at this point. Rather, it's just going to make emotions flare.

So, at this point, I'd prefer to generally avoid discussions of D vs any other language. I got into a really nasty argument about ranges vs iterators the other day on reddit, and I just don't want to be doing that sort of thing anymore.

However, what we _do_ stand to learn from is what's working well for other languages (like Rust) in terms of process and the things that they do that don't necessarily have to do with the language itself which help them succeed, particularly since even though we're generally pretty strong on the language front (not perfect, but we definitely have a very strong offering), we tend to have problems with marketing, getting folks to contribute, getting those contributions merged in in a timely manner, getting releases out, etc. We've definitely improved on that front, but it's probably our weakest point, whereas the language itself is pretty fantastic.

But I would like to avoid arguments over which language is better or which feature in which language is better or anything like that, particularly since we're unlikely to add anything to D at this point because of such discussions. Rather, we need to finish what we have and make it solid.

- Jonathan M Davis
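For anyone who hasn't followed the earlier part of the thread, here is a tiny sketch of the difference being referred to; the variable names are invented:

import std.stdio;

void main()
{
    int temperature = 30;

    // D's expression-level conditional: usable directly in an initializer.
    string label = temperature > 25 ? "warm" : "cold";

    // The statement form: an if cannot appear to the right of '=' in D,
    // so the assignment has to be spelled out in both branches.
    string label2;
    if (temperature > 25)
        label2 = "warm";
    else
        label2 = "cold";

    writeln(label, " ", label2);   // warm warm
}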
Jul 27 2015
parent reply "Chris" <wendlec tcd.ie> writes:
On Tuesday, 28 July 2015 at 05:49:40 UTC, Jonathan M Davis wrote:
 On Monday, 27 July 2015 at 22:47:05 UTC, jmh530 wrote:
 On Monday, 27 July 2015 at 21:54:23 UTC, Jonathan M Davis 
 wrote:
 On Monday, 27 July 2015 at 20:49:54 UTC, Andrei Alexandrescu 
 wrote:
 I'll do my best to limit my participation in emotional 
 debates, and suggest other D luminaries to do the same.
LOL. That's why I was originally planning to not say anything in this thread... - Jonathan M Davis
Your comments were very clear and much appreciated, but I see the point.
Well, I ended up commenting, because there were some very important points relevant to what we do with D that needed clarifying. What I wanted to avoid (and mostly did) was arguing over Rust vs D. For instance, I'd hate to lose the ternary operator in favor of the expression if-else blocks that were being suggested, but there's no point in arguing over it, because we're not going to lose the ternary operator, and we're not going to make it so that if-else blocks can be used as expressions. Arguing about it at this point just creates contention. And it's far too easy to come at that sort of discussion from an emotional point of view that D is better because I really like it and am invested in it, and what's being suggested is alien to me or doesn't fit with my aesthetics or whatever. Sometimes what another language has _is_ better, but often, it's a trade-off or even completely subjective, and regardless, it generally isn't going to have any effect on D at this point. Rather, it's just going to make emotions flare. So, at this point, I'd prefer to generally avoid discussions of D vs any other language. I got into a really nasty argument about ranges vs iterators the other day on reddit, and I just don't want to be doing that sort of thing anymore. However, what we _do_ stand to learn from is what's working well for other languages (like Rust) in terms of process and the things that they do that don't necessarily have to do with the language itself which help them succeed, particularly since even though we're generally pretty strong on the language front (not perfect, but we definitely have a very strong offering), we tend to have problems with marketing, getting folks to contribute, getting those contributions merged in in a timely manner, getting releases out, etc. We've definitely improved on that front, but it's probably our weakest point, whereas the language itself is pretty fantastic. But I would like to avoid arguments over which language is better or which feature in which language is better or anything like that, particularly since we're unlikely to add anything to D at this point because of such discussions. Rather, we need to finish what we have and make it solid. - Jonathan M Davis
Very wise. More often than not it is useless and irrelevant to discuss features other languages have. This does not mean that we shouldn't get inspiration from other languages. However, D's reality is different from Rust's or Go's and features don't necessarily translate directly into D (or even make sense in D). Another thing is, as I pointed out earlier, that a lot of "new" features other languages have are not yet tested well enough to be able to say whether or not they are really that good[1]. After all we are still experimenting with features and fixing things here and there after using them in the real world. There is a tendency to think that any feature we don't have absolutely soooo has to be incorporated, else D will never take off. I beg to differ. The question should not be "Why doesn't D have this feature?", but "How do I get the same effect in D?". Often we don't have to introduce a new feature, we merely have to use the tools we have to get the same effect. And we do have a lot of tools. [1] I wonder what kind of bugs will be introduced, when if-else is used as an expression.
Jul 28 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 28 July 2015 at 09:29:28 UTC, Chris wrote:
 [1] I wonder what kind of bugs will be introduced, when if-else 
 is used as an expression.
I believe most Algol-like languages outside the C-family have it...
Jul 29 2015
parent reply "Chris" <wendlec tcd.ie> writes:
On Thursday, 30 July 2015 at 02:30:45 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 28 July 2015 at 09:29:28 UTC, Chris wrote:
 [1] I wonder what kind of bugs will be introduced, when 
 if-else is used as an expression.
I believe most Algol-like languages outside the C-family have it...
So can you tell me what pitfalls there are? Surely people must have come across some nasty bugs related to this.
Jul 30 2015
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07/30/2015 11:25 AM, Chris wrote:
 On Thursday, 30 July 2015 at 02:30:45 UTC, Ola Fosheim Grøstad wrote:
 On Tuesday, 28 July 2015 at 09:29:28 UTC, Chris wrote:
 [1] I wonder what kind of bugs will be introduced, when if-else is
 used as an expression.
I believe most Algol-like languages outside the C-family have it...
So can you tell me what pitfalls there are?
What kind of special pitfall do you envision here?
 Sure people must have come across some nasty bugs related to this.
They are the intersection of nasty bugs involving ?: and nasty bugs involving if/else statements.
Jul 30 2015
parent reply "Chris" <wendlec tcd.ie> writes:
On Thursday, 30 July 2015 at 13:32:29 UTC, Timon Gehr wrote:
 On 07/30/2015 11:25 AM, Chris wrote:
 On Thursday, 30 July 2015 at 02:30:45 UTC, Ola Fosheim Grøstad 
 wrote:
 On Tuesday, 28 July 2015 at 09:29:28 UTC, Chris wrote:
 [1] I wonder what kind of bugs will be introduced, when 
 if-else is
 used as an expression.
I believe most Algol-like languages outside the C-family have it...
So can you tell me what pitfalls there are?
What kind of special pitfall do you envision here?
 Sure people must have come across some nasty bugs related to 
 this.
They are the intersection of nasty bugs involving ?: and nasty bugs involving if/else statements.
My point was that any (new) feature introduces its own problems. Be it "everything is an expression" or built-in "bug prevention" (rigid features). While preventing certain types of bugs, new types may be introduced by features that have been introduced to prevent old bugs. It would be foolish to believe that most bugs will be erased, if only a language is rigid enough. As I said, I'll wait and see what Rust users have to say after a year or two.
Jul 30 2015
parent reply Ziad Hatahet via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, Jul 30, 2015 at 7:19 AM, Chris via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 My point was that any (new) feature introduces its own problems... As I
 said, I'll wait and see what Rust users have to say after a year or two.
Except, as it was pointed out, this is not a new feature. It has been around in many languages way before Rust. You don't have to wait a year or two, check what the experience of users of those languages has been like.
Jul 31 2015
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 31 July 2015 at 09:08:00 UTC, Ziad Hatahet wrote:
 On Thu, Jul 30, 2015 at 7:19 AM, Chris via Digitalmars-d < 
 digitalmars-d puremagic.com> wrote:

 My point was that any (new) feature introduces its own 
 problems... As I said, I'll wait and see what Rust users have 
 to say after a year or two.
Except, as it was pointed out, this is not a new feature. It has been around in many languages way before Rust. You don't have to wait a year or two, check what the experience of users of those languages has been like.
It really doesn't mean much if you're talking about functional languages, because they're fundamentally different from imperative languages in how they're constructed. Almost everything in functional languages is an expression, and it works fine for them, but the way that code is written in those languages is also fundamentally different from how you'd write it in a language closer to C++ and friends than to a functional language, so how it interacts with the rest of the language is going to be quite different. That doesn't mean that it won't work just fine, but it does mean that the fact that it works fine in functional languages doesn't necessarily mean that it'll work well for Rust.

Now, there may be other imperative languages which have something similar - be it that all statements are expressions or that a larger subset of them are - so there may already be one or more languages out there which show that it can work just fine with an imperative language, but AFAIK, all of the languages you listed are either outright functional languages or lean heavily in that direction rather than being imperative. So, I don't think that experiences with those languages necessarily say much about how well it will work for Rust.

- Jonathan M Davis
Jul 31 2015
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/26/15 3:54 PM, Walter Bright wrote:
 On 7/26/2015 8:20 AM, Andrei Alexandrescu wrote:
 On 7/25/15 6:54 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 10:05:35 UTC, Tofu Ninja wrote:
 On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 if the template body uses an interface not present in the type and
 not checked for in the constraint, you will *still* get a compile
 time error.
But only if the template gets instantiated with a bad type. Unit tests don't catch every thing and have to be written properly. A proper type system should catch it.
This unitest argument is becoming ridiculous. Unless some strong argument is brought to the table that this differs from the "dynamic typing is not a problem if you write unitest" we we all should know is bogus at this point, it can't be taken seriously.
To me that's self understood. Run time is fundamentally different from everything preceding it. -- Andrei
Unit tests are also not exclusively about runtime. Using a unit test to instantiate a template is a compile time test.
YES! For templates unittests have a dual role. -- Andrei
Jul 26 2015
prev sibling next sibling parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 2015-07-25 at 02:40 -0700, Walter Bright via Digitalmars-d
wrote:
[…]
 BTW, you might want to remove the UTF-8 characters from your user
 name.
 Evidently, NNTP doesn't do well with them.
Conversely someone should fix the NNTP implementation to deal properly with UTF-8 encoded Unicode codepoints. -- Russel.
Jul 25 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 3:30 AM, Russel Winder via Digitalmars-d wrote:
 Conversely someone should fix the NNTP implementation to deal properly
 with UTF-8 encoded Unicode codepoints.
I propose that you do it!
Jul 25 2015
prev sibling next sibling parent reply "Brendan Zabarauskas" <bjzaba yahoo.com.au> writes:
On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 On 7/25/2015 12:19 AM, Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 The point of having a type system is to catch as many mistakes 
 at compile time
 as possible. The primary purpose of a type system is to reduce 
 flexibility.
Again, the D constraint system *is* a compile time system, and if the template body uses an interface not present in the type and not checked for in the constraint, you will *still* get a compile time error. The idea that Rust traits check at compile time and D does not is a total misunderstanding. BTW, you might want to remove the UTF-8 characters from your user name. Evidently, NNTP doesn't do well with them.
I think the point is that trait-based constraints force compilation errors to be raised at the call site, and not potentially from deep within a template expansion. Template errors are stack traces coming from duck-typed, compile-time programs. Library authors can't rely on the typechecker to pick up on mistakes that may only appear at expansion time in client programs.
Jul 25 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/15 9:17 AM, Brendan Zabarauskas wrote:
 On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 On 7/25/2015 12:19 AM, Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?=
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 The point of having a type system is to catch as many mistakes at
 compile time
 as possible. The primary purpose of a type system is to reduce
 flexibility.
Again, the D constraint system *is* a compile time system, and if the template body uses an interface not present in the type and not checked for in the constraint, you will *still* get a compile time error. The idea that Rust traits check at compile time and D does not is a total misunderstanding. BTW, you might want to remove the UTF-8 characters from your user name. Evidently, NNTP doesn't do well with them.
I think the point is that trait based constraints force compilation errors to be raised at the call site, and not potentially from deep within a template expansion. Template errors are stack traces coming from duck typed, compile time programs. Library authors can't rely on the typechecker to pick up on mistakes that may only appear at expansion time in client programs.
Understood, but by the same token library authors shouldn't ship untested code. This is basic software engineering. Once we agree on that, we figure that concepts help nobody. -- Andrei
Jul 25 2015
next sibling parent =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
Once we agree on that, we figure that concepts help nobody.
You keep saying that, but I cannot find an explanation. Care to elaborate or give me a pointer? Tobi
Jul 25 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 13:59:11 UTC, Andrei Alexandrescu 
wrote:
 Understood, but by the same token library authors shouldn't 
 ship untested code. This is basic software engineering. Once we 
 agree on that, we figure that concepts help nobody. -- Andrei
Understood, but by the same token library authors shouldn't ship untested code. This is basic software engineering. Once we agree on that, we figure that [type system|grizzly|unicorns] help nobody. That is a statement, not an argument.
Jul 25 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/15 7:14 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 13:59:11 UTC, Andrei Alexandrescu wrote:
 Understood, but by the same token library authors shouldn't ship
 untested code. This is basic software engineering. Once we agree on
 that, we figure that concepts help nobody. -- Andrei
Understood, but by the same token library authors shouldn't ship untested code. This is basic software engineering. Once we agree on that, we figure that [type system|grizzly|unicorns] help nobody. That is a statement, not an argument.
Well then don't clip the context. -- Andrei
Jul 26 2015
prev sibling next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 On 7/25/2015 12:19 AM, Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 The point of having a type system is to catch as many mistakes 
 at compile time
 as possible. The primary purpose of a type system is to reduce 
 flexibility.
Again, the D constraint system *is* a compile time system, and if the template body uses an interface not present in the type and not checked for in the constraint, you will *still* get a compile time error.
Well, I am not sure if the flexibility scales up when clever library authors start to write flexible introspective code. It basically requires library authors to be careful and conservative. Code coverage and unit tests cannot replace a robust type system when you get down to composable datastructures due to the combinatorial explosion you get.
 The idea that Rust traits check at compile time and D does not 
 is a total misunderstanding.
I'm not arguing in favour of copying Rust… I don't think becoming more like Rust will buy D more friends. It will just be an argument for picking Rust over D. If I'd argue for something it would be for having a real deductive database at the heart of the templating type system.
 BTW, you might want to remove the UTF-8 characters from your 
 user name. Evidently, NNTP doesn't do well with them.
Hm. It works in the web interface when I reply to my own messages, maybe just a client issue?
Jul 25 2015
next sibling parent =?UTF-8?B?SsOpcsO0bWUgTS4gQmVyZ2Vy?= <jeberger free.fr> writes:
On 07/25/2015 05:03 PM, Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?=
<ola.fosheim.grostad+dlang gmail.com> wrote:
 On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 BTW, you might want to remove the UTF-8 characters from your
 user name. Evidently, NNTP doesn't do well with them.
 Hm. It works in the web interface when I reply to my own
 messages, maybe just a client issue?
I'd say it is a problem with the way the web interface encodes the sender name, and especially the fact that it starts with a double quote. In the message source, it looks like:

From: "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= <ola.fosheim.grostad+dlang gmail.com>

According to RFC 2047 [1]: "An 'encoded-word' MUST NOT appear within a 'quoted-string'." (top of page 7), so this should be written as:

From: Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad+dlang gmail.com>

Jerome

[1] https://tools.ietf.org/html/rfc2047
Jul 25 2015
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 8:03 AM, Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 BTW, you might want to remove the UTF-8 characters from your user name.
 Evidently, NNTP doesn't do well with them.
Hm. It works in the web interface when I reply to my own messages, maybe just a client issue?
Looking at the raw text of your posting, it contains:

From: "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= <ola.fosheim.grostad+dlang gmail.com>

I don't know where that comes from, but it is not coming from my NNTP client (Thunderbird).
Jul 25 2015
prev sibling next sibling parent =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote: 
 BTW, you might want to remove the UTF-8 characters from your user name.
 Evidently, NNTP doesn't do well with them.
I don't think NNTP has any problems with that. My newsreader displays it just fine. Tobi
Jul 25 2015
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 On 7/25/2015 12:19 AM, Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 The point of having a type system is to catch as many mistakes 
 at compile time
 as possible. The primary purpose of a type system is to reduce 
 flexibility.
Again, the D constraint system *is* a compile time system, and if the template body uses an interface not present in the type and not checked for in the constraint, you will *still* get a compile time error. The idea that Rust traits check at compile time and D does not is a total misunderstanding.
Obviously, everything is at compile time here. Still, there are two steps: compiling the template (equivalent to compile time in the dynamic dispatch case), and instantiating the template (equivalent to runtime in the dynamic dispatch case). That is the exact same problem.
Jul 25 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/15 6:48 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 On 7/25/2015 12:19 AM, Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?=
 <ola.fosheim.grostad+dlang gmail.com> wrote:
 The point of having a type system is to catch as many mistakes at
 compile time
 as possible. The primary purpose of a type system is to reduce
 flexibility.
Again, the D constraint system *is* a compile time system, and if the template body uses an interface not present in the type and not checked for in the constraint, you will *still* get a compile time error. The idea that Rust traits check at compile time and D does not is a total misunderstanding.
Obviously, everything is at compile time here. Still, there are two steps: compiling the template (equivalent to compile time in the dynamic dispatch case), and instantiating the template (equivalent to runtime in the dynamic dispatch case). That is the exact same problem.
Probably that's the root of all disagreement. So we have template writing time, template instantiation time, and just run time. I think template instantiation time is a lot "closer" to template writing time than run time. -- Andrei
Jul 26 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 26 July 2015 at 15:19:06 UTC, Andrei Alexandrescu 
wrote:
 Probably that's the root of all disagreement. So we have 
 template writing time, template instantiation time, and just 
 run time. I think template instantiation time is a lot "closer" 
 to template writing time than run time. -- Andrei
It is closer, but it doesn't matter for the argument being made. You have some code that expects its argument to conform to some API. Be it dynamically typed code (which will blow up at runtime, or worse, do random shit) or template code (which will blow up at instantiation time, or worse, do random shit). The problem being fundamentally the same, arguments for or against one go for the other.
Jul 26 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/26/2015 3:44 PM, deadalnix wrote:
 or template code (which will blow up at instantiation time, or worse, do random
 shit).
Um, all Rust traits do is test for a method signature match, so it compiles. It is NOT a defense against a random method that just happens to match and does some other unrelated random shit. For example, the '+' operator. Rust traits sez "gee, there's a + operator, it's good to go. Ship it!" Meanwhile, you thought the function was summing some data, when it actually is creating a giant string, or whatever idiot thing someone decided to use '+' for. Rust still has not obviated the necessity for unit tests, nor is Rust remotely able to guarantee your code doesn't "do random shit" if it compiles.
Jul 26 2015
next sibling parent reply =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 On 7/26/2015 3:44 PM, deadalnix wrote:
 or template code (which will blow up at instantiation time, or worse, do random
 shit).
Um, all Rust traits do is test for a method signature match, so it compiles. It is NOT a defense against a random method that just happens to match and does some other unrelated random shit.
Rust traits have to be implemented *explicitly*. It's not just an implicit test for a matching signature.
 For example, the '+' operator. Rust traits sez "gee, there's a +
 operator, it's good to go. Ship it!" Meanwhile, you thought the function
 was summing some data, when it actually is creating a giant string, or
 whatever idiot thing someone decided to use '+' for.
The + operator is somewhat special because it can only be implemented via a trait. That doesn't apply to normal methods.
 Rust still has not obviated the necessity for unit tests, nor is Rust
 remotely able to guarantee your code doesn't "do random shit" if it compiles.
An example: Rust std lib defines two traits, PartialOrd and Ord. Ord depends on PartialOrd but doesn't provide any new methods. And it's clearly documented when to implement Ord and when PartialOrd. So sure, someone could decide to deliberately ignore that, but then I just don't care anymore. Tobi
Jul 27 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 27 July 2015 at 07:21:36 UTC, Tobias Müller wrote:
 Walter Bright <newshound2 digitalmars.com> wrote:
 On 7/26/2015 3:44 PM, deadalnix wrote:
 or template code (which will blow up at instanciation time, 
 or worse, do random
 shit).
Um, all Rust traits do is test for a method signature match, so it compiles. It is NOT a defense against a random method that just happens to match and does some other unrelated random shit.
Rust traits have to be implemented *explicitly*. It's not just an implicit test for a matching signature.
Explicit is good, but D's problem is that it already has numerous language concepts covering the same type of semantics: classes, interfaces, alias this, template duck-typing, template constraints… C++ only has classes and template SFINAE duck-typing. Everything else is just idioms or library constructs. Adding yet another language-level interface mechanism to D would IMO require language redesign. Which is not a bad idea, but not likely in the near term?
Jul 27 2015
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/27/2015 01:29 AM, Walter Bright wrote:
 On 7/26/2015 3:44 PM, deadalnix wrote:
 or template code (which will blow up at instantiation time, or worse,
 do random
 shit).
Um, all Rust traits do is test for a method signature match, so it compiles. It is NOT a defense against a random method that just happens to match and does some other unrelated random shit.
You are describing Go interfaces, not Rust traits.
Jul 27 2015
prev sibling parent reply "Max Samukha" <maxsamukha gmail.com> writes:
On Sunday, 26 July 2015 at 23:29:18 UTC, Walter Bright wrote:

 For example, the '+' operator. Rust traits sez "gee, there's a 
 + operator, it's good to go. Ship it!" Meanwhile, you thought 
 the function was summing some data, when it actually is 
 creating a giant string, or whatever idiot thing someone 
 decided to use '+' for.
Number addition and string concatenation are monoid operations. In this light, '+' for both makes perfect sense.
Aug 02 2015
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 2 August 2015 at 19:02:22 UTC, Max Samukha wrote:
 On Sunday, 26 July 2015 at 23:29:18 UTC, Walter Bright wrote:

 For example, the '+' operator. Rust traits sez "gee, there's a 
 + operator, it's good to go. Ship it!" Meanwhile, you thought 
 the function was summing some data, when it actually is 
 creating a giant string, or whatever idiot thing someone 
 decided to use '+' for.
Number addition and string concatenation are monoid operations. In this light, '+' for both makes perfect sense.
Well, using + for "adding" strings together does make sense on some level, which is why it's done in so many languages, and I don't think that it causes as much confusion as Walter sometimes seems to think (at least in C/C++-derived languages). That being said, I think that it's definitely an improvement that D has another operator for it. It makes it clearer when concatenation is occurring without having to figure out what types you're dealing with, and it allows user-defined code to have both an addition operator and a concatenation operator on the same type. Where distinguishing between + and ~ would likely make a big difference though is dynamic languages that aren't strict with types and allow nonsense like "5" + 2. And in that case, I expect that Walter is completely right. It's just error-prone to use + for concatenation in cases like that, and providing a separate concatenation operator would reduce bugs. Regardless, I definitely like the fact that we have ~ and ~= instead of reusing + and += for that. It's a small improvement, but it is an improvement. - Jonathan M Davis
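To make that last point concrete, a small sketch with a made-up type (Tag) that overloads both operators:

import std.stdio;

// A made-up type that gives '+' and '~' two different, explicit meanings.
struct Tag
{
    string name;

    // '+' keeps whatever "addition-like" meaning the author chooses...
    Tag opBinary(string op : "+")(Tag rhs) const
    {
        return Tag(name ~ "+" ~ rhs.name);
    }

    // ...while '~' stays the concatenation operator.
    Tag opBinary(string op : "~")(Tag rhs) const
    {
        return Tag(name ~ rhs.name);
    }
}

void main()
{
    writeln("5" ~ "2");   // 52 -- concatenation is spelled out
    writeln(5 + 2);       // 7  -- addition stays arithmetic
    // writeln("5" + 2);  // does not compile in D, unlike in some dynamic languages

    writeln((Tag("a") + Tag("b")).name);   // a+b
    writeln((Tag("a") ~ Tag("b")).name);   // ab
}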
Aug 02 2015
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 2 August 2015 at 21:17:10 UTC, Jonathan M Davis wrote:
 types you're dealing with, and it allows user-defined code to 
 have both an addition operator and a concatenation operator on 
 the same type.
I assume you mean vectors, though I would prefer binary "++" for that.
 Where distinguishing between + and ~ would likely make a big 
 difference though is dynamic languages that aren't strict with 
 types and allow nonsense like "5" + 2. And in that case, I 
 expect that Walter is completely right. It's just error-prone 
 to use + for concatenation in cases like that, and providing a 
 separate concatenation operator would reduce bugs.
I've never run into such bugs, have you? The ambiguous case would be "result:" + 3 + 8, but you can solve this by giving numeric plus higher precedence or by avoiding implicit conversion. Though, these days it is more common to support something like "result: {{3+8}}".
 Regardless, I definitely like the fact that we have ~ and ~= 
 instead of reusing + and += for that. It's a small improvement, 
 but it is an improvement.
It's a weird thing to do for a C-descendant as I would expect "~=" to do binary negation.
Aug 02 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/2/2015 8:17 PM, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:
 It's a weird thing to do for a C-decendant as I would expect "~=" to do binary
 negation.
If you really felt this way, you'd expect the C != operator

    a != b

to be the same as:

    a = !b
Aug 05 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 August 2015 at 06:50:38 UTC, Walter Bright wrote:
 On 8/2/2015 8:17 PM, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:
 It's a weird thing to do for a C-decendant as I would expect
"~=" to do binary
 negation.
If you really felt this way, you'd expect the C != operator a != b to be the same as: a = !b
I don't because "!=" is frequently used and usually in a context where expectations points towards comparison and not assignment. But I would prefer "=", "≠","<" and "≤" for comparison and constants... then have something else for variable assignment.
Aug 06 2015
next sibling parent reply "Idan Arye" <GenericNPC gmail.com> writes:
On Thursday, 6 August 2015 at 11:26:10 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 6 August 2015 at 06:50:38 UTC, Walter Bright wrote:
 On 8/2/2015 8:17 PM, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:
 It's a weird thing to do for a C-decendant as I would expect
"~=" to do binary
 negation.
If you really felt this way, you'd expect the C != operator a != b to be the same as: a = !b
I don't because "!=" is frequently used and usually in a context where expectations points towards comparison and not assignment. But I would prefer "=", "≠","<" and "≤" for comparison and constants... then have something else for variable assignment.
I understand your attempt to auction your old APL keyboard didn't go well?
Aug 06 2015
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 August 2015 at 11:30:45 UTC, Idan Arye wrote:
 I understand your attempt to auction your old APL keyboard 
 didn't go well?
There is no good reason to avoid unicode operators these days. A language without a dedicated editor-mode is pretty much DOA.
Aug 06 2015
parent reply "Kagamin" <spam here.lot> writes:
On Thursday, 6 August 2015 at 11:33:29 UTC, Ola Fosheim Grøstad 
wrote:
 DOA.
http://www.acronymfinder.com/DOA.html (Degenerate Overclockers Anonymous?)
Aug 06 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 August 2015 at 12:33:33 UTC, Kagamin wrote:
 On Thursday, 6 August 2015 at 11:33:29 UTC, Ola Fosheim Grøstad 
 wrote:
 DOA.
http://www.acronymfinder.com/DOA.html (Degenerate Overclockers Anonymous?)
Dead on arrival.
Aug 06 2015
prev sibling parent "burjui" <bytefu gmail.com> writes:
On Thursday, 6 August 2015 at 12:33:33 UTC, Kagamin wrote:
 On Thursday, 6 August 2015 at 11:33:29 UTC, Ola Fosheim Grøstad 
 wrote:
 DOA.
http://www.acronymfinder.com/DOA.html (Degenerate Overclockers Anonymous?)
http://www.urbandictionary.com/define.php?term=DOA Without this great site it would often be hard to understand what people from USA are talking about.
Aug 06 2015
prev sibling next sibling parent reply "rsw0x" <anonymous anonymous.com> writes:
On Thursday, 6 August 2015 at 11:30:45 UTC, Idan Arye wrote:
 On Thursday, 6 August 2015 at 11:26:10 UTC, Ola Fosheim Grøstad 
 wrote:
 On Thursday, 6 August 2015 at 06:50:38 UTC, Walter Bright 
 wrote:
 On 8/2/2015 8:17 PM, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:
 [...]
"~=" to do binary
 [...]
If you really felt this way, you'd expect the C != operator a != b to be the same as: a = !b
I don't because "!=" is frequently used and usually in a context where expectations points towards comparison and not assignment. But I would prefer "=", "≠","<" and "≤" for comparison and constants... then have something else for variable assignment.
I understand your attempt to auction your old APL keyboard didn't go well?
Compose keys have existed for a long time. The aversion to unicode is ridiculous.
Aug 06 2015
parent "Chris" <wendlec tcd.ie> writes:
On Thursday, 6 August 2015 at 12:56:05 UTC, rsw0x wrote:
 On Thursday, 6 August 2015 at 11:30:45 UTC, Idan Arye wrote:
 On Thursday, 6 August 2015 at 11:26:10 UTC, Ola Fosheim 
 Grøstad wrote:
 On Thursday, 6 August 2015 at 06:50:38 UTC, Walter Bright 
 wrote:
 On 8/2/2015 8:17 PM, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:
 [...]
"~=" to do binary
 [...]
If you really felt this way, you'd expect the C != operator a != b to be the same as: a = !b
I don't because "!=" is frequently used and usually in a context where expectations points towards comparison and not assignment. But I would prefer "=", "≠","<" and "≤" for comparison and constants... then have something else for variable assignment.
I understand your attempt to auction your old APL keyboard didn't go well?
Compose keys have existed for a long time. The aversion to unicode is ridiculous.
That's because everything in IT is Anglo-centric (mainly US). To this day we suffer from the fact that nobody in the English speaking world bothered to cater for "special characters"[1], when computers and programming languages emerged as ever more important. [1] the term "special character" tells a lot about the attitude. For French or Portuguese speakers "ç" is not a "special character" nor is "ñ" for Spanish speakers (not to mention other writing systems!).
Aug 06 2015
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/06/2015 01:30 PM, Idan Arye wrote:
 On Thursday, 6 August 2015 at 11:26:10 UTC, Ola Fosheim Grøstad wrote:
 On Thursday, 6 August 2015 at 06:50:38 UTC, Walter Bright wrote:
 On 8/2/2015 8:17 PM, Ola Fosheim Grøstad <ola.fosheim.grostad+dlang gmail.com> wrote:
 It's a weird thing to do for a C-decendant as I would expect
"~=" to do binary
 negation.
If you really felt this way, you'd expect the C != operator a != b to be the same as: a = !b
I don't because "!=" is frequently used and usually in a context where expectations points towards comparison and not assignment. But I would prefer "=", "≠","<" and "≤" for comparison and constants... then have something else for variable assignment.
I understand your attempt to auction your old APL keyboard didn't go well?
I can actually type ≠ and ≤ more quickly than != or <= in my editor.
Aug 06 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, 6 August 2015 at 13:27:34 UTC, Timon Gehr wrote:
 I can actually type ≠ and ≤ more quickly than != or <= in my 
 editor.
Wow. The only way that I'd know how to get those characters would be to copy-paste them from somewhere. I'm sure that there's a far easier way to generate them, but if a symbol isn't actually on my keyboard, I wouldn't have a clue how to type it, and I would have thought that support for typing it would be editor-specific. In general, I would have expected it to be a total disaster if a language used any non-ASCII characters in its syntax. - Jonathan M Davis
Aug 06 2015
next sibling parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 2015-08-06 at 14:14 +0000, Jonathan M Davis via Digitalmars-d
wrote:
 On Thursday, 6 August 2015 at 13:27:34 UTC, Timon Gehr wrote:
 I can actually type ≠ and ≤ more quickly than != or <= in my
 editor.
 Wow. The only way that I'd know how to get those characters would be to copy-paste them from somewhere. I'm sure that there's a far easier way to generate them, but if a symbol isn't actually on my keyboard, I wouldn't have a clue how to type it, and I would have thought that support for typing it would be editor-specific. In general, I would have expected it to be a total disaster if a language used any non-ASCII characters in its syntax.

 - Jonathan M Davis
As rsw0x said previously, compose keys and construction of non-ASCII Unicode codepoints have been around for decades. The fixation on "only ASCII characters" is a hang-over from the 1970s I'm afraid, and now it is 2015 – supposedly.

A neat alternative to the compose key – well actually a strong accompaniment really – is to allow for multiple keyboard bindings. In particular I have Greek set up so I can switch from en (en_UK obviously since that is the only real form of en) to gr very quickly, and then I am typing Greek characters on my UK keyboard. Damn useful for all this LaTeX maths (*) and general "calculating approximations to π" type stuff.

More programming languages should get with the Unicode programme. Obviously not to allow the silly emoticon programs that did the rounds on Swift's release, but exactly to allow for ≠ « » ≤ ≥ ¨ etc., etc., etc. All the sensible stuff that would make reading programs easier.

(*) NB not math, that is an un-English diminutive :-)

-- 
Russel.
Aug 06 2015
next sibling parent reply "Kagamin" <spam here.lot> writes:
On Thursday, 6 August 2015 at 15:09:01 UTC, Russel Winder wrote:
 Damn useful for all this LaTeX maths (*) and general 
 "calculating approximations to π" type stuff.
\pi ? https://en.wikibooks.org/wiki/LaTeX/Mathematics#Greek_letters
Aug 06 2015
parent "rsw0x" <anonymous anonymous.com> writes:
On Thursday, 6 August 2015 at 15:31:25 UTC, Kagamin wrote:
 On Thursday, 6 August 2015 at 15:09:01 UTC, Russel Winder wrote:
 Damn useful for all this LaTeX maths (*) and general 
 "calculating approximations to π" type stuff.
\pi ? https://en.wikibooks.org/wiki/LaTeX/Mathematics#Greek_letters
Using a compose key or alternate layout is much, *much* faster than the LaTeX notation.
Aug 06 2015
prev sibling parent "rsw0x" <anonymous anonymous.com> writes:
On Thursday, 6 August 2015 at 15:09:01 UTC, Russel Winder wrote:
 More programming languages should get with the Unicode 
 programme.
 Obviously not to allow the silly emoticon programs that did the 
 rounds
 on Swift's release, but exactly to allow for ≠ « »  ≤ ≥ ¨ etc., 
 etc.,
 etc. All the sensible stuff that would make reading programs 
 easier.
A unicode DIP would be nice to see. Haskell offers Unicode support and is completely compatible with ASCII. Would probably be simple to create a tool to quickly shift between the two. It definitely does look much better.
Aug 06 2015
prev sibling next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 August 2015 at 14:14:30 UTC, Jonathan M Davis 
wrote:
 Wow. The only way that I'd know how to get those characters 
 would be to copy-paste them from somewhere. I'm sure that 
 there's a far easier way to generate them, but if a symbol 
 isn't actually on my keyboard, I wouldn't have a clue how to 
 type it, and I would have thought that support for typing it 
 would be editor-specific. In general, I would have expected it 
 to be a total disaster if a language used any non-ASCII 
 characters in its syntax.
I think it is a big advantage if the default editor and syntax are developed in tandem; it can enable less cluttered syntax and a better editing experience. Some syntaxes can improve a lot by having sensible colouring/visual layout. On many non-US keyboards the C-language syntax characters are put in annoying positions too.

To get braces "{}" I have to type: alt-shift-8 alt-shift-9
To get "<=" I have to type: < shift-0
To get "≤" I type: alt-<
Aug 06 2015
prev sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Thursday, 6 August 2015 at 14:14:30 UTC, Jonathan M Davis 
wrote:
 In general, I would have expected it to be a total disaster if 
 a language used any non-ASCII characters in its syntax.
I still think it would be...not everyone uses the same editor. I had to look up how to do it in Notepad++. It requires knowing the Unicode key. Not exactly user-friendly. I wouldn't have an issue if the editor had a layout like Lyx.
Aug 06 2015
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08/06/2015 07:13 PM, jmh530 wrote:
 On Thursday, 6 August 2015 at 14:14:30 UTC, Jonathan M Davis wrote:
 In general, I would have expected it to be a total disaster if a
 language used any non-ASCII characters in its syntax.
I still think it would be...not everyone uses the same editor. I had to look up how to do it in Notepad++. It requires knowing the Unicode key. Not exactly user-friendly.
Not a Windows user, but I bet it wouldn't take long to find a better solution.
 I wouldn't have an issue if the editor had a
 layout like Lyx.
Configuration is an O(1) operation. The constant becomes smaller after the first three users of the hypothetical language have set up their environment.
Aug 06 2015
prev sibling parent reply "rsw0x" <anonymous anonymous.com> writes:
On Thursday, 6 August 2015 at 11:26:10 UTC, Ola Fosheim Grøstad 
wrote:
 But I would prefer "=", "≠","<" and "≤" for comparison and 
 constants... then have something else for variable assignment.
:=, or ≔
Aug 06 2015
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/06/2015 04:14 PM, rsw0x wrote:
 On Thursday, 6 August 2015 at 11:26:10 UTC, Ola Fosheim Grøstad wrote:
 But I would prefer "=", "≠","<" and "≤" for comparison and
 constants... then have something else for variable assignment.
:=, or ≔
← or ⇐ are also fine choices.
Aug 06 2015
parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 2015-08-06 at 16:20 +0200, Timon Gehr via Digitalmars-d wrote:
 On 08/06/2015 04:14 PM, rsw0x wrote:
 On Thursday, 6 August 2015 at 11:26:10 UTC, Ola Fosheim Grøstad
 wrote:
 But I would prefer "=", "≠","<" and "≤" for comparison and
 constants... then have something else for variable assignment.
 :=, or ≔
 ← or ⇐ are also fine choices.
And indeed ones taken in Scala, <- can be ←, which looks so much nicer to read.

-- 
Russel.
Aug 06 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 August 2015 at 15:11:59 UTC, Russel Winder wrote:
 On Thu, 2015-08-06 at 16:20 +0200, Timon Gehr via Digitalmars-d 
 wrote:
 ← or ⇐ are also fine choices.
And indeed ones taken in Scala, <- can be ←, which looks so much nicer to read.
Yes, it probably doesn't matter too much which way the arrow points; you can make up mnemonics for both. I.e. are you transferring a reference/value to the symbol, or is the symbol pointing at/pinpointing an object/value? I guess the first one is the more common mnemonic, although for references maybe the latter is more in line with how you draw diagrams (the arrow pointing to the instance).

IIRC, in Beta you had this pipeline-like assignment/function call notation:

    (s,t,v) => func1 => func2 => ((x,y,z), (a,b,c))

which would be similar to the more conventional:

    ((x,y,z), (a,b,c)) := func2(func1(s,t,v))

With arrows you can allow both directions. The conventional right-to-left is easier to read for short expressions, but the pipelining left-to-right is easier to read for longer expressions that go through multiple stages.

I think Rust also allows you to bind elements of a tuple using both "let" and "mut" in the same tuple expression, so that you can declare and bind both variables and constants in a single expression.

If you have several visually distinct arrow types you probably could get a coherent and easy to remember syntax for function calls, value assignment, reference assignment, array assignment, ranges/dataflow pipelining etc.
Aug 06 2015
prev sibling parent "Kagamin" <spam here.lot> writes:
On Sunday, 2 August 2015 at 21:17:10 UTC, Jonathan M Davis wrote:
 Where distinguishing between + and ~ would likely make a big 
 difference though is dynamic languages that aren't strict with 
 types and allow nonsense like "5" + 2.
Using '~' instead of '+' to concatenate strings is just syntax and says nothing about the type system.
Aug 03 2015
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/02/2015 09:02 PM, Max Samukha wrote:
 On Sunday, 26 July 2015 at 23:29:18 UTC, Walter Bright wrote:

 For example, the '+' operator. Rust traits sez "gee, there's a +
 operator, it's good to go. Ship it!" Meanwhile, you thought the
 function was summing some data, when it actually is creating a giant
 string, or whatever idiot thing someone decided to use '+' for.
Number addition and string concatenation are monoid operations. In this light, '+' for both makes perfect sense.
'+' is usually used to denote the operation of an abelian group.
Aug 02 2015
parent reply "Max Samukha" <maxsamukha gmail.com> writes:
On Monday, 3 August 2015 at 06:52:41 UTC, Timon Gehr wrote:
 On 08/02/2015 09:02 PM, Max Samukha wrote:
 On Sunday, 26 July 2015 at 23:29:18 UTC, Walter Bright wrote:

 For example, the '+' operator. Rust traits sez "gee, there's 
 a +
 operator, it's good to go. Ship it!" Meanwhile, you thought 
 the
 function was summing some data, when it actually is creating 
 a giant
 string, or whatever idiot thing someone decided to use '+' 
 for.
Number addition and string concatenation are monoid operations. In this light, '+' for both makes perfect sense.
'+' is usually used to denote the operation of an abelian group.
The point is that '+' for string concatenation is no more of an 'idiot thing' than '~'.
Aug 03 2015
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/03/2015 11:19 AM, Max Samukha wrote:
 On Monday, 3 August 2015 at 06:52:41 UTC, Timon Gehr wrote:
 On 08/02/2015 09:02 PM, Max Samukha wrote:
 On Sunday, 26 July 2015 at 23:29:18 UTC, Walter Bright wrote:

 For example, the '+' operator. Rust traits sez "gee, there's a +
 operator, it's good to go. Ship it!" Meanwhile, you thought the
 function was summing some data, when it actually is creating a giant
 string, or whatever idiot thing someone decided to use '+' for.
Number addition and string concatenation are monoid operations. In this light, '+' for both makes perfect sense.
'+' is usually used to denote the operation of an abelian group.
The point is that '+' for string concatenation is no more of an 'idiot thing' than '~'.
My point is that it is. String concatenation is not commutative.
Aug 05 2015
parent reply "Max Samukha" <maxsamukha gmail.com> writes:
On Wednesday, 5 August 2015 at 15:58:28 UTC, Timon Gehr wrote:

 The point is that '+' for string concatenation is no more of 
 an 'idiot
 thing' than '~'.
My point is that it is. String concatenation is not commutative.
Ok, good point. Except that '+' in a programming language is not the mathematical '+'. Why define '+' as a strictly commutative operation and not more generally as an abstract binary operation, considering the middle dot is unavailable? Or, if we want to stick to the math notation, then '*' would be more appropriate than the idiot thing '~'.
Aug 05 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 5 August 2015 at 17:12:29 UTC, Max Samukha wrote:
 On Wednesday, 5 August 2015 at 15:58:28 UTC, Timon Gehr wrote:

 The point is that '+' for string concatenation is no more of 
 an 'idiot
 thing' than '~'.
My point is that it is. String concatenation is not commutative.
Ok, good point. Except that '+' in a programming language is not the mathematical '+'. Why define '+' as strictly commutative operation and not more generally as an abstract binary operation, considering the middle dot is unavailable? Or, if we want to stick to the math notation, then '*' would be more appropriate than the idiot thing '~'.
Nobody wants to stay in the math world. Not that math is worthless, but it has this tendency to make simple things absurdly complex by requiring you to learn a whole area of math to understand the introduction.

This is commonly referred to as the monad curse: once you understand what a monad is, you lose all capacity to explain it. In fact, most developers have used some sort of monad, but only a very small portion know they were using one or can explain what it is.

Mathematical language is geared toward generality and correctness, not practicality. That makes sense in the context of math, but it does not in the context of everyday programming.
Aug 05 2015
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08/05/2015 07:32 PM, deadalnix wrote:
 On Wednesday, 5 August 2015 at 17:12:29 UTC, Max Samukha wrote:
 On Wednesday, 5 August 2015 at 15:58:28 UTC, Timon Gehr wrote:

 The point is that '+' for string concatenation is no more of an 'idiot
 thing' than '~'.
My point is that it is. String concatenation is not commutative.
Ok, good point. Except that '+' in a programming language is not the mathematical '+'. Why define '+' as strictly commutative operation and not more generally as an abstract binary operation, considering the middle dot is unavailable? Or, if we want to stick to the math notation, then '*' would be more appropriate than the idiot thing '~'.
Nobody want to stay in the math world. Not that math are worthless, but it has this tendency to make simple things absurdly complex by requiring you to learn a whole area of math to understand the introduction.
I assume the set of examples you are generalizing this from has cardinality close to one? Anyway, it seems like an exaggeration.
 This is commonly referred as the monad curse: once you understand what a
 monad is, you loose all capacity to explain it.
I'm not buying it.
 In fact, Most developers
 have used some sort of monad, but only a very small portion know they
 were using one or can explain you what it is.
 ...
Which isn't surprising. This isn't a very useful name in their (quite specific) use cases.
 Mathematical language is geared toward generality and correctness, not
 practicality. That makes sens in the context of math, that do not in the
 context of every day programming.
I don't see what you are trying to get at here, but I guess it is almost entirely unrelated to choosing a notation for string concatenation.
Aug 05 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 August 2015 at 19:56:37 UTC, Timon Gehr wrote:
 On 08/05/2015 07:32 PM, deadalnix wrote:
 Mathematical language is geared toward generality and 
 correctness, not
 practicality. That makes sens in the context of math, that do 
 not in the
 context of every day programming.
I don't see what you are trying to get at here, but I guess it is almost entirely unrelated to choosing a notation for string concatenation.
Well, I don't think practicality is the main issue, but the mnemonic aspect of syntax is important. It is not unreasonable to make the identity of operators/functions consist of both name and parameter types, like in C++ and D. So you don't have "+" as the operator name, you have "+(int,int)" and "+(string,string)". If one makes mathematical properties intrinsic to the untyped part of the name, then a lot of overloading scenarios break down, e.g. for non-Euclidean types.

It has been argued that functional languages would benefit from teaching functional programming in a less mathematical manner (e.g. talking about "callbacks" rather than "monads" etc.):

https://youtu.be/oYk8CKH7OhE
Aug 05 2015
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08/06/2015 07:50 AM, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Wednesday, 5 August 2015 at 19:56:37 UTC, Timon Gehr wrote:
 On 08/05/2015 07:32 PM, deadalnix wrote:
 Mathematical language is geared toward generality and correctness, not
 practicality. That makes sens in the context of math, that do not in the
 context of every day programming.
I don't see what you are trying to get at here, but I guess it is almost entirely unrelated to choosing a notation for string concatenation.
Well, I don't think practicality is the main issue, but the mnemonic aspect of syntax is important. It is not unreasonable to make the identity of operators/functions consist of both name and parameter types like in C++ and D. So you don't have "+" as the operator name, you have "+(int,int)" and "+(string,string)". ...
Certainly, but overloading is not always a good idea.
 It has been argued that functional languages would benefit from teaching
 functional programming in a less mathematical manner (e.g. talk about
 "callbacks" rather than "monads" etc):

 https://youtu.be/oYk8CKH7OhE
That's not less "mathematical". It is less abstract, maybe. Also, I think he is optimizing for people who pick up the language on their own. (i.e. it is not really about "teaching" in any traditional sense.)
Aug 06 2015
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08/05/2015 07:12 PM, Max Samukha wrote:
 On Wednesday, 5 August 2015 at 15:58:28 UTC, Timon Gehr wrote:

 The point is that '+' for string concatenation is no more of an 'idiot
 thing' than '~'.
My point is that it is. String concatenation is not commutative.
Ok, good point. Except that '+' in a programming language is not the mathematical '+'.
It's obvious where the notation has been borrowed from.
 Why define '+' as strictly commutative operation and
 not more generally as an abstract binary operation,
Descriptive names do have some value.
 considering the middle dot is unavailable?
(It isn't.)
 Or, if we want to stick to the math notation,
 then '*' would be more appropriate than the idiot thing '~'.
That's a different discussion. '*' is certainly more appropriate than '+'. Anyway, I think it is sensible to use distinct names for distinct operations when they are used in the same system.
Aug 05 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 8/3/2015 2:19 AM, Max Samukha wrote:
 The point is that '+' for string concatenation is no more of an 'idiot thing'
 than '~'.
Sure it is. What if you've got:

    T add(T)(T a, T b) { return a + b; }

and some idiot overloaded + for T to be something other than addition?
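As a rough sketch of the contrast being drawn here, in current D terms (hasPlus and Shout are invented names, not anything from Phobos): a purely structural "there's a '+'" check accepts any type for which the expression compiles, whatever '+' happens to mean for it, while a narrower constraint such as isNumeric at least pins '+' down to built-in addition:

    import std.traits : isNumeric;

    // Structural check: accepts anything for which 'a + b' compiles.
    enum hasPlus(T) = is(typeof(T.init + T.init));

    T add1(T)(T a, T b) if (hasPlus!T) { return a + b; }

    // Narrower check: only built-in arithmetic types, where '+' is addition.
    T add2(T)(T a, T b) if (isNumeric!T) { return a + b; }

    struct Shout
    {
        string s;
        Shout opBinary(string op : "+")(Shout rhs) { return Shout(s ~ rhs.s); }
    }

    unittest
    {
        assert(add1(1, 2) == 3);
        assert(add1(Shout("a"), Shout("b")) == Shout("ab")); // compiles, but is it addition?
        assert(add2(1, 2) == 3);
        static assert(!__traits(compiles, add2(Shout("a"), Shout("b"))));
    }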
Aug 05 2015
next sibling parent reply "Idan Arye" <GenericNPC gmail.com> writes:
On Thursday, 6 August 2015 at 06:54:45 UTC, Walter Bright wrote:
 On 8/3/2015 2:19 AM, Max Samukha wrote:
 The point is that '+' for string concatenation is no more of 
 an 'idiot thing'
 than '~'.
Sure it is. What if you've got: T add(T)(T a, T b) { return a + b; } and some idiot overloaded + for T to be something other than addition?
Having add("a", "b") return "ab" is not that weird. But consider this: http://pastebin.com/R3csc5Pa I can't put it in dpaste because it doesn't allow threading, but here is an example output: 45 45 45 45 45 45 45 45 45 45 MyString("0361572489") MyString("0379158246") MyString("0369158247") MyString("0582361479") MyString("0482579136") MyString("0369147258") MyString("0371482569") MyString("0469137258") MyString("0369147258") MyString("0561472389")
Aug 06 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 6 August 2015 at 09:15:25 UTC, Idan Arye wrote:
 Having add("a", "b") return "ab" is not that weird. But 
 consider this: http://pastebin.com/R3csc5Pa

 I can't put it in dpaste because it doesn't allow threading, 
 but here is an example output:
This would just be an argument for having static typing through and through, or Rust traits… If you want to address usability you have to look at what kind of problems people run into, not what they could construct if they tried really hard to create problems.
Aug 06 2015
prev sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Thursday, 6 August 2015 at 06:54:45 UTC, Walter Bright wrote:
 On 8/3/2015 2:19 AM, Max Samukha wrote:
 The point is that '+' for string concatenation is no more of 
 an 'idiot thing'
 than '~'.
Sure it is. What if you've got: T add(T)(T a, T b) { return a + b; } and some idiot overloaded + for T to be something other than addition?
That is a general problem with structural typing. Why not assume that if a type defines 'length', it must be a range? Then call everyone who defines it otherwise an idiot. I admit that the special treatment of '+' is justified by its long history, but 'idiot thing' is obviously out of place.

BTW, it happens that '+' does not always have to be commutative:
https://en.wikipedia.org/wiki/Near-ring
https://en.wikipedia.org/wiki/Ordinal_arithmetic#Addition
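As a small illustration of that structural-typing pitfall, a D sketch (naiveIsRange and Rope are invented names; the naive check is not Phobos' hasLength): a type can have a length without being anything like a range, and a real range check is not fooled by it:

    import std.range.primitives : isInputRange;

    // The naive duck-typed guess: "it has a length, so it must be a range".
    enum naiveIsRange(T) = is(typeof(T.init.length) : size_t);

    struct Rope
    {
        size_t length;   // a length, but no empty/front/popFront
    }

    static assert(naiveIsRange!Rope);   // the guess accepts it...
    static assert(!isInputRange!Rope);  // ...but it is not actually a range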
Jul 11 2016
parent reply Ola Fosheim Grøstad writes:
On Monday, 11 July 2016 at 17:02:50 UTC, Max Samukha wrote:
 BTW, it happens that '+' does not always have to be 
 commutative: https://en.wikipedia.org/wiki/Near-ring
 https://en.wikipedia.org/wiki/Ordinal_arithmetic#Addition
Yes, although concatenation ought to be «*»... and the empty string «""» the identity (1), e.g. a free monoid:

    "abc" * "d" == "abcd"
    "abc" * "" == "abc"
    "abc" * "" * "" == "abc"

Then if you want to represent a set of alternate sets like a regexp «ab|cd», you could replace the alternative operator «|» with «+» and let the empty set «{}» be zero (0). Thus you get a semiring:

    ( {"a"} + {"b"} ) + {"c"} == {"a"} + ( {"b"} + {"c"} ) == {"a" + "b" + "c"}
    {} + {"a"} == {"a"} + {} == {"a"}
    {"a"} + {"b"} == {"b"} + {"a"} == {"a" + "b"}
    ( {"a"} * {"b"} ) * {"c"} == {"a"} * ( {"b"} * {"c"} ) == {"abc"}
    {"a"} * ({"b"} + {"c"}) == ({"a"} * {"b"}) + ({"a"} * {"c"}) == {"ab" + "ac"}
    ({"a"} + {"b"}) * {"c"} == ({"a"} * {"c"}) + ({"b"} * {"c"}) == {"ac" + "bc"}
    {""} * {"a"} == {"a"} * {""} == {"a"}
    {} * {"a"} == {"a"} * {} == {}

Sort of...
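A loose D sketch of that semiring, with '+' as alternation (set union) and '*' as pairwise concatenation; the Lang type and its helper are invented for illustration:

    import std.algorithm : cartesianProduct, map, sort, uniq;
    import std.array : array;

    struct Lang
    {
        string[] words;   // kept sorted and deduplicated

        static Lang of(string[] ws...)
        {
            return Lang(ws.dup.sort().uniq.array);
        }

        // '+' : alternation, i.e. set union. {} is the zero.
        Lang opBinary(string op : "+")(Lang rhs)
        {
            return Lang.of(words ~ rhs.words);
        }

        // '*' : concatenation of every pair. {""} is the one.
        Lang opBinary(string op : "*")(Lang rhs)
        {
            return Lang.of(cartesianProduct(words, rhs.words)
                           .map!(p => p[0] ~ p[1]).array);
        }
    }

    unittest
    {
        auto a = Lang.of("a"), b = Lang.of("b"), c = Lang.of("c");
        assert(a * (b + c) == (a * b) + (a * c));   // distributivity
        assert(a * Lang.of("") == a);               // {""} is the identity of '*'
        assert(a + Lang() == a);                    // {} is the identity of '+'
        assert(a * Lang() == Lang());               // {} annihilates under '*'
    }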
Jul 13 2016
parent Ola Fosheim Grøstad writes:
On Wednesday, 13 July 2016 at 21:33:39 UTC, Ola Fosheim Grøstad 
wrote:
 Then if you want to represent a set of alternate sets like a
Typo: a set of alternative strings.
Jul 13 2016
prev sibling parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Saturday, 25 July 2015 at 09:40:52 UTC, Walter Bright wrote:
 BTW, you might want to remove the UTF-8 characters from your 
 user name. Evidently, NNTP doesn't do well with them.
 Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?=
It (or more likely, his user agent) does deal with them well. It's correctly quoted according to RFC2047: http://www.faqs.org/rfcs/rfc2047.html I've opened an enhancement request at DFeed's issue tracker: https://github.com/CyberShadow/DFeed/issues/44
Jul 27 2015
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/15 11:12 PM, Walter Bright wrote:
 On 7/24/2015 7:28 PM, Jonathan M Davis wrote:
 I confess that I've always thought that QueryInterface was a
 _horrible_ idea,
Specifying every interface that a type must support at the top of the hierarchy is worse.
That would be an oversimplification.
 Once again, Exception Specifications.
And that simile would be superficial. Andrei
Jul 25 2015
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 7:28 PM, Jonathan M Davis wrote:
 I confess that I've always thought that QueryInterface was a _horrible_ idea,
 and that if you need to cast your type to something else like that, you're
doing
 something wrong. *shudder* I really have nothing good to say about COM
actually...
I am not explaining this properly. Trying again:

    void foo(T: hasPrefix)(T t) {
        t.prefix(); // ok
        bar(t);     // error, hasColor was not specified for T
    }

    void bar(T: hasColor)(T t) {
        t.color();
    }

The Java, COM interface systems do not write code like:

    void foo(T: hasPrefix, hasSuffix)(T t) {
        t.prefix();
        bar(t);
    }

They write it something like:

    void foo(hasPrefix t) {
        t.prefix();
        s = cast(hasSuffix)t;
        if (s) bar(s);
        else RuntimeError(message);
    }

So no, statically checked traits and concepts are not used in the OOP world.
Jul 25 2015
parent reply Tobias Müller <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote: 
 They write it something like:
 
     void foo(hasPrefix t) {
        t.prefix();
        s = cast(hasSuffix)t;
        if (s) bar(s);
        else RuntimeError(message);
     }
That's horrible! Tobi
Jul 25 2015
parent "Jordan Miner" <jminer7 gmail.com> writes:
On Saturday, 25 July 2015 at 19:23:43 UTC, Tobias Müller wrote:
 Walter Bright <newshound2 digitalmars.com> wrote:
 They write it something like:
 
     void foo(hasPrefix t) {
        t.prefix();
        s = cast(hasSuffix)t;
        if (s) bar(s);
        else RuntimeError(message);
     }
That's horrible! Tobi
I agree. Seriously, that's horrible. I can't remember ever seeing code written like that. Why even use a statically-typed language if you are just going to bypass the type system? Jordan
Jul 25 2015
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 02:28:49 UTC, Jonathan M Davis wrote:
 On Saturday, 25 July 2015 at 01:22:15 UTC, Tofu Ninja wrote:
 On Saturday, 25 July 2015 at 00:49:38 UTC, Walter Bright wrote:
 On 7/24/2015 2:27 PM, Tofu Ninja wrote:
 No it isn't. Google QueryInterface(). Nobody lists all the 
 interfaces at the top level functions, which is what Rust 
 traits and C++ concepts require.
The only time you don't use the right interface for your needs is if you plan on casting somewhere down the line. But certainly there are people who don't do that; I for one feel it's bad practice to need to use casts to circumvent the type system like that.
I confess that I've always thought that QueryInterface was a _horrible_ idea, and that if you need to cast your type to something else like that, you're doing something wrong. *shudder* I really have nothing good to say about COM actually... - Jonathan M Davis
Well, yes and no. When your design is new and shiny, sure, that is a sign that somewhere it is broken. When you are patching hundreds of thousands of lines of code, you may not be able to get the refactoring in all at once in a realistic manner and need to build up some debt. Hopefully, as the refactoring progresses, these hacks are removed, but not having them just makes the cost of change prohibitive, which is bad.
Jul 25 2015
prev sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 25 July 2015 at 00:49:38 UTC, Walter Bright wrote:
 Sigh. Nothing I post here is understood.
Then make yourself more clear...
Jul 24 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 24 July 2015 at 19:10:33 UTC, Walter Bright wrote:
 On 7/24/2015 11:42 AM, Jacob Carlborg wrote:
 I don't see the difference compared to a regular parameter. If 
 you don't specify
 any constraints/traits/whatever it like using "Object" for all 
 your parameter
 types in Java.
So constraints then will be an all-or-nothing proposition? I believe that would make them essentially useless.

I suspect I am not getting across the essential point. If I have a call tree, and at the bottom I add a call to interface X, then I have to add a constraint that additionally specifies X on each function up the call tree to the root. That is antithetical to writing generic code, and will prove to be more of a nuisance than an asset.

Exactly what sunk Exception Specifications.
In many languages you have an instanceof keyword or something similar. You'd get:

    if (foo instanceof X) {
        // You can use the X interface on foo.
    }

vs

    static if (foo instanceof X) {
        // You can use the X interface on foo.
    }

The whole runtime vs compile time distinction is essentially an implementation detail. The idea is the very same.

The most intriguing part of this conversation is that the arguments made about unittests and complexity are the very same as for dynamic vs strong typing (and there is hard data that strong typing is better). Yet, if someone made the very same argument in the case of dynamic typing, both Walter and Andrei would not give it a second thought (and rightly so). Yet nowhere is the reason mentioned why this case differs in a way that makes the cost/benefit ratio shift. It is simply asserted as such.
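A minimal D sketch of that runtime/compile-time parallel (interface X and the classes are placeholders): the first function asks the question with a runtime cast, as instanceof/QueryInterface would, while the second asks the same question during compilation with static if and is():

    interface X { void use(); }

    class Impl : X { void use() {} }
    class Other {}

    void runtimeCheck(Object foo)
    {
        // runtime query, as with instanceof / QueryInterface
        if (auto x = cast(X) foo)
            x.use();
    }

    void compileTimeCheck(T)(T foo)
    {
        // the same question answered at compile time
        static if (is(T : X))
            foo.use();
    }

    unittest
    {
        runtimeCheck(new Impl());      // branch taken
        runtimeCheck(new Other());     // branch skipped at run time
        compileTimeCheck(new Impl());  // branch compiled in
        compileTimeCheck(new Other()); // branch compiled out
    }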
Jul 24 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 3:12 PM, deadalnix wrote:
 On Friday, 24 July 2015 at 19:10:33 UTC, Walter Bright wrote:
 If I have a call tree,
 and at the bottom I add a call to interface X, then I have to add a constraint
 that additionally specifies X on each function up the call tree to the root.
 That is antiethical to writing generic code, and will prove to be more of a
 nuisance than an asset.

 Exactly what sunk Exception Specifications.
In many language you have an instaceof keyword or something similar. You'd get : if (foo instanceof X) { // You can use X insterface on foo. } vs static if (foo instanceof X) { // You can use X insterface on foo. } The whole runtime vs compile time is essentially an implementation detail. The idea is the very same. The most intriguing part of this conversation is that the argument made about unitests and complexity are the very same than for dynamic vs strong typing (and there is hard data that strong typing is better). Yet, if someone would make the very same argument in the case of dynamic typing, both Walter and Andrei would not give it a second though (and rightly so). Yet, nowhere the reason why this differs in ay that make the cost/benefit ratio shift is mentioned. It is simply asserted as such.
I don't see how this addresses my point at all. This is very frustrating.
Jul 24 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 25 July 2015 at 00:45:06 UTC, Walter Bright wrote:
 On 7/24/2015 3:12 PM, deadalnix wrote:
 On Friday, 24 July 2015 at 19:10:33 UTC, Walter Bright wrote:
 If I have a call tree,
 and at the bottom I add a call to interface X, then I have to 
 add a constraint
 that additionally specifies X on each function up the call 
 tree to the root.
 That is antiethical to writing generic code, and will prove 
 to be more of a
 nuisance than an asset.

 Exactly what sunk Exception Specifications.
In many language you have an instaceof keyword or something similar. You'd get : if (foo instanceof X) { // You can use X insterface on foo. } vs static if (foo instanceof X) { // You can use X insterface on foo. } The whole runtime vs compile time is essentially an implementation detail. The idea is the very same. The most intriguing part of this conversation is that the argument made about unitests and complexity are the very same than for dynamic vs strong typing (and there is hard data that strong typing is better). Yet, if someone would make the very same argument in the case of dynamic typing, both Walter and Andrei would not give it a second though (and rightly so). Yet, nowhere the reason why this differs in ay that make the cost/benefit ratio shift is mentioned. It is simply asserted as such.
I don't see how this addresses my point at all. This is very frustrating.
The same "problems" that you are claiming will happen with with the compile time interfaces are the exact same as the problems that happen with normal types systems. With the normal type system, if somewhere down the call tree you need X you need to either update your interface, make a new one and add it to the whole call tree, or cast. If somewhere down the call tree a template needs X, then with this compile time interface thing, you would need to either update your interface, make a new one and add it to the whole call tree, or do some kind of static cast. ITS THE SAME. Your arguments for why not to do it are even the same that people give for dynamic typing, and we know how well that works out. Current template types work like duck typing, which works, but its error prone, and your argument of unittests is obviously bad in the context of duck typing. We want a real type system for our template types. You may be thinking, but why would you want a type system for template types, why not just use the normal type system? Well because we want static dispatch, and compile time introspection and static if and all the other great things we have at compile time.
Jul 24 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/15 9:16 PM, Tofu Ninja wrote:
 Current template types work like duck typing, which works, but its error
 prone, and your argument of unittests is obviously bad in the context of
 duck typing.
Could you please make the obvious explicit?
 We want a real type system for our template types.
Every time this (or really any apology of C++ concepts) comes up, the discussion has a similar shape: 1. Concepts are great because they're a type system for the type system! And better error messages! And look at these five-liners! And Look at iterators! And other nice words! 2. I destroy them. 3. But we want concepts because they're a type system for the type system! And ... etc. etc. I have no idea how people can simply ignore the fact that their arguments have been systematically dismantled. Andrei
Jul 25 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 13:47:05 UTC, Andrei Alexandrescu 
wrote:
 On 7/24/15 9:16 PM, Tofu Ninja wrote:
 Current template types work like duck typing, which works, but 
 its error
 prone, and your argument of unittests is obviously bad in the 
 context of
 duck typing.
Could you please make the obvious explicit?
 We want a real type system for our template types.
Every time this (or really any apology of C++ concepts) comes up, the discussion has a similar shape: 1. Concepts are great because they're a type system for the type system! And better error messages! And look at these five-liners! And Look at iterators! And other nice words! 2. I destroy them.
So far, you've just rehashed bogus claims made for dynamic typing decades ago and proven wrong decades ago.
 3. But we want concepts because they're a type system for the 
 type system! And ... etc. etc.

 I have no idea how people can simply ignore the fact that their 
 arguments have been systematically dismantled.


 Andrei
Because you only think you did, but really didn't.
Jul 25 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 00:45:06 UTC, Walter Bright wrote:
 On 7/24/2015 3:12 PM, deadalnix wrote:
 On Friday, 24 July 2015 at 19:10:33 UTC, Walter Bright wrote:
 If I have a call tree,
 and at the bottom I add a call to interface X, then I have to 
 add a constraint
 that additionally specifies X on each function up the call 
 tree to the root.
 That is antiethical to writing generic code, and will prove 
 to be more of a
 nuisance than an asset.

 Exactly what sunk Exception Specifications.
In many language you have an instaceof keyword or something similar. You'd get : if (foo instanceof X) { // You can use X insterface on foo. } vs static if (foo instanceof X) { // You can use X insterface on foo. } The whole runtime vs compile time is essentially an implementation detail. The idea is the very same. The most intriguing part of this conversation is that the argument made about unitests and complexity are the very same than for dynamic vs strong typing (and there is hard data that strong typing is better). Yet, if someone would make the very same argument in the case of dynamic typing, both Walter and Andrei would not give it a second though (and rightly so). Yet, nowhere the reason why this differs in ay that make the cost/benefit ratio shift is mentioned. It is simply asserted as such.
I don't see how this addresses my point at all. This is very frustrating.
I think it does. Your point is essentially an argument from ignorance: this is new, we don't really know, and it has been shown in the past that what seems like a good idea (checked exceptions, for instance) turns out horribly wrong, against all expectations.

My point is that it is not new; it is very much the same thing as what we've been doing all along for several decades, the difference being mostly implementation details. Also, an argument from ignorance is hard to maintain when the thread is actual feedback from experience.
Jul 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 3:28 PM, deadalnix wrote:
 Also, argument from ignorance is hard to maintain when the thread is an actual
 feedback from experience.
You say that interfaces do the same thing. So please show how it's done with the example I gave:

    int foo(T: hasPrefix)(T t) {
        t.prefix(); // ok
        bar(t);     // error, hasColor was not specified for T
    }

    void bar(T: hasColor)(T t) {
        t.color();
    }
Jul 25 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 26 July 2015 at 03:42:22 UTC, Walter Bright wrote:
 On 7/25/2015 3:28 PM, deadalnix wrote:
 Also, argument from ignorance is hard to maintain when the 
 thread is an actual
 feedback from experience.
You say that interfaces do the same thing. So please show how it's done with the example I gave: int foo(T: hasPrefix)(T t) { t.prefix(); // ok bar(t); // error, hasColor was not specified for T } void bar(T: hasColor)(T t) { t.color(); }
I'm not sure what is the problem here.
Jul 26 2015
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/26/2015 3:59 PM, deadalnix wrote:
 On Sunday, 26 July 2015 at 03:42:22 UTC, Walter Bright wrote:
 On 7/25/2015 3:28 PM, deadalnix wrote:
 Also, argument from ignorance is hard to maintain when the thread is an actual
 feedback from experience.
You say that interfaces do the same thing. So please show how it's done with the example I gave: int foo(T: hasPrefix)(T t) { t.prefix(); // ok bar(t); // error, hasColor was not specified for T } void bar(T: hasColor)(T t) { t.color(); }
I'm not sure what is the problem here.
I give up.
Jul 26 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 26 July 2015 at 22:59:09 UTC, deadalnix wrote:
 On Sunday, 26 July 2015 at 03:42:22 UTC, Walter Bright wrote:
 On 7/25/2015 3:28 PM, deadalnix wrote:
 Also, argument from ignorance is hard to maintain when the 
 thread is an actual
 feedback from experience.
You say that interfaces do the same thing. So please show how it's done with the example I gave: int foo(T: hasPrefix)(T t) { t.prefix(); // ok bar(t); // error, hasColor was not specified for T } void bar(T: hasColor)(T t) { t.color(); }
I'm not sure what is the problem here.
The problem here is that if you're dealing with traits or concepts or whatever that are essentially interfaces, and foo uses the Prefix interface/trait/concept and then wants to use bar which has the Color interface/trait/concept, it has to then require that what it's given implements both the Prefix and Color interfaces/traits/concepts.

In the case of actual interfaces, this really doesn't work well, because you're forced to basically have an interface that's derived from both - e.g. PrefixAndColor. Otherwise, you'd be forced to do nonsense like have foo take a Prefix and then cast it to Color to pass to bar and throw an exception if the type it was given didn't actually implement both interfaces. And it's downright ugly. In reality, the code would likely simply end up not being that generic and would require some specific type that you were using in your code which implemented both Prefix and Color, and foo just wouldn't work with anything else, making it far less reusable. So, with actual interfaces, it becomes very difficult to write generic code.

With traits or concepts, presumably, you could say that foo required a type that implemented both Prefix and Color, which fixes the problem of how you're able to accept something that takes both generically without coming up with something like PrefixAndColor (though if you can only list one trait/concept as required, you're just as screwed as you are with interfaces). But even if you can list multiple traits/concepts as required by a function, you quickly end up with a proliferation of traits/concepts that need to be covered by a higher level function like foo, because not only would foo need to list all of the traits/concepts that the functions it uses directly require, but it has to list all of the traits/concepts that it even indirectly requires (potentially from far down the call stack). So, any change to a function that even gets called indirectly could break foo, because it didn't have the right traits/concepts listed in its requirements. And all of the functions in the call chain would have to have their list of required traits/concepts updated any time there was any tweak to any of the underlying functions, even if those functions would have actually worked fine with most of the code that was calling them, because the types being passed in had the new requirements already (it's just that the functions higher up the stack didn't list the updated requirements yet).

By having foo list all of the traits/concepts that would be required anywhere in its call stack, you're doing something very similar to what happens with checked exceptions where the exceptions have to be listed clear up the chain. It's not quite the same, and there's no equivalent to "throws Exception" (since that would be equivalent to somehow having a trait/concept that said that you didn't care what the type given to foo implemented). Rather, you're basically being forced to list each trait/concept individually up the chain - but it's still a similar problem to checked exceptions. It doesn't scale well. And if a function is required to list all of the traits/concepts that are required - even indirectly - then changing the requirements of a function - even slightly - results in code breakage similar to that of checked exceptions when you change which exceptions a function throws, and "throws Exception" wasn't being used. And even if you're not worried about breaking other people's code, it's a maintenance problem to maintain that list clear up the chain.
Unfortunately, we _do_ have a similar problem with template constraints if we insist on putting all of the function's requirements in its template constraint rather than just its immediate requirements. But at least with template constraints, if the top-level constraint is missing something that a function somewhere deeper in the stack requires, then you get a compilation error only when the type being passed in doesn't pass the constraint on the function deeper in the stack. So, if you adjust a template constraint, it will only break code that doesn't work with the new constraint - even code that uses that function indirectly (possibly even quite deeply in a call stack, far from their own code) won't break due to the change, unless the type being used doesn't pass the new constraint. And when it does fail, the errors may not be pretty, but they do tell you exactly what's required to figure out what's wrong when you look at the source code. Whereas the traits/concepts solution would break _all_ code that used the function that was adjusted (even indirectly), not just the code that wouldn't work with the new requirements. I discussed this quite a bit more elsewhere in this thread: http://forum.dlang.org/post/lsxidsyweczhojoucnsw forum.dlang.org - Jonathan M Davis
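A condensed D sketch of the difference described above, with hasPrefix and hasColor as hypothetical checks (as elsewhere in this thread). With today's template constraints, foo states only its direct requirement and deeper requirements surface at the inner instantiation; in the concepts/traits style being argued against, fooChecked has to restate everything its call tree needs and must be edited whenever anything below it changes:

    enum hasPrefix(T) = is(typeof(T.init.prefix()));
    enum hasColor(T)  = is(typeof(T.init.color()));

    void bar(T)(T t) if (hasColor!T)
    {
        t.color();
    }

    // Constraint lists only foo's direct requirement; bar's constraint is
    // checked when bar is instantiated.
    void foo(T)(T t) if (hasPrefix!T)
    {
        t.prefix();
        bar(t);
    }

    // Concepts/traits style: every requirement of the whole call tree is
    // restated at the top.
    void fooChecked(T)(T t) if (hasPrefix!T && hasColor!T)
    {
        t.prefix();
        bar(t);
    }

    struct Widget
    {
        void prefix() {}
        void color() {}
    }

    unittest
    {
        foo(Widget());
        fooChecked(Widget());
    }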
Jul 26 2015
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/26/2015 6:59 PM, Jonathan M Davis wrote:
 [...]
Thank you, Jonathan!
Jul 26 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 27 July 2015 at 01:59:49 UTC, Jonathan M Davis wrote:
 On Sunday, 26 July 2015 at 22:59:09 UTC, deadalnix wrote:
 On Sunday, 26 July 2015 at 03:42:22 UTC, Walter Bright wrote:
 On 7/25/2015 3:28 PM, deadalnix wrote:
 Also, argument from ignorance is hard to maintain when the 
 thread is an actual
 feedback from experience.
You say that interfaces do the same thing. So please show how it's done with the example I gave: int foo(T: hasPrefix)(T t) { t.prefix(); // ok bar(t); // error, hasColor was not specified for T } void bar(T: hasColor)(T t) { t.color(); }
I'm not sure what is the problem here.
The problem here is that if you're dealing with traits or concepts or whatever that are essentially interfaces, and foo uses the Prefix interface/trait/concept and then wants to use bar which has the Color interface/trait/concept, it has to then require that what it's given implements both the Prefix and Color interfaces/traits/concepts.
So, if I translate to regular D, here is what I get:

    int foo(T)(T t) if (hasPrefix!T) {
        t.prefix(); // ok
        bar(t);     // ok, nothing is checked here
    }

    void bar(T)(T t) if (hasColor!T) {
        t.color();  // error, color is not specified on an object of type XXX
    }

It changes nothing.
 In the case of actual interfaces, this really doesn't work 
 well, because you're forced to basically have an interface 
 that's derived from both - e.g. PrefixAndColor. Otherwise, 
 you'd be forced to do nonsense like have foo take a Prefix and 
 then cast it to Color to pass to bar and throw an exception if 
 the type it was given didn't actually implement both 
 interfaces. And it's downright ugly. In reality, the code would 
 likely simply end up not being that generic and would require 
 some specific type that you were using in your code which 
 implemented both Prefix and Color, and foo just wouldn't work 
 with anything else, making it far less reusable. So, with 
 actual interfaces, it becomes very difficult to write generic 
 code.
We do not have to make the same limitation (especially if traits are implicitly implemented). Still, even with this limitation, using types is considered superior, and unittests are not considered sufficient. Also, your message is kind of weird, as in that part you seem to assume that what is discussed here is the same as passing arguments, while you get back to the checked-exception position lower down.
 With traits or concepts, presumably, you could say that foo 
 required a type that implemented both Prefix and Color, which 
 fixes the problem of how you're able to accept something that 
 takes both generically without coming up with something like 
 PrefixAndColor (though if you can only list one trait/concept 
 as required, you're just as screwed as you are with 
 interfaces). But even if you can list multiple traits/concepts 
 as required by a function, you quickly end up with a 
 proliferation of traits/concepts that need to be covered by a 
 higher level function like foo, because not only would foo need 
 to list all of the traits/concepts that the functions it uses 
 directly require, but it has to list all of the traits/concepts 
 that it even indirectly requires (potentially from far down the 
 call stack). So, any change to a function that even gets called 
 indirectly could break foo, because it didn't have the right 
 traits/concepts listed in its requirements. And all of the 
 functions in the call chain would have to have their list of 
 required traits/concepts updated any time there was any tweak 
 to any of the underlying functions, even if those functions 
 would have actually worked fine with most of the code that was 
 calling them, because the types being passed in had the new 
 requirements already (it's just that the functions higher up 
 the stack didn't list the updated requirements yet).

 By having foo list all of the traits/concepts that would be 
 required anywhere in its call stack, you're doing something 
 very similar to what happens with checked exceptions where the 
 exceptions have to be listed clear up the chain. It's not quite 
 the same, and there's no equivalent to "throws Exception" 
 (since that would be equivalent to somehow having a 
 trait/concept that said that you didn't care what the type 
 given to foo implemented). Rather, you're basically being 
 forced to list each trait/concept individually up the chain - 
 but it's still a similar problem to checked exceptions. It 
 doesn't scale well. And if a function is required to list all 
 of the traits/concepts that are required - even indirectly - 
 then changing the requirements of a function - even slightly - 
 results in code breakage similar to that of checked exceptions 
 when you change which exceptions a function throws, and "throws 
 Exception" wasn't being used. And even if you're not worried 
 about breaking other people's code, it's a maintenance problem 
 to maintain that list clear up the chain.
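Spelled out with ordinary D constraints (hasPrefix/hasColor/hasSize are illustrative predicates), the kind of propagation being described looks roughly like this:

enum hasPrefix(T) = __traits(hasMember, T, "prefix");
enum hasColor(T)  = __traits(hasMember, T, "color");
enum hasSize(T)   = __traits(hasMember, T, "size");

void baz(T)(T t) if (hasSize!T)               { t.size(); }
void bar(T)(T t) if (hasColor!T && hasSize!T) { t.color(); baz(t); }

// foo itself only needs prefix(), but to guarantee its callers a clean,
// top-level error it also has to carry every requirement of everything
// it calls, directly or indirectly - and keep that list in sync.
void foo(T)(T t) if (hasPrefix!T && hasColor!T && hasSize!T)
{
    t.prefix();
    bar(t);
}

struct Widget { void prefix() {} void color() {} void size() {} }

void main() { foo(Widget.init); }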
I'm doing something similar to checked exceptions, yes: I'm passing arguments down. They are indeed similar, and this is why people thought checked exceptions were a good idea. The main way in which they differ is that checked exceptions expose implementation, while typed arguments provide a contract for the caller.

Consider it this way. Templates are a meta language within the language. With templates you write code that writes code. At this meta level, types are values. Instantiating a template is the same as calling a function (a function that will generate code). That is the reason why static if works so well, and that is the reason why static foreach is asked for. This is also the reason why Andrei's approach in TDPL of presenting compile-time arguments rather than templates works.

For some reason, unittests cannot replace types for code that connects to a database, database code itself, code that renders pages, code that does scientific computation, code that does rendering, code that crunches numbers, code that does GUI, code for command line utilities, code that does whatever - but for code that writes code, yeah, they truly do the trick!
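As a small illustration of the "types are values at the meta level" point, a template can be read as a compile-time function over types - a sketch, with ElementOf as a hypothetical helper:

template ElementOf(T)
{
    static if (is(T : E[], E))                    // arrays: T is some E[]
        alias ElementOf = E;
    else static if (is(typeof(T.init.front)))     // ranges: use .front's type
        alias ElementOf = typeof(T.init.front);
    else
        static assert(false, T.stringof ~ " has no obvious element type");
}

// "Calling" the template at compile time, exactly like calling a function
// whose arguments happen to be types.
static assert(is(ElementOf!(int[]) == int));
static assert(is(ElementOf!(string[]) == string));

void main() {}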
 Unfortunately, we _do_ have a similar problem with template 
 constraints if we insist on putting all of the function's 
 requirements in its template constraint rather than just its 
 immediate requirements. But at least with template constraints, 
 if the top-level constraint is missing something that a 
 function somewhere deeper in the stack requires, then you get a 
 compilation error only when the type being passed in doesn't 
 pass the constraint on the function deeper in the stack. So, if 
 you adjust a template constraint, it will only break code that 
 doesn't work with the new constraint - even code that uses that 
 function indirectly (possibly even quite deeply in a call 
 stack, far from their own code) won't break due to the change, 
 unless the type being used doesn't pass the new constraint. And 
 when it does fail, the errors may not be pretty, but they do 
 tell you exactly what's required to figure out what's wrong 
 when you look at the source code. Whereas the traits/concepts 
 solution would break _all_ code that used the function that was 
 adjusted (even indirectly), not just the code that wouldn't 
 work with the new requirements.

 I discussed this quite a bit more elsewhere in this thread: 
 http://forum.dlang.org/post/lsxidsyweczhojoucnsw forum.dlang.org

 - Jonathan M Davis
Will read that soon.
Jul 27 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/27/2015 12:53 PM, deadalnix wrote:
 So, if I translate to regular D, here is what I get :
I asked how you'd solve the problem with interfaces.
Jul 27 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/15 6:12 PM, deadalnix wrote:
 The most intriguing part of this conversation is that the argument made
 about unittests and complexity is the very same as for dynamic vs
 strong typing (and there is hard data that strong typing is better).
No, that's not the case at all. There is a distinction: in dynamic typing the error is deferred to run time, in this discussion the error is only deferred to instantiation time. -- Andrei
Jul 25 2015
next sibling parent "Brendan Zabarauskas" <bjzaba yahoo.com.au> writes:
On Saturday, 25 July 2015 at 13:37:15 UTC, Andrei Alexandrescu 
wrote:
 On 7/24/15 6:12 PM, deadalnix wrote:
 The most intriguing part of this conversation is that the argument made
 about unittests and complexity is the very same as for dynamic vs
 strong typing (and there is hard data that strong typing is better).
No, that's not the case at all. There is a distinction: in dynamic typing the error is deferred to run time, in this discussion the error is only deferred to instantiation time. -- Andrei
Runtime errors are a usability problem for users and a maintainability problem for developers. Instantiation-time errors are a maintainability problem for library authors and a usability problem for developers. I would argue that the latter is better than the former, but the poor developer experience of using Phobos is what made me move away from D a couple of years ago.
Jul 25 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 13:37:15 UTC, Andrei Alexandrescu 
wrote:
 On 7/24/15 6:12 PM, deadalnix wrote:
 The most intriguing part of this conversation is that the argument made
 about unittests and complexity is the very same as for dynamic vs
 strong typing (and there is hard data that strong typing is better).
No, that's not the case at all. There is a distinction: in dynamic typing the error is deferred to run time, in this discussion the error is only deferred to instantiation time. -- Andrei
In case 1, it is argued that unittests check things at run time, so we are good, and in case 2, that unittests check instantiation, so we are good. That is the very same argument, and it is equally bogus in both cases.
Jul 25 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/15 7:10 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 13:37:15 UTC, Andrei Alexandrescu wrote:
 On 7/24/15 6:12 PM, deadalnix wrote:
 The most intriguing part of this conversation is that the argument made
 about unittests and complexity is the very same as for dynamic vs
 strong typing (and there is hard data that strong typing is better).
No, that's not the case at all. There is a distinction: in dynamic typing the error is deferred to run time, in this discussion the error is only deferred to instantiation time. -- Andrei
In case 1, it is argued that unittests check things at run time, so we are good, and in case 2, that unittests check instantiation, so we are good. That is the very same argument, and it is equally bogus in both cases.
I don't see it as the same argument. I do agree that applied to runtime it is specious. -- Andrei
Jul 26 2015
prev sibling next sibling parent reply "Bruno Queiroga" <brunoqueiroga gmail.com> writes:
On Friday, 24 July 2015 at 04:42:59 UTC, Walter Bright wrote:
 On 7/23/2015 3:12 PM, Dicebot wrote:
 On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
 OK, I jumped into the middle of this discussion so probably 
 I'm speaking
 totally out of context...
This is exactly one major advantage of Rust traits I have been trying to explain, thanks for putting it up in much more understandable way :)
Consider the following:

int foo(T: hasPrefix)(T t) {
    t.prefix();  // ok
    bar(t);      // error, hasColor was not specified for T
}

void bar(T: hasColor)(T t) {
    t.color();
}

Now consider a deeply nested chain of function calls like this. At the bottom, one adds a call to 'color', and now every function in the chain has to add 'hasColor' even though it has nothing to do with the logic in that function. This is the pit that Exception Specifications fell into. I can see these possibilities:

1. Require adding the constraint annotations all the way up the call tree. I believe that this will not only become highly annoying, it might make generic code impractical to write (consider if bar was passed as an alias).
Could not the compiler just issue a warning of implicit use of properties/functions like the C's implicit function declaration warning? And some form of "cast" (or test) could be done to make the use explicit:

int foo(T: hasPrefix)(T t) {
    t.prefix();  // ok
    t.color();   // Compiler warning: implicit.
    bar(t);      // Compiler warning: implicit.
}

void bar(T: hasColor)(T t) {
    t.color();
    t.prefix();  // Compiler warning: implicit.
}

-------------

int foo(T: hasPrefix)(T t) {
    t.prefix();                  // ok
    (cast(hasColor) t).color();  // ok: explicit.
    bar(cast(hasColor) t);       // ok: explicit.
}

void bar(T: hasColor)(T t) {
    t.color();
}

This seems enough to avoid bugs.

Note: sorry for the bad English writing.

Best regards,
Bruno Queiroga.
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 4:19 PM, Bruno Queiroga wrote:
 Could not the compiler just issue a warning
Please, no half features. Code should be correct or not.
 of implicit use of properties/functions like the C's implicit function
declaration warning?
C warnings are not part of Standard C. They're extensions only, and vary widely from compiler to compiler.
Jul 24 2015
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 01:15:52 UTC, Walter Bright wrote:
 On 7/24/2015 4:19 PM, Bruno Queiroga wrote:
 Could not the compiler just issue a warning
Please, no half features. Code should be correct or not.
Yeah. I wish that no one had ever managed to convince you to add -w and -wi to dmd. :) -w is particularly bad, since it affects what code will and won't compile, which affects stuff like is expressions, so -w can actually affect the behavior of your program, even if it doesn't actually result in any errors being printed out. We really should just make it do the same thing as -wi IMHO. In any case, I strongly concur with the idea that warnings are a bad idea. A good developer is going to fix all warnings, which ultimately makes them the same as errors anyway, and a bad developer will just leave them there and make them useless, because there will be too many of them to read. - Jonathan M Davis
Jul 24 2015
prev sibling next sibling parent "Bruno Queiroga" <brunoqueiroga gmail.com> writes:
On Saturday, 25 July 2015 at 01:15:52 UTC, Walter Bright wrote:
 On 7/24/2015 4:19 PM, Bruno Queiroga wrote:
 Could not the compiler just issue a warning
Please, no half features. Code should be correct or not.
...
 ... (consider if bar was passed as an alias)
trait S1 { void m1(); }
trait S2 : S1 { void m2(); }

void bar(S : S2)(S s) {
    s.m1();  // ok...
    s.m2();  // ok...
}

template foo(S : S1) {
    static void foo(alias void f(S))(S s) {
        s.m1();  // ok...
        s.m2();  // ERROR: S1 is the base trait of S
        f(s);    // Ignored: typeof(s) is S of f(S)
    }
}

void main(string[] args) {
    S2 s2;
    alias foo!S2 fooS2;
    alias bar!S2 barS2;
    fooS2!barS2(s2);
}

??
Jul 24 2015
prev sibling parent "Bruno Queiroga" <brunoqueiroga gmail.com> writes:
On Saturday, 25 July 2015 at 02:55:16 UTC, Bruno Queiroga wrote:
 On Saturday, 25 July 2015 at 01:15:52 UTC, Walter Bright wrote:
 On 7/24/2015 4:19 PM, Bruno Queiroga wrote:
 Could not the compiler just issue a warning
Please, no half features. Code should be correct or not.
...
 ... (consider if bar was passed as an alias)
trait S1 { void m1(); } trait S2 : S1 { void m2(); }
For completeness:

struct Struct1 {
    void m1(){};
    void m2(){};
    void m3(){};
    void m4(){};
}

trait S1 { void m1(); }
trait S2 : S1 { void m2(); }
trait S3 : S2 { void m3(); }
trait S4 { void m4(); }  // no "inheritance"

void bar(S : S2)(S s) {
    s.m1();  // ok...
    s.m2();  // ok...
    s.m3();  // ERROR!
    (cast(S3) s).m3();    // OK! (Struct1 has m3())
    // (cast(S4) s).m4(); // OK!! (Struct1 has m4())
}

template foo(S : S1) {
    static void foo(alias void f(S))(S s) {
        s.m1();  // ok...
        s.m2();  // ERROR: S1 is the base trait of S
        f(s);    // OK! typeof(s) is (compatible with) S of f(S)
                 // (structurally or nominally)
    }
}

void main(string[] args) {
    Struct1 struct1;
    alias foo!Struct1 fooSt1;
    alias bar!Struct1 barSt1;
    fooSt1!barSt1(struct1);
}

Is this reasonable?
Jul 24 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 24 July 2015 at 04:42:59 UTC, Walter Bright wrote:
 On 7/23/2015 3:12 PM, Dicebot wrote:
 On Thursday, 23 July 2015 at 22:10:11 UTC, H. S. Teoh wrote:
 OK, I jumped into the middle of this discussion so probably 
 I'm speaking
 totally out of context...
This is exactly one major advantage of Rust traits I have been trying to explain, thanks for putting it up in much more understandable way :)
Consider the following:

int foo(T: hasPrefix)(T t) {
    t.prefix();  // ok
    bar(t);      // error, hasColor was not specified for T
}

void bar(T: hasColor)(T t) {
    t.color();
}

Now consider a deeply nested chain of function calls like this. At the bottom, one adds a call to 'color', and now every function in the chain has to add 'hasColor' even though it has nothing to do with the logic in that function. This is the pit that Exception Specifications fell into. I can see these possibilities:

1. Require adding the constraint annotations all the way up the call tree. I believe that this will not only become highly annoying, it might make generic code impractical to write (consider if bar was passed as an alias).

2. Do the checking only for 1 level, i.e. don't consider what bar() requires. This winds up just pulling the teeth of the point of the constraint annotations.

3. Do inference of the constraints. I think that is indistinguishable from not having annotations as being exclusive.

Anyone know how Rust traits and C++ concepts deal with this?
I don't know about this. The problem is that if you don't list everything in the constraint, then the user is going to get an error buried in your templated code somewhere rather than in their code, which is _not_ user friendly and is why we usually try and put everything required in the template constraint. On the other hand, you're very much right in that this doesn't scale if you have enough levels of template constraints, especially if some of the constraints in the functions being called internally change. And yet, the caller needs to know what the requirements of the template or templated function actually are when they pass it something. So, it does kind of need to be at the top level from that aspect of usability as well. So, this is just plain ugly regardless.

One option which would work at least some of the time would be to do something like

void foo(T)(T t)
    if(hasPrefix!T && is(typeof(bar(t))))
{
    t.prefix();
    bar(t);
}

void bar(T)(T t)
    if(hasColor!T)
{
    t.color();
}

then you don't have to care what the current constraint for bar is, and it still gets checked in foo's template constraint.

... Actually, I just messed around with some of this to see what error messages you get when foo doesn't check for bar's constraints in its template constraint, and it's a _lot_ better than it used to be. This code

void foo(T)(T t)
    if(hasPrefix!T)
{
    t.prefix();
    bar(t);
}

void bar(T)(T t)
    if(hasColor!T)
{
    t.color();
}

struct Both
{
    void prefix() { }
    void color() { }
}

struct OneOnly
{
    void prefix() { }
}

enum hasPrefix(T) = __traits(hasMember, T, "prefix");
enum hasColor(T) = __traits(hasMember, T, "color");

void main()
{
    foo(Both.init);
    bar(Both.init);
    foo(OneOnly.init);
}

results in these error messages:

q.d(5): Error: template q.bar cannot deduce function from argument types !()(OneOnly), candidates are:
q.d(8):        q.bar(T)(T t) if (hasColor!T)
q.d(25): Error: template instance q.foo!(OneOnly) error instantiating

It tells you exactly which line in your code is wrong (which it didn't use to do when the error was inside the template), and it clearly gives you the template constraint which is failing, whereas if foo tests for bar in its template constraint, you get this

q.d(25): Error: template q.foo cannot deduce function from argument types !()(OneOnly), candidates are:
q.d(1):        q.foo(T)(T t) if (hasPrefix!T && is(typeof(bar(t))))

And that doesn't tell you anything about what bar requires. Actually putting bar's template constraint in foo's template constraint would fix that, but then you wouldn't necessarily know which is failing, and you have the maintenance problem caused by having to duplicate bar's constraint.

So, I actually think that how the current implementation reports errors makes it so that maybe it's _not_ a good idea to put all of the sub-constraints within the top-level constraint, because it actually makes it harder to figure out what you've done wrong. Unfortunately, it probably requires that you look at the source code of the templated function that you're calling regardless, since the error message doesn't actually make it clear that it's the argument that you passed to foo that's being passed to bar rather than an actual bug in foo (and to make matters more complicated, it could actually be something derived from what you passed to foo rather than actually being what you passed in).
So, maybe we could improve the error messages further to make it clear that it was what you passed in, or something about where it came from, so that you wouldn't necessarily have to look at the source code - and if so, I think that that solves the problem reasonably well. It would avoid the maintenance problem of having to propagate the constraints, and it would actually give clearer error messages than propagating the constraints. And having overly complicated template constraints is one of the most annoying aspects of dealing with template constraints, because it makes it a lot harder to figure out why they're failing. So, _not_ putting the sub-constraints in the top-level constraint could make it easier to figure out what's gone wrong.

So, honestly, I think that we have the makings here of a far better solution than trying to put everything in the top-level template constraint. This could be a good part of the solution that we've needed to improve error-reporting associated with template constraints.

In any case, looking at this, I have to agree with you that this is the same problem you get with checked exceptions / exception specifications - only worse really, because you can't do "throws Exception" and be done with it like you can in Java (as hideous as that is). Rather, you're forced to do the equivalent of listing all of the exception types being thrown and maintain that list as the code changes - i.e. you have to make the top-level template constraint list all of the sub-constraints and keep that list up-to-date as the sub-constraints change, which is a mess, especially with deep call stacks of templated functions.

- Jonathan M Davis
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 10:59 PM, Jonathan M Davis wrote:
 In any case, looking at this, I have to agree with you that this is the
 same problem you get with checked exceptions / exception specifications -
 only worse really, because you can't do "throws Exception" and be done
 with it like you can in Java (as hideous as that is). Rather, you're
 forced to do the equivalent of listing all of the exception types being
 thrown and maintain that list as the code changes - i.e. you have to make
 the top-level template constraint list all of the sub-constraints and
 keep that list up-to-date as the sub-constraints change, which is a mess,
 especially with deep call stacks of templated functions.
Phew, finally, someone understands what I'm talking about! I'm really bad at explaining things to people that they aren't already familiar with. I'm not sure, but I suspect this problem may cripple writing generic library functions that do one operation and then forward to the next (unknown in advance) operation in a chain. It also may completely torpedo Andrei's Design By Introspection technique.
Jul 25 2015
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 10:01:43 UTC, Walter Bright wrote:
 On 7/24/2015 10:59 PM, Jonathan M Davis wrote:
 In any case, looking at this, I have to agree with you that this is the
 same problem you get with checked exceptions / exception specifications -
 only worse really, because you can't do "throws Exception" and be done
 with it like you can in Java (as hideous as that is). Rather, you're
 forced to do the equivalent of listing all of the exception types being
 thrown and maintain that list as the code changes - i.e. you have to make
 the top-level template constraint list all of the sub-constraints and
 keep that list up-to-date as the sub-constraints change, which is a mess,
 especially with deep call stacks of templated functions.
Phew, finally, someone understands what I'm talking about! I'm really bad at explaining things to people that they aren't already familiar with. I'm not sure, but I suspect this problem may cripple writing generic library functions that do one operation and then forward to the next (unknown in advance) operation in a chain.
Well, the caller then has to deal with the fact that the result doesn't work with the next call in the chain, and at least in that case, the failures are at their level, not buried inside of calls that they're making. And this is exactly the sort of problem that we've had for quite a while where you try and do something like

auto result = rndGen().map!(a => a % 10).take(10).sort();

The result of take isn't random-access, but sort requires random-access, so you get a compilation failure - but it's clearly on this line and not inside of one of the calls that you're making, so it's pretty straightforward.

I guess that where the problem might come in (and maybe this is what you meant) is when you try and do a chain like that inside of a templated function, and you end up with a compilation failure, because the argument to that function resulted in one of the items in the chain failing. But I'm not sure that that's any different really from a line with a single function call failing due to the outer function's argument not working with it.
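Concretely, a compilable sketch of that chain and the usual fix (the lambda is spelled out so it compiles, and std.array.array is used to regain a random-access range):

import std.algorithm : map, sort;
import std.array : array;
import std.random : rndGen;
import std.range : take;

void main()
{
    // The failing shape: take() over a lazy map of rndGen is not a
    // random-access range, and sort() requires one - so the error lands
    // on this line, in the caller's code.
    // auto broken = rndGen.map!(a => a % 10).take(10).sort();

    // Usual fix: materialise the ten elements into an array first.
    auto ok = rndGen.map!(a => a % 10).take(10).array.sort();
}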
 It also may completely torpedo Andrei's Design By Introspection 
 technique.
I'm not sure what you mean here. Design by Introspection seems to be primarily for implementing optional functionality, and it couldn't be in the outer template constraint regardless, because it's optional. So, I don't really see how whether or not we put all of the sub-constraints in the main template constraint really affects DbI.

I do have to wonder what Andrei means to do with DbI though, since he keeps bringing it up like it solves everything and should be used everywhere, whereas it seems like it would only apply to certain areas where you're dealing with optional functionality, and a lot of code wouldn't benefit from it at all. We're essentially using it with ranges already when we're implementing algorithms differently based on what type of range we're given or what extra capabilities the range has, so it obviously is showing its usefulness there, but the allocators is the only other case that I can think of at the moment where it would make sense to use it heavily. He sounds like he wants to use it everywhere, which I don't get at all.

- Jonathan M Davis
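For what it's worth, the range case is easy to sketch: the hallmark of DbI is a static if on an optional capability rather than an extra requirement in the constraint (countElems is a hypothetical name):

import std.range.primitives : hasLength, isInputRange, walkLength;

// The constraint demands only what is strictly required; the body then
// introspects for the optional capability and picks a better path if present.
size_t countElems(R)(R r) if (isInputRange!R)
{
    static if (hasLength!R)
        return r.length;     // O(1) when the range knows its length
    else
        return r.walkLength; // otherwise walk the range
}

void main()
{
    import std.algorithm : filter;
    import std.range : iota;

    assert(countElems(iota(10)) == 10);                         // fast path
    assert(countElems(iota(10).filter!(x => x % 2 == 0)) == 5); // slow path
}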
Jul 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 3:29 AM, Jonathan M Davis wrote:
 We're essentially using it with ranges already when we're implementing
 algorithms differently based on what type of range we're given or what extra
 capabilities the range has, so it obviously is showing its usefulness there,
That's right. We've already been doing it in a haphazard manner, what Andrei is doing is recognizing the technique, naming it, and thinking about how to formalize it, organize it, and determine best practices. It's like going from an ad-hoc table of function pointers to recognizing that one is doing OOP.
 but the allocators is the only other case that I can think of at the moment
where it
 would make sense to use it heavily.
Containers are another fairly obvious use case.
Jul 25 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 25 July 2015 at 20:35:08 UTC, Walter Bright wrote:
 On 7/25/2015 3:29 AM, Jonathan M Davis wrote:
 We're essentially using it with ranges already when we're implementing
 algorithms differently based on what type of range we're given or what
 extra capabilities the range has, so it obviously is showing its
 usefulness there,
That's right. We've already been doing it in a haphazard manner, what Andrei is doing is recognizing the technique, naming it, and thinking about how to formalize it, organize it, and determine best practices. It's like going from an ad-hoc table of function pointers to recognizing that one is doing OOP.
Well, it'll be interesting to see what he comes up with.
 but the allocators is the only other case that I can think of at the
 moment where it would make sense to use it heavily.
Containers are another fairly obvious use case.
Yes. There are definitely places that DbI is going to be huge. I just have a hard time coming up with them. So, while I agree that it's a fantastic tool, I'm just not convinced yet that it's going to be one that's widely applicable. I guess that we'll just have to wait and see what Andrei comes up with and where others take it from there. But it's definitely something that D can do rather easily and most other languages can't do at all, so it's a big win for us in that regard, especially if it does turn out to be widely applicable. On a related note, while I'd noticed it on some level, I don't think that it had ever clicked for me how restrictive interfaces are before this discussion. The simple fact that you can't ask for two of them at once really reduces how reusable your code can be. So, templatizing those checks rather than using interfaces is huge. And DbI is an extension of that. There's likely a lot of unplumbed depth there. - Jonathan M Davis
Jul 27 2015
parent reply =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
"Jonathan M Davis" <jmdavisProg gmx.com> wrote: 
 On a related note, while I'd noticed it on some level, I don't think that
 it had ever clicked for me how restrictive interfaces are before this
 discussion. The simple fact that you can't ask for two of them at once
 really reduces how reusable your code can be. So, templatizing those
 checks rather than using interfaces is huge. And DbI is an extension of
 that. There's likely a lot of unplumbed depth there.
One big improvement of traits over interfaces is that you can implement traits for types after they are defined, even for types that you didn't define yourself.

So in Rust, if your function needs two unrelated interfaces (trait objects == dynamic polymorphism) A and B, you can easily define a new trait C that depends on A and B and implement C for all types that also implement A and B:

trait A {...}
trait B {...}

trait C : A,B { }

impl<T: A+B> C for T { }

fn myFunction(c: C) {...}

For generics you don't even need that:

fn myFunction<T: A+B>(t: T) {...}

Tobi
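For comparison, a rough D analogue of the generic case using template predicates (isA/isB are hypothetical stand-ins): the "A + B" requirement is just a conjunction rather than a new nominal type, and it applies to any type after the fact, including types you don't own:

enum isA(T) = __traits(hasMember, T, "a");
enum isB(T) = __traits(hasMember, T, "b");

// No combined interface is needed; the check is purely structural.
void myFunction(T)(T t) if (isA!T && isB!T)
{
    t.a();
    t.b();
}

struct S { void a() {} void b() {} }

void main() { myFunction(S.init); }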
Jul 27 2015
next sibling parent "Enamex" <enamex+d outlook.com> writes:
On Monday, 27 July 2015 at 19:21:10 UTC, Tobias Müller wrote:
 trait A {...}
 trait B {...}

 trait C : A,B { }

 impl<T: A+B> C for T { }

 fn myFunction(c: C) {...}

 Tobi
Has to be:

fn my_function(c: &C) { ... }

actually, because trait objects can only be passed by reference/borrowed pointer.
Jul 27 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, 27 July 2015 at 19:21:10 UTC, Tobias Müller wrote:
 "Jonathan M Davis" <jmdavisProg gmx.com> wrote:
 On a related note, while I'd noticed it on some level, I don't 
 think that it had ever clicked for me how restrictive 
 interfaces are before this discussion. The simple fact that 
 you can't ask for two of them at once really reduces how 
 reusable your code can be. So, templatizing those checks 
 rather than using interfaces is huge. And DbI is an extension 
 of that. There's likely a lot of unplumbed depth there.
One big improvement of traits over interfaces is that you can implement traits for types after they are defined, even for types that you didn't define yourself. So in Rust, if your function needs two unrelated interfaces (trait objects == dynamic polymorphism) A and B, you can easily define a new trait C that depends on A and B and implement C for all types that also implement A and B:

trait A {...}
trait B {...}

trait C : A,B { }
How is that any different from interfaces? You can do exactly the same thing with them.
 impl<T: A+B> C for T { }

 fn myFunction(c: C) {...}

 For generics you don't even need that:

 fn myFunction<T: A+B>(t: T) {...}
As long as you can list the two interfaces/traits/concepts that you require separately, then you're okay. But as soon as you have to create a new one that combines two or more interfaces/traits/concepts, that doesn't scale. Interfaces force that. I wouldn't expect traits or concepts to, because they're compile-time constructs, but that would depend on how the language defines them.

It might make sense to create combined traits/concepts for the cases where the operations in question are often going to be required together, but in general, it's going to scale far better to require them separately rather than require a combined trait/concept. Otherwise, you get a combinatorial explosion of traits/concepts as you combine them to create new traits/concepts - either that, or your code doesn't end up being very generic, because it's frequently using traits/concepts that require more operations than it actually uses, meaning that it will work with fewer types than it would otherwise.

In general, templates shouldn't be requiring more operations than they actually use, or they won't be as reusable as they could/should be. And that implies that the list of required operations should be kept to what's actually required rather than using traits/concepts that require those operations plus others.

- Jonathan M Davis
Jul 27 2015
parent =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
"Jonathan M Davis" <jmdavisProg gmx.com> wrote:
 trait C : A,B { }
How is that any different from interfaces? You can do exactly the same thing with them.
 impl<T: A+B> C for T { }
^^^ That's the important line. You can define *and implement* the trait C for *all types that already implement A and B*. That's definitely not possible with interfaces. The difference between traits and interfaces is that trait implementation is separate from the type definition.
 fn myFunction(c: C) {...}
 [...] But as soon as you have to create a new one that combines two or more interfaces/traits/concepts, then that doesn't scale. Interfaces force that. I wouldn't expect traits or concepts to, because they're compile-time constructs, but that would depend on how the language defines them. It might make sense to create combined traits/concepts for the cases where the operations in question are often going to be required together, but in general, it's going to scale far better to require them separately rather than require a combined trait/concept. Otherwise, you get a combinatorial explosion of traits/concepts as you combine them to create new traits/concepts - either that, or your code doesn't end up being very generic, because it's frequently using traits/concepts that require more operations than it actually uses, meaning that it will work with fewer types than it would otherwise.
Because you can implement the combined trait directly for everything that implements the separate subtraits, you can define that new trait just for the use of one function. Tobi
Jul 28 2015
prev sibling next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 25 July 2015 at 10:01:43 UTC, Walter Bright wrote:
 Phew, finally, someone understands what I'm talking about! I'm 
 really bad at explaining things to people that they aren't 
 already familiar with.

 I'm not sure, but I suspect this problem may cripple writing 
 generic library functions that do one operation and then 
 forward to the next (unknown in advance) operation in a chain.

 It also may completely torpedo Andrei's Design By Introspection 
 technique.
Actually I don't think the problem you state is actually a problem at all, even disregarding my previous argument (which I still think is valid). The key point is that it would be opt-in and it would trickle down, not up. Normal templates without the concept/traits/interface things would still be able to call functions with the extra constraints without needing to add them to themselves.

For instance, say the syntax to specialize on one of these concept/traits/interface things was the same as specializing on a class, eg:

void foo(T : inputRange)(T x){}

Calling foo from anywhere would still be the same, even calling it from other templates without the concept/traits/interface things. Eg the following would work:

void bar(T)(T x){ foo(x); }

Because bar is a normal template, it still has no choice but to assume that T can do anything we ask it to do, because that is what we have always done with templates. So the template happily assumes that passing x into foo will work. If for some reason you pass in a type that is not an inputRange, then it will fail at instantiation. So far it is the same as the constraints we have now.

Ok, here is where it is different. Inside of foo, it would be illegal to do anything other than inputRange stuff with x. For instance the following would be illegal:

void foo(T : inputRange)(T x) {
    x.something(); // ERROR!
}

The real kicker here is that THAT error can be detected without ever instantiating foo. No need to rely on unittests, which may or may not catch it depending on which types we use.

Ok, now take it a step further. Say we have the following:

void foo(T : inputRange)(T x) {
    bar(x); // ERROR!
}
void bar(T : someOtherInterface)(T x){}

The previous would error! Why? Because foo only assumes x can do inputRange things, and when you pass it into bar it asks it to do someOtherInterface things, which it doesn't know it can do! This would still error without ever instantiating the template!

Also the following should error:

void foo(T : inputRange)(T x) {
    bar(x); // ERROR!
}
void bar(T)(T x) if(isSomeOtherInterface!T) {}

Why? Because from inside foo, it is only assumed that x can do inputRange things; when it gets passed into bar, the constraint will ask it to do non-inputRange things and fail! Still without foo being instantiated!

Even something like the following should error:

void foo(T : inputRange)(T x) {
    bar(x); // ERROR!
}
void bar(T)(T x) {
    x.something_inputranges_dont_have();
}

This would error for the same reasons as before: bar asked x to do non-inputRange things.

In contrast, the following would be ok!

void foo(T : inputRange)(T x) {
    bar(x); // Ok
}
void bar(T)(T x) {
    foreach(i; x){}
}

That still works because it is known that x can do inputRange things, so it's ok!

Woo! This is awesome, right? All these errors being caught without ever instantiating the templates. You should still instantiate them and test them, but the value is that the errors were caught sooner, without even instantiating them.

The main difference here is that a normal template assumes that a type can do anything until it actually gets instantiated. Add some constraints (the normal ones we have now) and you can filter out things that don't do X, but you still assume that the type can do anything else in addition to X. On the other hand, the concept/traits/interface things would be as conservative as possible and only assume a type can do what its concept/traits/interface things say it can do.
In summary, the concept/traits/interface things would not require being applied to the whole tree. Doing so would break how templates work now and really just does not make sense unless things were being redone from scratch. They are opt-in! In addition to that, they catch a bunch of bugs in templates before they are ever instantiated! This is a good thing.
Jul 25 2015
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 10:01:43 UTC, Walter Bright wrote:
 On 7/24/2015 10:59 PM, Jonathan M Davis wrote:
 In any case, looking at this, I have to agree with you that this is the
 same problem you get with checked exceptions / exception specifications -
 only worse really, because you can't do "throws Exception" and be done
 with it like you can in Java (as hideous as that is). Rather, you're
 forced to do the equivalent of listing all of the exception types being
 thrown and maintain that list as the code changes - i.e. you have to make
 the top-level template constraint list all of the sub-constraints and
 keep that list up-to-date as the sub-constraints change, which is a mess,
 especially with deep call stacks of templated functions.
Phew, finally, someone understands what I'm talking about! I'm really bad at explaining things to people that they aren't already familiar with. I'm not sure, but I suspect this problem may cripple writing generic library functions that do one operation and then forward to the next (unknown in advance) operation in a chain. It also may completely torpedo Andrei's Design By Introspection technique.
This only makes sense under the premise that both techniques are mutually exclusive, which they aren't, and that no introspection can be done, which nobody argues against (I'm not sure what Rust provides here, but if they don't allow it, they'll regret it).
Jul 25 2015
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 3:06 PM, H. S. Teoh via Digitalmars-d wrote:
 OK, I jumped into the middle of this discussion so probably I'm speaking
 totally out of context... but anyway, with regards to template code, I
 agree that it ought to be thoroughly tested by at least instantiating
 the most typical use cases (as well as some not-so-typical use cases).
I argue that every code line of the template must at least have been instantiated at some point by the test suite. Anything less is, frankly, unprofessional.
 A lot of Phobos bugs lurk in rarely-used template branches that are not
 covered by the unittests.
Generally when I work on a Phobos template, I upgrade it to 100% unit test coverage. This should be a minimum bar for all Phobos work. We ought to be ashamed of anything less.
 Instantiating all branches is only part of the solution, though. A lot
 of Phobos bugs also arise from undetected dependencies of the template
 code on the specifics of the concrete types used to test it in the
 unittests.  The template passes the unittest but when you instantiate it
 with a type not used in the unittests, it breaks. For instance, a lot of
 range-based templates are tested with arrays in the unittests. Some of
 these templates wrongly depend on array behaviour (as opposed to being
 confined only to range API operations) while their signature constraints
 indicate only the generic range API. As a result, when non-array ranges
 are used, it breaks. Sometimes bugs like this can lurk undetected for a
 long time before somebody one day happens to instantiate it with a range
 type that violates the hidden assumption in the template code.
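A tiny sketch of that kind of latent bug (sumFirstTwo is a hypothetical name): the constraint promises only an input range, the unittest only exercises an array, and the body quietly relies on indexing:

import std.range.primitives : isInputRange;

// The signature claims "any input range", but the body indexes, which only
// happens to work because the unittest passes an array.
auto sumFirstTwo(R)(R r) if (isInputRange!R)
{
    return r[0] + r[1];   // hidden dependency on random access
}

unittest
{
    assert(sumFirstTwo([3, 4]) == 7);   // passes: arrays are random-access
}

void main()
{
    import std.algorithm : filter;
    // Would fail to compile: a filter result is an input range without opIndex.
    // auto broken = sumFirstTwo([3, 4].filter!(a => a > 0));
}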
I agree that the constraint system is not checked against the actual body of the template. Dicebot brought that up as well. Some attention should be paid in the unit tests to using types that are minimal implementations of the constraints. That said, it is a pipe dream to believe that if something matches the function signatures, it is correct and will work without ever having been tested.
 If we had a Concepts-like construct in D, where template code is
 statically constrained to only use, e.g., range API when manipulating an
 incoming type, a lot of these bugs would've been caught.

 In fact, I'd argue that this should be done for *all* templates -- for
 example, a function like this ought to be statically rejected:

 	auto myFunc(T)(T t) { return t + 1; }

 because it assumes the validity of the + operation on T, but T is not
 constrained in any way, so it can be *any* type, most of which,
 arguably, do not support the + operation.
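For comparison, the inclusive spellings D already allows for that function - a sketch, with myFuncExpr/myFuncNumeric as hypothetical names:

import std.traits : isNumeric;

// State exactly the operation the body needs...
auto myFuncExpr(T)(T t) if (is(typeof(T.init + 1)))
{
    return t + 1;
}

// ...or require a broader property that implies it.
auto myFuncNumeric(T)(T t) if (isNumeric!T)
{
    return t + 1;
}

void main()
{
    assert(myFuncExpr(41) == 42);
    assert(myFuncNumeric(2.5) == 3.5);
    // myFuncExpr("hi");   // rejected at the call site by the constraint
}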
It's a valid point, but I'd counter that it'd be pretty tedious and burdensome. D isn't meant to be a bondage & discipline language. The failed exception specifications (Java and C++) come to mind.
 If the compiler outright rejected any operation on T that hasn't been
 explicitly tested for, *then* we will have eliminated a whole class of
 template bugs. Wrong code like the last example above would be caught as
 soon as the compiler compiles the body of myFunc.
Yeah, but few would like programming in such a nagging, annoying language. Note that if you do instantiate with a type that doesn't support those operations, it isn't the end of the world - you'll still get a compile time error message.
Jul 23 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Jul 23, 2015 at 05:34:20PM -0700, Walter Bright via Digitalmars-d wrote:
 On 7/23/2015 3:06 PM, H. S. Teoh via Digitalmars-d wrote:
OK, I jumped into the middle of this discussion so probably I'm
speaking totally out of context... but anyway, with regards to
template code, I agree that it ought to be thoroughly tested by at
least instantiating the most typical use cases (as well as some
not-so-typical use cases).
I argue that every code line of the template must at least have been instantiated at some point by the test suite. Anything less is, frankly, unprofessional.
I agree, and I didn't claim otherwise.
A lot of Phobos bugs lurk in rarely-used template branches that are
not covered by the unittests.
Generally when I work on a Phobos template, I upgrade it to 100% unit test coverage. This should be a minimum bar for all Phobos work. We ought to be ashamed of anything less.
Agreed.
Instantiating all branches is only part of the solution, though. A
lot of Phobos bugs also arise from undetected dependencies of the
template code on the specifics of the concrete types used to test it
in the unittests.  The template passes the unittest but when you
instantiate it with a type not used in the unittests, it breaks. For
instance, a lot of range-based templates are tested with arrays in
the unittests. Some of these templates wrongly depend on array
behaviour (as opposed to being confined only to range API operations)
while their signature constraints indicate only the generic range
API. As a result, when non-array ranges are used, it breaks.
Sometimes bugs like this can lurk undetected for a long time before
somebody one day happens to instantiate it with a range type that
violates the hidden assumption in the template code.
I agree that the constraint system is not checked against the actual body of the template. Dicebot brought that up as well. Some attention should be paid in the unit tests to using types that are minimal implementations of the constraints. That said, it is a pipe dream to believe that if something matches the function signatures, that it is correct and will work without ever having been tested.
I didn't say that this one thing alone will singlehandedly solve all of our template testing woes. Obviously, it cannot catch semantic errors -- you use all the valid range API operations, but you use them in the wrong order, say, or in a way that doesn't accomplish what the code is supposed to do. I think it's a given that you still need to adequately unittest the code just like you would non-template code. Nevertheless, this does help to eliminate an entire class of latent template bugs -- hidden dependencies on the incoming type that are not covered by the function's contract (i.e., signature constraints). Relying on the programmer to always use types with minimal functionality in the unittests is programming by convention, and you know very well how effective that is. Without enforcement, we have no way of being sure that our tests are actually adequate. An untested branch of template code can be detected by using -cov, but performing an operation on an incoming type without checking for it in the sig constraints cannot be detected except by reading every line of code. The unittest may have inadvertently used a type with a superset of functionality, but since this is never enforced (and the current language provides no way to actually enforce it) we can never be sure -- we're just taking it on faith that the tests have covered all bases. With actual language enforcement, we can actually provide some guarantees. It doesn't solve *all* the problems, but it does solve a significant subset of them.
If we had a Concepts-like construct in D, where template code is
statically constrained to only use, e.g., range API when manipulating
an incoming type, a lot of these bugs would've been caught.

In fact, I'd argue that this should be done for *all* templates --
for example, a function like this ought to be statically rejected:

	auto myFunc(T)(T t) { return t + 1; }

because it assumes the validity of the + operation on T, but T is not
constrained in any way, so it can be *any* type, most of which,
arguably, do not support the + operation.
It's a valid point, but I'd counter that it'd be pretty tedious and burdensome. D isn't meant to be a bondage & discipline language. The failed exception specifications (Java and C++) comes to mind.
If the compiler outright rejected any operation on T that hasn't been
explicitly tested for, *then* we will have eliminated a whole class
of template bugs. Wrong code like the last example above would be
caught as soon as the compiler compiles the body of myFunc.
Yeah, but few would like programming in such a nagging, annoying language.
I have trouble thinking of a template function that's actually *correct* when its sig constraints don't specify what operations are valid on the incoming type. Can you give an example? If such code is wrong, I'd say the language *should* reject it. If you think that's too "bondage and discipline", what about a generic wildcard sig constraint clause that says basically "type T works with any operation you imagine"? Then those programmers who are too lazy to figure out what operations are required for the function can just slap this on and continue writing broken code to their heart's content.
 Note that if you do instantiate with a type that doesn't support those
 operations, it isn't the end of the world - you'll still get a compile
 time error message.
Yes, but by then it's the user that is faced with an inscrutable template error. If I'm a library author, I'd like to be able to find all these bugs *before* shipping my code to the customers. T -- People tell me I'm stubborn, but I refuse to accept it!
Jul 23 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 6:05 PM, H. S. Teoh via Digitalmars-d wrote:
 It doesn't solve *all* the problems, but it does solve a
 significant subset of them.
The worst case of not having this feature is a compile time error. Not a runtime error, an undetectable error, or silent corruption. A compile time error.

I also believe that you underestimate how much of a nuisance it is to require that the constraints cover 100% of everything the template body does. We have experience with something similar: exception specifications. Even advocates of ES found themselves writing obviously crap code to work around the issue, because ES was so damned annoying.

I know a lot of the programming community is sold on exclusive constraints (C++ concepts, Rust traits) rather than inclusive ones (D constraints). What I don't see is a lot of experience actually using them long term. They may not turn out so well, like ES.
Jul 23 2015
next sibling parent reply =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote: 
 I know a lot of the programming community is sold on exclusive
 constraints (C++ concepts, Rust traits) rather than inclusive ones (D
 constraints). What I don't see is a lot of experience actually using them
 long term. They may not turn out so well, like ES.
Haskell has type classes since ~1990. Tobi
Jul 23 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 10:49 PM, Tobias Müller wrote:
 Walter Bright <newshound2 digitalmars.com> wrote:
 I know a lot of the programming community is sold on exclusive
 constraints (C++ concepts, Rust traits) rather than inclusive ones (D
 constraints). What I don't see is a lot of experience actually using them
 long term. They may not turn out so well, like ES.
Haskell has type classes since ~1990.
Haskell is sometimes described as a bondage-and-discipline language. Google it if you don't believe me :-) Such languages have their place and adherents, but I don't think D is directed that way. Exception Specifications were proposed for Java and C++ by smart, experienced programmers. It looked great on paper, and in the simple examples in the proposals. The unfit nature of it only emerged years later. Concepts and traits appear to me to suffer from the same fault.
Jul 23 2015
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/24/2015 08:56 AM, Walter Bright wrote:
 On 7/23/2015 10:49 PM, Tobias Müller wrote:
 Walter Bright <newshound2 digitalmars.com> wrote:
 I know a lot of the programming community is sold on exclusive
 constraints (C++ concepts, Rust traits) rather than inclusive ones (D
 constraints). What I don't see is a lot of experience actually using them
 long term. They may not turn out so well, like ES.
Haskell has type classes since ~1990.
Haskell is sometimes described as a bondage-and-discipline language. Google it if you don't believe me :-) Such languages have their place and adherents, but I don't think D is directed that way.
Also, if there are carrots in your meal, it is vegetarian.
 Exception Specifications were proposed for Java and C++ by smart,
 experienced programmers. It looked great on paper, and in the simple
 examples in the proposals. The unfit nature of it only emerged years
 later. Concepts and traits appear to me to suffer from the same fault.
They are not the same thing. Not even close.
Jul 24 2015
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/15 2:56 AM, Walter Bright wrote:
 On 7/23/2015 10:49 PM, Tobias Müller wrote:
 Walter Bright <newshound2 digitalmars.com> wrote:
 I know a lot of the programming community is sold on exclusive
 constraints (C++ concepts, Rust traits) rather than inclusive ones (D
 constraints). What I don't see is a lot of experience actually using them
 long term. They may not turn out so well, like ES.
Haskell has type classes since ~1990.
Haskell is sometimes described as a bondage-and-discipline language. Google it if you don't believe me :-) Such languages have their place and adherents, but I don't think D is directed that way. Exception Specifications were proposed for Java and C++ by smart, experienced programmers. It looked great on paper, and in the simple examples in the proposals. The unfit nature of it only emerged years later. Concepts and traits appear to me to suffer from the same fault.
FWIW I think traits are better than concepts. -- Andrei
Jul 25 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 12:53:37 UTC, Andrei Alexandrescu 
wrote:
 FWIW I think traits are better than concepts. -- Andrei
Can you explain this in more detail?
Jul 25 2015
prev sibling parent =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 On 7/23/2015 10:49 PM, Tobias Müller wrote:
 Walter Bright <newshound2 digitalmars.com> wrote:
 I know a lot of the programming community is sold on exclusive
 constraints (C++ concepts, Rust traits) rather than inclusive ones (D
 constraints). What I don't see is a lot of experience actually using them
 long term. They may not turn out so well, like ES.
Haskell has type classes since ~1990.
Haskell is sometimes described as a bondage-and-discipline language. Google it if you don't believe me :-) Such languages have their place and adherents, but I don't think D is directed that way.
I just wanted to point out that there *is* long-term experience. What you're thinking about Haskell is beside the point. Tobi
Jul 25 2015
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/24/2015 04:02 AM, Walter Bright wrote:
 On 7/23/2015 6:05 PM, H. S. Teoh via Digitalmars-d wrote:
 It doesn't solve *all* the problems, but it does solve a
 significant subset of them.
The worst case of not having this feature is a compile time error. Not a runtime error, undetectable error, or silent corruption. A compile time error.
You got this wrong. In D, compile-time errors possibly influence runtime semantics.
Jul 24 2015
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 9:05 PM, H. S. Teoh via Digitalmars-d wrote:
 Yes, but by then it's the user that is faced with an inscrutable
 template error. If I'm a library author, I'd like to be able to find all
 these bugs *before* shipping my code to the customers.
Then you need to understand that concepts are not helping you with that. -- Andrei
Jul 25 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 24 July 2015 at 01:09:19 UTC, H. S. Teoh wrote:
 I have trouble thinking of a template function that's actually 
 *correct* when its sig constraints doesn't specify what 
 operations are valid on the incoming type. Can you give an 
 example?

 If such code is wrong, I'd say the language *should* reject it.
I see two issues here, both of which relate to maintenance.

The first one is that if the language were actually able to check that you missed a requirement in your template constraint (like you're suggesting) and then give you an error, that makes it way easier to break valid code. Take code like this, for example:

auto foo(T)(T t)
    if(cond1!T && cond2!T)
{
    ...
    auto b = bar(t);
    ...
}

auto bar(T)(T t)
    if(cond2!T)
{
    ...
}

foo calls bar, and it does have all of bar's constraints in its own constraints, so that you don't end up with a compilation error when you pass foo something that doesn't work with bar. Now, imagine if bar gets updated, and now it's

auto bar(T)(T t)
    if(cond2!T && cond3!T)
{
    ...
}

but foo's constraint isn't updated (e.g. because foo is in a different library or program that depends on bar, so the person who updates bar isn't necessarily the same person who maintains foo). If the compiler then caught the fact that foo didn't check all of bar's constraints and gave an error, that would alert anyone using foo that foo needed to be updated, but it would also mean that foo would no longer compile, when it's quite possible that the argument passed to foo does indeed pass bar's template constraint and will work just fine with foo. So, working code no longer compiles when there's no technical reason why it couldn't continue to work.

Presumably, once the maintainer of foo finds out about this, they'll update foo, and the problem will be fixed, but it still means that every time the template constraint for bar is adjusted at all, every template that uses it risks breaking if the compiler insists that those templates check all of bar's constraints. So, yes, it does help ensure that users of foo don't end up with error messages inside of foo thanks to foo's template constraint not listing everything that it actually requires, but it also breaks a lot of code when template constraints change, even though the code itself will often work just fine as-is (particularly since the change to bar that required a change to its template constraint would usually be a change to its implementation and not to what it did, since if you changed what it did, everyone that used it would be broken anyway).

Code that's actually broken by the change to bar will fail bar's new template constraint even if the compiler doesn't complain about foo (or any other function) not having updated its constraint, and it'll still get caught. The error might not be as nice, since it'll often be in someone else's templated code, but it'll still be an error, and it'll still tell you what's failing. So, with the current state of affairs, only code that's actually broken by a change to bar's template constraint would be broken and not everyone, whereas what you're suggesting would break all code that used bar that didn't happen to also check the same thing that bar was now checking for.

The second issue that I see with your suggestion is basically what Walter is saying the problem is. Even if we assume that we _do_ want to put all of the requirements for foo - direct or indirect - in its template constraint, this causes a maintenance problem. For instance, if foo were updated to call another function:

auto foo(T)(T t)
    if(cond1!T && cond2!T && cond3!T && cond4!T)
{
    ...
    auto b = bar(t);
    ...
    auto c = baz(t);
    ...
}

auto bar(T)(T t)
    if(cond2!T && cond3!T)
{
    ...
}

auto baz(T)(T t)
    if(cond1!T && cond4!T)
{
    ...
}

you now have to update foo. Okay.
That's not a huge deal, but now you have two functions that you're using within foo whose template constraints need to be duplicated in foo's template constraint. And ever function that _they_ call ends up affecting _their_ template constraints and then foo in turn. auto foo(T)(T t) if(cond1!T && cond2!T && cond3!T && cond4!T && cond5!T && cond6!T && cond7!T) { ... auto b = bar(t); ... auto c = baz(t); ... } auto bar(T)(T t) if(cond2!T && cond3!T) { ... auto l = lark(t); ... } auto baz(T)(T t) if(cond1!T && cond4!T) { ... auto s = stork(t); ... } auto lark(T)(T t) if(cond5!T && cond6!T) { ... } auto stork(T)(T) if(cond2!T && cond3!T && cond7!T) { auto w = wolf(t); } auto wolf(T)(T) if(cond7!T) { ... } So, foo's template constraint potentially keeps getting nastier and nastier thanks to indirect calls that it's making. Now, often there's going to be a large overlap between these constraints (e.g. because they're all range-based functions using isInputRange, isForwardRange, hasLength, etc.), so maybe foo's constraint doesn't get that nasty. But where you still have a maintenance problem even if that's the case is if a function that's being called indirectly adds something to its template constraint, then everything up the chain has to add it if you want to make sure that foo gets no compilation internally due to it failing to pass a template constraint of something that it's calling. So, if wolf ends up with a slightly more restrictive constraint in the next release, then every templated function on the planet which used it - directly or indirectly - would need to be updated. And much of that code could be maintained by someone other than the person who made the change to wolf, and much of it could be code that they've don't even know exists. So, if we're really trying to put everything that a function requires - directly or indirectly - in its template constraint, we potentially have a huge maintenance problem here once you start having templated functions call other templated functions - especially if any of these functions are part of a library that's distributed to others. But even if it's just your own code base, a slight adjustment to a template constraint could force you to change a _lot_ of the other template constraints in your code. So, while I definitely agree that it's nicer from the user's standpoint when the template constraint checks everything that the function requires - directly or indirectly - I think that we have a major maintenance issue in the making here if that's what we insist on. Putting all of the sub-constraints in the top-level constraint - especially with multiple levels of templated functions - simply doesn't scale well, even if it's desirable. Maybe some kind of constraint inference would solve the problem. I don't know. But I think that it is a problem, and it's one that we haven't really recognized yet. At this point, even if we're going to try and have top-level template constraints explicitly contain all of the constraints of the templates that they use - directly or indirectly - I think that we really need to make sure that the error messages from within templated code are as good as we can make them, because there's no way that all template constraints are going to contain all of their sub-constraints as code is changed over time, not unless the constraints are fairly simple and looking for the same stuff. 
Fortunately, the error messages are a lot better than they used to be, but if we can improve them sufficiently, then it becomes less critical to make sure that all sub-constraints be in the top-level constraint, and it makes it a lot more palatable when sub-constraints are missed. But as I said in the first part, I really don't think that detecting missing constraints and giving errors is a good solution. It'll just break more code that way. Rather, what we need is to either find a way to infer the sub-constraints into the top-level constraint and/or to provide really good error messages when errors show up inside templated code, because a constraint didn't check enough. - Jonathan M Davis
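To make the scenario concrete, here is a minimal, self-contained D sketch; the cond-style names are replaced with made-up hasLengthMember/hasFrontMember checks, so this is only an illustration of the maintenance problem described above, not code from the thread. bar's constraint grows, foo's does not, and the failure is reported at the bar(t) call inside foo rather than at foo's signature.

enum hasLengthMember(T) = is(typeof(T.init.length) : size_t);
enum hasFrontMember(T)  = is(typeof(T.init.front));

// bar grew an extra requirement in a later release
auto bar(T)(T t)
    if (hasLengthMember!T && hasFrontMember!T)
{
    return t.length;
}

// foo's constraint was never updated to match
auto foo(T)(T t)
    if (hasLengthMember!T)
{
    return bar(t);
}

unittest
{
    static struct S { size_t length; }
    // S passes foo's constraint but not bar's, so foo!S fails to compile,
    // with the error reported at the bar(t) call inside foo's body.
    static assert(!__traits(compiles, foo(S.init)));
}

With today's behavior this is still caught at compile time; the complaint in the post is only about where the error points.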
Jul 25 2015
next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Sunday, 26 July 2015 at 00:31:48 UTC, Jonathan M Davis wrote:
 I see two issues here, both of which relate to maintenance. The 
 first one is that if the language were actually able to check 
 that you missed a requirement in your template constraint (like 
 you're suggesting) and then give you an error, that makes it 
 way easier to break valid code. Take code like this, for example

 auto foo(T)(T t)
     if(cond1!T && cond2!T)
 {
     ...
     auto b = bar(t);
     ...
 }

 auto bar(T)(T t)
     if(cond2!T)
 {
     ...
 }

 foo calls bar, and it does have all of bar's constraints in its 
 own constraints so that you don't end up with a compilation 
 error when you pass foo something that doesn't work with bar. 
 Now, imagine if bar gets updated, and now it's

 auto bar(T)(T t)
     if(cond2!T && cond3!T)
 {
     ...
 }

 but foo's constraint isn't updated (e.g. because foo is in a 
 different library or program that depends on bar, so the person
 who updates bar isn't necessarily the same person who maintains 
 foo). If the compiler then caught the fact that foo didn't 
 check all of bar's constraints and gave an error, that would 
 alert anyone using foo that foo needed to be updated, but it 
 would also mean that foo would no longer compile, when it's 
 quite possible that the argument passed to foo does indeed pass 
 bar's template constraint and will work just fine with foo. So, 
 working code no longer compiles when there's no technical 
 reason why it couldn't continue to work. Presumably, once the 
 maintainer of foo finds out about this, they'll update foo, and 
 the problem will be fixed, but it still means that every time 
 that the template constraint for bar is adjusted at all, every 
 template that uses it risks breaking if the compiler insists 
 that those templates check all of bar's constraints.
I think one of the key points is that it would be opt-in. Current constraints would continue to work as they do now. Only templates that choose to use the new tighter constraints would have this "problem". They would be separate from current constraints (with a separate syntax). Code using current constraints would not stop working if a template that it uses adopted the new constraints. It makes sense when you consider how current constraints work now: they basically say "If you can't do X then fail", but they say nothing about whether it can do extra things in addition to X. So for example, in the following code I will use the specialization syntax to signify one of the NEW constraints and the regular "if" syntax for the current constraints.

void foo(T)(T x)
    if(cond1!T) // OLD constraint
{
    bar(x);
}

void bar(T : cond2)(T x) // NEW constraint
{
    ...
}

This would still work, and would only fail when foo gets instantiated with a type that does not pass both cond1 and cond2. Which makes perfect sense in the context of how the current constraints work. They only test if you can do something; they don't care if you can do extra things. There would be no need to update the constraints on foo.
 So, yes, it does help ensure that users of foo don't end up
 with error messages inside of foo thanks to foo's template 
 constraint not listing everything that it actually requires, 
 but it also breaks a lot of code when template constraints 
 change when the code itself will often work just fine as-is 
 (particularly since the change to bar that required a change to 
 its template constraint would usually be a change to its 
 implementation and not what it did, since if you changed what 
 it did, everyone that used it would be broken anyway). Code 
 that's actually broken by the change to bar will fail bar's new 
 template constraint even if the compiler doesn't complain about 
 foo (or any other function) not having updated its constraint, 
 and it'll still get caught. The error might not be as nice, 
 since it'll often be in someone else's templated code, but 
 it'll still be an error, and it'll still tell you what's 
 failing. So, with the current state of affairs, only code 
 that's actually broken by a change to bar's template constraint 
 would be broken and not everyone, whereas what you're 
 suggesting would break all code that used bar that didn't 
 happen to also check the same thing that bar was now checking 
 for.
With what I said about opt-in, I think everything above is moot. The key advantage of the new constraints would be that they constrain the type to only do what the constraints say, as opposed to being able to do anything in addition to what the constraints say.
 The second issue that I see with your suggestion is basically 
 what Walter is saying the problem is. Even if we assume that we 
 _do_ want to put all of the requirements for foo - direct or 
 indirect - in its template constraint, this causes a 
 maintenance problem. For instance, if foo were updated to call 
 another function

 auto foo(T)(T t)
     if(cond1!T && cond2!T && cond3!T && cond4!T)
 {
     ...
     auto b = bar(t);
     ...
     auto c = baz(t);
     ...
 }

 auto bar(T)(T t)
     if(cond2!T && cond3!T)
 {
     ...
 }

 auto baz(T)(T t)
     if(cond1!T && cond4!T)
 {
     ...
 }

 you now have to update foo. Okay. That's not a huge deal, but 
 now you have two functions that you're using within foo whose 
 template constraints need to be duplicated in foo's template 
 constraint. And every function that _they_ call ends up
 affecting _their_ template constraints and then foo in turn.

 auto foo(T)(T t)
     if(cond1!T && cond2!T && cond3!T && cond4!T && cond5!T && 
 cond6!T && cond7!T)
 {
     ...
     auto b = bar(t);
     ...
     auto c = baz(t);
     ...
 }

 auto bar(T)(T t)
     if(cond2!T && cond3!T)
 {
     ...
     auto l = lark(t);
     ...
 }

 auto baz(T)(T t)
     if(cond1!T && cond4!T)
 {
     ...
     auto s = stork(t);
     ...
 }


 auto lark(T)(T t)
     if(cond5!T && cond6!T)
 {
     ...
 }

 auto stork(T)(T t)
     if(cond2!T && cond3!T && cond7!T)
 {
     auto w = wolf(t);
 }

 auto wolf(T)(T)
     if(cond7!T)
 {
     ...
 }

 So, foo's template constraint potentially keeps getting nastier 
 and nastier thanks to indirect calls that it's making. Now, 
 often there's going to be a large overlap between these 
 constraints (e.g. because they're all range-based functions 
 using isInputRange, isForwardRange, hasLength, etc.), so maybe 
 foo's constraint doesn't get that nasty. But where you still 
 have a maintenance problem even if that's the case is if a 
 function that's being called indirectly adds something to its 
 template constraint, then everything up the chain has to add it 
 if you want to make sure that foo gets no compilation error
 internally due to it failing to pass a template constraint of 
 something that it's calling. So, if wolf ends up with a 
 slightly more restrictive constraint in the next release, then 
 every templated function on the planet which used it - directly 
 or indirectly - would need to be updated. And much of that code 
 could be maintained by someone other than the person who made 
 the change to wolf, and much of it could be code that they
 don't even know exists. So, if we're really trying to put 
 everything that a function requires - directly or indirectly - 
 in its template constraint, we potentially have a huge 
 maintenance problem here once you start having templated 
 functions call other templated functions - especially if any of 
 these functions are part of a library that's distributed to 
 others. But even if it's just your own code base, a slight 
 adjustment to a template constraint could force you to change a 
 _lot_ of the other template constraints in your code.
Again, with it being opt-in, this problem is not as bad as you say. Only templates that choose to use the new constraints would need to do the sort of maintenance that you describe. Also, this problem is the same one that normal type systems experience! If some low-level function needs something new out of type T, then the same problem will arise. It's not a problem there, so it should not be a problem here.
 So, while I definitely agree that it's nicer from the user's 
 standpoint when the template constraint checks everything that 
 the function requires - directly or indirectly - I think that 
 we have a major maintenance issue in the making here if that's 
 what we insist on. Putting all of the sub-constraints in the 
 top-level constraint - especially with multiple levels of 
 templated functions - simply doesn't scale well, even if it's 
 desirable. Maybe some kind of constraint inference would solve 
 the problem. I don't know. But I think that it is a problem, 
 and it's one that we haven't really recognized yet.
Again opt-in.
 At this point, even if we're going to try and have top-level 
 template constraints explicitly contain all of the constraints 
 of the templates that they use - directly or indirectly - I 
 think that we really need to make sure that the error messages 
 from within templated code are as good as we can make them, 
 because there's no way that all template constraints are going 
 to contain all of their sub-constraints as code is changed over 
 time, not unless the constraints are fairly simple and looking 
 for the same stuff.

 Fortunately, the error messages are a lot better than they used 
 to be, but if we can improve them sufficiently, then it becomes 
 less critical to make sure that all sub-constraints be in the 
 top-level constraint, and it makes it a lot more palatable when 
 sub-constraints are missed.
Better errors are of course better.
 But as I said in the first part, I really don't think that 
 detecting missing constraints and giving errors is a good 
 solution. It'll just break more code that way. Rather, what we 
 need is to either find a way to infer the sub-constraints into 
 the top-level constraint and/or to provide really good error 
 messages when errors show up inside templated code, because a 
 constraint didn't check enough.

 - Jonathan M Davis
Key point is opt-in.
Jul 25 2015
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 26 July 2015 at 01:56:29 UTC, Tofu Ninja wrote:
 Key point is opt-in.
Opt-in doesn't really fix the problem. It just allows you to choose whether you're going to break more code by requiring that all template constraints be updated because a function being called inside somewhere had its constraint updated. So, you're choosing whether to opt-in to that problem or not, but if you opt-in, you're still screwed by it. That's not going to change just because it's optional. And my point about template constraint condition proliferation holds even with the current implementation. Anyone choosing to try and put all of the sub-constraints in the top-level constraint has a maintenance problem. Sure, you can choose not to do that and let the user see errors from within the template when they use a type that fails the template constraint of a function being called and thus avoid the constraint proliferation (which then causes its own problems due to how that's more annoying to deal with when you run into it), but the problem is still there. If you opt-in to putting everything in the top-level template constraints, you will have a maintenance issue. The fact that you can choose what you do or don't put in your template constraints (or that you could choose whether to use the new paradigm/feature that you're proposing) doesn't fix the problem that going that route causes maintenance issues. That fundamental problem still remains. All it means is that you can choose whether you want to cause yourself problems by going that route, not that they're necessarily a good idea. - Jonathan M Davis
Jul 25 2015
prev sibling next sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Sunday, 26 July 2015 at 00:31:48 UTC, Jonathan M Davis wrote:
 On Friday, 24 July 2015 at 01:09:19 UTC, H. S. Teoh wrote:

 The second issue that I see with your suggestion is basically 
 what Walter is saying the problem is. Even if we assume that we 
 _do_ want to put all of the requirements for foo - direct or 
 indirect - in its template constraint, this causes a 
 maintenance problem. For instance, if foo were updated to call 
 another function
I think from your post I finally understand what Walter was getting at. Not sure if this simplifies things, but what if instead you do something like

void foo(T)(T t)
    if (__traits(compiles, bar(t)) && __traits(compiles, baz(t)))
{
    ...
    auto b = bar(t);
    ...
    auto c = baz(t);
    ...
}

This only really works in the case where it's obvious that you are calling some bar(t). It might not work more generally... Anyway, in your next example, you have additional layers with some more condn!T constraints. If instead you just have more __traits(compiles, x) for whatever templates they are calling, then checking that bar compiles necessarily also tests whether those other functions can compile as well. In this way, you're testing all the constraints at lower levels. So I guess you would still be checking that each constraint works, but at the highest level you only have to specify that the templates you are calling compile. In my opinion this is superior (for this case) because if you change bar and baz then you don't have to make changes to foo. Am I wrong? Is this just another way of doing the same thing?
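For reference, the __traits(compiles, ...) style above does work in today's D, since a template constraint may refer to the function's parameters. The bar/foo names below are made up; this is only a sketch of the technique being proposed.

import std.range.primitives : isInputRange, front;

auto bar(T)(T t)
    if (isInputRange!T)
{
    return t.front;
}

// foo only asks that the call it makes actually compiles,
// so whatever bar requires is picked up automatically.
auto foo(T)(T t)
    if (__traits(compiles, bar(t)))
{
    return bar(t);
}

unittest
{
    assert(foo([1, 2, 3]) == 1);                  // int[] is an input range
    static assert(!__traits(compiles, foo(42)));  // int is not, rejected up front
}

The downside discussed in the reply still applies: when foo(42) is rejected, the message only shows the failed __traits(compiles, ...) expression, not which of bar's actual requirements broke.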
Jul 25 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 26 July 2015 at 02:15:20 UTC, jmh530 wrote:
 On Sunday, 26 July 2015 at 00:31:48 UTC, Jonathan M Davis wrote:
 On Friday, 24 July 2015 at 01:09:19 UTC, H. S. Teoh wrote:

 The second issue that I see with your suggestion is basically 
 what Walter is saying the problem is. Even if we assume that 
 we _do_ want to put all of the requirements for foo - direct 
 or indirect - in its template constraint, this causes a 
 maintenance problem. For instance, if foo were updated to call 
 another function
I think from your post I finally understand what Walter was getting at. Not sure if this simplifies things, but what if instead you do something like

void foo(T)(T t)
    if (__traits(compiles, bar(t)) && __traits(compiles, baz(t)))
{
    ...
    auto b = bar(t);
    ...
    auto c = baz(t);
    ...
}

This only really works in the case where it's obvious that you are calling some bar(t). It might not work more generally... Anyway, in your next example, you have additional layers with some more condn!T constraints. If instead you just have more __traits(compiles, x) for whatever templates they are calling, then checking that bar compiles necessarily also tests whether those other functions can compile as well. In this way, you're testing all the constraints at lower levels. So I guess you would still be checking that each constraint works, but at the highest level you only have to specify that the templates you are calling compile. In my opinion this is superior (for this case) because if you change bar and baz then you don't have to make changes to foo. Am I wrong? Is this just another way of doing the same thing?
I suggested the same in response to Walter earlier. It is one way to combat the problem. However, it's really only going to work in basic cases (at least, without getting ugly). What if what you passed to bar wasn't t but was a result from calling a function on t? Or maybe it was a result of calling a function on the return value of a function that was called on t? Or perhaps you passed t through a chain of free functions and ended up with some other type from that, and it doesn't pass bar's template constraint? In order to deal with that sort of thing, pretty soon, you have to put most of the function inside its own constraint. It's _far_ cleaner in general to just be putting the sub-constraints in the top-level constraint - e.g. maybe all that it means is using isForwardRange and hasLength instead of just isInputRange rather than putting a whole chain of function calls inside of a __traits(compiles, ...) test in the template constraint. It just gets ugly quickly to try and get it to work for you automatically by putting the calls you're making in the constraint so that the actual constraint conditions are inferred. The other problem is that if you're putting all of those __traits(compiles, ...) tests in template constraints rather than putting the sub-constraints in there, it makes it a lot more of a pain for the user to figure out why they're failing the constraint. The constraint for what's failing in __traits(compiles, ...) isn't shown, whereas if you just let the argument get past the template constraint and fail the sub-constraint at the point where that function is being called, you'd see the actual condition that's failing. So, as annoying as it would be, it would actually be easier to figure out what you were doing wrong. Also, if you really didn't put the sub-constraint in the top-level constraint at all, then the constraint is split out so that when you get a failure at the top level, you see only the stuff that the function requires directly, and when you get a failure internally, you see the condition that that function requires and can see that separately. So, instead of having to figure out which part of condition1 && condition2 is failing, you know which it is, because the conditions are tested in separate places. I suspect that the best way to go with this is that a template constraint only require the stuff that a function uses directly and let the constraints on any functions being called internally report their own errors and then have the compiler provide really good error messages to make that sane. Then it can be a lot clearer what condition you're failing when you call the function with a bad argument. But we need to improve the error messages further if we want to go that way. The other alternative would be to just make a good faith effort to put all of the sub-constraints in the top-level constraint initially and then have better error messages for when the constraint is incomplete due to a change to a function being called. But that would still require better error messages (which is the main problem with the other suggestion), and it actually has the problem that if a function being called has its constraint lessened (rather than made more strict) such that your outer function could then accept more types of arguments, if you put all of the sub-constraints at the top level, then it won't accept anything more until you realize that the sub-constraints have changed and update the top-level constraint.
Right now, we're more or less living with the second option, but if we can get the error messages to be good enough, I think that the first option is actually better. But either way, we need to find ways to improve the error messages inside of templates to reduce the need to look at their source code when a template constraint doesn't prevent an argument being used with it that doesn't compile with it (particularly in the cases where it's due to a function being called rather than that function itself having a bug). - Jonathan M Davis
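A small sketch of the trade-off described above, using the real Phobos range traits; the process functions are made up. Style 1 spells out named sub-constraints in the top-level constraint, Style 2 only checks that the operations used actually compile.

import std.range.primitives : isForwardRange, hasLength, save;

// Style 1: named requirements; a constraint failure names the exact trait.
auto process1(R)(R r)
    if (isForwardRange!R && hasLength!R)
{
    auto copy = r.save;
    return r.length;
}

// Style 2: only require that the operations used compile; terser, but a
// failing type just sees "constraint not satisfied" with no named trait.
auto process2(R)(R r)
    if (__traits(compiles, r.save) && __traits(compiles, r.length))
{
    auto copy = r.save;
    return r.length;
}

unittest
{
    assert(process1([1, 2, 3]) == 3);
    assert(process2([1, 2, 3]) == 3);
    static assert(!__traits(compiles, process1(42)));
    static assert(!__traits(compiles, process2(42)));
}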
Jul 25 2015
parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Sunday, 26 July 2015 at 04:34:55 UTC, Jonathan M Davis wrote:
 I suggested the same in response to Walter earlier. It is one 
 way to combat the problem. However, it's really only going to 
 work in basic cases (at least, without getting ugly). What if 
 what you passed to bar wasn't t but was a result from calling a 
 function on t? Or maybe it was a result of calling a function 
 on the return value of a function that was called on t? Or 
 perhaps you passed t through a chain of free functions and 
 ended up with some other type from that, and it doesn't pass 
 bar's template constraint? In order to deal with that sort of 
 thing, pretty soon, you have to put most of the function inside 
 its own constraint. It's _far_ cleaner in general to just be 
 putting the sub-constraints in the top-level constraint - e.g. 
 maybe all that it means is using isForwardRange and hasLength 
 instead of just isInputRange rather than putting a whole chain 
 of function calls inside of __traits(compiles, ...) test in the 
 template constraint. It just gets ugly quickly to try and get 
 it to work for you automatically by putting the calls you're 
 making in the constraint so that the actual constraint 
 conditions are inferred.

 {snip}
I appreciate the thorough response. I think I agree with your point about error messages. Nevertheless, with respect to your point about a best effort at putting constraints at the top level, there might be scope for making this easier for people. For instance, if there were a way to include the constraints from one template in another template. From your example, maybe something like

auto foo(T)(T t)
    if(template_constraints!bar && template_constraints!baz)
{
    ...
    auto b = bar(t);
    ...
    auto c = baz(t);
    ...
}

Ideally the template_constraints!bar would expand so that in an error message the user sees what the actual constraints are instead of the more nebulous template_constraints!bar. At least something like this would avoid your point with respect to __traits(compiles, x).
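The template_constraints!bar helper above is hypothetical. A rough approximation that works today is to give each shared condition a name with an eponymous enum template and reference that name from both constraints, so there is a single place to update; the condition contents below are made up.

enum isBarInput(T) = is(typeof(T.init * 2));   // made-up condition
enum isBazInput(T) = is(typeof(-T.init));      // made-up condition

auto bar(T)(T t) if (isBarInput!T) { return t * 2; }
auto baz(T)(T t) if (isBazInput!T) { return -t; }

// foo reuses the same named conditions instead of restating them
auto foo(T)(T t)
    if (isBarInput!T && isBazInput!T)
{
    return bar(t) + baz(t);
}

unittest
{
    assert(foo(3) == 3);   // 3*2 + (-3)
    static assert(!__traits(compiles, foo("hi")));
}

This still leaves the maintenance question from the thread: if bar's condition changes, foo's author has to notice, but at least the condition itself is written in only one place.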
Jul 25 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 26 July 2015 at 06:12:55 UTC, jmh530 wrote:
 I appreciate the thorough response. I think I agree with your 
 point about error messages. Nevertheless, with respect to your 
 point about a best effort to putting constraints at the 
 top-level, there might be scope for making this easier for 
 people. For instance, if there were a way to include the 
 constraints from one template in another template. From your 
 example, maybe something like

 auto foo(T)(T t)
     if(template_constraints!bar && template_constraints!baz)
 {
     ...
     auto b = bar(t);
     ...
     auto c = baz(t);
     ...
 }

 Ideally the template_constraints!bar would expand so that in an 
 error message the user sees what the actual constraints are 
 instead of the more nebulous template_constraints!bar. At least 
 something like this would avoid your point with respect to 
 __traits(compiles, x).
That's certainly an interesting idea, though if we're going that route, it might be better to simply have the compiler do that automatically for you, since it can see what functions are being called and what their constraints are. Still, part of the problem is that the constraints for bar or baz may not be directly related to the argument to foo but rather to a result of operating on it. So, bar and baz's constraints can't necessarily be moved up into foo's constraint like that in a meaningful way. It would require the code that generates the arguments to bar and baz as well in order to make that clear. For instance, it might be that T needs to be an input range for the code that's directly in foo. However, if you do something like

auto u = t.f1().f2().f3();
auto b = bar(u);

u then needs to be a forward range to work with bar. In order for that to happen, t likely needs to have been a forward range, but if you tried to move bar's template constraint into foo's template constraint, it would be trying to require that u was a forward range - because that's what bar needs - whereas that's really not what needs to be in foo's template constraint. What it needs is to require that t be a forward range instead of just an input range. The connection between the arguments the user is passing to the function that they're calling and the arguments to templated functions that are called within that function isn't necessarily straightforward. So, moving those sub-constraints up into the top-level constraint in a way that's clear to the caller either requires that the writer of that function do it (because they are able to understand how the function's argument relates to the requirements of the functions being called within that function and thus come up with the full requirements for the function's argument), or it requires that enough context be given to the caller for them to be able to understand how what they're passing in is related to the call inside the template that's failing to compile because that function's argument doesn't pass its constraint. Without having the programmer who's writing this function translate the sub-constraints into what the requirements on the function's argument then are and thus what needs to go in the top-level constraint, I don't see how we can avoid providing at least _some_ of the source in error messages in order to make the context of the failure clear. Simply shoving all of that into the template constraint - even automatically - is just going to get ugly outside of basic cases. And really, even then, what you're trying to do is to take the context of the failure and put it in the template constraint rather than just show that context along with the failure. The more I think about it, the harder it seems to be able to provide enough information to the caller without pretty much just showing them the source code. The compiler should be able to reduce how much of the source code would have to be shown and thus avoid forcing the user to go look at the template's full source, but in anything but the most basic cases, that source code quickly becomes required to understand what's going on. If we're truly dealing with cases where the function's argument is simply passed on to another function call, then the sort of thing that you're suggesting is pretty straightforward and would likely work well.
But there are going to be a lot of cases where the constraint failure isn't on the original function argument but on the result of passing it through other functions or on something that was obtained by calling one of that argument's member functions. And as soon as that's what's going on, attempting to push the sub-constraints into the top-level constraint either by automatically inferring them or by having something like template_constraints!baz is not going to work well, if at all. I don't know. I think that we can come up with solutions that fix many of the simple cases, but I also think that a lot of the simple cases are where it's easiest to maintain the top-level template constraints with all of the sub-constraints translated into top-level constraints. It's the complicated cases (which are going to be common) where things get ugly. And I really don't see a good solution. Improved error messages obviously will help, but if the code is complicated enough, it eventually reaches the point that the caller is going to need to look at the full source code to see what's going on and figure out what they're screwing up, so I don't know how far we can go with the error messages. And it's also those cases where it's probably going to be hardest to maintain the constraints. It's a tough problem. - Jonathan M Davis
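A sketch of the "constraint on a derived value" point above; Wrapper, payload, and bar are made-up names. The top-level constraint has to be phrased in terms of the result of the call chain, not the original argument, which is exactly the translation work the post says someone has to do by hand.

import std.range.primitives : isForwardRange;

auto bar(R)(R r) if (isForwardRange!R) { return r; }

auto foo(T)(T t)
    // the real requirement is on the result of t.payload(), not on T itself
    if (isForwardRange!(typeof(T.init.payload())))
{
    auto u = t.payload();
    return bar(u);
}

unittest
{
    static struct Wrapper
    {
        int[] data;
        int[] payload() { return data; }
    }
    assert(foo(Wrapper([1, 2, 3])) == [1, 2, 3]);
}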
Jul 25 2015
parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Sunday, 26 July 2015 at 06:54:17 UTC, Jonathan M Davis wrote:
 That's certainly an interesting idea, though if we're going 
 that route, it might be better to simply have the compiler do 
 that automatically for you, since it can see what functions are 
 being called and what they're constraints are. Still, part of 
 the problem is that the constraints for bar or baz may not be 
 directly related to the argument to foo but rather to a result 
 of operating on it. So, bar and baz's constraints can't 
 necessarily be moved up into foo's constraint like that in a 
 meaningful way. It would require the code that generates the 
 arguments to bar and baz as well in order to make that clear. 
 For instance, it might be that T needs to be an input range for 
 the code that's directly in foo. However, if you do something 
 like

 auto u = t.f1().f2().f3();
 auto b = bar(u);
Yeah, I can see how something like that's going to get complicated. My best guess would be something like

auto foo(T)(T t)
{
    static if ( template_constraints!f1(t) &&
                template_constraints!f2(f1(t)) &&
                template_constraints!f3(f2(f1(t))) )
        auto u = t.f1().f2().f3();

    static if (template_constraints!bar(u))
        auto b = bar(u);
}

I guess the key would be that the template_constraints should be able to take whatever inputs the function can take so that you can use it with static if at any point in the function body if needed. It may not work for everything, but I imagine it would probably cover most cases, albeit awkwardly for the chained range operations. Ideally, there could be a way so that only the last condition in that first static if is required, but I'm not sure how easy something like that would be. Nevertheless, I honestly don't know how big of an issue something like this is. I'm sort of talking out of ignorance at this point. I haven't had the need to program anything like this. My biggest problem with your point about including everything in the top-level is that if you make a change to the constraints in the bottom level then you have to remember to make the changes everywhere else. If you only allow one template constraint for each function, then it might be easier to remember (b/c multiple conditions would need to be defined in auxiliary functions), but it would also be much less flexible.
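The template_constraints-taking-values helper above is hypothetical, but the shape of the idea can be approximated today with __traits(compiles, ...) and static if inside the body; Toy, f1, f2, and bar below are made-up stand-ins.

import std.range.primitives : hasLength, isForwardRange;

auto bar(R)(R r) if (isForwardRange!R && hasLength!R) { return r.length; }

struct Toy { int[] f1() { return [1, 2, 3]; } }
int[] f2(int[] a) { return a; }   // free function, found via UFCS

auto foo(T)(T t)
{
    static if (__traits(compiles, t.f1().f2()))
    {
        auto u = t.f1().f2();
        static if (__traits(compiles, bar(u)))
            return bar(u);
        else
            static assert(0, "result of f1().f2() does not satisfy bar");
    }
    else
        static assert(0, T.stringof ~ " does not support f1().f2()");
}

unittest
{
    assert(foo(Toy()) == 3);
    // foo(42) would fail to compile with the message from the static assert
}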
Jul 26 2015
parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 26 July 2015 at 16:39:23 UTC, jmh530 wrote:
 My biggest problem with your point about including everything 
 in the top-level is that if you make a change to the 
 constraints in the bottom level then you have to remember to 
 make the changes everywhere else.
Well, that's pretty much exactly why Walter is trying to say that it's the same problem as checked exceptions. Changing something lower in the chain forces you to change everything higher in the chain - and often, stuff higher in the chain isn't written by the same person or organization as the stuff lower in the chain. If we did what some folks are suggesting and make it so that the compiler gave an error if a template constraint failed to cover all of its sub-constraints (which would probably require using concepts), then changing the function lower in the chain would outright break all of the code higher in the chain, making it even more like checked exceptions. But even without that enforcement, we have a maintenance problem if the lower level constraints need to be propagated. If the constraints almost never change, then it won't necessarily be a big deal, but as code is updated and improved, then it could be a much bigger one. For a publicly available library like Phobos, the solution may simply be that the template constraints of functions in the public API can only ever be made less strict rather than more strict so that they don't stop working with any existing code or cause other functions up the chain to have to tighten their template constraints as well. But with all of the inference and conditional compilation that you get in D code, maybe even reducing the restrictions would cause problems in some cases - especially if something like __traits(compiles, ...) is used in template constraints, because that change could potentially make it so that overloads start conflicting (or just change) in higher level functions and thus break code. Though given how much you can do with metaprogramming in D, pretty soon changing _anything_ risks breaking code, so I don't know how much we should really worry about that. If you're doing stuff that involves that much type introspection, the odds of your code breaking with changes to the libraries you're using are high enough that it's probably not reasonable to expect that it won't break anyway. In any case, I suppose that we'll just have to wait and see how much a problem this will really become, but I don't think attempting to keep the higher level constraints in line with the lower level ones is as bad as doing something like concepts where it's forced. At least with what we have, the worst you normally get is an error inside of a template instead of at the outer template constraint. And if someone doesn't want to try and propagate the template constraints, they don't have to - they just then have to deal with error messages being inside of the templated function (or inside of templated functions that gets called by that function - either directly or indirectly) - and if we improve the error messages enough, then that won't be so bad. So, we have a potential maintenance problem here, but it's not one that's generally going to be from broken code so much as reporting the error at a point other than the one where folks want to see it. - Jonathan M Davis
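A sketch of the "relaxing a constraint can also break callers" point above; store and the size conditions are made up. Two overloads that are disjoint today can become ambiguous if the condition they depend on is loosened, even though no overload body changes.

enum isSmall(T) = T.sizeof <= 4;
enum isLarge(T) = T.sizeof > 4;   // currently the exact complement of isSmall

void store(T)(T value) if (isSmall!T) { /* pack inline */ }
void store(T)(T value) if (isLarge!T) { /* allocate */ }

unittest
{
    store(1);     // int: matches only the isSmall overload
    store(1.0);   // double: matches only the isLarge overload
    // If a later release relaxed isLarge to `T.sizeof >= 4`, then store(1)
    // and store(1.0f) would match both overloads and fail to compile with an
    // ambiguity error, even though no call site or overload body changed.
}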
Jul 26 2015
prev sibling parent "Sebastiaan Koppe" <mail skoppe.eu> writes:
On Sunday, 26 July 2015 at 00:31:48 UTC, Jonathan M Davis wrote:
 auto foo(T)(T t)
     if(cond1!T && cond2!T && cond3!T && cond4!T && cond5!T && 
 cond6!T && cond7!T)
 {
     ...
     auto b = bar(t);
     ...
     auto c = baz(t);
     ...
 }

 auto bar(T)(T t)
     if(cond2!T && cond3!T)
 {
     ...
     auto l = lark(t);
     ...
 }

 auto baz(T)(T t)
     if(cond1!T && cond4!T)
 {
     ...
     auto s = stork(t);
     ...
 }


 auto lark(T)(T t)
     if(cond5!T && cond6!T)
 {
     ...
 }

 auto stork(T)(T t)
     if(cond2!T && cond3!T && cond7!T)
 {
     auto w = wolf(t);
 }

 auto wolf(T)(T)
     if(cond7!T)
 {
     ...
 }
Regardless of this debate, it would be great if template constraints could be inferred. It seems rather trivial. Although I understand that a lot of times the compiler doesn't have the function's body at hand.
Jul 25 2015
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 6:06 PM, H. S. Teoh via Digitalmars-d wrote:
 An uninstantiated template path is worse than a branch that's never
 taken, because the compiler can't help you find obvious problems before
 you ship it to the customer.
-cov does help, although indeed in a suboptimal way because you need to manually parse the listing. It would be nice if it marked lines that are supposed to be code yet are not instantiated in a specific way. (Currently it only marks lines that are compiled but not run.)
 A lot of Phobos bugs lurk in rarely-used template branches that are not
 covered by the unittests.
Hopefully not many are left. I consider that a historical problem caused by lack of good testing discipline. My perception is we got a lot better at that. It's simple survival to not ship untested code, even if it does compile. In any language. We should never do it.
 If we had a Concepts-like construct in D, where template code is
 statically constrained to only use, e.g., range API when manipulating an
 incoming type, a lot of these bugs would've been caught.
We'd get to ship more untested code? No thanks. We need a different angle on this. Concepts support a scenario that fails basic software engineering quality assurance criteria.
 In fact, I'd argue that this should be done for *all* templates -- for
 example, a function like this ought to be statically rejected:

 	auto myFunc(T)(T t) { return t + 1; }

 because it assumes the validity of the + operation on T, but T is not
 constrained in any way, so it can be *any* type, most of which,
 arguably, do not support the + operation.
That would be a bit much. myFunc is correct under static and dynamic assumptions about T. Dynamic assumptions cannot be checked save for documentation and unittesting. If the static assumptions fail, then well we have a less-than-nice compile-time error message, but still a compile-time error message. No disaster.
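As a concrete sketch of that point (the unittest is illustrative and the error text is only paraphrased in the comment): the unconstrained template is still rejected at compile time for a bad argument, just from inside its body.

auto myFunc(T)(T t) { return t + 1; }

unittest
{
    assert(myFunc(41) == 42);
    // A type without +, e.g. a string, is still rejected at compile time,
    // just with an error pointing into myFunc's body rather than a constraint.
    static assert(!__traits(compiles, myFunc("hi")));
}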
 Someone could easily introduce a bug:

 	auto myFunc(T)(T t)
 		if (is(typeof(T.init + 1)))
 	{
 		/* Oops, we checked that +1 is a valid operation on T,
 		 * but here we're doing -1 instead, which may or may not
 		 * be valid: */
 		return t - 1;
Is that a bug or a suboptimal error message?
 The compiler still accepts this code as long as the unittests use types
 that support both + and -. So this dependency on the incidental
 characteristics of T remains as a latent bug.

 If the compiler outright rejected any operation on T that hasn't been
 explicitly tested for, *then* we will have eliminated a whole class of
 template bugs. Wrong code like the last example above would be caught as
 soon as the compiler compiles the body of myFunc.
Yah, I think this is off. Andrei
Jul 25 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 3:50 PM, H. S. Teoh via Digitalmars-d wrote:
 On Thu, Jul 23, 2015 at 12:49:29PM -0700, Walter Bright via Digitalmars-d
wrote:
 On 7/23/2015 7:15 AM, Andrei Alexandrescu wrote:
 I am a bit puzzled by the notion of shipping template code that has
 never been instantiated as being a positive thing. This has also
 turned up in the C++ static_if discussions.
This is easy to understand. Weeding out uncovered code during compilation is a central feature of C++ concepts. Admitting you actually never want to do that would be a major blow.
But if a unit test fails at instantiating it, it fails at compile time.
That assumes the template author is diligent (foolhardy?) enough to write unittests that cover all possible instantiations...
Well at least all paths must be compiled. You wouldn't ship templates that were never instantiated just as much as you wouldn't ship any code without compiling it. We've had a few cases in Phobos a while ago of templates that were never instantiated, with simple compilation errors when people tried to use them. -- Andrei
Jul 25 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 12:05:12 UTC, Andrei Alexandrescu 
wrote:
 Well at least all paths must be compiled. You wouldn't ship 
 templates that were never instantiated just as much as you 
 wouldn't ship any code without compiling it. We've had a few 
 cases in Phobos a while ago of templates that were never 
 instantiated, with simple compilation errors when people tried 
 to use them. -- Andrei
That is an instance of happy case testing. You test that what you expect to work work. You can't test that everything that is not supposed to work do not, or that you don't rely on a specific behavior of the thing you are testing.
Jul 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 3:59 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 12:05:12 UTC, Andrei Alexandrescu wrote:
 Well at least all paths must be compiled. You wouldn't ship templates that
 were never instantiated just as much as you wouldn't ship any code without
 compiling it. We've had a few cases in Phobos a while ago of templates that
 were never instantiated, with simple compilation errors when people tried to
 use them. -- Andrei
That is an instance of happy case testing. You test that what you expect to work work. You can't test that everything that is not supposed to work do not, or that you don't rely on a specific behavior of the thing you are testing.
Um, testing all paths is not happy case testing.
Jul 25 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 26 July 2015 at 00:18:14 UTC, Walter Bright wrote:
 On 7/25/2015 3:59 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 12:05:12 UTC, Andrei Alexandrescu 
 wrote:
 Well at least all paths must be compiled. You wouldn't ship 
 templates that
 were never instantiated just as much as you wouldn't ship any 
 code without
 compiling it. We've had a few cases in Phobos a while ago of 
 templates that
 were never instantiated, with simple compilation errors when 
 people tried to
 use them. -- Andrei
That is an instance of happy case testing. You test that what you expect to work work. You can't test that everything that is not supposed to work do not, or that you don't rely on a specific behavior of the thing you are testing.
Um, testing all paths is not happy case testing.
You test all execution paths, not all "instantiation paths". Consider this: in a dynamically typed language, you can have a function that accepts a string and does something with it. You can write unit tests to check that it does the right thing with various strings and make sure it executes all paths. Yet, what happens when it is passed an int? A float? An array? An object? Probably random shit. Same here, but at instantiation time.
Jul 26 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/26/15 5:04 AM, deadalnix wrote:
 On Sunday, 26 July 2015 at 00:18:14 UTC, Walter Bright wrote:
 On 7/25/2015 3:59 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 12:05:12 UTC, Andrei Alexandrescu wrote:
 Well at least all paths must be compiled. You wouldn't ship
 templates that
 were never instantiated just as much as you wouldn't ship any code
 without
 compiling it. We've had a few cases in Phobos a while ago of
 templates that
 were never instantiated, with simple compilation errors when people
 tried to
 use them. -- Andrei
That is an instance of happy case testing. You test that what you expect to work work. You can't test that everything that is not supposed to work do not, or that you don't rely on a specific behavior of the thing you are testing.
Um, testing all paths is not happy case testing.
You test all execution path, not all "instantiation path". Consider this, in a dynamically typed language, you can have a function that accept a string and do something with it. You can write unit tests to check it does the right thing with various strings and make sure it execute all path. Yet, what happen when it is passed an int ? a float ? an array ? an object ? Probably random shit. Same here, but at instantiation time.
No, you are very wrong here. I am sorry! Instantiation testing is making sure that syntactic conformance is there. Semantic conformance cannot be tested during compilation (big difference) and can be partially verified dynamically. This whole conflation with dynamic typing/unittesting is inappropriate and smacks of https://en.wikipedia.org/wiki/Argument_from_analogy. If you have a point, make it stand on its own. Andrei
Jul 26 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 26 July 2015 at 15:33:59 UTC, Andrei Alexandrescu 
wrote:
 On 7/26/15 5:04 AM, deadalnix wrote:
 On Sunday, 26 July 2015 at 00:18:14 UTC, Walter Bright wrote:
 On 7/25/2015 3:59 PM, deadalnix wrote:
 On Saturday, 25 July 2015 at 12:05:12 UTC, Andrei 
 Alexandrescu wrote:
 Well at least all paths must be compiled. You wouldn't ship
 templates that
 were never instantiated just as much as you wouldn't ship 
 any code
 without
 compiling it. We've had a few cases in Phobos a while ago of
 templates that
 were never instantiated, with simple compilation errors 
 when people
 tried to
 use them. -- Andrei
That is an instance of happy case testing. You test that what you expect to work work. You can't test that everything that is not supposed to work do not, or that you don't rely on a specific behavior of the thing you are testing.
Um, testing all paths is not happy case testing.
You test all execution path, not all "instantiation path". Consider this, in a dynamically typed language, you can have a function that accept a string and do something with it. You can write unit tests to check it does the right thing with various strings and make sure it execute all path. Yet, what happen when it is passed an int ? a float ? an array ? an object ? Probably random shit. Same here, but at instantiation time.
No, you are very wrong here. I am sorry! Instantiation testing is making sure that syntactic conformance is there. Semantic conformance cannot be tested during compilation (big difference) and can be partially verified dynamically. This whole conflation with dynamic typing/unittesting is inappropriate and smacks of https://en.wikipedia.org/wiki/Argument_from_analogy. If you have a point, make it stand on its own. Andrei
It is not an analogy. Dynamic typing is not a problem that is being used as an example or something. This is fundamentally the same problem. I've made that point earlier, and I stand by it. Claiming it is inappropriate does not make it so. Once again, statements do not constitute good arguments. If you make a good point that they differ in some way that I missed, then that basically ends the argument.
Jul 26 2015
prev sibling parent reply =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 On 7/23/15 5:07 AM, Walter Bright wrote:
Turns out many constraints in Phobos are of the form (A || B),
 not just (A && B).
Agreed. And that's just scratching the surface. Serious question: how do you express in Rust that a type implements one trait or another, then figure out statically which?
You define a new trait and implement it differently for A and B. That leads to a cleaner design IMO because you have to think about the right abstraction for that trait. TBH I'm very surprised about that argument, because boolean conditions with version() were dismissed for exactly that reason. Tobi
Jul 23 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 12:50 PM, Tobias Müller wrote:
 TBH I'm very surprised about that argument, because boolean conditions with
 version() were dismissed for exactly that reason.
I knew someone would bring that up :-) No, I do not believe it is the same thing. For one thing, you cannot test the various versions on one system. On any one system, you have to take on faith that you didn't break the version blocks on other systems. This is quite unlike D's template constraints, where all the combinations can be tested reliably with a unittest{} block.
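A small sketch of the "testable on one machine" point; doSomething and its branches are made up. Every branch of the constraint logic, plus a rejection, can be exercised from a single unittest block, which is not possible for version(Windows)/version(linux) blocks.

import std.traits : isFloatingPoint, isIntegral;

auto doSomething(T)(T x)
    if (isIntegral!T || isFloatingPoint!T)
{
    static if (isIntegral!T)
        return x + 1;
    else
        return x * 2.0;
}

unittest
{
    assert(doSomething(1) == 2);         // integral branch
    assert(doSomething(1.5) == 3.0);     // floating-point branch
    static assert(!__traits(compiles, doSomething("nope")));  // rejected outright
}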
Jul 23 2015
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-07-23 22:44, Walter Bright wrote:

 I knew someone would bring that up :-)

 No, I do not believe it is the same thing. For one thing, you cannot
 test the various versions on one system. On any one system, you have to
 take on faith that you didn't break the version blocks on other systems.
Perhaps it might be a good idea to allow setting a predefined version identifier, i.e. set "linux" on Windows just to see that it compiles. Think of it like how the "debug" statement can be used as an escape hatch for pure functions. -- /Jacob Carlborg
Jul 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 11:39 AM, Jacob Carlborg wrote:
 Perhaps it might be good idea to allow to set a predefined version identifier,
 i.e. set "linux" on Windows just to see that it compiles. Think of it like the
 "debug" statement can be used as an escape hatch for pure functions.
I don't want to encourage "if it compiles, ship it!" I've strongly disagreed with the C++ concepts folks on that issue, and they've downvoted me to hell on it, too :-) I get the impression that I'm the only one who thinks exclusive traits is more of a problem than a solution. It's deja vu all over again with Exception Specifications. So, one of:

1. I'm dead wrong.
2. I fail to explain my argument properly (not the first time that's happened, fer sure).
3. People strongly want to believe in traits.
4. Smart people say it tastes great and is less filling, so there's a bandwagon effect.
5. The concepts/traits people have done a fantastic job convincing people that the emperor is wearing the latest fashion :-)

It's also clear that traits work very well "in the small", i.e. in specifications of the feature, presentation slide decks, tutorials, etc. Just like Exception Specifications did. It's the complex hierarchies where it fell apart.
Jul 24 2015
next sibling parent "jmh530" <john.michael.hall gmail.com> writes:
On Friday, 24 July 2015 at 19:26:33 UTC, Walter Bright wrote:
 1. I'm dead wrong.
 2. I fail to explain my argument properly (not the first time 
 that's happened, fer sure).
I wouldn't be surprised if you're right, contra one, and you've explained it properly, contra two, but I don't understand anyway. I don't have a problem deferring to people more knowledgeable than I am.
Jul 24 2015
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/15 3:26 PM, Walter Bright wrote:
 On 7/24/2015 11:39 AM, Jacob Carlborg wrote:
 Perhaps it might be good idea to allow to set a predefined version
 identifier,
 i.e. set "linux" on Windows just to see that it compiles. Think of it
 like the
 "debug" statement can be used as an escape hatch for pure functions.
I don't want to encourage "if it compiles, ship it!" I've strongly disagreed with the C++ concepts folks on that issue, and they've downvoted me to hell on it, too :-) I get the impression that I'm the only one who thinks exclusive traits is more of a problem than a solution. It's deja vu all over again with Exception Specifications. So, one of: 1. I'm dead wrong. 2. I fail to explain my argument properly (not the first time that's happened, fer sure). 3. People strongly want to believe in traits. 4. Smart people say it tastes great and is less filling, so there's a bandwagon effect. 5. The concepts/traits people have done a fantastic job convincing people that the emperor is wearing the latest fashion :-) It's also clear that traits work very well "in the small", i.e. in specifications of the feature, presentation slide decks, tutorials, etc. Just like Exception Specifications did. It's the complex hierarchies where it fell apart.
It would be a mistake to put concepts and traits together. Traits have been used at large scale in Scala to great results (my understanding is they're similar to Rust's). Scala-style traits would marginally improve D but we already have competing mechanisms in the form of template constraints. I consider them more powerful; Odersky seems to think they're about as powerful. -- Andrei
Jul 25 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 6:20 AM, Andrei Alexandrescu wrote:
 It would be a mistake to put concepts and traits together.
Then I'm misunderstanding one or the other.
Jul 25 2015
prev sibling parent reply =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 It's also clear that traits work very well "in the small", i.e. in
 specifications of the feature, presentation slide decks, tutorials, etc.
 Just like Exception Specifications did. It's the complex hierarchies where it
fell apart.
I'm not convinced at all that checked exceptions (as implemented in Java, not C++) don't work. My suspicion is that the usual Java code monkey is just too sloppy to care and thus sees it more as a nuisance rather than the help that it is. I think Rust attracts a different kind of programmer than Java. Tobi
Jul 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 11:40 AM, Tobias Müller wrote:
 I'm not convinced at all that checked exceptions (as implemented in Java,
 not C++) don't work.

 My suspicion is that the usual Java code monkey is just too sloppy to care
 and thus sees it more as a nuisance rather than the help that it is.
Unfortunately, Bruce Eckel's seminal article on it http://www.mindview.net/Etc/Discussions/CheckedExceptions has disappeared. Eckel is not a Java code monkey; he wrote the book Thinking In Java http://www.amazon.com/gp/product/0131002872/
Jul 25 2015
next sibling parent reply "Guillaume Chatelet" <chatelet.guillaume gmail.com> writes:
On Saturday, 25 July 2015 at 20:48:06 UTC, Walter Bright wrote:
 On 7/25/2015 11:40 AM, Tobias Müller wrote:
 I'm not convinced at all that checked exceptions (as 
 implemented in Java,
 not C++) don't work.

 My suspicion is that the usual Java code monkey is just too 
 sloppy to care
 and thus sees it more as a nuisance rather than the help that 
 it is.
Unfortunately, Bruce Eckel's seminal article on it http://www.mindview.net/Etc/Discussions/CheckedExceptions has disappeared. Eckel is not a Java code monkey, he wrote the book Thinking In Java http://www.amazon.com/gp/product/0131002872/
This? http://www.artima.com/intv/handcuffs.html
Jul 25 2015
next sibling parent "Brandon Ragland" <brags callmemaybe.com> writes:
I'm not quite sure I understand why this thread is so hot...

Here's my case in point: Rust may as well be a systems language;
however it appeals more to the high-level programmers of today. 

awe. Technically speaking, Rust doesn't do anything that cannot 
be done in D. The way you go about doing things may be different, 
and frustrating at times, however that's the basis behind any 
language.

D on the other hand is more of a systems language for C and C++ 
guys.

I'm personally of the opinion that D is wonderful, and it 
certainly has already reduced my headache count tremendously from 
my C++ days.

As for marketing, for novice and intermediate programmers, the 
IDE is the language. D does not have very good IDE support. DDT 
is by far the best, but is fairly easy to break, and still lacks 
the feel of an IDE designed for D natively, as opposed to a Java 
IDE with a plugin running in the background to manage DUB for us.
Jul 25 2015
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2015 3:19 PM, Guillaume Chatelet wrote:
 On Saturday, 25 July 2015 at 20:48:06 UTC, Walter Bright wrote:
 On 7/25/2015 11:40 AM, Tobias Müller wrote:
 I'm not convinced at all that checked exceptions (as implemented in Java,
 not C++) don't work.

 My suspicion is that the usual Java code monkey is just too sloppy to care
 and thus sees it more as a nuisance rather than the help that it is.
Unfortunately, Bruce Eckel's seminal article on it http://www.mindview.net/Etc/Discussions/CheckedExceptions has disappeared. Eckel is not a Java code monkey, he wrote the book Thinking In Java http://www.amazon.com/gp/product/0131002872/
This ? http://www.artima.com/intv/handcuffs.html
No, that's Anders.
Jul 25 2015
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 25 July 2015 at 20:48:06 UTC, Walter Bright wrote:
 On 7/25/2015 11:40 AM, Tobias Müller wrote:
 I'm not convinced at all that checked exceptions (as 
 implemented in Java,
 not C++) don't work.

 My suspicion is that the usual Java code monkey is just too 
 sloppy to care
 and thus sees it more as a nuisance rather than the help that 
 it is.
Unfortunately, Bruce Eckel's seminal article on it http://www.mindview.net/Etc/Discussions/CheckedExceptions has disappeared. Eckel is not a Java code monkey, he wrote the book Thinking In Java http://www.amazon.com/gp/product/0131002872/
Yes, checked exceptions are bankrupt at this point. It was not clear at the time; now it is.
Jul 25 2015
prev sibling parent reply Alix Pexton <alix.DOT.pexton gmail.DOT.com> writes:
On 25/07/2015 9:48 PM, Walter Bright wrote:

 Unfortunately, Bruce Eckel's seminal article on it
 http://www.mindview.net/Etc/Discussions/CheckedExceptions has
 disappeared. Eckel is not a Java code monkey, he wrote the book Thinking
 In Java
 http://www.amazon.com/gp/product/0131002872/
https://web.archive.org/web/20150515072240/http://www.mindview.net/Etc/Discussions/CheckedExceptions
Jul 26 2015
next sibling parent reply Tobias Müller <troplin bluewin.ch> writes:
Alix Pexton <alix.DOT.pexton gmail.DOT.com> wrote:
 On 25/07/2015 9:48 PM, Walter Bright wrote:
 
 Unfortunately, Bruce Eckel's seminal article on it
 http://www.mindview.net/Etc/Discussions/CheckedExceptions has
 disappeared. Eckel is not a Java code monkey, he wrote the book Thinking
 In Java
 http://www.amazon.com/gp/product/0131002872/
 
https://web.archive.org/web/20150515072240/http://www.mindview.net/Etc/Discussions/CheckedExceptions
This article is not convincing at all. His argument is basically "Most programmers are sloppy and tend to catch and ignore checked exceptions." The same programmers that do this will just catch all RuntimeExceptions at top level, write a log entry and proceed. That's actually not much better and certainly not correct error handling. Tobi
Jul 26 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 26 July 2015 at 18:13:30 UTC, Tobias Müller wrote:
 Alix Pexton <alix.DOT.pexton gmail.DOT.com> wrote:
 On 25/07/2015 9:48 PM, Walter Bright wrote:
 
 Unfortunately, Bruce Eckel's seminal article on it
 http://www.mindview.net/Etc/Discussions/CheckedExceptions has
 disappeared. Eckel is not a Java code monkey, he wrote the 
 book Thinking
 In Java
 http://www.amazon.com/gp/product/0131002872/
 
https://web.archive.org/web/20150515072240/http://www.mindview.net/Etc/Discussions/CheckedExceptions
This article is not convincing at all. His argument is basically "Most programmers are sloppy and tend to catch and ignore checked exceptions."
No, it is that checked exceptions encourage this behavior. Ultimately, checked exceptions are a failure because they completely break encapsulation. Let's say you have a logger interface. Some of its implementations will just send the log to Dave Null, some write it to a file, some will send it over the network to some log tailer, and so on. The classes of errors that arise from each are completely different and cannot be listed exhaustively at the interface level in any meaningful way.
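To make that concrete, here is a minimal D sketch (Logger, NullLogger and FileLogger are invented names for illustration, not from Phobos or any real library): the set of possible failures depends entirely on the implementation, so no fixed list on the interface would be meaningful.

    import std.stdio : File;

    // Hypothetical logger interface.
    interface Logger
    {
        // Which failures can log() raise? It depends entirely on the
        // implementation: none at all, a FileException, a socket error, ...
        void log(string msg);
    }

    class NullLogger : Logger
    {
        void log(string msg) {}                  // cannot fail
    }

    class FileLogger : Logger
    {
        private File f;
        this(string path) { f = File(path, "a"); }
        void log(string msg) { f.writeln(msg); } // may throw on I/O errors
    }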
 The same programmers that do this will just catch all 
 RuntimeExceptions at
 top level, write a log entry and proceed.
 That's actually not much better and certainly not correct error 
 handling.

 Tobi
This is often the only meaningful thing you have to do with an exception anyway.
Jul 26 2015
parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 26/07/2015 23:58, deadalnix wrote:
 On Sunday, 26 July 2015 at 18:13:30 UTC, Tobias Müller wrote:
 Alix Pexton <alix.DOT.pexton gmail.DOT.com> wrote:
 On 25/07/2015 9:48 PM, Walter Bright wrote:

 Unfortunately, Bruce Eckel's seminal article on it
 http://www.mindview.net/Etc/Discussions/CheckedExceptions has
 disappeared. Eckel is not a Java code monkey, he wrote the book
 Thinking
 In Java
 http://www.amazon.com/gp/product/0131002872/
https://web.archive.org/web/20150515072240/http://www.mindview.net/Etc/Discussions/CheckedExceptions
This article is not convincing at all. His argument is basically "Most programmers are sloppy and tend to catch and ignore checked exceptions."
No, it is that checked exceptions encourage this behavior. Ultimately, checked exceptions are a failure because they completely break encapsulation. Let's say you have a logger interface. Some of its implementations will just send the log to Dave Null, some write it to a file, some will send it over the network to some log tailer, and so on. The classes of errors that arise from each are completely different and cannot be listed exhaustively at the interface level in any meaningful way.
Then define the logger interface as throwing a generic Exception class (one that sits at the top of the hierarchy of the other exceptions). -- Bruno Medeiros https://twitter.com/brunodomedeiros
Oct 15 2015
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/26/2015 12:51 AM, Alix Pexton wrote:
 On 25/07/2015 9:48 PM, Walter Bright wrote:

 Unfortunately, Bruce Eckel's seminal article on it
 http://www.mindview.net/Etc/Discussions/CheckedExceptions has
 disappeared. Eckel is not a Java code monkey, he wrote the book Thinking
 In Java
 http://www.amazon.com/gp/product/0131002872/
https://web.archive.org/web/20150515072240/http://www.mindview.net/Etc/Discussions/CheckedExceptions
That's it. Thanks for finding the link.
Jul 27 2015
prev sibling parent Tobias Müller <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 On 7/23/2015 12:50 PM, Tobias Müller wrote:
 TBH I'm very surprised about that argument, because boolean conditions with
 version() were dismissed for exactly that reason.
I knew someone would bring that up :-) No, I do not believe it is the same thing. For one thing, you cannot test the various versions on one system. On any one system, you have to take on faith that you didn't break the version blocks on other systems. This is quite unlike D's template constraints, where all the combinations can be tested reliably with a unittest{} block.
How is this related to testability? Using boolean conditions does not introduce any new code paths compared to helper versions / helper traits. Testability is exactly the same. My point was that, in the case of versions, you argued on the grounds of cleaner design. *I agree with that*, and I think the same is true for traits. Even if you're right and testability is better, that doesn't contradict the point I'm trying to make. Tobi
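For reference, the unittest-based checking of template constraints that Walter describes above can be sketched like this (firstOrDefault is an invented example, not Phobos code):

    import std.range;  // isInputRange, plus empty/front for arrays

    // A constrained template and a unittest that exercises both the
    // accepting and the rejecting side of the constraint.
    auto firstOrDefault(R, T)(R r, T def)
        if (isInputRange!R)
    {
        return r.empty ? def : r.front;
    }

    unittest
    {
        assert(firstOrDefault([1, 2, 3], 0) == 1);
        assert(firstOrDefault((int[]).init, 42) == 42);
        // The constraint rejects non-ranges at compile time:
        static assert(!__traits(compiles, firstOrDefault(123, 0)));
    }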
Jul 25 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-07-23 11:07, Walter Bright wrote:

 Consider that template constraints can be arbitrarily complex, and can
 even check behavior, not just a list of function signatures ANDed
 together. Turns out many constraints in Phobos are of the form (A || B),
 not just (A && B).
I know that is possible, but in most cases it's only a list of function signatures that is needed. -- /Jacob Carlborg
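For reference, a rough sketch of the (A || B)-shaped constraints Walter mentions above (countElements is an invented name, not a Phobos function):

    import std.range : isInputRange;
    import std.traits : isSomeString;

    // An (A || B) constraint: accept either some kind of string or any
    // input range.
    size_t countElements(S)(S s)
        if (isSomeString!S || isInputRange!S)
    {
        size_t n;
        foreach (c; s) ++n;
        return n;
    }

    unittest
    {
        assert(countElements("héllo") == 6);    // narrow string, counted by code unit
        assert(countElements([1, 2, 3]) == 3);  // plain array / input range
    }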
Jul 23 2015
prev sibling next sibling parent reply "Chris" <wendlec tcd.ie> writes:
On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Long rant ahead - a bit dipsy..
[snip] It's one thing to list "nice features", and it's another thing to use these features in production code. As a code base grows, the limitations become more and more obvious. Thus, I would be wary of jumping to conclusions or hailing new features as game changers before having tested them thoroughly in the real world. Only time will tell if something really scales. I've learned to wait and see what people with experience report after a year or two of using a given language.
Jul 23 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/23/2015 2:22 AM, Chris wrote:
 It's one thing to list "nice features", and it's another thing to use these
 features in production code. As a code base grows, the limitations become more
 and more obvious. Thus, I would be wary of jumping to conclusions or hailing
new
 features as game changers, before having tested them thoroughly in the real
 world. Only time will tell, if something really scales. I've learned to wait
and
 see what people with experience report after a year or two of using a given
 language.
It is very true that many features look good on paper, and only time and experience reveals the truth. There are a lot of programming features that fail the second clause - like implicit declaration of variables.
Jul 23 2015
next sibling parent Justin Whear <justin economicmodeling.com> writes:
On Thu, 23 Jul 2015 13:46:16 -0700, Walter Bright wrote:
 like implicit declaration of variables.
Trigger warning needed!
Jul 23 2015
prev sibling next sibling parent "Chris" <wendlec tcd.ie> writes:
On Thursday, 23 July 2015 at 20:46:16 UTC, Walter Bright wrote:
 On 7/23/2015 2:22 AM, Chris wrote:
 It's one thing to list "nice features", and it's another thing 
 to use these
 features in production code. As a code base grows, the 
 limitations become more
 and more obvious. Thus, I would be wary of jumping to 
 conclusions or hailing new
 features as game changers, before having tested them 
 thoroughly in the real
 world. Only time will tell, if something really scales. I've 
 learned to wait and
 see what people with experience report after a year or two of 
 using a given
 language.
It is very true that many features look good on paper, and only time and experience reveals the truth. There are a lot of programming features that fail the second clause - like implicit declaration of variables.
What happens next is that users demand that things be changed and adapted to reality, which in turn compromises the original idea. Then you have a feature soup with dodgy rules. In a way D avoids this by providing only the ingredients and not the whole meal. At the end of the day, it's up to the programmer to make the code safe and stable. Time and again language designers try to avoid bugs by making the language as rigid and prescriptive as possible. "This error couldn't happen in X, because every variable is a Y by default!" However, new features give rise to new kinds of bugs, and finding a workaround for a restriction is bound to produce bugs.
Jul 24 2015
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/23/2015 10:46 PM, Walter Bright wrote:
 On 7/23/2015 2:22 AM, Chris wrote:
 It's one thing to list "nice features", and it's another thing to use
 these
 features in production code. As a code base grows, the limitations
 become more
 and more obvious. Thus, I would be wary of jumping to conclusions or
 hailing new
 features as game changers, before having tested them thoroughly in the
 real
 world. Only time will tell, if something really scales. I've learned
 to wait and
 see what people with experience report after a year or two of using a
 given
 language.
It is very true that many features look good on paper, and only time and experience reveals the truth. There are a lot of programming features that fail the second clause - like implicit declaration of variables.
That also fails the first clause.
Jul 24 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-07-23 22:46, Walter Bright wrote:

 like implicit declaration of variables.
I would say that Ruby is pretty far up the list of successful languages, a lot higher than D ;) -- /Jacob Carlborg
Jul 24 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2015 11:40 AM, Jacob Carlborg wrote:
 On 2015-07-23 22:46, Walter Bright wrote:

 like implicit declaration of variables.
I would say that Ruby is pretty far up the list of successful languages, a lot higher than D ;)
I know you're a great fan of Ruby, so I'll bite my tongue :-)
Jul 24 2015
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 07/23/2015 11:22 AM, Chris wrote:
 On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Long rant ahead - a bit dipsy..
[snip] It's one thing to list "nice features", and it's another thing to use these features in production code. As a code base grows, the limitations become more and more obvious. Thus, I would be wary of jumping to conclusions or hailing new features as game changers, before having tested them thoroughly in the real world. Only time will tell, if something really scales. I've learned to wait and see what people with experience report after a year or two of using a given language.
FWIW, most of the features in his list are very old.
Jul 24 2015
prev sibling next sibling parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
Thank you for sharing your impressions! I think posts like this 
are really useful for the community, because it's important from 
time to time to take a look outside of the D ecosystem.

I've been watching Rust with interest myself, though I've mostly 
been a passive observer. Some of its concepts fascinate me. My 
general impression is that it's a lot stricter than D (good for 
correctness), but at the cost of expressiveness. I'm going to 
comment on some of your points below:

On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:

 Cargo
 -----
Very important. Fortunately, dub will be bundled with D in one of the next versions.
 Traits
 ------
 I think the ability to express an interface without buying into 
 inheritance is the right move. The alternative in D is 
 specifying the behavior as a template and verifying the 
 contract in a unittest for the type.
Not only that, but Rust does so much more with traits, e.g. closures. They seem to be a simple yet powerful concept that a big part of the language is built on. http://blog.rust-lang.org/2015/05/11/traits.html On the other hand, they are extremely restrictive compared to `static if()` and template constraints. D's approach is a lot more expressive and - at least for me - more intuitive.
 Macros
 ------
 I haven't written more than extremely simple macros in Rust, 
 but having macros that is possible for tools to understand is a 
 win.  Templates and string mixins is often used for this 
 purpose, but trying to build tools when string mixins exists is 
 probably extremely hard. If D had hygenic macros, I expect 
 several features could be expressed with this instead of string 
 mixins, making tooling easier to implement.
They're certainly cleaner than string mixins. On the other hand, they are really complex (and probably costly to implement, because they need to expose the AST). I don't know how representative it is, but I mostly use string mixins for accessing aggregate members by name in meta-programming:

    foreach (member; FieldNameTuple!T)
    {
        static if (is(typeof(mixin("T.init." ~ member)) : int))
        {
            // ...
        }
    }

Or sometimes in cases where using recursive templates to build a list is too tedious. I don't think AST macros would gain us much in D.
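For completeness, a self-contained version of that pattern might look like the following sketch (Config and dumpIntFields are invented names for illustration):

    import std.stdio : writeln;
    import std.traits : FieldNameTuple;

    struct Config
    {
        int retries;
        string host;
        double timeout;
    }

    // Walk the fields of a struct by name and act only on the ones
    // implicitly convertible to int.
    void dumpIntFields(T)(T value)
    {
        foreach (member; FieldNameTuple!T)
        {
            static if (is(typeof(mixin("T.init." ~ member)) : int))
                writeln(member, " = ", mixin("value." ~ member));
        }
    }

    void main()
    {
        dumpIntFields(Config(3, "example.org", 1.5));  // prints: retries = 3
    }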
 Safe by default
 ---------------
 D is often said being safe by default, but D still has default 
 nullable references and mutable by default. I don't see it 
 being possible to change at this stage, but expressing when I 
 want to be unsafe rather than the opposite is very nice. I end 
 up typing a lot more in D than Rust because of this.
@safe by default would be really nice. It's easy to add `@safe:` at the beginning of your file, but large parts of e.g. vibe.d are not annotated and need to be called from @system functions. Maybe more inference would help here, too.
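For illustration, a minimal sketch of opting a whole module into @safe (module and function names are invented):

    module app;  // hypothetical module

    @safe:       // everything below is now checked as @safe

    int add(int a, int b) { return a + b; }

    void worker()
    {
        // Calling a library function that is neither @safe nor @trusted
        // (e.g. much of vibe.d, as noted above) is rejected by the
        // compiler from here -- hence the annotation burden described.
    }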
 Pattern matching
 ----------------
 Ooooh... I don't know what to say.. D should definitely look 
 into implementing some pattern matching! final switch is good 
 for making sure all values are handled, but deconstructing is 
 just so ergonomic.
Yes, and it goes hand in hand with a more handy tuple syntax.
 Expressions
 -----------
 This probably also falls in the "too late" category, but 
 statements-as-expressions is really nice. `auto a = if ...` <- 
 why not?
Generally I find this an elegant concept. But in Rust, it leads to the distinction between expressions terminated with `;` and those without, which in turn makes it necessary to use braces even if you have only one statement or expression. This is something that I dislike very much.
 Borrowing
 ---------
 This is probably the big thing that makes Rust really 
 different.  Everything is a resource, and resources have an 
 owner and a lifetime. As a part of this, you can either have 
 multiple aliases with read-only references, or a single 
 reference with a writeable reference. I won't say I have a lot 
 of experience with this, but it seems like it's not an 
 extremely unergonomic trade-off. I cannot even remotely imagine 
 the amount of possible compiler optimizations possible with 
 this feature.
Yes. I haven't given up hope that we can introduce a simplified version of this in D. It doesn't need to be as strict and pervasive as Rust's system. I'm still thinking about how to reduce that into something less complex without breaking too many of the guarantees it provides.
Jul 23 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/23/15 6:56 AM, "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net>" 
wrote:
 Traits
 ------
 I think the ability to express an interface without buying into
 inheritance is the right move. The alternative in D is specifying the
 behavior as a template and verifying the contract in a unittest for
 the type.
Not only that, but Rust does so much more with traits, e.g. closures. They seem to be a simple, yet powerful concept, that a big part of the language is built on. http://blog.rust-lang.org/2015/05/11/traits.html On the other hand, it is extremely restrictive in contrast to `static if()` and template constraints. D's approach is a lot more expressive and - at least for me - intuitive.
Thanks for the link, good quick read to get the overview of Rust's traits feature. It's ingenious because it integrates static and dynamic dispatch.

For dynamic dispatch, traits are better than interfaces - more flexible, better informed. For static dispatch, they don't hold a candle to D's constraints. This is important because dynamic dispatch is more of a cut-and-dried matter, whereas static dispatch is where it's at.

For static dispatch I think D's template constraints are quite a lot better; they have a lot more power and offer a lot more promise. They are an out-of-the-box solution that's a bit unwieldy because it's new enough to not yet have established idioms. In contrast, traits come from straight within the box.

From a language perspective, it is my belief that Design by Introspection (enabled collectively by template constraints, introspection, compile-time function evaluation, and static if) is the one crushing advantage that D has over any competitor. Things like range-based algorithms are good but easy to copy and adapt. The faster we manage to get creative with Design by Introspection and describe it, systematize it, and create compelling designs with it, the quicker D will win the minds and hearts of people.

There's a lot at stake here. A question often asked by would-be users is "What does D offer that's special?" or "What does D offer over language Xyz?" As we all know there are many answers to that. Maybe too many. Now I know what the answer is: Design by Introspection.

Andrei
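As a toy illustration of what Design by Introspection means in practice (countOf is an invented name; this is a sketch, not Phobos code), one function can adapt at compile time to whatever the type provides:

    import std.range;  // supplies empty/front/popFront for arrays

    // Count the elements of anything range-like, taking a shortcut when
    // the type happens to expose a length.
    auto countOf(R)(R r)
        if (is(typeof(r.empty) : bool) && is(typeof(r.front)) && is(typeof(r.popFront())))
    {
        static if (is(typeof(r.length) : size_t))
        {
            return r.length;           // the type told us; no walking needed
        }
        else
        {
            size_t n;
            for (; !r.empty; r.popFront())
                ++n;
            return n;                  // fall back to walking the range
        }
    }

    unittest
    {
        import std.algorithm : filter;
        assert(countOf([1, 2, 3]) == 3);                          // has .length
        assert(countOf(iota(10).filter!(x => x % 2 == 0)) == 5);  // does not
    }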
Jul 23 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 23 July 2015 at 14:08:23 UTC, Andrei Alexandrescu 
wrote:
 Thanks for the link, good quick read to get the overview of 
 Rust's traits feature. It's ingenious because it integrates 
 static and dynamic dispatch.

 For dynamic dispatch, traits are better than interfaces - more 
 flexible, better informed. For static dispatch, they don't hold 
 a candle to D's constraints. This is important because dynamic 
 dispatch is more of a cut-and-dried matter, whereas static 
 dispatch is where it's at.
On that note, I've mentioned Scala's traits, which are kind of similar and worth looking at. Since they are built on Java's object model, which resembles D's object model, it is easier to think about how this could get into D.
 For static dispatch I think D's template constraints are quite 
 a lot better; they have a lot more power and offer a lot more 
 to promise. They are an out-of-the-box solution that's a bit 
 unwieldy because it's new enough to not yet have established 
 idioms. In contrast, traits come from straight within the box.
Certainly, but they suffer from the LISP effect: you can do everything because the structure does not constrain you in any way, while at the same time it quickly becomes very hard to understand, for the very same reason. I do think the opposition between the two, as seen in your post, or Stroustrup's allergy to static if, is wrong-headed. Maybe one can be expressed via the other?
Jul 23 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-07-23 12:56, "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net>" 
wrote:

 Expressions
 -----------
 This probably also falls in the "too late" category, but
 statements-as-expressions is really nice. `auto a = if ...` <- why not?
Generally I find this an elegant concept. But in Rust, it leads to the distinction between expressions terminated with `;` and those without, which in turn makes it necessary to use braces even if you have only one statement or expression. This is something that I dislike very much.
In Scala there's no problem. No semicolons are required and no braces. -- /Jacob Carlborg
Jul 23 2015
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
C++ concepts for those interested: 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3701.pdf
Jul 26 2015
prev sibling next sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 22/07/2015 19:47, simendsjo wrote:
 Long rant ahead - a bit dipsy..

 TL;DR: Rust has momentum, manpower and tooling. Tooling matters. Safe
 defaults.  Ergonomics like expressions and deconstructing rocks.
Tooling doesn't just matter. Tooling trumps everything else.

I've skimmed through this discussion and noticed a lot of discussion about Rust's traits vs. D's template constraints, as well as a few other minor language features. And while there is value in such discussion - as it may bring a better understanding or clarity about language design, or perhaps even a few language or library changes - don't be under the illusion that ultimately it will make any significant difference in language adoption if the tooling quality differs a lot (and it does).

Minor differences, shortcomings even, between languages will only have a big impact for the kind of people that approach language preference with an "art appreciator" kind of mentality. An almost platonic/voyeuristic approach. But for people building non-small, real-world projects (the "engineering" approach), tooling will trump everything else.

Only if the language differences were massive (say between D and Go) would tooling perhaps not trump language design... but even then, it would still be a big fight between the two! -- Bruno Medeiros https://twitter.com/brunodomedeiros
Jul 30 2015
parent reply "Alex Parrill" <initrd.gz gmail.com> writes:
On Thursday, 30 July 2015 at 11:46:02 UTC, Bruno Medeiros wrote:
 Tooling doesn't just matter. Tooling trumps everything else.
I don't agree. IMO reducing the need for tools would be a better solution. For example, there's no need for a memory checker if you're writing in Python, but if you're writing in C, you had better start learning how to use Valgrind, and that takes time. Also there's Javascript's overabundance of tooling, with varying levels of quality, way too many choices (grunt vs gulp vs ..., hundreds of transpilers), and incompatibilities (want to use JSX and TypeScript together? Good luck). To take it to the extreme, no matter how much tooling you write for BrainFuck, I doubt anyone will use it. I think D is on the right track by embedding things like unit tests, function contracts, and annotations into the language itself, even if the implementations could capitalize on them better than they do now.
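To illustrate that point, a small sketch with invented names showing tests, contracts and attributes expressed directly in the language, where any tool that parses D sees them for free:

    @safe pure nothrow
    int clampToRange(int value, int lo, int hi)
    in { assert(lo <= hi); }
    out (result) { assert(result >= lo && result <= hi); }
    body
    {
        return value < lo ? lo : (value > hi ? hi : value);
    }

    unittest
    {
        assert(clampToRange(5, 0, 10) == 5);
        assert(clampToRange(-3, 0, 10) == 0);
        assert(clampToRange(42, 0, 10) == 10);
    }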
Jul 30 2015
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 30 July 2015 at 14:23:34 UTC, Alex Parrill wrote:
 On Thursday, 30 July 2015 at 11:46:02 UTC, Bruno Medeiros wrote:
 Tooling doesn't just matter. Tooling trumps everything else.
I don't agree. IMO reducing the need for tools would be a better solution. For example, there's no need for a memory checker if you're writing in Python, but if you're writing in C, you better start learning how to use Valgrind, and that takes time. Also there's Javascript's overabundance of tooling, with varying levels of quality, way too many choices (grunt vs gulp vs ..., hundreds of transpilers), and incompatibilities (want to use JSX and TypeScript together? Good luck). To take it to the extreme, no matter how much tooling you write for BrainFuck, I doubt anyone will use it. I think D goes in the right track by embedding things like unit tests, function contracts, and annotations into the language itself, even if the implementations could capitalize on them better than they do now.
It is not a matter of agreeing or not; it is a matter of fact. Languages with good tooling work; languages with poor tooling do not. C++ lost traction compared to Java for a while and only came back up recently because Moore's law is starting to not yield the expected results and tooling dramatically improved. Tooling is king, and beats language solutions most of the time.
Jul 30 2015
prev sibling parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 30/07/2015 15:23, Alex Parrill wrote:
 On Thursday, 30 July 2015 at 11:46:02 UTC, Bruno Medeiros wrote:
 Tooling doesn't just matter. Tooling trumps everything else.
I don't agree. IMO reducing the need for tools would be a better solution. For example, there's no need for a memory checker if you're writing in Python, but if you're writing in C, you better start learning how to use Valgrind, and that takes time.
That doesn't go against what I said. Not having the need for a tool can be seen as identical to having the perfect tool (a tool you do not need). Imagine I had said it this way: tooling shortcomings trump all other shortcomings (like language design shortcomings). Not having the need for a particular tool is like saying that tool has no shortcomings whatsoever. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Jul 30 2015
prev sibling next sibling parent reply "Enamex" <enamex+d outlook.com> writes:
On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 Long rant ahead - a bit dipsy..

 TL;DR: Rust has momentum, manpower and tooling. Tooling 
 matters.  Safe defaults.  Ergonomics like expressions and 
 deconstructing rocks.
 [...]
 But again... After playing a bit with Rust, I feel it lacks a 
 lot in expressive power. D has templates, template mixins, 
 alias this, string mixins, opDispatch etc. In my little time 
 with Rust, I've seen several pages of generic constrains that 
 is expressible in a couple of lines with D. I've seen 
 copy/pasted code that just isn't necessary when you code in D.

 Anyways - my little ramblings after trying the Rust programming 
 language while I haven't used D in a long, long while (But I'm 
 still here now, as I'm not sure Rust is able to express 
 everything that is possible with D). Looking forward to 
 following D again :)
Mostly my experience, so far. If I have to choose the 'most important' things that Rust has that I'd *definitely* want in D, I'd pick (in order):

* A really nicely integrated package manager like Cargo that goes seamlessly hand-in-hand with DMD.
* DDMD
* An attribute or something that makes it explicit to the compiler that this type is a 'one time use' deal, and /exactly/ one-time-use (or, to be practical, zero-or-one times). Copying instances of objects of that type is done by `.dup`. (Like a non-Copy type in Rust currently.)
* Sum Data Types (`enum`s in Rust and `data D = VarA | VarB` in Haskell).

As well as:

* Pattern matching. `Algebraic!` is okay, but the pattern matching that goes with it isn't real pattern matching (see the sketch below).

But maybe that's for D 3.0 in 2030 or maybe another language... (I just want SML with at least D's speed T_T. I already got something /very roughly/ akin to ML's functors from D's template and mixin magic...)
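On the `Algebraic!` bullet above, the closest current approximation to pattern matching is arguably `std.variant.visit`, which at least checks the handlers for exhaustiveness at compile time but does no deconstruction. A rough sketch with invented names:

    import std.conv : to;
    import std.variant : Algebraic, visit;

    alias Value = Algebraic!(int, double, string);  // made-up sum type

    string describe(Value v)
    {
        // One handler per member type; visit fails to compile if a type
        // is left unhandled, but nested data cannot be deconstructed.
        return v.visit!(
            (int i)    => "int: " ~ i.to!string,
            (double d) => "double: " ~ d.to!string,
            (string s) => "string: " ~ s
        );
    }

    unittest
    {
        assert(describe(Value(42)) == "int: 42");
        assert(describe(Value("hi")) == "string: hi");
    }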
Jul 30 2015
parent reply "Enamex" <enamex+d outlook.com> writes:
On Friday, 31 July 2015 at 03:41:35 UTC, Enamex wrote:
 [...]
 Mostly my experience, so far. If I have to choose the 'most 
 important' things that Rust has that I'd *definitely* want in 
 D, I'd pick (in order):

 * A really nicely integrated package manager like Cargo that 
 goes seamlessly hand-in-hand with DMD.
 * DDMD
 [...]
Ouch. Actually, I forgot my second most important point (right before DDMD): a ~99.99% @nogc Phobos and better documentation for GC stuff. Right now the docs say that `delete` is getting deprecated, but using it on DMD 2.067.1 gives no warnings.
Jul 30 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, 31 July 2015 at 04:47:20 UTC, Enamex wrote:
 Right now docs say that `delete` is getting deprecated but 
 using it on DMD .067.1 gives no warnings.
There are no warnings because it hasn't actually been deprecated yet. So, the docs are quite correct. They say that it's _going_ to be deprecated, not that it has been deprecated.

Ideally, it would have been deprecated quite some time ago, but without custom allocators, it's a lot harder to do something similar to what delete does. So, if we'd actually deprecated it, we'd have done so without a viable alternative (it's possible without custom allocators, but it's hard to get right).

Now, I think that the reality of the matter is that it hasn't been deprecated simply because no one has gotten around to it yet, but there are problems with deprecating it prior to getting custom allocators. Fortunately, it looks like we will soon have those in std.experimental, so it will become more reasonable to deprecate delete, and maybe we can finally deprecate it and start moving it out of the language. But it's been planned for ages that delete would be removed from D at some point.

- Jonathan M Davis
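For context, the commonly suggested stand-in for `delete` today is `destroy`, optionally followed by `GC.free` if the memory should go back to the GC right away. A rough sketch, not an official recommendation:

    import core.memory : GC;

    class Resource
    {
        ~this() { /* release whatever the object holds */ }
    }

    void main()
    {
        auto r = new Resource;

        // Roughly what `delete r;` used to do, expressed with library calls:
        destroy(r);              // run the destructor and reset the object
        GC.free(cast(void*) r);  // optionally hand the memory back to the GC now
        r = null;                // don't touch the freed object again
    }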
Jul 31 2015
next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 31 July 2015 at 09:37:10 UTC, Jonathan M Davis wrote:
 On Friday, 31 July 2015 at 04:47:20 UTC, Enamex wrote:
 Right now docs say that `delete` is getting deprecated but 
 using it on DMD .067.1 gives no warnings.
There are no warnings because it hasn't actually been deprecated yet. So, the docs are quite correct. They say that it's _going_ to be deprecated, not that it has been deprecated. Ideally, it would have been deprecated quite some time ago, but without the custom allocators, it's a lot harder to do something similar to what delete does. So, if we'd actually deprecated it, we'd have done so without a viable alternative (it's possible without custom allocators, but it's hard to get right). Now, I think that the reality of the matter that it hasn't been deprecated is simply because no one has gotten around to it yet, but there are problems with deprecating it prior getting customer allocators. Fortunately, it looks like we will soon have that in std.experimental, so it will become more reasonable to deprecate delete, and maybe we can finally deprecate it and start moving it out of the language. But it's been planned for ages that delete would be removed from D at some point. - Jonathan M Davis
I would much rather have delete stay and rig it up so that new and delete call the global allocator (which would be the GC by default).
Jul 31 2015
prev sibling parent "Enamex" <enamex+d outlook.com> writes:
On Friday, 31 July 2015 at 09:37:10 UTC, Jonathan M Davis wrote:
 On Friday, 31 July 2015 at 04:47:20 UTC, Enamex wrote:
 Right now docs say that `delete` is getting deprecated but 
 using it on DMD .067.1 gives no warnings.
There are no warnings because it hasn't actually been deprecated yet. [...] - Jonathan M Davis
GC and memory management in general are inadequately documented. There are doc pages, answers on SO and discussions on the forum about stuff that (coming from C++) should be so basic, like how to allocate an instance of a struct on the heap (GC'ed or otherwise), or how to allocate a class on the non-managed heap (I still don't get how `Unique!` works; does it even register with the GC? How do you deep-copy/not-move its contents into another variable?), or on the stack, for that matter (there's `scoped!`, but the docs again are confusing: it's somehow stack-allocated but can't be copied?). Eventually deprecating `delete` while leaving it in now without any warnings (though the docs warn, they offer no replacement) seems like it'd be more trouble than it's worth down the line, since it's not a feature addition or even a full deprecation but - AFAIU - a replacement of semantics for identical syntax.
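On the `scoped!` confusion specifically, a minimal sketch of how it behaves (Widget is an invented class):

    import std.typecons : scoped;

    class Widget
    {
        int id;
        this(int id) { this.id = id; }
        ~this() { /* deterministic cleanup runs here at scope exit */ }
    }

    void main()
    {
        // The instance lives in a buffer inside the returned wrapper, so it
        // is destroyed when `w` leaves scope -- and the wrapper is
        // non-copyable, which is why the docs say it "can't be copied".
        auto w = scoped!Widget(7);
        assert(w.id == 7);   // usable like a Widget via alias this
    }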
Aug 03 2015
prev sibling parent reply "rsw0x" <anonymous anonymous.com> writes:
On Wednesday, 22 July 2015 at 18:47:33 UTC, simendsjo wrote:
 ...
One thing I didn't see mentioned at all is Rust's plugin system. Rust plugin to embed C++ directly in Rust: https://github.com/mystor/rust-cpp Rust plugin to use Rust with whitespace instead of braces: https://github.com/mystor/slag Syntax extensions, lint plugins, etc are all possible via the plugin interface. https://doc.rust-lang.org/book/compiler-plugins.html Honestly, this is pretty cool :/ This thread is really long so I didn't read all the posts. Sorry if this has been mentioned.
Aug 04 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 5 August 2015 at 06:03:14 UTC, rsw0x wrote:
 Syntax extensions, lint plugins, etc are all possible via the 
 plugin interface.
 https://doc.rust-lang.org/book/compiler-plugins.html

 Honestly, this is pretty cool :/
Yes, it looks very cool. It lowers the threshold for experimentation and testing new ideas.
Aug 04 2015
prev sibling parent reply "jmh530" <john.michael.hall gmail.com> writes:
On Wednesday, 5 August 2015 at 06:03:14 UTC, rsw0x wrote:
 This thread is really long so I didn't read all the posts. 
 Sorry if this has been mentioned.
Don't worry, I don't recall anybody talking about it. Plugins aren't unique to Rust, though: GCC and Clang allow plugins. Nevertheless, I think the Rust C++ plugin you posted was cool. Reminds me of Rcpp.
Aug 05 2015
parent reply "Enamex" <enamex+d outlook.com> writes:
On Wednesday, 5 August 2015 at 14:21:13 UTC, jmh530 wrote:
 On Wednesday, 5 August 2015 at 06:03:14 UTC, rsw0x wrote:
 This thread is really long so I didn't read all the posts. 
 Sorry if this has been mentioned.
Don't worry, I don't recall anybody talking about it. Nevertheless, plugins aren't unique to Rust: GCC and Clang allow plugins. Nevertheless, I think that the Rust C++ plugin you posted was cool. Reminds me of Rcpp.
Oh, it _is_ talked about a lot. Just normally called 'syntax extensions' (the most used aspect of the plugin system), so it _is_ used. Though because it relies on compiler internals everything released as a plugin is only usable on Nightly.
Aug 05 2015
parent "jmh530" <john.michael.hall gmail.com> writes:
On Wednesday, 5 August 2015 at 22:24:34 UTC, Enamex wrote:
 Oh, it _is_ talked about a lot. Just normally called 'syntax 
 extensions' (the most used aspect of the plugin system), so it 
 _is_ used. Though because it relies on compiler internals 
 everything released as a plugin is only usable on Nightly.
I had been talking about this thread. You're right that there are a bunch of mentions on the forums (which I found after searching the term you list), but only two in the past day (including yours) on this thread.
Aug 05 2015