
digitalmars.D - Inheritance of purity

reply Walter Bright <newshound2 digitalmars.com> writes:
Given:

     class A { void foo() { } }
     class B : A { override pure void foo() { } }

This works great, because B.foo is covariant with A.foo, meaning it can 
"tighten", or place more restrictions, on foo. But:

     class A { pure void foo() { } }
     class B : A { override void foo() { } }

fails, because B.foo tries to loosen the requirements, and so is not covariant.

Where this gets annoying is when the qualifiers on the base class function
have to be repeated on all its overrides. I ran headlong into this when
experimenting with making the member functions of class Object pure.

So it occurred to me that an overriding function could *inherit* the
qualifiers from the overridden function. The qualifiers of the overriding
function would be the "tightest" of its explicit qualifiers and its
overridden function qualifiers. It turns out that most functions are
naturally pure, so this greatly eases things and eliminates annoying typing.
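
For illustration, here's a sketch of how that might read (hypothetical; it does
not compile today, since B.foo would currently have to repeat the qualifiers,
and the class names are just for the example):

     class A { pure nothrow @safe void foo() { } }

     class B : A
     {
         // No qualifiers written here. Under the proposal, foo would inherit
         // pure nothrow @safe from A.foo, and its body would be semantically
         // checked against that tightest set.
         override void foo() { }
     }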

I want to do this for @safe, pure, nothrow, and even const.

I think it is semantically sound, as well. The overriding function body will be 
semantically checked against this tightest set of qualifiers.

What do you think?
Feb 16 2012
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 16, 2012 at 06:49:40PM -0800, Walter Bright wrote:
[...]
 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most
 functions are naturally pure, so this greatly eases things and
 eliminates annoying typing.
I like this idea.
 I want do to this for  safe, pure, nothrow, and even const.
Excellent!
 I think it is semantically sound, as well. The overriding function
 body will be semantically checked against this tightest set of
 qualifiers.
 
 What do you think?
Semantically, it makes sense. And reducing typing is always good. (That's one of my pet peeves about Java: too much typing just to achieve something really simple. It feels like being forced to kill a mosquito with a laser-guided missile by specifying 3D coordinates accurate to 10 decimal places.)

The one disadvantage I can think of is that it will no longer be clear exactly what qualifiers are in effect just by looking at the function definition in a derived class. Which is not terrible, I suppose, but I can see how it might get annoying if you have to trace the overrides all the way up the inheritance hierarchy just to find out what qualifiers a function actually has.

OTOH, if ddoc could automatically fill in the effective qualifiers, then this will be a non-problem. ;-)

T

-- 
Frank disagreement binds closer than feigned agreement.
Feb 16 2012
parent reply Jacob Carlborg <doob me.com> writes:
On 2012-02-17 04:15, H. S. Teoh wrote:
 On Thu, Feb 16, 2012 at 06:49:40PM -0800, Walter Bright wrote:
 [...]
 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most
 functions are naturally pure, so this greatly eases things and
 eliminates annoying typing.
I like this idea.
 I want do to this for  safe, pure, nothrow, and even const.
Excellent!
 I think it is semantically sound, as well. The overriding function
 body will be semantically checked against this tightest set of
 qualifiers.

 What do you think?
Semantically, it makes sense. And reducing typing is always good. (That's one of my pet peeves about Java: too much typing just to achieve something really simple. It feels like being forced to kill a mosquito with a laser-guided missile by specifying 3D coordinates accurate to 10 decimal places.) The one disadvantage I can think of is that it will no longer be clear exactly what qualifiers are in effect just by looking at the function definition in a derived class. Which is not terrible, I suppose, but I can see how it might get annoying if you have to trace the overrides all the way up the inheritance hierarchy just to find out what qualifiers a function actually has. OTOH, if ddoc could automatically fill in the effective qualifiers, then this will be a non-problem. ;-)
And if ddoc could show the inheritance hierarchy as well. -- /Jacob Carlborg
Feb 16 2012
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 17 February 2012 07:42, Jacob Carlborg <doob me.com> wrote:
 On 2012-02-17 04:15, H. S. Teoh wrote:
 On Thu, Feb 16, 2012 at 06:49:40PM -0800, Walter Bright wrote:
 [...]
 So it occurred to me that an overriding function could *inherit* the

 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most
 functions are naturally pure, so this greatly eases things and
 eliminates annoying typing.
I like this idea.
 I want do to this for  safe, pure, nothrow, and even const.
Excellent!
 I think it is semantically sound, as well. The overriding function
 body will be semantically checked against this tightest set of
 qualifiers.

 What do you think?
Semantically, it makes sense. And reducing typing is always good. (That's one of my pet peeves about Java: too much typing just to achieve something really simple. It feels like being forced to kill a mosquito with a laser-guided missile by specifying 3D coordinates accurate to 10 decimal places.) The one disadvantage I can think of is that it will no longer be clear exactly what qualifiers are in effect just by looking at the function definition in a derived class. Which is not terrible, I suppose, but I can see how it might get annoying if you have to trace the overrides all the way up the inheritance hierarchy just to find out what qualifiers a function actually has. OTOH, if ddoc could automatically fill in the effective qualifiers, then this will be a non-problem. ;-)
And if ddoc could show the inheritance hierarchy as well.
Jacob++ -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Feb 17 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-02-17 13:56, Iain Buclaw wrote:
 On 17 February 2012 07:42, Jacob Carlborg<doob me.com>  wrote:
 OTOH, if ddoc could automatically fill in the effective qualifiers, then
 this will be a non-problem. ;-)
And if ddoc could show the inheritance hierarchy as well.
Jacob++
The ddoc generator in the Eclipse plugin Descent already does this. I'm wondering if it can be back ported to DMD or if it's completely separate. -- /Jacob Carlborg
Feb 17 2012
prev sibling next sibling parent James Miller <james aatch.net> writes:
On 17 February 2012 15:49, Walter Bright <newshound2 digitalmars.com> wrote:
 Given:

     class A { void foo() { } }
     class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

     class A { pure void foo() { } }
     class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class function
 have to be repeated on all its overrides. I ran headlong into this when
 experimenting with making the member functions of class Object pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the overriding
 function would be the "tightest" of its explicit qualifiers and its
 overridden function qualifiers. It turns out that most functions are
 naturally pure, so this greatly eases things and eliminates annoying typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body will
 be semantically checked against this tightest set of qualifiers.

 What do you think?
Makes sense to me, should also ease some pains that I've seen discussed in other threads regarding the utility of pure and const, etc. In terms of intuitiveness, I think this makes more sense, since overrides are explicit. I'm with Teoh that it might make it a bit more difficult to understand code, but to some extent that is partially a documentation problem, which is always an issue, no matter what. -- James Miller
Feb 16 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, February 16, 2012 18:49:40 Walter Bright wrote:
 Given:
 
      class A { void foo() { } }
      class B : A { override pure void foo() { } }
 
 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:
 
      class A { pure void foo() { } }
      class B : A { override void foo() { } }
 
 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.
 
 Where this gets annoying is when the qualifiers on the base class function
 have to be repeated on all its overrides. I ran headlong into this when
 experimenting with making the member functions of class Object pure.
 
 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the overriding
 function would be the "tightest" of its explicit qualifiers and its
 overridden function qualifiers. It turns out that most functions are
 naturally pure, so this greatly eases things and eliminates annoying
 typing.
 
 I want do to this for  safe, pure, nothrow, and even const.
 
 I think it is semantically sound, as well. The overriding function body will
 be semantically checked against this tightest set of qualifiers.
 
 What do you think?
No. Absolutely not. I hate the fact that C++ does this with virtual. It makes it so that you have to constantly look at the base classes to figure out what's virtual and what isn't. It harms maintenance and code understandability. And now you want to do that with @safe, pure, nothrow, and const? Yuck.

I can understand wanting to save some typing, but I really think that this harms code maintainability. It's the sort of thing that an IDE is good for. It does stuff like generate the function signatures for you or fill in the attributes that are required but are missing. I grant you that many D developers don't use IDEs at this point (at least not for D) and that those sorts of capabilities are likely to be in their infancy for the IDEs that we _do_ have, but I really think that this is the sort of thing that should be left up to the IDE. Inferring attributes like that is just going to harm code maintainability. It's bad enough that we end up with them not being marked on templates due to inference, but we _have_ to do it that way, because the attributes vary per instantiation. That is _not_ the case with class member functions.

Please, do _not_ do this.

- Jonathan M Davis
Feb 16 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/16/2012 7:23 PM, Jonathan M Davis wrote:
 No. Absolutely not. I hate the fact that C++ does this with virtual. It makes
 it so that you have to constantly look at the base classes to figure out what's
 virtual and what isn't. It harms maintenance and code understandability. And
 now you want to do that with  safe, pure, nothrow, and const? Yuck.
I do not see how it harms maintainability. It does not break any existing code. It makes it easier to convert a function hierarchy to nothrow, pure, etc.
Feb 16 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, February 16, 2012 19:41:00 Walter Bright wrote:
 On 2/16/2012 7:23 PM, Jonathan M Davis wrote:
 No. Absolutely not. I hate the fact that C++ does this with virtual. It
 makes it so that you have to constantly look at the base classes to
 figure out what's virtual and what isn't. It harms maintenance and code
 understandability. And now you want to do that with  safe, pure, nothrow,
 and const? Yuck.
I do not see how it harms maintainability. It does not break any existing code. It makes it easier to convert a function hierarchy to nothrow, pure, etc.
It makes it harder to maintain the code using the derived classes, because you end up with a bunch of functions which aren't labeled with their attributes. You have to go and find all of the base classes and look at them to find which attributes are on their functions to know what the attributes of the functions of the derived classes actually are. It will make using all D classes harder.

You should be able to look at a function and know whether it's pure, @safe, nothrow, or const without having to dig through documentation and/or code elsewhere to figure it out.

Doing this would make the conversion to const easier but be harmful in the long run.

- Jonathan M Davis
Feb 16 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/16/2012 7:54 PM, Jonathan M Davis wrote:
 On Thursday, February 16, 2012 19:41:00 Walter Bright wrote:
 On 2/16/2012 7:23 PM, Jonathan M Davis wrote:
 No. Absolutely not. I hate the fact that C++ does this with virtual. It
 makes it so that you have to constantly look at the base classes to
 figure out what's virtual and what isn't. It harms maintenance and code
 understandability. And now you want to do that with  safe, pure, nothrow,
 and const? Yuck.
I do not see how it harms maintainability. It does not break any existing code. It makes it easier to convert a function hierarchy to nothrow, pure, etc.
It makes it harder to maintain the code using the derived classes, because you end up with a bunch of functions which aren't labeled with their attributes. You have to go and find all of the base classes and look at them to find which attributes are on their functions to know what the attributes of the functions of the derived classes actually are. It will make using all D classes harder.
I doubt one would ever need to dig through to see what the attributes are, because:

1. The user of the override will be using it via the base class function.

2. The compiler will tell you if it, for example, violates purity. There won't be any guesswork involved. Right now, the compiler will give you a covariant error.

3. It isn't different in concept from auto declarations and all the other type inference that goes on in D, including automatic inference of purity and safety.
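
For instance, this kind of inference already happens for templates today (a
small sketch; the function names are made up for illustration):

     // No attributes written, but the compiler infers that twice!int is
     // pure, nothrow and @safe from its body.
     T twice(T)(T x) { return x + x; }

     // An explicitly pure caller compiles, because the inferred attributes
     // of twice!int satisfy it.
     int four() pure { return twice(2); }
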
 You should be able to look at a function and know whether it's pure,  safe,
 nothrow, or const without having to dig through documentation and/or code
 elsewhere to figure it out.

 Doing this would make the conversion to const easier but be harmful in the
 long run.
We should be encouraging people to use pure, @safe, etc. Not doing the inference makes it annoying to use, and so people don't bother. My experience poking through the druntime and phobos codebase is that the overwhelming majority of the functions are @safe, const & pure, but they aren't marked that way.
Feb 16 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/16/2012 8:53 PM, Walter Bright wrote:
 1. The user of the override will be using it via the base class function.

 2. The compiler will tell you if it, for example, violates purity. There won't
 be any guesswork involved. Right now, the compiler will give you a covariant
error.

 3. It isn't different in concept than auto declarations and all the other type
 inference that goes in D, including automatic inference of purity and safety.
4. It's also much like how contracts get inherited.
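
As a rough sketch of the analogy (how the spec intends contract inheritance to
work; the names here are made up for illustration):

     class A
     {
         void put(int x)
         in { assert(x > 0); }      // base precondition
         body { }
     }

     class B : A
     {
         override void put(int x)
         in { assert(x > -10); }    // the effective 'in' contract is the OR of
         body { }                   // base and derived, so B may accept more
     }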
Feb 16 2012
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 02/17/2012 06:12 AM, Walter Bright wrote:
 On 2/16/2012 8:53 PM, Walter Bright wrote:
 1. The user of the override will be using it via the base class function.

 2. The compiler will tell you if it, for example, violates purity.
 There won't
 be any guesswork involved. Right now, the compiler will give you a
 covariant error.

 3. It isn't different in concept than auto declarations and all the
 other type
 inference that goes in D, including automatic inference of purity and
 safety.
4. It's also much like how contracts get inherited.
This needs some love ;) http://d.puremagic.com/issues/show_bug.cgi?id=6856
Feb 16 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On 17/02/2012 04:54, Jonathan M Davis wrote:
 On Thursday, February 16, 2012 19:41:00 Walter Bright wrote:
 On 2/16/2012 7:23 PM, Jonathan M Davis wrote:
 No. Absolutely not. I hate the fact that C++ does this with virtual. It
 makes it so that you have to constantly look at the base classes to
 figure out what's virtual and what isn't. It harms maintenance and code
 understandability. And now you want to do that with  safe, pure, nothrow,
 and const? Yuck.
I do not see how it harms maintainability. It does not break any existing code. It makes it easier to convert a function hierarchy to nothrow, pure, etc.
It makes it harder to maintain the code using the derived classes, because you end up with a bunch of functions which aren't labeled with their attributes. You have to go and find all of the base classes and look at them to find which attributes are on their functions to know what the attributes of the functions of the derived classes actually are. It will make using all D classes harder. You should be able to look at a function and know whether it's pure, safe, nothrow, or const without having to dig through documentation and/or code elsewhere to figure it out. Doing this would make the conversion to const easier but be harmful in the long run. - Jonathan M Davis
As long as the override keyword is specified, you are warned about this, and so it isn't a problem. Obviously, this shouldn't apply to overrides that are not explicitly marked as such (with the override keyword).
Feb 18 2012
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 16, 2012 at 07:41:00PM -0800, Walter Bright wrote:
 On 2/16/2012 7:23 PM, Jonathan M Davis wrote:
No. Absolutely not. I hate the fact that C++ does this with virtual.
It makes it so that you have to constantly look at the base classes
to figure out what's virtual and what isn't. It harms maintenance and
code understandability. And now you want to do that with  safe, pure,
nothrow, and const? Yuck.
I do not see how it harms maintainability. It does not break any existing code. It makes it easier to convert a function hierarchy to nothrow, pure, etc.
It's probably the same reason I brought up: looking at a function's definition will no longer tell you which modifiers are actually in effect. So you have to trace the overrides up the inheritance hierarchy in order to know exactly what modifiers it has.

But again, if ddoc can automatically compute this for you, then it shouldn't be that much of an issue anymore, right?

On that note, though, one thing I've always wanted in a programming language is to be able to ask the compiler to expand all templates, deduce all types, etc., for a given function/declaration, and print out what it actually understands the declaration to be (as opposed to what I *think* the declaration would expand to). I know that in C/C++ you can preprocess the source, but it still doesn't expand typedefs, templates, etc. Plus the S:N ratio is too low (nobody wants to wade through 5000 lines of preprocessed code just to find that one declaration).

If dmd (and its derivatives) had an option to do this, say perhaps something like:

    $ dmd -query my.module.myclass.prop01 *.d
    my.module.myclass.prop01:
    my/module.d(123): @property pure lazy const int prop01(int x) { ... }
    $

then this should greatly ease Jonathan's objection to your proposal. (The current .di files might already sort of fill this purpose, although .di's have other problems that I don't really want to get into here.)

T

-- 
"I suspect the best way to deal with procrastination is to put off the procrastination itself until later. I've been meaning to try this, but haven't gotten around to it yet." -- swr
Feb 16 2012
parent "Marco Leise" <Marco.Leise gmx.de> writes:
On 17.02.2012, 05:10, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:

 On Thu, Feb 16, 2012 at 07:41:00PM -0800, Walter Bright wrote:
 On 2/16/2012 7:23 PM, Jonathan M Davis wrote:
No. Absolutely not. I hate the fact that C++ does this with virtual.
It makes it so that you have to constantly look at the base classes
to figure out what's virtual and what isn't. It harms maintenance and
code understandability. And now you want to do that with  safe, pure,
nothrow, and const? Yuck.
It's probably the same reason I brought up: looking at a function's definition will no longer tell you which modifiers are actually in effect. So you have to trace the overrides up the inheritance hierarchy in order to know exactly what modifiers it has. On that note, though, one thing I've always wanted in a programming language is to be able to ask the compiler to expand all templates, deduce all types, etc., for a given function/declaration, and print out what it actually understands the declaration to be (as opposed to what I *think* the declaration would expand to).
Depending on how people approach the language
- editor or IDE
- looking up documentation or relying on intuitive code
- preferring explicit or implicit declarations (see 'auto' return as well)
- trusting the compiler to catch their errors or trying to keep compilers out of their understanding of the source code
we come to different strong opinions.

If @safe, pure, nothrow, and const were inherited and optional now, I would try that system, but still wonder if it actually makes me use these attributes more than before. I tend to just put @safe: at the top of my module and mark trivial I/O functions @trusted. Frankly I don't mind the typing as much as I minded having to remove these attributes later, because after a few nested function calls I ended up calling a throwing function (and don't want to catch). I think similar things happened with pure and const. It helps that I wouldn't have to go through all of the class hierarchy if this happens, but I wonder what the benefit of pure and nothrow is. @safe and const help me detect bugs or design mistakes. If there were performance benefits to using strongly pure functions, I'd be far more tempted to use them than with automatic inheritance. Also the case for Phobos was mentioned, but those are mostly free functions that wouldn't benefit from inheritance either.

My utopian IDE would deduce all the attributes from looking at the source code and actually place them in the code like this:

    uint foo() /*deduced:*/ pure const nothrow @safe { return 42; }

This way I see what the method currently evaluates to, but I am free to add a "throw new Exception(...);" with the IDE changing the signature on the fly:

    uint foo() /*deduced:*/ pure const @safe { throw new Exception("abc"); }

If the attributes could be displayed as some sort of tri-state buttons in the code view and I decided that this method *has to be* const, I would click on 'const' to move the keyword left:

    uint foo() const /*deduced:*/ pure @safe { throw new Exception("abc"); }

And at this point no user of that IDE could be too lazy to add "pure const nothrow", because it would be deduced from the code and also put in the signature to document it. Editor purists will hate this idea and it doesn't solve anything right *now*, but I wanted to share it anyway. Maybe it inspires others to come up with better ideas.

-- 
Marco
Feb 17 2012
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Jonathan M Davis:

 I hate the fact that C++ does this with virtual. It makes 
 it so that you have to constantly look at the base classes to figure out
what's 
 virtual and what isn't. It harms maintenance and code understandability. And 
 now you want to do that with  safe, pure, nothrow, and const? Yuck.
This is a problem. On the other hand I presume Walter is now converting Phobos all at once to fix const correctness, so he's writing tons of attributes and wants to quicken this boring work.

On the other hand, fixing const correctness in Phobos is not a common operation; I think it needs to be done only once. Once one or two future DMD versions are out, programmers will not need to introduce a large amount of those annotations at once. So "fixing" D2 forever for an operation done only once seems risky, especially if future IDEs will be able to insert those annotations cheaply.

So a possible solution is to wait for 2.059 or 2.060 before introducing this "Inheritance of purity" idea. I think at that time we'll be better able to judge how useful this feature is, once Phobos is already fully const corrected and there is no need to fix a lot of code at once.

Another idea is to activate this "Inheritance of purity" only if you compile with "-d" (allow deprecated features) for a few months and then remove it, to help port today's D2 code to const correctness in a more gradual way.

Bye,
bearophile
Feb 16 2012
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 02/17/2012 04:59 AM, bearophile wrote:
 Jonathan M Davis:

 I hate the fact that C++ does this with virtual. It makes
 it so that you have to constantly look at the base classes to figure out what's
 virtual and what isn't. It harms maintenance and code understandability. And
 now you want to do that with  safe, pure, nothrow, and const? Yuck.
This is a problem.
It is not a problem at all. This can happen in C++:

    struct S : T {
        void foo() { ... }
    };

    int main() {
        T* x = new S();
        x->foo(); // what will this do? No way to know without looking up T, bug prone.
    }

This is the worst-case scenario for D:

    class S : T {
        void foo() { ... }
    }

    void bar() pure {
        T x = new S;
        x.foo(); // see below
    }

'foo' sounds like a pure method name... Hit compile... Oh, it is not pure... It should be! Look up class T, patch in the purity annotation, everything works - awesome!

The analogy is so broken it is not even funny.
 On the other hand I presume Walter is now converting Phobos all at once to fix
const correctness, so he's writing tons of attributes. So he desires to quicken
this boring work.

 On the other hand fixing const correctness in Phobos is not a common
operation, I think it needs to be done only once. Once one or two future DMD
versions are out, programmers will not need to introduce a large amount of
those annotations at once. So "fixing" forever D2 for an operation done only
once seems risky, especially if future IDEs will be able to insert those
annotations cheaply.

 So a possible solution is to wait 2.059 or 2.060 before introducing this
"Inheritance of purity" idea. I think at that time we'll be more able to judge
how much useful this feature is once Phobos is already fully const corrected
and no need to fix a lot of code at once exists.

 Another idea is to activate this "Inheritance of purity" only if you compile
with "-d" (allow deprecated features) for few months and then remove it, to
help porting of today D2 code to const correctness in a more gradual way.

 Bye,
 bearophile
Are you really suggesting that making code const correct and the right methods pure etc. is not a common operation?
Feb 16 2012
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/17/2012 04:23 AM, Jonathan M Davis wrote:
 On Thursday, February 16, 2012 18:49:40 Walter Bright wrote:
 Given:

       class A { void foo() { } }
       class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

       class A { pure void foo() { } }
       class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class function
 have to be repeated on all its overrides. I ran headlong into this when
 experimenting with making the member functions of class Object pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the overriding
 function would be the "tightest" of its explicit qualifiers and its
 overridden function qualifiers. It turns out that most functions are
 naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body will
 be semantically checked against this tightest set of qualifiers.

 What do you think?
No. Absolutely not. I hate the fact that C++ does this with virtual. It makes it so that you have to constantly look at the base classes to figure out what's virtual and what isn't. It harms maintenance and code understandability. And now you want to do that with safe, pure, nothrow, and const? Yuck.
Whether a function is virtual or not has far-reaching semantic consequences in C++ (overriding vs hiding). Whether a function is pure/nothrow/const/@safe does not, because those are just annotations that give some additional guarantees, which *ought* to be clear from what the function actually does. It is not like anyone would look up the signature before using some method inside a pure function if what it does seems to be pure.
 I can understand wanting to save some typing,
:o) Seriously, the average programmer is exceedingly lazy. Any language feature that might reduce the annotation overhead is a plus. Annotations are for the compiler, not for people.
 but I really think that this
 harms code maintainability. It's the sort of thing that an IDE is good for. It
 does stuff like generate the function signatures for you or fill in the
 attributes that are required but are missing.
An IDE can also fill in the attributes that are not required but missing if this is implemented, or directly display only the interface to some class, so that is simply not a valid point. Having all the proper annotations can become an IDE style warning for those who like IDEs.
 I grant you that many D developers don't use IDEs at this point (at least not
for D) and that those
 sort of capabilities are likely to be in their infancy for the IDEs that we
 _do_ have, but I really think that this is the sort of thing that should be
 left up to the IDE. Inferring attribtutes like that is just going to harm code
 maintainibility.
It makes refactoring a lot easier, which helps maintainability: the programmer can annotate some method with pure, hit compile, and he will immediately see all the non-pure overrides if there are any and may fix them.
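
A rough sketch of that workflow (hypothetical names; today the compiler stops
at the covariance error, while under the proposal it would point at the impure
call inside the body instead):

     int counter;
     int bump() { return ++counter; }            // impure: mutates module state

     class Base { pure int f() { return 1; } }   // newly annotated pure

     class Derived : Base
     {
         // Today: "not covariant" error because this override is not pure.
         // Under the proposal: f inherits pure, and the error lands on the
         // call to bump(), which is exactly the spot that needs fixing.
         override int f() { return bump(); }
     }
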
 It's bad enough that we end up with them not being marked on
 templates due to inferrence, but we _have_ to do it that way, because the
 attributes vary per instantiation. That is _not_ the case with class member
 functions.

 Please, do _not_ do this.

 - Jonathan M Davis
I think you are severely overstating the issues. What is the most harmful thing that might happen (except that the code gets less verbose)?
Feb 16 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/16/2012 8:51 PM, Timon Gehr wrote:
 It makes re-factoring a lot easier which helps maintainability: The programmer
 can annotate some method with pure, hit compile and he will immediately see all
 the non-pure overrides if there are any and may fix them.
Exactly.
Feb 16 2012
prev sibling next sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Friday, 17 February 2012 at 03:24:50 UTC, Jonathan M Davis 
wrote:
 On Thursday, February 16, 2012 18:49:40 Walter Bright wrote:
 Given:
 
      class A { void foo() { } }
      class B : A { override pure void foo() { } }
 
 This works great, because B.foo is covariant with A.foo, 
 meaning it can
 "tighten", or place more restrictions, on foo. But:
 
      class A { pure void foo() { } }
      class B : A { override void foo() { } }
 
 fails, because B.foo tries to loosen the requirements, and so 
 is not
 covariant.
 
 Where this gets annoying is when the qualifiers on the base 
 class function
 have to be repeated on all its overrides. I ran headlong into 
 this when
 experimenting with making the member functions of class Object 
 pure.
 
 So it occurred to me that an overriding function could 
 *inherit* the
 qualifiers from the overridden function. The qualifiers of the 
 overriding
 function would be the "tightest" of its explicit qualifiers 
 and its
 overridden function qualifiers. It turns out that most 
 functions are
 naturally pure, so this greatly eases things and eliminates 
 annoying
 typing.
 
 I want do to this for  safe, pure, nothrow, and even const.
 
 I think it is semantically sound, as well. The overriding 
 function body will
 be semantically checked against this tightest set of 
 qualifiers.
 
 What do you think?
No. Absolutely not. I hate the fact that C++ does this with virtual. It makes it so that you have to constantly look at the base classes to figure out what's virtual and what isn't. It harms maintenance and code understandability. And now you want to do that with safe, pure, nothrow, and const? Yuck. I can understand wanting to save some typing, but I really think that this harms code maintainability. It's the sort of thing that an IDE is good for. It does stuff like generate the function signatures for you or fill in the attributes that are required but are missing. I grant you that many D developers don't use IDEs at this point (at least not for D) and that those sort of capabilities are likely to be in their infancy for the IDEs that we _do_ have, but I really think that this is the sort of thing that should be left up to the IDE. Inferring attribtutes like that is just going to harm code maintainibility. It's bad enough that we end up with them not being marked on templates due to inferrence, but we _have_ to do it that way, because the attributes vary per instantiation. That is _not_ the case with class member functions. Please, do _not_ do this. - Jonathan M Davis
In the situation where the IDE writes it for you, said IDE will help you only when you write the code. In the situation where the IDE tells you what they are (through something like hovering over it), it will help you no matter who writes the code. It is also significantly easier to implement, particularly taking into consideration things like style, comments, etc.
Feb 16 2012
parent Bruno Medeiros <brunodomedeiros+dng gmail.com> writes:
On 17/02/2012 05:08, Kapps wrote:
 On Friday, 17 February 2012 at 03:24:50 UTC, Jonathan M Davis wrote:
 On Thursday, February 16, 2012 18:49:40 Walter Bright wrote:
 Given:

 class A { void foo() { } }
 class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

 class A { pure void foo() { } }
 class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class
 function
 have to be repeated on all its overrides. I ran headlong into this when
 experimenting with making the member functions of class Object pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding
 function would be the "tightest" of its explicit qualifiers and its
 overridden function qualifiers. It turns out that most functions are
 naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function
 body will
 be semantically checked against this tightest set of qualifiers.

 What do you think?
No. Absolutely not. I hate the fact that C++ does this with virtual. It makes it so that you have to constantly look at the base classes to figure out what's virtual and what isn't. It harms maintenance and code understandability. And now you want to do that with safe, pure, nothrow, and const? Yuck. I can understand wanting to save some typing, but I really think that this harms code maintainability. It's the sort of thing that an IDE is good for. It does stuff like generate the function signatures for you or fill in the attributes that are required but are missing. I grant you that many D developers don't use IDEs at this point (at least not for D) and that those sort of capabilities are likely to be in their infancy for the IDEs that we _do_ have, but I really think that this is the sort of thing that should be left up to the IDE. Inferring attribtutes like that is just going to harm code maintainibility. It's bad enough that we end up with them not being marked on templates due to inferrence, but we _have_ to do it that way, because the attributes vary per instantiation. That is _not_ the case with class member functions. Please, do _not_ do this. - Jonathan M Davis
In the situation where the IDE writes it for you, said IDE will help you only when you write the code. In the situation where the IDE tells you what they are (through something like hovering over it), it will help you no matter who writes the code. It is also significantly easier to implement, particularly taking into consideration things like style, comments, etc.
Exactly. If one is worried about having to look at the base classes, it's quite easy to check that info when you are using an IDE - for example, with a hover over the overriding function which lists all the parameters and attributes, and documentation too. -- Bruno Medeiros - Software Engineer
Feb 23 2012
prev sibling next sibling parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Jonathan M Davis wrote:
 On Thursday, February 16, 2012 18:49:40 Walter Bright wrote:
 Given:

       class A { void foo() { } }
       class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

       class A { pure void foo() { } }
       class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class function
 have to be repeated on all its overrides. I ran headlong into this when
 experimenting with making the member functions of class Object pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the overriding
 function would be the "tightest" of its explicit qualifiers and its
 overridden function qualifiers. It turns out that most functions are
 naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body will
 be semantically checked against this tightest set of qualifiers.

 What do you think?
No. Absolutely not. I hate the fact that C++ does this with virtual. It makes it so that you have to constantly look at the base classes to figure out what's virtual and what isn't. It harms maintenance and code understandability. And now you want to do that with safe, pure, nothrow, and const? Yuck.
What about:

     class A { pure void foo() { } }
     class B : A { auto override void foo() { } }
Feb 17 2012
parent reply Gor Gyolchanyan <gor.f.gyolchanyan gmail.com> writes:
Aside from the fact that it's highly ambiguous, programmers would
start forgetting to write that auto :-)

On Fri, Feb 17, 2012 at 4:35 PM, Piotr Szturmaj <bncrbme jadamspam.pl> wrote:
 Jonathan M Davis wrote:
 On Thursday, February 16, 2012 18:49:40 Walter Bright wrote:
 Given:

      class A { void foo() { } }
      class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

      class A { pure void foo() { } }
      class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class
 function
 have to be repeated on all its overrides. I ran headlong into this when
 experimenting with making the member functions of class Object pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the overriding
 function would be the "tightest" of its explicit qualifiers and its
 overridden function qualifiers. It turns out that most functions are
 naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body
 will
 be semantically checked against this tightest set of qualifiers.

 What do you think?
No. Absolutely not. I hate the fact that C++ does this with virtual. It makes it so that you have to constantly look at the base classes to figure out what's virtual and what isn't. It harms maintenance and code understandability. And now you want to do that with safe, pure, nothrow, and const? Yuck.
What about:

       class A { pure void foo() { } }
       class B : A { auto override void foo() { } }
-- Bye, Gor Gyolchanyan.
Feb 17 2012
parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Gor Gyolchanyan wrote:
 Aside the fact, that it's highly ambiguous, the programmers would
 start forgetting to write that auto :-)
Actually you can't override an auto function, so it's not ambiguous. It's currently impossible to do:

     override auto func() { }
Feb 17 2012
parent reply Gor Gyolchanyan <gor.f.gyolchanyan gmail.com> writes:
This is clearly a bug, because auto is just another way of specifying
a valid static type.

On Fri, Feb 17, 2012 at 5:22 PM, Piotr Szturmaj <bncrbme jadamspam.pl> wrote:
 Gor Gyolchanyan wrote:
 Aside the fact, that it's highly ambiguous, the programmers would
 start forgetting to write that auto :-)
Actually you can't override an auto function so its not ambiguous. It's currently impossible to do: override auto func() { }
-- Bye, Gor Gyolchanyan.
Feb 17 2012
parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Gor Gyolchanyan wrote:
 This is clearly a bug, because auto is just another way of specifying
 a valid static type.
I don't know if overriding auto makes sense at all. If you override something, you should specify a return type which is covariant with the overridden function's one. Why would you use an auto return type?

But wait... One use case where it can be helpful is overriding a function that returns a derived class, where the derived class is based on a template parameter:

    class A {}
    class B : A {}
    class C : A {}

    class Test1 { A getA() { return new A(); } }

    class Test2(T : A) : Test1
    {
        override auto getA() { return new T(); }
    }

    void test()
    {
        (new Test2!B()).getA();
        (new Test2!C()).getA();
    }

It doesn't currently compile, but if I change auto to T it does. It may be a compiler bug, as you've pointed out. It's a simplified example; T may not be directly known (but still covariant to A), i.e. accessing it may be far more complex than shown by the above example.

This is the only use case of override auto that comes to my mind.
 On Fri, Feb 17, 2012 at 5:22 PM, Piotr Szturmaj<bncrbme jadamspam.pl>  wrote:
 Gor Gyolchanyan wrote:
 Aside the fact, that it's highly ambiguous, the programmers would
 start forgetting to write that auto :-)
Actually you can't override an auto function so its not ambiguous. It's currently impossible to do: override auto func() { }
Feb 17 2012
parent reply Piotr Szturmaj <bncrbme jadamspam.pl> writes:
Forget it... auto (or super) override doesn't help much anyway.
Feb 17 2012
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 02/17/2012 03:25 PM, Piotr Szturmaj wrote:
 Forget it... auto (or super) override doesn't help much anyway.
It is clearly a bug though.
Feb 17 2012
prev sibling next sibling parent "dsimcha" <dsimcha yahoo.com> writes:
On Friday, 17 February 2012 at 03:24:50 UTC, Jonathan M Davis 
wrote:
 No. Absolutely not. I hate the fact that C++ does this with 
 virtual. It makes it so that you have to constantly look at the 
 base classes to figure out what's virtual and what isn't. It 
 harms maintenance and code understandability. And now you want 
 to do that with  safe, pure, nothrow, and const? Yuck.

 I can understand wanting to save some typing, but I really 
 think that this harms code maintainability. It's the sort of 
 thing that an IDE is good for. It does stuff like generate the 
 function signatures for you or fill in the attributes that are 
 required but are missing.
Besides the fact that not everyone uses an IDE, my other counter-argument to these "the IDE generates your boilerplate" arguments is that code is read and modified more often than it is written. I don't like reading or modifying boilerplate code any more than I like writing it. Besides, if you're using a fancy IDE, can't it show you the protection attributes inherited from the derived class?
Feb 17 2012
prev sibling parent reply "so" <so so.so> writes:
On Friday, 17 February 2012 at 03:24:50 UTC, Jonathan M Davis 
wrote:

 No. Absolutely not. I hate the fact that C++ does this with 
 virtual. It makes
 it so that you have to constantly look at the base classes to 
 figure out what's
 virtual and what isn't. It harms maintenance and code 
 understandability. And
 now you want to do that with  safe, pure, nothrow, and const? 
 Yuck.

 I can understand wanting to save some typing, but I really 
 think that this
 harms code maintainability. It's the sort of thing that an IDE 
 is good for. It
 does stuff like generate the function signatures for you or 
 fill in the
 attributes that are required but are missing. I grant you that 
 many D
 developers don't use IDEs at this point (at least not for D) 
 and that those
 sort of capabilities are likely to be in their infancy for the 
 IDEs that we
 _do_ have, but I really think that this is the sort of thing 
 that should be
 left up to the IDE. Inferring attribtutes like that is just 
 going to harm code
 maintainibility. It's bad enough that we end up with them not 
 being marked on
 templates due to inferrence, but we _have_ to do it that way, 
 because the
 attributes vary per instantiation. That is _not_ the case with 
 class member
 functions.

 Please, do _not_ do this.

 - Jonathan M Davis
As much as I hate the "pure const @system @trusted" spam, I don't think I like the idea either. If you are not using an IDE or a mouse, this would be hell. A language shouldn't be designed with such assumptions, unless you are Microsoft.

Thing is, this will make things harder, not easier (which I think is the intention here). When you override a function, at most you copy/paste it from the base class.
Feb 23 2012
parent reply "F i L" <witte2008 gmail.com> writes:
UTC, so wrote:
 If you are not using an IDE or a mouse, this would be hell.
lol wut? This isn't the 80's. In all seriousness, I think you're decoupling inherently ingrained pieces: the language and its tools. The same way you *need* syntax highlighting to distinguish structure, you *should* have other productivity tools to help you analyze data layout. It's not like these tools don't exist in abundance on every platform. And MS has pulled some really stupid shit in its day, but its developer tools and support do not fall under that category.
Feb 23 2012
next sibling parent reply "so" <so so.so> writes:
On Thursday, 23 February 2012 at 22:01:43 UTC, F i L wrote:
 UTC, so wrote:
 If you are not using an IDE or a mouse, this would be hell.
lol wut? This isn't the 80's. In all seriousness, I think you're decoupling inherently ingrained pieces: the language and it's tools. The same way you *need* syntax highlighting to distinguish structure, you *should* have other productivity tools to help you analyze data-layout. It's not like these tools don't exist in abundance on every platform. And MS has pulled some really stupid shit in its day, but it's developer tools and support do not fall under that category.
No one said you shouldn't use an IDE or any other tool, but I don't think it is healthy to design a language with such assumptions. Walter himself was against this and stated why he doesn't like the Java way of doing things; one of the reasons was that the language relies on IDEs. I understand he is trying to address the complaint that function qualifiers look ugly, yet I am not sure this is the answer.
Feb 23 2012
parent reply "F i L" <witte2008 gmail.com> writes:
UTC, so wrote:
 No one said you shouldn't use IDE or any other tool, but i 
 don't think it is healthy to design a language with such 
 assumptions. Walter himself was against this and stated why he 
 doesn't like Java way of doing things, one of the reason was 
 the language was relying on IDEs.
Well then I disagree with Walter on this as well. What's wrong with having a "standard" toolset in the same way you have standard libraries? It's unrealistic to think people (at large) will be writing any sort of serious application outside of a modern IDE. I'm not saying it's Walter's job to write IDE integration, only that the language design shouldn't cater to the smaller use-case scenario.

Cleaner code is easier to read and, within an IDE with tooltips, makes little difference when looking at the hierarchy. If you want to be hard-core about it, no one is stopping you from explicitly qualifying each definition.
Feb 23 2012
next sibling parent "so" <so so.so> writes:
On Friday, 24 February 2012 at 00:01:52 UTC, F i L wrote:

 It's unrealistic to think people (at large) will be writing any 
 sort of serious application outside of a modern IDE.
You would be surprised, or should I rather say shocked? :) I used to be an IDE fanatic as well, then I took an arrow...
Feb 23 2012
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/23/12 6:01 PM, F i L wrote:
 It's unrealistic to think people (at large) will be writing any sort of
 serious application outside of a modern IDE.
You'd hate working for Facebook :o). Andrie
Feb 23 2012
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/23/12 6:46 PM, Andrei Alexandrescu wrote:
 On 2/23/12 6:01 PM, F i L wrote:
 It's unrealistic to think people (at large) will be writing any sort of
 serious application outside of a modern IDE.
You'd hate working for Facebook :o). Andrie
I tried to remove the message above, but Thunderbird doesn't recognize it as coming from me. Is there some recent change in the forum that could be linked to that? I think Thunderbird recognizes messages by a specific author by comparing email addresses. Thanks, Andrei
Feb 23 2012
parent Alix Pexton <alix.DOT.pexton gmail.DOT.com> writes:
On 24/02/2012 00:48, Andrei Alexandrescu wrote:
 On 2/23/12 6:46 PM, Andrei Alexandrescu wrote:
 On 2/23/12 6:01 PM, F i L wrote:
 It's unrealistic to think people (at large) will be writing any sort of
 serious application outside of a modern IDE.
You'd hate working for Facebook :o). Andrie
I tried to remove the message above, but Thunderbird doesn't recognize it as coming from me. Is there some recent change in the forum that could be linked to that? I think Thunderbird recognizes messages by a specific author by comparing email addresses. Thanks, Andrei
Obviously it goes by signature now ^^ A...
Feb 24 2012
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/23/12 6:01 PM, F i L wrote:
 It's unrealistic to think people (at large) will be writing any sort of
 serious application outside of a modern IDE.
You'd hate working for Facebook :o). Andrei
Feb 23 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/23/2012 4:01 PM, F i L wrote:
 Well then I disagree with Walter on this as well. What's wrong with having a
 "standard" toolset in the same way you have standard libraries? It's
unrealistic
 to think people (at large) will be writing any sort of serious application
 outside of a modern IDE. I'm not saying it's Walters job to write IDE
 integration, only that the language design shouldn't cater to the smaller
 use-case scenario.
Do you really want a language where the source code isn't readable or browsable outside of an IDE?

Like the switch from command line to GUI, perhaps there are some that are ready to switch from text files to some visually graphy thingy for source code. But D ain't such a language. I don't know what such a language would look like. I've never thought much about it before, though I heard there was a toy language for kids that you "programmed" by moving boxes around on the screen.
Feb 24 2012
next sibling parent reply David <d dav1d.de> writes:
On 24.02.2012 11:43, Walter Bright wrote:
 On 2/23/2012 4:01 PM, F i L wrote:
 Well then I disagree with Walter on this as well. What's wrong with
 having a
 "standard" toolset in the same way you have standard libraries? It's
 unrealistic
 to think people (at large) will be writing any sort of serious
 application
 outside of a modern IDE. I'm not saying it's Walters job to write IDE
 integration, only that the language design shouldn't cater to the smaller
 use-case scenario.
Do you really want a language that the source code isn't readable or browsable outside of an IDE? Like the switch from command line to GUI, perhaps there are some that are ready to switch from text files to some visually graphy thingy for source code. But D ain't such a language. I don't know what such a language would look like. I've never thought much about it before, though I heard there was a toy language for kids that you "programmed" by moving boxes around on the screen.
I think you mean Robot Karol, but that also uses a BASIC-like syntax.
Feb 24 2012
parent Alix Pexton <alix.DOT.pexton gmail.DOT.com> writes:
On 24/02/2012 11:03, David wrote:
 On 24.02.2012 11:43, Walter Bright wrote:
 On 2/23/2012 4:01 PM, F i L wrote:
 Well then I disagree with Walter on this as well. What's wrong with
 having a
 "standard" toolset in the same way you have standard libraries? It's
 unrealistic
 to think people (at large) will be writing any sort of serious
 application
 outside of a modern IDE. I'm not saying it's Walters job to write IDE
 integration, only that the language design shouldn't cater to the
 smaller
 use-case scenario.
Do you really want a language that the source code isn't readable or browsable outside of an IDE? Like the switch from command line to GUI, perhaps there are some that are ready to switch from text files to some visually graphy thingy for source code. But D ain't such a language. I don't know what such a language would look like. I've never thought much about it before, though I heard there was a toy language for kids that you "programmed" by moving boxes around on the screen.
I think you mean Robot Karol, but this uses also a basic like syntax.
Sounds to me more like Scratch http://en.wikipedia.org/wiki/Scratch_%28programming_language%29 A...
Feb 24 2012
prev sibling next sibling parent Robert Clipsham <robert octarineparrot.com> writes:
On 24/02/2012 10:43, Walter Bright wrote:
 Do you really want a language that the source code isn't readable or
 browsable outside of an IDE?

 Like the switch from command line to GUI, perhaps there are some that
 are ready to switch from text files to some visually graphy thingy for
 source code. But D ain't such a language. I don't know what such a
 language would look like. I've never thought much about it before,
 though I heard there was a toy language for kids that you "programmed"
 by moving boxes around on the screen.
You're probably thinking of Scratch, though there are such languages not aimed at kids, see Android App Inventor - http://en.wikipedia.org/wiki/Google_App_Inventor Having used both of these (and a couple of others if I recall) I'd happily take a text only language any day! But then... I prefer a command line + text editor over GUI/IDE too ;) -- Robert http://octarineparrot.com/
Feb 24 2012
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Feb 24, 2012 at 02:43:01AM -0800, Walter Bright wrote:
[...]
 Like the switch from command line to GUI, perhaps there are some that
 are ready to switch from text files to some visually graphy thingy for
 source code. But D ain't such a language. I don't know what such a
 language would look like. I've never thought much about it before,
 though I heard there was a toy language for kids that you "programmed"
 by moving boxes around on the screen.
That sounds like an awesome concept for a "programming" game. :-) T -- It won't be covered in the book. The source code has to be useful for something, after all. -- Larry Wall
Feb 24 2012
parent Sean Cavanaugh <WorksOnMyMachine gmail.com> writes:
On 2/24/2012 10:29 AM, H. S. Teoh wrote:
 On Fri, Feb 24, 2012 at 02:43:01AM -0800, Walter Bright wrote:
 [...]
 Like the switch from command line to GUI, perhaps there are some that
 are ready to switch from text files to some visually graphy thingy for
 source code. But D ain't such a language. I don't know what such a
 language would look like. I've never thought much about it before,
 though I heard there was a toy language for kids that you "programmed"
 by moving boxes around on the screen.
That sounds like an awesome concept for a "programming" game. :-) T
Graphics Shaders can be developed with a UI of this nature. Just google around for the UDK material editor for an example. As advanced as it is, it will look crude in a few more years too.
Feb 24 2012
prev sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On 24/02/12 11:43, Walter Bright wrote:
 On 2/23/2012 4:01 PM, F i L wrote:
 Well then I disagree with Walter on this as well. What's wrong with
 having a
 "standard" toolset in the same way you have standard libraries? It's
 unrealistic
 to think people (at large) will be writing any sort of serious
 application
 outside of a modern IDE. I'm not saying it's Walters job to write IDE
 integration, only that the language design shouldn't cater to the smaller
 use-case scenario.
Do you really want a language that the source code isn't readable or browsable outside of an IDE? Like the switch from command line to GUI, perhaps there are some that are ready to switch from text files to some visually graphy thingy for source code. But D ain't such a language. I don't know what such a language would look like.
There are quite a few people who use LabVIEW at my work place. :) https://en.wikipedia.org/wiki/LabVIEW -Lars
Feb 25 2012
prev sibling parent reply "so" <so so.so> writes:
On Friday, 24 February 2012 at 00:01:52 UTC, F i L wrote:

 Well then I disagree with Walter on this as well. What's wrong 
 with having a "standard" toolset in the same way you have 
 standard libraries? It's unrealistic to think people (at large) 
 will be writing any sort of serious application outside of a 
 modern IDE. I'm not saying it's Walters job to write IDE 
 integration, only that the language design shouldn't cater to 
 the smaller use-case scenario.

 Cleaner code is easier to read and, within an IDE with 
 tooltips, makes little difference when looking at the 
 hierarchy. If you want to be hard-core about it, no one is 
 stopping you from explicitly qualifying each definition.
The debugger is the single Visual Studio tool that I have failed to replace in Unix land. I have tried many of them and they all sucked. They are either incomplete or crash too often. Command-line gdb is not much of an option. The situation is so bad that it looks like I need to go back to the VisualC++/gvim combo.
Feb 25 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 26/02/2012 00:25, so a écrit :
 On Friday, 24 February 2012 at 00:01:52 UTC, F i L wrote:

 Well then I disagree with Walter on this as well. What's wrong with
 having a "standard" toolset in the same way you have standard
 libraries? It's unrealistic to think people (at large) will be writing
 any sort of serious application outside of a modern IDE. I'm not
 saying it's Walters job to write IDE integration, only that the
 language design shouldn't cater to the smaller use-case scenario.

 Cleaner code is easier to read and, within an IDE with tooltips, makes
 little difference when looking at the hierarchy. If you want to be
 hard-core about it, no one is stopping you from explicitly qualifying
 each definition.
Debugger is the single tool in VisualStudio that i failed to replace in unix land. I have tried many of them and they all sucked. They are either incomplete or crash too often. Command line gdb is not much of an option. The situation is so bad that looks like i need to go back to the VisualC++/gvim combo.
There are GUIs that go over gdb and are nice to use.
Feb 26 2012
parent reply "so" <so so.so> writes:
On Sunday, 26 February 2012 at 15:25:44 UTC, deadalnix wrote:
 Le 26/02/2012 00:25, so a écrit :
 You have GUI that goes over gdb and are nice to use.
You mean DDD (which I think is the best of them)? Indeed nice, but it crashes too often.
Feb 26 2012
parent reply "Daniel Murphy" <yebblies nospamgmail.com> writes:
"so" <so so.so> wrote in message 
news:otgdfqnpnpbfxuegmlnn forum.dlang.org...
 On Sunday, 26 February 2012 at 15:25:44 UTC, deadalnix wrote:
 Le 26/02/2012 00:25, so a écrit :
 You have GUI that goes over gdb and are nice to use.
You mean DDD (which i think best of them)? Indeed nice, but it crashes too often.
I spent hours trying to get disassembly working in ddd yesterday, in the end I gave up and used gdb. I hope I never have to leave visual studio 6 again.
Feb 26 2012
parent reply "so" <so so.so> writes:
On Sunday, 26 February 2012 at 16:47:33 UTC, Daniel Murphy wrote:

 I spent hours trying to get disassembly working in ddd 
 yesterday, in the end
 I gave up and used gdb.  I hope I never have to leave visual 
 studio 6 again.
There is always KDevelop if you want an IDE. Awesome piece of free software. It now has a vim mode; if only it were fully supported!
Feb 26 2012
parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"so" <so so.so> wrote in message 
news:qazgjukeorotmmdqdmkj forum.dlang.org...
 On Sunday, 26 February 2012 at 16:47:33 UTC, Daniel Murphy wrote:

 I spent hours trying to get disassembly working in ddd yesterday, in the 
 end
 I gave up and used gdb.  I hope I never have to leave visual studio 6 
 again.
There is always Kdevelop if you want IDE. Awesome piece of free software. It now has vim mode, if only it was supported it fully!
Thanks, I'll give that a go next time I need to do something with D on x64.
Feb 26 2012
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 23, 2012 at 11:01:42PM +0100, F i L wrote:
 UTC, so wrote:
If you are not using an IDE or a mouse, this would be hell.
lol wut? This isn't the 80's.
I still don't use an IDE or a mouse when I code. And I don't plan to. (In fact, I rather plan the *opposite*.)
 In all seriousness, I think you're decoupling inherently ingrained
 pieces: the language and it's tools. The same way you *need* syntax
 highlighting to distinguish structure,
I don't.
 you *should* have other productivity tools to help you analyze
 data-layout. It's not like these tools don't exist in abundance on
 every platform.
I don't use them.
 And MS has pulled some really stupid shit in its day, but it's
 developer tools and support do not fall under that category.
True, they have their value. I don't argue with that. But why should anyone be *forced* to use them? They're just tools. A language is a language (a set of syntax and grammar rules with the associated semantics). It's not inherently tied to any tools. T -- Microsoft is to operating systems & security ... what McDonalds is to gourmet cooking.
Feb 23 2012
parent reply "F i L" <witte2008 gmail.com> writes:
H. S. Teoh wrote:
 In all seriousness, I think you're decoupling inherently 
 ingrained
 pieces: the language and it's tools. The same way you *need* 
 syntax
 highlighting to distinguish structure,
I don't.
wait... you don't even use Syntax Highlighting? Are you insane, you'll go blind!
 And MS has pulled some really stupid shit in its day, but it's
 developer tools and support do not fall under that category.
True, they have their value. I don't argue with that. But why should anyone be *forced* to use them? They're just tools. A language is a language (a set of syntax and grammar rules with the associated semantics). It's not inherently tied to any tools.
is a bad thing is because it's closed source and ultimately designed as [yet another] developer lock-in (just good business right?). Beyond that it's really silly not to use VS cause of all the productive features it provides. MonoDevelop is catching up, but still quite a ways behind in some areas. No one is stopping anyone from writing code in Notepad.. but then, no one is stopping 3D artists from manually editing .obj files in Notepad either.
Feb 23 2012
next sibling parent reply James Miller <james aatch.net> writes:
On 24 February 2012 13:15, F i L <witte2008 gmail.com> wrote:
 H. S. Teoh wrote:
 In all seriousness, I think you're decoupling inherently ingrained
 pieces: the language and it's tools. The same way you *need* syntax
 highlighting to distinguish structure,
I don't.
wait... you don't even use Syntax Highlighting? Are you insane, you'll go blind!
I think my colleague was blind before he started programming, but he doesn't use syntax highlighting.
 And MS has pulled some really stupid shit in its day, but it's
 developer tools and support do not fall under that category.
True, they have their value. I don't argue with that. But why should anyone be *forced* to use them? They're just tools. A language is a language (a set of syntax and grammar rules with the associated semantics). It's not inherently tied to any tools.
thing is because it's closed source and ultimately designed as [yet another] developer lock-in (just good business right?). Beyond that it's really silly not to use VS cause of all the productive features it provides. MonoDevelop is catching up, but still quite a ways behind in some areas. No one is stopping anyone from writing code in Notepad.. but then, no one is stopping 3D artists from manually editing .obj files in Notepad either.
As Teoh said, Notepad is not a workable text editor; for a start, it doesn't support a massive range of modern features (like Unicode, or non-CRLF line terminators), and doesn't do any useful code-related stuff like indentation or bracket-matching.

You seem to think that there is "Notepad" or Visual Studio/Eclipse, when in reality there is a sliding scale, from using cat to output to a file to using, well, Eclipse or VS. But there are points along the way. Like Jonathan, I'm a (g)vim user; I tend to develop in gvim and do quick edits in vim (tiling window manager, I don't like the switch from full-screen to half-a-screen then back again). I have tried all sorts of other systems and eventually just worked my way back to the terminal. My ongoing quest for productivity has led me to believe that, unless you want to be tied to a technology, back to basics is the best way.

I personally believe that any set of tools should be made thinking about the use case: "What if this person was developing using a Tektronix 4014?" I'm not saying that we should still be coding to 30-year-old terminals, but the idea is that somebody not having a GUI should not immediately be a blocker. This has been Windows' Achilles' heel for a while: many products don't work without a GUI, and therefore are difficult - or impossible - to script. If you can provide a programmatic interface to your system, then you have just allowed a ton more products to be made, at no extra cost to you. Clang has built-in support for auto-completion and syntax analysis and the front-end is even nicely packaged into a library, so I now have C/C++/Objective-C, context-aware, accurate completion in vim, through the vim plugin clang-complete. This was not made by the people at Clang; they just exposed the functionality (by the way, XCode uses the same system, and Code::Blocks is moving their code model to it too).

Programming is a craft as much as it is a process. I tend to liken it to carpentry: you have set steps, you design and plan and build etc, but there's creativity there. As such, programmers (I've found) tend to pick an environment that suits them best. I use a minimal system that I can configure and hack to my heart's content. My colleague uses a Macbook Pro that he never shuts down. The designer here uses a Macbook Air. And we all work fine; there is no "One True Way" to make a chair, why should there be one for writing a program?

My point is that the tools that programmers use, like compilers and linkers and parser-generators and build systems and deployment tools and source control and x and y and z and .... are going to be used by a wide range of people, in a wide range of environments, for a wide range of purposes, so they should keep in mind that maybe you /don't/ have a certain tool or feature available. So you make sure that the experience at the lowest common denominator, a vt100 terminal, is acceptable, maybe not perfect, but good enough, then you build from there. If that means that D is geared towards less typing, then good, especially if you can do the extra typing and not break things. It /is/ possible to make everybody mostly happy, and that is by aiming at the people using `cat`* to program and hitting the people using VS along the way.

* Programming using `cat` is not recommended.**
** Even though /real/ programmers use `cat`

--
James Miller
Feb 23 2012
parent reply "foobar" <foo bar.com> writes:
On Friday, 24 February 2012 at 05:05:29 UTC, James Miller wrote:

 You seem to think that there is "Notepad" or Visual 
 Studio/eclipse,
 when in reality there is a sliding scale, from using cat to 
 output to
 a file to using, well Eclipse or VS. But there are points along 
 the
 way, like Jonathon, I'm a (g)vim user, I tend to develop in 
 gvim and
 do quick edits in vim (tiling window manager, I don't like the 
 switch
 from full-screen to half-a-screen then back again), I have 
 tried all
 sorts of other systems and eventually just worked my way back 
 to the
 terminal. My ongoing quest for productivity has led me to 
 believe
 that, unless you want to be tied to a technology, back to 
 basics is
 the best way.
That's analogous to saying that you don't want to depend on a lighter since you can make your own fire by rubbing a wooden stick against a stone. A lighter does tie you to a certain technology, but losing the lighter doesn't make for more productivity. Misuse of the tool or using the wrong one sure could hamper productivity, but that's hardly the fault of technology.
 I personally believe that any set of tools should be made 
 thinking
 about the use case: "What if this person was developing using a
 Tektronix 4014?", I'm not saying that we should still be coding 
 to 30
 year old terminals, but the idea is that somebody might not 
 having a
 gui should not immediately be a blocker. This has been Windows'
 Achilles' heel for a while, many products don't work without a 
 gui,
 and therefore are difficult - or impossible - to script. If you 
 can
 provide a programmatic interface to your system, then you have 
 just
 allowed a ton more products to be made, at no extra cost to 
 you. Clang
 has built-in support for auto-completion and syntax analysis 
 and the
 front-end is even nicely packaged into a library, so I now have
 C/C++/Objective-C, context-aware, accurate completion in vim, 
 through
 the vim plugin clang-complete, this was not made by the people 
 at
 Clang, they just exposed the functionality (by the way, XCode 
 uses the
 same system, and Code::Blocks is moving their code-model to it 
 too).
The above regarding MS is incorrect. MS has lots of automation and is far better at it than *nix systems are. Its PowerShell is superior to the *nix "everything is a file" ideology, and there have been several attempts to copy the concept to *nix with Python and Ruby.
 Programming a craft as much as it is a process. I tend to liken 
 it to
 carpentry, you have set steps, you design and plan and build 
 etc, but
 there's creativity there. As such, programmers (I've found) 
 tend to
 pick an environment that suits them best. I use a minimal 
 system that
 I can configure and hack to my heart's content. My colleague 
 uses a
 Macbook pro that he never shuts down. The designer here uses a 
 Macbook
 Air. And we all work fine, there is no "One True Way" to make a 
 chair,
 why should there be one for writing a program?

 My point is that the tools that programmers use, like compilers 
 and
 linkers and parser-generators and build systems and deployment 
 tools
 and source control and x and y and z and .... are going to be 
 used by
 a wide range of people, in a wide range of environments, for a 
 wide
 range of purposes, so they should keep in mind that maybe you 
 /don't/
 have a certain tool or feature available. So you make sure that 
 the
 experience at the lowest common denominator, a vt100 terminal, 
 is
 acceptable, maybe not perfect, but good enough, then you build 
 from
 there. If that means that D is geared towards less typing, then 
 good,
 especially if  you can do the extra typing and not break 
 things. It
 /is/ possible to make everybody mostly happy, and that is by 
 aiming at
 the people using `cat`* to program and hitting the people using 
 VS
 along the way.

 * Programming using `cat` is not recommended.**
 ** Even though /real/ programmers use `cat`

 --
 James Miller
I disagree. Simply put:

+---------+    +---------+
| Magic   |    | comfort |
| happens |    |  zone   |
| here!   |    +---------+
+---------+
   Magic cannot happen here ^.
Feb 25 2012
parent reply James Miller <james aatch.net> writes:
On Feb 26, 2012 8:53 AM, "foobar" <foo bar.com> wrote:
 That's analogous to saying that you don't want to depend on a lighter
since you can make your own fire by rubbing a stone with a wood stick. A lighter does tie you to a certain technology but loosing the lighter doesn't make for more productivity. Misuse of the tool or using the wrong one sure could hamper productivity but that's hardly the fault of technology.

No, it's analogous to not using a lighter that only lights evergreens, and
only works in Europe. Again, this is blatantly the view that there is
either Notepad or VS, ignoring the masses of features of the editors in
between. I've used VS; I don't find it to have many features - that I use -
that vim doesn't.

 The above regarding MS is incorrect. MS has lots of automation and is far
better at it than *nix systems are. Its Powershell is superior to the *nix "everything is a file" ideology and there were several attempts to copy the concept to *nix with Python and Ruby.

I'm not even sure what you're getting at here; I didn't realise PowerShell had an ideology, and I don't think bash does either. And sure, PowerShell, a nonstandard add-on, is good. Try automating something that wasn't made by Microsoft though, or try doing administration of it remotely without RDP.
 Programming a craft as much as it is a process. I tend to liken it to
 carpentry, you have set steps, you design and plan and build etc, but
 there's creativity there. As such, programmers (I've found) tend to
 pick an environment that suits them best. I use a minimal system that
 I can configure and hack to my heart's content. My colleague uses a
 Macbook pro that he never shuts down. The designer here uses a Macbook
 Air. And we all work fine, there is no "One True Way" to make a chair,
 why should there be one for writing a program?

 My point is that the tools that programmers use, like compilers and
 linkers and parser-generators and build systems and deployment tools
 and source control and x and y and z and .... are going to be used by
 a wide range of people, in a wide range of environments, for a wide
 range of purposes, so they should keep in mind that maybe you /don't/
 have a certain tool or feature available. So you make sure that the
 experience at the lowest common denominator, a vt100 terminal, is
 acceptable, maybe not perfect, but good enough, then you build from
 there. If that means that D is geared towards less typing, then good,
 especially if  you can do the extra typing and not break things. It
 /is/ possible to make everybody mostly happy, and that is by aiming at
 the people using `cat`* to program and hitting the people using VS
 along the way.

 * Programming using `cat` is not recommended.**
 ** Even though /real/ programmers use `cat`

 --
 James Miller
I disagree. Simply put: +---------+ +---------+ | Magic | | comfort | | happens | | zone | | here! | +---------+ +---------+ Magic cannot happen here ^.
What on earth does this mean? In the context it seems to suggest that I should be struggling to learn a new environment if I want to do something amazing. I'm guessing you meant that I should try something new, but that doesn't need to be the editor. It's far more interesting to try to build outside of my "comfort zone". In fact, your grade-school platitude annoys me; it suggests that I'm stuck in my ways and avoiding new tech because I like my terminal. I started in IDEs, and worked my way down. I also have the most fun working outside my comfort zone and doing something new, spending hours looking at code going "why won't you work! Why do you hate me!" Then finally getting a breakthrough... amazing. -- James Miller
Feb 25 2012
parent "foobar" <foo bar.com> writes:
On Sunday, 26 February 2012 at 01:18:55 UTC, James Miller wrote:
 On Feb 26, 2012 8:53 AM, "foobar" <foo bar.com> wrote:
 That's analogous to saying that you don't want to depend on a 
 lighter
since you can make your own fire by rubbing a stone with a wood stick. A lighter does tie you to a certain technology but loosing the lighter doesn't make for more productivity. Misuse of the tool or using the wrong one sure could hamper productivity but that's hardly the fault of technology.

 No, its analogous to not using a lighter that only lights 
 evergreens, and
 only works in Europe. Again, this is blatantly the view that 
 there there is
 either notepad or vs, ignoring the masses of features of the 
 editors in
 between. I've used vs, I don't find it to have many features - 
 that I use -
 that vim doesn't.
I see the analogy went over your head. Besides, what's wrong with a lighter that only works in Europe? Works perfectly fine for me! :)
 The above regarding MS is incorrect. MS has lots of automation 
 and is far
better at it than *nix systems are. Its Powershell is superior to the *nix "everything is a file" ideology and there were several attempts to copy the concept to *nix with Python and Ruby. Im not even sure what you're getting at here, I didn't realise powershell had an ideology, I don't think bash does either. And sure powershell, a nonstandard add-on, is good. Try automating something that wasn't made by microsoft though, try doing administration of it remotely without rdp.
 Programming a craft as much as it is a process. I tend to 
 liken it to
 carpentry, you have set steps, you design and plan and build 
 etc, but
 there's creativity there. As such, programmers (I've found) 
 tend to
 pick an environment that suits them best. I use a minimal 
 system that
 I can configure and hack to my heart's content. My colleague 
 uses a
 Macbook pro that he never shuts down. The designer here uses 
 a Macbook
 Air. And we all work fine, there is no "One True Way" to make 
 a chair,
 why should there be one for writing a program?

 My point is that the tools that programmers use, like 
 compilers and
 linkers and parser-generators and build systems and 
 deployment tools
 and source control and x and y and z and .... are going to be 
 used by
 a wide range of people, in a wide range of environments, for 
 a wide
 range of purposes, so they should keep in mind that maybe you 
 /don't/
 have a certain tool or feature available. So you make sure 
 that the
 experience at the lowest common denominator, a vt100 
 terminal, is
 acceptable, maybe not perfect, but good enough, then you 
 build from
 there. If that means that D is geared towards less typing, 
 then good,
 especially if  you can do the extra typing and not break 
 things. It
 /is/ possible to make everybody mostly happy, and that is by 
 aiming at
 the people using `cat`* to program and hitting the people 
 using VS
 along the way.

 * Programming using `cat` is not recommended.**
 ** Even though /real/ programmers use `cat`

 --
 James Miller
I disagree. Simply put: +---------+ +---------+ | Magic | | comfort | | happens | | zone | | here! | +---------+ +---------+ Magic cannot happen here ^.
What on earth does this mean? In the context it seems to suggest that I should be struggling to learn a new environment if I want to do something amazing. I'm guessing you meant that I should try something new, but that doesn't need to be the editor. It's far more interesting to try to build outside of my "comfort zone". In fact, your grade-school platitude annoys me, it suggests that I'm stuck in my ways and avoiding new tech because I like my terminal. I started in IDEs, and worked my way down. I also have the most fun working outside my comfort zone and doing something new, spending hours looking at code going "why wont you work! Why do you hate me!" Then finally getting a breakthrough... amazing. -- James Miller
The picture is both a simple fact of life and in our current discussion a response to the above attitude of "lowest common denominator". I'm suggesting that progress is made by progressing forward and not by retreating backwards. Your agitated response suggests I hit a nerve. That's a sign that my post had an effect.
Feb 26 2012
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Feb 24, 2012 at 06:05:20PM +1300, James Miller wrote:
[...]
 My ongoing quest for productivity has led me to believe that, unless
 you want to be tied to a technology, back to basics is the best way.
That's an interesting observation. I have to agree.
 I personally believe that any set of tools should be made thinking
 about the use case: "What if this person was developing using a
 Tektronix 4014?", I'm not saying that we should still be coding to 30
 year old terminals, but the idea is that somebody might not having a
 gui should not immediately be a blocker.
This reminds me of a very insightful quote I found online a while ago: A program should be written to model the concepts of the task it performs rather than the physical world or a process because this maximizes the potential for it to be applied to tasks that are conceptually similar and, more important, to tasks that have not yet been conceived. -- Michael B. Allen
 This has been Windows' Achilles' heel for a while, many products don't
 work without a gui, and therefore are difficult - or impossible - to
 script. If you can provide a programmatic interface to your system,
 then you have just allowed a ton more products to be made, at no extra
 cost to you.
It's exactly as I quoted above: by limiting yourself to a GUI, you have limited the applicability of your program, even if what the program actually *does* is not inherently related to a GUI.
 Clang has built-in support for auto-completion and syntax analysis and
 the front-end is even nicely packaged into a library, so I now have
 C/C++/Objective-C, context-aware, accurate completion in vim, through
 the vim plugin clang-complete, this was not made by the people at
 Clang, they just exposed the functionality (by the way, XCode uses the
 same system, and Code::Blocks is moving their code-model to it too).
"This maximizes the potential for it to be applied ... to tasks that have not yet been conceived." :-) [...]
 * Programming using `cat` is not recommended.**
 ** Even though /real/ programmers use `cat`
[...] Oh? I thought *real* real programmers use a soldering iron, a pair of tweezers, a magnifying glass, and really *really* steady hands... Tricky things to program, those new-fangled nanometer-scale microprocessors they make these days. :-P T -- To err is human; to forgive is not our policy. -- Samuel Adler
Feb 23 2012
parent "foobar" <foo bar.com> writes:
On Friday, 24 February 2012 at 05:48:51 UTC, H. S. Teoh wrote:
 On Fri, Feb 24, 2012 at 06:05:20PM +1300, James Miller wrote:
 [...]
 My ongoing quest for productivity has led me to believe that, 
 unless
 you want to be tied to a technology, back to basics is the 
 best way.
That's an interesting observation. I have to agree.
 I personally believe that any set of tools should be made 
 thinking
 about the use case: "What if this person was developing using a
 Tektronix 4014?", I'm not saying that we should still be 
 coding to 30
 year old terminals, but the idea is that somebody might not 
 having a
 gui should not immediately be a blocker.
This reminds me of a very insightful quote I found online a while ago: A program should be written to model the concepts of the task it performs rather than the physical world or a process because this maximizes the potential for it to be applied to tasks that are conceptually similar and, more important, to tasks that have not yet been conceived. -- Michael B. Allen
 This has been Windows' Achilles' heel for a while, many 
 products don't
 work without a gui, and therefore are difficult - or 
 impossible - to
 script. If you can provide a programmatic interface to your 
 system,
 then you have just allowed a ton more products to be made, at 
 no extra
 cost to you.
It's exactly as I quoted above: by limiting yourself to a GUI, you have limited the applicability of your program, even if what the program actually *does* is not inherently related to a GUI.
 Clang has built-in support for auto-completion and syntax 
 analysis and
 the front-end is even nicely packaged into a library, so I now 
 have
 C/C++/Objective-C, context-aware, accurate completion in vim, 
 through
 the vim plugin clang-complete, this was not made by the people 
 at
 Clang, they just exposed the functionality (by the way, XCode 
 uses the
 same system, and Code::Blocks is moving their code-model to it 
 too).
"This maximizes the potential for it to be applied ... to tasks that have not yet been conceived." :-) [...]
 * Programming using `cat` is not recommended.**
 ** Even though /real/ programmers use `cat`
[...] Oh? I thought *real* real programmers use a soldering iron, a pair of tweezers, a magnifying glass, and really *really* steady hands... Tricky things to program, those new-fangled nanometer-scale microprocessors they make these days. :-P T
Clearly, the quote above is misapplied, since Clang's applicability has everything to do with its good modular design and its API and nothing to do with the argument over GUI vs. CLI. In fact, a CLI forces its own set of limitations on the program.
Feb 25 2012
prev sibling next sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2012-02-17 02:49:40 +0000, Walter Bright <newshound2 digitalmars.com> said:

 Given:
 
      class A { void foo() { } }
      class B : A { override pure void foo() { } }
 
 This works great, because B.foo is covariant with A.foo, meaning it can 
 "tighten", or place more restrictions, on foo. But:
 
      class A { pure void foo() { } }
      class B : A { override void foo() { } }
 
 fails, because B.foo tries to loosen the requirements, and so is not covariant.
 
 Where this gets annoying is when the qualifiers on the base class 
 function have to be repeated on all its overrides. I ran headlong into 
 this when experimenting with making the member functions of class 
 Object pure.
 
 So it occurred to me that an overriding function could *inherit* the 
 qualifiers from the overridden function. The qualifiers of the 
 overriding function would be the "tightest" of its explicit qualifiers 
 and its overridden function qualifiers. It turns out that most 
 functions are naturally pure, so this greatly eases things and 
 eliminates annoying typing.
 
 I want do to this for  safe, pure, nothrow, and even const.
 
 I think it is semantically sound, as well. The overriding function body 
 will be semantically checked against this tightest set of qualifiers.
 
 What do you think?
Seems like a good idea to me. But I think you should make sure error messages mentioning an implied inherited attribute say which base-class function the attribute was inherited from. For instance:

    override void foo() { impure(); }
    // -> error: cannot call impure() in pure function (purity inherited from A.foo)

I think such messages will ease code maintenance, because if you later edit foo() you might easily forget it is implicitly pure. With this message, if you somehow need to remove purity, you know where to look.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
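A fuller sketch of the scenario described above, assuming the proposed inheritance is in place; 'impure' is a made-up helper and the diagnostic wording is only illustrative:

     void impure() { }   // not marked pure, so unusable from a pure function

     class A { pure void foo() { } }

     class B : A {
         override void foo()    // 'pure' inherited from A.foo under the proposal
         {
             impure();  // -> error: pure function 'B.foo' cannot call impure
                        //    function 'impure' (purity inherited from A.foo)
         }
     }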
Feb 16 2012
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 02/17/2012 03:49 AM, Walter Bright wrote:
 Given:

 class A { void foo() { } }
 class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

 class A { pure void foo() { } }
 class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class
 function have to be repeated on all its overrides. I ran headlong into
 this when experimenting with making the member functions of class Object
 pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most functions
 are naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body
 will be semantically checked against this tightest set of qualifiers.

 What do you think?
Yes, please!
Feb 16 2012
prev sibling next sibling parent "F i L" <witte2008 gmail.com> writes:
 What do you think?
Sounds good! Only, like Michel said, please make errors output helpful hints as well. I can see why David is concerned.
Feb 16 2012
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/16/12 8:49 PM, Walter Bright wrote:
 Given:

 class A { void foo() { } }
 class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

 class A { pure void foo() { } }
 class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class
 function have to be repeated on all its overrides. I ran headlong into
 this when experimenting with making the member functions of class Object
 pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most functions
 are naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body
 will be semantically checked against this tightest set of qualifiers.

 What do you think?
I thought about this for a while and it seems to work well. The maintenance scenarios have already been discussed (people add or remove some attribute or qualifier) and I don't see ways in which things become inadvertently broken.

The const qualifier is a bit different because it allows overloading. Attention must be paid there so only the appropriate overload is overridden.

Congratulations, Walter, for a great idea. Inference is definitely the way to go.


Andrei
Feb 16 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/17/2012 08:34 AM, Andrei Alexandrescu wrote:
 On 2/16/12 8:49 PM, Walter Bright wrote:
 Given:

 class A { void foo() { } }
 class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

 class A { pure void foo() { } }
 class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class
 function have to be repeated on all its overrides. I ran headlong into
 this when experimenting with making the member functions of class Object
 pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most functions
 are naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body
 will be semantically checked against this tightest set of qualifiers.

 What do you think?
I thought about this for a while and seems to work well. The maintenance scenarios have already been discussed (people add or remove some attribute or qualifier) and I don't see ways in which things become inadvertently broken.
I imagine that some contrived example could be devised that uses __traits(compiles, ...) inside a pure function. But I don't think avoiding such scenarios is worthwhile.
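One way such a contrived case might look (a sketch only; it assumes __traits(compiles, ...) takes the purity of the enclosing context into account, and 'g'/'touchGlobal' are made-up names):

     int g;
     void touchGlobal() { ++g; }   // not pure

     class A { pure void foo() { } }

     class B : A {
         override void foo()       // implicitly pure under the proposal
         {
             // In a non-overriding, unannotated function this check is true;
             // here the call would not compile in a pure context, so the
             // static branch silently flips.
             static if (__traits(compiles, touchGlobal()))
             {
                 // ...
             }
         }
     }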
 The const qualifier is a bit different because it allows overloading.
 Attention must be paid there so only the appropriate overload is
 overridden.
Introducing a new overload against const in a subclass is illegal:

class C{
    void foo(){}
}
class D : C{
    override void foo(){}
    override void foo()const{}
}

Error: D.foo multiple overrides of same function
 Congratulations Walter for a great idea. Inference is definitely the way
 to go.


 Andrei
Feb 17 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/17/2012 02:33 PM, Timon Gehr wrote:
 Introducing a new overload against const in a subclass is illegal:

 class C{
      void foo(){}
 }
 class D : C{
      override void foo(){}
      override void foo()const{}
 }

 Error: D.foo multiple overrides of same function
Oops... I meant

class C{
    void foo(){}
}
class D:C{
    override void foo(){}
    void foo()const{}
}

The error message is the same though.
Feb 17 2012
next sibling parent reply kenji hara <k.hara.pg gmail.com> writes:
I think this is a current implementation problem.

In this case, just `override void foo()` in class D should override
the method in C.
And `void foo()const` should be a new overlodad of foo.

Kenji Hara

2012/2/17 Timon Gehr <timon.gehr gmx.ch>:
 On 02/17/2012 02:33 PM, Timon Gehr wrote:
 Introducing a new overload against const in a subclass is illegal:

 class C{
     void foo(){}
 }
 class D : C{
     override void foo(){}
     override void foo()const{}
 }

 Error: D.foo multiple overrides of same function
Oops... I meant class C{ void foo(){} } class D:C{ override void foo(){} void foo()const{} } The error message is the same though.
Feb 17 2012
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 02/17/2012 03:00 PM, kenji hara wrote:
 I think this is a current implementation problem.

 In this case, just `override void foo()` in class D should override
 the method in C.
 And `void foo()const` should be a new overlodad of foo.

 Kenji Hara
Walter has stated that this is by design. http://d.puremagic.com/issues/show_bug.cgi?id=3757
Feb 17 2012
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Agreed. Timon, could you please submit to bugzilla?

BTW I referred to the converse problem:

class C {
   void foo() {}
   void foo() const {}
}

class D : C {
   void foo() const {} // should only override the const overload
}


Andrei

On 2/17/12 8:00 AM, kenji hara wrote:
 I think this is a current implementation problem.

 In this case, just `override void foo()` in class D should override
 the method in C.
 And `void foo()const` should be a new overlodad of foo.

 Kenji Hara

 2012/2/17 Timon Gehr<timon.gehr gmx.ch>:
 On 02/17/2012 02:33 PM, Timon Gehr wrote:
 Introducing a new overload against const in a subclass is illegal:

 class C{
      void foo(){}
 }
 class D : C{
      override void foo(){}
      override void foo()const{}
 }

 Error: D.foo multiple overrides of same function
Oops... I meant class C{ void foo(){} } class D:C{ override void foo(){} void foo()const{} } The error message is the same though.
Feb 17 2012
next sibling parent "Timon Gehr" <timon.gehr gmx.ch> writes:
On Friday, 17 February 2012 at 16:17:23 UTC, Andrei Alexandrescu 
wrote:
 Agreed. Timon, could you please submit to bugzilla?
Filed as an enhancement: http://d.puremagic.com/issues/show_bug.cgi?id=7534
 BTW I referred to the converse problem:

 class C {
   void foo() {}
   void foo() const {}
 }

 class D : C {
   void foo() const {} // should only override the const overload
 }


 Andrei
OK. Note that alias C.foo foo; inside D's body is required in order to make the code compile by inheriting the overload.
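Spelled out, the working version of that example would look something like this (a sketch; the comments describe the intent, not exact compiler output):

     class C {
         void foo() { }
         void foo() const { }
     }

     class D : C {
         alias C.foo foo;               // bring C's overload set into D; without
                                        // this the mutable foo() is hidden and the
                                        // code does not compile
         override void foo() const { }  // overrides only the const overload
     }

     void test(D d) {
         d.foo();                       // still dispatches to C.foo() via the alias
     }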
Feb 17 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
Le 17/02/2012 17:17, Andrei Alexandrescu a écrit :
 Agreed. Timon, could you please submit to bugzilla?

 BTW I referred to the converse problem:

 class C {
 void foo() {}
 void foo() const {}
 }

 class D : C {
 void foo() const {} // should only override the const overload
 }


 Andrei

 On 2/17/12 8:00 AM, kenji hara wrote:
 I think this is a current implementation problem.

 In this case, just `override void foo()` in class D should override
 the method in C.
 And `void foo()const` should be a new overlodad of foo.

 Kenji Hara

 2012/2/17 Timon Gehr<timon.gehr gmx.ch>:
 On 02/17/2012 02:33 PM, Timon Gehr wrote:
 Introducing a new overload against const in a subclass is illegal:

 class C{
 void foo(){}
 }
 class D : C{
 override void foo(){}
 override void foo()const{}
 }

 Error: D.foo multiple overrides of same function
Oops... I meant class C{ void foo(){} } class D:C{ override void foo(){} void foo()const{} } The error message is the same though.
I guess that in such a case, const should be made explicit to get rid of ambiguous code.
Feb 18 2012
prev sibling parent reply kenji hara <k.hara.pg gmail.com> writes:
I have thought of a reverse case.

class C
{
     safe const void f(){}
}
class D : C
{
    override void f(){} // inferred as  safe const?
        // 'override' keyword must be required for override method?
    void f(){}          //  system and mutable?
        // Adding mutable version in derived class should work?
        // Does no override keyword always mean "new root of overriding"?
}

I think the lack of the 'override' keyword (filed as bug 3836) should
become an error, without a deprecation phase. Otherwise the
following case will be allowed.

class C
{
    const void f(){}
}
class D : C
{
    void f(){}
        // in 2.058  Error: function test.D.f of type void() overrides but is not
        //                  covariant with test.C.f of type const void()
        // in 2.059? overrides C.f implicitly!!
        //   Although bug 3836 will be fixed with deprecation, the -d
        //   option allows this annoying overriding.
        //   It is worse than 2.058 and before.
}

Kenji Hara

2012/2/17 kenji hara <k.hara.pg gmail.com>:
 I think this is a current implementation problem.

 In this case, just `override void foo()` in class D should override
 the method in C.
 And `void foo()const` should be a new overlodad of foo.

 Kenji Hara

 2012/2/17 Timon Gehr <timon.gehr gmx.ch>:
 On 02/17/2012 02:33 PM, Timon Gehr wrote:
 Introducing a new overload against const in a subclass is illegal:

 class C{
     void foo(){}
 }
 class D : C{
     override void foo(){}
     override void foo()const{}
 }

 Error: D.foo multiple overrides of same function
Oops... I meant class C{ void foo(){} } class D:C{ override void foo(){} void foo()const{} } The error message is the same though.
Feb 17 2012
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/17/12 8:13 AM, kenji hara wrote:
 I think the lack of 'override' keyword (filed as bug 3836) should
 become an error, without the phase of deprecating it. Otherwise
 following case will be allowed.
Yes. Walter? Andrei
Feb 17 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 17/02/2012 17:19, Andrei Alexandrescu a écrit :
 On 2/17/12 8:13 AM, kenji hara wrote:
 I think the lack of 'override' keyword (filed as bug 3836) should
 become an error, without the phase of deprecating it. Otherwise
 following case will be allowed.
Yes. Walter? Andrei
I'm surprised this isn't even mentioned in http://drdobbs.com/blogs/cpp/232601305

I definitely don't think that pushing stuff like that - I suspect for ego reasons - while ignoring some flaws of the idea is a good way to proceed. This may even be harmful for the language in the long run.

With no override keyword, a function can just explode in your face for no apparent reason in the source code you are looking at. This isn't an issue we should ignore.

This has a pretty simple solution: don't inherit those attributes if override isn't present. In the long run, don't allow overriding without the override keyword?
Feb 24 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/24/2012 3:22 AM, deadalnix wrote:
 Le 17/02/2012 17:19, Andrei Alexandrescu a écrit :
 On 2/17/12 8:13 AM, kenji hara wrote:
 I think the lack of 'override' keyword (filed as bug 3836) should
 become an error, without the phase of deprecating it. Otherwise
 following case will be allowed.
Yes. Walter? Andrei
I'm surprised this isn't even mentionned in http://drdobbs.com/blogs/cpp/232601305 I definitively don't think that pushing stuff like that - I'm suspecting for ego reasons - ignoring some flaw of the idea is a good way to proceed. This even may be armfull for the language on the long run. With no override keyword, function can just explode on your face for no aparent reason in the source code you are lookign at. This isn't an issue we should ignore. This has a pretty simple solution : don't inherit thoses attributes of override isn't present. On the long run, don't allow override without override keyword ?
Not using override is currently deprecated. Eventually, it will be required. Doing this precipitously breaks existing code without allowing people plenty of time to upgrade their code. This annoys people, and results in them considering D "unstable" and "unusable". I know that some do not see it as a problem to regularly introduce breaking changes and pull the rug out from under people every month. But I think that is a recipe for disaster.
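For reference, the kind of code this staging is about (a sketch; the exact compiler message is not quoted here):

     class A { void foo() { } }

     class B : A {
         void foo() { }   // today: compiles, with a warning/deprecation message
                          // about the missing 'override'; once the deprecation
                          // runs its course, this becomes a hard error
     }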
Feb 25 2012
next sibling parent deadalnix <deadalnix gmail.com> writes:
Le 26/02/2012 00:02, Walter Bright a écrit :
 On 2/24/2012 3:22 AM, deadalnix wrote:
 Le 17/02/2012 17:19, Andrei Alexandrescu a écrit :
 On 2/17/12 8:13 AM, kenji hara wrote:
 I think the lack of 'override' keyword (filed as bug 3836) should
 become an error, without the phase of deprecating it. Otherwise
 following case will be allowed.
Yes. Walter? Andrei
I'm surprised this isn't even mentionned in http://drdobbs.com/blogs/cpp/232601305 I definitively don't think that pushing stuff like that - I'm suspecting for ego reasons - ignoring some flaw of the idea is a good way to proceed. This even may be armfull for the language on the long run. With no override keyword, function can just explode on your face for no aparent reason in the source code you are lookign at. This isn't an issue we should ignore. This has a pretty simple solution : don't inherit thoses attributes of override isn't present. On the long run, don't allow override without override keyword ?
Not using override is currently deprecated. Eventually, it will be required. Doing this precipitously breaks existing code without allowing people plenty of time to upgrade their code. This annoys people, and results in them considering D "unstable" and "unusable". I know that some do not see it as a problem to regularly introduce breaking changes and pull the rug out from under people every month. But I think that is a recipe for disaster.
True. This is why I stated « in the long run ». The solution to that is, IMO, a standard process to deprecate and replace a feature, with a known period of time, and a page on the website to announce this. Eventually, someday, the codebase will be so big that breaking changes will not be an option anymore.
Feb 26 2012
prev sibling parent "Daniel Murphy" <yebblies nospamgmail.com> writes:
"Walter Bright" <newshound2 digitalmars.com> wrote in message 
news:jibpa0$vt$1 digitalmars.com...
 Not using override is currently deprecated. Eventually, it will be 
 required.
It's still in the 'warning' stage. It would be nice to know when it's actually going to become properly deprecated.
Feb 26 2012
prev sibling next sibling parent "Martin Nowak" <dawg dawgfoto.de> writes:
 No. Absolutely not. I hate the fact that C++ does this with virtual. It  
 makes
 it so that you have to constantly look at the base classes to figure out  
 what's
 virtual and what isn't. It harms maintenance and code understandability.  
 And
 now you want to do that with  safe, pure, nothrow, and const? Yuck.
It's different from virtual. Virtual is an implicitly inherited loosening attribute while  safe, pure, nothrow and const are restricting.

It could be potentially confusing when introducing new overloads. But that is also detected easily.

class Base
{
    void foo() const { }
}

class Derived : Base
{
    override void foo() { }

    void foo() const { }
}
Feb 17 2012
prev sibling next sibling parent reply Michal Minich <michal.minich gmail.com> writes:
V Thu, 16 Feb 2012 18:49:40 -0800, Walter Bright wrote:

 Given:
 
      class A { void foo() { } }
      class B : A { override pure void foo() { } }
 
 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo.
Will the 'inheritance' of attributes work for interfaces too?

     interface I { void foo()  safe pure nothrow const; }
     class B : I { void foo() { } }  // is it  safe pure nothrow const ?
Feb 17 2012
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Feb 17, 2012 at 05:27:11PM +0000, Michal Minich wrote:
[...]
 Will the 'inheritance' of attributes work for interfaces too?
 
      interface I { void foo()  safe pure nothrow const; }
      class B : I { void foo() { } }  // is it  safe pure nothrow const ?
I think it would make sense to do this. --T
Feb 17 2012
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/17/2012 9:27 AM, Michal Minich wrote:
 Will the 'inheritance' of attributes work for interfaces too?

       interface I { void foo()  safe pure nothrow const; }
       class B : I { void foo() { } }  // is it  safe pure nothrow const ?
Yes.
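A minimal sketch of what that implies, assuming the proposal as described ('g' is a made-up mutable global used only to trigger the check):

     int g;

     interface I { void foo() @safe pure nothrow const; }

     class B : I {
         void foo()   // checked as if it were @safe pure nothrow const
         {
             // ++g;  // would be rejected: a pure function cannot access
                      // the mutable global 'g'
         }
     }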
Feb 17 2012
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 16 Feb 2012 21:49:40 -0500, Walter Bright  
<newshound2 digitalmars.com> wrote:

 Given:

      class A { void foo() { } }
      class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can  
 "tighten", or place more restrictions, on foo. But:

      class A { pure void foo() { } }
      class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not  
 covariant.

 Where this gets annoying is when the qualifiers on the base class  
 function have to be repeated on all its overrides. I ran headlong into  
 this when experimenting with making the member functions of class Object  
 pure.

 So it occurred to me that an overriding function could *inherit* the  
 qualifiers from the overridden function. The qualifiers of the  
 overriding function would be the "tightest" of its explicit qualifiers  
 and its overridden function qualifiers. It turns out that most functions  
 are naturally pure, so this greatly eases things and eliminates annoying  
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body  
 will be semantically checked against this tightest set of qualifiers.

 What do you think?
I think it feels like a (welcome) about-face from previous stances. But I share some concern with others in that we are separating the attributes of a function from its declaration to the point where we need a tool to determine what the exact attributes of a function are. I shudder to think about how I would understand some of the CSS I have to deal with in my daily work if I didn't have Firebug. I hope D doesn't become similar.

Of course, if we get to the point where DDoc is full-featured, one will almost never have to look at function declarations when using them.

Also, I like how you *can* repeat the attributes if it's necessary to. Could there be a compiler option for requiring overriding functions to repeat base attributes? Or at least print out where they are while compiling? At least then you can see where your attributes aren't repeated.

-Steve
Feb 17 2012
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 17/02/2012 03:49, Walter Bright a écrit :
 Given:

 class A { void foo() { } }
 class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

 class A { pure void foo() { } }
 class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class
 function have to be repeated on all its overrides. I ran headlong into
 this when experimenting with making the member functions of class Object
 pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most functions
 are naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body
 will be semantically checked against this tightest set of qualifiers.

 What do you think?
Walter, I think you've got the const qualifier wrong. const does not qualify the method, but the hidden parameter "this".

I don't think this is a good idea for const/immutable, simply because you may want to have both defined, and it leads to ambiguity.

However, for pure,  safe, nothrow and basically any qualifier that effectively qualifies the function, it is a great idea.

BTW, to keep the source code understandable, this should be enabled only if the override keyword is present. So if you see a function with override, you'll know those qualifiers may be in effect. If override isn't present, the current behaviour should be preserved.
Feb 18 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/18/2012 03:46 PM, deadalnix wrote:
 Le 17/02/2012 03:49, Walter Bright a écrit :
 Given:

 class A { void foo() { } }
 class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

 class A { pure void foo() { } }
 class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class
 function have to be repeated on all its overrides. I ran headlong into
 this when experimenting with making the member functions of class Object
 pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most functions
 are naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body
 will be semantically checked against this tightest set of qualifiers.

 What do you think?
Walter, I think you get the const qualifier wrong. const does not qualify the method, but the hidden parameter "this".
Conceptually, close. Effectively, no. Const methods are the only place in the language where 'contravariant' overrides are allowed.
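As a minimal sketch of the "contravariant" case referred to here (class names made up for illustration, and assuming current dmd accepts it as described):

class A {
    void foo() { }            // may mutate this
}

class B : A {
    // the override promises strictly more - it will not mutate this -
    // so it can safely stand in for the mutable base method
    override void foo() const { }
}

The reverse direction, dropping const in the override, is the covariance error this thread is about.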
 I don't think this is a good idea for const/immutable . Simply because
 you may want to have both defined, and it lead to ambiguity.
With the current overriding/overloading rules I see no possibility for ambiguity.

This is illegal:

class C {
    void foo() {}
}
class D : C {
    override void foo() {}
    void foo() const {}
}

This is unambiguous:

class C {
    void foo() {}
    void foo() const {}
}
class D : C {
    alias C.foo foo;
    override void foo() {} // overrides first overload
}

A minor issue I see: possible hijacking scenario:

abstract class C {
    void foo();
    void foo() const { ... }
}
class D : C {
    alias C.foo foo; // explicitly loosen hijacking prevention
    override void foo() { (cast(const)this).foo(); }
}

"we don't need a non-const overload..." =>

abstract class C {
    void foo() const { ... }
}
class D : C {
    alias C.foo foo; // explicitly loosen hijacking prevention
    override void foo() { (cast(const)this).foo(); } // foo()const hijacked
}

Error under old rules, hijacking that introduces infinite recursion under new rules.
 However, for pure,  safe, nothrow and basically any qualifier that
 effectively qualify the function, it is a great idea.
For them, it is certainly safe. It is questionable how large the effective benefit is for const, since the const qualifier would be inherited for the method only, but not for its parameters.
 BTW, to keep the source code understandable, this should be enabled only
 if the overriden keyword is present. So if you see a function with
 overriden, you'll know about thoses qualifier possibly being present. If
 the overriden isn't present, the current behaviour should be preserved.
'override' will be mandatory soon anyway.
Feb 18 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 18/02/2012 16:04, Timon Gehr a écrit :
 For them, it is certainly safe. It is questionable how large the
 effective benefit is for const, since the const qualifier would be
 inherited for the method only, but not for its parameters.
The const qualifier NEVER qualifies a function. This is a misconception. In what we call a const function, what is const is the hidden parameter "this", not the function.
Feb 18 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/18/2012 10:06 PM, deadalnix wrote:
 Le 18/02/2012 16:04, Timon Gehr a écrit :
 For them, it is certainly safe. It is questionable how large the
 effective benefit is for const, since the const qualifier would be
 inherited for the method only, but not for its parameters.
The const qualifier does NEVER qualify a function. This is a misconception.
I don't care whether or not it is a misconception. It is how the language is defined. If you want to change this, file an enhancement request.
 In what we call const function, what is const is the
 hidden parameter "this", not the function.
Both are const. Ask the compiler.

struct S {
    void foo() const {
        static assert(is(typeof(this) == const));
        static assert(is(typeof(foo) == const));
    }
}

In fact, the incident that the method is const is what enables contravariant overriding of mutable/immutable by const methods. (This is not supported for the explicit formal parameter types.)
Feb 18 2012
next sibling parent Gor Gyolchanyan <gor.f.gyolchanyan gmail.com> writes:
This doesn't make any sense. The const-ness of *this* is the logical
and obvious reason why methods overload on const. *this* is just as
good and valid a parameter as all the other ones, despite the fact
that it's hidden. The signatures of two functions differ when the
*this* parameter has different types. There we go: sensible
overloading, and const never refers to the function itself.
The fact that it currently does is a big bug, IMO.

On Sun, Feb 19, 2012 at 1:18 AM, Timon Gehr <timon.gehr gmx.ch> wrote:
 On 02/18/2012 10:06 PM, deadalnix wrote:
 Le 18/02/2012 16:04, Timon Gehr a écrit :
 For them, it is certainly safe. It is questionable how large the
 effective benefit is for const, since the const qualifier would be
 inherited for the method only, but not for its parameters.
The const qualifier does NEVER qualify a function. This is a misconception.
I don't care whether or not it is a misconception. It is how the language is defined. If you want to change this, file an enhancement request.


 In what we call const function, what is const is the
 hhidden parameter "this", not the function.
Both are const. Ask the compiler.

struct S {
    void foo() const {
        static assert(is(typeof(this) == const));
        static assert(is(typeof(foo) == const));
    }
}

In fact, the incident that the method is const is what enables contravariant overriding of mutable/immutable by const methods. (This is not supported for the explicit formal parameter types.)
--
Bye,
Gor Gyolchanyan.
Feb 20 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
Le 18/02/2012 22:18, Timon Gehr a écrit :
 On 02/18/2012 10:06 PM, deadalnix wrote:
 Le 18/02/2012 16:04, Timon Gehr a écrit :
 For them, it is certainly safe. It is questionable how large the
 effective benefit is for const, since the const qualifier would be
 inherited for the method only, but not for its parameters.
The const qualifier does NEVER qualify a function. This is a misconception.
I don't care whether or not it is a misconception. It is how the language is defined. If you want to change this, file an enhancement request.
 In what we call const function, what is const is the
 hidden parameter "this", not the function.
Both are const. Ask the compiler. struct S{ void foo()const{ static assert(is(typeof(this)==const)); static assert(is(typeof(foo)==const)); } } In fact, the incident that the method is const is what enables contravariant overriding of mutable/immutable by const methods. (This is not supported for the explicit formal parameter types.)
The spec is inconsistent.

Either the function isn't const and only this is const. In that case the inference here doesn't make any sense: the const and non-const versions are different functions, because their parameters are different.

Or we consider that const qualifies both the hidden parameter AND the function. In that case, you mustn't be able to define both a const and a non-const version of a function - just like a function is either pure or impure, and you cannot define the same function twice, once pure and once impure. This has the advantage of improving covariance in overrides, but if you can define both const and non-const versions of a function, then you end up with a crazy situation where an override changes meaning when a function is added to the base class - with no warning, no errors, just a completely fucked-up program and hours of happy digging in the code to clean it up.
Feb 24 2012
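As a side note for readers following the overload argument: current D does accept both versions side by side, which is what makes const trickier to inherit than pure or nothrow. A minimal sketch (the Container type and its member are made up for illustration):

class Container {
    private int[] items;

    // picked when the object is mutable
    int[] data() { return items; }

    // picked when the object is const
    const(int)[] data() const { return items; }
}

Timon's "unambiguous" example earlier in the thread relies on exactly this pair.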
prev sibling next sibling parent reply kenji hara <k.hara.pg gmail.com> writes:
After some thought, I agree that inheritance of pure,  safe, and
nothrow is a good feature.
But I disagree with const inference, because the const attribute interacts
with overloading.

Kenji Hara

2012/2/17 Walter Bright <newshound2 digitalmars.com>:
 Given:

     class A { void foo() { } }
     class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

     class A { pure void foo() { } }
     class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class function
 have to be repeated on all its overrides. I ran headlong into this when
 experimenting with making the member functions of class Object pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the overriding
 function would be the "tightest" of its explicit qualifiers and its
 overridden function qualifiers. It turns out that most functions are
 naturally pure, so this greatly eases things and eliminates annoying typing.
 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body will
 be semantically checked against this tightest set of qualifiers.

 What do you think?
Feb 18 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/18/2012 6:49 AM, kenji hara wrote:
 After some thoughts, I agree that inheritance of pure  safe, and
 nothrow is good feature.
 But I disagree to const inference, because const attribute interacts
 with overloadings.
The const inheritance *only* happens if otherwise you'd get a covariance error. It does not change the meaning of any existing code that compiled successfully.
Feb 18 2012
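To restate that rule concretely, here is a sketch of the *proposed* behaviour (not something any released compiler does):

class A {
    void foo() const { }
}

class B : A {
    // today: a covariance error, because the override drops const.
    // under the proposal: const would be inherited from A.foo, and this
    // body would then be type-checked as if it had been written const.
    override void foo() { }
}

Where no covariance error would occur, nothing changes.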
parent reply deadalnix <deadalnix gmail.com> writes:
Le 18/02/2012 19:25, Walter Bright a écrit :
 On 2/18/2012 6:49 AM, kenji hara wrote:
 After some thoughts, I agree that inheritance of pure  safe, and
 nothrow is good feature.
 But I disagree to const inference, because const attribute interacts
 with overloadings.
The const inheritance *only* happens if otherwise you'd get a covariance error. It does not change the meaning of any existing code that compiled successfully.
Yes, but then, if a method is added to the base class, the behavior of all overriding methods changes, silently. It means nasty bugs, and hours spent going through the whole codebase reviewing manually which method we want to override in each case.

The current state of D is very inconsistent on this topic and it leads to many quirks.
Feb 24 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/24/2012 01:41 PM, deadalnix wrote:
 Le 18/02/2012 19:25, Walter Bright a écrit :
 On 2/18/2012 6:49 AM, kenji hara wrote:
 After some thoughts, I agree that inheritance of pure  safe, and
 nothrow is good feature.
 But I disagree to const inference, because const attribute interacts
 with overloadings.
The const inheritance *only* happens if otherwise you'd get a covariance error. It does not change the meaning of any existing code that compiled successfully.
yes but then, if a method is added to the base class, you will have a changement of behavior of all overriden method, silently.
No. The compiler will explode in your face.
 It means nasty bugs, an hours spent to go throw the whole codebase and
 review manually which method do we want to override in each cases.

 The current state of D is very inconsistent on this topic and it lead to
 many quirks.
Not at all. Have you actually experienced such problems? D is explicitly designed to avoid this kind of scenarios.
Feb 25 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 25/02/2012 12:40, Timon Gehr a écrit :
 On 02/24/2012 01:41 PM, deadalnix wrote:
 Le 18/02/2012 19:25, Walter Bright a écrit :
 On 2/18/2012 6:49 AM, kenji hara wrote:
 After some thoughts, I agree that inheritance of pure  safe, and
 nothrow is good feature.
 But I disagree to const inference, because const attribute interacts
 with overloadings.
The const inheritance *only* happens if otherwise you'd get a covariance error. It does not change the meaning of any existing code that compiled successfully.
yes but then, if a method is added to the base class, you will have a changement of behavior of all overriden method, silently.
No. The compiler will explode in your face.
class A {
    void fun() const { ... }
}

class B : A {
    override void fun() { ... }
}

Now I change the class A to become:

class A {
    void fun() const { ... }
    void fun() { ... }
}

And suddenly, the override doesn't override the same thing anymore. Which is unacceptable.
Feb 25 2012
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/25/2012 06:53 PM, deadalnix wrote:
 Le 25/02/2012 12:40, Timon Gehr a écrit :
 On 02/24/2012 01:41 PM, deadalnix wrote:
 Le 18/02/2012 19:25, Walter Bright a écrit :
 On 2/18/2012 6:49 AM, kenji hara wrote:
 After some thoughts, I agree that inheritance of pure  safe, and
 nothrow is good feature.
 But I disagree to const inference, because const attribute interacts
 with overloadings.
The const inheritance *only* happens if otherwise you'd get a covariance error. It does not change the meaning of any existing code that compiled successfully.
yes but then, if a method is added to the base class, you will have a changement of behavior of all overriden method, silently.
No. The compiler will explode in your face.
class A { void fun() const { ... } } class B : A { override void fun() { ... } } Now I change the class A to become : class A { void fun() const { ... } void fun() { ... } } And suddenly, the override doesn't override the same thing anymore. Which is unnacceptable.
You didn't try to actually compile this, did you? ;D
Feb 25 2012
next sibling parent reply "so" <so so.so> writes:
On Saturday, 25 February 2012 at 17:57:54 UTC, Timon Gehr wrote:
 class A {
     void fun() const { ... }
 }

 class B : A {
     override void fun() { ... }
 }

 Now I change the class A to become :

 class A {
     void fun() const { ... }
     void fun() { ... }
 }

 And suddenly, the override doesn't override the same thing 
 anymore.
 Which is unnacceptable.
You didn't try to actually compile this, did you? ;D
You can't compile that now, can you?
Feb 25 2012
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 02/25/2012 09:05 PM, so wrote:
 On Saturday, 25 February 2012 at 17:57:54 UTC, Timon Gehr wrote:
 class A {
     void fun() const { ... }
 }

 class B : A {
     override void fun() { ... }
 }

 Now I change the class A to become :

 class A {
     void fun() const { ... }
     void fun() { ... }
 }

 And suddenly, the override doesn't override the same thing anymore.
 Which is unnacceptable.
You didn't try to actually compile this, did you? ;D
You can't compile that now, can you?
Exactly, it won't compile. It was an explicit measure to prevent this form of function hijacking.
Feb 25 2012
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/25/12 11:57 AM, Timon Gehr wrote:
 On 02/25/2012 06:53 PM, deadalnix wrote:
 class A {
 void fun() const { ... }
 }

 class B : A {
 override void fun() { ... }
 }

 Now I change the class A to become :

 class A {
 void fun() const { ... }
 void fun() { ... }
 }

 And suddenly, the override doesn't override the same thing anymore.
 Which is unnacceptable.
You didn't try to actually compile this, did you? ;D
Apparently me neither. Andrei
Feb 25 2012
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/25/12 11:53 AM, deadalnix wrote:
 class A {
 void fun() const { ... }
 }

 class B : A {
 override void fun() { ... }
 }

 Now I change the class A to become :

 class A {
 void fun() const { ... }
 void fun() { ... }
 }

 And suddenly, the override doesn't override the same thing anymore.
 Which is unnacceptable.
I agree that that's a problem. Andrei
Feb 25 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2012 9:53 AM, deadalnix wrote:
 And suddenly, the override doesn't override the same thing anymore. Which is
 unnacceptable.
class A {
    void fun() const { }
    void fun() { }
}

class B : A {
    override void fun() { }
}

----
dmd -c foo
foo.d(6): Error: class foo.B use of foo.A.fun() hidden by B is deprecated
Feb 25 2012
next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 25/02/2012 21:44, Walter Bright a écrit :
 On 2/25/2012 9:53 AM, deadalnix wrote:
 And suddenly, the override doesn't override the same thing anymore.
 Which is
 unnacceptable.
class A { void fun() const { } void fun() { } } class B : A { override void fun() { } } ---- dmd -c foo foo.d(6): Error: class foo.B use of foo.A.fun() hidden by B is deprecated
So, how does someone override the non-const version of the function but not the const version?
Feb 25 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/25/2012 10:28 PM, deadalnix wrote:
 Le 25/02/2012 21:44, Walter Bright a écrit :
 On 2/25/2012 9:53 AM, deadalnix wrote:
 And suddenly, the override doesn't override the same thing anymore.
 Which is
 unnacceptable.
class A { void fun() const { } void fun() { } } class B : A { override void fun() { } } ---- dmd -c foo foo.d(6): Error: class foo.B use of foo.A.fun() hidden by B is deprecated
So, how do someone override the non const version of the function but not the const version ?
By explicitly stating that he is aware of all the overloads:

class B : A {
    alias A.fun fun;
    override void fun() { }
}

Alternatively:

class B : A {
    override void fun() const { super.fun(); }
    override void fun() { }
}
Feb 25 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 25/02/2012 22:25, Timon Gehr a écrit :
 On 02/25/2012 10:28 PM, deadalnix wrote:
 Le 25/02/2012 21:44, Walter Bright a écrit :
 On 2/25/2012 9:53 AM, deadalnix wrote:
 And suddenly, the override doesn't override the same thing anymore.
 Which is
 unnacceptable.
class A { void fun() const { } void fun() { } } class B : A { override void fun() { } } ---- dmd -c foo foo.d(6): Error: class foo.B use of foo.A.fun() hidden by B is deprecated
So, how do someone override the non const version of the function but not the const version ?
By explicitly stating that he is aware of all the overloads: class B : A { alias A.fun fun; override void fun() { } } Alternatively: class B : A{ override void fun()const{super.fun();} override void fun() { } }
So, back to the example above, someone will have to go through the whole codebase and add override void fun() const { super.fun(); } all over the place to fix the broken code?

It is better, but still . . .
Feb 25 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2012 1:53 PM, deadalnix wrote:
 Le 25/02/2012 22:25, Timon Gehr a écrit :
 By explicitly stating that he is aware of all the overloads:

 class B : A {
 alias A.fun fun;
 override void fun() { }
 }

 Alternatively:

 class B : A{
 override void fun()const{super.fun();}
 override void fun() { }
 }
So, back to the example above, someone will have to go throw the whole codebase and add override void fun()const{super.fun();} all over the place to fix the broken code ?
No, he can use the alias version.
Feb 25 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 25/02/2012 22:59, Walter Bright a écrit :
 On 2/25/2012 1:53 PM, deadalnix wrote:
 Le 25/02/2012 22:25, Timon Gehr a écrit :
 By explicitly stating that he is aware of all the overloads:

 class B : A {
 alias A.fun fun;
 override void fun() { }
 }

 Alternatively:

 class B : A{
 override void fun()const{super.fun();}
 override void fun() { }
 }
So, back to the example above, someone will have to go throw the whole codebase and add override void fun()const{super.fun();} all over the place to fix the broken code ?
No, he can use the alias version.
What would it look like ?
Feb 25 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2012 2:16 PM, deadalnix wrote:
 Le 25/02/2012 22:59, Walter Bright a écrit :
 On 2/25/2012 1:53 PM, deadalnix wrote:
 Le 25/02/2012 22:25, Timon Gehr a écrit :
 By explicitly stating that he is aware of all the overloads:

 class B : A {
 alias A.fun fun;
 override void fun() { }
 }

 Alternatively:

 class B : A{
 override void fun()const{super.fun();}
 override void fun() { }
 }
So, back to the example above, someone will have to go throw the whole codebase and add override void fun()const{super.fun();} all over the place to fix the broken code ?
No, he can use the alias version.
What would it look like ?
class B : A {
    alias A.fun fun;
    override void fun() { }
}
Feb 25 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 26/02/2012 00:02, Walter Bright a écrit :
 On 2/25/2012 2:16 PM, deadalnix wrote:
 Le 25/02/2012 22:59, Walter Bright a écrit :
 On 2/25/2012 1:53 PM, deadalnix wrote:
 Le 25/02/2012 22:25, Timon Gehr a écrit :
 By explicitly stating that he is aware of all the overloads:

 class B : A {
 alias A.fun fun;
 override void fun() { }
 }

 Alternatively:

 class B : A{
 override void fun()const{super.fun();}
 override void fun() { }
 }
So, back to the example above, someone will have to go throw the whole codebase and add override void fun()const{super.fun();} all over the place to fix the broken code ?
No, he can use the alias version.
What would it look like ?
class B : A { alias A.fun fun; override void fun() { } }
OK, but the problem remains equivalent. The programmer modified, in the base class, something that is unrelated to what is currently overridden in the subclasses, and suddenly all of them are broken. They all have to be fixed with an alias or a forwarding override. I think this is an issue.

Thinking more about this, I noticed that I almost never write both a const and a non-const version of the same function when coding (either the functionality requires const or it doesn't, so the const and non-const versions would do something very different, which is confusing).

Is it common? If it is, it opens the door to limiting override possibilities when it comes to const/non-const, with the advantage of being able to infer const in way more places than is possible today. I could expand on that.

PS: Ha, nice, I got this English dictionary to work in my newsgroup client!
Feb 26 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, February 26, 2012 13:02:14 deadalnix wrote:
 Thinking more about this, I did notice that I almost never do a const
 and a non const version of the same function when coding (either the
 functionality require const or it doesn't, so the const and non const
 version will do something very different, which is confusing).
 
 Is it common ? If it is, it open the door to limiting override
 possibilities when it come to const.non const, with the advantage of
 being able to infer const in way more place than it is actually. I could
 expand about that.
It's common for some stuff. A classic example would be iterators (or ranges). If you have a const reference or pointer to a container, then the iterator (or range) that you get out of it must give you const access to the elements, whereas a non-const reference or pointer to a container should be able to give you an iterator or range with mutable access to the elements.

One place that overloading on const in D could be very useful where it's of little use in C++ would be for caching. If you called the non-const version of a function, the result could be cached (or the cached result used if it isn't dirty). The const one could also use the cached version if it isn't dirty, but if the cached value is dirty, it would have to do the calculation without caching the result, since the object is const and so can't alter the cached value.

It _is_ true, however, that there are a lot of cases where it makes no sense to have both a const and non-const version of a function.

- Jonathan M Davis
Feb 26 2012
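A minimal sketch of the caching pattern described above (the Stats class and its members are made up for illustration): the non-const overload may refresh the cache, while the const overload may only read it and otherwise recomputes.

class Stats {
    private int[] values;
    private int cachedTotal;
    private bool dirty = true;

    // non-const overload: allowed to update the cache
    int total() {
        if (dirty) {
            cachedTotal = compute();
            dirty = false;
        }
        return cachedTotal;
    }

    // const overload: may read the cache but never write it
    int total() const {
        return dirty ? compute() : cachedTotal;
    }

    private int compute() const {
        int sum = 0;
        foreach (v; values) sum += v;
        return sum;
    }
}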
parent reply deadalnix <deadalnix gmail.com> writes:
Le 26/02/2012 13:17, Jonathan M Davis a écrit :
 On Sunday, February 26, 2012 13:02:14 deadalnix wrote:
 Thinking more about this, I did notice that I almost never do a const
 and a non const version of the same function when coding (either the
 functionality require const or it doesn't, so the const and non const
 version will do something very different, which is confusing).

 Is it common ? If it is, it open the door to limiting override
 possibilities when it come to const.non const, with the advantage of
 being able to infer const in way more place than it is actually. I could
 expand about that.
It's common for some stuff. A classic example would be iterators (or ranges). If you have a const reference or pointer to a container, then the iterator (or range) that you get out of it must give you const access to the elements, whereas a non-const reference or pointer to a container should be able to give you an iterator or range with mutable access to the elements.
Can't inout help us for such an issue ?
Feb 26 2012
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, February 26, 2012 16:30:44 deadalnix wrote:
 Le 26/02/2012 13:17, Jonathan M Davis a écrit :
 On Sunday, February 26, 2012 13:02:14 deadalnix wrote:
 Thinking more about this, I did notice that I almost never do a const
 and a non const version of the same function when coding (either the
 functionality require const or it doesn't, so the const and non const
 version will do something very different, which is confusing).

 Is it common ? If it is, it open the door to limiting override
 possibilities when it come to const.non const, with the advantage of
 being able to infer const in way more place than it is actually. I could
 expand about that.

 It's common for some stuff. A classic example would be iterators (or
 ranges). If you have a const reference or pointer to a container, then
 the iterator (or range) that you get out of it must give you const access
 to the elements, whereas a non-const reference or pointer to a container
 should be able to give you an iterator or range with mutable access to
 the elements.
Probably. I'm not very well versed in inout though. But it _is_ likely that
inout solves at least some issues which would require you to overload on
const.

- Jonathan M Davis
Feb 26 2012
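For reference, a minimal sketch of the inout alternative being alluded to (the Container type is hypothetical): one inout method can stand in for the mutable/const/immutable triple by carrying the qualifier of the receiver over to the result.

class Container {
    private int[] items;

    // mutable, const and immutable receivers all use this one method;
    // the receiver's qualifier is applied to the returned slice
    inout(int)[] slice() inout {
        return items;
    }
}

void main() {
    auto m = new Container();
    int[] a = m.slice();            // mutable elements

    const Container c = new Container();
    const(int)[] b = c.slice();     // read-only elements
}

It does not remove the need for const overloads in every case (the caching example above is one where the two bodies genuinely differ), but it covers the container/range case discussed here.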
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/25/12 2:44 PM, Walter Bright wrote:
 On 2/25/2012 9:53 AM, deadalnix wrote:
 And suddenly, the override doesn't override the same thing anymore.
 Which is
 unnacceptable.
class A { void fun() const { } void fun() { } } class B : A { override void fun() { } } ---- dmd -c foo foo.d(6): Error: class foo.B use of foo.A.fun() hidden by B is deprecated
Hm, the issue there is that now both overloads of fun must be overridden. Andrei
Feb 25 2012
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, February 25, 2012 18:11:27 Andrei Alexandrescu wrote:
 On 2/25/12 2:44 PM, Walter Bright wrote:
 On 2/25/2012 9:53 AM, deadalnix wrote:
 And suddenly, the override doesn't override the same thing anymore.
 Which is
 unnacceptable.
class A { void fun() const { } void fun() { } } class B : A { override void fun() { } } ---- dmd -c foo foo.d(6): Error: class foo.B use of foo.A.fun() hidden by B is deprecated
Hm, the issue there is that now both overloads of fun must be overridden.
Except that that's already normally the case. If you had class A { void func(int i) {} void func(float i) {} } class B { override void func(int i) {} } then the float version is not in B's overload set, and you have to add an alias. class b { alias A.func func; override void func(int i) {} } Though I suppose that the difference is that with const, it's giving you an error, and with this, it isn't. Perhaps that's what your concern is. - Jonathan M Davis
Feb 25 2012
prev sibling next sibling parent reply Bruno Medeiros <brunodomedeiros+dng gmail.com> writes:
On 17/02/2012 02:49, Walter Bright wrote:
 Given:

 class A { void foo() { } }
 class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it can
 "tighten", or place more restrictions, on foo. But:

 class A { pure void foo() { } }
 class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class
 function have to be repeated on all its overrides. I ran headlong into
 this when experimenting with making the member functions of class Object
 pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most functions
 are naturally pure, so this greatly eases things and eliminates annoying
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function body
 will be semantically checked against this tightest set of qualifiers.

 What do you think?
Sounds like a good idea.

I would even add to this that it might be useful to have similar syntax that would allow defining an override method without having to specify the return type or the parameters of the overridden method. Sometimes in class hierarchies there is a lot of redundancy when overriding methods, and it could be a nice small feature to reduce that (especially for methods with lots of parameters).

class Foo {
    int num;

    override opEquals {
        if(cast(Foo) o is null)
            return false;
        return this.num == (cast(Foo) o).num;
    }

    override toString {
        return to!(string)(num);
    }
}

-- 
Bruno Medeiros - Software Engineer
Feb 23 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/23/2012 8:59 AM, Bruno Medeiros wrote:
 I would even add to this that it might be useful to have similar syntax that
 would allow to define an override method without having to specify the return
 type nor the parameters of the overridden method. Sometimes in class
hierarchies
 there is a lot of redundancy when overriding methods and it could be a nice
 small feature to reduce that (especially for methods with lots of parameters).

 class Foo {
 int num;

 override opEquals {
 if(cast(Foo) o is null)
 return false;
 return this.num == (cast(Foo) o).num;
 }

 override toString {
 return to!(string)(num);
 }

 }
Not a bad idea, but it would be problematic if there were any overloads.
Feb 23 2012
parent reply "so" <so so.so> writes:
On Thursday, 23 February 2012 at 18:32:12 UTC, Walter Bright 
wrote:

 Not a bad idea, but it would be problematic if there were any 
 overloads.
It is still applicable to return types. But I don't like the idea. If you omit the arguments and return type, you force both yourself and the reader to check the base class for everything.
Feb 23 2012
parent deadalnix <deadalnix gmail.com> writes:
Le 23/02/2012 21:22, so a écrit :
 On Thursday, 23 February 2012 at 18:32:12 UTC, Walter Bright wrote:

 Not a bad idea, but it would be problematic if there were any overloads.
It is still applicable to return types. But i don't like the idea. If you omit arguments and return type, you force both yourself and the reader to check the base class for everything.
The return type can be defined according to the function body. This makes more sense than with overloads: the return type can be covariant, and in all cases the function body will have to return the right type.

So this doesn't provide more than what we already have with auto.
Feb 24 2012
prev sibling parent reply "Jason House" <jason.james.house gmail.com> writes:
On Thursday, 23 February 2012 at 17:10:58 UTC, Bruno Medeiros 
wrote:
 Sounds like a good idea.
 I would even add to this that it might be useful to have 
 similar syntax that would allow to define an override method 
 without having to specify the return type nor the parameters of 
 the overridden method. Sometimes in class hierarchies there is 
 a lot of redundancy when overriding methods and it could be a 
 nice small feature to reduce that (especially for methods with 
 lots of parameters).

 class Foo {
 	int num;
 	
 	override opEquals {
 		if(cast(Foo) o is null)
 			return false;
 		return this.num == (cast(Foo) o).num;
 	}
 	
 	override toString {
 		return to!(string)(num);
 	}
 	
 }
I don't like omitting argument names, but removing argument types seems nice.
Feb 23 2012
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Feb 24, 2012 at 12:06:19AM +0100, Jason House wrote:
 On Thursday, 23 February 2012 at 17:10:58 UTC, Bruno Medeiros wrote:
I would even add to this that it might be useful to have similar
syntax that would allow to define an override method without
having to specify the return type nor the parameters of the
overridden method. Sometimes in class hierarchies there is a lot
of redundancy when overriding methods and it could be a nice small
feature to reduce that (especially for methods with lots of
parameters).

class Foo {
	int num;
	
	override opEquals {
		if(cast(Foo) o is null)
			return false;
		return this.num == (cast(Foo) o).num;
	}
	
	override toString {
		return to!(string)(num);
	}
	
}
I don't like omitting argument names, but removing argument types seems nice.
Omitting argument names/types is very evil. It opens up the possibility of changing the base class and introducing nasty subtle bugs in the derived class without any warning. For example:

// Original code
class Helper1 {
    void fun() { writeln("ABC"); }
}
class Helper2 {
    void fun() { writeln("DEF"); }
}
class Base {
    abstract void gun(Helper1 a, Helper2 b);
}
class Derived : Base {
    override void gun( /* implicit arguments */ ) {
        a.fun();	// calls Helper1.fun
        b.fun();	// calls Helper2.fun
    }
}

// New code
class Helper1 {
    void fun() { writeln("ABC"); }
}
class Helper2 {
    void fun() { writeln("DEF"); }
}
class Base {
    // NB: argument names switched
    abstract void gun(Helper1 b, Helper2 a);
}
class Derived : Base {
    override void gun( /* implicit arguments */ ) {
        // SEMANTICS CHANGED WITHOUT WARNING!
        a.fun();	// calls Helper2.fun
        b.fun();	// calls Helper1.fun
    }
}

Similar problems occur if argument types are switched.

As for removing argument types, we could just use 'auto'. Perhaps the compiler can interpret 'override auto' to mean 'use base class's return type' instead of merely 'infer return type from function body'.

T

-- 
"The number you have dialed is imaginary. Please rotate your phone 90 degrees and try again."
Feb 23 2012
parent "so" <so so.so> writes:
On Thursday, 23 February 2012 at 23:40:28 UTC, H. S. Teoh wrote:

 Omitting argument names/types is very evil. It opens up the 
 possibility
 of changing the base class and introducing nasty subtle bugs in 
 the
 derived class without any warning.  For example:
Good catch.
Feb 23 2012
prev sibling next sibling parent "so" <so so.so> writes:
If it can be applied to const, wouldn't it be like "const by 
convention" that you argued against?

On Friday, 17 February 2012 at 02:49:40 UTC, Walter Bright wrote:
 Given:

     class A { void foo() { } }
     class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, 
 meaning it can "tighten", or place more restrictions, on foo. 
 But:

     class A { pure void foo() { } }
     class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so 
 is not covariant.

 Where this gets annoying is when the qualifiers on the base 
 class function have to be repeated on all its overrides. I ran 
 headlong into this when experimenting with making the member 
 functions of class Object pure.

 So it occurred to me that an overriding function could 
 *inherit* the qualifiers from the overridden function. The 
 qualifiers of the overriding function would be the "tightest" 
 of its explicit qualifiers and its overridden function 
 qualifiers. It turns out that most functions are naturally 
 pure, so this greatly eases things and eliminates annoying 
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding 
 function body will be semantically checked against this 
 tightest set of qualifiers.

 What do you think?
Feb 23 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Feb 24, 2012 at 01:01:51AM +0100, F i L wrote:
 UTC, so wrote:
No one said you shouldn't use IDE or any other tool, but i don't
think it is healthy to design a language with such assumptions.
Walter himself was against this and stated why he doesn't like Java
way of doing things, one of the reason was the language was relying
on IDEs.
Well then I disagree with Walter on this as well. What's wrong with having a "standard" toolset in the same way you have standard libraries?
Um... dmd and phobos *are* the standard toolset.
 It's unrealistic to think people (at large) will be writing any sort
 of serious application outside of a modern IDE.
I do that at my day job, every single day. So do my other 300+ odd coworkers. And it's not just a single application, it's an entire embedded system complete with an OS, system-level services, database, and GUI.
 I'm not saying it's Walters job to write IDE integration, only that
 the language design shouldn't cater to the smaller use-case scenario.
It's not a smaller use-case scenario at all.
 Cleaner code is easier to read
I write quite-clean code with a text editor every day.
 and, within an IDE with tooltips, makes little difference when looking
 at the hierarchy. If you want to be hard-core about it, no one is
 stopping you from explicitly qualifying each definition.
What if you have to deal with other people's code? Which I have to do as part of my job responsibilities, and which often counts for 80-90% of my actual day-to-day work. On Fri, Feb 24, 2012 at 01:15:08AM +0100, F i L wrote:
 H. S. Teoh wrote:
In all seriousness, I think you're decoupling inherently ingrained
pieces: the language and it's tools. The same way you *need* syntax
highlighting to distinguish structure,
I don't.
wait... you don't even use Syntax Highlighting? Are you insane, you'll go blind!
I'll freely admit my eyesight is deteriorating, but if I'm insane then so are the other 300+ coworkers in my office, most of whom write code every single day with a text-editor on a Linux command-line. I'd like to believe I don't work in a mental institution. :-P [...]
True, they have their value. I don't argue with that.

But why should anyone be *forced* to use them? They're just tools.  A
language is a language (a set of syntax and grammar rules with the
associated semantics). It's not inherently tied to any tools.
a bad thing is because it's closed source and ultimately designed as [yet another] developer lock-in (just good business right?). Beyond that it's really silly not to use VS cause of all the productive features it provides.
It's not an option at my job, because the embedded system requires gcc to even compile properly. (Yes I hear the background screams about non portability. It's one of the perks of writing software for hardware that you make yourself. :-))
 MonoDevelop is catching up, but still quite a ways behind in some
 areas. No one is stopping anyone from writing code in Notepad.. but
 then, no one is stopping 3D artists from manually editing .obj files
 in Notepad either.
[...] Ahh, no wonder you have such aversion to non-IDE development. Let me just say this, once: NotePad is not a real text editor. You're absolutely right that if I, and my 300+ coworkers, have to use that nightmarish walking disaster called Notepad to write code, then we'd all have quit 10 years ago (or the company would've collapsed long ago from a non-working product). When you have a *real* text editor at your disposal, writing code is actually on par, if not better, than development in an IDE. I'd like to think that it's only because I'm a weirdo who lived past my generation and still haven't moved on from the 70's, but the fact of the matter is that there are 300 of us here in this building right now who write code with VI every single day, 5 days a week. And I find it hard to believe that we're the only ones on earth doing this. :-) T -- "A one-question geek test. If you get the joke, you're a geek: Seen on a California license plate on a VW Beetle: 'FEATURE'..." -- Joshua D. Wachs - Natural Intelligence, Inc.
Feb 23 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, February 23, 2012 21:50:27 H. S. Teoh wrote:
 Oh? I thought *real* real programmers use a soldering iron, a pair of
 tweezers, a magnifying glass, and really *really* steady hands... Tricky
 things to program, those new-fangled nanometer-scale microprocessors
 they make these days. :-P
Obligatory XKCD: http://xkcd.com/378/ :) - Jonathan M Davis
Feb 23 2012
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 23, 2012 at 10:08:40PM -0800, Jonathan M Davis wrote:
 On Thursday, February 23, 2012 21:50:27 H. S. Teoh wrote:
 Oh? I thought *real* real programmers use a soldering iron, a pair
 of tweezers, a magnifying glass, and really *really* steady hands...
 Tricky things to program, those new-fangled nanometer-scale
 microprocessors they make these days. :-P
Obligatory XKCD: http://xkcd.com/378/ :)
+1 lolz T -- Study gravitation, it's a field with a lot of potential.
Feb 24 2012
prev sibling parent reply "Jason House" <jason.james.house gmail.com> writes:
On Friday, 17 February 2012 at 02:49:40 UTC, Walter Bright wrote:
 Given:

     class A { void foo() { } }
     class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, 
 meaning it can "tighten", or place more restrictions, on foo. 
 But:

     class A { pure void foo() { } }
     class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so 
 is not covariant.

 Where this gets annoying is when the qualifiers on the base 
 class function have to be repeated on all its overrides. I ran 
 headlong into this when experimenting with making the member 
 functions of class Object pure.

 So it occurred to me that an overriding function could 
 *inherit* the qualifiers from the overridden function. The 
 qualifiers of the overriding function would be the "tightest" 
 of its explicit qualifiers and its overridden function 
 qualifiers. It turns out that most functions are naturally 
 pure, so this greatly eases things and eliminates annoying 
 typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding 
 function body will be semantically checked against this 
 tightest set of qualifiers.

 What do you think?
I'm still not convinced about this applying to const. Consider this example.

Initial code:

class A {
    void foo(int) const;
    void foo(float) const;
}

class B : A {
    alias A.foo foo;
    override void foo(int);
}

Revision to class A:

class A {
    void foo(int);
    void foo(int) const;
    void foo(float);
    void foo(float) const;
}

When the user recompiles, there will be no errors or warnings. All uses of foo(int) through a const B will revert to using the base class's implementation.
Feb 26 2012
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 02/26/2012 06:39 PM, Jason House wrote:
 On Friday, 17 February 2012 at 02:49:40 UTC, Walter Bright wrote:
 Given:

 class A { void foo() { } }
 class B : A { override pure void foo() { } }

 This works great, because B.foo is covariant with A.foo, meaning it
 can "tighten", or place more restrictions, on foo. But:

 class A { pure void foo() { } }
 class B : A { override void foo() { } }

 fails, because B.foo tries to loosen the requirements, and so is not
 covariant.

 Where this gets annoying is when the qualifiers on the base class
 function have to be repeated on all its overrides. I ran headlong into
 this when experimenting with making the member functions of class
 Object pure.

 So it occurred to me that an overriding function could *inherit* the
 qualifiers from the overridden function. The qualifiers of the
 overriding function would be the "tightest" of its explicit qualifiers
 and its overridden function qualifiers. It turns out that most
 functions are naturally pure, so this greatly eases things and
 eliminates annoying typing.

 I want do to this for  safe, pure, nothrow, and even const.

 I think it is semantically sound, as well. The overriding function
 body will be semantically checked against this tightest set of
 qualifiers.

 What do you think?
I'm still not convinced about this apply to const. Consider this example: Initial code: class A{ void foo(int) const; void foo(float) const; } class B{ alias A.foo foo; override void foo(int); } Revision to class A: class A{ void foo(int); void foo(int) const; void foo(float); void foo(float) const; } When the user recompiles, there will be no errors or warnings. All uses of foo(int) through a const B will revert to using the base class's implementation.
This is by far not the only hijacking scenario enabled by using alias to merge in the parent's overload set. Re-implementing the method and calling super is the only safe way to keep the hijacking protection.
Feb 26 2012
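A minimal sketch of the safer pattern Timon recommends, reusing the class names from Jason's example (bodies elided): each inherited overload is re-stated as an explicit forwarding override instead of being pulled in with alias, so an overload later added to A is flagged by the compiler (as in Walter's dmd output earlier in the thread) rather than silently exposed.

class A {
    void foo(int) const { /* ... */ }
    void foo(float) const { /* ... */ }
}

class B : A {
    // forwarding override instead of 'alias A.foo foo;'
    override void foo(float x) const { super.foo(x); }

    // the overload whose behaviour B actually changes
    override void foo(int x) const { /* new behaviour */ }
}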