
digitalmars.D - dynamic classes and duck typing

reply Walter Bright <newshound1 digitalmars.com> writes:
One thing Java and Python, Ruby, etc., still hold over D is dynamic 
classes, i.e. classes that are only known at runtime, not compile time. 
In D, this:

    s.foo(3);

could be emulated with:

    s.dynamicMethod("foo", 3);

Unfortunately, that makes it impossible to use s with generic code 
(besides looking unappealing). But with a small feature, we can make 
this work:

    struct S
    {
         ...
	T opDynamic(s : string)(args...);
    }

and then s.foo(3), if foo is not a compile time member of s, is 
rewritten as:

    s.opDynamic!("foo")(3);

and opDynamic defers all the nuts-and-bolts of making this work out of 
the language and into the library.

In particular, opDynamic's parameter and return types should all be 
instances of std.variant.

(This has come up in various forms in this n.g. before, but I don't have 
any references handy.)
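A minimal sketch of the intent, hedged: the feature described here later landed in DMD under the name opDispatch, and the `methods` delegate table below is a hypothetical illustration (not part of the proposal) showing how the rewrite enables runtime dispatch:

```d
import std.stdio;

struct S
{
    // hypothetical runtime method table
    int delegate(int)[string] methods;

    // invoked for s.foo(3) when `foo` is not a compile-time member of S
    int opDispatch(string name)(int arg)
    {
        return methods[name](arg);
    }
}

void main()
{
    S s;
    s.methods["foo"] = (int x) => x + 1;
    writeln(s.foo(3)); // lowered to s.opDispatch!("foo")(3); prints 4
}
```

Note that the member name is fixed at compile time while the delegate looked up under it can change at runtime.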
Nov 27 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 One thing Java and Python, Ruby, etc., still hold over D is dynamic 
 classes, i.e. classes that are only known at runtime, not compile time. 
 In D, this:
 
    s.foo(3);
 
 could be emulated with:
 
    s.dynamicMethod("foo", 3);
 
 Unfortunately, that makes it impossible to use s with generic code 
 (besides looking unappealing). But with a small feature, we can make 
 this work:
 
    struct S
    {
         ...
     T opDynamic(s : string)(args...);
    }
 
 and then s.foo(3), if foo is not a compile time member of s, is 
 rewritten as:
 
    s.opDynamic!("foo")(3);
 
 and opDynamic defers all the nuts-and-bolts of making this work out of 
 the language and into the library.
 
 In particular, opDynamic's parameter and return types should all be 
 instances of std.variant.
 
 (This has come up in various forms in this n.g. before, but I don't have 
 any references handy.)
One of these is the thread "Fully dynamic d by opDotExp overloading". Andrei
Nov 27 2009
prev sibling next sibling parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Sat, 28 Nov 2009 00:30:14 +0100, Walter Bright  
<newshound1 digitalmars.com> wrote:

 One thing Java and Python, Ruby, etc., still hold over D is dynamic  
 classes, i.e. classes that are only known at runtime, not compile time.  
 In D, this:

     s.foo(3);

 could be emulated with:

     s.dynamicMethod("foo", 3);

 Unfortunately, that makes it impossible to use s with generic code  
 (besides looking unappealing). But with a small feature, we can make  
 this work:

     struct S
     {
          ...
 	T opDynamic(s : string)(args...);
     }

 and then s.foo(3), if foo is not a compile time member of s, is  
 rewritten as:

     s.opDynamic!("foo")(3);

 and opDynamic defers all the nuts-and-bolts of making this work out of  
 the language and into the library.

 In particular, opDynamic's parameter and return types should all be  
 instances of std.variant.

 (This has come up in various forms in this n.g. before, but I don't have  
 any references handy.)
davidl implemented this as opDotExp in "Fully dynamic d by opDotExp overloading" (http://www.digitalmars.com/webnews/newsgroups.php?article_id=88145). I'd really like to see this.

Is there a reason to allow only std.variant as parameters? I can easily see this being used where the type (or set of possible types) is known at compile time, but one does not want to implement a lot of boilerplate functions.

Also, would the generated functions be usable as properties?

-- Simen
Nov 27 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Simen kjaeraas wrote:
 davidl implemented this as opDotExp in "Fully dynamic d by opDotExp 
 overloading"
 (http://www.digitalmars.com/webnews/newsgroups.php?article_id=88145).
Thanks for the ref. On one page linkies:

http://www.digitalmars.com/d/archives/digitalmars/D/Fully_dynamic_d_by_opDotExp_overloading_88145.html
http://www.digitalmars.com/d/archives/digitalmars/D/Re_Fully_dynamic_d_by_opDotExp_overloading_88270.html
Nov 27 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Walter Bright wrote:
 Simen kjaeraas wrote:
 davidl implemented this as opDotExp in "Fully dynamic d by opDotExp 
 overloading"
 (http://www.digitalmars.com/webnews/newsgroups.php?article_id=88145).
Thanks for the ref. On one page linkies:

http://www.digitalmars.com/d/archives/digitalmars/D/Fully_dynamic_d_by_opDotExp_overloading_88145.html
http://www.digitalmars.com/d/archives/digitalmars/D/Re_Fully_dynamic_d_by_opDotExp_overloading_88270.html
And clearly, this idea was proposed before, including the templated version.
Nov 27 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Walter Bright wrote:
 And clearly, this idea was proposed before, including the templated 
 version.
I also see (reading it) that it was pretty thoroughly discussed. I'm convinced we should do it.
Nov 27 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 One thing Java and Python, Ruby, etc., still hold over D is dynamic 
 classes, i.e. classes that are only known at runtime, not compile time. 
I see, you want a Swiss army knife language :o)

http://msdn.microsoft.com/en-us/library/dd264736%28VS.100%29.aspx
http://blogs.msdn.com/cburrows/archive/2008/10/27/c-dynamic.aspx
http://geekswithblogs.net/sdorman/archive/2008/11/16/c-4.0-dynamic-programming.aspx

Similar links can be found for the invokedynamic of the Java VM.

Bye,
bearophile
Nov 27 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 I see, you want a Swiss army knife language :o)
 


 http://msdn.microsoft.com/en-us/library/dd264736%28VS.100%29.aspx
 http://blogs.msdn.com/cburrows/archive/2008/10/27/c-dynamic.aspx
 http://geekswithblogs.net/sdorman/archive/2008/11/16/c-4.0-dynamic-programming.aspx
I think the D approach is superior, because it offers many more ways of doing things (it's implemented nearly completely as a library feature, built on templates).
Nov 27 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 bearophile wrote:
 I see, you want a Swiss army knife language :o)


 http://msdn.microsoft.com/en-us/library/dd264736%28VS.100%29.aspx
 http://blogs.msdn.com/cburrows/archive/2008/10/27/c-dynamic.aspx
http://geekswithblogs.net/sdorman/archive/2008/11/16/c-4.0-dynamic-programming.aspx
 I think the D approach is superior, because it offers many more ways of
 doing things (it's implemented nearly completely as a library feature).

 templates.
Sometimes I feel like there should be a law similar to Greenspun's Law for language design:

Any sufficiently long-lived language that promises to be "simpler" than C++ and D will grow to contain an ad-hoc, bug-ridden, informally specified, slow implementation of half of C++ and D.
Nov 27 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 Sometimes I feel like there should be a law similar to Greenspun's Law for
 language design:
 
 Any sufficiently long-lived language that promises to be "simpler" than C++
and D
 will grow to contain an ad-hoc, bug-ridden, informally specified, slow
 implementation of half of C++ and D.
The dogged inventiveness of the C++ community never ceases to amaze me. Someone always finds a way to make a library to support some paradigm. Look at all the things Boost does. The problem, though, is the result is often just so strange I'd rather do without. Sometimes, you just need to improve the language to support things better.
Nov 27 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 Sometimes I feel like there should be a law similar to Greenspun's Law for
 language design:

 Any sufficiently long-lived language that promises to be "simpler" than C++
and D
 will grow to contain an ad-hoc, bug-ridden, informally specified, slow
 implementation of half of C++ and D.
The dogged inventiveness of the C++ community never ceases to amaze me. Someone always finds a way to make a library to support some paradigm. Look at all the things Boost does. The problem, though, is the result is often just so strange I'd rather do without. Sometimes, you just need to improve the language to support things better.
Right, but sometimes (though certainly not always) it's better to provide a meta-feature that solves a whole bunch of problems (like better templates) and then solve the individual problems at the library level, rather than add a language feature specifically to address each need.

One thing D does very well is allow you to do the same kind of metaprogramming solutions you would do in C++, except that the result doesn't suck. For example, std.range implements functional-style lazy evaluation as a library, and does it well. The point is that, if you can't deal with the complexity of having real templates, you'd better be prepared for the complexity created by not having them.

Having never done it before, I really cannot imagine how people get any work done in a language that doesn't have either duck typing or good templates. You just end up adding tons of ad-hoc workarounds for lacking either of these as well-integrated language features. The best/worst example is auto-boxing.
Nov 27 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 Right, but sometimes (though certainly not always) it's better to provide a
 meta-feature that solves a whole bunch of problems (like better templates) and
 then solve the individual problems at the library level, rather than add a
 language feature specifically to address each need.
Yup. The hard part, though, is figuring out what the magic set of seminal features should be.
 One thing D does very well is
 allow you to do the same kind of metaprogramming solutions you would do in C++,
 except that the result doesn't suck.  For example, std.range implements
 functional-style lazy evaluation as a library, and does it well.  The point is
 that, if you can't deal with the complexity of having real templates, you
better
 be prepared for the complexity created by not having them.
Right. A "simple" language pushes the complexity onto the programmer, so he has to write complicated code instead. D programs tend to be dramatically shorter than the equivalent C++ ones.
 Having never done it before, I really cannot imagine how people get any work done
 in a language that doesn't have either duck typing or good templates.  You just end
 up adding tons of ad-hoc workarounds for lacking either of these as
 well-integrated language features.  The best/worst example is auto-boxing.
I tried programming in Java. A friend of mine had an unexpected insight. He used Java a lot at a major corporation. He said an IDE was indispensable because with "one click" you could generate a "hundred lines of code". The light bulb came on. Java makes up for its lack of expressiveness by putting that expressiveness into the IDE! In D, you generate that hundred lines of code with templates and mixins.
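That "hundred lines of code" point can be sketched with a string mixin; `makeGetters` and the `Point` struct are hypothetical names invented for this illustration:

```d
import std.stdio;

// CTFE helper: generates one getter per field name
string makeGetters(string[] names)
{
    string code;
    foreach (n; names)
        code ~= "int " ~ n ~ "() { return _" ~ n ~ "; }\n";
    return code;
}

struct Point
{
    int _x = 1, _y = 2;
    // the boilerplate a Java IDE would paste is generated at compile time
    mixin(makeGetters(["x", "y"]));
}

void main()
{
    auto p = Point();
    writeln(p.x, " ", p.y); // prints 1 2
}
```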
Nov 27 2009
prev sibling parent div0 <div0 users.sourceforge.net> writes:

dsimcha wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 Sometimes I feel like there should be a law similar to Greenspun's Law for
Having never done it before, I really cannot imagine how people get any work done in a language that doesn't have either duck typing or good templates. You just end up adding tons of ad-hoc workarounds for lacking either of these as well-integrated language features. The best/worst example is auto-boxing.
Which is why the .NET framework is so bloody huge. If MS hadn't provided that out of the box in the first version, .NET would have been a dead duck.

--
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
Nov 28 2009
prev sibling next sibling parent reply Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Walter Bright wrote:
 One thing Java and Python, Ruby, etc., still hold over D is dynamic 
 classes, i.e. classes that are only known at runtime, not compile time. 
 In D, this:
 
    s.foo(3);
 
 could be emulated with:
 
    s.dynamicMethod("foo", 3);
 
 Unfortunately, that makes it impossible to use s with generic code 
 (besides looking unappealing). But with a small feature, we can make 
 this work:
 
    struct S
    {
         ...
     T opDynamic(s : string)(args...);
    }
 
 and then s.foo(3), if foo is not a compile time member of s, is 
 rewritten as:
 
    s.opDynamic!("foo")(3);
 
 and opDynamic defers all the nuts-and-bolts of making this work out of 
 the language and into the library.
 
 In particular, opDynamic's parameter and return types should all be 
 instances of std.variant.
 
 (This has come up in various forms in this n.g. before, but I don't have 
 any references handy.)
Seems fine, but how will this interact with "alias...this" and opDot?

The former seems simple enough: if the "alias...this" field provides the member, use that, otherwise fall back on opDynamic. The latter seems iffy, though. Maybe something like this:

    // if the return type of opDot provides the member...
    (auto tmp = s.opDot, tmp ? tmp.foo(3) : s.opDynamic!"foo"(3))

Hmm... ew... but I can't think of anything better off-hand. The "simple" design would probably be for opDynamic's implementation to make the call on whether to forward to opDot's result; aka, push the decision to the programmer. Stick a mixin somewhere for the most basic case (what I showed above) and it's no big deal.

-- Chris Nicholson-Sauls
Nov 27 2009
parent "Denis Koroskin" <2korden gmail.com> writes:
On Sat, 28 Nov 2009 10:16:33 +0300, Chris Nicholson-Sauls  
<ibisbasenji gmail.com> wrote:

 Walter Bright wrote:
 One thing Java and Python, Ruby, etc., still hold over D is dynamic  
 classes, i.e. classes that are only known at runtime, not compile time.  
 In D, this:
     s.foo(3);
  could be emulated with:
     s.dynamicMethod("foo", 3);
  Unfortunately, that makes it impossible to use s with generic code  
 (besides looking unappealing). But with a small feature, we can make  
 this work:
     struct S
    {
         ...
     T opDynamic(s : string)(args...);
    }
  and then s.foo(3), if foo is not a compile time member of s, is  
 rewritten as:
     s.opDynamic!("foo")(3);
  and opDynamic defers all the nuts-and-bolts of making this work out of  
 the language and into the library.
  In particular, opDynamic's parameter and return types should all be  
 instances of std.variant.
  (This has come up in various forms in this n.g. before, but I don't  
 have any references handy.)
Seems fine, but how will this interact with "alias...this" and opDot? The former seems simple enough: if the "alias...this" field provides the member, use that, otherwise fall back on opDynamic. The latter seems iffy, though. Maybe something like this: // if the return type of opDot provides the member... (auto tmp = s.opDot, tmp ? tmp.foo(3) : s.opDynamic!"foo"(3)) Hmm... ew... but I can't think of anything better off-hand. The "simple" design would probably be for opDynamic's implementation to make the call on whether to forward to opDot's result; aka, push the decision to the programmer. Stick a mixin somewhere for the most basic case (what I showed above) and its no big deal. -- Chris Nicholson-Sauls
I think opDot should be deprecated and eventually removed. Never used it since alias this was introduced. Why would you use it?
Nov 28 2009
prev sibling next sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-11-27 18:30:14 -0500, Walter Bright <newshound1 digitalmars.com> said:

 But with a small feature, we can make this work:
 
     struct S
     {
          ...
 	T opDynamic(s : string)(args...);
     }
 
 and then s.foo(3), if foo is not a compile time member of s, is rewritten as:
 
     s.opDynamic!("foo")(3);
 
 and opDynamic defers all the nuts-and-bolts of making this work out of 
 the language and into the library.
Please make sure it can work to implement properties too.

The only thing that worries me is that "functions" defined through opDynamic won't be reachable via reflection. There's no way to call "foo" if "foo" is a runtime string; with regular functions you can use compile-time reflection to build a dispatch table, but for those implemented through opDynamic (which are not available through reflection) it won't work.

Also, I would name it "opDispatch" instead. I fail to see anything "dynamic" in it... it's a template, so it's static, isn't it? Of course you can implement dynamic dispatch with this, but that's not a requirement.
 In particular, opDynamic's parameter and return types should all be 
 instances of std.variant.
That seems unnecessary. It's a template, so you should be able to define opDynamic like this:

    auto opDynamic(s : string, A...)(A args)
    {
        return args[0];
    }

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
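To illustrate the point that std.variant isn't required, here is a sketch (the `Echo` struct is a hypothetical example, assuming the rewrite works as proposed): argument and return types flow through the template statically:

```d
import std.stdio;

struct Echo
{
    // generic forwarding: types are inferred, no std.variant involved
    auto opDispatch(string name, A...)(A args)
    {
        writeln("called ", name, " with ", args.length, " argument(s)");
        static if (A.length > 0)
            return args[0];
    }
}

void main()
{
    Echo e;
    auto x = e.anything(42, "hi"); // x is an int, inferred statically
    writeln(x); // prints 42
}
```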
Nov 28 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 and then s.foo(3), if foo is not a compile time member of s, is 
 rewritten as:
     s.opDynamic!("foo")(3);
I don't understand, isn't this the right translation?

    s.opDynamic("foo", 3);

If the "foo" name is known at compile time only (like in Python) you can't use a template argument. (And then it becomes useful to have associative arrays that are fast when the number of keys is small, < 10).

Bye,
bearophile
Nov 28 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
bearophile:
 If the "foo" name is known at compile time only
I meant at "run-time only", that's what defines it as dynamic. Otherwise, as Michel Fortin says, there's nothing dynamic in it. Bye, bearophile
Nov 28 2009
prev sibling next sibling parent reply Don <nospam nospam.com> writes:
bearophile wrote:
 Walter Bright:
 and then s.foo(3), if foo is not a compile time member of s, is 
 rewritten as:
     s.opDynamic!("foo")(3);
I don't understand, isn't this the right translation? s.opDynamic("foo", 3); If the "foo" name is known at compile time only (like in Python) you can't use a template argument. (And then it becomes useful to have associative arrays that are fast when the number of keys is small, < 10).
You mean, if it's known at run-time only? What Walter has written is definitely the correct one.

Consider, for example, swizzling an array of float[4] using SSE2. Instead of writing the 256 functions .xyzw(), .xzyz(), .wzyy(), etc., you can just write a single function:

bool isXYZW(char c) { return c=='x' || c=='y' || c=='z' || c=='w'; }

float[4] opDynamic(string s)()
    if (s.length==4 && isXYZW(s[0]) && isXYZW(s[1]) && isXYZW(s[2]) && isXYZW(s[3]))
{
    enum hexdigit = "0123456789ABCDEF";
    mixin("asm { pshufd XMM0, XMM1, 0x"
        ~ hexdigit[((s[0]-'w'+3)&3)*4 + ((s[1]-'w'+3)&3)]
        ~ hexdigit[((s[2]-'w'+3)&3)*4 + ((s[3]-'w'+3)&3)]
        ~ "; }");
}

Not actually tested <g>.
Nov 28 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Don:
 You mean, if it's known at run-time only?
Yes, sorry.
 What Walter has written is definitely the correct one.
I am not saying that what Walter has written is wrong or useless :-) I am saying that it's not dynamic. This means that:

1) The name of that operator "opDynamic" is wrong, it needs a name that doesn't contain "dynamic" inside. "opDispatch" suggested by Michel Fortin seems OK. It can have that signature shown by Walter:
    T opDispatch(s : string)(args...);

2) A second operator like:
    variant opDynamic(string name, variant[]);
may be useful too, as in dynamic languages. As I've said, such an operator will enjoy fast associative arrays when the number of keys is small (Python dicts have an optimization for such a very common case).
you can just write a single function:<
That's not an example of the most readable code, but it's cute :-) Bye and thank you, bearophile
Nov 28 2009
parent bearophile <bearophileHUGS lycos.com> writes:
 It can have that signature shown by Walter:
 T opDispatch(s : string)(args...);
Or, to keep its name simpler to remember beside the opDynamic: T opStatic(s : string)(args...); Bye, bearophile
Nov 28 2009
prev sibling parent reply KennyTM~ <kennytm gmail.com> writes:
On Nov 28, 09 22:00, bearophile wrote:
 Walter Bright:
 and then s.foo(3), if foo is not a compile time member of s, is
 rewritten as:
      s.opDynamic!("foo")(3);
I don't understand, isn't this the right translation? s.opDynamic("foo", 3); If the "foo" name is known at compile time only (like in Python) you can't use a template argument. (And then it becomes useful to have associative arrays that are fast when the number of keys is small, < 10). Bye, bearophile
Probably because you can write Variant myOpReallyDynamic(string name, Variant[] s...) { ... } Variant opDynamic(string name)(Variant[] s...) { return myOpReallyDynamic(name, s); } but not the other way round.
Nov 28 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
KennyTM~ wrote:
 On Nov 28, 09 22:00, bearophile wrote:
 Walter Bright:
 and then s.foo(3), if foo is not a compile time member of s, is
 rewritten as:
      s.opDynamic!("foo")(3);
I don't understand, isn't this the right translation? s.opDynamic("foo", 3); If the "foo" name is known at compile time only (like in Python) you can't use a template argument. (And then it becomes useful to have associative arrays that are fast when the number of keys is small, < 10). Bye, bearophile
Probably because you can write Variant myOpReallyDynamic(string name, Variant[] s...) { ... } Variant opDynamic(string name)(Variant[] s...) { return myOpReallyDynamic(name, s); } but not the other way round.
That is correct. Thanks for pointing that out. The operator is dynamic because it may perform a dynamic lookup under a static syntax. Straight dynamic invocation with a regular function has and needs no sugar. Andrei
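The point that the static sugar can wrap a runtime dispatcher, but not the reverse, can be sketched as follows; `Dyn`, its `dispatch` method, and the "add" name are hypothetical illustration built on std.variant:

```d
import std.stdio;
import std.variant;

struct Dyn
{
    // the truly dynamic entry point: `name` is a runtime string
    Variant dispatch(string name, Variant[] args...)
    {
        if (name == "add")
            return Variant(args[0].get!int + args[1].get!int);
        throw new Exception("no such method: " ~ name);
    }

    // static sugar: d.add(1, 2) becomes d.opDispatch!("add")(1, 2),
    // which forwards to the runtime dispatcher
    Variant opDispatch(string name, A...)(A args)
    {
        Variant[] vs;
        foreach (a; args)
            vs ~= Variant(a);
        return dispatch(name, vs);
    }
}

void main()
{
    Dyn d;
    writeln(d.add(1, 2)); // prints 3
}
```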
Nov 28 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
And here it is (called opDispatch, Michel Fortin's suggestion):

http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc@268&old=trunk%2Fsrc@267
Nov 28 2009
next sibling parent reply biozic <dransic free.fr> writes:
Le 29/11/09 00:36, Walter Bright a écrit :
 And here it is (called opDispatch, Michel Fortin's suggestion):

 http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc 268&old=trunk%2Fsrc 267
Seems interesting, but for now the error message when no opDispatch template can be instantiated looks confusing when trying to use a class with an opDispatch implemented, and making e.g. a typo error:

=============================================
module lib;
class Test
{
    string opDispatch(string name)()
    {
        static if (name == "foo")
            return "foo";
    }
}
=============================================
module main;
import lib;
import std.stdio;

void main()
{
    auto test = new Test;
    writeln(test.foo);  // OK
    writeln(test.fooo); // Error
}
=============================================

Error is: """
lib.d(5): Error: function lib.Test.opDispatch!("fooo").opDispatch expected to return a value of type string
lib.d(9): Error: template instance lib.Test.opDispatch!("fooo") error instantiating
"""

nicolas
Nov 29 2009
parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Sun, 29 Nov 2009 11:44:56 +0100, biozic <dransic free.fr> wrote:

 Le 29/11/09 00:36, Walter Bright a écrit :
 And here it is (called opDispatch, Michel Fortin's suggestion):

 http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc@268&old=trunk%2Fsrc@267

 Seems interesting, but for now the error message when no opDispatch
 template can be instantiated looks confusing when trying to use a class
 with an opDispatch implemented, and making e.g. a typo error:

 =============================================
 module lib;
 class Test
 {
      string opDispatch(string name)()
      {
          static if (name == "foo")
              return "foo";
      }
 }
 =============================================
 module main;
 import lib;
 import std.stdio;

 void main()
 {
      auto test = new Test;
      writeln(test.foo); // OK
      writeln(test.fooo); // Error
 }
 =============================================
 Error is: """
 lib.d(5): Error: function lib.Test.opDispatch!("fooo").opDispatch
 expected to return a value of type string
 lib.d(9): Error: template instance lib.Test.opDispatch!("fooo") error
 instantiating
 """

 nicolas

That is because your opDispatch is instantiated no matter what the name is, but only does something sensible if it's foo. Try this:

string opDispatch( string name )( ) {
    static if ( name == "foo" ) {
        return "foo";
    } else {
        static assert( false, "Invalid member name." );
    }
}

--
Simen
Nov 29 2009
next sibling parent reply biozic <dransic free.fr> writes:
Le 29/11/09 12:14, Simen kjaeraas a écrit :
 That is because your opDispatch is instantiated no matter what the name
 is, but only does something sensible if it's foo. Try this:

 string opDispatch( string name )( ) {
 static if ( name == "foo" ) {
 return "foo";
 } else {
 static assert( false, "Invalid member name." );
 }
 }
Ok, but what still looks confusing is that the error is reported on the template code, as for any template instantiation error, while the user may not be aware that he is instantiating a template (or should he be?). Anyway, this feature is fun to play with.
Nov 29 2009
parent bearophile <bearophileHUGS lycos.com> writes:
biozic:
 Ok but what still looks confusing is that the error is reported on the 
 template code, as for any template instantiation error,
Using a template constraint ought to avoid that problem, improving the error message. And Don will probably improve the error messages in the other templates in the future. Bye, bearophile
Nov 29 2009
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-11-29 06:14:21 -0500, "Simen kjaeraas" <simen.kjaras gmail.com> said:

 That is because your opDispatch is instantiated no matter what the name
 is, but only does something sensible if it's foo. Try this:
 
 string opDispatch( string name )( ) {
    static if ( name == "foo" ) {
      return "foo";
    } else {
      static assert( false, "Invalid member name." );
    }
 }
Wouldn't this be even better?

    string opDispatch(string name)() if (name == "foo") { return "foo"; }

I haven't tested that it works though.

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Nov 29 2009
parent reply biozic <dransic free.fr> writes:
Le 29/11/09 13:16, Michel Fortin a écrit :
 On 2009-11-29 06:14:21 -0500, "Simen kjaeraas" <simen.kjaras gmail.com>
 said:

 That is because your opDispatch is instantiated no matter what the name
 is, but only does something sensible if it's foo. Try this:

 string opDispatch( string name )( ) {
 static if ( name == "foo" ) {
 return "foo";
 } else {
 static assert( false, "Invalid member name." );
 }
 }
Wouldn't this be even better? string opDispatch(string name)() if (name == "foo") { return "foo"; } I haven't tested that it works though.
It doesn't improve the error message, but it works. It's been a long time since I used D: I didn't know this syntax!
Nov 29 2009
parent reply Lutger <lutger.blijdestijn gmail.com> writes:
biozic wrote:

 Le 29/11/09 13:16, Michel Fortin a écrit :
 On 2009-11-29 06:14:21 -0500, "Simen kjaeraas" <simen.kjaras gmail.com>
 said:

 That is because your opDispatch is instantiated no matter what the name
 is, but only does something sensible if it's foo. Try this:

 string opDispatch( string name )( ) {
 static if ( name == "foo" ) {
 return "foo";
 } else {
 static assert( false, "Invalid member name." );
 }
 }
Wouldn't this be even better? string opDispatch(string name)() if (name == "foo") { return "foo"; } I haven't tested that it works though.
It doesn't improve the error message, but it works. It's been a long time since I used D: I didn't know this syntax!
Don has made a patch to improve these kinds of error messages in templates, but that will probably come after D2 is finalized (it doesn't affect the language).

If you want to resolve the symbol at runtime, I think you can get a better error message by throwing an exception or assertion. I don't have the svn dmd, so this isn't tested:

    void opDispatch(string name)(string file = __FILE__, int line = __LINE__)
    {
        if ( !dynamicDispatch(name) )
        {
            // line and file are default initialized from the call site:
            throw new MethodMissingException(name, file, line);
        }
    }
Nov 29 2009
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Sun, 29 Nov 2009 16:02:24 +0300, Lutger <lutger.blijdestijn gmail.com> wrote:

 biozic wrote:

 Le 29/11/09 13:16, Michel Fortin a écrit :
 On 2009-11-29 06:14:21 -0500, "Simen kjaeraas" <simen.kjaras gmail.com>
 said:

 That is because your opDispatch is instantiated no matter what the name
 is, but only does something sensible if it's foo. Try this:

 string opDispatch( string name )( ) {
     static if ( name == "foo" ) {
         return "foo";
     } else {
         static assert( false, "Invalid member name." );
     }
 }

 Wouldn't this be even better?

 string opDispatch(string name)() if (name == "foo") { return "foo"; }

 I haven't tested that it works though.

 It doesn't improve the error message, but it works. It's been a long
 time since I used D: I didn't know this syntax!

 Don has made a patch to improve these kind of error messages in templates,
 but that will probably come after D2 is finalized (doesn't affect the
 language).

 If you want to resolve the symbol at runtime I think you can get a better
 error message for throwing an exception or assertion. I don't have the svn
 dmd, so this isn't tested:

 void opDispatch(string name)(string file = __FILE__, int line = __LINE__)
 {
   if ( !dynamicDispatch(name) )
   {
     // line and file are default initialized from the call site:
     throw new MethodMissingException(name, file, line);
   }
 }

IIRC, this trick only works when __FILE__ and __LINE__ are both template arguments.
Nov 29 2009
parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Denis Koroskin wrote:

 On Sun, 29 Nov 2009 16:02:24 +0300, Lutger <lutger.blijdestijn gmail.com>
*snip*
 If you want to resolve the symbol at runtime I think you can get a better
 error message for throwing an exception or assertion. I don't have the
 svn
 dmd, so this isn't tested:

 void opDispatch(string name)(string file = __FILE__, int line = __LINE__)
 {
   if ( !dynamicDispatch(name) )
   {
     // line and file are default initialized from the call site:
     throw new MethodMissingException(name, file, line);
   }
 }
IIRC, this trick only works when __FILE__ and __LINE__ are both template arguments.
Hey, that's right. That means when Walter fixes the current bug with template parameters, good compile-time error messages are possible after all.
Nov 29 2009
parent biozic <dransic free.fr> writes:
Le 29/11/09 14:23, Lutger a écrit :
 Denis Koroskin wrote:

 On Sun, 29 Nov 2009 16:02:24 +0300, Lutger<lutger.blijdestijn gmail.com>
*snip*
 If you want to resolve the symbol at runtime I think you can get a better
 error message for throwing an exception or assertion. I don't have the
 svn
 dmd, so this isn't tested:

 void opDispatch(string name)(string file = __FILE__, int line = __LINE__)
 {
    if ( !dynamicDispatch(name) )
    {
      // line and file are default initialized from the call site:
      throw new MethodMissingException(name, file, line);
    }
 }
IIRC, this trick only works when __FILE__ and __LINE__ are both template arguments.
hey that's right. That means when Walter fixes the current bug with template parameters, good compile time error messages are possible after all.
I tried:

==============================================
module main;

import std.stdio;
import std.string;

class Test
{
    string foo(string name)(string file = __FILE__, int line = __LINE__)
    {
        return format("Call of Test.foo at %s(%d).", file, line);
    }

    string bar(string name, string file = __FILE__, int line = __LINE__)()
    {
        return format("Call of Test.bar at %s(%d).", file, line);
    }
}

void main()
{
    auto test = new Test;
    writeln(test.foo!"something");
    writeln(test.bar!"something");
}
==============================================

and the output is:

Call of Test.foo at test.d(21).
Call of Test.bar at test.d(12).
Nov 29 2009
prev sibling next sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Walter Bright wrote:

 And here it is (called opDispatch, Michel Fortin's suggestion):
 
 
http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc 268&old=trunk%2Fsrc 267

holy duck, that is quick!
Nov 29 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Lutger wrote:
 Walter Bright wrote:
 
 And here it is (called opDispatch, Michel Fortin's suggestion):
http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc 268&old=trunk%2Fsrc 267 holy duck, that is quick!
Unfortunately, things turned out to be not quite so simple. Stay tuned.
Nov 29 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Walter Bright wrote:
 And here it is (called opDispatch, Michel Fortin's suggestion):
 
 http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc 268
old=trunk%2Fsrc 267 
 
Fixed reported problems with it: http://www.dsource.org/projects/dmd/changeset?old_path=trunk&old=269&new_path=trunk&new=270
Nov 29 2009
parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Mon, 30 Nov 2009 07:10:41 +0100, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Walter Bright wrote:
 And here it is (called opDispatch, Michel Fortin's suggestion):
   
 http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc 268&old=trunk%2Fsrc 267
Fixed reported problems with it: http://www.dsource.org/projects/dmd/changeset?old_path=trunk&old=269&new_path=trunk&new=270
This still does not compile:

struct foo {
    void opDispatch( string name, T )( T value ) {
    }
}

void main( ) {
    foo f;
    f.bar( 3.14 );
}

test.d(10): Error: template instance opDispatch!("bar") does not match
template declaration opDispatch(string name,T)

-- 
Simen
Nov 30 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Simen kjaeraas:
 test.d(10): Error: template instance opDispatch!("bar") does not match  
 template declaration opDispatch(string name,T)
For Walter: this quick coding-testing cycle is a huge improvement over the original way DMD was developed. Things can still be improved, of course, but this was an important step.

Bye,
bearophile
Nov 30 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Simen kjaeraas wrote:
 This still does not compile:
 
 struct foo {
     void opDispatch( string name, T )( T value ) {
     }
 }
 
 void main( ) {
     foo f;
     f.bar( 3.14 );
 }
 
 test.d(10): Error: template instance opDispatch!("bar") does not match 
 template
 declaration opDispatch(string name,T)
It works when I try it.
Nov 30 2009
next sibling parent reply =?ISO-8859-1?Q?=c1lvaro_Castro-Castilla?= <alvcastro yahoo.es> writes:
Walter Bright Wrote:

 Simen kjaeraas wrote:
 This still does not compile:
 
 struct foo {
     void opDispatch( string name, T )( T value ) {
     }
 }
 
 void main( ) {
     foo f;
     f.bar( 3.14 );
 }
 
 test.d(10): Error: template instance opDispatch!("bar") does not match 
 template
 declaration opDispatch(string name,T)
It works when I try it.
It does. Shouldn't this work also?

struct foo {
    void opDispatch( string name, T... )( T values ) {
    }
}

void main( ) {
    foo f;
    f.bar( 3.14 );
}

Álvaro Castro-Castilla
Nov 30 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Álvaro Castro-Castilla wrote:
 It does. Shouldn't this work also?
 
 struct foo {
     void opDispatch( string name, T... )( T values ) { 
     }   
 }
                                                                               
                                                        
 void main( ) { 
     foo f;
     f.bar( 3.14 );
 }
Declare as:

    void opDispatch(string name, T...)(T values...)
                                               ^^^
Nov 30 2009
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Mon, Nov 30, 2009 at 1:00 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Álvaro Castro-Castilla wrote:
 It does. Shouldn't this work also?

 struct foo {
     void opDispatch( string name, T... )( T values ) {
     }
 }

 void main( ) {
     foo f;
     f.bar( 3.14 );
 }
 Declare as:

     void opDispatch(string name, T...)(T values...)
                                                ^^^
You didn't use to have to do that with variadic templates.  Is that
also a new change in SVN?

Also, is there any chance you could paste the 1-line bug description
into your SVN commit messages?  For those of us who don't have the bug
database memorized it would make it much easier to find relevant
commits.

(for instance I was scanning the commit log to see if there were any
changes related to how variadic templates work, but all one sees is a
list of commit messages like "bugzilla 3494".  Takes too long to figure
such things out if you have to plug each number one-by-one into
bugzilla.)

--bb
Nov 30 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 On Mon, Nov 30, 2009 at 1:00 PM, Walter Bright
    void opDispatch(string name, T...)(T values...)
                                               ^^^
You didn't use to have to do that with variadic templates. Is that also a new change in SVN?
I believe it was always like that.
 Also, is there any chance you could paste the 1-line bug description
 into your SVN commit messages?
 For those of us who don't have the bug database memorized it would
 make it much easier to find relevant commits.
 
 (for instance I was scanning the commit log to see if there were any
 changes related to how variadic templates work, but all one sees is a
 list of commit messages like "bugzilla 3494".  Takes too long to
 figure such things out if you have to plug each number one-by-one into
 bugzilla.)
Probably :-)
Nov 30 2009
parent reply =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 On Mon, Nov 30, 2009 at 1:00 PM, Walter Bright
    void opDispatch(string name, T...)(T values...)
                                               ^^^
You didn't use to have to do that with variadic templates. Is that also a new change in SVN?
I believe it was always like that.
What do you mean? Not the D I played with?

void test1(T...)(T ts) {
    writeln(ts); //works as expected
}
void test2(string s, T...)(T ts) {
    writeln(s);  // requires manual specifying of each type
    writeln(ts); // e.g. test2!("foo", int, int)(1,2)
}
void test3(string s, T...)(T ts...) {
    writeln(s);  // silently dies when called with
    writeln(ts); // test3!("foo")(1,2,3,4) in v2.034
}
Nov 30 2009
parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Mon, 30 Nov 2009 23:13:23 +0100, Pelle Månsson
<pelle.mansson gmail.com> wrote:

 Walter Bright wrote:
 Bill Baxter wrote:
 On Mon, Nov 30, 2009 at 1:00 PM, Walter Bright
    void opDispatch(string name, T...)(T values...)
                                               ^^^
You didn't use to have to do that with variadic templates. Is that also a new change in SVN?
I believe it was always like that.
What do you mean? Not the D I played with? void test1(T...)(T ts) { writeln(ts); //works as expected } void test2(string s, T...)(T ts) { writeln(s); // requires manual specifying of each type writeln(ts); // e.g. test2!("foo", int, int)(1,2) } void test3(string s, T...)(T ts...) { writeln(s); // silently dies when called with writeln(ts); // test3!("foo")(1,2,3,4) in v2.034 }
It would seem Walter is right, but only for opDispatch. This compiles
fine. If you want compile errors, move the ellipsis around:

struct foo {
    void opDispatch( string name, T... )( T value... ) {
    }
    void bar( T... )( T args ) {
    }
}

void main( ) {
    foo f;
    f.bar( 3 );
    f.baz( 3.14 );
}

-- 
Simen
Nov 30 2009
next sibling parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Simen kjaeraas wrote:
 On Mon, 30 Nov 2009 23:13:23 +0100, Pelle Månsson 
 <pelle.mansson gmail.com> wrote:
 
 Walter Bright wrote:
 Bill Baxter wrote:
 On Mon, Nov 30, 2009 at 1:00 PM, Walter Bright
    void opDispatch(string name, T...)(T values...)
                                               ^^^
You didn't use to have to do that with variadic templates. Is that also a new change in SVN?
I believe it was always like that.
What do you mean? Not the D I played with? void test1(T...)(T ts) { writeln(ts); //works as expected } void test2(string s, T...)(T ts) { writeln(s); // requires manual specifying of each type writeln(ts); // e.g. test2!("foo", int, int)(1,2) } void test3(string s, T...)(T ts...) { writeln(s); // silently dies when called with writeln(ts); // test3!("foo")(1,2,3,4) in v2.034 }
It would seem Walter is right, but only for opDispatch. This compiles fine. If you want compile errors, move the ellipsis around: struct foo { void opDispatch( string name, T... )( T value... ) { } void bar( T... )( T args ) { } } void main( ) { foo f; f.bar( 3 ); f.baz( 3.14 ); }
So, why have this special case for opDispatch? Maybe I am missing something.
Nov 30 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Simen kjaeraas wrote:
 It would seem Walter is right, but only for opDispatch. This compiles
 fine. If you want compile errors, move the ellipsis around:
Hmm, looks like there is a problem, but it has nothing to do with
opDispatch, as this:

    void bar(string s, T...)(T args)

doesn't work either.
Nov 30 2009
parent reply Alvaro Castro-Castilla <alvcastro yahoo.es> writes:
Walter Bright Wrote:

 Simen kjaeraas wrote:
 It would seem Walter is right, but only for opDispatch. This compiles
 fine. If you want compile errors, move the ellipsis around:
Hmm, looks like there is a problem, but it has nothing to do with opDispatch, as this: void bar(string s, T...)(T args) doesn't work either.
I think this doesn't work as it should. This code:

struct foo {
    void opDispatch( string name, T... )( T values... ) {
        pragma(msg, values);  // ...shows "tuple()"
        writeln(values);      // ...shows nothing at runtime
        foreach(v; values) {  // ...idem
            writeln(v);
        }
    }
}

void main( ) {
    foo f;
    f.bar( 3.14, 6.28 );
}

Best regards,

Álvaro Castro-Castilla
Nov 30 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Alvaro Castro-Castilla wrote:
 I think this doesn't work as should. This code:
Yes, you're right. I'll look into fixing it.
Nov 30 2009
prev sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 00:00:23 +0300, Walter Bright
<newshound1 digitalmars.com> wrote:

 Álvaro Castro-Castilla wrote:
 It does. Shouldn't this work also?

 struct foo {
     void opDispatch( string name, T... )( T values ) {
     }
 }

 void main( ) {
     foo f;
     f.bar( 3.14 );
 }
 Declare as:

     void opDispatch(string name, T...)(T values...)
                                                ^^^
What? I am using code like Álvaro posted all the time, whereas your syntax
doesn't even work (according to my test):

void foo(T...)(T values) { }
foo(42);

Error: template test.foo(T...) does not match any function template
declaration
Error: template test.foo(T...) cannot deduce template function from
argument types !()(int)

I wonder why it works for opDispatch (if it does, as you say).
Nov 30 2009
prev sibling parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Mon, 30 Nov 2009 19:07:38 +0100, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Simen kjaeraas wrote:
 This still does not compile:
  struct foo {
     void opDispatch( string name, T )( T value ) {
     }
 }
  void main( ) {
     foo f;
     f.bar( 3.14 );
 }
  test.d(10): Error: template instance opDispatch!("bar") does not match  
 template
 declaration opDispatch(string name,T)
It works when I try it.
And here. So what are you complaining about? :p

Apparently, my build script was wonky, and didn't update correctly.

I'm already in love with this feature. We're gonna have a beautiful life
together...

-- 
Simen
Nov 30 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Simen kjaeraas wrote:
 I'm already in love with this feature.
So am I. It seems to be incredibly powerful. Looks to me you can do
things like:

1. hook up to COM's IDispatch

2. create 'classes' at runtime

3. add methods to existing classes (monkey patching) that allow such
extensions

4. provide an easy way for users to add plugins to an app

5. the already mentioned "swizzler" functions that are generated at
runtime based on the name of the function
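To illustrate points 2 and 3, a runtime "class" could be sketched roughly like this (my own untested sketch, not from the compiler changeset; all names are made up):

```d
import std.variant;

class Dynamic
{
    // methods added at runtime, keyed by name
    private Variant delegate(Variant[])[string] table;

    // monkey patching: attach a method after construction
    void addMethod(string name, Variant delegate(Variant[]) dg)
    {
        table[name] = dg;
    }

    // any unknown call d.foo(args) is rewritten to d.opDispatch!"foo"(args),
    // which looks the name up in the runtime table
    Variant opDispatch(string name, Args...)(Args args)
    {
        Variant[] vargs;
        foreach (a; args)
            vargs ~= Variant(a);
        return table[name](vargs);
    }
}
```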
Nov 30 2009
next sibling parent reply "Simen kjaeraas" <simen.kjaras gmail.com> writes:
On Mon, 30 Nov 2009 23:02:46 +0100, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Simen kjaeraas wrote:
 I'm already in love with this feature.
So am I. It seems to be incredibly powerful. Looks to me you can do things like: 1. hook up to COM's IDispatch 2. create 'classes' at runtime 3. add methods to existing classes (monkey patching) that allow such extensions 4. provide an easy way for users to add plugins to an app 5. the already mentioned "swizzler" functions that are generated at runtime based on the name of the function
I know, and interfacing with scripting languages just got even awesomer
than it already was in D.

Oh, and another thing: Will we get property syntax for this?

I'd like to use this for shaders, allowing one to refer to the shader's
own variables directly from D, but currently I am limited to function
call syntax (unless I'm missing something?)

-- 
Simen
Nov 30 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Simen kjaeraas wrote:
 Oh, and another thing: Will we get property syntax for this?
 
 I'd like to use this for shaders, allowing one to refer to
 the shader's own variables directly from D, but currently I am
 limited to function call syntax (unless I'm missing something?)
You can do it, but be careful - you cannot add data members to a class by using templates. You'll have to fake them, by using enums, or a pointer to the actual data, etc.
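A sketch of what Walter describes, forwarding "properties" to storage held elsewhere since opDispatch cannot add real fields (untested; the struct and its layout are made up for illustration):

```d
struct ShaderVars
{
    // the actual data lives in this table, not in the struct layout
    float[string] slots;

    // getter: s.someVar reads from the table
    @property float opDispatch(string name)()
    {
        return slots.get(name, 0.0f);
    }

    // setter: s.someVar = x writes to the table
    @property void opDispatch(string name)(float value)
    {
        slots[name] = value;
    }
}
```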
Nov 30 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 So am I. It seems to be incredibly powerful.
Very powerful things can be dangerous too, they can lead to bugs, etc.
 Looks to me you can do things like:
This is stuff that can be written in the D docs too; for example, you can add some of those examples in the D2 docs page about operators.

Bye,
bearophile
Nov 30 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Mon, Nov 30, 2009 at 3:38 PM, bearophile <bearophileHUGS lycos.com> wrote:
 Walter Bright:
 So am I. It seems to be incredibly powerful.
Very powerful things can be dangerous too, they can lead to bugs, etc.
I'm a bit concerned about what this does to introspection.  With a
dynamic language like Python, adding runtime methods does not interfere
with your ability to enumerate all the methods supported by an object.
With this opDispatch it's no longer possible for the compiler to know
what list of methods a class responds to.

So unless we add some way for a class to enumerate such things,
opDispatch effectively kills any introspecting that requires knowing
everything in a class.  So that would make things like automatic wrapper
generation difficult.  And automatic mock classes.  Anything else that
requires enumerating methods?

Also how does this interact with property syntax?

--bb
Nov 30 2009
next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Bill Baxter, el 30 de noviembre a las 16:09 me escribiste:
 On Mon, Nov 30, 2009 at 3:38 PM, bearophile <bearophileHUGS lycos.com> wrote:
 Walter Bright:
 So am I. It seems to be incredibly powerful.
Very powerful things can be dangerous too, they can lead to bugs, etc.
I'm a bit concerned about what this does to introspection. With a dynamic language like Python, adding runtime methods does not interfere with your ability to enumerate all the methods supported by an object. With this opDispatch it's no longer possible for the compiler to know what list of methods a class responds to.
That's another reason to have better support for dynamic types, even
when implemented in the library. Maybe a mixin can be provided to enable
some basic functionality that's common to all objects using
opDispatch(), like enumerating the runtime members or checking if a
member exists.

Anyway, this problem is present in dynamic languages too; for example, if
you use the __setattr__() and __getattr__() magic methods in Python, you
can't use introspection to see what's in the object, you have to know
its internals to get that information.

There are two levels of dynamicity (at least in Python): you can add real
members to an object (which can be inspected using the standard Python
facilities) or you can use those magic methods (which kills the
introspection too). The problem with D (using opDispatch()) is you can't
add real members, you can only use magic methods to add members at
runtime.

And that's why I think it would be nice to have some standard facilities
to do this extra work, otherwise every D programmer will come up with
their own implementation and interoperability will be a nightmare.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
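Leandro's Python comparison can be seen in a few lines: attributes set directly on an instance are visible to introspection, while members synthesized by __getattr__ are not, even though calls to them succeed (the class and names below are made up for illustration):

```python
class Dyn:
    def __getattr__(self, name):
        # only called for attributes NOT found through normal lookup
        if name.startswith("magic_"):
            return lambda: name
        raise AttributeError(name)

d = Dyn()
d.real_member = 42                 # a real attribute
print("real_member" in vars(d))    # True: visible to introspection
print("magic_foo" in vars(d))      # False: invisible to introspection...
print(d.magic_foo())               # ...yet the call works: prints magic_foo
```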
Nov 30 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Also how does this interact with property syntax?
Define opDispatch with property.
Nov 30 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Mon, Nov 30, 2009 at 6:03 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 Also how does this interact with property syntax?
Define opDispatch with property.
So we can overload on @property-ness?  I.e. this works

struct S
{
    @property
    float x() { return 1.0f; }
    float x() { return 2.0f; }
}

void main()
{
    S s;
    writefln("%s", s.x);   // writes 1.0
    writefln("%s", s.x()); // writes 2.0
}

--bb
Nov 30 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works
 
 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }
 
 void main()
 {
     S  s;
     writefln("%s", s.x); // writes 1.0
     writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Nov 30 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
     S  s;
     writefln("%s", s.x); // writes 1.0
     writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with
this.  One or the other, your pick.  Seems like an unfortunate
limitation.

--bb
Nov 30 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation. --bb
It's a limitation similar to not having a field and a method share the
same name. It avoids a number of awkward questions such as figuring the
meaning of &s.x.

Andrei
Nov 30 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Bill Baxter wrote:
 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation. --bb
It's a limitation similar to not having a field and a method share the same name. It avoids a number of awkward questions such as figuring the meaning of &s.x.
I agree. While the compiler currently doesn't check for mixing up properties and methods, I intend to make it do so. I can't see any justification for allowing it.
Nov 30 2009
next sibling parent reply Max Samukha <spambox d-coding.com> writes:
On Mon, 30 Nov 2009 22:33:40 -0800, Walter Bright
<newshound1 digitalmars.com> wrote:

I agree. While the compiler currently doesn't check for mixing up 
properties and methods, I intend to make it do so. I can't see any 
justification for allowing it.
Bill rightfully mentioned that it would be impossible to dynamically
dispatch to both properties and methods even if those properties and
methods don't have conflicting names. And that may really be an
unfortunate limitation. For example, it would be problematic to
implement a generic wrapper for IDispatch:

class ComDispatcher
{
    this(IUnknown iUnk)
    {
        // query IDispatch and possibly create a member-names-to-id map, etc.
    }

    Variant opDispatch(string method, T...)(T args)
    {
        // call method (using DISPATCH_METHOD)
    }

    @property void opDispatch(string property, T)(T arg)
    {
        // set property (using DISPATCH_PROPERTYPUT)
    }

    @property Variant opDispatch(string property)()
    {
        // get property (using DISPATCH_PROPERTYGET)
    }
}

auto c = new ComDispatcher(iUnk);

c.foo(1);        // call method
c.bar = 1;       // set property
int a = c.baz;   // get property
int b = c.qux(); // call method
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Max Samukha wrote:
 On Mon, 30 Nov 2009 22:33:40 -0800, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 I agree. While the compiler currently doesn't check for mixing up 
 properties and methods, I intend to make it do so. I can't see any 
 justification for allowing it.
Bill rightfully mentioned that it would be impossible to dynamically dispatch to both properties and methods even if those properties and methods don't have conflicting names. And that may really be an unfortunate limitation. For example, it would be problematic to implement a generic wrapper for IDispatch:
Is there any reason not to just make the IDispatch properties have a function interface?
Dec 01 2009
parent Max Samukha <spambox d-coding.com> writes:
On Tue, 01 Dec 2009 03:15:12 -0800, Walter Bright
<newshound1 digitalmars.com> wrote:

Max Samukha wrote:
 On Mon, 30 Nov 2009 22:33:40 -0800, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 I agree. While the compiler currently doesn't check for mixing up 
 properties and methods, I intend to make it do so. I can't see any 
 justification for allowing it.
Bill rightfully mentioned that it would be impossible to dynamically dispatch to both properties and methods even if those properties and methods don't have conflicting names. And that may really be an unfortunate limitation. For example, it would be problematic to implement a generic wrapper for IDispatch:
Is there any reason not to just make the IDispatch properties have a function interface?
I don't know. It looks like IDispatch::Invoke requires the invocation
flag to be explicitly specified. The wrapper can try to determine the
flag using IDispatch::GetTypeInfo. However, objects implementing
IDispatch are not required to provide any type information.

Denis has suggested a convention-based approach (a constraint for names
starting with "prop_"). It is not great but may work.
Dec 01 2009
prev sibling next sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 09:33:40 +0300, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Andrei Alexandrescu wrote:
 Bill Baxter wrote:
 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation. --bb
It's a limitation similar to not having a field and a method share the same name. It avoids a number of awkward questions such as figuring the meaning of &s.x.
I agree. While the compiler currently doesn't check for mixing up properties and methods, I intend to make it do so. I can't see any justification for allowing it.
Monkey-patching relies on it:

int bar() { return 42; }

Dynamic dynamic = new Dynamic();
dynamic.newMethod = &bar;           // setter, @property version called
auto dg = dynamic.newMethod;        // getter, @property version called
auto result = dynamic.newMethod();  // non-@property version called
Dec 01 2009
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el 30 de noviembre a las 22:33 me escribiste:
 Andrei Alexandrescu wrote:
Bill Baxter wrote:
On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
Bill Baxter wrote:
So we can overload on  property-ness?
No.
I.e. this works

struct S
{
 property
float x() { return 1.0f; }
float x() { return 2.0f; }
}

void main()
{
   S  s;
   writefln("%s", s.x); // writes 1.0
   writefln("%s", s.x()); // writes 2.0
}
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation. --bb
It's a limitation similar to not having a field and a method share the same name. It avoids a number of awkward questions such as figuring the meaning of &s.x.
I agree. While the compiler currently doesn't check for mixing up properties and methods, I intend to make it do so. I can't see any justification for allowing it.
What about:

@property int opDispatch(string n)() if (n.startsWith("prop_"))
{
    // ...
}

int opDispatch(string n)() if (n.startsWith("meth_"))
{
    // ...
}

int i = o.prop_x;
int j = o.meth_x();

Should this work? It's not that pretty, but it's a compromise.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
Dec 01 2009
prev sibling parent grauzone <none example.net> writes:
Andrei Alexandrescu wrote:
 Bill Baxter wrote:
 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation. --bb
It's a limitation similar to not having a field and a method share the same name. It avoids a number of awkward questions such as figuring the meaning of &s.x.
But isn't it the same problem with overloaded functions? Or is this specific issue already solved in D2? (A short look in the language specification revealed nothing; is the unary & operator even documented?)
 Andrei
Dec 01 2009
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 30 Nov 2009 23:32:21 -0500, Bill Baxter <wbaxter gmail.com> wrote:

 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation.
Wait a minute, can't you use template conditionals to distinguish?  i.e.
I would expect this to work:

struct S
{
    @property float opDispatch(string s)() if (s == "x") { return 1.0f; }
    float opDispatch(string s)() { return 2.0f; }
}

void main()
{
    S s;
    writefln("%s", s.x);   // 1.0
    writefln("%s", s.y()); // 2.0
}

Overloading opDispatch based on the called symbol name should always be
possible, and overloading on parameter types is always possible.

-Steve
Dec 01 2009
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 16:46:25 +0300, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Mon, 30 Nov 2009 23:32:21 -0500, Bill Baxter <wbaxter gmail.com>  
 wrote:

 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation.
what a minute, can't you use template conditionals to distinguish? i.e. I would expect this to work: struct S { property float opDispatch(string s)() if (s == "x") {return 1.0f;} float opDispatch(string s)() { return 2.0f;} } void main() { S s; writefln("%s", s.x); // 1.0 writefln("%s", s.y()); // 2.0 } Overloading opDispatch based on the called symbol name should always be possible, and overloading on parameter types is always possible. -Steve
What if you don't know argument names a-priori? Consider a generic Dynamic class that has nothing but a single opDispatch method.
Dec 01 2009
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 08:49:58 -0500, Denis Koroskin <2korden gmail.com>  
wrote:

 On Tue, 01 Dec 2009 16:46:25 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 On Mon, 30 Nov 2009 23:32:21 -0500, Bill Baxter <wbaxter gmail.com>  
 wrote:

 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation.
Wait a minute, can't you use template conditionals to distinguish? I.e., I would expect this to work:

struct S
{
    @property float opDispatch(string s)() if (s == "x") { return 1.0f; }
    float opDispatch(string s)() { return 2.0f; }
}

void main()
{
    S s;
    writefln("%s", s.x);   // 1.0
    writefln("%s", s.y()); // 2.0
}

Overloading opDispatch based on the called symbol name should always be possible, and overloading on parameter types is always possible.

-Steve
What if you don't know argument names a priori? Consider a generic Dynamic class that has nothing but a single opDispatch method.
Although opDispatch allows some dynamic function definitions, the *usage* of opDispatch is always static. The question is, if you are for example wrapping another type, can you introspect the attributes of its methods? For example, I'd expect something like this to be possible in the future:

struct Wrapper(T)
{
    T t;

    // getters
    @property auto opDispatch(string s)() if (isProperty!T(s))
    { mixin("return t." ~ s ~ ";"); }

    // setters
    @property auto opDispatch(string s, A)(A arg) if (isProperty!T(s))
    { mixin("return (t." ~ s ~ " = arg);"); }

    // methods
    auto opDispatch(string s, A...)(A args)
    { mixin("return t." ~ s ~ "(args);"); }
}

Now, given the function attributes that are possible (this does not include const and immutable, which are overloaded via parameter types), this is going to get pretty ugly quickly. Unfortunately, the attributes are not decided by the caller but by the callee, so you have to use template conditionals. It would be nice if there were a way to say "copy the attributes from function x" when defining template functions in a way that doesn't involve conditionals, but even then, you would have a hard time defining such usage because you don't know which function you want until you evaluate the template string.

-Steve
Dec 01 2009
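For contrast with the static Wrapper above: in a dynamic language, the same forwarding idea needs no per-name template instantiation, because the member name stays a runtime value. A minimal Python sketch (the `Wrapper` name and the wrapped list are just illustrative):

```python
class Wrapper:
    """Forwarding wrapper: any attribute not found on the wrapper
    is looked up on the wrapped object at runtime."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # __getattr__ is called only when normal attribute lookup fails,
        # so _wrapped itself resolves via the instance dict, no recursion.
        return getattr(self._wrapped, name)

w = Wrapper([1, 2, 3])
w.append(4)          # forwarded to the wrapped list at runtime
print(w._wrapped)    # [1, 2, 3, 4]
```

Here the wrapper never enumerates the wrapped type's methods at all, which is exactly the introspection burden the D version has to carry in its template conditionals.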
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 17:12:38 +0300, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Tue, 01 Dec 2009 08:49:58 -0500, Denis Koroskin <2korden gmail.com>  
 wrote:

 On Tue, 01 Dec 2009 16:46:25 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 On Mon, 30 Nov 2009 23:32:21 -0500, Bill Baxter <wbaxter gmail.com>  
 wrote:

 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation.
Wait a minute, can't you use template conditionals to distinguish? I.e., I would expect this to work:

struct S
{
    @property float opDispatch(string s)() if (s == "x") { return 1.0f; }
    float opDispatch(string s)() { return 2.0f; }
}

void main()
{
    S s;
    writefln("%s", s.x);   // 1.0
    writefln("%s", s.y()); // 2.0
}

Overloading opDispatch based on the called symbol name should always be possible, and overloading on parameter types is always possible.

-Steve
What if you don't know argument names a priori? Consider a generic Dynamic class that has nothing but a single opDispatch method.
Although opDispatch allows some dynamic function definitions, the *usage* of opDispatch is always static. The question is, if you are for example wrapping another type, can you introspect the attributes of its methods? For example, I'd expect something like this to be possible in the future:

struct Wrapper(T)
{
    T t;

    // getters
    @property auto opDispatch(string s)() if (isProperty!T(s))
    { mixin("return t." ~ s ~ ";"); }

    // setters
    @property auto opDispatch(string s, A)(A arg) if (isProperty!T(s))
    { mixin("return (t." ~ s ~ " = arg);"); }

    // methods
    auto opDispatch(string s, A...)(A args)
    { mixin("return t." ~ s ~ "(args);"); }
}

Now, given the function attributes that are possible (this does not include const and immutable, which are overloaded via parameter types), this is going to get pretty ugly quickly. Unfortunately, the attributes are not decided by the caller but by the callee, so you have to use template conditionals. It would be nice if there were a way to say "copy the attributes from function x" when defining template functions in a way that doesn't involve conditionals, but even then, you would have a hard time defining such usage because you don't know which function you want until you evaluate the template string.

-Steve
It might work with your design, but it will lead to considerable code bloat, and it's not that static after all. I'd say that you could achieve the same with method forwarding using alias this:

struct Wrapper(T)
{
    T t;
    alias t this;
}

The true power of opDispatch comes with a fully Dynamic type, one that has no type information until runtime:

void foo(Dynamic duck)
{
    duck.quack();
}
Dec 01 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 10:25:43 -0500, Denis Koroskin <2korden gmail.com>  
wrote:

 On Tue, 01 Dec 2009 17:12:38 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 On Tue, 01 Dec 2009 08:49:58 -0500, Denis Koroskin <2korden gmail.com>  
 wrote:

 On Tue, 01 Dec 2009 16:46:25 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 On Mon, 30 Nov 2009 23:32:21 -0500, Bill Baxter <wbaxter gmail.com>  
 wrote:

 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation.
Wait a minute, can't you use template conditionals to distinguish? I.e., I would expect this to work:

struct S
{
    @property float opDispatch(string s)() if (s == "x") { return 1.0f; }
    float opDispatch(string s)() { return 2.0f; }
}

void main()
{
    S s;
    writefln("%s", s.x);   // 1.0
    writefln("%s", s.y()); // 2.0
}

Overloading opDispatch based on the called symbol name should always be possible, and overloading on parameter types is always possible.

-Steve
What if you don't know argument names a priori? Consider a generic Dynamic class that has nothing but a single opDispatch method.
Although opDispatch allows some dynamic function definitions, the *usage* of opDispatch is always static. The question is, if you are for example wrapping another type, can you introspect the attributes of its methods? For example, I'd expect something like this to be possible in the future:

struct Wrapper(T)
{
    T t;

    // getters
    @property auto opDispatch(string s)() if (isProperty!T(s))
    { mixin("return t." ~ s ~ ";"); }

    // setters
    @property auto opDispatch(string s, A)(A arg) if (isProperty!T(s))
    { mixin("return (t." ~ s ~ " = arg);"); }

    // methods
    auto opDispatch(string s, A...)(A args)
    { mixin("return t." ~ s ~ "(args);"); }
}

Now, given the function attributes that are possible (this does not include const and immutable, which are overloaded via parameter types), this is going to get pretty ugly quickly. Unfortunately, the attributes are not decided by the caller but by the callee, so you have to use template conditionals. It would be nice if there were a way to say "copy the attributes from function x" when defining template functions in a way that doesn't involve conditionals, but even then, you would have a hard time defining such usage because you don't know which function you want until you evaluate the template string.

-Steve
It might work with your design, but it will lead to considerable code bloat, and it's not that static after all. I'd say that you could achieve the same with method forwarding using alias this:

struct Wrapper(T)
{
    T t;
    alias t this;
}

The true power of opDispatch comes with a fully Dynamic type, one that has no type information until runtime:

void foo(Dynamic duck)
{
    duck.quack();
}
You are missing the point of opDispatch. It is not runtime defined, because the compiler statically decides to call opDispatch. The dynamic part of opDispatch comes in if you want to do something based on runtime values within the opDispatch function. E.g., the compiler doesn't decide at *runtime* whether to call opDispatch or some normal function named quack; it's decided at compile time. opDispatch could be completely compile-time defined since it is a template. But the 'dynamicness' of it is basically no more dynamic than a normal function which does something based on runtime values.

Compare that to a dynamic language, in which you can add methods to any object instance to make it different from another object, or make it conform to some interface.

My example is not a complete example, BTW. You can do much more than just dispatch to a sub-type; you can do other things that alias this cannot. For example, you could log each call to a function before calling the sub-type. But there are probably even better ways to do that with mixins. The real power of opDispatch comes when you don't want the default mapping of case-sensitive function name to implementation.

-Steve
Dec 01 2009
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 19:02:27 +0300, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Tue, 01 Dec 2009 10:25:43 -0500, Denis Koroskin <2korden gmail.com>  
 wrote:

 On Tue, 01 Dec 2009 17:12:38 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 On Tue, 01 Dec 2009 08:49:58 -0500, Denis Koroskin <2korden gmail.com>  
 wrote:

 On Tue, 01 Dec 2009 16:46:25 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 On Mon, 30 Nov 2009 23:32:21 -0500, Bill Baxter <wbaxter gmail.com>  
 wrote:

 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation.
Wait a minute, can't you use template conditionals to distinguish? I.e., I would expect this to work:

struct S
{
    @property float opDispatch(string s)() if (s == "x") { return 1.0f; }
    float opDispatch(string s)() { return 2.0f; }
}

void main()
{
    S s;
    writefln("%s", s.x);   // 1.0
    writefln("%s", s.y()); // 2.0
}

Overloading opDispatch based on the called symbol name should always be possible, and overloading on parameter types is always possible.

-Steve
What if you don't know argument names a priori? Consider a generic Dynamic class that has nothing but a single opDispatch method.
Although opDispatch allows some dynamic function definitions, the *usage* of opDispatch is always static. The question is, if you are for example wrapping another type, can you introspect the attributes of its methods? For example, I'd expect something like this to be possible in the future:

struct Wrapper(T)
{
    T t;

    // getters
    @property auto opDispatch(string s)() if (isProperty!T(s))
    { mixin("return t." ~ s ~ ";"); }

    // setters
    @property auto opDispatch(string s, A)(A arg) if (isProperty!T(s))
    { mixin("return (t." ~ s ~ " = arg);"); }

    // methods
    auto opDispatch(string s, A...)(A args)
    { mixin("return t." ~ s ~ "(args);"); }
}

Now, given the function attributes that are possible (this does not include const and immutable, which are overloaded via parameter types), this is going to get pretty ugly quickly. Unfortunately, the attributes are not decided by the caller but by the callee, so you have to use template conditionals. It would be nice if there were a way to say "copy the attributes from function x" when defining template functions in a way that doesn't involve conditionals, but even then, you would have a hard time defining such usage because you don't know which function you want until you evaluate the template string.

-Steve
It might work with your design, but it will lead to considerable code bloat, and it's not that static after all. I'd say that you could achieve the same with method forwarding using alias this:

struct Wrapper(T)
{
    T t;
    alias t this;
}

The true power of opDispatch comes with a fully Dynamic type, one that has no type information until runtime:

void foo(Dynamic duck)
{
    duck.quack();
}
You are missing the point of opDispatch. It is not runtime defined, because the compiler statically decides to call opDispatch. The dynamic part of opDispatch comes in if you want to do something based on runtime values within the opDispatch function. E.g., the compiler doesn't decide at *runtime* whether to call opDispatch or some normal function named quack; it's decided at compile time. opDispatch could be completely compile-time defined since it is a template. But the 'dynamicness' of it is basically no more dynamic than a normal function which does something based on runtime values.

Compare that to a dynamic language, in which you can add methods to any object instance to make it different from another object, or make it conform to some interface.
Well, I believe it's possible to implement the same with opDispatch (not for any object, but for those that support it):

void foo() {}

Dynamic d = ..;
if (!d.foo) {
    d.foo = &foo;
}
d.foo();
Dec 01 2009
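For reference, the per-instance method attachment being emulated above is a built-in capability of dynamic languages; a minimal Python sketch of the d.foo pattern (the `Duck` and `quack` names are illustrative):

```python
class Duck:
    pass

def quack():
    return "quack"

d = Duck()
if not hasattr(d, "quack"):
    d.quack = quack    # attach a function to this one instance at runtime

print(d.quack())       # quack
```

Note that the attachment is per instance: a second `Duck()` would not have `quack`, which is the "make it different than another object" behavior Steven describes.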
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 11:20:06 -0500, Denis Koroskin <2korden gmail.com>  
wrote:

 On Tue, 01 Dec 2009 19:02:27 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 You are missing the point of opDispatch.  It is not runtime defined,  
 because the compiler statically decides to call opDispatch.  The  
 dynamic part of opDispatch comes if you want to do something based on  
 runtime values within the opDispatch function.  e.g. the compiler  
 doesn't decide at *runtime* whether to call opDispatch or some normal  
 function named quack, it's decided at compile time.  opDispatch could  
 be completely compile-time defined since it is a template.  But the  
 'dynamicness' of it is basically no more dynamic than a normal function  
 which does something based on runtime values.

 Compare that to a dynamic language with which you can add methods to  
 any object instance to make it different than another object, or make  
 it conform to some interface.
Well, I believe it's possible to implement the same with opDispatch (not for any object, but for those that support it):

void foo() {}

Dynamic d = ..;
if (!d.foo) {
    d.foo = &foo;
}
d.foo();
You could do something like this (I don't think your exact syntax would work), but you could also do something like this without opDispatch. But the name 'foo' is still statically decided. Note that opDispatch doesn't implement this ability for you; you still have to implement the dynamic calls behind it. The special nature of opDispatch is that you can define how to map any symbol to any implementation without having to explicitly use strings.

In fact, opDispatch is slightly less powerful than such a method if the method uses a runtime string for dispatch. For example, in PHP, I can do this:

function foo($var) { $obj->$var(); }

The equivalent in D would be:

void foo(string var) { obj.opDispatch!(var)(); }

This I would consider to be true runtime-decided dispatch.

-Steve
Dec 01 2009
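The PHP example of truly runtime-decided dispatch has a direct analog in Python, where the member name is an ordinary runtime string resolved at call time (the `Obj`, `call_by_name`, and `quack` names are illustrative):

```python
class Obj:
    def quack(self):
        return "quack!"

def call_by_name(obj, name):
    # 'name' is a plain runtime value; lookup and call both happen at
    # call time, unlike D's opDispatch!(name), whose name must be a
    # compile-time constant.
    return getattr(obj, name)()

print(call_by_name(Obj(), "quack"))   # quack!
```

This is the distinction being drawn in the thread: in D the symbol is fixed at the call site, while here the same call site can dispatch to any method an object happens to have.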
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 19:41:46 +0300, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 On Tue, 01 Dec 2009 11:20:06 -0500, Denis Koroskin <2korden gmail.com>  
 wrote:

 On Tue, 01 Dec 2009 19:02:27 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 You are missing the point of opDispatch.  It is not runtime defined,  
 because the compiler statically decides to call opDispatch.  The  
 dynamic part of opDispatch comes if you want to do something based on  
 runtime values within the opDispatch function.  e.g. the compiler  
 doesn't decide at *runtime* whether to call opDispatch or some normal  
 function named quack, it's decided at compile time.  opDispatch could  
 be completely compile-time defined since it is a template.  But the  
 'dynamicness' of it is basically no more dynamic than a normal  
 function which does something based on runtime values.

 Compare that to a dynamic language with which you can add methods to  
 any object instance to make it different than another object, or make  
 it conform to some interface.
Well, I believe it's possible to implement the same with opDispatch (not for any object, but for those that support it):

void foo() {}

Dynamic d = ..;
if (!d.foo) {
    d.foo = &foo;
}
d.foo();
You could do something like this (I don't think your exact syntax would work), but you could also do something like this without opDispatch. But the name 'foo' is still statically decided. Note that opDispatch doesn't implement this ability for you; you still have to implement the dynamic calls behind it. The special nature of opDispatch is that you can define how to map any symbol to any implementation without having to explicitly use strings.

In fact, opDispatch is slightly less powerful than such a method if the method uses a runtime string for dispatch. For example, in PHP, I can do this:

function foo($var) { $obj->$var(); }

The equivalent in D would be:

void foo(string var) { obj.opDispatch!(var)(); }

This I would consider to be true runtime-decided dispatch.

-Steve
As pointed out, ActionScript and JavaScript use foo.bar and foo["bar"] interchangeably, so I believe we could do something similar. I believe there is no real difference between d.foo and d.bar for opDispatch (except that it could calculate the string hash at compile time for a faster hash-table lookup), and it would just call a generic run-time method anyway. As such, this method could be made visible to everyone:

class Dynamic
{
    // getter
    @property Dynamic opDispatch(string prop)() { return this[prop]; }

    // setter
    @property void opDispatch(string prop)(Dynamic value) { this[prop] = value; }

    ref Dynamic opIndex(string propName)
    {
        // do a hash-table lookup
    }

    Dynamic opCall(Args...)(Args args)
    {
        // do magic
    }
}

So essentially, opDispatch is just syntax sugar. But it's a very important one, because not only does it make writing code easier, it would allow using dynamic objects with generic algorithms.

Note that only the property version of opDispatch is really needed; method invocation is covered by the property + opCall pair. And it's the opCall implementation that bothers me the most... I don't see any way to implement it without reflection ATM.
Dec 01 2009
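The Dynamic class sketched above (property-access sugar over a hash-table lookup) mirrors how dynamic languages actually store members; a minimal Python analog backed by a dict (the `Dynamic` name is reused only for illustration):

```python
class Dynamic:
    def __init__(self):
        # The "hash table"; set via object.__setattr__ to bypass our own hook.
        object.__setattr__(self, "_props", {})

    def __getattr__(self, name):
        # getter: like the @property opDispatch reading this[prop]
        return self._props[name]

    def __setattr__(self, name, value):
        # setter: like the @property opDispatch writing this[prop]
        self._props[name] = value

d = Dynamic()
d.x = 42          # stored in the hash table under the runtime key "x"
print(d.x)        # 42
print(d._props)   # {'x': 42}
```

As in the D sketch, property get/set reduce entirely to the indexed lookup; the getter here simply raises on a missing key rather than tentatively inserting one.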
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 11:58:43 -0500, Denis Koroskin <2korden gmail.com>  
wrote:

 On Tue, 01 Dec 2009 19:41:46 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 On Tue, 01 Dec 2009 11:20:06 -0500, Denis Koroskin <2korden gmail.com>  
 wrote:

 On Tue, 01 Dec 2009 19:02:27 +0300, Steven Schveighoffer  
 <schveiguy yahoo.com> wrote:

 You are missing the point of opDispatch.  It is not runtime defined,  
 because the compiler statically decides to call opDispatch.  The  
 dynamic part of opDispatch comes if you want to do something based on  
 runtime values within the opDispatch function.  e.g. the compiler  
 doesn't decide at *runtime* whether to call opDispatch or some normal  
 function named quack, it's decided at compile time.  opDispatch could  
 be completely compile-time defined since it is a template.  But the  
 'dynamicness' of it is basically no more dynamic than a normal  
 function which does something based on runtime values.

 Compare that to a dynamic language with which you can add methods to  
 any object instance to make it different than another object, or make  
 it conform to some interface.
Well, I believe it's possible to implement the same with opDispatch (not for any object, but for those that support it):

void foo() {}

Dynamic d = ..;
if (!d.foo) {
    d.foo = &foo;
}
d.foo();
You could do something like this (I don't think your exact syntax would work), but you could also do something like this without opDispatch. But the name 'foo' is still statically decided. Note that opDispatch doesn't implement this ability for you; you still have to implement the dynamic calls behind it. The special nature of opDispatch is that you can define how to map any symbol to any implementation without having to explicitly use strings.

In fact, opDispatch is slightly less powerful than such a method if the method uses a runtime string for dispatch. For example, in PHP, I can do this:

function foo($var) { $obj->$var(); }

The equivalent in D would be:

void foo(string var) { obj.opDispatch!(var)(); }

This I would consider to be true runtime-decided dispatch.

-Steve
As pointed out, ActionScript and JavaScript use foo.bar and foo["bar"] interchangeably, so I believe we could do something similar. I believe there is no real difference between d.foo and d.bar for opDispatch (except that it could calculate the string hash at compile time for a faster hash-table lookup), and it would just call a generic run-time method anyway. As such, this method could be made visible to everyone:

class Dynamic
{
    // getter
    @property Dynamic opDispatch(string prop)() { return this[prop]; }

    // setter
    @property void opDispatch(string prop)(Dynamic value) { this[prop] = value; }

    ref Dynamic opIndex(string propName)
    {
        // do a hash-table lookup
    }

    Dynamic opCall(Args...)(Args args)
    {
        // do magic
    }
}
This is a very nice example; I only see one minor problem with it: the "do a hash-table lookup" has to tentatively add an element if one doesn't yet exist. However, opDispatch is even less runtime-decided in this example (it can always be inlined).
 So essentially, opDispatch is just a syntax sugar. But it's very  
 important one, because not only it makes writing code easier, it would  
 allow using dynamic objects with generic algorithms.
Essentially you could say opDispatch is dynamic at compile time. Runtime, not so much. But anything decided at compile time can be forwarded to a runtime function.

Without opDispatch, you already can get dynamic runtime function calling via a method similar to the one you outline above (I didn't think of using the array syntax, and I've been using JavaScript quite a bit lately!), but you can't get dynamic function calling that's drop-in replaceable with normal function calling at compile time without opDispatch.

I like that explanation. It is probably the most compelling usage for opDispatch.

-Steve
Dec 01 2009
parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 12:40:21 -0500, Steven Schveighoffer wrote:

 On Tue, 01 Dec 2009 11:58:43 -0500, Denis Koroskin <2korden gmail.com>
 wrote:
 
 On Tue, 01 Dec 2009 19:41:46 +0300, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:

 On Tue, 01 Dec 2009 11:20:06 -0500, Denis Koroskin <2korden gmail.com>
 wrote:

 On Tue, 01 Dec 2009 19:02:27 +0300, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:


 You are missing the point of opDispatch.  It is not runtime defined,
 because the compiler statically decides to call opDispatch.  The
 dynamic part of opDispatch comes if you want to do something based
 on runtime values within the opDispatch function.  e.g. the compiler
 doesn't decide at *runtime* whether to call opDispatch or some
 normal function named quack, it's decided at compile time. 
 opDispatch could be completely compile-time defined since it is a
 template.  But the 'dynamicness' of it is basically no more dynamic
 than a normal function which does something based on runtime values.

 Compare that to a dynamic language with which you can add methods to
 any object instance to make it different than another object, or
 make it conform to some interface.
Well, I believe it's possible to implement the same with opDispatch (not for any object, but for those that support it):

void foo() {}

Dynamic d = ..;
if (!d.foo) {
    d.foo = &foo;
}
d.foo();
You could do something like this (I don't think your exact syntax would work), but you could also do something like this without opDispatch. But the name 'foo' is still statically decided. Note that opDispatch doesn't implement this ability for you; you still have to implement the dynamic calls behind it. The special nature of opDispatch is that you can define how to map any symbol to any implementation without having to explicitly use strings.

In fact, opDispatch is slightly less powerful than such a method if the method uses a runtime string for dispatch. For example, in PHP, I can do this:

function foo($var) { $obj->$var(); }

The equivalent in D would be:

void foo(string var) { obj.opDispatch!(var)(); }

This I would consider to be true runtime-decided dispatch.

-Steve
As pointed out, ActionScript and JavaScript use foo.bar and foo["bar"] interchangeably, so I believe we could do something similar. I believe there is no real difference between d.foo and d.bar for opDispatch (except that it could calculate the string hash at compile time for a faster hash-table lookup), and it would just call a generic run-time method anyway. As such, this method could be made visible to everyone:

class Dynamic
{
    // getter
    @property Dynamic opDispatch(string prop)() { return this[prop]; }

    // setter
    @property void opDispatch(string prop)(Dynamic value) { this[prop] = value; }

    ref Dynamic opIndex(string propName)
    {
        // do a hash-table lookup
    }

    Dynamic opCall(Args...)(Args args)
    {
        // do magic
    }
}
This is a very nice example; I only see one minor problem with it: the "do a hash-table lookup" has to tentatively add an element if one doesn't yet exist. However, opDispatch is even less runtime-decided in this example (it can always be inlined).
 So essentially, opDispatch is just a syntax sugar. But it's very
 important one, because not only it makes writing code easier, it would
 allow using dynamic objects with generic algorithms.
Essentially you could say opDispatch is dynamic at compile time. Runtime, not so much. But anything decided at compile time can be forwarded to a runtime function.
You don't seem to have any idea what the term 'dynamic' means. From http://en.wikipedia.org/wiki/Dynamic_programming_language:

“Dynamic programming language is a term used broadly in computer science to describe a class of high-level programming languages that execute at runtime many common behaviors that other languages might perform during compilation, if at all.”

From http://en.wikipedia.org/wiki/Dynamic_typing#Dynamic_typing:

“A programming language is said to be dynamically typed when the majority of its type checking is performed at run-time as opposed to at compile-time. In dynamic typing, types are associated with values, not variables.”

'Dynamic' quite clearly means something that happens at runtime. It's not a compile-time feature.
Dec 01 2009
next sibling parent Don <nospam nospam.com> writes:
retard wrote:
 Tue, 01 Dec 2009 12:40:21 -0500, Steven Schveighoffer wrote:
 
 On Tue, 01 Dec 2009 11:58:43 -0500, Denis Koroskin <2korden gmail.com>
 wrote:

 On Tue, 01 Dec 2009 19:41:46 +0300, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:

 On Tue, 01 Dec 2009 11:20:06 -0500, Denis Koroskin <2korden gmail.com>
 wrote:

 On Tue, 01 Dec 2009 19:02:27 +0300, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:


 You are missing the point of opDispatch.  It is not runtime defined,
 because the compiler statically decides to call opDispatch.  The
 dynamic part of opDispatch comes if you want to do something based
 on runtime values within the opDispatch function.  e.g. the compiler
 doesn't decide at *runtime* whether to call opDispatch or some
 normal function named quack, it's decided at compile time. 
 opDispatch could be completely compile-time defined since it is a
 template.  But the 'dynamicness' of it is basically no more dynamic
 than a normal function which does something based on runtime values.

 Compare that to a dynamic language with which you can add methods to
 any object instance to make it different than another object, or
 make it conform to some interface.
Well, I believe it's possible to implement the same with opDispatch (not for any object, but for those that support it):

void foo() {}

Dynamic d = ..;
if (!d.foo) {
    d.foo = &foo;
}
d.foo();
You could do something like this (I don't think your exact syntax would work), but you could also do something like this without opDispatch. But the name 'foo' is still statically decided. Note that opDispatch doesn't implement this ability for you; you still have to implement the dynamic calls behind it. The special nature of opDispatch is that you can define how to map any symbol to any implementation without having to explicitly use strings.

In fact, opDispatch is slightly less powerful than such a method if the method uses a runtime string for dispatch. For example, in PHP, I can do this:

function foo($var) { $obj->$var(); }

The equivalent in D would be:

void foo(string var) { obj.opDispatch!(var)(); }

This I would consider to be true runtime-decided dispatch.

-Steve
As pointed out, ActionScript and JavaScript use foo.bar and foo["bar"] interchangeably, so I believe we could do something similar. I believe there is no real difference between d.foo and d.bar for opDispatch (except that it could calculate the string hash at compile time for a faster hash-table lookup), and it would just call a generic run-time method anyway. As such, this method could be made visible to everyone:

class Dynamic
{
    // getter
    @property Dynamic opDispatch(string prop)() { return this[prop]; }

    // setter
    @property void opDispatch(string prop)(Dynamic value) { this[prop] = value; }

    ref Dynamic opIndex(string propName)
    {
        // do a hash-table lookup
    }

    Dynamic opCall(Args...)(Args args)
    {
        // do magic
    }
}
This is a very nice example; I only see one minor problem with it: the "do a hash-table lookup" has to tentatively add an element if one doesn't yet exist. However, opDispatch is even less runtime-decided in this example (it can always be inlined).
 So essentially, opDispatch is just a syntax sugar. But it's very
 important one, because not only it makes writing code easier, it would
 allow using dynamic objects with generic algorithms.
Essentially you could say opDispatch is dynamic at compile time. Runtime, not so much. But anything decided at compile time can be forwarded to a runtime function.
You don't seem to have any idea what the term 'dynamic' means. From http://en.wikipedia.org/wiki/Dynamic_programming_language:

“Dynamic programming language is a term used broadly in computer science to describe a class of high-level programming languages that execute at runtime many common behaviors that other languages might perform during compilation, if at all.”

From http://en.wikipedia.org/wiki/Dynamic_typing#Dynamic_typing:

“A programming language is said to be dynamically typed when the majority of its type checking is performed at run-time as opposed to at compile-time. In dynamic typing, types are associated with values, not variables.”

'Dynamic' quite clearly means something that happens at runtime. It's not a compile-time feature.
Well, it does seem to be rather novel in a statically typed, compiled language. It blurs the boundary a bit, so it's not quite clear what naming is appropriate. It enables the same syntax which dynamic programming languages use at runtime. I'm not sure that the fact that it occurs at compile-time rather than run-time is important. (Presumably, a dynamic language is permitted to implement functionality at compile time, if it can do so without affecting the semantics).
Dec 02 2009
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 02 Dec 2009 02:22:01 -0500, retard <re tard.com.invalid> wrote:


 You don't seem to have any idea what the term 'dynamic' means. From
 http://en.wikipedia.org/wiki/Dynamic_programming_language
I'm sure the first person who suggested C++ templates were a functional language was shown wikipedia (or whatever the equivalent at the time was) as well :) -Steve
Dec 02 2009
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Steven Schveighoffer wrote:
 On Tue, 01 Dec 2009 11:20:06 -0500, Denis Koroskin <2korden gmail.com> 
 wrote:
 
 On Tue, 01 Dec 2009 19:02:27 +0300, Steven Schveighoffer 
 <schveiguy yahoo.com> wrote:

 You are missing the point of opDispatch.  It is not runtime defined, 
 because the compiler statically decides to call opDispatch.  The 
 dynamic part of opDispatch comes if you want to do something based on 
 runtime values within the opDispatch function.  e.g. the compiler 
 doesn't decide at *runtime* whether to call opDispatch or some normal 
 function named quack, it's decided at compile time.  opDispatch could 
 be completely compile-time defined since it is a template.  But the 
 'dynamicness' of it is basically no more dynamic than a normal 
 function which does something based on runtime values.

 Compare that to a dynamic language with which you can add methods to 
 any object instance to make it different than another object, or make 
 it conform to some interface.
Well, I believe it's possible to implement the same with opDispatch (not just to any object, but to those that support it): void foo() {} Dynamic d = ..; if (!d.foo) { d.foo = &foo; } d.foo();
You could do something like this (I don't think your exact syntax would work), but you could also do something like this without opDispatch. But the name 'foo' is still statically decided. Note that opDispatch doesn't implement this ability for you; you still have to implement the dynamic calls behind it. The special nature of opDispatch is how you can define a mapping from any symbol to any implementation without having to explicitly use strings.

In fact, opDispatch is slightly less powerful than such a method if the method uses a runtime string for dispatch. For example, in PHP, I can do this:

function foo($var)
{
    $obj->$var();
}

The equivalent in D would be:

void foo(string var)
{
    obj.opDispatch!(var)();
}

This I would consider to be true runtime-decided dispatch.

-Steve
obj.dynDispatch(var); Andrei
Dec 01 2009
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Steven Schveighoffer wrote:
 On Tue, 01 Dec 2009 08:49:58 -0500, Denis Koroskin <2korden gmail.com> 
 wrote:
 
 On Tue, 01 Dec 2009 16:46:25 +0300, Steven Schveighoffer 
 <schveiguy yahoo.com> wrote:

 On Mon, 30 Nov 2009 23:32:21 -0500, Bill Baxter <wbaxter gmail.com> 
 wrote:

 On Mon, Nov 30, 2009 at 7:12 PM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Bill Baxter wrote:
 So we can overload on  property-ness?
No.
 I.e. this works

 struct S
 {
  property
 float x() { return 1.0f; }
 float x() { return 2.0f; }
 }

 void main()
 {
    S  s;
    writefln("%s", s.x); // writes 1.0
    writefln("%s", s.x()); // writes 2.0
 }
That just looks wrong.
Ok, so you can't have both dynamic properties and dynamic methods with this. One or the other, your pick. Seems like an unfortunate limitation.
wait a minute, can't you use template conditionals to distinguish? i.e. I would expect this to work:

struct S
{
    @property float opDispatch(string s)() if (s == "x") { return 1.0f; }
    float opDispatch(string s)() { return 2.0f; }
}

void main()
{
    S s;
    writefln("%s", s.x);   // 1.0
    writefln("%s", s.y()); // 2.0
}

Overloading opDispatch based on the called symbol name should always be possible, and overloading on parameter types is always possible.

-Steve
What if you don't know argument names a-priori? Consider a generic Dynamic class that has nothing but a single opDispatch method.
although opDispatch allows some dynamic function definitions, the *usage* of opDispatch is always static.

The question is, if you are for example wrapping another type, can you introspect the attributes of its methods? For example, I'd expect something like this should be possible in the future:

struct Wrapper(T)
{
    T t;

    // getters
    @property auto opDispatch(string s)() if (isProperty!T(s))
    { mixin("return t." ~ s ~ ";"); }

    // setters
    @property auto opDispatch(string s, A)(A arg) if (isProperty!T(s))
    { mixin("return (t." ~ s ~ " = arg);"); }

    auto opDispatch(string s, A...)(A args)
    { mixin("return t." ~ s ~ "(args);"); }
}

Now, given the function attributes that are possible (this does not include const and immutable, which are overloaded via parameter types), this is going to get pretty ugly quickly. Unfortunately, the attributes are not decided by the caller, but by the callee, so you have to use template conditionals. It would be nice if there was a way to say "copy the attributes from function x" when defining template functions in a way that doesn't involve conditionals, but even then, you would have a hard time defining such usage because you don't know what function you want until you evaluate the template string.

-Steve
Yes, we need to implement that. Essentially the pipe dream is to add opDispatch to Variant and have it accept any call that would go through the contained type. These are heady days for D! Andrei
Dec 01 2009
prev sibling parent BCS <none anon.com> writes:
Hello Denis,

 What if you don't know argument names a-priori? Consider a generic
 Dynamic  class that has nothing but a single opDispatch method.
 
you can do whatever logic you want, even (I think) aliasing the function:

template opDispatch(string s)
{
    static if (WhateverLogicYouNeed!(s))
        alias Something!(s) opDispatch;
    else
        alias SomethingElse!(s) opDispatch;
}
Dec 02 2009
prev sibling next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Mon, Nov 30, 2009 at 02:02:46PM -0800, Walter Bright wrote:
 3. add methods to existing classes (monkey patching) that allow such 
 extensions
I think allowing this in general is a bad idea - opDispatch should only be implemented on a small number of classes. The reason is simple: a typo in a method name should be a compile-time error the vast majority of the time. If opDispatch was implemented all over the place, allowing random runtime extensions, the error is put off until runtime.

I'm for having the feature - I just don't think it should be used very often.

To add methods to existing classes at compile time, the way I'd love to see it done is:

void newMethod(SomeClass myThis...) { myThis.whatever... }

SomeClass a;
a.newMethod(); // rewritten as newMethod(a)

-- 
Adam D. Ruppe
http://arsdnet.net
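For what it's worth, Adam's proposed rewrite is close to what D's uniform function call syntax (UFCS) later provided. A rough sketch, assuming a modern D2 compiler; `SomeClass` and `newMethod` are the hypothetical names from the post:

```d
class SomeClass
{
    int x = 20;
}

// a free, module-level function, callable with member syntax via UFCS
int newMethod(SomeClass myThis)
{
    return myThis.x + 1;
}

void main()
{
    auto a = new SomeClass;
    assert(a.newMethod() == 21); // rewritten as newMethod(a)
}
```

The call site looks like a method, but resolution is still fully static: a typo in the name stays a compile-time error.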
Nov 30 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Adam D. Ruppe wrote:
 On Mon, Nov 30, 2009 at 02:02:46PM -0800, Walter Bright wrote:
 3. add methods to existing classes (monkey patching) that allow such 
 extensions
I think allowing this in general is a bad idea - the opDispatch should only be implemented on a small number of classes. The reason is simple: a typo in a method name should be a compile time error the vast majority of the time. If opDispatch was implemented all over the place, allowing random runtime extensions, the error is put off until runtime.
Using opDispatch is up to the discretion of the class designer. I doubt it would be used in any but a small minority of classes.
Nov 30 2009
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright wrote:
 Simen kjaeraas wrote:
 I'm already in love with this feature.
So am I. It seems to be incredibly powerful. Looks to me you can do things like:

1. hook up to COM's IDispatch
2. create 'classes' at runtime
3. add methods to existing classes (monkey patching) that allow such extensions
4. provide an easy way for users to add plugins to an app
5. the already mentioned "swizzler" functions that are generated at runtime based on the name of the function
Can you show examples of points 2, 3 and 4?

I can't see anything "dynamic" in this feature. I can't invoke an object's method based on its name:

class Foo
{
    void opDispatch(string name)() { ... }
}

Foo foo = new Foo();
string something = get_user_input();
foo.opDispatch!(something)(); // no, can't do it
foo.something(); // not the same...

So where's the magic? I think opDispatch is just another metaprogramming feature, nothing else.
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
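What Walter describes could be sketched like this in D2. A minimal sketch, assuming opDispatch works as proposed; the `Dyn` and `methods` names are mine, not anything standard:

```d
import std.variant;

struct Dyn
{
    // delegate table, filled in at run time
    Variant delegate(Variant[])[string] methods;

    Variant opDispatch(string name)(Variant[] args...)
    {
        // the *name* is fixed at compile time,
        // but the delegate it maps to is looked up at run time
        return methods[name](args);
    }
}

void main()
{
    Dyn d;
    d.methods["twice"] = delegate Variant(Variant[] a) { return Variant(a[0].get!int * 2); };
    assert(d.twice(Variant(21)).get!int == 42); // d.twice -> d.opDispatch!"twice"
}
```

Loading different delegates into `methods` at run time is what makes the dispatch dynamic, even though the call-site rewrite itself happens at compile time.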
Dec 01 2009
parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what every one of us expected. I'd like to have something like

void foo(Object o) {
    o.duckMethod();
}

foo(new Object() { void duckMethod() {} });

The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Dec 01 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 14:26:04 +0300, retard <re tard.com.invalid> wrote:

 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
I believe you should distinguish duck types from other types. You shouldn't be able to call duckMethod given a reference to Object, it's a statically-typed language, after all.
Dec 01 2009
parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 14:30:43 +0300, Denis Koroskin wrote:

 On Tue, 01 Dec 2009 14:26:04 +0300, retard <re tard.com.invalid> wrote:
 
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
I believe you should distinguish duck types from other types. You shouldn't be able to call duckMethod given a reference to Object, it's a statically-typed language, after all.
Agreed. But this new feature is a bit confusing - there isn't anything dynamic in it. It's more or less a compile time rewrite rule. It becomes dynamic when all of that can be done on runtime and there are no templates involved.
Dec 01 2009
next sibling parent reply Max Samukha <spambox d-coding.com> writes:
On Tue, 1 Dec 2009 11:45:14 +0000 (UTC), retard <re tard.com.invalid>
wrote:

 void foo(Object o) {
   o.duckMethod();
 }

 foo(new Object() { void duckMethod() {} });

 The feature isn't very dynamic since the dispatch rules are defined
 statically. The only thing you can do is rewire the associative array
 when forwarding statically precalculated dispatching.
I believe you should distinguish duck types from other types. You shouldn't be able to call duckMethod given a reference to Object, it's a statically-typed language, after all.
Agreed. But this new feature is a bit confusing - there isn't anything dynamic in it. It's more or less a compile time rewrite rule. It becomes dynamic when all of that can be done on runtime and there are no templates involved.
But the feature can be used to implement fully dynamic behavior (provided there is extended RTTI, which is already implementable on top of compile-time introspection using a tweaked compiler). For example, Variant can implement opDispatch to forward calls to the contained object:

void foo(Variant o) {
    o.duckMethod();
}

foo(Variant(new class { void duckMethod() {} }));

BTW, it is not possible currently to create a Variant from a void pointer to the object and the meta-object of that object because D's meta-objects are lacking necessary information. But that is fixable, I guess.
Dec 01 2009
parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 15:14:56 +0200, Max Samukha wrote:

 On Tue, 1 Dec 2009 11:45:14 +0000 (UTC), retard <re tard.com.invalid>
Agreed. But this new feature is a bit confusing - there isn't anything
dynamic in it. It's more or less a compile time rewrite rule. It becomes
dynamic when all of that can be done on runtime and there are no
templates involved.
But the feature can be used to implement fully dynamic behavior (provided there is extended RTTI, which is already implementable on top of compiletime introspection using a tweaked compiler). For example, Variant can implement opDispatch to forward calls to the contained object: void foo(Variant o) { o.duckMethod(); } foo(Variant(new class { void duckMethod() {} })); BTW, it is not possible currently to create a Variant from a void pointer to the object and the meta-object of that object because D's meta-objects are lacking necessary information. But that is fixable, I guess.
Ok, good to know.
Dec 01 2009
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 Tue, 01 Dec 2009 14:30:43 +0300, Denis Koroskin wrote:
 
 On Tue, 01 Dec 2009 14:26:04 +0300, retard <re tard.com.invalid> wrote:

 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
I believe you should distinguish duck types from other types. You shouldn't be able to call duckMethod given a reference to Object, it's a statically-typed language, after all.
Agreed. But this new feature is a bit confusing - there isn't anything dynamic in it. It's more or less a compile time rewrite rule. It becomes dynamic when all of that can be done on runtime and there are no templates involved.
Yes, that's done via old-school forwarding. Andrei
Dec 01 2009
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:
 
 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.

Also:

class Foo
{
    ... opDispatch ...
}

class Bar : Foo
{
    // Let's make Bar understand more things...
    ... opDispatch ...
}

Foo foo = new Bar();
foo.something();

will not work as expected because something() will be bound to Foo's opDispatch and it isn't a virtual method. Of course you can make opDispatch invoke a virtual function and override that function in Bar, but since there isn't a standard name or method for doing this everyone will start doing it their own way (I don't like it when there's no standardization for well-known operations) and it looks like a hack.
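The workaround described here (opDispatch forwarding to a virtual function) could look like the sketch below; the `dispatch` hook name is mine, not a standard one:

```d
class Foo
{
    // ordinary virtual method, so Bar can override it
    protected string dispatch(string name)
    {
        return "Foo." ~ name;
    }

    // the template is bound statically, but forwards to the virtual hook
    string opDispatch(string name)()
    {
        return dispatch(name);
    }
}

class Bar : Foo
{
    protected override string dispatch(string name)
    {
        return name == "something" ? "Bar.something" : super.dispatch(name);
    }
}

void main()
{
    Foo foo = new Bar();
    assert(foo.something() == "Bar.something"); // dispatched virtually after all
}
```

The template member itself can never be virtual, so every class wanting this behavior has to repeat the forwarding boilerplate; that is exactly the lack of standardization complained about above.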
Dec 01 2009
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actually, just the first part of the example:

void foo(Object o) {
    o.duckMethod();
}

Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not define duckMethod.

That's why this is something useful in scripting languages (Ruby, Python, etc.): if the method is not defined at runtime it's an error, unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime.

Basically:

Dynamic d = ...;
d.something(1, 2, 3);

is just a shortcut for doing

d.opDispatch!("something")(1, 2, 3);

(and it's actually what the compiler does) but it's a standardized way of doing that. What's the fun in that?
Dec 01 2009
next sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
Ary Borenszweig wrote:
 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actuall, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not defines duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime.
I mean at compile-time, grrr. Promise, no more talking with myself. :-P
Dec 01 2009
prev sibling next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig <ary esperanto.org.ar>  
wrote:

 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actuall, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not defines duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime. Basically: Dynanic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standarized way of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
Dec 01 2009
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig 
 <ary esperanto.org.ar> wrote:
 
 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actuall, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not defines duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime. Basically: Dynanic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standarized way of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do:

---
function yourMagicFunction(d) {
  d.foo();
}

var something = fromSomewhere();
yourMagicFunction(something);
---

and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P)

Let's translate this to D:

---
void yourMagicFunction(WhatTypeToPutHere d) {
  d.foo();
}

auto something = fromSomewhere();
yourMagicFunction(something);
---

What type to put in "WhatTypeToPutHere"? If it's Object then it won't compile. If it's something that defines foo, ok. If it's something that defines opDispatch, then it's:

d.opDispatch!("foo")();

but you could have written it like that from the beginning.

So for now I see two uses for opDispatch:

1. To create a bunch of similar functions, like the swizzle one.
2. To be able to refactor a class by moving a method to opDispatch or vice versa:

class Something {
    void foo() { }
}

can be refactored to:

class Something {
    void opDispatch(string name)() if (name == "foo") {}
}

without problems on the client side either way.

In brief, when you see:

var x = ...;
x.foo();

in Javascript, you have no idea where foo could be defined. If you see the same code in D you know where to look: the class itself, its hierarchy, alias this, opDispatch. That's a *huge* difference.
Dec 01 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 15:47:43 +0300, Ary Borenszweig <ary esperanto.org.ar>  
wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig  
 <ary esperanto.org.ar> wrote:

 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actuall, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not defines duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime. Basically: Dynanic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standarized way of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do: --- function yourMagicFunction(d) { d.foo(); } var something = fromSomewhere(); yourMagicFunction(something); --- and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P) Let's translate this to D: --- void yourMagicFunction(WhatTypeToPutHere d) { d.foo(); } auto something = fromSomewhere(); yourMagicFunction(something); ---
I believe there will soon be a library type that would allow that.
Dec 01 2009
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:47:43 +0300, Ary Borenszweig 
 <ary esperanto.org.ar> wrote:
 
 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig 
 <ary esperanto.org.ar> wrote:

 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actuall, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not defines duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime. Basically: Dynanic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standarized way of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do: --- function yourMagicFunction(d) { d.foo(); } var something = fromSomewhere(); yourMagicFunction(something); --- and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P) Let's translate this to D: --- void yourMagicFunction(WhatTypeToPutHere d) { d.foo(); } auto something = fromSomewhere(); yourMagicFunction(something); ---
I believe there will soon be a library type that would allow that.
It's called a template:

void yourMagicFunction(T)(T d) {
    d.foo();
}

I can write that and I can always compile my code. I can use that function with any kind of symbol as long as it defines foo, whether it's by defining it explicitly, in its hierarchy, in an aliased this symbol or in an opDispatch. That's the same concept as any function in Javascript (except that in Javascript if the argument doesn't define foo it's a runtime error and in D it'll be a compile-time error).
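To make the comparison concrete: the same template accepts both a type with a real foo and a type whose foo only exists through opDispatch. The types below are illustrative, not from the thread:

```d
struct HasFoo
{
    int foo() { return 1; }
}

struct CatchAll
{
    // answers any method name, including foo
    int opDispatch(string name)() { return 2; }
}

int yourMagicFunction(T)(T d)
{
    return d.foo(); // compiles for anything that can answer 'foo'
}

void main()
{
    assert(yourMagicFunction(HasFoo()) == 1);   // real method
    assert(yourMagicFunction(CatchAll()) == 2); // via opDispatch
    // yourMagicFunction(42); // compile-time error, not a runtime one
}
```

The duck typing happens at instantiation time: a type that can't answer foo is rejected before the program ever runs.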
Dec 01 2009
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
2009/12/1 Ary Borenszweig <ary esperanto.org.ar>:
 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:47:43 +0300, Ary Borenszweig <ary esperanto.org.a=
r>
 wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig
 <ary esperanto.org.ar> wrote:

 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actually, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not define duckMethod.

 That's why this is something useful in scripting languages (or ruby,
 python, etc.): if the method is not defined at runtime it's an error unless
 you define the magic function that catches all. Can't do that in D because
 the lookup is done at runtime.

 Basically:

 Dynamic d = ...;
 d.something(1, 2, 3);

 is just a shortcut for doing

 d.opDispatch!("something")(1, 2, 3);

 (and it's actually what the compiler does) but it's a standardized way
 of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no
 such method/property.
 In ActionScript (and JavaScript, too, I assume), foo.bar is
 auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do: --- function yourMagicFunction(d) { d.foo(); } var something = fromSomewhere(); yourMagicFunction(something); --- and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P) Let's translate this to D: --- void yourMagicFunction(WhatTypeToPutHere d) { d.foo(); } auto something = fromSomewhere(); yourMagicFunction(something); ---
I believe there will soon be a library type that would allow that.
It's called a template: void yourMagicFunction(T)(T d) { d.foo(); } I can write that and I can always compile my code. I can use that function
 with any kind of symbol as long as it defines foo, whether it's by
 defining it explicitly, in its hierarchy, in an aliased this symbol or in
 an opDispatch. That's the same concept as any function in Javascript (except
 that in Javascript if the argument doesn't define foo it's a runtime error
 and in D it'll be a compile-time error).
If you define a catch-all opDispatch that forwards to a method that does dynamic lookup, then the error will be a runtime error. --bb
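Walter's associative-array suggestion combined with Bill's point can be sketched in JavaScript (a stand-in here for a D opDispatch that forwards to a delegate table; the class and method names are illustrative, not from the thread): dispatch goes through a table loaded at run time, so an unknown name is a runtime error rather than a compile-time one.

```javascript
// A catch-all dispatcher backed by a table of "delegates" loaded at run
// time. A miss in the table is a *runtime* error, mirroring what a
// catch-all opDispatch forwarding to an AA lookup would do in D.
class Dispatcher {
    constructor() { this.methods = {}; }            // the "associative array"
    register(name, fn) { this.methods[name] = fn; } // load entries at run time
    dispatch(name, ...args) {                       // stands in for opDispatch
        const fn = this.methods[name];
        if (fn === undefined) throw new Error("no such method: " + name);
        return fn(...args);
    }
}

const d = new Dispatcher();
d.register("duckMethod", () => "quack");
console.log(d.dispatch("duckMethod"));              // found in the table

try {
    d.dispatch("missing");                          // never registered
} catch (e) {
    console.log(e.message);
}
```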
Dec 01 2009
parent Pelle Månsson <pelle.mansson gmail.com> writes:
Bill Baxter wrote:
 2009/12/1 Ary Borenszweig <ary esperanto.org.ar>:
 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:47:43 +0300, Ary Borenszweig <ary esperanto.org.ar>
 wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig
 <ary esperanto.org.ar> wrote:

 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actually, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not define duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime. Basically: Dynamic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standardized way of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do: --- function yourMagicFunction(d) { d.foo(); } var something = fromSomewhere(); yourMagicFunction(something); --- and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P) Let's translate this to D: --- void yourMagicFunction(WhatTypeToPutHere d) { d.foo(); } auto something = fromSomewhere(); yourMagicFunction(something); ---
I believe there will soon be a library type that would allow that.
It's called a template: void yourMagicFunction(T)(T d) { d.foo(); } I can write that and I can always compile my code. I can use that function with any kind of symbol as long as it defines foo, whether it's by defining it explicitly, in its hierarchy, in an aliased this symbol or in an opDispatch. That's the same concept as any function in Javascript (except that in Javascript if the argument doesn't define foo it's a runtime error and in D it'll be a compile-time error).
If you define a catch-all opDispatch that forwards to a method that does dynamic lookup, then the error will be a runtime error. --bb
Which is correct, awesome, great, etc. Wouldn't want it any other way!
Dec 01 2009
prev sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 16:08:18 +0300, Ary Borenszweig <ary esperanto.org.ar>  
wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:47:43 +0300, Ary Borenszweig  
 <ary esperanto.org.ar> wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig  
 <ary esperanto.org.ar> wrote:

 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actually, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not define duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime. Basically: Dynamic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standardized way of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do: --- function yourMagicFunction(d) { d.foo(); } var something = fromSomewhere(); yourMagicFunction(something); --- and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P) Let's translate this to D: --- void yourMagicFunction(WhatTypeToPutHere d) { d.foo(); } auto something = fromSomewhere(); yourMagicFunction(something); ---
I believe there will soon be a library type that would allow that.
It's called a template: void yourMagicFunction(T)(T d) { d.foo(); } I can write that and I can always compile my code. I can use that function with any kind of symbol as long as it defines foo, whether it's by defining it explicitly, in its hierarchy, in an aliased this symbol or in an opDispatch. That's the same concept as any function in Javascript (except that in Javascript if the argument doesn't define foo it's a runtime error and in D it'll be a compile-time error).
No, I was thinking about a Dynamic class:

class Foo {
    void foo() { ... }
}

void yourMagicFunction(Dynamic d)
{
    d.foo(); // might throw if there is no such method in an underlying class
}

yourMagicFunction(new Dynamic(new Foo()));

There are a few open issues, though: lack of true reflection and lack of overload by return type. For example, I'd like to do the following:

Dynamic d = ...;
d.foo = 42.0;

int i = d.foo;   // returns 42
float f = d.foo; // returns 42.0f

Overload by return type would allow that:

RetType opDispatch(RetType, string method, Args...)(Args args)
{
    // ...
}

But reflection is still needed to find out what methods a given object has:

class Foo {
    void foo(float f) { ... }
}

Object o = new Foo();
Dynamic d = new Dynamic(o);
d.foo(-1); // should call o.foo(-1.0); but I see no way to implement it currently
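The wrapper Denis describes has a close analogue in JavaScript's Proxy, whose get trap plays the role opDispatch would play in D. This is a rough, illustrative approximation (function and variable names are invented, and the return-type-overload issue has no JS counterpart since values are untyped):

```javascript
// A "Dynamic"-style wrapper: unknown members forwarded from the wrapped
// object, with a runtime error when no such member exists, matching the
// "might throw if there is no such method" comment above.
function dynamic(obj) {
    return new Proxy(obj, {
        get(target, name) {
            if (name in target) return target[name]; // forward to wrapped object
            throw new Error("no such method " + String(name));
        }
    });
}

const foo = { foo: () => "Foo.foo ran" };
const d = dynamic(foo);
console.log(d.foo());                                // forwarded call
try { d.bar(); } catch (e) { console.log(e.message); }
```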
Dec 01 2009
parent "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 01 Dec 2009 16:19:58 +0300, Denis Koroskin <2korden gmail.com>
wrote:

 On Tue, 01 Dec 2009 16:08:18 +0300, Ary Borenszweig  
 <ary esperanto.org.ar> wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:47:43 +0300, Ary Borenszweig  
 <ary esperanto.org.ar> wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig  
 <ary esperanto.org.ar> wrote:

 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actually, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not define duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime. Basically: Dynamic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standardized way of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do: --- function yourMagicFunction(d) { d.foo(); } var something = fromSomewhere(); yourMagicFunction(something); --- and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P) Let's translate this to D: --- void yourMagicFunction(WhatTypeToPutHere d) { d.foo(); } auto something = fromSomewhere(); yourMagicFunction(something); ---
I believe there will soon be a library type that would allow that.
It's called a template: void yourMagicFunction(T)(T d) { d.foo(); } I can write that and I can always compile my code. I can use that function with any kind of symbol as long as it defines foo, whether it's by defining it explicitly, in its hierarchy, in an aliased this symbol or in an opDispatch. That's the same concept as any function in Javascript (except that in Javascript if the argument doesn't define foo it's a runtime error and in D it'll be a compile-time error).
No, I was thinking about a Dynamic class:

class Foo {
    void foo() { ... }
}

void yourMagicFunction(Dynamic d)
{
    d.foo(); // might throw if there is no such method in an underlying class
}

yourMagicFunction(new Dynamic(new Foo()));

There are a few open issues, though: lack of true reflection and lack of overload by return type. For example, I'd like to do the following:

Dynamic d = ...;
d.foo = 42.0;

int i = d.foo;   // returns 42
float f = d.foo; // returns 42.0f

Overload by return type would allow that:

RetType opDispatch(RetType, string method, Args...)(Args args)
{
    // ...
}

But reflection is still needed to find out what methods a given object has:

class Foo {
    void foo(float f) { ... }
}

Object o = new Foo();
Dynamic d = new Dynamic(o);
d.foo(-1); // should call o.foo(-1.0); but I see no way to implement it currently
On a second thought, overload by return type won't work since I'd like to do the following:

Dynamic d = ..;
d = 42.5;

float f = d; // f = 42.5f;
int i = d;   // i = 42

And this requires an opImplicitCast(T). This is exactly what C#'s dynamic provides (http://msdn.microsoft.com/en-us/library/dd264736(VS.100).aspx):

// Any object can be converted to dynamic type implicitly, as shown in the
// following examples:
dynamic d1 = 7;
dynamic d2 = "a string";
dynamic d3 = System.DateTime.Today;
dynamic d4 = System.Diagnostics.Process.GetProcesses();

// Conversely, an implicit conversion can be dynamically applied to any
// expression of type dynamic:
int i = d1;
string str = d2;
DateTime dt = d3;
System.Diagnostics.Process[] procs = d4;
Dec 01 2009
prev sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tue, Dec 01, 2009 at 03:55:31PM +0300, Denis Koroskin wrote:
 I believe there will soon be a library type that would allow that.
Here's my first try. I don't have the new compiler handy and am in a rush, so I'm doing it hacky. With the svn compiler, you should be able to almost run this like you'd expect.

Running it prints:

Running statically defined: test(10)
Running dynamically defined test(0)
object.Exception: no such method text

My vararg code apparently is broken, but meh.

========
import std.stdio;
import std.variant;
import std.stdarg;

class A { // this is our dynamic class
    void test(int a) {
        writefln("Running statically defined: test(%d)", a);
    }

    // Just like in javascript...
    A delegate(...)[string] dynamicFunctions;

    // Return value tells if we should forward to static methods
    bool dynamicCall(string name, out A ret, ...) {
        if(auto fun = name in dynamicFunctions) {
            ret = (*fun)(_arguments);
            return true;
        }
        return false;
    }

    void dynamicBind(string name, A delegate(...) fun) {
        dynamicFunctions[name] = fun;
    }

    A opDispatch(string a)(...) {
        // If we're assigning a delegate, bind it as a member
        if(_arguments[0] == typeid(A delegate(...))) {
            dynamicBind(a, *(cast(A delegate(...)*)(_argptr)));
            return null;
        }

        // If it is in the dynamic list, run that
        A ret;
        if(dynamicCall(a, ret, _arguments))
            return ret;

        // If not, we'll look it up in our static table
        int arg = va_arg!(int)(_argptr);
        static if(__traits(hasMember, this, a)) {
            A var; // gah, I wish auto var = fun() worked when fun returns void.
            static if(__traits(compiles, var = __traits(getMember, this, a)(arg)))
                return __traits(getMember, this, a)(arg);
            else {
                // Could be improved by trying to construct a
                // dynamic instance from the return value,
                // whatever it is
                __traits(getMember, this, a)(arg);
                return null;
            }
        } else
            throw new Exception("no such method " ~ a);
    }
}

void main() {
    A a = new A;

    // no dynamically defined members, so this should call the static
    a.opDispatch!("test")(10);

    // dynamically define a member to override the static one
    a.opDispatch!("test")(delegate A(int num) {
        writefln("Running dynamically defined test(%d)", num);
        return null;
    });

    // see what runs
    a.opDispatch!("test")(20);

    // should throw method not defined
    a.opDispatch!("text")(30);
}
=========

If you have the svn compiler, you should be able to replace those opDispatchs with a.test = 10; and stuff like that.

There's one thing though: I think the patch checks static stuff first, then if none of that matches, it forwards to opDispatch. For this to work like in Javascript, it will need a small change:

1) If opDispatch is defined, forward the method to it
2) If this compiles, do nothing more -- assume the opDispatch handled it
3) If not, do a normal static member lookup

If opDispatch is not defined for the class, do nothing special - treat it like you normally do in D.

The downside is you must either put a static constraint on what your opDispatch does (easy - static assert(0); if you don't handle it) or forward to your static members yourself, but the upside is it lets dynamic method overriding like I do here. I think it would be a net positive. Assuming this doesn't work already, of course.

-- 
Adam D. Ruppe
http://arsdnet.net
Dec 01 2009
prev sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Ary Borenszweig wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig
 <ary esperanto.org.ar> wrote:
 
 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actuall, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not defines duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime. Basically: Dynanic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standarized way of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do:

---
function yourMagicFunction(d) {
  d.foo();
}

var something = fromSomewhere();
yourMagicFunction(something);
---

and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P)

Let's translate this to D:

---
void yourMagicFunction(WhatTypeToPutHere d) {
  d.foo();
}

auto something = fromSomewhere();
yourMagicFunction(something);
---

What type to put in "WhatTypeToPutHere"? If it's Object then it won't compile. If it's something that defines foo, ok. If it's something that defines opDispatch, then it's:

  d.opDispatch("foo")();

but you could have written it like that from the beginning.

So for now I see two uses for opDispatch:

1. To create a bunch of similar functions, like the swizzle one.
2. To be able to refactor a class by moving a method to opDispatch or vice versa:

class Something {
  void foo() { }
}

can be refactored to:

class Something {
  void opDispatch(string name)() if (name == "foo") {}
}

without problems on the client side either way.

In brief, when you see:

var x = ...;
x.foo();

in Javascript, you have no idea where foo could be defined. If you see the same code in D you know where to look for: the class itself, its hierarchy, alias this, opDispatch. That's a *huge* difference.
I don't get it: what if WhatTypeToPutHere does a dynamic lookup? Then it's pretty much the same as Javascript, isn't it? Except that everything in Javascript does dynamic lookup and in D you are restricted to types that have this dynamic lookup (which, pending a phobos solution, you have to code yourself). Do you mean to say this 'except' is the obstacle somehow?

To say it in code:

void yourMagicDFunction(T)(T d)
  if ( ImplementsFooOrDispatch!T )
{
  d.foo(); // may (or not) be rewritten as d.opDispatch!"foo"
}

In javascript I understand it is like this:

void yourMagicJavascriptFunction(T d)
{
  d.foo(); // rewritten as d["foo"]
}

But with opDispatch implemented like this it is the same in D:

class DynamicThing
{
  void opDispatch(string name)()
  {
    auto func = this.lookupTable[name]; // looks up 'foo'
    func();
  }
}

How is that less dynamic? You would be able to call or even redefine at runtime, for example, signals defined in xml files used to build gui components.
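Lutger's DynamicThing sketch can be played out concretely in JavaScript, where a Proxy get trap takes the role of opDispatch and consults a mutable lookup table. This is an illustrative sketch (the handler names and the gui-signal framing are assumptions, not code from the thread); it shows the runtime redefinition he mentions:

```javascript
// A lookup table consulted on every dispatch, so handlers (e.g. gui
// signals described in an xml file) can be installed or replaced while
// the program runs.
const lookupTable = {};
const thing = new Proxy(lookupTable, {
    get(table, name) {
        const func = table[name];          // looks up 'onClick'
        if (typeof func !== "function")
            throw new Error("no handler for " + String(name));
        return func;
    }
});

lookupTable.onClick = () => "first handler";
console.log(thing.onClick());

lookupTable.onClick = () => "redefined at runtime"; // rewired on the fly
console.log(thing.onClick());
```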
Dec 01 2009
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 5:18 AM, Lutger <lutger.blijdestijn gmail.com> wrote:
 Ary Borenszweig wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig
 <ary esperanto.org.ar> wrote:

 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actually, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not define duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at runtime. Basically: Dynamic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standardized way of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do: --- function yourMagicFunction(d) { d.foo(); } var something = fromSomewhere(); yourMagicFunction(something); --- and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P) Let's translate this to D: --- void yourMagicFunction(WhatTypeToPutHere d) { d.foo(); } auto something = fromSomewhere(); yourMagicFunction(something); --- What type to put in "WhatTypeToPutHere"? If it's Object then it won't compile. If it's something that defines foo, ok. If it's something that defines opDispatch, then it's: d.opDispatch("foo")(); but you could have written it like that from the beginning. So for now I see two uses for opDispatch: 1. To create a bunch of similar functions, like the swizzle one. 2. To be able to refactor a class by moving a method to opDispatch or vice versa: class Something { void foo() { } } can be refactored to: class Something { void opDispatch(string name)() if (name == "foo") {} } without problems on the client side either way. In brief, when you see: var x = ...; x.foo(); in Javascript, you have no idea where foo could be defined. If you see the same code in D you know where to look for: the class itself, its hierarchy, alias this, opDispatch. That's a *huge* difference.
I don't get it, what if WhatTypeToPutHere does a dynamic lookup, then it's
 pretty much the same as Javascript isn't it? Except that everything in
 Javascript does dynamic lookup and in D you are restricted to types that
 have this dynamic lookup (which, pending a phobos solution you have to code
 yourself). Do you mean to say this 'except' is the obstacle somehow?

 To say it in code:

 void yourMagicDFunction(T)(T d)
   if ( ImplementsFooOrDispatch!T )
 {
   d.foo(); // may (or not) be rewritten as d.opDispatch!"foo"
 }

 In javascript I understand it is like this:

 void yourMagicJavascriptFunction(T d)
 {
   d.foo(); // rewritten as d["foo"]
 }

 But with opDispatch implemented like this it is the same in D:

 class DynamicThing
 {
   void opDispatch(string name)()
   {
     auto func = this.lookupTable[name]; // looks up 'foo'
     func();
   }
 }

 How is that less dynamic? You would be able to call or even redefine at
 runtime, for example, signals defined in xml files used to build gui
 components.
It is a bit less dynamic because in D it's all done with templates. For instance in Javascript you can easily pass yourMagicJavascriptFunction around to other functions. And you can rebind the method by setting d.foo = &someOtherFunction. Instead of d.lookupTable["foo"] = &someOtherFunction. But I'm not sure such differences make a big impact on any major class of use cases. --bb
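Bill's rebinding point can be made concrete in JavaScript itself (an illustrative sketch; the names are invented): a method is just a property holding a function value, so it can be passed around and rebound by plain assignment.

```javascript
// In JavaScript, rebinding a method is ordinary assignment, and the
// function itself is a first-class value that can be handed to other code.
const d = { foo: function () { return "original"; } };

function callLater(fn) { return fn(); }      // functions travel as values
console.log(callLater(d.foo));

d.foo = function () { return "rebound"; };   // d.foo = someOtherFunction
console.log(d.foo());
```

The D table-based version achieves the same effect, but through the explicit `d.lookupTable["foo"] = ...` step Bill describes.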
Dec 01 2009
parent reply Don <nospam nospam.com> writes:
Bill Baxter wrote:
 On Tue, Dec 1, 2009 at 5:18 AM, Lutger <lutger.blijdestijn gmail.com> wrote:
 Ary Borenszweig wrote:
 The feature isn't very dynamic since the dispatch rules are defined
 statically. The only thing you can do is rewire the associative
 I don't get it, what if WhatTypeToPutHere does a dynamic lookup, then it's
 pretty much the same a Javascript isn't it? Except that everything in
 Javascript does dynamic lookup and in D you are restricted to types that
 have this dynamic lookup (which, pending a phobos solution you have to code
 yourself). Do you mean to say this 'except' is the obstacle somehow?
 How is that less dynamic? You would be able to call or even redefine at
 runtime, for example, signals defined in xml files used to build gui
 components.
It is a bit less dynamic because in D it's all done with templates.
It's a helluva lot more dynamic in D because it can do code generation on request. The "dynamic" bit in Javascript is really an AA lookup, + reflection.
Dec 01 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 11:43 AM, Don <nospam nospam.com> wrote:
 Bill Baxter wrote:
 On Tue, Dec 1, 2009 at 5:18 AM, Lutger <lutger.blijdestijn gmail.com>
 wrote:
 Ary Borenszweig wrote:
 The feature isn't very dynamic since the dispatch rules are defined
 statically. The only thing you can do is rewire the associative
 I don't get it, what if WhatTypeToPutHere does a dynamic lookup, then
 it's
 pretty much the same a Javascript isn't it? Except that everything in
 Javascript does dynamic lookup and in D you are restricted to types that
 have this dynamic lookup (which, pending a phobos solution you have to
 code
 yourself). Do you mean to say this 'except' is the obstacle somehow?
 How is that less dynamic? You would be able to call or even redefine at
 runtime, for example, signals defined in xml files used to build gui
 components.
It is a bit less dynamic because in D it's all done with templates.
It's a helluva lot more dynamic in D because it can do code generation on request. The "dynamic" bit in Javascript is really an AA lookup, + reflection.
But that's code generation /at compile time/. You can call that "more dynamic" if you like, but it seems to fall more in the realm of what is considered "static" to me. Doesn't mean it's not really useful, but calling it dynamic seems to be stretching the traditional definition a bit too far. --bb
Dec 01 2009
parent Don <nospam nospam.com> writes:
Bill Baxter wrote:
 On Tue, Dec 1, 2009 at 11:43 AM, Don <nospam nospam.com> wrote:
 Bill Baxter wrote:
 On Tue, Dec 1, 2009 at 5:18 AM, Lutger <lutger.blijdestijn gmail.com>
 wrote:
 Ary Borenszweig wrote:
 The feature isn't very dynamic since the dispatch rules are defined
 statically. The only thing you can do is rewire the associative
I don't get it, what if WhatTypeToPutHere does a dynamic lookup, then it's pretty much the same a Javascript isn't it? Except that everything in Javascript does dynamic lookup and in D you are restricted to types that have this dynamic lookup (which, pending a phobos solution you have to code yourself). Do you mean to say this 'except' is the obstacle somehow? How is that less dynamic? You would be able to call or even redefine at runtime, for example, signals defined in xml files used to build gui components.
It is a bit less dynamic because in D it's all done with templates.
It's a helluva lot more dynamic in D because it can do code generation on request. The "dynamic" bit in Javascript is really an AA lookup, + reflection.
But that's code generation /at compile time/. You can call that "more dynamic" if you like, but it seems to fall more in the realm of what is considered "static" to me. Doesn't mean it's not really useful, but calling it dynamic seems to be stretching the traditional definition a bit too far. --bb
Yeah, it's all about naming. The thing is, the traditional "dynamic" isn't very dynamic. You can't *really* add new functions at run-time. They all exist in the source code, all you're doing is manipulating function pointers, and the dynamic thing is just syntax sugar for that. If you have a language with a built-in compiler or interpreter, it can be truly dynamic, but I don't think that's the normal use of the term.
Dec 02 2009
prev sibling next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 5:38 AM, Bill Baxter <wbaxter gmail.com> wrote:
On Tue, Dec 1, 2009 at 5:18 AM, Lutger <lutger.blijdestijn gmail.com> wrote:
 Ary Borenszweig wrote:

 Denis Koroskin wrote:
 On Tue, 01 Dec 2009 15:05:16 +0300, Ary Borenszweig
 <ary esperanto.org.ar> wrote:

 Ary Borenszweig wrote:
 retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:

 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Exactly! That's the kind of example I was looking for, thanks.
Actuall, just the first part of the example: void foo(Object o) { =A0 =A0 o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object doe=
s
 not defines duckMethod.

 That's why this is something useful in scripting languages (or ruby,
 python, etc.): if the method is not defined at runtime it's an error
 unless you define the magic function that catches all. Can't do that
 in D because the lookup is done at compile time.

 Basically:

 Dynamic d = ...;
 d.something(1, 2, 3);

 is just a shortcut for doing

 d.opDispatch!("something")(1, 2, 3);

 (and it's actually what the compiler does) but it's a standardized way
 of doing that. What's the fun in that?
The fun is that you can call d.foo and d.bar() even though there is no such method/property. In ActionScript (and JavaScript, too, I assume), foo.bar is auto-magically rewritten as foo["bar"]. What's fun in that?
The fun is that in Javascript I can do: --- function yourMagicFunction(d) { d.foo(); } var something = fromSomewhere(); yourMagicFunction(something); --- and it'll work in Javascript because there's no type-checking at compile-time (well, because there's no compile-time :P) Let's translate this to D: --- void yourMagicFunction(WhatTypeToPutHere d) { d.foo(); } auto something = fromSomewhere(); yourMagicFunction(something); --- What type to put in "WhatTypeToPutHere"? If it's Object then it won't compile. If it's something that defines foo, ok. If it's something that defines opDispatch, then it's: d.opDispatch!("foo")(); but you could have written it like that from the beginning. So for now I see two uses for opDispatch: 1. To create a bunch of similar functions, like the swizzle one. 2. To be able to refactor a class by moving a method to opDispatch or vice versa: class Something { void foo() { } } can be refactored to: class Something { void opDispatch(string name) if (name == "foo") {} } without problems on the client side either way. In brief, when you see: var x = ...; x.foo(); in Javascript, you have no idea where foo could be defined. If you see the same code in D you know where to look for: the class itself, its hierarchy, alias this, opDispatch. That's a *huge* difference.
I don't get it, what if WhatTypeToPutHere does a dynamic lookup, then it's
 pretty much the same as Javascript, isn't it? Except that everything in
 Javascript does dynamic lookup and in D you are restricted to types that
 have this dynamic lookup (which, pending a Phobos solution, you have to code
 yourself). Do you mean to say this 'except' is the obstacle somehow?

 To say it in code:

 void yourMagicDFunction(T)(T d)
   if ( ImplementsFooOrDispatch!T )
 {
   d.foo(); // may (or not) be rewritten as d.opDispatch!"foo"
 }

 In javascript I understand it is like this:

 void yourMagicJavascriptFunction(T d)
 {
   d.foo(); // rewritten as d["foo"]
 }

 But with opDispatch implemented like this it is the same in D:

 class DynamicThing
 {
     void opDispatch(string name)()
     {
         auto func = this.lookupTable[name]; // looks up 'foo'
         func();
     }
 }

 How is that less dynamic? You would be able to call or even redefine at
 runtime, for example, signals defined in xml files used to build gui
 components.
It is a bit less dynamic because in D it's all done with templates. For instance in Javascript you can easily pass yourMagicJavascriptFunction around to other functions. And you can rebind the method by setting d.foo = &someOtherFunction. Instead of d.lookupTable["foo"] = &someOtherFunction. But I'm not sure such differences make a big impact on any major class of use cases.
I forgot a biggie: with opDispatch you must know the return type at compile time. You could make the return type be Variant or something, but then that makes it quite different from a "regular" function. Whereas in a dynamic language like Javascript a dynamic method looks just like a regular method (because they're all dynamic, of course). --bb
Dec 01 2009
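To make Bill's point concrete, here is a minimal D sketch of the runtime-table scheme discussed above, with opDispatch returning Variant. The class name Dynamic and the define helper are invented for illustration, and the code assumes a current DMD/Phobos rather than the 2009 compiler under discussion:

```d
import std.variant;

class Dynamic
{
    // run-time method table: name -> delegate taking and returning Variants
    Variant delegate(Variant[])[string] table;

    // add or rebind a "method" at run time
    void define(string name, Variant delegate(Variant[]) fn)
    {
        table[name] = fn;
    }

    // d.foo(1, 2) is rewritten by the compiler as d.opDispatch!"foo"(1, 2);
    // only the name is fixed at compile time, lookup and call happen at run time
    Variant opDispatch(string name, Args...)(Args args)
    {
        Variant[] vargs;
        foreach (a; args)
            vargs ~= Variant(a);
        return table[name](vargs);
    }
}

void main()
{
    auto d = new Dynamic;
    d.define("add", (Variant[] a) => Variant(a[0].get!int + a[1].get!int));
    assert(d.add(1, 2) == 3); // "add" was never declared statically
}
```

Note that the return type is indeed pinned to Variant at compile time, which is exactly the asymmetry with Javascript that Bill points out.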
next sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Bill Baxter wrote:

It is a bit less dynamic because in D it's all done with templates. For instance in Javascript you can easily pass yourMagicJavascriptFunction around to other functions. And you can rebind the method by setting d.foo = &someOtherFunction. Instead of d.lookupTable["foo"] = &someOtherFunction. But I'm not sure such differences make a big impact on any major class of use cases.
I forgot a biggie: with opDispatch you must know the return type at compile time. You could make the return type be Variant or something, but then that makes it quite different from a "regular" function. Whereas in a dynamic language like Javascript a dynamic method looks just like a regular method (because they're all dynamic, of course). --bb
I understand, thanks for the clarifications. Variant doesn't sound too bad. I guess it's just the consequence of not overloading by return type. What I like about this solution is the leeway you have in how much typechecking opDispatch does. You can make the return type Variant and the parameters a variadics of Variant (is that a word?), but also define the signature opDispatch can accept precisely or through template constraints. You can even check the dispatched symbol at compile time (no dynamism at all). Obviously opDispatch can add some dynamism to D, I guess we'll see how it pans out.
Dec 01 2009
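The "no dynamism at all" end of the spectrum Lutger describes fits in a few lines. In this sketch (type and member names invented) the template constraint whitelists two names, so any other name fails to compile exactly like a misspelled member would:

```d
struct Vec
{
    double x, y;

    // only two made-up accessor names are accepted; the constraint is
    // checked at compile time, so there is no run-time lookup at all
    double opDispatch(string name)() const
        if (name == "first" || name == "second")
    {
        static if (name == "first")
            return x;
        else
            return y;
    }
}

void main()
{
    auto v = Vec(1.5, 2.5);
    assert(v.first == 1.5);  // rewritten as v.opDispatch!"first"()
    assert(v.second == 2.5);
    static assert(!__traits(compiles, v.third)); // constraint rejects it
}
```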
parent George Moss <rolling-stone gathers-no-moss.org> writes:
Lutger wrote:
I understand, thanks for the clarifications. Variant doesn't sound too bad. I guess it's just the consequence of not overloading by return type. What I like about this solution is the leeway you have in how much typechecking opDispatch does. You can make the return type Variant and the parameters a variadics of Variant (is that a word?), but also define the signature opDispatch can accept precisely or through template constraints. You can even check the dispatched symbol at compile time (no dynamism at all). Obviously opDispatch can add some dynamism to D, I guess we'll see how it pans out.
So the return type is now suggested to be Variant. The plausible expectation is that this Variant is then passed as an argument to some other function. So now you have a Variant to deal with as a function argument. How is that Variant argument to be dealt with? A type switch, or something else?
Dec 01 2009
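To put the type-switch question in code: the receiving function inspects the Variant at run time, for instance with std.variant's peek. A minimal sketch; the describe function is invented:

```d
import std.variant;

// a function that receives a Variant whose run-time type is unknown
string describe(Variant v)
{
    // peek!T returns a pointer to the payload if the type matches, else null
    if (v.peek!int !is null)
        return "int";
    if (v.peek!string !is null)
        return "string";
    return "something else";
}

void main()
{
    assert(describe(Variant(42)) == "int");
    assert(describe(Variant("hi")) == "string");
    assert(describe(Variant(3.14)) == "something else");
}
```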
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
I forgot a biggie: with opDispatch you must know the return type at compile time. You could make the return type be Variant or something, but then that makes it quite different from a "regular" function. Whereas in a dynamic language like Javascript a dynamic method looks just like a regular method (because they're all dynamic, of course). --bb
I don't think that's any difference at all. Javascript does use a sort of Variant for all of its values. So if you want dynamic: a) have opDispatch forward the string to dynDispatch as a regular (runtime) value, pack all parameters into Variants (or an array thereof - probably better, or even one Variant that in turn packs an array - feature recently implemented, yum), and return a Variant; b) have dynDispatch return a Variant which will be then returned by opDispatch. It's not less powerful than discussed. It's more. Andrei
Dec 01 2009
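Andrei's (a) and (b) look like this in code. dynDispatch is the user-chosen forwarding function from the post, not a language feature, and the sum method is invented for the example:

```d
import std.variant;

struct S
{
    // the user-defined run-time half; "dynDispatch" is the placeholder
    // name from the discussion, not something the compiler knows about
    Variant dynDispatch(string name, Variant[] args)
    {
        if (name == "sum")
        {
            int total = 0;
            foreach (a; args)
                total += a.get!int;
            return Variant(total);
        }
        assert(0, "no such method: " ~ name);
    }

    // the static half: opDispatch stays fully generic, it only packs the
    // arguments into Variants and forwards the name as a run-time string
    Variant opDispatch(string name, Args...)(Args args)
    {
        Variant[] vargs;
        foreach (a; args)
            vargs ~= Variant(a);
        return dynDispatch(name, vargs);
    }
}

void main()
{
    S s;
    assert(s.sum(1, 2, 3) == 6); // becomes dynDispatch("sum", ...)
}
```

Only the hand-off to dynDispatch commits to Variant[] in and Variant out; opDispatch itself keeps accepting anything.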
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 I don't think that's any difference at all. Javascript does use a sort 
 of Variant for all of its values.
 
 So if you want dynamic:
 
 a) have opDispatch forward the string to dynDispatch as a regular 
 (runtime) value, pack all parameters into Variants (or an array thereof 
 - probably better, or even one Variant that in turn packs an array - 
 feature recently implemented, yum), and return a Variant;
 
 b) have dynDispatch return a Variant which will be then returned by 
 opDispatch.
 
 It's not less powerful than discussed. It's more.
Yes, I think you're right that the parameters passed should be a Variant[], not variadic. BTW, folks, please when replying cut down the quoting!
Dec 01 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 I don't think that's any difference at all. Javascript does use a sort 
 of Variant for all of its values.

 So if you want dynamic:

 a) have opDispatch forward the string to dynDispatch as a regular 
 (runtime) value, pack all parameters into Variants (or an array 
 thereof - probably better, or even one Variant that in turn packs an 
 array - feature recently implemented, yum), and return a Variant;

 b) have dynDispatch return a Variant which will be then returned by 
 opDispatch.

 It's not less powerful than discussed. It's more.
Yes, I think you're right that the parameters passed should be a Variant[], not variadic.
Parameters to dynDispatch (the user-defined forwarding function), NOT opDispatch. opDispatch can take _anything_. Sorry if I'm repeating what you know already, but I am obsessing over how a small misunderstanding could end up hamstringing this very powerful feature. So: opDispatch has absolutely no restrictions except a string in the first static parameter position. Andrei
Dec 01 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Parameters to dynDispatch (the user-defined forwarding function), NOT 
 opDispatch. opDispatch can take _anything_.
 
 Sorry if I'm repeating what you know already, but I am obsessing over how a
 small misunderstanding could end up hamstringing this very powerful 
 feature.
 
 So: opDispatch has absolutely no restrictions except a string in the 
 first static parameter position.
I agree and I'm not misunderstanding you. I'm just saying how Variant should work, not opDispatch!
Dec 01 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 I forgot a biggie: with opDispatch you must know the return type at
 compile time.
 You could make the return type be Variant or something, but then that
 makes it quite different from a "regular" function.
 Whereas in a dynamic language like Javascript a dynamic method looks
 just like a regular method (because they're all dynamic, of course).
The Javascript implementations use variants for all variables, values, and function returns. So it isn't any different from defining opDispatch to take and return Variants. std.variant needs to be extended with opDispatch so it can execute a call operation on a Variant; then it will be very, very similar to Javascript.
Dec 01 2009
parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tue, Dec 01, 2009 at 11:07:15AM -0800, Walter Bright wrote:
 std.variant needs to be extended with opDispatch so it can execute a 
 call operation on a Variant, then it will be very very similar to 
 Javascript.
I looked into this for a few minutes this morning. Variant already stores delegates, so all that needs to be done is the existing opCall needs to be renamed (easy enough. Maybe it could become a constructor?) and then implemented to forward its arguments to the inner delegate. That's the tricky part, and I haven't had the time to figure that out yet. -- Adam D. Ruppe http://arsdnet.net
Dec 01 2009
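Until such a forwarding opCall exists, the manual version Adam describes is short (a sketch; variable names invented). Giving the literal an explicit, plain delegate type matters here, because get!T asks for the exact stored type:

```d
import std.variant;

void main()
{
    int counter;
    // an explicitly typed delegate, so that get!T below can name its type
    int delegate(int) bump = (int x) { counter += x; return counter; };

    Variant v = bump;                    // Variant already stores delegates
    auto dg = v.get!(int delegate(int)); // extract, then call manually;
    assert(dg(5) == 5);                  // a forwarding opCall would hide
    assert(dg(2) == 7);                  // these two steps from the caller
}
```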
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Lutger wrote:
 In javascript I understand it is like this:
 
 void yourMagicJavascriptFunction(T d)
 {
    d.foo(); // rewritten as d["foo"]
 }
d.foo is rewritten as d["foo"], d.foo() is rewritten as d["foo"]()
Dec 01 2009
prev sibling next sibling parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 14:05:16 +0200, Ary Borenszweig wrote:

 Ary Borenszweig wrote:
Actually, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not define duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at compile time. Basically: Dynamic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standardized way of doing that. What's the fun in that?
Yep, this would be another cool feature. There aren't that many languages that actually support both dynamic and static types. I guess you would indeed need a new type, something like your Dynamic, to define this behavior. With dynamic types, the opDispatch would be automatically rewritten by the compiler to look up the hash table. This way the types would look syntactically like built-in method calls but would act like e.g. python objects.
Dec 01 2009
prev sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
Ary Borenszweig wrote:
 Ary Borenszweig wrote:
Actually, just the first part of the example: void foo(Object o) { o.duckMethod(); } Can't do that because even if the real instance of Object has an opDispatch method, it'll give a compile-time error because Object does not define duckMethod. That's why this is something useful in scripting languages (or ruby, python, etc.): if the method is not defined at runtime it's an error unless you define the magic function that catches all. Can't do that in D because the lookup is done at compile time. Basically: Dynamic d = ...; d.something(1, 2, 3); is just a shortcut for doing d.opDispatch!("something")(1, 2, 3); (and it's actually what the compiler does) but it's a standardized way of doing that. What's the fun in that?
I take it back! It would be very cool to have something like ruby's dynamic attribute-based finders in D: http://api.rubyonrails.org/classes/ActiveRecord/Base.html
Dec 21 2009
prev sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-12-01 07:01:20 -0500, Ary Borenszweig <ary esperanto.org.ar> said:

 Foo foo = new Bar();
 foo.something();
 
 will not work as expected because something() will be bound to Foo's 
 opDispatch and it isn't a virtual method. Of course you can make 
 opDispatch invoke a virtual function and override that function in Bar, 
 but since there isn't a standard name or method for doing this everyone 
 will start doing it their way (I don't like it when there's no 
 standarization for well-known operations) and it looks like a hack.
Someone ought to make std.dispatch and create that standardized runtime dispatch system. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
Dec 01 2009
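The virtual-forwarding workaround Michel describes (opDispatch invoking a virtual function that Bar overrides) can be sketched like this; the dispatch method name and the string payloads are invented:

```d
import std.variant;

class Foo
{
    // opDispatch is a template and therefore never virtual: make it a thin
    // shim over an ordinary virtual method that subclasses can override
    Variant opDispatch(string name, Args...)(Args args)
    {
        Variant[] vargs;
        foreach (a; args)
            vargs ~= Variant(a);
        return dispatch(name, vargs);
    }

    Variant dispatch(string name, Variant[] args)
    {
        return Variant("Foo." ~ name);
    }
}

class Bar : Foo
{
    override Variant dispatch(string name, Variant[] args)
    {
        return Variant("Bar." ~ name);
    }
}

void main()
{
    Foo foo = new Bar;
    // something() binds statically to Foo's opDispatch, but the virtual
    // call inside still lands in Bar's override
    assert(foo.something() == "Bar.something");
}
```

A std.dispatch module of the kind suggested above would essentially standardize the dispatch name and signature used here.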
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 Tue, 01 Dec 2009 03:16:47 -0800, Walter Bright wrote:
 
 Ary Borenszweig wrote:
 Can you show examples of points 2, 3 and 4?
Have opDispatch look up the string in an associative array that returns an associated delegate, then call the delegate. The dynamic part will be loading up the associative array at run time.
This is not exactly what everyone of us expected. I'd like to have something like void foo(Object o) { o.duckMethod(); } foo(new Object() { void duckMethod() {} }); The feature isn't very dynamic since the dispatch rules are defined statically. The only thing you can do is rewire the associative array when forwarding statically precalculated dispatching.
Walter is right. But as it seems there is a lot of confusion about the feature, maybe we didn't define the feature (which is very general and powerful and as dynamic as you ever want to make it) in a palatable way. Ideas? Andrei
Dec 01 2009
parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 10:39:44 -0800, Andrei Alexandrescu wrote:

Walter is right. But as it seems there is a lot of confusion about the feature, maybe we didn't define the feature (which is very general and powerful and as dynamic as you ever want to make it) in a palatable way. Ideas?
Well, the most important feature of dynamic types in languages like Python is that you don't need to worry about types anywhere. Even with opDispatch you need to configure parametric types for parameters etc. A python coder wouldn't use D unless you can get rid of all type annotations.
Dec 01 2009
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Sat, 28 Nov 2009 18:36:07 -0500, Walter Bright  
<newshound1 digitalmars.com> wrote:

 And here it is (called opDispatch, Michel Fortin's suggestion):

 http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc 268&old=trunk%2Fsrc 267
I have a few questions:

1. How should the compiler restrict opDispatch's string argument? i.e. if I implement opDispatch, I'm normally expecting the string to be a symbol, but one can directly call opDispatch with any string (I can see clever usages which compile but for instance circumvent const or something), forcing me to always constrain the string argument, i.e. always have isValidSymbol(s) in my constraints. Should the compiler restrict the string to always being a valid symbol name (or operator, see question 2)?

2. Can we cover templated operators with opDispatch? I can envision something like this:

opDispatch(string s)(int rhs) if(s == "+") {...}

I'm still hesitant on operators only being definable through templates, since it makes for very ugly and complex function signatures, regardless of whether they are virtual or not. I would be all for it if you can make shortcuts like:

operator("+")(int rhs)

hm.. that gives me an idea. new post...

-Steve
Dec 01 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Steven Schveighoffer wrote:
 On Sat, 28 Nov 2009 18:36:07 -0500, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 And here it is (called opDispatch, Michel Fortin's suggestion):

 http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc 268&old=trunk%2Fsrc 267
I have a few questions: 1. How should the compiler restrict opDispatch's string argument? i.e. if I implement opDispatch, I'm normally expecting the string to be a symbol, but one can directly call opDispatch with any string (I can see clever usages which compile but for instance circumvent const or something), forcing me to always constrain the string argument, i.e. always have isValidSymbol(s) in my constraints. Should the compiler restrict the string to always being a valid symbol name (or operator, see question 2)?
Where in doubt, acquire more power :o). I'd say no checks; let user code do that or deal with those cases.
 2. Can we cover templated operators with opDispatch?  I can envision 
 something like this:
 
 opDispatch(string s)(int rhs) if(s == "+") {...}
How do you mean that? Andrei
Dec 01 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 13:50:38 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Steven Schveighoffer wrote:
 On Sat, 28 Nov 2009 18:36:07 -0500, Walter Bright  
 <newshound1 digitalmars.com> wrote:

 And here it is (called opDispatch, Michel Fortin's suggestion):

 http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc 268&old=trunk%2Fsrc 267
I have a few questions: 1. How should the compiler restrict opDispatch's string argument? i.e. if I implement opDispatch, I'm normally expecting the string to be a symbol, but one can directly call opDispatch with any string (I can see clever usages which compile but for instance circumvent const or something), forcing me to always constrain the string argument, i.e. always have isValidSymbol(s) in my constraints. Should the compiler restrict the string to always being a valid symbol name (or operator, see question 2)?
Where in doubt, acquire more power :o). I'd say no checks; let user code do that or deal with those cases.
It is unlikely that anything other than symbols are expected for opDispatch, I can't think of an example that would not want to put the isValidSymbol constraint on the method. An example of abuse:

struct caseInsensitiveWrapper(T)
{
    T _t;
    auto opDispatch(string fname, A...)(A args)
    {
        mixin("return _t." ~ toLower(fname) ~ "(args);");
    }
}

class C { int x; void foo(); }

caseInsensitiveWrapper!(C) ciw;
ciw._t = new C;
ciw.opDispatch!("x = 5, delete _t, _t.foo")();

I don't know if this is anything to worry about, but my preference as an author for caseInsensitiveWrapper is that this last line should never compile without any special requirements from me.
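Steven's worry maps directly onto dynamic languages. Below is a rough Python analogue of the wrapper above (a sketch with invented names; Python's `__getattr__` stands in for opDispatch), showing how validating the looked-up name blocks the abusive call:

```python
class CaseInsensitiveWrapper:
    """Forward attribute lookups to a wrapped object, lowercasing the
    name first -- a Python stand-in for the D caseInsensitiveWrapper."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # The guard Steven wants: only accept strings that are plain
        # identifiers, rejecting injected code like "x = 5, delete _t".
        if not name.isidentifier():
            raise AttributeError("not a valid symbol: %r" % name)
        return getattr(self._wrapped, name.lower())

class C:
    def foo(self):
        return "foo called"

ciw = CaseInsensitiveWrapper(C())
print(ciw.FOO())  # forwarded to C.foo -> "foo called"
print(getattr(ciw, "x = 5, delete _t, _t.foo", "rejected"))  # rejected
```

In D, the analogous guard would be a template constraint such as the `isValidSymbol(fname)` check Steven describes.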
 2. Can we cover templated operators with opDispatch?  I can envision  
 something like this:
  opDispatch(string s)(int rhs) if(s == "+") {...}
How do you mean that?
Isn't opBinary almost identical to opDispatch? The only difference I see is that opBinary works with operators as the 'symbol' and dispatch works with valid symbols. Is it important to distinguish between operators and custom dispatch? -Steve
Dec 01 2009
parent reply =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Steven Schveighoffer wrote:
 On Tue, 01 Dec 2009 13:50:38 -0500, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 Steven Schveighoffer wrote:
 On Sat, 28 Nov 2009 18:36:07 -0500, Walter Bright 
 <newshound1 digitalmars.com> wrote:

 And here it is (called opDispatch, Michel Fortin's suggestion):

 http://www.dsource.org/projects/dmd/changeset?new=trunk%2Fsrc 268&old=trunk%2Fsrc 267
I have a few questions: 1. How should the compiler restrict opDispatch's string argument? i.e. if I implement opDispatch, I'm normally expecting the string to be a symbol, but one can directly call opDispatch with any string (I can see clever usages which compile but for instance circumvent const or something), forcing me to always constrain the string argument, i.e. always have isValidSymbol(s) in my constraints. Should the compiler restrict the string to always being a valid symbol name (or operator, see question 2)?
Where in doubt, acquire more power :o). I'd say no checks; let user code do that or deal with those cases.
It is unlikely that anything other than symbols are expected for opDispatch, I can't think of an example that would not want to put the isValidSymbol constraint on the method. An example of abuse:

struct caseInsensitiveWrapper(T)
{
    T _t;
    auto opDispatch(string fname, A...)(A args)
    {
        mixin("return _t." ~ toLower(fname) ~ "(args);");
    }
}

class C { int x; void foo(); }

caseInsensitiveWrapper!(C) ciw;
ciw._t = new C;
ciw.opDispatch!("x = 5, delete _t, _t.foo")();

I don't know if this is anything to worry about, but my preference as an author for caseInsensitiveWrapper is that this last line should never compile without any special requirements from me.
 2. Can we cover templated operators with opDispatch?  I can envision 
 something like this:
  opDispatch(string s)(int rhs) if(s == "+") {...}
How do you mean that?
Isn't opBinary almost identical to opDispatch? The only difference I see is that opBinary works with operators as the 'symbol' and dispatch works with valid symbols. Is it important to distinguish between operators and custom dispatch? -Steve
opBinary is a binary operator, opDispatch can be anything. I think they should be kept separate.
Dec 01 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 15:06:27 -0500, Pelle Månsson  
<pelle.mansson gmail.com> wrote:

 Steven Schveighoffer wrote:
  Isn't opBinary almost identical to opDispatch?  The only difference I  
 see is that opBinary works with operators as the 'symbol' and dispatch  
 works with valid symbols.  Is it important to distinguish between  
 operators and custom dispatch?
  -Steve
opBinary is a binary operator, opDispatch can be anything. I think they should be kept separate.
You could say the same thing about dynamic properties. How come we don't split those out as opProperty?

opDispatch can do opBinary, it's a subset. It makes no sense to define

opDispatch(string s)() if(s == "+")

I agree, but I don't see any reason why opBinary(string s)() would fail to compile...

-Steve
Dec 01 2009
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 12:38 PM, Steven Schveighoffer
<schveiguy yahoo.com> wrote:
 On Tue, 01 Dec 2009 15:06:27 -0500, Pelle Månsson <pelle.mansson gmail.com>
 wrote:

 Steven Schveighoffer wrote:
  Isn't opBinary almost identical to opDispatch?  The only difference I
 see is that opBinary works with operators as the 'symbol' and dispatch works
 with valid symbols.  Is it important to distinguish between operators and
 custom dispatch?
  -Steve
opBinary is a binary operator, opDispatch can be anything. I think they should be kept separate.
 You could say the same thing about dynamic properties.  How come we don't
 split those out as opProperty?
That's because of what Andrei pointed out:  &a.b . The compiler can't tell if you want a delegate to the method b, or the address of a property b.
 opDispatch can do opBinary, it's a subset.  It makes no sense to define
 opDispatch(string s)() if(s == "+") I agree, but I don't see any reason why
 opBinary(string s)() would fail to compile...
I don't get your point. It's the compiler that decides to call opBinary and it's only gonna decide to do so for binary operators. Even if you pretend opBinary can accept any string. --bb
Dec 01 2009
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 16:01:41 -0500, Bill Baxter <wbaxter gmail.com> wrote:

 On Tue, Dec 1, 2009 at 12:38 PM, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:
 On Tue, 01 Dec 2009 15:06:27 -0500, Pelle Månsson  
 <pelle.mansson gmail.com>
 wrote:

 Steven Schveighoffer wrote:
  Isn't opBinary almost identical to opDispatch?  The only difference I
 see is that opBinary works with operators as the 'symbol' and  
 dispatch works
 with valid symbols.  Is it important to distinguish between operators  
 and
 custom dispatch?
  -Steve
opBinary is a binary operator, opDispatch can be anything. I think they should be kept separate.
You could say the same thing about dynamic properties. How come we don't split those out as opProperty?
That's because of what Andrei pointed out: &a.b . The compiler can't tell if you want a delegate to the method b, or the address of a property b.
Huh?
 opDispatch can do opBinary, it's a subset.  It makes no sense to define
 opDispatch(string s)() if(s == "+") I agree, but I don't see any reason  
 why
 opBinary(string s)() would fail to compile...
I don't get your point. It's the compiler that decides to call opBinary and it's only gonna decide to do so for binary operators. Even if you pretend opBinary can accept any string.
My point is, the set of strings passed by the compiler to opBinary is completely disjoint from the set of strings passed by the compiler to opDispatch. So the only reason to keep them separate is because you want to force people to split their code between operators and methods/properties. There is no technical reason we need to keep them separate or to combine them that I can see. -Steve
Dec 01 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 1:10 PM, Steven Schveighoffer
<schveiguy yahoo.com> wrote:
 On Tue, 01 Dec 2009 16:01:41 -0500, Bill Baxter <wbaxter gmail.com> wrote:

 On Tue, Dec 1, 2009 at 12:38 PM, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:
 On Tue, 01 Dec 2009 15:06:27 -0500, Pelle Månsson
 <pelle.mansson gmail.com>
 wrote:

 Steven Schveighoffer wrote:
  Isn't opBinary almost identical to opDispatch?  The only difference I
 see is that opBinary works with operators as the 'symbol' and dispatch works
 with valid symbols.  Is it important to distinguish between operators and
 custom dispatch?
  -Steve
opBinary is a binary operator, opDispatch can be anything. I think they should be kept separate.
 You could say the same thing about dynamic properties.  How come we don't
 split those out as opProperty?
That's because of what Andrei pointed out:  &a.b . The compiler can't tell if you want a delegate to the method b, or the address of a property b.
 Huh?
If you have this:

struct S
{
    int opProperty(string s)() if(s=="b") { ... }
    int opDispatch(string s)() if(s=="b") { ... }
}
S a;
auto x = &a.b;

which one are you talking about?  The property a.b or the method a.b()? That's why you can't split out properties as opProperty.

But actually maybe this is no longer true?  I'm not sure what the  property is going to do to how we refer to a function as a piece of data.  Maybe D won't require the & any more.  Then &a.b could only refer to the property.

So anyway, I think your argument is bad as things currently stand. You asked if opBinary and opDispatch are separate, then why not opProperty.  Well, there's an ambiguity if you split off opProperty, that's why not.  There isn't any ambiguity in splitting off opBinary.

 opDispatch can do opBinary, it's a subset.  It makes no sense to define
 opDispatch(string s)() if(s == "+") I agree, but I don't see any reason why
 opBinary(string s)() would fail to compile...
 I don't get your point.  It's the compiler that decides to call opBinary
 and it's only gonna decide to do so for binary operators.
 Even if you pretend opBinary can accept any string.
 My point is, the set of strings passed by the compiler to opBinary is
 completely disjoint from the set of strings passed by the compiler to
 opDispatch.  So the only reason to keep them separate is because you want to
 force people to split their code between operators and methods/properties.
 There is no technical reason we need to keep them separate or to combine
 them that I can see.
How about this: given only a catch-all opDispatch which implements dynamic dispatch, the compiler cannot statically determine if operators are really implemented or not.  Since the list of operators is always finite, it makes sense to have them in a separate "namespace" of sorts.  That way if you implement a catch-all opBinary, you're only saying that you implement all /operators/ not all possible methods.  And vice versa, you can specify that you only implement some operators, but still have dynamic dispatch that forwards all named methods.

Perhaps, though, there should be a rule where opBinary("+") is tried first, and if not defined then opDispatch("+") could be tried.  Not sure if it's worth the mental burden of another rule, though.

--bb
Dec 01 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 17:24:30 -0500, Bill Baxter <wbaxter gmail.com> wrote:

 On Tue, Dec 1, 2009 at 1:10 PM, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:
 On Tue, 01 Dec 2009 16:01:41 -0500, Bill Baxter <wbaxter gmail.com>  
 wrote:

 On Tue, Dec 1, 2009 at 12:38 PM, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:
 On Tue, 01 Dec 2009 15:06:27 -0500, Pelle Månsson
 <pelle.mansson gmail.com>
 wrote:

 Steven Schveighoffer wrote:
  Isn't opBinary almost identical to opDispatch?  The only  
 difference I
 see is that opBinary works with operators as the 'symbol' and  
 dispatch
 works
 with valid symbols.  Is it important to distinguish between  
 operators
 and
 custom dispatch?
  -Steve
opBinary is a binary operator, opDispatch can be anything. I think they should be kept separate.
You could say the same thing about dynamic properties. How come we don't split those out as opProperty?
That's because of what Andrei pointed out: &a.b . The compiler can't tell if you want a delegate to the method b, or the address of a property b.
Huh?
If you have this:

struct S
{
    int opProperty(string s)() if(s=="b") { ... }
    int opDispatch(string s)() if(s=="b") { ... }
}
S a;
auto x = &a.b;

which one are you talking about? The property a.b or the method a.b()? That's why you can't split out properties as opProperty.
This seems like an ambiguity. You cannot define both the property b and the method b.
 But actually maybe this is no longer true?  I'm not sure what the
  property is going to do to how we refer to a function as a piece of
 data.  Maybe D won't require the & any more.  Then &a.b could only
 refer to the property.

 So anyway, I think your argument is bad as things currently stand.
 You asked if opBinary and opDispatch are separate, then why not
 opProperty.  Well, there's an ambiguity if you split off opProperty,
 that's why not.  There isn't any ambiguity in splitting off opBinary.
FTR, I'm not pushing this, just pointing out the inconsistency.
 opDispatch can do opBinary, it's a subset.  It makes no sense to  
 define
 opDispatch(string s)() if(s == "+") I agree, but I don't see any  
 reason
 why
 opBinary(string s)() would fail to compile...
I don't get your point. It's the compiler that decides to call opBinary and it's only gonna decide to do so for binary operators. Even if you pretend opBinary can accept any string.
My point is, the set of strings passed by the compiler to opBinary is completely disjoint from the set of strings passed by the compiler to opDispatch. So the only reason to keep them separate is because you want to force people to split their code between operators and methods/properties. There is no technical reason we need to keep them separate or to combine them that I can see.
How about this: given only a catch-all opDispatch which implements dynamic dispatch, the compiler cannot statically determine if operators are really implemented or not.
Why does it have to? Proposed implementation:

compiler sees 'a + b'
compiler rewrites 'a.opBinary!"+"(b)'
does it compile? If yes, then a implements the operator.

With opDispatch:

compiler sees 'a + b'
compiler rewrites 'a.opDispatch!"+"(b)'
does it compile? If yes, then a implements the operator.

I don't see the problem.
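The rewrite-and-check scheme can be sketched in Python (hypothetical names; `__add__` plays the role of the compiler's rewrite of 'a + b', and a single string-keyed method plays the catch-all opDispatch):

```python
class Dispatching:
    """Route every binary operator through one string-keyed catch-all,
    the way 'a + b' would be rewritten to a.opDispatch!"+"(b)."""
    def __init__(self, val):
        self.val = val

    def dispatch(self, op, rhs):
        table = {
            "+": lambda a, b: Dispatching(a + b),
            "-": lambda a, b: Dispatching(a - b),
        }
        if op not in table:
            # the analogue of "does it compile? no -- not implemented"
            raise TypeError("operator %r not implemented" % op)
        return table[op](self.val, rhs.val)

    def __add__(self, rhs):  # the "compiler rewrite" for a + b
        return self.dispatch("+", rhs)

print((Dispatching(2) + Dispatching(3)).val)  # 5
```

The point being illustrated: whether the hook is called opBinary or opDispatch, the consumer side only cares that the rewritten call resolves.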
 Since the list of operators
 is always finite, it makes sense to have them in a separate
 "namespace" of sorts.   That way if you implement a catch-all
 opBinary, you're only saying that you implement all /operators/ not
 all possible methods.  And vice versa, you can specify that you only
 implement some operators, but still have dynamic dispatch that
 forwards all named methods.
opDispatch(string s, T)(T arg) if(isOperator(s))
opDispatch(string s, T...)(T arg) if(isSymbol(s))

BTW, you are already going to want to do that for both to prevent abuse, see my original reply in this sub-thread.

-Steve
Dec 01 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 3:01 PM, Steven Schveighoffer
<schveiguy yahoo.com> wrote:
 How about this: given only a catch-all opDispatch which implements
 dynamic dispatch, the compiler cannot statically determine if
 operators are really implemented or not.
 Why does it have to? proposed implementation:

 compiler sees 'a + b'
 compiler rewrites 'a.opBinary!"+"(b)'
 does it compile?  If yes, then a implements the operator.

 With opDispatch:

 compiler sees 'a + b'
 compiler rewrites 'a.opDispatch!"+"(b)'
 does it compile?  If yes, then a implements the operator.

 I don't see the problem.
 Since the list of operators
 is always finite, it makes sense to have them in a separate
 "namespace" of sorts.  That way if you implement a catch-all
 opBinary, you're only saying that you implement all /operators/ not
 all possible methods.  And vice versa, you can specify that you only
 implement some operators, but still have dynamic dispatch that
 forwards all named methods.
 opDispatch(string s, T)(T arg) if(isOperator(s))
 opDispatch(string s, T...)(T arg) if(isSymbol(s))
Good counterpoints to my argument.  So I give up on that line.

Here's another, how do you implement the opBinary_r operators with opDispatch?

--bb
Dec 01 2009
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
Bill Baxter Wrote:

 Good counterpoints to my argument.  So I give up on that line.
 
 Here's another, how do you implement the opBinary_r operators with opDispatch?
Kinda kooky, but what about this:

a + b -> b.opDispatch!("r+")(a)

-Steve
Dec 01 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 4:22 PM, Steven Schveighoffer
<schveiguy yahoo.com> wrote:
 Bill Baxter Wrote:

 Good counterpoints to my argument.  So I give up on that line.

 Here's another, how do you implement the opBinary_r operators with opDispatch?

 Kinda kooky, but what about this:

 a + b -> b.opDispatch!("r+")(a)
That's what I had in mind too, so I guess it's not so hard to guess. Really the _r convention is also kooky.  We're just more used to that. So this isn't really a strong argument for separating opBinary out of opDispatch.

But that is part of why I was asking about opIn -- if opIn_r's spelling remains "opIn_r" then we will have both conventions to deal with.  Not so good.  But if that one's changing to opDispatch!"in" also, then we'll need opSomething!"rin".  Which is kookier than "r+", I think, but at least maintains consistency.

But there is a problem.  It means you can't opDispatch on a method called "rin". So I think there would have to be some non-symbol char in the "r" prefix used.  Maybe "r:+", "r:+=", "r:in".  Or just a space -- "r +", "r in", ... etc. But now it's a notch less intuitive.

--bb
Dec 01 2009
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
Bill Baxter Wrote:

 On Tue, Dec 1, 2009 at 4:22 PM, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:
 Bill Baxter Wrote:

 Good counterpoints to my argument. So I give up on that line.

 Here's another, how do you implement the opBinary_r operators with opDispatch?
 Kinda kooky, but what about this:

 a + b -> b.opDispatch!("r+")(a)
That's what I had in mind too, so I guess it's not so hard to guess. Really the _r convention is also kooky. We're just more used to that. So this isn't really a strong argument for separating opBinary out of opDispatch.
Another argument for at least keeping opBinary and opBinary_r to be defined by the same function -- commutative operators can be defined once:

T opDispatch(string s)(T x) if(s == "+" || s == "r+") { return T(this.val + x.val); }
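Python reaches the same economy with its reverse-operator hooks: the reverse form can simply alias the forward one. A minimal sketch (invented names):

```python
class Commutative:
    """A commutative '+' written once: __radd__ reuses __add__, much
    like accepting both "+" and "r+" in a single opDispatch body."""
    def __init__(self, val):
        self.val = val

    def __add__(self, other):
        other_val = other.val if isinstance(other, Commutative) else other
        return Commutative(self.val + other_val)

    __radd__ = __add__  # the reverse operator shares the same definition

print((Commutative(1) + 2).val)  # 3
print((2 + Commutative(1)).val)  # 3: int defers to our __radd__
```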
 
 But that is part of why I was asking about opIn -- if opIn_r's
 spelling remains "opIn_r" then we will have both conventions to deal
 with.  Not so good.  But if that one's changing to opDispatch!"in"
 also, then we'll need opSomething!"rin".  Which is kookier than "r+",
 I think, but at least maintains consistency.
 
 But there is a problem.  It means you can't opDispatch on a method called "rin".
 So I think there would have to be some non-symbol char in the "r"
 prefix used.  Maybe "r:+", "r:+=", "r:in".  Or just a space -- "r +",
 "r in", ... etc.
 But now it's a notch less intuitive.
opIn is definitely a weird one. Normally, you only want to define the reverse version. Like you said, you can't use "rin" because rin isn't a keyword. I think we can probably come up with a non-symbol representation to denote "Reverse" that's intuitive or at least memorable enough.

other ideas to ponder:

"op.r" (no need for ..r because opDot doesn't have a reverse version)
"op this" denoting that 'this' is on the right hand side

-Steve
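For what it's worth, Python settled these same questions by reserving an "r" prefix for reverse operators and a dedicated hook for 'in', so user method names can never collide with them. A small sketch (invented names):

```python
class Bag:
    """Reverse '+' and membership via Python's reserved hook names."""
    def __init__(self, items):
        self.items = list(items)

    def __contains__(self, x):   # 'x in bag' -- the opIn_r analogue
        return x in self.items

    def __radd__(self, other):   # '3 + bag' -- the "r+" analogue
        return Bag(self.items + [other])

b = Bag([1, 2])
print(1 in b)         # True
print((3 + b).items)  # [1, 2, 3]
```

The double-underscore spelling is Python's version of the "non-symbol char in the prefix" idea: the dispatch namespace is kept disjoint from ordinary identifiers by convention.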
Dec 01 2009
prev sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 02 Dec 2009 00:01:41 +0300, Bill Baxter <wbaxter gmail.com> wrote:

 On Tue, Dec 1, 2009 at 12:38 PM, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:
 On Tue, 01 Dec 2009 15:06:27 -0500, Pelle Månsson
 <pelle.mansson gmail.com>
 wrote:

 Steven Schveighoffer wrote:
  Isn't opBinary almost identical to opDispatch?  The only difference I
 see is that opBinary works with operators as the 'symbol' and dispatch works
 with valid symbols.  Is it important to distinguish between operators and
 custom dispatch?
  -Steve
 opBinary is a binary operator, opDispatch can be anything. I think they
 should be kept separate.
 You could say the same thing about dynamic properties.  How come we don't
 split those out as opProperty?
 That's because of what Andrei pointed out:  &a.b .
 The compiler can't tell if you want a delegate to the method b, or the
 address of a property b.
Technically, you are wrong. There is the same ambiguity without function overloads:

void foo(int a);
void foo(float a);

auto dg = &foo; // which one of the two overloads is chosen and why?

Resolving properties is much easier: property and function names can't overlap, i.e. you can't have a property *and* any function with the same name.
Dec 02 2009
prev sibling parent Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 1:01 PM, Bill Baxter <wbaxter gmail.com> wrote:
 On Tue, Dec 1, 2009 at 12:38 PM, Steven Schveighoffer
 <schveiguy yahoo.com> wrote:
 On Tue, 01 Dec 2009 15:06:27 -0500, Pelle Månsson <pelle.mansson gmail.com>
 wrote:

 Steven Schveighoffer wrote:
  Isn't opBinary almost identical to opDispatch?  The only difference I
 see is that opBinary works with operators as the 'symbol' and dispatch works
 with valid symbols.  Is it important to distinguish between operators and
 custom dispatch?
  -Steve
 opBinary is a binary operator, opDispatch can be anything. I think they
 should be kept separate.
 You could say the same thing about dynamic properties.  How come we don't
 split those out as opProperty?
 That's because of what Andrei pointed out:  &a.b .
 The compiler can't tell if you want a delegate to the method b, or the
 address of a property b.
... but maybe the syntax for "the function itself" should be distinct from "dereference" anyway. I can't think of any reason the two need to use the same syntax other than that &func was called a "function pointer" back in C. There's no case for "generic code" needing it to be the same syntax as far as I can tell. --bb
Dec 01 2009
prev sibling next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el 27 de noviembre a las 15:30 me escribiste:
 One thing Java and Python, Ruby, etc., still hold over D is dynamic
 classes, i.e. classes that are only known at runtime, not compile
 time. In D, this:
I like the feature, but I don't understand where is the duck-typing in all this. I think you're confusing duck-typing with dynamic-typing or I'm missing something? -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Ambition makes you look pretty ugly
Nov 29 2009
next sibling parent reply retard <re tard.com.invalid> writes:
Sun, 29 Nov 2009 14:59:27 -0300, Leandro Lucarella wrote:

 Walter Bright, el 27 de noviembre a las 15:30 me escribiste:
 One thing Java and Python, Ruby, etc., still hold over D is dynamic
 classes, i.e. classes that are only known at runtime, not compile time.
 In D, this:
I like the feature, but I don't understand where is the duck-typing in all this. I think you're confusing duck-typing with dynamic-typing or I'm missing something?
Well it seems like the duck typing happens all at compile time with the new feature. You get some of the features of true dynamic languages, but not all. You can't really write python/ruby style dynamic code with it, e.g.

class foo {
    void sayHello() { print("hello"); }
}

auto bar = new foo();

try {
    bar.sayBye();
} catch(MethodNotFoundException e) { ... }

auto bye_routine(Object o) {
    return o.sayBye();
}

bar.sayBye = { bar.sayHello(); return "and bye"; }

println(bye_routine(bar));

Of course this is inefficient and error prone, but that's what it's all about in dynamic languages. You get tons of flexibility.
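retard's sketch above is runnable almost verbatim in Python, which shows the behavior being asked for (names adapted to Python conventions):

```python
class Foo:
    def say_hello(self):
        return "hello"

bar = Foo()

# calling a missing method fails at run time, not compile time
try:
    bar.say_bye()
except AttributeError:  # Python's MethodNotFoundException analogue
    pass

# attach the missing method to the live object afterwards
bar.say_bye = lambda: bar.say_hello() + " and bye"

def bye_routine(o):
    return o.say_bye()

print(bye_routine(bar))  # hello and bye
```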
Nov 29 2009
parent Leandro Lucarella <llucax gmail.com> writes:
retard, el 29 de noviembre a las 18:27 me escribiste:
 Sun, 29 Nov 2009 14:59:27 -0300, Leandro Lucarella wrote:
 
 Walter Bright, el 27 de noviembre a las 15:30 me escribiste:
 One thing Java and Python, Ruby, etc., still hold over D is dynamic
 classes, i.e. classes that are only known at runtime, not compile time.
 In D, this:
I like the feature, but I don't understand where is the duck-typing in all this. I think you're confusing duck-typing with dynamic-typing or I'm missing something?
Well it seems like the duck typing happens all at compile time with the new feature. You get some of the features of true dynamic languages, but not all. You can't really write python/ruby style dynamic code with it, e.g.

class foo {
    void sayHello() { print("hello"); }
}

auto bar = new foo();

try {
    bar.sayBye();
} catch(MethodNotFoundException e) { ... }

auto bye_routine(Object o) {
    return o.sayBye();
}

bar.sayBye = { bar.sayHello(); return "and bye"; }
I guess this is a proposed syntax or something right? I guess you're omitting the opDispatch() implementation on purpose. Is property syntax really allowed to assign a new method?
 println(bye_routine(bar));
 
 Of course this is inefficient and error prone but that's what's it all 
 about in dynamic languages. You get tons of flexibility.
I see. As I said in the reply to Walter, I think we need more support if we really want to make dynamic typing (and duck typing) pleasant in D. There should be a better way to ask if an object has some method than trying to use it and catching an exception (like Python's hasattr()).

It would be very nice to be able to add methods (and properties!) dynamically to an object too; this is very common in dynamic languages.

I know all this can be done, but I think we need a standard facility to avoid everybody implementing their own dynamic typing "framework", which would be a mess to use and hard to interoperate between different implementations. It doesn't have to be a language feature though, if it can be implemented in Phobos.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Una mujer en bicicleta, con sombrero de paja, es la más flagrante
violación a las leyes de la aerodinamia.
	-- Ricardo Vaporeso. 21 de Septiembre de 1917.
Nov 29 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 Walter Bright, el 27 de noviembre a las 15:30 me escribiste:
 One thing Java and Python, Ruby, etc., still hold over D is dynamic
 classes, i.e. classes that are only known at runtime, not compile
 time. In D, this:
I like the feature, but I don't understand where is the duck-typing in all this. I think you're confusing duck-typing with dynamic-typing or I'm missing something?
Perhaps I am using the term wrong, but I figure it's duck typing if you can go ahead and try to access methods of an object, have them checked at runtime, and get some kind of method-not-found exception thrown if they aren't there.

With this, it should be possible to construct a type at runtime and have it work with statically compiled code. This should work very nicely for implementing a plug-in architecture.
Nov 29 2009
parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el 29 de noviembre a las 13:52 me escribiste:
 Leandro Lucarella wrote:
Walter Bright, el 27 de noviembre a las 15:30 me escribiste:
One thing Java and Python, Ruby, etc., still hold over D is dynamic
classes, i.e. classes that are only known at runtime, not compile
time. In D, this:
I like the feature, but I don't understand where is the duck-typing in all this. I think you're confusing duck-typing with dynamic-typing or I'm missing something?
Perhaps I am using the term wrong, but I figure it's duck-typing if you can go ahead and try to access methods of an object, and they are checked at runtime and throw some kind of method-not-found exception if they aren't there.
OK, now I see what you mean. Perhaps it would be helpful to have a standard exception for nonexistent methods to support that idiom; if we don't, every library will create its own and it will be a mess. A standard way to test whether a method exists would be nice too; something like Python's getattr(), setattr() and hasattr() can be a start.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Demasiado lento para una estrella fugaz
Demasiado limpio para lo que vos acostumbras
Demasiado claro para tanta oscuridad
Demasiados sueños, poca realidad
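The three Python built-ins Leandro mentions cover the whole idiom: probe for a member, fetch it by name, and attach a new one at run time. A short illustration (invented class and names):

```python
class Plugin:
    def greet(self):
        return "hi"

p = Plugin()
print(hasattr(p, "greet"))    # True: test for a method without calling it
print(hasattr(p, "missing"))  # False: no exception needed to find out
print(getattr(p, "greet")())  # "hi": fetch by string name, then call

# attach a new method-like member to the live object at run time
setattr(p, "missing", lambda: "now I exist")
print(p.missing())            # "now I exist"
```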
Nov 29 2009
prev sibling next sibling parent reply Roman Ivanov <x y.z> writes:
Walter Bright Wrote:

 dsimcha wrote:
 Right, but sometimes (though certainly not always) it's better to provide a
 meta-feature that solves a whole bunch of problems (like better templates) and
 then solve the individual problems at the library level, rather than add a
 language feature specifically to address each need.
Yup. The hard part, though, is figuring out what the magic set of seminal features should be.
 One thing D does very well is
 allow you to do the same kind of metaprogramming solutions you would do in C++,
 except that the result doesn't suck.  For example, std.range implements
 functional-style lazy evaluation as a library, and does it well.  The point is
 that, if you can't deal with the complexity of having real templates, you
better
 be prepared for the complexity created by not having them.
Right. A "simple" language pushes the complexity onto the programmer, so he has to write complicated code instead. D programs tend to be dramatically shorter than the equivalent C++ ones.
 Having never done it before, I really cannot imagine how people get any work
done
 in a language that doesn't have either duck typing or good templates.  It's
just

end
 up adding tons of ad-hoc workarounds for lacking either of these as
 well-integrated language features.  The best/worst example is auto-boxing.
I tried programming in Java. A friend of mine had an unexpected insight. He used Java a lot at a major corporation. He said an IDE was indispensable because with "one click" you could generate a "hundred lines of code". The light bulb came on. Java makes up for its lack of expressiveness by putting that expressiveness into the IDE! In D, you generate that hundred lines of code with templates and mixins.
I'm a Java programmer. IMO, the biggest problem with Java is not the language's expressiveness, but poorly written APIs and badly selected abstractions. The reason I can't program in Java without an IDE is (usually) not because I need to generate tons of code, but because I'm constantly looking up new method/class names, looking up packages to import, and refactoring.

A lot of things that require extensive code generation do so simply because they are badly designed. Web services (SOAP based) are a good example of that. In the end, it's just reading and writing text to a socket. It could be very simple, but it isn't.

An area where I find myself using code generation a lot is exception handling. I prefer to write my code without handling exceptions at all, and then let the IDE generate try/catch blocks. I tweak them afterward. Thing is, Java supports runtime exceptions that don't cascade into kilobytes of mostly useless code. People just don't use them that often.

My point is, language is one thing, but "language culture" is another. For some reason Java bred a culture that encourages bloated, counter-intuitive, "enterprise" solutions. It's not inherent in the language. It has more to do with the companies that use it, core API design and the design of popular libraries. At least that's the way I see it.
Nov 30 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Roman Ivanov wrote:
 My point is, language is one thing, but "language culture" is
 another. For some reason Java bred a culture that encourages bloated,
 counter-intuitive, "enterprise" solutions. It's not inherent in the
 language. It has more to do with the companies that use it, core API
 design and design of popular libraries. At least that's the way I see
 it.
I know what you mean. I even found the file I/O Java library routines to be impenetrable.
Nov 30 2009
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Roman Ivanov (x y.z)'s article
 Walter Bright Wrote:
 dsimcha wrote:
 Right, but sometimes (though certainly not always) it's better to provide a
 meta-feature that solves a whole bunch of problems (like better templates) and
 then solve the individual problems at the library level, rather than add a
 language feature specifically to address each need.
Yup. The hard part, though, is figuring out what the magic set of seminal features should be.
 One thing D does very well is
 allow you to do the same kind of metaprogramming solutions you would do in C++,
 except that the result doesn't suck.  For example, std.range implements
 functional-style lazy evaluation as a library, and does it well.  The point is
 that, if you can't deal with the complexity of having real templates, you
better
 be prepared for the complexity created by not having them.
Right. A "simple" language pushes the complexity onto the programmer, so he has to write complicated code instead. D programs tend to be dramatically shorter than the equivalent C++ one.
 Having never done it before, I really cannot imagine how people get any work
done
 in a language that doesn't have either duck typing or good templates.  It's
just

end
 up adding tons of ad-hoc workarounds for lacking either of these as
 well-integrated language features.  The best/worst example is auto-boxing.
I tried programming in Java. A friend of mine had an unexpected insight. He used Java a lot at a major corporation. He said an IDE was indispensable because with "one click" you could generate a "hundred lines of code". The light bulb came on. Java makes up for its lack of expressiveness by putting that expressiveness into the IDE! In D, you generate that hundred lines of code with templates and mixins.
I'm a Java programmer. IMO, the biggest problem with Java is not the language
expressiveness, but poorly written APIs and badly selected abstractions. The reason I can't program in Java without an IDE is (usually) not because I need to generate tons of code, but because I'm constantly looking up new method/class names, looking up packages to export and refactoring.
 A lot of things that require extensive code generation do so simply because
they
are badly designed. Web services (SOAP based) are a good example of that. In the end, it's just reading and writing text to a socket. It could be very simple, but it isn't.
 An area when I find myself using code generation a lot is exception handling. I
prefer to write my code without handling exceptions at all, and then let API to generate try/catch blocks. I tweak then afterward. Thing is, Java supports runtime exceptions that don't cascade in kilobytes of mostly useless code. People just don't use them that often.
 My point is, language is one thing, but "language culture" is another. For some
reason Java bred a culture that encourages bloated, counter-intuitive, "enterprise" solutions. It's not inherent in the language. It has more to do with the companies that use it, core API design and design of popular libraries. At least that's the way I see it.

Yes, but in my (possibly somewhat uninformed) opinion, the root cause of this is that Java just doesn't provide many tools for managing complexity. Complexity has to go somewhere, and about the only tool Java provides for managing it is OO-style class hierarchies. I have nothing against OO, classes, interfaces, inheritance, etc. It's just that it's not the right tool for every job. If your problem doesn't fit neatly into an OO-style class hierarchy, it will be made to fit sloppily.

In a more multi-paradigm language, you might use templates, or duck typing, or higher-order functions, or closures, or eval statements, or mixins, or macros, or whatever complexity management system maps best to the problem you're trying to solve. In Java, by going overboard on making the core language simple, you end up pushing all the complexity into the APIs.
Nov 30 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.
Yup, and that's the underlying problem with "simple" languages. Complicated code.
Nov 30 2009
parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.
Yup, and that's the underlying problem with "simple" languages. Complicated code.
I think users of scripting languages would disagree with you.
Dec 01 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.
Yup, and that's the underlying problem with "simple" languages. Complicated code.
I think users of scripting languages would disagree with you.
PHP?
Dec 01 2009
parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 01:08:11 -0800, Walter Bright wrote:

 grauzone wrote:
 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple, you
 end up pushing all the complexity into the APIs.
Yup, and that's the underlying problem with "simple" languages. Complicated code.
I think users of scripting languages would disagree with you.
PHP?
PHP is a terrible joke built by a novice: http://tnx.nl/php.html

Fans of e.g. Ruby, Python etc. could argue that their language has fewer corner cases and more uniform features, which makes development more of a joy and code less verbose. Instead of two variations, in many cases there is only one choice, e.g.:

- type known at runtime / compile time -> known at runtime
- generic type / ordinary type -> runtime dynamic type
- primitives / objects -> everything is an object
- special set of built-in operators / normal methods -> everything is a message
- static classes / objects -> objects (some are singletons but can inherit from interfaces etc., unlike statics in D)
- free functions / methods / static methods -> methods (the modules are singleton objects -> free functions are module methods)
- functions / delegates -> functions
- special set of built-in control structures -> simple primitives (e.g. recursion & library-defined structures)
- statements / expressions -> everything is an expression (this unifies e.g. if-then-else and a ? b : c)
- built-in AA, array, list etc. -> library-defined collections
- dozens of primitive number types -> fixed-size int & float (e.g. 32-bit int and 64-bit float), arbitrary-precision int & float (rational type)

Overall these simplifications don't remove any crucial high-level language features; in fact they make the code simpler and shorter. For instance there isn't high-level code that can only be written with 8-bit byte primitives, static methods or closures, but not with 32-bit generic ints, singletons, and generic higher-order functions. The only thing you lose is some type safety and efficiency.
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 Overall these simplifications don't remove any crucial high level 
 language features, in fact they make the code simpler and shorter. For 
 instance there isn't high level code that can only be written with 8-bit 
 byte primitives, static methods or closures, but not with 32-bit generic 
 ints, singletons, and generic higher order functions. The only thing you 
 lose is some type safety and efficiency.
I'm no expert on Python, but there are some things one gives up with it:

1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.
2. as you mentioned, there's the performance problem. It's fine if you don't need performance, but once you do, the complexity abruptly goes way up.
3. no contract programming (it's very hard to emulate contract inheritance)
4. no metaprogramming
5. simple interfacing to C
6. scope guard (transactional processing); Python has the miserable try-catch-finally paradigm
7. static verification
8. RAII
9. versioning
10. ability to manage resources directly
11. inline assembler
12. constants
Dec 01 2009
next sibling parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Walter Bright wrote:
 retard wrote:
 Overall these simplifications don't remove any crucial high level 
 language features, in fact they make the code simpler and shorter. For 
 instance there isn't high level code that can only be written with 
 8-bit byte primitives, static methods or closures, but not with 32-bit 
 generic ints, singletons, and generic higher order functions. The only 
 thing you lose is some type safety and efficiency.
I'm no expert on Python, but there are some things one gives up with it:

1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.
2. as you mentioned, there's the performance problem. It's fine if you don't need performance, but once you do, the complexity abruptly goes way up.
3. no contract programming (it's very hard to emulate contract inheritance)
4. no metaprogramming
5. simple interfacing to C
6. scope guard (transactional processing); Python has the miserable try-catch-finally paradigm
7. static verification
8. RAII
9. versioning
10. ability to manage resources directly
11. inline assembler
12. constants
I mostly agree, but python actually has a rather elegant version of RAII.
Dec 01 2009
prev sibling parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 03:13:28 -0800, Walter Bright wrote:

 retard wrote:
 Overall these simplifications don't remove any crucial high level
 language features, in fact they make the code simpler and shorter. For
 instance there isn't high level code that can only be written with
 8-bit byte primitives, static methods or closures, but not with 32-bit
 generic ints, singletons, and generic higher order functions. The only
 thing you lose is some type safety and efficiency.
I'm no expert on Python, but there are some things one gives up with it: 1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.
Even if the language doesn't enforce immutability, it's indeed possible to use immutable data types in a language without pure/const/final attributes. Python et al support functional style programming.

The fact that PHP doesn't only proves that its author had no idea what he was doing. Early PHP versions even had a limitation on recursion, 50 levels or something like that. They still have those limitations, somewhat relaxed. Probably no other language performs as poorly with functional code as PHP.
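What "immutable data types by convention" means can be sketched in plain Python (the scale function and tuple values below are made-up examples, not from the thread):

```python
# Functional style over immutable values: use tuples (immutable) and
# return new values instead of mutating arguments in place.
def scale(point, k):
    return tuple(k * x for x in point)   # builds a new tuple

p = (1, 2, 3)
q = scale(p, 10)
print(p)   # (1, 2, 3) -- the original is untouched
print(q)   # (10, 20, 30)
```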
 
 2. as you mentioned, there's the performance problem. It's fine if you
 don't need performance, but once you do, the complexity abruptly goes
 way up.
In D, there's the simplicity problem. It's fine if you don't need readability, but once you do, the efficiency abruptly goes way down.
 
 3. no contract programming (it's very hard to emulate contract
 inheritance)
True, this is a commonly overlooked feature. I don't know any languages other than Eiffel or D that support this. I'm not sure how hard it would be to emulate this feature in languages where you can define your own class mechanism.
 
 4. no metaprogramming
Dynamic languages support dynamic metaprogramming. Ever heard of e.g. lisp macros?
 
 5. simple interfacing to C
In case you mean no unnecessary wrappers etc., this has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.
 
 6. scope guard (transactional processing); Python has the miserable
 try-catch-finally paradigm
Ok. On the other hand, I don't know why this can't be done with runtime metaprogramming features.
 
 7. static verification
Dynamic language users argue that since the language is much simpler, you don't need to verify anything. And you still have unit test frameworks.
 
 8. RAII
Ok. I think this could also be enforced dynamically.
 
 9. versioning
I don't know why this can't be done dynamically.
 10. ability to manage resources directly
Ok.
 
 11. inline assembler
Ok. Note that I wrote
 Overall these simplifications don't remove any crucial ___high level___
 language features, 
 
 12. constants
I don't know why this can't be done dynamically with wrapper objects.
Dec 01 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
retard, el  1 de diciembre a las 11:42 me escribiste:
 Tue, 01 Dec 2009 03:13:28 -0800, Walter Bright wrote:
 
 retard wrote:
 Overall these simplifications don't remove any crucial high level
 language features, in fact they make the code simpler and shorter. For
 instance there isn't high level code that can only be written with
 8-bit byte primitives, static methods or closures, but not with 32-bit
 generic ints, singletons, and generic higher order functions. The only
 thing you lose is some type safety and efficiency.
I'm no expert on Python, but there are some things one gives up with it: 1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.
Even if the language doesn't enforce immutability it's indeed possible to use immutable data types in a language without pure/const/final attributes.
And BTW, Python *has* some built-in immutable types (strings, tuples, integers, floats, frozensets, and I don't remember if there is anything else). Python uses convention over hard discipline (no public/private, for example), so you can make your own immutable types: just don't add mutating methods and don't mess with them. I agree it's arguable, but people actually use these conventions (they are all consenting adults :), so things work.

I can only speak from experience, and my bug count in Python is extremely low, even when doing MT (the Queue module provides a very easy way to pass messages from one thread to another). I agree that, when you don't care much for performance, things are much easier :)
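The Queue-based message passing mentioned here can be sketched like this (Python 3 spells the module queue; the worker and the doubling payload are made up for illustration):

```python
import queue
import threading

def worker(inbox, outbox):
    # Receive items until the None sentinel arrives, double each one.
    while True:
        item = inbox.get()
        if item is None:
            break
        outbox.put(item * 2)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for i in range(3):
    inbox.put(i)
inbox.put(None)        # tell the worker to stop
t.join()

results = [outbox.get() for _ in range(3)]
print(results)         # [0, 2, 4]
```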
 2. as you mentioned, there's the performance problem. It's fine if you
 don't need performance, but once you do, the complexity abruptly goes
 way up.
In D, there's the simplicity problem. It's fine if you don't need readability, but once you do, the efficiency abruptly goes way down.
 
 3. no contract programming (it's very hard to emulate contract
 inheritance)
True, this is a commonly overlooked feature. I don't know any other languages than Eiffel or D that support this. I'm not sure how hard it would be to emulate this feature in languages where you can define your own class mechanism.
There are libraries to do contracts in Python:

http://www.wayforward.net/pycontract/
http://blitiri.com.ar/git/?p=pymisc;a=blob;f=contract.py;h=0d78aa3dc9f3af5336c8d34ce521815ebd7d5ea0;hb=HEAD

I don't know if they handle contract inheritance though. There is a PEP for that too:

http://www.python.org/dev/peps/pep-0316/

But I don't think many people really want DbC in Python, so I don't think it will be implemented.
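For what it's worth, a bare-bones precondition check (only a sketch of the idea; not contract inheritance, and not how the libraries above are implemented) can be emulated with a decorator. The names require() and sqrt_floor() are invented for this example:

```python
# A decorator that checks a precondition before the call.
def require(check):
    def deco(f):
        def wrapper(*args, **kwargs):
            assert check(*args, **kwargs), "precondition failed"
            return f(*args, **kwargs)
        return wrapper
    return deco

@require(lambda x: x >= 0)
def sqrt_floor(x):
    return int(x ** 0.5)

print(sqrt_floor(9))   # 3
```

Calling sqrt_floor(-1) trips the precondition and raises AssertionError before the body runs.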
 4. no metaprogramming
Dynamic languages support dynamic metaprogramming. Ever heard of e.g. lisp macros?
Exactly! You can even generate code dynamically! This is a very nice example:

http://code.activestate.com/recipes/362305/

It makes "self" implicit in *pure Python*. If you say dynamic languages don't have metaprogramming capabilities, you just don't have any idea of what a dynamic language really is.
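Another flavor of the dynamic metaprogramming being described: building a whole class at runtime with the three-argument form of type(). The Point class and make_accessor helper below are made up for illustration:

```python
# type(name, bases, namespace) builds a class object at runtime.
def make_accessor(name):
    def get(self):
        return self._data[name]
    return get

fields = ["x", "y"]
namespace = {"__init__": lambda self, **kw: setattr(self, "_data", kw)}
for f in fields:
    namespace[f] = property(make_accessor(f))

Point = type("Point", (object,), namespace)

p = Point(x=1, y=2)
print(p.x, p.y)   # 1 2
```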
 5. simple interfacing to C
In case you mean no unnecessary wrappers etc., this has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.
In D you need interfacing code too, it can be a little simpler, that's true.
 6. scope guard (transactional processing); Python has the miserable
 try-catch-finally paradigm
WRONG! See the with statement: http://www.python.org/dev/peps/pep-0343/

    with lock:
        some_non_mt_function()

    with transaction:
        some_queries()

    with file(fname) as f:
        x = f.read(10)
        f.write(x)
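User code can define its own with-compatible objects too; contextlib.contextmanager turns a generator into an enter/exit pair. The transaction example below is a made-up sketch, not a real database API:

```python
from contextlib import contextmanager

log = []

@contextmanager
def transaction():
    # Code before the yield runs on entry, after it on normal exit.
    log.append("begin")
    try:
        yield
        log.append("commit")
    except Exception:
        log.append("rollback")
        raise

with transaction():
    log.append("query")

print(log)   # ['begin', 'query', 'commit']
```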
 8. RAII
Ok. I think this could also be enforced dynamically.
Again, the with statement.
 
 9. versioning
I don't know why this can't be done dynamically.
It can, and it's pretty common, you can do things like this:

    class A:
        if WHATEVER:
            def __init__(self):
                pass
        else:
            def __init__(self, x):
                pass
 10. ability to manage resources directly
What do you mean by resource?
 11. inline assembler
You can do bytecode manipulation, which is the assembler of dynamic languages :)

I really think the *only* *major* advantage of D over Python is speed. That's it.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Cuando el Mártir estaba siendo perseguido y aglutinado por los
citronetos, aquellos perversos que pretendian, en su maldad,
piononizar las enseñanzas de Peperino.
	-- Peperino Pómoro
Dec 01 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 retard, el  1 de diciembre a las 11:42 me escribiste:
 Tue, 01 Dec 2009 03:13:28 -0800, Walter Bright wrote:

 retard wrote:
 Overall these simplifications don't remove any crucial high level
 language features, in fact they make the code simpler and shorter. For
 instance there isn't high level code that can only be written with
 8-bit byte primitives, static methods or closures, but not with 32-bit
 generic ints, singletons, and generic higher order functions. The only
 thing you lose is some type safety and efficiency.
I'm no expert on Python, but there are some things one gives up with it: 1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.
Even if the language doesn't enforce immutability it's indeed possible to use immutable data types in a language without pure/const/final attributes.
And BTW, Python *have* some built-in immutable types (strings, tuples, integers, floats, frozensets, and I don't remember if there is anything else). Python uses convention over hard-discipline (no public/private for example), so you can make your own immutable types, just don't add mutating methods and don't mess with. I agree it's arguable, but people actually use this conventions (they are all consenting adults :), so things works.
I agree that statically enforced immutability is unnecessary if you are able to rigidly follow an immutability convention. C++ also has immutability by convention. People who work in large teams with programmers of all skill levels tell me, however, that having a convention and being sure it is followed 100% are two very different things.
 I can only speak from experience, and my bug count in Python is extremely
 low, even when doing MT (the Queue module provides a very easy way to pass
 messages from one thread to another).
How about the GIL?
 I agree that, when you don't care much for performance, things are much
 easier :)
I would also agree that your bug count and complexity should be low as long as you're staying within the paradigms that Python (or any language) was designed to support.
 2. as you mentioned, there's the performance problem. It's fine if you
 don't need performance, but once you do, the complexity abruptly goes
 way up.
In D, there's the simplicity problem. It's fine if you don't need readability, but once you do, the efficiency abruptly goes way down.
That, I strongly disagree with.
 3. no contract programming (it's very hard to emulate contract
 inheritance)
True, this is a commonly overlooked feature. I don't know any other languages than Eiffel or D that support this. I'm not sure how hard it would be to emulate this feature in languages where you can define your own class mechanism.
I suspect it is a very hard problem to do with just front end rewriting:

1. I've never seen anyone manage to do it
2. I had to adjust the code generator to make it work
 But I don't many people really wants DbC in Python, so I don't think it
 would be implemented.
That goes back to if you're staying inside the supported paradigms or not.
 4. no metaprogramming
Dynamic languages support dynamic metaprogramming. Ever heard of e.g. lisp macros?
Exactly! You can even generate code dynamically! This is a very nice example: http://code.activestate.com/recipes/362305/ It makes "self" implicit in *pure Python*. If you say dynamic languages don't have metaprogramming capabilities, you just don't have any idea of what a dynamic language really is.
Ok, can you do Bill Baxter's swizzler? Can you do Don Clugston's FPU code generator?
 5. simple interfacing to C
In case you mean no unnecessary wrappers etc., this has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.
In D you need interfacing code too, it can be a little simpler, that's true.
The interfacing in D is nothing more than providing a declaration. There is no code executed.
 6. scope guard (transactional processing); Python has the miserable
 try-catch-finally paradigm
WRONG! See the with statement: http://www.python.org/dev/peps/pep-0343/ with lock: some_non_mt_function() with transaction: some_queries() with file(fname) as f: x = f.read(10) f.write(x)
Looks like you're right, and it's a recently added new feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.
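For comparison, D's scope(exit) can be approximated in today's Python with contextlib.ExitStack (added to the standard library well after this thread), which registers cleanups at the point of acquisition rather than in a trailing block. A sketch with made-up event names:

```python
from contextlib import ExitStack

events = []

def process():
    with ExitStack() as stack:
        events.append("open A")
        stack.callback(events.append, "close A")  # registered at acquisition
        events.append("open B")
        stack.callback(events.append, "close B")
        events.append("work")
    # callbacks run LIFO on scope exit, even if an exception escapes

process()
print(events)   # ['open A', 'open B', 'work', 'close B', 'close A']
```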
 8. RAII
Ok. I think this could also be enforced dynamically.
Again, the with statement.
Yes, you can emulate RAII with the with statement, but with RAII (objects that destruct when they go out of scope) you can put this behavior in the object rather than explicitly in the code every time you use it. It's more complicated to have to remember to do it on every use.
 9. versioning
I don't know why this can't be done dynamically.
It can, and it's pretty common, you can do things like this:

    class A:
        if WHATEVER:
            def __init__(self):
                pass
        else:
            def __init__(self, x):
                pass
 10. ability to manage resources directly
What do you mean by resource?
Garbage collection isn't appropriate for managing every resource. Scarce ones need to be handled manually. Even large mallocs are often better done outside of the GC.
 11. inline assembler
You can do bytecode manipulation, which is the assembler of dynamic languages :)
That doesn't help if you really need to do a little assembler.
 I really think the *only* *major* advantage of D over Python is speed.
 That's it.
I probably place a lot more importance on static verification rather than relying on convention and tons of unit tests.
Dec 01 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Leandro Lucarella wrote:
 with file(fname) as f:
     x = f.read(10)
     f.write(x)
Looks like you're right, and it's a recently added new feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.
Actually "with" is an awful abstraction as defined in Java (the new function. I strongly believe all of the above are hopelessly misguided. Scope guard is the right thing, and I am convinced it will prevail.

Andrei
Dec 01 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  1 de diciembre a las 11:07 me escribiste:
 Walter Bright wrote:
Leandro Lucarella wrote:
with file(fname) as f:
    x = f.read(10)
    f.write(x)
Looks like you're right, and it's a recently added new feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.
Actually "with" is an awful abstraction as defined in Java (the new function. I strongly believe all of the above are hopelessly misguided. Scope guard is the right thing, and I am convinced it will prevail.
Good arguments! -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Los pobres buscan su destino. Acá está; ¿no lo ven? -- Emilio Vaporeso. Marzo de 1914
Dec 01 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, el  1 de diciembre a las 11:07 me escribiste:
 Walter Bright wrote:
 Leandro Lucarella wrote:
 with file(fname) as f:
    x = f.read(10)
    f.write(x)
Looks like you're right, and it's a recently added new feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.
Actually "with" is an awful abstraction as defined in Java (the new function. I strongly believe all of the above are hopelessly misguided. Scope guard is the right thing, and I am convinced it will prevail.
Good arguments!
Yah, point taken :o). I probably haven't clarified enough that I'm talking about a mere belief. Arguments have been discussed here in the past (e.g. scalability of the language construct with multiple transactions). Time will tell, but one indicating factor is that programs don't deal well with exceptions and scope guards help that massively, whereas "with" seems to help much less. Besides, anyone may be a nut about something, and scope guard is something I'm a nut about. Andrei
Dec 01 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Yah, point taken :o). I probably haven't clarified enough that I'm 
 talking about a mere belief. Arguments have been discussed here in the 
 past (e.g. scalability of the language construct with multiple 
 transactions). Time will tell, but one indicating factor is that 
 programs don't deal well with exceptions and scope guards help that 
 massively, whereas "with" seems to help much less. Besides, anyone may 
 be a nut about something, and scope guard is something I'm a nut about.
I didn't read the Python with carefully, but where does it fall down?
Dec 01 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
I suggest Walter not try to say that D2 is "better" than Python; it's a waste of time and it means nothing.
waste of time and it means nothing.

Walter Bright:

can you do Bill Baxter's swizzler? Can you do Don Clugston's FPU code
generator?<
Python is more flexible. See the __getattr__ standard method:

    class Reg4(object):
        ORDER = dict((c,i) for i,c in enumerate("wxyz"))

        def __init__(self, data=None):
            self.data = [None] * 4
            if data:
                for i, item in enumerate(data):
                    self.data[i] = item

        def __getattr__(self, attr):
            assert sorted(list(attr)) == ['w', 'x', 'y', 'z']
            self.data[:] = (self.data[Reg4.ORDER[c]] for c in attr)

    r = Reg4("ABCD")
    print r.data
    r.xyzw
    print r.data

Output:

    ['A', 'B', 'C', 'D']
    ['B', 'C', 'D', 'A']

If you want the r.xyzw() syntax, that too can be done, creating new methods on the fly. In Python there's also __getattribute__, which is a little more powerful than __getattr__:

http://pyref.infogami.com/__getattribute__
That doesn't help if you really need to do a little assembler.<
That Reg4() class can actually use true SSE registers, generating and running very efficient asm computational kernels using CorePy:

http://www.corepy.org/

And you can also use GPUs with PyCuda and PyOpenCL in Python:

http://mathema.tician.de/software/pycuda
http://python-opencl.next-touch.com/

With Python + CorePy today you can write heavy numerical code that's faster than all D code.

Bye,
bearophile
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 I suggest Walter not try to say that D2 is "better" than Python;
 it's a waste of time and it means nothing.
I meant it in the form of the simpler being better hypothesis. I am arguing
that a simpler language often leads to complex code.

CorePy, PyCuda, PyOpenCL, etc. are not part of Python. They are extensions,
and are not written in Python. Heck, C++ Boost is listed as a prerequisite
for PyCuda. The very existence of those shows that Python itself is not
powerful enough. Secondly, use of them does not make Python a simple
language. And thirdly, any language can have extension libraries and
processors.
Dec 01 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 I meant it in the form of the simpler being better hypothesis.
I see, I have missed that purpose of the discussion... I am sorry.
 The very existence of those shows that Python itself is not powerful enough.
Right. But what people care about in the end is programs that get the work
done. If a mix of Python plus C/C++ libs is good enough and handy enough,
then it gets used. For example, I am able to use the PIL Python lib to
load, save and process jpeg images at high speed with a few lines of handy
code. So I don't care that PIL is written in C++:
http://www.pythonware.com/products/pil/
 Secondly, use of them does not make Python a simple language.
Python is simpler than D2, but it's not a simple language; it has many features, etc. A simple language is Scheme :-)
 And thirdly, any language can have extension libraries and processors.
That's true, but in practice there's a difference between practice and
theory :-)

- Are the libs you need to do X and Y and Z actually present, and are they
  working well? It's often possible to find every kind of binding for
  Python.
- Are those libs powerful? CorePy allows you to write the most efficient
  code that runs with the SSE extensions.
- Is using them handy, with a nice syntax and a nice try-test-debug cycle?
  Python allows for this too: it lets you write wrappers with a good
  syntax, etc. And the shell allows you to try code, etc.

Bye,
bearophile
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Right. But what people care in the end is programs that get the work
 done. If a mix of Python plus C/C++ libs are good enough and handy
 enough then they get used. For example I am able to use the PIL
 Python lib with Python to load, save and process jpeg images at
 high-speed with few lines of handy code. So I don't care if PIL is
 written in C++: http://www.pythonware.com/products/pil/
Sure, but that's not about the language. It's about the richness of the ecosystem that supports the language, and Python certainly has a rich one.
Dec 01 2009
parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 14:22:10 -0800, Walter Bright wrote:

 bearophile wrote:
 Right. But what people care in the end is programs that get the work
 done. If a mix of Python plus C/C++ libs are good enough and handy
 enough then they get used. For example I am able to use the PIL Python
 lib with Python to load, save and process jpeg images at high-speed
 with few lines of handy code. So I don't care if PIL is written in C++:
 http://www.pythonware.com/products/pil/
Sure, but that's not about the language. It's about the richness of the ecosystem that supports the language, and Python certainly has a rich one.
I thought D was supposed to be a practical language for real world
problems. This 'D is good because everything can and must be written in D'
is beginning to sound like a religion.

To me it seems the Python way is more practical in all ways. Even novice
programmers can produce efficient programs with it by using a mixture of
low level C/C++ libs and high level Python scripts. I agree that Python
isn't as fast as D and it lacks type safety and so on, but at the end of
the day the Python coder gets the job done while the D coder still fights
with inline assembler, compiler bugs, porting the app, and the type system
(mostly purity/constness issues).

Python has more libs available, you need to write less code to implement
the same functionality, and it's all less troublesome because of the lack
of type annotations. So it's really understandable why a greater number of
people favor Python.
Dec 01 2009
next sibling parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
retard wrote:
 Tue, 01 Dec 2009 14:22:10 -0800, Walter Bright wrote:
 
 bearophile wrote:
 Right. But what people care in the end is programs that get the work
 done. If a mix of Python plus C/C++ libs are good enough and handy
 enough then they get used. For example I am able to use the PIL Python
 lib with Python to load, save and process jpeg images at high-speed
 with few lines of handy code. So I don't care if PIL is written in C++:
 http://www.pythonware.com/products/pil/
Sure, but that's not about the language. It's about the richness of the ecosystem that supports the language, and Python certainly has a rich one.
I thought D was supposed to be a practical language for real world problems. This 'D is good because everything can and must be written in D' is beginning to sound like a religion. To me it seems the Python way is more practical in all ways. Even novice programmers can produce efficient programs with it by using a mixture of low level C/C++ libs and high level python scripts. I agree that Python isn't as fast as D and it lacks type safety things and so on, but in the end of day the Python coder gets the job done while the D coder still fights with inline assembler, compiler bugs, porting the app, fighting the type system (mostly purity/constness issues). Python has more libs available, you need to write less code to implement the same functionality and it's all less troublesome because the lack of type annotations. So it's really understandable why a greater amount people favor Python.
You don't actually have to use pure, const, inline assembler, etc. D is a wonderful language to just do string-and-hashtable code in. All the other features are there to help bigger projects (contracts, yay!) or projects with special needs (I for one have never needed inline ASM).
Dec 01 2009
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 I thought D was supposed to be a practical language for real world
 problems. This 'D is good because everything can and must be written in
 D' is beginning to sound like a religion.
You're missing the point. Mixing languages always adds complexity. If you
want the languages to talk to each other, the glue layer adds complexity
that has nothing to do with the problem being solved. If you don't want
the languages to talk to each other, then you're severely limited in terms
of the granularity at which they can be mixed. Furthermore, it's nice to
be able to write generic code once and have it always "just be there".

I get very annoyed with languages that target a small niche. For example,
I do a lot of mathy stuff, but I hate Matlab and R because they're too
domain-specific. Any time I write more than 20 lines of code in either of
these, I find that the lack of some general-purpose programming capability
in these languages, or the awkwardness of using it, has just added a layer
of complexity to my project. Even Python runs out of steam when you need
more performance and you realize what a PITA it is to get all the glue
working to rewrite parts of your code in C. Heck, even Numpy sometimes
feels like a kludge because it reimplements basic things like arrays (with
static typing, mind you) because Python's builtin arrays are too slow.
Therefore, Numpy code is often not very Pythonic.

A practical language should have enough complexity management tools to
handle basically any type of complexity you throw at it, whether it be a
really complicated business model, insane performance requirements, the
need to scale to massive datasets, or the sheer volume of code that needs
to be written. Making more assumptions about what problems you want to
solve is what libraries or applications are for. These complexity
management tools should also stay the heck out of the way when you don't
need them. If you can achieve this, your language will be good for almost
anything.
Dec 02 2009
parent bearophile <bearophileHUGS lycos.com> writes:
dsimcha:

because Python's builtin arrays are too slow.<
Python lists are not badly implemented; it's the interpreter that's slow
(*). Python built-in arrays (lists) are dynamically typed, so they are
less efficient but more flexible. NumPy arrays are the opposite. So as
usual with data structures, they are the result of compromises and are
chosen and optimized for your purposes.

(*) And the interpreter is slow because it's designed to be simple. Being
simple, it's possible even for people who aren't very expert, people that
do it in their free time, to hack and fix the Python C source code. This
allows CPython to keep enough developers, so the language keeps improving.
In the Python design there are many lessons like this that D developers
still have to learn.
 A practical language should have enough complexity management tools to handle
 basically any type of complexity you throw at it, [...]
In the world there's space for smaller and simpler languages too, like Lua, designed for more limited purposes. Not every language must become a universal ball of mud like C++.
 If you can achieve this, your language will be good for almost anything.
I will not believe in the single True Language, sorry, just like there
isn't a single perfect way to implement dynamic arrays.

Bye,
bearophile
Dec 02 2009
prev sibling next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el  1 de diciembre a las 10:46 me escribiste:
And BTW, Python *have* some built-in immutable types (strings, tuples,
integers, floats, frozensets, and I don't remember if there is anything
else). Python uses convention over hard-discipline (no public/private for
example), so you can make your own immutable types, just don't add
mutating methods and don't mess with. I agree it's arguable, but people
actually use this conventions (they are all consenting adults :), so
things works.
I agree that statically enforced immutability is unnecessary if you are able to rigidly follow an immutability convention. C++ also has immutability by convention. People who work in large teams with programmers of all skill levels tell me, however, that having a convention and being sure it is followed 100% are two very different things.
Yes, I know, probably Python (and most dynamic languages) and Java are the two extremes in this regard.
I can only speak from experience, and my bug count in Python is extremely
low, even when doing MT (the Queue module provides a very easy way to pass
messages from one thread to another).
How about the GIL?
The GIL is a performance issue. As I said, that's the only point where D is stronger than Python (and maybe other dynamic languages; I mention Python because it is the language I use the most).
I agree that, when you don't care much for performance, things are much
easier :)
I would also agree that your bug count and complexity should be low as long as you're staying within the paradigms that Python (or any language) was designed to support.
Of course. But Python is a very flexible language (or I use too few paradigms when programming ;).
4. no metaprogramming
Dynamic languages support dynamic metaprogramming. Ever heard of e.g. lisp macros?
Exactly! You can even generate code dynamically! This is a very nice
example:
http://code.activestate.com/recipes/362305/

It makes "self" implicit in *pure Python*. If you say dynamic languages
don't have metaprogramming capabilities, you just don't have any idea of
what a dynamic language really is.
Ok, can you do Bill Baxter's swizzler? Can you do Don Clugston's FPU code generator?
I don't know any of those things, but I know Python has very good metaprogramming capabilities (decorators and metaclasses being probably the 2 biggest features in this regard).
5. simple interfacing to C
In case you mean no unnecessary wrappers etc., this has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.
In D you need interfacing code too, it can be a little simpler, that's true.
The interfacing in D is nothing more than providing a declaration. There is no code executed.
Unless you want to pass D strings to C; then you have to execute
toStringz(), which is a really thin "wrapper", but it's a wrapper. Using C
from D is (generally) error prone and painful, so I usually end up writing
more D'ish wrappers to make the D coding more pleasant.

And BTW, you can access C dynamic libraries in Python via the ctypes
module:
http://docs.python.org/library/ctypes.html

It's not safe, and of course, being a dynamic language, you can access C
code at "compile time" (because there is no compile time), but you can
interface with C very easily:
 import ctypes
 libc = ctypes.cdll.LoadLibrary("libc.so.6")
 libc.printf("hello world %i\n", 5)
hello world 5

Wow, that was hard! =)
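A slightly safer variant of the ctypes snippet above, assuming a Unix-like system: declaring argtypes/restype lets ctypes convert and check the C call, which is closer in spirit to D's typed C declarations (the choice of strlen here is purely illustrative).

```python
import ctypes
import ctypes.util

# Locate the C library portably instead of hard-coding libc.so.6.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature so ctypes can check and convert arguments.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

n = libc.strlen(b"hello world")
print(n)  # 11
```

Without the argtypes declaration, ctypes would happily accept wrong argument types and misbehave at runtime, which is part of why Leandro calls it "not safe".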
6. scope guard (transactional processing); Python has the miserable
try-catch-finally paradigm
WRONG! See the with statement:
http://www.python.org/dev/peps/pep-0343/

with lock:
    some_non_mt_function()

with transaction:
    some_queries()

with file(fname) as f:
    x = f.read(10)
    f.write(x)
Looks like you're right, and it's a recently added new feature. I suggest it proves my point - Python had to add complexity to support another paradigm. Python's "with" doesn't look any simpler than scope guard.
It's simpler, because you only have one obvious way to do things, in D you can use a struct, a scope class or a scope statement to achieve the same. Of course that gives you more flexibility, but adds complexity to the language. I'm not complaining or saying that D is wrong, I'm just saying that Python is a very expressive language without much complexity. I think the tradeoff is the speed.
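How Python's "with" maps onto the scope-guard use case can be sketched with contextlib.contextmanager (the transaction name here is illustrative, not a real API): code before the yield runs on entry, code after runs on exit, whether or not the block throws.

```python
from contextlib import contextmanager

log = []

@contextmanager
def transaction():
    # Entry action, like the start of a D scope with a scope guard.
    log.append("begin")
    try:
        yield
        # Runs only on normal exit, like scope(success).
        log.append("commit")
    except Exception:
        # Runs only when the block throws, like scope(failure).
        log.append("rollback")
        raise

with transaction():
    log.append("work")

print(log)  # ['begin', 'work', 'commit']
```

The difference Walter points at remains: the cleanup policy lives at each use site of "with", whereas a D struct or scope class can bundle it into the object itself.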
8. RAII
Ok. I think this could also be enforced dynamically.
Again, the with statement.
Yes, you can emulate RAII with the with statement, but with RAII (objects that destruct when they go out of scope) you can put this behavior in the object rather than explicitly in the code every time you use it. It's more complicated to have to remember to do it every time on use.
Maybe you are right, but the with statement plays very well with the "explicit is better than implicit" motto of Python :) Again, it's flexibility vs. complexity.
10. ability to manage resources directly
What do you mean by resource?
Garbage collection isn't appropriate for managing every resources. Scarce ones need handling manually. Even large malloc's often are better done outside of the gc.
We are talking about performance again. If you need speed, I agree Python is worse than D.
11. inline assembler
You can do bytecode manipulation, which is the assembler of dynamic languages :)
That doesn't help if you really need to do a little assembler.
Right, but I don't think anyone uses assembler just for fun, you use it either for optimization (where I already said D is better than Python) or for doing some low-level stuff (where Python clearly is not a viable option).
I really think the *only* *major* advantage of D over Python is speed.
That's it.
I probably place a lot more importance on static verification rather than relying on convention and tons of unit tests.
There are static analyzers for Python:
http://www.logilab.org/857
http://divmod.org/trac/wiki/DivmodPyflakes
http://pychecker.sourceforge.net/

And again, judging from experience, I don't know why, but I really have a
very small bug count when using Python. I don't work with huge teams of
crappy programmers (which I think is the scenario that D tries to cover);
that can be a reason ;)

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
Creativity is great but plagiarism is faster
Dec 01 2009
next sibling parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 17:11:26 -0300, Leandro Lucarella wrote:

 And again, judging from experience, I don't know why, but I really have
 a very small bug count when using Python. I don't work with huge teams
 of crappy programmers (which I think is the scenario that D tries to
 cover), that can be a reason ;)
The lack of type annotations at least removes all typing bugs. Your brain has more processing power for the task at hand since you don't need to concentrate on trivial type issues. Testing the code and writing prototypes in the repl basically eliminates all bugs. At least so they say.
Dec 01 2009
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tue, Dec 01, 2009 at 09:17:44PM +0000, retard wrote:
 The lack of type annotations at least removes all typing bugs. 
Quite the contrary, leaving off the type annotation spawns bugs. I had to
write a web app in Ruby last year, and well remember the little things
that slipped past tests, pissing off end users.

"Why can't I access this obscure page?"

Because a != b since, for some reason, the database returned a as a
string, and b was assigned by an integer literal.

In D, that would have been an instant compile time error. In Ruby, it was
a runtime error on a page obscure enough that it slipped past testing into
the real world.

You might say that I should have been more disciplined about my testing,
or maybe the company should have hired a dedicated tester, but the fact
remains that it simply wouldn't have happened in D at all. (Even if I left
off the types and used 'auto' everywhere, the compiler would still see the
mismatch.)

Until now :P I'm fairly certain that with std.variant and some opDispatch
magic, we can recreate the dynamic system wholesale, so you could, if you
really wanted to, just use var for all types. The only thing left to make
it happen in the language is probably either opImplicitCast or global
assignment operator overloads, and even they aren't strictly necessary for
a lot of programs.

-- 
Adam D. Ruppe
http://arsdnet.net
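Adam's Ruby anecdote has a direct Python 3 analogue; in this hypothetical sketch the permission values are made up for illustration, but the failure mode is the same: equality between a string and an int runs silently, while ordering only raises on the code path that actually executes.

```python
# Hypothetical: the database layer handed back a string, not an int.
permission_from_db = "3"
required = 3

# The equality check runs fine but is silently False:
assert (permission_from_db == required) is False

# The ordering check raises, but only at runtime, on this branch:
try:
    permission_from_db >= required
    reached = True
except TypeError:
    reached = False
print(reached)  # False: Python 3 refuses str >= int
```

A statically typed declaration of the return type would have flagged the mismatch at compile time rather than on the obscure page.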
Dec 01 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Adam D. Ruppe wrote:
 On Tue, Dec 01, 2009 at 09:17:44PM +0000, retard wrote:
 The lack of type annotations at least removes all typing bugs. 
Quite the contrary, leaving off the type annotation spawns bugs.
Yah, I was wondering about that! The hypothesis is there, but the
conclusion was the negation of the correct conclusion.

Andrei
Dec 01 2009
prev sibling next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Adam D. Ruppe wrote:
 You might say that I should have been more disciplined about [...]
That's the usual excuse for poor language design <g>. What I've been trying to do with D is enable more static verification, so that the project team can rely on enforced guarantees rather than discipline, education, convention, hope and prayer.
Dec 01 2009
prev sibling parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 16:58:32 -0500, Adam D. Ruppe wrote:

 On Tue, Dec 01, 2009 at 09:17:44PM +0000, retard wrote:
 The lack of type annotations at least removes all typing bugs.
Quite the contrary, leaving off the type annotation spawns bugs.
It spawns new bugs, for sure, but it removes all static typing bugs, because those aren't checked anymore and cannot exist under that category!
 I had
 to write a web app in Ruby last year, and well remember the little
 things that slipped past tests, pissing off end users.
 
 "Why can't I access this obscure page?"
 
 Because a != b since for some reason, the database returned a as a
 string, and b was assigned by an integer literal.
 
 In D, that would have been an instant compile time error. In Ruby, it
 was a runtime error on a page obscure enough that it slipped past
 testing into the real world.
The thing is, nowadays when all development should follow the principles
of clean code (book), agile, and tdd/bdd, this cannot happen. You write
tests first, then the production code. They say that writing tests and
code takes less time than writing only the more or less buggy production
code. Not writing tests is a sign of a novice programmer and they wouldn't
hire you if you didn't advertise your TDD skills.

In this particular case you use a dummy test db fixture system and write
tests for 'a is int' and 'b is int'. With these tests in place, the
functionality provided by D's type system is only a subset of the coverage
the tests provide. So D cannot offer any advantage anymore over e.g.
Python.
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 The thing is, nowadays when all development should follow the principles 
 of clean code (book), agile, and tdd/bdd, this cannot happen. You write 
 tests first, then the production code. They say that writing tests and 
 code takes less time than writing only the more or less buggy production 
 code. Not writing tests is a sign of a novice programmer and they 
 wouldn't hire you if you didn't advertise your TDD skills.
And therein lies the problem. You need the programmers to follow a certain
discipline. I don't know if you've managed programmers before, but they
don't always follow discipline, no matter how good they are. The root
problem is there's no way to *verify* that they've followed the
discipline, convention, procedure, whatever.

But with mechanical checking, you can guarantee certain things. How are
you going to guarantee each member of your team put all the unit tests in?
Each time they change anything?
 In this particular case you use a dummy test db fixture system, write 
 tests for 'a is int' and 'b is int'. With these tests in place, the 
 functionality provided by D's type system is only a subset of the 
 coverage the tests provide. So D cannot offer any advantage anymore over 
 e.g. Python.
Where's the advantage of:

    assert(a is int)

over:

    int a;

? Especially if I have to follow the discipline and add them in
everywhere?
Dec 02 2009
next sibling parent reply retard <re tard.com.invalid> writes:
Wed, 02 Dec 2009 03:16:58 -0800, Walter Bright wrote:

 retard wrote:
 The thing is, nowadays when all development should follow the
 principles of clean code (book), agile, and tdd/bdd, this cannot
 happen. You write tests first, then the production code. They say that
 writing tests and code takes less time than writing only the more or
 less buggy production code. Not writing tests is a sign of a novice
 programmer and they wouldn't hire you if you didn't advertise your TDD
 skills.
And therein lies the problem. You need the programmers to follow a certain discipline. I don't know if you've managed programmers before, but they don't always follow discipline, no matter how good they are. The root problem is there's no way to *verify* that they've followed the discipline, convention, procedure, whatever. But with mechanical checking, you can guarantee certain things. How are you going to guarantee each member of your team put all the unit tests in? Each time they change anything?
 In this particular case you use a dummy test db fixture system, write
 tests for 'a is int' and 'b is int'. With these tests in place, the
 functionality provided by D's type system is only a subset of the
 coverage the tests provide. So D cannot offer any advantage anymore
 over e.g. Python.
Where's the advantage of: assert(a is int) over: int a; ? Especially if I have to follow the discipline and add them in everywhere?
The case I commented on was about fetching values from a db IIRC. So the
connection between the SQL database and D loses all type information
unless you build some kind of high level SQL interface which checks the
types (note that up-to-date checking cannot be done with dmd unless it
allows fetching stuff from the db at compile time, or you first dump the
table parameters to some text file before compiling). You can't just
write:

typedef string[] row;

row[] a = sql_engine.execute("select * from foobar;").result;
int b = (int)a[0][0];
string c = (string)a[0][1];

and somehow expect that the first column of row 0 is an integer and the
next column a string. You still need to postpone the checking to runtime
with some validation function:

typedef string[] row;

row[] a = sql_engine.execute("select * from foobar;").result;

void runtime_assert(T)(string s) { ... }

runtime_assert!(int)(a[0][0]);
int b = (int)a[0][0];
string c = a[0][1];

I agree some disciplines are hard to follow. For example, ensuring
immutability in an inherently mutable language. But TDD is something a bit
easier; it's a lot higher level. It's easy to remember that you can't
write any code into the production code folder unless there is already
code in the test folder. You can verify with code coverage tools that you
didn't forget to write some tests. In TDD the whole code looks different.
You build it to be easily testable. It's provably a good way to write
code; almost every company nowadays uses TDD and agile methods such as
Scrum.
Dec 02 2009
next sibling parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
retard wrote:
 Wed, 02 Dec 2009 03:16:58 -0800, Walter Bright wrote:
 
 retard wrote:
 The thing is, nowadays when all development should follow the
 principles of clean code (book), agile, and tdd/bdd, this cannot
 happen. You write tests first, then the production code. They say that
 writing tests and code takes less time than writing only the more or
 less buggy production code. Not writing tests is a sign of a novice
 programmer and they wouldn't hire you if you didn't advertise your TDD
 skills.
And therein lies the problem. You need the programmers to follow a certain discipline. I don't know if you've managed programmers before, but they don't always follow discipline, no matter how good they are. The root problem is there's no way to *verify* that they've followed the discipline, convention, procedure, whatever. But with mechanical checking, you can guarantee certain things. How are you going to guarantee each member of your team put all the unit tests in? Each time they change anything?
 In this particular case you use a dummy test db fixture system, write
 tests for 'a is int' and 'b is int'. With these tests in place, the
 functionality provided by D's type system is only a subset of the
 coverage the tests provide. So D cannot offer any advantage anymore
 over e.g. Python.
Where's the advantage of: assert(a is int) over: int a; ? Especially if I have to follow the discipline and add them in everywhere?
The case I commented on was about fetching values from a db IIRC. So the connection between SQL database and D loses all type information unless you build some kind of high level SQL interface which checks the types (note that up-to-date checking cannot be done with dmd unless it allows fetching stuff from the db on compile time or you first dump the table parameters to some text file before compiling). You can't just write: typedef string[] row; row[] a = sql_engine.execute("select * from foobar;").result; int b = (int)a[0][0]; string c = (string)b[0][1]; and somehow expect that the first column of row 0 is an integer and the next column a string. You still need to postpone the checking to runtime with some validation function: typedef string[] row; row[] a = sql_engine.execute("select * from foobar;").result; void runtime_assert(T)(string s) { ... } runtime_assert!(int)(a[0][0]); int b = (int)a[0][0]; string c = b[0][1];
std.conv.to() to the rescue! :)

import std.conv;
...

row[] a = sql_engine.execute("select * from foobar;").result;

int b = to!int(a[0][0]);          // Throws if conversions fail
string c = to!string(a[0][1]);

-Lars
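For comparison, a Python sketch of the same pattern (the row values are made up): the conversion is explicit but, just like to!int, the check still fires only at runtime, which is exactly the objection raised in the reply.

```python
# Hypothetical query result: every column comes back as a string.
row = ["42", "hello"]

b = int(row[0])    # 42; raises ValueError if the text isn't numeric
c = str(row[1])    # "hello"

try:
    int(row[1])    # "hello" is not numeric...
    failed = False
except ValueError:  # ...so the check fires at runtime, like to!int throwing
    failed = True
print(b, c, failed)
```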
Dec 02 2009
parent retard <re tard.com.invalid> writes:
Wed, 02 Dec 2009 13:12:58 +0100, Lars T. Kyllingstad wrote:

 std.conv.to() to the rescue! :)
 
    import std.conv;
    ...
 
    row[] a = sql_engine.execute("select * from foobar;").result;
 
    int b = to!int(a[0][0]);          // Throws if conversions fail
    string c = to!string(a[0][1]);
 
 -Lars
You also seem to miss the point. The topic of this conversation (I think?) was about static verification. to! throws at runtime.
Dec 02 2009
prev sibling next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 I agree some disciplines are hard to follow. For example ensuring 
 immutability in a inherently mutable language. But TDD is something a bit 
 easier - it's a lot higher level. It's easy to remember that you can't 
 write any code into production code folder unless there is already code 
 in test folder. You can verify with code coverage tools that you didn't 
 forget to write some tests. In TDD the whole code looks different. You 
 build it to be easily testable. It's provably a good way to write code - 
 almost every company nowadays uses TDD and agile methods such as Scrum.
I totally agree with the value of unittests. That's why D has them built
in to the language, and even has a code coverage analyzer built in so you
can see how good your unit tests are.

Where you and I disagree is on the notion that unit tests are a good
enough replacement for static verification. For me it's like using a
sports car to tow a trailer.
Dec 02 2009
prev sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wed, Dec 02, 2009 at 11:50:23AM +0000, retard wrote:
 The case I commented on was about fetching values from a db IIRC.
What happened to me was the value got returned as the incorrect type,
stored, and used later, where it threw the exception. Conceptual code
here:

===
def getPermission(userid)
begin
    return $db.query("select whatever from table where id = ?", userid);
end

def getPermissionNeeded(operation)
begin
    return $db.query("select whatever from table where id = ?",
        operation.id);
end
===

The most common way the code used it was like this:

if(getPermission(user) == getPermissionNeeded(op) || user == User.ROOT)
    op.run; // works - the db functions return equal strings in both cases

The bug was here:

if(getPermission(user) >= getPermissionNeeded(op)) // this throws at runtime
    op.run; // never reached, users complain

If the functions were defined like they would be in D:

int getPermission(int user) { return db.query(...); }

The real source of the bug - that the database query didn't give me the
expected type - would have been located in the fraction of a second it
takes for the compiler to run its most trivial checks.

That really is similar to putting in an out contract:

assert(getPermission is int);

or probably better:

assert(getPermission >= 0);

But it is a) required, so I'm not allowed to get lazy about it and b) just
plain easier, so laziness won't affect it anyway.

(Or hell, if it was PHP, the weak typing would have converted both to
integer at that line and it would work. But weak typing comes with its own
slippery bugs.)

Thanks to the dynamic duck typing, the code worked most of the time, but
failed miserably where I deviated ever so slightly. The fix in the Ruby
was easy enough, once the bug was found:

return db.query(...).to_i

The same thing dmd would have forced me to do to make it compile, but the
important difference is dmd would have found the bug for me, not an end
user. It just goes to prove that you can't just forget about types in your
code just because it is dynamic.

-- 
Adam D. Ruppe
http://arsdnet.net
Dec 03 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 But with mechanical checking, you can guarantee certain things.
Usually what mechanical checking guarantees is not even vaguely enough, and such guarantees aren't even about the most important parts :-) Unit tests are more important, because they cover things that matter more. Better to add many more unit tests to Phobos.
 Where's the advantage of:
      assert(a is int)
 over:
      int a;
 ? Especially if I have to follow the discipline and add them in everywhere?
Probably I have missed parts of this discussion, so what I write below may be useless. But in dynamic code you almost never assert that a variable is an int; you assert that 'a' is able to do its work where it's used. So 'a' can often be an int, a decimal, a multiprecision long, a GMP multiprecision, or maybe even a float. What you care about is not what 'a' is but whether it does what it has to do, so you care whether it quacks :-) That's duck typing.

Bye,
bearophile
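bearophile's point can be sketched in a few lines of Python: the same function works for any element type that supports the operations it uses, whether int, Decimal, or Fraction (the values here are illustrative, not from the original post):

```python
from decimal import Decimal
from fractions import Fraction

def average(xs):
    # The function cares that the elements "quack" (support + and /),
    # not what concrete type they are.
    return sum(xs) / len(xs)

print(average([1, 2, 3]))                         # 2.0
print(average([Decimal("1.5"), Decimal("2.5")]))
print(average([Fraction(1, 2), Fraction(3, 2)]))
```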
Dec 02 2009
parent Michal Minich <michal.minich gmail.com> writes:
Hello bearophile,

 But in dynamic code you don't almost never assert that a variable is
 an int; you assert that 'a' is able to do its work where it's used. So
 'a' can often be an int, decimal, a multiprecision long, a GMP
 multiprecision, or maybe even a float. What you care of it not what a
 is but if does what it has to, so you care if it quacks :-) That's
 duck typing.
Yes, that's duck typing: "assert that 'a' is able to do its work where it's used" (a function with the required signature exists).

Interfaces in OOP, or type classes in Haskell, are here to "assert that 'a' is intended to work where it's used" (the type is some implementation of the required concept (int/long/bigint)).

Both have their place :)

Note that duck typing need not be only dynamic; it can also happen at compile time - ranges in D check at compile time whether certain functions are defined for an "object".
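Both flavours can be sketched in Python: make_noise below uses plain runtime duck typing, while typing.Protocol (checked ahead of time by static tools like mypy, or at runtime via runtime_checkable) plays the structural role that isInputRange-style checks play for D ranges. The class names here are made up for illustration:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self):
        return "quack"

class Robot:
    def quack(self):
        return "beep-quack"

def make_noise(q):
    # Dynamic duck typing: any object with a .quack() method will do.
    return q.quack()

print(make_noise(Duck()))   # quack
print(make_noise(Robot()))  # beep-quack

# Structural check: both classes satisfy the Quacker protocol
# without ever declaring that they implement it.
print(isinstance(Duck(), Quacker))   # True
print(isinstance(Robot(), Quacker))  # True
```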
Dec 02 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 5. simple interfacing to C
In case you mean no unnecessary wrappers etc., this has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.
In D you need interfacing code too, it can be a little simpler, that's true.
The interfacing in D is nothing more than providing a declaration. There is no code executed.
Unless you want to pass D strings to C, then you have to execute toStringz(), which is a really thin "wrapper", but it's a wrapper. Using C from D is (generally) error prone and painful, so I usually end up writing more D'ish wrappers to make the D coding more pleasant.
You can also simply use C strings in D, and pass them straight to C functions that take void*. No conversion necessary. It isn't any harder to ensure a 0 termination in D than it is in C, in fact, it's just the same. D string literals even helpfully already have a 0 at the end with this in mind!
 It's not safe, and of course, being a dynamic language, you can access
 C code at "compile time" (because there it no compile time), but you can
 interface with C very easily:
 
 import ctypes
 libc = ctypes.cdll.LoadLibrary("libc.so.6")
 libc.printf("hello world %i\n", 5)
hello world 5 Wow, that was hard! =)
Ok, does this work:

    p = libc.malloc(100);
    *p = 3;

? Or this:

    struct S { int a; char b; };
    S s;
    libc.fillInS(&s);
 It's simpler, because you only have one obvious way to do things,
No, Python has try/catch/finally as well.
 in D you
 can use a struct, a scope class or a scope statement to achieve the same.
 Of course that gives you more flexibility, but adds complexity to the
 language. I'm not complaining or saying that D is wrong, I'm just saying
 that Python is a very expressive language without much complexity. I think
 the tradeoff is the speed.
 Yes, you can emulate RAII with the with statement, but with RAII
 (objects that destruct when they go out of scope) you can put this
 behavior in the object rather than explicitly in the code every time
 you use it. It's more complicated to have to remember to do it every
 time on use.
Maybe you are right, but the with statement plays very well with the "explicit is better than implicit" of Python :) Again, it's flexibility vs. complexity.
Another principle is abstractions should be in the right place. When the abstraction leaks out into the use of the abstraction, it's user code complexity. This is a case of that, I believe.
 There are static analyzers for Python:
 http://www.logilab.org/857
 http://divmod.org/trac/wiki/DivmodPyflakes
 http://pychecker.sourceforge.net/
What's happening here is the complexity needed in the language is pushed off to third party tools. It didn't go away.
 And again, judging from experience, I don't know why, but I really have
 a very small bug count when using Python. I don't work with huge teams of
 crappy programmers (which I think is the scenario that D tries to cover),
 that can be a reason ;)
Part of that may be experience. The languages I use a lot, I tend to generate far fewer bugs with, because I've learned to avoid the common bugs. There have been very few coding errors in the C++ dialect I use in dmd; the errors have been logic ones. You're right that D has a lot that is intended more for large-scale projects with a diverse team than one-man jobs. There is a lot to support enforced encapsulation, checking, and isolation, if that is desired. Purity, immutability, contracts, interfaces, etc., are not important for small programs.
Dec 01 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 Ok, does this work:
 
      p = libc.malloc(100);
      *p = 3;
 
 ? Or this:
 
      struct S { int a; char b; };
      S s;
      libc.fillInS(&s);
The purpose of ctypes is to interface Python with C libs; it's a quite well designed piece of software engineering. This is how you can do what you ask for:

from ctypes import POINTER, Structure, byref, cdll, c_int, c_char

libc = cdll.LoadLibrary("libc.so.6")

malloc = libc.malloc
malloc.restype = POINTER(c_int)
p = malloc(100)
p[0] = 3

class S(Structure):
    _fields_ = [("a", c_int), ("b", c_char)]

s = S()
libc.fillInS(byref(s))

Bye,
bearophile
Dec 01 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 
 Ok, does this work:
 
 p = libc.malloc(100); *p = 3;
 
 ? Or this:
 
 struct S { int a; char b; }; S s; libc.fillInS(&s);
The purpose of ctypes is to interface Python with C libs, it's a quite well designed piece of software engineering. This is how you can do what you ask for: from ctypes import POINTER, Structure, cdll, c_int, c_char malloc = libc.malloc malloc.restype = POINTER(c_int) p = malloc(100) p[0] = 3 class S(Structure): _fields_ = [("a", c_int), ("b", c_char)] libc.fillInS(byref(s)) Bye, bearophile
Doable, yes, simple, no. For example, it's clear it cannot be linked directly to C. The C code must be installed into a shared library first.
Dec 01 2009
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el  1 de diciembre a las 13:43 me escribiste:
 Leandro Lucarella wrote:
5. simple interfacing to C
In case you mean no unnecessary wrappers etc., this has more to do with the execution model than language features. Most scripting languages are interpreted, and require some sort of assistance from the runtime system. If the language was compiled instead, they wouldn't necessarily need those.
In D you need interfacing code too, it can be a little simpler, that's true.
The interfacing in D is nothing more than providing a declaration. There is no code executed.
Unless you want to pass D strings to C, then you have to execute toStringz(), which is a really thin "wrapper", but it's a wrapper. Using C from D is (generally) error prone and painful, so I usually end up writing more D'ish wrappers to make the D coding more pleasant.
You can also simply use C strings in D, and pass them straight to C functions that take void*. No conversion necessary. It isn't any harder to ensure a 0 termination in D than it is in C, in fact, it's just the same. D string literals even helpfully already have a 0 at the end with this in mind!
Yes, I know you can use bare C strings, but when I use D, I want to code in D, not in C =)
It's not safe, and of course, being a dynamic language, you can access
C code at "compile time" (because there it no compile time), but you can
interface with C very easily:

import ctypes
libc = ctypes.cdll.LoadLibrary("libc.so.6")
libc.printf("hello world %i\n", 5)
hello world 5 Wow, that was hard! =)
Ok, does this work: p = libc.malloc(100); *p = 3;
It looks like you can (not as easily) according to bearophile's example, but this is beside the point; you only want to use malloc() for performance reasons, and I already said that D is better than Python on that. I mentioned ctypes just for the point of easy C interoperability.
It's simpler, because you only have one obvious way to do things,
No, Python has try/catch/finally as well.
I said *obvious*. try/catch/finally is there for another reason (managing errors, not doing RAII). Of course you can find convoluted ways to do anything in Python as with any other language.
Maybe you are right, but the with statement plays very well with the
"explicit is better than implicit" of Python :)

Again, is flexibility vs complexity.
Another principle is abstractions should be in the right place. When the abstraction leaks out into the use of the abstraction, it's user code complexity. This is a case of that, I believe.
Where is the code complexity here? I can't see it.
There are static analyzers for Python:
http://www.logilab.org/857
http://divmod.org/trac/wiki/DivmodPyflakes
http://pychecker.sourceforge.net/
What's happening here is the complexity needed in the language is pushed off to third party tools. It didn't go away.
The thing is, I never used them and never had the need to. Don't ask me why, I just have very few errors when coding in Python. So it's not really *needed*.
And again, judging from experience, I don't know why, but I really have
a very small bug count when using Python. I don't work with huge teams of
crappy programmers (which I think is the scenario that D tries to cover),
that can be a reason ;)
Part of that may be experience. The languages I use a lot, I tend to generate far fewer bugs with because I've learned to avoid the common bugs. There have been very few coding errors in the C++ dialect I use in dmd, the errors have been logic ones.
You're probably right, but I think Python's simplicity really helps in reducing the bug count. When the language doesn't get in the way, it's much harder to introduce bugs because you can focus on what's important; there is no noise distracting you :)
 You're right that D has a lot that is intended more for large scale
 projects with a diverse team than one man jobs. There is a lot to
 support enforced encapsulation, checking, and isolation, if that is
 desired. Purity, immutability, contracts, interfaces, etc., are not
 important for small programs.
Agreed. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- I am so psychosomatic it makes me sick just thinking about it! -- George Constanza
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 It looks like you can (not as easily) according to bearophile example, but
 this is besides the point, you only want to use malloc() for performance
 reasons, and I already said that D is better than Python on that.
 I mentioned ctypes just for the point of easy C-interoperability.
To me C interoperability means being able to connect with any C function. That means handling pointers, structs, etc.
 It's simpler, because you only have one obvious way to do things,
No, Python has try/catch/finally as well.
I said *obvious*. try/catch/finally is there for another reason (managing errors, not doing RAII). Of course you can find convoluted ways to do anything in Python as with any other language.
try/catch/finally is usually used for handling RAII in languages that don't have RAII, so I don't think it's really justifiable to argue that Python only gives one obvious way to do it. D has three: RAII, scope guard, and try-catch-finally. As far as I'm concerned, the only reason t-c-f isn't taken out to the woodshed and shot is to make it easy to translate code from other languages to D.
 Maybe you are right, but the with statement plays very well with the
 "explicit is better than implicit" of Python :)

 Again, is flexibility vs complexity.
Another principle is abstractions should be in the right place. When the abstraction leaks out into the use of the abstraction, it's user code complexity. This is a case of that, I believe.
Where is the code complexity here, I can't see it.
The code complexity is this: suppose I create a mutex object. Every time I get the mutex, I want the mutex to be released on all paths. With RAII, I build this into the mutex object itself. Without RAII, I have to add in the exception handling code EVERY place I use the object. If I change the abstraction, I have to go and change every use of it. To me, that's code complexity, not flexibility. A proper abstraction means that if I change the design, I only have to change it in one place. Not everywhere it's used.
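The trade-off can be sketched in Python with threading.Lock, which supports both styles: repeating the release logic at every call site versus letting the object's own context-manager protocol carry it (the function names here are illustrative):

```python
import threading

lock = threading.Lock()
shared = []

def append_manual(x):
    # Without the abstraction in the object: every call site must
    # repeat the try/finally boilerplate, on every path.
    lock.acquire()
    try:
        shared.append(x)
    finally:
        lock.release()

def append_with(x):
    # With the release logic built into the object (Lock implements
    # __enter__/__exit__), the use site shrinks to one line.
    with lock:
        shared.append(x)

append_manual(1)
append_with(2)
print(shared)  # [1, 2]
```

If the release policy ever changes, only the object has to change in the second style, which is the point Walter makes about proper abstraction.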
 The thing is, I never used them and never had the need to. Don't ask me
 why, I just have very few errors when coding in Python. So it's not really
 *needed*.
I agree that static analysis isn't needed. The better question is: is there a benefit to it that exceeds the cost?
Dec 01 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 D has three: RAII, scope guard, and try-catch-finally. As far as I'm
 concerned, the only reason t-c-f isn't taken out to the woodshed and
 shot is to make it easy to translate code from other languages to D.
I've literally never written a finally block in my life in D because scope statements and RAII are just that good. Does anyone, other than a few beginners who were unaware of scope guards, use finally? I'm half-tempted to say we should just axe it. It's an error-prone legacy feature; if we're looking to remove anything from the spec, ditching finally would do it, and it would encourage converts from Java and
Dec 01 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 D has three: RAII, scope guard, and try-catch-finally. As far as I'm
 concerned, the only reason t-c-f isn't taken out to the woodshed and
 shot is to make it easy to translate code from other languages to D.
I've literally never written a finally block in my life in D because scope statements and RAII are just that good. Does anyone, other than a few beginners who were unaware of scope guards, use finally? I'm half-tempted to say we should just axe it. It's an error prone legacy feature that's completely useless for any the spec, ditching finally would do so, and it would encourage converts from Java and
I'm sympathetic to that point of view, but it is pure drudgery to unwind try-catch-finally into proper scope statements.
Dec 01 2009
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el  1 de diciembre a las 17:31 me escribiste:
 Leandro Lucarella wrote:
It looks like you can (not as easily) according to bearophile example, but
this is besides the point, you only want to use malloc() for performance
reasons, and I already said that D is better than Python on that.
I mentioned ctypes just for the point of easy C-interoperability.
To me C interoperability means being able to connect with any C function. That means handling pointers, structs, etc.
Well, you can. It's a *little* more verbose than D, but since you almost *never* need to interoperate with C in Python, it's not so bad.
It's simpler, because you only have one obvious way to do things,
No, Python has try/catch/finally as well.
I said *obvious*. try/catch/finally is there for another reason (managing errors, not doing RAII). Of course you can find convoluted ways to do anything in Python as with any other language.
try/catch/finally is usually used for handling RAII in languages that don't have RAII, so I don't think it's really justifiable to argue that Python only gives one obvious way to do it.
It's obvious when you code in Python.
 D has three: RAII, scope guard, and try-catch-finally. As far as I'm
 concerned, the only reason t-c-f isn't taken out to the woodshed and
 shot is to make it easy to translate code from other languages to D.
I think code translation from other languages is not a good reason for adding complexity...
Maybe you are right, but the with statement plays very well with the
"explicit is better than implicit" of Python :)

Again, is flexibility vs complexity.
Another principle is abstractions should be in the right place. When the abstraction leaks out into the use of the abstraction, it's user code complexity. This is a case of that, I believe.
Where is the code complexity here, I can't see it.
The code complexity is suppose I create a mutex object. Every time I get the mutex, I want the mutex to be released on all paths. With RAII, I build this into the mutex object itself.
But you can do that with the 'with' statement!
 Without RAII, I have to add in the exception handling code EVERY place
 I use the object. If I change the abstraction, I have to go and change
 every use of it. To me, that's code complexity, not flexibility.
 
 A proper abstraction means that if I change the design, I only have
 to change it in one place. Not everywhere its used.
We agree completely :)
The thing is, I never used them and never had the need to. Don't ask me
why, I just have very few errors when coding in Python. So it's not really
*needed*.
I agree that static analysis isn't needed. The better statement is is there a benefit to it that exceeds the cost?
Maybe in very big projects with an heterogeneous team, I don't know. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Si pensas que el alma no se ve el alma sí se ve en los ojos
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 I think code translation from other languages is not a good reason for
 adding complexity...
I think it is. We wouldn't have DWT otherwise, for example. Inner classes were added specifically in order to speed up the translation process.
 The code complexity is suppose I create a mutex object. Every time I
 get the mutex, I want the mutex to be released on all paths. With
 RAII, I build this into the mutex object itself.
But you can do that with the 'with' statement!
The with goes at the use end, not the object declaration end. Or I read the spec wrong.
Dec 01 2009
parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Walter Bright wrote:
 But you can do that with the 'with' statement!
The with goes at the use end, not the object declaration end. Or I read the spec wrong.
So does the scope guard. I think scope guard solves the same problem as the with-statement, only it does it in a more flexible and arguably sexier way.
Dec 01 2009
prev sibling parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 10:46:11 -0800, Walter Bright wrote:

 Leandro Lucarella wrote:
 I really think the *only* *major* advantage of D over Python is speed.
 That's it.
I probably place a lot more importance on static verification rather than relying on convention and tons of unit tests.
In many places if you apply for a job, static verification is more or less bullshit talk to their ears. Unit testing with large frameworks is the way to go. You even have lots of new paradigms to learn, e.g. TDD, BDD, ...
Dec 01 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 Tue, 01 Dec 2009 10:46:11 -0800, Walter Bright wrote:
 Leandro Lucarella wrote:
 I really think the *only* *major* advantage of D over Python is speed.
 That's it.
I probably place a lot more importance on static verification rather than relying on convention and tons of unit tests.
In many places if you apply for a job, static verification is more or less bullshit talk to their ears. Unit testing with large frameworks is the way to go. You even have lots of new paradigms to learn, e.g. TDD, BDD, ...
My biggest gripe about static verification is that it can't help you at all with high-level logic/algorithmic errors, only lower level coding errors. Good unit tests (and good asserts), on the other hand, are invaluable for finding and debugging high-level logic and algorithmic errors.
Dec 01 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
dsimcha:
 Good unit
 tests (and good asserts), on the other hand, are invaluable for finding and
 debugging high-level logic and algorithmic errors.
Contract programming can help too. For example, in a precondition of a binary search function you can test that the items are sorted. If you don't like that (because when such a contract is present it changes the computational complexity class of the function), you can even do a random sampling test :-) In some cases it can be useful to split unittests and contracts into two groups (using a version()): a group of fast ones to be run all the time, and a group of slower ones to be run only once in a while, to be safer. What I'd like to know is why Andrei has asked for exceptions inside contracts too.

Bye,
bearophile
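bearophile's binary-search example might look like this in Python, with the full is-sorted precondition written as an assert; as he notes, the check is O(n) and so changes the function's complexity class (a sketch with illustrative names, the random-sampling variant would check only a few adjacent pairs):

```python
def is_sorted(items):
    return all(items[i] <= items[i + 1] for i in range(len(items) - 1))

def binary_search(items, key):
    # Precondition (contract): the input must be sorted. This full
    # check is O(n), which is exactly the complexity-class objection.
    assert is_sorted(items), "binary_search precondition violated"
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] < key:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(items) and items[lo] == key else -1

print(binary_search([1, 3, 5, 7], 5))   # 2
print(binary_search([1, 3, 5, 7], 4))   # -1
```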
Dec 01 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 My biggest gripe about static verification is that it can't help you at all
with
 high-level logic/algorithmic errors, only lower level coding errors.  Good unit
 tests (and good asserts), on the other hand, are invaluable for finding and
 debugging high-level logic and algorithmic errors.
Unit tests have their limitations as well. Unit tests cannot prove a function is pure, for example. Both unit tests and static verification are needed.
Dec 01 2009
parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 14:24:01 -0800, Walter Bright wrote:

 dsimcha wrote:
 My biggest gripe about static verification is that it can't help you at
 all with high-level logic/algorithmic errors, only lower level coding
 errors.  Good unit tests (and good asserts), on the other hand, are
 invaluable for finding and debugging high-level logic and algorithmic
 errors.
Unit tests have their limitations as well. Unit tests cannot prove a function is pure, for example.
Sure, unit tests can't prove that.
 Both unit tests and static verification are needed.
But it doesn't lead to this conclusion. Static verification is sometimes very expensive, and real-world business applications don't need those guarantees that often. It's ok if a web site or game crashes every now and then. If I need serious static verification, I would use tools like Coq, not D.
Dec 01 2009
next sibling parent Michal Minich <michal.minich gmail.com> writes:
Hello retard,

 Tue, 01 Dec 2009 14:24:01 -0800, Walter Bright wrote:
 
 dsimcha wrote:
 
 My biggest gripe about static verification is that it can't help you
 at all with high-level logic/algorithmic errors, only lower level
 coding errors.  Good unit tests (and good asserts), on the other
 hand, are invaluable for finding and debugging high-level logic and
 algorithmic errors.
 
Unit tests have their limitations as well. Unit tests cannot prove a function is pure, for example.
Sure, unit tests can't prove that.
 Both unit tests and static verification are needed.
 
But it doesn't lead to this conclusion. Static verification is sometimes very expensive and real world business applications don't need those guarantees that often. It's ok if a web site or game crashes every now and then. If I need serious static verification, I would use tools like Coq, not D..
Static verification in Coq is very expensive, but who really does that for real-world programs? I think we are talking about automatic static verification with no or minimal programmer assistance. It will get you assurances for a larger project with multiple programmers: that the various parts plug in correctly (typechecking) and that they do not affect other parts of the program in unexpected ways (const/pure/safe). Then you are on good ground to verify your program logic yourself (debugging/pre(post)conditions/unittests/asserts/invariants).
Dec 02 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 Tue, 01 Dec 2009 14:24:01 -0800, Walter Bright wrote:
 Unit tests have their limitations as well. Unit tests cannot prove a
 function is pure, for example.
Sure, unit tests can't prove that.
 Both unit tests and static verification are needed.
But it doesn't lead to this conclusion. Static verification is sometimes very expensive
Not if it's built in to the compiler. I aim to bring the cost of it down to zero.
 and real world business applications don't need those 
 guarantees that often.
Having your accounting software write checks in the wrong amount can be very very bad. And frankly, if you can afford your software unwittingly emitting garbage data, you don't need that software for your business apps.
 It's ok if a web site or game crashes every now 
 and then.
If Amazon's web site goes down, they likely lose millions of dollars a minute. Heck, I once lost a lot of business because the web site link to the credit card system went down. Few businesses can afford to have their ecommerce web sites down.
 If I need serious static verification, I would use tools like 
 Coq, not D..
There's a lot of useful stuff in between a total formal proof of correctness and nothing at all. D can offer proof of various characteristics that are valuable for eliminating bugs.
Dec 02 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello dsimcha,

 My biggest gripe about static verification is that it can't help you
 at all with high-level logic/algorithmic errors, only lower level
 coding errors.  Good unit tests (and good asserts), on the other hand,
 are invaluable for finding and debugging high-level logic and
 algorithmic errors.
 
I don't have a link or anything, but I remember hearing about a study MS did about finding bugs, and what they found is that every reasonably effective tool they looked at found about the same number of bugs (ok, within shouting distance, close enough that none of them could be said to be pointless) but *different* bugs. The way to find the most bugs is to attack it from many angles. If I can have a language that can totally prevent one class of bugs in vast swaths of code, that's a good thing, even if it does jack for another class of bugs.
Dec 02 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from BCS (none anon.com)'s article
 Hello dsimcha,
 My biggest gripe about static verification is that it can't help you
 at all with high-level logic/algorithmic errors, only lower level
 coding errors.  Good unit tests (and good asserts), on the other hand,
 are invaluable for finding and debugging high-level logic and
 algorithmic errors.
I don't have a link or anything but I remember hearing about a study MS did about finding bugs and what they found is that every reasonably effective tool they looked at found the same amount of bugs (ok, within shouting distance, close enough that none of them could be said to be pointless) but different bugs. The way to find the most bugs is to attack it from many angle. If I can have a language that can totally prevent one class of bugs in vast swaths of code, that's a good thing, even if it does jack for another class of bugs.
Right, but the point I was making is that you hit diminishing returns on static verification very quickly. If you have even very basic static verification, it will be enough to tilt the vast majority of your bugs towards high-level logic/algorithm bugs.
Dec 02 2009
parent reply BCS <none anon.com> writes:
Hello dsimcha,

 == Quote from BCS (none anon.com)'s article
 
 I don't have a link or anything but I remember hearing about a study
 MS did
 about finding bugs and what they found is that every reasonably
 effective
 tool they looked at found the same amount of bugs (ok, within
 shouting distance,
 close enough that none of them could be said to be pointless) but
 different
 bugs. The way to find the most bugs is to attack it from many angle.
 If I
 can have a language that can totally prevent one class of bugs in
 vast swaths
 of code, that's a good thing, even if it does jack for another class
 of bugs.
Right, but the point I was making is that you hit diminishing returns on static verification very quickly. If you have even very basic static verification, it will be enough to tilt the vast majority of your bugs towards high-level logic/algorithm bugs.
OTOH, if it's done well (doesn't get in my way) and is built into the language, any static verification is free from the end user's standpoint. Heck, even if it gets in your way, but only for strange cases where you're hacking around, it's still useful because it tells you where the high-risk code is.
Dec 02 2009
parent Don <nospam nospam.com> writes:
BCS wrote:
 Hello dsimcha,
 
 == Quote from BCS (none anon.com)'s article

 I don't have a link or anything but I remember hearing about a study
 MS did
 about finding bugs and what they found is that every reasonably
 effective
 tool they looked at found the same amount of bugs (ok, within
 shouting distance,
 close enough that none of them could be said to be pointless) but
 different
 bugs. The way to find the most bugs is to attack it from many angle.
 If I
 can have a language that can totally prevent one class of bugs in
 vast swaths
 of code, that's a good thing, even if it does jack for another class
 of bugs.
Right, but the point I was making is that you hit diminishing returns on static verification very quickly. If you have even very basic static verification, it will be enough to tilt the vast majority of your bugs towards high-level logic/algorithm bugs.
OTOH, if it's done well (doesn't get in my way) and's built into the language, any static verification is free from the end users standpoint. Heck, even it it gets in your way but only for strange cases where your hacking around, it's still useful because it tells you where the high risk code is.
There's a really interesting synergy between pure and unit tests. It's much easier to test a function properly if it's pure -- you know that there are no globals anywhere which you have to worry about.
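Don's point is easy to demonstrate: a pure function can be tested with bare one-line asserts, while an impure one drags hidden state into every test (a Python sketch with made-up names):

```python
def net_price(gross, tax_rate):
    # Pure: the result depends only on the arguments, so a unit test
    # needs no fixtures, no mocks, no global setup.
    return round(gross * (1 + tax_rate), 2)

current_tax_rate = 0.20  # hidden module-level state

def net_price_impure(gross):
    # Impure: the result also depends on current_tax_rate, so every
    # test must first arrange that state (and restore it afterwards).
    return round(gross * (1 + current_tax_rate), 2)

# Testing the pure function is a one-liner per case:
assert net_price(100, 0.20) == 120.0
assert net_price(100, 0.0) == 100.0
```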
Dec 03 2009
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 retard, el  1 de diciembre a las 11:42 me escribiste:
 Tue, 01 Dec 2009 03:13:28 -0800, Walter Bright wrote:

 retard wrote:
 Overall these simplifications don't remove any crucial high level
 language features, in fact they make the code simpler and shorter. For
 instance there isn't high level code that can only be written with
 8-bit byte primitives, static methods or closures, but not with 32-bit
 generic ints, singletons, and generic higher order functions. The only
 thing you lose is some type safety and efficiency.
I'm no expert on Python, but there are some things one gives up with it: 1. the ability to do functional style programming. The lack of immutability makes for very hard multithreaded programming.
Even if the language doesn't enforce immutability it's indeed possible to use immutable data types in a language without pure/const/final attributes.
And BTW, Python *has* some built-in immutable types (strings, tuples, integers, floats, frozensets, and I don't remember if there is anything else). Python uses convention over hard discipline (no public/private, for example), so you can make your own immutable types: just don't add mutating methods and don't mess with them. I agree it's arguable, but people actually use these conventions (they are all consenting adults :), so things work. I can only speak from experience, and my bug count in Python is extremely low, even when doing MT (the Queue module provides a very easy way to pass messages from one thread to another).
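The two claims above are easy to check in a few lines of Python (a sketch with invented names, not code from the thread): built-in immutables reject mutation at runtime, and the Queue module gives thread-safe message passing:

```python
import queue
import threading

# Built-in immutable types refuse in-place mutation at runtime.
point = (1, 2, 3)
try:
    point[0] = 99          # tuples are immutable
    mutated = True
except TypeError:
    mutated = False
assert mutated is False

# Convention-based immutability: no mutating methods, and callers
# agree not to poke at the underscored field ("consenting adults").
class Color:
    def __init__(self, r, g, b):
        self._rgb = (r, g, b)

    @property
    def rgb(self):
        return self._rgb

# queue.Queue handles the locking internally: one thread produces,
# another consumes, no explicit synchronization in user code.
q = queue.Queue()

def worker():
    q.put(Color(255, 0, 0).rgb)

thread = threading.Thread(target=worker)
thread.start()
thread.join()
assert q.get() == (255, 0, 0)
```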
But wait, my understanding is that threading in Python is a complete shame: one global lock. Is that correct? FWIW, that's such a bad design that _nobody_ I know ever brought it up except in jest.
 I agree that, when you don't care much for performance, things are much
 easier :)
I've hoped to leave my trace in history with a one-liner: "Inefficient abstractions are a dime a dozen". Didn't seem to catch on at all :o).
 I really think the *only* *major* advantage of D over Python is speed.
 That's it.
In wake of the above, it's actually huge. If you can provide comparable power for better speed, that's a very big deal. (Usually dynamic/scripting languages are significantly more powerful because they have fewer constraints.) Andrei
Dec 01 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  1 de diciembre a las 10:58 me escribiste:
I really think the *only* *major* advantage of D over Python is speed.
That's it.
In wake of the above, it's actually huge. If you can provide comparable power for better speed, that's a very big deal. (Usually dynamic/scripting languages are significantly more powerful because they have fewer constraints.)
I develop twice as fast in Python than in D. Of course this is only me, but that's where I think Python is better than D :) I think only not having a compile cycle (no matter how fast compiling is) is a *huge* win. Having an interactive console (with embedded documentation) is another big win. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- When I was a child I had a fever My hands felt just like two balloons. Now I've got that feeling once again I can't explain you would not understand This is not how I am. I have become comfortably numb.
Dec 01 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)
If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.
 I think only not having a compile cycle (no matter how fast compiling is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.
That makes sense.
Dec 01 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
I develop twice as fast in Python than in D. Of course this is only me,
but that's where I think Python is better than D :)
If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.
And because you have less noise (and much more and better libraries, I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language, and when you need speed, you need static typing and all the low-level support. They are all necessary evils. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of their inherent differences.
I think only not having a compile cycle (no matter how fast compiling is)
is a *huge* win. Having an interactive console (with embedded
documentation) is another big win.
That makes sense.
I guess D can greatly benefit from a compiler that can compile and run a multiple-file program with one command (AFAIK rdmd only supports one-file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself; I guess it's doable, the trickiest part is the interactive console, I guess... -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- La terapia no sirve: es mucho mejor pagar para hacer las perversiones que para contarlas. -- Alberto Giordano (filósofo estilista)
Dec 01 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)
If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.
And because you have less noise (and much more and better libraries I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language and when you need speed, you need static typing and all the low-level support. They are all necessary evil. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of they inherent differences.
 I think only not having a compile cycle (no matter how fast compiling is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.
That makes sense.
I guess D can greatly benefit from a compiler that can compile and run a multiple-files program with one command (AFAIK rdmd only support one file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself, I guess it's doable, the trickiest part is the interactive console, I guess...
I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make-do without it. Andrei
Dec 01 2009
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 4:37 PM, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:
 Leandro Lucarella wrote:
 Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)
If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.
And because you have less noise (and much more and better libraries I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language
 and when you need speed, you need static typing and all the low-level
 support. They are all necessary evil. All I'm saying is, when I don't need
 speed and I have to do something quickly, Python is still a far better
 language than D, because of they inherent differences.

 I think only not having a compile cycle (no matter how fast compiling
 is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.
That makes sense.
I guess D can greatly benefit from a compiler that can compile and run a multiple-files program with one command (AFAIK rdmd only support one file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself, I guess it's doable, the trickiest part is the interactive console, I guess...
I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make-do without it.
The web page[1] says it doesn't work on Windows. That'd be my excuse for not using it. [1] http://www.digitalmars.com/d/2.0/rdmd.html --bb
Dec 01 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 On Tue, Dec 1, 2009 at 4:37 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Leandro Lucarella wrote:
 Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)
If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.
And because you have less noise (and much more and better libraries I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language and when you need speed, you need static typing and all the low-level support. They are all necessary evil. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of they inherent differences.
 I think only not having a compile cycle (no matter how fast compiling
 is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.
That makes sense.
I guess D can greatly benefit from a compiler that can compile and run a multiple-files program with one command (AFAIK rdmd only support one file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself, I guess it's doable, the trickiest part is the interactive console, I guess...
I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make-do without it.
The web page[1] says it doesn't work on Windows. That'd be my excuse for not using it. [1] http://www.digitalmars.com/d/2.0/rdmd.html --bb
rdmd does work for Windows. What it does is to detect and cache dependencies such that you only need to specify your main file. Andrei
Dec 01 2009
prev sibling next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Tue, Dec 1, 2009 at 5:08 PM, Bill Baxter <wbaxter gmail.com> wrote:
 On Tue, Dec 1, 2009 at 4:37 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Leandro Lucarella wrote:
 Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)
If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.
And because you have less noise (and much more and better libraries I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language
 and when you need speed, you need static typing and all the low-level
 support. They are all necessary evil. All I'm saying is, when I don't need
 speed and I have to do something quickly, Python is still a far better
 language than D, because of they inherent differences.

 I think only not having a compile cycle (no matter how fast compiling
 is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.
That makes sense.
I guess D can greatly benefit from a compiler that can compile and run a multiple-files program with one command (AFAIK rdmd only support one file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself, I guess it's doable, the trickiest part is the interactive console, I guess...
I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make-do without it.
The web page[1] says it doesn't work on Windows. That'd be my excuse for not using it. [1] http://www.digitalmars.com/d/2.0/rdmd.html
Seems like it does work, though. Good news! The web page should be updated. I will definitely use it now that I know it works. It does seem to hang at the end of output waiting for an Enter from the console. And the á in the --help message doesn't show properly on the console either (but actually it does work if I chcp 65001 first). And the --man browser thing doesn't work at all. I think you need to do some registry diving to find the browser under Windows. You can open a url in the default browser with this magic code:

import std.c.windows.windows;

extern(Windows)
{
    HINSTANCE ShellExecuteW(HWND, const LPWSTR, const LPWSTR,
        const LPWSTR, const LPWSTR, INT);
}

void main()
{
    HINSTANCE hr = ShellExecuteW(null, "open"w.ptr,
        "http://www.digitalmars.com/d"w.ptr, null, null, SW_SHOWNORMAL);
}

--bb
Dec 01 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 On Tue, Dec 1, 2009 at 5:08 PM, Bill Baxter <wbaxter gmail.com> wrote:
 On Tue, Dec 1, 2009 at 4:37 PM, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:
 Leandro Lucarella wrote:
 Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)
If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.
And because you have less noise (and much more and better libraries I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language and when you need speed, you need static typing and all the low-level support. They are all necessary evil. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of they inherent differences.
 I think only not having a compile cycle (no matter how fast compiling
 is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.
That makes sense.
I guess D can greatly benefit from a compiler that can compile and run a multiple-files program with one command (AFAIK rdmd only support one file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself, I guess it's doable, the trickiest part is the interactive console, I guess...
I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make-do without it.
The web page[1] says it doesn't work on Windows. That'd be my excuse for not using it. [1] http://www.digitalmars.com/d/2.0/rdmd.html
Seems like it does work, though. Good news! The web page should be updated. I will definitely use it now that I know it works. It does seem to hang at the end of output waiting for an Enter from the console. And the á in the --help message doesn't show properly on the console either. (but actually it does work if I chcp 65001 first). And the --man browser thing doesn't work at all. I think you need to do some registry diving to find the browser under Windows. You can open a url in the default browser with this magic code: import std.c.windows.windows; extern(Windows) { HINSTANCE ShellExecuteW(HWND,const LPWSTR, const LPWSTR, const LPWSTR, const LPWSTR,INT); } void main() { HINSTANCE hr = ShellExecuteW(null, "open"w.ptr, "http://www.digitalmars.com/d"w.ptr, null, null, SW_SHOWNORMAL); } --bb
Thanks! Could you please submit that to bugzilla? Andrei
Dec 01 2009
prev sibling next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Andrei Alexandrescu wrote:

 
 I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I
 managed to make-do without it.
 
 Andrei
rdmd is a life saver, I use it all the time.
Dec 02 2009
prev sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
Andrei Alexandrescu wrote:
 Leandro Lucarella wrote:
 Walter Bright, el  1 de diciembre a las 13:45 me escribiste:
 Leandro Lucarella wrote:
 I develop twice as fast in Python than in D. Of course this is only me,
 but that's where I think Python is better than D :)
If that is not just because you know the Python system far better than the D one, then yes indeed it is a win.
And because you have less noise (and much more and better libraries I guess :) in Python, less complexity to care about. And don't get me wrong, I love D, because it's a very expressive language and when you need speed, you need static typing and all the low-level support. They are all necessary evil. All I'm saying is, when I don't need speed and I have to do something quickly, Python is still a far better language than D, because of they inherent differences.
 I think only not having a compile cycle (no matter how fast 
 compiling is)
 is a *huge* win. Having an interactive console (with embedded
 documentation) is another big win.
That makes sense.
I guess D can greatly benefit from a compiler that can compile and run a multiple-files program with one command (AFAIK rdmd only support one file programs, right?) and an interactive console that can get the ddoc documentation on the fly. But that's not very related to the language itself, I guess it's doable, the trickiest part is the interactive console, I guess...
I'm amazed that virtually nobody uses rdmd. I can hardly fathom how I managed to make-do without it. Andrei
I use it almost exclusively, and find it an extremely useful and efficient tool. The only time I use DMD directly is when I'm done coding and testing, and want to compile the final library file or executable. For libraries, I define a unit.d file in the library root directory that looks something like this:

module unit;

import std.stdio;

// Import entire library.
import mylib.moduleA;
import mylib.moduleB;
...

void main()
{
    writeln("All unittests passed.");
}

Then I mark unit.d as executable, and run it whenever I want to test changes I've made to the library. -Lars
Dec 02 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-files program with one command
dmd a b c -run args...
Dec 01 2009
parent reply =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Walter Bright wrote:
 Leandro Lucarella wrote:
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-files program with one command
dmd a b c -run args...
Can we have dmd -resolve-deps-and-run main.d? I use rdmd when I can, but it doesn't manage to link C libs in properly.
Dec 01 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Pelle Månsson wrote:
 Walter Bright wrote:
 Leandro Lucarella wrote:
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-files program with one command
dmd a b c -run args...
Can we have dmd -resolve-deps-and-run main.d I use rdmd when I can, but it doesn't manage to link C-libs in properly.
Could you please submit a sample to bugzilla? Andrei
Dec 02 2009
parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Andrei Alexandrescu wrote:
 Pelle Månsson wrote:
 Walter Bright wrote:
 Leandro Lucarella wrote:
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-files program with one command
dmd a b c -run args...
Can we have dmd -resolve-deps-and-run main.d I use rdmd when I can, but it doesn't manage to link C-libs in properly.
Could you please submit a sample to bugzilla? Andrei
http://d.puremagic.com/issues/show_bug.cgi?id=3564 Thank you.
Dec 02 2009
prev sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Leandro Lucarella wrote:

 
 I guess D can greatly benefit from a compiler that can compile and run
 a multiple-files program with one command (AFAIK rdmd only support one
 file programs, right?) and an interactive console that can get the ddoc
 documentation on the fly. But that's not very related to the language
 itself, I guess it's doable, the trickiest part is the interactive
 console, I guess...
rdmd does compile in dependencies, or is that not what you mean? For the module you are working in, assuming you program with unit tests: rdmd -unittest --main foo.d When you don't have tons of dependencies, it is practically as fast as a scripting language.
Dec 02 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Leandro,

 
 If you say dynamic languages don't have metaprogramming capabilities,
 you just don't have any idea of what a dynamic language really is.
 
If you say you can do metaprogramming at runtime, you just don't have any idea of what I want to do with metaprogramming. For example: unit-carrying types: check for unit errors (adding feet to seconds) at compile time. I can be sure there are no unit errors without knowing whether I've executed every possible code path. Domain-specific compile-time optimizations: evaluate an O(n^3) function so I can generate O(n) code rather than write O(n^2) code. If you do that at runtime, things get slower, not faster. Any language that doesn't have a "compile time" that is evaluated only once for all code, and before the product ships, can't do these.
Dec 02 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
BCS, el  2 de diciembre a las 17:37 me escribiste:
 Hello Leandro,
 
 
If you say dynamic languages don't have metaprogramming capabilities,
you just don't have any idea of what a dynamic language really is.
If you say you can do metaprogramming at runtime you just don't have any idea of what I want to do with metaprogramming. For example:
What you say next is not metaprogramming per se; they are performance issues (that you resolve using compile-time metaprogramming). You're missing the point.
 unit carrying types: check for unit errors (adding feet to seconds)
 at compile time. I can be sure there are no unit error without
 knowing if I've executed every possible code path.
There is no compile-time metaprogramming in dynamic languages, you just can't verify anything at compile time, of course you can't do that! Again, you are talking about performance issues; that's doable in a dynamic language, the checks are just run at run time.
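As a concrete illustration of that runtime alternative (a hypothetical Python sketch, not code from the thread): a dynamic language can still catch the feet-plus-seconds error, just when the offending line executes rather than at compile time:

```python
# Minimal runtime unit checking: quantities carry a unit tag, and
# addition verifies the tags match when the line actually runs.
class Quantity:
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

feet = Quantity(3.0, "ft")
more_feet = Quantity(2.0, "ft")
seconds = Quantity(5.0, "s")

assert (feet + more_feet).value == 5.0

# The unit error surfaces only if this path is executed -- which is
# exactly the trade-off being debated against compile-time checking.
try:
    feet + seconds
    caught = False
except TypeError:
    caught = True
assert caught
```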
 Domain specific compile time optimizations: Evaluate a O(n^3)
 function so I can generate O(n) code rather than write O(n^2) code.
 If you do that at runtime, things get slower, not faster.
Again *optimization*. How many times should I say that I agree that D is better than almost every dynamic languages if you need speed?
 Any language that doesn't have a "compile time" that is evaluated
 only once for all code and before the product ships, can't do these.
You are right, but if you *don't* need *speed*, you don't need all that stuff, that's not metaprogramming to fix a "logic" problem, they are all optimization tricks, if you don't need speed, you don't need optimization tricks. The kind of metaprogramming I'm talking about is, for example, generating boring, repetitive boilerplate code. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Es mejor probar el sabor de sapo y darse cuenta que es feo, antes que no hacerlo y creer que es una gran gomita de pera. -- Dr Ricardo Vaporesso, Malta 1951
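A small Python sketch (names invented for illustration, not code from the thread) of that last kind of metaprogramming: accessor boilerplate stamped out at class-creation time instead of written by hand:

```python
# Generate repetitive property boilerplate at runtime instead of
# writing one near-identical accessor per field by hand.
def with_accessors(*fields):
    def decorate(cls):
        for name in fields:
            # Bind the current name via a default argument so each
            # generated getter reads its own backing field.
            def getter(self, _name=name):
                return getattr(self, "_" + _name)
            setattr(cls, name, property(getter))
        return cls
    return decorate

@with_accessors("host", "port")
class Server:
    def __init__(self, host, port):
        self._host = host
        self._port = port

s = Server("localhost", 8080)
assert s.host == "localhost"
assert s.port == 8080
```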
Dec 02 2009
next sibling parent reply BCS <none anon.com> writes:
Hello Leandro,

 BCS, el  2 de diciembre a las 17:37 me escribiste:
 
 Hello Leandro,
 
 If you say dynamic languages don't have metaprogramming
 capabilities, you just don't have any idea of what a dynamic
 language really is.
 
If you say you can do metaprogramming at runtime you just don't have any idea of what I want to do with metaprogramming. For example:
What you say next, is not metaprogramming per se, they are performance issues (that you resolve using compile-time metaprogramming). You're missing the point.
No, you're missing MY point. I was very careful to add "what I want to do with" to my statement. It might not be true for you, but what I assert is true for me. Most of the things *I* want from metaprogramming must be done as compile-time metaprogramming. Saying "dynamic languages can do something at run time" doesn't imply that there is nothing more to be had by doing it at compile time.
 unit carrying types: check for unit errors (adding feet to seconds)
 at compile time. I can be sure there are no unit error without
 knowing if I've executed every possible code path.
 
There is no compile time metaprogrammin in dynamic languages, you just can't verify anything at compile time, of course you can't do that! Again, you are talking about performance issues, that's doable in a dynamic languages, the checks are just runned at run time.
The reason for doing the checks at compile time is not performance but correctness. I want to know a priori that the code is correct rather than wait till runtime.
 Domain specific compile time optimizations: Evaluate a O(n^3)
 function so I can generate O(n) code rather than write O(n^2) code.
 If you do that at runtime, things get slower, not faster.
 
Again *optimization*. How many times should I say that I agree that D is better than almost every dynamic languages if you need speed?
I'm not arguing on that point. What I'm arguing is that (at least for me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*
 Any language that doesn't have a "compile time" that is evaluated
 only once for all code and before the product ships, can't do these.
 
You are right, but if you *don't* need *speed*, you don't need all that stuff; that's not metaprogramming to fix a "logic" problem, they are all optimization tricks, and if you don't need speed, you don't need optimization tricks.
Personally, I'd rather use non-metaprogramming solutions where runtime solutions are viable. They are generally easier to work with (from the lib author's standpoint) and should be just as powerful. The API might be a little messier, but you should be able to get just as much done with it.
 
 The kind of metaprogramming I'm talking about is, for example,
 generating boring, repetitive boilerplate code.
For that kind of things, if I had a choice between compile time meta, run time meta and non meta, that last one I'd use is run-time meta.
Dec 02 2009
next sibling parent reply retard <re tard.com.invalid> writes:
Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:

 Hello Leandro,
 Again *optimization*. How many times should I say that I agree that D
 is better than almost every dynamic languages if you need speed?
I'm not arguing on that point. What I'm arguing is that (at least for me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*
Both the language used to represent D metaprograms and D itself are suboptimal for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness. For instance, you could define natural language like statements in your DSL with functional composition. In D you basically have to write all metaprograms inside strings and parse them with CTFE functions. In e.g. Lisp or Io the DSL is on the same abstraction level as the main language. These are of course slow, but in some environments you need to be able to provide non-developers an intuitive interface for writing business logic. Even a runtime metaprogramming system can provide optimizations after the DSL has been processed. I understand your logic. It's very simple. You use metaprogramming to improve performance. That's also the reason you use D: it's the language that can provide the greatest performance once the compiler has matured a bit. To me program inefficiency is a rather small problem today. Most programs perform fast enough. But they crash way too often and leak. The problems we face today are e.g. vendor lock-in in forms of tivoization, closed binaries, and cloud computing. D doesn't help here either. It doesn't enforce copyleft (e.g. AGPL), and features like inline assembler encourage the use of DRM systems.
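The "natural language like statements ... with functional composition" idea can be sketched in a few lines of Python (names invented for illustration); the DSL stays ordinary code in the host language, with no string parsing involved:

```python
# A tiny internal DSL for business rules, built from composable
# predicates rather than parsed strings.
def field(name):
    return lambda record: record[name]

def at_least(get, limit):
    return lambda record: get(record) >= limit

def both(a, b):
    return lambda record: a(record) and b(record)

# Reads close to the problem domain:
# "age at least 18 and score at least 700"
eligible = both(at_least(field("age"), 18),
                at_least(field("score"), 700))

assert eligible({"age": 30, "score": 720}) is True
assert eligible({"age": 30, "score": 650}) is False
```

In Lisp-family languages the same composition can also be rewritten or optimized before it runs, which is the point being made about runtime metaprogramming systems.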
Dec 02 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:
 
 Hello Leandro,
 Again *optimization*. How many times should I say that I agree that D
 is better than almost every dynamic languages if you need speed?
I'm not arguing on that point. What I'm arguing is that (at least for me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*
Both the language used to represent D metaprograms and D are suboptimal for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness. For instance, you could define natural language like statements in your DSL with functional composition. In D you basically have to write all metaprograms inside strings and parse them with CTFE functions. In e.g. lisp or io the DSL is on the same abstraction level as the main language. These are of course slow, but in some environments you need to be able to provide non-developers an intuitive interface for writing business logic. Even the runtime metaprogramming system can provide optimizations after the DSL has been processed. I understand your logic. It's very simple. You use metaprogramming to improve performance.
Static dimensional analysis doesn't improve performance, and I recall he mentioned that. Andrei
Dec 02 2009
parent reply retard <re tard.com.invalid> writes:
Wed, 02 Dec 2009 16:00:50 -0800, Andrei Alexandrescu wrote:

 retard wrote:
 Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:
 
 Hello Leandro,
 Again *optimization*. How many times should I say that I agree that D
 is better than almost every dynamic languages if you need speed?
I'm not arguing on that point. What I'm arguing is that (at least for me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*
Both the language used to represent D metaprograms and D are suboptimal for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness. For instance, you could define natural language like statements in your DSL with functional composition. In D you basically have to write all metaprograms inside strings and parse them with CTFE functions. In e.g. lisp or io the DSL is on the same abstraction level as the main language. These are of course slow, but in some environments you need to be able to provide non-developers an intuitive interface for writing business logic. Even the runtime metaprogramming system can provide optimizations after the DSL has been processed. I understand your logic. It's very simple. You use metaprogramming to improve performance.
Static dimensional analysis doesn't improve performance, and I recall he mentioned that.
Why not? I agree it also does static checking of type compatibility, but when done at runtime, computing the associated runtime type tag and comparing the tags also requires CPU cycles. If the analysis is done at compile time, the computational problem degenerates to operations on scalars, and types can be erased at runtime if they are not used for anything else.
Dec 02 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
retard wrote:
 Wed, 02 Dec 2009 16:00:50 -0800, Andrei Alexandrescu wrote:
 
 retard wrote:
 Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:

 Hello Leandro,
 Again *optimization*. How many times should I say that I agree that D
 is better than almost every dynamic languages if you need speed?
I'm not arguing on that point. What I'm arguing is that (at least for me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*
Both the language used to represent D metaprograms and D are suboptimal for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness. For instance, you could define natural language like statements in your DSL with functional composition. In D you basically have to write all metaprograms inside strings and parse them with CTFE functions. In e.g. lisp or io the DSL is on the same abstraction level as the main language. These are of course slow, but in some environments you need to be able to provide non-developers an intuitive interface for writing business logic. Even the runtime metaprogramming system can provide optimizations after the DSL has been processed. I understand your logic. It's very simple. You use metaprogramming to improve performance.
Static dimensional analysis doesn't improve performance, and I recall he mentioned that.
Why not? I agree it does also static checking of type compatibility, but when done at runtime, computing the associated runtime type tag and comparing them also requires cpu cycles. If the analysis is done at compile time, the computational problem degenerates to operations on scalars and types can be erased on runtime if they are not used for anything else.
Doing it at runtime... see Don's VW metaphor. The whole point is to make it impossible to write incorrect programs, not to detect those that are incorrect. There's a huge difference. Andrei
Dec 02 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello retard,

 Wed, 02 Dec 2009 21:16:28 +0000, BCS wrote:
 
 Hello Leandro,
 
 Again *optimization*. How many times should I say that I agree that
 D is better than almost every dynamic languages if you need speed?
 
I'm not arguing on that point. What I'm arguing is that (at least for me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me.*
Both the language used to represent D metaprograms and D are suboptimal for many kinds of DSLs. A dynamic language can provide better control over these issues without resorting to manual string parsing. If the DSL is closer to the problem domain, it can have a great effect on program correctness.
I rather like doing metaprogramming and I've only done one program that uses string parsing. Aside from that one, the two or three most complicated libs I've done work 100% within the normal D grammar. Show me ONE thing that can be done using run time meta programming that can't be done as well or better with run time, non-dynamic, non-meta and/or compile time meta. Unless I'm totally clueless as to what people are talking about when they say runtime meta, I don't think you will be able to. Anything that amounts to making the syntax look nicer can be done as compile time meta and anything else can be done with data structure walking and interpretation. All of that is available in non-dynamic languages. I guess I should concede the eval function, but if you don't like CTFE+mixin...
 
 For instance, you could define natural language like statements in
 your DSL with functional composition. In D you basically have to write
 all metaprograms inside strings and parse them with CTFE functions.
I dispute that claim.
 In
 e.g. lisp or io the DSL is on the same abstraction level as the main
 language. These are of course slow, but in some environments you need
 to be able to provide non-developers an intuitive interface for
 writing business logic. Even the runtime metaprogramming system can
 provide optimizations after the DSL has been processed.
 
 I understand your logic. It's very simple. You use metaprogramming to
 improve performance.
No, that is a flawed statement. ONE of the things I use metaprogramming for is to improve performance. Look at my parser generator, my equation solver and my units library. None of these have performance as a main driving motivation. For most of the stuff I've done where perf is even considered, it's not a matter of "let's make this faster by doing it meta", it's a matter of "if this solution wasn't done meta, it wouldn't be viable and a more conventional solution would be". But even that isn't the norm.
 That's also the reason you use D - it's the
 language that can provide greatest performance once the compiler has
 matured a bit. To me program inefficiency is a rather small problem
 today. Most programs perform fast enough. But they crash way too often
 and leak memory. The fact that Walter actually favors segfaults won't


 of tivoization,
 closed binaries,
Why is that a problem?
 and cloud computing. 
I've never liked the cloud model, but not from the lock-in issues.
 D doesn't help here either. It doesn't enforce copyleft (e.g. AGPL)
And I think it shouldn't.
 and features like inline assembler encourage the use of drm systems.
How does that follow?
Dec 03 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from BCS (none anon.com)'s article
 Show me ONE thing that can be done using run time meta programming that can't
 be done as well or better with run time, non-dynamic, non-meta and/or compile
 time meta. Unless I'm totally clueless as to what people are talking about
 when they say runtime meta, I don't think you will be able to. Anything that
 amounts to making the syntax look nicer can be done as compile time meta
 and anything else can be done with data structure walking and interpretation.
 All of that is available in non dynamic languages.
 I guess I should concede the eval function but if you don't like CTFE+mixin...
Oh come on.  I'm as much a fan of D metaprogramming as anyone, but even I admit that there are certain things that static languages just suck at.  One day I got really addicted to std.algorithm and decided I wanted similar functionality for text filters from a command line, so I wrote map, filter and count scripts that take predicates specified at the command line.

filter.py:

import sys
pred = eval('lambda line: ' + sys.argv[2])
for line in open(sys.argv[1]):
    if pred(line):
        print line.strip()

Usage:  filter.py foo.txt "float(line.split()[1]) < 5.0"

Metaprogramming isn't very rigorously defined, but this has to qualify.  Try writing something similar in D.
Dec 03 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 == Quote from BCS (none anon.com)'s article
 Show me ONE thing that can be done using run time meta programming that can't
 be done as well or better with run time, non-dynamic, non-meta and/or compile
 time meta. Unless I'm totally clueless as to what people are talking about
 when they say runtime meta, I don't think you will be able to. Anything that
 amounts to making the syntax look nicer can be done as compile time meta
 and anything else can be done with data structure walking and interpretation.
 All of that is available in non dynamic languages.
 I guess I should concede the eval function but if you don't like CTFE+mixin...
Oh come on.  I'm as much a fan of D metaprogramming as anyone, but even I admit that there are certain things that static languages just suck at.  One day I got really addicted to std.algorithm and decided I wanted similar functionality for text filters from a command line, so I wrote map, filter and count scripts that take predicates specified at the command line.

filter.py:

import sys
pred = eval('lambda line: ' + sys.argv[2])
for line in open(sys.argv[1]):
    if pred(line):
        print line.strip()

Usage:  filter.py foo.txt "float(line.split()[1]) < 5.0"

Metaprogramming isn't very rigorously defined, but this has to qualify.  Try writing something similar in D.
eval rocks. Andrei
Dec 03 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello dsimcha,

 == Quote from BCS (none anon.com)'s article
 
 Show me ONE thing that can be done using run time meta programming
 that can't
 be done as well or better with run time, non-dynamic, non-meta and/or
 compile
 time meta. Unless I'm totally clueless as to what people are talking
 about
 when they say runtime meta, I don't think you will be able to.
 Anything that
 amounts to making the syntax look nicer can be done as compile time
 meta
 and anything else can be done with data structure walking and
 interpretation.
 All of that is available in non dynamic languages.
 I guess I should concede the eval function but if you don't like
 CTFE+mixin...
Oh come on.  I'm as much a fan of D metaprogramming as anyone, but even I admit that there are certain things that static languages just suck at.  One day I got really addicted to std.algorithm and decided I wanted similar functionality for text filters from a command line, so I wrote map, filter and count scripts that take predicates specified at the command line.

filter.py:

import sys
pred = eval('lambda line: ' + sys.argv[2])
for line in open(sys.argv[1]):
    if pred(line):
        print line.strip()

Usage:  filter.py foo.txt "float(line.split()[1]) < 5.0"

Metaprogramming isn't very rigorously defined, but this has to qualify.  Try writing something similar in D.
Yup, eval is the one thing that dynamic *really* has over static.
Dec 03 2009
parent retard <re tard.com.invalid> writes:
Thu, 03 Dec 2009 21:35:14 +0000, BCS wrote:

 Hello dsimcha,
 
 == Quote from BCS (none anon.com)'s article
 
 Show me ONE thing that can be done using run time meta programming
 that can't
 be done as well or better with run time, non-dynamic, non-meta and/or
 compile
 time meta. Unless I'm totally clueless as to what people are talking
 about
 when they say runtime meta, I don't think you will be able to.
 Anything that
 amounts to making the syntax look nicer can be done as compile time
 meta
 and anything else can be done with data structure walking and
 interpretation.
 All of that is available in non dynamic languages. I guess I should
 concede the eval function but if you don't like CTFE+mixin...
Oh come on.  I'm as much a fan of D metaprogramming as anyone, but even I admit that there are certain things that static languages just suck at.  One day I got really addicted to std.algorithm and decided I wanted similar functionality for text filters from a command line, so I wrote map, filter and count scripts that take predicates specified at the command line.

filter.py:

import sys
pred = eval('lambda line: ' + sys.argv[2])
for line in open(sys.argv[1]):
    if pred(line):
        print line.strip()

Usage:  filter.py foo.txt "float(line.split()[1]) < 5.0"

Metaprogramming isn't very rigorously defined, but this has to qualify.  Try writing something similar in D.
Yup, eval is the one thing that dynamic *really* has over static.
You can even send the runtime-generated string via the network to some other process that runs on a completely different CPU architecture and still compute the result. You can do this with D too, but you need to write the interpreter or JIT yourself. Dynamic languages provide this as a built-in feature. Guess why D or C++ isn't used much in client-side web site code :)
Dec 03 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
BCS, el  2 de diciembre a las 21:16 me escribiste:
 Hello Leandro,
 
BCS, el  2 de diciembre a las 17:37 me escribiste:

Hello Leandro,

If you say dynamic languages don't have metaprogramming
capabilities, you just don't have any idea of what a dynamic
language really is.
If you say you can do metaprogramming at runtime you just don't have any idea of what I want to do with metaprogramming. For example:
What you say next is not metaprogramming per se; they are performance issues (that you resolve using compile-time metaprogramming). You're missing the point.
No, you're missing MY point. I was very careful to add "what I want to do with" to my statement. It might not be true for you, but what I assert is true for me. Most of the things *I* want from metaprogramming must be done as compile-time metaprogramming. Saying "dynamic languages can do something at run time" doesn't imply that there is nothing more to be had by doing it at compile time.
Well, I will have to do like Monty Python then. This thread is getting too silly, so I'll have to end it. http://www.youtube.com/watch?v=yTQrCjP14tA -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- ¿Cómo estais? ¿Cómo os senteis hoy 29 del membre de 1961 día en que conmemoreramos la nonésima setima nebulización del martir Peperino Pómoro junto al Rolo Puente en la ciudad de Jadad? -- Peperino Pómoro
Dec 02 2009
prev sibling parent reply Sergey Gromov <snake.scaly gmail.com> writes:
BCS wrote:
 I'm not arguing on that point. What I'm arguing is that (at least for 
 me) the primary advantages of metaprogramming are static checks (for 
 non-perf benefits) and performance. Both of these must be done at 
 compile time. Runtime metaprogramming just seems pointless *to me.*
One of the important applications of metaprogramming is code generation which would be too tedious or bug-prone to generate and support manually. Dynamic languages can definitely provide for that.
Dec 02 2009
parent reply BCS <none anon.com> writes:
Hello Sergey,

 BCS wrote:
 
 I'm not arguing on that point. What I'm arguing is that (at least for
 me) the primary advantages of metaprogramming are static checks (for
 non-perf benefits) and performance. Both of these must be done at
 compile time. Runtime metaprogramming just seems pointless *to me.*
 
One of important applications of metaprogramming is code generation which would be too tedious or bug-prone to generate and support manually. Dynamic languages can definitely provide for that.
They can, but I question if it's the best way to do it in those languages. Generating code and running it at runtime seems to be pointless. Why have the intermediate step with the code? I have something I want to do, so I encode it as one abstraction (a DSL), translate it into another (the host language) and then compute it in a third (the runtime). If it's all at runtime anyway, why not just use the runtime to evaluate/interpret the DSL directly.
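BCS's alternative, walking the DSL's data structure directly instead of generating host-language code first, might look like this minimal Python sketch (the tuple-based AST and operator table are assumptions for illustration):

```python
# Minimal sketch: evaluate a DSL by interpreting its data structure
# directly -- no intermediate code-generation step.
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def interpret(node, env):
    """Evaluate a nested-tuple expression tree against a variable env."""
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](interpret(left, env), interpret(right, env))
    if isinstance(node, str):   # variable reference
        return env[node]
    return node                 # literal value

# (x + 2) * y, interpreted in place:
expr = ('*', ('+', 'x', 2), 'y')
```

The trade-off Bill raises next applies here: this re-walks the tree on every evaluation, where generated code would pay the translation cost only once.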
Dec 02 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Wed, Dec 2, 2009 at 3:26 PM, BCS <none anon.com> wrote:
 Hello Sergey,

 BCS wrote:

 I'm not arguing on that point. What I'm arguing is that (at least for
 me) the primary advantages of metaprogramming are static checks (for
 non-perf benefits) and performance. Both of these must be done at
 compile time. Runtime metaprogramming just seems pointless *to me.*
One of the important applications of metaprogramming is code generation which would be too tedious or bug-prone to generate and support manually. Dynamic languages can definitely provide for that.
They can, but I question if it's the best way to do it in those languages. Generating code and running it at runtime seems to be pointless. Why have the intermediate step with the code? I have something I want to do, so I encode it as one abstraction (a DSL), translate it into another (the host language) and then compute it in a third (the runtime). If it's all at runtime anyway, why not just use the runtime to evaluate/interpret the DSL directly.
You may be able to memoize the generated code so you only have to generate it once per run, but use it many times. Probably performance is the reason you wouldn't want to reinterpret the DSL from scratch every use. Even dynamic language users have their limits on how long they're willing to wait for something to finish. --bb
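Bill's memoization point could be sketched like this in Python (the cache layout and the eval-based construction are illustrative, echoing dsimcha's filter.py example elsewhere in the thread):

```python
# Sketch: build the function from its source string once, cache it,
# and reuse the compiled form on every subsequent call.
_cache = {}

def predicate(src):
    """Return a callable for 'src', generating it only on first use."""
    fn = _cache.get(src)
    if fn is None:
        fn = eval('lambda line: ' + src)   # generate once per run...
        _cache[src] = fn
    return fn                               # ...reuse many times

p = predicate('len(line) > 3')
```

After the first call, repeated uses of the same predicate string skip the generation step entirely, which is why re-interpreting the DSL from scratch on every use would be the slower design.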
Dec 02 2009
parent BCS <none anon.com> writes:
Hello Bill,

 On Wed, Dec 2, 2009 at 3:26 PM, BCS <none anon.com> wrote:
 
 Hello Sergey,
 
 They can, but I question if it's the best way to do it in those
 languages. Generating code and running it at runtime seems to be
 pointless. Why have the intermediate step with the code? I have
 something I want to do, so I use encode it as one abstraction (a
 DSL), translate it into another (the host language) and then compute
 it in a third (the runtime). If it's all at runtime anyway, why not
 just use the runtime to evaluate/interpret the DSL directly.
 
You may be able to memoize the generated code so you only have to generate it once per run, but use it many times. Probably performance is the reason you wouldn't want to reinterpret the DSL from scratch every use. Even dynamic language users have their limits on how long they're willing to wait for something to finish. --bb
Yes, some of the performance issues (that I didn't bring up) can be addressed. But what about the points I did bring up? Like the added conceptual complexity and another degree of separation between what you want and what you get?
Dec 03 2009
prev sibling parent reply Don <nospam nospam.com> writes:
Leandro Lucarella wrote:
 BCS, el  2 de diciembre a las 17:37 me escribiste:
 Hello Leandro,


 If you say dynamic languages don't have metaprogramming capabilities,
 you just don't have any idea of what a dynamic language really is.
If you say you can do metaprogramming at runtime you just don't have any idea of what I want to do with metaprogramming. For example:
What you say next, is not metaprogramming per se, they are performance issues (that you resolve using compile-time metaprogramming).
They are metaprogramming tasks. Dynamic languages can do some metaprogramming tasks. They can't do those ones.
 You are right, but if you *don't* need *speed*, you don't need all that
 stuff, that's not metaprogramming to fix a "logic" problem, they are all
 optimization tricks, if you don't need speed, you don't need optimization
 tricks.
"you don't need speed" is a pretty glib statement. I think the reality is that you don't care about constant factors in speed, even if they are large (say 200 times slower is OK). But bubble-sort is probably still not acceptable. Metaprogramming can be used to reduce big-O complexity rather than just constant-factor improvement. Lumping that in with "optimisation" is highly misleading.
 The kind of metaprogramming I'm talking about is, for example, generating
 boring, repetitive boilerplate code.
Dec 02 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
Don, el  2 de diciembre a las 22:20 me escribiste:
 Leandro Lucarella wrote:
BCS, el  2 de diciembre a las 17:37 me escribiste:
Hello Leandro,


If you say dynamic languages don't have metaprogramming capabilities,
you just don't have any idea of what a dynamic language really is.
If you say you can do metaprogramming at runtime you just don't have any idea of what I want to do with metaprogramming. For example:
What you say next, is not metaprogramming per se, they are performance issues (that you resolve using compile-time metaprogramming).
They are metaprogramming tasks. Dynamic languages can do some metaprogramming tasks. They can't do those ones.
Because they make no sense, I really don't know how else to put it. If you need speed, you code in C/C++/D, whatever. It's like saying that you can't fly with a car and that's a problem. It's not; cars are not supposed to fly. If you need to fly, go buy a plane or a helicopter. Of course it is much cooler to fly than to drive a car, but if you need to go just a couple of miles, flying gets really annoying, and it would take you more time, money and effort to do it than using your car.
You are right, but if you *don't* need *speed*, you don't need all that
stuff, that's not metaprogramming to fix a "logic" problem, they are all
optimization tricks, if you don't need speed, you don't need optimization
tricks.
"you don't need speed" is a pretty glib statement. I think the
I don't know what that means...
 reality is that you don't care about constant factors in speed, even
 if they are large (say 200 times slower is OK). But bubble-sort is
 probably still not acceptable.
Bubble sort is perfectly acceptable for, say, a 100-element array.
 Metaprogramming can be used to reduce big-O complexity rather than
 just constant-factor improvement. Lumping that in with
 "optimisation" is highly misleading.
It always depends on the context, of course, but when doing programs that deals with small data sets and are mostly IO bounded, you *really* can care less about performance and big-O. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- "CIRILO" Y "SIRACUSA" DE "SEÑORITA MAESTRA": UNO MUERTO Y OTRO PRESO -- Crónica TV
Dec 02 2009
next sibling parent BCS <none anon.com> writes:
Hello Leandro,

 Don, el  2 de diciembre a las 22:20 me escribiste:
 
 They are metaprogramming tasks. Dynamic languages can do some
 metaprogramming tasks. They can't do those ones.
 
Because they make no sense, I really don't know how to put it. If you need speed, you code in C/C++/D whatever. Its like saying that you can't fly with a car and that's a problem. It's not, cars are not supposed to fly. If you need to fly, go buy a plane or a helicopter. Of course is much cooler to fly than to drive a car, but if you need to go just a couple of miles, flying gets really annoying, and it would take you more time, money and effort to do it than using your car.
Saying "you can do that at runtime" re dynamic languages and D's metaprogramming is like saying a VW bug can carry rocks when someone's looking for a pickup to move gravel in. Yes, it's technically correct, but there are many things a pickup can do that the bug can't (and the same the other way). The same thing is true of the topic at hand; dynamic languages can do /some/ of what D's meta stuff can do, but not all of it. And I'll point out yet again: not all of the extra things D does are perf related. And before you say it: yes, there are some things dynamic languages beat D at. All I'm saying is that meta isn't one of them.
 Metaprogramming can be used to reduce big-O complexity rather than
 just constant-factor improvement. Lumping that in with "optimisation"
 is highly misleading.
 
It always depends on the context, of course, but when doing programs that deals with small data sets and are mostly IO bounded, you *really* can care less about performance and big-O.
1) If I can write a lib that gives me a better O() for the same effort (once the lib is written), I *always* care about O().
2) For all programs, either the program is irrelevant or someone will throw more input at it than you ever expected.
3) The phrase is "can't care less" (sorry, nitpicking).
Dec 02 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 Bubble sort is perfectly acceptable for, say, a 100-element array.
 It always depends on the context, of course, but when doing programs that
 deals with small data sets and are mostly IO bounded, you *really* can
 care less about performance and big-O.
The thing about writing code that will be used by others is that they are not going to restrict themselves to small data sets. For example, bubble sort. Putting that in a library is a disaster. You can't just write in the documentation that it is usable only for less than 100 elements. One really does have to worry about big O performance, unless it is a throwaway program.
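To make the quadratic cost concrete, here is a plain bubble sort with a comparison counter (a sketch for illustration, not from any library):

```python
def bubble_sort(xs):
    """Classic bubble sort; returns (sorted copy, comparison count)."""
    xs = list(xs)
    comparisons = 0
    n = len(xs)
    for i in range(n):
        # Each pass bubbles the largest remaining element to the end.
        for j in range(n - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

# Comparisons are always n*(n-1)/2, whatever the input order: 4950 for
# n=100, ~half a million for n=1000 -- the library-user problem Walter
# describes.
```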
Dec 02 2009
prev sibling next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
grauzone wrote:

 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.
Yup, and that's the underlying problem with "simple" languages. Complicated code.
I think users of scripting languages would disagree with you.
Do you mean scripting languages such as Lua, or Ruby and Python? The latter two are by no means simple languages; they pack tons of features.
Dec 01 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple,
 you end up pushing all the complexity into the APIs.
Yup, and that's the underlying problem with "simple" languages. Complicated code.
I think users of scripting languages would disagree with you.
Looks like even simple Javascript is getting a major complexity upgrade: http://arstechnica.com/web/news/2009/12/commonjs-effort-sets-javascript-on-path-for-world-domination.ars
Dec 01 2009
parent retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 13:15:53 -0800, Walter Bright wrote:

 grauzone wrote:
 Walter Bright wrote:
 dsimcha wrote:
 In Java, by going overboard on making the core language simple, you
 end up pushing all the complexity into the APIs.
Yup, and that's the underlying problem with "simple" languages. Complicated code.
I think users of scripting languages would disagree with you.
Looks like even simple Javascript is getting a major complexity upgrade: http://arstechnica.com/web/news/2009/12/commonjs-effort-sets-javascript-on-path-for-world-domination.ars
All languages seem to add more features during their lifetime. I've never heard of a language in which feature count somehow decreases with later versions. If you're happy with the previous version, why upgrade? E.g. the existence of Java 5+ or D 2.0 doesn't mean developing code with Java 1.4 or D 1.x is illegal.
Dec 01 2009
prev sibling next sibling parent reply Alvaro Castro-Castilla <alvcastro yahoo.es> writes:
Walter Bright Wrote:

 Simen kjaeraas wrote:
 I'm already in love with this feature.
So am I. It seems to be incredibly powerful. Looks to me you can do things like: 1. hook up to COM's IDispatch 2. create 'classes' at runtime 3. add methods to existing classes (monkey patching) that allow such extensions 4. provide an easy way for users to add plugins to an app 5. the already mentioned "swizzler" functions that are generated at runtime based on the name of the function
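A rough Python analogue of the mechanism under discussion: `__getattr__` is consulted only for names not found through normal lookup, much as opDispatch is tried only when the name is not a compile-time member (the class and method names below are illustrative, not from any real library):

```python
# Sketch: dispatch-by-name at runtime, the dynamic-language counterpart
# of D's opDispatch rewrite s.foo(3) -> s.opDispatch!("foo")(3).
class Dynamic:
    def __getattr__(self, name):
        # Only reached when 'name' is not an ordinary attribute.
        def method(*args):
            # Here a real class would look the name up in a runtime
            # table (COM IDispatch, monkey-patched methods, plugins...).
            return (name, args)
        return method

s = Dynamic()
result = s.foo(3)   # no 'foo' defined anywhere; resolved dynamically
```

This is exactly the hook that makes points 1-4 above workable: the receiving object decides at call time what an unknown name means.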
Yes, this feature is really, really useful. I especially like how much it can help with modularization, and your 3rd and 4th points. I added a dmd2 svn version ebuild to Gentoo Linux's D overlay, for those of you who want to try it before the next release.
Nov 30 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Mon, Nov 30, 2009 at 3:20 PM, Alvaro Castro-Castilla
<alvcastro yahoo.es> wrote:
 Walter Bright Wrote:

 Simen kjaeraas wrote:
 I'm already in love with this feature.
So am I. It seems to be incredibly powerful. Looks to me you can do things like: ... 5. the already mentioned "swizzler" functions that are generated at runtime based on the name of the function
And once again we see Clugston's Law at work. As soon as you figure out how to do one thing in D, a new compiler feature comes along that makes it a one-liner. :-) --bb
Nov 30 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 And once again we see Clugston's Law at work.  As soon as you figure
 out how to do one thing in D, a new compiler feature comes along that
 makes it a one-liner.  :-)
The goal is to write a D compiler in one line of code!
Nov 30 2009
prev sibling parent reply Don <nospam nospam.com> writes:
Bill Baxter wrote:
 On Mon, Nov 30, 2009 at 3:20 PM, Alvaro Castro-Castilla
 <alvcastro yahoo.es> wrote:
 Walter Bright Wrote:

 Simen kjaeraas wrote:
 I'm already in love with this feature.
So am I. It seems to be incredibly powerful. Looks to me you can do things like: ... 5. the already mentioned "swizzler" functions that are generated at runtime based on the name of the function
And once again we see Clugston's Law at work. As soon as you figure out how to do one thing in D, a new compiler feature comes along that makes it a one-liner. :-)
I love it!
Dec 01 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Don wrote:
 Bill Baxter wrote:
 And once again we see Clugston's Law at work.  As soon as you figure
 out how to do one thing in D, a new compiler feature comes along that
 makes it a one-liner.  :-)
I love it!
Part of that is just that Don is very good at figuring out what needs to be supported!
Dec 01 2009
prev sibling parent BLS <windevguy hotmail.de> writes:
On 28/11/2009 00:30, Walter Bright wrote:
 One thing Java and Python, Ruby, etc., still hold over D is dynamic
 classes, i.e. classes that are only known at runtime, not compile time.
 In D, this:

 s.foo(3);
Should opDispatch also enable dynamic property injection? I just thought that it would be nice to have a new __traits() thingy for properties. It would make perfect sense, e.g. for GUI widgets. Guess I like dispatching but not dynamic injection of properties. Which leads me to this question: will there be support in traits for properties? Thanks for ignoring my ignorance once again.
Dec 01 2009