
digitalmars.D - What's wrong with D's templates?

reply Tim Matthews <tim.matthews7 gmail.com> writes:
In a reddit reply: "The concept of templates in D is exactly the same as 
in C++. There are minor technical differences, syntactic differences, 
but it is essentially the same thing. I think that's understandable 
since Digital Mars had a C++ compiler."

http://www.reddit.com/r/programming/comments/af511/ada_programming_generics/c0hcb04?context=3

I have never touched Ada, but I doubt it really has that much that
can't be done in D. I thought most (if not all) of the problems with C++
templates were absent in D, as this summary of the most common ones points out:
http://www.digitalmars.com/d/2.0/templates-revisited.html.

Your thoughts?
Dec 17 2009
next sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 18/12/2009 02:49, Tim Matthews wrote:
 In a reddit reply: "The concept of templates in D is exactly the same as
 in C++. There are minor technical differences, syntactic differences,
 but it is essentially the same thing. I think that's understandable
 since Digital Mars had a C++ compiler."

 http://www.reddit.com/r/programming/comments/af511/ada_programming_generics/c0hcb04?context=3


 I have never touched ada but I doubt it is really has that much that
 can't be done in D. I thought most (if not all) the problems with C++
 were absent in D as this summary of the most common ones points out
 http://www.digitalmars.com/d/2.0/templates-revisited.html.

 Your thoughts?
I don't know Ada but I do agree with that reddit reply about C++ and D
templates. D provides a better implementation of the exact same design, so it
does fix many minor issues (implementation bugs). An example of this is the
foo<bar<Class>> construct that doesn't work because of the ">>" operator.

However, using the same design obviously doesn't solve any of the deeper design
problems, and this design has many of those. An example of that is that
templates are compiled as part of the client code. This forces a library writer
to provide the source code (which might not be acceptable in commercial
circumstances), but even more frustrating is the fact that template compilation
bugs will also happen at the client.

There's a whole range of designs for this and related issues, and IMO the C++
design is by far the worst of them all. Not to mention the fact that it isn't
an orthogonal design (like many other "features" in C++). I'd much prefer a
true generics design, separated from compile-time execution of code with e.g.
CTFE or AST macros, or other designs.
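For a concrete illustration of that ">>" point (Foo/Bar/Class are made-up
names): D's instantiation syntax uses !() instead of angle brackets, so the
nesting is never ambiguous. Roughly:

class Class {}
class Bar(T) {}
class Foo(T) {}

// C++98 parses the closing ">>" of foo<bar<Class>> as a right-shift
// operator unless a space is added; D's !() syntax has no such collision:
Foo!(Bar!(Class)) x;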
Dec 18 2009
next sibling parent retard <re tard.com.invalid> writes:
Fri, 18 Dec 2009 10:50:52 +0200, Yigal Chripun wrote:

 There's a whole range of designs for this and related issues and IMO the
 C++ design is by far the worst of them all. not to mention the fact that
 it isn't an orthogonal design (like many other "features" in c++). I'd
 much prefer a true generics design to be separated from compile-time
 execution of code with e.g. CTFE or AST macros, or other designs.
There won't be AST macros in D2 yet. Maybe in D3. You forgot to mention the executable bloat inherent in the template design.
Dec 18 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Yigal Chripun:
 There's a whole range of designs for this and related issues and IMO the 
 C++ design is by far the worst of them all.
My creativity is probably limited, so I think that while C++/D templates have
some well known problems, they are better than the strategies used by Ada,
Haskell, Objective-C, Scala, and Delphi to define generic code. They produce
efficient code when you don't have a virtual machine at run time, and allow you
to write STL-like algorithms. If you need less performance and/or you accept
worse algorithms/collections, then I agree there are designs simpler to use and
cleaner than C++/D templates. If you are able to design something better, I'd
like to know about your ideas.

Bye,
bearophile
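To make "STL-like algorithms" concrete, a trivial sketch (myMax is a made-up
name): one generic source, one efficient instantiation per type, no VM needed.

// A tiny STL-style algorithm: works for any type with an ordering.
T myMax(T)(T a, T b) { return a < b ? b : a; }

void main() {
    auto i = myMax(3, 5);        // instantiates myMax!(int)
    auto d = myMax(1.5, 2.5);    // instantiates myMax!(double)
    assert(i == 5 && d == 2.5);
}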
Dec 18 2009
parent reply retard <re tard.com.invalid> writes:
Fri, 18 Dec 2009 08:53:33 -0500, bearophile wrote:

 Yigal Chripun:
 There's a whole range of designs for this and related issues and IMO
 the C++ design is by far the worst of them all.
My creativity is probably limited, so I think that while C++/D templates have some well known problems, they are better than the strategies used by Ada, Haskell, Objective-C, Scala, and Delphi to define generic code. They produce efficient code when you don't have a virtual machine at run time, and allow you to write STL-like algorithms. If you need less performance and/or you accept worse algorithms/collections then I agree there are designs simpler to use and cleaner than C++/D templates. If you are able to design something better I'd like to know about your ideas.
Templates are good for parameterizing algorithms and data structures. They
begin to have problems when they are used extensively for meta-programming. For
instance, the lack of lazy evaluation in the type world forces the language to
either have 'static if' or you need to add indirection via dummy members. The
language is basically purely functional, but it's several orders of magnitude
more verbose than, say, Haskell.

CTFE solves some of the problems, but as a result the system becomes really
non-orthogonal. Macros on the other hand solve the problem of clean
meta-programming, but are not the best way to describe generic types. In
addition, Java erases this type info at runtime, so you get even worse
performance.
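As a reference point for the 'static if' remark, a minimal sketch of branching
in the type world at compile time (IsFloating is a made-up helper):

// 'static if' picks a branch per instantiation, with no dummy
// members or helper templates needed.
template IsFloating(T) {
    static if (is(T == float) || is(T == double) || is(T == real))
        enum IsFloating = true;
    else
        enum IsFloating = false;
}

static assert(IsFloating!(double));
static assert(!IsFloating!(int));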
Dec 18 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 18/12/2009 16:02, retard wrote:
 Fri, 18 Dec 2009 08:53:33 -0500, bearophile wrote:

 Yigal Chripun:
 There's a whole range of designs for this and related issues and IMO
 the C++ design is by far the worst of them all.
My creativity is probably limited, so I think that while C++/D templates have some well known problems, they are better than the strategies used by Ada, Haskell, Objective-C, Scala, and Delphi to define generic code. They produce efficient code when you don't have a virtual machine at run time, and allow you to write STL-like algorithms. If you need less performance and/or you accept worse algorithms/collections then I agree there are designs simpler to use and cleaner than C++/D templates. If you are able to design something better I'd like to know about your ideas.
Templates are good for parameterizing algorithms and data structures. They begin to have problems when they are used extensively for meta-programming. For instance, the lack of lazy evaluation in the type world forces the language to either have 'static if' or you need to add indirection via dummy members. The language is basically purely functional, but it's several orders of magnitude more verbose than, say, Haskell. CTFE solves some of the problems, but as a result the system becomes really non-orthogonal. Macros on the other hand solve the problem of clean meta-programming, but are not the best way to describe generic types. In addition, Java erases this type info at runtime, so you get even worse performance.
To bearophile: you're mistaken on all counts. Generics (when properly
implemented) will provide the same performance as templates. Also, a VM is
completely orthogonal to this. Ada ain't VM based, is it?

To retard: different problems should be solved with different tools. Macros
should be used for meta-programming and generics for type-parameters; they
don't exclude each other. E.g. Nemerle has an awesome macro system, yet it also
has .NET generics too.

As the saying goes, "when all you got is a hammer, everything looks like a
nail", which is a very bad situation to be in. Templates are that hammer, while
a much better approach is to go and buy a toolbox with appropriate tools for
your problem set.
Dec 18 2009
next sibling parent reply retard <re tard.com.invalid> writes:
Sat, 19 Dec 2009 00:24:50 +0200, Yigal Chripun wrote:

 to retard:
 different problems should be solved with different tools. Macros should
 be used for meta-programming and generics for type-parameters. they
 don't exclude each other. E.g. Nemerle has an awesome macro system yet
 it also has .net generics too.
 As the saying goes -"when all you got is a hammer everything looks like
 a nail" which is a very bad situation to be in. templates are that
 hammer while a much better approach is to go and by a toolbox with
 appropriate tools for your problem set.
I didn't say anything that contradicts this.
Dec 18 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 19/12/2009 01:08, retard wrote:
 Sat, 19 Dec 2009 00:24:50 +0200, Yigal Chripun wrote:

 to retard:
 different problems should be solved with different tools. Macros should
 be used for meta-programming and generics for type-parameters. they
 don't exclude each other. E.g. Nemerle has an awesome macro system yet
 it also has .net generics too.
 As the saying goes -"when all you got is a hammer everything looks like
 a nail" which is a very bad situation to be in. templates are that
 hammer while a much better approach is to go and by a toolbox with
 appropriate tools for your problem set.
I didn't say anything that contradicts this.
Did you read any arguing in the above? I thought I was agreeing with you.. :)
Dec 19 2009
parent retard <re tard.com.invalid> writes:
Sat, 19 Dec 2009 15:19:16 +0200, Yigal Chripun wrote:

 On 19/12/2009 01:08, retard wrote:
 Sat, 19 Dec 2009 00:24:50 +0200, Yigal Chripun wrote:

 to retard:
 different problems should be solved with different tools. Macros
 should be used for meta-programming and generics for type-parameters.
 they don't exclude each other. E.g. Nemerle has an awesome macro
 system yet it also has .net generics too.
 As the saying goes -"when all you got is a hammer everything looks
 like a nail" which is a very bad situation to be in. templates are
 that hammer while a much better approach is to go and by a toolbox
 with appropriate tools for your problem set.
I didn't say anything that contradicts this.
Did you read any arguing in the above? I thought I was agreeing with you.. :)
Ok, sorry :) It just sounded like a lecture. I was well aware of all this. I sometimes forget that not everyone knows what we are discussing here.
Dec 19 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Yigal Chripun:

To bearophile: you're mistaken on all counts -<
Yes, this happens every day here :-) I am still too ignorant about computer science to be able to discuss things in this newsgroup well enough.
generics (when properly implemented) will provide the same performance as
templates.<
I was talking about a list of current language implementations.
Also, a VM is completely orthogonal to this. Ada ain't VM based, is it?<
generics need a VM.
Macros should be used for meta-programming and generics for type-parameters.<
This can be true, but there's a lot of design work to do to implement that
well. In Go there are neither generics/templates nor macros, so generics and
macros can be added, as you say. In D2 there are templates and no macros, so in
D3 macros may be added, but how can you design a D3 language where templates
are restricted enough to become generics? Unless D3 breaks a lot of backwards
compatibility with D2, you will end up in D3 with templates + macros + language
conventions that tell you not to use templates when macros can be used. Is this
good enough?

Bye,
bearophile
Dec 18 2009
next sibling parent Yigal Chripun <yigal100 gmail.com> writes:
On 19/12/2009 02:43, bearophile wrote:
 Yigal Chripun:

 To bearophile: you're mistaken on all counts -<
Yes, this happens every day here :-) I am still too ignorant about computer science to be able to discuss things in this newsgroup well enough.
I didn't mean to sound that harsh. Sorry about that.
 generics (when properly implemented) will provide the same
 performance as templates.<
I was talking about a list of current language implementations.
 Also, a VM is completely orthogonal to this. Ada ain't VM based, is
 it?<
They were implemented for a VM-based system, but nothing in the *design*
itself inherently requires a VM. You keep talking about implementation details
while I try to discuss the design aspects and trade-offs. It's obvious that we
can't just copy-paste the .NET implementation into D.
 Macros should be used for meta-programming and generics for
 type-parameters.<
This can be true, but there's a lot of design to do to implement that well. In Go there are no generics/templates nor macros. So generics and macros can be added, as you say. In D2 there are templates and no macros, so in D3 macros may be added but how can you design a D3 language where templates are restricted enough to become generics? Unless D3 breaks a lot of backwards compatibility with D2 you will end in D3 with templates + macros + language conventions that tell to not use templates when macros can be used. Is this good enough? Bye, bearophile
I don't know about D3, but even now in D2 there is confusion as to what should be implemented with templates and what with CTFE.
Dec 19 2009
prev sibling parent BCS <none anon.com> writes:
Hello bearophile,

 [...] you will end in D3 with templates + macros +
 language conventions that tell to not use templates when macros can be
 used.
There is nothing new in that. Even when you have other tools, you can still make every problem look like a nail, and it's an even worse idea than when you only have a hammer.
Dec 19 2009
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Yigal Chripun (yigal100 gmail.com)'s article
 I don't know Ada but I do agree with that reddit reply about c++ and D
 templates. D provides a better implementation of the exact same design,
 so it does fix many minor issues (implementation bugs).
I think variadics, static if and alias parameters qualify more as a "better design" than fixing "minor issues".
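For readers who haven't used them, a compressed sketch of two of those
(twice, inc and countArgs are made-up names; static if appeared earlier in the
thread):

// alias parameter: pass a symbol (here a function) at compile time
int twice(alias fn)(int x) { return fn(fn(x)); }
int inc(int x) { return x + 1; }

// variadic template: T... accepts any number of type parameters
size_t countArgs(T...)(T args) { return T.length; }

void main() {
    assert(twice!(inc)(3) == 5);
    assert(countArgs(1, "two", 3.0) == 3);
}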
 An example of
 this is the foo<bar<Class>> construct that doesn't work because of the
 ">>" operator.
 However, using the same design obviously doesn't solve any of the deeper
 design problems and this design has many of those. An example of that is
 that templates are compiled as part of the client code. This forces a
 library writer to provide the source code (which might not be acceptable
 in commercial circumstances)
There's always obfuscation. I think people underestimate this. If you strip out all comments and meaningful names (I won't mention mangling whitespace because that's programmatically reversible), trade secrets are reasonably well protected. Remember, people can always disassemble your library. I don't see obfuscated D code with names and comments stripped as being **that** much easier to understand than a disassembly.
 but even more frustrating is the fact that
 template compilation bugs will also happen at the client.
 There's a whole range of designs for this and related issues and IMO the
 C++ design is by far the worst of them all. not to mention the fact that
 it isn't an orthogonal design (like many other "features" in c++). I'd
 much prefer a true generics design to be separated from compile-time
 execution of code with e.g. CTFE or AST macros, or other designs.
Since generics work by basically casting stuff to Object (possibly boxing it)
and casting back, I wonder if it would be easy to implement generics on top of
templates through a minimal wrapper. The main uses for this would be reducing
executable bloat (for those who insist that this matters in practice) and
allowing virtual functions where templates can't be virtual.
Dec 18 2009
next sibling parent BCS <none anon.com> writes:
Hello dsimcha,

 Since generics work by basically casting stuff to Object (possibly
 boxing it) and casting back, I wonder if it would be easy to implement
 generics on top of templates through a minimal wrapper.
How about this?

interface Foo { int I(int); }

// Wraps any T that has an int I(int) method behind the non-templated
// Foo interface.
class CFoo(T) : Foo {
    T t;
    int I(int i) { return t.I(i); }
}

// Non-templated base: the real collection code only ever sees Foo
// references, so only one copy of it ends up in the binary.
class CollectionOfFoos_Base {
protected:
    Foo opIndex_(int i) { /* ... fetch the i-th element ... */ return null; }
    Foo opIndex_(int i, Foo f) { /* ... store f at slot i ... */ return f; }
}

// Thin templated shell that only wraps and unwraps, generics-style.
class CollectionOfFoos(T) : CollectionOfFoos_Base {
public:
    T opIndex(int i) { return (cast(CFoo!(T)) opIndex_(i)).t; }
    T opIndex(int i, T value) {
        auto b = new CFoo!(T);
        b.t = value;
        return (cast(CFoo!(T)) opIndex_(i, b)).t;
    }
}

Add opDispatch and it might get even better.
Dec 18 2009
prev sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 18/12/2009 17:34, dsimcha wrote:
 == Quote from Yigal Chripun (yigal100 gmail.com)'s article
 I don't know Ada but I do agree with that reddit reply about c++ and D
 templates. D provides a better implementation of the exact same design,
 so it does fix many minor issues (implementation bugs).
I think variadics, static if and alias parameters qualify more as a "better design" than fixing "minor issues".
Actually, they qualify as "even worse design". Duplicating the syntax like that is butt ugly.
 An example of
 this is the foo<bar<Class>>  construct that doesn't work because of the
 ">>" operator.
 However, using the same design obviously doesn't solve any of the deeper
 design problems and this design has many of those. An example of that is
 that templates are compiled as part of the client code. This forces a
 library writer to provide the source code (which might not be acceptable
 in commercial circumstances)
There's always obfuscation. I think people underestimate this. If you strip out all comments and meaningful names (I won't mention mangling whitespace because that's programmatically reversible), trade secrets are reasonably well protected. Remember, people can always disassemble your library. I don't see obfuscated D code with names and comments stripped as being **that** much easier to understand than a disassembly.
This is the least of the problems of that design. The biggest problem IMO is the conflation of user code and library code.
 but even more frustrating is the fact that
 template compilation bugs will also happen at the client.
 There's a whole range of designs for this and related issues and IMO the
 C++ design is by far the worst of them all. not to mention the fact that
 it isn't an orthogonal design (like many other "features" in c++). I'd
 much prefer a true generics design to be separated from compile-time
 execution of code with e.g. CTFE or AST macros, or other designs.
Since generics work by basically casting stuff to Object (possibly boxing it) and casting back, I wonder if it would be easy to implement generics on top of templates through a minimal wrapper. The main uses for this would be executable bloat (for those that insist that this matters in practice) and allowing virtual functions where templates can't be virtual.
Generics have nothing to do with casting and/or boxing (except Java's poor
excuse of a design/implementation).

.NET generics, for example, work by creating an instantiation for each value
type (same as C++ templates) and one instantiation for all reference types,
since at the binary level all reference types are simply addresses. C++ can't
do this since it has no separation between "structs" and "classes".

There is no casting involved, since the full type info is stored. So a Foo<Bar>
will be typed as Foo<Bar> at the binary level as well, and you can use APIs to
query this at runtime. In .NET it's part of the IL; in D this can be done by
mangling the type info into the symbol's name.
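A small D sketch of that last point (Foo/Bar are made-up names): the
instantiated type keeps its identity both in the mangled symbol and at run
time.

import std.stdio;

class Bar {}
class Foo(T) {}

void main() {
    Object o = new Foo!(Bar);
    // The dynamic type still knows it is a Foo!(Bar); prints the full
    // instantiated name, something like "mod.Foo!(Bar).Foo":
    writeln(typeid(o));
    // ...and the type argument is encoded in the mangled symbol name:
    writeln(Foo!(Bar).mangleof);
}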
Dec 18 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Yigal Chripun:

 .Net generics for example work by creating an instantiation for each 
 value type (same as c++ templates) and one instantiation for all 
 reference types since at the binary level all reference types are simply 
 addresses. C++ can't do this since it has no separation between 
 "structs" and "classes".
 there is no casting involved since the full type info is stored.
 so a Foo<Bar> will be typed as Foo<Bar> at the binary level as well and 
 you can use APIs to query this at runtime.
In D structs and classes are distinct enough, but they currently produce
duplicated templated code; this may be fixed:

void foo(T)(T o) { printf("foo: %d\n", o.y); }

class A { int x = 1; }
class B : A { int y = 2; }
class C : A { int y = 3; }

void main() {
    B b = new B();
    C c = new C();
    foo(b);
    foo(c);
}

/*
DMD1:

_D4temp17__T3fooTC4temp1BZ3fooFC4temp1BZv comdat
        push EAX
        mov ECX,offset FLAT:_D4temp1C6__vtblZ[018h]
        push dword ptr 0Ch[EAX]
        push ECX
        call near ptr _printf
        add ESP,8
        pop EAX
        ret

_D4temp17__T3fooTC4temp1CZ3fooFC4temp1CZv comdat
        push EAX
        mov ECX,offset FLAT:_D4temp1C6__vtblZ[018h]
        push dword ptr 0Ch[EAX]
        push ECX
        call near ptr _printf
        add ESP,8
        pop EAX
        ret

LDC:

_D13testtemplates27__T3fooTC13testtemplates1BZ3fooFC13testtemplates1BZv:
        subl $12, %esp
        movl 12(%eax), %eax
        movl %eax, 4(%esp)
        movl $.str3, (%esp)
        call printf
        addl $12, %esp
        ret

_D13testtemplates27__T3fooTC13testtemplates1CZ3fooFC13testtemplates1CZv:
        subl $12, %esp
        movl 12(%eax), %eax
        movl %eax, 4(%esp)
        movl $.str3, (%esp)
        call printf
        addl $12, %esp
        ret
*/

Bye,
bearophile
Dec 18 2009
parent reply S <S S.com> writes:
On 2009-12-18 17:07:57 -0800, bearophile <bearophileHUGS lycos.com> said:

 Yigal Chripun:
 
 .Net generics for example work by creating an instantiation for each
 value type (same as c++ templates) and one instantiation for all
 reference types since at the binary level all reference types are simply
 addresses. C++ can't do this since it has no separation between
 "structs" and "classes".
 there is no casting involved since the full type info is stored.
 so a Foo<Bar> will be typed as Foo<Bar> at the binary level as well and
 you can use APIs to query this at runtime.
In D structs and classes are distinct enough, but they currently produce
duplicated templated code; this may be fixed:

void foo(T)(T o) { printf("foo: %d\n", o.y); }

class A { int x = 1; }
class B : A { int y = 2; }
class C : A { int y = 3; }

void main() {
    B b = new B();
    C c = new C();
    foo(b);
    foo(c);
}

/*
DMD1:

_D4temp17__T3fooTC4temp1BZ3fooFC4temp1BZv comdat
        push EAX
        mov ECX,offset FLAT:_D4temp1C6__vtblZ[018h]
        push dword ptr 0Ch[EAX]
        push ECX
        call near ptr _printf
        add ESP,8
        pop EAX
        ret

_D4temp17__T3fooTC4temp1CZ3fooFC4temp1CZv comdat
        push EAX
        mov ECX,offset FLAT:_D4temp1C6__vtblZ[018h]
        push dword ptr 0Ch[EAX]
        push ECX
        call near ptr _printf
        add ESP,8
        pop EAX
        ret

LDC:

_D13testtemplates27__T3fooTC13testtemplates1BZ3fooFC13testtemplates1BZv:
        subl $12, %esp
        movl 12(%eax), %eax
        movl %eax, 4(%esp)
        movl $.str3, (%esp)
        call printf
        addl $12, %esp
        ret

_D13testtemplates27__T3fooTC13testtemplates1CZ3fooFC13testtemplates1CZv:
        subl $12, %esp
        movl 12(%eax), %eax
        movl %eax, 4(%esp)
        movl $.str3, (%esp)
        call printf
        addl $12, %esp
        ret
*/

Bye,
bearophile
This isn't a bug. This is how template code is supposed to work.

However, there does seem to be one problem in the output you're giving. LDC
should not be producing different name mangling than DMD! This is part of the D
ABI last I checked, so as to avoid all the problems C++ has associated with
libraries from different compilers not playing nice.

-S
Dec 21 2009
parent bearophile <bearophileHUGS lycos.com> writes:
S:
This isn't a bug.   This is how it is template code is supposed to work.<
It's not a bug, but a smarter compiler can try to merge parts of the generated assembly that are identical, to reduce the size of the resulting binary. It's a way to reduce one of the disadvantages of C++/D-style templates.
Dec 21 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Yigal,

 On 18/12/2009 17:34, dsimcha wrote:
 
 I think variadics, static if and alias parameters qualify more as a
 "better design" than fixing "minor issues".
 
actually they qualify as - "even worse design". duplicating the syntax like that is butt ugly.
I for one think that it's a better design than C++ has. (Given that 99% of what they do, C++ was never designed to do at all, you'd be hard pressed to come up with a worse design without trying to.) If you can come up with an even better design for compile time stuff, I'd be interested.
the conflation of user-code and library code.
Could you elaborate on this?
 
 but even more frustrating is the fact that
 template compilation bugs will also happen at the client.
Jumping back a bit: which client? The one with the compiler, or the end user?

If the first: removing this puts major limits on what can be done, because you
can't do anything unless you can be sure it will work with the open set of
types that could be instantiated, including ones you don't know about yet. The
alternative is to declare up front everything you will do to a type at the top
and then enforce that. IMHO this is a non-solution. Without being too silly, I
think I could come up with a library that would require a solution to the
halting problem in order to check that the template code can't generate an
error with the given constants and that a given type fits the constraints, both
without actually instantiating the template for the type.

If the second: neither D nor C++ has to worry about that.
 .Net generics for example work by creating an instantiation for each
 value type (same as c++ templates) and one instantiation for all
 reference types since at the binary level all reference types are
 simply
 addresses.
There must be more to it than that, because they allow you to do things to the
objects that Object doesn't support. For that to happen, the objects have to be
wrapped or tagged or something, so that the generics code can make
"foo.MyFunction()" work for different types. If I had to guess, I'd guess it's
done via a vtable, either as a "magic" interface or as a fat pointer. And if
you can't call independent methods from a generic, then right there is my next
argument against generics.
Dec 19 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 20/12/2009 03:11, BCS wrote:
 Hello Yigal,

 On 18/12/2009 17:34, dsimcha wrote:

 I think variadics, static if and alias parameters qualify more as a
 "better design" than fixing "minor issues".
actually they qualify as - "even worse design". duplicating the syntax like that is butt ugly.
I for one think that it's a better design than C++ has. (Given that 99% of what they do, C++ was never designed to do at all, you'd be hard pressed to come up with a worse design without trying to.) If you can come up with an even better design for compile time stuff, I'd be interested.
 the conflation of user-code and library code.
Could you elaborate on this?
 but even more frustrating is the fact that
 template compilation bugs will also happen at the client.
Jumping back a bit: which client? The one with the compiler, or the end user? If the first: removing this puts major limits on what can be done, because you can't do anything unless you can be sure it will work with the open set of types that could be instantiated, including ones you don't know about yet. The alternative is to declare up front everything you will do to a type at the top and then enforce that. IMHO this is a non-solution. Without being too silly, I think I could come up with a library that would require a solution to the halting problem in order to check that the template code can't generate an error with the given constants and that a given type fits the constraints, both without actually instantiating the template for the type. If the second: neither D nor C++ has to worry about that.
 .Net generics for example work by creating an instantiation for each
 value type (same as c++ templates) and one instantiation for all
 reference types since at the binary level all reference types are
 simply
 addresses.
There must be more to it than that, because they allow you to do things to the objects that Object doesn't support. For that to happen, the objects have to be wrapped or tagged or something, so that the generics code can make "foo.MyFunction()" work for different types. If I had to guess, I'd guess it's done via a vtable, either as a "magic" interface or as a fat pointer. And if you can't call independent methods from a generic, then right there is my next argument against generics.
What you're talking about in the above is meta-programming. Doing
meta-programming a la C++ templates is IMO like trying to use square wheels; it
is just wrong.

To answer your questions: D already has better designed tools for this, and
they keep improving. Don is doing an excellent job in fixing CTFE. I think D
needs to go beyond just constant-folding (CTFE) and allow running any function
at compile time, in the same manner it's done in Nemerle (multi-stage
compilation). The limitations you point out in .NET are *not* limitations of
its generics but rather of the meta-programming facilities.
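For reference, a minimal example of the constant-folding style CTFE that
already exists today (fib is a made-up function):

// Any function simple enough for the compile-time interpreter can be
// forced to run during compilation by using its result as a constant.
int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

enum f10 = fib(10);        // evaluated at compile time (CTFE)
static assert(f10 == 55);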
Dec 21 2009
parent BCS <none anon.com> writes:
Hello Yigal,

 On 20/12/2009 03:11, BCS wrote:
 

 because they allow you to do thing to the objects that Object doesn't
 support. For that to happen, the objects have to be wrapped or tagged
 or something so that the generics code can make "foo.MyFunction()"
 work for different types. If I has to guess, I'd guess it's done via
 a vtable either as a "magic" interface or as a fat pointer.
 

 independent methods from a generic, then right there is my next
 argument against generics.
 
[reordered a bit]

 *not* limitations of its generics but rather of the meta-programing
 facilities.
 
In that case, I don't give a rat's A** about generics. Do whatever you want as long as you don't mess with my meta-programming stuff.
 What you're talking about in the above is meta-programing. Doing
 meta-programing a-la c++ templates is IMO like trying to use square
 wheels, it is just wrong.
 
 To answer your questions:
 D already has better designed tools for this and they keep improving.
 Don is doing an excellent job in fixing CTFE.
 I think D needs to go beyond just constant-folding (CTFE) and allow to
 run any function at compile-time in the same manner it's done in
 Nemerle
 (multi-stage compilation).
Personally I see CTFE as a hack for about half of the use cases I'm interested
in. The vast bulk of the meta-programming that I care about amounts to compile
time code generation. For that, CTFE falls into one of two uses: data
processing to convert user input into something that can be (somehow) converted
into code, and mixin(CTFE()). For the first, I think we are in agreement: the
more CTFE can do, the better. For the second, I think it's a total abomination
in all but a few tiny use cases.
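For readers unfamiliar with the mixin(CTFE()) idiom being referred to, a tiny
sketch (makeGetter and Point are made-up names):

// The string is built by an ordinary function run at compile time,
// then pasted into the class as code by the string mixin.
string makeGetter(string field) {
    return "int get_" ~ field ~ "() { return " ~ field ~ "; }";
}

class Point {
    int x;
    mixin(makeGetter("x"));   // injects: int get_x() { return x; }
}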
Dec 21 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello Yigal,

 On 18/12/2009 02:49, Tim Matthews wrote:
 
 In a reddit reply: "The concept of templates in D is exactly the same
 as in C++. There are minor technical differences, syntactic
 differences, but it is essentially the same thing. I think that's
 understandable since Digital Mars had a C++ compiler."
 
 http://www.reddit.com/r/programming/comments/af511/ada_programming_ge
 nerics/c0hcb04?context=3
 
 I have never touched ada but I doubt it is really has that much that
 can't be done in D. I thought most (if not all) the problems with C++
 were absent in D as this summary of the most common ones points out
 http://www.digitalmars.com/d/2.0/templates-revisited.html.
 
 Your thoughts?
 
I don't know Ada but I do agree with that reddit reply about c++ and D templates. D provides a better implementation of the exact same design, so it does fix many minor issues (implementation bugs). An example of this is the foo<bar<Class>> construct that doesn't work because of the ">>" operator. However, using the same design obviously doesn't solve any of the deeper design problems and this design has many of those. An example of that is that templates are compiled as part of the client code. This forces a library writer to provide the source code (which might not be acceptable in commercial circumstances) but even more frustrating is the fact that template compilation bugs will also happen at the client. There's a whole range of designs for this and related issues and IMO the C++ design is by far the worst of them all. not to mention the fact that it isn't an orthogonal design (like many other "features" in c++). I'd much prefer a true generics design to be separated from compile-time execution of code with e.g. CTFE or AST macros, or other designs.
If D were to switch to true generics, I for one would immediately start
looking for ways to force it all back into compile time. I think that this
would amount to massive use of CTFE and string mixins. One of the things I
*like* about templates is that they do everything at compile time.

That said, I wouldn't be bothered by optional generics, or some kind of
compiled template where a lib writer can ship a binary object (JVM code?) that
does the template instantiation at compile time without the text source. (The
first I'd rarely use, and the second would just be an obfuscation tool, but
then from that standpoint all compilers are.)
Dec 18 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 18/12/2009 22:09, BCS wrote:
 Hello Yigal,

 On 18/12/2009 02:49, Tim Matthews wrote:

 In a reddit reply: "The concept of templates in D is exactly the same
 as in C++. There are minor technical differences, syntactic
 differences, but it is essentially the same thing. I think that's
 understandable since Digital Mars had a C++ compiler."

 http://www.reddit.com/r/programming/comments/af511/ada_programming_ge
 nerics/c0hcb04?context=3

 I have never touched ada but I doubt it is really has that much that
 can't be done in D. I thought most (if not all) the problems with C++
 were absent in D as this summary of the most common ones points out
 http://www.digitalmars.com/d/2.0/templates-revisited.html.

 Your thoughts?
I don't know Ada but I do agree with that reddit reply about c++ and D templates. D provides a better implementation of the exact same design, so it does fix many minor issues (implementation bugs). An example of this is the foo<bar<Class>> construct that doesn't work because of the ">>" operator. However, using the same design obviously doesn't solve any of the deeper design problems and this design has many of those. An example of that is that templates are compiled as part of the client code. This forces a library writer to provide the source code (which might not be acceptable in commercial circumstances) but even more frustrating is the fact that template compilation bugs will also happen at the client. There's a whole range of designs for this and related issues and IMO the C++ design is by far the worst of them all. not to mention the fact that it isn't an orthogonal design (like many other "features" in c++). I'd much prefer a true generics design to be separated from compile-time execution of code with e.g. CTFE or AST macros, or other designs.
If D were to switch to true generics, I for one would immediately start looking for ways to force it all back into compile time. I think that this would amount to massive use of CTFE and string mixins. One of the things I *like* about template is that it does everything at compile time. That said, I wouldn't be bothered by optional generics or some kind of compiled template where a lib writer can ship a binary object (JVM code?) that does the template instantiation at compile time without the text source. (The first I'd rarely use and the second would just be an obfuscation tool, but then from that standpoint all compilers are)
You are confused: the term "generics" refers to writing code that is
parameterized by type(s). It has nothing to do with the JVM or the specific
Java implementation of this idea. Java's implementation is irrelevant to our
discussion since it's broken by design in order to accommodate backward
compatibility. Generics != Java generics!

Generics are also orthogonal to meta-programming.

Please also see my reply to dsimcha.
Dec 18 2009
parent BCS <none anon.com> writes:
Hello Yigal,

 On 18/12/2009 22:09, BCS wrote:
 
 Hello Yigal,
 
 On 18/12/2009 02:49, Tim Matthews wrote:
 
 In a reddit reply: "The concept of templates in D is exactly the
 same as in C++. There are minor technical differences, syntactic
 differences, but it is essentially the same thing. I think that's
 understandable since Digital Mars had a C++ compiler."
 
 http://www.reddit.com/r/programming/comments/af511/ada_programming_
 ge nerics/c0hcb04?context=3
 
 I have never touched ada but I doubt it is really has that much
 that can't be done in D. I thought most (if not all) the problems
 with C++ were absent in D as this summary of the most common ones
 points out
 http://www.digitalmars.com/d/2.0/templates-revisited.html.
 
 Your thoughts?
 
I don't know Ada but I do agree with that reddit reply about c++ and D templates. D provides a better implementation of the exact same design, so it does fix many minor issues (implementation bugs). An example of this is the foo<bar<Class>> construct that doesn't work because of the ">>" operator. However, using the same design obviously doesn't solve any of the deeper design problems and this design has many of those. An example of that is that templates are compiled as part of the client code. This forces a library writer to provide the source code (which might not be acceptable in commercial circumstances) but even more frustrating is the fact that template compilation bugs will also happen at the client. There's a whole range of designs for this and related issues and IMO the C++ design is by far the worst of them all. not to mention the fact that it isn't an orthogonal design (like many other "features" in c++). I'd much prefer a true generics design to be separated from compile-time execution of code with e.g. CTFE or AST macros, or other designs.
If D were to switch to true generics, I for one would immediately start looking for ways to force it all back into compile time. I think that this would amount to massive use of CTFE and string mixins. One of the things I *like* about template is that it does everything at compile time. That said, I wouldn't be bothered by optional generics or some kind of compiled template where a lib writer can ship a binary object (JVM code?) that does the template instantiation at compile time without the text source. (The first I'd rarely use and the second would just be an obfuscation tool, but then from that standpoint all compilers are)
you are confused - the term "generics" refers to writing code that is parametrized by type(s).
I've never looked into them in detail, but I think I'm talking about the same
idea as you are. To be clear, the aspect of generics that I dislike is the part
that has them use the exact same code for all cases.

The aspect of true templates that I like is that the template gets specialized
for the type very early in the compile process. That name lookup is done after
the type is known. That code gen happens after all (or at least most) of the
things that make it a template are gone. That the resulting code can take full
advantage of all the details of the actual type the template is instantiated
on, rather than being limited to merely the things that are true of everything
it could be used with. And that ALL of that is over and done with (including
all possible template-related errors) before run time.

That is the aspect of generics vs. templates that I care about. The actual
implementation doesn't bother me a bit. Heck, if you can get all of that in
something that gives you the bits of generics you want, I'd be happy.
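A small sketch of that per-type name lookup (Packet, Message and verify are all
made-up names):

// Name lookup happens per instantiation, after T is known, so the
// template can use whatever the concrete type actually provides,
// and a bad instantiation is rejected at compile time.
struct Packet  { int checksum() { return 42; } }
struct Message { int checksum() { return 7; } }

int verify(T)(T t) { return t.checksum(); }   // no interface required

void main() {
    verify(Packet());
    verify(Message());
    // verify(3.14);   // would fail to compile: double has no checksum()
}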
 it has nothing to do with JVM or the specific
 Java implementation of this idea. Java's implementation is irrelevant
 to our discussion since it's broken by design in order to accommodate
 backward compatibility.
I actually have no clue how Java generics work, and for what I'm proposing,
using a JVM would be purely an implementation detail. In fact it is very likely
that Java generics would NOT be used anywhere in the system.

What I was proposing is that when someone wants to ship a template lib without
the source, they would compile it into a JVM file that the compiler would load.
This file WOULD NOT be put right into the final binary, but rather be loaded by
the compiler and used by the compiler to instantiate a template. Whatever
functions from the file the compiler calls would take the place of the code in
the compiler that normally instantiates a template; that is, the function would
generate an AST or IL blob that would get passed to the backend.

Note, using the JVM for this is an implementation detail. You could just as
easily use Linux shared objects, DLLs, DDL, .NET assemblies, JavaScript,
Python, Lua, Perl, LISP, or BF. The point is that rather than ship template
source, you ship a program that, when run, does the instantiation of the
template. What form this program is shipped in is a minor detail as far as I'm
concerned.
 generics != Java generics !!!
 
 Generics are also orthogonal to meta-programming.
 
Yes, they are orthogonal. However, the same is less true with regard to the
aspects of templates that I want. For example, without some bits of
meta-programming you can't do most of what I want, and with a really good
meta-programming system you wouldn't need any template-specific features at
all, except as sugar.
 please also see my reply to dsimcha.
 
Dec 19 2009
prev sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Yigal Chripun wrote:

 On 18/12/2009 02:49, Tim Matthews wrote:
 In a reddit reply: "The concept of templates in D is exactly the same as
 in C++. There are minor technical differences, syntactic differences,
 but it is essentially the same thing. I think that's understandable
 since Digital Mars had a C++ compiler."

 
http://www.reddit.com/r/programming/comments/af511/ada_programming_generics/c0hcb04?context=3
 I have never touched ada but I doubt it is really has that much that
 can't be done in D. I thought most (if not all) the problems with C++
 were absent in D as this summary of the most common ones points out
 http://www.digitalmars.com/d/2.0/templates-revisited.html.

 Your thoughts?
I don't know Ada but I do agree with that reddit reply about c++ and D templates. D provides a better implementation of the exact same design, so it does fix many minor issues (implementation bugs). An example of this is the foo<bar<Class>> construct that doesn't work because of the ">>" operator. However, using the same design obviously doesn't solve any of the deeper design problems and this design has many of those. An example of that is that templates are compiled as part of the client code. This forces a library writer to provide the source code (which might not be acceptable in commercial circumstances) but even more frustrating is the fact that template compilation bugs will also happen at the client.
Well yes, but the .NET design restricts the generic type to a specific named
interface in order to do type checking. You may find this a good design choice,
but others find it far more frustrating, because this is exactly what allows
for a bit more flexibility in a statically typed world. So it is not exactly a
problem but rather a trade-off, imho.
Dec 18 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 19/12/2009 01:31, Lutger wrote:
 Yigal Chripun wrote:

 On 18/12/2009 02:49, Tim Matthews wrote:
 In a reddit reply: "The concept of templates in D is exactly the same as
 in C++. There are minor technical differences, syntactic differences,
 but it is essentially the same thing. I think that's understandable
 since Digital Mars had a C++ compiler."
http://www.reddit.com/r/programming/comments/af511/ada_programming_generics/c0hcb04?context=3
 I have never touched ada but I doubt it is really has that much that
 can't be done in D. I thought most (if not all) the problems with C++
 were absent in D as this summary of the most common ones points out
 http://www.digitalmars.com/d/2.0/templates-revisited.html.

 Your thoughts?
I don't know Ada but I do agree with that reddit reply about c++ and D templates. D provides a better implementation of the exact same design, so it does fix many minor issues (implementation bugs). An example of this is the foo<bar<Class>> construct that doesn't work because of the ">>" operator. However, using the same design obviously doesn't solve any of the deeper design problems and this design has many of those. An example of that is that templates are compiled as part of the client code. This forces a library writer to provide the source code (which might not be acceptable in commercial circumstances) but even more frustrating is the fact that template compilation bugs will also happen at the client.
Well yes, but the .NET design restrict the generic type to a specific named interface in order to do type checking. You may find this a good design choice, but others find it far more frustrating because this is exactly what allows for a bit more flexibility in a statically typed world. So it is not exactly a problem but rather a trade-off imho.
The .NET implementation isn't perfect of course, and has a few issues that
should be resolved; one of these is the problem with using operators. Requiring
interfaces by itself isn't the problem, though. The only drawback in this case
is verbosity, which isn't really a big deal for this.
Dec 19 2009
parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Yigal Chripun wrote:

 On 19/12/2009 01:31, Lutger wrote:
 Yigal Chripun wrote:

 On 18/12/2009 02:49, Tim Matthews wrote:
 In a reddit reply: "The concept of templates in D is exactly the same
 as in C++. There are minor technical differences, syntactic
 differences, but it is essentially the same thing. I think that's
 understandable since Digital Mars had a C++ compiler."
http://www.reddit.com/r/programming/comments/af511/ada_programming_generics/c0hcb04?context=3
 I have never touched ada but I doubt it is really has that much that
 can't be done in D. I thought most (if not all) the problems with C++
 were absent in D as this summary of the most common ones points out
 http://www.digitalmars.com/d/2.0/templates-revisited.html.

 Your thoughts?
I don't know Ada but I do agree with that reddit reply about c++ and D templates. D provides a better implementation of the exact same design, so it does fix many minor issues (implementation bugs). An example of this is the foo<bar<Class>> construct that doesn't work because of the ">>" operator. However, using the same design obviously doesn't solve any of the deeper design problems and this design has many of those. An example of that is that templates are compiled as part of the client code. This forces a library writer to provide the source code (which might not be acceptable in commercial circumstances) but even more frustrating is the fact that template compilation bugs will also happen at the client.
Well yes, but the .NET design restrict the generic type to a specific named interface in order to do type checking. You may find this a good design choice, but others find it far more frustrating because this is exactly what allows for a bit more flexibility in a statically typed world. So it is not exactly a problem but rather a trade-off imho.
The .Net implementation isn't perfect of course and has a few issues that should be resolved, one of these is the problem with using operators. requiring interfaces by itself isn't the problem though. The only drawback in this case is verbosity which isn't really a big deal for this.
The drawback is not verbosity but lack of structural typing. Suppose some library has code that can be parametrized by IFoo and I have another library with a type that implements IBar, which satisfies IFoo but not explicitly so. Then what? Unless I have totally misunderstood .NET generics, I have to create some proxy object for IBar that implements IFoo just to satisfy the strong type checking of .NET generics. You could make the argument that this 'inconvenience' is a good thing, but I do think it is a bit more of a drawback than just increased verbosity.
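A sketch of the hand-written proxy being described, in D syntax (IFoo, Bar,
BarAsFoo and weight are all made-up names):

interface IFoo { int weight(); }

// A type from the "other library": it has the right method, but it
// never declares that it implements IFoo.
class Bar {
    int weight() { return 3; }
}

// The boilerplate a purely nominal system forces you to write:
class BarAsFoo : IFoo {
    Bar inner;
    this(Bar b) { inner = b; }
    int weight() { return inner.weight(); }
}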
Dec 20 2009
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Lutger" <lutger.blijdestijn gmail.com> wrote in message 
news:hgl440$tlo$1 digitalmars.com...
 Yigal Chripun wrote:
 The .Net implementation isn't perfect of course and has a few issues
 that should be resolved, one of these is the problem with using
 operators. requiring interfaces by itself isn't the problem though. The
 only drawback in this case is verbosity which isn't really a big deal
 for this.
The drawback is not verbosity but lack of structural typing. Suppose some library has code that can be parametrized by IFoo and I have another library with a type that implements IBar, which satisfies IFoo but not explicitly so. Then what? Unless I have totally misunderstood .NET generics, I have to create some proxy object for IBar that implements IFoo just to satisfy the strong type checking of .NET generics. You could make the argument that this 'inconvenience' is a good thing, but I do think it is a bit more of a drawback than just increased verbosity.
It sounds like you're talking about duck typing?
Dec 20 2009
parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Nick Sabalausky wrote:

 "Lutger" <lutger.blijdestijn gmail.com> wrote in message
 news:hgl440$tlo$1 digitalmars.com...
 Yigal Chripun wrote:
 The .Net implementation isn't perfect of course and has a few issues
 that should be resolved, one of these is the problem with using
 operators. requiring interfaces by itself isn't the problem though. The
 only drawback in this case is verbosity which isn't really a big deal
 for this.
The drawback is not verbosity but lack of structural typing. Suppose some library has code that can be parametrized by IFoo and I have another library with a type that implements IBar, which satisfies IFoo but not explicitly so. Then what? Unless I have totally misunderstood .NET generics, I have to create some proxy object for IBar that implements IFoo just to satisfy the strong type checking of .NET generics. You could make the argument that this 'inconvenience' is a good thing, but I do think it is a bit more of a drawback than just increased verbosity.
It sounds like you're talking about duck typing?
I'm not sure, I don't think so. From what I understand, duck typing is
supposed to be dynamic, while structural and nominative typing are part of a
static type system. I meant the non-nominative kind of typing, whatever it is.

FWIW, this is what Wikipedia says:

"Duck typing is similar to but distinct from structural typing. Structural
typing is a static typing system that determines type compatibility and
equivalence by a type's structure, whereas duck typing is dynamic and
determines type compatibility by only that part of a type's structure that is
accessed during run time."

http://en.wikipedia.org/wiki/Duck_typing#Structural_type_systems
Dec 20 2009
next sibling parent retard <re tard.com.invalid> writes:
Sun, 20 Dec 2009 18:53:56 +0100, Lutger wrote:

 I'm not sure, I don't think so. From what I understand, duck typing is
 supposed to be dynamic, while structural and nominative typing are part
 of a static type system. I meant the non-nominative kind of typing,
 whatever it is.
 
 fwiw, this is what wikipedia says:
 
 "Duck typing is similar to but distinct from structural typing.
 Structural typing is a static typing system that determines type
 compatibility and equivalence by a type's structure, whereas duck typing
 is dynamic and determines type compatibility by only that part of a
 type's structure that is accessed during run time."
 
 http://en.wikipedia.org/wiki/Duck_typing#Structural_type_systems
You don't need named interfaces in a structural type system. Only the members of the data type have any meaning.
Dec 20 2009
prev sibling parent reply Don <nospam nospam.com> writes:
Lutger wrote:
 Nick Sabalausky wrote:
 
 "Lutger" <lutger.blijdestijn gmail.com> wrote in message
 news:hgl440$tlo$1 digitalmars.com...
 Yigal Chripun wrote:
 The .Net implementation isn't perfect of course and has a few issues
 that should be resolved, one of these is the problem with using
 operators. requiring interfaces by itself isn't the problem though. The
 only drawback in this case is verbosity which isn't really a big deal
 for this.
The drawback is not verbosity but lack of structural typing. Suppose some library has code that can be parametrized by IFoo and I have another library with a type that implements IBar, which satisfies IFoo but not explicitly so. Then what? Unless I have totally misunderstood .NET generics, I have to create some proxy object for IBar that implements IFoo just to satisfy the strong type checking of .NET generics. You could make the argument that this 'inconvenience' is a good thing, but I do think it is a bit more of a drawback than just increased verbosity.
It sounds like you're talking about duck typing?
I'm not sure, I don't think so. From what I understand, duck typing is supposed to be dynamic, while structural and nominative typing are part of a static type system. I meant the non-nominative kind of typing, whatever it is. fwiw, this is what wikipedia says: "Duck typing is similar to but distinct from structural typing. Structural typing is a static typing system that determines type compatibility and equivalence by a type's structure, whereas duck typing is dynamic and determines type compatibility by only that part of a type's structure that is accessed during run time." http://en.wikipedia.org/wiki/Duck_typing#Structural_type_systems
That Wikipedia page doesn't make any sense to me. Is that *really* what duck
typing is? If so, it's a complete misnomer, because it's totally different from
"if it looks like a duck, quacks like a duck, etc." If it looks like a duck
now, but *didn't* look like a duck three minutes ago, you can be pretty sure
it's NOT a duck! Whereas what it calls "structural typing" follows the duck
rule perfectly. There is no reasoning on that page as to why duck typing is
restricted to dynamic languages.

There's far too much ideology on that page; it ought to get flagged as
inappropriate. E.g. this line near the top: "Users of statically typed
languages new to dynamically typed languages are usually tempted to .."
Dec 21 2009
next sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Don wrote:

 Lutger wrote:
...
 http://en.wikipedia.org/wiki/Duck_typing#Structural_type_systems
That Wikipedia page doesn't any make sense to me. Is that *really* what duck typing is? If so, it's a complete misnomer. Because it's totally different to "if it looks like a duck, quacks like a duck, etc". If it looks like a duck now, but *didn't* look like a duck three minutes ago, you can be pretty sure it's NOT a duck! Whereas what it calls "structural typing" follows the duck rule perfectly. There is no reasoning on that page as to why duck typing is restricted to dynamic languages. There's far too much ideology in that page, it ought to get flagged as inappropriate. Eg this line near the top: "Users of statically typed languages new to dynamically typed languages are usually tempted to .."
Yes, but most of the (less academic) information on the web about type systems is like this. Hence the confusion about basic terms.
Dec 21 2009
parent reply retard <re tard.com.invalid> writes:
Mon, 21 Dec 2009 12:25:35 +0100, Lutger wrote:

 Don wrote:
 
 Lutger wrote:
...
 http://en.wikipedia.org/wiki/Duck_typing#Structural_type_systems
That Wikipedia page doesn't any make sense to me. Is that *really* what duck typing is? If so, it's a complete misnomer. Because it's totally different to "if it looks like a duck, quacks like a duck, etc". If it looks like a duck now, but *didn't* look like a duck three minutes ago, you can be pretty sure it's NOT a duck! Whereas what it calls "structural typing" follows the duck rule perfectly. There is no reasoning on that page as to why duck typing is restricted to dynamic languages. There's far too much ideology in that page, it ought to get flagged as inappropriate. Eg this line near the top: "Users of statically typed languages new to dynamically typed languages are usually tempted to .."
Yes, but most of the (less academic) information on the web about type systems is like this. Hence the confusion about basic terms.
The amateur opinions are often pure 100% crap - most ruby/python fanboys are
just reinventing (NIH) the same wheel the academic community already invented
50 years ago. Don't trust Wikipedia on this.

http://www.pphsg.org/cdsmith/types.html
http://www.reddit.com/r/compsci/comments/9zinc/what_is_a_dynamic_type_system/

The book discussing type system essentials:

http://www.cis.upenn.edu/~bcpierce/tapl/
http://books.google.fi/books?id=ti6zoAC9Ph8C

Some related discussion:

http://lambda-the-ultimate.org/node/834
http://lambda-the-ultimate.org/node/1102
http://lambda-the-ultimate.org/node/1268
http://lambda-the-ultimate.org/node/2828
http://lambda-the-ultimate.org/node/2418
http://lambda-the-ultimate.org/node/1625
http://lambda-the-ultimate.org/node/1201
Dec 21 2009
parent Lutger <lutger.blijdestijn gmail.com> writes:
retard wrote:

<snip>

Thanks for links, that should help.
Dec 21 2009
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Don, el 21 de diciembre a las 09:20 me escribiste:
http://en.wikipedia.org/wiki/Duck_typing#Structural_type_systems
That Wikipedia page doesn't any make sense to me. Is that *really* what duck typing is? If so, it's a complete misnomer. Because it's totally different to "if it looks like a duck, quacks like a duck, etc". If it looks like a duck now, but *didn't* look like a duck three minutes ago, you can be pretty sure it's NOT a duck! Whereas what it calls "structural typing" follows the duck rule perfectly. There is no reasoning on that page as to why duck typing is restricted to dynamic languages. There's far too much ideology in that page, it ought to get flagged as inappropriate. Eg this line near the top: "Users of statically typed languages new to dynamically typed languages are usually tempted to .."
I think Wikipedia describes how the term is commonly used; maybe duck typing is not too accurate a name, but I think most people use the term for dynamically typed languages. I don't think people differentiate between what Wikipedia defines as duck typing and structural typing, though, I think people usually say duck typing for both. Anyway, if you *really* think Wikipedia is wrong, you can fix it or at least mention it in the discussion page[1], that's what Wikipedia is all about :)

[1] http://en.wikipedia.org/wiki/Talk:Duck_typing

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
... which are susceptible to a growing variety of foreseeable attacks,
such as buffer overflow, parameter falsification, ...
        -- Stealth - ISS LLC - Seguridad de IT
Dec 21 2009
prev sibling next sibling parent BCS <none anon.com> writes:
Hello Lutger,

 Yigal Chripun wrote:
 
 The .Net implementation isn't perfect of course and has a few issues
 that should be resolved, one of these is the problem with using
 operators. requiring interfaces by itself isn't the problem though.
 The only drawback in this case is verbosity which isn't really a big
 deal for this.
 
The drawback is not verbosity but lack of structural typing. Suppose some library has code that can be parametrized by IFoo and I have another library with a type that implements IBar, which satisfies IFoo but not explicitly so.
Fully compile-time duck typing. That's one of the things I must demand be kept.
Dec 20 2009
prev sibling parent reply yigal chripun <yigal100 gmail.com> writes:
Lutger Wrote:

 Yigal Chripun wrote:
 
 On 19/12/2009 01:31, Lutger wrote:
 Yigal Chripun wrote:

 On 18/12/2009 02:49, Tim Matthews wrote:
 In a reddit reply: "The concept of templates in D is exactly the same
 as in C++. There are minor technical differences, syntactic
 differences, but it is essentially the same thing. I think that's
 understandable since Digital Mars had a C++ compiler."
http://www.reddit.com/r/programming/comments/af511/ada_programming_generics/c0hcb04?context=3
 I have never touched ada but I doubt it is really has that much that
 can't be done in D. I thought most (if not all) the problems with C++
 were absent in D as this summary of the most common ones points out
 http://www.digitalmars.com/d/2.0/templates-revisited.html.

 Your thoughts?
I don't know Ada but I do agree with that reddit reply about c++ and D templates. D provides a better implementation of the exact same design, so it does fix many minor issues (implementation bugs). An example of this is the foo<bar<Class>> construct that doesn't work because of the ">>" operator. However, using the same design obviously doesn't solve any of the deeper design problems and this design has many of those. An example of that is that templates are compiled as part of the client code. This forces a library writer to provide the source code (which might not be acceptable in commercial circumstances) but even more frustrating is the fact that template compilation bugs will also happen at the client.
Well yes, but the .NET design restrict the generic type to a specific named interface in order to do type checking. You may find this a good design choice, but others find it far more frustrating because this is exactly what allows for a bit more flexibility in a statically typed world. So it is not exactly a problem but rather a trade-off imho.
The .Net implementation isn't perfect of course and has a few issues that should be resolved, one of these is the problem with using operators. requiring interfaces by itself isn't the problem though. The only drawback in this case is verbosity which isn't really a big deal for this.
The drawback is not verbosity but lack of structural typing. Suppose some library has code that can be parametrized by IFoo and I have another library with a type that implements IBar, which satisfies IFoo but not explicitly so. Then what? Unless I have totally misunderstood .NET generics, I have to create some proxy object for IBar that implements IFoo just to satisfy the strong type checking of .NET generics. You could make the argument that this 'inconvenience' is a good thing, but I do think it is a bit more of a drawback than just increased verbosity.
The way I see it we have three options:

assume we have these definitions:

interface I {...}
class Foo : I {...}
class Bar {...} // structurally compatible to I

template tp (I) {...}

1) .Net nominative typing:
tp!(Foo) // OK
tp!(Bar) // not OK

2) structural typing (similar to Go?)
tp!(Foo) // OK
tp!(Bar) // also OK
(a rough D sketch of options 1 and 2 is given below)

3) C++ style templates where the compatibility check is against the *body* of the template.

of the three above I think option 3 is the worst design and option 2 is my favorite design. I think that in reality you'll almost always want to define such an interface and I really can't think of any useful use cases for an unrestricted template parameter as in C++.

If you think of templates as functions the compiler executes, the difference between the last two options is that option 2 is statically typed vs. option 3 which is dynamically typed. We all use D because we like static typing and there's no reason not to extend this to compile-time as well.
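(For illustration only - this sketch is not part of the original post. It shows roughly how options 1 and 2 could be spelled in D, reusing the names I, Foo and Bar from above; the quack member is made up.)

interface I { void quack(); }
class Foo : I { void quack() {} }
class Bar { void quack() {} } // structurally compatible, but never declares "is-a I"

// option 1, nominative: only types declared to implement I will match
void tpNominative(T : I)(T t) { t.quack(); }

// option 2, structural: anything that provides quack() will match
void tpStructural(T)(T t) if (is(typeof(T.init.quack()))) { t.quack(); }

// tpNominative accepts Foo but rejects Bar;
// tpStructural accepts both Foo and Bar.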
Dec 21 2009
next sibling parent reply Don <nospam nospam.com> writes:
yigal chripun wrote:
 Lutger Wrote:
 
 Yigal Chripun wrote:

 On 19/12/2009 01:31, Lutger wrote:
 Yigal Chripun wrote:

 On 18/12/2009 02:49, Tim Matthews wrote:
 In a reddit reply: "The concept of templates in D is exactly the same
 as in C++. There are minor technical differences, syntactic
 differences, but it is essentially the same thing. I think that's
 understandable since Digital Mars had a C++ compiler."
http://www.reddit.com/r/programming/comments/af511/ada_programming_generics/c0hcb04?context=3
 I have never touched ada but I doubt it is really has that much that
 can't be done in D. I thought most (if not all) the problems with C++
 were absent in D as this summary of the most common ones points out
 http://www.digitalmars.com/d/2.0/templates-revisited.html.

 Your thoughts?
I don't know Ada but I do agree with that reddit reply about c++ and D templates. D provides a better implementation of the exact same design, so it does fix many minor issues (implementation bugs). An example of this is the foo<bar<Class>> construct that doesn't work because of the ">>" operator. However, using the same design obviously doesn't solve any of the deeper design problems and this design has many of those. An example of that is that templates are compiled as part of the client code. This forces a library writer to provide the source code (which might not be acceptable in commercial circumstances) but even more frustrating is the fact that template compilation bugs will also happen at the client.
Well yes, but the .NET design restrict the generic type to a specific named interface in order to do type checking. You may find this a good design choice, but others find it far more frustrating because this is exactly what allows for a bit more flexibility in a statically typed world. So it is not exactly a problem but rather a trade-off imho.
The .Net implementation isn't perfect of course and has a few issues that should be resolved, one of these is the problem with using operators. requiring interfaces by itself isn't the problem though. The only drawback in this case is verbosity which isn't really a big deal for this.
The drawback is not verbosity but lack of structural typing. Suppose some library has code that can be parametrized by IFoo and I have another library with a type that implements IBar, which satisfies IFoo but not explicitly so. Then what? Unless I have totally misunderstood .NET generics, I have to create some proxy object for IBar that implements IFoo just to satisfy the strong type checking of .NET generics. You could make the argument that this 'inconvenience' is a good thing, but I do think it is a bit more of a drawback than just increased verbosity.
The way I see it we have three options: assume we have these definitions: interface I {...} class Foo : I {...} class Bar {...} // structurally compatible to I template tp (I) {...} 1) .Net nominative typing: tp!(Foo) // OK tp!(Bar) //not OK 2) structural typing (similllar to Go?) tp!(Foo) // OK tp!(Bar) // also OK 3) C++ style templates where the compatibility check is against the *body* of the template. of the three above I think option 3 is the worst design and option 2 is my favorite design. I think that in reality you'll almost always want to define such an interface and I really can't think of any useful use cases for an unrestricted template parameter as in C++.
You forgot option 4: 4) D2 constrained templates, where the condition is checked inside the template constraint. This is more powerful than option 2, because: (1) there are cases where you want MORE constraints than simply an interface; and (2) only a subset of constraints can be expressed as an interface. Also a minor point: (3) interfaces don't work for built-in types. Better still would be to make it impossible to compile a template which made use of a feature not provided through a constraint.
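(A minimal sketch of option 4, added for illustration - this is not Don's code, and isAddable/sum are made-up names. It shows a constraint that no interface could express and that built-in types satisfy.)

template isAddable(T)
{
    enum bool isAddable = is(typeof(T.init + T.init) == T);
}

T sum(T)(T[] xs) if (isAddable!(T))
{
    T s = xs[0];              // assumes xs is non-empty, to keep the sketch short
    foreach (x; xs[1 .. $])
        s = s + x;
    return s;
}

// sum([1, 2, 3]) compiles because int satisfies the constraint,
// with no interface involved at all.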
Dec 21 2009
parent reply yigal chripun <yigal100 gmail.com> writes:
Don Wrote:
 The way I see it we have three options:
 
 assume we have these definitions:
 interface I {...}
 class Foo : I {...}
 class Bar {...} // structurally compatible to I
 
 template tp (I) {...}
 
 1) .Net nominative typing:
 tp!(Foo) // OK
 tp!(Bar) //not OK
 
 2) structural typing (similllar to Go?)
 tp!(Foo) // OK
 tp!(Bar) // also OK
 
 3) C++ style templates where the compatibility check is against the *body* of
the template.
 
 of the three above I think option 3 is the worst design and option 2 is my
favorite design. I think that in reality you'll almost always want to define
such an interface and I really can't think of any useful use cases for an
unrestricted template parameter as in C++. 
You forgot option 4: 4) D2 constrained templates, where the condition is checked inside the template constraint. This is more powerful than option 2, because: (1) there are cases where you want MORE constraints than simply an interface; and (2) only a subset of constraints can be expressed as an interface. Also a minor point: (3) interfaces don't work for built-in types. Better still would be to make it impossible to compile a template which made use of a feature not provided through a constraint.
I wouldn't give that a separate option number, IMO this is a variation on option 2.

regarding your notes: when you can express the same concept in both ways, using an interface is easier to read & understand IMO.

What about having a combination of the two designs? you define an interface and allow optionally defining additional constraints _on_the_interface_ instead of on the template. I think this complies with your points (1) and (2) and is better since you don't need to repeat the constraints at the call site (each template that uses that type needs to repeat the constraint). even if you factor out the checks into a separate "isFoo" template you still need to add "if isFoo!(T)" to each template declaration, which really should be done by the compiler instead (see the sketch below).

regarding point (3) - this is orthogonal IMO. Ideally I'd like to see this distinction between builtin types and user defined ones removed. int should be treated in the same manner as a user defined struct.

I completely agree about not compiling templates that use features not defined by constraints. This is in fact the main point I was trying to make in this thread.
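(A small sketch of the repetition being described - added for illustration, not from the post; isFoo and the member foo are made-up names.)

template isFoo(T)
{
    enum bool isFoo = is(typeof(T.init.foo()));
}

// every template that wants the guarantee has to restate the constraint:
void useA(T)(T t) if (isFoo!(T)) { t.foo(); }
void useB(T)(T t) if (isFoo!(T)) { t.foo(); }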
Dec 21 2009
parent reply Don <nospam nospam.com> writes:
yigal chripun wrote:
 Don Wrote:
 The way I see it we have three options:

 assume we have these definitions:
 interface I {...}
 class Foo : I {...}
 class Bar {...} // structurally compatible to I

 template tp (I) {...}

 1) .Net nominative typing:
 tp!(Foo) // OK
 tp!(Bar) //not OK

 2) structural typing (similllar to Go?)
 tp!(Foo) // OK
 tp!(Bar) // also OK

 3) C++ style templates where the compatibility check is against the *body* of
the template.

 of the three above I think option 3 is the worst design and option 2 is my
favorite design. I think that in reality you'll almost always want to define
such an interface and I really can't think of any useful use cases for an
unrestricted template parameter as in C++. 
You forgot option 4: 4) D2 constrained templates, where the condition is checked inside the template constraint. This is more powerful than option 2, because: (1) there are cases where you want MORE constraints than simply an interface; and (2) only a subset of constraints can be expressed as an interface. Also a minor point: (3) interfaces don't work for built-in types. Better still would be to make it impossible to compile a template which made use of a feature not provided through a constraint.
I wouldn't give that a sepoarate option number, IMO this is a variation on option2. regarding your notes: when you can express the same concept in both ways, using an interface is esier to read & understand IMO. What about having a combination of the two designs? you define an interface and allow optionally defining additional constraints _on_the_interface_ instead of the template. I think this complies with your points (1) and (2) and is better since you don't need to repeat the constraints at the call site (each template that uses that type needs to repeat the constraint).
I don't think interfaces are flexible enough for that. E.g., how do you express that the type I must have a template function void baz!(X)(X x)? There's more to a type than just a list of the virtual functions which it supports.
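(For illustration - a sketch, not Don's code, of how such a requirement can be written as a constraint even though it cannot be written as an interface; hasBaz and baz are made-up names.)

template hasBaz(T)
{
    // true if T has a member template baz that can be instantiated and called
    enum bool hasBaz = is(typeof({ T t; t.baz!(int)(42); }));
}

void useBaz(T)(T t) if (hasBaz!(T))
{
    t.baz!(int)(42); // note: nothing forces the body to stay within what the constraint checked
}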
 even if you factor out the checks into a separate "isFoo" template you still
need to add to each template declaration "if isFoo!(T)" which really should be
done by the compiler instead.
 
 regarding point(3) - this is orthogonal IMO. Ideally I'd like to see this
distinction between builtin type and user defined one removed. int should be
treated in the same manner as a user defined struct. 
 
 I completely agree about not compiling templates that use features not defined
by constraints. This is in fact the main point I was trying to make in this
thread. 
The problem is, I'm not sure that it's feasible in general. At least, it's not obvious how to do it.
Dec 21 2009
next sibling parent yigal chripun <yigal100 gmail.com> writes:
Don Wrote:

 yigal chripun wrote:
 Don Wrote:
 The way I see it we have three options:

 assume we have these definitions:
 interface I {...}
 class Foo : I {...}
 class Bar {...} // structurally compatible to I

 template tp (I) {...}

 1) .Net nominative typing:
 tp!(Foo) // OK
 tp!(Bar) //not OK

 2) structural typing (similllar to Go?)
 tp!(Foo) // OK
 tp!(Bar) // also OK

 3) C++ style templates where the compatibility check is against the *body* of
the template.

 of the three above I think option 3 is the worst design and option 2 is my
favorite design. I think that in reality you'll almost always want to define
such an interface and I really can't think of any useful use cases for an
unrestricted template parameter as in C++. 
You forgot option 4: 4) D2 constrained templates, where the condition is checked inside the template constraint. This is more powerful than option 2, because: (1) there are cases where you want MORE constraints than simply an interface; and (2) only a subset of constraints can be expressed as an interface. Also a minor point: (3) interfaces don't work for built-in types. Better still would be to make it impossible to compile a template which made use of a feature not provided through a constraint.
I wouldn't give that a sepoarate option number, IMO this is a variation on option2. regarding your notes: when you can express the same concept in both ways, using an interface is esier to read & understand IMO. What about having a combination of the two designs? you define an interface and allow optionally defining additional constraints _on_the_interface_ instead of the template. I think this complies with your points (1) and (2) and is better since you don't need to repeat the constraints at the call site (each template that uses that type needs to repeat the constraint).
I don't think interfaces are flexible enough for that. EG, how do you express that the type I must have a template function void baz!(X)(X x) ? There's more to a type, than just a list of the virtual functions which it supports.
I agree that interfaces don't support this ATM. That's why I suggested adding constraints to them, e.g.:

interface I if isFoo!(I) {...} // one possible syntax

other approaches could be:
1) add non-virtual functions to interfaces (Andrei once suggested this)
2) add more meta-data with annotations etc.
 
 even if you factor out the checks into a separate "isFoo" template you still
need to add to each template declaration "if isFoo!(T)" which really should be
done by the compiler instead.
 
 regarding point(3) - this is orthogonal IMO. Ideally I'd like to see this
distinction between builtin type and user defined one removed. int should be
treated in the same manner as a user defined struct. 
 
 I completely agree about not compiling templates that use features not defined
by constraints. This is in fact the main point I was trying to make in this
thread. 
The problem is, I'm not sure that it's feasible in general. At least, it's not obvious how to do it.
Dec 21 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Don wrote:
 The problem is, I'm not sure that it's feasible in general. At least, 
 it's not obvious how to do it.
C++0x Concepts tried to do it in a limited form, and it got so complicated nobody could figure out how it was supposed to work and it capsized and sank. I don't think it's possible in the more general sense.
Dec 21 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 21/12/2009 19:53, Walter Bright wrote:
 Don wrote:
 The problem is, I'm not sure that it's feasible in general. At least,
 it's not obvious how to do it.
C++0x Concepts tried to do it in a limited form, and it got so complicated nobody could figure out how it was supposed to work and it capsized and sank. I don't think it's possible in the more general sense.
The C++0x Concepts tried to add two more levels to the type system: template <typename T> ... The T parameter would belong to a Concept "type", and they also added concept maps which are like concept interfaces. Add to the mix backward compatibility (as always is the case in C++) and of course you'll get a huge complicated mess of special cases that no-one can comprehend.

But that doesn't mean the idea itself isn't valid. Perhaps a different language with different goals in mind can provide a much simpler, non-convoluted implementation and semantics for the same idea? You've shown in the past that you're willing to break backward compatibility in the name of progress and experiment with new ideas. You can make decisions that the C++ committee will never approve.

Doesn't that mean that this is at least worth a shot?
Dec 21 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Yigal Chripun wrote:
 But that doesn't mean the idea itself isn't valid. Perhaps a different 
 language with different goals in mind can provide a much simpler non 
 convoluted implementation and semantics for the same idea?
 You've shown in the past that you're willing to break backward 
 compatibility in the name of progress and experiment with new ideas. You 
 can make decisions that the C++ committee will never approve.
 
 Doesn't that mean that this is at least worth a shot?
I believe that D's template constraint feature fills the bill, it does everything Concepts purported to do, and more, in a simple and easily explained manner, except check the template body against the constraint. The latter is, in my not-so-humble opinion, a desirable feature but its desirability is overwhelmed by the payment in complexity and constrictions on the Concepts necessary to make it work.
Dec 21 2009
next sibling parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 Yigal Chripun wrote:
 But that doesn't mean the idea itself isn't valid. Perhaps a different 
 language with different goals in mind can provide a much simpler non 
 convoluted implementation and semantics for the same idea?
 You've shown in the past that you're willing to break backward 
 compatibility in the name of progress and experiment with new ideas. 
 You can make decisions that the C++ committee will never approve.

 Doesn't that mean that this is at least worth a shot?
I believe that D's template constraint feature fills the bill, it does everything Concepts purported to do, and more, in a simple and easily explained manner, except check the template body against the constraint. The latter is, in my not-so-humble opinion, a desirable feature but its desirability is overwhelmed by the payment in complexity and constrictions on the Concepts necessary to make it work.
I seriously wonder why you're saying that, while at the same time clinging to overcomplicated failures such as const/immutable/pure or auto ref etc...
Dec 21 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Walter Bright wrote:
 I believe that D's template constraint feature fills the bill, it does 
 everything Concepts purported to do, and more, in a simple and easily 
 explained manner, except check the template body against the constraint.

 The latter is, in my not-so-humble opinion, a desirable feature but 
 its desirability is overwhelmed by the payment in complexity and 
 constrictions on the Concepts necessary to make it work.
I seriously wonder why you're saying that, while at the same time clinging on overcomplicated failures such as const/immutable/pure or auto ref etc...
Because there is a large payoff to immutability and purity, and they are far simpler than Concepts. Consider that Concepts required more pages to specify than the entire template feature in C++. I can expound on the huge advantages immutability and purity offer, if you want.
Dec 21 2009
parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 I can expound on the huge advantages immutability and purity offer, if 
 you want.
Yes, I'd like to hear about this. You can leave out the things that are not going to be implemented (like memoization of pure return values), that are only micro-optimizations (common sub-expression elimination with pure functions?), that don't work (immutable was used to make strings read-only, but you can stomp over immutable arrays), or that were thought to be useful but where nothing has materialized yet (something like immutable was supposed to be the cure for multithreading)...

Over two years have passed since immutable/const was added to dmd, but I couldn't see any benefit yet. But lots of new compiler bugs. There's the danger that those features are all nothing but hot air in real programming. I like D, and it sure would relieve me to hear that this is not the case.
Dec 21 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Walter Bright wrote:
 I can expound on the huge advantages immutability and purity offer, if 
 you want.
Yes, I'd like to hear about this. You can leave away the things that are not going to be implemented (like memorization of pure return values), are only micro-optimizations (common sub-expression elimination with pure functions?), which don't work (immutable was used to make strings read-only, but you can stomp over immutable arrays), which were thought to be useful, but nothing has materialized yet (something like immutable was supposed to be the cure for multithreading)...
The short answer is: the benefits of functional programming. The longer answer:

1. optimizations - yes, the optimizer does take advantage of immutability and purity, and yes, those optimizations are minor. But they do exist and do work.

2. immutable reference types can be treated as if they were value types - this advantage is most obvious in strings. Without immutability (and this happens in C, C++, and D1) programmers tend to make their own private copies of things "just in case" someone else changes them. (A small sketch of this follows below.)

3. (2) has a great advantage in doing message passing between threads. This model was popularized by Erlang, and is very successful. You can do message passing without immutable references, but you've got to hope and pray that your programming team didn't make any mistakes with it. With immutable references, you have a statically enforced guarantee. Value types (and immutable references) do not need synchronization.

4. immutability and purity enable user reasoning about a program. Otherwise, you have to rely on the (probably wrong) documentation about a function to see what its effects are, and if it has any side effects.

Yes, there was a recently discovered bug which enabled modifying an immutable array. This was a bug, and has been fixed. A bug does not mean the concept is broken.
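(The sketch, added for illustration - not Walter's code. An immutable reference can be handed out freely because no alias can mutate the data behind it, so no defensive copy is needed.)

immutable int[] table = [1, 2, 3];

int total(immutable(int)[] data) pure
{
    int s = 0;
    foreach (x; data)
        s += x;
    return s;
}

// total(table) can be called from anywhere, by any thread,
// without copying table first.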
 Over two years have passed since immutable/const was added to dmd, but I 
 couldn't see any benefit yet. But lots of new compiler bugs. There's the 
 danger that those feature are all nothing but hot air in real 
 programming. I like D, and it sure would relief me to hear that this is 
 not the case.
The message passing threading advantage awaits the construction of a message passing library. Sean Kelly is working on it.
Dec 22 2009
parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 grauzone wrote:
 Walter Bright wrote:
 I can expound on the huge advantages immutability and purity offer, 
 if you want.
Yes, I'd like to hear about this. You can leave away the things that are not going to be implemented (like memorization of pure return values), are only micro-optimizations (common sub-expression elimination with pure functions?), which don't work (immutable was used to make strings read-only, but you can stomp over immutable arrays), which were thought to be useful, but nothing has materialized yet (something like immutable was supposed to be the cure for multithreading)...
The short answer is: the benefits of functional programming.
I can see how D benefits from some functional language features (like those that increase expressiveness), but not the immutability thing. It's a good idea, but there's some tradeoff. And immutability can still be handled as a concept outside of the language type system.
 The longer answer:
 
 1. optimizations - yes, the optimizer does take advantage of 
 immutability and purity, and yes, those optimizations are minor. But 
 they do exist and do work.
 
 2. immutable reference types can be treated as if they were value types 
 - this advantage is most obvious in strings. Without immutability (and 
 this happens in C, C++, and D1) programmers tend to make their own 
 private copies of things "just in case" someone else changes them.
With strings, you used to follow the copy-on-write protocol. Now you're doing the same (you have to re-instantiate immutable data to change it), except that the compiler forces you. This can be good for reliability, but it also takes away a lot of flexibility. Plus you have to deal with the complications of the type system now.
 3. (2) has a great advantage in doing message passing between threads. 
 This model was popularized by Erlang, and is very successful. You can do 
 message passing without immutable references, but you've got to hope and 
 pray that your programming team didn't make any mistakes with it. With 
 immutable references, you have a statically enforced guarantee. Value 
 types (and immutable references) do not need synchronization.
But you need to allocate this data from a shared garbage-collected heap, which again slows down the whole thing. Is there really an advantage over copying? For large portions of data you could (at least in theory) make it _actually_ read-only by using mprotect (make the memory pages read-only).
 
 4. immutability and purity enable user reasoning about a program. 
 Otherwise, you have to rely on the (probably wrong) documentation about 
 a function to see what its effects are, and if it has any side effects.
If the program logic gets more complicated because of the type system, this isn't going to help much. Now I see you applying language hacks like DIP2 to reduce the damage. Is there an end to it?
 Yes, there was a recently discovered bug which enabled modifying an 
 immutable array. This was a bug, and has been fixed. A bug does not mean 
 the concept is broken.
Sure, but the question is: will all those bugs ever be fixed? There are old and central language core features which _still_ can trigger dmd bugs as of today (like forward references and circular module imports). I don't mean to be insolent, but I think the language is going to be too huge for one compiler writer. Yes, you're the one who knows best how far he can go, but I as a clueless user suffering from dmd bugs think the limit must already have been reached. (btw. here's another one, but it's also going to be fixed soon: http://d.puremagic.com/issues/show_bug.cgi?id=2093)

Also, how much is this reliability worth if you can just cast away immutable? It's even exactly the same syntax you have to use for relatively harmless things, like casting a float to an integer.
 
 Over two years have passed since immutable/const was added to dmd, but 
 I couldn't see any benefit yet. But lots of new compiler bugs. There's 
 the danger that those feature are all nothing but hot air in real 
 programming. I like D, and it sure would relief me to hear that this 
 is not the case.
The message passing threading advantage awaits the construction of a message passing library. Sean Kelly is working on it.
That's nice, but it would be possible without immutable.
Dec 23 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Walter Bright wrote:
 grauzone wrote:
 Walter Bright wrote:
 I can expound on the huge advantages immutability and purity offer, 
 if you want.
Yes, I'd like to hear about this. You can leave away the things that are not going to be implemented (like memorization of pure return values), are only micro-optimizations (common sub-expression elimination with pure functions?), which don't work (immutable was used to make strings read-only, but you can stomp over immutable arrays), which were thought to be useful, but nothing has materialized yet (something like immutable was supposed to be the cure for multithreading)...
The short answer is: the benefits of functional programming.
I can see how D benefits from some functional language features (like those that increase expressiveness), but not the immutability thing. It's a good idea, but there's some tradeoff. And immutability can still be handled as a concept outside of the language type system.
You can certainly do immutability as a convention, but I contend that is unreliable and does not scale. It's like saying you can write C code that doesn't have buffer overflows. I have been using C since before it had function prototypes. It was just all kinds of win to add the prototypes, because then the manual checking got handed over to the compiler.
 2. immutable reference types can be treated as if they were value 
 types - this advantage is most obvious in strings. Without 
 immutability (and this happens in C, C++, and D1) programmers tend to 
 make their own private copies of things "just in case" someone else 
 changes them.
With string, you used to follow the copy-and-write protocol. Now you're doing the same (you have to re-instantiate immutable data to change it), just that the compiler forces you. This can go good for reliability, but it also takes a lot of flexibility. Plus you have to deal with the complications of the type system now.
My experience is that relying on convention to follow the protocol does not work. I think that the evidence in the field that it doesn't work is pretty compelling as well.
 3. (2) has a great advantage in doing message passing between threads. 
 This model was popularized by Erlang, and is very successful. You can 
 do message passing without immutable references, but you've got to 
 hope and pray that your programming team didn't make any mistakes with 
 it. With immutable references, you have a statically enforced 
 guarantee. Value types (and immutable references) do not need 
 synchronization.
But you need to allocate this data from a shared garbage collection,
You do anyway.
 which again slow down the whole thing. Is there really an advantage over 
 copying?
Copying will invoke the garbage collector. Since you argued that is slow, then avoiding the necessity of doing so will make it faster.
 For large portions of data you could (at least in theory) make it 
 _actually_ read-only by using mprotect (make the memory pages read-only).
Compile time checking is better than runtime checking.
 4. immutability and purity enable user reasoning about a program. 
 Otherwise, you have to rely on the (probably wrong) documentation 
 about a function to see what its effects are, and if it has any side 
 effects.
If the program logic gets more complicated because of the type system, this isn't going to help much. Now I see you applying language hacks like DIP2 to reduce the damage. Is there an end to it?
C function prototypes increased the complexity, but it was darn well worth it. You can either have more complexity in the language, or you can spend endless hours manually checking to see if convention was followed - and even then you can't be sure.
 Yes, there was a recently discovered bug which enabled modifying an 
 immutable array. This was a bug, and has been fixed. A bug does not 
 mean the concept is broken.
Sure, but the question is: will all those bugs ever to be fixed?
Forgive me, but every month 20 to 40 bugs get fixed. You can see it in the change log. I don't understand these complaints.
 Also, how much is this reliability worth if you can just cast away 
 immutable? It's even exactly the same syntax you have to use for 
 relatively harmless things, like casting a float to an integer.
It's not allowed in safe functions.
 That's nice, but it would be possible without immutable.
Again, relying on convention has been shown, in practice, to NOT WORK when it comes to making reliable multithreaded programs.
Dec 23 2009
parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 But you need to allocate this data from a shared garbage collection, 
You do anyway.
Consider you allocate normal data from a thread local heap, and shared data from a shared GC. We will need this anyway to get decent memory allocation and GC performance. Especially because the shared GC will have a single global lock, and will have to stop ALL threads in the process to scan for memory.

Now, where do you want to allocate immutable data from?

a) From the local heap: but then you can't just pass immutable data by reference to other threads. It obviously won't work, because the local GC may free the memory, even if other threads hold references to it.

b) From the shared GC: but then even only-locally used data like strings would have to be allocated from the shared GC! This would work, but performance of immutable would be godawful. No way you could do this outside of alpha versions of the language.

You could make a) work by copying the immutable data to the shared heap as soon as immutable data "escapes" a thread and may be accessed by other threads.

What will you do?
 For large portions of data you could (at least in theory) make it 
 _actually_ read-only by using mprotect (make the memory pages read-only).
Compile time checking is better than runtime checking.
Not if the language/compiler gets unusable.
 Yes, there was a recently discovered bug which enabled modifying an 
 immutable array. This was a bug, and has been fixed. A bug does not 
 mean the concept is broken.
Sure, but the question is: will all those bugs ever to be fixed?
Forgive me, but every month 20 to 40 bugs get fixed. You can see it in the change log. I don't understand these complaints.
Frankly, I don't understand how you think that there's no problem. Even beginners can hit dmd bugs. Some basic language features are still buggy as hell (like forward referencing). Of course only if you actually try to use them.
 
 Also, how much is this reliability worth if you can just cast away 
 immutable? It's even exactly the same syntax you have to use for 
 relatively harmless things, like casting a float to an integer.
It's not allowed in safe functions.
"To hell with un- safe D"? The current cast syntax, that allows immutable to be casted away, is just a damn wide open programmer trap that must be fixed.
Dec 24 2009
next sibling parent reply retard <re tard.com.invalid> writes:
Thu, 24 Dec 2009 17:41:30 +0100, grauzone wrote:

 Walter Bright wrote:
 Yes, there was a recently discovered bug which enabled modifying an
 immutable array. This was a bug, and has been fixed. A bug does not
 mean the concept is broken.
Sure, but the question is: will all those bugs ever to be fixed?
Forgive me, but every month 20 to 40 bugs get fixed. You can see it in the change log. I don't understand these complaints.
Frankly, I don't understand how you think that there's no problem. Even beginners can hit dmd bugs. Some basic language features are still buggy as hell (like forward referencing). Of course only if you actually try to use them.
10 year old forward reference errors don't matter at all since 20 to 40 *other* bugs get fixed every month.
Dec 24 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 10 year old forward reference errors don't matter at all since 20 to 40 
 *other* bugs get fixed every month.
Half of them show as fixed. http://d.puremagic.com/issues/show_bug.cgi?id=340
Dec 24 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 What will you do?
Because of casting, there cannot be a thread-local only gc. This does not make the gc inherently unusable. Java, for example, uses only one shared gc. It must, because Java has no concept of thread local.
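(A sketch of the problem being described - added for illustration, not from the post.)

void main()
{
    int[] local = new int[](100);                            // allocated by this thread
    immutable(int)[] frozen = cast(immutable(int)[]) local;  // the cast makes it look shareable
    // if `frozen` is handed to another thread, a collector that only scanned
    // this thread's heap and roots could reclaim the memory while the other
    // thread still uses it - so the gc has to treat the heap as shared.
}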
Dec 24 2009
next sibling parent reply retard <re tard.com.invalid> writes:
Thu, 24 Dec 2009 11:59:57 -0800, Walter Bright wrote:

 grauzone wrote:
 What will you do?
Because of casting, there cannot be a thread-local only gc. This does not make the gc inherently unusable. Java, for example, uses only one shared gc. It must, because Java has no concept of thread local.
TLS is provided via a library add-on: http://java.sun.com/javase/6/docs/api/java/lang/ThreadLocal.html
Dec 24 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 Thu, 24 Dec 2009 11:59:57 -0800, Walter Bright wrote:
 
 grauzone wrote:
 What will you do?
Because of casting, there cannot be a thread-local only gc. This does not make the gc inherently unusable. Java, for example, uses only one shared gc. It must, because Java has no concept of thread local.
TLS is provided via library add-on http://java.sun.com/javase/6/docs/api/ java/lang/ThreadLocal.html
While Java can allocate thread local data, it has no *concept* of thread local data. Nothing at all prevents one from passing a reference to thread local data from one thread to another. Since nothing prevents this, it therefore cannot violate the Java memory model, and therefore must be supported by the gc.
Dec 24 2009
prev sibling parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 grauzone wrote:
 What will you do?
Because of casting, there cannot be a thread-local only gc.
I think this is a very bad idea. I thought TLS by default was just the beginning of separating threads better. While it will work in the initial stages of D2, I don't think it should be final. Also, it's bad how inter-thread communication will trigger GC runs, which will stop all threads in the process for a while. Because you have no choice but to allocate your immutable messages from the shared heap. That can't be... I must be missing some central point.
 This does not make the gc inherently unusable. Java, for example, uses 
 only one shared gc. It must, because Java has no concept of thread local.
What does Java matter here? Java was designed twenty years ago, when multicore wasn't an issue yet. D2 is designed *now* with good multicore support in mind.

Also I'm not sure if you're right here. Java has a generational copying GC, and although I don't know Sun Java's implementation at all, I'm quite sure the younger generation uses an entirely thread local heap. In the normal case, it shouldn't need to take a single lock to allocate memory. Just an unsynchronized pointer increment to get the next memory block (as fast as stack allocation). If a memory block "escapes" to the older generation (the GC needs to detect this case anyway), the memory can be copied to a shared heap. This means old dumb Java will completely smash your super multicore aware D2. At least if someone wants to allocate memory... oh by the way, no way to prevent GC cycles on frequent memory allocations, even if the programmer knows that the memory isn't needed anymore: it seems manual memory management is going to be deemed "evil". Or did I hear wrong that "delete" will be removed from D2?

By the way... this reminds me of Microsoft's Singularity kernel: they achieve full memory isolation between processes running in the same address space, without extending the type system with cruft like immutable. Processes can communicate like Erlang threads using the actor model.
Dec 25 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Also, it's bad how inter-thread communication will trigger GC runs, 
No, it won't. Allocation may trigger a GC run.
 which will stop all threads in the process for a while. Because you have 
 no choice but to allocate your immutable messages from the shared heap.
It all depends. Value messages are not allocated. Immutable data structures can be pre-allocated.
 This does not make the gc inherently unusable. Java, for example, uses 
 only one shared gc. It must, because Java has no concept of thread local.
What does Java matter here? Java was designed twenty years aho, when multicore wasn't an issue yet. D2 is designed *now* with good multicore support in mind.
It matters because Java is used a lot in multithreaded applications, and it is gc based. The gc is not a disastrous problem with it.
 Also I'm not sure if you're right here. Java has a generational copying 
 GC, and although I don't know Sun Java's implementation at all, I'm 
 quite sure the younger generation uses an entirely thread local heap. In 
 the normal case, it shouldn't need to get a single lock to allocate 
 memory. Just an unsynchronized pointer incrementation to get the next 
 memory block (as fast as stack allocation). If a memory block "escapes" 
 to the older generation (the GC needs to detect this case anyway), the 
 memory can be copied to a shared heap.
Getting a memory block can be done with thread local pools, but the pools are from *shared* memory and when a collection cycle is done, it is done across all threads and shared memory.
 This means old dumb Java will completely smash your super multicore 
 aware D2.
I think you're confusing allocating from a thread local cache with the resulting memory being thread local. The latter doesn't follow from the former.
 At least if someone wants to allocate memory... oh by the way, 
 no way to prevent GC cycles on frequent memory allocations, even if the 
 programmer knows that the memory isn't needed anymore: it seems manual 
 memory managment is going to be deemed "evil". Or did I hear wrong that 
 "delete" will be removed from D2?
 
 By the way... this reminds me of Microsoft's Singularity kernel: they 
 achieve full memory isolation between processes running in the same 
 address, space without extending the type system with cruft like 
 immutable. Processes can communicate like Erlang threads using the actor 
 model.
Erlang is entirely based on immutability of data. The only "cruft" they got rid of was mutability!
Dec 25 2009
parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 grauzone wrote:
 Also, it's bad how inter-thread communication will trigger GC runs, 
No, it won't. Allocation may trigger a GC run.
 which will stop all threads in the process for a while. Because you 
 have no choice but to allocate your immutable messages from the shared 
 heap.
It all depends. Value messages are not allocated. Immutable data structures can be pre-allocated.
As soon as you have slightly more complex data like a simple string, the trouble starts.
 
 This does not make the gc inherently unusable. Java, for example, 
 uses only one shared gc. It must, because Java has no concept of 
 thread local.
What does Java matter here? Java was designed twenty years aho, when multicore wasn't an issue yet. D2 is designed *now* with good multicore support in mind.
It matters because Java is used a lot in multithreaded applications, and it is gc based. The gc is not a disastrous problem with it.
For one, Java has an infinitely better GC implementation than D. Yeah, this isn't a problem with the concept or the language specification, but it matters in reality. There's no way a shared GC is ever going to be scalable with multicores. If I'm wrong and it can be made scalable, I'd like to see it. Not just in theory, but in D.
 
 Also I'm not sure if you're right here. Java has a generational 
 copying GC, and although I don't know Sun Java's implementation at 
 all, I'm quite sure the younger generation uses an entirely thread 
 local heap. In the normal case, it shouldn't need to get a single lock 
 to allocate memory. Just an unsynchronized pointer incrementation to 
 get the next memory block (as fast as stack allocation). If a memory 
 block "escapes" to the older generation (the GC needs to detect this 
 case anyway), the memory can be copied to a shared heap.
Getting a memory block can be done with thread local pools, but the pools are from *shared* memory and when a collection cycle is done, it is done across all threads and shared memory.
 This means old dumb Java will completely smash your super multicore 
 aware D2.
I think you're confusing allocating from a thread local cache from the resulting memory being thread local. The latter doesn't follow from the former.
I didn't say thread local allocation couldn't improve the situation, but the problem is still there: a GC costs too much. You'll be able to escape the situation for a while (e.g. by adding said thread local pools), but I think eventually you'll have to try something different.

I think having thread local heaps could be a viable solution, especially because the D2 type system is *designed* for it. All data is thread local by default, and trying to access it from other threads is forbidden and will break stuff. We have shared/immutable to allow inter-thread accesses. There will *never* be pointers to non-shared/mutable data between different threads. This just cries for allocating "normal" data on isolated separate per-thread heaps. I was just wondering what you'd do about immutable data. But OK, you're not going this way. What a waste. We can end the discussion here, sorry for the trouble.
 At least if someone wants to allocate memory... oh by the way, no way 
 to prevent GC cycles on frequent memory allocations, even if the 
 programmer knows that the memory isn't needed anymore: it seems manual 
 memory managment is going to be deemed "evil". Or did I hear wrong 
 that "delete" will be removed from D2?

 By the way... this reminds me of Microsoft's Singularity kernel: they 
 achieve full memory isolation between processes running in the same 
 address, space without extending the type system with cruft like 
 immutable. Processes can communicate like Erlang threads using the 
 actor model.
Erlang is entirely based on immutability of data. The only "cruft" they got rid of was mutability!
You could understand your argument as "having both is cruft". Maybe D2 would be better if we removed all mutable types?
Dec 25 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 It matters because Java is used a lot in multithreaded applications, 
 and it is gc based. The gc is not a disastrous problem with it.
For one, Java has an infinitely better GC implementation than D. Yeah, this isn't a problem with the concept or the language specification, but it matters in reality.
I thought we were talking about a fundamental issue of concept and language specification.
 There's no way a shared GC is ever going to be scalable with multicores. 
 If I'm wrong and it can be made scalable, I'd like to see it. Not just 
 in theory, but in D.
I believe there's plenty that can be achieved with it first. D has a fairly simple GC implementation in it right now, probably early 90's technology. It could be pushed an awful lot further. If you want to help out with it, you're welcome to.
 Erlang is entirely based on immutability of data. The only "cruft" 
 they got rid of was mutability!
You could understand your argument as "having both is cruft". Maybe D2 would be better if we removed all mutable types?
Pure functional languages (which is what you get without mutable data) are forced to buy into a whole 'nother set of problems (see "monads"). My impression is Erlang does one thing very very well (multithreading) and everything else, not so good.
Dec 25 2009
next sibling parent grauzone <none example.net> writes:
Walter Bright wrote:
 grauzone wrote:
 It matters because Java is used a lot in multithreaded applications, 
 and it is gc based. The gc is not a disastrous problem with it.
For one, Java has an infinitely better GC implementation than D. Yeah, this isn't a problem with the concept or the language specification, but it matters in reality.
I thought we were talking about a fundamental issue of concept and language specification.
Yes, but what matters is what can finally be implemented. And my initial question was what advantages immutability would offer. Of course that includes the implementation, not only theoretical possibilities. Actually, I couldn't care less about theory. The question is: will this and that be implemented in the foreseeable future? Those concepts and the limits of the implementation environment both set the frame of what will actually be possible.

For example, saying D could just use the same GC algorithms as Java probably isn't going to work, because D has to deal with C compatibility, doesn't use a VM, etc... In the same way, thread local data by default and having a different set of shared types may provide some implementation opportunities that wouldn't exist in Java.
 
 There's no way a shared GC is ever going to be scalable with 
 multicores. If I'm wrong and it can be made scalable, I'd like to see 
 it. Not just in theory, but in D.
I believe there's plenty that can be achieved with it first. D has a fairly simple GC implementation in it right now, probably early 90's technology. It could be pushed an awful lot further. If you want to help out with it, you're welcome to.
I sure would if I could, because GC performance gets on my nerves. Also I believe there are reasons why no one has contributed a better GC yet. For one, most extended GC algorithms seem to require compiler support (precise type information, write barriers). And then D is bound to C, which complicates things further. By the way, how is dsimcha's precise GC patch coming?
Dec 25 2009
prev sibling parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
On 12/25/2009 08:17 PM, Walter Bright wrote:
 I believe there's plenty that can be achieved with it first. D has a
 fairly simple GC implementation in it right now, probably early 90's
 technology. It could be pushed an awful lot further.

 If you want to help out with it, you're welcome to.
How about a simple way to allocate in TLS? Could be a garbage collected heap, which stops only the current thread when collecting. It doesn't need to be an all out solution, and you could just advise against casting the pointers away from TLS, just as is done with immutability. Then again, I have not measured the performance differences involved in the current solution, maybe this is a non-problem.
Dec 25 2009
prev sibling next sibling parent reply yigal chripun <yigal100 gmail.com> writes:
Walter Bright Wrote:

 Yigal Chripun wrote:
 But that doesn't mean the idea itself isn't valid. Perhaps a different 
 language with different goals in mind can provide a much simpler non 
 convoluted implementation and semantics for the same idea?
 You've shown in the past that you're willing to break backward 
 compatibility in the name of progress and experiment with new ideas. You 
 can make decisions that the C++ committee will never approve.
 
 Doesn't that mean that this is at least worth a shot?
I believe that D's template constraint feature fills the bill, it does everything Concepts purported to do, and more, in a simple and easily explained manner, except check the template body against the constraint. The latter is, in my not-so-humble opinion, a desirable feature but its desirability is overwhelmed by the payment in complexity and constrictions on the Concepts necessary to make it work.
could you please expand on what are the main issues with implementing that check?

I also wonder what's the situation regarding this in .Net generics - they have constraints IIRC; is the check performed there?

IMO constraints are neat but they aren't perfect. For one, they need to be repeated for each template; it would be very awesome if that could be moved to the parameter of the template, so instead of:

template foo(T) if isRange!T ...
template bar(T) if isRange!T ...

you could write something like:

struct Range if ... {}
template foo(r : Range) ...
template bar(r : Range) ...

inside both templates the parameter r satisfies all the constraints of Range.

does that sound reasonable at all?
Dec 21 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
yigal chripun wrote:
 Walter Bright Wrote:
 
 Yigal Chripun wrote:
 But that doesn't mean the idea itself isn't valid. Perhaps a
 different language with different goals in mind can provide a
 much simpler non convoluted implementation and semantics for the
 same idea? You've shown in the past that you're willing to break
 backward compatibility in the name of progress and experiment
 with new ideas. You can make decisions that the C++ committee
 will never approve.
 
 Doesn't that mean that this is at least worth a shot?
I believe that D's template constraint feature fills the bill, it does everything Concepts purported to do, and more, in a simple and easily explained manner, except check the template body against the constraint. The latter is, in my not-so-humble opinion, a desirable feature but its desirability is overwhelmed by the payment in complexity and constrictions on the Concepts necessary to make it work.
could you please expand on what are the main issues with implementing that check?
Because the constraint can have any computation in it. So how does one verify that all computations in the template body are represented in the constraint in the same way?
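(A sketch of the point, added for illustration - not Walter's code; the names are made up.)

// the constraint is an arbitrary compile-time boolean expression...
void process(T)(T[] data)
    if (T.sizeof <= 64 && is(typeof(T.init + T.init)))
{
    auto twice = data[0] + data[0];    // covered by the constraint
    // auto s = data[0].frobnicate();  // not covered; nothing stops the body
    //                                 // from using it, it would simply fail
    //                                 // at instantiation for types without it
}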
 I also wonder what's the situation regarding this in .Net generics - they
 have constraints IIRC; is the check performed there?
 IMO cconstraints are neat but they aren't perfect. For one, they need
 to be repeated for each template, it would be very awesome if that
 could be moved to the parameter of the template so instead of:
 
 template foo(T) if isRange!T ... template bar(T) if isRange!T ...
 
 you could write something like: struct Range if ... {} template foo(r
 : Range) ... template bar(r : Range) ...
 
 inside both templates the parameter r satisfies all the constraits of
 Range.
 
 does that sound reasonable at all?
The template parameter list was already fairly complicated, I thought that adding in the constraints would make it impenetrable. Also, the parameter list involves the individual parameters, whereas the constraint involves any combination of them. Adding in the constraints to the parameters also has unknown consequences for figuring out partial ordering. Partial ordering is based on abstract types, whereas constraints are based on actual types.
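A tiny example of the partial-ordering point, added for illustration (the function name pick is invented): the compiler can order the parameter patterns T and T[] by specialization, whereas overlapping constraints have to be disambiguated by hand, as in Max Samukha's post below.

// Partial ordering works on the parameter patterns, not on constraint code.
string pick(T)(T x)   { return "general"; }
string pick(T)(T[] x) { return "array"; }   // more specialized; wins for slices

unittest
{
    assert(pick(42) == "general");
    assert(pick([1, 2, 3]) == "array");
}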
Dec 22 2009
prev sibling next sibling parent Don <nospam nospam.com> writes:
Walter Bright wrote:
 Yigal Chripun wrote:
 But that doesn't mean the idea itself isn't valid. Perhaps a different 
 language with different goals in mind can provide a much simpler non 
 convoluted implementation and semantics for the same idea?
 You've shown in the past that you're willing to break backward 
 compatibility in the name of progress and experiment with new ideas. 
 You can make decisions that the C++ committee will never approve.

 Doesn't that mean that this is at least worth a shot?
I believe that D's template constraint feature fills the bill, it does everything Concepts purported to do, and more, in a simple and easily explained manner, except check the template body against the constraint. The latter is, in my not-so-humble opinion, a desirable feature but its desirability is overwhelmed by the payment in complexity and constrictions on the Concepts necessary to make it work.
I think a consequence of that is that facilities for compile-time testing become quite important, since we're relying on testing rather than compile-time checks to eliminate bugs. So I'm delighted that the static assert backtrace patch has been implemented. (One very useful feature would be code coverage of template instantiations -- which lines of a template have actually been instantiated?)
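As a small illustration of the kind of compile-time testing meant here (the template name head is invented): static asserts placed next to the template force instantiations that would otherwise only be exercised when some client happens to use them.

import std.range;

// A template under test...
auto head(R)(R r) if (isInputRange!R)
{
    assert(!r.empty);
    return r.front;
}

// ...and compile-time "tests" that force instantiation and check the results.
static assert(is(typeof(head([1, 2, 3])) == int));
static assert(__traits(compiles, head("abc")));
static assert(!__traits(compiles, head(42)));   // rejected by the constraint

unittest
{
    assert(head([7, 8]) == 7);
}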
Dec 22 2009
prev sibling parent Max Samukha <spambox d-coding.com> writes:
On 22.12.2009 5:15, Walter Bright wrote:
 Yigal Chripun wrote:
 But that doesn't mean the idea itself isn't valid. Perhaps a different
 language with different goals in mind can provide a much simpler non
 convoluted implementation and semantics for the same idea?
 You've shown in the past that you're willing to break backward
 compatibility in the name of progress and experiment with new ideas.
 You can make decisions that the C++ committee will never approve.

 Doesn't that mean that this is at least worth a shot?
I believe that D's template constraint feature fills the bill, it does everything Concepts purported to do, and more, in a simple and easily explained manner, except check the template body against the constraint.
...and template overloading based on concept refinement. D requires hacks to accomplish that:

template isFoo(T)
{
}

template isBar(T)
{
    enum isBar = ... && isFoo!T;
}

template Foo(T) if (isFoo!T && !isBar!T /+ hack +/)
{}

template Foo(T) if (isBar!T)
{}

In the run-time domain, it would be analogous to requiring a parameter to specify which interfaces the argument *should not* implement:

interface IFoo
{
}

interface IBar : IFoo
{
}

void foo (IBar a)
{
}

void foo ((IFoo && !IBar) a) // meh
{
}

Maybe it is not a significant shortcoming but it needs to be mentioned.
 The latter is, in my not-so-humble opinion, a desirable feature but its
 desirability is overwhelmed by the payment in complexity and
 constrictions on the Concepts necessary to make it work.
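For reference, a compilable rendering of the overloading hack described above, with placeholder predicates (isSmall and isTiny are invented): the broader overload must explicitly exclude the refinement, otherwise both constraints match and the call is ambiguous.

template isSmall(T)
{
    enum isSmall = T.sizeof <= 4;
}

template isTiny(T)
{
    enum isTiny = T.sizeof <= 1 && isSmall!T;   // "refines" isSmall
}

string classify(T)(T) if (isSmall!T && !isTiny!T)  /+ hack: exclude the refinement +/
{
    return "small";
}

string classify(T)(T) if (isTiny!T)
{
    return "tiny";
}

unittest
{
    assert(classify(1) == "small");            // int: 4 bytes
    assert(classify(cast(byte) 1) == "tiny");  // byte: 1 byte
}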
Dec 22 2009
prev sibling parent reply Rainer Deyke <rainerd eldwood.com> writes:
yigal chripun wrote:
 2) structural typing (similar to Go?)
 tp!(Foo) // OK
 tp!(Bar) // also OK
 
 3) C++ style templates where the compatibility check is against the
 *body* of the template.
 
 If you think of templates as functions the compiler executes, the
 difference between the last two options is that option 2 is statically
 typed vs. option 3 which is dynamically typed. We all use D because we
 like static typing and there's no reason not to extend this to
 compile-time as well.
I prefer to think of option 2 as explicitly typed while option 3 uses type inference. Type inference is a good thing. -- Rainer Deyke - rainerd eldwood.com
Dec 21 2009
parent reply yigal chripun <yigal100 gmail.com> writes:
Rainer Deyke Wrote:

 yigal chripun wrote:
 2) structural typing (similar to Go?)
 tp!(Foo) // OK
 tp!(Bar) // also OK
 
 3) C++ style templates where the compatibility check is against the
 *body* of the template.
 
 If you think of templates as functions the compiler executes, the
 difference between the last two options is that option 2 is statically
 typed vs. option 3 which is dynamically typed. We all use D because we
 like static typing and there's no reason not to extend this to
 compile-time as well.
I prefer to think of option 2 as explicitly typed while option 3 uses type inference. Type inference is a good thing. -- Rainer Deyke - rainerd eldwood.com
You might prefer that but it's incorrect. This is exactly equivalent to calling a Ruby function vs. a D function, only it happens at the compiler's run-time instead of your app's run-time. Errors that the compiler statically checks in D will only be caught at run-time in Ruby. In our case, this means that a user of a template can get compilation errors for the template code itself.
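A tiny sketch of that situation (the types and function are invented): nothing declares what T must provide, so a bad instantiation produces errors pointing into the template body itself, at the client's build.

struct Duck  { string speak() { return "quack"; } }
struct Robot { string speak() { return "beep"; } }

// "Option 3": the body is the only specification of what T must provide.
string makeNoise(T)(T t)
{
    return t.speak();   // an int has no speak(); the error would show up here
}

unittest
{
    assert(makeNoise(Duck()) == "quack");
    assert(makeNoise(Robot()) == "beep");
    static assert(!__traits(compiles, makeNoise(42)));
}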
Dec 21 2009
parent reply Rainer Deyke <rainerd eldwood.com> writes:
yigal chripun wrote:
 Rainer Deyke Wrote:
 I prefer to think of option 2 as explicitly typed while option 3 uses
 type inference.  Type inference is a good thing.
 You might prefer that but it's incorrect.
It's not incorrect, it's another way of looking at the same thing. Structural type inference and compile-time dynamic typing are the same thing. -- Rainer Deyke - rainerd eldwood.com
Dec 21 2009
parent reply retard <re tard.com.invalid> writes:
Mon, 21 Dec 2009 04:05:01 -0700, Rainer Deyke wrote:

 yigal chripun wrote:
 Rainer Deyke Wrote:
 I prefer to think of option 2 as explicitly typed while option 3 uses
 type inference.  Type inference is a good thing.
 You might prefer that but it's incorrect.
It's not incorrect, it's another way of looking at the same thing. Structural type inference and
 compile-time dynamic typing are the same
 thing.
Now that's a funny term.. you see

dynamic = runtime
static = compile-time

"compile-time dynamic X" is a paradox. And so is "runtime static X".

Another note, dynamic types do have a compile-time representation. On the type system level the types all have a 'dynamic' type. Not much can be said about that unless e.g. pattern matching is used. Static structural types on the other hand differ at compile time. They have some kind of structure.

What type inference means in this context is that instead of

typeof({type has members a and b}) foo = { a = 2, b = 3 }

you can say

auto foo = { a = 2, b = 3 }

Now if you try to do

auto foo = 1;
foo = { a = 2, b = 3 }

you get a compile-time error. With dynamic types that is neither a compile-time nor a runtime error.
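Rewritten in real D for illustration (std.typecons.Tuple stands in for the anonymous record in the pseudocode above): the type is inferred once, statically, and assigning a value of a different type later is a compile-time error.

import std.typecons : tuple;

void main()
{
    auto foo = tuple(2, 3);   // type inferred statically as Tuple!(int, int)
    foo = tuple(4, 5);        // fine: same inferred type
    static assert(!__traits(compiles, foo = 1));   // different type: compile-time error
}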
Dec 21 2009
parent "Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message 
news:hgnlug$452$1 digitalmars.com...
 Mon, 21 Dec 2009 04:05:01 -0700, Rainer Deyke wrote:

 yigal chripun wrote:
 Rainer Deyke Wrote:
 I prefer to think of option 2 as explicitly typed while option 3 uses
 type inference.  Type inference is a good thing.
 You might prefer that but it's incorrect.
It's not incorrect, it's another way of looking at the same thing. Structural type inference and
 compile-time dynamic typing are the same
 thing.
 Now that's a funny term.. you see dynamic = runtime, static = compile-time. "compile-time dynamic X" is a paradox. And so is "runtime static X"
That's not a paradox, that's a contradiction. ;) ("That's no paradox, that's my wife!" [canned laughter])
Dec 21 2009
prev sibling next sibling parent reply Kevin Bealer <kevinbealer gmail.com> writes:
dsimcha Wrote:

 == Quote from Yigal Chripun (yigal100 gmail.com)'s article
 but even more frustrating is the fact that
 template compilation bugs will also happen at the client.
 There's a whole range of designs for this and related issues and IMO the
 C++ design is by far the worst of them all. not to mention the fact that
 it isn't an orthogonal design (like many other "features" in c++). I'd
 much prefer a true generics design to be separated from compile-time
 execution of code with e.g. CTFE or AST macros, or other designs.
Since generics work by basically casting stuff to Object (possibly boxing it) and casting back, I wonder if it would be easy to implement generics on top of templates through a minimal wrapper. The main uses for this would be reducing executable bloat (for those that insist that this matters in practice) and allowing virtual functions where templates can't be virtual.
In C++ you could define a MyObject and MyRef (smart pointer to Object) types that implement the methods you need (like comparisons) as virtual functions that are either pure or throw exceptions. Then just define a non-template class that inherits from (or just aggregates and wraps) a vector<MyRef> and map<MyRef, MyRef>. Now you can use this map and vector code to build whatever solution you need, using dynamic_cast<T> to ensure the types are what you want. If you want you can throw a type-safe wrapper that uses dynamic_cast<T> around this; as long as all the methods of that class are inlined, it should have no extra bloat. Now you're back at the Java level of expressiveness.

I wonder, though, if you actually save anything. Since vector::operator[](size_t) is just an array index that will get inlined into your code, I think it should have much *less* code bloat in your executable than a function call (to virtual MyRef vector<MyRef>::operator[](size_t)) plus dynamic casts to figure out if all the types are castable.

Movie Poster: Dr. Stroustrouplove: or How I Stopped Worrying and Learned to Love the Bloat.

As for mixing template code and client code, is it that big of a deal in practice? If you are making something big enough to be worth patenting, won't most of the "heavy lifting" classes probably not be templates? Unless you are marketing template/container libraries specifically, I guess...

Personally I'm reluctant to buy that sort of thing from someone if I *can't* see the source. If I need a class like "vector", in practice I need to be able to dig into its methods to see what they do, e.g. is it really guaranteed that storage is contiguous, etc. On the other hand, if I was buying code to print a Word document to a pdf file, I could accept that I don't need to look into the code. But something like vector<> or map<> ends up getting so "intimate" with the rest of my design that I want to know how it works in detail just as an end-user. (Yeah, yeah, I know encapsulation says I shouldn't have to.)

I think performance outweighs the needs of any particular business model, especially when you can do your own Object based version if you need to. I agree there should be ways to hide the implementation from the client, but if templates aren't one of them, then just factor that into where and how you use templates and when you use OO and virtual instead. It's enough that it's possible and practical to hide your impl; you don't need *every* language feature to make that possible.

The performance / impl-hiding conflict is a fundamental problem -- if the user's compiler can't see the template method definitions, then it can't optimize them very well. If it can, then the user can too. Any method of compiling them that preserves enough info for the compiler to work with will probably be pretty easily and cleanly byte-code-decompilable.

Kevin
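A rough D sketch of that wrapper idea (the class and function names are invented): one untyped Object-based container does the work, and a thin templated facade only casts, so each instantiation contributes almost no code of its own.

class ObjectList
{
    private Object[] items;

    void add(Object o) { items ~= o; }
    Object get(size_t i) { return items[i]; }
    size_t length() { return items.length; }
}

// Thin "generic" facade over the shared, untyped implementation.
struct List(T : Object)
{
    private ObjectList impl;

    this(ObjectList impl) { this.impl = impl; }
    void add(T t) { impl.add(t); }
    T get(size_t i) { return cast(T) impl.get(i); }
    size_t length() { return impl.length; }
}

unittest
{
    auto names = List!Object(new ObjectList);
    names.add(new Object);
    assert(names.length == 1);
    assert(names.get(0) !is null);
}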
Dec 21 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Kevin Bealer wrote:
 The performance / impl-hiding conflict is a fundamental problem -- if
 the user's compiler can't see the template method definitions, then
 it can't optimize them very well.  If it can, then the user can too.
 Any method of compiling them that preserves enough info for the
 compiler to work with will probably be pretty easily and cleanly
 byte-code-decompilable.
Absolutely right. One of the features that C++ exported templates were supposed to provide was obfuscation of the template bodies so that users couldn't see them. My contention was that there was essentially no reasonable method to ensure that.

1. any obfuscation method only has to be cracked by one individual, then everyone can see through it.

2. if you ship the library for multiple compilers, you only have to crack the weakest one

3. if you provide the decryption key to the customer, and you must, and an open source compiler is used, you lose
Dec 21 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 22/12/2009 05:22, Walter Bright wrote:
 Kevin Bealer wrote:
 The performance / impl-hiding conflict is a fundamental problem -- if
 the user's compiler can't see the template method definitions, then
 it can't optimize them very well. If it can, then the user can too.
 Any method of compiling them that preserves enough info for the
 compiler to work with will probably be pretty easily and cleanly
 byte-code-decompilable.
Absolutely right. One of the features that C++ exported templates was supposed to provide was obfuscation of the template bodies so that users couldn't see it. My contention was that there was essentially no reasonable method to ensure that. 1. any obfuscation method only has to be cracked by one individual, then everyone can see through it. 2. if you ship the library for multiple compilers, you only have to crack the weakest one 3. if you provide the decryption key to the customer, and you must, and an open source compiler is used, you lose
You can also dis-assemble binary libs. That's not the point of this discussion. The point is having proper encapsulation.
Dec 21 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Yigal Chripun wrote:
 On 22/12/2009 05:22, Walter Bright wrote:
 Kevin Bealer wrote:
 The performance / impl-hiding conflict is a fundamental problem -- if
 the user's compiler can't see the template method definitions, then
 it can't optimize them very well. If it can, then the user can too.
 Any method of compiling them that preserves enough info for the
 compiler to work with will probably be pretty easily and cleanly
 byte-code-decompilable.
Absolutely right. One of the features that C++ exported templates was supposed to provide was obfuscation of the template bodies so that users couldn't see it. My contention was that there was essentially no reasonable method to ensure that. 1. any obfuscation method only has to be cracked by one individual, then everyone can see through it. 2. if you ship the library for multiple compilers, you only have to crack the weakest one 3. if you provide the decryption key to the customer, and you must, and an open source compiler is used, you lose
You can also dis-assemble binary libs. That's not the point of this discussion. The point is having proper encapsulation.
Disassembling is a very different deal, because much information is lost in the emission of assembler. But with template bodies, 100% of the semantic information must be present. Proper encapsulation is a convention thing, because if you have the source (or can extract it somehow) you can defeat any encapsulation if you've a mind to.
Dec 21 2009
prev sibling parent Yigal Chripun <yigal100 gmail.com> writes:
On 21/12/2009 22:41, Kevin Bealer wrote:
 dsimcha Wrote:

 == Quote from Yigal Chripun (yigal100 gmail.com)'s article
 but even more frustrating is the fact that template compilation
 bugs will also happen at the client. There's a whole range of
 designs for this and related issues and IMO the C++ design is by
 far the worst of them all. not to mention the fact that it isn't
 an orthogonal design (like many other "features" in c++). I'd
 much prefer a true generics design to be separated from
 compile-time execution of code with e.g. CTFE or AST macros, or
 other designs.
Since generics work by basically casting stuff to Object (possibly boxing it) and casting back, I wonder if it would be easy to implement generics on top of templates through a minimal wrapper. The main uses for this would be executable bloat (for those that insist that this matters in practice) and allowing virtual functions where templates can't be virtual.
In C++ you could define a MyObject and MyRef (smart pointer to Object) types that implement the methods you need (like comparisons) as virtual functions that are either pure or throw exceptions. Then just define a non-template class that inherits from (or just aggregates and wraps) a vector<MyRef> and map<MyReft, MyRef>. Now you can use this map and vector code to build whatever solution you need, using dynamic_cast<T> to insure the types are what you want. If you want you can throw a type safe wrapper that uses dynamic_cast<T> around this, as long as all the methods of that class are inlined, it should have no extra bloat. Now your back at the Java level of expressiveness. I wonder, though, if you actually save anything. Since vector::operator[](size_t) is just an array index that will get inlined into your code, I think it should have much *less* code bloat in your executable than a function call (to virtual MyRef vector<MyRef>::operator[](size_t)) plus dynamic casts to figure out if all the types are castable. Movie Poster: Dr. Stroustrouplove: or How I Stopped Worrying and Learned to Love the Bloat. As for mixing template code and client code, is it that big of a deal in practice? If you are making something big enough to be worth patenting, won't most of the "heavy lifting" classes probably not be templates? Unless you are marketing template/container libraries specifically I guess... Personally I'm reluctant to buy that sort of thing from someone if I *can't* see the source. If I need a class like "vector", in practice I need to be able to dig into its methods to see what they do, e.g. is it really guaranteed that storage is contiguous, etc. On the other hand, if I was buying code to print a Word document to a pdf file, I could accept that I don't need to look into the code. But something like vector<> or map<> ends up getting so "intimate" with the rest of my design that I want to know how it works in detail just as an end-user. (Yeah, yeah, I know encapsulation says I shouldn't have to.) I think performance outweighs the needs of any particular business model, especially when you can do your own Object based version if you need to. I agree there should be ways to hide the implementation from the client, but if templates aren't one of them, then just factor that into where and how you use templates and when you use OO and virtual instead. It's enough that it's possible and practical to hide your impl, you don't need *every* language feature to make that The performance / impl-hiding conflict is a fundamental problem -- if the user's compiler can't see the template method definitions, then it can't optimize them very well. If it can, then the user can too. Any method of compiling them that preserves enough info for the compiler to work with will probably be pretty easily and cleanly byte-code-decompilable. Kevin
a few points:

Java generics are poorly designed and create holes in the type system. Some of the designers of the language admit this openly. This implementation doesn't represent the general concept.

Your performance / impl-hiding conflict doesn't exist. As I already said, you assume the same (broken) compilation model as in C++. Don't.
Dec 21 2009
prev sibling parent Kevin Bealer <kevinbealer gmail.com> writes:
yigal chripun Wrote:

 of the three above I think option 3 is the worst design and option 2 is my
favorite design. I think that in reality you'll almost always want to define
such an interface and I really can't think of any useful use cases for an
unrestricted template parameter as in C++. 
I think there are some. In C++ using "<<" to write a type T to an output stream is common. Similarly, you could do something like dividing (x.size() + 0.0)/x.capacity() to find the ratio of internal fragmentation, assuming most of the containers in question have a capacity method.

Another common example is creating a type like "Optional<T>" which wraps up a boolean plus a T to allow you to have "optional" values, e.g. the "this value is not set" concept.

template<class T>
class Optional {
    T x;
    bool have;

public:
    Optional(bool have1 = false)
        : have(have1)
    {
    }

    Optional(T & x1)
        : x(x1), have(true)
    {
    }

    bool haveValue() { return have; }

    T & get()
    {
        assert(have);
        return x;
    }
};

Which is useful for passing around lists of "configuration options" etc.

Essentially, any time you want to leverage a property common to a lot of different objects and with the same API across all of them, but don't want to (or can't) build that property into an inheritance diagram. Often in C++ the reason you can't or don't add a virtual method (toString) to solve the same problem is either that (1) you don't have access to the base class code, or (2) it's a template and you can't have virtual templates.

As Bill Clinton said, "It depends on what the meaning of IS-A is." Often a bunch of types have a kind of unrecognized "IS-A" relationship -- they all have some common characteristic that is not recognized as common (by the type system) until you come along to write code that deals with that characteristic. (But if the T code is changing it's fragile to do this.)

Kevin
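For comparison, the same "optional value" idea ships in D's standard library as std.typecons.Nullable, so no hand-rolled template is needed; a minimal usage sketch (the function name tryParseAnswer is invented):

import std.typecons : Nullable;

Nullable!int tryParseAnswer(string s)
{
    Nullable!int result;      // starts out "not set"
    if (s == "42")
        result = 42;
    return result;
}

unittest
{
    assert(tryParseAnswer("42").get == 42);
    assert(tryParseAnswer("?").isNull);
}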
Dec 21 2009