
digitalmars.D - compile-time regex redux

reply Walter Bright <newshound digitalmars.com> writes:
String mixins, in order to be useful, need an ability to manipulate 
strings at compile time. Currently, the core operations on strings that 
can be done are:

1) indexed access
2) slicing
3) comparison
4) getting the length
5) concatenation

Any other functionality can be built up from these using template 
metaprogramming.

The problem is that parsing strings using templates generates a large 
number of template instantiations, is (relatively) very slow, and 
consumes a lot of memory (at compile time, not runtime). For example, 
ParseInteger would need 4 template instantiations to parse 5678, and 
each template instantiation would also include the rest of the input as 
part of the template instantiation's mangled name.
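To make the cost concrete, here is a rough sketch (hypothetical code, not from the post; D1-style templates) of such a ParseInteger. Each recursive step is a separate instantiation, and each instantiation's mangled name embeds the tail of the string it was given:

	// Hypothetical sketch: one instantiation per remaining suffix,
	// e.g. ParseInteger!("5678"), ParseInteger!("678"), ParseInteger!("78"), ...
	template ParseInteger(char[] s, int acc = 0)
	{
	    static if (s.length == 0)
	        const int ParseInteger = acc;
	    else
	        const int ParseInteger =
	            ParseInteger!(s[1 .. $], acc * 10 + (s[0] - '0'));
	}

	static assert(ParseInteger!("5678") == 5678);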

At some point, this will prove a barrier to large scale use of this feature.

Andrei suggested using compile time regular expressions to shoulder much 
of the burden, reducing parsing of any particular token to one 
instantiation.

The last time I introduced core regular expressions into D, it was 
soundly rejected by the community and was withdrawn, and for good reasons.

But I think we now have good reasons to revisit this, at least for 
compile time use only. For example:

	("aa|b" ~~ "ababb") would evaluate to "ab"

I expect one would generally only see this kind of thing inside 
templates, not user code.
Feb 06 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings that 
 can be done are:
 
 1) indexed access
 2) slicing
 3) comparison
 4) getting the length
 5) concatenation
 
 Any other functionality can be built up from these using template 
 metaprogramming.
 
 The problem is that parsing strings using templates generates a large 
 number of template instantiations, is (relatively) very slow, and 
 consumes a lot of memory (at compile time, not runtime). For example, 
 ParseInteger would need 4 template instantiations to parse 5678, and 
 each template instantiation would also include the rest of the input as 
 part of the template instantiation's mangled name.
 
 At some point, this will prove a barrier to large scale use of this 
 feature.
 
 Andrei suggested using compile time regular expressions to shoulder much 
 of the burden, reducing parsing of any particular token to one 
 instantiation.
Let's also note for future reference that storing the MD5 hash of the name instead of the full name is an additional possibility.
 The last time I introduced core regular expressions into D, it was 
 soundly rejected by the community and was withdrawn, and for good reasons.
 
 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:
 
     ("aa|b" ~~ "ababb") would evaluate to "ab"
 
 I expect one would generally only see this kind of thing inside 
 templates, not user code.
The more traditional way is to mention the string first and pattern second, so:

	("ababb" ~~ "aa|b") // match this guy against this pattern

And I think it returns "b" - juxtaposition has a higher priority than "|", so your pattern is "either two a's or one b". :o)

One program I highly recommend for playing with regexes is The Regex Coach: http://weitz.de/regex-coach/.

Andrei
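To make the precedence point concrete, a sketch using the proposed operator (assumed syntax; none of this is implemented):

	("ababb" ~~ "aa|b")    // parses as (aa)|(b): the first match is "b"
	("ababb" ~~ "a(a|b)")  // grouping changes it: matches "ab"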
Feb 06 2007
next sibling parent reply Robby <robby.lansaw gmail.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
[snipped]
 The last time I introduced core regular expressions into D, it was 
 soundly rejected by the community and was withdrawn, and for good 
 reasons.

 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:

     ("aa|b" ~~ "ababb") would evaluate to "ab"

 I expect one would generally only see this kind of thing inside 
 templates, not user code.
The more traditional way is to mention the string first and pattern second, so:

	("ababb" ~~ "aa|b") // match this guy against this pattern

And I think it returns "b" - juxtaposition has a higher priority than "|", so your pattern is "either two a's or one b". :o)

One program I highly recommend for playing with regexes is The Regex Coach: http://weitz.de/regex-coach/.

Andrei
I wasn't here during the first round of regexes so bear with me. Though I can assume that with D's growing visibility the past few months, I'm probably not the only one. Having used Ruby for years I'm quite! fond of them, so some general questions.

What version of regex would be the target? There's a few variations out there.

Probably a couple of green questions: what is the benefit of having a compile time regex implementation over a "really fast" runtime implementation? Or having the expression compiled and the string allowed at runtime?

Assuming ~~ will be the syntax used?

Sidebar chuckle: http://dev.perl.org/perl6/doc/design/syn/S05.html
Um, wow. They've really turned 5's implementation on its head.. could be interesting to watch the reaction from that.. could be louder than the vb guys when vb.net was first released..
Feb 07 2007
next sibling parent Walter Bright <newshound digitalmars.com> writes:
Robby wrote:
 What version of regex would be the target? There's a few variations out 
 there.
The version would behave identically to the D runtime library std.regexp, for consistency's sake. (And that version is compatible with Javascript v3.)
 Probably a couple of green questions, what is the benefit of having a 
 compile time regex implementation over a "really fast" implementation? 
 Or having the expression compiled and the string allowed runtime?
The need is for compile time, not runtime, manipulation of string literals.
Feb 07 2007
prev sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Robby wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
[snipped]
 The last time I introduced core regular expressions into D, it was 
 soundly rejected by the community and was withdrawn, and for good 
 reasons.

 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:

     ("aa|b" ~~ "ababb") would evaluate to "ab"

 I expect one would generally only see this kind of thing inside 
 templates, not user code.
The more traditional way is to mention the string first and pattern second, so:

	("ababb" ~~ "aa|b") // match this guy against this pattern

And I think it returns "b" - juxtaposition has a higher priority than "|", so your pattern is "either two a's or one b". :o)

One program I highly recommend for playing with regexes is The Regex Coach: http://weitz.de/regex-coach/.

Andrei
I wasn't here during the first round of regexes so bare with me. Though I can assume that with D's growing visibility the past few months, I'm probably not the only one. Having used Ruby for years I'm quite! fond of them, some general questions. What version of regex would be the target? There's a few variations out there.
Ionno.
 Probably a couple of green questions, what is the benefit of having a 
 compile time regex implementation over a "really fast" implementation? 
 Or having the expression compiled and the string allowed runtime?
It's not about speed. Compile-time regexes will be most useful for parsing and subsequently generating code. Of course, it's great to support the same regex syntax and power for both realms.
 Assuming ~~ will be the syntax used?
 
 Sidebar chuckle :
 http://dev.perl.org/perl6/doc/design/syn/S05.html
 Um, wow. They've really turned 5's implementation on its head.. could be 
 interesting to watch the reaction from that.. could be louder than the 
 vb guys when vb.net was first released..
Interesting how a thorough cleanup is more important than keeping everybody happy. :o)

Andrei
Feb 07 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:

     ("aa|b" ~~ "ababb") would evaluate to "ab"

 I expect one would generally only see this kind of thing inside 
 templates, not user code.
The more traditional way is to mention the string first and pattern second, so:

	("ababb" ~~ "aa|b") // match this guy against this pattern

And I think it returns "b" - juxtaposition has a higher priority than "|", so your pattern is "either two a's or one b". :o)
My bad.

Some more things to think about:

1) Returning the left match, the match, the right match?
2) Returning values of parenthesized expressions?
3) Some sort of sed-like replacement syntax?

An alternative is to have the compiler recognize std.Regexp names as being built-in.
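Purely as a sketch of what the answers might look like (assumed syntax; none of this exists):

	// 1) left context, match, right context:
	//      ("ababb" ~~ "b")                  ->  ("a", "b", "abb")
	// 2) parenthesized sub-matches as extra results:
	//      ("ab12cd" ~~ "([a-z]+)([0-9]+)")  ->  match "ab12", captures ("ab", "12")
	// 3) sed-like replacement of the first match:
	//      ("ababb" ~~ s/"b"/"x"/)           ->  "axabb"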
Feb 07 2007
next sibling parent reply kenny <funisher gmail.com> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:

     ("aa|b" ~~ "ababb") would evaluate to "ab"

 I expect one would generally only see this kind of thing inside 
 templates, not user code.
The more traditional way is to mention the string first and pattern second, so:

	("ababb" ~~ "aa|b") // match this guy against this pattern

And I think it returns "b" - juxtaposition has a higher priority than "|", so your pattern is "either two a's or one b". :o)
My bad.

Some more things to think about:

1) Returning the left match, the match, the right match?
2) Returning values of parenthesized expressions?
3) Some sort of sed-like replacement syntax?

An alternative is to have the compiler recognize std.Regexp names as being built-in.
Walter, I don't hate regex -- I just don't use it. It seems to me that to figure out regex syntax takes longer than writing quick for/while statements, and I usually forget cases in regex too...

Just being able to write like I can in D with compile time variables would be so much easier for me, and it would only require one template function instead of 35 to parse a simple string... for example:

1. A while back, I needed something very quickly to remove whitespace. It took me much less time with loops than I ever could have done with a regex. I want to be able to do the same in templates, if possible. I will be trying to reproduce this later, but I think that it will require a lot of templates.

2. What about building associative arrays out of a string? I have this function from existing code. It didn't take too long to write. I want to be able to write something like this in templates to build assoc arrays dynamically.

I know I'm asking for a lot, but the way templates handle strings is still kinda weird to me. Would string parsing in this sort of way be absolutely impossible with templates? I have not had good luck with it. Perhaps I missed something...

EXAMPLES BELOW

--- whitespace removal ---

	char[] t = text.dup;
	char[] new_text;
	uint len = new_text.length = t.length;
	new_text.length = 0;

	t = replace(t, "\r\n", "\n");
	t = replace(t, "\r", "\n");
	t = replace(t, "\t", " ");

	int i = 0;
	len = t.length;
	while(i < len) {
	    if(t[i] == '/' && t[i+1] == '/') {
	        if(i == 0 || t[i-1] == ' ' || t[i-1] == '\n') {
	            while(i < len) {
	                if(t[i] == '\n') {
	                    break;
	                }
	                t[i++] = '\n';
	            }
	        }
	    }
	    i++;
	}

	for(i = 0; i < len; i++) {
	    if(t[i] < 0x20) {
	        if(t[i] == '\n') {
	            i++;
	            while(i < len && t[i] == ' ') {
	                i++;
	            }
	            i--;
	        } else {
	            t[i] = ' ';
	            i--;
	        }
	    } else if(!(t[i] == ' ' && i > 0 && t[i-1] == ' ')) {
	        new_text ~= t[i];
	    }
	}

	if(new_text[0] == ' ') {
	    new_text = new_text[1 .. length];
	}
	if(new_text[length-1] == ' ') {
	    new_text.length = new_text.length - 1;
	}

--- ASSOC ARRAY BUILDING ---

	char[][char[]] parse_options(char[] text)
	{
	    char[][char[]] options;
	    text = strip(text);
	    uint text_len = text.length;
	    uint i = 0;

	    while(text[i] == '{' && text[text_len-1] == '}') {
	        text_len--;
	        i++;
	    }
	    if(i > 0) {
	        text = strip(text[i .. text_len]);
	        text_len = text.length;
	        i = 0;
	    }

	    for(; i < text_len; i++) {
	        if(text[i] != ' ' && text[i] != ',') {
	            // { label: "options, yeah", label2: `variable`, label3: {label1: "lala:", label2: `variable2`}}
	            //   ^^^^^
	            uint start = i;
	            while(text[i] != ':') {
	                if(text[i] == ' ') {
	                    log_warning!("found a space in your label... expecting ':' in '^'", text[start .. i]);
	                }
	                if(++i >= text_len) {
	                    log_error!("expected label... but not found '^'", text[start .. i]);
	                    goto return_options;
	                }
	            }
	            char[] label = strip(text[start .. i]);

	            // { label: "options, yeah", label2: `variable`, label3: {label1: "lala:", label2: `variable2`}}
	            //        ^
	            i++;
	            while(text[i] == ' ') {
	                if(++i >= text_len) {
	                    log_error!("label has no value '^'", text);
	                    goto return_options;
	                }
	            }

	            uint def_start = i++;
	            switch(text[def_start]) {
	                case '{':
	                    // { label: "options, yeah", label2: `variable`, label3: {label1: "lala:", label2: `variable2`}}
	                    //                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	                    uint scopee = 1;
	                    while(true) {
	                        if(++i >= text_len) {
	                            log_error!("cannot find end to text string in label '^'", label);
	                            goto return_options;
	                        }
	                        if(text[i] == '{') {
	                            scopee++;
	                        } else if(text[i] == '}') {
	                            if(scopee == 1) {
	                                break;
	                            }
	                            scopee--;
	                        }
	                        // skip text
	                        if(text[i] == '"' || text[i] == '\'' || text[i] == '`') {
	                            char delim = text[i];
	                            i++;
	                            if(i >= text_len) break;
	                            while(text[i] != delim || (text[i] == delim && text[i-1] == '\\')) {
	                                if(++i >= text_len) {
	                                    log_error!("cannot find end to text string in label '^'", label);
	                                    goto return_options;
	                                }
	                            }
	                        }
	                    }
	                    options[label] = strip(text[def_start .. i+1]);
	                    assert(strip(text[def_start .. i+1])[0] == '{');
	                    assert(strip(text[def_start .. i+1])[length-1] == '}');
	                    break;

	                case '"', '`', '\'':
	                    // { label: "options, yeah", label2: `variable`, label3: {label1: "lala:", label2: `variable2`}}
	                    //          ^^^^^^^^^^^^^^^          ^^^^^^^^^^
	                    char delim = text[def_start];
	                    char[] string = "";
	                    while(text[i] != delim || (text[i] == delim && text[i-1] == '\\')) {
	                        if(text[i] == delim && text[i-1] == '\\') {
	                            string[length-1] = delim;
	                        } else {
	                            string ~= text[i];
	                        }
	                        if(++i >= text_len) {
	                            log_error!("cannot find end to text string in label '^'", label);
	                            goto return_options;
	                        }
	                    }
	                    options[label] = string;
	                    break;

	                default:
	                    // { label: "options, yeah", label2: variable, label3: {label1: "lala:", label2: `variable2`}}
	                    //                                   ^^^^^^^^
	                    while(text[i] != ' ' && text[i] != ',' && ++i < text_len) { }
	                    options[label] = text[def_start .. i];
	            }
	        }
	    }

	return_options:
	    return options;
	}
Feb 07 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
kenny wrote:
 I know I'm asking for a lot, but the way templates handle string are 
 still kinda weird to me. Would string parsing in this sort of way be 
 absolutely impossible with templates? I have not had good luck with it. 
I just haven't thought about this enough. Certainly, however, solving the problem in a more general, D-ish way than regex would be a much bigger win. Regex works only for a subset of problems (can't do recursive descent parsing with it).
Feb 07 2007
parent reply BCS <BCS pathlink.com> writes:
Walter Bright wrote:
 kenny wrote:
 
 I know I'm asking for a lot, but the way templates handle string are 
 still kinda weird to me. Would string parsing in this sort of way be 
 absolutely impossible with templates? I have not had good luck with it. 
I just haven't thought about this enough. Certainly, however, solving the problem in a more general, D-ish way than regex would be a much bigger win. Regex works only for a subset of problems (can't do recursive descent parsing with it).
As I see it the biggest problem with compile time parsing in D is that building non-linear structure is a pain. Tuples are implicitly concatenated when passed together, and this makes some things really hard. Allowing a tuple to be a member of another tuple would put D templates in the same class as LISP.

Another thing that might make things easier is some way to mark a template as "evaluate to value and abandon". This would cause the template to be processed but none of the symbols generated by it would be kept, only the value. Of course, suitable restrictions would apply.
Feb 07 2007
parent reply kenny <funisher gmail.com> writes:
BCS wrote:
 Walter Bright wrote:
 kenny wrote:

 I know I'm asking for a lot, but the way templates handle string are 
 still kinda weird to me. Would string parsing in this sort of way be 
 absolutely impossible with templates? I have not had good luck with it. 
I just haven't thought about this enough. Certainly, however, solving the problem in a more general, D-ish way than regex would be a much bigger win. Regex works only for a subset of problems (can't do recursive descent parsing with it).
As I see it the biggest problem with compile time parsing in D is that building non-linear structure is a pain. Tuples are implicitly concatenated when passed together, and this makes some things really hard. Allowing a tuple to be a member of another tuple would put D templates in the same class as LISP.

Another thing that might make things easier is some way to mark a template as "evaluate to value and abandon". This would cause the template to be processed but none of the symbols generated by it would be kept, only the value. Of course, suitable restrictions would apply.
So it would be like writing normal D code inside of a template? Could we use Phobos, or do more meta string functions like find, strip, etc. need to be reinvented in D? Or can I take those functions and just put a wrapper around them? I think this is getting into the security issues again, but something like this:

	auto my_text = meta trim_whitespace(import("myxml.xml"));
	// obviously a better keyword should be used, and it can only be used in global scope too

where trim_whitespace is an actual function that is actually defined up in the file somewhere, and will be compiled, used on the imported file, then discarded and the result stored into auto my_text?

A day or so ago, someone mentioned a .rc compiler. I personally would use it to parse stuff for interface elements .. to generate them dynamically. We already have libraries to parse XML (XHTML), CSS, and other config type things. It would be SUPER AWESOME to be able to just re-use them as a template without any other extra work!!! Oh man, that has me really excited, if that's possible :)

woah!
Feb 08 2007
next sibling parent BCS <ao pathlink.com> writes:
Reply to Kenny,

 BCS wrote:
 
 As I see it the biggest problem with compile time parsing in D is
 that building non linear structure is a pain. Tuples implicitly cated
 when passed together an this make some things really hard. Allowing a
 tuple to be a member of another tuple would put D template in the
 same class as LISP.
 
 Another things that might make things easier is some way to mark a
 template as "evaluate to value and abandon". This would cause the
 template to be processed but none of the symbols generated by it
 would be kept, only the value. Of course, suitable restrictions would
 apply.
 
so it would be like writing normal D code inside of a template? Could we use phobos or do more metastring functions like find, strip, etc. need to be reinvented in D. Or can I take those functions and just put a wrapper around it? I think this is getting into the security issues again, but something like this:
Errr. I was thinking of having this apply to the const folding stuff. Nothing that looks like runtime code would be allowed. The point would be to reduce the overhead of evaluating stuff. After thinking about it, I'm not sure that this isn't already done for templates that consist of only const declarations. This would result in exactly one string being added to the compile time data set.

	meta template rmwhite(char[] c)
	{
	    static if(c[0] != ' ')
	        const rmwhite = c;
	    else
	        const rmwhite = rmwhite!(c[1 .. $]);
	}
Feb 08 2007
prev sibling parent janderson <askme me.com> writes:
kenny wrote:
 BCS wrote:
 Walter Bright wrote:
 kenny wrote:

 I know I'm asking for a lot, but the way templates handle string are 
 still kinda weird to me. Would string parsing in this sort of way be 
 absolutely impossible with templates? I have not had good luck with it. 
I just haven't thought about this enough. Certainly, however, solving the problem in a more general, D-ish way than regex would be a much bigger win. Regex works only for a subset of problems (can't do recursive descent parsing with it).
As I see it the biggest problem with compile time parsing in D is that building non-linear structure is a pain. Tuples are implicitly concatenated when passed together, and this makes some things really hard. Allowing a tuple to be a member of another tuple would put D templates in the same class as LISP.

Another thing that might make things easier is some way to mark a template as "evaluate to value and abandon". This would cause the template to be processed but none of the symbols generated by it would be kept, only the value. Of course, suitable restrictions would apply.
so it would be like writing normal D code inside of a template? Could we use phobos or do more metastring functions like find, strip, etc. need to be reinvented in D. Or can I take those functions and just put a wrapper around it? I think this is getting into the security issues again, but something like this: auto my_text = meta trim_whitespace(import("myxml.xml")); // obviously a better keyword should be used, and can only be used in global scope too. where trim_whitespace is an actual function that is actually defined up in the file somewhere, and will be compiled, used on the import file, then discarded and the result stored into auto my_text? A day or so ago, someone mentioned a .rc compiler. I personally would use it to parse stuff for interface elements .. to generate them dynamically. We already have libraries to parse XML (XHTML), CSS, and other config type things.
I don't see any security issues if the compile-time language is restrictive enough. You could still load in files using the new import command.
 It would be SUPER AWESOME to be able to just re-use them as a template
 without any other extra work!!! Oh man, that has me really excited, if
 that's possible :)

 woah!
I agree, this is only a sub-set of what could be done. You could even try out / invent new language features for D before suggesting them on the newsgroup and post the code. Of course that may be a possibility with regex as well.

-Joel
Feb 08 2007
prev sibling next sibling parent "Chris Miller" <chris dprogramming.com> writes:
On Wed, 07 Feb 2007 12:04:10 -0500, kenny <funisher gmail.com> wrote:
 Walter, I don't hate regex -- I just don't use it. It seems to me that  
 to figure out regex syntax takes longer than writing quick for/while  
 statements, and I usually forget cases in regex too...

 just being able to write like I can in D with compile time variables  
 would be so much easier for me, and it would only require one template  
 function instead of 35 to parse a simple string... for example.

 1. A while back, I needed something very quickly to remove whitespace.  
 it took me much less time with loops than I ever could have done with a  
 regex. I want to be able to do the same in templates, if possible. I  
 will be trying to reproduce later this, but I think that it will require  
 a lot of templates.
I generally dislike regex for anything semi-complex. It's handy for simple things, like it's great as a find/replace feature in an editor, but anything more advanced and it's a huge pain.
Feb 07 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kenny wrote:
 Walter, I don't hate regex -- I just don't use it. It seems to me that 
 to figure out regex syntax takes longer than writing quick for/while 
 statements, and I usually forget cases in regex too...
I think this is an age-old issue: if you don't know something, you find it harder to do things that way. The telling sign is that people who know _both_ simple loops and regexes do use regexes, and as a consequence are way more productive at a certain category of tasks.
 just being able to write like I can in D with compile time variables 
 would be so much easier for me, and it would only require one template 
 function instead of 35 to parse a simple string... for example.
 
 1. A while back, I needed something very quickly to remove whitespace. 
 it took me much less time with loops than I ever could have done with a 
 regex. I want to be able to do the same in templates, if possible. I 
 will be trying to reproduce later this, but I think that it will require 
 a lot of templates.
 2. what about building associative arrays out of a string? I have this 
 function from existing code. It didn't take too long to write. I want to 
 be able to write something like this in templates to build assoc arrays 
 dynamically.
 
 I know I'm asking for a lot, but the way templates handle string are 
 still kinda weird to me. Would string parsing in this sort of way be 
 absolutely impossible with templates? I have not had good luck with it. 
 Perhaps I missed something...
That would require functional-style programming - which, of course, also seems hard before you learn it. So either way, we're hosed :o).

Andrei
Feb 07 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kenny wrote:
 Walter, I don't hate regex -- I just don't use it. It seems to me that 
 to figure out regex syntax takes longer than writing quick for/while 
 statements, and I usually forget cases in regex too...
I think this is an age-old issue: if you don't know something, you find it harder to do things that way. The telling sign is that people who know _both_ simple loops and regexes do use regexes, and as a consequence are way more productive at a certain category of tasks.
Hmm. More productive, probably. Writing better code? Not clear. I would guess that in many cases the results are not as easy to maintain as non-regexp code.

Anyway, I think the question is whether compile-time regexp is really the right level of abstraction to be targeting. Wouldn't it be infinitely better to have the compile-time code facilities be so good that you could just write a regexp parser as a compile-time D library? I mean what is regexp, but a particular DSL? If the new facilities are trying to make DSL's easier to create, regexp is a great target DSL. So what compile-time language facilities do you need to implement an efficient and clean compile-time regexp library?

It would be nice if we could write more-or-less generic D code with a few compile time restrictions. For instance you can write any function you want that takes only const values as arguments and returns a const value, and refers to only global const values and other such const-only functions.

--bb
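Bill's restriction could look something like this sketch (hypothetical; the function body is ordinary D, but since it only touches const inputs the compiler could fold the whole call at compile time):

	// Hypothetical: an ordinary function the compiler is allowed to
	// evaluate during compilation when all arguments are constants.
	int parseInteger(char[] s)
	{
	    int v = 0;
	    foreach (c; s)
	        v = v * 10 + (c - '0');
	    return v;
	}

	const int x = parseInteger("5678");  // folded to a value, no trail of instantiations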
Feb 07 2007
next sibling parent kris <foo bar.com> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 
 kenny wrote:

 Walter, I don't hate regex -- I just don't use it. It seems to me 
 that to figure out regex syntax takes longer than writing quick 
 for/while statements, and I usually forget cases in regex too...
I think this is an age-old issue: if you don't know something, you find it harder to do things that way. The telling sign is that people who know _both_ simple loops and regexes do use regexes, and as a consequence are way more productive at a certain category of tasks.
Hmm. More productive, probably. Writing better code? Not clear. I would guess that in many cases the results are not as easy to maintain as non-regexp code.

Anyway, I think the question is whether compile-time regexp is really the right level of abstraction to be targeting. Wouldn't it be infinitely better to have the compile-time code facilities be so good that you could just write a regexp parser as a compile-time D library? I mean what is regexp, but a particular DSL? If the new facilities are trying to make DSL's easier to create, regexp is a great target DSL. So what compile-time language facilities do you need to implement an efficient and clean compile-time regexp library?

It would be nice if we could write more-or-less generic D code with a few compile time restrictions. For instance you can write any function you want that takes only const values as arguments and returns a const value, and refers to only global const values and other such const-only functions.

--bb
bump+
Feb 07 2007
prev sibling next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 kenny wrote:
 Walter, I don't hate regex -- I just don't use it. It seems to me 
 that to figure out regex syntax takes longer than writing quick 
 for/while statements, and I usually forget cases in regex too...
I think this is an age-old issue: if you don't know something, you find it harder to do things that way. The telling sign is that people who know _both_ simple loops and regexes do use regexes, and as a consequence are way more productive at a certain category of tasks.
Hmm. More productive, probably. Writing better code? Not clear. I would guess that in many cases the results are not as easy to maintain as non-regexp code.
I don't think the guess is that right. Following the logic of even a simple parsing task (e.g. a floating-point number in all of its splendor) is horrendous. For somebody who knows regexes, the pattern is obvious in a second. I do agree that code written by somebody who knows regexes is hard to maintain by somebody who does not know regexes, but that's pretty much self-understood and goes with any other technique.

All I can say is that I got significantly enriched and more effective as a programmer at large after I sat down and understood Perl's regex bestiary. I now see my previous arguments against them as rationalizations of my resistance to go through the effort of learning.

Again comparing myself with my former self, I understand it's hard to discuss relative advantages and disadvantages with someone who doesn't know them because of a bootstrap problem: I say they make code much simpler and easier to comprehend, while my former self would say exactly the opposite. It's pretty much like math notation, eating vegetables, or classical music: it's hard to bootstrap oneself into appreciating it.
 Anyway, I think the question is whether compile-time regexp is really 
 the right level of abstraction to be targeting.  Wouldn't it be 
 infinitely better to have the compile-time code facilities be so good 
 that you could just write a regexp parser as a compile-time D library?
This is possible in today's D. The problem is that it would be a Pyrrhic victory: the resulting engine would be very slow and big. I do agree that it would be nice to look into creating compile-time amenities that make such an engine fast and small.
 I mean what is regexp, but a particular DSL?  If the new facilities are 
 trying to make DSL's easier to create, regexp is a great target DSL.  So 
 what compile-time language facilities do you need to implement an 
 efficient and clean compile-time regexp library?
Conceptually, you'd need the following: (1) compile-time functions, (2) compile-time mutable variables, and (3) compile-time loops. We already have the rest. Then you can write compile-time code as comfortably as writing run-of-the-mill run-time code. D is heading that way, but with small steps.

Implementation-wise, string-based templates must be made cheaper. If we'll have compile-time mutation, probably this is not going to be much of a problem, because much functional-style code can be rewritten using mutation. I personally enjoy functional-style code, but it's not really needed during compilation and is a bit foreign from the rest of D, which remains largely imperative.
 It would be nice if we could write more-or-less generic D code with a 
 few compile time restrictions.  For instance you can write any function 
 you want that takes only const values as arguments and returns a const 
 value, and refers to only global const values and other such const-only 
 functions.
Templates already do that, albeit with a slightly odd syntax. But stay tuned, Walter is eyeing $ as the prefix to denote compile-time variables, and sure enough, compile-time functions will then emerge naturally :o). Andrei
Feb 07 2007
parent reply janderson <askme me.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 
 Templates already do that, albeit with a slightly odd syntax. But stay 
 tuned, Walter is eyeing $ as the prefix to denote compile-time 
 variables, and sure enough, compile-time functions will then emerge 
 naturally :o).
 
 
 Andrei
While it's good that Walter is considering compile-time variables, I don't see why you need a symbol like $ inside a template. Of course, if you're going to use them outside, then you do. I think template code could look almost the same as normal code, which would make it much more writable/readable and reusable.

-Joel
Feb 09 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
janderson wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:

 Templates already do that, albeit with a slightly odd syntax. But stay 
 tuned, Walter is eyeing $ as the prefix to denote compile-time 
 variables, and sure enough, compile-time functions will then emerge 
 naturally :o).


 Andrei
While it's good that Walter is considering compile-time variables, I don't see why you need a symbol like $ inside a template. Of course, if you're going to use them outside, then you do. I think template code could look almost the same as normal code, which would make it much more writable/readable and reusable.
I think the same. We need to convince Walter :o). Andrei
Feb 09 2007
prev sibling parent Lars Ivar Igesund <larsivar igesund.net> writes:
Bill Baxter wrote:

 Andrei Alexandrescu (See Website For Email) wrote:
 kenny wrote:
 Walter, I don't hate regex -- I just don't use it. It seems to me that
 to figure out regex syntax takes longer than writing quick for/while
 statements, and I usually forget cases in regex too...
I think this is an age-old issue: if you don't know something, you find it harder to do things that way. The telling sign is that people who know _both_ simple loops and regexes do use regexes, and as a consequence are way more productive at a certain category of tasks.
Hmm. More productive, probably. Writing better code? Not clear. I would guess that in many cases the results are not as easy to maintain as non-regexp code.

Anyway, I think the question is whether compile-time regexp is really the right level of abstraction to be targeting. Wouldn't it be infinitely better to have the compile-time code facilities be so good that you could just write a regexp parser as a compile-time D library? I mean, what is regexp but a particular DSL? If the new facilities are trying to make DSLs easier to create, regexp is a great target DSL. So what compile-time language facilities do you need to implement an efficient and clean compile-time regexp library?

It would be nice if we could write more-or-less generic D code with a few compile-time restrictions. For instance, you can write any function you want that takes only const values as arguments and returns a const value, and refers only to global const values and other such const-only functions.

--bb
I very much agree with Bill here, because that will give such features true power in a much wider space. In addition, it was mentioned early on which regex syntax should be used - and then we're onto the fact that regexes look and operate differently in different settings - some are enhanced with this, and some with that (and many seem to think std.regex is somewhat badly behaved). Having this easily implementable as (compile-time) libraries would allow for (or at least more likely inspire) as many as users feel are needed.

-- 
Lars Ivar Igesund
blog at http://larsivi.net
DSource & #D: larsivi
Dancing the Tango
Feb 08 2007
prev sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:

     ("aa|b" ~~ "ababb") would evaluate to "ab"

 I expect one would generally only see this kind of thing inside 
 templates, not user code.
The more traditional way is to mention the string first and pattern second, so: ("ababb" ~~ "aa|b") // match this guy against this pattern And I think it returns "b" - juxtaposition has a higher priority than "|", so your pattern is "either two a's or one b". :o)
My bad. Some more things to think about: 1) Returning the left match, the match, the right match?
Perl does allow that (it has, IIRC, $` and $' to mark the left and right surrounding substrings), but the recommended style is to use capturing parens if you need the left and right portions; this makes all matching code more efficient. So if you want to match the left and right substrings as well, you say:

    ("ababb" ~~ "(.*)(aa|b)(.*)")

and you get in return three juicy strings: left, match, and right.
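Since the proposed D operator is hypothetical, here is the same capturing-parens idea sketched at run time with Python's re module; the lazy `(.*?)` on the left is used so the left group corresponds to the leftmost match, as Perl's $` would:

```python
import re

# Mirror of ("ababb" ~~ "(.*)(aa|b)(.*)") using Python's re module.
# The lazy (.*?) reproduces leftmost-match semantics for the left part.
m = re.match(r"(.*?)(aa|b)(.*)", "ababb")
left, matched, right = m.groups()  # "a", "b", "abb"
```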
 2) Returning values of parenthesized expressions?
Probably it's easiest to always return const char[][]. If you don't have capturing parens, you could return const char[].
 3) Some sort of sed-like replacement syntax?
Definitely; otherwise it's a pain to express, particularly because you can't mutate things during compilation.

    ("ababb" ~~ s/"(.*)(aa|b)(.*)"/"$1 here was an aa|b $2"/i)

(This doesn't make 's' a keyword; it's just used as punctuation.) Probably a more D-like syntax could be devised, but that could also be seen as gratuitous incompatibility with sed, perl, etc. The last "/" is useful because flags can follow it, as is the case here (i = ignore case).
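For comparison, a rough run-time Python analogue of the sed-style replacement above; the replacement text is made up for illustration, and the trailing "i" flag is mapped to re.IGNORECASE:

```python
import re

# Substitute every match of aa|b, case-insensitively, wrapping it in
# angle brackets; \1 refers to the first capturing group, much like
# $1 in the sed/perl-style syntax sketched above.
result = re.sub(r"(aa|b)", r"<\1>", "AbaBB", flags=re.IGNORECASE)
# result == "A<b>a<B><B>"
```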
 An alternative is to have the compiler recognize std.Regexp names as 
 being built-in.
Blech. :o) Andrei
Feb 07 2007
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings that 
 can be done are:
 
 1) indexed access
 2) slicing
 3) comparison
 4) getting the length
 5) concatenation
 
 Any other functionality can be built up from these using template 
 metaprogramming.
 
 The problem is that parsing strings using templates generates a large 
 number of template instantiations, is (relatively) very slow, and 
 consumes a lot of memory (at compile time, not runtime). For example, 
 ParseInteger would need 4 template instantiations to parse 5678, and 
 each template instantiation would also include the rest of the input as 
 part of the template instantiation's mangled name.
 
 At some point, this will prove a barrier to large scale use of this 
 feature.
 
 Andrei suggested using compile time regular expressions to shoulder much 
 of the burden, reducing parsing of any particular token to one 
 instantiation.
That would help, I suppose, but at the same time regexps themselves have a tendency to end up being 'write-only' code. The heavy use of them in Perl is, I think, a large part of what gives it a rep as a write-only language. Heh heh. I just found this regexp for matching RFC 822 email addresses:

    http://www.regular-expressions.info/email.html

(the one at the bottom of the page)

--bb
Feb 07 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 That would help I suppose, but at the same time regexps themselves have 
 a tendency to end up being 'write-only' code.  The heavy use of them in 
 perl is I think a large part of what gives it a rep as a write-only 
 language.   Heh heh.  I just found this regexp for matching RFC 822 
 email addresses:
     http://www.regular-expressions.info/email.html
 (the one at the bottom of the page)
I agree that non-trivial regexes can be pretty intimidating - but writing templates to do the same will be even more intimidating.
Feb 07 2007
next sibling parent Kyle Furlong <kylefurlong gmail.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 That would help I suppose, but at the same time regexps themselves 
 have a tendency to end up being 'write-only' code.  The heavy use of 
 them in perl is I think a large part of what gives it a rep as a 
 write-only language.   Heh heh.  I just found this regexp for matching 
 RFC 822 email addresses:
     http://www.regular-expressions.info/email.html
 (the one at the bottom of the page)
I agree that non-trivial regexes can be pretty intimidating - but writing templates to do the same will be even more intimidating.
I disagree that any D code could be more intimidating than a 6k+ character regex string. <g>
Feb 07 2007
prev sibling parent janderson <askme me.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 That would help I suppose, but at the same time regexps themselves 
 have a tendency to end up being 'write-only' code.  The heavy use of 
 them in perl is I think a large part of what gives it a rep as a 
 write-only language.   Heh heh.  I just found this regexp for matching 
 RFC 822 email addresses:
     http://www.regular-expressions.info/email.html
 (the one at the bottom of the page)
I agree that non-trivial regexes can be pretty intimidating - but writing templates to do the same will be even more intimidating.
Agreed! They are both intimidating, which is not a good thing. There has to be a better way.

-Joel
Feb 08 2007
prev sibling next sibling parent Chris Nicholson-Sauls <ibisbasenji gmail.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings 
 that can be done are:

 1) indexed access
 2) slicing
 3) comparison
 4) getting the length
 5) concatenation

 Any other functionality can be built up from these using template 
 metaprogramming.

 The problem is that parsing strings using templates generates a large 
 number of template instantiations, is (relatively) very slow, and 
 consumes a lot of memory (at compile time, not runtime). For example, 
 ParseInteger would need 4 template instantiations to parse 5678, and 
 each template instantiation would also include the rest of the input 
 as part of the template instantiation's mangled name.

 At some point, this will prove a barrier to large scale use of this 
 feature.

 Andrei suggested using compile time regular expressions to shoulder 
 much of the burden, reducing parsing of any particular token to one 
 instantiation.
That would help I suppose, but at the same time regexps themselves have a tendancy to end up being 'write-only' code. The heavy use of them in perl is I think a large part of what gives it a rep as a write-only language. Heh heh. I just found this regexp for matching RFC 822 email addresses: http://www.regular-expressions.info/email.html (the one at the bottom of the page) --bb
Wow... I'm actually missing hair now just from trying to read that. My internal regexp engine crashed, too -- had a neural buffer overflow. -- Chris Nicholson-Sauls
Feb 07 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings 
 that can be done are:

 1) indexed access
 2) slicing
 3) comparison
 4) getting the length
 5) concatenation

 Any other functionality can be built up from these using template 
 metaprogramming.

 The problem is that parsing strings using templates generates a large 
 number of template instantiations, is (relatively) very slow, and 
 consumes a lot of memory (at compile time, not runtime). For example, 
 ParseInteger would need 4 template instantiations to parse 5678, and 
 each template instantiation would also include the rest of the input 
 as part of the template instantiation's mangled name.

 At some point, this will prove a barrier to large scale use of this 
 feature.

 Andrei suggested using compile time regular expressions to shoulder 
 much of the burden, reducing parsing of any particular token to one 
 instantiation.
That would help I suppose, but at the same time regexps themselves have a tendancy to end up being 'write-only' code. The heavy use of them in perl is I think a large part of what gives it a rep as a write-only language. Heh heh. I just found this regexp for matching RFC 822 email addresses: http://www.regular-expressions.info/email.html (the one at the bottom of the page)
I think this must be qualified and understood in context. First, Perl's reputation for write-only code has much to do with its implicit variables and generous syntax. The Perl regexps are the standard that all other regexp packages emulate and compare against.

Showcasing the raw RFC 822 email parsing regexp is not very telling. Notice there's a lot of repetition. With symbols, the grammar is very easy to implement with readable regular expressions - and this is how anyone in their right mind would do it.

Andrei
Feb 07 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings 
 that can be done are:

 1) indexed access
 2) slicing
 3) comparison
 4) getting the length
 5) concatenation

 Any other functionality can be built up from these using template 
 metaprogramming.

 The problem is that parsing strings using templates generates a large 
 number of template instantiations, is (relatively) very slow, and 
 consumes a lot of memory (at compile time, not runtime). For example, 
 ParseInteger would need 4 template instantiations to parse 5678, and 
 each template instantiation would also include the rest of the input 
 as part of the template instantiation's mangled name.

 At some point, this will prove a barrier to large scale use of this 
 feature.

 Andrei suggested using compile time regular expressions to shoulder 
 much of the burden, reducing parsing of any particular token to one 
 instantiation.
That would help I suppose, but at the same time regexps themselves have a tendancy to end up being 'write-only' code. The heavy use of them in perl is I think a large part of what gives it a rep as a write-only language. Heh heh. I just found this regexp for matching RFC 822 email addresses: http://www.regular-expressions.info/email.html (the one at the bottom of the page)
I think this must be qualified and understood in context. First, much of Perl's reputation of write-only code has much to do with the implicit variables and the generous syntax. The Perl regexps are a standard that all other regexp packages emulate and compare against.
Agreed. Implicit variables also make things tough to follow. Regexps also contribute to Perl's reputation for looking like line noise.

But I like Perl, actually. And regular expressions are ok too, but I feel like they're not optimal for writing maintainable code. They tend to look like line noise. They're difficult to comment effectively. And they're certainly not suited for certain tasks; if you try to use them for something they're not particularly good at, they get very messy.

Unfortunately, a lot of what they're not good at is exactly the kind of thing you *need* them to be good at for parsing/generating code. Like parenthesis balancing, or nested comment parsing, or quoted string munching. They can be a good tool, but if they're the only tool, or even the main tool, I think we're in trouble.
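The nesting point is the classic limitation of regular languages: a regex cannot count arbitrary depth, but a plain counter loop can. A minimal Python sketch (the function name and the D-style /+ +/ nested-comment syntax are just illustrative):

```python
# A classical regex cannot track arbitrary nesting depth; a counter
# loop can. Checks D-style nested /+ +/ comments for balance.
def balanced(src):
    depth = 0
    i = 0
    while i < len(src) - 1:
        pair = src[i:i + 2]
        if pair == "/+":
            depth += 1
            i += 2
        elif pair == "+/":
            depth -= 1
            if depth < 0:
                return False  # closed a comment that was never opened
            i += 2
        else:
            i += 1
    return depth == 0

# balanced("/+ a /+ b +/ c +/") is True; balanced("/+ /+ +/") is False
```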
 Showcasing the raw RFC 822 email parsing regexp is not very telling. 
 Notice there's a lot of repetition. With symbols, the grammar is very 
 easy to implement with readable regular expressions - and this is how 
 anyone in their right mind would do it.
True, it's not a realistic example. The page says as much, and includes several versions that are more realistic. Here's the recommended one:

    \b[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b

Ok, that's not so bad, but throw in a few sets of capturing parentheses here and there, and it starts to look pretty messy.

In my opinion, regexps are too dense and full of abbreviations. And the typical methods for creating them don't encourage encapsulation and abstraction, which are the foundations of software. For instance, every time you look at the above you have to re-interpret what [A-Z0-9._%-] really means. When I'm writing regular expressions I always have to have that chart next to me to remember all those \s \b \w \S \W codes, and then again when trying to figure out what the code does later. There has to be a better way. Apparently the Perl guys think so too, because they're redoing regular expressions completely for Perl 6.

--bb
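The re-interpretation problem can be eased by naming each sub-pattern once. A sketch in Python, using the character classes from the recommended email regex above (with the '@' that the archive's address munging strips restored; the names LOCAL/DOMAIN/TLD are made up for illustration):

```python
import re

# Name each character class once instead of re-reading it inline.
LOCAL = r"[A-Z0-9._%-]+"
DOMAIN = r"[A-Z0-9.-]+"
TLD = r"[A-Z]{2,4}"
EMAIL = r"\b" + LOCAL + "@" + DOMAIN + r"\." + TLD + r"\b"

m = re.search(EMAIL, "mail me at JOE@EXAMPLE.COM today")
# m.group() == "JOE@EXAMPLE.COM"
```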
Feb 07 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
[snip]
 I my opinion about regexps is that they're too dense and full of 
 abbreviations.  And the typical methods for creating them don't 
 encourage encapsulation and abstraction, which are the foundations of 
 software.  For instance, every time you look at the above you have to 
 re-interpret what [A-Z0-9._%-] really means.  When I'm writing regular 
 expressions I always have to have that chart next to me to remember all 
 those \s \b \w \S \W \ codes, and then again when trying to figure out 
 what the code does later.  There has to be a better way.  Apparently the 
 Perl guys thing so too, because they're redoing regular expressions 
 completely for Perl 6.
(Well not completely.) That's why we should keep a close eye on those. The Perl community is much more experienced with regex usage than me and possibly yourself. I just want us to not delude ourselves with the idea that we could just sit down and write a better regex syntax just because we don't remember what \s and \b mean. (I happen to remember. :o)) Andrei
Feb 07 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 [snip]
 I my opinion about regexps is that they're too dense and full of 
 abbreviations.  And the typical methods for creating them don't 
 encourage encapsulation and abstraction, which are the foundations of 
 software.  For instance, every time you look at the above you have to 
 re-interpret what [A-Z0-9._%-] really means.  When I'm writing regular 
 expressions I always have to have that chart next to me to remember 
 all those \s \b \w \S \W \ codes, and then again when trying to figure 
 out what the code does later.  There has to be a better way.  
 Apparently the Perl guys thing so too, because they're redoing regular 
 expressions completely for Perl 6.
(Well not completely.) That's why we should keep a close eye on those. The Perl community is much more experienced with regex usage than me and possibly yourself. I just want us to not delude ourselves with the idea that we could just sit down and write a better regex syntax just because we don't remember what \s and \b mean. (I happen to remember. :o))
Yes, and I don't want us to go and make Perl5-ish regular expressions part of the core D language spec without understanding how and why that very expert Perl community is changing their regular expressions in the next round. I haven't followed developments with Perl 6 closely, though; I just glanced at the link someone posted the other day.

I also don't want us to make regexp part of the language spec without thoroughly ruling out the potentially much cooler ability to write that regexp parser using more fundamental but yet-to-be-invented building blocks.

--bb
Feb 07 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 [snip]
 I my opinion about regexps is that they're too dense and full of 
 abbreviations.  And the typical methods for creating them don't 
 encourage encapsulation and abstraction, which are the foundations of 
 software.  For instance, every time you look at the above you have to 
 re-interpret what [A-Z0-9._%-] really means.  When I'm writing 
 regular expressions I always have to have that chart next to me to 
 remember all those \s \b \w \S \W \ codes, and then again when trying 
 to figure out what the code does later.  There has to be a better 
 way.  Apparently the Perl guys thing so too, because they're redoing 
 regular expressions completely for Perl 6.
(Well not completely.) That's why we should keep a close eye on those. The Perl community is much more experienced with regex usage than me and possibly yourself. I just want us to not delude ourselves with the idea that we could just sit down and write a better regex syntax just because we don't remember what \s and \b mean. (I happen to remember. :o))
Yes and I don't want us to go and make Perl5-ish regular expressions part of the core D language spec without understanding how and why that very expert Perl community is changing their regular expressions in the next round. I haven't followed developments with Perl 6 closely, though. Just glanced at the link someone posted the other day.
I did. Perl 6 is going to be great. The regexes had a few warts that were overdue for a fix. The spirit remains the same, and the new full-fledged grammars will take care of the larger parsing tasks.
 I also don't want us to go make regexp part of the language spec without 
 thoroughly ruling out the potentially much cooler ability to write that 
 regexp parser using more fundamental but yet-to-be-invented building 
 blocks.
I think that's a great spirit. Andrei
Feb 07 2007
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 
 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:
 
     ("aa|b" ~~ "ababb") would evaluate to "ab"
 
 I expect one would generally only see this kind of thing inside 
 templates, not user code.
Just a quick comment--I want to think about this a bit more. If we are given compile-time regular expressions it may be useful if we could obtain more information than this. For example, I would probably also want to know where in the source string the match begins. Sean
Feb 07 2007
parent Walter Bright <newshound digitalmars.com> writes:
Sean Kelly wrote:
 Just a quick comment--I want to think about this a bit more.  If we are 
 given compile-time regular expressions it may be useful if we could 
 obtain more information than this.  For example, I would probably also 
 want to know where in the source string the match begins.
One idea is to have it return an array of strings, and then you'd index that to get the desired result string.
Feb 07 2007
prev sibling next sibling parent reply janderson <askme me.com> writes:
Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings that 
 can be done are:
 
 At some point, this will prove a barrier to large scale use of this 
 feature.
While I'm a fan of regexes, I'm not sure this meets the goal of scale. I can imagine that regexes + templates will get unreadable (or at least very slow to read) as programs get larger.

-Joel
Feb 07 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
janderson wrote:
 Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings 
 that can be done are:

 At some point, this will prove a barrier to large scale use of this 
 feature.
While I'm a fan of regex I'm not sure it meets the goal of scale. I can imagine that regex expressions + templates will get unreadable (or at least very slow to read) as programs get larger.
Symbols and modularity will definitely go a long way to help this. A one-shot long regexp is pretty intimidating, but one containing appropriate symbols all of a sudden starts looking like... a clean sequence of tokens. Andrei
Feb 07 2007
prev sibling next sibling parent reply Miles <_______ _______.____> writes:
Walter Bright wrote:
 The problem is that parsing strings using templates generates a large
 number of template instantiations, is (relatively) very slow, and
 consumes a lot of memory (at compile time, not runtime). For example,
 ParseInteger would need 4 template instantiations to parse 5678,
Why instead of doing perversions with templates, don't you add a proper compile-time D interpreter for the purposes of generating code? Doing loops with template recursion sucks a lot. It really looks the wrong approach for the problem. D already has static if() and tuple-foreach(). Just adding compile-time variables and a static for() will be great. A compile-time D interpreter will be awesome.
Feb 07 2007
parent reply Kyle Furlong <kylefurlong gmail.com> writes:
Miles wrote:
 Walter Bright wrote:
 The problem is that parsing strings using templates generates a large
 number of template instantiations, is (relatively) very slow, and
 consumes a lot of memory (at compile time, not runtime). For example,
 ParseInteger would need 4 template instantiations to parse 5678,
Why instead of doing perversions with templates, don't you add a proper compile-time D interpreter for the purposes of generating code? Doing loops with template recursion sucks a lot. It really looks the wrong approach for the problem. D already has static if() and tuple-foreach(). Just adding compile-time variables and a static for() will be great. A compile-time D interpreter will be awesome.
As an example of this, I recently was playing with the new mixins, and wanted to mix in some number n of structs. I quickly realized that this would not be as trivial as

    static for(int i = 0; i < n; i++)
    {
        mixin("my struct code");
    }

but ended up being:

    import std.metastrings;

    template ManyStructs(int count)
    {
        static if(count == 1)
        {
            const char[] ManyStructsImpl = "struct Struct" ~ ToString!(count)
                ~ " { int baz = " ~ ToString!(count) ~ "; }";
        }
        else
        {
            const char[] ManyStructsImpl = "struct Struct" ~ ToString!(count)
                ~ " { int baz = " ~ ToString!(count) ~ "; }"
                ~ ManyStructs!(count - 1).ManyStructsImpl;
        }
    }

    mixin(ManyStructs!(10).ManyStructsImpl);

Clearly the former is much more succinct.
Feb 07 2007
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Kyle Furlong wrote:
 import std.metastrings;
 
 template ManyStructs(int count)
 {
     static if(count == 1)
     {
         const char[] ManyStructsImpl = "struct Struct" ~ 
 ToString!(count) ~ " { int baz = "~ToString!(count)~"; }";
     }
     else
     {
         const char[] ManyStructsImpl = "struct Struct" ~ 
 ToString!(count) ~ " { int baz = "~ToString!(count)~"; 
 }"~ManyStructsImpl!(count - 1);
     }
 }
 
 mixin(ManyStructs!());
 
 Clearly the former is much more succinct.
Ending the loop at 0 instead of 1:
---
import std.metastrings;

template ManyStructs(int count)
{
    static if(count == 0)
    {
        const char[] ManyStructsImpl = "";
    }
    else
    {
        const char[] ManyStructsImpl = "struct Struct" ~ ToString!(count)
            ~ " { int baz = " ~ ToString!(count) ~ "; }"
            ~ ManyStructs!(count - 1).ManyStructsImpl;
    }
}
---
Duplicate code is bad :P.
Feb 08 2007
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 At some point, this will prove a barrier to large scale use of this 
 feature.
I agree, though I'm not sure this feature will see large scale use either way. Template metaprogramming is still very uncommon outside of library code.
 Andrei suggested using compile time regular expressions to shoulder much 
 of the burden, reducing parsing of any particular token to one 
 instantiation.
 
 The last time I introduced core regular expressions into D, it was 
 soundly rejected by the community and was withdrawn, and for good reasons.
 
 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:
 
     ("aa|b" ~~ "ababb") would evaluate to "ab"
 
 I expect one would generally only see this kind of thing inside 
 templates, not user code.
I agree that this would eliminate the need for a lot of template library code and would speed compilation for applications using such techniques. I am still unsure whether this is sufficient to warrant its inclusion in the language, but I'm not strongly opposed to the idea.

However, for this to be useful I'd like to reiterate that I would want some way to continue parsing after the match point. The most obvious would be to return an index/string pair where the index contains the position of the match in the source string, or, as you mentioned, perhaps an array consisting of three slices: the source string preceding the match, the match itself, and the source string following the match.

Sean
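The index-based alternative Sean describes can be sketched with Python's match object, which exposes exactly that position information (run-time Python standing in for the hypothetical compile-time D feature):

```python
import re

# Recover the three slices from the match position: the source before
# the match, the match itself, and the source after it.
src = "ababb"
m = re.search(r"aa|b", src)
pre, hit, post = src[:m.start()], m.group(), src[m.end():]
# pre == "a", hit == "b", post == "abb"
```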
Feb 07 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Sean Kelly wrote:
 Walter Bright wrote:
  >
 At some point, this will prove a barrier to large scale use of this 
 feature.
I agree, though I'm not sure this feature will see large scale use either way. Template metaprogramming is still very uncommon outside of library code.
If we want to make D a language for the future, we must thoroughly rid ourselves of such a view.
 Andrei suggested using compile time regular expressions to shoulder 
 much of the burden, reducing parsing of any particular token to one 
 instantiation.

 The last time I introduced core regular expressions into D, it was 
 soundly rejected by the community and was withdrawn, and for good 
 reasons.

 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:

     ("aa|b" ~~ "ababb") would evaluate to "ab"

 I expect one would generally only see this kind of thing inside 
 templates, not user code.
I agree that this would eliminate the need for a lot of template library code and would speed compilation for applications using such techniques. I am still unsure whether this is sufficient to warrant its inclusion to the language, but I'm not strongly opposed to the idea. However, for this to be useful I'd like to reiterate that I would want some way to continue parsing after the match point. The most obvious would be to return an index/string pair where the index contains the position of the match in the source string, or as you mentioned, perhaps an array consisting of three slices: the source string preceding the match, the match itself, and the source string following the match.
Parens will allow grouping much like in Perl. If a regex contains groupings, then the result will be a compile-time array with the matches. All you have to do then is to group subparts appropriately, e.g.:

    ("templated regex rocks" ~~ "([a-z]+) +(.*)")

returns a compile-time array ["templated", "regex rocks"].

Andrei
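The grouping example behaves the same way in any Perl-style engine; here it is mirrored at run time in Python, standing in for the proposed compile-time array result:

```python
import re

# Each capturing paren contributes one element of the result,
# mirroring ("templated regex rocks" ~~ "([a-z]+) +(.*)").
m = re.match(r"([a-z]+) +(.*)", "templated regex rocks")
parts = list(m.groups())  # ["templated", "regex rocks"]
```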
Feb 07 2007
prev sibling next sibling parent reply kris <foo bar.com> writes:
Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings that 
 can be done are:
 
 1) indexed access
 2) slicing
 3) comparison
 4) getting the length
 5) concatenation
 
 Any other functionality can be built up from these using template 
 metaprogramming.
 
 The problem is that parsing strings using templates generates a large 
 number of template instantiations, is (relatively) very slow, and 
 consumes a lot of memory (at compile time, not runtime). For example, 
 ParseInteger would need 4 template instantiations to parse 5678, and 
 each template instantiation would also include the rest of the input as 
 part of the template instantiation's mangled name.
 
 At some point, this will prove a barrier to large scale use of this 
 feature.
 
 Andrei suggested using compile time regular expressions to shoulder much 
 of the burden, reducing parsing of any particular token to one 
 instantiation.
 
 The last time I introduced core regular expressions into D, it was 
 soundly rejected by the community and was withdrawn, and for good reasons.
 
 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:
 
     ("aa|b" ~~ "ababb") would evaluate to "ab"
 
 I expect one would generally only see this kind of thing inside 
 templates, not user code.
compile-time regex is only part of the picture. A small one too. I rather expect we'd wind up finding the manner it was exposed was just too limiting in one way or another. Exposing, as was apparently suggested, the full API of RegExp inside the compiler sounds a tad distasteful.

You'll perhaps forgive me if I question whether this is driven primarily from an academic interest? What I mean is this: if and when D goes mainstream, perhaps just one in ten-thousand developers will actually use this kind of feature more than 5 times (and still find themselves limited). Perhaps I'm being generous with those numbers also?

What is wrong with runtime execution anyway? It sure is easier to write and maintain clean D code than (for many ppl) complex concepts that amount to nothing more than runtime optimizations. Isn't that true?

It would seem that adding such features does not address the type of things that would be useful to 80% of developers? Surely that should be far more important?

And, no ... I'm not just pooh-poohing the idea ... I'm really serious about D getting some realistic market traction, and I don't see how adding more compile-time 'specialities' can help in any way other than generating a little bit of 'novelty' interest. Isn't this a good example of "premature optimization"?

Surely some of the other long-term concerns, such as solid debugging support, simmering code/dataseg bloat, lib support for templates, etc, etc, should deserve full attention instead? Surely that is a more successful approach to getting D adopted in the marketplace?

Lots of questions, and I hope you can give them serious consideration, Walter.

- Kris
Feb 07 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
kris wrote:
 compile-time regex is only part of the picture. A small one too. I 
 rather expect we'd wind up finding the manner it was exposed was just 
 too limiting in one way or another. Exposing, as was apparently 
 suggested, the full API of RegExp inside the compiler sounds a tad 
 distasteful.
I tend to agree with that.
 You'll perhaps forgive me if I question whether this is driven primarily 
 from an academic interest?  What I mean is this: if and when D goes 
 mainstream, perhaps just one in ten-thousand developers will actually 
 use this kind of feature more than 5 times (and still find themselves 
 limited). Perhaps I'm being generous with those numbers also?
 
 What is wrong with runtime execution anyway? It sure is easier to write 
 and maintain clean D code than (for many ppl) complex concepts that are, 
 what amount to, nothing more than runtime optimizations. Isn't that true?
 
 It would seem that adding such features does not address the type of 
 things that would be useful to 80% of developers? Surely that should be 
 far more important?
Good question. The simple answer is to look at what Ruby on Rails did for Ruby. Ruby's a good language, but the killer app for it was RoR. RoR is what drove adoption of Ruby through the roof. Enabling ways for sophisticated DSLs to interoperate with D will enable such applications.

Probably the only killer C++ Boost library is the Spirit library, which I have looked over with envious eyes. The way it works (expression templates) is incredibly difficult for library writers to create, and even the result is pretty quirky. But there's no denying how useful people find it.

So I feel that by enabling easy DSL writing, we open the door to a much wider range of libraries being written for D, and making libraries easy to write is (I think we all agree) key to success.

It isn't at all about runtime optimization. It's about, for example, the ability to create a specialized matrix manipulation language, complete with user defined operators, etc., and have it 'compile' to regular D code.
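The "DSL compiling to regular D" idea can be sketched with the string-mixin machinery under discussion. Here `lower()` and its one-rule mini-language are invented for this sketch, not anything proposed in the thread; the point is only that the translation runs at compile time and the output is ordinary D:

```d
// Toy DSL "front end": rewrite the mini-language's "x <- y" assignment
// into plain D "x = y;". Runs entirely at compile time via CTFE.
string lower(string expr)
{
    foreach (i; 0 .. expr.length - 1)
        if (expr[i .. i + 2] == "<-")
            return expr[0 .. i] ~ " = " ~ expr[i + 2 .. $] ~ ";";
    return expr ~ ";";
}

void main()
{
    int a, b = 3;
    // The mixin compiles the generated D right next to handwritten code,
    // in the same scope, seeing the same symbols.
    mixin(lower("a <- b"));
    assert(a == 3);
}
```

A real matrix DSL would need a real parser, but the pipeline is the same: compile-time string in, D source out, mixin compiles it.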
 And, no ... I'm not just pooh poohing the idea ... I'm really serious 
 about D getting some realistic market traction, and I don't see how 
 adding more compile-time 'specialities' can help in any way other than 
 generating a little bit of 'novelty' interest. Isn't this a good example 
 of "premature optimization" ?
No, I don't think it is at all about optimization.
 Surely some of the others long-term concerns, such as solid debugging 
 support, simmering code/dataseg bloat, lib support for templates, etc, 
 etc, should deserve full attention instead? Surely that is a more 
 successful approach to getting D adopted in the marketplace?
Those are all extremely important, too.
Feb 07 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Walter Bright wrote:
 kris wrote:
 Surely some of the others long-term concerns, such as solid debugging 
 support, simmering code/dataseg bloat, lib support for templates, etc, 
 etc, should deserve full attention instead? Surely that is a more 
 successful approach to getting D adopted in the marketplace?
Those are all extremely important, too.
I wish to add that if you look at the changelog, the bread and butter issues (see the list of bugs fixed) get a solid share of attention.
Feb 07 2007
next sibling parent kris <foo bar.com> writes:
Walter Bright wrote:
 Walter Bright wrote:
 
 kris wrote:

 Surely some of the others long-term concerns, such as solid debugging 
 support, simmering code/dataseg bloat, lib support for templates, 
 etc, etc, should deserve full attention instead? Surely that is a 
 more successful approach to getting D adopted in the marketplace?
Those are all extremely important, too.
I wish to add that if you look at the changelog, the bread and butter issues (see the list of bugs fixed) get a solid share of attention.
Yes, you're quite right, and I hope there was no impression given to the contrary. The concern for many of us is purely about D getting mainstream traction. That's why we keep harping on about things that matter to the majority of developers :)

I hate to say this (because it's somewhat tricky) but I suppose it's perhaps a question of priorities? Are compile-time features so much more important than things that limit adoption of D as it is today? D is already /jam-packed/ with features. Some of the existing ones are broken, and have been for a long time. Some of them hinder adoption. If you were a potential D user, what would you prefer to see happen?

Far be it from me, or any of us, to dictate priority; but you surely have to see that mass adoption of D ain't gonna happen because of exotic compile-time features, when run-of-the-mill issues are prevalent?

- Kris
Feb 07 2007
prev sibling parent reply janderson <askme me.com> writes:
Walter Bright wrote:
 Walter Bright wrote:
 kris wrote:
 Surely some of the others long-term concerns, such as solid debugging 
 support, simmering code/dataseg bloat, lib support for templates, 
 etc, etc, should deserve full attention instead? Surely that is a 
 more successful approach to getting D adopted in the marketplace?
Those are all extremely important, too.
I wish to add that if you look at the changelog, the bread and butter issues (see the list of bugs fixed) get a solid share of attention.
Personally I think you've got a good balance going. A lot of bug fixes and a few cool features to keep people's interest in D is a good way to go. I appreciate that.

You've gotta really love working on D to have kept up this solid pace for so long.

-Joel
Feb 09 2007
parent Walter Bright <newshound digitalmars.com> writes:
janderson wrote:
 You've gotta really love working on D to have kept up this solid pace 
 for so long.
It's the interest in D by the people here that fuels it.
Feb 10 2007
prev sibling parent Lars Ivar Igesund <larsivar igesund.net> writes:
Walter Bright wrote:
 
 Surely some of the others long-term concerns, such as solid debugging
 support, simmering code/dataseg bloat, lib support for templates, etc,
 etc, should deserve full attention instead? Surely that is a more
 successful approach to getting D adopted in the marketplace?
Those are all extremely important, too.
I tend to think that these are important enough, together with a whole slew of other language improvements, for compile-time regex to be pushed back to post 2.0 (if they still are "needed" then!).

--
Lars Ivar Igesund
blog at http://larsivi.net
DSource & #D: larsivi
Dancing the Tango
Feb 08 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
 Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings 
 that can be done are:

 1) indexed access
 2) slicing
 3) comparison
 4) getting the length
 5) concatenation

 Any other functionality can be built up from these using template 
 metaprogramming.

 The problem is that parsing strings using templates generates a large 
 number of template instantiations, is (relatively) very slow, and 
 consumes a lot of memory (at compile time, not runtime). For example, 
 ParseInteger would need 4 template instantiations to parse 5678, and 
 each template instantiation would also include the rest of the input 
 as part of the template instantiation's mangled name.

 At some point, this will prove a barrier to large scale use of this 
 feature.

 Andrei suggested using compile time regular expressions to shoulder 
 much of the burden, reducing parsing of any particular token to one 
 instantiation.

 The last time I introduced core regular expressions into D, it was 
 soundly rejected by the community and was withdrawn, and for good 
 reasons.

 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:

     ("aa|b" ~~ "ababb") would evaluate to "ab"

 I expect one would generally only see this kind of thing inside 
 templates, not user code.
compile-time regex is only part of the picture. A small one too. I rather expect we'd wind up finding the manner it was exposed was just too limiting in one way or another. Exposing, as was apparently suggested, the full API of RegExp inside the compiler sounds a tad distasteful.
Au contraire, I think it's a definite step in the right direction. Writing programs that write programs is a great way of doing more with less effort. Various languages can do that to various extents, and it's very heartening that D is taking steps in that direction. Allowing the programmer to manipulate strings during compilation is definitely a good step.
 You'll perhaps forgive me if I question whether this is driven primarily 
 from an academic interest?  What I mean is this: if and when D goes 
 mainstream, perhaps just one in ten-thousand developers will actually 
 use this kind of feature more than 5 times (and still find themselves 
 limited). Perhaps I'm being generous with those numbers also?
Perhaps, just like me, you simply aren't in the position to evaluate them. I will notice, however, a few historical trends.

C++ got a shot in the arm from the STL. STL = advanced programming. Interesting. The STL did much to educate the C++ community towards code generation, which continues to be the reason why many influential gurus hang out with C++.

Java tried to radically simplify things. It did get many complicated things right (safety, security), particularly those that were in the requirements early on. As for the features that Java initially stayed away from, a pattern I noticed in Java circles is that pundits condemn, ridicule, or demean a feature or technique until Java implements it. Of course, implementing it while the language already has immovable parts is less clean. The net result is that now Java does have many of the advanced features that once were deemed uninteresting, and a history-based prediction is that it will continue to move in that direction.

C# has been bolder in adopting advanced (some would appear exotic) features than Java. Again, it's natural to predict that the language will move towards recognizing and integrating advanced features.

To survive, D must compensate for its relative lack of clout and publicity by offering above and beyond what more mainstream languages offer.
 What is wrong with runtime execution anyway? It sure is easier to write 
 and maintain clean D code than (for many ppl) complex concepts that are, 
 what amount to, nothing more than runtime optimizations. Isn't that true?
No. Accommodating DSLs and generating code has more to do with correctness and avoiding duplication of source code, than anything else.
 It would seem that adding such features does not address the type of 
 things that would be useful to 80% of developers? Surely that should be 
 far more important?
No. You are missing a key point - that some code is more influential than other code. 2% of programmers may write libraries that work for 90% of programmers.
 And, no ... I'm not just pooh poohing the idea ... I'm really serious 
 about D getting some realistic market traction, and I don't see how 
 adding more compile-time 'specialities' can help in any way other than 
 generating a little bit of 'novelty' interest. Isn't this a good example 
 of "premature optimization" ?
No. As I said above, optimization has exceedingly little to do with it. Consider as an example the "white hole" and "black hole" pattern. Given an interface:

    interface A
    {
        int foo();
        void bar(int);
        float baz(char[]);
    }

a "white hole" class is an implementation of A that implements all methods to throw, and a "black hole" class is an implementation of A that implements all methods to return the default value of the return type.

This pattern is very useful for either quick starting points for writing true classes implementing A, or as standalone degenerate implementations.

To some programmers, black and white holes might not even raise a "duplicated code" flag. They sit down and write:

    class WhiteHoleA : A
    {
        int foo() { throw new Exception("foo not implemented"); }
        void bar(int) { throw new Exception("bar(int) not implemented"); }
        float baz(char[]) { throw new Exception("baz(char[]) not implemented"); }
    }

and

    class BlackHoleA : A
    {
        int foo() { return int.init; }
        void bar(int) { }
        float baz(char[]) { return float.init; }
    }

But if the language is advanced enough, it readily offers such rapid development goodies as library elements:

    alias black_hole!(A) BlackHoleA;
    alias white_hole!(A) WhiteHoleA;

This has nothing to do with optimization. It is all about abstraction, saving duplication, and allowing expressive code.
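The step from the handwritten classes to something like white_hole!(A) is exactly compile-time string manipulation. A minimal sketch, assuming only string mixins and CTFE; `throwStub` and the hand-listed signatures are invented for illustration (a real library would enumerate A's methods mechanically rather than by hand):

```d
interface A
{
    int foo();
    void bar(int);
    float baz(char[]);
}

// Build the source text of one throwing stub at compile time.
string throwStub(string sig, string name)
{
    return sig ~ ` { throw new Exception("` ~ name ~ ` not implemented"); }`;
}

class WhiteHoleA : A
{
    // Each mixin compiles a generated method body into the class.
    mixin(throwStub("int foo()", "foo"));
    mixin(throwStub("void bar(int)", "bar(int)"));
    mixin(throwStub("float baz(char[])", "baz(char[])"));
}

void main()
{
    A a = new WhiteHoleA;
    try { a.foo(); assert(false); }
    catch (Exception e) { assert(e.msg == "foo not implemented"); }
}
```

The generated code lives in the same symbolic ecosystem as handwritten code, which is the whole point of the exercise.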
 Surely some of the others long-term concerns, such as solid debugging 
 support, simmering code/dataseg bloat, lib support for templates, etc, 
 etc, should deserve full attention instead? Surely that is a more 
 successful approach to getting D adopted in the marketplace?

 Lot's of questions, and I hope you can give them serious consideration, 
 Walter.
I think it's good to be sure only when there's a solid basis.

Andrei
Feb 07 2007
parent reply kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 compile-time regex is only part of the picture. A small one too. I 
 rather expect we'd wind up finding the manner it was exposed was just 
 too limiting in one way or another. Exposing, as was apparently 
 suggested, the full API of RegExp inside the compiler sounds a tad 
 distasteful.
Au contraire, I think it's a definite step in the right direction. Writing programs that write programs is a great way of doing more with less effort. Various languages can do that to various extents, and it's very heartening that D is taking steps in that direction. Allowing the programmer to manipulate strings during compilation is definitely a good step.
You're saying that a 'normal' D program is not sufficiently powerful to write other programs? Au contraire! There's nothing wrong with doing that at runtime, rather than turning the compiler itself into an abstract virtual machine?
 
 You'll perhaps forgive me if I question whether this is driven 
 primarily from an academic interest?  What I mean is this: if and when 
 D goes mainstream, perhaps just one in ten-thousand developers will 
 actually use this kind of feature more than 5 times (and still find 
 themselves limited). Perhaps I'm being generous with those numbers also?
Perhaps, just like me, you simply aren't in the position to evaluate them. I will notice, however, a few historical trends. C++ got a shot in the arm from the STL. STL = advanced programming. Interesting. The STL did much to educate the C++ community towards code generation, which continues to be the reason why many influential gurus hang out with C++.
Are you saying that adding regex support at compile-time will take the world by storm? I hope not, because STL and its ilk are about productivity for a mass audience. Not for the few who work with DSLs on a regular basis. Besides, D is perfectly capable of DSL handling at runtime; there's just no overpowering need for it to do that at /compile-time/.

We might as well be discussing whether the compiler should embed a GUI generator. So it can be used at compile-time. There are better ways of doing that. Ways that are more accessible, more maintainable, and have a much easier learning curve. The OSX GUI builder is one fine example.
 To survive, D must compensate for its relative lack of clout and 
 publicity by offering above and beyond what more mainstream languages 
 offer.
To survive, D needs to get serious about being taken seriously. Concentrating on what /might/ become a niche ideal doesn't strike me as the best approach. Compare and contrast with, say, targeting D for cell-phone devices? It's the biggest market on the planet, just sitting there /waiting/ for D to come along with the right set of features.
 
 What is wrong with runtime execution anyway? It sure is easier to 
 write and maintain clean D code than (for many ppl) complex concepts 
 that are, what amount to, nothing more than runtime optimizations. 
 Isn't that true?
No. Accommodating DSLs and generating code has more to do with correctness and avoiding duplication of source code, than anything else.
Yes and no. I will defer, of course, to your experience in the matter; but will note that there's /always/ a point of diminishing return.
 
 It would seem that adding such features does not address the type of 
 things that would be useful to 80% of developers? Surely that should 
 be far more important?
No. You are missing a key point - that some code is more influential than other. 2% of programmers may write libraries that work for 90% of programmers.
Indeed. I'm partially responsible for a rather large library. And there's no way that I can see a use for this feature in there. That doesn't mean it won't get used by somebody somewhere (of course), but I put it to you that it does indicate just how little need there is for such features in the language (at compile time).

Yes, I'd personally like to see some better template handling. I'd like to see IFTI fixed - not some new feature that this (extensive) library will never use :)
 a "white hole" class is an implementation of A that implements all 
 methods to throw, and a "black hole" class is an implementation of A 
 that implements all methods to return the default value of the return type.
 
 This pattern is very useful for either quick starting points for writing 
 true classes implementing A, or as standalone degenerate implementations.
 
 To some programmers, black and white holes might not even raise a 
 "duplicated code" flag. They sit down and write:
That could reasonably be argued as a point of diminishing return? If it takes more effort or knowledge of how to abstract the pattern from two or more concepts, and to implement it using something unfamiliar, then 99% of developers will ignore it completely.

I fully agree that /idealistically/ such patterns would be nice to have, but that's not reality. And I can't see how this could possibly help D get notable traction, since it can also be done using the tools already available in D: contracts via interfaces or abstract base-classes.
 Lot's of questions, and I hope you can give them serious 
 consideration, Walter.
I think it's good to be sure only when there's a solid basis.
Yes, I agree. Does that work both ways (serious question)?

- Kris
Feb 07 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 compile-time regex is only part of the picture. A small one too. I 
 rather expect we'd wind up finding the manner it was exposed was just 
 too limiting in one way or another. Exposing, as was apparently 
 suggested, the full API of RegExp inside the compiler sounds a tad 
 distasteful.
Au contraire, I think it's a definite step in the right direction. Writing programs that write programs is a great way of doing more with less effort. Various languages can do that to various extents, and it's very heartening that D is taking steps in that direction. Allowing the programmer to manipulate strings during compilation is definitely a good step.
You're saying that a 'normal' D program is not sufficiently powerful to write other programs? Au contraire! There's nothing wrong with doing that at runtime, rather than turning the compiler itself into an abstract virtual machine?
You don't understand. Code generation is only interesting when the generated code works together with handwritten code and lives within the same symbolic ecosystem.
 You'll perhaps forgive me if I question whether this is driven 
 primarily from an academic interest?  What I mean is this: if and 
 when D goes mainstream, perhaps just one in ten-thousand developers 
 will actually use this kind of feature more than 5 times (and still 
 find themselves limited). Perhaps I'm being generous with those 
 numbers also?
Perhaps, just like me, you simply aren't in the position to evaluate them. I will notice, however, a few historical trends. C++ got a shot in the arm from the STL. STL = advanced programming. Interesting. The STL did much to educate the C++ community towards code generation, which continues to be the reason why many influential gurus hang out with C++.
Are you saying that adding regex support at compile-time will take the world by storm? I hope not, because STL and its ilk are about productivity for a mass audience. Not for the few who work with DSL on a regular basis. Besides D is perfectly capable of DSL handling at runtime; there's just no overpowering need for it to do that at /compile-time/.
I tried to provide solid argumentation and/or evidence for my statements. The post I replied to originally, and the post I am replying to now, use rhetoric and bare statements somehow implying they are drawn from common knowledge. I will not simply agree with a bare statement claiming that this is not needed or that is not necessary.
 We might as well be discussing whether the compiler should embed a GUI 
 generator. So it can be used at compile-time. There are better ways of 
 doing that. Ways that are more accessible, more maintainable, and have a 
 much easier learning curve. The OSX GUI builder is one fine example.
 
 
 To survive, D must compensate for its relative lack of clout and 
 publicity by offering above and beyond what more mainstream languages 
 offer.
To survive, D needs to get serious about being taken seriously.
Exactly what is anyone to make of this? Reminds me of Jerome K. Jerome: "Never do something shameful, my son," the mother said, "and then you'll never be ashamed of what you did."
 Concentrating on what /might/ become a niche ideal doesn't strike me as 
 the best approach. Compare and constrast with, say, targeting D for 
 cell-phone devices? It's the biggest market on the planet, just sitting 
 there /waiting/ for D to come along with the right set of features.
Sure if there are abstractions of interest to embedded programs they are worth discussing. But again, leaving it at the level of Zen statements is only empty rhetoric.
 What is wrong with runtime execution anyway? It sure is easier to 
 write and maintain clean D code than (for many ppl) complex concepts 
 that are, what amount to, nothing more than runtime optimizations. 
 Isn't that true?
No. Accommodating DSLs and generating code has more to do with correctness and avoiding duplication of source code, than anything else.
Yes and no. I will defer, of course, to your experience in the matter; but will note that there's /always/ a point of diminishing return.
To this I'll insert the obligatory answer that that's a truism.
 a "white hole" class is an implementation of A that implements all 
 methods to throw, and a "black hole" class is an implementation of A 
 that implements all methods to return the default value of the return 
 type.

 This pattern is very useful for either quick starting points for 
 writing true classes implementing A, or as standalone degenerate 
 implementations.

 To some programmers, black and white holes might not even raise a 
 "duplicated code" flag. They sit down and write:
That could reasonably be argued as a point of diminishing return? If it takes more effort or knowledge of how to abstract the pattern from two or more concepts, and to implement it using something unfamiliar, then 99% of developers will ignore it completely.
I think any developer can write an alias statement. The point of the example was to illustrate how an advanced feature can be used by an expert to democratize efficient development.

Abstraction is hard, no question about that. (Another truism. :o)) The problem is that often, even when the abstraction is understood, reflecting it in code is a complete mess. Walter gave another good case study: Ruby on Rails. The success of Ruby on Rails has a lot to do with its ability to express abstractions that were a complete mess to deal with in concreteland.
 I fully agree that /idealistically/ such patterns would be nice to have, 
 but that's not reality.
I'm not sure what this statement is based on. For example, there are Perl libraries for white holes and black holes:

http://cpan.uwinnipeg.ca/htdocs/Class-BlackHole/Class/BlackHole.html
http://cpan.uwinnipeg.ca/htdocs/Class-WhiteHole/Class/WhiteHole.html
 And I can't see how this could possibly help D 
 get notable traction, since it can also be done using the tools already 
 available in D: contracts via interfaces or abstract base-classes.
This reflects misunderstanding of the stakes. Interfaces and abstract base classes are a recipe for _handwritten_ code. black_hole and white_hole are tools that generate code _mechanically_.
 Lot's of questions, and I hope you can give them serious 
 consideration, Walter.
I think it's good to be sure only when there's a solid basis.
Yes, I agree. Does that work both ways (serious question) ?
I did my best to explain the basis of my opinions.

Andrei
Feb 07 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter gave another good case study: Ruby on Rails. The success of Ruby 
 on Rails has a lot to do with its ability to express abstractions that
 were a complete mess to deal with in concreteland.
I found this essay to be pivotal in piquing my interest in this: http://www.paulgraham.com/avg.html and a related one: http://lib.store.yahoo.net/lib/paulgraham/bbnexcerpts.txt
Feb 07 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter gave another good case study: Ruby on Rails. The success of 
 Ruby on Rails has a lot to do with its ability to express abstractions 
 that
 were a complete mess to deal with in concreteland.
I found this essay to be pivotal in piquing my interest in this: http://www.paulgraham.com/avg.html and a related one: http://lib.store.yahoo.net/lib/paulgraham/bbnexcerpts.txt
Awesome. The site could have a section with "external links" pointing to these articles, as well as SICP (which is also available online at http://mitpress.mit.edu/sicp/), The View From Berkeley (http://view.eecs.berkeley.edu/wiki/Main_Page) and other texts that are important to grasp in order to understand and shape the future of D.

Andrei
Feb 07 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter gave another good case study: Ruby on Rails. The success of 
 Ruby on Rails has a lot to do with its ability to express 
 abstractions that
 were a complete mess to deal with in concreteland.
I found this essay to be pivotal in piquing my interest in this: http://www.paulgraham.com/avg.html and a related one: http://lib.store.yahoo.net/lib/paulgraham/bbnexcerpts.txt
Awesome. The site could have a section with "external links" pointing to these articles, as well as SICP (which is also available online at http://mitpress.mit.edu/sicp/), The View From Berkeley (http://view.eecs.berkeley.edu/wiki/Main_Page) and other texts that are important to grasp in order to understand and shape the future of D. Andrei
Videos of the Abelson and Sussman lectures are also available online at MIT OpenCourseWare. Good stuff if you have the time to watch.

--bb
Feb 07 2007
prev sibling parent reply kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 
 Andrei Alexandrescu (See Website For Email) wrote:

 kris wrote:
You're saying that a 'normal' D program is not sufficiently powerful to write other programs? Au contraire! There's nothing wrong with doing that at runtime, rather than turning the compiler itself into an abstract virtual machine?
You don't understand. Code generation is only interesting when the generated code works together with handwritten code and lives within the same symbolic ecosystem.
Okay, I do understand that. My overall point is that such things are of interest to only a tiny fragment of the commercial market.
 I tried to provide solid argumentation and/or evidence for my 
 statements. The post I replied to originally, and the post I am replying 
 to now, use rhetoric and bare statements somehow implying they are drawn 
 from common knowledge. I will not simply agree with a bare statement 
 claiming that this is not needed or that is not necessary.
I thought I added some evidence myself, in the paragraph below, but you seem to have missed that?
 
 We might as well be discussing whether the compiler should embed a GUI 
 generator. So it can be used at compile-time. There are better ways of 
 doing that. Ways that are more accessible, more maintainable, and have 
 a much easier learning curve. The OSX GUI builder is one fine example.


 To survive, D must compensate for its relative lack of clout and 
 publicity by offering above and beyond what more mainstream languages 
 offer.
To survive, D needs to get serious about being taken seriously.
Exactly what is anyone to make of this?
Andrei, it's really no different to your claim that D must somehow "compensate for its relative lack of clout" by adding compile-time regex to the language (the topic at hand). Wouldn't you agree? D would arguably have more clout if a number of items were resolved, rather than adding more features.

Sure, I can imagine there's a lot more behind this than just regex - but the point made elsewhere is a good one: if the DSL facilities are going to be totally awesome, one should be able to implement regex in them, efficiently. You can do that today using templates. It's just not terribly efficient, as Walter pointed out.
 Reminds me of Jerome K. Jerome:
 "Never do something shameful, my son," the mother said, "and then
 you'll never be ashamed of what you did."
How about: "Just because you can, doesn't mean you should" ?
 
 Concentrating on what /might/ become a niche ideal doesn't strike me 
 as the best approach. Compare and contrast with, say, targeting D for 
 cell-phone devices? It's the biggest market on the planet, just 
 sitting there /waiting/ for D to come along with the right set of 
 features.
Sure, if there are abstractions of interest to embedded programs, they are worth discussing. But again, leaving it at the level of Zen statements is only empty rhetoric.
Hrm ... I won't go there :)
 
 What is wrong with runtime execution anyway? It sure is easier to 
 write and maintain clean D code than (for many ppl) complex concepts 
 that are, what amount to, nothing more than runtime optimizations. 
 Isn't that true?
No. Accommodating DSLs and generating code has more to do with correctness and avoiding duplication of source code, than anything else.
Okay. That comes back to my original point about whether this is something that is gonna attract mainstream developers, en masse. Forgive me, but I just don't see that it does.
 Yes and no. I will defer, of course, to your experience in the matter; 
 but will note that there's /always/ a point of diminishing return.
To this I'll insert the obligatory answer that that's a truism.
 a "white hole" class is an implementation of A that implements all 
 methods to throw, and a "black hole" class is an implementation of A 
 that implements all methods to return the default value of the return 
 type.

 This pattern is very useful for either quick starting points for 
 writing true classes implementing A, or as standalone degenerate 
 implementations.

 To some programmers, black and white holes might not even raise a 
 "duplicated code" flag. They sit down and write:
That could reasonably be argued as a point of diminishing return? If it takes more effort or knowledge of how to abstract the pattern from two or more concepts, and to implement it using something unfamiliar, then 99% of developers will ignore it completely.
I think any developer can write an alias statement. The point of the example was to illustrate how an advanced feature can be used by an expert to democratize efficient development.
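As a sketch of the idea being described (BlackHole is a hypothetical generator template here, not an existing facility; the class body is written out by hand to show what such a template would emit mechanically):

```d
import std.stdio;

interface A
{
    int compute(int x);
    void log(string msg);
}

// What a hypothetical BlackHole!(A) template would generate:
// every method of A implemented to return the default value of its
// return type. The user-facing part would reduce to a single alias.
class BlackHoleA : A
{
    int compute(int x) { return int.init; } // default value: 0
    void log(string msg) { }                // void return: do nothing
}

void main()
{
    A a = new BlackHoleA;
    writefln("compute returns %s", a.compute(42)); // prints 0
}
```

The "white hole" variant would instead implement every method to throw, giving a loud failure for anything not yet overridden by hand.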
I agree. But then how many everyday /useful/ patterns really exist in that form? Ones that ppl would actually use? Gui patterns are one of the few areas that come to mind.

Componentization of software dev has long been a dream (akin to how hardware components are assembled). But the software world has never operated that way. It gets further and further away every year, as code becomes more and more throwaway. More often than not, a 'reusable' component (perhaps written by an expert) simply does not offer what the user needs. The time it takes to understand how to 'adapt' that component is often longer than it takes to implement the required functionality directly. I don't agree with the approach, but it's what I see in both commerce and research. It doesn't matter whether the component is hand-written or machine generated; the end result appears to be the same (IMO).
 
 Abstraction is hard, no question about that. (Another truism. :o)) The 
 problem is that often, even when the abstraction is understood, 
 reflecting it in code is a complete mess.
Yes.
 
 Walter gave another good case study: Ruby on Rails. The success of Ruby 
 on Rails has a lot to do with its ability to express abstractions that 
 were a complete mess to deal with in concreteland.
 
Let's look at that case study, then. The /real/ power in RoR comes from being able to dynamically bind via rich reflection. What we're talking about here does not add full reflection to D. Neither does it assist in getting D modules dynamically loaded at runtime.

As it turns out, some of us are actively looking /specifically/ at the killer RoR for D; far beyond what RoR does. Oddly enough, our working name for it is - DeRailed - We have solid notions of what's needed, and several of us have built related platforms in the past. But this topic, at face value, doesn't appear to help us in any notable fashion. Perhaps you can explain this further?
 And I can't see how this could possibly help D get notable traction, 
 since it can also be done using the tools already available in D: 
 contracts via interfaces or abstract base-classes.
This reflects a misunderstanding of the stakes. Interfaces and abstract base classes are a recipe for _handwritten_ code. black_hole and white_hole are tools that generate code _mechanically_.
Okay. But the fundamental approach to patterns is rather similar. You were discussing facilities provided as a template-of-functionality (in a lib or somewhere). At that level it doesn't matter, to the user, whether they were originally hand written or not. That's what drove my point, rather than a misunderstanding. - Kris
Feb 07 2007
next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
[among other things]
 Sure, I can imagine there's a lot more behind this than just Regex - but 
 the point made elsewhere is a good one: if the DSL facilities are going 
 to be totally awesome, one should be able to implement regex in it; 
 efficiently.
I think this is a very valuable outcome that today's rumpus converged towards. Andrei
Feb 07 2007
prev sibling parent reply Robby <robby.lansaw gmail.com> writes:
 Walter gave another good case study: Ruby on Rails. The success of 
 Ruby on Rails has a lot to do with its ability to express abstractions 
 that were a complete mess to deal with in concreteland.
Let look at that case study, then. The /real/ power in RoR comes from being able to dynamically bind via rich reflection. What we're talking about here does not add full reflection to D. Neither does it assist in getting D modules dynamically loaded at runtime. As it turns out, some of us are actively looking /specifically/ at the killer RoR for D; far beyond what RoR does. Oddly enough, our working name for it is - DeRailed - We have solid notions of what's needed; and several of us have build related platforms in the past. But this topic, at face value, doesn't appear to help us in any notable fashion. Perhaps you can expain this further? - Kris
I'm having a hard time putting together the association with RoR, DSLs and the regex feature. Perhaps they're completely separate.

RoR really doesn't express abstractions per se; Ruby does, and very well actually. The "real power" comment pertains to Ruby rather than RoR - but I agree with its intended meaning.

I've hacked around on a possible ActiveRecord 'wannabe'[1] from time to time over the past few weeks and must admit I constantly miss things that are just there in Ruby:
-some of the things I'm aware are going to be hard to transcribe to D due to its background - blocks
-some things that just haven't been implemented yet - true dynamic information/loading
-some things that just don't represent themselves well in D - symbols, *everything* is an object

But the languages as they are are pretty close to each other (pretty painless interfacing to C code, mixins, etc.).

Not you directly Kris, however I've seen mention a few times of needing an RoR of D (an en masse application to bring in new developers). There are significant downsides to this situation should it happen[2], and there are issues with having such an application.

I personally came to D for a few reasons, which is probably too long to bring here; however, one of the things I liked about using Ruby almost exclusively was the pure readability that came with the language by design[3]. I find D quite readable at present; I just hope it doesn't lose that edge for metaprogramming concepts.[4]

If I could ask for one feature, it would be bringing the method with an array argument over to all built-in types. While the built-in type wouldn't be object based such as Ruby's, the approach would be quite nice.

I must admit, with over 200 feeds I read throughout the day, I find myself reading more and more in this set of NGs. I'm enjoying the insight, and D should be proud of the community it's fostered - it's quite nice.
And personally, IMO a compiled, C-style Rebol'ish setup would be the 'one killer app'; the amount of wow that little 'engine that could' does is awesome.

Robby

[1] any central point of contact for DeRailed? I'd be willing to sling code, thoughts if there is one.
[2] http://www.oreillynet.com/ruby/blog/2005/12/ruby_is_not_a_religion.html shows the downsides to the whole utopia application issue
[3] Allowing '?' as a final character to an identifier, and the convention for representing boolean methods, is a simple and pure example among others.
[4] Yeah, I've read "Beating the Averages" by Graham and I understand that the readability will come with learning... but I'm coming from a new user point of view.

Context: I've written in Ruby for over 4 years, and have used Rails since inception, so I'm not used to the compile-time frame of mind (thus I'm pretty useless to a thread such as this :))
Feb 08 2007
parent reply kris <foo bar.com> writes:
Robby wrote:
 
 Walter gave another good case study: Ruby on Rails. The success of 
 Ruby on Rails has a lot to do with its ability to express 
 abstractions that were a complete mess to deal with in concreteland.
Let look at that case study, then. The /real/ power in RoR comes from being able to dynamically bind via rich reflection. What we're talking about here does not add full reflection to D. Neither does it assist in getting D modules dynamically loaded at runtime. As it turns out, some of us are actively looking /specifically/ at the killer RoR for D; far beyond what RoR does. Oddly enough, our working name for it is - DeRailed - We have solid notions of what's needed; and several of us have build related platforms in the past. But this topic, at face value, doesn't appear to help us in any notable fashion. Perhaps you can expain this further? - Kris
I'm having a hard time putting together the association with RoR, DSL's and the regex feature together. Perhaps they're completely separate.
Me too. I failed to see any connection that would measurably assist DeRailed. And the question above was sadly left unaddressed.
 
 RoR really doesn't express abstractions per se, Ruby does and very well 
 actually. Actually the real power comment pertains to Ruby, instead of 
 RoR - but I agree on its intended meaning.
You're right of course.
 I've hacked around on a possible ActiveRecord 'wannabe'[1] from time to 
 time over the past few weeks and must admit I constantly miss things 
 that are just there in Ruby
 -some of the things I'm aware are going to be hard to transcribe to D 
 due to its background - blocks
blocks can be emulated, with delegates?
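For instance, a Ruby-style each-with-block can be approximated with a delegate literal. A minimal sketch (the `each` helper is invented for illustration, and this is not a claim of full Ruby block semantics):

```d
import std.stdio;

// A Ruby-like 'each' that hands every element to a caller-supplied block.
void each(int[] arr, void delegate(int) block)
{
    foreach (x; arr)
        block(x);
}

void main()
{
    int sum = 0;
    int[] data = [1, 2, 3];
    // the delegate literal closes over 'sum', much as a Ruby block
    // closes over its enclosing scope
    each(data, delegate(int x) { sum += x; });
    writefln("sum = %s", sum); // prints 6
}
```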
 -some things that just haven't been implemented yet - true dynamic 
 information/loading,
Absolutely. There's a number of projects currently looking into that, but 'raw' D has very little support at this time.
 -somethings that just don't represent themselves well in D - symbols, 
 *everything* is an object.
 
 But the languages as they are are pretty close to each other (pretty 
 painless interfacing to c code, mixins etc.
 
 Not you directly Kris, however I've seen mentioned a few times about 
 needing an RoR of D (an en mass application to bring in new developers.)
 There are significant downsides to this situation should it happen[2] 
 and there are issues with having such an application.
Yeah. We figure it's better to have it than not, and intend to address a lot of the RoR concerns bandied around the blogosphere.
 
 I personally came to D for a few reasons, which is probably too long to 
 bring here, however one of the things I liked about using Ruby almost 
 exclusively was the pure readability that came with the language by 
 design[3]. I find D quite readable at present, I just hope it doesn't 
 lose that edge for meta programming concepts.[4]
 
 If I could ask for one feature.. it would be bringing the method with an 
 array argument over to all built in types. While the built in type 
 wouldn't be object based such as ruby's, the approach would be quite nice.
 
 I must admit, with over 200 feeds I read throughout the day I find 
 myself reading more and more in this set of NG's, I'm enjoying the 
 insight, and D and its community should be proud of the community it's 
 fostered.. it's quite nice.
 
 And personally, IMO a compiled, c style Rebol'ish setup would be the 
 'one killer app', the amount of wow that little 'engine that could' does
 is awesome..
 
 Robby
 
 
 [1] any central point of contact for DeRailed? I'd be willing to sling 
 code, thoughts if there is one.
Catch us via the Tango site and/or IRC? Always good to have extra pair of (willing) hands :)
 [2] 
 http://www.oreillynet.com/ruby/blog/2005/12/ruby_is_not_a_religion.html 
 shows the downsides to the whole utopia application issue
 [3]Allowing '?' as a final character to an identifier and the 
 convention for representing boolean methods is a simple and pure example 
 among others.
 [4]Yeah, I've read "Beating the Averages" by Graham and I understand 
 that the readability will come with learning.. but I'm coming from a new 
 user point of view...
 
 Context: I've written in Ruby for over 4 years, and have used Rails 
 since inception so I'm not used to the compile time frame of mind (thus 
 I'm pretty useless to a thread such as this :))
We'd like you on-board with DeRailed :)
Feb 08 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
 Robby wrote:
 Walter gave another good case study: Ruby on Rails. The success of 
 Ruby on Rails has a lot to do with its ability to express 
 abstractions that were a complete mess to deal with in concreteland.
Let look at that case study, then. The /real/ power in RoR comes from being able to dynamically bind via rich reflection. What we're talking about here does not add full reflection to D. Neither does it assist in getting D modules dynamically loaded at runtime. As it turns out, some of us are actively looking /specifically/ at the killer RoR for D; far beyond what RoR does. Oddly enough, our working name for it is - DeRailed - We have solid notions of what's needed; and several of us have build related platforms in the past. But this topic, at face value, doesn't appear to help us in any notable fashion. Perhaps you can expain this further? - Kris
I'm having a hard time putting together the association with RoR, DSL's and the regex feature together. Perhaps they're completely separate.
Me too. I failed to see any connection that would measurably assist DeRailed. And the question above was sadly left unaddressed.
It's very simple. A scheme based on compile-time in(tro)spection has superior and automatic means to detect, say, mismatches between an expected database schema and the runtime reality. Andrei
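A minimal sketch of what such a compile-time check could look like (the schema string and column names are invented for illustration; in a real build the schema text might arrive via an import expression and the compiler's -J switch):

```d
// Compile-time function evaluation lets a plain function run at compile
// time, so a schema/code mismatch fails the build instead of the program.
bool hasColumn(string schema, string col)
{
    // naive comma-separated search over the schema string
    size_t start = 0;
    foreach (i, c; schema)
    {
        if (c == ',')
        {
            if (schema[start .. i] == col) return true;
            start = i + 1;
        }
    }
    return schema[start .. $] == col;
}

// Invented schema; could instead be read with import("accounts.schema").
const string schema = "id,name,balance";

static assert(hasColumn(schema, "balance")); // passes
static assert(!hasColumn(schema, "ssn"));    // a mismatch stops the build
```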
Feb 08 2007
parent reply kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 
 Robby wrote:

 Walter gave another good case study: Ruby on Rails. The success of 
 Ruby on Rails has a lot to do with its ability to express 
 abstractions that were a complete mess to deal with in concreteland.
Let look at that case study, then. The /real/ power in RoR comes from being able to dynamically bind via rich reflection. What we're talking about here does not add full reflection to D. Neither does it assist in getting D modules dynamically loaded at runtime. As it turns out, some of us are actively looking /specifically/ at the killer RoR for D; far beyond what RoR does. Oddly enough, our working name for it is - DeRailed - We have solid notions of what's needed; and several of us have build related platforms in the past. But this topic, at face value, doesn't appear to help us in any notable fashion. Perhaps you can expain this further? - Kris
I'm having a hard time putting together the association with RoR, DSL's and the regex feature together. Perhaps they're completely separate.
Me too. I failed to see any connection that would measurably assist DeRailed. And the question above was sadly left unaddressed.
It's very simple. A scheme based on compile-time in(tro)spection has superior and automatic means to detect, say, mismatches between an expected database schema and the runtime reality.
Let's step back for a moment, please? In a practical sense, the user/developer cares mostly about the extent and capability of the development facilities exposed. Yes? That includes the whole edit, compile, debug, edit cycle along with the quality of the tools and environment presented. Whether the scheme you mention is implemented at compile-time or at runtime has little bearing in the overall practical picture; e.g. as long as the cycle is short, intuitive and effective, either approach works. At that point, it's all about practical tradeoffs instead of theoretical one-upmanship?

For example, having a DSL go and hit a database at /compile time/ sounds like an appalling reduction in /perceived/ compiler efficiency. If I have to hit the DB for every single compilation of each module with such a DSL embedded, I will simply discard the toolset. That approach would be borderline insanity :p

One might argue that such a DSL design is incorrect? OK; then what about the security aspects? Your example is talking about a DSL that can be verified at compile-time; against a database scheme. Yes? How could that possibly be /permitted/ to run at compile time? It's a /gaping/ security issue. With sandboxing, any compiler 'extension' would likely have to eschew OS handles, which means no DB access, no network access, no file access, and no registry access. How does your example operate under such conditions? I fail to see that there's any practical thought behind the example given, and sincerely hope you can correct that?

Lastly: the example you give fails to meet the criteria of "measurably assist DeRailed". Even if it /were/ feasible from a security and compile-cycle efficiency standpoint, it still would have little bearing on the overall productivity of a user/developer (using DeRailed). In other words, there's a whole lot of pain for very little gain. What there is of value quickly vanishes against the far greater concerns over full-on reflection and dynamic linking.

- Kris
Feb 08 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
 For example, having a DSL go and hit a database at /compile time/ sounds 
 like an appalling reduction in /perceived/ compiler efficiency. If I 
 have to hit the DB for every single compilation of each module with such 
 a DSL embedded, I will simply discard the toolset. That approach would 
 be borderline insanity :p
Probably we haven't worked in the same environments. In the large database systems I worked with (Chase Manhattan), the schema and the basic views change very rarely, and whenever that happens, the stored procedures are spilled automatically in text files that can be read by the build process. It was entirely reasonable to recompile the system whenever that happened, and it was highly desirable to fix mismatches before the system runs.

In other cases, a dynamic approach does better. Dynamic has always been more flexible, no doubt about that. The point is that each approach has its advantages and disadvantages.

I can also do without the belligerent tone. Not knowing or not understanding does not automatically lend insanity on the interlocutor.

Andrei
Feb 08 2007
next sibling parent kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 
 For example, having a DSL go and hit a database at /compile time/ 
 sounds like an appalling reduction in /perceived/ compiler efficiency. 
 If I have to hit the DB for every single compilation of each module 
 with such a DSL embedded, I will simply discard the toolset. That 
 approach would be borderline insanity :p
Probably we haven't worked in the same environments. In the large database systems I worked with (Chase Manhattan), the schema and the basic views change very rarely, and whenever that happens, the stored procedures are spilled automatically in text files that can be read by the build process. It was entirely reasonable to recompile the system whenever that happened, and it was highly desirable to fix mismatches before the system runs.
Yes, that's one approach (I wasn't trying to limit the perspective at all). Yet, the DSL would still have to read the spilled data files; at compile time? That's a blatant security breach, is it not? Or are you hinting that sort of thing is acceptable within specific organizations?
 
 In other cases, a dynamic approach does better. Dynamic has always been 
 more flexible, no doubt about that. The point is that each approach has 
 its advantages and disadvantages.
Yes, I fully agree. It just that this discourse has been heavily slanted toward compile-time instead. Trade-offs are prevalent everywhere, and I honestly feel we're not getting a clear (fully unbiased) picture of what the pros and cons are.
 
 I can also do without the belligerent tone. Not knowing or not 
 understanding does not automatically lend insanity on the interlocutor.
Hrm, you have me all wrong, Andrei. I got the distinct impression that particular tone was pointed in my direction instead? I hope you'll let it go by, and focus on the more valuable aspects. So how about it? What can DSL really do, in a practical sense, for someone using a D-based DeRailed environment? We /really/ do wish to hear what the options are, and your input on that subject is valuable ...
Feb 08 2007
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 For example, having a DSL go and hit a database at /compile time/ 
 sounds like an appalling reduction in /perceived/ compiler efficiency. 
 If I have to hit the DB for every single compilation of each module 
 with such a DSL embedded, I will simply discard the toolset. That 
 approach would be borderline insanity :p
Probably we haven't worked in the same environments. In the large database systems I worked with (Chase Manhattan)
Funny. I worked in that building (Chase Manhattan Plaza 1) up through the end of 2000. Was for a different firm though.
 the schema and the
 basic views change very rarely, and whenever that happens, the stored 
 procedures are spilled automatically in text files that can be read by 
 the build process. It was entirely reasonable to recompile the system 
 whenever that happened, and it was highly desirable to fix mismatches 
 before the system runs.
What I've done in the past is manage the entire schema, stored procedures and all, in a modeling system like ErWin. From there I'll dump the lot to a series of scripts which are then applied to the DB. In this case, the DSL would be the intermediate query files, though parsing the complete SQL query syntax (since the files include transactions, etc.) sounds sub-optimal. I suppose a peripheral data format would perhaps be more appropriate for generating code based on the static representation of a DB schema. UML perhaps?

My only concern here is that the process seems confusing and unwieldy: manage the schema in one tool, dump the data description in a meta-language to a file, and then have template code in the application parse that file during compilation to generate code. Each of these translation points creates a potential for failure, and the process and code risk being incomprehensible and unmanageable for new employees.

Since you've established that the schema for large systems changes only rarely and that the changes are a careful and deliberate process, is it truly the best approach to attempt to automate code changes in this way? I would think that a well-designed application or interface library could be modified manually in concert with the schema changes to produce the same result, and with a verifiable audit trail to boot.

Alternately, if the process were truly to be automated, it seems preferable to generate D code directly from the schema management application, or via a standalone tool operating on the intermediate data, rather than in preprocessor code during compilation. This approach would give much more informative error messages, and the process could be easily tracked and debugged.

Please note that I'm not criticizing in-language DSL parsing as a general idea so much as questioning whether this is truly the best example for the usefulness of such a feature.

In other cases, a dynamic approach does better. Dynamic has always been more flexible, no doubt about that. The point is that each approach has its advantages and disadvantages.
 In other cases, a dynamic approach does better. Dynamic has always been 
 more flexible, no doubt about that. The point is that each approach has 
 its advantages and disadvantages.
I would think it is perhaps worth comparing the two here since they can both be used for the same thing (ie. customizing an application for a database), and prior discussion had already mentioned RoR as a motivating factor for these features? Or perhaps I misunderstood. I'll grant that the comparison isn't entirely fair because Ruby is a dynamic language while D is a static language, but since the tasks the new import/mixin intend to solve are essentially a compile-time equivalent of what is done in Ruby at run-time (as I understand it anyway--I don't have much Ruby experience), then the utility of each approach can perhaps be weighed against the other in an attempt to understand the situations where the D approach may or may not be appropriate?
 I can also do without the belligerent tone. Not knowing or not 
 understanding does not automatically lend insanity on the interlocutor.
No offense, but this statement is a bit patronizing. I think Kris was merely attempting to explain his position? Sean
Feb 08 2007
next sibling parent reply Tom S <h3r3tic remove.mat.uni.torun.pl> writes:
First of all, I totally agree with Sean.

Using the compiler for preprocessing is not a step in the right 
direction. It only gives illusory benefits, while in fact being a 
source of many problems.

Let's face it, the new mixin stuff is a handy tool for generating 
few-line snippets that the current template system cannot handle. Some 
would argue that it's already too much like the C preprocessor, but IMO, 
it's fine.
But going further that way and extending the compiler into a general 
purpose pluggable text processor isn't anything that will make the 
language any more powerful. Not to mention that a standard-compliant D 
compiler was meant to be simple...

When the compiler is used for the processing of a DSL, it simply masks a 
simple step that an external tool would do. It's not much of a problem 
to run an external script or program, that will read the DSL and output 
D code, while perhaps also doing other stuff, connecting with databases 
or making coffee. But when this is moved to the compiler, security 
problems arise, code becomes more cryptic and suddenly, the D code 
generated from the DSL cannot be simply accessed. It's simply produced 
by the 'compiler extension' and given further into compilation. A 
standalone tool will produce a .d module, which can be further verified, 
processed by other tools - such as one that generates reflection data - 
and when something breaks, one can step into the generated source, 
review it and easier spot errors in it. Sean also mentioned that the DSL 
processor will probably need some diagnostic output, and simple 
pragma(msg) and static assert simply won't cut it.
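For reference, these really are the only two diagnostic channels compile-time code has today; a toy validator (all names invented) shows how coarse they are next to a standalone tool's error reporting:

```d
// The only ways metacode can talk back to the user at compile time:
// static assert for fatal errors, pragma(msg) for unconditional notes.
template ValidateNonEmpty(string s)
{
    static assert(s.length > 0, "DSL error: empty input");
    pragma(msg, "note: accepted DSL fragment");  // printed on every build
    const bool ValidateNonEmpty = true;
}

const bool ok = ValidateNonEmpty!("select id");
// ValidateNonEmpty!("") would abort compilation with the message above,
// but with no line/column information about the DSL text itself.
```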

Therefore, I'd like to see a case when a compile-time DSL processing is 
going to be really useful in the real world, as to provoke further 
complication of the compiler and its usage patterns.

The other aspect I observed in the discussions following the new dmd 
release is that folks are suggesting writing full language parsers, D 
preprocessors and various sorts of operations on complex languages, 
notably extended-D parsing... This may sound weird coming from me, but 
that's clearly abuse. What these people really want is a way to extend 
the language's syntax, pretty much as Nemerle does it. And frankly, if D 
is going to be a more powerful language, built-in text preprocessing 
won't cut it. Full fledged macro support and syntax extension mechanics 
are something that we should look at.


--
Tomasz Stachowiak



Sean Kelly wrote:
 What I've done in the past is manage the entire schema, stored 
 procedures and all, in a modeling system like ErWin.  From there I'll 
 dump the lot to a series of scripts which are then applied to the DB. In 
 this case, the DSL would be the intermediate query files, though parsing 
 the complete SQL query syntax (since the files include transactions, 
 etc), sounds sub-optimal.  I suppose a peripheral data format would 
 perhaps be more appropriate for generating code based on the static 
 representation of a DB schema.  UML perhaps?
 
 My only concern here is that the process seems confusing and unwieldy: 
 manage the schema in one tool, dump the data description in a 
 meta-language to a file, and then have template code in the application 
 parse that file during compilation to generate code.  Each of these 
 translation points creates a potential for failure, and the process and 
 code risks being incomprehensible and unmanageable for new employees.
 
 Since you've established that the schema for large systems changes only 
 rarely and that the changes are a careful and deliberate process, is it 
 truly the best approach to attempt to automate code changes in this way? 
  I would think that a well-designed application or interface library 
 could be modified manually in concert with the schema changes to produce 
 the same result, and with a verifiable audit trail to boot.
 
 Alternately, if the process were truly to be automated, it seems 
 preferable to generate D code directly from the schema management 
 application or via a standalone tool operating on the intermediate data 
 rather than in preprocessor code during compilation.  This approach 
 would give much more informative error messages, and the process could 
 be easily tracked and debugged.
 
 Please note that I'm not criticizing in-language DSL parsing as a 
 general idea so much as questioning whether this is truly the best 
 example for the usefulness of such a feature.
Feb 09 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Tom S wrote:
 When the compiler is used for the processing of a DSL, it simply masks a 
 simple step that an external tool would do. It's not much of a problem 
 to run an external script or program, that will read the DSL and output 
 D code, while perhaps also doing other stuff, connecting with databases 
 or making coffee.
This is a misrepresentation. Code generation with external tools has been done forever; it is never easy (unless all you need is a table of logarithms), and it always incurs a non-amortized cost of parsing the DSL _plus_ the host language.

Look at lex and yacc, the prototypical examples. They are neither small nor simple, nor perfectly integrated with the host language. And their DSL is extremely well understood. That's why there's no proliferation of lex&yacc-like tools for other DSLs (I seem to recall there was an embedded SQL that got lost in the noise): the code generator would basically have to rewrite a significant part of the compiler to do anything interesting. Even lex and yacc are often dropped in favor of Xpressive and Spirit, which, for all their odd syntax, are 100% integrated with the host language, which allows writing fully expressive code without fear that the tool won't understand this or won't recognize that. People have gone to amazing lengths to stay within the language, and guess why - because within the language you're immersed in the environment that your DSL lives in.

Reducing the whole issue to a mythical external code generator that does it all and makes coffee is simplistic. Proxy/stub generators for remote procedure calls were always an absolute pain to deal with; now compilers do it automatically, because they can. Understanding that that door can, and should, be opened to the programmer is an essential step in appreciating the power of metacode.
 But when this is moved to the compiler, security 
 problems arise, code becomes more cryptic and suddenly, the D code 
 generated from the DSL cannot be simply accessed. It's simply produced 
 by the 'compiler extension' and given further into compilation. A 
 standalone tool will produce a .d module, which can be further verified, 
 processed by other tools - such as one that generates reflection data - 
 and when something breaks, one can step into the generated source, 
 review it and easier spot errors in it. Sean also mentioned that the DSL 
 processor will probably need some diagnostic output, and simple 
 pragma(msg) and static assert simply won't cut it.
I don't see this as a strong argument. Tools can get better, no question about that. But their current defects should be weighed against their potential power. Heck, nobody would have bought the first lightbulb or the first automobile. They sucked.
 Therefore, I'd like to see a case when a compile-time DSL processing is 
 going to be really useful in the real world, as to provoke further 
 complication of the compiler and its usage patterns.
I can't parse this sentence. Did you mean "as opposed to provoking" instead of "as to provoke"?
 The other aspect I observed in the discussions following the new dmd 
 release, is that folks are suggesting writing full language parsers, D 
 preprocessors and various sort of operations on complex languages, 
 notably extended-D parsing... This may sound weird in my lips, but 
 that's clearly abuse. What these people really want is a way to extend 
 the language's syntax, pretty much as Nemerle does it. And frankly, if D 
 is going to be a more powerful language, built-in text preprocessing 
 won't cut it. Full fledged macro support and syntax extension mechanics 
 are something that we should look at.
This is a misunderstanding. The syntax is not to be extended. It stays fixed, and that is arguably a good thing. The semantics become more flexible. For example, they will make it easy to write a matrix operation:

    A = (I - B) * C + D

and generate highly performant code from it. (There are many reasons for which that's way harder than it looks.)

I think there is a lot of apprehension and misunderstanding surrounding what metacode is able and supposed to do or simplify. Please, let's focus on understanding _before_ forming an opinion.

Andrei
Feb 09 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Tom S wrote:
 Therefore, I'd like to see a case when a compile-time DSL processing 
 is going to be really useful in the real world, as to provoke further 
 complication of the compiler and its usage patterns.
I can't parse this sentence. Did you mean "as opposed to provoking" instead of "as to provoke"?
I think he just meant he wants to see some real world examples that justify the additional complexity that will be added to the compiler and language.
 The other aspect I observed in the discussions following the new dmd 
 release, is that folks are suggesting writing full language parsers, D 
 preprocessors and various sort of operations on complex languages, 
 notably extended-D parsing... This may sound weird in my lips, but 
 that's clearly abuse. What these people really want is a way to extend 
 the language's syntax, pretty much as Nemerle does it. And frankly, if 
 D is going to be a more powerful language, built-in text preprocessing 
 won't cut it. Full fledged macro support and syntax extension 
 mechanics are something that we should look at.
This is a misunderstanding. The syntax is not to be extended. It stays fixed, and that is arguably a good thing. The semantics become more flexible. For example, they will make it easy to write a matrix operation: A = (I - B) * C + D and generate highly performant code from it. (There are many reasons for which that's way harder than it looks.)
This is one thing I haven't really understood in the discussion. How do the current proposals help that case? From what I'm getting, you're going to have to write every statement like the above as something like:

    mixin { ProcessMatrixExpr!( "A = (I - B) * C + D;" ); }

How do you get from this mixin/string processing stuff to an API that I might actually be willing to use for common operations?
 I think there is a lot of apprehension and misunderstanding surrounding 
 what metacode is able and supposed to do or simplify. Please, let's 
 focus on understanding _before_ forming an opinion.
I think that's exactly what Tom's getting at. He's asking for examples of how this would make life better for him and others. I think given your background you take it for granted that metaprogramming is the future. But D attracts folks from all kinds of walks of life, because it promises to be a kinder, gentler C++. So some people here aren't even sure why D needs templates at all. Fortran doesn't have 'em after all. And Java just barely does.

Anyway, I think it would help get everyone on board if some specific and useful examples were given of how this solves real problems (and no, I don't really consider black and white holes as solving real problems. I couldn't even find any non-astronomical uses of the terms in a google search.) For instance, it would be nice to see some more concrete discussion about

* the "rails" case.
* the X = A*B + C matrix/vector expressions case.
* the case of generating bindings to scripting languages / ORB stubs
* the Spirit/parser generator case

So here's my take on vector expressions since that's the only one I know anything about.

*Problem statement*:
Make the expression A=(I-B)*C+D efficient, where the variables are large vectors (I'll leave out matrices for now).

*Why it's hard*:
The difficulty is that (ignoring SSE instructions etc.) the most efficient way to compute that is to do all operations component-wise. So instead of computing I-B and then multiplying by C, you compute

    A[i] = (I[i]-B[i])*C[i]+D[i];

for each i. This eliminates the need to allocate large intermediate vectors.

*Existing solutions*:
Expression templates in C++, e.g. the Blitz++ library. Instead of making opSub in I-B return a new Vector object, you make opSub return an ExpressionTemplate object. This is a little template struct that contains a reference to I and to B, and knows how to subtract the two in a component-wise manner. The types of I and B are template parameters, LeftT and RightT. Its interface also allows it to be treated just like a Vector. You can add a Vector to it, subtract a Vector from it, etc.

Now we go and try to multiply that result times C. The result of that is a new MultExpressionTemplate with two parameters, the LeftT being our previous SubExpressionTemplate!(Vector,Vector) and the RightT being Vector. Proceeding on in this way, eventually the result of the math is of type:

    AddExpressionTemplate!(
        MultExpressionTemplate!(
            SubExpressionTemplate!(Vector,Vector),
            Vector),
        Vector)

And you can see that we basically have a parse tree expressed as nested templates. The final trick is that a Vector.opAssign that takes an ExpressionTemplate is provided, and that method calls a method of the expression template to finally trigger the calculation, like expr.eval(this). eval() has the top-level loop over the components of the answer.

*Why existing solutions are insufficient*:
For all that effort to actually be useful, the compiler has to be pretty aggressive about inlining everything so that in the end all the temporary template structs and function calls go away and you're just left with one eval call. It can be tricky to get the code into just the right configuration so that the compiler will do the inlining. And even then results will depend on the quality of the compiler. My attempts using MSVC++ several years ago always came out to be slower than the naive version, but with about 10x the amount of code. The code is also pretty difficult to follow because of all those different types of expression templates that have to be created.

If you include matrices, things are even trickier because there are special cases like "A*B+a*C" that can be computed efficiently by a single optimized routine. You'd like to recognize such cases and turn them into single calls to that fast routine. You might also want to recognize "A*B+C" as a special case of that with a==1.

*How it could be improved*:
??? this is what I'd like to see explained better.

--bb
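The pattern described above can be sketched in a few lines of C++. This is a deliberately minimal illustration, not Blitz++'s actual code: a single BinExpr node with an operation functor stands in for the separate Sub/Mult/Add expression-template classes, and indexing the assignment target pulls each component through the whole nested type in one fused loop.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct Vec {
    std::vector<double> data;
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }
    // The opAssign analogue: evaluating each component here fuses the
    // whole expression into a single loop, with no intermediate vectors.
    template <class Expr>
    Vec& operator=(const Expr& e) {
        data.resize(e.size());
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
        return *this;
    }
};

// One node type for all binary operators; L and R may themselves be nodes,
// so the nested template type *is* the parse tree of the expression.
template <class L, class R, class Op>
struct BinExpr {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return Op{}(l[i], r[i]); }
    std::size_t size() const { return l.size(); }
};

template <class L, class R>
BinExpr<L, R, std::minus<double>> operator-(const L& l, const R& r) { return {l, r}; }
template <class L, class R>
BinExpr<L, R, std::multiplies<double>> operator*(const L& l, const R& r) { return {l, r}; }
template <class L, class R>
BinExpr<L, R, std::plus<double>> operator+(const L& l, const R& r) { return {l, r}; }
```

With aggressive inlining the nested BinExpr indexing collapses into the single component-wise loop; without it, as noted above, the abstraction cost remains, which is exactly the fragility being discussed.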
Feb 09 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Bill Baxter wrote:
[...]
Ok, here's my high-level stab at how the new stuff could help.

Instead of returning "expression templates" where the parse tree is represented as a hierarchical type, just make opSub (I-B) return an ExpressionWrapper which is just a light wrapper around the string "I-B". Next, when the compiler gets to "ExpressionWrapper * C", have that just return another ExpressionWrapper that modifies the original string to be

    "(" ~ origstring ~ "*C)"

In the end you still have an opAssign overload that calls eval() on the expression, but now the expression has a (fully parenthesized) string representation of the expression: "((I-B)*C)+D". And eval just has to use the new compile-time string parsing tricks to decide how best to evaluate that expression. It's a pattern matching task, and thankfully now you have the whole pattern in one place, as opposed to the ExpressionTemplates, which have a hard time really getting a look at the whole picture.

--bb
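As a rough illustration of this string-building idea, here is a hypothetical runtime C++ analogue (the names Var/Expr/wrap are invented for this sketch; in D the concatenation would happen at compile time inside templates). Each operator just wraps the operands' text instead of computing anything:

```cpp
#include <string>

// Hypothetical wrapper types for illustration: Var names a vector,
// Expr carries the parenthesized text of the expression built so far.
struct Expr { std::string repr; };
struct Var  { std::string name; };

// Each operator returns a bigger wrapper string instead of a result, so
// the finished "parse tree" is one flat, fully parenthesized string that
// an eval() step could pattern-match against known fast routines.
inline Expr wrap(const std::string& l, const char* op, const std::string& r) {
    return Expr{"(" + l + op + r + ")"};
}
inline Expr operator-(const Var& a, const Var& b)  { return wrap(a.name, "-", b.name); }
inline Expr operator*(const Expr& a, const Var& b) { return wrap(a.repr, "*", b.name); }
inline Expr operator+(const Expr& a, const Var& b) { return wrap(a.repr, "+", b.name); }
```

The payoff is exactly the one described: the whole pattern ends up in one place, as a single string, rather than scattered across nested template types.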
Feb 09 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Bill Baxter wrote:
[...]
Ok, here's my high-level stab at how the new stuff could help. Instead of returning "expression templates" where the the parse tree is represented as a hierarchical type, just make opSub (I-B) return an ExpressionWrapper which is just a light wrapper around the string "I-B". Next the compiler gets to "ExpressionWrapper * C", have that just return another ExpressionWrapper that modifies the original string to be "(" ~ origstring ~ "*C" In the end you still have an opAssign overload that calls eval() on the expression, but now the expression has a (fully parenthesized) string representation of the expression: "((I-B)*C)+D". And eval just has to use the new compile-time string parsing tricks to decide how best to evaluate that expression. It's a pattern matching task, and thankfully now you have the whole pattern in one place, as opposed to the ExpressionTemplates, which have a hard time really getting a look at the whole picture.
You're on the right track!!! I was writing an answer, but I need to leave so I have to interrupt it. Thanks for lending a hand :o). Andrei
Feb 09 2007
prev sibling parent reply "Andrei Alexandrescu (See Website for Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 This is a misunderstanding. The syntax is not to be extended. It stays 
 fixed, and that is arguably a good thing. The semantics become more 
 flexible. For example, they will make it easy to write a matrix 
 operation:

 A = (I - B) * C + D

 and generate highly performant code from it. (There are many reasons 
 for which that's way harder than it looks.)
This is one thing I haven't really understood in the discussion. How do the current proposals help that case? From what I'm getting you're going to have to write every statement like the above as something like: mixin { ProcessMatrixExpr!( "A = (I - B) * C + D;" ); } How do you get from this mixin/string processing stuff to an API that I might actually be willing to use for common operations?
That's a great starting point for a good conversation. And no, you wouldn't do it with strings, you'd do it exactly as I wrote, with regular code. You could use strings if you want to use more math-looking operators and Greek symbols etc.
 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
I think that's exactly what Tom's getting at. He's asking for examples of how this would make life better for him and others. I think given your background you take it for granted that metaprogramming is the future. But D attracts folks from all kinds of walks of life, because it promises to be a kinder, gentler C++. So some people here aren't even sure why D needs templates at all. Fortran doesn't have 'em after all. And Java just barely does.
Then such people would be hard-pressed to figure out why tuples (which are essentially typelists, the single most used component in Loki) have engendered, according to Walter himself, a surge in language popularity. I don't even think metaprogramming is the future. I think that it's one of many helpful tools for writing good libraries, as are GC, modules, or interfaces. *Everybody* appreciates good libraries.
 Anyway, I think it would help get everyone on board if some specific and 
 useful examples were given of how this solves real problems (and no I 
 don't really consider black and white holes as solving real problems.  I 
 couldn't even find any non-astronomical uses of the terms in a google 
 search.)
The terms are recent and introduced by the Perl community. I happened to like the metaphor. The consecrated name is "null object pattern".
 For instance, it would be nice to see some more concrete discussion about
 * the "rails" case.
 * the X = A*B + C matrix/vector expressions case.
 * the case of generating bindings to scripting languages / ORB stubs
 * the Spirit/parser generator case
 
 So here's my take on vector expressions since that's the only one I know 
 anything about.
 
 *Problem statement*:
 Make the expression A=(I-B)*C+D efficient, where the variables are large 
 vectors (I'll leave out matrices for now).
 
 *Why it's hard*:
 The difficulty is that (ignoring SSE instruction etc) the most efficient 
 way to compute that is do all operations component-wise.  So instead of 
 computing I-B then multiplying by C, you compute
     A[i] = (I[i]-B[i])*C[i]+D[i];
 for each i.  This eliminates the need to allocate large intermediate 
 vectors.
There's much more to it than that. (First off, things are more complicated in the case of matrices.) You want to be gentle on the cache, so when you have any column-wise iteration you want to do matrix blocking. Depending on the expression, you want to select different block sizes. Then you also want to optionally do partial unrolling *on top of blocking*. Again, the amount of unrolling might depend on the expression. (You don't want to unroll large expressions.) At this point things are utterly messy. Iterating may also be better row-wise or column-wise, depending on the expression. Some subexpressions have a "preferential" row-wise scanning order and a "possible" column-wise order. Others must do things one way or the other. For all this the appropriate code must be generated.
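For the blocking point specifically, this is the kind of loop structure the generated code would take for C += A * B on row-major n x n matrices. The sketch below is C++ and the block size of 8 is an arbitrary assumption; the whole point of the metacode is that block size, unrolling, and iteration order would be chosen per expression rather than hard-wired like this:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Blocked multiply-accumulate: C += A * B. Each BS x BS tile of A and B is
// reused while still in cache instead of streaming whole rows and columns.
void matmul_blocked(const std::vector<double>& A, const std::vector<double>& B,
                    std::vector<double>& C, std::size_t n) {
    const std::size_t BS = 8;  // block size: an arbitrary choice here
    for (std::size_t ii = 0; ii < n; ii += BS)
        for (std::size_t kk = 0; kk < n; kk += BS)
            for (std::size_t jj = 0; jj < n; jj += BS)
                // Inner loops sweep one tile; std::min handles ragged edges.
                for (std::size_t i = ii; i < std::min(ii + BS, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + BS, n); ++k) {
                        const double a = A[i * n + k];
                        for (std::size_t j = jj; j < std::min(jj + BS, n); ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```

Note how the loop nest already commits to one iteration order and one block size; supporting the alternate orders and per-expression tuning discussed above is what makes hand-writing (or template-generating) every variant so messy.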
 *Existing solutions*:
   Expression templates in C++, e.g. The Blitz++ library.  Instead of 
 making opSub in I-B return a new Vector object, you make opSub return an 
 ExpressionTemplate object.  This is a little template struct that 
 contains a reference to I and to B, and knows how to subtract the two in 
 a component-wise manner.  The types of I and B are template parameters, 
 LeftT and RightT. Its interface also allows it to be treated just like a 
 Vector.  You can add a vector to it, subtract a Vector from it etc.
 
 Now we go and try to multiply that result times C.  The result of that 
 is a new MultExpressionTemplate with two parameters, the LeftT being our 
 previous SubExpressionTemplate!(Vector,Vector) and the RightT being Vector.
 
 Proceeding on in this way eventually the result of the math is of type:
 
 AddExpressionTemplate!(
    MultExpressionTemplate!(
       SubExpressionTemplate!(Vector,Vector),
       Vector),
    Vector)
 
 And you can see that we basically have a parse tree expressed as nested 
 templates.  The final trick is that a Vector.opAssign that takes an 
 ExpressionTemplate is provided and that method calls a method of the 
 expression template to finally trigger the calculation, like 
 expr.eval(this).  eval() has the top-level loop over the components of 
 the answer.
 
 *Why Existing solutions are insufficient*
 For that all that effort to actually be useful the compiler has to be 
 pretty aggressive about inlining everything so that in the end all the 
 temporary template structs and function calls go away and you're just 
 left with one eval call.  It can be tricky to get the code into just the 
 right configuration so that the compiler will do the inlining.  And even 
 then results will depend on the quality of the compiler.  My attempts 
 using MSVC++ several years ago always came out to be slower than the 
 naive version, but with about 10x the amount of code.  The code is also 
 pretty difficult to follow because of all those different types of 
 expression templates that have to be created.
 
 If you include matrices things are even trickier because there are 
 special cases like "A*B+a*C" that can be computed efficiently by a 
 single optimized routine.  You'd like to recognize such cases and turn 
 them into single calls to that fast routine.  You might also want to 
 recognize "A*B+C" as a special case of that with a==1.
 
 *How it could be improved*:
 ??? this is what I'd like to see explained better.
Great. One problem is that building ET libraries is exceedingly hard because of the multiple issues that must be solved simultaneously. To get a glimpse into the kind of problems that we are talking about, see e.g. http://www.adtmag.com/joop/carticle.aspx?ID=627. Libraries cannot afford to implement an optimal solution, partly because they have so much language-y mud to go through, and partly because the C++ language simply does not offer the code generation abilities that are needed.

I haven't sat down to design a linear algebra library using D's new abilities, but they are definitely helpful in that specifying an elementary matrix operation (the core of a loop) only needs one template. For example, if you want to specify the kernel of a matrix addition, you can say it as a type:

    MatrixOp!("lhs[i, j] + rhs[i, j]", rowwise, columnwise)

The type specifies the kernel of the operation, conventionally expressed in terms of lhs, rhs, i, and j, then the preferred iteration order, and finally the alternate order. The MatrixOp template defines a member function get(uint i, uint j) that expands the string given in the first argument.

The kernel of multiplication is:

    MatrixOp!("sum!(k, 0, n, lhs[i, k] * rhs[k, j])", rowwise, columnwise, blocking!(8), unrolling!(8))

(sum is not doable in today's D because D lacks the ability to inject a new alias name into a template upon invocation.) This type specifies how an element of the multiplication is obtained, again what preferred and alternate methods of iteration are possible, and also specifies blocking and unrolling suggestions.

In a complex expression, lhs and rhs will bind to subexpressions; it's easy to then generate the appropriate code upon assignment to the target, with heuristically chosen blocking and unrolling parameters based on the suggestions made by the subexpressions. All this is doable without difficulty through manipulating and generating code during compilation.

Andrei
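A rough C++ analogue of this kernel idea (the names MatrixOp/make_op/assign are hypothetical, invented for this sketch; the D version would carry the kernel as a compile-time string plus iteration hints): the per-element kernel is a callable, and assignment to the target generates the loop around it.

```cpp
#include <cstddef>
#include <vector>

// MatrixOp here just pairs a kernel with the result's dimensions; get(i, j)
// computes one element, like the expanded kernel string in the D sketch.
template <class Kernel>
struct MatrixOp {
    Kernel get;
    std::size_t rows, cols;
};

template <class K>
MatrixOp<K> make_op(K kernel, std::size_t rows, std::size_t cols) {
    return {kernel, rows, cols};
}

// The "assignment to the target" step: the kernel is expanded inside the
// loop, which is where blocking/unrolling decisions would be applied.
template <class K>
void assign(std::vector<double>& dst, const MatrixOp<K>& op) {
    dst.resize(op.rows * op.cols);
    for (std::size_t i = 0; i < op.rows; ++i)
        for (std::size_t j = 0; j < op.cols; ++j)
            dst[i * op.cols + j] = op.get(i, j);
}
```

What this runtime analogue cannot do, and the compile-time string version can, is inspect the kernel's text to choose iteration order, block size, and unrolling per expression before any code is emitted.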
Feb 09 2007
parent reply Don Clugston <dac nospam.com.au> writes:
Andrei Alexandrescu (See Website for Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 This is a misunderstanding. The syntax is not to be extended. It 
 stays fixed, and that is arguably a good thing. The semantics become 
 more flexible. For example, they will make it easy to write a matrix 
 operation:

 A = (I - B) * C + D

 and generate highly performant code from it. (There are many reasons 
 for which that's way harder than it looks.)
This is one thing I haven't really understood in the discussion. How do the current proposals help that case? From what I'm getting you're going to have to write every statement like the above as something like: mixin { ProcessMatrixExpr!( "A = (I - B) * C + D;" ); } How do you get from this mixin/string processing stuff to an API that I might actually be willing to use for common operations?
That's a great starting point for a good conversation. And no, you wouldn't do it with strings, you'd do it exactly as I wrote, with regular code. You could use strings if you want to use more math-looking operators and Greek symbols etc.
 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
I think that's exactly what Tom's getting at. He's asking for examples of how this would make life better for him and others. I think given your background you take it for granted that metaprogramming is the future. But D attracts folks from all kinds of walks of life, because it promises to be a kinder, gentler C++. So some people here aren't even sure why D needs templates at all. Fortran doesn't have 'em, after all. And Java just barely does.
Then such people would be hard-pressed to figure why tuples (which are essentially typelists, the single most used component in Loki) have engendered, according to Walter himself, a surge in language popularity. I don't even think metaprogramming is the future. I think that it's one of many helpful tools for writing good libraries, as are GC, modules, or interfaces. *Everybody* appreciates good libraries.
 Anyway, I think it would help get everyone on board if some specific 
 and useful examples were given of how this solves real problems (and 
 no I don't really consider black and white holes as solving real 
 problems.  I couldn't even find any non-astronomical uses of the terms 
 in a google search.)
The terms are recent and introduced by the Perl community. I happened to like the metaphor. The consecrated name is "null object pattern".
 For instance, it would be nice to see some more concrete discussion about
 * the "rails" case.
 * the X = A*B + C matrix/vector expressions case.
 * the case of generating bindings to scripting languages / ORB stubs
 * the Spirit/parser generator case

 So here's my take on vector expressions since that's the only one I 
 know anything about.

 *Problem statement*:
 Make the expression A=(I-B)*C+D efficient, where the variables are 
 large vectors (I'll leave out matrices for now).

 *Why it's hard*:
 The difficulty is that (ignoring SSE instruction etc) the most 
 efficient way to compute that is to do all operations component-wise.  So 
 instead of computing I-B then multiplying by C, you compute
     A[i] = (I[i]-B[i])*C[i]+D[i];
 for each i.  This eliminates the need to allocate large intermediate 
 vectors.
There's much more than that. (First off, things are more complicated in the case of matrices.) You want to be gentle on cache, so when you have any column-wise iteration you want to do matrix blocking. Depending on the expression, you want to select different block sizes. Then you also want to do optionally partial unrolling *on top of blocking*. Again, the amount of unrolling might depend on the expression. (You don't want to unroll large expressions.) At this point things are utterly messy. Iterating may also be better row-wise or column-wise, depending on the expression. Some subexpressions have a "preferential" row-wise scanning order and a "possible" column-wise order. Others must do things one way or the other. For all this the appropriate code must be generated.
 *Existing solutions*:
   Expression templates in C++, e.g. The Blitz++ library.  Instead of 
 making opSub in I-B return a new Vector object, you make opSub return 
 an ExpressionTemplate object.  This is a little template struct that 
 contains a reference to I and to B, and knows how to subtract the two 
 in a component-wise manner.  The types of I and B are template 
 parameters, LeftT and RightT. Its interface also allows it to be 
 treated just like a Vector.  You can add a vector to it, subtract a 
 Vector from it etc.

 Now we go and try to multiply that result times C.  The result of that 
 is a new MultExpressionTemplate with two parameters, the LeftT being 
 our previous SubExpressionTemplate!(Vector,Vector) and the RightT 
 being Vector.

 Proceeding on in this way eventually the result of the math is of type:

 AddExpressionTemplate!(
    MultExpressionTemplate!(
       SubExpressionTemplate!(Vector,Vector),
       Vector),
    Vector)

 And you can see that we basically have a parse tree expressed as 
 nested templates.  The final trick is that a Vector.opAssign that 
 takes an ExpressionTemplate is provided and that method calls a method 
 of the expression template to finally trigger the calculation, like 
 expr.eval(this).  eval() has the top-level loop over the components of 
 the answer.

 *Why Existing solutions are insufficient*
 For that all that effort to actually be useful the compiler has to be 
 pretty aggressive about inlining everything so that in the end all the 
 temporary template structs and function calls go away and you're just 
 left with one eval call.  It can be tricky to get the code into just 
 the right configuration so that the compiler will do the inlining.  
 And even then results will depend on the quality of the compiler.  My 
 attempts using MSVC++ several years ago always came out to be slower 
 than the naive version, but with about 10x the amount of code.  The 
 code is also pretty difficult to follow because of all those different 
 types of expression templates that have to be created.

 If you include matrices things are even trickier because there are 
 special cases like "A*B+a*C" that can be computed efficiently by a 
 single optimized routine.  You'd like to recognize such cases and turn 
 them into single calls to that fast routine.  You might also want to 
 recognize "A*B+C" as a special case of that with a==1.

 *How it could be improved*:
 ??? this is what I'd like to see explained better.
Great. One problem is that building ET libraries is exceedingly hard because of the multiple issues that must be solved simultaneously. To get a glimpse into the kind of problems we are talking about, see e.g. http://www.adtmag.com/joop/carticle.aspx?ID=627. Libraries cannot afford to implement an optimal solution, partly because they have so much language-y mud to go through, and partly because the C++ language simply does not offer the code generation abilities that are needed.

I haven't sat down to design a linear algebra library using D's new abilities, but they are definitely helpful in that specifying an elementary matrix operation (the core of a loop) only needs one template. For example, if you want to specify the kernel of a matrix addition, you can say it as a type:

MatrixOp!("lhs[i, j] + rhs[i, j]", rowwise, columnwise)

The type specifies the kernel of the operation, conventionally expressed in terms of lhs, rhs, i, and j, then the preferred iteration order, and finally the alternate order. The MatrixOp template defines a member function get(uint i, uint j) that expands the string given in the first argument.

The kernel of multiplication is:

MatrixOp!("sum!(k, 0, n, lhs[i, k] * rhs[k, j])", rowwise, columnwise, blocking!(8), unrolling!(8))

(sum is not doable in today's D because D lacks the ability to inject a new alias name into a template upon invocation.) This type specifies how an element of the multiplication is obtained, again what preferred and alternate methods of iteration are possible, and also specifies blocking and unrolling suggestions.

In a complex expression, lhs and rhs will bind to subexpressions; it's then easy to generate the appropriate code upon assignment to the target, with heuristically chosen blocking and unrolling parameters based on the suggestions made by the subexpressions. All this is doable without difficulty by manipulating and generating code during compilation.

Andrei
I've also done some experimentation on this problem. To do a perfect solution, you'd need to be able to identify when a variable is used twice, for example:

a = 3.0 * b + (c - a * 1.5);

The compiler knows that 'a' occurs twice, but that information is not transferred into expression templates. The compiler may also know the length of some of the arrays, and that is also lost.

I've almost completed a BLAS1 generator for x87 (not SSE), which spits out _optimal_ asm code for any combination of vector-scalar operations (vec+vec, vec-vec, vec*real, dot(vec, vec)), for any mixture of 32-, 64-, and 80-bit vectors. The cache effects are simple in this case. The code to do this is amazingly compact, thanks to tuples and the new 1.005 mixins (in fact, it is a small fraction of the size of an asm BLAS1 kernel). I noticed that the 'expression template' part of the code contained almost nothing specific to vectors; I think there's potential for a library mixin component to do it.

Of course, another approach would be to add

opAssignExpression(char [] expr)() { }

where "expr" is the verbatim expression.

a = b*5 + c*d;

would become

mixin a.opAssignExpression!("a=b*5+c*d");

since it seems that with expression templates, you're fighting the compiler, attempting to undo everything it has done.
Feb 11 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Don Clugston wrote:
 Andrei Alexandrescu (See Website for Email) wrote:
 Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 This is a misunderstanding. The syntax is not to be extended. It 
 stays fixed, and that is arguably a good thing. The semantics become 
 more flexible. For example, they will make it easy to write a matrix 
 operation:

 A = (I - B) * C + D

 and generate highly performant code from it. (There are many reasons 
 for which that's way harder than it looks.)
This is one thing I haven't really understood in the discussion. How do the current proposals help that case? From what I'm getting you're going to have to write every statement like the above as something like: mixin { ProcessMatrixExpr!( "A = (I - B) * C + D;" ); } How do you get from this mixin/string processing stuff to an API that I might actually be willing to use for common operations?
That's a great starting point for a good conversation. And no, you wouldn't do it with strings, you'd do it exactly as I wrote, with regular code. You could use strings if you want to use more math-looking operators and Greek symbols etc.
 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
I think that's exactly what Tom's getting at. He's asking for examples of how this would make like better for him and others. I think given your background you take it for granted that metaprogramming is the future. But D is attracts folks from all kinds of walks of life, because it promises to be a kinder, gentler C++. So some people here aren't even sure why D needs templates at all. Fortran doesn't have 'em after all. And Java just barely does.
Then such people would be hard-pressed to figure why tuples (which are essentially typelists, the single most used component in Loki) have engendered, according to Walter himself, a surge in language popularity. I don't even think metaprogramming is the future. I think that it's one of many helpful tools for writing good libraries, as are GC, modules, or interfaces. *Everybody* appreciates good libraries.
 Anyway, I think it would help get everyone on board if some specific 
 and useful examples were given of how this solves real problems (and 
 no I don't really consider black and white holes as solving real 
 problems.  I couldn't even find any non-astronomical uses of the 
 terms in a google search.)
The terms are recent and introduced by the Perl community. I happened to like the metaphor. The consecrated name is "null object pattern".
 For instance, it would be nice to see some more concrete discussion 
 about
 * the "rails" case.
 * the X = A*B + C matrix/vector expressions case.
 * the case of generating bindings to scripting languages / ORB stubs
 * the Spirit/parser generator case

 So here's my take on vector expressions since that's the only one I 
 know anything about.

 *Problem statement*:
 Make the expression A=(I-B)*C+D efficient, where the variables are 
 large vectors (I'll leave out matrices for now).

 *Why it's hard*:
 The difficulty is that (ignoring SSE instruction etc) the most 
 efficient way to compute that is to do all operations component-wise.  
 So instead of computing I-B then multiplying by C, you compute
     A[i] = (I[i]-B[i])*C[i]+D[i];
 for each i.  This eliminates the need to allocate large intermediate 
 vectors.
There's much more than that. (First off, things are more complicated in the case of matrices.) You want to be gentle on cache, so when you have any column-wise iteration you want to do matrix blocking. Depending on the expression, you want to select different block sizes. Then you also want to do optionally partial unrolling *on top of blocking*. Again, the amount of unrolling might depend on the expression. (You don't want to unroll large expressions.) At this point things are utterly messy. Iterating may also be better row-wise or column-wise, depending on the expression. Some subexpressions have a "preferential" row-wise scanning order and a "possible" column-wise order. Others must do things one way or the other. For all this the appropriate code must be generated.
 *Existing solutions*:
   Expression templates in C++, e.g. The Blitz++ library.  Instead of 
 making opSub in I-B return a new Vector object, you make opSub return 
 an ExpressionTemplate object.  This is a little template struct that 
 contains a reference to I and to B, and knows how to subtract the two 
 in a component-wise manner.  The types of I and B are template 
 parameters, LeftT and RightT. Its interface also allows it to be 
 treated just like a Vector.  You can add a vector to it, subtract a 
 Vector from it etc.

 Now we go and try to multiply that result times C.  The result of 
 that is a new MultExpressionTemplate with two parameters, the LeftT 
 being our previous SubExpressionTemplate!(Vector,Vector) and the 
 RightT being Vector.

 Proceeding on in this way eventually the result of the math is of type:

 AddExpressionTemplate!(
    MultExpressionTemplate!(
       SubExpressionTemplate!(Vector,Vector),
       Vector),
    Vector)

 And you can see that we basically have a parse tree expressed as 
 nested templates.  The final trick is that a Vector.opAssign that 
 takes an ExpressionTemplate is provided and that method calls a 
 method of the expression template to finally trigger the calculation, 
 like expr.eval(this).  eval() has the top-level loop over the 
 components of the answer.

 *Why Existing solutions are insufficient*
 For that all that effort to actually be useful the compiler has to be 
 pretty agressive about inlining everything so that in the end all the 
 temporary template structs and function calls go away and you're just 
 left with one eval call.  It can be tricky to get the code into just 
 the right configuration so that the compiler will do the inlining.  
 And even then results will depend on the quality of the compiler.  My 
 attempts using MSVC++ several years ago always came out to be slower 
 than the naive version, but with about 10x the amount of code.  The 
 code is also pretty difficult to follow because of all those 
 different types of expression templates that have to be created.

 If you include matrices things are even trickier because there are 
 special cases like "A*B+a*C" that can be computed efficiently by a 
 single optimized routine.  You'd like to recognize such cases and 
 turn them into single calls to that fast routine.  You might also 
 want to recognize "A*B+C" as a special case of that with a==1.

 *How it could be improved*:
 ??? this is what I'd like to see explained better.
Great. One problem is that building ET libraries is exceedingly hard because of the multiple issues that must be solved simultaneously. To get a glimpse into the kind of problems we are talking about, see e.g. http://www.adtmag.com/joop/carticle.aspx?ID=627. Libraries cannot afford to implement an optimal solution, partly because they have so much language-y mud to go through, and partly because the C++ language simply does not offer the code generation abilities that are needed.

I haven't sat down to design a linear algebra library using D's new abilities, but they are definitely helpful in that specifying an elementary matrix operation (the core of a loop) only needs one template. For example, if you want to specify the kernel of a matrix addition, you can say it as a type:

MatrixOp!("lhs[i, j] + rhs[i, j]", rowwise, columnwise)

The type specifies the kernel of the operation, conventionally expressed in terms of lhs, rhs, i, and j, then the preferred iteration order, and finally the alternate order. The MatrixOp template defines a member function get(uint i, uint j) that expands the string given in the first argument.

The kernel of multiplication is:

MatrixOp!("sum!(k, 0, n, lhs[i, k] * rhs[k, j])", rowwise, columnwise, blocking!(8), unrolling!(8))

(sum is not doable in today's D because D lacks the ability to inject a new alias name into a template upon invocation.) This type specifies how an element of the multiplication is obtained, again what preferred and alternate methods of iteration are possible, and also specifies blocking and unrolling suggestions.

In a complex expression, lhs and rhs will bind to subexpressions; it's then easy to generate the appropriate code upon assignment to the target, with heuristically chosen blocking and unrolling parameters based on the suggestions made by the subexpressions. All this is doable without difficulty by manipulating and generating code during compilation.

Andrei
I've also done some experimentation on this problem. To do a perfect solution, you'd need to be able to identify when a variable is used twice, for example, a = 3.0 * b + (c - a * 1.5); The compiler knows that 'a' occurs twice, but that information is not transferred into expression templates.
I think what you really need is aliasing information (e.g. two names referring to the same vector), and that is typically not easily computable.
 The compiler may also know the 
 length of some of the arrays, and that is also lost.
Ah, indeed. So that would mean better error checking and probably less runtime bounds checking.
 I've almost completed a BLAS1 generator for x87 (not SSE), which spits 
 out _optimal_ asm code for any combination of vector-scalar operations 
 (vec+vec, vec-vec, vec*real, dot(vec, vec), for any mixture of 32-,64-, 
 and 80- bit vectors. The cache effects are simple in this case.
 The code to do this is amazingly compact, thanks to tuples and the new 
 1.005 mixins (in fact, it is a small fraction of the size of an asm 
 BLAS1 kernel).
 I noticed that the 'expression template' part of the code contained 
 almost nothing specific to vectors; I think there's potential for a 
 library mixin component to do it.
 
 Of course another approach would be to add
 opAssignExpression(char [] expr)()
 {
 }
 where "expr" is the verbatim expression.
 a = b*5 + c*d;
 
 would become
 mixin a.opAssignExpression!("a=b*5+c*d");
 
 since it seems that with expression templates, you're fighting the 
 compiler, attempting to undo everything it has done.
Not everything - e.g., precedence of operators remains unchanged. In the string-based approach you'd have to write a little parser to reimplement operator precedences. But point taken. Andrei
Feb 11 2007
parent reply Don Clugston <dac nospam.com.au> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Don Clugston wrote:
 To do a perfect solution, you'd need to be able to identify when a 
 variable is used twice, for example,

 a = 3.0 * b + (c - a * 1.5);
 The compiler knows that 'a' occurs twice, but that information is not 
 transferred into expression templates.
I think what you really need is aliasing information (e.g. two names referring to the same vector), and that is typically not easily computable.
Ideally, yes. But there's value in knowing names which are *definitely* aliased, even if the majority of names have indeterminate aliasing. I think that most of the value comes from the simple cases, the ultimate example being the += operator. Example: for x87, the most severe limitation on expression complexity is not the number of temporaries, but the number of distinct vectors (you run out of pointer registers before you run out of floating-point stack).
 The compiler may also know the length of some of the arrays, and that 
 is also lost.
Ah, indeed. So that would mean better error checking and probably less runtime bounds checking.
Exactly. And there's implications for loop unrolling, too. (eg, loop unrolling is very attractive if you know the length is a multiple of 4).
 Of course another approach would be to add
 opAssignExpression(char [] expr)()
 {
 }
 where "expr" is the verbatim expression.
 a = b*5 + c*d;

 would become
 mixin a.opAssignExpression!("a=b*5+c*d");

 since it seems that with expression templates, you're fighting the 
 compiler, attempting to undo everything it has done.
Not everything - e.g., precedence of operators remains unchanged. In the string-based approach you'd have to write a little parser to reimplement operator precedences. But point taken.
I've thought about this a bit more. The compiler has done some useful constant folding for you, and parsed all the literals. So ideally you want it in a partially digested form. My implementation converts the expression into a tuple of values (basically a stack), and a char [] of postfix operations to be performed on that stack. Something like this is probably close to ideal, especially if the compiler could remove known duplicate (aliased) values from the tuple. There would need to be a canonical form for the operations, though -- postfix is great for Forth, but looks thoroughly out of place in D.

eg a = 3.0 * b + (c - a * 1.5); becomes "v2 v3 * v4 v5 v1 * - + v1 =" with the tuple being v[1] = a, v[2] = 3.0, v[3] = b, v[4] = c, v[5] = 1.5.

But perhaps a char [] ConvertToPostfix(char [] expr) library metafunction is all that's required.
Feb 12 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Don Clugston wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Don Clugston wrote:
 To do a perfect solution, you'd need to be able to identify when a 
 variable is used twice, for example,

 a = 3.0 * b + (c - a * 1.5);
 The compiler knows that 'a' occurs twice, but that information is not 
 transferred into expression templates.
I think what you really need is aliasing information (e.g. two names referring to the same vector), and that is typically not easily computable.
Ideally, yes. But there's value in knowing names which are *definitely* aliased, even if the majority of names have indeterminate aliasing. I think that most of the value comes from the simple cases, the ultimate example being the += operator. Example: for x87, the most severe limitation on expression complexity is not the number of temporaries, but the number of distinct vectors (you run out of pointer registers before you run out of floating-point stack).
 The compiler may also know the length of some of the arrays, and that 
 is also lost.
Ah, indeed. So that would mean better error checking and probably less runtime bounds checking.
Exactly. And there's implications for loop unrolling, too. (eg, loop unrolling is very attractive if you know the length is a multiple of 4).
 Of course another approach would be to add
 opAssignExpression(char [] expr)()
 {
 }
 where "expr" is the verbatim expression.
 a = b*5 + c*d;

 would become
 mixin a.opAssignExpression!("a=b*5+c*d");

 since it seems that with expression templates, you're fighting the 
 compiler, attempting to undo everything it has done.
Not everything - e.g., precedence of operators remains unchanged. In the string-based approach you'd have to write a little parser to reimplement operator precedences. But point taken.
I've thought about this a bit more. The compiler has done some useful constant folding for you, and parsed all the literals. So ideally you want it in a partially digested form. My implementation converts the expression into a tuple of values (basically a stack), and a char [] of postfix operations to be performed on that stack. Something like this is probably close to ideal, especially if the compiler could remove known duplicate (aliased) values from the tuple. There would need to be a canonical form for the operations, though -- postfix is great for Forth, but looks thoroughly out of place in D.

eg a = 3.0 * b + (c - a * 1.5); becomes "v2 v3 * v4 v5 v1 * - + v1 =" with the tuple being v[1] = a, v[2] = 3.0, v[3] = b, v[4] = c, v[5] = 1.5.

But perhaps a char [] ConvertToPostfix(char [] expr) library metafunction is all that's required.
Or go with prefix notation, and use some sort of nesting indicator, like, I dunno, uh, parentheses. ;-)

char [] ConvertToLisp(char [] dexpr)

Seriously though, it may be that that's pretty much what we're going to need in the end. Greenspun's tenth rule and all. If you want to manipulate a textual form of parse trees in the most convenient way possible, you're pretty much going to end up with Lisp S-expressions. You'll just want to get out of Lisp in the end to spit out the D code:

char [] ConvertLispToD(char [] sexpr)

I could imagine worse things happening. D becomes a better C++ with embedded compile-time Lisp for extensions.

--bb
Feb 12 2007
prev sibling next sibling parent reply kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Tom S wrote:
 
 When the compiler is used for the processing of a DSL, it simply masks 
 a simple step that an external tool would do. It's not much of a 
 problem to run an external script or program, that will read the DSL 
 and output D code, while perhaps also doing other stuff, connecting 
 with databases or making coffee.
This is a misrepresentation. Code generation with external tools has been done forever; it is never easy (unless all you need is a table of logarithms), and it always incurs a non-amortized cost of parsing the DSL _plus_ the host language. Look at lex and yacc, the prototypical examples. They aren't small or simple nor perfectly integrated with the host language. And their DSL is extremely well understood. That's why there's no proliferation of lex&yacc-like tools for other DSLs (I seem to recall there was an embedded SQL that got lost in the noise) simply because the code generator would basically have to rewrite a significant part of the compiler to do anything interesting.
Are you stating that D will address all these concerns? Without any detrimental side-effects? [snip]
 I think there is a lot of apprehension and misunderstanding surrounding 
 what metacode is able and supposed to do or simplify. Please, let's 
 focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There have been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appear to be "differences" in perspective?

Since Walter brought RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.

- Kris
Feb 09 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Tom S wrote:

 When the compiler is used for the processing of a DSL, it simply 
 masks a simple step that an external tool would do. It's not much of 
 a problem to run an external script or program, that will read the 
 DSL and output D code, while perhaps also doing other stuff, 
 connecting with databases or making coffee.
This is a misrepresentation. Code generation with external tools has been done forever; it is never easy (unless all you need is a table of logarithms), and it always incurs a non-amortized cost of parsing the DSL _plus_ the host language. Look at lex and yacc, the prototypical examples. They aren't small or simple nor perfectly integrated with the host language. And their DSL is extremely well understood. That's why there's no proliferation of lex&yacc-like tools for other DSLs (I seem to recall there was an embedded SQL that got lost in the noise) simply because the code generator would basically have to rewrite a significant part of the compiler to do anything interesting.
Are you stating that D will address all these concerns? Without any detrimental side-effects?
It will naturally address the concerns exactly by not relying on an external generator. This was my point all along.
 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There have been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appear to be "differences" in perspective?

Since Walter brought RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.
I saw the request. My problem is that I don't know much about DeRailed, and that I don't have time to invest in it. My current understanding is that DeRailed's approach is basically dynamic, a domain that metaprogramming can help somewhat, but not a lot. The simplest example is to define variant types (probably they are already there) using templates. Also possibly there are some code generation aspects (e.g. for various platforms), which I understand RoR does a lot, that could be solved using metacode.

On the other hand, there are many good examples coming from C++ (e.g. most of Boost and Loki) offering good experimental evidence that metacode can help a whole lot. I've tried to add a couple of ad-hoc examples in posts, but I understand they can't qualify because they don't directly and obviously help DeRailed.

Andrei
Feb 09 2007
parent reply kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There have been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appear to be "differences" in perspective?

Since Walter brought RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.
I saw the request. My problem is that I don't know much about DeRailed, and that I don't have time to invest in it.
You wrote (in the past):

====
I think things would be better if we had better libraries and some success stories.
====

We have better libraries now. And we're /trying/ to build a particular success story. Is that not enough reason to illustrate some practical examples, busy though you are? I mean, /we're/ putting in a lot of effort, regardless of how busy our personal lives may be. All we're asking for are some practical and relevant examples as to why advanced DSL support in D will assist us so much.
 My current understanding is 
 that DeRailed's approach is basically dynamic, a domain that 
 metaprogramming can help somewhat, but not a lot. The simplest example 
 is to define variant types (probably they are already there) using 
 templates. Also possibly there are some code generation aspects (e.g. 
 for various platforms), which I understand RoR does a lot, that could be 
 solved using metacode.
I read "Possibly there ... Could be solved ..." Forgive me, Andrei, but that really does not assist in comprehending what it is that you say we fail to understand. Surely you would agree? Variant types can easily be handled by templates today, but without an example it's hard to tell if that's what you were referring to.

On Feb 7th, you wrote (in reference to the DSL discourse):

=====
Walter gave another good case study: Ruby on Rails. The success of Ruby on Rails has a lot to do with its ability to express abstractions that were a complete mess to deal with in concreteland.
=====

On Feb 7th, Walter wrote (emphasis added):

=====
Good question. The simple answer is look what Ruby on Rails did for Ruby. Ruby's a good language, but the killer app for it was RoR. RoR is what drove adoption of Ruby through the roof. /Enabling ways for sophisticated DSLs to interoperate with D will enable such applications/
=====

You see that subtle but important reference to RoR and DSL in the above? What we're asking for is that one of you explain just exactly what is /meant/ by that. DeRailed faces many of the same issues RoR did, so some relevant examples pertaining to the RoR claim may well assist DeRailed. Do we have to get down on our knees and beg?

~ Kris
Feb 09 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There's been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appears to be "differences" in perspective? Since Walter brough RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.
I saw the request. My problem is that I don't know much about DeRailed, and that I don't have time to invest in it.
You wrote (in the past): ==== I think things would be better if we had better libraries and some success stories. ==== We have better libraries now. And we're /trying/ to build a particular success story. Is that not enough reason to illustrate some practical examples, busy though you are? I mean, /we're/ putting in a lot of effort, regardless of how busy our personal lives may be. All we're asking for are some practical and relevant examples as to why advanced DSL support in D will assist us so much.
 My current understanding is that DeRailed's approach is basically 
 dynamic, a domain that metaprogramming can help somewhat, but not a 
 lot. The simplest example is to define variant types (probably they 
 are already there) using templates. Also possibly there are some code 
 generation aspects (e.g. for various platforms), which I understand 
 RoR does a lot, that could be solved using metacode.
I read "Possibly there ... Could be solved ..." Forgive me, Andrei, but that really does not assist in comprehending what it is that you say we fail to understand. Surely you would agree? Variant types can easily be handled by templates today, but without an example it's hard to tell if that's what you were referring to. On Feb 7th, you wrote (in reference to the DSL discourse): ===== Walter gave another good case study: Ruby on Rails. The success of Ruby on Rails has a lot to do with its ability to express abstractions that were a complete mess to deal with in concreteland. ==== On Feb 7th, Walter wrote (emphasis added): ===== Good question. The simple answer is look what Ruby on Rails did for Ruby. Ruby's a good language, but the killer app for it was RoR. RoR is what drove adoption of Ruby through the roof. /Enabling ways for sophisticated DSLs to interoperate with D will enable such applications/ ===== You see that subtle but important reference to RoR and DSL in the above? What we're asking for is that one of you explain just exactly what is /meant/ by that. DeRailed faces many of the same issues RoR did, so some relevant examples pertaining to the RoR claim may well assist DeRailed. Do we have to get down on our knees and beg?
Aw, give me a break. Andrei
Feb 09 2007
next sibling parent reply kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 
 Andrei Alexandrescu (See Website For Email) wrote:

 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There's been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appears to be "differences" in perspective? Since Walter brough RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.
I saw the request. My problem is that I don't know much about DeRailed, and that I don't have time to invest in it.
You wrote (in the past): ==== I think things would be better if we had better libraries and some success stories. ==== We have better libraries now. And we're /trying/ to build a particular success story. Is that not enough reason to illustrate some practical examples, busy though you are? I mean, /we're/ putting in a lot of effort, regardless of how busy our personal lives may be. All we're asking for are some practical and relevant examples as to why advanced DSL support in D will assist us so much.
 My current understanding is that DeRailed's approach is basically 
 dynamic, a domain that metaprogramming can help somewhat, but not a 
 lot. The simplest example is to define variant types (probably they 
 are already there) using templates. Also possibly there are some code 
 generation aspects (e.g. for various platforms), which I understand 
 RoR does a lot, that could be solved using metacode.
I read "Possibly there ... Could be solved ..." Forgive me, Andrei, but that really does not assist in comprehending what it is that you say we fail to understand. Surely you would agree? Variant types can easily be handled by templates today, but without an example it's hard to tell if that's what you were referring to. On Feb 7th, you wrote (in reference to the DSL discourse): ===== Walter gave another good case study: Ruby on Rails. The success of Ruby on Rails has a lot to do with its ability to express abstractions that were a complete mess to deal with in concreteland. ==== On Feb 7th, Walter wrote (emphasis added): ===== Good question. The simple answer is look what Ruby on Rails did for Ruby. Ruby's a good language, but the killer app for it was RoR. RoR is what drove adoption of Ruby through the roof. /Enabling ways for sophisticated DSLs to interoperate with D will enable such applications/ ===== You see that subtle but important reference to RoR and DSL in the above? What we're asking for is that one of you explain just exactly what is /meant/ by that. DeRailed faces many of the same issues RoR did, so some relevant examples pertaining to the RoR claim may well assist DeRailed. Do we have to get down on our knees and beg?
Aw, give me a break. Andrei
Hrm ... Perhaps it should be pointed out that some of us are going to great personal expense and effort to further D in the marketplace. Because of that, what you read above is a sincere and honest effort to elicit concrete assistance based upon the relevant claims made (above). Your response is not quite as helpful as it might be; but then it would appear to have been intended for another effect? What we really need is some assistance instead. Thank you; - Kris
Feb 09 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
kris wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There's been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appears to be "differences" in perspective? Since Walter brough RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.
I saw the request. My problem is that I don't know much about DeRailed, and that I don't have time to invest in it.
You wrote (in the past): ==== I think things would be better if we had better libraries and some success stories. ==== We have better libraries now. And we're /trying/ to build a particular success story. Is that not enough reason to illustrate some practical examples, busy though you are? I mean, /we're/ putting in a lot of effort, regardless of how busy our personal lives may be. All we're asking for are some practical and relevant examples as to why advanced DSL support in D will assist us so much.
 My current understanding is that DeRailed's approach is basically 
 dynamic, a domain that metaprogramming can help somewhat, but not a 
 lot. The simplest example is to define variant types (probably they 
 are already there) using templates. Also possibly there are some 
 code generation aspects (e.g. for various platforms), which I 
 understand RoR does a lot, that could be solved using metacode.
I read "Possibly there ... Could be solved ..." Forgive me, Andrei, but that really does not assist in comprehending what it is that you say we fail to understand. Surely you would agree? Variant types can easily be handled by templates today, but without an example it's hard to tell if that's what you were referring to. On Feb 7th, you wrote (in reference to the DSL discourse): ===== Walter gave another good case study: Ruby on Rails. The success of Ruby on Rails has a lot to do with its ability to express abstractions that were a complete mess to deal with in concreteland. ==== On Feb 7th, Walter wrote (emphasis added): ===== Good question. The simple answer is look what Ruby on Rails did for Ruby. Ruby's a good language, but the killer app for it was RoR. RoR is what drove adoption of Ruby through the roof. /Enabling ways for sophisticated DSLs to interoperate with D will enable such applications/ ===== You see that subtle but important reference to RoR and DSL in the above? What we're asking for is that one of you explain just exactly what is /meant/ by that. DeRailed faces many of the same issues RoR did, so some relevant examples pertaining to the RoR claim may well assist DeRailed. Do we have to get down on our knees and beg?
Aw, give me a break. Andrei
Hrm ... Perhaps it should be pointed out that some of us are going to great personal expense and effort to further D in the marketplace. Because of that, what you read above is both a sincere and honest effort to elicit concrete assistance based upon relevant claims made (above). Your response is not quite as helpful as it might be; but then it would appear to have been intended for other effect? What we really need is some assistance instead. Thank you; - Kris
This is Kafkaesque, pure and simple. What am I supposed to reply to this, that I took a 4x cut in pay to do 4x more work in grad school and that therefore I must be helped? Am I to be construed as a bad guy because I don't have the time or the inclination to look into your favorite project? If I don't learn RoR and DeRailed and consequently fail to produce evidence that code generation will help DeRailed, will that serve as proof that code generation is good for nothing? Will I be sued? Will Judge Judy prosecute me on TV? I didn't self-righteously ask how DeRailed can help me with the machine learning and natural language processing problems that I'm tackling.

Listen. I think it's great that you work on a project that you like, and I truly hope it will be successful, and that you will have all the fun in the process. As for me, I already spend too much time deciding what _not_ to do, so fulfilling someone else's sense of entitlement is hardly high on my list. I'm doing my best to participate in this community and to discuss things that I find interesting, with whatever arguments I'm capable of coming up with. Just like the next guy. Please allow me to continue this. Thanks.

Good luck,

Andrei
Feb 09 2007
parent kris <foo bar.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 
 Andrei Alexandrescu (See Website For Email) wrote:

 kris wrote:

 Andrei Alexandrescu (See Website For Email) wrote:

 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There's been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appears to be "differences" in perspective? Since Walter brough RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.
I saw the request. My problem is that I don't know much about DeRailed, and that I don't have time to invest in it.
You wrote (in the past): ==== I think things would be better if we had better libraries and some success stories. ==== We have better libraries now. And we're /trying/ to build a particular success story. Is that not enough reason to illustrate some practical examples, busy though you are? I mean, /we're/ putting in a lot of effort, regardless of how busy our personal lives may be. All we're asking for are some practical and relevant examples as to why advanced DSL support in D will assist us so much.
 My current understanding is that DeRailed's approach is basically 
 dynamic, a domain that metaprogramming can help somewhat, but not a 
 lot. The simplest example is to define variant types (probably they 
 are already there) using templates. Also possibly there are some 
 code generation aspects (e.g. for various platforms), which I 
 understand RoR does a lot, that could be solved using metacode.
I read "Possibly there ... Could be solved ..." Forgive me, Andrei, but that really does not assist in comprehending what it is that you say we fail to understand. Surely you would agree? Variant types can easily be handled by templates today, but without an example it's hard to tell if that's what you were referring to. On Feb 7th, you wrote (in reference to the DSL discourse): ===== Walter gave another good case study: Ruby on Rails. The success of Ruby on Rails has a lot to do with its ability to express abstractions that were a complete mess to deal with in concreteland. ==== On Feb 7th, Walter wrote (emphasis added): ===== Good question. The simple answer is look what Ruby on Rails did for Ruby. Ruby's a good language, but the killer app for it was RoR. RoR is what drove adoption of Ruby through the roof. /Enabling ways for sophisticated DSLs to interoperate with D will enable such applications/ ===== You see that subtle but important reference to RoR and DSL in the above? What we're asking for is that one of you explain just exactly what is /meant/ by that. DeRailed faces many of the same issues RoR did, so some relevant examples pertaining to the RoR claim may well assist DeRailed. Do we have to get down on our knees and beg?
Aw, give me a break. Andrei
Hrm ... Perhaps it should be pointed out that some of us are going to great personal expense and effort to further D in the marketplace. Because of that, what you read above is both a sincere and honest effort to elicit concrete assistance based upon relevant claims made (above). Your response is not quite as helpful as it might be; but then it would appear to have been intended for other effect? What we really need is some assistance instead. Thank you; - Kris
This is Kafkian, pure and simple. What am I supposed to reply to this, that I took a 4x cut in pay to do 4x more work in grad school and that therefore I must be helped? Am I to be construed into a bad guy because I don't have the time or the inclination to look into your favorite project? If I don't learn RoR and DeRailed and consequently fail to produce evidence that code generation will help DeRailed, will that serve as proof that code generation is good for nothing? Will I be sued? Will Judge Judy prosecute me on TV? I didn't self-righteously ask how DeRailed can help me with the machine learning and natural language processing problems that I'm tackling.
What we're asking for, Andrei, is very simple. We're asking (ad nauseam) for you to provide examples of how DSLs can assist in the RoR-style environment. The one you and Walter have referred to. You've chosen to explicitly avoid doing that, and instead resort to twisting the subject matter around to something that suits your purpose. One could be entirely forgiven for concluding that you have *no* concrete examples, and that all this talk about DSLs in the context of RoR (initiated by Walter and backed by yourself) is nothing but empty rhetoric and/or hand-waving. That conclusion would be more than just a little disappointing, so I certainly hope it is not the case?
 Listen. I think it's great that you work on a project that you like, and 
 I truly hope it will be successful, and that you will have all the fun 
 in the process. As of me, I already spend too much time deciding what 
 _not_ to do, so fulfilling someone else's sense of entitlement is hardly 
 high on my list. I'm doing my best to participate to this community and 
 to discuss things that I find interesting, with whatever arguments I'm 
 capable of coming with. Just like the next guy. Please allow me to 
 continue this. Thanks.
 
 
 Good luck,
 
 Andrei
Since you bring it up, working on DeRailed is hardly at the top of my list of exciting and/or interesting projects. However, it is an area I have experience in and it seems like a good platform to further D in the market. Like you, I have rather limited time for this, yet put aside many other important things in my personal life to make this happen; have done for several years now. I don't go parading that particular fact, but you bring it up and it seems you feel many participants here have nothing better to do.

Thus, it's hard to see your approach here as a furtherance of D in any manner. It would seem you're not helping D, yourself, or anyone else by refusing to help us understand the very concepts that you accuse us of not comprehending. I can't think of a higher form of arrogance. Perhaps you can rectify that?

- Kris
Feb 09 2007
prev sibling parent reply Kyle Furlong <kylefurlong gmail.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There's been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appears to be "differences" in perspective? Since Walter brough RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.
I saw the request. My problem is that I don't know much about DeRailed, and that I don't have time to invest in it.
You wrote (in the past): ==== I think things would be better if we had better libraries and some success stories. ==== We have better libraries now. And we're /trying/ to build a particular success story. Is that not enough reason to illustrate some practical examples, busy though you are? I mean, /we're/ putting in a lot of effort, regardless of how busy our personal lives may be. All we're asking for are some practical and relevant examples as to why advanced DSL support in D will assist us so much.
 My current understanding is that DeRailed's approach is basically 
 dynamic, a domain that metaprogramming can help somewhat, but not a 
 lot. The simplest example is to define variant types (probably they 
 are already there) using templates. Also possibly there are some code 
 generation aspects (e.g. for various platforms), which I understand 
 RoR does a lot, that could be solved using metacode.
I read "Possibly there ... Could be solved ..." Forgive me, Andrei, but that really does not assist in comprehending what it is that you say we fail to understand. Surely you would agree? Variant types can easily be handled by templates today, but without an example it's hard to tell if that's what you were referring to. On Feb 7th, you wrote (in reference to the DSL discourse): ===== Walter gave another good case study: Ruby on Rails. The success of Ruby on Rails has a lot to do with its ability to express abstractions that were a complete mess to deal with in concreteland. ==== On Feb 7th, Walter wrote (emphasis added): ===== Good question. The simple answer is look what Ruby on Rails did for Ruby. Ruby's a good language, but the killer app for it was RoR. RoR is what drove adoption of Ruby through the roof. /Enabling ways for sophisticated DSLs to interoperate with D will enable such applications/ ===== You see that subtle but important reference to RoR and DSL in the above? What we're asking for is that one of you explain just exactly what is /meant/ by that. DeRailed faces many of the same issues RoR did, so some relevant examples pertaining to the RoR claim may well assist DeRailed. Do we have to get down on our knees and beg?
Aw, give me a break. Andrei
<rant>
Have you even looked at Tango? Have you seen the amount of effort that the team has put in? Do you understand what it means for the success of D as a real world language?

Do you realize that over the past week, you've been repeatedly ignoring and belittling one of the most code productive and prolific library writers in the D community? Do you understand that he has real life issues with the way D is specified and implemented that affect the most coherent library effort we have, of which he is (along with Sean Kelly) the primary code contributor? Do you comprehend that as you seem to have the ear of Walter in language design decisions, kris necessarily must come to you to try and understand said decisions?

For Bob's sake, have some respect for who this man is. Try and be open minded about his concerns. If you are interested in D being a language that can solve problems for "practical" people, start listening to kris, and stop using D as your personal vehicle for language research.
</rant>

All that aside, I do respect you for the contributions I know you have made in C++ land and here in D land. I hope you continue to enjoy D and contribute your considerable help in its design. However, I do believe that you should take a step back and realize the context in which you are discussing.
Feb 09 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Kyle Furlong wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There's been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appears to be "differences" in perspective? Since Walter brough RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.
I saw the request. My problem is that I don't know much about DeRailed, and that I don't have time to invest in it.
You wrote (in the past): ==== I think things would be better if we had better libraries and some success stories. ==== We have better libraries now. And we're /trying/ to build a particular success story. Is that not enough reason to illustrate some practical examples, busy though you are? I mean, /we're/ putting in a lot of effort, regardless of how busy our personal lives may be. All we're asking for are some practical and relevant examples as to why advanced DSL support in D will assist us so much.
 My current understanding is that DeRailed's approach is basically 
 dynamic, a domain that metaprogramming can help somewhat, but not a 
 lot. The simplest example is to define variant types (probably they 
 are already there) using templates. Also possibly there are some 
 code generation aspects (e.g. for various platforms), which I 
 understand RoR does a lot, that could be solved using metacode.
I read "Possibly there ... Could be solved ..." Forgive me, Andrei, but that really does not assist in comprehending what it is that you say we fail to understand. Surely you would agree? Variant types can easily be handled by templates today, but without an example it's hard to tell if that's what you were referring to. On Feb 7th, you wrote (in reference to the DSL discourse): ===== Walter gave another good case study: Ruby on Rails. The success of Ruby on Rails has a lot to do with its ability to express abstractions that were a complete mess to deal with in concreteland. ==== On Feb 7th, Walter wrote (emphasis added): ===== Good question. The simple answer is look what Ruby on Rails did for Ruby. Ruby's a good language, but the killer app for it was RoR. RoR is what drove adoption of Ruby through the roof. /Enabling ways for sophisticated DSLs to interoperate with D will enable such applications/ ===== You see that subtle but important reference to RoR and DSL in the above? What we're asking for is that one of you explain just exactly what is /meant/ by that. DeRailed faces many of the same issues RoR did, so some relevant examples pertaining to the RoR claim may well assist DeRailed. Do we have to get down on our knees and beg?
Aw, give me a break. Andrei
<rant> Have you even looked at Tango? Have you seen the amount of effort that the team has put in? Do you understand what it means for the success of D as a real world language? Do you realize that over the past week, you've been repeatedly ignoring and belittling one of the most code productive and prolific library writers in the D community? Do you understand that he has real life issues with the way D is specified and implemented that affect the most coherent library effort we have, of which he is (along with Sean Kelly) the primary code contributor? Do you comprehend that as you seem to have the ear of Walter in language design decisions, kris necessarily must come to you to try and understand said decisions? For Bob's sake, have some respect for who this man is. Try and be open minded about his concerns. If you are interested in D being a language that can solve problems for "practical" people, start listening to kris, and stop using D as your personal vehicle for language research. </rant>
Great. There is nothing personal ever, just in case that's not clear, nor did I mean disrespect to anyone. As a rule, on the Usenet I reply "ad postum", not "ad hominem". If there were a newsreader feature to hide names, I'd think of using it, if it weren't for losing much of the sense of social dynamics. For the record, I'm not doing language research, officially or not. So much for presuppositions. :o|
 All that aside, I do respect you for the contributions I know you have 
 made in C++ land and here in D land. I hope you continue to enjoy D and 
 contribute your considerable help in its design.
 
 However, I do believe that you should take a step back and realize the 
 context in which you are discussing.
If one doesn't like a post, is one allowed to not reply to it? Andrei
Feb 09 2007
next sibling parent reply Kyle Furlong <kylefurlong gmail.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Kyle Furlong wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 kris wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 I think there is a lot of apprehension and misunderstanding 
 surrounding what metacode is able and supposed to do or simplify. 
 Please, let's focus on understanding _before_ forming an opinion.
If that's the case, then perhaps it's due to a lack of solid & practical examples for people to examine? There's been at least two requests recently for an example of how this could help DeRailed in a truly practical sense, yet both of those requests appear to have been ignored thus far. I suspect that such practical examples would help everyone understand since, as you suggest, there appears to be "differences" in perspective? Since Walter brought RoR up, and you apparently endorsed his point, perhaps one of you might enlighten us via those relevant examples? There's a request in the original post on "The DeRailed Challenge" for just such an example ... don't feel overtly obliged; but it might do something to offset the misunderstanding you believe is prevalent.
I saw the request. My problem is that I don't know much about DeRailed, and that I don't have time to invest in it.
You wrote (in the past): ==== I think things would be better if we had better libraries and some success stories. ==== We have better libraries now. And we're /trying/ to build a particular success story. Is that not enough reason to illustrate some practical examples, busy though you are? I mean, /we're/ putting in a lot of effort, regardless of how busy our personal lives may be. All we're asking for are some practical and relevant examples as to why advanced DSL support in D will assist us so much.
 My current understanding is that DeRailed's approach is basically 
 dynamic, a domain that metaprogramming can help somewhat, but not a 
 lot. The simplest example is to define variant types (probably they 
 are already there) using templates. Also possibly there are some 
 code generation aspects (e.g. for various platforms), which I 
 understand RoR does a lot, that could be solved using metacode.
I read "Possibly there ... Could be solved ..." Forgive me, Andrei, but that really does not assist in comprehending what it is that you say we fail to understand. Surely you would agree? Variant types can easily be handled by templates today, but without an example it's hard to tell if that's what you were referring to. On Feb 7th, you wrote (in reference to the DSL discourse): ===== Walter gave another good case study: Ruby on Rails. The success of Ruby on Rails has a lot to do with its ability to express abstractions that were a complete mess to deal with in concreteland. ==== On Feb 7th, Walter wrote (emphasis added): ===== Good question. The simple answer is look what Ruby on Rails did for Ruby. Ruby's a good language, but the killer app for it was RoR. RoR is what drove adoption of Ruby through the roof. /Enabling ways for sophisticated DSLs to interoperate with D will enable such applications/ ===== You see that subtle but important reference to RoR and DSL in the above? What we're asking for is that one of you explain just exactly what is /meant/ by that. DeRailed faces many of the same issues RoR did, so some relevant examples pertaining to the RoR claim may well assist DeRailed. Do we have to get down on our knees and beg?
Aw, give me a break. Andrei
<rant> Have you even looked at Tango? Have you seen the amount of effort that the team has put in? Do you understand what it means for the success of D as a real world language? Do you realize that over the past week, you've been repeatedly ignoring and belittling one of the most code productive and prolific library writers in the D community? Do you understand that he has real life issues with the way D is specified and implemented that affect the most coherent library effort we have, of which he is (along with Sean Kelly) the primary code contributor? Do you comprehend that as you seem to have the ear of Walter in language design decisions, kris necessarily must come to you to try and understand said decisions? For Bob's sake, have some respect for who this man is. Try and be open minded about his concerns. If you are interested in D being a language that can solve problems for "practical" people, start listening to kris, and stop using D as your personal vehicle for language research. </rant>
Great. There is nothing personal ever, just in case that's not clear, nor did I mean disrespect to anyone. As a rule, on the Usenet I reply "ad postum", not "ad hominem". If there were a newsreader feature to hide names, I'd think of using it, if it weren't for losing much of the sense of social dynamics. For the record, I'm not doing language research, officially or not. So much for presuppositions. :o|
 All that aside, I do respect you for the contributions I know you have 
 made in C++ land and here in D land. I hope you continue to enjoy D 
 and contribute your considerable help in its design.

 However, I do believe that you should take a step back and realize the 
 context in which you are discussing.
If one doesn't like a post, is one allowed to not reply to it? Andrei
Clearly you haven't understood my intention. I propose to you that such an "anonymous" newsreader would be a terrible thing. As much as you would like it, human context DOES have a role in any discussion. Take, for example, the fact you are continuing to ignore: kris is not just some guy with an idea of where D should go, as I've already outlined. If you simply reply to his posts with the idea that you are having an abstract argument about the ideals of creating a language, you will fail to address any of his real issues.

Even if it were the case that you ignored all personality, you still have the monumental arrogance to presume that you do not have to supply evidence for your assertions. I understand that you are a successful computer scientist. I accept that you have had success with some books on the subject. I respect that you currently research in the field. None of this allows you the freedoms you have taken in the discussions you have been having.
Feb 10 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Kyle Furlong wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 I understand that you are a successful computer scientist. I accept that 
 you have had success with some books on the subject. I respect that you 
 currently research in the field. None of this allows you the freedoms 
 you have taken in the discussions you have been having.
Wow, this is sounding sillier and sillier.

It seems pretty clear to me that the answer is simply that Andrei doesn't really know enough about RoR to give a concrete example of how better metaprogramming would be useful for DeRailed. He pretty much said as much in the last mail. But it would be good if he gave some more practical, concrete examples of places where it would help.

Note that what's going on here is *talk* about features that may or may not get into DMD any time soon. In fact you could say this whole discussion has been about *preventing* features from getting introduced. At least in an ad-hoc manner. The meat of this metaprogramming discussion started with Walter saying he was thinking of adding compile-time regexps to the language. Without any discussion about whether that's a good thing or not and what the ramifications are, it's just going to happen, whether it's good for D or not.

So the question becomes: what should D look like? Rather than ad-hoc features, what do we really want D's metaprogramming to look like? To me the discussion has all been about figuring out a clear picture of what things *should* look like in the future w.r.t. metaprogramming, in order to convince Walter that throwing things in ad-hoc is not the way to go. Or maybe to find out that what he's thinking of throwing in isn't so ad-hoc after all, and actually makes for a nice evolutionary step towards where we want to go.

As for whether that would help DeRailed, I dunno. Sounds like kris has a pretty clear idea that reflection would be much more useful to DeRailed. As for whether DeRailed will help D, I also don't know. I kinda wonder, though, because if someone wants RoR, why wouldn't they just use RoR? Seems like it's a tough battle to unseat a champ like that. I would think that D would have a better shot at dominating by providing a great solution to a niche which is currently underserved. But that's just my opinion.
Also I don't do web development, so that may be another part of it. But the description given of what Rails does so well, with all kinds of dynamic this and on-the-fly that, really sounds more like what a scripting language is good at than a static compile-time language. I mean, the dominant web languages are Perl, Python, Ruby, PHP, and Javascript. Not a compiled language in the bunch. There must be a reason for that. Even Java is interpreted bytecode.

As for Andrei having Walter's ear: I think Andrei has Walter's ear mostly because Andrei is interested in the same kinds of things that interest Walter. I think everyone can tell by now that Walter pretty much works on solving the problems that interest him. Right now (and pretty much ever since 'static if') the thing that seems to interest him most is metaprogramming. Hopefully some day he'll get back to being interested in reflection. But if he's really got the metaprogramming bug, then that may not be until after he's got D's compile time framework to a point where he feels it's "done". But only Walter knows.

--bb
Feb 10 2007
next sibling parent Kyle Furlong <kylefurlong gmail.com> writes:
Bill Baxter wrote:
 Kyle Furlong wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 I understand that you are a successful computer scientist. I accept 
 that you have had success with some books on the subject. I respect 
 that you currently research in the field. None of this allows you the 
 freedoms you have taken in the discussions you have been having.
Wow, this is sounding sillier and sillier. It seems pretty clear to me that the answer is simply that Andrei doesn't really know enough about RoR to give a concrete example of how better metaprogramming would be useful for DeRailed. He pretty much said as much in the last mail. But it would be good if he gave some more practical, concrete examples of places where it would help. Note that what's going on here is *talk* about features that may or may not get into DMD any time soon. In fact you could say this whole discussion has been about *preventing* features from getting introduced. At least in an ad-hoc manner. The meat of this metaprogramming discussion started with Walter saying he was thinking of adding compile-time regexps to the language. Without any discussion about whether that's a good thing or not and what the ramifications are, it's just going to happen, whether it's good for D or not. So the question becomes: what should D look like? Rather than ad-hoc features, what do we really want D's metaprogramming to look like? To me the discussion has all been about figuring out a clear picture of what things *should* look like in the future w.r.t. metaprogramming, in order to convince Walter that throwing things in ad-hoc is not the way to go. Or maybe to find out that what he's thinking of throwing in isn't so ad-hoc after all, and actually makes for a nice evolutionary step towards where we want to go. As for whether that would help DeRailed, I dunno. Sounds like kris has a pretty clear idea that reflection would be much more useful to DeRailed. As for whether DeRailed will help D, I also don't know. I kinda wonder, though, because if someone wants RoR, why wouldn't they just use RoR? Seems like it's a tough battle to unseat a champ like that. I would think that D would have a better shot at dominating by providing a great solution to a niche which is currently underserved. But that's just my opinion. 
Also I don't do web development, so that may be another part of it. But the description given of what Rails does so well, with all kinds of dynamic this and on-the-fly that, really sounds more like what a scripting language is good at than a static compile-time language. I mean, the dominant web languages are Perl, Python, Ruby, PHP, and Javascript. Not a compiled language in the bunch. There must be a reason for that. Even Java is interpreted bytecode. As for Andrei having Walter's ear: I think Andrei has Walter's ear mostly because Andrei is interested in the same kinds of things that interest Walter. I think everyone can tell by now that Walter pretty much works on solving the problems that interest him. Right now (and pretty much ever since 'static if') the thing that seems to interest him most is metaprogramming. Hopefully some day he'll get back to being interested in reflection. But if he's really got the metaprogramming bug, then that may not be until after he's got D's compile time framework to a point where he feels it's "done". But only Walter knows. --bb
Even coming at this debate from the perspective you outlined, Andrei's stance and tone have still been wrong. Instead of moving towards an understanding of what would be best for the people who are actually using D for real code, he instead disregards experience and advocates the theoretical ideal (from his perspective). I simply can't understand why anyone, including Mr. Alexandrescu, would disregard the opinion of a man who has been working with D, in REAL code, for more years than almost anyone here.
Feb 10 2007
prev sibling next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 As for Andrei having Walter's ear.  I think Andrei has Walter's ear 
 mostly because Andrei is interested in the same kinds of things that 
 interest Walter.  I think everyone can tell by now that Walter pretty 
 much works on solving the problems that interest him.   Right now (and 
 pretty much ever since 'static if') the thing that seems to interest him 
 most is metaprogramming.  Hopefully some day he'll get back to being 
 interested in reflection.  But if he's really got the metaprogramming 
 bug, then that may not be until after he's got D's compile time 
 framework to a point where he feels it's "done".  But only Walter knows.
There is a deeper connection between runtime reflection and compile-time reflection than it might appear.

In the runtime reflection scenario, the compiler must generate, for each user-defined type, an amount of boilerplate code that allows symbolic inspection from the outside, and code execution from the outside with, say, untyped (or dynamically-typed) arguments.

The key point is that the code is *boilerplate*, and as such its production can be confined to a code generation task, which would keep the compiler simple. The availability of compile-time introspection effectively enables implementation of run-time introspection in a library. For example:

class Widget
{
  ... data ...
  ... methods ...
}

mixin Manifest!(Widget);

If compile-time introspection is available, the Manifest template can generate full-blown run-time introspection code for Widget, with stubs for dynamic invocation, the whole nine yards.

This is nicer than leaving the task to the compiler because it relieves the compiler writer from being the bottleneck.


Andrei
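A minimal sketch of this idea, assuming the `__traits` compile-time reflection primitives that D gained after this thread; `Manifest` here is the hypothetical template from the post, and this sketch only enumerates member names rather than generating the full invocation stubs:

```d
// Sketch only: `Manifest` is hypothetical; __traits postdates this thread.
import std.stdio;

class Widget {
    int x;
    void draw() { writeln("Widget.draw"); }
}

template Manifest(T) {
    // Compile-time enumeration of T's member names; a real Manifest
    // would also emit dynamic-invocation stubs from this list.
    enum string[] members = [__traits(allMembers, T)];
}

void main() {
    writeln(Manifest!(Widget).members); // includes "x" and "draw"
}
```

The point stands regardless of the exact primitive: once member names and types are visible at compile time, the boilerplate becomes a library's code-generation problem, not the compiler writer's.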
Feb 10 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 
 There is a deeper connection between runtime reflection and compile-time 
 reflection than it might appear.
 
 In the runtime reflection scenario, the compiler must generate, for each 
 user-defined type, an amount of boilerplate code that allows symbolic 
 inspection from the outside, and code execution from the outside with, 
 say, untyped (or dynamically-typed) arguments.
 
 The key point is that the code is *boilerplate* and as such its 
 production can be confined to a code generation task, which would keep 
 the compiler simple. The availability of compile-time introspection 
 effectively enables implementation of run-time introspection in a library.
 
 For example:
 
 class Widget
 {
   ... data ...
   ... methods ...
 }
 
 mixin Manifest!(Widget);
 
 If compile-time introspection is available, the Manifest template can 
 generate full-blown run-time introspection code for Widget, with stubs 
 for dynamic invocation, the whole nine yards.
 
 This is nicer than leaving the task to the compiler because it relieves 
 the compiler writer from being the bottleneck.
 
 
 Andrei
 
Well there you go then. Sounds a lot like the serialization problem too.

    mixin Serializer!(Widget);

Or the problem of exposing classes to scripting languages.

    mixin ScriptBinding!(Widget);

Speaking of which, I'm surprised Kirk hasn't piped in here more about how this could make life easier for PyD (or not, if that's the case). Any thoughts, Kirk? You're in one of the best positions to say what's a bottleneck with the current state of compile-time reflection.

--bb
Feb 10 2007
next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:

 There is a deeper connection between runtime reflection and 
 compile-time reflection than it might appear.

 In the runtime reflection scenario, the compiler must generate, for 
 each user-defined type, an amount of boilerplate code that allows 
 symbolic inspection from the outside, and code execution from the 
 outside with, say, untyped (or dynamically-typed) arguments.

 The key point is that the code is *boilerplate* and as such its 
 production can be confined to a code generation task, which would keep 
 the compiler simple. The availability of compile-time introspection 
 effectively enables implementation of run-time introspection in a 
 library.

 For example:

 class Widget
 {
   ... data ...
   ... methods ...
 }

 mixin Manifest!(Widget);

 If compile-time introspection is available, the Manifest template can 
 generate full-blown run-time introspection code for Widget, with stubs 
 for dynamic invocation, the whole nine yards.

 This is nicer than leaving the task to the compiler because it 
 relieves the compiler writer from being the bottleneck.


 Andrei
 Well there you go then. Sounds a lot like the serialization problem too.

     mixin Serializer!(Widget);
Yes, that is being discussed a little in the announce group. One nice thing (barring potential balkanization) is that you can invent various serialization engines supporting different formats. The point is that it's the programmer, not the compiler writer, deciding that.

One difference between serialization and run-time reflection is that you may decide to serialize only a subset of an object, so you may want to provide specific indications to the Serializer template (a la "don't serialize this guy", or "only serialize these guys" etc):

mixin Serializer!(Widget, "exclude(x, y, foo, bar)");
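One way such an exclude list could be honored during compilation is via a CTFE-evaluable predicate. A sketch, with the helper name `isExcluded` invented for illustration (not an actual Pyd or Tango API):

```d
// Hypothetical helper: decide at compile time whether a field name
// appears on a caller-supplied exclude list. Ordinary code, but
// evaluable during compilation (CTFE).
bool isExcluded(string name, string[] excluded) {
    foreach (e; excluded)
        if (e == name) return true;
    return false;
}

// Both checks run entirely at compile time:
static assert( isExcluded("x",   ["x", "y"]));
static assert(!isExcluded("bar", ["x", "y"]));
```

A Serializer template could then skip any member whose name the predicate rejects while generating its per-field code.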
 Or the problem of exposing classes to scripting languages.
 
     mixin ScriptBinding!(Widget);
And probably the first scripting engine to be supported would be DMDScript...
 Speaking of which I'm surprised Kirk hasn't piped in here more about how 
 this could make life easier for PyD (or not if that's the case).  Any 
 thoughts, Kirk?  You're in one of the best positions to say what's a 
 bottleneck with the current state of compile-time reflection.
...or PyD. :o) Andrei
Feb 10 2007
prev sibling parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Bill Baxter wrote:
 Speaking of which I'm surprised Kirk hasn't piped in here more about how 
 this could make life easier for PyD (or not if that's the case).  Any 
 thoughts, Kirk?  You're in one of the best positions to say what's a 
 bottleneck with the current state of compile-time reflection.
 
 --bb
One area of Pyd which I am unhappy with is its support for inheritance and polymorphic behavior. http://pyd.dsource.org/inherit.html

Getting the most proper behavior requires a bit of a workaround. For every class that a user wishes to expose to Python, they must write a "wrapper" class, and then expose both the wrapper and the original class to Python. The basic idea is so that you can subclass D classes with Python classes and then get D code to polymorphically call the methods of the Python class:

// D class
class Foo {
    void bar() { writefln("Foo.bar"); }
}

// D function calling method
void polymorphic_call(Foo f) {
    f.bar();
}

class PyFoo(Foo):
    def bar(self):
        print "PyFoo.bar"

>>> o = PyFoo()
>>> polymorphic_call(o)
PyFoo.bar

Read that a few times until you get it. To see how Pyd handles this, read the above link. It's quite ugly.

The D wrapper class for Foo would look something like this:

class FooWrapper : Foo {
    mixin OverloadShim;
    void bar() {
        get_overload(&super.bar, "bar");
    }
}

Never mind what this actually does. The problem at hand is somehow generating a class like this at compile-time, possibly given only the class Foo. While these new mixins now give me a mechanism for generating this class, I don't believe I can get all of the information about the class that I need at compile-time, at least not automatically. I might be able to rig something creative up with tuples, now that I think about it...

However, I have some more pressing issues with Pyd at the moment (strings, embedding, and building, for three examples), which have nothing to do with these new features.

-- 
Kirk McDonald
Pyd: Wrapping Python with D
http://pyd.dsource.org
Feb 10 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Kirk McDonald wrote:
 Bill Baxter wrote:
 Speaking of which I'm surprised Kirk hasn't piped in here more about 
 how this could make life easier for PyD (or not if that's the case).  
 Any thoughts, Kirk?  You're in one of the best positions to say what's 
 a bottleneck with the current state of compile-time reflection.

 --bb
One area of Pyd which I am unhappy with is its support for inheritance and polymorphic behavior. http://pyd.dsource.org/inherit.html
Great lib, and a good place to figure how introspection can help.
 Getting the most proper behavior requires a bit of a workaround. For 
 every class that a user wishes to expose to Python, they must write a 
 "wrapper" class, and then expose both the wrapper and the original class 
 to Python. The basic idea is so that you can subclass D classes with 
 Python classes and then get D code to polymorphically call the methods 
 of the Python class:
 
 // D class
 class Foo {
     void bar() { writefln("Foo.bar"); }
 }
 
 // D function calling method
 void polymorphic_call(Foo f) {
     f.bar();
 }
 

 class PyFoo(Foo):
     def bar(self):
         print "PyFoo.bar"
 

  >>> o = PyFoo()
  >>> polymorphic_call(o)
 PyFoo.bar
 
 Read that a few times until you get it. To see how Pyd handles this, 
 read the above link. It's quite ugly.
If I understand things correctly, in the ideal setup you'd need a means to expose an entire class, or parts of one, to Python. That is, for the class:

class Base {
    void foo() { writefln("Base.foo"); }
    void bar() { writefln("Base.bar"); }
}

instead of (or in addition to) the current state of affairs:

wrapped_class!(Base) b;
b.def!(Base.foo);
b.def!(Base.bar);
finalize_class(b);

it would probably be desirable to simply write:

defclass!(Base);

which automa(t|g)ically takes care of all of the above. To do so properly, and to also solve the polymorphic problem that you mention, defclass must define the following class:

class BaseWrap : Base {
    mixin OverloadShim;
    void foo() {
        get_overload(&super.foo, "foo");
    }
    void bar() {
        get_overload(&super.bar, "bar");
    }
}

Then the BaseWrap (and not Base) class would be exposed to Python, along with each of its methods.

If I misunderstood something in the above, please point out the error and don't read the rest of this post. :o)

This kind of task should be easily doable with compile-time reflection, possibly something along the following lines (for the wrapping part):

class PyDWrap(class T) : T
{
    mixin OverloadShim;
    // Escape into the compile-time realm
    mixin
    {
        foreach (m ; methods!(T))
        {
            char[] args = formals!(m).length
                ? ", " ~ actuals!(m) : "";
            writefln("%s %s(%s)
                { return get_overload(super.%s, `%s`%s); }",
                ret_type!(m), name!(m), formals!(m),
                name!(m), name!(m), args);
        }
    }
}

So instantiating, say, PyDWrap with Base will be tantamount to this:

class PyDWrap!(Base) : Base {
    mixin OverloadShim;
    void foo() {
        return get_overload(&super.foo, `foo`);
    }
    void bar() {
        get_overload(&super.bar, `bar`);
    }
}

Instantiating PyDWrap with a more sophisticated class (one that defines methods with parameters) will work properly, as the conditional initialization of args suggests.
 The D wrapper class for Foo would look something like this:
 
 class FooWrapper : Foo {
     mixin OverloadShim;
     void bar() {
         get_overload(&super.bar, "bar");
     }
 }
 
 Never mind what this actually does. The problem at hand is somehow 
 generating a class like this at compile-time, possibly given only the 
 class Foo. While these new mixins now give me a mechanism for generating 
 this class, I don't believe I can get all of the information about the 
 class that I need at compile-time, at least not automatically. I might 
 be able to rig something creative up with tuples, now that I think about 
 it...
At the end of the day, without compile-time introspection, code will end up repeating itself somewhere. For example, you nicely conserve the inheritance relationship among D classes in their Python incarnations. Why is that possible? Because D offers you the appropriate introspection primitive. If you didn't have that, or at least C++'s SUPERSUBCLASS trick (which works almost by sheer luck), you would have required the user to wire the inheritance graph explicitly.
 However, I have some more pressing issues with Pyd at the moment
 (strings, embedding, and building, for three examples), which have
 nothing to do with these new features.
Update early, update often. :o) Please write out any ideas or issues you are confronting. Looks like PyD is a good case study for D's nascent introspection abilities. Andrei
Feb 10 2007
parent Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Kirk McDonald wrote:
 Bill Baxter wrote:
 Speaking of which I'm surprised Kirk hasn't piped in here more about 
 how this could make life easier for PyD (or not if that's the case).  
 Any thoughts, Kirk?  You're in one of the best positions to say 
 what's a bottleneck with the current state of compile-time reflection.

 --bb
One area of Pyd which I am unhappy with is its support for inheritance and polymorphic behavior. http://pyd.dsource.org/inherit.html
Great lib, and a good place to figure how introspection can help.
Thanks!
 
 Getting the most proper behavior requires a bit of a workaround. For 
 every class that a user wishes to expose to Python, they must write a 
 "wrapper" class, and then expose both the wrapper and the original 
 class to Python. The basic idea is so that you can subclass D classes 
 with Python classes and then get D code to polymorphically call the 
 methods of the Python class:

 // D class
 class Foo {
     void bar() { writefln("Foo.bar"); }
 }

 // D function calling method
 void polymorphic_call(Foo f) {
     f.bar();
 }


 class PyFoo(Foo):
     def bar(self):
         print "PyFoo.bar"


  >>> o = PyFoo()
  >>> polymorphic_call(o)
 PyFoo.bar

 Read that a few times until you get it. To see how Pyd handles this, 
 read the above link. It's quite ugly.
 If I understand things correctly, in the ideal setup you'd need a means to expose an entire class, or parts of one, to Python. That is, for the class:

 class Base {
     void foo() { writefln("Base.foo"); }
     void bar() { writefln("Base.bar"); }
 }

 instead of (or in addition to) the current state of affairs:

 wrapped_class!(Base) b;
 b.def!(Base.foo);
 b.def!(Base.bar);
 finalize_class(b);

 it would probably be desirable to simply write:

 defclass!(Base);

 which automa(t|g)ically takes care of all of the above.
That would be nice. However, my gut (and previous experience) tells me that it is simpler, or at least more reliable, if the user explicitly lists the things to wrap. There remain bits and pieces of D that I can't expose to Python, and it's easier to allow the user to simply not specify those things, than to detect and not wrap them. This is moot if D's reflection becomes perfect. We are not there, yet, however, so I must play with what we have.
 To do so properly, and to also solve the polymorphic problem that you
 mention, defclass must define the following class:
 
 class BaseWrap : Base {
     mixin OverloadShim;
     void foo() {
         get_overload(&super.foo, "foo");
     }
     void bar() {
         get_overload(&super.bar, "bar");
     }
 }
 
 Then the BaseWrap (and not Base) class would be exposed to Python, along
 with each of its methods.
 
 If I misunderstood something in the above, please point out the error
 and don't read the rest of this post. :o)
One minor nit: Both BaseWrap and Base must be wrapped by Pyd, although only BaseWrap will actually be exposed to Python. (Meaning Python code can subclass and create instances of BaseWrap, but not Base.) This is so that D functions can return instances of Base to Python.
 
 This kind of task should be easily doable with compile-time reflection,
 possibly something along the following lines (for the wrapping part):
 
 class PyDWrap(class T) : T
 {
   mixin OverloadShim;
   // Escape into the compile-time realm
   mixin
   {
     foreach (m ; methods!(T))
     {
       char[] args = formals!(m).length
         ? ", " ~ actuals!(m) : "";
       writefln("%s %s(%s)
         { return get_overload(super.%s, `%s`%s); }",
         ret_type!(m), name!(m), formals!(m),
         name!(m), name!(m), args);
     }
   }
 }
 
 So instantiating, say, PyDWrap with Base will be tantamount to this:
 
 class PyDWrap!(Base) : Base {
     mixin OverloadShim;
     void foo() {
         return get_overload(&super.foo, `foo`);
     }
     void bar() {
         get_overload(&super.bar, `bar`);
     }
 }
 
 Instantiating PyDWrap with a more sophisticated class (one that defines
 methods with parameters) will work properly as the conditional
 initialization of args suggests.
 
My current idea, which can be done right now, involves some simple refactoring of the class-wrapping API, which was designed before we had proper tuples. It would look something like this:

wrap_class!(
    Base,
    Def!(Base.foo),
    Def!(Base.bar),
);

'Def' would become a struct or class template. (The capital 'D' distinguishes it from the 'def' function template used to wrap regular functions.) Since all of the methods are now specified at compile time, I can generate the wrapper class at compile time with little difficulty.
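A sketch of what such a `Def` template might look like: a type that carries a method alias as a compile-time entity, so wrap_class can see every method during compilation. The body is illustrative only, not Pyd's actual implementation, and the `__traits(identifier, ...)` name lookup is an assumption about available reflection:

```d
// Sketch: Def carries a method alias as a compile-time entity.
struct Def(alias fn) {
    alias method = fn;                     // the wrapped method itself
    enum name = __traits(identifier, fn);  // its name, known at compile time
}

class Base {
    void foo() {}
}

// The method name is recoverable without any runtime cost:
static assert(Def!(Base.foo).name == "foo");
```

With the methods captured this way, the wrapper-class generation reduces to iterating over the Def arguments at compile time.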
 The D wrapper class for Foo would look something like this:

 class FooWrapper : Foo {
     mixin OverloadShim;
     void bar() {
         get_overload(&super.bar, "bar");
     }
 }

 Never mind what this actually does. The problem at hand is somehow 
 generating a class like this at compile-time, possibly given only the 
 class Foo. While these new mixins now give me a mechanism for 
 generating this class, I don't believe I can get all of the 
 information about the class that I need at compile-time, at least not 
 automatically. I might be able to rig something creative up with 
 tuples, now that I think about it...
At the end of the day, without compile-time introspection, code will end up repeating itself somewhere. For example, you nicely conserve the inheritance relationship among D classes in their Python incarnations. Why is that possible? Because D offers you the appropriate introspection primitive. If you didn't have that, or at least C++'s SUPERSUBCLASS trick (which works almost by sheer luck), you would have required the user to wire the inheritance graph explicitly.
 However, I have some more pressing issues with Pyd at the moment
 (strings, embedding, and building, for three examples), which have
 nothing to do with these new features.
Update early, update often. :o) Please write out any ideas or issues you are confronting. Looks like PyD is a good case study for D's nascent introspection abilities.
It has always been one. Pyd was probably the first D library to have a 
serious need for tuples, going so far as to fake them before they were 
part of the language proper. It's probably the largest concrete 
application of meta-programming written in D.

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
Feb 10 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Kirk McDonald wrote:
 Never mind what this actually does. The problem at hand is somehow 
 generating a class like this at compile-time, possibly given only the 
 class Foo.
Tell me exactly what you need.
Feb 10 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Kirk McDonald wrote:
 Never mind what this actually does. The problem at hand is somehow 
 generating a class like this at compile-time, possibly given only the 
 class Foo.
Tell me exactly what you need.
Given a class Foo, allow _during compilation_ enumeration of all of its embedded symbols (fields, methods, types, aliases), with full access to their type information (notably return and argument types for methods), transitively. Allow arbitrary code generation using all of that information. Andrei
Feb 10 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:
 Never mind what this actually does. The problem at hand is somehow 
 generating a class like this at compile-time, possibly given only the 
 class Foo.
Tell me exactly what you need.
Given a class Foo, allow _during compilation_ enumeration of all of its embedded symbols (fields, methods, types, aliases), with full access to their type information (notably return and argument types for methods), transitively. Allow arbitrary code generation using all of that information.
That will eventually happen, but I was hoping there was some subset that Kirk needed which I can provide relatively quickly.
Feb 10 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:
 Never mind what this actually does. The problem at hand is somehow 
 generating a class like this at compile-time, possibly given only 
 the class Foo.
Tell me exactly what you need.
Given a class Foo, allow _during compilation_ enumeration of all of its embedded symbols (fields, methods, types, aliases), with full access to their type information (notably return and argument types for methods), transitively. Allow arbitrary code generation using all of that information.
That will eventually happen, but I was hoping there was some subset that Kirk needed which I can provide relatively quickly.
Without knowing the details, my guess is that nonstatic methods with their return and argument types should be enough. Andrei
Feb 10 2007
prev sibling parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Walter Bright wrote:
 Kirk McDonald wrote:
 Never mind what this actually does. The problem at hand is somehow 
 generating a class like this at compile-time, possibly given only the 
 class Foo.
Tell me exactly what you need.
Given a class, I need a way to get a list of all of its member 
functions at compile-time. One might naively think a tuple of aliases 
to the class's member functions would do, except that this does not 
provide enough information in the case of overloaded functions. In 
other words:

class Foo {
    void foo() {}
    void bar() {}
    void bar(int i) {}
}

The mechanism I suggest would get us this tuple:

methodtuple!(Foo) => Tuple!(Foo.foo, Foo.bar)

(Particularly, I do not think this tuple should include inherited 
methods.) However, there is no way to distinguish the two forms of bar 
at compile-time. Currently, this is only enough information to get the 
first form with the empty parameter list, since it is lexically first. 
This turns out to be a more general problem, which I will now recast in 
terms of global functions.

void foo() {}
void foo(int i) {}
void foo(char[] s) {}

The second mechanism needed would give us a tuple of the function types 
shared by this symbol:

functiontuple!(foo) => Tuple!(void function(), void function(int), 
void function(char[]))

functiontuple!(Foo.bar) => Tuple!(void function(), void function(int))

The third thing needed is the ability to detect function parameter 
storage classes, as you have discussed on this NG in the past.

Fourth is some way of detecting whether a function has default 
arguments, and how many it has. This can be done with templates (as 
std.bind shows), but direct language support would be a great deal 
cleaner.

Fifth is actually satisfied in a way by the new mixins. This is the 
ability to call an alias of a member function on an instance of the 
class or struct. This is very much like C++ pointers to member 
functions, with the difference that it happens entirely at 
compile-time. (Meaning that type mismatches, viz. trying to call a 
method of one class on an instance of another, can be detected.) I am 
not sure what the syntax for this should be, however. And, as I said, 
the new mixins can handle it, when combined with Don Clugston's nameof.

The first three are absolutely necessary. The other two can be done 
with templates in current D, and would merely be nice.

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
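For reference, later versions of D grew built-in answers to the first two of these requests; a sketch in D2-era syntax (not available in the compiler under discussion here):

```d
class Foo
{
    void foo() {}
    void bar() {}
    void bar(int i) {}
}

// Enumerate member names at compile time (includes Object's members):
enum names = [__traits(allMembers, Foo)];

// Reach each overload of 'bar' individually, with its full type:
alias bars = __traits(getOverloads, Foo, "bar");
static assert(bars.length == 2);
pragma(msg, typeof(bars[0]));  // the zero-argument overload
pragma(msg, typeof(bars[1]));  // the (int) overload
```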
Feb 11 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Kirk McDonald wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:
 Never mind what this actually does. The problem at hand is somehow 
 generating a class like this at compile-time, possibly given only the 
 class Foo.
Tell me exactly what you need.
Given a class, I need a way to get a list of all of its member functions at compile-time.
Probably also a couple of bits per method telling whether the method is static or final. Andrei
Feb 11 2007
parent Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Kirk McDonald wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:
 Never mind what this actually does. The problem at hand is somehow 
 generating a class like this at compile-time, possibly given only 
 the class Foo.
Tell me exactly what you need.
Given a class, I need a way to get a list of all of its member functions at compile-time.
Probably also a couple of bits per method telling whether the method is static or final. Andrei
Right, right, all that stuff, too. :-) -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.org
Feb 11 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Kirk McDonald wrote:
 Given a class, I need a way to get a list of all of its member functions 
 at compile-time.
static member functions? non-virtual member functions?
Feb 11 2007
parent reply Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Walter Bright wrote:
 Kirk McDonald wrote:
 Given a class, I need a way to get a list of all of its member 
 functions at compile-time.
static member functions? non-virtual member functions?
Okay, I think I do need to clarify this. Let's take this class:

class Foo {
    void foo() {}        // A regular method
    static void bar() {} // A static member function
    final void baz() {}  // A non-virtual member function
}

Here's one possible interface:

get_member_functions!(Foo) => Tuple!(Foo.foo, Foo.bar, Foo.baz)

To distinguish between these attributes, we could define an enum, like:

enum MemberAttribute {
    Virtual,
    Final,
    Static
}

Then we would have another template or built-in function or whatever 
these are supposed to be:

get_member_attributes!(Foo) => Tuple!(MemberAttribute.Virtual, 
MemberAttribute.Static, MemberAttribute.Final)

Where the order of this tuple's elements corresponds exactly to the 
first tuple's. This is one idea. If anyone thinks of a better interface 
(and these names are probably unacceptable, at least), please speak up.

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
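Later D exposed exactly these attribute queries through __traits; a sketch in D2-era syntax (not available at the time of this post):

```d
class Foo
{
    void foo() {}        // A regular (virtual) method
    static void bar() {} // A static member function
    final void baz() {}  // A non-virtual member function
}

// Each attribute is queryable per symbol at compile time:
static assert(__traits(isVirtualMethod,  Foo.foo));
static assert(__traits(isStaticFunction, Foo.bar));
static assert(__traits(isFinalFunction,  Foo.baz));
```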
Feb 11 2007
parent reply Lionello Lunesu <lio lunesu.remove.com> writes:
Kirk McDonald wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:
 Given a class, I need a way to get a list of all of its member 
 functions at compile-time.
static member functions? non-virtual member functions?
[snip]
I think you should be able to use .mangleof to get that info, no? L.
Feb 12 2007
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Lionello Lunesu wrote:
 Kirk McDonald wrote:
 Walter Bright wrote:
 Kirk McDonald wrote:
 Given a class, I need a way to get a list of all of its member 
 functions at compile-time.
static member functions? non-virtual member functions?
[snip]
 
 I think you should be able to use .mangleof to get that info, no?
At the moment, .mangleof can tell static from virtual or final (because of the 'M' indicating a 'this' pointer is passed to the latter two), but virtual and final member functions are mangled identically.
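A small illustration of the distinction described above (a sketch; the exact mangled strings depend on the compiler version, but the 'M' marking a hidden 'this' parameter is the relevant part):

```d
class Foo
{
    void virt() {}         // virtual: takes a hidden 'this' pointer
    final void fin() {}    // final: also takes 'this', mangled like virt
    static void stat() {}  // static: no 'this', so no 'M' modifier
}

// Printed at compile time; virt and fin come out identical in form,
// while stat lacks the 'M' modifier:
pragma(msg, Foo.virt.mangleof);
pragma(msg, Foo.fin.mangleof);
pragma(msg, Foo.stat.mangleof);
```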
Feb 12 2007
prev sibling parent Thomas Kuehne <thomas-dloop kuehne.cn> writes:
Andrei Alexandrescu (See Website For Email) schrieb am 2007-02-10:
 Bill Baxter wrote:
 As for Andrei having Walter's ear.  I think Andrei has Walter's ear 
 mostly because Andrei is interested in the same kinds of things that 
 interest Walter.  I think everyone can tell by now that Walter pretty 
 much works on solving the problems that interest him.   Right now (and 
 pretty much ever since 'static if') the thing that seems to interest him 
 most is metaprogramming.  Hopefully some day he'll get back to being 
 interested in reflection.  But if he's really got the metaprogramming 
 bug, then that may not be until after he's got D's compile time 
 framework to a point where he feels it's "done".  But only Walter knows.
There is a deeper connection between runtime reflection and 
compile-time reflection than it might appear. In the runtime reflection 
scenario, the compiler must generate, for each user-defined type, an 
amount of boilerplate code that allows symbolic inspection from the 
outside, and code execution from the outside with, say, untyped (or 
dynamically-typed) arguments. The key point is that the code is 
*boilerplate* and as such its production can be confined to a code 
generation task, which would keep the compiler simple.

The availability of compile-time introspection effectively enables 
implementation of run-time introspection in a library. For example:

class Widget {
    ... data ...
    ... methods ...
}

mixin Manifest!(Widget);

If compile-time introspection is available, the Manifest template can 
generate full-blown run-time introspection code for Widget, with stubs 
for dynamic invocation, the whole nine yards. This is nicer than 
leaving the task to the compiler because it relieves the compiler 
writer from being the bottleneck.
Going the compile-time introspection -> boilerplate way is possible but 
not the only solution for runtime reflection. Combining 
"dmd -L/DETAILEDMAP ...", Flectioned's Symbols.scanOptlinkMap and 
existing TypeInfo solves 80% of the boilerplate issue. The only 
remaining issue is non-static non-function aggregate members.

I'm not arguing against powerful compile-time reflection; in fact I 
love it.

Thomas
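The Manifest idea quoted above can be sketched with the introspection D later gained (hypothetical; 'Manifest' and 'methodNames' are names invented for illustration, and the syntax is D2-era, not the compiler discussed in this thread):

```d
// Sketch only: records a type's member names in a runtime-queryable
// table. Real dynamic-invocation stubs would need the fuller
// introspection being requested in this thread.
mixin template Manifest(T)
{
    static immutable string[] methodNames = [__traits(allMembers, T)];
}

class Widget
{
    void draw() {}
    void resize(int w, int h) {}
}

mixin Manifest!(Widget);
// Widget's member names (plus Object's) are now available at runtime
// through methodNames.
```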
Feb 10 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 Wow, this is sounding sillier and sillier.
 It seems pretty clear to me that the answer is simply that Andrei 
 doesn't really know enough about RoR give a concrete example of how 
 better metaprogramming would be useful for DeRailed.  He pretty much 
 said as much in the last mail.  But it would be good if he gave some 
 more practical, concrete examples of places where it would help.
For the record, I know next to nothing about RoR except that:

1) RoR has clearly been the "killer app" that has launched Ruby into 
one of the top languages in use today.

2) I thought RoR was a DSL addon to Ruby. Perhaps such supposition was 
a huge misunderstanding on my part.

I think it's way cool that you guys are working on DeRailed. If better 
compiler DSL technology won't help that, well, rats! But I don't know 
enough about the problem DeRailed is trying to solve to suggest any way 
in which a DSL might be used with it.
 Note that what's going on here is *talk* about features that may or may 
 not get into DMD any time soon.  In fact you could say this whole 
 discussion has been about *preventing* features from getting introduced. 
  At least in an ad-hoc manner. This meat of this metaprogramming 
 discussion started with Walter saying he was thinking of adding compile 
 time regexps to the language.  Without any discussion about whether 
 that's a good thing or not and what the ramifications are, then it's 
 just going to happen, whether it's good for D or not.
I beg to differ on that. The reason I started this thread was to not post a fait accompli, but to elicit discussion, especially since builtin regex has been thoroughly reviled in the past. And it's pretty clear that it's going down in flames again <g>. I think there's a better way now, and this discussion has helped find it.
 So the question 
 becomes what should D look like?  Rather than add hoc features, what do 
 we really want D's metaprogramming to look like?
Yes, indeed. What Kris brings to the table is he's building key foundation libraries for D. What Andrei brings is the academic rigor that I lack. I know from these messages it seems that all we talk about is metaprogramming, but actually most of our discussions (and most of Andrei's work on D) are about filling in mundane gaps in the language, like the lack of a proper const. And I know Kris really wants a useful const <g>.
Feb 10 2007
next sibling parent Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Walter Bright wrote:
 For the record, I know next to nothing about RoR except that:
 
 1) RoR has clearly been the "killer app" that has launched Ruby into one 
 of the top languages in use today.
 
 2) I thought RoR was a DSL addon to Ruby. Perhaps such supposition was a 
 huge misunderstanding on my part.
 
Ruby, as a language, is well-suited to defining DSLs within itself. The strength of RoR (which speaks to Ruby's power) is that it is both merely a Ruby library and a DSL. -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.org
Feb 10 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 I know from these messages it seems that all we talk about is 
 metaprogramming, but actually most of our discussions (and most of 
 Andrei's work on D) are about filling in mundane gaps in the language, 
 like the lack of a proper const. 
Ehm. Somehow I thought it's actually interesting. Reminds me of Tom Sawyer's neighbors painting the fence for him :o). Andrei
Feb 10 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 I know from these messages it seems that all we talk about is 
 metaprogramming, but actually most of our discussions (and most of 
 Andrei's work on D) are about filling in mundane gaps in the language, 
 like the lack of a proper const. 
Ehm. Somehow I thought it's actually interesting. Reminds me of Tom Sawyer's neighbors painting the fence for him :o).
It's mundane in the sense that if we do it right, like the foundation of a house, nobody will notice it. If we do it wrong, it'll be front page like the crane that fell over last month and sliced through a high rise condo. I couldn't be in this business if I didn't enjoy the mundane details <g>.
Feb 11 2007
parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 I know from these messages it seems that all we talk about is 
 metaprogramming, but actually most of our discussions (and most of 
 Andrei's work on D) are about filling in mundane gaps in the 
 language, like the lack of a proper const. 
Ehm. Somehow I thought it's actually interesting. Reminds me of Tom Sawyer's neighbors painting the fence for him :o).
It's mundane in the sense that if we do it right, like the foundation of a house, nobody will notice it. If we do it wrong, it'll be front page like the crane that fell over last month and sliced through a high rise condo. I couldn't be in this business if I didn't enjoy the mundane details <g>.
It's interesting to me that the features I requested back around DMD 
0.135, which made elementary metaprocessing of strings possible, were 
so mundane --- "abcd"[2..4] being exactly equivalent to "cd", for 
example. Earlier efforts tried to reach for the flashy stuff without 
getting the mundane things right. Mundane is good.
Feb 11 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Don Clugston wrote:
 Walter Bright wrote:
 It's mundane in the sense that if we do it right, like the foundation 
 of a house, nobody will notice it. If we do it wrong, it'll be front 
 page like the crane that fell over last month and sliced through a 
 high rise condo.

 I couldn't be in this business if I didn't enjoy the mundane details <g>.
It's interesting to me that the features I requested back around DMD 
0.135, which made elementary metaprocessing of strings possible, were 
so mundane --- "abcd"[2..4] being exactly equivalent to "cd", for 
example. Earlier efforts tried to reach for the flashy stuff without 
getting the mundane things right. Mundane is good.
Oh, I agree. Making a great product (as Apple demonstrated) is about getting the mundane details right. The only problem is that such details don't make for a great presentation. Nobody is going to switch to D because of const. But if they do, they'll find it hard to switch away because of const (and things like it) <g>. The flashy stuff, though, is the stuff that piques peoples' interests enough to give D a try.
Feb 11 2007
next sibling parent reply kris <foo bar.com> writes:
Walter Bright wrote:
 Don Clugston wrote:
 
 Walter Bright wrote:

 It's mundane in the sense that if we do it right, like the foundation 
 of a house, nobody will notice it. If we do it wrong, it'll be front 
 page like the crane that fell over last month and sliced through a 
 high rise condo.

 I couldn't be in this business if I didn't enjoy the mundane details 
 <g>.
It's interesting to me that the features I requested back around DMD 
0.135, which made elementary metaprocessing of strings possible, were 
so mundane --- "abcd"[2..4] being exactly equivalent to "cd", for 
example. Earlier efforts tried to reach for the flashy stuff without 
getting the mundane things right. Mundane is good.
Oh, I agree. Making a great product (as Apple demonstrated) is about getting the mundane details right. The only problem is that such details don't make for a great presentation. Nobody is going to switch to D because of const. But if they do, they'll find it hard to switch away because of const (and things like it) <g>.
Aye
 
 The flashy stuff, though, is the stuff that piques peoples' interests 
 enough to give D a try.
Bling is very much in the eye of the beholder, and often has the inverse effect?
Feb 11 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
kris wrote:
 Walter Bright wrote:
 The flashy stuff, though, is the stuff that piques peoples' interests 
 enough to give D a try.
Bling is very much in the eye of the beholder, and often has the inverse effect?
If it is perceived as being *just* bling, it will have the reverse effect. If you can show, however, that it solves otherwise intractable or time-wasting problems, you get interest.
Feb 11 2007
next sibling parent Kevin Bealer <kevinbealer gmail.com> writes:
Walter Bright wrote:
 kris wrote:
 Walter Bright wrote:
 The flashy stuff, though, is the stuff that piques peoples' interests 
 enough to give D a try.
Bling is very much in the eye of the beholder, and often has the inverse effect?
If it is perceived as being *just* bling, it will have the reverse effect. If you can show, however, that it solves otherwise intractable or time-wasting problems, you get interest.
I agree, though I think most people have a psychological bias toward problems they can already solve but are tedious, as opposed to problems that are too hard to be done now. (I.e. it's easier to sell dishwashers than personal robots, and the first home computers were marketed as useful for organizing recipes.) So while DSLs are really cool, if you want to advertise in "java developer magazine", you want to use them in an elegant way to do the top five routine tasks better, rather than solving P = NP, even if the latter is far more interesting. From the little that I understand RoR, it seems to be exactly this - for building web sites of the type that already exist but require a lot of grunt work to get off the ground. Kevin
Feb 11 2007
prev sibling parent reply kris <foo bar.com> writes:
Walter Bright wrote:
 kris wrote:
 
 Walter Bright wrote:

 The flashy stuff, though, is the stuff that piques peoples' interests 
 enough to give D a try.
Bling is very much in the eye of the beholder, and often has the inverse effect?
If it is perceived as being *just* bling, it will have the reverse effect. If you can show, however, that it solves otherwise intractable or time-wasting problems, you get interest.
Absolutely. If you can successfully do that for, say, dev managers, 
then so much the better. On the other hand, giving such ppl a reason to 
fear adoption would be a terrible mistake.

As you probably know, commercial shops tend to be more than a bit 
conservative when it comes to code. Even companies perceived as 
progressive care deeply about ensuring the code is 'pedestrian' in 
nature. Google, for example, outlawed C++ templates. They did this 
because their experience showed such code was unmaintainable, more 
often than not. In the hands of the masses, such tools are often used 
for creating a language within a language, and everyone's version has a 
personal stamp: MyDSL.

It's the old story about great power requiring great responsibility. 
Should the language be penalized for that? No. Might it be viewed that 
way? Sure. Just being "right" is never enough in such an environment.

In no way am I saying "oh, all that meta-stuff is poppycock", as 
someone had suggested to me. I personally /like/ a touch of DSL here 
and there (have been shot-down in the past for exactly that). Instead, 
my personal concerns (when it comes to D) are based purely around three 
things:

1) is it of notable or daily value to 50%+ or more of users?

3) does it have /real/ potential to hinder adoption due to ignorance, 
religion, sexual preferences, or the weather?

Those are all arguable, of course. Programmers are notoriously fickle 
shops; the likes of Google, Oracle, SAP, Yahoo, along with every single 
company that currently uses Java instead of C++

Thus; shouting from the rooftops that D is all about meta-code, and DSL 
up-the-wazzoo, may well provoke a backlash from the very people who 
should be embracing the language. I'd imagine Andrei would vehemently 
disagree, but so what? The people who will ultimately be responsible 
for "allowing" D through the door don't care about fads or technical 
superiority; they care about costs. And the overwhelming cost in 
software development today, for the type of companies noted above, is 
maintenance. For them, software dev is already complex enough. In all 
the places I've worked or consulted, in multiple countries, and since 
the time before Zortech C, pedestrian-code := maintainable-code := less 
overall cost.

All IMO - Kris
Feb 11 2007
next sibling parent Walter Bright <newshound digitalmars.com> writes:
Reply in new thread "Super-dee-duper D features".
Feb 11 2007
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
kris wrote:
 Walter Bright wrote:
 kris wrote:

 Walter Bright wrote:

 The flashy stuff, though, is the stuff that piques peoples' 
 interests enough to give D a try.
Bling is very much in the eye of the beholder, and often has the inverse effect?
If it is perceived as being *just* bling, it will have the reverse effect. If you can show, however, that it solves otherwise intractable or time-wasting problems, you get interest.
Absolutely. If you can successfully do that for, say, dev managers, then so much the better. On the other hand, giving such ppl a reason to fear adoption would be a terrible mistake. As you probably know, commercial shops tend to be more than a bit conservative when it comes to code. Even companies perceived as progressive care deeply about ensuring the code is 'pedestrian' in nature. Google, for example, outlawed C++ templates. They did this because their experience showed such code was unmaintainable, more often than not. In the hands of the masses, such tools are often used for creating a language within a language, and everyone's version has a personal stamp: MyDSL
Well, I'd like some real references before I'll believe Google has 
"outlawed" templates. I found two links saying they aren't wild about 
'em:

This one basically says they don't have a lot of them, mostly because 
they like to wrap their C++ with SWIG, and SWIG can't handle 'em well:
http://www.sauria.com/~twl/conferences/pycon2005/20050325/Python%20at%20Google.notes

This one says templates are hard to use and make for bad propeller-head 
APIs:
http://www.manageability.org/blog/stuff/google-coding-cultures

But neither of those says it's "outlawed". On the other hand, Adobe is 
doing more and more with templates these days:
http://opensource.adobe.com/index.html
 And the overwhelming cost in
 software development today, for the type of companies noted above, is 
 maintenance. For them, software dev is already complex enough. In all 
 the places I've worked or consulted, in mutiple countries, and since the 
 time before Zortech C, pedestrian-code := maintainable-code := less 
 overall cost.
Good points. I've pretty much given up trying to do anything 
interesting with C++ templates for much the same reason. With the 
exception of Boost smart pointers, pretty much every mega-template 
library I've tried to use has made me regret it in the end, as I waste 
hours staring at meaningless page-long compiler error messages. Looking 
at the source code is absolutely no help. And the template things I've 
worked on myself have always turned into brick-wall affairs where there 
just isn't a good way to get from point A to point B without a lot of 
shenanigans.

But I don't think that's a condemnation of metaprogramming. That's a 
condemnation of metaprogramming /in C++/, which was never designed for 
such a thing. I'm hopeful that a design created with the task of 
metaprogramming in mind will actually lead to code that's easier to 
read and maintain than pages of repetitive boiler-plate.

I'm hopeful anyway. But to me it seems like what D's souped-up 
templates have proved so far is that while features like static-if take 
you a long way -- maybe so far as "making the simple stuff simple" -- 
anything beyond that still turns into spaghetti.

It's a step in the right direction, though. In C++ even the simple 
stuff gives you spaghetti.

--bb
Feb 11 2007
next sibling parent reply janderson <askme me.com> writes:
Bill Baxter wrote:
[snip]
 I'm hopeful anyway.  But to me it seems like what D's souped-up 
 templates have proved so far is that while features like static-if take 
 you a long way -- maybe so far as "making the simple stuff simple" -- 
 anything beyond that still turns into spaghetti.
 
 It's a step in the right direction, though.  In C++ even the simple 
 stuff gives you spaghetti.
Very true. -Joel
Feb 12 2007
parent Walter Bright <newshound digitalmars.com> writes:
janderson wrote:
 Bill Baxter wrote:
 [snip]
 I'm hopeful anyway.  But to me it seems like what D's souped-up 
 templates have proved so far is that while features like static-if 
 take you a long way -- maybe so far as "making the simple stuff 
 simple" -- anything beyond that still turns into spaghetti.

 It's a step in the right direction, though.  In C++ even the simple 
 stuff gives you spaghetti.
Very true.
I was surprised at what a big step static if's were.
Feb 12 2007
prev sibling parent Sean Kelly <sean f4.ca> writes:
Bill Baxter wrote:
 
 Well, I'd like some real references before I'll believe Google has 
 "outlawed" templates.
I heard something roughly to this effect from a friend of mine who used to work there, but things may have since changed. Sean
Feb 12 2007
prev sibling parent Dave <Dave_member pathlink.com> writes:
kris wrote:
 Walter Bright wrote:
 kris wrote:

 Walter Bright wrote:

 The flashy stuff, though, is the stuff that piques peoples' 
 interests enough to give D a try.
Bling is very much in the eye of the beholder, and often has the inverse effect?
If it is perceived as being *just* bling, it will have the reverse effect. If you can show, however, that it solves otherwise intractable or time-wasting problems, you get interest.
Absolutely. If you can successfully do that for, say, dev managers, then so much the better. On the other hand, giving such ppl a reason to fear adoption would be a terrible mistake. As you probably know, commercial shops tend to be more than a bit conservative when it comes to code. Even companies perceived as progressive care deeply about ensuring the code is 'pedestrian' in nature. Google, for example, outlawed C++ templates. They did this because their experience showed such code was unmaintainable, more often than not. In the hands of the masses, such tools are often used for creating a language within a language, and everyone's version has a personal stamp: MyDSL It's the old story about great power requiring great responsibility. Should the language be penalized for that? No. Might it be viewed that way? Sure. Just being "right" is never enough in such an environment. In no way am I saying "oh, all that meta-stuff is poppycock", as someone had suggested to me. I personally /like/ a touch of DSL here and there (have been shot-down in the past for exactly that). Instead, my personal concerns (when it comes to D) are based purely around three things: 1) is it of notable or daily value to 50%+ or more of users? 3) does it have /real/ potential to hinder adoption due to ignorance, religion, sexual preferences, or the weather? Those are all arguable, of course. Programmers are notoriously fickle shops; the likes of Google, Oracle, SAP, Yahoo, along with every single company that currently uses Java instead of C++ Thus; shouting from the rooftops that D is all about meta-code, and DSL up-the-wazzoo, may well provoke a backlash from the very people who should be embracing the language. I'd imagine Andrei would vehemently disagree, but so what? The people who will ultimately be responsible for "allowing" D through the door don't care about fads or technical superiority; they care about costs. 
And the overwhelming cost in software development today, for the type of 
companies noted above, is maintenance. For them, software dev is already 
complex enough. In all the places I've worked or consulted, in multiple 
countries, and since the time before Zortech C, pedestrian-code := 
maintainable-code := less overall cost. All IMO - Kris
All great points. Build it (a solid foundation) and they will come. Recent activity on const/inout/scope comes to mind for me. That said, the metaprogramming stuff is cool, and has its place, but I don't think it should be given precedence over the basics.
Feb 12 2007
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 
 The flashy stuff, though, is the stuff that piques peoples' interests 
 enough to give D a try.
I suppose it depends on the person. Personally, I was mostly searching for a language that addressed my issues with existing languages, and D had that in spades. Sean
Feb 11 2007
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Sean Kelly wrote:
 Walter Bright wrote:
 The flashy stuff, though, is the stuff that piques peoples' interests 
 enough to give D a try.
I suppose it depends on the person. Personally, I was mostly searching for a language that addressed my issues with existing languages, and D had that in spades. Sean
That was my case too. -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 12 2007
prev sibling parent Robby <robby.lansaw gmail.com> writes:
ugh, snip

At the end of the day it seems as if there has been miscommunication 
throughout, I mean.. RoR's expressiveness has nothing to do with RoR 
directly, it's Ruby. By design D is never going to be as expressive as 
Ruby. Of course that's a subjective comment, but how minimal and how 
readable Ruby code can be is a compliment to its very own 
language. However, if you're ever curious, you may look at the parser for 
Ruby's language, wow (and people working on D's, for various reasons, have 
a lot fewer headaches without it)

Ruby is by design something that can *produce* a dsl easily. There's 
even talk of some in the Java camp on dropping ANT for a full language 
(jruby : rake).

In some posts it seems like the conversation is about consuming them, and in 
some posts it's about building them; this thread is seemingly hard to 
follow.

A problem domain that would fit is YAML: it needs specifics internally, it 
needs the ability to consume and generate, and it's starting to get 
decent support. To me, that would be a decent example, even if it was 
done through community involvement.
Feb 10 2007
prev sibling parent Sean Kelly <sean f4.ca> writes:
Since there seems to be no escaping it, let's return to the realm of 
theory for a moment.  The ultimate goal of all tools and approaches 
being discussed is to automate the process of representing one language, 
A, in another language, B.  From here I feel the problem space can be 
broken into three general categories, the first being any case where a 
strict A->B mapping is desired and little to no modification of the 
output will occur.  This may be because A is a superset of B, and 
therefore the output is likely to be very close to the desired result 
(as long as the domain remains in or near the boundaries of B), or 
simply because the output can be used as reference material of sorts 
with the embellishment handled elsewhere.  A very limited example of 
where A is a superset of B might be translating the Greek word for 
'love' into English.  In Greek, there are at least four separate words 
to describe different kinds of affection, but all of these words can be 
adequately described as short phrases in English.

A more technical example where embellishment of the output, B, is often 
unnecessary is representing a database model in a language intended to 
access the database.  Typically, it is sufficient to perform A->B into a 
set of definition modules (header files) and do the heavy lifting 
separately in language B.  The output of the translation is inspectible, 
and any use of the output is verifiable as well.  Compilers are the 
preferred tool for such translations, and the problem is well 
understood.  Let's call this case A.

The second case is where a loose A->B mapping is desired or where a 
great deal of modification of B will occur.  To return to the Greek 
example for a moment, someone translating English into Greek may need to 
embellish the result to ensure that it communicates the proper intent. 
And since the original intent is contextual, an intelligent analysis of 
A is typically required.

Another situation that has been mentioned in this thread is the desire 
to perform matrix operations in a language that does not support them 
directly.  In this case we would like to do the bulk of our work in B 
but represent multiplication, addition, etc, in a manner that is 
relatively efficient.  The salient point here is that B already supports 
mathematic expressions, and this extension is simply intended to 
specialize B for additional type-driven semantics.  Meta-language tools 
tend to be fairly good at this, and several popular examples of this 
particular solution exist, expression templates being one such.  Let's 
call this case B.

The third case is where the complexity of A and B are fairly equal and 
the domains of each do not sufficiently overlap.  In such a situation, 
embellishment of the result of A->B is necessary to sufficiently express 
the desired behavior.  Let's call this case AB since the division of 
work or complexity is roughly balanced.

From experience, it is evident that attempts to map solutions for case 
A and case B onto this problem have distinct but recognizable issues. 
Solutions for case A (ie. compilers) are excellent at a static A->B 
translation, but if B is modified into B' and then A is changed, the new 
A->B translation must again manually be converted to B', which tends to 
generally be quite complex.  From a business perspective, I have seen 
cases where language A was thrown away entirely and all work done in 
language B simply to avoid this process, and even then the vestiges of A 
can have a long-lasting impact on work in B--often it's simply too 
expensive to rewrite B' from scratch, but the existing B' is awkwardly 
expressed because of the inexact mapping that took place.

Solutions for case B, on the other hand, have the opposite problem. 
They allow for a great deal of flexibility in language B, but the way 
they perform A->B tends to be impenetrable for any reasonably complex A, 
and the process is typically not inspectible.  The C macro language is 
one example here, as are C++ and even D templates.  In fact, since they 
live in B I believe that the new mixin/import features belong to this 
category as well.  I do suspect that great improvements can be made 
here, but I am skeptical that any such tool will ever be ideal for AB.

With this in mind, it seems clear that a third approach is required for 
AB, but to discover such an approach let's first distill the previous 
two approaches: solutions for A seem to exist as external agents which 
perform the translation, while solutions for B seem to exist as 
in-language compile-time languages.  Solutions for A are insufficient 
because they do not allow for ongoing manipulation of both A and B, and 
solutions for B are insufficient because expressing a means of 
performing A->B within B is often awkward and occurs in a way that can 
not be independently monitored.

My feeling is that the proper solution for case AB is a dynamic 
composition of pre-defined units of B to express the meaning of A.  Each 
unit is individually inspectible and its meaning is well understood, so 
any composition of such units should be comprehensible as well.  I have 
only limited experience here, but my impression is that fully reflected 
dynamic languages are well-suited for this situation.  Ruby on Rails is 
one example of such a solution, and I suspect that similar examples 
could be found for Lisp, etc.

Does this sound reasonable?  And can anyone provide supporting or 
conflicting examples?  My goal here is simply to establish some general 
parameters for the problem domain in an attempt to determine whether the 
new and planned macro features for D will ever be suitable for AB 
problems, and whether another solution for D might exist that is more 
fitting or more optimal.


Sean
Feb 10 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Sean Kelly wrote:
 Please note that I'm not criticizing in-language DSL parsing as a 
 general idea so much as questioning whether this is truly the best 
 example for the usefulness of such a feature.
Compile time DSL's will really only be useful for relatively small 
languages. For a complex DSL, a separate compilation tool will probably 
be much more powerful and much more useful.

I don't know anything about database languages, so I'm no help there.

One example of a highly useful compile time DSL is the regex package 
that Don Clugston and Eric Anderton put together. With better 
metaprogramming support, this kind of thing will become much simpler to 
write.

There's often a need for custom 'little languages' for lots of projects. 
Most of the time, people just make do without them because they aren't 
worth the effort to create. I hope to make it so easy to create them, 
that all kinds of unforeseen uses will be made of them.

I'll give an example: I often have a need to create parallel tables of 
data. C, C++, and D have no mechanism to do that directly (though I've 
used a macro trick to do it in C). With a DSL, this becomes easy.
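To make the parallel-tables example concrete, here is a hedged sketch of the kind of little generator Walter describes: a single table definition expanded mechanically into several parallel arrays. The table contents and the D-style output are invented for illustration, and this is an ordinary Python script, not the compile-time mechanism being proposed.

```python
# One source table; the generator keeps the parallel arrays in sync so
# they can never drift apart the way hand-maintained copies do.

TABLE = [
    # (name, opcode, operand_count) -- hypothetical instruction set
    ("add", 0x01, 2),
    ("neg", 0x02, 1),
    ("jmp", 0x10, 1),
]

def gen_parallel_tables(table):
    names = ", ".join('"%s"' % name for name, _, _ in table)
    opcodes = ", ".join("0x%02x" % op for _, op, _ in table)
    counts = ", ".join(str(n) for _, _, n in table)
    return (
        "static const char*[] names    = [ %s ];\n" % names
        + "static const ubyte[] opcodes  = [ %s ];\n" % opcodes
        + "static const int[]   operands = [ %s ];\n" % counts
    )

print(gen_parallel_tables(TABLE))
```

With string mixins, the same expansion could happen inside the D compiler itself, with the table and the generator both living in the source file that uses them.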
Feb 10 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 Please note that I'm not criticizing in-language DSL parsing as a 
 general idea so much as questioning whether this is truly the best 
 example for the usefulness of such a feature.
Compile time DSL's will really only be useful for relatively small languages. For a complex DSL, a separate compilation tool will probably be much more powerful and much more useful.
True, but then there's also the scavenging approach: a small DSL can be a reduced interpretation of a complex language. For example, one can imagine parsing an HTML file to extract stuff of interest, while literally skipping over the complexity of HTML (and javascript). And I've already given the example of parsing SQL code to generate a D mapping. I've actually done this in 1998: starting from CREATE VIEW statements, I was generating C++ code with one struct per view.
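A rough sketch of that scavenging approach, in Python for illustration: pick just the CREATE VIEW headers out of SQL text and emit one struct per view, skipping everything else in the language. The regex, the typed column list, and the type mapping are simplifying assumptions made here, not real SQL parsing.

```python
import re

# Hypothetical mapping from SQL types to D types; incomplete on purpose.
TYPE_MAP = {"INTEGER": "int", "VARCHAR": "char[]", "DATE": "char[]"}

def views_to_structs(sql):
    """Scavenge 'CREATE VIEW name (col TYPE, ...)' headers and emit a
    D-style struct per view. Deliberately ignores the rest of the SQL."""
    structs = []
    for name, cols in re.findall(
            r"CREATE\s+VIEW\s+(\w+)\s*\(([^)]*)\)", sql, re.IGNORECASE):
        fields = []
        for col in cols.split(","):
            col_name, col_type = col.split()[:2]
            d_type = TYPE_MAP.get(col_type.upper(), "char[]")
            fields.append("    %s %s;" % (d_type, col_name))
        structs.append("struct %s {\n%s\n}" % (name, "\n".join(fields)))
    return "\n\n".join(structs)

sql = "CREATE VIEW ActiveUsers (id INTEGER, name VARCHAR)"
print(views_to_structs(sql))
```

Note the column list here carries types, which is a simplification (a real CREATE VIEW gets its types from the underlying SELECT); the point is only that a reduced interpretation of a complex language can still yield a useful mapping.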
 I don't know anything about database languages, so I'm no help there.
 
 One example of a highly useful compile time DSL is the regex package 
 that Don Clugston and Eric Anderton put together. With better 
 metaprogramming support, this kind of thing will become much simpler to 
 write.
 
 There's often a need for custom 'little languages' for lots of projects. 
 Most of the time, people just make do without them because they aren't 
 worth the effort to create. I hope to make it so easy to create them, 
 that all kinds of unforeseen uses will be made of them.
 
 I'll give an example: I often have a need to create parallel tables of 
 data. C, C++, and D have no mechanism to do that directly (though I've 
 used a macro trick to do it in C). With a DSL, this becomes easy.
And parallel hierarchies required by some design patterns are a bitch. With code generation they will be easy to rein in. Andrei
Feb 10 2007
prev sibling next sibling parent Hasan Aljudy <hasan.aljudy gmail.com> writes:
Walter Bright wrote:
 String mixins, in order to be useful, need an ability to manipulate 
 strings at compile time. Currently, the core operations on strings that 
 can be done are:
 
 1) indexed access
 2) slicing
 3) comparison
 4) getting the length
 5) concatenation
 
 Any other functionality can be built up from these using template 
 metaprogramming.
 
 The problem is that parsing strings using templates generates a large 
 number of template instantiations, is (relatively) very slow, and 
 consumes a lot of memory (at compile time, not runtime). For example, 
 ParseInteger would need 4 template instantiations to parse 5678, and 
 each template instantiation would also include the rest of the input as 
 part of the template instantiation's mangled name.
 
 At some point, this will prove a barrier to large scale use of this 
 feature.
 
 Andrei suggested using compile time regular expressions to shoulder much 
 of the burden, reducing parsing of any particular token to one 
 instantiation.
 
 The last time I introduced core regular expressions into D, it was 
 soundly rejected by the community and was withdrawn, and for good reasons.
 
 But I think we now have good reasons to revisit this, at least for 
 compile time use only. For example:
 
     ("aa|b" ~~ "ababb") would evaluate to "ab"
 
 I expect one would generally only see this kind of thing inside 
 templates, not user code.
How about a much simpler and a more general approach: allow compile-time 
evaluation of functions. A function will have a specific attribute such 
that if you pass it constant parameters, it gets evaluated/unfolded at 
compile time. This attribute can be called "meta" for example .. (since 
it will be mainly used for meta programming)

/// Definition
meta int add( int x, int y )
{
    return x+y;
}

/// Usage:
int y = add( 10, 12 );

/// and that would get unfolded/interpreted at compile time to:
int y = 22;

Basically you execute the "meta" function at compile time, you feed it 
the input and get the output out of it, and replace the function call 
with its compile time output.
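What Hasan proposes amounts to constant folding of user-written functions. As a rough illustration of the mechanics only (in Python, using its ast module; the "meta" registry and the fold pass are inventions of this sketch, not how a D compiler would do it), a source-to-source pass can replace a call with its result whenever every argument is a constant:

```python
import ast

# Hypothetical registry of functions the "compiler" is allowed to run
# at compile time -- the analogue of marking a function 'meta'.
META_FUNCS = {"add": lambda x, y: x + y}

class FoldMeta(ast.NodeTransformer):
    """Replace calls to registered functions with their computed value
    whenever all arguments are compile-time constants."""
    def visit_Call(self, node):
        self.generic_visit(node)
        if (isinstance(node.func, ast.Name)
                and node.func.id in META_FUNCS
                and all(isinstance(a, ast.Constant) for a in node.args)):
            value = META_FUNCS[node.func.id](*[a.value for a in node.args])
            return ast.copy_location(ast.Constant(value=value), node)
        return node

def fold(source):
    tree = FoldMeta().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)

print(fold("y = add(10, 12)"))  # y = 22
```

The call disappears from the output entirely, which is the "unfolding" Hasan describes; D later got essentially this capability as compile-time function evaluation.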
Feb 07 2007
prev sibling parent reply renoX <renosky free.fr> writes:
Andrei Alexandrescu (See Website For Email) Wrote:
[]
 Au contraire, I think it's a definite step in the right direction. 
 Writing programs that write programs is a great way of doing more with 
 less effort. 
As long as it works, yes, but if you need to debug the result, you're 
most likely going to suffer a lot..

Also this can lead to huge compile times due to code expansion (been 
there with generated C++ code: 6h compilation time(!) and full 
compilation needed each time due to the difficulty of tracking 
dependencies).

So it is a bit like TNT: a useful tool sure, but a dangerous one too.

renoX
Feb 12 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
renoX wrote:
 Andrei Alexandrescu (See Website For Email) Wrote:
 []
 Au contraire, I think it's a definite step in the right direction. 
 Writing programs that write programs is a great way of doing more with 
 less effort. 
As long as it works, yes, but if you need to debug the result, you're most likely going to suffer a lot.. Also this can lead to huge compile times due to code expansion (been there with generated C++ code: 6h compilation time(!) and full compilation needed each time due to the difficulty of tracking dependencies). So it is a bit like TNT: a useful tool sure, but a dangerous one too. renoX
Compile times for metacode will reduce dramatically (to become a practical non-issue in most cases) through the technology that Walter is working on right now. Let's not forget that TNT moderated by carbon gave us dynamite :o). Andrei
Feb 12 2007