
digitalmars.D - Compile time function execution...

reply Walter Bright <newshound digitalmars.com> writes:
... is now in DMD 1.006. For example:

 -------------------------------------------
 import std.stdio;
 
 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }
 
 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
Feb 15 2007
next sibling parent reply Ary Manzana <ary esperanto.org.ar> writes:
Walter Bright escribió:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
This is really great, Walter! Congratulations!

I keep thinking that D is focused on great runtime performance while keeping great expressiveness, which is awesome. I can't believe computers have evolved this much... and still we have to wait 10 or 20 seconds for a program to load :-(

A question: is there any way the compiler can tell the user whether a certain function can be executed at compile time? Those six rules may be hard to memorize, and while writing sensitive code it would be great to ask the compiler "Can the function I'm writing be executed at compile time?". Otherwise it's just a guess.
Feb 15 2007
next sibling parent reply BCS <BCS pathlink.com> writes:
Ary Manzana wrote:
 A question: is there any way the compiler can tell the user whether a certain
 function can be executed at compile time? Those six rules may be hard to
 memorize, and while writing sensitive code it would be great to ask the
 compiler "Can the function I'm writing be executed at compile time?".
 Otherwise it's just a guess.
int[foo()] should do it, I think, or temp!(foo()). Both are hacks, but...
Feb 15 2007
parent reply Ary Manzana <ary esperanto.org.ar> writes:
BCS escribió:
 Ary Manzana wrote:
 A question: is there any way the compiler can tell the user whether a
 certain function can be executed at compile time? Those six rules may
 be hard to memorize, and while writing sensitive code it would be great
 to ask the compiler "Can the function I'm writing be executed at
 compile time?". Otherwise it's just a guess.
int[foo()] should do it, I think, or temp!(foo()). Both are hacks, but...
It would be nice to have something like this in phobos:

-------------------------------------------------
import ... ?

int square(int i)
{
    return i * i;
}

static assert (isCompileTimeExecution(square()));
-------------------------------------------------

so that if the function is changed you can still assert that, or know that you've lost it.
Feb 15 2007
next sibling parent BCS <BCS pathlink.com> writes:
Ary Manzana wrote:
 BCS escribió:
 
 Ary Manzana wrote:

 A question: is there any way the compiler can tell the user whether a
 certain function can be executed at compile time? Those six rules may
 be hard to memorize, and while writing sensitive code it would be great
 to ask the compiler "Can the function I'm writing be executed at
 compile time?". Otherwise it's just a guess.
int[foo()] should do it, I think, or temp!(foo()). Both are hacks, but...
It would be nice to have something like this in phobos:

-------------------------------------------------
import ... ?

int square(int i)
{
    return i * i;
}

static assert (isCompileTimeExecution(square()));
-------------------------------------------------

so that if the function is changed you can still assert that, or know that you've lost it.
I'd go with an is expression:

is(square(int) : const)

sort of in line with the function/etc. stuff.
Feb 15 2007
prev sibling parent Nicolai Waniek <no.spam thank.you> writes:
Ary Manzana wrote:
 
 It would be nice to have something like this in phobos:
Just a note (it has nothing to do with the topic): IMHO, phobos and tango should be merged somehow; I don't like the idea of having two standard libraries. Hopefully someone recognizes this ;)
Feb 15 2007
prev sibling next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Ary Manzana wrote:
 A question: is there any way the compiler can tell the user whether a certain
 function can be executed at compile time? Those six rules may be hard to
 memorize, and while writing sensitive code it would be great to ask the
 compiler "Can the function I'm writing be executed at compile time?".
 Otherwise it's just a guess.
The way to tell is that the compiler gives you an error if you try to execute a function at compile time and it cannot be. Don't bother trying to memorize the rules; if you instead follow the rule of thumb "imagine regular compile time constant folding and extend it to functions", you'll be about 99% correct.
Feb 15 2007
parent reply BCS <BCS pathlink.com> writes:
Walter Bright wrote:
 Ary Manzana wrote:
 
 A question: is there any way the compiler can tell the user whether a
 certain function can be executed at compile time? Those six rules may
 be hard to memorize, and while writing sensitive code it would be great
 to ask the compiler "Can the function I'm writing be executed at
 compile time?". Otherwise it's just a guess.
The way to tell is that the compiler gives you an error if you try to execute a function at compile time and it cannot be. Don't bother trying to memorize the rules; if you instead follow the rule of thumb "imagine regular compile time constant folding and extend it to functions", you'll be about 99% correct.
I think the issue is cases where compile time and runtime are both allowed, but the choice makes a *BIG* difference in performance.
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
BCS wrote:
 I think the issue is cases where compile time and runtime are both allowed,
 but the choice makes a *BIG* difference in performance.
Right, which is why compile time execution is only done in contexts where it would otherwise error, such as in initialization of global constants.
Feb 15 2007
parent reply BCS <ao pathlink.com> writes:
Reply to Walter,

 BCS wrote:
 
 I think the issue is cases where compile time and runtime are both allowed,
 but the choice makes a *BIG* difference in performance.
 
Right, which is why compile time execution is only done in contexts where it would otherwise error, such as in initialization of global constants.
Hmm, so compile time evaluation is lazy, not greedy? That makes me want a cast(const) to force it in some cases:

char[] CTwritef(...);

const char[] message = CTwritef(">>%s:%d foobar", __FILE__, __LINE__+1);
log(message);

vs.

log(cast(const)CTwritef(">>%s:%d foobar", __FILE__, __LINE__+1));

or best of all, why not have it greedy?

log(CTwritef(">>%s:%d foobar", __FILE__, __LINE__+1));
Feb 15 2007
parent Walter Bright <newshound digitalmars.com> writes:
BCS wrote:
 Hmm, so compile time evaluation is lazy, not greedy?
Yes.
 That makes me want 
 a cast(const) to force it in some cases
I think the eval() template does that nicely.
Feb 15 2007
prev sibling parent reply Russell Lewis <webmaster villagersonline.com> writes:
Ary Manzana wrote:
 A question: is there any way the compiler can tell the user whether a certain
 function can be executed at compile time? Those six rules may be hard to
 memorize, and while writing sensitive code it would be great to ask the
 compiler "Can the function I'm writing be executed at compile time?".
 Otherwise it's just a guess.
Walter pointed out, faster than I did, that an eval!() template does this.

But here's my question: how do we have a compile-time switch which controls what to evaluate at compile time and what not? Somebody mentioned elsewhere that you would not want to do compile-time evaluation in a debug build, but you would in a release build. How would one achieve that, other than wrapping every use in a version switch?
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Russell Lewis wrote:
 But here's my question: How do we have a compile-time switch which 
 controls what to compile-time evaluate and what not?
It's not necessary. The context completely determines what is evaluated at compile time and what at run time.
Feb 15 2007
parent reply Russell Lewis <webmaster villagersonline.com> writes:
Walter Bright wrote:
 Russell Lewis wrote:
 But here's my question: How do we have a compile-time switch which 
 controls what to compile-time evaluate and what not?
It's not necessary. The context completely determines what is evaluated at compile time and what at run time.
Please don't misunderstand me. I understand how your design works. (And I think it's a pretty good one.) But what I'm saying is that somebody might want to write code like this:

    int i;
    version(debug)
        i = eval!(MyFunc());
    else
        i = MyFunc();

Perhaps they want to do this because MyFunc() is still being debugged, or because MyFunc() takes a while to do compile-time function execution.

How would one write a cleaner syntax for this? I'd like to see something like:

    int i = exec_compile_time_on_release_build!(MyFunc());

but I'm not sure how one would code it.
Feb 16 2007
parent Walter Bright <newshound digitalmars.com> writes:
Russell Lewis wrote:
 But what I'm saying is that somebody 
 might want to write code like this:
 
     int i;
     version(debug)
         i = eval!(MyFunc());
     else
         i = MyFunc();
 
 Perhaps they want to do this because MyFunc() is still being debugged, 
 or because MyFunc() takes a while to do compile-time function execution.
 
 How would one write a cleaner syntax for this? I'd like to see
 something like:
 
     int i = exec_compile_time_on_release_build!(MyFunc());
 
 but I'm not sure how one would code it.
That'll become possible once templates are extended to be able to take alias expressions as arguments.
Feb 16 2007
prev sibling next sibling parent reply kris <foo bar.com> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
Good one. Can you perhaps explain what the execution mechanism is? That aspect is much more interesting to some of us geeks <g>
Feb 15 2007
parent Walter Bright <newshound digitalmars.com> writes:
kris wrote:
 Good one.
I thought you'd appreciate it because it involves NO new syntax. It just makes things work that were errors before.
 Can you perhaps explain what the execution mechanism is? That 
 aspect is much more interesting to some of us geeks <g>
The dirty deed is done in interpret.c. It basically just makes the existing constant folding code more powerful.
Feb 15 2007
prev sibling next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
For contrast, compare with the C++ proposal: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf
Feb 15 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
For contrast, compare with the C++ proposal: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf
It's kinda long and boring, but it looks like the key differences are:

1) The need to tag compile-time functions with a new keyword, "constexpr", though they seem to sell this as an advantage: "a programmer can state that a function is intended to be used in a constant expression and the compiler can diagnose mistakes." -- page 9.

2) The restriction that a constexpr function can only contain "return" followed by exactly one expression. No loops for you! And I hope you like quadruply nested b?x:y expressions!

That's just a proposal, though, right? Has it been accepted for C++09? Seems like a hard sell given the limitations and the new keyword.

--bb
Feb 15 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
For contrast, compare with the C++ proposal: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf
It's kinda long and boring, but it looks like the key differences are:

1) The need to tag compile-time functions with a new keyword, "constexpr", though they seem to sell this as an advantage: "a programmer can state that a function is intended to be used in a constant expression and the compiler can diagnose mistakes." -- page 9.

2) The restriction that a constexpr function can only contain "return" followed by exactly one expression. No loops for you! And I hope you like quadruply nested b?x:y expressions!
3) The C++ feature is applicable to user-defined types.

Andrei
Feb 15 2007
parent Lutger <lutger.blijdestijn gmail.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:

 Bill Baxter wrote:
 Walter Bright wrote:
 Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
For contrast, compare with the C++ proposal: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf
It's kinda long and boring, but it looks like the key differences are:

1) The need to tag compile-time functions with a new keyword, "constexpr", though they seem to sell this as an advantage: "a programmer can state that a function is intended to be used in a constant expression and the compiler can diagnose mistakes." -- page 9.

2) The restriction that a constexpr function can only contain "return" followed by exactly one expression. No loops for you! And I hope you like quadruply nested b?x:y expressions!
3) The C++ feature is applicable to user-defined types.

Andrei
These user-defined literals seem useful too. Would it be hard to implement this with structs, or are there perhaps more subtle issues? Here is an earlier article it was based on: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1511.pdf

Btw, this is a really cool feature (how it is done in D, I mean). A little while ago someone posted a CachedFunction template to do memoization on any function; this is now easy to do 100% safely, right? Or at least for the subset of functions that pass the compile-time criteria.
Feb 15 2007
prev sibling parent Reiner Pope <xxxx xxx.xxx> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
For contrast, compare with the C++ proposal: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf
It's kinda long and boring, but it looks like the key differences are:

1) The need to tag compile-time functions with a new keyword, "constexpr", though they seem to sell this as an advantage: "a programmer can state that a function is intended to be used in a constant expression and the compiler can diagnose mistakes." -- page 9.
Would this be useful as an optional tag? Suppose you wanted to make a utils library that is usable in both compile-time and run-time form. You write your code trying to follow the rules supplied in the specs, but how do you make sure you haven't slipped up anywhere? Would an additional keyword help here, or do you just have to do all your unit tests in static form, eg:

char[] itoa(long value) {...}

unittest
{
    static assert(itoa(12345) == "12345");
}

I suppose there's not much incentive to add a keyword in this situation, but perhaps it could be useful elsewhere?

Cheers,
Reiner
Feb 15 2007
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
For contrast, compare with the C++ proposal: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf
Huh, and Bjarne co-authored that proposal. I wonder what the reason is for all the restrictions. Sean
Feb 15 2007
parent Walter Bright <newshound digitalmars.com> writes:
Sean Kelly wrote:
 Walter Bright wrote:
 Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
For contrast, compare with the C++ proposal: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf
Huh, and Bjarne co-authored that proposal. I wonder what the reason is for all the restrictions.
I'm guessing, but I suppose they wished to be as conservative as possible in order to avoid a repeat of the unimplementable debacle that exported templates were.
Feb 15 2007
prev sibling next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
This is a development of epic proportions.

There is a need for a couple of ancillary features. Most importantly, a constant must be distinguishable from a variable. Consider the example of regex from our correspondence:

bool b = regexmatch(a, "\n$");

vs.

char[] pattern = argv[1];
bool b = regexmatch(a, pattern);

You'd want to dispatch regexmatch differently: the first match should be passed to compile-time code that at the end of the day yields:

bool b = (a[$-1] == '\n');

while the second should invoke the fully general dynamic pattern matching algorithm, since a dynamic pattern is used.

Andrei
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 There is a need for a couple of ancillary features. Most importantly, a 
 constant must be distinguishable from a variable. Consider the example 
 of regex from our correspondence:
 
 bool b = regexmatch(a, "\n$");
 
 vs.
 
 char[] pattern = argv[1];
 bool b = regexmatch(a, pattern);
 
 You'd want to dispatch regexmatch differently: the first match should be 
 passed to compile-time code that at the end of the day yields:
 
 bool b = (a[$-1] == '\n');
 
 while the second should invoke the fully general dynamic pattern matching
 algorithm, since a dynamic pattern is used.
I think that can be done with an improvement to the existing compile time regex library.
Feb 15 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 There is a need for a couple of ancillary features. Most importantly, 
 a constant must be distinguishable from a variable. Consider the 
 example of regex from our correspondence:

 bool b = regexmatch(a, "\n$");

 vs.

 char[] pattern = argv[1];
 bool b = regexmatch(a, pattern);

 You'd want to dispatch regexmatch differently: the first match should 
 be passed to compile-time code that at the end of the day yields:

 bool b = (a[$-1] == '\n');

 while the second should invoke the fully general dynamic pattern
 matching algorithm, since a dynamic pattern is used.
I think that can be done with an improvement to the existing compile time regex library.
The answer is correct, but does not address the issue I raised.

A simple question is: what is the signature of regexmatch? A runtime-only version is:

bool regexmatch_1(char[] input, char[] pattern);

A compile-time-only version is:

bool regexmatch_2(pattern : char[])(char[] input);

Notice how the two cannot be called the same way. So the burden is on the user to specify different syntaxes for the two cases:

bool b1 = regexmatch_1(a, ".* = .*"); // forced runtime
bool b2 = regexmatch_2!(".* = .*")(a); // forced compile-time

Notice that b2 is NOT computed at compile time!!! This is because "a" is a regular variable. It's just that the code for computing b2 is radically different from the code for computing b1, because the former uses static knowledge of the pattern.

The problem is that what's really needed is this:

bool b = regexmatch(string, pattern);

and have regexmatch dispatch to regexmatch_1 if pattern is a variable, or to regexmatch_2 if pattern is a compile-time constant. Do you feel me?

What we need is to allow a means to overload a function with a template, in a way that ensures unified invocation syntax, e.g.:

bool regexmatch(char[] str, char[] pat); // 1
bool regexmatch(char[] pat)(char[] str, pat); // 2
bool regexmatch(char[] pat, char[] str)(str, pat); // 3

void main(int argc, char[][] argv)
{
  regexmatch(argv[0], argv[1]); // goes to (1)
  regexmatch(argv[0], ".+"); // goes to (2)
  regexmatch("yah", ".+"); // goes to (3)
}

Notice how the invocation syntax is identical --- an essential artifact.

Andrei
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 A simple question is: what is the signature of regexmatch? A
 runtime-only version is:
 
 bool regexmatch_1(char[] input, char[] pattern);
 
 A compile-time-only version is:
 
 bool regexmatch_2(pattern : char[])(char[] input);
 
 Notice how the two cannot be called the same way. So the burden is on
 the user to specify different syntaxes for the two cases:
 
 bool b1 = regexmatch_1(a, ".* = .*"); // forced runtime
 bool b2 = regexmatch_2!(".* = .*")(a); // forced compile-time
 
 Notice that b2 is NOT computed at compile time!!! This is because "a" is
 a regular variable. It's just that the code for computing b2 is
 radically different from the code for computing b1 because the former
 uses static knowledge of the pattern.
 
 The problem is that what's really needed is this:
 
 bool b = regexmatch(string, pattern);
 
 and have regexmatch dispatch to regexmatch_1 if pattern is a variable,
 or to regexmatch_2 if pattern is a compile-time constant.
 
 Do you feel me?
No, but I understand your point.
 What we need is allow a means to overload a function with a template, in
 a way that ensures unified invocation syntax, e.g.:
 
 bool regexmatch(char[] str, char[] pat); // 1
 bool regexmatch(char[] pat)(char[] str, pat); // 2
 bool regexmatch(char[] pat, char[] str)(str, pat); // 3
 
 void main(int argc, char[][] argv)
 {
   regexmatch(argv[0], argv[1]); // goes to (1)
   regexmatch(argv[0], ".+"); // goes to (2)
   regexmatch("yah", ".+"); // goes to (3)
 }
 
 Notice how the invocation syntax is identical --- an essential artifact.
Remember your proposal for an expression type which resolves to "does this expression compile"? That can be used here, along with expression aliasing, to test whether it can be done at compile time, and then pick the right fork.
Feb 15 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 What we need is allow a means to overload a function with a template, in
 a way that ensures unified invocation syntax, e.g.:

 bool regexmatch(char[] str, char[] pat); // 1
 bool regexmatch(char[] pat)(char[] str, pat); // 2
 bool regexmatch(char[] pat, char[] str)(str, pat); // 3

 void main(int argc, char[][] argv)
 {
   regexmatch(argv[0], argv[1]); // goes to (1)
   regexmatch(argv[0], ".+"); // goes to (2)
   regexmatch("yah", ".+"); // goes to (3)
 }

 Notice how the invocation syntax is identical --- an essential artifact.
Remember your proposal for an expression type which resolves to "does this expression compile"? That can be used here, along with expression aliasing, to test whether it can be done at compile time, and then pick the right fork.
It could indeed; I'm just hoping it can be properly cloaked away (e.g. in a library) at little cognitive cost to both the user and the library developer. I assume it will be something often asked for.

Many Perl coders probably have no idea that:

$a =~ ".*=.*";

is faster than, and handled very, very differently from:

$a =~ $b;

where $b is a dynamic variable. They just use the uniform syntax and let the compiler do whatever the hell it has to do to generate good code. We should make that kind of partial evaluation easy to define and implement.

Andrei
Feb 15 2007
parent Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 where $b is a dynamic variable. They just use the uniform syntax and let 
 the compiler do whatever the hell it has to do to generate good code. We 
 should make that kind of partial evaluation easy to define and implement.
I agree.
Feb 15 2007
prev sibling next sibling parent reply Michiel <nomail please.com> writes:
Walter Bright wrote:

 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
That's a great feature! A couple of questions, if I may:

* Can sqrt still call another function?
* Does it still work if classes or structs are involved in any way? What about enums? What about arrays?
* Why is the second sqrt(10) run at runtime? In theory it's still a constant, right? Is this something that will work in a later version?

Congrats on the new feature!
Feb 15 2007
next sibling parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Michiel wrote:
 That's a great feature! A couple of questions, if I may:
 
 * Can sqrt still call another function?
According to the documentation in the zip, yes. But only functions which can themselves be executed at compile time, of course.
 * Does it still work if classes or structs are involved in any way? What
 about enums? What about arrays?
Classes and structs seem to be disallowed. Array and string literals can be used as parameters, as long as all members would also be valid parameters by themselves. The body may not use non-const arrays.

Enums aren't mentioned, but either they qualify as integers (meaning they can be passed as parameters) or not. Either way, they're not disallowed in the body, so they should be usable there.
 * Why is the second sqrt(10) run at runtime? In theory it's still a
 constant, right? Is this something that will work in a later version?
From the spec:

"""
In order to be executed at compile time, the function must appear in a context where it must be so executed, for example:

* initialization of a static variable
* dimension of a static array
* argument for a template value parameter
"""

So this is only done if it's used in a context where only compile-time constants are allowed.
Feb 15 2007
parent reply Michiel <nomail please.com> writes:
Frits van Bommel wrote:

 * Why is the second sqrt(10) run at runtime? In theory it's still a
 constant, right? Is this something that will work in a later version?
From the spec:

"""
In order to be executed at compile time, the function must appear in a context where it must be so executed, for example:

* initialization of a static variable
* dimension of a static array
* argument for a template value parameter
"""

So this is only done if it's used in a context where only compile-time constants are allowed.
Well, then there is room for improvement. (Good thing, too. Can you imagine how bad it would be if perfection had already been achieved? ;))

Anyway, it looks to me like every subtree of the abstract syntax tree could potentially be collapsed by compile time function execution. This would include the mentioned sqrt call. Of course, I don't really know how the D compiler works internally, so I can't be sure in this case. And I don't see a real reason why structs and at least scope classes couldn't be included. But I don't suppose it's all that easy.

Maybe the perfect compiler would also pre-execute everything up until the first input is needed. And maybe some bits after.

-- Michiel
Feb 15 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Michiel wrote:
 Frits van Bommel wrote:
 
 * Why is the second sqrt(10) run at runtime? In theory it's still a
 constant, right? Is this something that will work in a later version?
From the spec:

"""
In order to be executed at compile time, the function must appear in a context where it must be so executed, for example:

* initialization of a static variable
* dimension of a static array
* argument for a template value parameter
"""

So this is only done if it's used in a context where only compile-time constants are allowed.
Well, then there is room for improvement. (Good thing, too. Can you imagine how bad it would be if perfection had already been achieved? ;))

Anyway, it looks to me like every subtree of the abstract syntax tree could potentially be collapsed by compile time function execution. This would include the mentioned sqrt call. Of course, I don't really know how the D compiler works internally, so I can't be sure in this case. And I don't see a real reason why structs and at least scope classes couldn't be included. But I don't suppose it's all that easy.

Maybe the perfect compiler would also pre-execute everything up until the first input is needed. And maybe some bits after.
You do need some way to turn it off, though.

For an extreme example, most programs from the demoscene make extensive use of compressed data that is uncompressed as the first step before running. They would be very unhappy if their language chose to be "helpful" by running the decompression routines at compile-time, thus resulting in a 20M executable.

Extreme -- but it does demonstrate there are cases where you want to be sure some expansion happens at runtime.

--bb
Feb 15 2007
parent reply Michiel <nomail please.com> writes:
Bill Baxter wrote:

 Maybe the perfect compiler would also pre-execute everything up until
 the first input is needed. And maybe some bits after.
You do need some way to turn it off, though.

For an extreme example, most programs from the demoscene make extensive use of compressed data that is uncompressed as the first step before running. They would be very unhappy if their language chose to be "helpful" by running the decompression routines at compile-time, thus resulting in a 20M executable.

Extreme -- but it does demonstrate there are cases where you want to be sure some expansion happens at runtime.
Very good point. This could be solved by using some sort of conditional compilation construct on the bits you want to happen at runtime. -- Michiel
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Michiel wrote:
 Very good point. This could be solved by using some sort of conditional
 compilation construct on the bits you want to happen at runtime.
It's the other way around. Functions are always executed at runtime, unless they are in a context where a compile-time constant is *required*, such as for global variable initialization or template value arguments.
Feb 15 2007
parent reply Michiel <nomail please.com> writes:
Walter Bright wrote:

 Very good point. This could be solved by using some sort of conditional
 compilation construct on the bits you want to happen at runtime.
It's the other way around. Functions are always executed at runtime, unless they are in a context where a compile-time constant is *required*, such as for global variable initialization or template value arguments.
I know that's how it's implemented now. But generally, it's a good thing if as much as possible happens at compile time. Unless you compile in debug mode, in which case you want the compilation to be fast. And like Bill said, you also don't want to decompress everything at compile time and blow up the executable. But that's not a problem if all you're doing is shrinking subtrees. Like in the case of that second sqrt call. -- Michiel
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Michiel wrote:
 Walter Bright wrote:
 It's the other way around. Functions are always executed at runtime,
 unless they are in a context where a compile-time constant is
 *required*, such as for global variable initialization or template value
 arguments.
I know that's how it's implemented now. But generally, it's a good thing if as much as possible happens at compile time.
Many functions could be executed at compile time, but should not be. There's NO way for the compiler to figure this out. The only thing left is for it to be explicit whether you want compile or run time execution, and this is the way it is designed. There isn't any ambiguity - the context determines it completely. There's also an easy way to switch between the two, so I don't feel there's a need for anything additional.
Feb 15 2007
parent reply Michiel <nomail please.com> writes:
Walter Bright wrote:

 Many functions could be executed at compile time, but should not be.
When should they not be? -- Michiel
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Michiel wrote:
 Walter Bright wrote:
 
 Many functions could be executed at compile time, but should not be.
When should they not be?
1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose.

2) A function that simply takes too long at compile time. Compile time execution is going to be >100 times slower than run time.

3) As the spec mentions, there are cases where compile time would get different results than run time.

4) The compiler cannot determine in advance if a function can be executed at compile time. So speculatively doing so would have to be done for every function with constant arguments - this could be spectacularly slow.

5) It can cause problems for people who want to do runtime coverage testing - their code may actually never get executed, and they never notice.
Feb 15 2007
next sibling parent reply Michiel <nomail please.com> writes:
Walter Bright wrote:

 Many functions could be executed at compile time, but should not be.
When should they not be?
1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose.
Yes, that one was mentioned by Bill Baxter. But I think this is the exception rather than the rule. The programmer could explicitly tell the compiler to not execute that piece of code at runtime.
 2) A function that simply takes too long at compile time. Compile time
 execution is going to be >100 times slower than run time.
If I'm compiling for release, I wouldn't mind letting the compiler run a long time if it results in a faster executable.
 3) As the spec mentions, there are cases where compile time would get
 different results than run time.
It shouldn't, though, should it? I'm just talking about a possible ultimate goal here. I understand if D isn't at that stage yet.
 4) The compiler cannot determine in advance if a function can be
 executed at compile time. So speculatively doing so would have to be
 done for every function with constant arguments - this could be
 spectacularly slow.
Like I said, I wouldn't mind. As long as it doesn't take that long when I'm debug-mode compiling. I don't need compile time execution in that case.
 5) It can cause problems for people who want to do runtime coverage
 testing - their code may actually never get executed, and they never
 notice.
Nothing a compiler-switch can't fix, I think. -- Michiel
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Michiel wrote:
 Walter Bright wrote:
 
 Many functions could be executed at compile time, but should not be.
When should they not be?
1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose.
Yes, that one was mentioned by Bill Baxter. But I think this is the exception rather than the rule. The programmer could explicitly tell the compiler to not execute that piece of code at runtime.
The programmer *already* has explicit control over whether it is folded or not.
Feb 15 2007
parent reply Michiel <nomail please.com> writes:
Walter Bright wrote:

 Many functions could be executed at compile time, but should not be.
When should they not be?
1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose.
Yes, that one was mentioned by Bill Baxter. But I think this is the exception rather than the rule. The programmer could explicitly tell the compiler to not execute that piece of code at runtime.
The programmer *already* has explicit control over whether it is folded or not.
Correction, I meant to say: "The programmer could explicitly tell the compiler to not execute that piece of code at *compile time*."

And yes, the programmer already has explicit control. But not the way I mean. Right now, in places where a call could potentially be either a runtime or compile-time execution:

* the compiler defaults to runtime execution. And like I said, that behavior isn't wanted in most cases (when compiling for release). I would suggest defaulting to compile time execution and using a special keyword to avoid that. The programmer will use this keyword if he's compressed some data and wants it to be decompressed at runtime.

* the syntax for functions to be executed at compile time isn't the nice-and-simple D syntax, but the template syntax. And in another thread you yourself have mentioned why that's not optimal. I agree.

-- Michiel
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Michiel wrote:
 * the syntax for functions to be executed at compile time isn't the
 nice-and-simple D syntax, but the template-syntax. And in another thread
 you yourself have mentioned why that's not optimal. I agree.
I don't think eval!(expression) is an undue burden. It's hard to imagine how any other syntax would look better.
Feb 15 2007
parent reply Reiner Pope <xxxx xxx.xxx> writes:
Walter Bright wrote:
 Michiel wrote:
 * the syntax for functions to be executed at compile time isn't the
 nice-and-simple D syntax, but the template-syntax. And in another thread
 you yourself have mentioned why that's not optimal. I agree.
I don't think eval!(expression) is an undue burden. It's hard to imagine how any other syntax would look better.
But in many situations, you could view compile-time function execution simply as a high-level optimization, an extension of 'inline'. Why, then, does it make sense to require explicit eval!() annotations, when it clearly doesn't make sense to do this for inline? Viewing this as a more complex extension to inlining, I think the correct approach is to allow explicit annotations for 'only at runtime' and 'only at compile time' and have the rest decided according to a compiler switch, e.g. -pre-eval. Cheers, Reiner
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Reiner Pope wrote:
 Walter Bright wrote:
 Michiel wrote:
 * the syntax for functions to be executed at compile time isn't the
 nice-and-simple D syntax, but the template-syntax. And in another thread
 you yourself have mentioned why that's not optimal. I agree.
I don't think eval!(expression) is an undue burden. It's hard to imagine how any other syntax would look better.
But in many situations, you could view compile-time function execution simply as a high-level optimization, an extension on 'inline'. Why, then, does it make sense to require explicit eval!() annotations, when it clearly doesn't make sense to do this for inline?
You can completely ignore and omit the eval!() and your code will run just fine. The eval!() is only for those who wish to do some more advanced tweaking.
 Viewing this as a more complex extension to inlining, I think the 
 correct approach is to allow explicit annotations for 'only at runtime' 
 and 'only at compile time'
I just don't see what is to be gained by this (and consider the extra bloat in the language to provide such annotations). The language already gives complete, absolute control over when a function is executed. Can you give an example where such annotations improve things?
 and have the rest decided according to a 
 compiler switch, eg -pre-eval
Feb 15 2007
parent Reiner Pope <xxxx xxx.xxx> writes:
Walter Bright wrote:
 Viewing this as a more complex extension to inlining, I think the 
 correct approach is to allow explicit annotations for 'only at 
 runtime' and 'only at compile time'
I just don't see what is to be gained by this (and consider the extra bloat in the language to provide such annotations). The language already gives complete, absolute control over when a function is executed.
It's better for the user only to have complete control when desired. The compiler should infer the rest, but by a better policy than 'leave as much to runtime as possible', since such a policy misses optimization opportunities.

Unless I misunderstand, it seems like compile time function execution opens up a lot of optimization opportunities, yet the programmer is required to explicitly specify every time they should be used. From what I understood, this function execution was implemented by extending const-folding. Wasn't const-folding initially a form of optimization, to pre-evaluate things so they didn't need to be evaluated at runtime? Consider:

int sq(int val) { return val*val; }

void main()
{
    printf("%d", sq(1024));
}

Although sq() will probably be inlined, if it gets too big, then it won't. However, there is no reason that it can't be evaluated at compile time, which would increase the runtime efficiency. Sure, the programmer could enforce compile-time evaluation by using

printf("%d", eval!(sq(1024)));

but if the compiler can evaluate it automatically, then there would be no need for programmer annotations here.

I'm just agreeing with Michiel that there should be a compile-time switch that tells the compiler to pre-evaluate as much as possible. This is an optimization because it means that some code doesn't need to be run at runtime. I just suggested the 'only at runtime' annotation so that the programmer has a safeguard against the compiler going wild with pre-evaluation, for situations like the compressed-code example you mentioned earlier.
 You can completely ignore and omit the eval!() and your code will run 
 just fine. The eval!() is only for those who wish to do some more 
 advanced tweaking.
For sure. However, what I referred to in my post was the similarity between requiring 'inline' in C++ to tell the compiler to optimize the function call and requiring eval!() in D to tell the compiler to pre-evaluate the function. Cheers, Reiner
Feb 15 2007
prev sibling next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Walter,

 Michiel wrote:
 
 Walter Bright wrote:
 
 Many functions could be executed at compile time, but should not be.
 
When should they not be?
1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose. 2) A function that simply takes too long at compile time. Compile time execution is going to be >100 times slower than run time. 3) As the spec mentions, there are cases where compile time would get different results than run time. 4) The compiler cannot determine in advance if a function can be executed at compile time. So speculatively doing so would have to be done for every function with constant arguments - this could be spectacularly slow. 5) It can cause problems for people who want to do runtime coverage testing - their code may actually never get executed, and they never notice.
6) when the whole program is const foldable, i.e. no runtime inputs, like with some scientific software.
Feb 15 2007
parent reply Michiel <nomail please.com> writes:
BCS wrote:

 6) when the whole program is const foldable, e.i. no runtime inputs,
 like with some scientific software.
That's interesting. What kind of program has no runtime inputs? And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast. -- Michiel
Feb 15 2007
next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Michiel,

 BCS wrote:
 
 6) when the whole program is const foldable, e.i. no runtime inputs,
 like with some scientific software.
 
That's interesting. What kind of program has no runtime inputs? And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
I have a program that calculates the number of N length sequences whose sum is less than M. M and N are consts, so it has no inputs. Runtime speed is of no more importance than compile time speed because I only need to run it once for each time I compile it. I'd rather let the CPU do the crunching than DMD. It takes about 15min to run as is, so that would be about 1500+ min under DMD.
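A minimal sketch of that kind of input-free program (M, N, the value range, and the brute-force approach are all hypothetical here, just to illustrate a program whose entire output is const-foldable):

```d
import std.stdio;

const int N = 4;   // sequence length (hypothetical)
const int M = 10;  // sum bound (hypothetical)

// Count N-length sequences of values 0..M-1 whose total sum is less than M.
int count(int len, int sum)
{
    if (sum >= M)
        return 0;   // prune: sum already too large
    if (len == 0)
        return 1;   // a complete sequence with sum < M
    int total = 0;
    for (int v = 0; v < M; v++)
        total += count(len - 1, sum + v);
    return total;
}

void main()
{
    // no runtime inputs anywhere: in principle the whole
    // program folds down to printing one constant
    writefln("%s", count(N, 0));
}
```

With no inputs, an aggressive pre-evaluating compiler could reduce main() to a single writefln of a constant, which is exactly the case where interpreting at >100x slowdown hurts.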
Feb 15 2007
parent Michiel <nomail please.com> writes:
BCS wrote:

 6) when the whole program is const foldable, e.i. no runtime inputs,
 like with some scientific software.
That's interesting. What kind of program has no runtime inputs? And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
I have a program that calculates the number of N length sequences whose sum is less than M. M and N are consts so it has no inputs. Runtime speed is of no more importance than compile time speed because I only need to run it once for each time I compile it. I'd rather let the CPU do the crunching rather than the DMD. It takes about 15min to run as is, so that would be about 1500+ min under DMD.
Ah, now I understand the fuss. :) Walter is using an interpreter for the compile time execution. All this time I was assuming multiple compilation stages. First compile the compile-time functions and let the processor run them at compile time. -- Michiel
Feb 15 2007
prev sibling next sibling parent reply =?ISO-8859-1?Q?Jari-Matti_M=E4kel=E4?= <jmjmak utu.fi.invalid> writes:
Michiel wrote:
 BCS wrote:
 
 6) when the whole program is const foldable, e.i. no runtime inputs,
 like with some scientific software.
And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
If the compiler generates a new instance of the algorithm whenever it is needed (i.e. function is partially evaluated), it can actually produce longer code than when simply calling the same function over and over. It's almost like inlining functions whenever possible.
Feb 15 2007
parent Michiel <nomail please.com> writes:
Jari-Matti Mäkelä wrote:

 6) when the whole program is const foldable, e.i. no runtime inputs,
 like with some scientific software.
And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
If the compiler generates a new instance of the algorithm whenever it is needed (i.e. function is partially evaluated), it can actually produce longer code than when simply calling the same function over and over. It's almost like inlining functions whenever possible.
That's actually not the idea. Basically, the idea is that the compiler runs the algorithm and replaces every call to it with the answer. It's possible (though unlikely) that the executable would be a bit larger, but it would be much faster. And memory we have. In most cases it's speed that's the problem. -- Michiel
Feb 15 2007
prev sibling parent reply Russell Lewis <webmaster villagersonline.com> writes:
Michiel wrote:
 BCS wrote:
 
 6) when the whole program is const foldable, e.i. no runtime inputs,
 like with some scientific software.
That's interesting. What kind of program has no runtime inputs? And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
How about:

import std.stdio;

void main()
{
    writefln("%d\n", CalculateTheAnswerToLifeUniverseEverything());
}

If it took 7.5 million years to run on a supercomputer, how long is it going to take to run on your compiler?
Feb 15 2007
parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Thu, 15 Feb 2007 17:52:38 -0700, Russell Lewis wrote:

 
 import std.stdio;
 void main() {
    writefln("%d\n", CalculateTheAnswerToLifeUniverseEverything());
 }
 
 if it took 7.5 million years to run on a supercomputer, how long is it 
 going to take to run on your compiler?
int CalculateTheAnswerToLifeUniverseEverything()
{
    return 42;
}

-- Derek (skype: derek.j.parnell) Melbourne, Australia "Justice for David Hicks!" 16/02/2007 4:37:59 PM
Feb 15 2007
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Derek Parnell wrote:
 On Thu, 15 Feb 2007 17:52:38 -0700, Russell Lewis wrote:
 if it took 7.5 million years to run on a supercomputer, how long is it 
 going to take to run on your compiler?
int CalculateTheAnswerToLifeUniverseEverything() { return 42; }
No, the function name says *calculate*, not *return* :P. Also, nitpick: wasn't it "The Answer To _The Ultimate Question_ Of Life, the Universe and Everything"? (i.e. you missed a bit ;) )
Feb 16 2007
parent Aarti_pl <aarti interia.pl> writes:
Frits van Bommel wrote:
 Derek Parnell wrote:
 On Thu, 15 Feb 2007 17:52:38 -0700, Russell Lewis wrote:
 if it took 7.5 million years to run on a supercomputer, how long is 
 it going to take to run on your compiler?
int CalculateTheAnswerToLifeUniverseEverything() { return 42; }
No, the function name says *calculate*, not *return* :P.
I think that it was just a great example of brain-time constant folding :D Regards Marcin Kuszczak
Feb 16 2007
prev sibling parent reply janderson <askme me.com> writes:
Walter Bright wrote:
 Michiel wrote:
 Walter Bright wrote:

 Many functions could be executed at compile time, but should not be.
When should they not be?
1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose. 2) A function that simply takes too long at compile time. Compile time execution is going to be >100 times slower than run time.
One thought on this: what about leveraging multiple threads to calculate different algorithms at once? They should all be self-contained, so this shouldn't be much of a problem. With quad-cores, the compile-time execution would maybe run only 30 times slower. Also, it's a good step towards future-proofing the compiler. -Joel
Feb 16 2007
parent Michiel <nomail please.com> writes:
janderson wrote:

 2) A function that simply takes too long at compile time. Compile time
 execution is going to be >100 times slower than run time.
One thought on this. What about leveraging multiple threads to calculate different algorithms at once. They should all be self contained, so this shouldn't be much of a problem. With the quad-cores the compiled program would run maybe at 30 times slower. Also its a good step towards future-proofing the compiler.
And I'm not 100% sure why you can't compile those functions and run another process to do the algorithms. Not with an interpreter, but with compiled D code. Of course, I wouldn't know how to actually do that, but in theory it's possible. -- Michiel
Feb 17 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Michiel wrote:
 That's a great feature! A couple of questions, if I may:
 
 * Can sqrt still call another function?
Yes, as long as that function can also be compile time executed. Recursion works, too.
 * Does it still work if classes or structs are involved in any way? What
 about enums? What about arrays?
No, yes, array literals only.
 * Why is the second sqrt(10) run at runtime?
Because the compiler cannot tell if it is a reasonable thing to try and execute it at compile time. To force it to happen at compile time:

template eval(A...) { alias A eval; }

writefln("%s", eval!(sqrt(10)));

which works because templates can only take constants as value arguments.
 In theory it's still a
 constant, right?
No.
 Is this something that will work in a later version?
I don't see a motivating reason why, at the moment, given the eval template.
Feb 15 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Michiel wrote:
 That's a great feature! A couple of questions, if I may:

 * Can sqrt still call another function?
Yes, as long as that function can also be compile time executed. Recursion works, too.
 * Does it still work if classes or structs are involved in any way? What
 about enums? What about arrays?
No, yes, array literals only.
 * Why is the second sqrt(10) run at runtime?
Because the compiler cannot tell if it is a reasonable thing to try and execute it at compile time. To force it to happen at compile time:

template eval(A...) { alias A eval; }

writefln("%s", eval!(sqrt(10)));

which works because templates can only take constants as value arguments.
The feature of binding expressions to aliases will break eval. Andrei
Feb 15 2007
prev sibling next sibling parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
Hmm... infinite loops are suddenly _really_ slow to compile ;) :
-----
// Slightly modified from example
import std.stdio;

real test(real x)
{
    real root = x / 2;
    for (int ntries = 0; ntries < 5 || true; ntries++)
    {
        root = (root + x / root) / 2;
    }
    return root;
}

void main()
{
    static x = test(10);
}
-----
(Also, this seems to consume huge amounts of memory)

And error checking could be improved as well: removing the parameter from the call in the above code gives
---
test.d(15): function test.test (real) does not match parameter types ()
test.d(15): Error: expected 1 arguments, not 0
dmd: interpret.c:96: Expression* FuncDeclaration::interpret(Expressions*): Assertion `parameters && parameters->dim == dim' failed.
Aborted (core dumped)
---
It tells you what's wrong, but still trips an assert that seems to be about the same error. However:
-----
real test()
{
    return 0.0;
}

void main()
{
    static x = test();
}
-----
Gives:
---
dmd: interpret.c:96: Expression* FuncDeclaration::interpret(Expressions*): Assertion `parameters && parameters->dim == dim' failed.
Aborted (core dumped)
---
(the same assert is tripped)

So I don't know, maybe it's a different bug entirely. Or it tripped the other clause of the assert. I don't feel like inspecting interpret.c to find out what exactly the assert is checking and why it's false...
Feb 15 2007
prev sibling next sibling parent Nicolai Waniek <no.spam thank.you> writes:
Indeed very nice! :)
Feb 15 2007
prev sibling next sibling parent reply Gregor Richards <Richards codu.org> writes:
I see that I can't do this:

char[] someCompileTimeFunction()
{
     return "writefln(\"Wowza!\");";
}

int main()
{
     mixin(someCompileTimeFunction());
     return 0;
}


Any chance of compile-time code generation via this mechanism? Or are 
they simply handled in different, incompatible steps of compilation?

  - Gregor Richards

PS: Yes, I realize this is a terrible idea ^^
Feb 15 2007
next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Gregor Richards wrote:
 I see that I can't do this:
 
 char[] someCompileTimeFunction()
 {
     return "writefln(\"Wowza!\");";
 }
 
 int main()
 {
     mixin(someCompileTimeFunction());
     return 0;
 }
 
 
 Any chance of compile-time code generation via this mechanism? Or are 
 they simply handled in different, incompatible steps of compilation?
 
  - Gregor Richards
 
 PS: Yes, I realize this is a terrible idea ^^
That's a bug. I'll fix it.
Feb 15 2007
parent reply Max Samukha <samukha voliacable.com> writes:
On Thu, 15 Feb 2007 13:02:40 -0800, Walter Bright
<newshound digitalmars.com> wrote:

Gregor Richards wrote:
 I see that I can't do this:
 
 char[] someCompileTimeFunction()
 {
     return "writefln(\"Wowza!\");";
 }
 
 int main()
 {
     mixin(someCompileTimeFunction());
     return 0;
 }
 
 
 Any chance of compile-time code generation via this mechanism? Or are 
 they simply handled in different, incompatible steps of compilation?
 
  - Gregor Richards
 
 PS: Yes, I realize this is a terrible idea ^^
That's a bug. I'll fix it.
The following must be a related bug. The compiler complains that the argument to the mixin is not a string and parse() cannot be evaluated at compile time.

char[] parse(char[] src)
{
    return src;
}

class Test
{
    mixin(parse(import("guts.dspx")));
}

void main()
{
}

BTW, thanks for the awesome feature!
Feb 16 2007
parent Walter Bright <newshound digitalmars.com> writes:
Max Samukha wrote:
 The following must be a related bug. The compiler complains that the
 argument to the mixin is not a string and parse() cannot be evaliated
 at compile-time.
Yes, in the compiler source the mixin argument failed to be marked as "must interpret", so all these will fail. Fortunately, it's a trivial fix, and will go out in the next update.
Feb 16 2007
prev sibling parent Witold Baryluk <baryluk mpi.int.pl> writes:
On Thu, 15 Feb 2007 11:36:54 -0800
Gregor Richards <Richards codu.org> wrote:

 I see that I can't do this:
 
 char[] someCompileTimeFunction()
 {
      return "writefln(\"Wowza!\");";
 }
 
 int main()
 {
      mixin(someCompileTimeFunction());
      return 0;
 }
 
 
 Any chance of compile-time code generation via this mechanism? Or are 
 they simply handled in different, incompatible steps of compilation?
It seems to be bug.
 
   - Gregor Richards
 
 PS: Yes, I realize this is a terrible idea ^^
NO. It is the most exciting usage of this feature. I've been waiting for this a long time, and can't wait to use it in really cool things. :) -- Witold Baryluk MAIL: baryluk smp.if.uj.edu.pl, baryluk mpi.int.pl JID: movax jabber.autocom.pl
Feb 16 2007
prev sibling next sibling parent Hasan Aljudy <hasan.aljudy gmail.com> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
Awesome!!!! Can't wait to try and play with it!! (downloading at ~ 11 KB/s)
Feb 15 2007
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
Very nice! A few questions:

1) You say recursion is allowed -- does it do proper tail recursion?

2) Can't you just ignore 'synchronized' at compile time?

3) Would it be possible to add some sort of a version(CompileTime)? This would make it possible for those who want to be *sure* the function is only used at compile time to simply have it not exist as a runtime call. It could also be used to make slight modifications to functions that one would like to use as both compile-time and run-time. For example if you want to have synchronized/try/catch/throw/writefln type things in the runtime version.

--bb
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 Walter Bright wrote:
 ... is now in DMD 1.006. For example:
Very nice! A few questions: 1) You say recursion is allowed -- does it do proper tail recursion?
Tail recursion is a performance optimization, it has no effect on semantics, so it's not an issue for compile time execution.
 2) Can't you just ignore 'synchronized' at compile time?
Pure functions (the kind that can be compile time executed) don't need to be synchronized anyway.
 3) Would it be possible to add some sort of a version(CompileTime)? This 
 would make it possible for those who want to be *sure* the function is 
 only used at compile time to simply have it not exist as a runtime 
 call.  It could also be used to make slight modifications to functions 
 that one would like to use as both compile-time and run-time.  For 
 example if you want to have synchronized/try/catch/throw/writefln type 
 things in the runtime version.
Once more, there is never a situation where a function *might* or *might not* get executed at compile time. A function is *always* executed at runtime unless it is in a situation where it *must* be executed at compile time, such as in an initializer for a global.
Feb 15 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 ... is now in DMD 1.006. For example:
Very nice! A few questions: 1) You say recursion is allowed -- does it do proper tail recursion?
Tail recursion is a performance optimization, it has no effect on semantics, so it's not an issue for compile time execution.
 2) Can't you just ignore 'synchronized' at compile time?
Pure functions (the kind that can be compile time executed) don't need to be synchronized anyway.
 3) Would it be possible to add some sort of a version(CompileTime)? 
 This would make it possible for those who want to be *sure* the 
 function is only used at compile time to simply have it not exist as a 
 runtime call.  It could also be used to make slight modifications to 
 functions that one would like to use as both compile-time and 
 run-time.  For example if you want to have 
 synchronized/try/catch/throw/writefln type things in the runtime version.
Once more, there is never a situation where a function *might* or *might not* get executed at compile time. A function is *always* executed at runtime unless it is in a situation where it *must* be executed at compile time, such as in an initializer for a global.
Right.  But if I understand correctly, the same code can get called either at runtime or compile time depending on the situation.  But what if I want the runtime version to print out a message.  Or add to a global counter variable for some simple home-brew profiling stats.  It would be handy then if I could do:

int square(int x) {
   version(compiletime) {}else{
     writefln("trace: square");
     G_CallsToSquare++;
   }
   return (x*x);
}

I think I'm getting at the same kind of thing Andrei is talking about.  I don't want to have to limit what I do in the runtime version in order to make it acceptable for compile time.  He's saying I should be able to have two versions of the function, one for compile time and one for run time.  I think that's useful too.  But for simpler cases like the above, it would be nice I think if one could just version-out the bad parts.

And in this case:

int calculationIShouldOnlyDoAtCompileTime(int x)
{
   // * whatever it is *
}

int K = calculationIShouldOnlyDoAtCompileTime(4);

Whoops!  Silly programmer, looks like I forgot the 'const' on K.  Would be nice if I could get the compiler to remind me when I'm silly like that.  That could be arranged if there were a version(CompileTime).

I think one holy grail of this stuff (and one Lisp apparently attained in the 60's or 70's) is to just be able to treat everything as either compile-time or run-time depending on how much information you have.  For instance, it's not uncommon to make a fixed-length vector class using templates.  Vec!(N) kind of thing.  But it would really be nice if the majority of that code could remain even when N is not constant.  That means both on the user side and on the implementation side.
I don't know how realistic that is, but I often find myself sitting on the fence trying to decide -- do I make this a compile-time parameter, thereby cutting off all opportunity for runtime creation of a particular instance, or do I make it runtime, thereby cutting my and the compiler's opportunity to make several obvious optimizations. --bb
Feb 15 2007
parent Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 Right.  But if I understand correctly, the same code can get called 
 either at runtime or compile time depending on the situation.
Yes.
 But what if I want the runtime version to print out a message.  Or add 
 to a global counter variable for some simple home-brew profiling stats. 
  It would be handy then if I could do:
 
 int square(int x) {
    version(compiletime) {}else{
      writefln("trace: square");
      G_CallsToSquare++;
    }
    return (x*x);
 }
I see what you want, but it isn't going to work the way the compiler is built. Runtime interpretation occurs *after* semantic analysis, but versioning happens before. Right now, you're better off having two functions, one a wrapper around the other. After all, there isn't a version possible for inlining/not inlining, either.
 I think I'm getting at the same kind of thing Andrei is talking about. I 
 don't want to have to limit what I do in the runtime version in order to 
 make it acceptable for compile time.  He's saying I should be able to 
 have two versions of the function, one for compile time and one for run 
 time.  I think that's useful too.  But for simpler cases like the above, 
 it would be nice I think if one could just version-out the bad parts.
 
 And in this case:
 
 int calculationIShouldOnlyDoAtCompileTime(int x)
 {
    // * whatever it is *
 }
 
 int K = calculationIShouldOnlyDoAtCompileTime(4);
 
 Whoops!  Silly programmer, looks like I forgot the 'const' on K.  Would 
 be nice if I could get the compiler to remind me when I'm silly like 
 that.  That could be arranged if there were a version(CompileTime).
For compile time evaluation only, make the function instead a template that calls the function.
 For instance, it's not uncommon to make a fixed length vector class 
 using templates.  Vec!(N) kind of thing.  But it would really be nice if 
 the majority of that code could remain even when N is not constant. That 
 means both on the user side and on the implementation side.  I don't 
 know how realistic that is, but I often find myself sitting on the fence 
 trying to decide -- do I make this a compile-time parameter, thereby 
 cutting off all opportunity for runtime creation of a particular 
 instance, or do I make it runtime, thereby cutting my and the compiler's 
 opportunity to make several obvious optimizations.
Templates are supposed to fill that ecological niche.
Feb 15 2007
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Walter Bright wrote:
 ... is now in DMD 1.006. For example:
Very nice! A few questions: 1) You say recursion is allowed -- does it do proper tail recursion?
Tail recursion is a performance optimization, it has no effect on semantics, so it's not an issue for compile time execution.
Yeh, it's not a big deal.  I was more just curious since you recently added it back to DMD.  And since it does affect the /effective/ semantics if you have a very deep recursion that overflows the stack.  Or is the stack for compile-time execution actually on the heap?

Either way, it might actually be /better/ to have the compile-time implementation have a limited stack so that infinite recursion errors result in stack overflows rather than just the compiler hanging.

--bb
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 Yeh, it's not a big deal.  I was more just curious since you recently 
 added it back to DMD.  And since it does affect the /effective/ 
 semantics if you have a very deep recursion that overflows the stack. Or 
 is the stack for compile-time execution is actually on the heap?
 
 Either way, it might actually be /better/ to have the compile-time 
 implementation have a limited stack so that infinite recursion errors 
 result in stack overflows rather than just the compiler hanging.
Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. I'll have to figure out some way to eventually deal with this.
Feb 15 2007
parent reply janderson <askme me.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 
 Right now, the compiler will fail if the compile time execution results 
 in infinite recursion or an infinite loop. I'll have to figure out some 
 way to eventually deal with this.
Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems. -Joel
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
janderson wrote:
 Walter Bright wrote:
 Right now, the compiler will fail if the compile time execution 
 results in infinite recursion or an infinite loop. I'll have to figure 
 out some way to eventually deal with this.
Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>.

The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff).

The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine.

I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.
Feb 15 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 janderson wrote:
 Walter Bright wrote:
 Right now, the compiler will fail if the compile time execution 
 results in infinite recursion or an infinite loop. I'll have to 
 figure out some way to eventually deal with this.
Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.
That could be achieved with a watchdog process without changing the compiler, and it's more flexible.

I think you just let the compiler go and crunch at it. Since you essentially have partial evaluation anyway, the execution process can be seen as extended to compile time. If you have a non-terminating program, that non-termination can naturally manifest itself during compilation=partial evaluation.

Andrei
Feb 15 2007
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 That could be achieved with a watchdog process without changing the 
 compiler, and it's more flexible.
 
 I think you just let the compiler go and crunch at it. Since you 
 esentially have partial evaluation anyway, the execution process can be 
 seen as extended to compile time. If you have a non-terminating program, 
 that non-termination can be naturally manifest itself during 
 compilation=partial evaluation.
It would be nice though, if the compiler could trap sigint or something and spit out an error message about which part of the code it was trying to compile when you killed it. Otherwise debugging accidental infinite loops in compile time code becomes...interesting. --bb
Feb 16 2007
parent reply Dave <Dave_member pathlink.com> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 That could be achieved with a watchdog process without changing the 
 compiler, and it's more flexible.

 I think you just let the compiler go and crunch at it. Since you 
 esentially have partial evaluation anyway, the execution process can 
 be seen as extended to compile time. If you have a non-terminating 
 program, that non-termination can be naturally manifest itself during 
 compilation=partial evaluation.
It would be nice though, if the compiler could trap sigint or something and spit out an error message about which part of the code it was trying to compile when you killed it. Otherwise debugging accidental infinite loops in compile time code becomes...interesting. --bb
How about listing any CTFE with -v? That should be more reliable and useful in other ways too.
Feb 16 2007
parent BCS <BCS pathlink.com> writes:
Dave wrote:
 Bill Baxter wrote:
 
 Andrei Alexandrescu (See Website For Email) wrote:

 Walter Bright wrote:
 That could be achieved with a watchdog process without changing the 
 compiler, and it's more flexible.

 I think you just let the compiler go and crunch at it. Since you 
 esentially have partial evaluation anyway, the execution process can 
 be seen as extended to compile time. If you have a non-terminating 
 program, that non-termination can be naturally manifest itself during 
 compilation=partial evaluation.
It would be nice though, if the compiler could trap sigint or something and spit out an error message about which part of the code it was trying to compile when you killed it. Otherwise debugging accidental infinite loops in compile time code becomes...interesting. --bb
How about listing any CTFE with -v? That should be more reliable and useful in other ways too.
Along the same line, how about a timeout flag for unattended builds? As it is, DMD can now fail to error on bad code by just running forever. With template code it would seg-v sooner or later.
Feb 16 2007
prev sibling parent reply Dave <Dave_member pathlink.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 janderson wrote:
 Walter Bright wrote:
 Right now, the compiler will fail if the compile time execution 
 results in infinite recursion or an infinite loop. I'll have to 
 figure out some way to eventually deal with this.
Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.
That could be achieved with a watchdog process without changing the compiler, and it's more flexible. I think you just let the compiler go and crunch at it. Since you esentially have partial evaluation anyway, the execution process can be seen as extended to compile time. If you have a non-terminating program, that non-termination can be naturally manifest itself during compilation=partial evaluation.
Completely agree, otherwise it contradicts your "right way to build a compiler" statement except with time as the metric instead of memory. Imagine the frustration of someone who has legitimate code and the compiler always craps out half-way through a looong makefile, or worse only sometimes craps out depending on machine load.

I think the best solution is to list out any compile-time execution with the '-v' switch. That way if someone runs into this, they can throw -v and find out where it's happening.
 
 Andrei
Feb 16 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Dave wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 janderson wrote:
 Walter Bright wrote:
 Right now, the compiler will fail if the compile time execution 
 results in infinite recursion or an infinite loop. I'll have to 
 figure out some way to eventually deal with this.
Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.
That could be achieved with a watchdog process without changing the compiler, and it's more flexible. I think you just let the compiler go and crunch at it. Since you esentially have partial evaluation anyway, the execution process can be seen as extended to compile time. If you have a non-terminating program, that non-termination can be naturally manifest itself during compilation=partial evaluation.
Completely agree, otherwise it contradicts your "right way to build a compiler" statement except with time as the metric instead of memory. Imagine the frustration of someone who has legitimate code and the compiler always craps out half-way through a looong makefile, or worse only sometimes craps out depending on machine load.
Yes, memory watching would be great. It is easy to write a script that watches dmd's memory and time consumed, and kills it past some configurable threshold.
 I think the best solution is to list out any compile-time execution with 
 the '-v' switch. That way if someone runs into this, they can throw -v 
 and find out where it's happening.
Sounds great. Andrei
Feb 16 2007
prev sibling next sibling parent Kevin Bealer <kevinbealer gmail.com> writes:
Walter Bright wrote:
 janderson wrote:
 Walter Bright wrote:
 Right now, the compiler will fail if the compile time execution 
 results in infinite recursion or an infinite loop. I'll have to 
 figure out some way to eventually deal with this.
Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.
I remember one of the first "facts of life" of computer science in college being that some problems can't be solved, and the example given was the 'halting problem'. If it runs until done or bust, that's okay with me.

But for the purpose of distributing software, it would be a lot easier if either it was unbounded, or the bound was something that depended mostly or completely on the source code. That way I can compile on my super-duper development machine and if it doesn't "time out", I know whether it will fail on some user's second-rate hardware or not.

If the number used doesn't mean anything concrete, that's okay. I guess the simplest thing would be to bump a counter in some inner loop and cut out when it gets to a jillion.

Kevin
Feb 15 2007
prev sibling next sibling parent janderson <askme me.com> writes:
Walter Bright wrote:
 janderson wrote:
 Walter Bright wrote:
 Right now, the compiler will fail if the compile time execution 
 results in infinite recursion or an infinite loop. I'll have to 
 figure out some way to eventually deal with this.
Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.
Please don't. All sorts of things can affect performance, which means there is a remote possibility that it will fail one time out of ten. This is particularly the case for build machines where they may be doing lots of things at once. If it suddenly fails because of virtual memory thrashing or something, the programmer would get sent an annoying message "build failed". If it works then it needs to work every time.

A counter or stack overflow of some sort would be much better, even if not specifiable by the programmer.

One way to use a timer though would be to display how long each bit took. That way the programmer would be able to figure out how to improve compile-time performance.

-Joel
Feb 16 2007
prev sibling next sibling parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Walter Bright wrote:
 Whether you tell it to fail at a smaller limit, or it fails by itself at 
 a smaller limit, doesn't make any difference as to whether it runs on a 
 less powerful system or not <g>.
I'd definitely prefer a way to make it fail early.

When I was first trying out v1.006 I modified the sqrt() example to an infinite loop, and made the fatal mistake of switching focus away from my terminal window. My system ground to a halt as DMD tried to allocate pretty much all free memory + swap. (That's over 2 GB!) It got so bad it took a few minutes to switch to a virtual console and 'killall dmd'. And even after that it was slow for a while until the running programs got swapped back in...

Surely it would have been possible to detect something having gone horribly awry before it got this bad?
 If your system is too primitive to run the compiler, you use a cross 
 compiler running on a more powerful machine.
My system is an AMD Sempron 3200+ with 1GB of RAM...
 I have thought of just putting a timer in the interpreter - if it runs 
 for more than a minute, assume things have gone terribly awry and quit 
 with a message.
This might be a good idea, but perhaps make the time depend on a command-line switch, and maybe add something for memory usage as well?
Feb 16 2007
prev sibling parent Joe <Joe_member pathlink.com> writes:
Walter Bright wrote:
 janderson wrote:
 Walter Bright wrote:
 Right now, the compiler will fail if the compile time execution 
 results in infinite recursion or an infinite loop. I'll have to 
 figure out some way to eventually deal with this.
Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.
This isn't exactly pertaining to compiler-specific issues, but paralleling the above: why have "The total size of a static array cannot exceed 16Mb. A dynamic array should be used instead for such large arrays." in the Array spec? Isn't that kind of similar? 16Mb of static array data is a lot no doubt, but why put an arbitrary limit on it. Unless it's a DMD-specific thing.

Joe
Feb 17 2007
prev sibling next sibling parent reply torhu <fake address.dude> writes:
Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
Wonderful feature, but look at this:

int square(int x)
{
    return x * x;
}

const int foo = square(5);

al_id4.d(6): Error: cannot evaluate (square)(5) at compile time

I was hoping this would make some C macros easier to replace, without needing template versions for initializing consts.
Feb 15 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
torhu wrote:
 Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
Wonderful feature, but look at this: int square(int x) { return x * x; } const int foo = square(5); al_id4.d(6): Error: cannot evaluate (square)(5) at compile time I was hoping this would make some C macros easier to replace, without needing template versions for initializing consts.
Aggh, that's a compiler bug.

int foo = square(5);

does work. I knew I'd never get it right the first try :-(
Feb 15 2007
parent reply Dave <Dave_member pathlink.com> writes:
Walter Bright wrote:
 torhu wrote:
 Walter Bright wrote:
 This should obsolete using templates to compute values at compile time.
Wonderful feature, but look at this: int square(int x) { return x * x; } const int foo = square(5); al_id4.d(6): Error: cannot evaluate (square)(5) at compile time I was hoping this would make some C macros easier to replace, without needing template versions for initializing consts.
Aggh, that's a compiler bug. int foo = square(5); does work. I knew I'd never get it right the first try :-(
Wait -- int foo = square(5); is supposed to CTFE? If so that'd be awesome.
Feb 16 2007
parent reply Michiel <nomail please.com> writes:
Dave wrote:

 Aggh, that's a compiler bug.

 int foo = square(5);

 does work. I knew I'd never get it right the first try :-(
Wait -- int foo = square(5); is supposed to CTFE? If so that'd be awesome.
I don't think so. If I'm not mistaken, D would do that at runtime at the moment. -- Michiel
Feb 16 2007
next sibling parent Walter Bright <newshound digitalmars.com> writes:
Michiel wrote:
 Dave wrote:
 int foo = square(5);

 does work. I knew I'd never get it right the first try :-(
Wait -- int foo = square(5); is supposed to CTFE? If so that'd be awesome.
I don't think so. If I'm not mistaken, D would do that at runtime at the moment.
Not for a global declaration, which must happen at compile time.
Feb 16 2007
prev sibling parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Michiel wrote:
 Dave wrote:
 
 Aggh, that's a compiler bug.

 int foo = square(5);

 does work. I knew I'd never get it right the first try :-(
Wait -- int foo = square(5); is supposed to CTFE? If so that'd be awesome.
I don't think so. If I'm not mistaken, D would do that at runtime at the moment.
It does that at runtime for variables in function-type scope. For global variables (as in the code posted) and IIRC aggregate members it determines initial values at compile time.
Feb 16 2007
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
This doesn't seem to work either -- should it?

char[] UpToSpace(char[] x)
{
    int i=0;
    while (i<x.length && x[i] != ' ') { i++; }
    return x[0..i];
}

void main()
{
    const y = UpToSpace("first space was after first");
    writefln(y);
}

It prints out the whole string rather than just "first". If you change it to return 'i' it does correctly evaluate to 5. If you change it to just 'return x[0..5];' it also works correctly.

--bb
Feb 15 2007
parent Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 This doesn't seem to work either -- should it?
Looks like it should. I'll check it out.
Feb 15 2007
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
Any chance that concatenation onto a variable will be supported? I'd like to write this:

char[] NReps(char[] x, int n)
{
    char[] ret = "";
    for(int i=0; i<n; i++) { ret ~= x; }
    return ret;
}

But that doesn't work. The recursive version does, though:

char[] NReps2(char[] x, int n, char[] acc="")
{
    if (n<=0) return acc;
    return NReps2(x, n-1, acc~x);
}

--bb
Feb 15 2007
next sibling parent Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 Any chance that concatenation onto a variable will be supported?
It should. I'll figure out what's going wrog, wring, worng, er, wrong.
Feb 15 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Bill Baxter wrote:
 I'd like to write this:
 
 char[] NReps(char[] x, int n)
 {
     char[] ret = "";
     for(int i=0; i<n; i++) { ret ~= x; }
     return ret;
 }
 
 But that doesn't work.
It does when I try it:
---------------------
import std.stdio;

char[] NReps(char[] x, int n)
{
    char[] ret = "";
    for(int i=0; i<n; i++) { ret ~= x; }
    return ret;
}

void main()
{
    static x = NReps("3", 6);
    writefln(x);
}
-----------------------
prints:
333333
Feb 16 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 I'd like to write this:

 char[] NReps(char[] x, int n)
 {
     char[] ret = "";
     for(int i=0; i<n; i++) { ret ~= x; }
     return ret;
 }

 But that doesn't work.
 It does when I try it:
 ---------------------
 import std.stdio;

 char[] NReps(char[] x, int n)
 {
     char[] ret = "";
     for(int i=0; i<n; i++) { ret ~= x; }
     return ret;
 }

 void main()
 {
     static x = NReps("3", 6);
     writefln(x);
 }
 -----------------------
 prints:
 333333
Doh! You're right. I must have had some other bad code commented in accidentally at the time. --bb
Feb 16 2007
prev sibling next sibling parent reply "Frank Benoit (keinfarbton)" <benoit tionex.removethispart.de> writes:
many others asked about explicitly run at compile time, and there is
this template solution.

doesn't this again end up in a new type?

I personally do not like the eval!( func() ) syntax.
I think this is a new important feature, worth a new syntax.

How about something like

func!!()


This new call syntax forces a function to run in the compiler.
Feb 15 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Frank Benoit (keinfarbton) wrote:
 many others asked about explicitly run at compile time, and there is
 this template solution.
 
 doesn't this again end up in a new type?
 
 I personally do not like the eval!( func() ) syntax.
 I think this is a new important feature, worth a new syntax.
 
 How about something like
 
 func!!()

 
 This new call syntax forces a function to run in the compiler.
I like eval!(func()) for that.

I keep thinking, though, that some new syntax would sure be nice for
mixin(func!(arg)). It seems that things such as

mixin(write!("foo %{bar} is %{baz}"));

could potentially get very tiresome. If I have to write mixin(...)
everywhere I'm probably just going to end up going with

writefln("foo %s is %s", bar, baz);

I was actually thinking of your func!!(). But eh. It's not so clear cut
with all these possible forms of mixin():

mixin("char x;");
mixin(func());
mixin(tfunc!());
mixin("char " ~ "x");
mixin("char " ~ func());
mixin("char " ~ tfunc!());
... etc.

--bb
Feb 15 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Frank Benoit (keinfarbton) wrote:
 many others asked about explicitly run at compile time, and there is
 this template solution.

 doesn't this again end up in a new type?

 I personally do not like the eval!( func() ) syntax.
 I think this is a new important feature, worth a new syntax.

 How about something like

 func!!()


 This new call syntax forces a function to run in the compiler.
 I like eval!(func()) for that.

 I keep thinking, though, that some new syntax would sure be nice for
 mixin(func!(arg)). It seems that things such as

 mixin(write!("foo %{bar} is %{baz}"));

 could potentially get very tiresome.
Definitely. That's exactly why dispatch of the same code through
compile-time and run-time channels, depending on the argument, must be
easy.

Andrei
Feb 15 2007
prev sibling next sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Thu, 15 Feb 2007 10:23:43 -0800, Walter Bright wrote:

 ... is now in DMD 1.006. For example:
I guess it's time I came clean and admitted that in spite of this being a
huge technological advancement in the language, I can't see why I'd ever
be needing it.

I mean, when it comes down to it, it's just a fancy way of getting the
compiler to calculate/generate literals that can be done by myself
anyway, no? These literals are values that can be determined prior to
writing one's code, right?

This is not a troll posting, so can anyone enlighten me on how this
ability will reduce the cost of maintaining code? I am very open to being
educated.

I'm thinking of the funny side of this too, when it comes to putting some
DbC validity tests in your code...

  const float x = SomeCompileTimeFunc(1,2,3);
  assert (x == 34.56); // Make sure the function worked.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
"Justice for David Hicks!"
16/02/2007 3:24:40 PM
Feb 15 2007
next sibling parent "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Derek Parnell" <derek nomail.afraid.org> wrote in message 
news:15k9gflddwrnp.qzqynewwbuou$.dlg 40tude.net...
 I'm thinking of the funny side this too, when it comes to putting some DbC
 validity tests in your code...

   const float x = SomeCompileTimeFunc(1,2,3);
   assert (x == 34.56); // Make sure the function worked.
You mean static assert(x == 34.56); ;)
Feb 15 2007
prev sibling next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Derek Parnell wrote:
 On Thu, 15 Feb 2007 10:23:43 -0800, Walter Bright wrote:
 
 ... is now in DMD 1.006. For example:
 I guess it's time I came clean and admitted that in spite of this being
 a huge technological advancement in the language, I can't see why I'd
 ever be needing it.

 I mean, when it comes down to it, it's just a fancy way of getting the
 compiler to calculate/generate literals that can be done by myself
 anyway, no? These literals are values that can be determined prior to
 writing one's code, right?
This is by far the least interesting application of this stuff. I don't even count it when I think of the feature. "Oh, yeah, I could compile square root at compile time. How quaint."
 This is not a troll posting, so can anyone enlighten me on how this ability
 will reduce the cost of maintaining code? I am very open to being educated.
Great. The main uses of the feature will be in creating libraries that
work with and _on_ your code (in the most literal sense). I've given the
regexp example a few posts back: using the straight regular expression
syntax, you direct a library into generating optimal code for each and
every one of your regular expressions, without ever having to do anything
about it.

There's also been much discussion about the applications of code
generation, and you can be sure they will be simplified by one order of
magnitude by dual functions.

Andrei
Feb 15 2007
next sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Thu, 15 Feb 2007 20:45:04 -0800, Andrei Alexandrescu (See Website For
Email) wrote:


 There's been also much discussion about the applications of code 
 generation, and you can be sure they will be simplified by one order of 
 magnitude by dual functions.
So this would mean that I could code ...

    mixin(
          Conv("moveto 34,56 "
               "drawto +100,-50 "
               "drawto +0,+100 "
               "pencolor red "
               "drawto -100,-50 "
              )
         );

And expect that the Conv function will, at compile time, create the
equivalent D code to implement the 2D drawn item for the target platform,
and have the mixin insert it into the program being compiled.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
"Justice for David Hicks!"
16/02/2007 4:06:53 PM
Feb 15 2007
next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Derek Parnell wrote:
 On Thu, 15 Feb 2007 20:45:04 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:
 
 
 There's been also much discussion about the applications of code 
 generation, and you can be sure they will be simplified by one order of 
 magnitude by dual functions.
 So this would mean that I could code ...

     mixin(
           Conv("moveto 34,56 "
                "drawto +100,-50 "
                "drawto +0,+100 "
                "pencolor red "
                "drawto -100,-50 "
               )
          );

 And expect that the Conv function will, at compile time, create the
 equivalent D code to implement the 2D drawn item for the target
 platform, and have the mixin insert it into the program being compiled.
I'm thinking of the much more boring and much crappier job of generating
proxy and stub code for RPC and IPC.

Andrei
Feb 15 2007
prev sibling parent Walter Bright <newshound digitalmars.com> writes:
Derek Parnell wrote:
 So this would mean that I could code ...
 
     mixin(
           Conv("moveto 34,56 "
                "drawto +100,-50 "
                "drawto +0,+100 "
                "pencolor red "
                "drawto -100,-50 "
              )
           );
 
 And expect that the Conv function will, at compile time, create the
 equivalent D code to implement the 2D drawn item for the target platform,
 and have the mixin insert it into the program being compiled. 
Exactamundo!
Feb 15 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 This is by far the least interesting application of this stuff. I don't 
 even count it when I think of the feature. "Oh, yeah, I could compile 
 square root at compile time. How quaint."
I agree. I need a better example. Any ideas?
Feb 15 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 This is by far the least interesting application of this stuff. I 
 don't even count it when I think of the feature. "Oh, yeah, I could 
 compile square root at compile time. How quaint."
I agree. I need a better example. Any ideas?
Well we talked about:

int a = foo();
char[] b = bar();
print("a is $a and b is $b, dammit\n");

The interesting part is that this will also require you to screw in a
couple of extra nuts & bolts (that were needed anyway).

Smart enums (that know printing & parsing) are another example. But the
print() example is simple, of immediate clear benefit, and suggestive of
more powerful stuff.

Andrei
Feb 15 2007
next sibling parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 This is by far the least interesting application of this stuff. I 
 don't even count it when I think of the feature. "Oh, yeah, I could 
 compile square root at compile time. How quaint."
I agree. I need a better example. Any ideas?
 Well we talked about:

 int a = foo();
 char[] b = bar();
 print("a is $a and b is $b, dammit\n");

 The interesting part is that this will also require you to screw in a
 couple of extra nuts & bolts (that were needed anyway).
Would this mean a type of function whose return value is automatically mixed in? This is getting awfully close to LISP macros... :)
 Smart enums (that know printing & parsing) are another example. But the 
 print() example is simple, of immediate clear benefit, and suggestive of 
 more powerful stuff.
Feb 16 2007
prev sibling parent reply Lionello Lunesu <lio lunesu.remove.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 This is by far the least interesting application of this stuff. I 
 don't even count it when I think of the feature. "Oh, yeah, I could 
 compile square root at compile time. How quaint."
I agree. I need a better example. Any ideas?
Well we talked about: int a = foo(); char[] b = bar(); print("a is $a and b is $b, dammit\n"); The interesting part is that this will also require you to screw in a couple of extra nuts & bolts (that were needed anyway).
But add a "!" to the print, and it's already possible? What extra is
needed, and is that just to get rid of the "!"?

L.
Feb 16 2007
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Lionello Lunesu wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 I agree. I need a better example. Any ideas?
 Well we talked about:

 int a = foo();
 char[] b = bar();
 print("a is $a and b is $b, dammit\n");

 The interesting part is that this will also require you to screw in a
 couple of extra nuts & bolts (that were needed anyway).
But add a "!" to the print, and it's already possible? What extra is needed, and is that just to get rid of the "!"?
You currently also need a mixin() around the print!().
Feb 16 2007
parent reply Lionello Lunesu <lio lunesu.remove.com> writes:
Frits van Bommel wrote:
 Lionello Lunesu wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 I agree. I need a better example. Any ideas?
 Well we talked about:

 int a = foo();
 char[] b = bar();
 print("a is $a and b is $b, dammit\n");

 The interesting part is that this will also require you to screw in a
 couple of extra nuts & bolts (that were needed anyway).
But add a "!" to the print, and it's already possible? What extra is needed, and is that just to get rid of the "!"?
You currently also need a mixin() around the print!().
Aha.. Or "before", right?

mixin print!("......");

L.
Feb 16 2007
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Lionello Lunesu wrote:
 Frits van Bommel wrote:
 Lionello Lunesu wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 I agree. I need a better example. Any ideas?
 Well we talked about:

 int a = foo();
 char[] b = bar();
 print("a is $a and b is $b, dammit\n");

 The interesting part is that this will also require you to screw in a
 couple of extra nuts & bolts (that were needed anyway).
But add a "!" to the print, and it's already possible? What extra is needed, and is that just to get rid of the "!"?
You currently also need a mixin() around the print!().
 Aha.. Or "before", right?

 mixin print!("......");
I think the example requires a string mixin statement, which according to
the spec means parentheses are required[1]. Note that it needs to access
variables whose names are specified in the string argument.

[1]: http://www.digitalmars.com/d/statement.html#MixinStatement
Feb 16 2007
prev sibling parent Kevin Bealer <kevinbealer gmail.com> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 This is by far the least interesting application of this stuff. I 
 don't even count it when I think of the feature. "Oh, yeah, I could 
 compile square root at compile time. How quaint."
I agree. I need a better example. Any ideas?
(Sorry that this got so long -- it kind of turned into a duffel bag of
things I've been thinking about.)

I think the most common case is like the plot for a TV show. Code that is
semi-interesting, semi-predictable, and semi-repetitive. If the code is
too interesting, you would need to write it all. If it was too
repetitive, you could just use a standard D template (boilerplate). Tasks
that are in between -- non-boilerplate, but fairly 'formulaic', where the
details are not interesting -- are the candidates for this.

To me there are a couple of special reasons that stick out.

1. You're building complex types and need to base them on standard
   definitions that might change. I see this with ASN.1 definitions at
   work. We have a program that builds C++ code from ASN.1 or XML
   schemas.

   A. Some of our programs need to stream 100s of MB of data so this
      code needs to be as fast as possible.
   B. If a field is added to the definition it has to appear in all the
      code objects.
   C. There is additional logic -- i.e. if a 'mandatory' field is not
      assigned, the serializer has to throw an exception.

2. You're building the inner loop in some performance critical
   application and the rules (expressions and conditional logic) used
   there need or benefit from ultra-optimization. This is what (in my
   view) compile time regex is for; I would normally use a runtime regex
   for ordinary things like parsing configuration files.

3. You need a lot of code duplication (i.e. to provide stub functions or
   something) and don't want to repeat yourself to get it.

---

This is how I imagine it: Some of these ideas have been kicking around in
my head, but I'm not sure how practical they are. When I use the word
'templates' here I mean any kind of code generation.

Starting scenario: Let's say I'm writing a program to solve some
mathematical task.

1. I create a top level class and add some members to it.
2. I add some sub-classes, a dozen or so, mostly just POD stuff.
3. Some of these have associative arrays, user-defined tree stuff,
   regular arrays, hand coded linked lists, etc.
4. I put in a bunch of graph theory code and number crunching stuff.

Uses of metaprogramming:

1. Now let's say that this application is taking a while to run, so I
   decide to run it in steps and checkpoint the results to disk.

   - I write a simple template that can take an arbitrary class and
     write the pointer value and the class's data to disk. (Actual data
     is just strings and integers, so one template should cover all of
     these classes.)
   - For each internal container it can run over the members and do the
     same to every object with a distinct memory address. (One more
     template is needed for each container concept, like AA, list or
     array -- say 4 or 5 more templates.) It only writes each object
     once by tracking pointers in a set.
   - Another template that can read this stuff back in, and fix the
     pointers so that they link up correctly.

   (** All of this is much easier than normal, because I can generate
   types using typelists as a starting point. I think in C++ this is a
   bit tricky because it convolutes the structure definition --
   recursively nested structs and all that; but with code generation,
   the "struct builder" can take a list of strings and pump out a struct
   whose definition looks exactly like a hand coded struct would look,
   but maybe with more utilitarian functionality since it's cheaper to
   add the automated stuff. **)

   Three or four templates later, I have a system for checkpointing any
   data structure (with a few exceptions like sockets etc.), to a string
   or stream.

2. I want to display this stuff to the user.

   I bang together another couple of templates that can show these kinds
   of code objects in a simple viewer. It works just like the last one,
   finding variable names and values and doing the writefln() types of
   tricks to give the user the details. Some kind of browser lets me
   examine the process starting at the top. Maybe it looks a little like
   a flow chart and a little like a debugger's print of a structure.

   - I can define hooks in the important kinds of objects so they can
     override their own displays, but simple data can work without much
     help.

3. I want to build a distributed compute farm for this numerical task.

   - I just need to change the serialization to stream the data objects
     over the web or sockets, or queue the objects in SQL tables. Some
     load balancing, etc. Another application that has the same class
     definitions can pull in the XML or ASN.1 or home-made serialization
     format.

   The trick here is that we need to be able to build templates that can
   inspect the objects and trees of objects in complex ways -- does this
   class contain a field named "password"; is this other field a
   computed value that can be thrown away; does this other class
   override a method named 'optimizeForTransport'. Adding arbitrary
   attributes and arbitrary bits of code and annotation to the classes
   is not too hard to do, because my original code generation functions
   used typelists and had hooks for specifying special behavior.

4. I decide to allow my ten closest friends to help with the application
   by rewriting important subroutines.

   - Each person adds code for the application to an SQL database. A
     simple script can now pull code from the database and dump it to
     text files. This code can be imported into classes and run.
   - I can generate ten different versions of a critical loop and select
     which one to run at random. The timing output results are stored in
     a text file. Later compiles of the code do "arc-profiling" of
     entire algorithms or modules.

Kevin
Feb 16 2007
prev sibling next sibling parent Walter Bright <newshound digitalmars.com> writes:
Derek Parnell wrote:
 I guess its time I came clean and admitted that in spite of this being a
 huge technological advancement in the language, I can't see why I'd ever be
 needing it.
 
 I mean, when it comes down to it, it's just a fancy way of getting the
 compiler to calculate/generate literals that can be done by myself anyway,
 no? These literals are values that can be determined prior to writing one's
 code, right?
 
 This is not a troll posting, so can anyone enlighten me on how this ability
 will reduce the cost of maintaining code? I am very open to being educated.
It's a very good question, and I tried to answer it in the follow-on "Motivation for..." thread!
Feb 15 2007
prev sibling parent reply Russell Lewis <webmaster villagersonline.com> writes:
Derek Parnell wrote:
 On Thu, 15 Feb 2007 10:23:43 -0800, Walter Bright wrote:
 
 ... is now in DMD 1.006.
 I guess it's time I came clean and admitted that in spite of this being
 a huge technological advancement in the language, I can't see why I'd
 ever be needing it.

 I mean, when it comes down to it, it's just a fancy way of getting the
 compiler to calculate/generate literals that can be done by myself
 anyway, no? These literals are values that can be determined prior to
 writing one's code, right?

 This is not a troll posting, so can anyone enlighten me on how this
 ability will reduce the cost of maintaining code? I am very open to
 being educated.
It (sometimes) allows you to express things using the formulae that you
used to derive them, which makes code more readable. It also allows you
to express mathematically things that might depend on
implementation-dependent parameters, or versions, or whatever.

Say, like this:

    version(4K_PAGES)
        const int page_size = 4*1024;
    else
        const int page_size = 16*1024*1024;
    const int page_shift = eval!(log_base_2(page_size));

Sure, you could integrate page_shift into the version statement...but I
think that the above is better.

P.S. I would prefer the template
    compile_time!(value)
over
    eval!(value)
for readability reasons.
Feb 16 2007
parent Walter Bright <newshound digitalmars.com> writes:
Russell Lewis wrote:
 Say, like this:
     version(4K_PAGES)
         const int page_size = 4*1024;
     else
         const int page_size = 16*1024*1024;
     const int page_shift = eval!(log_base_2(page_size));
Don't need eval!() for const declarations.
 Sure, you could integrate page_shift into the version statement...but I 
 think that the above is better.
 
 P.S. I would prefer the template
     compile_time!(value)
 over
     eval!(value)
 for readability reasons.
eval!() isn't even in the standard library! You can name it whatever you wish. In any case, I used the name "eval" simply because it was analogous to the eval function found in many scripting languages.
Feb 16 2007
prev sibling next sibling parent janderson <askme me.com> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
Man this kicks ass!!! It's the best implementation we could hope for.

-Joel
Feb 15 2007
prev sibling next sibling parent Lionello Lunesu <lio lunesu.remove.com> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
Can I use the results of compile-time evaluatable functions in
"static if", "pragma(msg)"?

This makes D a scripting language :)

// This does not (yet) work though:
bool func() { return true; }
static if (func()) { pragma(msg, "true" ); }
else { pragma(msg, "false" ); }

L.
Feb 16 2007
prev sibling next sibling parent Daniel919 <Daniel919 web.de> writes:
Hi, GREAT new feature !

1) There is a bug with parentheses:
---------------------------------------------
import std.stdio;

template eval(A...) { alias A eval; }

char[] trimfirst(char[] s)
{
	int x = 0;
	foreach (char each; s) {
		if (each != ' ')
			return s[x .. $];
		x++;
	}
	return s;
}

void main()
{
	writefln(eval!(trimfirst("  test")));
	writefln(trimfirst("  test"));
}
---------------------------------------------
  test
test
So you see, the compile-time version doesn't work.

Now change lines 9-10 to:

		if (each != ' ') {
			return s[x .. $];
		}

And voila! Output is correct:
test
test
2) Would it be possible to make this work?

writefln(eval!(std.string.stripl(" test")));

And all the other string functions from phobos, too?

Daniel
Feb 16 2007
prev sibling next sibling parent reply Nicolai Waniek <no.spam thank.you> writes:
Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
I didn't read the whole thread, but I just found some replies about how
to turn off/on the automatic compile time execution.

I would suggest that the default behaviour be "execute as much as
possible during compile time", but that there be two keywords (possibly
"runtime" and "compiletime") that indicate how the function is handled.
For example:

runtime float sqrt() { ... }

would mean that the function is _not_ executed during compile time, even
if it could be. On the other hand,

compiletime float sqrt() { ... }

will be executed during compile time (as the standard). But as a
difference to the regular behaviour, a function declared as "compiletime"
that cannot be executed during compile time may trigger a hint when
compiling, so that the developer may change it until there's no hint
left.

greetings
Nicolai
Feb 16 2007
parent Michiel <nomail please.com> writes:
Nicolai Waniek wrote:

 I didn't read the whole thread, but I just found some replies about how
 to turn off/on the automatic compile time execution.
 
 I would suggest, that the default behaviour will be "execute as much as
 possible during compile time", but that there are two keywords (possibly
 "runtime" and "compiletime") that will indicate what the method is, for
I am in total agreement. -- Michiel
Feb 16 2007
prev sibling next sibling parent reply Brian Byrne <bdbyrne wisc.edu> writes:
I'm having a few problems getting some simple examples to work, for 
instance:

-----
char[] foo() { return( "bar" ); }
void main() { const char[] bar = foo(); }

Assertion failure: 'parameters && parameters->dim == dim' on line 96 in 
file 'interpret.c'

abnormal program termination
-----

and

-----
template eval( A... ) { alias A eval; }
int square( int n ) { return( n * n ); }
void main() { int bar = eval!( square( 5 ) ); }

Error: cannot implicitly convert expression (tuple25) of type (int) to int
-----

What am I doing wrong?

Thanks,
Brian Byrne






Walter Bright wrote:
 ... is now in DMD 1.006. For example:
 
 -------------------------------------------
 import std.stdio;

 real sqrt(real x)
 {
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
 }

 void main()
 {
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
 }
 ------------------------------------------
This should obsolete using templates to compute values at compile time.
Feb 16 2007
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Brian Byrne wrote:
 
 I'm having a few problems getting some simple examples to work, for 
 instance:
 
 -----
 char[] foo() { return( "bar" ); }
 void main() { const char[] bar = foo(); }
 
 Assertion failure: 'parameters && parameters->dim == dim' on line 96 in 
 file 'interpret.c'
 
 abnormal program termination
 -----
This bug has already been reported: http://d.puremagic.com/issues/show_bug.cgi?id=968
Feb 16 2007
prev sibling next sibling parent reply J Duncan <jtd514 nospam.ameritech.net> writes:
Well I just tried compiling a rather large codebase on 1.006, it takes 
forever and errors out with a 2+ gig executable sitting there. So I must 
have something going crazy under the hood. :) I will try to track it down.
Feb 20 2007
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
J Duncan wrote:
 Well I just tried compiling a rather large codebase on 1.006, it takes 
 forever and errors out with a 2+ gig executable sitting there. So I must 
 have something going crazy under the hood. :) I will try to track it down.
I had a huge memory usage like that when I tried to compile a file with an infinite loop in a function executed at compile time...
Feb 20 2007
prev sibling parent reply Serg Kovrov <kovrov no.spam> writes:
Sorry if this was already answered, but I can't find it..
Is compile-time execution of library functions allowed?

-- 
serg.
Feb 24 2007
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Serg Kovrov wrote:
 Sorry if this was already answered, but I can't find it..
 Is compile-time execution of library functions allowed?
That depends on what you mean by "library functions". Obviously you mean
a function in a library, but that's not really what matters here. The
important question is whether the source is available to the compiler.
It doesn't care where the compiled version ends up, because it doesn't
use it. It just needs to see the source.

So basically, for a library with declaration-only (no or incomplete
implementation) "headers" the answer is no; for libraries that ship with
full source (that's used to satisfy imports, so no .di modules) the
answer is yes. Just like with normal source files.
Feb 24 2007
parent reply Serg Kovrov <kovrov no.spam> writes:
Frits van Bommel wrote:
 That depends on what you mean by "library functions". Obviously you mean 
 a function in a library, but that's not really what matters here. The 
 important question is whether the source is available to the compiler. 
 It doesn't care where the compiled version ends up, because it doesn't 
 use it. It just needs to see the source.
 
 So basically, for a library with declaration-only (no or incomplete 
 implementation) "headers" the answer is no, for libraries that ship with 
 full source (that's used to satisfy imports, so no .di modules) the 
 answer is yes. Just like with normal source files.
My question was general: C runtime functions, Phobos functions, any third-party functions that come as `lib` files (either C or D).

-- 
serg.
Feb 24 2007
parent reply Tyler Knott <tywebmail mailcity.com> writes:
Serg Kovrov wrote:
 
 My question was general. C runtime functions, Phobos functions, any 
 third party functions that come as `lib` files (either C or D).
 
You can only compile-time execute D functions that have their full source (and the full source of any functions they call) available to the compiler at compile-time.
Feb 24 2007
parent reply Serg Kovrov <kovrov no.spam> writes:
Tyler Knott wrote:
 You can only compile-time execute D functions that have their full 
 source (and the full source of any functions they call) available to the 
 compiler at compile-time.
Yeah, I figured as much. So to access, for example, math functions, I should provide D source code. Sadly, there is not much I can do without at least the C runtime...

-- 
serg.
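The workaround is to port the routine to plain D so the compiler can see its body, rather than calling into the C runtime. A small sketch (assuming DMD 1.006's compile-time evaluation, as introduced at the top of the thread):

```d
// A pure-D integer power: no C runtime, full source visible to the
// compiler, so it can run at compile time.
int ipow(int base, int exp)
{
    int result = 1;
    for (int i = 0; i < exp; i++)
        result *= base;
    return result;
}

// Evaluated entirely at compile time; fails the build if wrong.
static assert(ipow(2, 10) == 1024);
```

Walter's Newton-iteration sqrt at the start of the thread is the same idea applied to a floating-point routine.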
Feb 24 2007
parent reply janderson <askme me.com> writes:
Serg Kovrov wrote:
 Tyler Knott wrote:
 You can only compile-time execute D functions that have their full 
 source (and the full source of any functions they call) available to 
 the compiler at compile-time.
Yeah, I figured as much. So to access, for example, math functions, I should provide D source code. Sadly, there is not much I can do without at least the C runtime...
It does limit what we can do; however, it is more secure. In time the functions you need may appear as compile-timeable functions in D libs. Or if you have the C source, you may have a chance of being able to port them over. If they use assembly, then it will make things more difficult.

-Joel
Feb 24 2007
parent Walter Bright <newshound digitalmars.com> writes:
janderson wrote:
 It does limit what we can do however it is more secure.  In time the 
 functions you need may appear as compile-timeable functions in D libs. 
 Or if you have to C source, you may have a chance of being able to port 
 them over.  If they use assembly, then it will make things more difficult.
I expect that, over time, the capability of the compile time function evaluator will improve. Some things are likely never to be evaluated, however:

1) inline assembly - this would require building a CPU emulator. That's an incredible amount of work for essentially 0 gain.

2) C code - c'mon, write it in D!

3) Functions only available in object form - see (1).
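A hypothetical illustration of point (1), contrasting an inline-assembly routine with the same logic in plain D (the function names are made up; the asm version is x86-only):

```d
// Can never run at compile time: the compiler would have to emulate
// the CPU to know what these instructions compute.
int bsrAsm(uint v)
{
    asm
    {
        mov EAX, v;
        bsr EAX, EAX;   // bit scan reverse: index of highest set bit
    }
    // DMD asm functions return the value left in EAX
}

// Same computation in plain D: the evaluator can interpret this.
int bsrD(uint v)
{
    int n = -1;
    while (v)
    {
        v >>= 1;
        n++;
    }
    return n;
}

static assert(bsrD(1024) == 10);  // folded at compile time
```

Only the D version is eligible for compile-time evaluation; at run time both compute the same result.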
Feb 24 2007