digitalmars.D - Compile time function execution...
- Walter Bright (2/23) Feb 15 2007 This should obsolete using templates to compute values at compile time.
- Ary Manzana (11/36) Feb 15 2007 This is really great, Walter! Congratulations!
- BCS (3/8) Feb 15 2007 int[foo()] should do it, I think, or temp!(foo()).
- Ary Manzana (9/19) Feb 15 2007 It would be nice to have something like this in phobos:
- BCS (4/32) Feb 15 2007 I'd go with an is expression
- Nicolai Waniek (5/7) Feb 15 2007 Just a note (has nothing to do with the topic): IMHO, phobos and tango
- Walter Bright (6/11) Feb 15 2007 The way to tell is it gives you an error if you try to execute it at
- BCS (3/18) Feb 15 2007 I think the issue is where compile time and runtime are both allowed,
- Walter Bright (4/6) Feb 15 2007 Right, which is why compile time execution is only done in contexts
- BCS (10/19) Feb 15 2007 Hmm, so compile time evaluation is lazy, not greedy? That makes me want ...
- Walter Bright (3/6) Feb 15 2007 I think the eval() template does that nicely.
- Russell Lewis (7/12) Feb 15 2007 Walter pointed out, faster than I, that an eval!() template does this.
- Walter Bright (3/5) Feb 15 2007 It's not necessary. The context completely determines what to
- Russell Lewis (15/21) Feb 16 2007 Please don't misunderstand me. I understand how your design works.
- Walter Bright (3/21) Feb 16 2007 That'll become possible once templates are extended to be able to take
- kris (3/29) Feb 15 2007 Good one. Can you perhaps explain what the execution mechanism is? That
- Walter Bright (5/8) Feb 15 2007 I thought you'd appreciate it because it involves NO new syntax. It just...
- Walter Bright (3/4) Feb 15 2007 For contrast, compare with the C++ proposal:
- Bill Baxter (12/17) Feb 15 2007 It's kinda long and boring, but it looks like the key differences are
- Andrei Alexandrescu (See Website For Email) (3/20) Feb 15 2007 3) The C++ feature is applicable to user-defined types.
- Lutger (9/31) Feb 15 2007 These user-defined literals seem useful too. Would it be hard to impleme...
- Reiner Pope (16/30) Feb 15 2007 Would this be useful as an optional tag? Suppose you wanted to make a
- Sean Kelly (4/9) Feb 15 2007 Huh, and Bjarne co-authored that proposal. I wonder what the reason is
- Walter Bright (4/13) Feb 15 2007 I'm guessing, but I suppose they wished to be as conservative as
- Andrei Alexandrescu (See Website For Email) (15/40) Feb 15 2007 This is a development of epic proportions.
- Walter Bright (3/21) Feb 15 2007 I think that can be done with an improvement to the existing compile
- Andrei Alexandrescu (See Website For Email) (33/55) Feb 15 2007 The answer is correct, but does not address the issue I raised.
- Walter Bright (6/49) Feb 15 2007 Remember your proposal for a expression type which resolves to "does
- Andrei Alexandrescu (See Website For Email) (12/33) Feb 15 2007 It could indeed; I'm just hoping it can be properly cloaked away (e.g.
- Walter Bright (2/5) Feb 15 2007 I agree.
- Michiel (8/33) Feb 15 2007 That's a great feature! A couple of questions, if I may:
- Frits van Bommel (20/27) Feb 15 2007 According to the documentation in the zip, yes. But only functions which...
- Michiel (13/28) Feb 15 2007 Well, then there is room for improvement. (Good thing, too. Can you
- Bill Baxter (10/40) Feb 15 2007 You do need some way to turn it off though.
- Michiel (5/17) Feb 15 2007 Very good point. This could be solved by using some sort of conditional
- Walter Bright (5/7) Feb 15 2007 It's the other way around. Functions are always executed at runtime,
- Michiel (10/17) Feb 15 2007 I know that's how it's implemented now. But generally, it's a good thing
- Walter Bright (8/16) Feb 15 2007 Many functions could be executed at compile time, but should not be.
- Michiel (4/5) Feb 15 2007 When should they not be?
- Walter Bright (14/19) Feb 15 2007 1) Another poster mentioned a function that decompressed a built-in
- Michiel (14/32) Feb 15 2007 Yes, that one was mentioned by Bill Baxter. But I think this is the
- Walter Bright (3/14) Feb 15 2007 The programmer *already* has explicit control over whether it is folded
- Michiel (16/28) Feb 15 2007 Correction, I meant to say: "The programmer could explicitly tell the
- Walter Bright (3/6) Feb 15 2007 I don't think eval!(expression) is an undue burden. It's hard to imagine...
- Reiner Pope (11/18) Feb 15 2007 But in many situations, you could view compile-time function execution
- Walter Bright (8/25) Feb 15 2007 You can completely ignore and omit the eval!() and your code will run
- Reiner Pope (41/51) Feb 15 2007 It's better for the user only to have complete control when desired. The...
- BCS (3/30) Feb 15 2007 6) when the whole program is const foldable, e.i. no runtime inputs, lik...
- Michiel (6/8) Feb 15 2007 That's interesting. What kind of program has no runtime inputs?
- BCS (7/18) Feb 15 2007 I have a program that calculates the number of N length sequences whose ...
- Michiel (7/22) Feb 15 2007 Ah, now I understand the fuss. :) Walter is using an interpreter for the
- Jari-Matti Mäkelä (5/12) Feb 15 2007 If the compiler generates a new instance of the algorithm whenever it is
- Michiel (8/18) Feb 15 2007 That's actually not the idea. Basically, the idea is that the compiler
- Russell Lewis (8/17) Feb 15 2007 How about:
- Derek Parnell (8/16) Feb 15 2007 int CalculateTheAnswerToLifeUniverseEverything() { return 42; }
- Frits van Bommel (4/9) Feb 16 2007 No, the function name says *calculate*, not *return* :P.
- Aarti_pl (4/13) Feb 16 2007 I think that it was just a great example of brain-time constant folding ...
- janderson (7/20) Feb 16 2007 One thought on this. What about leveraging multiple threads to
- Michiel (8/16) Feb 17 2007 And I'm not 100% sure why you can't compile those functions and run
- Walter Bright (11/20) Feb 15 2007 Yes, as long as that function can also be compile time executed.
- Andrei Alexandrescu (See Website For Email) (3/26) Feb 15 2007 The feature of binding expressions to aliases will break eval.
- Frits van Bommel (52/77) Feb 15 2007 Hmm... infinite loops are suddenly _really_ slow to compile ;) :
- Nicolai Waniek (1/1) Feb 15 2007 Indeed very nice! :)
- Gregor Richards (14/14) Feb 15 2007 I see that I can't do this:
- Walter Bright (2/22) Feb 15 2007 That's a bug. I'll fix it.
- Max Samukha (18/40) Feb 16 2007 The following must be a related bug. The compiler complains that the
- Walter Bright (4/7) Feb 16 2007 Yes, in the compiler source the mixin argument failed to be marked as
- Witold Baryluk (10/30) Feb 16 2007 It seems to be bug.
- Hasan Aljudy (4/29) Feb 15 2007 Awesome!!!!
- Bill Baxter (13/15) Feb 15 2007 Very nice!
- Walter Bright (9/25) Feb 15 2007 Tail recursion is a performance optimization, it has no effect on
- Bill Baxter (41/71) Feb 15 2007 Right. But if I understand correctly, the same code can get called
- Walter Bright (10/51) Feb 15 2007 I see what you want, but it isn't going to work the way the compiler is
- Bill Baxter (9/21) Feb 15 2007 Yeh, it's not a big deal. I was more just curious since you recently
- Walter Bright (4/12) Feb 15 2007 Right now, the compiler will fail if the compile time execution results
- janderson (7/12) Feb 15 2007 Maybe you could allow the user to specify stack size and maximum
- Walter Bright (17/27) Feb 15 2007 Whether you tell it to fail at a smaller limit, or it fails by itself at...
- Andrei Alexandrescu (See Website For Email) (9/40) Feb 15 2007 That could be achieved with a watchdog process without changing the
- Bill Baxter (7/16) Feb 16 2007 It would be nice though, if the compiler could trap sigint or something
- Dave (2/22) Feb 16 2007 How about listing any CTFE with -v? That should be more reliable and use...
- BCS (4/33) Feb 16 2007 On the line, how about a timeout flag for unattended builds? As it is
- Dave (7/51) Feb 16 2007 Completely agree, otherwise it contradicts your "right way to build a co...
- Andrei Alexandrescu (See Website For Email) (6/58) Feb 16 2007 Yes, memory watching would be great. It is easy to write a script that
- Kevin Bealer (14/45) Feb 15 2007 I remember one of the first "facts of life" of computer science in
- janderson (13/44) Feb 16 2007 Please don't. All sorts of things can affect performance it means that
- Frits van Bommel (13/21) Feb 16 2007 I'd definitely prefer a way to make it fail early. When I was first
- Joe (8/39) Feb 17 2007 This is exactly pertaining to compiler specific issues, but paralleling
- torhu (10/11) Feb 15 2007 Wonderful feature, but look at this:
- Walter Bright (4/21) Feb 15 2007 Aggh, that's a compiler bug.
- Dave (2/26) Feb 16 2007 Wait -- int foo = square(5); is supposed to CTFE? If so that'd be awesom...
- Michiel (5/12) Feb 16 2007 I don't think so. If I'm not mistaken, D would do that at runtime at the
- Walter Bright (2/11) Feb 16 2007 Not for a global declaration, which must happen at compile time.
- Frits van Bommel (4/15) Feb 16 2007 It does that at runtime for variables in function-type scope. For global...
- Bill Baxter (19/44) Feb 15 2007 This doesn't seem to work either -- should it?
- Walter Bright (2/3) Feb 15 2007 Looks like it should. I'll check it out.
- Bill Baxter (16/41) Feb 15 2007 Any chance that concatenation onto a variable will be supported?
- Walter Bright (2/3) Feb 15 2007 It should. I'll figure out what's going wrog, wring, worng, er, wrong.
- Walter Bright (18/28) Feb 16 2007 It does when I try it:
- Bill Baxter (4/36) Feb 16 2007 Doh! You're right. I must have had some other bad code commented in
- Frank Benoit (keinfarbton) (9/9) Feb 15 2007 many others asked about explicitly run at compile time, and there is
- Bill Baxter (18/32) Feb 15 2007 I like eval!(func()) for that.
- Andrei Alexandrescu (See Website For Email) (4/29) Feb 15 2007 Definitely. That's exactly why dispatch of the same code through
- Derek Parnell (21/23) Feb 15 2007 I guess its time I came clean and admitted that in spite of this being a
- Jarrett Billingsley (5/9) Feb 15 2007 You mean
- Andrei Alexandrescu (See Website For Email) (14/31) Feb 15 2007 This is by far the least interesting application of this stuff. I don't
- Derek Parnell (20/23) Feb 15 2007 So this would mean that I could code ...
- Andrei Alexandrescu (See Website For Email) (4/26) Feb 15 2007 I'm thinking of the much more boring and much crappier job of generating...
- Walter Bright (2/16) Feb 15 2007 Exactamundo!
- Walter Bright (2/5) Feb 15 2007 I agree. I need a better example. Any ideas?
- Andrei Alexandrescu (See Website For Email) (11/17) Feb 15 2007 Well we talked about:
- Frits van Bommel (3/22) Feb 16 2007 Would this mean a type of function whose return value is automatically
- Lionello Lunesu (4/20) Feb 16 2007 But add a "!" to the print, and it's already possible? What extra is
- Frits van Bommel (2/19) Feb 16 2007 You currently also need a mixin() around the print!().
- Lionello Lunesu (4/24) Feb 16 2007 Aha.. Or "before", right?
- Frits van Bommel (6/31) Feb 16 2007 I think the example requires a string mixin statement, which according
- Kevin Bealer (98/104) Feb 16 2007 (Sorry that this got so long -- it kind of turned into a duffel bag of
- Walter Bright (3/14) Feb 15 2007 It's a very good question, and I tried to answer it in the follow-on
- Russell Lewis (18/33) Feb 16 2007 It (sometimes) allows you to express things using the formulae that you
- Walter Bright (5/19) Feb 16 2007 eval!() isn't even in the standard library! You can name it whatever you...
- janderson (3/28) Feb 15 2007 Man this kicks ass!!! Its the best implementation we could hope for.
- Lionello Lunesu (11/36) Feb 16 2007 Can I use the results of compile-time evaluatable functions in "static
- Daniel919 (31/35) Feb 16 2007 Hi, GREAT new feature !
- Nicolai Waniek (25/50) Feb 16 2007 I didn't read the whole thread, but I just found some replies about how
- Michiel (4/10) Feb 16 2007 I am in total agreement.
- Brian Byrne (20/45) Feb 16 2007 I'm having a few problems getting some simple examples to work, for
- Frits van Bommel (3/16) Feb 16 2007 This bug has already been reported:
- J Duncan (3/3) Feb 20 2007 Well I just tried compiling a rather large codebase on 1.006, it takes
- Frits van Bommel (3/6) Feb 20 2007 I had a huge memory usage like that when I tried to compile a file with
- Serg Kovrov (4/4) Feb 24 2007 Sorry if this was already answered, but I can't find it..
- Frits van Bommel (10/12) Feb 24 2007 That depends on what you mean by "library functions". Obviously you mean...
- Serg Kovrov (5/15) Feb 24 2007 My question was general. C runtime functions, Phobos functions, any
- Tyler Knott (3/7) Feb 24 2007 You can only compile-time execute D functions that have their full sourc...
- Serg Kovrov (6/9) Feb 24 2007 Yeah I figured this much yet.
- janderson (6/16) Feb 24 2007 It does limit what we can do however it is more secure. In time the
- Walter Bright (8/12) Feb 24 2007 I expect that, over time, the capability of the compile time function
... is now in DMD 1.006. For example:
-------------------------------------------
import std.stdio;

real sqrt(real x)
{
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
}

void main()
{
    static x = sqrt(10);   // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
}
-------------------------------------------
This should obsolete using templates to compute values at compile time.
Feb 15 2007
Walter Bright wrote:
> ... is now in DMD 1.006. For example: [example snipped]
> This should obsolete using templates to compute values at compile time.

This is really great, Walter! Congratulations!

I'm always thinking that D is focused on great runtime performance while having great expressiveness, which is awesome. I can't believe computers have evolved that much... and still we have to wait 10 or 20 seconds for a program to load :-(

A question: is there any way the compiler can tell the user whether a certain function can be executed at compile time? Those six rules may be hard to memorize, and while coding sensitive code it would be great to ask the compiler "Can the function I'm writing be executed at compile time?" Otherwise it's just a guess.
Feb 15 2007
Ary Manzana wrote:
> A question: is there any way the compiler can tell the user if a certain
> function can be executed at compile time? [...]

int[foo()] should do it, I think, or temp!(foo()). Both are hacks, but...
Feb 15 2007
BCS wrote:
> Ary Manzana wrote:
>> A question: is there anyway the compiler can tell the user if a certain
>> function can be executed at compile time? [...]
> int[foo()] should do it, I think, or temp!(foo()). Both hack, but...

It would be nice to have something like this in phobos:

-------------------------------------------------
import ... ?

int square(int i) {
    return i * i;
}

static assert(isCompileTimeExecution(square()));
-------------------------------------------------

so that if the function is changed you can still assert that, or know that you've lost it.
Feb 15 2007
Ary Manzana wrote:
> BCS wrote:
>> int[foo()] should do it, I think, or temp!(foo()). Both hack, but...
> It would be nice to have something like this in phobos: [...]
> static assert (isCompileTimeExecution(square());
> [...]

I'd go with an is expression:

is(square(int) : const)

sort of in line with the function/etc. stuff.
Feb 15 2007
Ary Manzana wrote:
> It would be nice to have something like this in phobos:

Just a note (it has nothing to do with the topic): IMHO, phobos and tango should be merged somehow. I don't like the idea of having two standard libraries. Hopefully someone recognizes this ;)
Feb 15 2007
Ary Manzana wrote:
> A question: is there anyway the compiler can tell the user if a certain
> function can be executed at compile time? [...]

The way to tell is that it gives you an error if you try to execute it at compile time and it cannot be. Don't bother trying to memorize the rules; if you instead follow the rule "imagine regular compile time constant folding and extend it to functions", you'll be about 99% correct.
Feb 15 2007
Walter Bright wrote:
> The way to tell is it gives you an error if you try to execute it at
> compile time and it cannot be. [...]

I think the issue is where compile time and runtime are both allowed, but it makes a *BIG* difference in performance.
Feb 15 2007
BCS wrote:
> I think the issue is where compile time and runtime are both allowed,
> but makes a *BIG* difference in performance.

Right, which is why compile time execution is only done in contexts where it would otherwise error, such as in initialization of global constants.
Feb 15 2007
Reply to Walter,

> Right, which is why compile time execution is only done in contexts
> where it would otherwise error, such as in initialization of global
> constants.

Hmm, so compile time evaluation is lazy, not greedy? That makes me want a cast(const) to force it in some cases:

char[] CTwritef(...)

const char[] message = CTwritef(">>%s:%d foobar", __FILE__, __LINE__+1);
log(message);

vs.

log(cast(const)CTwritef(">>%s:%d foobar", __FILE__, __LINE__+1));

or best of all, why not have it greedy?

log(CTwritef(">>%s:%d foobar", __FILE__, __LINE__+1));
Feb 15 2007
BCS wrote:
> Hmm, so compile time evaluation is lazy, not greedy?

Yes.

> That makes me want a cast(const) to force it in some cases

I think the eval() template does that nicely.
Feb 15 2007
Ary Manzana wrote:
> A question: is there anyway the compiler can tell the user if a certain
> function can be executed at compile time? [...]

Walter pointed out, faster than I, that an eval!() template does this.

But here's my question: how do we have a compile-time switch which controls what to compile-time evaluate and what not? Somebody mentioned, elsewhere, that you would not want to do compile-time evaluation in a debug build but you would in a release build. How would one achieve that, other than wrapping every use with a version switch?
Feb 15 2007
Russell Lewis wrote:
> But here's my question: How do we have a compile-time switch which
> controls what to compile-time evaluate and what not?

It's not necessary. The context completely determines what to compile-time and what to run-time.
Feb 15 2007
Walter Bright wrote:
> It's not necessary. The context completely determines what to
> compile-time and what to run-time.

Please don't misunderstand me. I understand how your design works. (And I think it's a pretty good one.) But what I'm saying is that somebody might want to write code like this:

int i;
version(debug)
    i = eval!(MyFunc());
else
    i = MyFunc();

Perhaps they want to do this because MyFunc() is still being debugged, or because MyFunc() takes a while to do compile-time function execution. How would one write a cleaner syntax for this? I'd like to see something like:

int i = exec_compile_time_on_release_build!(MyFunc());

but I'm not sure how one would code it.
Feb 16 2007
Russell Lewis wrote:
> But what I'm saying is that somebody might want to write code like this:
> [...]
> How would one write a cleaner syntax for this? I'd like so see
> something like:
>
> int i = exec_compile_time_on_release_build!(MyFunc());
>
> but I'm not sure how one would code it.

That'll become possible once templates are extended to be able to take alias expressions as arguments.
Feb 16 2007
Walter Bright wrote:
> ... is now in DMD 1.006. For example: [example snipped]
> This should obsolete using templates to compute values at compile time.

Good one. Can you perhaps explain what the execution mechanism is? That aspect is much more interesting to some of us geeks <g>
Feb 15 2007
kris wrote:
> Good one.

I thought you'd appreciate it because it involves NO new syntax. It just makes things work that were errors before.

> Can you perhaps explain what the execution mechanism is? That aspect is
> much more interesting to some of us geeks <g>

The dirty deed is done in interpret.c. It basically just makes the existing constant folding code more powerful.
Feb 15 2007
Walter Bright wrote:
> This should obsolete using templates to compute values at compile time.

For contrast, compare with the C++ proposal:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf
Feb 15 2007
Walter Bright wrote:
> For contrast, compare with the C++ proposal:
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf

It's kinda long and boring, but it looks like the key differences are:

1) The need to tag compile-time functions with a new keyword, "constexpr", though they seem to sell this as an advantage: "a programmer can state that a function is intended to be used in a constant expression and the compiler can diagnose mistakes." -- page 9.

2) The restriction that a constexpr function can only contain "return" followed by exactly one expression. No loops for you! And I hope you like quadruply nested b?x:y expressions!

That's just a proposal, though, right? Has it been accepted for C++09? Seems like a hard sell given the limitations and the new keyword.

--bb
Feb 15 2007
Bill Baxter wrote:
> It's kinda long and boring, but it looks like the key differences are
> [...]

3) The C++ feature is applicable to user-defined types.

Andrei
Feb 15 2007
Andrei Alexandrescu (See Website For Email) wrote:
> 3) The C++ feature is applicable to user-defined types.

These user-defined literals seem useful too. Would it be hard to implement this with structs, or are there perhaps more subtle issues? Here is an earlier article it was based on:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1511.pdf

Btw, this is a really cool feature (how it is done in D, I mean). A little while ago someone posted a CachedFunction template to do memoization on any function; this is now easy to do 100% safely, right? Or at least for the subset of functions that pass the compile-time criteria.
Feb 15 2007
Bill Baxter wrote:
> 1) The need to tag compile-time functions with a new keyword,
> "constexpr", though they seem to sell this as an advantage. [...]

Would this be useful as an optional tag? Suppose you wanted to make a utils library that was usable in both compile-time and run-time form. You write your code trying to follow the rules supplied in the specs, but how do you make sure you haven't slipped up anywhere? Would an additional keyword help here, or do you just have to do all your unit tests in static form, e.g.:

char[] itoa(long value) {...}

unittest
{
    static assert(itoa(12345) == "12345");
}

I suppose there's not much incentive to add a keyword in this situation, but perhaps it could be useful elsewhere?

Cheers,
Reiner
Feb 15 2007
Walter Bright wrote:
> For contrast, compare with the C++ proposal:
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1972.pdf

Huh, and Bjarne co-authored that proposal. I wonder what the reason is for all the restrictions.

Sean
Feb 15 2007
Sean Kelly wrote:
> Huh, and Bjarne co-authored that proposal. I wonder what the reason is
> for all the restrictions.

I'm guessing, but I suppose they wished to be as conservative as possible in order to avoid the unimplementable debacle that exported templates were.
Feb 15 2007
Walter Bright wrote:
> ... is now in DMD 1.006. For example: [example snipped]
> This should obsolete using templates to compute values at compile time.

This is a development of epic proportions.

There is a need for a couple of ancillary features. Most importantly, a constant must be distinguishable from a variable. Consider the example of regex from our correspondence:

bool b = regexmatch(a, "\n$");

vs.

char[] pattern = argv[1];
bool b = regexmatch(a, pattern);

You'd want to dispatch regexmatch differently: the first match should be passed to compile-time code that at the end of the day yields:

bool b = (a[$-1] == '\n');

while the second should invoke the full-general dynamic pattern matching algorithm, since a dynamic pattern is used.

Andrei
Feb 15 2007
Andrei Alexandrescu (See Website For Email) wrote:
> There is a need for a couple of ancillary features. Most importantly, a
> constant must be distinguishable from a variable. [...]

I think that can be done with an improvement to the existing compile time regex library.
Feb 15 2007
Andrei Alexandrescu (See Website For Email) wrote:
> The answer is correct, but does not address the issue I raised.
>
> A simple question is: what is the signature of regexmatch? A
> runtime-only version is:
>
> bool regexmatch_1(char[] input, char[] pattern);
>
> A compile-time-only version is:
>
> bool regexmatch_2(pattern : char[])(char[] input);
>
> Notice how the two cannot be called the same way. So the burden is on
> the user to specify different syntaxes for the two cases:
>
> bool b1 = regexmatch_1(a, ".* = .*");   // forced runtime
> bool b2 = regexmatch_2!(".* = .*")(a);  // forced compile-time
>
> Notice that b2 is NOT computed at compile time!!! This is because "a"
> is a regular variable. It's just that the code for computing b2 is
> radically different from the code for computing b1, because the former
> uses static knowledge of the pattern.
>
> The problem is that what's really needed is this:
>
> bool b = regexmatch(string, pattern);
>
> and have regexmatch dispatch to regexmatch_1 if pattern is a variable,
> or to regexmatch_2 if pattern is a compile-time constant. Do you feel me?

No, but I understand your point.

> What we need is to allow a means to overload a function with a
> template, in a way that ensures unified invocation syntax, e.g.:
>
> bool regexmatch(char[] str, char[] pat);            // 1
> bool regexmatch(char[] pat)(char[] str, pat);       // 2
> bool regexmatch(char[] pat, char[] str)(str, pat);  // 3
>
> void main(int argc, char[][] argv)
> {
>     regexmatch(argv[0], argv[1]);  // goes to (1)
>     regexmatch(argv[0], ".+");     // goes to (2)
>     regexmatch("yah", ".+");       // goes to (3)
> }
>
> Notice how the invocation syntax is identical --- an essential artifact.

Remember your proposal for an expression type which resolves to "does this expression compile"? That can be used here, along with expression aliasing, to test to see if it can be done at compile time, and then pick the right fork.
Feb 15 2007
Andrei Alexandrescu (See Website For Email) wrote:A simple question is: what is the signature of regexmatch? A runtime-only version is: bool regexmatch_1(char[] input, char[] pattern); A compile-time-only version is: bool regexmatch_2(pattern : char[])(char[] input); Notice how the two cannot be called the same way. So the burden is on the user to specify different syntaxes for the two cases: bool b1 = regexmatch_1(a, ".* = .*"); // forced runtime bool b2 = regexmatch_2!(".* = .*")(a); // forced compile-time Notice that b2 is NOT computed at compile time!!! This is because "a" is a regular variable. It's just that the code for computing b2 is radically different from the code for computing b1 because the former uses static knowledge of the pattern. The problem is that what's really needed is this: bool b = regexmatch(string, pattern); and have regexmatch dispatch to regexmatch_1 if pattern is a variable, or to regexmatch_2 if pattern is a compile-time constant. Do you feel me?No, but I understand your point.What we need is allow a means to overload a function with a template, in a way that ensures unified invocation syntax, e.g.: bool regexmatch(char[] str, char[] pat); // 1 bool regexmatch(char[] pat)(char[] str, pat); // 2 bool regexmatch(char[] pat, char[] str)(str, pat); // 3 void main(int argc, char[][] argv) { regexmatch(argv[0], argv[1]); // goes to (1) regexmatch(argv[0], ".+"); // goes to (2) regexmatch("yah", ".+"); // goes to (3) } Notice how the invocation syntax is identical --- an essential artifact.Remember your proposal for an expression type which resolves to "does this expression compile"? That can be used here, along with expression aliasing, to test to see if it can be done at compile time, and then pick the right fork.
Feb 15 2007
Walter Bright wrote:Andrei Alexandrescu (See Website For Email) wrote:It could indeed; I'm just hoping it can be properly cloaked away (e.g. in a library) at little cognitive cost to both the user and library developer. I assume it will be something often asked for. Many Perl coders probably have no idea that: $a =~ ".*=.*"; is faster than, and handled very, very differently, from: $a =~ $b; where $b is a dynamic variable. They just use the uniform syntax and let the compiler do whatever the hell it has to do to generate good code. We should make that kind of partial evaluation easy to define and implement. AndreiWhat we need is allow a means to overload a function with a template, in a way that ensures unified invocation syntax, e.g.: bool regexmatch(char[] str, char[] pat); // 1 bool regexmatch(char[] pat)(char[] str, pat); // 2 bool regexmatch(char[] pat, char[] str)(str, pat); // 3 void main(int argc, char[][] argv) { regexmatch(argv[0], argv[1]); // goes to (1) regexmatch(argv[0], ".+"); // goes to (2) regexmatch("yah", ".+"); // goes to (3) } Notice how the invocation syntax is identical --- an essential artifact.Remember your proposal for a expression type which resolves to "does this expression compile"? That can be used here, along with expression aliasing, to test to see if it can be done at compile time, and then pick the right fork.
Feb 15 2007
Andrei Alexandrescu (See Website For Email) wrote:where $b is a dynamic variable. They just use the uniform syntax and let the compiler do whatever the hell it has to do to generate good code. We should make that kind of partial evaluation easy to define and implement.I agree.
Feb 15 2007
Walter Bright wrote:... is now in DMD 1.006. For example:That's a great feature! A couple of questions, if I may: * Can sqrt still call another function? * Does it still work if classes or structs are involved in any way? What about enums? What about arrays? * Why is the second sqrt(10) run at runtime? In theory it's still a constant, right? Is this something that will work in a later version? Congrats on the new feature!------------------------------------------- import std.stdio; real sqrt(real x) { real root = x / 2; for (int ntries = 0; ntries < 5; ntries++) { if (root * root - x == 0) break; root = (root + x / root) / 2; } return root; } void main() { static x = sqrt(10); // woo-hoo! set to 3.16228 at compile time! writefln("%s, %s", x, sqrt(10)); // this sqrt(10) runs at run time } ------------------------------------------This should obsolete using templates to compute values at compile time.
Feb 15 2007
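Reflowed for readability, here is the announcement example quoted above (the code is unchanged from Walter's post):

```d
import std.stdio;

real sqrt(real x)
{
    real root = x / 2;
    for (int ntries = 0; ntries < 5; ntries++)
    {
        if (root * root - x == 0)
            break;
        root = (root + x / root) / 2;
    }
    return root;
}

void main()
{
    static x = sqrt(10);              // woo-hoo! set to 3.16228 at compile time!
    writefln("%s, %s", x, sqrt(10));  // this sqrt(10) runs at run time
}
```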
Michiel wrote:That's a great feature! A couple of questions, if I may: * Can sqrt still call another function?According to the documentation in the zip, yes. But only functions which can themselves be executed at compile time, of course.* Does it still work if classes or structs are involved in any way? What about enums? What about arrays?Classes and structs seem to be disallowed. Array and string literals can be used as parameters, as long as all members would also be valid parameters by themselves. The body may not use non-const arrays. Enums aren't mentioned, but either they qualify as integers (meaning they can be passed as parameters) or not. Either way, they're not disallowed in the body so they should be usable there.* Why is the second sqrt(10) run at runtime? In theory it's still a constant, right? Is this something that will work in a later version?From the spec: """ In order to be executed at compile time, the function must appear in a context where it must be so executed, for example: * initialization of a static variable * dimension of a static array * argument for a template value parameter """ So this is only done if it's used in a context where only compile-time constants are allowed.
Feb 15 2007
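A small sketch of the rules Frits describes: a string literal is an acceptable compile-time argument, so the call can appear in a context that forces folding. (`countChar` and `newlines` are illustrative names, not from the post, and this is untested against DMD 1.006.)

```d
import std.stdio;

// count occurrences of c in s; simple enough to interpret at compile time
int countChar(char[] s, char c)
{
    int n = 0;
    for (int i = 0; i < s.length; i++)
        if (s[i] == c)
            n++;
    return n;
}

// string literal argument, static initializer context: folds to 2
static newlines = countChar("a\nb\nc", '\n');

void main()
{
    writefln("%s", newlines);
}
```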
Frits van Bommel wrote:Well, then there is room for improvement. (Good thing, too. Can you imagine how bad it would be if perfection had already been achieved? ;)) Anyway, it looks to me like every subtree of the abstract syntax tree could potentially be collapsed by compile time function execution. This would include the mentioned sqrt call. Of course, I don't really know how the D compiler works internally, so I can't be sure in this case. And I don't see a real reason why structs and at least scope classes couldn't be included. But I don't suppose it's all that easy. Maybe the perfect compiler would also pre-execute everything up until the first input is needed. And maybe some bits after. -- Michiel* Why is the second sqrt(10) run at runtime? In theory it's still a constant, right? Is this something that will work in a later version?From the spec: """ In order to be executed at compile time, the function must appear in a context where it must be so executed, for example: * initialization of a static variable * dimension of a static array * argument for a template value parameter """ So this is only done if it's used in a context where only compile-time constants are allowed.
Feb 15 2007
Michiel wrote:Frits van Bommel wrote:You do need some way to turn it off though. For an extreme example, most programs from the demoscene make extensive use of compressed data that is uncompressed as the first step before running. They would be very unhappy if their language chose to be "helpful" by running the decompression routines at compile-time thus resulting in a 20M executable. Extreme -- but it does demonstrate there are cases where you want to be sure some expansion happens at runtime. --bbWell, then there is room for improvement. (Good thing, too. Can you imagine how bad it would be if perfection had already been achieved? ;)) Anyway, it looks to me like every subtree of the abstract syntax tree could potentially be collapsed by compile time function execution. This would include the mentioned sqrt call. Of course, I don't really know how the D compiler works internally, so I can't be sure in this case. And I don't see a real reason why structs and at least scope classes couldn't be included. But I don't suppose it's all that easy. Maybe the perfect compiler would also pre-execute everything up until the first input is needed. And maybe some bits after.* Why is the second sqrt(10) run at runtime? In theory it's still a constant, right? Is this something that will work in a later version?From the spec: """ In order to be executed at compile time, the function must appear in a context where it must be so executed, for example: * initialization of a static variable * dimension of a static array * argument for a template value parameter """ So this is only done if it's used in a context where only compile-time constants are allowed.
Feb 15 2007
Bill Baxter wrote:Very good point. This could be solved by using some sort of conditional compilation construct on the bits you want to happen at runtime. -- MichielMaybe the perfect compiler would also pre-execute everything up until the first input is needed. And maybe some bits after.You do need some way to turn it off though. For an extreme example, most programs from the demoscene make extensive use of compressed data that is uncompressed as the first step before running. They would be very unhappy if their language chose to be "helpful" by running the decompression routines at compile-time thus resulting in a 20M executable. Extreme -- but it does demonstrate there are cases where you want to be sure some expansion happens at runtime.
Feb 15 2007
Michiel wrote:Very good point. This could be solved by using some sort of conditional compilation construct on the bits you want to happen at runtime.It's the other way around. Functions are always executed at runtime, unless they are in a context where a compile-time constant is *required*, such as for global variable initialization or template value arguments.
Feb 15 2007
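The contexts Walter describes, where a compile-time constant is *required* and the function is therefore interpreted at compile time, can be sketched as follows (`area` and `Fill` are made-up names for illustration):

```d
int area(int w, int h) { return w * h; }

static cells = area(3, 4);   // static initializer: area runs at compile time
int[area(3, 4)] grid;        // static array dimension: compile time

template Fill(int n)         // template value argument: compile time
{
    const int Fill = n;
}
const filled = Fill!(area(3, 4));

void main()
{
    int r = area(3, 4);      // ordinary expression context: runs at run time
}
```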
Walter Bright wrote:I know that's how it's implemented now. But generally, it's a good thing if as much as possible happens at compile time. Unless you compile in debug mode, in which you want the compilation fast. And like Bill said, you also don't want to decompress everything at compile time and blow up the executable. But that's not a problem if all you're doing is shrinking subtrees. Like in the case of that second sqrt call. -- MichielVery good point. This could be solved by using some sort of conditional compilation construct on the bits you want to happen at runtime.It's the other way around. Functions are always executed at runtime, unless they are in a context where a compile-time constant is *required*, such as for global variable initialization or template value arguments.
Feb 15 2007
Michiel wrote:Walter Bright wrote:Many functions could be executed at compile time, but should not be. There's NO way for the compiler to figure this out. The only thing left is for it to be explicit whether you want compile or run time execution, and this is the way it is designed. There isn't any ambiguity - the context determines it completely. There's also an easy way to switch between the two, so I don't feel there's a need for anything additional.It's the other way around. Functions are always executed at runtime, unless they are in a context where a compile-time constant is *required*, such as for global variable initialization or template value arguments.I know that's how it's implemented now. But generally, it's a good thing if as much as possible happens at compile time.
Feb 15 2007
Walter Bright wrote:Many functions could be executed at compile time, but should not be.When should they not be? -- Michiel
Feb 15 2007
Michiel wrote:Walter Bright wrote:1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose. 2) A function that simply takes too long at compile time. Compile time execution is going to be >100 times slower than run time. 3) As the spec mentions, there are cases where compile time would get different results than run time. 4) The compiler cannot determine in advance if a function can be executed at compile time. So speculatively doing so would have to be done for every function with constant arguments - this could be spectacularly slow. 5) It can cause problems for people who want to do runtime coverage testing - their code may actually never get executed, and they never notice.Many functions could be executed at compile time, but should not be.When should they not be?
Feb 15 2007
Walter Bright wrote:Yes, that one was mentioned by Bill Baxter. But I think this is the exception rather than the rule. The programmer could explicitly tell the compiler to not execute that piece of code at runtime.1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose.Many functions could be executed at compile time, but should not be.When should they not be?2) A function that simply takes too long at compile time. Compile time execution is going to be >100 times slower than run time.If I'm compiling for release, I wouldn't mind letting the compiler run a long time if it results in a faster executable.3) As the spec mentions, there are cases where compile time would get different results than run time.It shouldn't, though, should it? I'm just talking about a possible ultimate goal here. I understand if D isn't at that stage yet.4) The compiler cannot determine in advance if a function can be executed at compile time. So speculatively doing so would have to be done for every function with constant arguments - this could be spectacularly slow.Like I said, I wouldn't mind. As long as it doesn't take that long when I'm debug-mode compiling. I don't need compile time execution in that case.5) It can cause problems for people who want to do runtime coverage testing - their code may actually never get executed, and they never notice.Nothing a compiler-switch can't fix, I think. -- Michiel
Feb 15 2007
Michiel wrote:Walter Bright wrote:The programmer *already* has explicit control over whether it is folded or not.Yes, that one was mentioned by Bill Baxter. But I think this is the exception rather than the rule. The programmer could explicitly tell the compiler to not execute that piece of code at runtime.1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose.Many functions could be executed at compile time, but should not be.When should they not be?
Feb 15 2007
Walter Bright wrote:Correction, I meant to say: "The programmer could explicitly tell the compiler to not execute that piece of code at *compile time*." And yes, the programmer already has explicit control. But not the way I mean. Right now, in places where a call could potentially be either a runtime or compile-time execution: * the compiler defaults to runtime execution. And like I said, that behavior isn't wanted in most cases (when compiling for release). I would suggest defaulting to compile time execution and using a special keyword to avoid that. The programmer will use this keyword if he's compressed some data and wants it to be decompressed at runtime. * the syntax for functions to be executed at compile time isn't the nice-and-simple D syntax, but the template-syntax. And in another thread you yourself have mentioned why that's not optimal. I agree. -- MichielThe programmer *already* has explicit control over whether it is folded or not.Yes, that one was mentioned by Bill Baxter. But I think this is the exception rather than the rule. The programmer could explicitly tell the compiler to not execute that piece of code at runtime.1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose.Many functions could be executed at compile time, but should not be.When should they not be?
Feb 15 2007
Michiel wrote:* the syntax for functions to be executed at compile time isn't the nice-and-simple D syntax, but the template-syntax. And in another thread you yourself have mentioned why that's not optimal. I agree.I don't think eval!(expression) is an undue burden. It's hard to imagine how any other syntax would look better.
Feb 15 2007
Walter Bright wrote:Michiel wrote:But in many situations, you could view compile-time function execution simply as a high-level optimization, an extension on 'inline'. Why, then, does it make sense to require explicit eval!() annotations, when it clearly doesn't make sense to do this for inline? Viewing this as a more complex extension to inlining, I think the correct approach is to allow explicit annotations for 'only at runtime' and 'only at compile time' and have the rest decided according to a compiler switch, e.g. -pre-eval Cheers, Reiner* the syntax for functions to be executed at compile time isn't the nice-and-simple D syntax, but the template-syntax. And in another thread you yourself have mentioned why that's not optimal. I agree.I don't think eval!(expression) is an undue burden. It's hard to imagine how any other syntax would look better.
Feb 15 2007
Reiner Pope wrote:Walter Bright wrote:You can completely ignore and omit the eval!() and your code will run just fine. The eval!() is only for those who wish to do some more advanced tweaking.Michiel wrote:But in many situations, you could view compile-time function execution simply as a high-level optimization, an extension on 'inline'. Why, then, does it make sense to require explicit eval!() annotations, when it clearly doesn't make sense to do this for inline?* the syntax for functions to be executed at compile time isn't the nice-and-simple D syntax, but the template-syntax. And in another thread you yourself have mentioned why that's not optimal. I agree.I don't think eval!(expression) is an undue burden. It's hard to imagine how any other syntax would look better.Viewing this as a more complex extension to inlining, I think the correct approach is to allow explicit annotations for 'only at runtime' and 'only at compile time'I just don't see what is to be gained by this (and consider the extra bloat in the language to provide such annotations). The language already gives complete, absolute control over when a function is executed. Can you give an example where such annotations improve things?and have the rest decided according to a compiler switch, eg -pre-eval
Feb 15 2007
Walter Bright wrote:It's better for the user only to have complete control when desired. The compiler should infer the rest, but by a better policy than 'leave as much to runtime as possible', since such a policy misses optimization opportunities. Unless I misunderstand, it seems like compile time function execution opens up a lot of optimization opportunities, yet the programmer is required to explicitly specify every time they should be used. From what I understood, this function execution was implemented by extending const-folding. Wasn't const-folding initially a form of optimization, to pre-evaluate things, so they didn't need to be evaluated at runtime? Consider: int sq(int val) { return val*val; } void main() { printf("%d\n", sq(1024)); } Although sq() will probably be inlined, if it gets too big, then it won't. However, there is no reason that it can't be evaluated at compile-time, which would increase the runtime efficiency. Sure, the programmer could enforce compile-time evaluation by using printf("%d\n", eval!( sq(1024) )); but if the compiler can evaluate it automatically, then there would be no need for programmer annotations here. I'm just agreeing with Michiel that there should be a compile-time switch that tells the compiler to pre-evaluate as much as possible. This is an optimization because it means that some code doesn't need to be run at runtime. I just suggested the 'only at runtime' annotation so that the programmer has a safeguard against the compiler going wild with pre-evaluation, for situations like the compressed-code example you mentioned earlier.Viewing this as a more complex extension to inlining, I think the correct approach is to allow explicit annotations for 'only at runtime' and 'only at compile time'I just don't see what is to be gained by this (and consider the extra bloat in the language to provide such annotations). 
The language already gives complete, absolute control over when a function is executed.You can completely ignore and omit the eval!() and your code will run just fine. The eval!() is only for those who wish to do some more advanced tweaking.For sure. However, what I referred to in my post was the similarity between requiring 'inline' in C++ to tell the compiler to optimize the function call and requiring eval!() in D to tell the compiler to pre-evaluate the function. Cheers, Reiner
Feb 15 2007
Reply to Walter,Michiel wrote:6) when the whole program is const foldable, i.e. no runtime inputs, like with some scientific software.Walter Bright wrote:1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose. 2) A function that simply takes too long at compile time. Compile time execution is going to be >100 times slower than run time. 3) As the spec mentions, there are cases where compile time would get different results than run time. 4) The compiler cannot determine in advance if a function can be executed at compile time. So speculatively doing so would have to be done for every function with constant arguments - this could be spectacularly slow. 5) It can cause problems for people who want to do runtime coverage testing - their code may actually never get executed, and they never notice.Many functions could be executed at compile time, but should not be.When should they not be?
Feb 15 2007
BCS wrote:6) when the whole program is const foldable, i.e. no runtime inputs, like with some scientific software.That's interesting. What kind of program has no runtime inputs? And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast. -- Michiel
Feb 15 2007
Reply to Michiel,BCS wrote:I have a program that calculates the number of N length sequences whose sum is less than M. M and N are consts so it has no inputs. Runtime speed is of no more importance than compile time speed because I only need to run it once for each time I compile it. I'd rather let the CPU do the crunching than DMD. It takes about 15min to run as is, so that would be about 1500+ min under DMD.6) when the whole program is const foldable, i.e. no runtime inputs, like with some scientific software.That's interesting. What kind of program has no runtime inputs? And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
Feb 15 2007
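As a toy version of the kind of input-free program BCS describes (his actual code is not shown; `count` is an illustrative stand-in, with small N and M so the fold is cheap):

```d
import std.stdio;

// count sequences of length n, entries in 1..k, whose sum is less than m
int count(int n, int k, int m)
{
    if (m <= n)                 // minimum possible sum is n (all ones)
        return 0;
    if (n == 0)
        return 1;
    int total = 0;
    for (int v = 1; v <= k; v++)
        total += count(n - 1, k, m - v);
    return total;
}

void main()
{
    static c = count(3, 6, 10); // whole computation folded at compile time
    writefln("%s", c);
}
```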
BCS wrote:Ah, now I understand the fuss. :) Walter is using an interpreter for the compile time execution. All this time I was assuming multiple compilation stages. First compile the compile-time functions and let the processor run them at compile time. -- MichielI have a program that calculates the number of N length sequences whose sum is less than M. M and N are consts so it has no inputs. Runtime speed is of no more importance than compile time speed because I only need to run it once for each time I compile it. I'd rather let the CPU do the crunching rather than the DMD. It takes about 15min to run as is, so that would be about 1500+ min under DMD.6) when the whole program is const foldable, e.i. no runtime inputs, like with some scientific software.That's interesting. What kind of program has no runtime inputs? And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
Feb 15 2007
Michiel wrote:BCS wrote:If the compiler generates a new instance of the algorithm whenever it is needed (i.e. function is partially evaluated), it can actually produce longer code than when simply calling the same function over and over. It's almost like inlining functions whenever possible.6) when the whole program is const foldable, i.e. no runtime inputs, like with some scientific software.And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
Feb 15 2007
Jari-Matti Mäkelä wrote:That's actually not the idea. Basically, the idea is that the compiler runs the algorithm and replaces every call to it with the answer. It's possible (though unlikely), that the executable would be a bit larger, but the executable would be much faster. And memory we have. In most cases it's speed that's the problem. -- MichielIf the compiler generates a new instance of the algorithm whenever it is needed (i.e. function is partially evaluated), it can actually produce longer code than when simply calling the same function over and over. It's almost like inlining functions whenever possible.6) when the whole program is const foldable, e.i. no runtime inputs, like with some scientific software.And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
Feb 15 2007
Michiel wrote:BCS wrote:How about: import std.stdio; void main() { writefln("%d\n", CalculateTheAnswerToLifeUniverseEverything()); } if it took 7.5 million years to run on a supercomputer, how long is it going to take to run on your compiler?6) when the whole program is const foldable, e.i. no runtime inputs, like with some scientific software.That's interesting. What kind of program has no runtime inputs? And why would it be a problem if the whole program is reduced to printing constants to the output? It would certainly be small and fast.
Feb 15 2007
On Thu, 15 Feb 2007 17:52:38 -0700, Russell Lewis wrote:import std.stdio; void main() { writefln("%d\n", CalculateTheAnswerToLifeUniverseEverything()); } if it took 7.5 million years to run on a supercomputer, how long is it going to take to run on your compiler?int CalculateTheAnswerToLifeUniverseEverything() { return 42; } -- Derek (skype: derek.j.parnell) Melbourne, Australia "Justice for David Hicks!" 16/02/2007 4:37:59 PM
Feb 15 2007
Derek Parnell wrote:On Thu, 15 Feb 2007 17:52:38 -0700, Russell Lewis wrote:No, the function name says *calculate*, not *return* :P. Also, nitpick: wasn't it "The Answer To _The Ultimate Question_ Of Life, the Universe and Everything"? (i.e. you missed a bit ;) )if it took 7.5 million years to run on a supercomputer, how long is it going to take to run on your compiler?int CalculateTheAnswerToLifeUniverseEverything() { return 42; }
Feb 16 2007
Frits van Bommel wrote:Derek Parnell wrote:I think that it was just a great example of brain-time constant folding :D Regards Marcin KuszczakOn Thu, 15 Feb 2007 17:52:38 -0700, Russell Lewis wrote:No, the function name says *calculate*, not *return* :P.if it took 7.5 million years to run on a supercomputer, how long is it going to take to run on your compiler?int CalculateTheAnswerToLifeUniverseEverything() { return 42; }
Feb 16 2007
Walter Bright wrote:Michiel wrote:One thought on this. What about leveraging multiple threads to calculate different algorithms at once? They should all be self-contained, so this shouldn't be much of a problem. With quad-cores the compiled program would maybe run only 30 times slower. Also it's a good step towards future-proofing the compiler. -JoelWalter Bright wrote:1) Another poster mentioned a function that decompressed a built-in string - the whole point of having it compressed was to reduce the exe file size. Decompressing it at compile time defeats the purpose. 2) A function that simply takes too long at compile time. Compile time execution is going to be >100 times slower than run time.Many functions could be executed at compile time, but should not be.When should they not be?
Feb 16 2007
janderson wrote:And I'm not 100% sure why you can't compile those functions and run another process to do the algorithms. Not with an interpreter, but with compiled D code. Of course, I wouldn't know how to actually do that, but in theory it's possible. -- Michiel2) A function that simply takes too long at compile time. Compile time execution is going to be >100 times slower than run time.One thought on this. What about leveraging multiple threads to calculate different algorithms at once. They should all be self contained, so this shouldn't be much of a problem. With the quad-cores the compiled program would run maybe at 30 times slower. Also its a good step towards future-proofing the compiler.
Feb 17 2007
Michiel wrote:That's a great feature! A couple of questions, if I may: * Can sqrt still call another function?Yes, as long as that function can also be compile time executed. Recursion works, too.* Does it still work if classes or structs are involved in any way? What about enums? What about arrays?No, yes, array literals only.* Why is the second sqrt(10) run at runtime?Because the compiler cannot tell if it is a reasonable thing to try and execute it at compile time. To force it to happen at compile time: template eval(A...) { alias A eval; } writefln("%s", eval!(sqrt(10))); which works because templates can only take constants as value arguments.In theory it's still a constant, right?No.Is this something that will work in a later version?I don't see a motivating reason why, at the moment, given the eval template.
Feb 15 2007
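Putting Walter's eval template into a complete program (`factorial` is an illustrative stand-in for sqrt; the template itself is verbatim from his post):

```d
import std.stdio;

// routing the call through a template value argument forces
// compile-time execution, since template value arguments must be constants
template eval(A...) { alias A eval; }

int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

void main()
{
    writefln("%s", eval!(factorial(10))); // computed at compile time
    writefln("%s", factorial(10));        // same value, computed at run time
}
```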
Walter Bright wrote:Michiel wrote:The feature of binding expressions to aliases will break eval. AndreiThat's a great feature! A couple of questions, if I may: * Can sqrt still call another function?Yes, as long as that function can also be compile time executed. Recursion works, too.* Does it still work if classes or structs are involved in any way? What about enums? What about arrays?No, yes, array literals only.* Why is the second sqrt(10) run at runtime?Because the compiler cannot tell if it is a reasonable thing to try and execute it at compile time. To force it to happen at compile time: template eval(A...) { alias A eval; } writefln("%s", eval!(sqrt(10))); which works because templates can only take constants as value arguments.
Feb 15 2007
Walter Bright wrote:... is now in DMD 1.006. For example:Hmm... infinite loops are suddenly _really_ slow to compile ;) : ----- // Slightly modified from example import std.stdio; real test(real x) { real root = x / 2; for (int ntries = 0; ntries < 5 || true; ntries++) { root = (root + x / root) / 2; } return root; } void main() { static x = test(10); } ----- (Also, this seems to consume huge amounts of memory) And error checking could be improved as well: removing the parameter from the call in above code gives --- test.d(15): function test.test (real) does not match parameter types () test.d(15): Error: expected 1 arguments, not 0 dmd: interpret.c:96: Expression* FuncDeclaration::interpret(Expressions*): Assertion `parameters && parameters->dim == dim' failed. Aborted (core dumped) --- It tells you what's wrong, but still trips an assert that seems to be about the same error. However: ----- real test() { return 0.0; } void main() { static x = test(); } ----- Gives: --- dmd: interpret.c:96: Expression* FuncDeclaration::interpret(Expressions*): Assertion `parameters && parameters->dim == dim' failed. Aborted (core dumped) --- (the same assert is tripped) So I don't know, maybe it's a different bug entirely. Or it tripped the other clause of the assert. I don't feel like inspecting interpret.c to find out what exactly the assert is checking and why it's false...------------------------------------------- import std.stdio; real sqrt(real x) { real root = x / 2; for (int ntries = 0; ntries < 5; ntries++) { if (root * root - x == 0) break; root = (root + x / root) / 2; } return root; } void main() { static x = sqrt(10); // woo-hoo! set to 3.16228 at compile time! writefln("%s, %s", x, sqrt(10)); // this sqrt(10) runs at run time } ------------------------------------------This should obsolete using templates to compute values at compile time.
Feb 15 2007
I see that I can't do this: char[] someCompileTimeFunction() { return "writefln(\"Wowza!\");"; } int main() { mixin(someCompileTimeFunction()); return 0; } Any chance of compile-time code generation via this mechanism? Or are they simply handled in different, incompatible steps of compilation? - Gregor Richards PS: Yes, I realize this is a terrible idea ^^
Feb 15 2007
Gregor Richards wrote:I see that I can't do this: char[] someCompileTimeFunction() { return "writefln(\"Wowza!\");"; } int main() { mixin(someCompileTimeFunction()); return 0; } Any chance of compile-time code generation via this mechanism? Or are they simply handled in different, incompatible steps of compilation? - Gregor Richards PS: Yes, I realize this is a terrible idea ^^That's a bug. I'll fix it.
Feb 15 2007
On Thu, 15 Feb 2007 13:02:40 -0800, Walter Bright <newshound digitalmars.com> wrote:Gregor Richards wrote:The following must be a related bug. The compiler complains that the argument to the mixin is not a string and parse() cannot be evaluated at compile-time. char[] parse(char[] src) { return src; } class Test { mixin(parse(import("guts.dspx"))); } void main() { } BTW, thanks for the awesome feature!I see that I can't do this: char[] someCompileTimeFunction() { return "writefln(\"Wowza!\");"; } int main() { mixin(someCompileTimeFunction()); return 0; } Any chance of compile-time code generation via this mechanism? Or are they simply handled in different, incompatible steps of compilation? - Gregor Richards PS: Yes, I realize this is a terrible idea ^^That's a bug. I'll fix it.
Feb 16 2007
Max Samukha wrote:The following must be a related bug. The compiler complains that the argument to the mixin is not a string and parse() cannot be evaluated at compile-time.Yes, in the compiler source the mixin argument failed to be marked as "must interpret", so all these will fail. Fortunately, it's a trivial fix, and will go out in the next update.
Feb 16 2007
On Thu, 15 Feb 2007 11:36:54 -0800 Gregor Richards <Richards codu.org> wrote:I see that I can't do this: char[] someCompileTimeFunction() { return "writefln(\"Wowza!\");"; } int main() { mixin(someCompileTimeFunction()); return 0; } Any chance of compile-time code generation via this mechanism? Or are they simply handled in different, incompatible steps of compilation?It seems to be a bug.- Gregor Richards PS: Yes, I realize this is a terrible idea ^^NO. It is the most exciting usage of this feature. I have been waiting for this for a long time, and can't wait to use it in really cool things. :) -- Witold Baryluk MAIL: baryluk smp.if.uj.edu.pl, baryluk mpi.int.pl JID: movax jabber.autocom.pl
Feb 16 2007
Walter Bright wrote:... is now in DMD 1.006. For example:Awesome!!!! Can't wait to try and play with it!! (downloading at ~ 11 KB/s)------------------------------------------- import std.stdio; real sqrt(real x) { real root = x / 2; for (int ntries = 0; ntries < 5; ntries++) { if (root * root - x == 0) break; root = (root + x / root) / 2; } return root; } void main() { static x = sqrt(10); // woo-hoo! set to 3.16228 at compile time! writefln("%s, %s", x, sqrt(10)); // this sqrt(10) runs at run time } ------------------------------------------This should obsolete using templates to compute values at compile time.
Feb 15 2007
Walter Bright wrote:... is now in DMD 1.006. For example:Very nice! A few questions: 1) You say recursion is allowed -- does it do proper tail recursion? 2) Can't you just ignore 'synchronized' at compile time? 3) Would it be possible to add some sort of a version(CompileTime)? This would make it possible for those who want to be *sure* the function is only used at compile time to simply have it not exist as a runtime call. It could also be used to make slight modifications to functions that one would like to use as both compile-time and run-time. For example if you want to have synchronized/try/catch/throw/writefln type things in the runtime version. --bb
Feb 15 2007
Bill Baxter wrote:Walter Bright wrote:Tail recursion is a performance optimization, it has no effect on semantics, so it's not an issue for compile time execution.... is now in DMD 1.006. For example:Very nice! A few questions: 1) You say recursion is allowed -- does it do proper tail recursion?2) Can't you just ignore 'synchronized' at compile time?Pure functions (the kind that can be compile time executed) don't need to be synchronized anyway.3) Would it be possible to add some sort of a version(CompileTime)? This would make it possible for those who want to be *sure* the function is only used at compile time to simply have it not exist as a runtime call. It could also be used to make slight modifications to functions that one would like to use as both compile-time and run-time. For example if you want to have synchronized/try/catch/throw/writefln type things in the runtime version.Once more, there is never a situation where a function *might* or *might not* get executed at compile time. A function is *always* executed at runtime unless it is in a situation where it *must* be executed at compile time, such as in an initializer for a global.
Feb 15 2007
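Walter's rule — a function runs at compile time only where the context requires a constant — can be illustrated with a short sketch (square is a hypothetical example; note the const-initializer case is reported as a 1.006 bug further down this thread):

```d
import std.stdio;

int square(int x) { return x * x; }

// These contexts *require* constants, so square() must be
// executed at compile time here:
const int c = square(5);   // global const initializer
int[square(3)] table;      // static array dimension

void main()
{
    int r = square(5);     // ordinary expression: runs at run time
    writefln("%s %s %s", c, table.length, r);
}
```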
Walter Bright wrote:Bill Baxter wrote:Right. But if I understand correctly, the same code can get called either at runtime or compile time depending on the situation. But what if I want the runtime version to print out a message. Or add to a global counter variable for some simple home-brew profiling stats. It would be handy then if I could do: int square(int x) { version(compiletime) {}else{ writefln("trace: square"); G_CallsToSquare++; } return (x*x); } I think I'm getting at the same kind of thing Andrei is talking about. I don't want to have to limit what I do in the runtime version in order to make it acceptable for compile time. He's saying I should be able to have two versions of the function, one for compile time and one for run time. I think that's useful too. But for simpler cases like the above, it would be nice I think if one could just version-out the bad parts. And in this case: int calculationIShouldOnlyDoAtCompileTime(int x) { // * whatever it is * } int K = calculationIShouldOnlyDoAtCompileTime(4); Whoops! Silly programmer, looks like I forgot the 'const' on K. Would be nice if I could get the compiler to remind me when I'm silly like that. That could be arranged if there were a version(CompileTime). I think one holy grail of this stuff (and one Lisp apparently attained in the 60's or 70's) is to just be able to treat everything as either compile-time or run-time depending on how much information you have. For instance, it's not uncommon to make a fixed length vector class using templates. Vec!(N) kind of thing. But it would really be nice if the majority of that code could remain even when N is not constant. That means both on the user side and on the implementation side. 
I don't know how realistic that is, but I often find myself sitting on the fence trying to decide -- do I make this a compile-time parameter, thereby cutting off all opportunity for runtime creation of a particular instance, or do I make it runtime, thereby cutting my and the compiler's opportunity to make several obvious optimizations. --bbWalter Bright wrote:Tail recursion is a performance optimization, it has no effect on semantics, so it's not an issue for compile time execution.... is now in DMD 1.006. For example:Very nice! A few questions: 1) You say recursion is allowed -- does it do proper tail recursion?2) Can't you just ignore 'synchronized' at compile time?Pure functions (the kind that can be compile time executed) don't need to be synchronized anyway.3) Would it be possible to add some sort of a version(CompileTime)? This would make it possible for those who want to be *sure* the function is only used at compile time to simply have it not exist as a runtime call. It could also be used to make slight modifications to functions that one would like to use as both compile-time and run-time. For example if you want to have synchronized/try/catch/throw/writefln type things in the runtime version.Once more, there is never a situation where a function *might* or *might not* get executed at compile time. A function is *always* executed at runtime unless it is in a situation where it *must* be executed at compile time, such as in an initializer for a global.
Feb 15 2007
Bill Baxter wrote:Right. But if I understand correctly, the same code can get called either at runtime or compile time depending on the situation.Yes.But what if I want the runtime version to print out a message. Or add to a global counter variable for some simple home-brew profiling stats. It would be handy then if I could do: int square(int x) { version(compiletime) {}else{ writefln("trace: square"); G_CallsToSquare++; } return (x*x); }I see what you want, but it isn't going to work the way the compiler is built. Compile-time interpretation occurs *after* semantic analysis, but versioning happens before. Right now, you're better off having two functions, one a wrapper around the other. After all, there isn't a version possible for inlining/not inlining, either.I think I'm getting at the same kind of thing Andrei is talking about. I don't want to have to limit what I do in the runtime version in order to make it acceptable for compile time. He's saying I should be able to have two versions of the function, one for compile time and one for run time. I think that's useful too. But for simpler cases like the above, it would be nice I think if one could just version-out the bad parts. And in this case: int calculationIShouldOnlyDoAtCompileTime(int x) { // * whatever it is * } int K = calculationIShouldOnlyDoAtCompileTime(4); Whoops! Silly programmer, looks like I forgot the 'const' on K. Would be nice if I could get the compiler to remind me when I'm silly like that. That could be arranged if there were a version(CompileTime).For compile time evaluation only, make the function instead a template that calls the function.For instance, it's not uncommon to make a fixed length vector class using templates. Vec!(N) kind of thing. But it would really be nice if the majority of that code could remain even when N is not constant. That means both on the user side and on the implementation side. 
I don't know how realistic that is, but I often find myself sitting on the fence trying to decide -- do I make this a compile-time parameter, thereby cutting off all opportunity for runtime creation of a particular instance, or do I make it runtime, thereby cutting my and the compiler's opportunity to make several obvious optimizations.Templates are supposed to fill that ecological niche.
Feb 15 2007
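Walter's template-wrapper suggestion for compile-time-only evaluation might look like this sketch (ctOnly and calc are hypothetical names):

```d
import std.stdio;

int calc(int x) { return x * x + 1; }

// A value template parameter must be a constant, so ctOnly!(calc(4))
// only compiles if calc(4) can be interpreted at compile time;
// there is no run-time entry point to forget a 'const' on.
template ctOnly(int value)
{
    const int ctOnly = value;
}

void main()
{
    const int K = ctOnly!(calc(4)); // error if calc can't be CTFE'd
    writefln("%s", K);
}
```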
Walter Bright wrote:Bill Baxter wrote:Yeh, it's not a big deal. I was more just curious since you recently added it back to DMD. And since it does affect the /effective/ semantics if you have a very deep recursion that overflows the stack. Or is the stack for compile-time execution actually on the heap? Either way, it might actually be /better/ to have the compile-time implementation have a limited stack so that infinite recursion errors result in stack overflows rather than just the compiler hanging. --bbWalter Bright wrote:Tail recursion is a performance optimization, it has no effect on semantics, so it's not an issue for compile time execution.... is now in DMD 1.006. For example:Very nice! A few questions: 1) You say recursion is allowed -- does it do proper tail recursion?
Feb 15 2007
Bill Baxter wrote:Yeh, it's not a big deal. I was more just curious since you recently added it back to DMD. And since it does affect the /effective/ semantics if you have a very deep recursion that overflows the stack. Or is the stack for compile-time execution is actually on the heap? Either way, it might actually be /better/ to have the compile-time implementation have a limited stack so that infinite recursion errors result in stack overflows rather than just the compiler hanging.Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. I'll have to figure out some way to eventually deal with this.
Feb 15 2007
Walter Bright wrote:Bill Baxter wrote: Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. I'll have to figure out some way to eventually deal with this.Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems. -Joel
Feb 15 2007
janderson wrote:Walter Bright wrote:Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. I'll have to figure out some way to eventually deal with this.Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Feb 15 2007
Walter Bright wrote:janderson wrote:That could be achieved with a watchdog process without changing the compiler, and it's more flexible. I think you just let the compiler go and crunch at it. Since you essentially have partial evaluation anyway, the execution process can be seen as extended to compile time. If you have a non-terminating program, that non-termination can naturally manifest itself during compilation=partial evaluation. AndreiWalter Bright wrote:Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. I'll have to figure out some way to eventually deal with this.Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Feb 15 2007
Andrei Alexandrescu (See Website For Email) wrote:Walter Bright wrote:That could be achieved with a watchdog process without changing the compiler, and it's more flexible. I think you just let the compiler go and crunch at it. Since you esentially have partial evaluation anyway, the execution process can be seen as extended to compile time. If you have a non-terminating program, that non-termination can be naturally manifest itself during compilation=partial evaluation.It would be nice though, if the compiler could trap sigint or something and spit out an error message about which part of the code it was trying to compile when you killed it. Otherwise debugging accidental infinite loops in compile time code becomes...interesting. --bb
Feb 16 2007
Bill Baxter wrote:Andrei Alexandrescu (See Website For Email) wrote:How about listing any CTFE with -v? That should be more reliable and useful in other ways too.Walter Bright wrote:That could be achieved with a watchdog process without changing the compiler, and it's more flexible. I think you just let the compiler go and crunch at it. Since you esentially have partial evaluation anyway, the execution process can be seen as extended to compile time. If you have a non-terminating program, that non-termination can be naturally manifest itself during compilation=partial evaluation.It would be nice though, if the compiler could trap sigint or something and spit out an error message about which part of the code it was trying to compile when you killed it. Otherwise debugging accidental infinite loops in compile time code becomes...interesting. --bb
Feb 16 2007
Dave wrote:Bill Baxter wrote:Along the same line, how about a timeout flag for unattended builds? As it is, DMD can now fail to error on bad code by just running forever. With template code it would seg-v sooner or later.Andrei Alexandrescu (See Website For Email) wrote:How about listing any CTFE with -v? That should be more reliable and useful in other ways too.Walter Bright wrote:That could be achieved with a watchdog process without changing the compiler, and it's more flexible. I think you just let the compiler go and crunch at it. Since you essentially have partial evaluation anyway, the execution process can be seen as extended to compile time. If you have a non-terminating program, that non-termination can naturally manifest itself during compilation=partial evaluation.It would be nice though, if the compiler could trap sigint or something and spit out an error message about which part of the code it was trying to compile when you killed it. Otherwise debugging accidental infinite loops in compile time code becomes...interesting. --bb
Feb 16 2007
Andrei Alexandrescu (See Website For Email) wrote:Walter Bright wrote:Completely agree, otherwise it contradicts your "right way to build a compiler" statement except with time as the metric instead of memory. Imagine the frustration of someone who has legitimate code and the compiler always craps out half-way through a looong makefile, or worse only sometimes craps out depending on machine load. I think the best solution is to list out any compile-time execution with the '-v' switch. That way if someone runs into this, they can throw -v and find out where it's happening.janderson wrote:That could be achieved with a watchdog process without changing the compiler, and it's more flexible. I think you just let the compiler go and crunch at it. Since you esentially have partial evaluation anyway, the execution process can be seen as extended to compile time. If you have a non-terminating program, that non-termination can be naturally manifest itself during compilation=partial evaluation.Walter Bright wrote:Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. 
I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. I'll have to figure out some way to eventually deal with this.Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.Andrei
Feb 16 2007
Dave wrote:Andrei Alexandrescu (See Website For Email) wrote:Yes, memory watching would be great. It is easy to write a script that watches dmd's memory and time consumed, and kills it past some configurable threshold.Walter Bright wrote:Completely agree, otherwise it contradicts your "right way to build a compiler" statement except with time as the metric instead of memory. Imagine the frustration of someone who has legitimate code and the compiler always craps out half-way through a looong makefile, or worse only sometimes craps out depending on machine load.janderson wrote:That could be achieved with a watchdog process without changing the compiler, and it's more flexible. I think you just let the compiler go and crunch at it. Since you esentially have partial evaluation anyway, the execution process can be seen as extended to compile time. If you have a non-terminating program, that non-termination can be naturally manifest itself during compilation=partial evaluation.Walter Bright wrote:Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. 
I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. I'll have to figure out some way to eventually deal with this.Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.I think the best solution is to list out any compile-time execution with the '-v' switch. That way if someone runs into this, they can throw -v and find out where it's happening.Sounds great. Andrei
Feb 16 2007
Walter Bright wrote:janderson wrote:I remember one of the first "facts of life" of computer science in college being that some problems can't be solved, and the example given was the 'halting problem'. If it runs until done or bust, that's okay with me. But for the purpose of distributing software, it would be a lot easier if either it was unbounded, or the bound was something that depended mostly or completely on the source code. That way I can compile on my super-duper development machine and if it doesn't "time out", I know whether it will fail on some user's second rate hardware or not. If the number used doesn't mean anything concrete, that's okay. I guess the simplest thing would be to bump a counter in some inner loop and cut out when it gets to a jillion. KevinWalter Bright wrote:Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. 
I'll have to figure out some way to eventually deal with this.Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Feb 15 2007
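The "bump a counter in some inner loop" idea from the post above could be sketched like this (a hypothetical interpreter fragment, not DMD's actual interpret.c):

```d
// Hypothetical step budget for the compile-time interpreter: the
// inner evaluation loop bumps a counter and gives up deterministically,
// independent of machine speed or load.
const long STEP_LIMIT = 100_000_000;

struct Interpreter
{
    long steps;

    // Called once per interpreted statement or expression node;
    // aborts interpretation once the budget is exhausted.
    void checkBudget()
    {
        if (++steps > STEP_LIMIT)
            throw new Exception(
                "compile-time execution exceeded step limit");
    }
}
```

Unlike a wall-clock timer, a step count gives the same pass/fail result on a loaded build machine as on an idle workstation.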
Walter Bright wrote:janderson wrote:Please don't. All sorts of things can affect performance, which means there is a remote possibility that it will fail one time out of 10. This is particularly the case for build machines where they may be doing lots of things at once. If it suddenly fails because of virtual memory thrashing or something, the programmer would get sent an annoying message "build failed". If it works then it needs to work every time. A counter or stack overflow of some sort would be much better. Even if not specifiable by the programmer. One way to use a timer though would be to display how long each bit took. That way the programmer would be able to figure out how to improve compile-time performance. -JoelWalter Bright wrote:Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. 
I'll have to figure out some way to eventually deal with this.Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Feb 16 2007
Walter Bright wrote:Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>.I'd definitely prefer a way to make it fail early. When I was first trying out v1.006 I modified the sqrt() example to an infinite loop, and made the fatal mistake of switching focus away from my terminal window. My system ground to a halt as DMD tried to allocate pretty much all free memory + swap. (That's over 2 GB!) It got so bad it took a few minutes to switch to a virtual console and 'killall dmd'. And even after that it was slow for a while until the running programs got swapped back in... Surely it would have been possible to detect something having gone horribly awry before it got this bad?If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine.My system is an AMD Sempron 3200+ with 1GB of RAM...I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.This might be a good idea, but perhaps make the time depend on a command-line switch, and maybe add something for memory usage as well?
Feb 16 2007
Walter Bright wrote:janderson wrote:This is exactly pertaining to compiler specific issues, but paralleling the above, why have "The total size of a static array cannot exceed 16Mb. A dynamic array should be used instead for such large arrays." in the Array spec. Isn't that kind of similar? 16Mb of static array data is a lot no doubt, but why put an arbitrary limit on it. Unless it's a DMD specific thing. JoeWalter Bright wrote:Whether you tell it to fail at a smaller limit, or it fails by itself at a smaller limit, doesn't make any difference as to whether it runs on a less powerful system or not <g>. The C standard has these "minimum translation limits" for all kinds of things - number of lines, chars in a string, expression nesting level, etc. It's all kind of bogus, hearkening back to primitive compilers that actually used fixed array sizes internally (Brand X, who-shall-not-be-named, was notorious for exceeding internal table limits, much to the delight of Zortech's sales staff). The right way to build a compiler is it either runs out of stack or memory, and that's the only limit. If your system is too primitive to run the compiler, you use a cross compiler running on a more powerful machine. I have thought of just putting a timer in the interpreter - if it runs for more than a minute, assume things have gone terribly awry and quit with a message.Right now, the compiler will fail if the compile time execution results in infinite recursion or an infinite loop. I'll have to figure out some way to eventually deal with this.Maybe you could allow the user to specify stack size and maximum iteration per loop/recursion function to the compiler as flags (with some defaults). This way the user can up the size if they really need it. This would make it a platform thing. That way a D compiler could still be made for less powerful systems.
Feb 17 2007
Walter Bright wrote:This should obsolete using templates to compute values at compile time.Wonderful feature, but look at this: int square(int x) { return x * x; } const int foo = square(5); al_id4.d(6): Error: cannot evaluate (square)(5) at compile time I was hoping this would make some C macros easier to replace, without needing template versions for initializing consts.
Feb 15 2007
torhu wrote:Walter Bright wrote:Aggh, that's a compiler bug. int foo = square(5); does work. I knew I'd never get it right the first try :-(This should obsolete using templates to compute values at compile time.Wonderful feature, but look at this: int square(int x) { return x * x; } const int foo = square(5); al_id4.d(6): Error: cannot evaluate (square)(5) at compile time I was hoping this would make some C macros easier to replace, without needing template versions for initializing consts.
Feb 15 2007
Walter Bright wrote:torhu wrote:Wait -- int foo = square(5); is supposed to CTFE? If so that'd be awesome.Walter Bright wrote:Aggh, that's a compiler bug. int foo = square(5); does work. I knew I'd never get it right the first try :-(This should obsolete using templates to compute values at compile time.Wonderful feature, but look at this: int square(int x) { return x * x; } const int foo = square(5); al_id4.d(6): Error: cannot evaluate (square)(5) at compile time I was hoping this would make some C macros easier to replace, without needing template versions for initializing consts.
Feb 16 2007
Dave wrote:I don't think so. If I'm not mistaken, D would do that at runtime at the moment. -- MichielAggh, that's a compiler bug. int foo = square(5); does work. I knew I'd never get it right the first try :-(Wait -- int foo = square(5); is supposed to CTFE? If so that'd be awesome.
Feb 16 2007
Michiel wrote:Dave wrote:Not for a global declaration, which must happen at compile time.I don't think so. If I'm not mistaken, D would do that at runtime at the moment.int foo = square(5); does work. I knew I'd never get it right the first try :-(Wait -- int foo = square(5); is supposed to CTFE? If so that'd be awesome.
Feb 16 2007
Michiel wrote:Dave wrote:It does that at runtime for variables in function-type scope. For global variables (as in the code posted) and IIRC aggregate members it determines initial values at compile time.I don't think so. If I'm not mistaken, D would do that at runtime at the moment.Aggh, that's a compiler bug. int foo = square(5); does work. I knew I'd never get it right the first try :-(Wait -- int foo = square(5); is supposed to CTFE? If so that'd be awesome.
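The distinction being discussed can be shown in a few lines of D (a minimal sketch of my own; `square` is just an example function, not code from the thread):

```d
import std.stdio;

int square(int x) { return x * x; }

// A module-scope initializer must be known at compile time,
// so this forces compile-time evaluation of square(5):
int g = square(5);

void main()
{
    int r = square(5);     // ordinary local variable: evaluated at run time
    static s = square(5);  // static initializer: a compile-time context
    writefln("%s %s %s", g, r, s);
}
```

The point is that CTFE is triggered by the *context* (global, static, or const initializers), not by anything about the function itself.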
Feb 16 2007
Walter Bright wrote:... is now in DMD 1.006. For example:This doesn't seem to work either -- should it? char[] UpToSpace(char[] x) { int i=0; while (i<x.length && x[i] != ' ') { i++; } return x[0..i]; } void main() { const y = UpToSpace("first space was after first"); writefln(y); } It prints out the whole string rather than just "first". If you change it to return 'i' it does correctly evaluate to 5. If you change it to just 'return x[0..5];' it also works correctly. --bb------------------------------------------- import std.stdio; real sqrt(real x) { real root = x / 2; for (int ntries = 0; ntries < 5; ntries++) { if (root * root - x == 0) break; root = (root + x / root) / 2; } return root; } void main() { static x = sqrt(10); // woo-hoo! set to 3.16228 at compile time! writefln("%s, %s", x, sqrt(10)); // this sqrt(10) runs at run time } ------------------------------------------This should obsolete using templates to compute values at compile time.
Feb 15 2007
Bill Baxter wrote:This doesn't seem to work either -- should it?Looks like it should. I'll check it out.
Feb 15 2007
Walter Bright wrote:... is now in DMD 1.006. For example:Any chance that concatenation onto a variable will be supported? I'd like to write this: char[] NReps(char[] x, int n) { char[] ret = ""; for(int i=0; i<n; i++) { ret ~= x; } return ret; } But that doesn't work. The recursive version does, though: char[] NReps2(char[] x, int n, char[] acc="") { if (n<=0) return acc; return NReps2(x, n-1, acc~x); } --bb------------------------------------------- import std.stdio; real sqrt(real x) { real root = x / 2; for (int ntries = 0; ntries < 5; ntries++) { if (root * root - x == 0) break; root = (root + x / root) / 2; } return root; } void main() { static x = sqrt(10); // woo-hoo! set to 3.16228 at compile time! writefln("%s, %s", x, sqrt(10)); // this sqrt(10) runs at run time } ------------------------------------------This should obsolete using templates to compute values at compile time.
Feb 15 2007
Bill Baxter wrote:Any chance that concatenation onto a variable will be supported?It should. I'll figure out what's going wrog, wring, worng, er, wrong.
Feb 15 2007
Bill Baxter wrote:I'd like to write this: char[] NReps(char[] x, int n) { char[] ret = ""; for(int i=0; i<n; i++) { ret ~= x; } return ret; } But that doesn't work.It does when I try it: --------------------- import std.stdio; char[] NReps(char[] x, int n) { char[] ret = ""; for(int i=0; i<n; i++) { ret ~= x; } return ret; } void main() { static x = NReps("3", 6); writefln(x); } ----------------------- prints: 333333
Feb 16 2007
Walter Bright wrote:Bill Baxter wrote:Doh! You're right. I must have had some other bad code commented in accidentally at the time. --bbI'd like to write this: char[] NReps(char[] x, int n) { char[] ret = ""; for(int i=0; i<n; i++) { ret ~= x; } return ret; } But that doesn't work.It does when I try it: --------------------- import std.stdio; char[] NReps(char[] x, int n) { char[] ret = ""; for(int i=0; i<n; i++) { ret ~= x; } return ret; } void main() { static x = NReps("3", 6); writefln(x); } ----------------------- prints: 333333
Feb 16 2007
Many others asked about explicitly running a function at compile time, and there is this template solution. Doesn't this again end up in a new type? I personally do not like the eval!( func() ) syntax. I think this is an important new feature, worth a new syntax. How about something like func!!() ? This new call syntax would force a function to run in the compiler.
Feb 15 2007
Frank Benoit (keinfarbton) wrote:many others asked about explicitly run at compile time, and there is this template solution. doesn't this again end up in a new type? I personally do not like the eval!( func() ) syntax. I think this is a new important feature, worth a new syntax. How about something like func!!() This new call syntax forces a function to run in the compiler.I like eval!(func()) for that. I keep thinking, though, that some new syntax would sure be nice for mixin(func!(arg)). It seems that things such as mixin(write!("foo %{bar} is %{baz}")); could potentially get very tiresome. If I have to write mixin(...) everywhere I'm probably just going to end up going with writefn("foo %s is %s", bar, baz); I was actually thinking of your func!!(). But eh. It's not so clear cut with all these possible forms of mixin() mixin("char x;"); mixin(func()); mixin(tfunc!()); mixin("char " ~ "x"); mixin("char " ~ func()); mixin("char " ~ tfunc!()); ... etc. --bb
Feb 15 2007
Bill Baxter wrote:Frank Benoit (keinfarbton) wrote:Definitely. That's exactly why dispatch of the same code through compile-time and run-time channels, depending on the argument, must be easy. Andreimany others asked about explicitly run at compile time, and there is this template solution. doesn't this again end up in a new type? I personally do not like the eval!( func() ) syntax. I think this is a new important feature, worth a new syntax. How about something like func!!() This new call syntax forces a function to run in the compiler.I like eval!(func()) for that. I keep thinking, though, that some new syntax would sure be nice for mixin(func!(arg)). It seems that things such as mixin(write!("foo %{bar} is %{baz}")); could potentially get very tiresome.
Feb 15 2007
On Thu, 15 Feb 2007 10:23:43 -0800, Walter Bright wrote:... is now in DMD 1.006. For example:I guess it's time I came clean and admitted that in spite of this being a huge technological advancement in the language, I can't see why I'd ever be needing it. I mean, when it comes down to it, it's just a fancy way of getting the compiler to calculate/generate literals that can be done by myself anyway, no? These literals are values that can be determined prior to writing one's code, right? This is not a troll posting, so can anyone enlighten me on how this ability will reduce the cost of maintaining code? I am very open to being educated. I'm thinking of the funny side of this too, when it comes to putting some DbC validity tests in your code... const float x = SomeCompileTimeFunc(1,2,3); assert (x == 34.56); // Make sure the function worked. -- Derek (skype: derek.j.parnell) Melbourne, Australia "Justice for David Hicks!" 16/02/2007 3:24:40 PM
Feb 15 2007
"Derek Parnell" <derek nomail.afraid.org> wrote in message news:15k9gflddwrnp.qzqynewwbuou$.dlg 40tude.net...I'm thinking of the funny side this too, when it comes to putting some DbC validity tests in your code... const float x = SomeCompileTimeFunc(1,2,3); assert (x == 34.56); // Make sure the function worked.You mean static assert(x == 34.56); ;)
Feb 15 2007
Derek Parnell wrote:On Thu, 15 Feb 2007 10:23:43 -0800, Walter Bright wrote:This is by far the least interesting application of this stuff. I don't even count it when I think of the feature. "Oh, yeah, I could compile square root at compile time. How quaint."... is now in DMD 1.006. For example:On Thu, 15 Feb 2007 10:23:43 -0800, Walter Bright wrote:... is now in DMD 1.006.I guess its time I came clean and admitted that in spite of this being a huge technological advancement in the language, I can't see why I'd ever be needing it. I mean, when it comes down to it, it's just a fancy way of getting the compiler to calculate/generate literals that can be done by myself anyway, no? These literals are values that can be determined prior to writing one's code, right?This is not a troll posting, so can anyone enlighten me on how this ability will reduce the cost of maintaining code? I am very open to being educated.Great. The main uses of the feature will be in creating libraries that work with and _on_ your code (in the most literal sense). I've given the regexp example a few posts back: using the straight regular expression syntax, you direct a library into generating optimal code for each and every of your regular expressions, without ever having to do anything about it. There's been also much discussion about the applications of code generation, and you can be sure they will be simplified by one order of magnitude by dual functions. Andrei
Feb 15 2007
On Thu, 15 Feb 2007 20:45:04 -0800, Andrei Alexandrescu (See Website For Email) wrote:There's been also much discussion about the applications of code generation, and you can be sure they will be simplified by one order of magnitude by dual functions.So this would mean that I could code ... mixin( Conv("moveto 34,56 " "drawto +100,-50 " "drawto +0,+100 " "pencolor red " "drawto -100,-50 " ); ); And expect that the Conv function will, at compile time, create the equivalent D code to implement the 2D drawn item for the target platform, and have the mixin insert it into the program being compiled. -- Derek (skype: derek.j.parnell) Melbourne, Australia "Justice for David Hicks!" 16/02/2007 4:06:53 PM
Feb 15 2007
Derek Parnell wrote:On Thu, 15 Feb 2007 20:45:04 -0800, Andrei Alexandrescu (See Website For Email) wrote:I'm thinking of the much more boring and much crappier job of generating proxy and stub code for RPC and IPC. AndreiThere's been also much discussion about the applications of code generation, and you can be sure they will be simplified by one order of magnitude by dual functions.So this would mean that I could code ... mixin( Conv("moveto 34,56 " "drawto +100,-50 " "drawto +0,+100 " "pencolor red " "drawto -100,-50 " ); ); And expect that the Conv function will, at compile time, create the equivalent D code to implement the 2D drawn item for the target platform, and have the mixin insert it into the program being compiled.
Feb 15 2007
Derek Parnell wrote:So this would mean that I could code ... mixin( Conv("moveto 34,56 " "drawto +100,-50 " "drawto +0,+100 " "pencolor red " "drawto -100,-50 " ); ); And expect that the Conv function will, at compile time, create the equivalent D code to implement the 2D drawn item for the target platform, and have the mixin insert it into the program being compiled.Exactamundo!
Feb 15 2007
Andrei Alexandrescu (See Website For Email) wrote:This is by far the least interesting application of this stuff. I don't even count it when I think of the feature. "Oh, yeah, I could compile square root at compile time. How quaint."I agree. I need a better example. Any ideas?
Feb 15 2007
Walter Bright wrote:Andrei Alexandrescu (See Website For Email) wrote:Well we talked about: int a = foo(); char[] b = bar(); print("a is $a and b is $b, dammit\n"); The interesting part is that this will also require you to screw in a couple of extra nuts & bolts (that were needed anyway). Smart enums (that know printing & parsing) are another example. But the print() example is simple, of immediate clear benefit, and suggestive of more powerful stuff. AndreiThis is by far the least interesting application of this stuff. I don't even count it when I think of the feature. "Oh, yeah, I could compile square root at compile time. How quaint."I agree. I need a better example. Any ideas?
Feb 15 2007
Andrei Alexandrescu (See Website For Email) wrote:Walter Bright wrote:Would this mean a type of function whose return value is automatically mixed in? This is getting awfully close to LISP macros... :)Andrei Alexandrescu (See Website For Email) wrote:Well we talked about: int a = foo(); char[] b = bar(); print("a is $a and b is $b, dammit\n"); The interesting part is that this will also require you to screw in a couple of extra nuts & bolts (that were needed anyway).This is by far the least interesting application of this stuff. I don't even count it when I think of the feature. "Oh, yeah, I could compile square root at compile time. How quaint."I agree. I need a better example. Any ideas?Smart enums (that know printing & parsing) are another example. But the print() example is simple, of immediate clear benefit, and suggestive of more powerful stuff.
Feb 16 2007
Andrei Alexandrescu (See Website For Email) wrote:Walter Bright wrote:But add a "!" to the print, and it's already possible? What extra is needed, and is that just to get rid of the "!"? L.Andrei Alexandrescu (See Website For Email) wrote:Well we talked about: int a = foo(); char[] b = bar(); print("a is $a and b is $b, dammit\n"); The interesting part is that this will also require you to screw in a couple of extra nuts & bolts (that were needed anyway).This is by far the least interesting application of this stuff. I don't even count it when I think of the feature. "Oh, yeah, I could compile square root at compile time. How quaint."I agree. I need a better example. Any ideas?
Feb 16 2007
Lionello Lunesu wrote:Andrei Alexandrescu (See Website For Email) wrote:You currently also need a mixin() around the print!().Walter Bright wrote:But add a "!" to the print, and it's already possible? What extra is needed, and is that just to get rid of the "!"?I agree. I need a better example. Any ideas?Well we talked about: int a = foo(); char[] b = bar(); print("a is $a and b is $b, dammit\n"); The interesting part is that this will also require you to screw in a couple of extra nuts & bolts (that were needed anyway).
Feb 16 2007
Frits van Bommel wrote:Lionello Lunesu wrote:Aha.. Or "before", right? mixin print!("......"); L.Andrei Alexandrescu (See Website For Email) wrote:You currently also need a mixin() around the print!().Walter Bright wrote:But add a "!" to the print, and it's already possible? What extra is needed, and is that just to get rid of the "!"?I agree. I need a better example. Any ideas?Well we talked about: int a = foo(); char[] b = bar(); print("a is $a and b is $b, dammit\n"); The interesting part is that this will also require you to screw in a couple of extra nuts & bolts (that were needed anyway).
Feb 16 2007
Lionello Lunesu wrote:Frits van Bommel wrote:I think the example requires a string mixin statement, which according to the spec means parentheses are required[1]. Note that it needs to access variables whose names are specified in the string argument. [1]: http://www.digitalmars.com/d/statement.html#MixinStatementLionello Lunesu wrote:Aha.. Or "before", right? mixin print!("......");Andrei Alexandrescu (See Website For Email) wrote:You currently also need a mixin() around the print!().Walter Bright wrote:But add a "!" to the print, and it's already possible? What extra is needed, and is that just to get rid of the "!"?I agree. I need a better example. Any ideas?Well we talked about: int a = foo(); char[] b = bar(); print("a is $a and b is $b, dammit\n"); The interesting part is that this will also require you to screw in a couple of extra nuts & bolts (that were needed anyway).
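For the curious, here is one rough sketch of how such a print could be assembled today from a CTFE function plus a string mixin. `genPrint` and its minimal `$name` parsing are my own illustration of the idea, not code from the thread:

```d
import std.stdio;

// CTFE-able code generator: turns "a is $a" into `writefln("a is %s", a);`.
// Variable names after '$' run up to the next space (a deliberately
// simplistic rule, just for the sketch).
char[] genPrint(char[] fmt)
{
    char[] format = "";
    char[] args = "";
    int i = 0;
    while (i < fmt.length)
    {
        if (fmt[i] == '$')
        {
            i++;
            char[] name = "";
            while (i < fmt.length && fmt[i] != ' ')
            {
                name ~= fmt[i];
                i++;
            }
            format ~= "%s";             // placeholder in the format string
            args ~= ", " ~ name;        // variable reference in the call
        }
        else
        {
            format ~= fmt[i];
            i++;
        }
    }
    return "writefln(\"" ~ format ~ "\"" ~ args ~ ");";
}

void main()
{
    int a = 42;
    char[] b = "bar";
    // The mixin statement is what lets the generated code see `a` and `b`:
    mixin(genPrint("a is $a and b is $b"));
}
```

This is exactly the "mixin() around the print" shape the posts above describe; a nicer surface syntax would only be sugar over it.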
Feb 16 2007
Walter Bright wrote:Andrei Alexandrescu (See Website For Email) wrote:(Sorry that this got so long -- it kind of turned into a duffel bag of things I've been thinking about.) I think the most common case is like the plot for a TV show. Code that is semi-interesting, semi-predictable, and semi-repetitive. If the code is too interesting, you would need to write it all. If it was too repetitive, you could just use a standard D template (boilerplate). Tasks that are in between -- non-boilerplate, but fairly 'formulaic', where the details are not interesting, is the candidate for this. To me there are a couple special reasons that stick out. 1. You're building complex types and need to base them on standard definitions that might change. I see this with ASN.1 definitions at work. We have a program that builds C++ code from ASN.1 or XML schemas. A. Some of our programs need to stream 100s of MB of data so this code needs to be as fast as possible. B. If a field is added to the definition it has to appear in all the code objects. C. There is additional logic -- i.e. if a 'mandatory' field is not assigned, the serializer has to throw an exception. 2. You're building the inner loop in some performance critical application and the rules (expressions and conditional logic) used there need or benefit from ultra-optimization. This is what (in my view) compile time regex is for, I would normally use a runtime regex for ordinary things like parsing configuration files. 3. You need a lot of code duplication (i.e. to provide stub functions something) and don't want to repeat yourself to get it. --- This is how I imagine it: Some of these ideas have been kicking around in my head, but I'm not sure how practical they are. When I use the word templates here but I mean any kind of code generation. Starting scenario: Let's say I'm writing a program to solve some mathematical task. 1. I create a top level class and add some members to it. 2. 
I add some sub-classes, a dozen or so, mostly just POD stuff. 3. Some of these have associative arrays, user-defined tree stuff, regular arrays, hand coded linked lists, etc. 4. I put in a bunch of graph theory code and number crunching stuff. Uses of metaprogramming: 1. Now let's say that this application is taking a while to run, so I decide to run it in steps and checkpoint the results to disk. - I write a simple template that can take an arbitrary class and write the pointer value and the class's data to disk. (Actual data is just strings and integers, so one template should cover all of these classes.) - For each internal container it can run over the members and do the same to every object with a distinct memory address. (One more template is needed for each container concept, like AA, list or array -- say 4 or 5 more templates.) It only writes each object once by tracking pointers in a set. - Another template that can read this stuff back in, and fix the pointers so that they link up correctly. (** All of this is much easier than normal, because I can generate types using typelists as a starting point. I think in C++ this is a bit tricky because it convolutes the structure definition -- recursively nested structs and all that; but with code generation, the "struct builder" can take a list of strings and pump out a struct whose definition looks exactly like a hand coded struct would look, but maybe with more utilitarian functionality since it's cheaper to add the automated stuff. **) Three or four templates later, I have a system for checkpointing any data structure (with a few exceptions like sockets etc.), to a string or stream. 2. I want to display this stuff to the user. I bang together another couple of templates that can show these kinds of code objects in a simple viewer. It works just like the last one, finding variable names and values and doing the writefln() types of tricks to give the user the details.
Some kind of browser lets me examine the process starting at the top. Maybe it looks a little like a flow chart and a little like a debugger's print of a structure. - I can define hooks in the important kinds of objects so they can override their own displays but simple data can work without much help. 3. I want to build a distributed compute farm for this numerical task. - I just need to change the serialization to stream the data objects over the web or sockets, or queue the objects in SQL tables. Some load balancing, etc. Another application that has the same class definitions can pull in the XML or ASN.1 or home-made serialization format. The trick here is that we need to be able to build templates that can inspect the objects and trees of objects in complex ways -- does this class contain a field named "password"; is this other field a computed value that can be thrown away. Does this other class override a method named 'optimizeForTransport'. Adding arbitrary attributes and arbitrary bits of code and annotation to the classes is not too hard to do, because my original code generation functions used typelists and had hooks for specifying special behavior. 4. I decide to allow my ten closest friends to help with the application by rewriting important subroutines. - Each person adds code for the application to an SQL database. A simple script can now pull code from the database and dump it to text files. This code can be imported into classes and run. - I can generate ten different versions of a critical loop and select which one to run at random. The timing output results is stored in a text file. Later compiles of the code do "arc-profiling" of entire algorithms or modules. KevinThis is by far the least interesting application of this stuff. I don't even count it when I think of the feature. "Oh, yeah, I could compile square root at compile time. How quaint."I agree. I need a better example. Any ideas?
Feb 16 2007
Derek Parnell wrote:I guess its time I came clean and admitted that in spite of this being a huge technological advancement in the language, I can't see why I'd ever be needing it. I mean, when it comes down to it, it's just a fancy way of getting the compiler to calculate/generate literals that can be done by myself anyway, no? These literals are values that can be determined prior to writing one's code, right? This is not a troll posting, so can anyone enlighten me on how this ability will reduce the cost of maintaining code? I am very open to being educated.It's a very good question, and I tried to answer it in the follow-on "Motivation for..." thread!
Feb 15 2007
Derek Parnell wrote:On Thu, 15 Feb 2007 10:23:43 -0800, Walter Bright wrote:It (sometimes) allows you to express things using the formulae that you used to derive them, which makes code more readable. It also allows you to express mathematically things that might depend on implementation-dependent parameters, or versions, or whatever. Say, like this: version(4K_PAGES) const int page_size = 4*1024; else const int page_size = 16*1024*1024; const int page_shift = eval!(log_base_2(page_size)); Sure, you could integrate page_shift into the version statement...but I think that the above is better. P.S. I would prefer the template compile_time!(value) over eval!(value) for readability reasons.... is now in DMD 1.006.I guess its time I came clean and admitted that in spite of this being a huge technological advancement in the language, I can't see why I'd ever be needing it. I mean, when it comes down to it, it's just a fancy way of getting the compiler to calculate/generate literals that can be done by myself anyway, no? These literals are values that can be determined prior to writing one's code, right? This is not a troll posting, so can anyone enlighten me on how this ability will reduce the cost of maintaining code? I am very open to being educated.
Feb 16 2007
Russell Lewis wrote:Say, like this: version(4K_PAGES) const int page_size = 4*1024; else const int page_size = 16*1024*1024; const int page_shift = eval!(log_base_2(page_size));Don't need eval!() for const declarations.Sure, you could integrate page_shift into the version statement...but I think that the above is better. P.S. I would prefer the template compile_time!(value) over eval!(value) for readability reasons.eval!() isn't even in the standard library! You can name it whatever you wish. In any case, I used the name "eval" simply because it was analogous to the eval function found in many scripting languages.
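For completeness, one plausible CTFE-friendly `log_base_2` (my sketch; the post above only assumes such a function exists):

```d
// Integer log base 2 by repeated right-shift; fine for the interpreter
// since it is just loops and arithmetic.
int log_base_2(int n)
{
    int result = 0;
    while (n > 1)
    {
        n >>= 1;
        result++;
    }
    return result;
}

const int page_size  = 4 * 1024;
const int page_shift = log_base_2(page_size); // 12, computed at compile time
```

As Walter notes, no eval!() wrapper is needed: a const initializer is already a compile-time context.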
Feb 16 2007
Walter Bright wrote:... is now in DMD 1.006. For example:Man this kicks ass!!! Its the best implementation we could hope for. -Joel------------------------------------------- import std.stdio; real sqrt(real x) { real root = x / 2; for (int ntries = 0; ntries < 5; ntries++) { if (root * root - x == 0) break; root = (root + x / root) / 2; } return root; } void main() { static x = sqrt(10); // woo-hoo! set to 3.16228 at compile time! writefln("%s, %s", x, sqrt(10)); // this sqrt(10) runs at run time } ------------------------------------------This should obsolete using templates to compute values at compile time.
Feb 15 2007
Walter Bright wrote:... is now in DMD 1.006. For example:Can I use the results of compile-time evaluatable functions in "static if", "pragma(msg)" ? This makes D a scripting language :) // This does not (yet) work though: bool func() { return true; } static if (func()) { pragma(msg, "true" ); } else { pragma(msg, "false" ); } L.------------------------------------------- import std.stdio; real sqrt(real x) { real root = x / 2; for (int ntries = 0; ntries < 5; ntries++) { if (root * root - x == 0) break; root = (root + x / root) / 2; } return root; } void main() { static x = sqrt(10); // woo-hoo! set to 3.16228 at compile time! writefln("%s, %s", x, sqrt(10)); // this sqrt(10) runs at run time } ------------------------------------------This should obsolete using templates to compute values at compile time.
Feb 16 2007
Hi, GREAT new feature ! 1) There is a bug with parentheses: --------------------------------------------- import std.stdio; template eval(A...) { alias A eval; } char[] trimfirst(char[] s) { int x = 0; foreach (char each; s) { if (each != ' ') return s[x .. $]; x++; } return s; } void main() { writefln(eval!(trimfirst(" test"))); writefln(trimfirst(" test")); } ---------------------------------------------test testSo you see, the compile-time version doesn't work. Now change line 9-10 to: if (each != ' ') { return s[x .. $]; } And voila ! Output is correct:test test2) Would it be possible to make this working ? writefln(eval!(std.string.stripl(" test"))); And all the other string functions from phobos, too ? Daniel
Feb 16 2007
Walter Bright wrote:... is now in DMD 1.006. For example:I didn't read the whole thread, but I just found some replies about how to turn off/on the automatic compile time execution. I would suggest, that the default behaviour will be "execute as much as possible during compile time", but that there are two keywords (possibly "runtime" and "compiletime") that will indicate what the method is, for example: runtime float sqrt() { ... } That would expand to that the function is _not_ executed during compile time, even if it would be able to. On the other hand, compiletime float sqrt() { ... } Will be executed during compile time (as the standard). But as a difference to the regular behaviour, a function declared as "compiletime" that may not be executed during compile time may throw a hint when compiling so that the developer may change it until there's no hint left. greetings Nicolai------------------------------------------- import std.stdio; real sqrt(real x) { real root = x / 2; for (int ntries = 0; ntries < 5; ntries++) { if (root * root - x == 0) break; root = (root + x / root) / 2; } return root; } void main() { static x = sqrt(10); // woo-hoo! set to 3.16228 at compile time! writefln("%s, %s", x, sqrt(10)); // this sqrt(10) runs at run time } ------------------------------------------This should obsolete using templates to compute values at compile time.
Feb 16 2007
Nicolai Waniek wrote:I didn't read the whole thread, but I just found some replies about how to turn off/on the automatic compile time execution. I would suggest, that the default behaviour will be "execute as much as possible during compile time", but that there are two keywords (possibly "runtime" and "compiletime") that will indicate what the method is, forI am in total agreement. -- Michiel
Feb 16 2007
I'm having a few problems getting some simple examples to work, for instance: ----- char[] foo() { return( "bar" ); } void main() { const char[] bar = foo(); } Assertion failure: 'parameters && parameters->dim == dim' on line 96 in file 'interpret.c' abnormal program termination ----- and ----- template eval( A... ) { alias A eval; } int square( int n ) { return( n * n ); } void main() { int bar = eval!( square( 5 ) ); } Error: cannot implicitly convert expression (tuple25) of type (int) to int ----- What am I doing wrong? Thanks, Brian Byrne Walter Bright wrote:... is now in DMD 1.006. For example:------------------------------------------- import std.stdio; real sqrt(real x) { real root = x / 2; for (int ntries = 0; ntries < 5; ntries++) { if (root * root - x == 0) break; root = (root + x / root) / 2; } return root; } void main() { static x = sqrt(10); // woo-hoo! set to 3.16228 at compile time! writefln("%s, %s", x, sqrt(10)); // this sqrt(10) runs at run time } ------------------------------------------This should obsolete using templates to compute values at compile time.
Feb 16 2007
Brian Byrne wrote:I'm having a few problems getting some simple examples to work, for instance: ----- char[] foo() { return( "bar" ); } void main() { const char[] bar = foo(); } Assertion failure: 'parameters && parameters->dim == dim' on line 96 in file 'interpret.c' abnormal program termination -----This bug has already been reported: http://d.puremagic.com/issues/show_bug.cgi?id=968
Feb 16 2007
Well, I just tried compiling a rather large codebase on 1.006; it takes forever and errors out with a 2+ GB executable sitting there. So I must have something going crazy under the hood. :) I will try to track it down.
Feb 20 2007
J Duncan wrote:Well I just tried compiling a rather large codebase on 1.006, it takes forever and errors out with a 2+ gig executable sitting there. So I must have something going crazy under the hood. :) I will try to track it down.I had a huge memory usage like that when I tried to compile a file with an infinite loop in a function executed at compile time...
Feb 20 2007
Sorry if this was already answered, but I can't find it... Is compile-time execution of library functions allowed? -- serg.
Feb 24 2007
Serg Kovrov wrote:Sorry if this was already answered, but I can't find it.. Is compile-time execution of library functions allowed?That depends on what you mean by "library functions". Obviously you mean a function in a library, but that's not really what matters here. The important question is whether the source is available to the compiler. It doesn't care where the compiled version ends up, because it doesn't use it. It just needs to see the source. So basically, for a library with declaration-only (no or incomplete implementation) "headers" the answer is no, for libraries that ship with full source (that's used to satisfy imports, so no .di modules) the answer is yes. Just like with normal source files.
Feb 24 2007
Frits van Bommel wrote:That depends on what you mean by "library functions". Obviously you mean a function in a library, but that's not really what matters here. The important question is whether the source is available to the compiler. It doesn't care where the compiled version ends up, because it doesn't use it. It just needs to see the source. So basically, for a library with declaration-only (no or incomplete implementation) "headers" the answer is no, for libraries that ship with full source (that's used to satisfy imports, so no .di modules) the answer is yes. Just like with normal source files.My question was general. C runtime functions, Phobos functions, any third party functions that come as `lib` files (either C or D). -- serg.
Feb 24 2007
Serg Kovrov wrote:My question was general. C runtime functions, Phobos functions, any third party functions that come as `lib` files (either C or D).You can only compile-time execute D functions that have their full source (and the full source of any functions they call) available to the compiler at compile-time.
Feb 24 2007
Tyler Knott wrote:You can only compile-time execute D functions that have their full source (and the full source of any functions they call) available to the compiler at compile-time.Yeah, I figured as much. So, to use math functions, for example, I would need to provide D source code for them. Sadly, there is not much I can do without at least the C runtime... -- serg.
Feb 24 2007
Serg Kovrov wrote:Tyler Knott wrote:It does limit what we can do, but it is more secure. In time, the functions you need may appear as compile-time-capable functions in D libs. Or, if you have the C source, you may be able to port them over. If they use assembly, that will make things more difficult. -JoelYou can only compile-time execute D functions that have their full source (and the full source of any functions they call) available to the compiler at compile-time.Yeah, I figured as much. So, to use math functions, for example, I would need to provide D source code for them. Sadly, there is not much I can do without at least the C runtime...
Feb 24 2007
janderson wrote:It does limit what we can do however it is more secure. In time the functions you need may appear as compile-timeable functions in D libs. Or if you have to C source, you may have a chance of being able to port them over. If they use assembly, then it will make things more difficult.I expect that, over time, the capability of the compile time function evaluator will improve. Some things are likely never to be evaluated, however: 1) inline assembly - this would require building a CPU emulator. That's an incredible amount of work for essentially 0 gain. 2) C code - c'mon, write it in D! 3) Functions only available in object form - see (1).
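The workaround suggested above can be sketched as follows (a hypothetical example, not from the thread): an `extern (C)` declaration exists only as object code, so the compiler cannot interpret it, but a pure-D port with visible source can run at compile time.

```d
// Only object code exists for this C function; the compiler has no
// source to interpret, so it can never be CTFE'd.
extern (C) double sqrt(double x);

// A pure-D replacement with visible source CAN be run at compile time.
// Newton's method; assumes x > 0, for illustration only.
double mySqrt(double x)
{
    double guess = x / 2;
    for (int i = 0; i < 20; i++)
        guess = (guess + x / guess) / 2;
    return guess;
}

// Forced compile-time evaluation via static assert:
static assert(mySqrt(4.0) > 1.99 && mySqrt(4.0) < 2.01);
// static assert(sqrt(4.0) == 2.0); // error: no source available for sqrt
```

The same `mySqrt` remains callable at runtime; which mode is used depends entirely on the context of the call.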
Feb 24 2007