digitalmars.D - Nim Nuggets: Nim talk at Strange Loop 2021
- jfondren (3/3) Oct 16 2021 https://www.youtube.com/watch?v=d2VRuZo2pdA
- Araq (4/7) Oct 16 2021 Ha, good one. So where is D's hygienic AST macro system that lets
- Petar Kirov [ZombineDev] (4/12) Oct 16 2021 D may not have an AST macro system but has had
- IGotD- (3/6) Oct 17 2021 Why do people call it "hygienic macros"? It feels like sales
- Timon Gehr (5/7) Oct 17 2021 "Hygienic" means it avoids name clashes/confusion between parameters and...
- Stefan Koch (5/13) Oct 17 2021 I agree that D's meta-programming isn't where it should be.
- jfondren (73/81) Oct 17 2021 I like how Andrei talks about mixin in
- Imperatorn (2/7) Oct 17 2021 D rox
- Paulo Pinto (3/6) Oct 16 2021 If you mean compile time evaluation, D wasn't the first with such
- Walter Bright (11/18) Oct 17 2021 Lisp is fundamentally different. It started out as an interpreter and la...
- Imperatorn (6/28) Oct 17 2021 And still, in 2021 using C++20, compile time features are
- Ola Fosheim Grøstad (7/12) Oct 17 2021 How so? Anyway, what Walter said was not accurate. Languages are
- Imperatorn (8/21) Oct 17 2021 I'm not sure if you write C++ or not. But if you do a side by
- Ola Fosheim Grøstad (7/9) Oct 17 2021 Not sure what you mean, but in C++ floating point is considered
- Imperatorn (7/16) Oct 17 2021 Maybe we're talking about different things.
- Ola Fosheim Grøstad (7/8) Oct 17 2021 Well, you can't because pow() is not known at compile time (in a
- Bastiaan Veelo (5/14) Oct 18 2021 We can leave floating point out of the discussion. This is
- Ola Fosheim Grøstad (3/5) Oct 18 2021 std::pow() is floating point: «If any argument has integral type,
- Imperatorn (5/11) Oct 18 2021 Still. The question was not about pow 😅
- Ola Fosheim Grøstad (14/17) Oct 18 2021 There are many facets of C++. One is that the original C++
- Walter Bright (15/22) Oct 17 2021 You're confusing data flow analysis and function inlining with ctfe. I k...
- max haughton (4/10) Oct 17 2021 https://gcc.godbolt.org/z/jr5W6rY1W
- Walter Bright (24/38) Oct 17 2021 ------------------------
- max haughton (6/40) Oct 17 2021 I see. I thought you meant the constant-folding power rather than
- Walter Bright (3/5) Oct 17 2021 More precisely, I mean anything that is specified as a `constant-express...
- Patrick Schluter (4/16) Oct 17 2021 put the square function in another compilation unit (i.e. its
- Ola Fosheim Grøstad (9/12) Oct 18 2021 There were C/C++ compilers that did whole program optimization
- Paulo Pinto (8/30) Oct 17 2021 Maybe you should have read more SIGPLAN papers then, there are
- russhy (6/41) Oct 17 2021 you are the one that started with:
- Paulo Pinto (13/58) Oct 17 2021 Was I corrected, really?
- jfondren (44/52) Oct 17 2021 I don't know of your other examples, but Common Lisp and Template
- russhy (3/5) Oct 17 2021 Nobody did that, you were the one..
- Paulo Pinto (6/12) Oct 18 2021 Wrong, it was in response to "About 50% of it is inadvertent
- jfondren (26/31) Oct 18 2021 There are two twin siblings. One is praised for having good looks
- Ola Fosheim Grøstad (7/16) Oct 18 2021 Dynamic languages can in general support "compile time function
- Walter Bright (10/12) Oct 18 2021 It seems strange that for something so widely known, it never came up in...
- Adam D Ruppe (2/3) Oct 18 2021 Don't throw stones from glass houses!
- H. S. Teoh (16/20) Oct 18 2021 Yeah... at one point I really wanted to show my coworkers what D was
- Tejas (3/13) Oct 18 2021 Fingers crossed for `core.reflect` and `core.codegen` making it
- Elronnd (5/9) Oct 17 2021 Lisp has pretty much always been compiled. EG. maclisp (a very
- jfondren (9/17) Oct 17 2021 I mean 50% of the whole thing, and without concern for who's
- Imperatorn (2/5) Oct 17 2021 Does it mention D anywhere?
- Deech (11/14) Oct 17 2021 Speaker here, thanks for watching and for your thoughts. In
- Imperatorn (4/20) Oct 17 2021 Nice to see you here! Didn't think anyone cared about D anymore 🍀
https://www.youtube.com/watch?v=d2VRuZo2pdA

About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.
Oct 16 2021
On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
> https://www.youtube.com/watch?v=d2VRuZo2pdA About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.

Ha, good one. So where is D's hygienic AST macro system that lets you reflect over types? No, string mixins and experimental std.reflection in some offside branch don't count.
Oct 16 2021
On Sunday, 17 October 2021 at 05:26:25 UTC, Araq wrote:
> On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
>> https://www.youtube.com/watch?v=d2VRuZo2pdA About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.
>
> Ha, good one. So where is D's hygienic AST macro system that lets you reflect over types? No, string mixins and experimental std.reflection in some offside branch don't count.

D may not have an AST macro system but has had full compile-time type introspection for probably a decade (if not more). What's your point?
Oct 16 2021
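A minimal sketch of the kind of compile-time type introspection being referred to, for readers who don't write D; the `Point` struct and the `describe` helper are invented for illustration and are not taken from any post in the thread:

```d
import std.stdio : writeln;
import std.traits : FieldNameTuple;

struct Point
{
    int x;
    int y;
    string label;
}

// Builds a textual description of T's fields entirely at compile time.
string describe(T)()
{
    string s = T.stringof ~ ":";
    static foreach (name; FieldNameTuple!T)
    {
        // Field types and names are available to ordinary code via traits.
        s ~= "\n  " ~ typeof(__traits(getMember, T, name)).stringof ~ " " ~ name;
    }
    return s;
}

void main()
{
    // `enum` forces compile-time evaluation; the string is baked into the binary.
    enum report = describe!Point();
    writeln(report);
}
```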
On Sunday, 17 October 2021 at 05:26:25 UTC, Araq wrote:
> Ha, good one. So where is D's **hygienic AST macro** system that lets you reflect over types? No, string mixins and experimental std.reflection in some offside branch don't count.

Why do people call it "hygienic macros"? It feels like a sales pitch trying to sell something that is the exact opposite.
Oct 17 2021
On 17.10.21 12:33, IGotD- wrote:
> Why do people call it "hygienic macros"?

"Hygienic" means it avoids name clashes/confusion between parameters and temporaries declared in the macro and identically-named variables at the call site.

https://en.wikipedia.org/wiki/Hygienic_macro
Oct 17 2021
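To make the term concrete, here is a small D sketch of the clash that hygiene is meant to prevent, using a string mixin as the unhygienic stand-in; the `swapViaTmp` helper and the variable names are invented for illustration:

```d
// Returns code that swaps two variables through a temporary named `tmp`.
string swapViaTmp(string a, string b)
{
    return "auto tmp = " ~ a ~ "; " ~ a ~ " = " ~ b ~ "; " ~ b ~ " = tmp;";
}

void main()
{
    int x = 1, y = 2;
    {
        mixin(swapViaTmp("x", "y")); // expands to: auto tmp = x; x = y; y = tmp;
        assert(x == 2 && y == 1);
    }

    int tmp = 42;
    // mixin(swapViaTmp("x", "y")); // would not compile: declares a second `tmp`
    // in this scope. A hygienic macro system renames its internal temporaries,
    // so its expansion can neither collide with nor capture caller names.
}
```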
On Sunday, 17 October 2021 at 05:26:25 UTC, Araq wrote:
> On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
>> https://www.youtube.com/watch?v=d2VRuZo2pdA About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.
>
> Ha, good one. So where is D's hygienic AST macro system that lets you reflect over types? No, string mixins and experimental std.reflection in some offside branch don't count.

I agree that D's meta-programming isn't where it should be. My work is called core.reflect though, not std.reflection. I think that, apart from views on significant whitespace, Nim and D shouldn't be too far apart.
Oct 17 2021
On Sunday, 17 October 2021 at 05:26:25 UTC, Araq wrote:
> On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
>> https://www.youtube.com/watch?v=d2VRuZo2pdA About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.
>
> Ha, good one. So where is D's hygienic AST macro system that lets you reflect over types? No, string mixins and experimental std.reflection in some offside branch don't count.

I like how Andrei talks about mixin in https://youtu.be/WsgW4HJXEAg?t=3354 — "and like an idiot you get to put a string in and give it to the compiler to compile for you. Which sounds ridiculous, right?" — while the slide adds that this "isn't glamorous".

But in Common Lisp you are also "like an idiot, putting a list in and giving it to the compiler to compile for you". Lists have structure, but a typical macro is heavy on quoting and you can read the quoted parts to understand what the resulting code would look like. The 'code' of macros, the parts that aren't quoted but are doing work like constructing lists or modifying them, are generally also perfectly normal Common Lisp code; they just happen to be constructing lists of code rather than lists of numbers or strings. If you want to write or maintain a simple macro in Common Lisp, you don't need specialized knowledge: it is enough to know Common Lisp.

What Nim has is more like browser DOM manipulation: knowing Nim is not enough, either to write a macro or maintain a macro. You must also have copious specialized macro knowledge about nnkSmthng and nnkSmthngElseTy. The 'easy' way to approach Nim macros is to perform the equivalent of opening a browser's developer inspection tool on the desired Nim code.

Common Lisp's macros scale from absolutely trivial

```lisp
(defmacro sleep-units (value unit)
  `(sleep
    (* ,value
       ,(case unit
          ((s) 1)
          ((m) 60)
          ((h) 3600)
          ((d) 86400)
          ((ms) 1/1000)
          ((us) 1/1000000)))))
```

to whatever this is: https://github.com/thephoeron/let-over-lambda/blob/master/let-over-lambda.lisp#L356

For trivial cases, Nim has templates. Why? Why not remove templates from the language and tell people to use macros for trivial cases as well?

Or suppose that during a presentation this code were put on the screen:

```d
struct Object {
    float[2] position, velocity, facing;
    float size;
}

struct Player {
    mixin parent!Object;
    int hp;
}

mixin template parent(Struct) {
    static foreach (i, alias f; Struct.tupleof) {
        mixin("typeof(f) ", __traits(identifier, f), " = Struct.init.tupleof[i];");
    }
}

void main() {
    import std.stdio : writeln;
    writeln(Player([0, 0], [0, 0], [0, -1], 5.0, 100));
}
```

I could say "as you can see, I'm looping here over the members of the given struct, and am injecting new field definitions that have the same types, names, and initial values that the struct has." And the "as you can see" would be literal, not ironic. People really can look at that mixin and see what it does.

Do you think deech could've said something like that about https://github.com/deech/NimNuggets/blob/master/backup/migrationmacros.nim#L11 ? Do you think including that code in his presentation would've done more or less to sell Nim to the audience?

Nim macros are more capable than string mixins, but Nim's metaprogramming story is really not that enviable.
Oct 17 2021
On Sunday, 17 October 2021 at 11:10:16 UTC, jfondren wrote:
> On Sunday, 17 October 2021 at 05:26:25 UTC, Araq wrote:
>> [...]
>
> I like how Andrei talks about mixin in https://youtu.be/WsgW4HJXEAg?t=3354
>
> [...]

D rox
Oct 17 2021
On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
> https://www.youtube.com/watch?v=d2VRuZo2pdA About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.

If you mean compile time evaluation, D wasn't the first with such features and it is quite far from what Common Lisp is capable of.
Oct 16 2021
On 10/16/2021 11:05 PM, Paulo Pinto wrote:
> On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
>> https://www.youtube.com/watch?v=d2VRuZo2pdA About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.
>
> If you mean compile time evaluation, D wasn't the first with such features and it is quite far from what Common Lisp is capable of.

Lisp is fundamentally different. It started out as an interpreter and later added native code generation. There never really was a difference between compile time and runtime for Lisp.

Nobody even thought of compile time function evaluation for C and C++ until D did it. Nobody. As evidence, when people discovered that C++ templates could be used to evaluate things at compile time, everyone was completely agog over it. I never heard *anyone* suggest that maybe ordinary functions could do this, too.

Now everyone does it. Even C is considering adding it.
Oct 17 2021
On Sunday, 17 October 2021 at 08:15:26 UTC, Walter Bright wrote:
> On 10/16/2021 11:05 PM, Paulo Pinto wrote:
>> If you mean compile time evaluation, D wasn't the first with such features and it is quite far from what Common Lisp is capable of.
>
> Lisp is fundamentally different. It started out as an interpreter and later added native code generation. There never really was a difference between compile time and runtime for Lisp.
>
> Nobody even thought of compile time function evaluation for C and C++ until D did it. Nobody. As evidence, when people discovered that C++ templates could be used to evaluate things at compile time, everyone was completely agog over it. I never heard *anyone* suggest that maybe ordinary functions could do this, too.
>
> Now everyone does it. Even C is considering adding it.

And still, in 2021 using C++20, compile time features are severely crippled. I tried just some days ago doing some simple math pow stuff and the answer I got from the C++ community was "eh, well you'd have to use a constexpr compatible library for that".
Oct 17 2021
On Sunday, 17 October 2021 at 08:20:24 UTC, Imperatorn wrote:
> And still, in 2021 using C++20, compile time features are severely crippled.

How so? Anyway, what Walter said was not accurate. Languages are usually defined in a way where compile-time optimizations are optional and an implementation detail. Compile time evaluation of functions is nothing new, and a common optimization.

> I tried just some days ago doing some simple math pow stuff and the answer I got from the C++ community was "eh, well you'd have to use a constexpr compatible library for that"

Because pow may be system specific at runtime. A compile time optimization should not lead to a different outcome.
Oct 17 2021
On Sunday, 17 October 2021 at 10:03:12 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 17 October 2021 at 08:20:24 UTC, Imperatorn wrote:
>> And still, in 2021 using C++20, compile time features are severely crippled.
>
> How so? Anyway, what Walter said was not accurate. Languages are usually defined in a way where compile-time optimizations are optional and an implementation detail. Compile time evaluation of functions is nothing new, and a common optimization.
>
>> I tried just some days ago doing some simple math pow stuff and the answer I got from the C++ community was "eh, well you'd have to use a constexpr compatible library for that"
>
> Because pow may be system specific at runtime. A compile time optimization should not lead to a different outcome.

I'm not sure if you write C++ or not. But if you do a side by side ct/meta comparison of common things you do in D vs C++, C++ is really behind when it comes to convenience, if it's even possible.

Well, pow was just a random example, but as an example, what would the C++ version look like?
Oct 17 2021
On Sunday, 17 October 2021 at 11:14:15 UTC, Imperatorn wrote:
> Well pow was just a random example, but as an example, what would the C++ version look like?

Not sure what you mean, but in C++ floating point is considered system specific. So the same executable could yield different results on different machines for more complex constructs. Since pow() could be provided by the OS/CPU there is no portable optimization for it. Basically, the result is unknown at compile time.
Oct 17 2021
On Sunday, 17 October 2021 at 14:39:08 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 17 October 2021 at 11:14:15 UTC, Imperatorn wrote:
>> Well pow was just a random example, but as an example, what would the C++ version look like?
>
> Not sure what you mean, but in C++ floating point is considered system specific. So the same executable could yield different results on different machines for more complex constructs. Since pow() could be provided by the OS/CPU there is no portable optimization for it. Basically, the result is unknown at compile time.

Maybe we're talking about different things.

What I mean is basically this (stupid example, but just for the sake of discussion): https://run.dlang.io/is/S3mZwq

How would you write it in C++ without modifying the stdlib?
Oct 17 2021
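The run.dlang.io snippet above isn't preserved in this archive, so the following is only a guess at the general shape of the example being discussed: an ordinary integer power function, written as plain D and evaluated by the compiler. The `ipow` name and the constants are invented for illustration, not taken from the link:

```d
// Plain runtime-looking function; nothing about it is marked "compile time".
int ipow(int base, uint exp)
{
    int result = 1;
    foreach (_; 0 .. exp)
        result *= base;
    return result;
}

// `enum` forces CTFE: the compiler runs ipow and bakes in the result.
enum cubed = ipow(3, 3);
static assert(cubed == 27);

void main() {}
```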
On Sunday, 17 October 2021 at 14:47:15 UTC, Imperatorn wrote:
> How would you write it in C++ without modifying the stdlib?

Well, you can't, because pow() is not known at compile time (in a portable way). It is only known, in the general case, when executed. If you allow it to be evaluated at compile time you risk a compile-time pow(x,y) value being different from a runtime pow(x,y) value even though the parameters are exactly the same. Which could lead to bugs (like comparing for equality).
Oct 17 2021
On Sunday, 17 October 2021 at 16:10:56 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 17 October 2021 at 14:47:15 UTC, Imperatorn wrote:
>> How would you write it in C++ without modifying the stdlib?
>
> Well, you can't, because pow() is not known at compile time (in a portable way). It is only known, in the general case, when executed. If you allow it to be evaluated at compile time you risk a compile-time pow(x,y) value being different from a runtime pow(x,y) value even though the parameters are exactly the same. Which could lead to bugs (like comparing for equality).

We can leave floating point out of the discussion. This is integer arithmetic.

— Bastiaan.
Oct 18 2021
On Monday, 18 October 2021 at 14:23:07 UTC, Bastiaan Veelo wrote:
> We can leave floating point out of the discussion. This is integer arithmetic.

std::pow() is floating point: «If any argument has integral type, it is cast to double.»
Oct 18 2021
On Monday, 18 October 2021 at 15:39:32 UTC, Ola Fosheim Grøstad wrote:
> On Monday, 18 October 2021 at 14:23:07 UTC, Bastiaan Veelo wrote:
>> We can leave floating point out of the discussion. This is integer arithmetic.
>
> std::pow() is floating point: «If any argument has integral type, it is cast to double.»

Still. The question was not about pow 😅

It was about how poor C++ is at doing stuff at compile time in general.
Oct 18 2021
On Monday, 18 October 2021 at 15:45:18 UTC, Imperatorn wrote:
> Still. The question was not about pow 😅

Choose a better example then. :-)

> It was about how poor C++ is at doing stuff at compile time in general.

There are many facets of C++. One is that the original C++ compilation model is built around hand-tuned separate compilation to handle very large projects. Most projects are not all that large, certainly none of my C++ projects. And what was large in the 90s is different from what is considered large now. So, that is the downside of using languages like C++ which have a long history. Although, I think this is a weakness that affects D too.

Anyway, most of the stuff I want to do at compile time in C++ can be done as constexpr. Although for source generation I sometimes prefer to use Python to generate C++ source, as I find it more readable to have explicit code rather than meta code in the source.
Oct 18 2021
On 10/17/2021 3:03 AM, Ola Fosheim Grøstad wrote:
> On Sunday, 17 October 2021 at 08:20:24 UTC, Imperatorn wrote:
>> And still, in 2021 using C++20, compile time features are severely crippled.
>
> How so? Anyway, what Walter said was not accurate. Languages are usually defined in a way where compile-time optimizations are optional and an implementation detail. Compile time evaluation of functions is nothing new, and a common optimization.

You're confusing data flow analysis and function inlining with ctfe. I know how data flow optimizers work, I was the first to implement one for DOS compilers in the 80s. C and C++ compilers have enjoyed the best optimizers in the industry for decades. But not one of them could compile:

    int square(int x) { return x * x; }

    const int s = square(2);

Being able to do such NEVER occurred to anyone in the business. It's one of those ridiculously obvious things that nobody thought of for decades. It's so obvious people today cannot even conceive of not thinking of it. I developed compilers with optimizers for 20 years before it occurred to me.

Implementing it around 2007, it was like a bomb went off in the D community. (Don Clugston was the first to recognize what it could do.) From there it spread to C++ and pretty much every other compiled language.
Oct 17 2021
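For contrast with the C rejection described above, a minimal sketch of the same snippet on the D side, where a module-level constant may be initialized by an ordinary function call through CTFE; the extra `enum`/`static assert` lines are illustrative additions, not part of Walter's example:

```d
int square(int x) { return x * x; }

const int s = square(2);       // accepted: the initializer is evaluated at compile time
enum alsoConst = square(2);    // `enum` makes the compile-time intent explicit
static assert(alsoConst == 4); // checked by the compiler, no runtime code involved

void main() {}
```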
On Sunday, 17 October 2021 at 19:18:25 UTC, Walter Bright wrote:
> On 10/17/2021 3:03 AM, Ola Fosheim Grøstad wrote:
>> [...]
>
> You're confusing data flow analysis and function inlining with ctfe. I know how data flow optimizers work, I was the first to implement one for DOS compilers in the 80s. [...]

https://gcc.godbolt.org/z/jr5W6rY1W

If I read you correctly, gcc circa '06 could do what you describe. Or do you mean in windows land?
Oct 17 2021
On 10/17/2021 2:11 PM, max haughton wrote:
> On Sunday, 17 October 2021 at 19:18:25 UTC, Walter Bright wrote:
>> [...]
>
> https://gcc.godbolt.org/z/jr5W6rY1W
>
> If I read you correctly, gcc circa '06 could do what you describe. Or do you mean in windows land?

------------------------
mercury> cat test.c
int square(int x) { return x * x; }

const int c = square(2);
mercury> cc test.c -O3
test.c:3:1: error: initializer element is not constant
 const int c = square(2);
 ^
mercury> cc test.c --version
cc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-------------------------

Your example is fundamentally different:

-------------------------
int square(int x) { return x * x; }

int main() {
    const int s = square(2);   <== not a constant-expression
    return s;
}
----------------
Oct 17 2021
On Sunday, 17 October 2021 at 22:53:29 UTC, Walter Bright wrote:
> On 10/17/2021 2:11 PM, max haughton wrote:
>> [...]
>>
>> https://gcc.godbolt.org/z/jr5W6rY1W
>>
>> If I read you correctly, gcc circa '06 could do what you describe. Or do you mean in windows land?
>
> [...]
>
> Your example is fundamentally different:
>
> [...]

I see. I thought you meant the constant-folding power rather than initializing something static. "The manner and timing of static initialization" is listed as unspecified behaviour in the C11 standard; it's curious why this restriction persists.
Oct 17 2021
On 10/17/2021 4:06 PM, max haughton wrote:
> I see. I thought you meant the constant-folding power rather than initializing something static.

More precisely, I mean anything that is specified as a `constant-expression` in the C Standard.
Oct 17 2021
On Sunday, 17 October 2021 at 21:11:34 UTC, max haughton wrote:
> On Sunday, 17 October 2021 at 19:18:25 UTC, Walter Bright wrote:
>> On 10/17/2021 3:03 AM, Ola Fosheim Grøstad wrote:
>>> [...]
>>
>> You're confusing data flow analysis and function inlining with ctfe. I know how data flow optimizers work, I was the first to implement one for DOS compilers in the 80s. [...]
>
> https://gcc.godbolt.org/z/jr5W6rY1W
>
> If I read you correctly, gcc circa '06 could do what you describe. Or do you mean in windows land?

Put the square function in another compilation unit (i.e. its source is not available), and see if it can optimize it. ;-) That's the difference between CTFE and inlining.
Oct 17 2021
On Monday, 18 October 2021 at 06:48:25 UTC, Patrick Schluter wrote:
> Put the square function in another compilation unit (i.e. its source is not available), and see if it can optimize it. ;-) That's the difference between CTFE and inlining.

There were C/C++ compilers that did whole program optimization (IR) in the 90s, but the key difference between optimization of function calls and compile time function evaluation is when the evaluation affects the type (enum being one simple example, static array length being another). In this case it has to be required by the language, so it is no longer an implementation detail.
Oct 18 2021
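A small D sketch of the distinction being drawn here: when the result of a function call is part of a *type*, the language has to guarantee compile-time evaluation, which an optimizer that merely inlines calls cannot provide. The `countBits` function and the array below are invented for illustration:

```d
// Ordinary function, nothing compile-time-specific about it.
size_t countBits(size_t n)
{
    size_t bits = 0;
    for (; n != 0; n >>= 1)
        ++bits;
    return bits;
}

// The static array's length is the result of a function call, so the
// compiler *must* evaluate countBits(255) before it can form the type.
ubyte[countBits(255)] buffer;            // has type ubyte[8]
static assert(buffer.length == 8);

void main() {}
```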
On Sunday, 17 October 2021 at 08:15:26 UTC, Walter Bright wrote:
> On 10/16/2021 11:05 PM, Paulo Pinto wrote:
>> [...]
>
> Lisp is fundamentally different. It started out as an interpreter and later added native code generation. There never really was a difference between compile time and runtime for Lisp.
>
> Nobody even thought of compile time function evaluation for C and C++ until D did it. Nobody. As evidence, when people discovered that C++ templates could be used to evaluate things at compile time, everyone was completely agog over it. I never heard *anyone* suggest that maybe ordinary functions could do this, too.
>
> Now everyone does it. Even C is considering adding it.

Maybe you should have read more SIGPLAN papers then; there are several examples of those capabilities.

Always screaming that D did it before C and C++ is not going to increase D's userbase, especially since they get the feature without leaving their ecosystem.

Maybe instead of complaining about who did it first, the community should focus on fixing all the loose ends.
Oct 17 2021
On Sunday, 17 October 2021 at 11:04:26 UTC, Paulo Pinto wrote:
> On Sunday, 17 October 2021 at 08:15:26 UTC, Walter Bright wrote:
>> [...]
>
> Maybe you should have read more SIGPLAN papers then; there are several examples of those capabilities.
>
> Always screaming that D did it before C and C++ is not going to increase D's userbase, especially since they get the feature without leaving their ecosystem.
>
> Maybe instead of complaining about who did it first, the community should focus on fixing all the loose ends.

You are the one that started with: "X did it first". Then you complain when people correct you, and then you say it's useless to focus on "who did it first". Come on!
Oct 17 2021
On Sunday, 17 October 2021 at 14:16:51 UTC, russhy wrote:
> On Sunday, 17 October 2021 at 11:04:26 UTC, Paulo Pinto wrote:
>> [...]
>
> You are the one that started with: "X did it first". Then you complain when people correct you, and then you say it's useless to focus on "who did it first". Come on!

Was I corrected, really?

The very first compiler for Lisp was created in 1960 for the IBM 704. Common Lisp was just one example among others; here are a few more.

- Dylan, released to the public in 1995
- PL/I included a macro subset, released in 1964
- Template Haskell, initially prototyped in 2002
- <bigwig> language research project at BRICS in 2002
- Luca Cardelli's work on extensible languages at DEC Olivetti/HP

It is useless for the community to whine about who did it first, because it won't increase its audience.
Oct 17 2021
On Sunday, 17 October 2021 at 21:17:43 UTC, Paulo Pinto wrote:
> The very first compiler for Lisp was created in 1960 for the IBM 704. Common Lisp was just one example among others; here are a few more.
>
> - Dylan, released to the public in 1995

AKA, Common Lisp.

> - Template Haskell, initially prototyped in 2002

I don't know of your other examples, but Common Lisp and Template Haskell are in the category with C++ templates of "this could be technically abused towards compile time evaluation (which still isn't ctfe) if someone wanted to, but people generally didn't because it didn't occur to them".

I gave an example of a trivial macro earlier. It's from https://letoverlambda.com/lol-orig.lisp , where it's preceded by

```lisp
(defun sleep-units% (value unit)
  (sleep
   (* value
      (case unit
        ((s) 1)
        ((m) 60)
        ((h) 3600)
        ((d) 86400)
        ((ms) 1/1000)
        ((us) 1/1000000)))))
```

A normal function. Which was not simply used at compile-time; instead, a macro version was written so that the same calculation could occur at compile-time. And this is what people'd tend to do in Common Lisp. The function actually could be used at compile-time, but the EVAL-WHEN syntax to do that is so heavy that nobody would bother without ctfe having occurred to them.

An example of a language where people genuinely did write normal functions and then freely execute them at compile-time is Forth, and there the community mostly bemoaned that poor optimizers made it necessary to use even for constant-folding:

```forth
: rotchar ( c -- c' )
  dup  [char] a [ char m 1+ ] literal within  13 and
  over [char] A [ char M 1+ ] literal within  13 and +
  over [char] n [ char z 1+ ] literal within -13 and +
  over [char] N [ char Z 1+ ] literal within -13 and + + ;
```

WITHIN checks a half-open range, so it's getting passed 'a' and 'z'+1, with compile-time calculation of the latter happening due to `[ ... ] literal`.

When a better optimizer is available you'd just write `'a' 'z' 1+ within`.

> It is useless for the community to whine about who did it first, because it won't increase its audience.

This is a really strange axe to grind.
Oct 17 2021
On Sunday, 17 October 2021 at 21:17:43 UTC, Paulo Pinto wrote:
> It is useless for the community to whine about who did it first, because it won't increase its audience.

Nobody did that, you were the one..

https://forum.dlang.org/post/hbqsocypigbjefuwqpnm@forum.dlang.org
Oct 17 2021
On Monday, 18 October 2021 at 01:01:53 UTC, russhy wrote:
> On Sunday, 17 October 2021 at 21:17:43 UTC, Paulo Pinto wrote:
>> It is useless for the community to whine about who did it first, because it won't increase its audience.
>
> Nobody did that, you were the one..
>
> https://forum.dlang.org/post/hbqsocypigbjefuwqpnm@forum.dlang.org

Wrong, it was in response to "About 50% of it is inadvertent praise for D.", because naturally for some people when any language has something that somehow resembles D, it was copied from D, regardless of prior art.

And apparently some are quite touchy about facts.
Oct 18 2021
On Monday, 18 October 2021 at 11:22:10 UTC, Paulo Pinto wrote:
> Wrong, it was in response to "About 50% of it is inadvertent praise for D.", because naturally for some people when any language has something that somehow resembles D, it was copied from D, regardless of prior art.

There are two twin siblings. One is praised for having good looks and good grades. This praise is described as "50% inadvertent praise for the other twin". Do you conclude that one child stole the other's looks or grades? You might if you had an axe that needed grinding. But most people would get the joke: the other twin's grades are poor.

If you'd watched any amount of the presentation before replying, you'd see there's lots of praise for things that can't even be "copied from D", like praise for a fast compiler, and there's also lots of praise that doesn't apply to D at all. It's about 50% unintentional praise for D, which I think makes it more of interest to a D audience.

Do I need to add a disclaimer any time this happens? "Here's a cool presentation about Kotlin. About 20% of it is inadvertent praise for D. My lawyers have advised me to include this addendum to this post: Common Lisp was standardized in 1994 with closures."

> And apparently some are quite touchy about facts.

Part of your 'facts' is accusing Walter of simply lying about his memories of ctfe and its reception, as if everyone in all programming communities were obliged to be constantly aware of every innovation anywhere in computing. The average compiler developer, shown Forth, was going to say "well, yeah, if you have a single pass compiler and are constantly compiling into an interactive environment, I guess you could get compile-time interaction this way, but I don't see how I could do anything like that."
Oct 18 2021
On Monday, 18 October 2021 at 11:42:47 UTC, jfondren wrote:
> Part of your 'facts' is accusing Walter of simply lying about his memories of ctfe and its reception, as if everyone in all programming communities were obliged to be constantly aware of every innovation anywhere in computing. The average compiler developer, shown Forth, was going to say "well, yeah, if you have a single pass compiler and are constantly compiling into an interactive environment, I guess you could get compile-time interaction this way, but I don't see how I could do anything like that."

Dynamic languages can in general support "compile time function evaluation". One strategy is to run a program, then force a core-dump and use a core-dump loader as the executable. Making a point of ctfe only makes sense in the context of C++ type construction. In that regard it is an improvement (over using C++ templates in a functional programming manner).
Oct 18 2021
On 10/17/2021 4:04 AM, Paulo Pinto wrote:
> Maybe you should have read more SIGPLAN papers then; there are several examples of those capabilities.

It seems strange that for something so widely known, it never came up in any enhancement requests for major languages like C, C++, Pascal, etc. Instead, when people discovered that C++ templates formed a Turing-complete programming language to do computations at compile time, they would write about what a great new capability that was!

(I also recall a C++ Committee person pooh-poohing D's CTFE, and showing how it could be done with C++ templates, at least until the compiler ran out of memory, which it did for non-trivial computations. It could do string manipulation, as long as the string wasn't longer than 8 characters.)
Oct 18 2021
On Tuesday, 19 October 2021 at 02:28:11 UTC, Walter Bright wrote:
> at least until the compiler ran out of memory

Don't throw stones from glass houses!
Oct 18 2021
On Tue, Oct 19, 2021 at 02:42:22AM +0000, Adam D Ruppe via Digitalmars-d wrote:
> On Tuesday, 19 October 2021 at 02:28:11 UTC, Walter Bright wrote:
>> at least until the compiler ran out of memory
>
> Don't throw stones from glass houses!

Yeah... at one point I really wanted to show my coworkers what D was capable of, but my initial private test caused dmd to run out of memory and crash on a low-memory box (which was a requirement for our project). I quickly decided *not* to show my coworkers what dmd could do (or could not do!), in order not to give them a really bad initial impression of D.

DMD's all-speed-or-nothing design makes it a memory-hungry beast. It works wonderfully on modern PCs overflowing with spare RAM; in low-memory environments, this leads to all sorts of problems, from thrashing on I/O (due to swapping) to outright crashing before it could finish compilation. I'd rather have a slow compiler than a super-fast one that crashes before it could finish doing what is its raison d'etre.

T

--
A linguistics professor was lecturing to his class one day. "In English," he said, "A double negative forms a positive. In some languages, though, such as Russian, a double negative is still a negative. However, there is no language wherein a double positive can form a negative." A voice from the back of the room piped up, "Yeah, yeah."
Oct 18 2021
On Tuesday, 19 October 2021 at 02:54:22 UTC, H. S. Teoh wrote:
> On Tue, Oct 19, 2021 at 02:42:22AM +0000, Adam D Ruppe via Digitalmars-d wrote:
>> [...]
>
> Yeah... at one point I really wanted to show my coworkers what D was capable of, but my initial private test caused dmd to run out of memory and crash on a low-memory box (which was a requirement for our project). I quickly decided *not* to show my coworkers what dmd could do (or could not do!), in order not to give them a really bad initial impression of D.
>
> [...]

Fingers crossed for `core.reflect` and `core.codegen` making it in soon!
Oct 18 2021
On Sunday, 17 October 2021 at 08:15:26 UTC, Walter Bright wrote:
> Lisp is fundamentally different. It started out as an interpreter and later added native code generation. There never really was a difference between compile time and runtime for Lisp.

Lisp has pretty much always been compiled. E.g. maclisp (a very early, very influential lisp) started off in the 60s or 70s. And there were papers in the 80s comparing lisp's numerical performance to fortran's.
Oct 17 2021
On Sunday, 17 October 2021 at 06:05:21 UTC, Paulo Pinto wrote:
> On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
>> https://www.youtube.com/watch?v=d2VRuZo2pdA About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.
>
> If you mean compile time evaluation, D wasn't the first with such features and it is quite far from what Common Lisp is capable of.

I mean 50% of the whole thing, and without concern for who's first. D looks familiar(TM), it has a fast compiler, you can relax and treat it like it's a dynamic language but still come back and get picky about unnecessary copies et al., it has UFCS, it has excellent static reflection and is capable of similar tricks like diffType(). Someone who watches this while knowing D is going to think "yeah, I like that about D" about half the time.
Oct 17 2021
On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
> https://www.youtube.com/watch?v=d2VRuZo2pdA About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.

Does it mention D anywhere?
Oct 17 2021
On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
> https://www.youtube.com/watch?v=d2VRuZo2pdA About 50% of it is inadvertent praise for D. The rest is ARC and C++ interop.

Speaker here, thanks for watching and for your thoughts. In retrospect I should have mentioned D (and Zig as well); my reasons for not doing so were (1) to keep it focused on Nim, since that was the subject of the talk, and (2) I don't know enough about D to field questions about how they differ and when you might want to choose one over the other, so I wanted to avoid that situation.

For what it's worth, I did a [broader talk on static introspection](https://www.youtube.com/watch?v=ElHi2h9Ho6M) which does feature D as well, so you might be interested in checking that out.
Oct 17 2021
On Sunday, 17 October 2021 at 16:50:39 UTC, Deech wrote:
> On Sunday, 17 October 2021 at 04:17:38 UTC, jfondren wrote:
>> [...]
>
> Speaker here, thanks for watching and for your thoughts. In retrospect I should have mentioned D (and Zig as well); [...]

Nice to see you here! Didn't think anyone cared about D anymore 🍀

Nice presentation btw. I use Nim occasionally. I got productive pretty quick with it.
Oct 17 2021