digitalmars.D - Why C++ compiles slowly
- Walter Bright (2/2) Aug 18 2010 http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
- Marianne Gagnon (1/4) Aug 18 2010
- Justin Johansson (11/13) Aug 18 2010 While I am not a compiler writer, I do have a fairly good understanding
- bearophile (9/13) Aug 18 2010 Probably the latest GCC versions are able to do that.
- Walter Bright (3/6) Aug 19 2010 On reddit:
- Walter Bright (3/11) Aug 19 2010 Hacker News:
- Seth Hoenig (3/10) Aug 19 2010 Thanks for the free Karma, btw :P
- Andrei Alexandrescu (5/13) Aug 19 2010 At over 200 points, that was a homerun. I think it would be really
- BCS (5/8) Aug 19 2010 Maybe hold off till the one after that? If he doesn't do it sometime, I'...
- Eldar Insafutdinov (8/9) Aug 19 2010 I will say the contrary. Compiling medium size projects doesn't matter i...
- Andrei Alexandrescu (7/17) Aug 19 2010 I'm a bit confused - how do you define incremental compilation? The
- Eldar Insafutdinov (10/16) Aug 19 2010 I am not sure here, you'd better check that in posts of Tomasz Stachowia...
- Sean Kelly (3/22) Aug 19 2010 It's always possible to use headers in D as well, though I think the tip...
- Leandro Lucarella (27/37) Aug 19 2010 I think in D you can do the same level of incremental compilation as in
- dsimcha (12/22) Aug 19 2010 I think this is a perfectly reasonable design principle. Sometimes you ...
- bearophile (5/15) Aug 19 2010 When you compile a Java program the compiler is able to find and fetch t...
- retard (9/27) Aug 19 2010 Having written several university assignments in Java, small (< 500 LOC)...
- dsimcha (18/45) Aug 19 2010 I didn't mean my comment in terms of the compilation system. I meant it...
- Nick Sabalausky (6/32) Aug 19 2010 Yea. If Java's design philosophy were a valid one, there would never hav...
- Walter Bright (3/7) Aug 19 2010 Yeah, and I've seen OOP done in C, and it works. It's just awful. I've e...
- Andrej Mitrovic (7/15) Aug 19 2010 There's even a book about it!
- bearophile (4/6) Aug 19 2010 I may like to see the built-in asm of D replaced by HLA :-)
- Simen kjaeraas (5/9) Aug 20 2010 But why? Could you not simply drop in and out of assembly and use
- bearophile (8/10) Aug 20 2010 I don't know. I think that every time you drop in and out of assembly, u...
- Adam Ruppe (14/14) Aug 20 2010 Glancing over it really quickly, High Level Assembly is /completely
- Walter Bright (3/5) Aug 20 2010 What I did when faced with such code is assemble it, *disassemble* the o...
- dsimcha (4/9) Aug 20 2010 How did you do this? Don't you lose some important stuff like label nam...
- Adam Ruppe (7/10) Aug 20 2010 Yes, though a lot of label names aren't all that helpful in the first
- BCS (4/10) Aug 21 2010 that plus find/replace will get you a long way.
- Walter Bright (10/21) Aug 20 2010 Sure, it might need a bit of tidying up by hand, but that was a lot easi...
- Nick Sabalausky (10/18) Aug 19 2010 I've seen high-precision PI calculation done in MS batch:
- Adam Ruppe (5/7) Aug 19 2010 Did I post that to this list, or did it find its way around the
- Nick Sabalausky (16/24) Aug 19 2010 I honestly don't remember. All I know is whenever I did first see it, I
- Andrej Mitrovic (4/23) Aug 19 2010 Some guys are using a hotkey automation scripting language to
- BCS (4/34) Aug 19 2010 Um... does Boost fit in here?
- Nick Sabalausky (3/27) Aug 19 2010 Zing! :)
- Lutger (4/30) Aug 20 2010 Don't forget the perl regex to check for a prime number:
- Walter Bright (5/23) Aug 19 2010 That's why dmd can *automatically* generate .di files. But still, even w...
- Leandro Lucarella (22/36) Aug 19 2010 Is worse in the sense that you have the feeling that is free in D, but
- Eric Poggel (6/34) Aug 19 2010 I link my game engine (20kloc) with derelict, which is much larger. On
- bearophile (7/8) Aug 20 2010 HLA allows you to have a 1:1 mapping, if you want.
- Walter Bright (18/22) Aug 20 2010 I found this amusing:
- Steven Schveighoffer (8/10) Aug 23 2010 Very interesting stuff.
- Walter Bright (2/16) Aug 23 2010 You can start with -v.
- Steven Schveighoffer (11/25) Aug 23 2010 I get a long list of functions proceeding at a reasonable rate. I've do...
- Walter Bright (2/11) Aug 23 2010 with or without -O ?
- Steven Schveighoffer (6/16) Aug 23 2010 The compile line is:
- Walter Bright (2/22) Aug 23 2010 You could try running dmd under a profiler, then.
- Steven Schveighoffer (101/120) Aug 23 2010 I recompiled dmd 2.047 with -pg added and with the COV options uncomment...
- Walter Bright (3/14) Aug 24 2010 elf_findstr definitely looks like a problem area. I can't look at it rig...
- bearophile (7/9) Aug 24 2010 I am able to find two versions of elf_findstr, one in elfobj.c and one i...
- Jacob Carlborg (7/16) Aug 24 2010 As the files indicate elfobj.c is for generating ELF (linux) object
- Steven Schveighoffer (4/15) Aug 24 2010 http://d.puremagic.com/issues/show_bug.cgi?id=4721
- Walter Bright (2/19) Aug 24 2010 Also, putting a printf in elf_findstr to print its arguments will be hel...
- Steven Schveighoffer (35/54) Aug 24 2010 Through some more work with printf, I have to agree with bearophile, thi...
- Mafi (4/7) Aug 24 2010 Why are D's symbols verbose? if I understood you corectly, dmd makes a
- Steven Schveighoffer (7/14) Aug 24 2010 A symbol includes the module name, and the mangled version of the functi...
- bearophile (4/6) Aug 24 2010 And I think some more things needs to be added to that string, like a re...
- Jonathan M Davis (11/20) Aug 24 2010 They probably aren't there because
- Simen kjaeraas (8/31) Aug 24 2010 Pure might be worth stuffing in the symbol name, as the compiler may
- Steven Schveighoffer (6/40) Aug 25 2010 These are decisions made at the compilation stage, not the linking stage...
- Simen kjaeraas (8/13) Aug 25 2010 Absolutely. Now, you compile your module that uses a pure function foo i...
- Steven Schveighoffer (12/25) Aug 25 2010 You could say the same about just about any function. Changing
- Walter Bright (4/5) Aug 25 2010 Yes, that's done because the caller of a function may depend on that fun...
- bearophile (4/6) Aug 24 2010 In Bugzilla there are some pure-related bugs (3833, 3086/3831, maybe 450...
- Jacob Carlborg (7/13) Aug 25 2010 According to the ABI pure should already be in the mangled name (don't
- bearophile (26/31) Aug 25 2010 Yes, it's there:
- Jonathan M Davis (3/21) Aug 25 2010 So, sodium is pure huh. :)
- Justin Johansson (2/23) Aug 25 2010 And natrium also? :-)
- Simen kjaeraas (4/11) Aug 25 2010 Natrium and sodium are the same.
- Justin Johansson (2/12) Aug 25 2010 Of course! Just a bit of tautological silliness on my part. :-)
- Walter Bright (5/7) Aug 24 2010 It is now, but when it was originally written (maybe as long as 20 years...
- dsimcha (6/13) Aug 24 2010 Wow, now it's really hit home for me how much programming languages and ...
- Steven Schveighoffer (4/11) Aug 25 2010 Yes, I'm glad you pushed me to do it. Looking forward to the fix.
- Walter Bright (7/10) Aug 25 2010 The two secrets to writing fast code are:
- dsimcha (9/19) Aug 25 2010 I think you overestimate the amount of programmers that can read assembl...
- Walter Bright (10/19) Aug 25 2010 The thing is, you *don't* need to be able to read assembler in order to ...
- Nick Sabalausky (7/11) Aug 25 2010 Heh, funny thing about difficulty is how relative it can be. I've heard
- Walter Bright (6/13) Aug 25 2010 Doing amateur rocketry isn't that hard, the formulas are simple and the ...
- BCS (7/11) Aug 25 2010 I still thing CS-101 should be in ASM. It would give people a better und...
- Steven Schveighoffer (5/13) Aug 25 2010 You mean like asking someone who reported low performance of your progra...
- retard (3/20) Aug 25 2010 He forgot:
- dsimcha (3/23) Aug 25 2010 Yeah, but unless you use a profiler, how are you going to find those spo...
- retard (3/28) Aug 25 2010 Test-driven develoment, automatic testing tools, common sense? Sometimes...
- Steven Schveighoffer (18/46) Aug 25 2010 On the contrary, this was one of those bugs that you almost need a
- Walter Bright (12/19) Aug 25 2010 Neither of those are designed to find bottlenecks, and I've never seen o...
- bearophile (10/12) Aug 25 2010 This is a big mistake, because:
- Walter Bright (3/7) Aug 25 2010 Yup, and that piece of code was written in a time where there were very ...
- dsimcha (6/13) Aug 25 2010 I wonder how much of the compile time of more typical projects is taken ...
- Walter Bright (2/7) Aug 25 2010 It could very well be the source of these issues.
- Walter Bright (5/8) Aug 25 2010 No, I didn't forget that. There's no benefit to using a better algorithm...
- Walter Bright (3/5) Aug 25 2010 1. He had the test case, I didn't.
- Steven Schveighoffer (19/25) Aug 25 2010 He == me :)
- Walter Bright (5/36) Aug 25 2010 I hope that you enjoyed doing this, and I hope to make building the comp...
- Era Scarecrow (11/21) Aug 25 2010 There are also those who are not programmers, and don't know what they...
- Nick Sabalausky (5/35) Aug 25 2010 From what I've seen, you get essentially the same results from most HR
- Walter Bright (3/13) Aug 25 2010 Sure, but my advice is directed at the people who *do* know what they ar...
- Walter Bright (3/20) Aug 25 2010 Let me know how this works:
- Jacob Carlborg (4/24) Aug 26 2010 Shouldn't machobj.c get the same optimization?
- BCS (4/32) Aug 26 2010 Shouldn't something like a table lookup be shared rather than duplicated...
- Jacob Carlborg (4/34) Aug 27 2010 Yes, that would be better.
- Steven Schveighoffer (24/43) Aug 26 2010 Better, now takes 20 seconds vs over 60. The new culprit:
- bearophile (4/6) Aug 26 2010 Fit for a new bugzilla entry?
- Steven Schveighoffer (5/9) Aug 26 2010 I'll just put into the same report, and let Walter decide if it's still ...
- Walter Bright (4/15) Aug 26 2010 That only happens if -X is passed on the command line, or one of the fil...
- Steven Schveighoffer (37/47) Aug 26 2010 I did some more testing. I think I compiled the profiled version of the...
- Walter Bright (2/3) Aug 26 2010 Thanks!
- Walter Bright (2/6) Aug 26 2010 Just for fun, searchfixlist goes back at least to 1983 or so.
- bearophile (27/28) Aug 26 2010 It contains this if (I am not able to indent it well):
- BCS (4/12) Aug 26 2010 Early or late '83? I ask because *I* go back to '83 or so. :)
- Walter Bright (2/14) Aug 26 2010 June 7th, 3:26 PM. Give or take 6 months.
- Kagamin (2/5) Aug 25 2010 Where did you get it? Digital Mars seems to not have an elf C compiler.
http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html I'll be doing a followup on why D compiles fast.
Aug 18 2010
> http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
> I'll be doing a followup on why D compiles fast.

Very right, and I might add one more thing: the STL itself is just HUGE, and unless you live in a shell, you're going to use some library; that library in all likelihood will include the STL directly or indirectly, and each and every one of your files ends up building the entire STL every time it's built.
Aug 18 2010
On 19/08/10 10:35, Walter Bright wrote:
> http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
> I'll be doing a followup on why D compiles fast.

While I am not a compiler writer, I do have a fairly good understanding of compiler mechanics. I think the length and depth of your article is just about right; accordingly, I found it sufficiently concise and succinct in explaining the issues with C++ compilation speed that one does not need much further explanation. May I join others in looking forward to the part 2 follow-up on why D compiles fast.

Cheers
Justin Johansson
Aug 18 2010
Walter Bright:
> http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
> I'll be doing a followup on why D compiles fast.

Thank you, the article is nice and I didn't know most of the things it contains.

> the compiler is doomed to uselessly reprocess them when one file is
> #include'd multiple times, even if it is protected by #ifndef pairs.
> (Kenneth Boyd tells me that upon careful reading the Standard may allow
> a compiler to skip reprocessing #include's protected by #ifndef pairs.
> I don't know which compilers, if any, take advantage of this.)

Probably the latest GCC versions are able to do that. And then there is #pragma once too: http://en.wikipedia.org/wiki/Pragma_once

> Just #include'ing the Standard results, on Ubuntu, in 74 files being
> read of 37,687 lines (not including any lines from multiple #include's
> of the same file).

As a benchmark for this, Clang (the C/C++ compiler based on LLVM) uses a small program (~7500 lines of Objective-C) that includes Cocoa/Cocoa.h, which is quite large: http://clang.llvm.org/performance.html

Bye,
bearophile
Aug 18 2010
Walter Bright wrote:
> http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
> I'll be doing a followup on why D compiles fast.

On reddit:
http://www.reddit.com/r/programming/comments/d2wwp/why_c_compiles_slow/
Aug 19 2010
Walter Bright wrote:
>> http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
>> I'll be doing a followup on why D compiles fast.
>
> On reddit:
> http://www.reddit.com/r/programming/comments/d2wwp/why_c_compiles_slow/

Hacker News:
http://news.ycombinator.com/item?id=1617133
Aug 19 2010
On Thu, Aug 19, 2010 at 4:45 AM, Walter Bright <newshound2 digitalmars.com> wrote:
>> http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
>> I'll be doing a followup on why D compiles fast.
>
> On reddit:
> http://www.reddit.com/r/programming/comments/d2wwp/why_c_compiles_slow/

Thanks for the free Karma, btw :P
Aug 19 2010
On 08/19/2010 09:53 PM, Seth Hoenig wrote:
> On Thu, Aug 19, 2010 at 4:45 AM, Walter Bright <newshound2 digitalmars.com> wrote:
>>> http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html
>>> I'll be doing a followup on why D compiles fast.
>>
>> On reddit:
>> http://www.reddit.com/r/programming/comments/d2wwp/why_c_compiles_slow/
>
> Thanks for the free Karma, btw :P

At over 200 points, that was a home run. I think it would be really classy if Walter did /not/ write "Why D compiles quickly" for his next installment.

Andrei
Aug 19 2010
Hello Andrei,

> I think it would be really classy if Walter did /not/ write "Why D
> compiles quickly" for his next installment.

Maybe hold off till the one after that? If he doesn't do it sometime, I'll be bummed.

-- 
... <IXOYE><
Aug 19 2010
> I'll be doing a followup on why D compiles fast.

I will say the contrary. Compiling medium-size projects doesn't matter in either language. But when the size of your project starts getting very big, you will have trouble in D because there is no incremental compilation. You will end up recompiling the whole thing each time, which will take longer than just recompiling a single file in C++. Please be sure to mention it in your next article, otherwise it is false advertisement. Of course it is not a language issue, but an issue of its only implementation.

P.S. This problem was raised many times here by Tomasz Stachowiak.
Aug 19 2010
On 08/19/2010 07:48 AM, Eldar Insafutdinov wrote:
> I will say the contrary. Compiling medium size projects doesn't matter
> in either language. But when the size of your project starts getting
> very big you will have troubles in D because there is no incremental
> compilation.

I'm a bit confused - how do you define incremental compilation? The build system can easily be set up to compile individual D files to object files, and then use the linker in a traditional manner.

> You will end up recompiling the whole thing each time which will take
> longer than just recompiling a single file in C++. Please be sure to
> mention it in your next article, otherwise it is a false advertisement.

I'm not sure about that. On the large C++ systems I work on, compilation is absolute agony. I don't think that sets the bar too high.

Andrei
Aug 19 2010
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
> I'm a bit confused - how do you define incremental compilation? The
> build system can easily be set up to compile individual D files to
> object files, and then use the linker in a traditional manner.

I am not sure here, you'd better check that in the posts of Tomasz Stachowiak. There was something wrong with how dmd emits template instantiations. He had to create a custom build tool that does some hackery. From my experience with D you just can't do that. I get weird errors and I end up rebuilding the whole thing.

> I'm not sure about that. On the large C++ systems I work on,
> compilation is absolute agony. I don't think that sets the bar too
> high.
>
> Andrei

Can you please elaborate on that? From my experience and understanding, if you modify one cpp file, for instance, only this file will be recompiled, then the project is linked and ready to be run. If you modify a header (which happens less often) the build system quite fairly recompiles the files that include it. And I use make -j of course, which makes things even easier.
Aug 19 2010
Eldar Insafutdinov Wrote:
> I am not sure here, you'd better check that in posts of Tomasz
> Stachowiak. There was something wrong with how dmd emits template
> instantiations. He had to create a custom build tool that does some
> hackery. From my experience with D you just can't do that. I get weird
> errors and I end up rebuilding the whole thing.

There used to be a number of issues with where TypeInfo was generated, references to in/out contracts and other auto-generated functions, etc., but I think they've all been addressed.

> Can you please elaborate on that? From my experience and understanding
> if you modify one cpp file for instance, only this file will be
> recompiled, then the project is linked and ready to be run.

It's always possible to use headers in D as well, though I think the tipping point is far different from where it is in C++.
Aug 19 2010
Andrei Alexandrescu, on 19 August at 08:50, wrote to me:
>> I will say the contrary. Compiling medium size projects doesn't
>> matter in either language. But when the size of your project starts
>> getting very big you will have troubles in D because there is no
>> incremental compilation.
>
> I'm a bit confused - how do you define incremental compilation? The
> build system can easily be set up to compile individual D files to
> object files, and then use the linker in a traditional manner.

I think in D you can do the same level of incremental compilation as in C/C++, but it is not as natural. For one, in D it is not natural to separate declarations from definitions, so a file in D tends to be dependent on *many* *many* other files because of excessive imports. So even when you can do separate compilation, unless you are *extremely* careful (much more than in C/C++, I think) you'll end up having to recompile the whole project even when you change just one file, because of the dependency madness.

I know you can do separate compilation as in C/C++ by writing the declarations in a different file, or by generating/using .di files, but you'll probably end up using libraries that don't do that (as somebody mentioned for C++ + STL) and end up in dependency madness anyway. It's just not natural to do so in D; the language even encourages not doing it, as one of its main advertised features is that you don't have to separate declarations from definitions.

And I'm not saying that this is an easy problem to solve, I'm just saying that I agree D doesn't scale well in terms of incremental compilation for big projects, unless you go against D's natural way of doing things.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
There are hands capable of making tools with which machines are made to
make computers that in turn design machines that make tools for the
hand to use
Aug 19 2010
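For contrast with D's single-file habit, here is a minimal sketch of the C-style declaration/definition split described above; the file names and the `next_id` function are invented for illustration. Because importers see only the header, changing the body in counter.c dirties exactly one object file, which is what makes the rebuild incremental:

```c
/* counter.h -- declarations only; every importer depends just on this
 * interface, not on the implementation */
#ifndef COUNTER_H
#define COUNTER_H
int next_id(void);
#endif

/* counter.c -- the definition; its body can change freely without
 * touching counter.h, so an incremental build recompiles only this
 * translation unit and then relinks:
 *     cc -c counter.c            (dmd -c counter.d is the D analogue,
 *     cc -o app main.o counter.o  with a .di file playing the header) */
#include "counter.h"

static int id = 0;
int next_id(void) { return ++id; }
```

The D analogue is a hand-written or dmd-generated .di interface file standing in for counter.h; the thread's point is that nothing in D pushes you toward this split the way C's compilation model does.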
== Quote from Leandro Lucarella (luca llucax.com.ar)'s article
> I know you can do separate compilation as in C/C++ writing the
> declarations in a different file, or generating/using .di files, but
> also you'll probably end up using libraries that don't do that (as
> somebody mentioned for C++ + STL) and end up in a dependency madness
> anyway. It's just not natural to do so in D [...] I'm just saying that
> I agree D doesn't scale well in terms of incremental compilations for
> big projects, unless you go against D natural way on doing things.

I think this is a perfectly reasonable design principle. Sometimes you have to resort to things that are ugly, unsafe, a PITA, etc. to deal with some practical reality. What D gets right is that you shouldn't have to be burdened with it when you don't need it: the simple, clean, safe way that works most of the time should be the idiomatic way, but the ugly/unsafe/inconvenient way that works in the corner cases should be available, even if no serious effort is put into making it not ugly/unsafe/inconvenient. Languages like C++ and Java tend to ignore the simple, common case and force you to do things the hard way all the time, even when you don't need the benefits of doing things the hard way. Thus, these languages are utterly useless for anything but huge, enterprisey projects.
Aug 19 2010
dsimcha:
> What D gets right is that you shouldn't have to be burdened with it
> when you don't need it, and the simple, clean, safe way that works
> most of the time should be the idiomatic way [...] Languages like C++
> and Java tend to ignore the simple, common case and force you to do
> things the hard way all the time, even when you don't need the
> benefits of doing things the hard way. Thus, these languages are
> utterly useless for anything but huge, enterprisey projects.

When you compile a Java program the compiler is able to find and fetch the files it needs. DMD isn't able to. So Java is handier for small projects composed of something like 10-20 files. So I don't agree with you. (It's a feature I've asked for in my second message on the D newsgroups.)

Bye,
bearophile
Aug 19 2010
Thu, 19 Aug 2010 15:52:25 -0400, bearophile wrote:
> When you compile a Java program the compiler is able to find and fetch
> the files it needs. DMD isn't able to. So Java is more handy for small
> projects composed of something like 10-20 files. So I don't agree with
> you. (It's a feature I've asked for in my second message on the D
> newsgroups.)

Having written several university assignments in Java, small (< 500 LOC) to medium size (50000 LOC), I haven't encountered a single compilation-related problem. One exception to this is some bindings to native code libraries -- you need to be careful with URLs when packaging external libraries inside a JAR. The class-centric programming paradigm often gets in your way when programming in the small, but it's quite acceptable on a large scale IMO. How is Java so utterly useless and D much better? Any use cases?
Aug 19 2010
== Quote from retard (re tard.com.invalid)'s article
> Having written several university assignments in Java, small (< 500
> LOC) to medium size (50000 LOC), I haven't encountered a single
> compilation related problem. [...] The class centric programming
> paradigm often gets in your way when programming in the small, but
> it's quite acceptable on large scale IMO. How is Java so utterly
> useless and D much better? Any use cases?

I didn't mean my comment in terms of the compilation system. I meant it as a more general statement of how these languages eschew convenience features. Examples:

- The class-centric paradigm is one example.
- The ridiculously fine-grained standard library import system. If you really want to make your imports this fine-grained, you should use selective imports.
- Strictly explicit, nominative typing.
- Lack of higher-order functions and closures, just because you **can** simulate these with classes, even though this is horribly verbose.
- No RAII, scope statements, or anything similar, just because you **can** get by with finally statements, even though this is again horribly verbose, error-prone and unreadable.
- The requirement that you have only one top-level, public class per file.
- Lack of default function arguments, just because these **can** be simulated with overloading, even though this is ridiculously verbose.
- Lack of operator overloading, just because you **can** use regular method calls, even though properly used operator overloading makes code much more succinct and readable.
Aug 19 2010
"dsimcha" <dsimcha yahoo.com> wrote in message news:i4k4b4$jsj$1 digitalmars.com...I didn't mean my comment in terms of the compilation system. I meant it as a more general statement of how these languages eschew convenience features. Examples: The class centric paradigm is one example. The ridiculously fine grained standard library import system. If you really want to make your imports this fine-grained, you should use selective imports. Strictly explicit, nominative typing. Lack of higher order functions and closure just because you **can** simulate these with classes, even though this is horribly verbose. No RAII, scope statements, or anything similar just because you **can** get by with finally statements, even though this is again horribly verbose, error-prone and unreadable. The requirement that you only have one top-level, public class per file. Lack of default function arguments just because these **can** be simulated with overloading, even though this is ridiculously verbose. Lack of operator overloading just because you **can** use regular method calls, even though properly used operator overloading makes code much more succinct and readable.Yea. If Java's design philosophy were a valid one, there would never have been any reason to move beyond Altair-style programming (ie, entering machine code (not asm) in binary, one byte at a time, via physical toggle switches). You *can* do anything you need like that (It's Turing-complete!).
Aug 19 2010
Nick Sabalausky wrote:
> Yea. If Java's design philosophy were a valid one, there would never
> have been any reason to move beyond Altair-style programming (ie,
> entering machine code (not asm) in binary, one byte at a time, via
> physical toggle switches). You *can* do anything you need like that
> (It's Turing-complete!).

Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
Aug 19 2010
On Fri, Aug 20, 2010 at 2:48 AM, Walter Bright <newshound2 digitalmars.com> wrote:
> Yeah, and I've seen OOP done in C, and it works. It's just awful. I've
> even seen OOP done in assembler (Optlink!).

There's even a book about it!

[pdf] http://www.cs.rit.edu/~ats/books/ooc.pdf

I've never read it though. You could do OOP in HLA (of course nobody treats that as a real assembler :p. But the book that comes with it is great.).
Aug 19 2010
Andrej Mitrovic:
> You could do OOP in HLA (of course nobody treats that as a real
> assembler :p. But the book that comes with it is great.).

I may like to see the built-in asm of D replaced by HLA :-)

Bye,
bearophile
Aug 19 2010
bearophile <bearophileHUGS lycos.com> wrote:
> I may like to see the built-in asm of D replaced by HLA :-)

But why? Could you not simply drop in and out of assembly and use D for flow control and the like?

-- 
Simen
Aug 20 2010
Simen kjaeraas:
> But why? Could you not simply drop in and out of assembly and use D
> for flow control and the like?

I don't know. I think that every time you drop in and out of assembly, unless you use naked assembly, the compiler adds some leading and trailing instructions.

In biological and technological evolution most of the changes happen when a new "species" appears; afterwards the "species" is almost frozen, and changes appear only very slowly. So, for example, you see many improvements in Java compared to many years of C/C++ evolution. This is part of the Punctuated Equilibria theory of S. J. Gould and others, and it's not specific to biological evolution; it's a property of dynamic systems that are evolving.

Assembly and assemblers were born many years ago, and even if today we have invented many better ideas for software, those ideas are usually not applied to the asm world. The good thing about HLA is that it tries to break some of that tradition and bring a bit of innovation to the world of asm programming, and it does this well enough (despite the fact that the innovations it brings are probably mostly 30 years old, about as new as the original Pascal; there are far newer ideas that may be applied to asm programming. Some newer ideas can be seen in CorePy: http://www.corepy.org/ which allows you to write computational kernels through Python code that are usually faster than D code).

This is why there are moments when I'd like a more modern asm inside D. I've written a few hundred lines of asm code inside D programs; this is not a lot, it's just a bit of code, but for me it's a pain to write asm normally, and I can see tens of ways to improve that work of mine :-)

Bye,
bearophile
Aug 20 2010
Glancing over it really quickly, High Level Assembly is /completely insane/. The whole point of writing assembly language is to see and write exactly what the computer sees and executes. This makes it useful for coding, and also very easy to read (in the small, at least). The HLA examples on Wikipedia are horribly ugly messes of macros and other weird stuff. It is like a cross between Perl and C++! The Microsoft assembler used to have a whole bunch of weird macro capabilities and strange syntax. I hated it. This looks like that turned up to 11. D's assembler is almost perfect... it integrates without hassle, it gives you what you need, and it is very read/writable. The only complaint I have with it is that you have to capitalize register names. Blargh.
Aug 20 2010
Adam Ruppe wrote:The Microsoft assembler used to have a whole bunch of weird macro capabilities and strange syntax. I hated it.What I did when faced with such code is assemble it, *disassemble* the output, and paste the output back in the source code and work from that.
Aug 20 2010
== Quote from Walter Bright (newshound2 digitalmars.com)'s articleAdam Ruppe wrote:How did you do this? Don't you lose some important stuff like label names in the translation? Instead of LSomeLabelName you get some raw, inscrutable hexadecimal number in your jump instructions.The Microsoft assembler used to have a whole bunch of weird macro capabilities and strange syntax. I hated it.What I did when faced with such code is assemble it, *disassemble* the output, and paste the output back in the source code and work from that.
Aug 20 2010
On 8/20/10, dsimcha <dsimcha yahoo.com> wrote:How did you do this? Don't you lose some important stuff like label names in the translation?Yes, though a lot of label names aren't all that helpful in the first place. "done:" or worse yet, "L1:" don't help much. Those names are obvious from context anyway.Instead of LSomeLabelName you get some raw, inscrutable hexadecimal number in your jump instructions. A lot of disassemblers generate a label name instead of giving the hex. obj2asm, for example, translates most jumps into Lxxx: labels.
Aug 20 2010
Hello Adam,that plus find/replace will get you a long way. -- ... <IXOYE><Instead of LSomeLabelName you get some raw, inscrutable hexadecimal number in your jump instructions.A lot of disassemblers generate a label name instead of giving the hex. obj2asm for example translates most jumps into Lxxx: labels.
Aug 21 2010
dsimcha wrote:== Quote from Walter Bright (newshound2 digitalmars.com)'s articleobj2asm foo.obj >foo.asmAdam Ruppe wrote:How did you do this?The Microsoft assembler used to have a whole bunch of weird macro capabilities and strange syntax. I hated it.What I did when faced with such code is assemble it, *disassemble* the output, and paste the output back in the source code and work from that.Don't you lose some important stuff like label names in the translation? Instead of LSomeLabelName you get some raw, inscrutable hexadecimal number in your jump instructions.Sure, it might need a bit of tidying up by hand, but that was a lot easier than trying to spelunk what those macros actually did. I'm not the only one. I know a team at a large unnamed company that was faced with updating some legacy asm code on which the original, long-gone programmers had gone to town, inventing their own high-level macro language. Programmer after programmer gave up working on it, until one guy had no problem. He was asked how he worked with that mess, and he said no problem, he assembled it, obj2asm'd it, and that was the new source.
Aug 20 2010
"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i4kjdp$2o9f$1 digitalmars.com...Nick Sabalausky wrote:I've seen high-precision PI calculation done in MS batch: http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.aspx And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out And some masochist did a compile-time raytracer in C++: http://ompf.org/forum/viewtopic.php?t=1556 Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)Yea. If Java's design philosophy were a valid one, there would never have been any reason to move beyond Altair-style programming (i.e., entering machine code (not asm) in binary, one byte at a time, via physical toggle switches). You *can* do anything you need like that (It's Turing-complete!).Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
Aug 19 2010
On 8/19/10, Nick Sabalausky <a a.a> wrote:And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.outDid I post that to this list, or did it find its way around the Internet on its own? I saw it randomly pop up on a Google search last year too, on a list I've never even heard of! The best part is it is mostly just a hello world...
Aug 19 2010
"Adam Ruppe" <destructionator gmail.com> wrote in message news:mailman.383.1282266517.13841.digitalmars-d puremagic.com...On 8/19/10, Nick Sabalausky <a a.a> wrote:I honestly don't remember. All I know is whenever I did first see it, I created a saved IM away message about it. I remembered I had it there, went to get the link from it, and thought "Oh, hey, I recognize that domain!" :)And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.outDid I post that to this list, or did it find its way around the Internet on its own?I saw it randomly pop up on a Google search last year too, on a list I've never even heard of!Funny how that happens sometimes. Back in college, a friend of mine was inspired by the Pokey The Penguin online comic ( http://ompf.org/forum/viewtopic.php?t=1556 ) and its deliberate MSPaint crappiness. So he created a "Poop and Friends" comic in a similar vein. It was deliberately stupid humor, although not gross-out stuff, despite the name. (It's no longer around in any form, and the wayback machine doesn't have any of the images: http://web.archive.org/web/20031118234038/http://www.poopandfriends.cjb.net/ ). But a few years after my friend started it, my brother was told by one of his friends "There's this site you have to see!" Turned out to be Poop and Friends.
Aug 19 2010
Some guys are using a hotkey automation scripting language to write/execute machine code: http://www.autohotkey.com/forum/viewtopic.php?t=21172&postdays=0&postorder=asc&start=0 On Fri, Aug 20, 2010 at 3:05 AM, Nick Sabalausky <a a.a> wrote:"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i4kjdp$2o9f$1 digitalmars.com...Nick Sabalausky wrote:I've seen high-precision PI calculation done in MS batch: http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.aspx And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out And some massochist did a compile-time raytracer in C++: http://ompf.org/forum/viewtopic.php?t=1556 Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)Yea. If Java's design philosophy were a valid one, there would never have been any reason to move beyond Altair-style programming (ie, entering machine code (not asm) in binary, one byte at a time, via physical toggle switches). You *can* do anything you need like that (It's Turing-complete!).Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
Aug 19 2010
Hello Nick,"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i4kjdp$2o9f$1 digitalmars.com...Nick Sabalausky wrote:I've seen high-precision PI calculation done in MS batch: http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.aspx And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out And some masochist did a compile-time raytracer in C++: http://ompf.org/forum/viewtopic.php?t=1556 Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)Yea. If Java's design philosophy were a valid one, there would never have been any reason to move beyond Altair-style programming (i.e., entering machine code (not asm) in binary, one byte at a time, via physical toggle switches). You *can* do anything you need like that (It's Turing-complete!).Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).Um... does Boost fit in here? -- ... <IXOYE><
Aug 19 2010
"BCS" <none anon.com> wrote in message news:a6268ff1a3d88cd0def4795927c news.digitalmars.com...Hello Nick,Zing! :)"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i4kjdp$2o9f$1 digitalmars.com...Um... does Boost fit in here?Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).I've seen high-precision PI calculation done in MS batch: http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.asp x And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out And some massochist did a compile-time raytracer in C++: http://ompf.org/forum/viewtopic.php?t=1556 Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)
Aug 19 2010
Nick Sabalausky wrote:"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i4kjdp$2o9f$1 digitalmars.com...Don't forget the perl regex to check for a prime number: perl -wle 'print "Prime" if (1 x shift) !~ /^1?$|^(11+?)\1+$/' [number] http://montreal.pm.org/tech/neil_kandalgaonkar.shtmlNick Sabalausky wrote:I've seen high-precision PI calculation done in MS batch: http://thedailywtf.com/Articles/Stupid-Coding-Tricks-A-Batch-of-Pi.aspx And Adam Ruppe did cgi in Asm: http://www.arsdnet.net/cgi-bin/a.out And some massochist did a compile-time raytracer in C++: http://ompf.org/forum/viewtopic.php?t=1556 Yea, I know that had already been done in D, but D's compile-time processing doesn't suck :)Yea. If Java's design philosophy were a valid one, there would never have been any reason to move beyond Altair-style programming (ie, entering machine code (not asm) in binary, one byte at a time, via physical toggle switches). You *can* do anything you need like that (It's Turing-complete!).Yeah, and I've seen OOP done in C, and it works. It's just awful. I've even seen OOP done in assembler (Optlink!).
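For readers puzzled by that one-liner: it writes the number in unary (a string of n ones) and lets regex backtracking hunt for a divisor. A rough Python sketch of the same trick (the helper name is mine; the original is Perl):

```python
import re

def is_prime_regex(n: int) -> bool:
    """Primality test via the classic unary regex trick.

    "1" * n matches the first alternative when n is 0 or 1 (trivially
    non-prime), and matches the second when n has a divisor >= 2: the
    group captures a candidate divisor in unary, and the backreference
    forces the rest of the string to be exact repetitions of it.
    """
    return re.match(r"^1?$|^(11+?)\1+$", "1" * n) is None

primes = [n for n in range(2, 30) if is_prime_regex(n)]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

It is spectacularly slow for large n (the backtracking tries every candidate divisor), which is rather the point of the stunt.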
Aug 20 2010
Leandro Lucarella wrote:I think in D you can do the same level of incremental compilation as in C/C++ but is not as natural. For one, in D is not natural to separate declarations from definitions, so a file in D tends to be dependent in *many* *many* other files because of excessive imports, so even when you can do separate compilation, unless you are *extremely* careful (much more than in C/C++ I think) you'll end up having to recompile the whole project even you change just one file because of the dependency madness.That's why dmd can *automatically* generate .di files. But still, even writing .di files by hand cannot be any harder than writing a C++ .h file.I know you can do separate compilation as in C/C++ writing the declarations in a different file, or generating/using .di files, but also you'll probably end up using libraries that don't do that (as somebody mentioned for C++ + STL) and end up in a dependency madness anyway. It's just not natural to do so in D, it even encourages not doing it as one of the main advertised features is you don't have to separate declarations from definitions. And I'm not saying that is an easy to solve problem, I'm just saying that I agree D doesn't scale well in terms of incremental compilations for big projects, unless you go against D natural way on doing things.In no case is it worse than C++, and as soon as you import a file more than once you're faster.
Aug 19 2010
Walter Bright, on August 19 at 11:00, wrote to me:It's worse in the sense that you have the feeling that it's free in D, but it's not. In C++ you *have* to be careful, otherwise the compiler eats you. In D, by the time this starts to be significant, you already have a huge project. And again, I agree that it might be a very reasonable trade-off, but that doesn't mean the problem doesn't exist. That's all. I'm not trying to convince anyone that C++ is better, I'm just saying that in C++ the problem is obvious while in D it's much less visible, and you notice it *only* when your project is big enough and you *need* incremental compilation. And I also know that DMD (and every DMD-based D compiler) can generate .di files. It would be really nice to have a -M option like GCC's that automatically writes Makefile dependencies. But that's another topic. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Sometimes I'd like to be a boat, to float the way I float being human, and not sink the way I sinkI know you can do separate compilation as in C/C++ writing the declarations in a different file, or generating/using .di files, but also you'll probably end up using libraries that don't do that (as somebody mentioned for C++ + STL) and end up in a dependency madness anyway. It's just not natural to do so in D, it even encourages not doing it as one of the main advertised features is you don't have to separate declarations from definitions. And I'm not saying that is an easy to solve problem, I'm just saying that I agree D doesn't scale well in terms of incremental compilations for big projects, unless you go against D natural way on doing things.In no case is it worse than C++, and as soon as you import a file more than once you're faster.
Aug 19 2010
On 8/19/2010 11:13 AM, Leandro Lucarella wrote:Andrei Alexandrescu, on August 19 at 08:50, wrote to me:I link my game engine (20kloc) with derelict, which is much larger. On my 5 year old laptop, it takes about 3-4 seconds to compile the engine, importing the non-.di'd derelict headers, and linking with the derelict lib. If I compile the whole lot, it takes about 30 seconds. Just wanted to share some real-world stats.On 08/19/2010 07:48 AM, Eldar Insafutdinov wrote:I think in D you can do the same level of incremental compilation as in C/C++ but is not as natural. For one, in D is not natural to separate declarations from definitions, so a file in D tends to be dependent in *many* *many* other files because of excessive imports, so even when you can do separate compilation, unless you are *extremely* careful (much more than in C/C++ I think) you'll end up having to recompile the whole project even you change just one file because of the dependency madness. I know you can do separate compilation as in C/C++ writing the declarations in a different file, or generating/using .di files, but also you'll probably end up using libraries that don't do that (as somebody mentioned for C++ + STL) and end up in a dependency madness anyway. It's just not natural to do so in D, it even encourages not doing it as one of the main advertised features is you don't have to separate declarations from definitions. And I'm not saying that is an easy to solve problem, I'm just saying that I agree D doesn't scale well in terms of incremental compilations for big projects, unless you go against D natural way on doing things.I'm a bit confused - how do you define incremental compilation? The build system can be easily set up to compile individual D files to object files, and then use the linker in a traditional manner.I'll be doing a followup on why D compiles fast.I will say the contrary. Compiling medium size projects doesn't matter in either language. 
But when the size of your project starts getting very big, you will have trouble in D because there is no incremental compilation.
Aug 19 2010
Adam Ruppe:The whole point of writing assembly language is to see and write exactly what the computer sees and executes.<HLA allows you to have a 1:1 mapping, if you want. You can find answers here: http://webster.cs.ucr.edu/AsmTools/HLA/HLADoc/HTMLDoc/hlafaq.txt Look especially at the answer to questions 6 and 23. Bye, bearophile
Aug 20 2010
bearophile wrote:HLA allows you to have a 1:1 mapping, if you want. You can find answers here: http://webster.cs.ucr.edu/AsmTools/HLA/HLADoc/HTMLDoc/hlafaq.txt Look especially at the answer to questions 6 and 23.I found this amusing: =============================================== 6: q. Why is HLA necessary? What's wrong with MASM, TASM, GAS, or NASM? Do we really need another incompatible assembler out there? a. HLA was written with two purposes in mind: The first was to provide a tool that makes it very easy (or, at least, easier) to teach assembly language programming to University students. Experiences at UCR bear out the success of HLA's design (even with prototype/alpha code with tons of bugs and little documentation, students are producing better projects than past courses that used MASM). =============================================== Because they weren't teaching assembler, they were teaching their Pascal-like embedded language. Of course that's easier than assembler, but it isn't assembler.
Aug 20 2010
On Wed, 18 Aug 2010 21:05:34 -0400, Walter Bright <newshound2 digitalmars.com> wrote:http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html I'll be doing a followup on why D compiles fast.Very interesting stuff. I'd like to have an article describing how to diagnose slow D compilation :P Dcollections with unit tests compiles in over a minute, with, I think, about 12 files that contain implementation. I estimate probably 5000 loc. -Steve
Aug 23 2010
Steven Schveighoffer wrote:On Wed, 18 Aug 2010 21:05:34 -0400, Walter Bright <newshound2 digitalmars.com> wrote:You can start with -v.http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html I'll be doing a followup on why D compiles fast.Very interesting stuff. I'd like to have an article describing how to diagnose slow D compilation :P Dcollections with unit tests compiles in over a minute, with I think about 12 files that contain implementation. I estimate probably 5000 loc.
Aug 23 2010
On Mon, 23 Aug 2010 12:44:50 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Steven Schveighoffer wrote:I get a long list of functions proceeding at a reasonable rate. I've done that in the past, I feel it's some sort of inner loop problem. Essentially, something takes way longer to compile than it should, but way longer on the order of .05 seconds instead of .005 seconds, so you don't notice it normally. But somehow my library is able to harness that deficiency and multiply by 1000. I don't know, it doesn't seem like dcollections should evoke such a long compile time. -SteveOn Wed, 18 Aug 2010 21:05:34 -0400, Walter Bright <newshound2 digitalmars.com> wrote:You can start with -v.http://www.drdobbs.com/blog/archives/2010/08/c_compilation_s.html I'll be doing a followup on why D compiles fast.Very interesting stuff. I'd like to have an article describing how to diagnose slow D compilation :P Dcollections with unit tests compiles in over a minute, with I think about 12 files that contain implementation. I estimate probably 5000 loc.
Aug 23 2010
Steven Schveighoffer wrote:I get a long list of functions proceeding at a reasonable rate. I've done that in the past, I feel it's some sort of inner loop problem. Essentially, something takes way longer to compile than it should, but way longer on the order of .05 seconds instead of .005 seconds, so you don't notice it normally. But somehow my library is able to harness that deficiency and multiply by 1000. I don't know, it doesn't seem like dcollections should evoke such a long compile time.with or without -O ?
Aug 23 2010
On Mon, 23 Aug 2010 13:41:07 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Steven Schveighoffer wrote:The compile line is: dmd -unittest unit_test.d dcollections/*.d dcollections/model/*.d Where unit_test.d is a dummy main. -SteveI get a long list of functions proceeding at a reasonable rate. I've done that in the past, I feel it's some sort of inner loop problem. Essentially, something takes way longer to compile than it should, but way longer on the order of .05 seconds instead of .005 seconds, so you don't notice it normally. But somehow my library is able to harness that deficiency and multiply by 1000. I don't know, it doesn't seem like dcollections should evoke such a long compile time.with or without -O ?
Aug 23 2010
Steven Schveighoffer wrote:On Mon, 23 Aug 2010 13:41:07 -0400, Walter Bright <newshound2 digitalmars.com> wrote:You could try running dmd under a profiler, then.Steven Schveighoffer wrote:The compile line is: dmd -unittest unit_test.d dcollections/*.d dcollections/model/*.d Where unit_test.d is a dummy main.I get a long list of functions proceeding at a reasonable rate. I've done that in the past, I feel it's some sort of inner loop problem. Essentially, something takes way longer to compile than it should, but way longer on the order of .05 seconds instead of .005 seconds, so you don't notice it normally. But somehow my library is able to harness that deficiency and multiply by 1000. I don't know, it doesn't seem like dcollections should evoke such a long compile time.with or without -O ?
Aug 23 2010
On Mon, 23 Aug 2010 14:11:52 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Steven Schveighoffer wrote:I recompiled dmd 2.047 with -pg added and with the COV options uncommented (not sure what all is needed). I then tried running my build script, and it took about 5 minutes for me to give up :) So I reduced the build line to build just what is necessary to build a hash map. The compile line looks like this:

dmd -unittest unit_test.d dcollections/HashMap.d dcollections/Hash.d dcollections/Iterators.d dcollections/model/*

I don't think model/* is really needed, but I don't suspect there is too much code in there to compile; it's all interfaces, no unit tests. So without profiling, the compiler takes 4 seconds to compile this one file with unit tests. With profiling enabled, gprof outputs this as the top hitters:

Flat profile: Each sample counts as 0.01 seconds.

  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
77.76     6.68      6.68      2952     2.26     2.26  elf_findstr(Outbuffer*, char const*, char const*)
 2.10     6.86      0.18      4342     0.04     0.04  searchfixlist
 1.28     6.97      0.11    663755     0.00     0.00  ScopeDsymbol::search(Loc, Identifier*, int)
 1.05     7.06      0.09   2623497     0.00     0.00  isType(Object*)
 0.76     7.12      0.07    911667     0.00     0.00  match(Object*, Object*, TemplateDeclaration*, Scope*)
 0.76     7.19      0.07    656268     0.00     0.00  _aaGetRvalue(AA*, void*)
 0.58     7.24      0.05   2507041     0.00     0.00  isTuple(Object*)
 0.52     7.29      0.04   2548939     0.00     0.00  isExpression(Object*)
 0.47     7.33      0.04     10124     0.00     0.01  ClassDeclaration::search(Loc, Identifier*, int)
 0.35     7.36      0.03    136688     0.00     0.00  StringTable::search(char const*, unsigned int)
 0.35     7.38      0.03    122998     0.00     0.00  Scope::search(Loc, Identifier*, Dsymbol**)
 0.35     7.42      0.03     79912     0.00     0.00  Parameter::dim(Parameters*)
 0.35     7.45      0.03     43500     0.00     0.00  AliasDeclaration::semantic(Scope*)
 0.35     7.47      0.03     26358     0.00     0.01  TemplateInstance::semantic(Scope*, Expressions*)
 0.29     7.50      0.03   2537875     0.00     0.00  isDsymbol(Object*)
 0.23     7.52      0.02   4974808     0.00     0.00  Tuple::dyncast()
 0.23     7.54      0.02   4843755     0.00     0.00  Type::dyncast()
 0.23     7.56      0.02   1243524     0.00     0.00  operator new(unsigned int)
 0.23     7.58      0.02    904514     0.00     0.00  arrayObjectMatch(Objects*, Objects*, TemplateDeclaration*, Scope*)
 0.23     7.60      0.02    365820     0.00     0.00  speller_test(void*, char const*)
 0.23     7.62      0.02    285816     0.00     0.00  Array::reserve(unsigned int)
 0.23     7.64      0.02    271143     0.00     0.00  calccodsize
 0.23     7.66      0.02    149682     0.00     0.00  Dchar::calcHash(char const*, unsigned int)
 0.23     7.68      0.02     73379     0.00     0.00  TypeBasic::size(Loc)
 0.23     7.70      0.02     39394     0.00     0.00  DsymbolExp::semantic(Scope*)
 0.23     7.72      0.02     20885     0.00     0.00  TemplateInstance::semanticTiargs(Loc, Scope*, Objects*, int)
 0.23     7.74      0.02     11877     0.00     0.00  TemplateDeclaration::deduceFunctionTemplateMatch(Scope*, Loc, Objects*, Expression*, Expressions*, Objects*)
 0.23     7.76      0.02      5442     0.00     0.01  optelem(elem*, int)
 0.23     7.78      0.02                             __i686.get_pc_thunk.bx
 0.12     7.79      0.01   1458990     0.00     0.00  Object::Object()
 0.12     7.80      0.01    656266     0.00     0.00  DsymbolTable::lookup(Identifier*)
 0.12     7.81      0.01    462797     0.00     0.00  Module::search(Loc, Identifier*, int)
 0.12     7.82      0.01    414377     0.00     0.00  Dsymbol::isTemplateInstance()
 0.12     7.83      0.01    354954     0.00     0.00  Expression::Expression(Loc, TOK, int)
 0.12     7.84      0.01    354693     0.00     0.00  Dsymbol::pastMixin()
 0.12     7.85      0.01    167119     0.00     0.00  Dsymbol::checkDeprecated(Loc, Scope*)
 0.12     7.86      0.01    151694     0.00     0.00  Type::merge()
 0.12     7.87      0.01    123694     0.00     0.00  Lstring::toDchars()
 0.12     7.88      0.01    111982     0.00     0.00  el_calloc()
 0.12     7.89      0.01    111569     0.00     0.00  resolveProperties(Scope*, Expression*)
 0.12     7.90      0.01    107359     0.00     0.00  code_calloc
 0.12     7.91      0.01    106932     0.00     0.00  Lexer::peek(Token*)
 0.12     7.92      0.01    106468     0.00     0.00  Scope::pop()
 0.12     7.93      0.01     99136     0.00     0.00  Array::push(void*)
 ...

I can add more, but I have no idea what part of this is important for diagnosing the problem. 
From a naive look, it appears that elf_findstr is the problem (only 3k calls, but uses almost 80% of the runtime?), but I have no idea how to interpret this, and I don't know what the compiler does. The compiler ended up eventually not producing an exe with the message "cannot find ld", but I don't think the link step is where the problem is anyways. If you need more data, or want me to run something else, I can. -SteveOn Mon, 23 Aug 2010 13:41:07 -0400, Walter Bright <newshound2 digitalmars.com> wrote:You could try running dmd under a profiler, then.Steven Schveighoffer wrote:The compile line is: dmd -unittest unit_test.d dcollections/*.d dcollections/model/*.d Where unit_test.d is a dummy main.I get a long list of functions proceeding at a reasonable rate. I've done that in the past, I feel it's some sort of inner loop problem. Essentially, something takes way longer to compile than it should, but way longer on the order of .05 seconds instead of .005 seconds, so you don't notice it normally. But somehow my library is able to harness that deficiency and multiply by 1000. I don't know, it doesn't seem like dcollections should evoke such a long compile time.with or without -O ?
Aug 23 2010
Steven Schveighoffer wrote:With profiling enabled, gprof outputs this as the top hitters: Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 77.76 6.68 6.68 2952 2.26 2.26 elf_findstr(Outbuffer*, char const*, char const*) 2.10 6.86 0.18 4342 0.04 0.04 searchfixlistelf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
Aug 24 2010
Walter Bright:elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?I am able to find two versions of elf_findstr, one in elfobj.c and one in machobj.c, so it may be possible to remove one of them. Its docstring doesn't seem to show the 'suffix' argument. I have seen that it performs strlen() of str and suffix at the beginning, so using fat pointers (C structs that keep ptr + len) as D does may be enough to avoid those strlen calls and save some time. From what I see it seems to perform a linear search inside an Outbuffer, something like a search of strtab~strs inside an array of strings, so the structure may be replaced by an associative set or ordered set lookup instead. Bye, bearophile
Aug 24 2010
On 2010-08-24 12:25, bearophile wrote:Walter Bright:As the files indicate elfobj.c is for generating ELF (linux) object files and machobj.c is for generating Mach-O (osx) object files, both are needed. I guess he uses the same name for the functions to have a uniform interface, no need to change the code on the calling side.elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?I am able to find two versions of elf_findstr, one in elfobj.c and one in machobj.c, so it may be possible to remove one of them.Its docstring doesn't seem to show the 'suffix' argument. I have seen that it performs strlen() of str and suffix at the beginning, so using fat pointers (C structs that keep ptr + len) as D may be enough to avoid those strelen calls and save some time. From what I see it seems to perform a linear search inside an Outbuffer, something like a search of strtab~strs inside an array of strings, so the structure may be replaced by an associative set or ordered set lookup instead. Bye, bearophile-- /Jacob Carlborg
Aug 24 2010
On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Steven Schveighoffer wrote:http://d.puremagic.com/issues/show_bug.cgi?id=4721 -SteveWith profiling enabled, gprof outputs this as the top hitters: Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 77.76 6.68 6.68 2952 2.26 2.26 elf_findstr(Outbuffer*, char const*, char const*) 2.10 6.86 0.18 4342 0.04 0.04 searchfixlistelf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
Aug 24 2010
Steven Schveighoffer wrote:On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Also, putting a printf in elf_findstr to print its arguments will be helpful.Steven Schveighoffer wrote:http://d.puremagic.com/issues/show_bug.cgi?id=4721With profiling enabled, gprof outputs this as the top hitters: Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 77.76 6.68 6.68 2952 2.26 2.26 elf_findstr(Outbuffer*, char const*, char const*) 2.10 6.86 0.18 4342 0.04 0.04 searchfixlistelf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
Aug 24 2010
On Tue, 24 Aug 2010 14:31:26 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Steven Schveighoffer wrote:Through some more work with printf, I have to agree with bearophile: this lookup function is horrid. I think it's supposed to look for a symbol in the symbol table, but it uses a linear search through all symbols to find it. Not only that, but the table is stored in one giant buffer, so once it finds that the symbol it's currently checking doesn't match, it still has to loop through the remaining characters of the unmatched symbol to find the next 0 byte. I added a simple running printout of how many times the function has been called, along with how large the symbol table has grown. The code is as follows:

static IDXSTR elf_findstr(Outbuffer *strtab, const char *str, const char *suffix)
{
+   static int ncalls = 0;
+   ncalls++;
+   printf("\r%d\t%d", ncalls, strtab->size());
+   fflush(stdout);
    const char *ent = (char *)strtab->buf+1;
    const char *pend = ent+strtab->size() - 1;

At the end, the symbol table is over 4 million characters and the number of calls is 12677. You can watch it slow down noticeably. I also added some code to count the number of times a symbol is matched -- 648, so about 5% of the time. This means that 95% of the time, the whole table is searched. If you multiply those factors together, and take into account the nature of how the table grows, you have probably 20 billion loop iterations. Whereas a hash table would probably be much faster. I'm thinking a correct compilation time should be on the order of 3-4 seconds vs. the 67 seconds it now takes. I am not sure how to fix it, but that's the gist of it. I think the symbol table is so large because of the template proliferation of dcollections, and the verbosity of D symbol names. 
-SteveOn Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Also, putting a printf in elf_findstr to print its arguments will be helpful.Steven Schveighoffer wrote:http://d.puremagic.com/issues/show_bug.cgi?id=4721With profiling enabled, gprof outputs this as the top hitters: Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 77.76 6.68 6.68 2952 2.26 2.26 elf_findstr(Outbuffer*, char const*, char const*) 2.10 6.86 0.18 4342 0.04 0.04 searchfixlistelf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
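The quadratic blow-up Steve measured is easy to reproduce in miniature. Below is an illustrative Python sketch (not dmd's actual code, which is C++; all names here are invented) comparing a linear-scan string table in the style of elf_findstr against a hash-based one:

```python
def intern_linear(table, s):
    """Intern s by scanning every entry already in the table, the way
    elf_findstr walks the whole string buffer. Returns (offset, comparisons)."""
    ops = 0
    for offset, entry in table:
        ops += 1
        if entry == s:
            return offset, ops
    table.append((len(table), s))
    return len(table) - 1, ops

def intern_hashed(table, s):
    """Intern s with one dict lookup, independent of table size."""
    if s not in table:
        table[s] = len(table)
    return table[s]

# Mostly-unique mangled-looking symbols; Steve measured only ~5% repeats,
# so nearly every call scans the entire table and misses.
symbols = ["_D6module%d3fooFZv" % i for i in range(5000)]

linear_table, total_ops = [], 0
for s in symbols:
    _, ops = intern_linear(linear_table, s)
    total_ops += ops

hashed_table = {}
for s in symbols:
    intern_hashed(hashed_table, s)

# Linear scanning grows as ~n^2/2 comparisons; hashing stays ~n lookups.
print(total_ops)          # 12497500
print(len(hashed_table))  # 5000
```

Scaled up to dmd's 12,677 calls over a multi-megabyte buffer, where each miss also walks every character of every entry rather than just every entry, this is where the tens of billions of loop iterations come from.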
Aug 24 2010
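Steve's diagnosis above suggests the standard fix: keep the flat ELF string table, but index it with a hash map so each lookup is amortized O(1) instead of a scan over the whole buffer. A minimal sketch in C++ (illustrative only; dmd's actual elf_findstr signature and Outbuffer layout are more involved than this):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical replacement for the linear elf_findstr scan: the flat
// string table still gets written to the object file, but a hash map
// remembers where each string lives so lookups never walk the buffer.
class StringTable {
    std::vector<char> buf{'\0'};  // ELF string tables begin with a NUL byte
    std::unordered_map<std::string, std::size_t> index;
public:
    // Returns the offset of str+suffix, appending it if absent.
    std::size_t intern(const std::string& str, const std::string& suffix = "") {
        std::string key = str + suffix;
        auto it = index.find(key);
        if (it != index.end())
            return it->second;            // hit: no scan over the table
        std::size_t off = buf.size();
        buf.insert(buf.end(), key.begin(), key.end());
        buf.push_back('\0');
        index.emplace(std::move(key), off);
        return off;
    }
    std::size_t size() const { return buf.size(); }
};
```

With this shape, interning the same mangled name twice costs one hash lookup rather than a walk over millions of characters, which is exactly the O(n^2)-to-O(n) change the thread converges on.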
On 24.08.2010 22:56, Steven Schveighoffer wrote:I am not sure how to fix it, but that's the gist of it. I think the symbol table is so large because of the template proliferation of dcollections, and the verbosity of D symbol names.Why are D's symbols so verbose? If I understood you correctly, dmd makes a linear search no matter whether I used foo or ArrayOutOfBoundsException (that's a real Java exception).
Aug 24 2010
On Tue, 24 Aug 2010 17:05:30 -0400, Mafi <mafi example.org> wrote:On 24.08.2010 22:56, Steven Schveighoffer wrote:I am not sure how to fix it, but that's the gist of it. I think the symbol table is so large because of the template proliferation of dcollections, and the verbosity of D symbol names.Why are D's symbols so verbose? If I understood you correctly, dmd makes a linear search no matter whether I used foo or ArrayOutOfBoundsException (that's a real Java exception).A symbol includes the module name and the mangled version of the function argument types, which could be class/struct names, plus any template info associated with it. For example, foo(HashSet!int hs) inside the module testme becomes:

_D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZv

-Steve
Aug 24 2010
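For readers unfamiliar with D mangling: the verbosity Steve describes comes from the scheme itself, in which every identifier of the fully qualified name is emitted as a decimal length followed by the characters, so module, type, and template names all pile up in one symbol. A rough sketch of pulling those leading identifiers back out (illustrative only; a real D demangler also handles types, template arguments, and back references):

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

// Split the length-prefixed identifiers out of a D mangled name,
// e.g. "_D6testme3fooF..." -> {"testme", "foo"}. Parsing stops at the
// first character that is not a decimal length (type info starts there).
std::vector<std::string> identifiers(const std::string& mangled) {
    std::vector<std::string> ids;
    // Skip the leading "_D" marker if present.
    std::size_t i = (mangled.rfind("_D", 0) == 0) ? 2 : 0;
    while (i < mangled.size() && std::isdigit((unsigned char)mangled[i])) {
        std::size_t len = 0;
        while (i < mangled.size() && std::isdigit((unsigned char)mangled[i]))
            len = len * 10 + (mangled[i++] - '0');
        if (i + len > mangled.size()) break;   // malformed: length overruns
        ids.push_back(mangled.substr(i, len));
        i += len;
    }
    return ids;
}
```

Applied to Steve's example symbol, this recovers the module name "testme" and the function name "foo" before hitting the "F..." that encodes the parameter types.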
Steven Schveighoffer:For example, foo(HashSet!int hs) inside the module testme becomes: _D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZvAnd I think some more things need to be added to that string, like a representation for the pure attribute, etc. Bye, bearophile
Aug 24 2010
On Tuesday, August 24, 2010 14:37:09 bearophile wrote:Steven Schveighoffer:They probably aren't there because 1. They have nothing to do with overrideability. 2. They have nothing to do with C linking. Presumably, dmd deals with those attributes at the appropriate time and then doesn't bother putting them in the symbol table because they're not relevant any more (or if they are relevant, it has other ways of getting at them). If they were actually necessary in the symbol name, they'd be there. If they aren't necessary, why bother putting them in there, making the symbol names even longer? - Jonathan M DavisFor example, foo(HashSet!int hs) inside the module testme becomes: _D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZvAnd I think some more things needs to be added to that string, like a representation for the pure attribute, etc. Bye, bearophile
Aug 24 2010
On Tue, 24 Aug 2010 23:53:44 +0200, Jonathan M Davis <jmdavisprog gmail.com> wrote:On Tuesday, August 24, 2010 14:37:09 bearophile wrote:Pure might be worth stuffing in the symbol name, as the compiler may optimize things differently for pure vs. non-pure(dirty?) code. E.g. the result of a large, pure function that takes a while to compute might be cached to prevent calling it twice. -- SimenSteven Schveighoffer:They probably aren't there because 1. They have nothing to do with overrideability. 2. They have nothing to do with C linking. Presumably, dmd deals with those attributes at the appropriate time and then doesn't bother putting them in the symbol table because they're not relevant any more (or if they are relevant, it has other ways of getting at them). If they were actually necessary in the symbol name, they'd be there. If they aren't necessary, why bother putting them in there, making the symbol names even longer?For example, foo(HashSet!int hs) inside the module testme becomes: _D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZvAnd I think some more things needs to be added to that string, like a representation for the pure attribute, etc. Bye, bearophile
Aug 24 2010
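The optimization Simen describes is ordinary memoization, which is only sound when the callee is genuinely pure. A sketch of the idea (hypothetical; nothing here implies dmd actually performs this transformation):

```cpp
#include <cassert>
#include <map>

// Stand-in for an expensive function the compiler has verified to be
// pure: same argument, same result, no side effects.
long slow_square(long x) {
    return x * x;
}

// Because slow_square is pure, its results can be cached safely.
// If purity were silently dropped without relinking callers (Simen's
// stale-object scenario), this cache would start returning stale answers.
long cached_square(long x) {
    static std::map<long, long> cache;
    auto it = cache.find(x);
    if (it != cache.end())
        return it->second;   // second call with the same argument is free
    long r = slow_square(x);
    cache[x] = r;
    return r;
}
```

Mangling purity into the symbol name, as discussed below in the thread, turns that silent staleness into a link error: the impure replacement no longer matches the symbol the caller was compiled against.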
On Tue, 24 Aug 2010 18:00:30 -0400, Simen kjaeraas <simen.kjaras gmail.com> wrote:On Tue, 24 Aug 2010 23:53:44 +0200, Jonathan M Davis <jmdavisprog gmail.com> wrote:These are decisions made at the compilation stage, not the linking stage. LDC I think does some link optimization, so it might make sense there, but I'm not sure. -SteveOn Tuesday, August 24, 2010 14:37:09 bearophile wrote:Pure might be worth stuffing in the symbol name, as the compiler may optimize things differently for pure vs. non-pure(dirty?) code. E.g. the result of a large, pure function that takes a while to compute might be cached to prevent calling it twice.Steven Schveighoffer:They probably aren't there because 1. They have nothing to do with overrideability. 2. They have nothing to do with C linking. Presumably, dmd deals with those attributes at the appropriate time and then doesn't bother putting them in the symbol table because they're not relevant any more (or if they are relevant, it has other ways of getting at them). If they were actually necessary in the symbol name, they'd be there. If they aren't necessary, why bother putting them in there, making the symbol names even longer?For example, foo(HashSet!int hs) inside the module testme becomes: _D6testme3fooFC12dcollections7HashSet14__T7HashSetTiZ7HashSetZvAnd I think some more things needs to be added to that string, like a representation for the pure attribute, etc. Bye, bearophile
Aug 25 2010
Steven Schveighoffer <schveiguy yahoo.com> wrote:Absolutely. Now, you compile your module that uses a pure function foo in another module, and the above optimization is used. Later, that module is changed, and foo is changed to depend on some global state, and is thus no longer pure. After compiling this one module, you link your project, and the cached value is wrong, and boom! Nasal demons. -- SimenPure might be worth stuffing in the symbol name, as the compiler may optimize things differently for pure vs. non-pure(dirty?) code. E.g. the result of a large, pure function that takes a while to compute might be cached to prevent calling it twice.These are decisions made at the compilation stage, not the linking stage.
Aug 25 2010
On Wed, 25 Aug 2010 11:36:24 -0400, Simen kjaeraas <simen.kjaras gmail.com> wrote:Steven Schveighoffer <schveiguy yahoo.com> wrote:You could say the same about just about any function. Changing implementation can be a bad source of stale object errors, I've had it happen many times in C++ without pure involved at all. Moral is, always recompile everything :) My point is just that name mangling was done to allow overloaded functions of the same name be linked by a linker who doesn't understand overloading. If pure functions cannot be overloaded on purity alone, then there's no reason to mangle purity into the symbol. But it's a moot point, since purity *is* mangled into the symbol name. -SteveAbsolutely. Now, you compile your module that uses a pure function foo in another module, and the above optimization is used. Later, that module is changed, and foo is changed to depend on some global state, and is thus no longer pure. After compiling this one module, you link your project, and the cached value is wrong, and boom! Nasal demons.Pure might be worth stuffing in the symbol name, as the compiler may optimize things differently for pure vs. non-pure(dirty?) code. E.g. the result of a large, pure function that takes a while to compute might be cached to prevent calling it twice.These are decisions made at the compilation stage, not the linking stage.
Aug 25 2010
Steven Schveighoffer wrote:But it's a moot point, since purity *is* mangled into the symbol name.Yes, that's done because the caller of a function may depend on that function's purity. Changing the name mangling when purity changes will ensure that the caller gets recompiled as well.
Aug 25 2010
Jonathan M Davis:They probably aren't there because ...In Bugzilla there are some pure-related bugs (3833, 3086/3831, maybe 4505) that I think need that attribute in the mangled string. But as usual I may be wrong, and other ways to solve those problems may be invented. Bye, bearophile
Aug 24 2010
On 2010-08-25 02:38, bearophile wrote:Jonathan M Davis:According to the ABI pure should already be in the mangled name (don't know if dmd follows that though). The mangled form looks like this: FuncAttrPure: Na -- /Jacob CarlborgThey probably aren't there because ...In Bugzilla there are some pure-related bugs (3833, 3086/3831, maybe 4505) that I think need that attribute in the mangled string. But as usual I may be wrong, and other ways to solve those problems may be invented. Bye, bearophile
Aug 25 2010
Jacob Carlborg:According to the ABI pure should already be in the mangled name (don't know if dmd follows that though). The mangled form looks like this: FuncAttrPure: NaYes, it's there:

import std.c.stdio: printf;
int function1(int x) { return x * 2; }
pure int function2(int x) { return x * 2; }
void main() {
    printf("%d\n", function1(10));
    printf("%d\n", function2(10));
}

_D5test29function1FiZi comdat
    enter 4,0
    add EAX,EAX
    leave
    ret

_D5test29function2FNaiZi comdat
    assume CS:_D5test29function2FNaiZi
    enter 4,0
    add EAX,EAX
    leave
    ret

Bye, bearophile
Aug 25 2010
On Wednesday, August 25, 2010 00:42:51 Jacob Carlborg wrote:On 2010-08-25 02:38, bearophile wrote:So, sodium is pure huh. :) - Jonathan M DavisJonathan M Davis:According to the ABI pure should already be in the mangled name (don't know if dmd follows that though). The mangled form looks like this: FuncAttrPure: NaThey probably aren't there because ...In Bugzilla there are some pure-related bugs (3833, 3086/3831, maybe 4505) that I think need that attribute in the mangled string. But as usual I may be wrong, and other ways to solve those problems may be invented. Bye, bearophile
Aug 25 2010
On 26/08/10 02:10, Jonathan M Davis wrote:On Wednesday, August 25, 2010 00:42:51 Jacob Carlborg wrote:And natrium also? :-)On 2010-08-25 02:38, bearophile wrote:So, sodium is pure huh. :) - Jonathan M DavisJonathan M Davis:According to the ABI pure should already be in the mangled name (don't know if dmd follows that though). The mangled form looks like this: FuncAttrPure: NaThey probably aren't there because ...In Bugzilla there are some pure-related bugs (3833, 3086/3831, maybe 4505) that I think need that attribute in the mangled string. But as usual I may be wrong, and other ways to solve those problems may be invented. Bye, bearophile
Aug 25 2010
Justin Johansson <no spam.com> wrote:Natrium and sodium are the same. -- SimenAnd natrium also? :-)FuncAttrPure: NaSo, sodium is pure huh. :) - Jonathan M Davis
Aug 25 2010
On 26/08/10 02:35, Simen kjaeraas wrote:Justin Johansson <no spam.com> wrote:Of course! Just a bit of tautological silliness on my part. :-)Natrium and sodium are the same.And natrium also? :-)FuncAttrPure: NaSo, sodium is pure huh. :) - Jonathan M Davis
Aug 25 2010
Steven Schveighoffer wrote:Through some more work with printf, I have to agree with bearophile, this lookup function is horrid.It is now, but when it was originally written (maybe as long as 20 years ago) there were only a few strings in the table, and it was fine. It's just outlived its design. Clearly, it should now be a hash table. Just goes to show how useful a profiler is.
Aug 24 2010
== Quote from Walter Bright (newshound2 digitalmars.com)'s articleSteven Schveighoffer wrote:Wow, now it's really hit home for me how much programming languages and libraries have advanced in the past 20 years. Nowadays any reasonable person would generally use a hash table even for small N because it's not any harder to code. Any modern language worth its salt comes with one either built in or in the standard lib. I guess 20 years ago this wasn't so.Through some more work with printf, I have to agree with bearophile, this lookup function is horrid.It is now, but when it was originally written (maybe as long as 20 years ago) there were only a few strings in the table, and it was fine. It's just outlived its design. Clearly, it should now be a hash table. Just goes to show how useful a profiler is.
Aug 24 2010
On Tue, 24 Aug 2010 18:00:32 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Steven Schveighoffer wrote:Yes, I'm glad you pushed me to do it. Looking forward to the fix. -SteveThrough some more work with printf, I have to agree with bearophile, this lookup function is horrid.It is now, but when it was originally written (maybe as long as 20 years ago) there were only a few strings in the table, and it was fine. It's just outlived its design. Clearly, it should now be a hash table. Just goes to show how useful a profiler is.
Aug 25 2010
Steven Schveighoffer wrote:The two secrets to writing fast code are: 1. using a profiler 2. looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two, and will correspondingly expend hundreds of hours "optimizing" and getting perplexing results.Just goes to show how useful a profiler is.Yes, I'm glad you pushed me to do it. Looking forward to the fix.
Aug 25 2010
== Quote from Walter Bright (newshound2 digitalmars.com)'s articleSteven Schveighoffer wrote:I think you overestimate the amount of programmers that can read assembler nowadays. FWIW I only learned when I posted a bunch of stuff here about various performance issues and you kept asking me to read the disassembly. In hindsight it was well worth it, though. I think reading assembly language and understanding the gist of how things work at that level is still an important skill for modern programmers. While writing assembly is notoriously hard (I've never even tried for anything non-trivial), reading it is a heck of a lot easier to pick up. I went from zero to basically literate in a few evenings.The two secrets to writing fast code are: 1. using a profiler 2. looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two, and will correspondingly expend hundreds of hours "optimizing" and getting perplexing results.Just goes to show how useful a profiler is.Yes, I'm glad you pushed me to do it. Looking forward to the fix.
Aug 25 2010
dsimcha wrote:I think you overestimate the amount of programmers that can read assembler nowadays.The thing is, you *don't* need to be able to read assembler in order to make sense of the assembler output! For example, if: f(); is in the source code, you don't need to know much assembler to see if it's generating one instruction or a hundred.FWIW I only learned when I posted a bunch of stuff here about various performance issues and you kept asking me to read the disassembly. In hindsight it was well worth it, though. I think reading assembly language and understanding the gist of how things work at that level is still an important skill for modern programmers. While writing assembly is notoriously hard (I've never even tried for anything non-trivial), reading it is a heck of a lot easier to pick up. I went from zero to basically literate in a few evenings.Right, assembler isn't hard to read after you spend a few moments with it. After all, MOV EAX,3 is hardly rocket science!
Aug 25 2010
"Walter Bright" <newshound2 digitalmars.com> wrote in message news:i53ucl$22nt$1 digitalmars.com...Right, assembler isn't hard to read after you spend a few moments with it. After all, MOV EAX,3 is hardly rocket science!Heh, funny thing about difficulty is how relative it can be. I've heard people who do rocketry say that rocket science really isn't as hard as people think. But programming doesn't come very naturally to most people, either. It would be funny to hear one rocket scientist say to another rocket scientist, "Oh come on, it's not computer programming!"
Aug 25 2010
Nick Sabalausky wrote:"Walter Bright" <newshound2 digitalmars.com> wrote in messageDoing amateur rocketry isn't that hard, the formulas are simple and the more complex stuff (the engines) are off-the-shelf components. It isn't even that hard to build your own engines. The harder stuff is when you put a man on the top of it, and you try to make it reliable.is hardly rocket science!Heh, funny thing about difficulty is how relative it can be. I've heard people who do rocketry say that rocket science really isn't as hard as people think. But programming doesn't come very naturally to most people, either. It would be funny to hear one rocket scientist say to another rocket scientist, "Oh come on, it's not computer programming!"
Aug 25 2010
Hello dsimcha,FWIW I only learned when I posted a bunch of stuff here about various performance issues and you kept asking me to read the disassembly. In hindsight it was well worth it, though.I still thing CS-101 should be in ASM. It would give people a better understanding of what really happens as well weed out the total incompetents. OTOH I think CS-102 should be in a scheam or one of it's ilk to teach how the theory works independent of the machine. :) -- ... <IXOYE><
Aug 25 2010
On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Steven Schveighoffer wrote:You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :) -SteveThe two secrets to writing fast code are: 1. using a profiler 2. looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two...Just goes to show how useful a profiler is.Yes, I'm glad you pushed me to do it. Looking forward to the fix.
Aug 25 2010
Wed, 25 Aug 2010 14:53:58 -0400, Steven Schveighoffer wrote:On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright <newshound2 digitalmars.com> wrote:He forgot: 0. use a better algorithm (the big O notation matters, like in this case)Steven Schveighoffer wrote:You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)The two secrets to writing fast code are: 1. using a profiler 2. looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two...Just goes to show how useful a profiler is.Yes, I'm glad you pushed me to do it. Looking forward to the fix.
Aug 25 2010
== Quote from retard (re tard.com.invalid)'s articleWed, 25 Aug 2010 14:53:58 -0400, Steven Schveighoffer wrote:Yeah, but unless you use a profiler, how are you going to find those spots where N isn't as small as you thought it would be?On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright <newshound2 digitalmars.com> wrote:He forgot: 0. use a better algorithm (the big O notation matters, like in this case)Steven Schveighoffer wrote:You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)The two secrets to writing fast code are: 1. using a profiler 2. looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two...Just goes to show how useful a profiler is.Yes, I'm glad you pushed me to do it. Looking forward to the fix.
Aug 25 2010
Wed, 25 Aug 2010 19:08:37 +0000, dsimcha wrote:== Quote from retard (re tard.com.invalid)'s articleTest-driven development, automatic testing tools, common sense? Sometimes the profiler's output is too fine-grained.Wed, 25 Aug 2010 14:53:58 -0400, Steven Schveighoffer wrote:Yeah, but unless you use a profiler, how are you going to find those spots where N isn't as small as you thought it would be?On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright <newshound2 digitalmars.com> wrote:He forgot: 0. use a better algorithm (the big O notation matters, like in this case)Steven Schveighoffer wrote:You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)The two secrets to writing fast code are: 1. using a profiler 2. looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two...Just goes to show how useful a profiler is.Yes, I'm glad you pushed me to do it. Looking forward to the fix.
Aug 25 2010
On Wed, 25 Aug 2010 15:11:17 -0400, retard <re tard.com.invalid> wrote:Wed, 25 Aug 2010 19:08:37 +0000, dsimcha wrote:On the contrary, this was one of those bugs that you almost need a profiler for. Consider that after over 10 years of D compilers nobody has found this deficiency until my little library came along. And even then, it's hard to say there actually *is* a problem, the compiler runs and outputs valid code, and if you use the -v switch it's continuously doing things. Even when you profile it, you can see that the errant function only consumes small chunks of time, but it adds up to an unacceptable level. Test-driven development is only useful if you have certain criteria you expect to achieve. How do you define how fast the compiler *should* run until you run it? It's a very complex piece of software where performance is secondary to correctness. I can understand not having touched code that outputs an object format for 20 years. I don't regularly go through my code looking for opportunities to increase big-O performance. I'm just glad it's been found and will be fixed. -Steve== Quote from retard (re tard.com.invalid)'s articleTest-driven development, automatic testing tools, common sense? Sometimes the profiler's output is too fine-grained.Wed, 25 Aug 2010 14:53:58 -0400, Steven Schveighoffer wrote:Yeah, but unless you use a profiler, how are you going to find those spots where N isn't as small as you thought it would be?On Wed, 25 Aug 2010 14:37:33 -0400, Walter Bright <newshound2 digitalmars.com> wrote:He forgot: 0. use a better algorithm (the big O notation matters, like in this case)Steven Schveighoffer wrote:You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)The two secrets to writing fast code are: 1. using a profiler 2. 
looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two...Just goes to show how useful a profiler is.Yes, I'm glad you pushed me to do it. Looking forward to the fix.
Aug 25 2010
retard wrote:Wed, 25 Aug 2010 19:08:37 +0000, dsimcha wrote:Neither of those are designed to find bottlenecks, and I've never seen one that could. Besides, why avoid a tool that is *designed* to find bottlenecks, like a profiler?Yeah, but unless you use a profiler, how are you going to find those spots where N isn't as small as you thought it would be?Test-driven develoment, automatic testing tools,common sense?Is not a substitute for measurement. Like I alluded to, I've seen lots of programmers using common sense to optimize the wrong part of the program, and failing to get useful results. Yes, I've had them *insist* to me (to the point of yelling) that that's were the bottlenecks were, until I ran the profiler on their code and showed them otherwise.Sometimes the profiler's output is too fine-grained.There are many different profilers, with all kinds of different approaches. Some high level, some at the instruction level, some free, some for pay. All of them are cheaper than spending hundreds of hours optimizing the wrong part of the code.
Aug 25 2010
retard:0. use a better algorithm (the big O notation matters, like in this case)This is a big mistake, because: - Optimizing before you know what to optimize is premature optimization. The profiler is one of the best tools to find what to optimize. - Often data structures and algorithms are a trade-off between different needs. So "better" is not absolute, it's problem-specific, and the profiler helps to find such specific problems. And regarding the problem of searching in a sequence of items, if the sequence is small (probably up to 10 or 20 if the items are integers, the language is a low level one and the associative array is not very efficient), a linear search or a binary search is often faster. --------------- dsimcha:While writing assembly is notoriously hard (I've never even tried for anything non-trivial),Using well certain Java frameworks is harder :-) Bye, bearophile
Aug 25 2010
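bearophile's point about small sequences can be made concrete: for a handful of keys, a linear scan over a contiguous array is often competitive with or faster than a hash lookup, because it avoids hashing, bucket indirection, and poor cache locality. A toy comparison of the two membership tests (the 10-20 element crossover he mentions is workload- and machine-dependent, not a fixed rule):

```cpp
#include <algorithm>
#include <cassert>
#include <unordered_set>
#include <vector>

// Membership via linear scan: O(n) comparisons, but contiguous,
// cache-friendly, and branch-predictable for tiny n.
bool contains_linear(const std::vector<int>& v, int key) {
    return std::find(v.begin(), v.end(), key) != v.end();
}

// Membership via hashing: O(1) expected, but every probe pays for a
// hash computation and a bucket indirection, which dominates at small n.
bool contains_hashed(const std::unordered_set<int>& s, int key) {
    return s.count(key) != 0;
}
```

Both answer the same question; which one wins is exactly the kind of thing a profiler (not intuition) should decide, which is the thread's recurring point.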
bearophile wrote:And regarding the problem of searching in a sequence of items, if the sequence is small (probably up to 10 or 20 if the items are integers, the language is a low level one and the associative array is not very efficient), a linear search or a binary search is often faster.Yup, and that piece of code was written in a time where there were very few items added into the string table. It never showed up on the radar before.
Aug 25 2010
== Quote from Walter Bright (newshound2 digitalmars.com)'s articlebearophile wrote:I wonder how much of the compile time of more typical projects is taken up by this linear search. Could it be that that's also why std.stdio compiles relatively slow? It's a big module that does a lot of template instantiations. If this silly bug was a bottleneck everywhere, then I'd love to see D vs. Go compile times after this gets fixed.And regarding the problem of searching in a sequence of items, if the sequence is small (probably up to 10 or 20 if the items are integers, the language is a low level one and the associative array is not very efficient), a linear search or a binary search is often faster.Yup, and that piece of code was written in a time where there were very few items added into the string table. It never showed up on the radar before.
Aug 25 2010
dsimcha wrote:I wonder how much of the compile time of more typical projects is taken up by this linear search. Could it be that that's also why std.stdio compiles relatively slow? It's a big module that does a lot of template instantiations. If this silly bug was a bottleneck everywhere, then I'd love to see D vs. Go compile times after this gets fixed.It could very well be the source of these issues.
Aug 25 2010
retard wrote:He forgot: 0. use a better algorithm (the big O notation matters, like in this case)No, I didn't forget that. There's no benefit to using a better algorithm in the code that isn't the bottleneck. In my experience, even very experienced developers are nearly always wrong about where the bottlenecks are if they've never used a profiler.
Aug 25 2010
Steven Schveighoffer wrote:You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)1. He had the test case, I didn't. 2. People have repeatedly suggested I delegate some of the compiler work. Why not?
Aug 25 2010
On Wed, 25 Aug 2010 16:29:06 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Steven Schveighoffer wrote:He == me :) The test case was available as a tarball download at www.dsource.org/projects/dcollections. Not that I mind doing the dirty work, if it gets results, but asking someone to compile your product in a different way and then asking them to try and analyze the output of *your* program isn't the best way to get results. If I told Microsoft that Word was crashing on a document it made, and they responded by sending me the source code for Word and said "You have the test case, so you figure it out" I don't think people would like them very much. I have had this problem for months, and haven't really pushed it except for snide remarks until recently, when I figured if I didn't do it, nobody would. I understand the lack of time, and that was why I did the work, but I didn't really expect to get results.You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)1. He had the test case, I didn't.2. People have repeatedly suggested I delegate some of the compiler work. Why not?What I've done hardly qualifies as doing compiler work. I just helped identify the problem :) I hope you plan on fixing it, I can't. -Steve
Aug 25 2010
Steven Schveighoffer wrote:On Wed, 25 Aug 2010 16:29:06 -0400, Walter Bright <newshound2 digitalmars.com> wrote:I hope that you enjoyed doing this, and I hope to make building the compiler an easy thing for users to do, if they are so inclined. I also wanted to push the issue of using a profiler <g>.Steven Schveighoffer wrote:He == me :) The test case was available as a tarball download at www.dsource.org/projects/dcollections. Not that I mind doing the dirty work, if it gets results, but asking someone to compile your product in a different way and then asking them to try and analyze the output of *your* program isn't the best way to get results. If I told Microsoft that Word was crashing on a document it made, and they responded by sending me the source code for Word and said "You have the test case, so you figure it out" I don't think people would like them very much. I have had this problem for months, and haven't really pushed it except for snide remarks until recently, when I figured if I didn't do it, nobody would. I understand the lack of time, and that was why I did the work, but I didn't really expect to get results.You mean like asking someone who reported low performance of your program on the newsgroup to do it for you? :)1. He had the test case, I didn't.Yes, I'll fix it.2. People have repeatedly suggested I delegate some of the compiler work. Why not?What I've done hardly qualifies as doing compiler work. I just helped identify the problem :) I hope you plan on fixing it, I can't.
Aug 25 2010
== Quote from Walter Bright (newshound2 digitalmars.com)'s articleSteven Schveighoffer wrote:There are also those who are not programmers, and don't know what they are doing in the first place. A couple years back I was hired as part of a 'rural out-sourcing' experiment where they took people in the local area who had _some_ technical potential. They would then be hired out cheaper than experienced programmers. We went through a 14 week course for Java boot camp. Out of 24 I was the only one who knew anything about programming. Through the course they weren't told anything about profiling, looking at assembly language, or using a debugger. They were taught the absolute minimum. I watched several who, when the program wouldn't compile or work right, would randomly make changes trying to get the code to work. Be afraid. Be very afraid.The two secrets to writing fast code are: 1. using a profiler 2. looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two, and will correspondingly expend hundreds of hours "optimizing" and getting perplexing results.Just goes to show how useful a profiler is.Yes, I'm glad you pushed me to do it. Looking forward to the fix.
Aug 25 2010
"Era Scarecrow" <rtcvb32 yahoo.com> wrote in message news:i54qi9$1d2g$1 digitalmars.com...== Quote from Walter Bright (newshound2 digitalmars.com)'s articleFrom what I've seen, you get essentially the same results from most HR depts. The worst applicants always seem to look the best to the HR folks and vice versa.Steven Schveighoffer wrote:There are also those who are not programmers, and don't know what they are doing in the first place. A couple years back I was hired as part of a 'rural out-sourcing' experiment where they took people in the local area who had _some_ technical potential. They would then be hired out cheaper than experienced programmers. We went through a 14 week course for Java boot camp. Out of 24 I was the only one who knew anything about programming. Through the course they weren't told anything about profiling, looking at assembly language, or using a debugger. They were taught the absolute minimum. I watched several who, when the program wouldn't compile or work right, would randomly make changes trying to get the code to work. Be afraid. Be very afraid.The two secrets to writing fast code are: 1. using a profiler 2. looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two, and will correspondingly expend hundreds of hours "optimizing" and getting perplexing results.Just goes to show how useful a profiler is.Yes, I'm glad you pushed me to do it. Looking forward to the fix.
Aug 25 2010
Era Scarecrow wrote:== Quote from Walter Bright (newshound2 digitalmars.com)'s articleSure, but my advice is directed at the people who *do* know what they are doing, but are avoiding using a profiler and looking at the assembly output.The two secrets to writing fast code are: 1. using a profiler 2. looking at the assembler output of the compiler In my experience, programmers will go to astonishing lengths to avoid doing those two, and will correspondingly expend hundreds of hours "optimizing" and getting perplexing results.There are also those who are not programmers, and don't know what they are doing in the first place.
Aug 25 2010
Steven Schveighoffer wrote:
> On Tue, 24 Aug 2010 03:58:57 -0400, Walter Bright <newshound2 digitalmars.com> wrote:
>> Steven Schveighoffer wrote:
>>> With profiling enabled, gprof outputs this as the top hitters:
>>>
>>> Flat profile:
>>>
>>> Each sample counts as 0.01 seconds.
>>>   %   cumulative   self              self     total
>>>  time   seconds   seconds    calls  ms/call  ms/call  name
>>>  77.76      6.68     6.68     2952     2.26     2.26  elf_findstr(Outbuffer*, char const*, char const*)
>>>   2.10      6.86     0.18     4342     0.04     0.04  searchfixlist
>>
>> elf_findstr definitely looks like a problem area. I can't look at it right now, so can you post this to bugzilla please?
>
> http://d.puremagic.com/issues/show_bug.cgi?id=4721

Let me know how this works: http://www.dsource.org/projects/dmd/changeset/628
Aug 25 2010
On 2010-08-26 08:13, Walter Bright wrote:
> [...]
> Let me know how this works: http://www.dsource.org/projects/dmd/changeset/628

Shouldn't machobj.c get the same optimization?

-- 
/Jacob Carlborg
Aug 26 2010
Hello Jacob,

> On 2010-08-26 08:13, Walter Bright wrote:
>> [...]
>
> Shouldn't machobj.c get the same optimization?

Shouldn't something like a table lookup be shared rather than duplicated?

-- 
... <IXOYE><
Aug 26 2010
On 2010-08-26 16:14, BCS wrote:
> Hello Jacob,
>
>> Shouldn't machobj.c get the same optimization?
>
> Shouldn't something like a table lookup be shared rather than duplicated?

Yes, that would be better.

-- 
/Jacob Carlborg
Aug 27 2010
On Thu, 26 Aug 2010 02:13:34 -0400, Walter Bright <newshound2 digitalmars.com> wrote:
> [...]
> Let me know how this works: http://www.dsource.org/projects/dmd/changeset/628

Better, now takes 20 seconds vs over 60. The new culprit:

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 75.79      6.51     6.51     8103     0.80     0.80  TemplateDeclaration::toJsonBuffer(OutBuffer*)
  3.14      6.78     0.27  1668093     0.00     0.00  StructDeclaration::semantic(Scope*)
  2.10      6.96     0.18        1   180.00   180.00  do32bit(FL, evc*, int)
  1.98      7.13     0.17    15445     0.01     0.01  EnumDeclaration::toJsonBuffer(OutBuffer*)
  0.70      7.19     0.06   656268     0.00     0.00  Port::isSignallingNan(long double)
  0.47      7.23     0.04   915560     0.00     0.00  StructDeclaration::toCBuffer(OutBuffer*, HdrGenState*)
  0.47      7.27     0.04                             Dsymbol::searchX(Loc, Scope*, Identifier*)

I haven't looked at toJsonBuffer at all (btw, why are we calling this function if I'm not outputting json?)

-Steve
Aug 26 2010
Steven Schveighoffer:
> I haven't looked at toJsonBuffer at all (btw, why are we calling this function if I'm not outputting json?)

Fit for a new bugzilla entry?

Bye,
bearophile
Aug 26 2010
On Thu, 26 Aug 2010 08:36:44 -0400, bearophile <bearophileHUGS lycos.com> wrote:
> Fit for a new bugzilla entry?

I'll just put it into the same report, and let Walter decide if it's still a bug. I am less than ignorant when it comes to compiler innards.

-Steve
Aug 26 2010
Steven Schveighoffer wrote:
> Better, now takes 20 seconds vs over 60. The new culprit:
>
> 75.79      6.51     6.51     8103     0.80     0.80  TemplateDeclaration::toJsonBuffer(OutBuffer*)

This is most peculiar, as that should have shown up on the previous profile.

> I haven't looked at toJsonBuffer at all (btw, why are we calling this function if I'm not outputting json?)

That only happens if -X is passed on the command line, or one of the files on the command line has a .json index.
Aug 26 2010
On Thu, 26 Aug 2010 12:53:59 -0400, Walter Bright <newshound2 digitalmars.com> wrote:
> This is most peculiar, as that should have shown up on the previous profile.

I did some more testing. I think I compiled the profiled version of the svn trunk dmd wrong. This is what happens when you let idiots debug your code for you ;) I recompiled it, and here is the new list:

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 80.31     11.99    11.99    19000     0.63     0.63  searchfixlist
  0.67     12.09     0.10   203173     0.00     0.00  StringTable::search(char const*, unsigned int)
  0.60     12.18     0.09   369389     0.00     0.00  Lexer::scan(Token*)
  0.54     12.26     0.08   953613     0.00     0.00  ScopeDsymbol::search(Loc, Identifier*, int)
  0.47     12.33     0.07  1449798     0.00     0.00  calccodsize
  0.40     12.39     0.06   587814     0.00     0.00  code_calloc
  0.40     12.45     0.06    41406     0.00     0.00  pinholeopt
  0.33     12.50     0.05   901563     0.00     0.00  _aaGetRvalue(AA*, void*)
  0.33     12.55     0.05   138329     0.00     0.00  reftoident(int, unsigned long long, Symbol*, unsigned long long, int)
  0.33     12.60     0.05    26849     0.00     0.00  ecom(elem**)
  0.27     12.64     0.04   230869     0.00     0.00  Type::totym()
  0.27     12.68     0.04    62784     0.00     0.00  touchfunc(int)
  0.27     12.72     0.04    37623     0.00     0.00  optelem(elem*, int)
  0.27     12.76     0.04    28348     0.00     0.00  assignaddrc

It looks like searchfixlist is another linear search. Looking back at the other profile, it was the second highest consumer of runtime, at 2% before your fix, so it has now catapulted up to 80% of the runtime. It looks like a linked-list search, so it might benefit from a hash table as well? I'm not really sure. Also, the earlier 2% was from compiling only one file; with the shortened run, searchfixlist's share is much higher. I'll update the bug.
Aug 26 2010
Steven Schveighoffer wrote:
> I'll update the bug.

Thanks!
Aug 26 2010
Steven Schveighoffer wrote:
> 80.31     11.99    11.99    19000     0.63     0.63  searchfixlist

Just for fun, searchfixlist goes back at least to 1983 or so.
Aug 26 2010
Walter Bright:
> Just for fun, searchfixlist goes back at least to 1983 or so.

It contains this if (I am not able to indent it well):

    if (s->Sseg == p->Lseg &&
        (s->Sclass == SCstatic ||
#if TARGET_LINUX || TARGET_OSX || TARGET_FREEBSD || TARGET_SOLARIS
         (!(config.flags3 & CFG3pic) && s->Sclass == SCglobal)) &&
#else
         s->Sclass == SCglobal) &&
#endif
        s->Sxtrnnum == 0 && p->Lflags & CFselfrel)
    {

How do you rewrite that in good D code? A possible way is to split that messy if into two nested ifs. Between the first and second if you define a boolean variable in two different ways using a static if. And in the second if you use the boolean variable and the second part of the runtime test. Something like this:

    if (part1) {
        static if (versions_test) {
            bool aux = ...;
        } else {
            bool aux = ...;
        }
        if (aux && part2) {
            // ...
        } else {
            // ...
        }
    }

aux is defined in the middle and not before the first if because performing this runtime test is not necessary when part1 fails.

Bye,
bearophile
Aug 26 2010
Hello Walter,

> Just for fun, searchfixlist goes back at least to 1983 or so.

Early or late '83? I ask because *I* go back to '83 or so. :)

-- 
... <IXOYE><
Aug 26 2010
BCS wrote:
> Hello Walter,
>
>> Just for fun, searchfixlist goes back at least to 1983 or so.
>
> Early or late '83? I ask because *I* go back to '83 or so. :)

June 7th, 3:26 PM. Give or take 6 months.
Aug 26 2010
Walter Bright Wrote:
> It is now, but when it was originally written (maybe as long as 20 years ago) there were only a few strings in the table, and it was fine. It's just outlived its design. Clearly, it should now be a hash table.

Where did you get it? Digital Mars seems to not have an ELF C compiler.
Aug 25 2010