digitalmars.D.announce - DMD 1.029 and 2.013 releases
- Walter Bright (5/5) Apr 23 2008 http://www.digitalmars.com/d/1.0/changelog.html
- davidl (14/19) Apr 24 2008 nice opDot feature in 2.0.
- Christopher Wright (8/35) Apr 24 2008 You mean, there are situations in which you want to be sure that you're
- bearophile (12/16) Apr 24 2008 This bugfix will probably save me lot of problems (and code), thank you.
- davidl (5/33) Apr 24 2008 opDot(char[]) is what all I meant
- Bill Baxter (4/11) Apr 24 2008 Congrats! Hopefully that 'wrong vtable call' fix will make the
- Robert Fraser (5/12) Apr 24 2008 Awesome, awesome...
- Robert Fraser (3/6) Apr 24 2008 Oops, just checked the page source, and indeed this refers to switch
- Steven Schveighoffer (6/12) Apr 24 2008 I think it is a bug in the changelog. I think he meant to write somethi...
- Gide Nwawudu (8/13) Apr 24 2008 Nice release.
- Tom S (5/5) Apr 24 2008 Pretty wild stuff :) Thanks!
- Sean Kelly (4/11) Apr 24 2008 TLS huh? Nice! So what will replace the volatile statement? As it is,...
- Steven Schveighoffer (6/16) Apr 24 2008 You can sort of work around it by wrapping the previously volatile state...
- Walter Bright (5/9) Apr 24 2008 Because after listening in while experts debated how to do write
- Jarrett Billingsley (5/14) Apr 24 2008 But what about things like accessing memory-mapped registers? That is, ...
- Walter Bright (7/10) Apr 24 2008 I've written code that uses memory mapped registers. Even in programs
- Sean Kelly (7/17) Apr 24 2008 From my understanding, the problem with doing this via inline assembler ...
- Walter Bright (4/9) Apr 24 2008 There's always a way to do it, even if you have to write an external
- Sean Kelly (4/13) Apr 24 2008 Well... I'm looking forward to seeing what you all have planned for mult...
- Steven Schveighoffer (8/19) Apr 24 2008 Who's adding? We already have it and it works.
- Walter Bright (5/29) Apr 24 2008 No, we don't. There's volatile in C, which is being abandoned as a mess
- Jarrett Billingsley (5/24) Apr 25 2008 Wait, you mean the volatile statement was never even implemented, even i...
- Walter Bright (3/6) Apr 25 2008 All it did was apply C's volatile semantics to the enclosed statements,
- Jarrett Billingsley (7/12) Apr 25 2008 No, it's not. But it's good enough for system programming, when you hav...
- Walter Bright (8/13) Apr 25 2008 I don't agree. I've done ISRs and memory mapped I/O. The actual piece of...
- Sean Kelly (3/5) Apr 25 2008 I thought volatile was a statement in D, not a type?
- Walter Bright (2/7) Apr 26 2008 It's a type for C and C++.
- Sean Kelly (3/11) Apr 27 2008 Right. And volatile in C/C++ is a mess :-)
- Sean Kelly (15/24) Apr 24 2008 Every tool can be mis-used with insufficient understanding. Look at sha...
- Walter Bright (15/31) Apr 24 2008 Of course. But successfully writing multithreaded code that uses shared
- Sean Kelly (16/47) Apr 24 2008 I suppose I should have been more clear. An underlying assumption of mi...
- Walter Bright (29/68) Apr 24 2008 The problem with locks are:
- Sean Kelly (30/98) Apr 24 2008 1) The cost of acquiring or committing a lock is generally roughly equiv...
- Walter Bright (4/5) Apr 24 2008 Sure they can forget. All the memory in the process can be accessed by
- Russell Lewis (10/14) Apr 24 2008 What exactly do you mean by "memory synchronization?" Just a write
- Sean Kelly (8/22) Apr 24 2008 Yeah I meant an atomic RMW, or at least a load barrier for the acquire. ...
- Russell Lewis (7/29) Apr 24 2008 Ah, now I get what you were saying. Yes, I agree that atomic
- Sean Chittenden (17/22) Apr 24 2008 Having had several run ins with pthreads_*(3) implementations earlier
- Bruno Medeiros (5/6) Apr 27 2008 I tried to look up that term. Did you mean a "cubit" or a "qubit"?
- Bill Baxter (5/10) Apr 27 2008 Probably he's referring to this:
- Walter Bright (2/13) Apr 29 2008 Isn't google grand?
- Bruno Medeiros (10/24) Apr 29 2008 To understand, I had to search a bit more, to find this:
- Kevin Bealer (27/55) Apr 24 2008 I've use a tiny amount of lock-free-like programming here and there, whi...
- Sean Kelly (16/68) Apr 24 2008 Tango (and Ares before it) has support for atomic load, store, storeIf (...
- Kevin Bealer (4/79) Apr 24 2008 Thanks Sean -- By now I should know to just check Tango! (It's also pro...
- Bruno Medeiros (13/19) Apr 27 2008 Are you talking about some actual online discussion? If so, can you
- Walter Bright (2/5) Apr 27 2008 Yes.
- Bruno Medeiros (8/14) Apr 28 2008 Cool. I hope you really bring the experts on this one, cause it sure
- Sean Kelly (6/19) Apr 27 2008 I'm guessing there is, but since Walter appears opposed to atomics in
- Robert Fraser (3/6) Apr 27 2008 Given Bartoz's presentation last year he probably isn't totally opposed
- 0ffh (9/13) Apr 24 2008 Just out of curiosity, which approach would you recommend to ensure
- Walter Bright (2/9) Apr 25 2008 I suggest wrapping it in a mutex.
- Sean Kelly (4/13) Apr 25 2008 I suppose the obvious question here is: what if I want to create a mutex
- Walter Bright (2/4) Apr 25 2008 Why do you need volatile for that?
- Sean Kelly (3/8) Apr 25 2008 To restrict compiler optimizations performed on the code.
- Walter Bright (3/11) Apr 25 2008 The optimizer won't move global or pointer references across a function
- Lars Ivar Igesund (7/19) Apr 26 2008 Is that true for all compiler's or only DigitalMars ones?
- Walter Bright (2/15) Apr 26 2008 DM ones certainly. Others, I don't know about.
- Lars Ivar Igesund (8/24) Apr 26 2008 So you are saying that you're removing (or not going to implement) a fea...
- Walter Bright (7/9) Apr 26 2008 "volatile" doesn't work in other C++ compilers for multithreaded code.
- Charles D Hixson (7/23) Apr 26 2008 Perhaps that should be a part of the language spec? Or at
- Sean Kelly (3/16) Apr 26 2008 Even if the function is inlined?
- Walter Bright (2/18) Apr 26 2008 No, but a mutex involves an OS call. Inlining is also easily prevented.
- Sean Kelly (4/23) Apr 26 2008 An OS call isn't always involved. See, for example:
- Walter Bright (3/5) Apr 26 2008 Then you can write the mutex as your own external function which cannot
- Sean Kelly (5/11) Apr 27 2008 And perhaps write portions of that mutex as separate external functions
- BCS (3/6) Apr 26 2008 using atomic ASM ops a (single process) mutex can be implemented with no...
- Walter Bright (2/10) Apr 26 2008 Those use inline assembler (which is fine).
- Bill Baxter (6/25) Apr 26 2008 Maybe you two could arrange a time to have a higher bandwidth
- Walter Bright (4/8) Apr 26 2008 For the moment, if you are really concerned about it, write it in the 2
- Sean Kelly (11/20) Apr 27 2008 That's easy for x86 in D, but for other platforms it requires using C or...
- Sean Kelly (10/26) Apr 24 2008 ...and the function has to be opaque. And even then, I think there's a ...
- Walter Bright (3/5) Apr 24 2008 It wasn't safe anyway, because it wasn't implemented. For now, just use
- Sean Kelly (5/10) Apr 24 2008 Um, I thought that the volatile statement effectively turned off optimiz...
- Walter Bright (3/10) Apr 24 2008 Just turning off optimization isn't good enough. The processor can
- Sean Kelly (4/14) Apr 24 2008 Of corse it can. But there are assembly instructions for that bit. The...
- Steven Schveighoffer (10/15) Apr 24 2008 "Hidden methods now get a compile time warning rather than a runtime one...
- Bruno Medeiros (5/18) Apr 27 2008 *sigh of relief*
- davidl (9/29) Apr 27 2008 shit! I've spent a lot of effort on debugging my legacy d1.0 code while ...
- Anders F Björklund (25/27) Apr 24 2008 I wanted to install both of dmd and dmd2,
- Jesse Phillips (3/40) Apr 24 2008 I agree here, I feel the compiler should do more distinction between v1
- Bruno Medeiros (25/27) Apr 27 2008 In addition, it seems that now the order of evaluation is less
- Bruno Medeiros (11/17) Apr 27 2008 This FAQ entry was made in response to the suggestion someone made that
- Bill Baxter (19/35) Apr 27 2008 Yes that was indeed one of the things I was thinking.
- Robert Fraser (6/9) Apr 27 2008 Such tools are very possible. JDT can automatically add "final" to every...
http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.029.zip

This starts laying the foundation for multiprogramming support:

http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.013.zip
Apr 23 2008
On Thu, 24 Apr 2008 14:35:40 +0800, Walter Bright <newshound1 digitalmars.com> wrote:

> This starts laying the foundation for multiprogramming support:
> http://www.digitalmars.com/d/2.0/changelog.html
> http://ftp.digitalmars.com/dmd.2.013.zip

Nice opDot feature in 2.0. Though sometimes, on Windows, people need some unchecked opDot calling. Consider ActiveX stuff. People don't always want to have to create their own bindings... especially for some R&D test.

myActiveXObject.Some_Func_Can_be_Determinated_at_runtime();
myActiveXObject.Some_Compile_Time_Unchecked_Var = 3;

With the current opDot, we are still not able to do so. Yet the current opDot looks cleaner. I feel it's kind of a dilemma...

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Apr 24 2008
davidl wrote:
> Nice opDot feature in 2.0. Though sometimes, on Windows, people need
> some unchecked opDot calling.

You mean, there are situations in which you want to be sure that you're using opDot and some in which you want to be sure you're not? The former you can get by just writing "foo.opDot.x", but not the latter. I wonder how this works with overloads, too.

> Consider ActiveX stuff. People don't always want to have to create
> their own bindings... especially for some R&D test.
>
> myActiveXObject.Some_Func_Can_be_Determinated_at_runtime();
> myActiveXObject.Some_Compile_Time_Unchecked_Var = 3;
>
> With the current opDot, we are still not able to do so. Yet the
> current opDot looks cleaner. I feel it's kind of a dilemma...

You mean, some sort of dynamic function call system? Like opDot(char[]), so you can do:

auto x = foo.bar; // calls foo.opDot("bar");
Apr 24 2008
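For context, a minimal sketch of the forwarding-style opDot as the 2.013 changelog describes it; the names Wrapper and Inner are invented for illustration:

struct Inner { int x; void foo() {} }

struct Wrapper {
    Inner inner;

    // 2.013-style opDot: a member access w.x that Wrapper itself does
    // not define is rewritten as w.opDot().x, so lookups are forwarded
    // to 'inner' -- but still checked at compile time against Inner.
    Inner* opDot() { return &inner; }
}

void main() {
    Wrapper w;
    w.x = 3;    // OK: Inner has a member 'x'
    w.foo();    // OK: Inner has a member 'foo'
    // w.bar(); // compile-time error -- exactly the check davidl wants
    //          // to be able to bypass for ActiveX-style dispatch
}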
> Bugzilla 1741: crash on associative array with static array as index type

This bugfix will probably save me a lot of problems (and code), thank you. For a future release of D 1.x I hope to see the module system fixed too (currently not in Bugzilla); that may solve another big chunk of my problems.

Christopher Wright:
> You mean, some sort of dynamic function call system? Like
> opDot(char[]), so you can do:
> auto x = foo.bar; // calls foo.opDot("bar");

As you know, Python has the built-in methods __getattr__() and __getattribute__():
http://docs.python.org/ref/attribute-access.html
http://docs.python.org/ref/new-style-attribute-access.html

They are useful, but they probably fit better in a dynamic language with a shell interface. I have used them once in a while:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/409000

But I don't know where I can use them in D yet...

Bye,
bearophile
Apr 24 2008
On Thu, 24 Apr 2008 21:46:06 +0800, Christopher Wright <dhasenan gmail.com> wrote:

> You mean, some sort of dynamic function call system? Like
> opDot(char[]), so you can do:
> auto x = foo.bar; // calls foo.opDot("bar");

opDot(char[]) is all I meant.

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Apr 24 2008
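No such hook exists in 2.013. Purely as a sketch of what davidl is asking for, a hypothetical string-based opDot might look like this (every name here is invented):

struct ActiveXProxy {
    // Hypothetical: dispatch by member name at runtime, e.g. by
    // forwarding to IDispatch::Invoke on the wrapped COM object.
    int opDot(char[] member) {
        // runtime lookup of 'member' would go here
        return 0;
    }
}

// auto v = obj.Some_Compile_Time_Unchecked_Var; would then lower to
// auto v = obj.opDot("Some_Compile_Time_Unchecked_Var"); with no
// compile-time checking of the member name.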
Walter Bright wrote:
> http://www.digitalmars.com/d/1.0/changelog.html
> http://ftp.digitalmars.com/dmd.1.029.zip

Congrats! Hopefully that 'wrong vtable call' fix will make the DMD/DWT/Tango combination work once again.

--bb
Apr 24 2008
Walter Bright wrote:
> http://www.digitalmars.com/d/1.0/changelog.html
> http://ftp.digitalmars.com/dmd.1.029.zip

Awesome, awesome...

What does "s can now accept runtime initialized const and invariant case statements" mean? Do you mean switch statements, or is this referring to something else?
Apr 24 2008
Robert Fraser wrote:
> What does "s can now accept runtime initialized const and invariant
> case statements" mean? Do you mean switch statements, or is this
> referring to something else?

Oops, just checked the page source, and indeed this refers to switch statements. Sorry!
Apr 24 2008
"Robert Fraser" <fraserofthenight gmail.com> wrote in message news:fupdeb$pph$1 digitalmars.com...Robert Fraser wrote:I think it is a bug in the changelog. I think he meant to write something like: <a href="...switchstatement.html">switch statement</a>s can now accept... -SteveWhat does "s can now accept runtime initialized const and invariant case statements" mean? Do you mean switch statements or is this referring to something else?Oops, just checked the page source, and indeed this refers to switch statements. Sorry!
Apr 24 2008
On Wed, 23 Apr 2008 23:35:40 -0700, Walter Bright <newshound1 digitalmars.com> wrote:

> http://www.digitalmars.com/d/2.0/changelog.html
> http://ftp.digitalmars.com/dmd.2.013.zip

Nice release.

On the D2 change log, "download latest alpha compiler" points to the wrong zip file:
http://ftp.digitalmars.com/dmd.2.010.zip

Also, http://www.digitalmars.com/d/download.html requires updating.

Gide
Apr 24 2008
Pretty wild stuff :) Thanks!

--
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode
Apr 24 2008
Walter Bright wrote:
> This starts laying the foundation for multiprogramming support:
> http://www.digitalmars.com/d/2.0/changelog.html
> http://ftp.digitalmars.com/dmd.2.013.zip

TLS huh? Nice! So what will replace the volatile statement? As it is, that was the only safe way to perform lock-free operations in D.

Sean
Apr 24 2008
"Sean Kelly" wroteWalter Bright wrote:You can sort of work around it by wrapping the previously volatile statement in a function, but it seems like having volatile doesn't really hurt anything. I'm curious to know why it was so bad that it was worth removing... -Stevehttp://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.029.zip This starts laying the foundation for multiprogramming support: http://www.digitalmars.com/d/2.0/changelog.html http://ftp.digitalmars.com/dmd.2.013.zipTLS huh? Nice! So what will replace the volatile statement? As it is, that was the only safe way to perform lock-free operations in D.
Apr 24 2008
Steven Schveighoffer wrote:
> You can sort of work around it by wrapping the previously volatile
> statement in a function, but it seems like having volatile doesn't
> really hurt anything. I'm curious to know why it was so bad that it
> was worth removing...

Because after listening in while experts debated how to write multithreaded code safely, it became pretty clear that the chance of using volatile statements correctly is very small, even for experts. It's the wrong approach.
Apr 24 2008
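A minimal sketch of the workaround Steven describes, assuming the optimizer treats an out-of-line call as opaque (all names invented):

int* sharedFlag;

// Previously: volatile { ... *sharedFlag ... } -- now the access is
// hidden behind a function the optimizer can't see through. This only
// holds as long as the call is never inlined (see later in the thread).
int opaqueRead(int* p) { return *p; }

void waitForFlag() {
    while (opaqueRead(sharedFlag) == 0) {
        // spin; real code would still need a barrier or atomic op to
        // order the surrounding loads and stores
    }
}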
"Walter Bright" <newshound1 digitalmars.com> wrote in message news:fuqed8$18bm$2 digitalmars.com...Steven Schveighoffer wrote:But what about things like accessing memory-mapped registers? That is, as a hint to the compiler to say "don't inline this; don't cache results in registers"?You can sort of work around it by wrapping the previously volatile statement in a function, but it seems like having volatile doesn't really hurt anything. I'm curious to know why it was so bad that it was worth removing...Because after listening in while experts debated how to do write multithreaded code safely, it became pretty clear that the chances of using volatile statements correctly is very small, even for experts. It's the wrong approach.
Apr 24 2008
Jarrett Billingsley wrote:
> But what about things like accessing memory-mapped registers? That is,
> as a hint to the compiler to say "don't inline this; don't cache
> results in registers"?

I've written code that uses memory-mapped registers. Even in programs that manipulate hardware directly, the percentage of code that does this is vanishingly small. It is a very poor cost/benefit ratio to support such a capability directly. It's more appropriate to support it via peek/poke methods (which can be built-in compiler intrinsics) or inline assembler.
Apr 24 2008
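A sketch of the peek/poke approach Walter mentions, done with DMD-style x86 inline asm so the compiler neither caches the value in a register nor reorders the access (names and usage are illustrative only):

// Read a memory-mapped register. DMD returns uint results in EAX and
// does not optimize inside asm blocks.
uint peek(uint* addr) {
    asm {
        mov EAX, addr;   // EAX = the register's address
        mov EAX, [EAX];  // EAX = the value read from the device
    }
}

// Write a memory-mapped register.
void poke(uint* addr, uint value) {
    asm {
        mov EDX, addr;
        mov EAX, value;
        mov [EDX], EAX;  // store straight to the device
    }
}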
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> It's more appropriate to support it via peek/poke methods (which can
> be built-in compiler intrinsics) or inline assembler.

From my understanding, the problem with doing this via inline assembler is that some compilers can actually optimize inline assembler, leaving no truly portable way to do this in-language. This issue has come up on comp.programming.threads in the past, but I don't remember whether there was any resolution insofar as C++ is concerned.

Sean
Apr 24 2008
Sean Kelly wrote:
> From my understanding, the problem with doing this via inline
> assembler is that some compilers can actually optimize inline
> assembler, leaving no truly portable way to do this in-language.

There's always a way to do it, even if you have to write an external function and link it in. I still don't believe memory-mapped register access justifies adding complex language features.
Apr 24 2008
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> There's always a way to do it, even if you have to write an external
> function and link it in. I still don't believe memory-mapped register
> access justifies adding complex language features.

Well... I'm looking forward to seeing what you all have planned for multiprogramming in D!

Sean
Apr 24 2008
"Walter Bright" wroteSean Kelly wrote:Who's adding? We already have it and it works. If volatile was not already a solved problem, I'd say yeah, sure it might be more trouble than it's worth. But to remove it from the language seems unnecessary to me. I was just asking for justification for *removing* it, not justification for having it :) -SteveFrom my understanding, the problem with doing this via inline assembler is that some compilers can actually optimize inline assembler, leaving no truly portable way to do this in language. This issue has come up on comp.programming.threads in the past, but I don't remember whether there was any resolution insofar as C++ is concerned.There's always a way to do it, even if you have to write an external function and link it in. I still don't believe memory mapped register access justifies adding complex language features.
Apr 24 2008
Steven Schveighoffer wrote:
> Who's adding? We already have it, and it works.

No, we don't. There's volatile in C, which is being abandoned as a mess, and C++ is going with a new type, atomic. There's the volatile statement in D, which is unimplemented.

> If volatile were not already a solved problem, I'd say yeah, sure, it
> might be more trouble than it's worth. But removing it from the
> language seems unnecessary to me.

I don't agree that it's a solved problem.
Apr 24 2008
"Walter Bright" <newshound1 digitalmars.com> wrote in message news:furjvd$1hlm$1 digitalmars.com...Steven Schveighoffer wrote:Wait, you mean the volatile statement was never even implemented, even in D1? Where is this mentioned, anywhere?"Walter Bright" wroteNo, we don't. There's volatile in C, which is being abandoned as a mess and C++ is going with a new type, atomic. There's the volatile statement in D, which is unimplemented.Sean Kelly wrote:Who's adding? We already have it and it works.From my understanding, the problem with doing this via inline assembler is that some compilers can actually optimize inline assembler, leaving no truly portable way to do this in language. This issue has come up on comp.programming.threads in the past, but I don't remember whether there was any resolution insofar as C++ is concerned.There's always a way to do it, even if you have to write an external function and link it in. I still don't believe memory mapped register access justifies adding complex language features.
Apr 25 2008
Jarrett Billingsley wrote:
> Wait, you mean the volatile statement was never even implemented, even
> in D1? Where is this mentioned, anywhere?

All it did was apply C's volatile semantics to the enclosed statements, which is really not good enough for multithreading.
Apr 25 2008
"Walter Bright" <newshound1 digitalmars.com> wrote in message news:fut4bg$30a2$1 digitalmars.com...Jarrett Billingsley wrote:No, it's not. But it's good enough for system programming, when you have things like memory-mapped registers and memory locations that change on interrupts. Relying on ASM or other languages to do something so fundamental in a language that's _meant_ to be a system programming language seems like a terrible omission.Wait, you mean the volatile statement was never even implemented, even in D1?All it did was apply C's volatile semantics to the enclosed statements, which is really not good enough for multithreading.
Apr 25 2008
Jarrett Billingsley wrote:
> No, it's not. But it's good enough for system programming, when you
> have things like memory-mapped registers and memory locations that
> change on interrupts.

I don't agree. I've done ISRs and memory-mapped I/O. The actual piece of code that accessed the data that way formed a minuscule part of the program, even on programs that were completely interrupt driven (like the ASCII terminal program I wrote). Those are well served by two lines of inline asm or compiler built-in PEEK and POKE functions. Building a whole volatile subsystem into the type system for that is a huge waste of resources.
Apr 25 2008
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> Building a whole volatile subsystem into the type system for that is a
> huge waste of resources.

I thought volatile was a statement in D, not a type?

Sean
Apr 25 2008
Sean Kelly wrote:
> I thought volatile was a statement in D, not a type?

It's a type for C and C++.
Apr 26 2008
Walter Bright wrote:
> It's a type for C and C++.

Right. And volatile in C/C++ is a mess :-)

Sean
Apr 27 2008
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> Because after listening in while experts debated how to write
> multithreaded code safely, it became pretty clear that the chance of
> using volatile statements correctly is very small, even for experts.
> It's the wrong approach.

Every tool can be misused with insufficient understanding. Look at shared-memory multiprogramming, for instance. It's quite easy and understandable to share a few data structures between threads (which I'd assert is the original intent anyway), but common practice among non-experts is to use mutexes to protect code rather than data, and to call across threads willy-nilly. It's no wonder the commonly held belief is that multiprogramming is hard.

Regarding lock-free programming in particular, I think it's worth pointing out that leaving out support for lock-free programming in general excludes an entire realm of code from being written--not only library code to be ultimately used by everyday programmers, but kernel code and such as well. Look at the Linux source code, for example.

As for the C++0x discussions, I feel that some of the participants of the memory model discussion are experts in the field and understand quite well the issues involved.

Sean
Apr 24 2008
Sean Kelly wrote:
> Every tool can be misused with insufficient understanding.

Of course. But successfully writing multithreaded code that uses shared memory requires a level of expertise that is rare, and the need to write safe multithreaded code is far greater than the expertise available. Even for those capable of doing it, writing correct multithreaded code is hard, time-consuming, resistant to testing, and essentially impossible to prove correct. It's like writing assembler code with a hex editor.

> Look at shared-memory multiprogramming, for instance. It's quite easy
> and understandable to share a few data structures between threads

It is until one of those threads tries to change the data.

> (which I'd assert is the original intent anyway), but common practice
> among non-experts is to use mutexes to protect code rather than data,
> and to call across threads willy-nilly. It's no wonder the commonly
> held belief is that multiprogramming is hard.

The "multiprogramming is hard" belief is not based on a misunderstanding. It really is hard.

> Regarding lock-free programming in particular, I think it's worth
> pointing out that leaving out support for lock-free programming in
> general excludes an entire realm of code from being written.

I agree that lock-free programming is important, but volatile doesn't get you there.

> As for the C++0x discussions, I feel that some of the participants of
> the memory model discussion are experts in the field and understand
> quite well the issues involved.

Yes, there are a handful who do really understand it (Hans Boehm and Herb Sutter come to mind). If only the rest of us were half as smart <g>.
Apr 24 2008
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> Of course. But successfully writing multithreaded code that uses
> shared memory requires a level of expertise that is rare...

I disagree... see below.

> > Look at shared-memory multiprogramming, for instance. It's quite
> > easy and understandable to share a few data structures between
> > threads
>
> It is until one of those threads tries to change the data.

I suppose I should have been more clear. An underlying assumption of mine is that no thread maintains references into shared data unless it holds the lock that protects that data.

> The "multiprogramming is hard" belief is not based on a
> misunderstanding. It really is hard.

My claim is that multiprogramming is hard because the ability to share memory has been misused. It's not hard in general, in my opinion.

> I agree that lock-free programming is important, but volatile doesn't
> get you there.

How is it lacking? I grant that it's very low-level, but it does address the key concern for lock-free programming.

> Yes, there are a handful who do really understand it (Hans Boehm and
> Herb Sutter come to mind). If only the rest of us were half as smart
> <g>.

My personal belief is that the issue is really more a lack of plain old explanation of the concepts than anything else. The topic is rarely discussed outside of research papers, and most other documentation is either confusing or just plain wrong (the IA-32 memory model spec comes to mind, for example). Not to belittle the knowledge or experience of the C++ folks in any respect--this is simply my experience with the information surrounding the topic :-)

Sean
Apr 24 2008
Sean Kelly wrote:
> I suppose I should have been more clear. An underlying assumption of
> mine is that no thread maintains references into shared data unless it
> holds the lock that protects that data.

The problems with locks are:

1) they are expensive, so people try to optimize them away (grep for "double-checked locking")
2) people forget to use the locks
3) deadlocks

> My claim is that multiprogramming is hard because the ability to share
> memory has been misused. It's not hard in general, in my opinion.

When people as smart and savvy as Scott Meyers find it confusing, it's confusing. (Scott Meyers wrote the definitive paper on double-checked locking and what's wrong with it.) Heck, I have a hard enough time explaining what the difference between const and invariant is; how is memory coherency going to go down? <g>

> How is it lacking? I grant that it's very low-level, but it does
> address the key concern for lock-free programming.

volatile actually puts locks around accesses (at least in the Java memory model it does). So it doesn't get you lock-free programming. Just avoiding caching of reloads is not the key to lock-free programming--there's the ordering problem.

> My personal belief is that the issue is really more a lack of plain
> old explanation of the concepts than anything else.

I've seen many attempts at explaining it, including presentations by Herb Sutter himself. Sorry, but most of the audience doesn't get it.

I attended a conference a couple of years back on what to do about adding multithreading support to C++. There were about 30 attendees, pretty much the top guys in C++ programming, including Herb Sutter and Hans Boehm. Herb and Hans did most of the talking, and the rest of us sat there wondering "what's a cubit". Things have improved a bit since then, but it's pretty clear that the bulk of programmers are never going to get it, and getting MP programs to work will have the status of a black art. What's needed is something like what garbage collection did for memory management: the language has to take care of synchronization *automatically*. Being D, of course there will be a way for the sorcerers to practice the black art, but for the rest of us there needs to be a reliable and reasonable alternative.
Apr 24 2008
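The pattern Walter alludes to under (1) is double-checked locking, dissected in Meyers and Alexandrescu's paper. A sketch of the broken idiom in D (the class name is invented):

class Config {}
Config instance;

Config getInstance() {
    if (instance is null) {           // first check, taken without a lock
        synchronized {
            if (instance is null) {   // second check, under the lock
                // Broken: the store to 'instance' may become visible
                // to another core before the constructor's writes do,
                // so a thread passing the unlocked check can observe a
                // partially constructed object.
                instance = new Config;
            }
        }
    }
    return instance;
}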
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> The problems with locks are:
> 1) they are expensive, so people try to optimize them away (grep for
>    "double-checked locking")
> 2) people forget to use the locks
> 3) deadlocks

1) The cost of acquiring or committing a lock is generally roughly equivalent to a memory synchronization, and sometimes less than that (futexes, etc). So it's not insignificant, but also not as bad as people seem to think. I suspect that locked operations are often subject to premature optimization.

2) If locking is built into the API, then they can't forget.

3) Deadlocks aren't typically an issue with the approach I described above, because it largely eliminates the chance that the programmer will call into unknown code while holding a lock.

I do think that locks stink as a general multiprogramming tool, but they can be quite useful in implementing more complex multiprogramming tools, if nothing else. Also, they can be about the fastest option in some cases, and this can be important. For example, locks are much faster than transactional memory--they just introduce problems like priority inversion and deadlock (fun fun). That said, transactional memory can result in livelock, so neither is a clear win.

> When people as smart and savvy as Scott Meyers find it confusing, it's
> confusing.

Fair enough :-)

> volatile actually puts locks around accesses (at least in the Java
> memory model it does). So it doesn't get you lock-free programming.
> Just avoiding caching of reloads is not the key to lock-free
> programming--there's the ordering problem.

I must be missing something... I thought 'volatile' addressed compiler reordering as well? That aside, I do think that the implementation of 'volatile' in D 1.0 is too complicated for the average programmer to use correctly, and thus may not be the perfect solution for D, but I also think that it solves the language/compiler part of the problem.

> What's needed is something like what garbage collection did for memory
> management: the language has to take care of synchronization
> *automatically*. Being D, of course there will be a way for the
> sorcerers to practice the black art, but for the rest of us there
> needs to be a reliable and reasonable alternative.

I very much agree. My real interest in preserving the black arts in D is so that library developers can produce code which solves these problems in a more elegant manner, whatever that may be. I don't have any expectation that the average programmer would ever want or need to use something like 'volatile' or even ordered atomics; it's far too low-level a solution to the problem at hand. However, if this can be accomplished without any language facilities at all, then I'm all for it. I simply don't want to have to rely on compiler-specific knowledge when writing code, be it at a high level or a low level.

Sean
Apr 24 2008
Sean Kelly wrote:
> 2) If locking is built into the API, then they can't forget.

Sure they can forget. All the memory in the process can be accessed by any thread, so it's easy to share globals (for example) without locking of any sort.
Apr 24 2008
Sean Kelly wrote:
> 1) The cost of acquiring or committing a lock is generally roughly
> equivalent to a memory synchronization, and sometimes less than that
> (futexes, etc).

What exactly do you mean by "memory synchronization"? Just a write barrier instruction, or something else?

If what you mean is a write barrier, then what you said isn't necessarily true, especially as we head toward more and more cores, and thus more and more caches. Locks are almost always atomic read/modify/write operations, and those can cause terrible cache-bouncing problems. If you have N cores (each with its own cache) race for the same lock (even if they are trying to get shared locks), you can have up to N^2 bounces of the cache line around.
Apr 24 2008
== Quote from Russell Lewis (webmaster villagersonline.com)'s article
> What exactly do you mean by "memory synchronization"? Just a write
> barrier instruction, or something else?

Yeah, I meant an atomic RMW, or at least a load barrier for the acquire. Releasing a mutex can often be done using a plain old store, though, since write ops are typically ordered anyway, and moving loads up into the mutex doesn't break anything. My point, however, was simply that mutexes aren't terribly slower than atomic operations, since a mutex acquire/release is really little more than an atomic operation itself, at least in the simple case.

Sean
Apr 24 2008
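To make Sean's point concrete, here is a minimal x86 spinlock sketch where the acquire is a single atomic xchg, i.e. about the cost of any other atomic RMW (DMD-style inline asm, illustrative only):

struct SpinLock {
    int locked;  // 0 = free, 1 = held

    bool tryLock() {
        int* p = &locked;
        asm {
            mov EDX, p;
            mov EAX, 1;
            xchg EAX, [EDX];  // atomic on x86 even without a LOCK prefix
            xor EAX, 1;       // return true if the old value was 0
        }
    }

    void lock() {
        while (!tryLock()) {} // spin; real code would back off or yield
    }

    void unlock() {
        locked = 0; // on x86 a plain store suffices for the release
    }
}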
Sean Kelly wrote:
> My point, however, was simply that mutexes aren't terribly slower than
> atomic operations, since a mutex acquire/release is really little more
> than an atomic operation itself, at least in the simple case.

Ah, now I get what you were saying. Yes, I agree that atomic instructions are not likely to be much faster than mutexes. (Ofc, pthread mutexes, when they sleep, are a whole 'nother beast.) What I thought you were referring to were barriers, which are (in the many-cache case) *far* faster than atomic operations. Which is why I disagreed in my previous post.
Apr 24 2008
> The problems with locks are:
> 1) they are expensive, so people try to optimize them away (grep for
>    "double-checked locking")
> 2) people forget to use the locks
> 3) deadlocks

Having had several run-ins with pthreads_*(3) implementations earlier this year, I started digging around for alternatives and stashed two such nuggets away. Both papers struck me as "not stupid" and rang high on my "I wish this would make its way into D" scale.

"Transactional Locking II"
http://research.sun.com/scalable/pubs/DISC2006.pdf

"Software Transactional Memory Should Not Be Obstruction-Free"
http://berkeley.intel-research.net/rennals/pubs/052RobEnnals.pdf

How you'd wrap these primitives into the low-level language is left as an exercise for the implementor and language designer, but getting low-level primitives in place that allow for efficient locks strikes me as highly keen. -sc

--
Sean Chittenden
sean chittenden.org
http://sean.chittenden.org/
Apr 24 2008
Walter Bright wrote:
> ...the rest of us sat there wondering "what's a cubit".

I tried to look up that term. Did you mean a "cubit" or a "qubit"?

--
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
Bruno Medeiros wrote:
> I tried to look up that term. Did you mean a "cubit" or a "qubit"?

Probably he's referring to this:
http://www.google.com/search?hl=en&q=what%27s+a+cubit&btnG=Google+Search

A classic American comedy monologue by Bill Cosby.

--bb
Apr 27 2008
Bill Baxter wrote:
> Probably he's referring to this:
> http://www.google.com/search?hl=en&q=what%27s+a+cubit&btnG=Google+Search
>
> A classic American comedy monologue by Bill Cosby.

Isn't google grand?
Apr 29 2008
Walter Bright wrote:
> Isn't google grand?

To understand, I had to search a bit more, to find this:
http://www.youtube.com/watch?v=Zyc1315KawQ

But I was really thinking Walter meant qubit, which means a quantum bit of information, and quite looks like a term that could be applied to concurrency (the smallest unit of information that can be assigned atomically in a CPU, or something :P ).

--
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 29 2008
Sean Kelly Wrote:
> Regarding lock-free programming in particular, I think it's worth
> pointing out that leaving out support for lock-free programming in
> general excludes an entire realm of code from being written--not only
> library code to be ultimately used by everyday programmers, but kernel
> code and such as well. Look at the Linux source code, for example.

I've used a tiny amount of lock-free-like programming here and there, which is to say, code that uses the "compare and swap" idiom (on an IBM OS/390) for a few very limited purposes, and just the "atomic swap" (via a portable library).

I was trying to do this with D a week or two back. I wrote some inline ASM code using "xchg" and "cmpxchg8b". I was able to get xchg working (as far as I can tell) on DMD. The same inline ASM code on GDC (64-bit machine) just threw a BUS error for some reason. I couldn't get cmpxchg8b to do what I expected on either platform, but my assembly skills are weak, and my inline assembly skills are even weaker. (It was my first stab at inline ASM in D.)

1. I have no idea if my code was reasonable or did what I thought, but...
2. there might be a GDC/DMD ASM compatibility issue.
3. I think it would be cool if there were atomic swap and, ideally, compare-and-swap type functions in D -- one more thing we could do portably that C++ has to do non-portably.
4. By the way, these links contain public domain versions of the "swap pointers atomically" code from my current work location that might be useful:

http://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/doxyhtml/ncbiatomic_8h-source.html
http://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/lxr/source/include/corelib/impl/ncbi_atomic_defs.h

Unfortunately, it looks like they don't define the _CONDITIONALLY versions for the x86 or x86_64 platforms. One of my libraries at work uses the atomic pointer-swapping to implement a lightweight mutex for a library I wrote, and it's a big win.

Any thoughts? It would be neat to play with lock-free algorithms in D, especially since the papers I've read on the subject (Andrei's, I think) say that it's much easier to get the simpler ones right in a garbage-collected environment.

Kevin Bealer
Apr 24 2008
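For reference, a 32-bit compare-and-swap along the lines Kevin describes can be written in DMD-style inline asm roughly like this (a sketch, close in spirit to what Tango's Atomic.d does; not verified on GDC):

// Atomically: if (*dest == oldval) { *dest = newval; return true; }
bool cas(int* dest, int oldval, int newval) {
    asm {
        mov EDX, dest;
        mov EAX, oldval;
        mov ECX, newval;
        lock;                // make the exchange atomic across cores
        cmpxchg [EDX], ECX;  // compares [EDX] with EAX, stores ECX on match
        setz AL;             // AL = 1 on success, 0 on failure
    }
}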
== Quote from Kevin Bealer (kevinbealer gmail.com)'s article
> I think it would be cool if there were atomic swap and, ideally,
> compare-and-swap type functions in D -- one more thing we could do
> portably that C++ has to do non-portably.

Tango (and Ares before it) has support for atomic load, store, storeIf (CAS), increment, and decrement. Currently, x86 is the only architecture that's truly atomic, however; other platforms fall back to using synchronized (largely because D doesn't support inline ASM for other platforms, and because no one has asked for other platforms to be supported). The API docs are here:

http://www.dsource.org/projects/tango/docs/current/tango.core.Atomic.html

And this is the source file:

http://www.dsource.org/projects/tango/browser/trunk/tango/core/Atomic.d

The unit tests all pass with DMD, and I /think/ they pass with GDC as well, but I haven't verified this personally. Also, the docs above are a bit misleading in that increment and decrement operations are actually available for the Atomic struct if T is an integer or a pointer type; the doc tool doesn't communicate that properly because it has issues with "static if". Oh, and I'd just use the default msync option unless your needs are really specific. The acquire/release options are a bit tricky to use properly in practice.

Sean
Apr 24 2008
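A rough usage sketch based on Sean's description; the exact signatures should be checked against the linked docs:

import tango.core.Atomic;

Atomic!(int) counter;

void worker() {
    counter.increment();              // atomic ++ (int/pointer T only)
    int seen = counter.load();        // atomic read, default msync
    counter.storeIf(seen + 1, seen);  // CAS: store iff value still == seen
}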
Sean Kelly Wrote:
> Tango (and Ares before it) has support for atomic load, store, storeIf
> (CAS), increment, and decrement. [...] Oh, and I'd just use the
> default msync option unless your needs are really specific. The
> acquire/release options are a bit tricky to use properly in practice.

Thanks, Sean -- by now I should know to just check Tango! (It's also probably a good way for me to learn the D inline assembly techniques.)

Kevin
Apr 24 2008
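[As an illustration of the primitive under discussion, a 32-bit
compare-and-swap can be written with DMD's x86 inline assembler along the
following lines. This is only a sketch modeled on the approach Tango's
Atomic.d takes, not Tango's actual code; the function name is
illustrative:

    // Atomically: if (*addr == oldval) { *addr = newval; return true; }
    bool cas(uint* addr, uint oldval, uint newval)
    {
        asm
        {
            mov EAX, oldval;     // cmpxchg compares EAX against [EDX]
            mov ECX, newval;
            mov EDX, addr;
            lock;                // make the exchange atomic across CPUs
            cmpxchg [EDX], ECX;  // ZF set on success; EAX gets old value
            setz AL;             // bool result is returned in AL
        }
    }

The lock prefix is what addresses the CPU-reordering half of the problem;
the opaque function boundary addresses the compiler half.]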
Sean Kelly wrote:
> As for the C++0x discussions, I feel that some of the participants in
> the memory model discussion are experts in the field and understand
> quite well the issues involved.
>
> Sean

Are you talking about some actual online discussion? If so, can you point
to where it is? (comp.lang.c++ maybe?) Ever since I read about the
double-checked locking pattern, I've felt as if the carpet was pulled out
from under my feet (even though I never used the pattern), as it clearly
illustrated how tricky the memory model concurrency issues are.

Speaking of which, is a memory model specification also being worked out
for D, since the concurrent programming aspects of the language are being
developed?

--
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
Bruno Medeiros wrote:
> Speaking of which, is a memory model specification also being worked out
> for D, since the concurrent programming aspects of the language are
> being developed?

Yes.
Apr 27 2008
Walter Bright wrote:
> Bruno Medeiros wrote:
>> Speaking of which, is a memory model specification also being worked
>> out for D, since the concurrent programming aspects of the language are
>> being developed?
> Yes.

Cool. I hope you really bring in the experts on this one, 'cause it sure
ain't gonna be easy -- likely much harder than the const/invariant system.
That is, unless the semantics can be mostly (or even entirely) copied from
the work being done on other languages (like C++0x or Java).

--
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 28 2008
Bruno Medeiros wrote:
> Sean Kelly wrote:
>> As for the C++0x discussions, I feel that some of the participants in
>> the memory model discussion are experts in the field and understand
>> quite well the issues involved.
> Are you talking about some actual online discussion? If so, can you
> point to where it is? (comp.lang.c++ maybe?)

It was done via a listserv. I don't recall offhand where the archives are.

> Speaking of which, is a memory model specification also being worked out
> for D, since the concurrent programming aspects of the language are
> being developed?

I'm guessing there is, but since Walter appears opposed to atomics in the
language, your guess is as good as mine what it will be. I had been
expecting that D would copy C++0x here.

Sean
Apr 27 2008
Sean Kelly wrote:
> I'm guessing there is, but since Walter appears opposed to atomics in
> the language, your guess is as good as mine what it will be. I had been
> expecting that D would copy C++0x here.

Given Bartosz's presentation last year, he probably isn't totally opposed
to atomics in STM.
Apr 27 2008
Walter Bright wrote:
> Because after listening in while experts debated how to write
> multithreaded code safely, it became pretty clear that the chances of
> using volatile statements correctly are very small, even for experts.
> It's the wrong approach.

Just out of curiosity, which approach would you recommend to ensure that a
variable which is updated from an interrupt service routine (and,
implicitly, any other thread) will be read from common memory and not
cached in a register? I know there are a few, but which would you
recommend? I think ensuring that a memory access happens at every variable
access is a straightforward solution (and a good one, if the access is
atomic).

Regards, frank
Apr 24 2008
0ffh wrote:
> Just out of curiosity, which approach would you recommend to ensure that
> a variable which is updated from an interrupt service routine (and,
> implicitly, any other thread) will be read from common memory and not
> cached in a register? I know there are a few, but which would you
> recommend? I think ensuring that a memory access happens at every
> variable access is a straightforward solution (and a good one, if the
> access is atomic).

I suggest wrapping it in a mutex.
Apr 25 2008
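[A minimal sketch of that wrapping in D, assuming the shared data is a
single flag; the class and method names are illustrative:

    class SharedFlag
    {
        private bool flag;

        // synchronized methods lock on 'this', so every read and write
        // crosses both a function-call boundary and a mutex
        synchronized void set(bool v) { flag = v; }
        synchronized bool get()       { return flag; }
    }

The call boundary keeps the compiler from caching the flag in a register,
which is the concern frank raises above.]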
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> 0ffh wrote:
>> Just out of curiosity, which approach would you recommend to ensure
>> that a variable which is updated from an interrupt service routine
>> (and, implicitly, any other thread) will be read from common memory and
>> not cached in a register?
> I suggest wrapping it in a mutex.

I suppose the obvious question here is: what if I want to create a mutex
in D?

Sean
Apr 25 2008
Sean Kelly wrote:
> I suppose the obvious question here is: what if I want to create a mutex
> in D?

Why do you need volatile for that?
Apr 25 2008
Walter Bright wrote:
> Sean Kelly wrote:
>> I suppose the obvious question here is: what if I want to create a
>> mutex in D?
> Why do you need volatile for that?

To restrict compiler optimizations performed on the code.

Sean
Apr 25 2008
Sean Kelly wrote:
> Walter Bright wrote:
>> Why do you need volatile for that?
> To restrict compiler optimizations performed on the code.

The optimizer won't move global or pointer references across a function
call boundary.
Apr 25 2008
Walter Bright wrote:
> The optimizer won't move global or pointer references across a function
> call boundary.

Is that true for all compilers, or only the Digital Mars ones?

--
Lars Ivar Igesund
blog at http://larsivi.net
DSource, #d.tango & #D: larsivi
Dancing the Tango
Apr 26 2008
Lars Ivar Igesund wrote:
> Is that true for all compilers, or only the Digital Mars ones?

DM ones certainly. Others, I don't know about.
Apr 26 2008
Walter Bright wrote:
> Lars Ivar Igesund wrote:
>> Is that true for all compilers, or only the Digital Mars ones?
> DM ones certainly. Others, I don't know about.

So you are saying that you're removing (or not going to implement) a
feature due to a restriction in the DM optimizer?

--
Lars Ivar Igesund
blog at http://larsivi.net
DSource, #d.tango & #D: larsivi
Dancing the Tango
Apr 26 2008
Lars Ivar Igesund wrote:
> So you are saying that you're removing (or not going to implement) a
> feature due to a restriction in the DM optimizer?

"volatile" doesn't work in other C++ compilers for multithreaded code.
It's a huge screwup. Not only do the optimizers move things about, but the
CPU itself reorders things in ways that move them past mutexes, even if
the compiler gets it right. Again, see Scott Meyers' double-checked
locking example. There'll be a way to do lock-free programming; volatile
isn't the right way.
Apr 26 2008
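[The double-checked locking problem referred to looks like this when
transliterated into D -- a sketch, with Singleton as a stand-in class:

    class Singleton {}

    Singleton instance;

    // The classic broken pattern: do not use.
    Singleton getInstance()
    {
        if (instance is null)              // first check takes no lock
        {
            synchronized
            {
                if (instance is null)
                    instance = new Singleton;  // the pointer store may
                                               // become visible before
                                               // construction completes
            }
        }
        return instance;  // another thread can see non-null but half-built
    }

Even with a correct compiler, the CPU may publish the pointer before the
constructor's writes, which is exactly the reordering-past-mutexes point
above.]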
Walter Bright wrote:
> Lars Ivar Igesund wrote:
>> Is that true for all compilers, or only the Digital Mars ones?
> DM ones certainly. Others, I don't know about.

Perhaps that should be a part of the language spec? Or at least it should
be documented that this is a requirement to allow for multiprocessing? It
sounds like a simple enough feature to implement. (But what do I know? I
haven't written a compiler since a class in college... decades ago.)
Apr 26 2008
Walter Bright wrote:
> The optimizer won't move global or pointer references across a function
> call boundary.

Even if the function is inlined?

Sean
Apr 26 2008
Sean Kelly wrote:
> Even if the function is inlined?

No, but a mutex involves an OS call. Inlining is also easily prevented.
Apr 26 2008
Walter Bright wrote:
> No, but a mutex involves an OS call. Inlining is also easily prevented.

An OS call isn't always involved. See, for example:
http://en.wikipedia.org/wiki/Futex

Sean
Apr 26 2008
Sean Kelly wrote:
> An OS call isn't always involved. See, for example:
> http://en.wikipedia.org/wiki/Futex

Then you can write the mutex as your own external function which cannot be
inlined.
Apr 26 2008
Walter Bright wrote:
> Then you can write the mutex as your own external function which cannot
> be inlined.

And perhaps write portions of that mutex as separate external functions to
prevent reordering within the mutex code itself... surely you can see why
this isn't terribly appealing.

Sean
Apr 27 2008
Reply to Walter,

> No, but a mutex involves an OS call. Inlining is also easily prevented.

Using atomic ASM ops, a (single-process) mutex can be implemented with no
OS interaction at all.
Apr 26 2008
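[A sketch of what that can look like in D, using x86's implicitly locked
xchg; the names are illustrative, and a real lock would back off or yield
rather than spin bare:

    // 0 = free, 1 = held
    bool tryLock(uint* l)
    {
        uint old;
        asm
        {
            mov EDX, l;
            mov EAX, 1;
            xchg [EDX], EAX;  // xchg with a memory operand is implicitly
                              // locked on x86; no 'lock' prefix needed
            mov old, EAX;     // old == 0 means we acquired the lock
        }
        return old == 0;
    }

    void lock(uint* l)   { while (!tryLock(l)) {} }

    void unlock(uint* l) { *l = 0; }  // x86 keeps stores in order, so a
                                      // plain store releases; stricter
                                      // CPUs would need a release barrier

No OS interaction occurs on any path, which is BCS's point; the cost is
that a contended lock burns CPU instead of sleeping.]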
BCS wrote:
> Using atomic ASM ops, a (single-process) mutex can be implemented with
> no OS interaction at all.

Those use inline assembler (which is fine).
Apr 26 2008
Walter Bright wrote:
> Sean Kelly wrote:
>> Even if the function is inlined?
> No, but a mutex involves an OS call. Inlining is also easily prevented.

Maybe you two could arrange a time to have a higher-bandwidth
IM/IRC/Skype/telephone chat on the subject? This seems important, but this
one-line-at-a-time back-and-forth style of discussion is going nowhere
fast.

--bb
Apr 26 2008
Bill Baxter wrote:
> Maybe you two could arrange a time to have a higher-bandwidth
> IM/IRC/Skype/telephone chat on the subject? This seems important, but
> this one-line-at-a-time back-and-forth style of discussion is going
> nowhere fast.

For the moment, if you are really concerned about it, write it in the 2
lines of inline assembler. That's what I've done to do lock-free CAS
stuff. It's really not a big deal.
Apr 26 2008
Walter Bright wrote:
> For the moment, if you are really concerned about it, write it in the 2
> lines of inline assembler. That's what I've done to do lock-free CAS
> stuff. It's really not a big deal.

That's easy for x86 in D, but for other platforms it requires using C or a
standalone assembler, which is workable but annoying. And regarding the
assembler approach in general, I label all the asm blocks as volatile for
safety (you fixed a ticket I submitted regarding this a few years back). I
know that DMD doesn't optimize within or across asm blocks, but I don't
trust that every D compiler does or will do the same, particularly since D
doesn't actually have a multithreaded memory model. If it did, I might
trust that seeing a 'lock' expression in x86 inline asm would be enough.

Sean
Apr 27 2008
== Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
> "Sean Kelly" wrote
>> Walter Bright wrote:
>>> http://www.digitalmars.com/d/1.0/changelog.html
>>> http://ftp.digitalmars.com/dmd.1.029.zip
>>> This starts laying the foundation for multiprogramming support:
>>> http://www.digitalmars.com/d/2.0/changelog.html
>>> http://ftp.digitalmars.com/dmd.2.013.zip
>> TLS huh? Nice! So what will replace the volatile statement? As it is,
>> that was the only safe way to perform lock-free operations in D.
> You can sort of work around it by wrapping the previously volatile
> statement in a function, but it seems like having volatile doesn't
> really hurt anything. I'm curious to know why it was so bad that it was
> worth removing...

...and the function has to be opaque. And even then, I think there's a
risk that something undesirable may happen--I'd have to give it some
thought. I'd guess that 'volatile' is being deprecated in favor of some
sort of C++0x-style atomics, but for the moment D no longer has a solution
for this. It's a bit upsetting, particularly since it effectively
deprecates the atomic library code I wrote for D some three years ago. As
a point of interest, that code is structurally very similar to what the
C++0x group decided on just last summer, but it's been around for at least
a year and a half longer.

Sean
Apr 24 2008
Sean Kelly wrote:
> TLS huh? Nice! So what will replace the volatile statement? As it is,
> that was the only safe way to perform lock-free operations in D.

It wasn't safe anyway, because it wasn't implemented. For now, just use
synchronized instead.
Apr 24 2008
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> It wasn't safe anyway, because it wasn't implemented. For now, just use
> synchronized instead.

Um, I thought that the volatile statement effectively turned off
optimization in the function containing the volatile block? This wasn't
ideal, but it should have done the trick.

Sean
Apr 24 2008
Sean Kelly wrote:
> Um, I thought that the volatile statement effectively turned off
> optimization in the function containing the volatile block? This wasn't
> ideal, but it should have done the trick.

Just turning off optimization isn't good enough. The processor can reorder
things!
Apr 24 2008
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> Just turning off optimization isn't good enough. The processor can
> reorder things!

Of course it can. But there are assembly instructions for that bit. The
unaccounted-for problem is/was the compiler.

Sean
Apr 24 2008
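[The instructions in question are the x86 fences. A minimal sketch of a
full barrier from DMD inline assembly (mfence requires SSE2; the function
name is illustrative):

    // Orders all prior loads and stores before all subsequent ones.
    void memoryBarrier()
    {
        asm { mfence; }
    }

The compiler half of the problem is what this thread keeps circling back
to: nothing in the language yet tells the optimizer it may not move loads
and stores across such a call.]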
"Walter Bright" wrotehttp://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.029.zip This starts laying the foundation for multiprogramming support: http://www.digitalmars.com/d/2.0/changelog.html http://ftp.digitalmars.com/dmd.2.013.zip"Hidden methods now get a compile time warning rather than a runtime one." Yay! The pure function description needs a lot more filling out. I'm particularly interested in whether mutable heap data can be created and used from inside a pure function, and how that would work with class constructors. I won't poke you any more, because you did qualify that they aren't really implemented yet :) Nice work! -Steve
Apr 24 2008
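[For what the announced syntax looks like, here is a sketch of a function
that should qualify under any reading of pure -- it touches only its
argument and locals. As noted above, the semantics are not yet implemented
or enforced in 2.013:

    pure int sumSquares(int n)
    {
        int total = 0;              // mutable locals are fine
        for (int i = 0; i < n; ++i)
            total += i * i;         // no globals, no I/O, no side effects
        return total;
    }

Whether such a function may also allocate and return mutable heap data is
exactly the open question Steve raises.]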
Steven Schveighoffer wrote:"Walter Bright" wrote*sigh of relief* -- Bruno Medeiros - Software Developer, MSc. in CS/E graduate http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#Dhttp://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.029.zip This starts laying the foundation for multiprogramming support: http://www.digitalmars.com/d/2.0/changelog.html http://ftp.digitalmars.com/dmd.2.013.zip"Hidden methods now get a compile time warning rather than a runtime one." Yay!
Apr 27 2008
On Thu, 24 Apr 2008 23:35:24 +0800, Steven Schveighoffer
<schveiguy yahoo.com> wrote:
> "Hidden methods now get a compile time warning rather than a runtime
> one."
>
> Yay! [...]

shit! I've spent a lot of effort on debugging my legacy D 1.0 code while
porting it to D 2.0 because of this new feature. Still, the hidden-method
check is good: the runtime error was correct, the current warning is
awesome; it's just that my original code was bad :( I think I need to make
more changes to my code to take advantage of the D 2.0 features, then
commit.

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Apr 27 2008
Walter Bright wrote:
> http://ftp.digitalmars.com/dmd.1.029.zip
> ...
> http://ftp.digitalmars.com/dmd.2.013.zip

I wanted to install both dmd and dmd2, but they both wanted to use
/etc/dmd.conf. So I modified my setup so that dmd2 would instead read
dmd2.conf, which had phobos2...

I moved /usr/bin/dmd2 over to another dir; I called mine "dmd2":

    /usr/libexec/dmd2/dmd

In that directory I created a symlink file:

    /usr/libexec/dmd2/dmd.conf -> /etc/dmd2.conf

And then I set up a shell wrapper for "dmd2" that would call the relocated
binary instead:

    exec /usr/libexec/dmd2/dmd "$@"

So now I can have my old D1 configuration in dmd.conf and my D2
configuration in dmd2.conf, and have both RPM packages installed at once
without the file conflicts on /etc/dmd.conf.

Maybe something for the compiler to do too? (At least look for dmd2.conf
before dmd.conf.)

--anders

PS. wxD is now tested OK with DMD 1.029 and 2.013. At least on Linux; as
usual, Windows is left... "make DC=dmd" and "make DC=dmd2", respectively.
Apr 24 2008
On Fri, 25 Apr 2008 00:45:58 +0200, Anders F Björklund wrote:
> So now I can have my old D1 configuration in dmd.conf and my D2
> configuration in dmd2.conf. [...] Maybe something for the compiler to do
> too? (At least look for dmd2.conf before dmd.conf.)

I agree here; I feel the compiler should make more of a distinction
between v1 and v2.
Apr 24 2008
Walter Bright wrote:
> http://www.digitalmars.com/d/2.0/changelog.html

In addition, it seems that now the order of evaluation is less undefined.
The following was added in http://www.digitalmars.com/d/2.0/expression.html:

"The following binary expressions are evaluated in strictly left-to-right
order: OrExpression, XorExpression, AndExpression, CmpExpression,
ShiftExpression, AddExpression, CatExpression, MulExpression,
CommaExpression, OrOrExpression, AndAndExpression"

Also added:

"Associativity and Commutativity
An implementation may rearrange the evaluation of expressions according to
arithmetic associativity and commutativity rules as long as, within that
thread of execution, no observable different is possible.
This rule precludes any associative or commutative reordering of floating
point expressions."

Walter, note the different->difference typo.

Some additions to the float page as well. And a new FAQ question was
added: "Can't a sufficiently smart compiler figure out that a function is
pure automatically?" http://www.digitalmars.com/d/2.0/faq.html#pure

--
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
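[A small illustration of the new AddExpression guarantee -- the spec now
pins the order in which the two calls below occur, where previously it was
implementation-defined:

    import std.stdio;

    int f() { writef("f "); return 1; }
    int g() { writef("g "); return 2; }

    void main()
    {
        int x = f() + g();    // AddExpression: strictly left-to-right,
                              // so f is called before g
        writefln("= %s", x);  // prints: f g = 3
    }
]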
Bruno Medeiros wrote:
> And a new FAQ question was added: "Can't a sufficiently smart compiler
> figure out that a function is pure automatically?"
> http://www.digitalmars.com/d/2.0/faq.html#pure

This FAQ entry was made in response to the suggestion someone made that
pure be automatically detected by the compiler. But I think the suggestion
made wasn't to remove the pure attribute and make the compiler detect
*all* pure functions; one would still be able to use the pure function
attribute. That would invalidate points 1 and 3. As for 2: well, just
don't do automatic pure detection for virtual functions (unless they are
final).

--
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
Bruno Medeiros wrote:
> But I think the suggestion made wasn't to remove the pure attribute and
> make the compiler detect *all* pure functions; one would still be able
> to use the pure function attribute. That would invalidate points 1 and
> 3. As for 2: well, just don't do automatic pure detection for virtual
> functions (unless they are final).

Yes, that was indeed one of the things I was thinking. But imagine
function A is not declared pure, but just happens to be so. Programmer B
discovers that and starts to rely on it as a pure function. The programmer
of A later makes an enhancement that kills the purity of A, but he never
intended A to be pure, so he doesn't notice or care. Programmer B updates
the library and subsequently is heard to utter the [...]

So I think I have to agree that if you're going to have pure functions in
a mixed procedural/functional world, then explicit labeling is probably
unavoidable. However, it may still be useful to have tools that discover
and recommend tagging of functions which are in fact pure. Same goes for
nothrow.

Anyway, I would really like for there to be some way to gain the benefits
of these attributes without me having to think about it. There are already
more than enough dimensions of the problem space to keep in mind when
writing programs without adding more, like pure and nothrow do.

--bb
Apr 27 2008
Bill Baxter wrote:
> However, it may still be useful to have tools that discover and
> recommend tagging of functions which are in fact pure. Same goes for
> nothrow.

Such tools are very possible. JDT can automatically add "final" to every
variable it can in Java, so it's not a big leap to say a tool could be
implemented for D that would constify/invariantify every variable in your
source that it could. Such tools, for the reasons you described (API
specification), would be easily abused, though.
Apr 27 2008