
digitalmars.D.announce - DMD 1.029 and 2.013 releases

reply Walter Bright <newshound1 digitalmars.com> writes:
http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.029.zip

This starts laying the foundation for multiprogramming support:

http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.013.zip
Apr 23 2008
next sibling parent reply davidl <davidl 126.com> writes:
On Thu, 24 Apr 2008 14:35:40 +0800, Walter Bright  
<newshound1 digitalmars.com> wrote:

 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip

 This starts laying the foundation for multiprogramming support:

 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
nice opDot feature in 2.0. Though sometimes on Windows, people need some unchecked opDot calling. Consider ActiveX stuff. People don't always want to have to create their own bindings... especially for some R&D test.

myActiveXObject.Some_Func_Can_be_Determinated_at_runtime();
myActiveXObject.Some_Compile_Time_Unchecked_Var = 3;

With the current opDot, we are still not able to do so. Yet the current opDot looks cleaner. I feel it's kind of a dilemma...

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Apr 24 2008
parent reply Christopher Wright <dhasenan gmail.com> writes:
davidl wrote:
On Thu, 24 Apr 2008 14:35:40 +0800, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip

 This starts laying the foundation for multiprogramming support:

 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
nice opDot feature in 2.0. Though sometimes on Windows, people need some unchecked opDot calling.
You mean, there are situations in which you want to be sure that you're using opDot and some in which you want to be sure you're not? For the former, you can just write "foo.opDot.x", but not for the latter. I wonder how this works with overloads, too.
 Consider ActiveX stuff.
 
 People don't always want to have to create their own bindings... 
 especially for some R&D test.
 
 myActiveXObject.Some_Func_Can_be_Determinated_at_runtime();
 myActiveXObject.Some_Compile_Time_Unchecked_Var = 3;
 
 with current opDot, we are still not able to do so.
 
 Yet current opDot looks cleaner.
 
 I feel it's kind of a dilemma...
You mean, some sort of dynamic function call system? Like opDot(char[]) so you can do:

auto x = foo.bar; // calls foo.opDot("bar");
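A minimal sketch of that idea, assuming a hypothetical opDot(char[]) hook; the real D 2.0 opDot takes no arguments and only forwards member access to another object:

    class ComWrapper
    {
        // Hypothetical hook: receives the member name as a string and
        // resolves it at runtime (e.g. via IDispatch for ActiveX).
        int opDot(char[] name)
        {
            // runtime lookup would go here; stubbed for illustration
            return 0;
        }
    }

    auto foo = new ComWrapper;
    auto x = foo.opDot("bar"); // what 'foo.bar' would lower to under the proposal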
Apr 24 2008
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Bugzilla 1741: crash on associative array with static array as index type
This bugfix will probably save me a lot of problems (and code), thank you. For a future release of D 1.x I hope to see the module system fixed too (currently not in Bugzilla); that may solve another big chunk of my problems.

Christopher Wright:
 You mean, some sort of dynamic function call system? Like opDot(char[]) 
 so you can do:
 auto x = foo.bar; // calls foo.opDot("bar");
As you know, Python has the built-in methods __getattr__() and __getattribute__():

http://docs.python.org/ref/attribute-access.html
http://docs.python.org/ref/new-style-attribute-access.html

They are useful, but they are probably a better fit for a dynamic language with a shell interface. I have used them once in a while:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/409000

But I don't know where I can use them in D yet...

Bye,
bearophile
Apr 24 2008
prev sibling parent davidl <davidl 126.com> writes:
On Thu, 24 Apr 2008 21:46:06 +0800, Christopher Wright  
<dhasenan gmail.com> wrote:

 davidl wrote:
 On Thu, 24 Apr 2008 14:35:40 +0800, Walter Bright  
 <newshound1 digitalmars.com> wrote:

 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip

 This starts laying the foundation for multiprogramming support:

 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
nice opDot feature in 2.0. Though sometimes on Windows, people need some unchecked opDot calling.
You mean, there are situations in which you want to be sure that you're using opDot and some in which you want to be sure you're not? The former, you can just write "foo.opDot.x", but not the latter. I wonder how this works with overloads, too.
 Consider ActiveX stuff.
 People don't always want to have to create their own bindings...  
 especially for some R&D test.
  myActiveXObject.Some_Func_Can_be_Determinated_at_runtime();
 myActiveXObject.Some_Compile_Time_Unchecked_Var = 3;
  With the current opDot, we are still not able to do so.
  Yet the current opDot looks cleaner.
  I feel it's kind of a dilemma...
You mean, some sort of dynamic function call system? Like opDot(char[]) so you can do:

auto x = foo.bar; // calls foo.opDot("bar");
opDot(char[]) is all I meant

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Apr 24 2008
prev sibling next sibling parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip
 
 This starts laying the foundation for multiprogramming support:
 
 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
Congrats! Hopefully that 'wrong vtable call' fix will make the DMD/DWT/Tango combination work once again. --bb
Apr 24 2008
prev sibling next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Walter Bright wrote:
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip
 
 This starts laying the foundation for multiprogramming support:
 
 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
Awesome, awesome... What does "s can now accept runtime initialized const and invariant case statements" mean? Do you mean switch statements or is this referring to something else?
Apr 24 2008
parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Robert Fraser wrote:
 What does "s can now accept runtime initialized const and invariant case 
 statements" mean? Do you mean switch statements or is this referring to 
 something else?
Oops, just checked the page source, and indeed this refers to switch statements. Sorry!
Apr 24 2008
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Robert Fraser" <fraserofthenight gmail.com> wrote in message 
news:fupdeb$pph$1 digitalmars.com...
 Robert Fraser wrote:
 What does "s can now accept runtime initialized const and invariant case 
 statements" mean? Do you mean switch statements or is this referring to 
 something else?
Oops, just checked the page source, and indeed this refers to switch statements. Sorry!
I think it is a bug in the changelog. I think he meant to write something like:

<a href="...switchstatement.html">switch statement</a>s can now accept...

-Steve
Apr 24 2008
prev sibling next sibling parent Gide Nwawudu <gide btinternet.com> writes:
On Wed, 23 Apr 2008 23:35:40 -0700, Walter Bright
<newshound1 digitalmars.com> wrote:

http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.029.zip

This starts laying the foundation for multiprogramming support:

http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.013.zip
Nice release. On the D2 Change Log, the "download latest alpha compiler" link points to the wrong zip file:

http://ftp.digitalmars.com/dmd.2.010.zip

Also, http://www.digitalmars.com/d/download.html requires updating.

Gide
Apr 24 2008
prev sibling next sibling parent Tom S <h3r3tic remove.mat.uni.torun.pl> writes:
Pretty wild stuff :) Thanks!

-- 
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode
Apr 24 2008
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip
 
 This starts laying the foundation for multiprogramming support:
 
 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
TLS huh? Nice! So what will replace the volatile statement? As it is, that was the only safe way to perform lock-free operations in D. Sean
Apr 24 2008
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Sean Kelly" wrote
 Walter Bright wrote:
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip

 This starts laying the foundation for multiprogramming support:

 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
TLS huh? Nice! So what will replace the volatile statement? As it is, that was the only safe way to perform lock-free operations in D.
You can sort of work around it by wrapping the previously volatile statement in a function, but it seems like having volatile doesn't really hurt anything. I'm curious to know why it was so bad that it was worth removing... -Steve
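A rough sketch of that workaround, assuming the optimizer will not cache memory across a call it cannot see into (the names here are illustrative, not a standard API):

    bool stopRequested;  // global written from another thread

    bool readFlag()
    {
        // a fresh load on each call, provided this function is not inlined
        return stopRequested;
    }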
Apr 24 2008
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 You can sort of work around it by wrapping the previously volatile statement 
 in a function, but it seems like having volatile doesn't really hurt 
 anything.  I'm curious to know why it was so bad that it was worth 
 removing...
Because after listening in while experts debated how to write multithreaded code safely, it became pretty clear that the chances of using volatile statements correctly are very small, even for experts. It's the wrong approach.
Apr 24 2008
next sibling parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:fuqed8$18bm$2 digitalmars.com...
 Steven Schveighoffer wrote:
 You can sort of work around it by wrapping the previously volatile 
 statement in a function, but it seems like having volatile doesn't really 
 hurt anything.  I'm curious to know why it was so bad that it was worth 
 removing...
Because after listening in while experts debated how to write multithreaded code safely, it became pretty clear that the chances of using volatile statements correctly are very small, even for experts. It's the wrong approach.
But what about things like accessing memory-mapped registers? That is, as a hint to the compiler to say "don't inline this; don't cache results in registers"?
Apr 24 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jarrett Billingsley wrote:
 But what about things like accessing memory-mapped registers?  That is, as a 
 hint to the compiler to say "don't inline this; don't cache results in 
 registers"? 
I've written code that uses memory mapped registers. Even in programs that manipulate hardware directly, the percentage of code that does this is vanishingly small. It is a very poor cost/benefit ratio to support such a capability directly. It's more appropriate to support it via peek/poke methods (which can be builtin compiler intrinsics), or inline assembler.
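A rough sketch of the peek/poke style mentioned above, written here with DMD's x86 inline assembler so the access cannot be cached in a register; the function names and the intrinsic treatment are assumptions, not an existing API:

    // Read a 32-bit memory-mapped register.
    uint peek(uint* addr)
    {
        asm
        {
            mov ECX, addr;
            mov EAX, [ECX];  // result left in EAX, DMD's return register
        }
    }

    // Write a 32-bit memory-mapped register.
    void poke(uint* addr, uint value)
    {
        asm
        {
            mov ECX, addr;
            mov EAX, value;
            mov [ECX], EAX;
        }
    }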
Apr 24 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Jarrett Billingsley wrote:
 But what about things like accessing memory-mapped registers?  That is, as a
 hint to the compiler to say "don't inline this; don't cache results in
 registers"?
I've written code that uses memory mapped registers. Even in programs that manipulate hardware directly, the percentage of code that does this is vanishingly small. It is a very poor cost/benefit ratio to support such a capability directly. It's more appropriate to support it via peek/poke methods (which can be builtin compiler intrinsics), or inline assembler.
From my understanding, the problem with doing this via inline assembler is that some compilers can actually optimize inline assembler, leaving no truly portable way to do this in language. This issue has come up on comp.programming.threads in the past, but I don't remember whether there was any resolution insofar as C++ is concerned. Sean
Apr 24 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 From my understanding, the problem with doing this via inline assembler is
 that some compilers can actually optimize inline assembler, leaving no truly
 portable way to do this in language.  This issue has come up on
 comp.programming.threads in the past, but I don't remember whether there
 was any resolution insofar as C++ is concerned.
There's always a way to do it, even if you have to write an external function and link it in. I still don't believe memory mapped register access justifies adding complex language features.
Apr 24 2008
next sibling parent Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 From my understanding, the problem with doing this via inline assembler is
 that some compilers can actually optimize inline assembler, leaving no truly
 portable way to do this in language.  This issue has come up on
 comp.programming.threads in the past, but I don't remember whether there
 was any resolution insofar as C++ is concerned.
There's always a way to do it, even if you have to write an external function and link it in. I still don't believe memory mapped register access justifies adding complex language features.
Well... I'm looking forward to seeing what you all have planned for multiprogramming in D! Sean
Apr 24 2008
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Walter Bright" wrote
 Sean Kelly wrote:
 From my understanding, the problem with doing this via inline assembler is
 that some compilers can actually optimize inline assembler, leaving no truly
 portable way to do this in language.  This issue has come up on
 comp.programming.threads in the past, but I don't remember whether there
 was any resolution insofar as C++ is concerned.
There's always a way to do it, even if you have to write an external function and link it in. I still don't believe memory mapped register access justifies adding complex language features.
Who's adding? We already have it and it works. If volatile was not already a solved problem, I'd say yeah, sure it might be more trouble than it's worth. But to remove it from the language seems unnecessary to me. I was just asking for justification for *removing* it, not justification for having it :) -Steve
Apr 24 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 "Walter Bright" wrote
 Sean Kelly wrote:
 From my understanding, the problem with doing this via inline assembler is
 that some compilers can actually optimize inline assembler, leaving no truly
 portable way to do this in language.  This issue has come up on
 comp.programming.threads in the past, but I don't remember whether there
 was any resolution insofar as C++ is concerned.
There's always a way to do it, even if you have to write an external function and link it in. I still don't believe memory mapped register access justifies adding complex language features.
Who's adding? We already have it and it works.
No, we don't. There's volatile in C, which is being abandoned as a mess and C++ is going with a new type, atomic. There's the volatile statement in D, which is unimplemented.
 If volatile was not already a solved problem, I'd say yeah, sure it might be 
 more trouble than it's worth.  But to remove it from the language seems 
 unnecessary to me.
I don't agree that it's a solved problem.
 
 I was just asking for justification for *removing* it, not justification for 
 having it :)
 
 -Steve 
 
 
Apr 24 2008
parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:furjvd$1hlm$1 digitalmars.com...
 Steven Schveighoffer wrote:
 "Walter Bright" wrote
 Sean Kelly wrote:
 From my understanding, the problem with doing this via inline assembler is
 that some compilers can actually optimize inline assembler, leaving no truly
 portable way to do this in language.  This issue has come up on
 comp.programming.threads in the past, but I don't remember whether there
 was any resolution insofar as C++ is concerned.
There's always a way to do it, even if you have to write an external function and link it in. I still don't believe memory mapped register access justifies adding complex language features.
Who's adding? We already have it and it works.
No, we don't. There's volatile in C, which is being abandoned as a mess and C++ is going with a new type, atomic. There's the volatile statement in D, which is unimplemented.
Wait, you mean the volatile statement was never even implemented, even in D1? Where is this mentioned, anywhere?
Apr 25 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jarrett Billingsley wrote:
 Wait, you mean the volatile statement was never even implemented, even in 
 D1?
All it did was apply C's volatile semantics to the enclosed statements, which is really not good enough for multithreading.
 Where is this mentioned, anywhere? 
Apr 25 2008
parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:fut4bg$30a2$1 digitalmars.com...
 Jarrett Billingsley wrote:
 Wait, you mean the volatile statement was never even implemented, even in 
 D1?
All it did was apply C's volatile semantics to the enclosed statements, which is really not good enough for multithreading.
No, it's not. But it's good enough for system programming, when you have things like memory-mapped registers and memory locations that change on interrupts. Relying on ASM or other languages to do something so fundamental in a language that's _meant_ to be a system programming language seems like a terrible omission.
Apr 25 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jarrett Billingsley wrote:
 No, it's not.  But it's good enough for system programming, when you have 
 things like memory-mapped registers and memory locations that change on 
 interrupts.  Relying on ASM or other languages to do something so 
 fundamental in a language that's _meant_ to be a system programming language 
 seems like a terrible omission. 
I don't agree. I've done ISRs and memory mapped I/O. The actual piece of code that accessed the data that way formed a minuscule part of the program, even on programs that were completely interrupt driven (like an ASCII terminal program I wrote). Those are well served by two lines of inline asm or a compiler builtin PEEK and POKE function. Building a whole volatile subsystem into the type system for that is a huge waste of resources.
Apr 25 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Building a whole volatile subsystem into the type system for that is a
 huge waste of resources.
I thought volatile was a statement in D, not a type? Sean
Apr 25 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Building a whole volatile subsystem into the type system for that is a
 huge waste of resources.
I thought volatile was a statement in D, not a type?
It's a type for C and C++.
Apr 26 2008
parent Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Building a whole volatile subsystem into the type system for that is a
 huge waste of resources.
I thought volatile was a statement in D, not a type?
It's a type for C and C++.
Right. And volatile in C/C++ is a mess :-) Sean
Apr 27 2008
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Steven Schveighoffer wrote:
 You can sort of work around it by wrapping the previously volatile statement
 in a function, but it seems like having volatile doesn't really hurt
 anything.  I'm curious to know why it was so bad that it was worth
 removing...
Because after listening in while experts debated how to write multithreaded code safely, it became pretty clear that the chances of using volatile statements correctly are very small, even for experts. It's the wrong approach.
Every tool can be mis-used with insufficient understanding. Look at shared-memory multiprogramming for instance. It's quite easy and understandable to share a few data structures between threads (which I'd assert is the original intent anyway), but common practice among non-experts is to use mutexes to protect code rather than data, and to call across threads willy-nilly. It's no wonder the commonly held belief is that multiprogramming is hard.

Regarding lock-free programming in particular, I think it's worth pointing out that leaving out support for lock-free programming in general excludes an entire realm of code being written--not only library code to be ultimately used by everyday programmers, but kernel code and such as well. Look at the Linux source code, for example.

As for the C++0x discussions, I feel that some of the participants of the memory model discussion are experts in the field and understand quite well the issues involved.

Sean
Apr 24 2008
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 Every tool can be mis-used with insufficient understanding.
Of course. But successfully writing multithreaded code that uses shared memory requires a level of expertise that is rare, and the need to write safe multithreaded code is far greater than the expertise available. Even for those capable of doing it, writing correct multithreaded code is hard, time-consuming, resistant to testing, and essentially impossible to prove correct. It's like writing assembler code with a hex editor.
 Look at shared-
 memory multiprogramming for instance.  It's quite easy and understandable
 to share a few data structures between threads
It is until one of those threads tries to change the data.
 (which I'd assert is the original
 intent anyway), but common practice among non-experts is to use mutexes
 to protect code rather than data, and to call across threads willy-nilly.  It's
 no wonder the commonly held belief is that multiprogramming is hard.
The "multiprogramming is hard" is not based on a misunderstanding. It really is hard.
 Regarding lock-free programming in particular, I think it's worth pointing
 out that leaving out support for lock-free programming in general excludes
 an entire realm of code being written--not only library code to be ultimately
 used by everyday programmers, but kernel code and such as well.  Look at
 the Linux source code, for example.
I agree that lock free programming is important, but volatile doesn't get you there.
  As for the C++0x discussions, I feel
 that some of the participants of the memory model discussion are experts
 in the field and understand quite well the issues involved.
Yes, there are a handful who do really understand it (Hans Boehm and Herb Sutter come to mind). If only the rest of us were half as smart <g>.
Apr 24 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 Every tool can be mis-used with insufficient understanding.
Of course. But successfully writing multithreaded code that uses shared memory requires a level of expertise that is rare, and the need to write safe multithreaded code is far greater than the expertise available. Even for those capable of doing it, writing correct multithreaded code is hard, time-consuming, resistant to testing, and essentially impossible to prove correct. It's like writing assembler code with a hex editor.
I disagree... see below.
 Look at shared-
 memory multiprogramming for instance.  It's quite easy and understandable
 to share a few data structures between threads
It is until one of those threads tries to change the data.
I suppose I should have been more clear. An underlying assumption of mine is that no thread maintains references into shared data unless they hold the lock that protects that data.
 (which I'd assert is the original
 intent anyway), but common practice among non-experts is to use mutexes
 to protect code rather than data, and to call across threads willy-nilly.  It's
 no wonder the commonly held belief is that multiprogramming is hard.
The "multiprogramming is hard" is not based on a misunderstanding. It really is hard.
My claim is that multiprogramming is hard because the ability to share memory has been mis-used. It's not hard in general, in my opinion.
 Regarding lock-free programming in particular, I think it's worth pointing
 out that leaving out support for lock-free programming in general excludes
 an entire realm of code being written--not only library code to be ultimately
 used by everyday programmers, but kernel code and such as well.  Look at
 the Linux source code, for example.
I agree that lock free programming is important, but volatile doesn't get you there.
How is it lacking? I grant that it's very low-level, but it does address the key concern for lock-free programming.
  >  As for the C++0x discussions, I feel
  > that some of the participants of the memory model discussion are experts
  > in the field and understand quite well the issues involved.
 Yes, there are a handful who do really understand it (Hans Boehm and
 Herb Sutter come to mind). If only the rest of us were half as smart <g>.
My personal belief is that the issue is really more a lack of plain old explanation of the concepts than anything else. The topic is rarely discussed outside of research papers, and most other documentation is either confusing or just plain wrong (the IA-32 memory model spec comes to mind, for example). Not to belittle the knowledge or experience of the C++ folks in any respect--this is simply my experience with the information surrounding the topic :-)

Sean
Apr 24 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 Look at shared-
 memory multiprogramming for instance.  It's quite easy and understandable
 to share a few data structures between threads
It is until one of those threads tries to change the data.
I suppose I should have been more clear. An underlying assumption of mine is that no thread maintains references into shared data unless they hold the lock that protects that data.
The problems with locks are:

1) they are expensive, so people try to optimize them away (grep for "double checked locking")
2) people forget to use the locks
3) deadlocks
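A sketch in D of the double-checked locking idiom referred to in (1); the unsynchronized first check is what makes it broken, since nothing orders the writes that construct the object against the write that publishes it:

    class Config
    {
        private static Config instance;

        static Config get()
        {
            if (instance is null)               // 1st check: racy, no lock
            {
                synchronized
                {
                    if (instance is null)       // 2nd check: under the lock
                        instance = new Config;  // publication may be reordered
                }                               // ahead of construction
            }
            return instance;
        }
    }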
 (which I'd assert is the original
 intent anyway), but common practice among non-experts is to use mutexes
 to protect code rather than data, and to call across threads willy-nilly.  It's
 no wonder the commonly held belief is that multiprogramming is hard.
The "multiprogramming is hard" is not based on a misunderstanding. It really is hard.
My claim is that multiprogramming is hard because the ability to share memory has been mis-used. It's not hard in general, in my opinion.
When people as smart and savvy as Scott Meyers find it confusing, it's confusing. (Scott Meyers wrote the definitive paper on double-checked locking, and what's wrong with it.) Heck, I have a hard enough time explaining what the difference between const and invariant is; how is memory coherency going to go down? <g>
 Regarding lock-free programming in particular, I think it's worth pointing
 out that leaving out support for lock-free programming in general excludes
 an entire realm of code being written--not only library code to be ultimately
 used by everyday programmers, but kernel code and such as well.  Look at
 the Linux source code, for example.
I agree that lock free programming is important, but volatile doesn't get you there.
How is it lacking? I grant that it's very low-level, but it does address the key concern for lock-free programming.
volatile actually puts locks around accesses (at least in the Java memory model it does). So, it doesn't get you lock-free programming. Just avoiding caching of reloads is not the key to lock-free programming. There's the ordering problem.
  >  As for the C++0x discussions, I feel
  > that some of the participants of the memory model discussion are experts
  > in the field and understand quite well the issues involved.
 Yes, there are a handful who do really understand it (Hans Boehm and
 Herb Sutter come to mind). If only the rest of us were half as smart <g>.
My personal belief is that the issue is really more a lack of plain old explanation of the concepts than anything else. The topic is rarely discussed outside of research papers, and most other documentation is either confusing or just plain wrong (the IA-32 memory model spec comes to mind, for example). Not to belittle the knowledge or experience of the C++ folks in any respect--this is simply my experience with the information surrounding the topic :-)
I've seen many attempts at explaining it, including presentations by Herb Sutter himself. Sorry, but most of the audience doesn't get it.

I attended a conference a couple years back on what to do about adding multithreading support to C++. There were about 30 attendees, pretty much the top guys in C++ programming, including Herb Sutter and Hans Boehm. Herb and Hans did most of the talking, and the rest of us sat there wondering "what's a cubit". Things have improved a bit since then, but it's pretty clear that the bulk of programmers are never going to get it, and getting mp programs to work will have the status of a black art.

What's needed is something like what garbage collection did for memory management. The language has to take care of synchronization *automatically*. Being D, of course there will be a way for the sorcerers to practice the black art, but for the rest of us there needs to be a reliable and reasonable alternative.
Apr 24 2008
next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 Look at shared-
 memory multiprogramming for instance.  It's quite easy and understandable
 to share a few data structures between threads
It is until one of those threads tries to change the data.
I suppose I should have been more clear. An underlying assumption of mine is that no thread maintains references into shared data unless they hold the lock that protects that data.
The problems with locks are:

1) they are expensive, so people try to optimize them away (grep for "double checked locking")
2) people forget to use the locks
3) deadlocks
1) The cost of acquiring or committing a lock is generally roughly equivalent to a memory synchronization, and sometimes less than that (futexes, etc). So it's not insignificant, but also not as bad as people seem to think. I suspect that locked operations are often subject to premature optimization.

2) If locking is built into the API then they can't forget.

3) Deadlocks aren't typically an issue with the approach I described above because it largely eliminates the chance that the programmer will call into unknown code while holding a lock.

I do think that locks stink as a general multiprogramming tool, but they can be quite useful in implementing more complex multiprogramming tools, if nothing else. Also, they can be about the fastest option in some cases, and this can be important. For example, locks are much faster than transactional memory--they just introduce problems like priority inversion and deadlock (fun fun). That said, transactional memory can result in livelock, so neither is a clear win.
 (which I'd assert is the original
 intent anyway), but common practice among non-experts is to use mutexes
 to protect code rather than data, and to call across threads willy-nilly.  It's
 no wonder the commonly held belief is that multiprogramming is hard.
The "multiprogramming is hard" is not based on a misunderstanding. It really is hard.
My claim is that multiprogramming is hard because the ability to share memory has been mis-used. It's not hard in general, in my opinion.
When people as smart and savvy as Scott Meyers find it confusing, it's confusing. (Scott Meyers wrote the definitive paper on double-checked locking, and what's wrong with it.) Heck, I have a hard enough time explaining what the difference between const and invariant is; how is memory coherency going to go down? <g>
Fair enough :-)
 Regarding lock-free programming in particular, I think it's worth pointing
 out that leaving out support for lock-free programming in general excludes
 an entire realm of code being written--not only library code to be ultimately
 used by everyday programmers, but kernel code and such as well.  Look at
 the Linux source code, for example.
I agree that lock free programming is important, but volatile doesn't get you there.
How is it lacking? I grant that it's very low-level, but it does address the key concern for lock-free programming.
volatile actually puts locks around accesses (at least in the Java memory model it does). So, it doesn't get you lock-free programming. Just avoiding caching of reloads is not the key to lock-free programming. There's the ordering problem.
I must be missing something... I thought 'volatile' addressed compiler reordering as well? That aside, I do think that the implementation of 'volatile' in D 1.0 is too complicated for the average programmer to use correctly and thus may not be the perfect solution for D, but I also think that it solves the language/compiler part of the problem.
  >  As for the C++0x discussions, I feel
  > that some of the participants of the memory model discussion are experts
  > in the field and understand quite well the issues involved.
 Yes, there are a handful who do really understand it (Hans Boehm and
 Herb Sutter come to mind). If only the rest of us were half as smart <g>.
My personal belief is that the issue is really more a lack of plain old explanation of the concepts than anything else. The topic is rarely discussed outside of research papers, and most other documentation is either confusing or just plain wrong (the IA-32 memory model spec comes to mind, for example). Not to belittle the knowledge or experience of the C++ folks in any respect--this is simply my experience with the information surrounding the topic :-)
I've seen many attempts at explaining it, including presentations by Herb Sutter himself. Sorry, but most of the audience doesn't get it. I attended a conference a couple years back on what to do about adding multithreading support to C++. There were about 30 attendees, pretty much the top guys in C++ programming, including Herb Sutter and Hans Boehm. Herb and Hans did most of the talking, and the rest of us sat there wondering "what's a cubit". Things have improved a bit since then, but it's pretty clear that the bulk of programmers are never going to get it, and getting mp programs to work will have the status of a black art. What's needed is something like what garbage collection did for memory management. The language has to take care of synchronization *automatically*. Being D, of course there will be a way for the sorcerers to practice the black art, but for the rest of us there needs to be a reliable and reasonable alternative.
I very much agree. My real interest in preserving the black arts in D is so that library developers can produce code which solves these problems in a more elegant manner, whatever that may be. I don't have any expectation that the average programmer would ever want or need to use something like 'volatile' or even ordered atomics. It's far too low-level of a solution to the problem at hand. However, if this can be accomplished without any language facilities at all then I'm all for it. I simply don't want to have to rely on compiler-specific knowledge when writing code, be it at a high level or a low level. Sean
Apr 24 2008
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 2) If locking is built into the API then they can't forget.
Sure they can forget. All the memory in the process can be accessed by any thread, so it's easy to share globals (for example) without locking of any sort.
Apr 24 2008
prev sibling parent reply Russell Lewis <webmaster villagersonline.com> writes:
Sean Kelly wrote:
 1) The cost of acquiring or committing a lock is generally roughly equivalent to
      a memory synchronization, and sometimes less than that (futexes, etc).  So
      it's not insignificant, but also not as bad as people seem to think.  I suspect
      that locked operations are often subject to premature optimization.
What exactly do you mean by "memory synchronization?" Just a write barrier instruction, or something else? If what you mean is a write barrier, then what you said isn't necessarily true, especially as we head toward more and more cores, and thus more and more caches. Locks are almost always atomic read/modify/write operations, and those can cause terrible cache bouncing problems. If you have N cores (each with its own cache) race for the same lock (even if they are trying to get shared locks), you can have up to N^2 bounces of the cache line around.
Apr 24 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Russell Lewis (webmaster villagersonline.com)'s article
 Sean Kelly wrote:
 1) The cost of acquiring or committing a lock is generally roughly equivalent to
      a memory synchronization, and sometimes less than that (futexes, etc).  So
      it's not insignificant, but also not as bad as people seem to think.  I suspect
      that locked operations are often subject to premature optimization.
What exactly do you mean by "memory synchronization?" Just a write barrier instruction, or something else? If what you mean is a write barrier, then what you said isn't necessarily true, especially as we head toward more and more cores, and thus more and more caches. Locks are almost always atomic read/modify/write operations, and those can cause terrible cache bouncing problems. If you have N cores (each with its own cache) race for the same lock (even if they are trying to get shared locks), you can have up to N^2 bounces of the cache line around.
Yeah I meant an atomic RMW, or at least a load barrier for the acquire. Releasing a mutex can often be done using a plain old store though, since write ops are typically ordered anyway and moving loads up into the mutex doesn't break anything. My point, however, was simply that mutexes aren't terribly slower than atomic operations, since a mutex acquire/release is really little more than an atomic operation itself, at least in the simple case. Sean
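A minimal spinlock sketch of the pattern described here -- an atomic RMW (xchg) for acquire, a plain store for release -- using DMD-style x86 inline assembly; untested and illustrative, not a vetted primitive:

    struct SpinLock
    {
        int locked;  // 0 = free, 1 = held

        int tryAcquire()
        {
            int* p = &locked;
            asm
            {
                mov ECX, p;
                mov EAX, 1;
                xchg [ECX], EAX;  // atomic on x86; old value lands in EAX
            }
        }

        void acquire()
        {
            while (tryAcquire() != 0) {}  // spin until the old value was 0
        }

        void release()
        {
            locked = 0;  // plain store; x86 preserves store ordering here
        }
    }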
Apr 24 2008
parent Russell Lewis <webmaster villagersonline.com> writes:
Sean Kelly wrote:
 == Quote from Russell Lewis (webmaster villagersonline.com)'s article
 Sean Kelly wrote:
 1) The cost of acquiring or committing a lock is generally roughly equivalent to
      a memory synchronization, and sometimes less than that (futexes, etc).  So
      it's not insignificant, but also not as bad as people seem to think.  I suspect
      that locked operations are often subject to premature optimization.
What exactly do you mean by "memory synchronization?" Just a write barrier instruction, or something else? If what you mean is a write barrier, then what you said isn't necessarily true, especially as we head toward more and more cores, and thus more and more caches. Locks are almost always atomic read/modify/write operations, and those can cause terrible cache bouncing problems. If you have N cores (each with its own cache) race for the same lock (even if they are trying to get shared locks), you can have up to N^2 bounces of the cache line around.
Yeah I meant an atomic RMW, or at least a load barrier for the acquire. Releasing a mutex can often be done using a plain old store though, since write ops are typically ordered anyway and moving loads up into the mutex doesn't break anything. My point, however, was simply that mutexes aren't terribly slower than atomic operations, since a mutex acquire/release is really little more than an atomic operation itself, at least in the simple case.
Ah, now I get what you were saying. Yes, I agree that atomic instructions are not likely to be much faster than mutexes. (Of course, pthread mutexes, when they sleep, are a whole 'nother beast.) What I thought you were referring to were barriers, which are (in the many-cache case) *far* faster than atomic operations. Which is why I disagreed in my previous post.
Apr 24 2008
prev sibling next sibling parent Sean Chittenden <sean chittenden.org> writes:
 The problems with locks are:

 1) they are expensive, so people try to optimize them away (grep for  
 "double checked locking")
 2) people forget to use the locks
 3) deadlocks
Having had several run-ins with pthreads_*(3) implementations earlier this year, I started digging around for alternatives and stashed two such nuggets away. Both papers struck me as "not stupid" and rang high on my "I wish this would make its way into D" scale.

"Transactional Locking II"
http://research.sun.com/scalable/pubs/DISC2006.pdf

"Software Transactional Memory Should Not Be Obstruction-Free"
http://berkeley.intel-research.net/rennals/pubs/052RobEnnals.pdf

How you'd wrap these primitives into the low-level language is left as an exercise for the implementor and language designer, but getting low-level primitives in place that allow for efficient locks strikes me as highly keen.

-sc

-- 
Sean Chittenden
sean chittenden.org
http://sean.chittenden.org/
Apr 24 2008
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 there wondering "what's a cubit". 
I tried to look up that term. Did you mean a "cubit" or a "qubit"?

-- 
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Bruno Medeiros wrote:
 Walter Bright wrote:
 there wondering "what's a cubit". 
I tried to look up that term. Did you mean a "cubit" or a "qubit"?
Probably he's referring to this: http://www.google.com/search?hl=en&q=what%27s+a+cubit&btnG=Google+Search A classic American comedy monologue by Bill Cosby. --bb
Apr 27 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Bruno Medeiros wrote:
 Walter Bright wrote:
 there wondering "what's a cubit". 
I tried to look up that term. Did you mean a "cubit" or a "qubit"?
Probably he's referring to this: http://www.google.com/search?hl=en&q=what%27s+a+cubit&btnG=Google+Search A classic American comedy monologue by Bill Cosby.
Isn't google grand?
Apr 29 2008
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Bruno Medeiros wrote:
 Walter Bright wrote:
 there wondering "what's a cubit". 
I tried to look up that term. Did you mean a "cubit" or a "qubit"?
Probably he's referring to this: http://www.google.com/search?hl=en&q=what%27s+a+cubit&btnG=Google+Search A classic American comedy monologue by Bill Cosby.
Isn't google grand?
To understand, I had to search a bit more, to find this:

http://www.youtube.com/watch?v=Zyc1315KawQ

But I was really thinking Walter meant qubit, which means quantum bit of information, and it quite looks like a term that could be applied to concurrency (the smallest unit of information that can be assigned atomically in a CPU or something :P )

-- 
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 29 2008
prev sibling next sibling parent reply Kevin Bealer <kevinbealer gmail.com> writes:
Sean Kelly Wrote:

 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Steven Schveighoffer wrote:
 You can sort of work around it by wrapping the previously volatile statement
 in a function, but it seems like having volatile doesn't really hurt
 anything.  I'm curious to know why it was so bad that it was worth
 removing...
Because after listening in while experts debated how to write multithreaded code safely, it became pretty clear that the chances of using volatile statements correctly are very small, even for experts. It's the wrong approach.
Every tool can be mis-used with insufficient understanding. Look at shared- memory multiprogramming for instance. It's quite easy and understandable to share a few data structures between threads (which I'd assert is the original intent anyway), but common practice among non-experts is to use mutexes to protect code rather than data, and to call across threads willy-nilly. It's no wonder the commonly held belief is that multiprogramming is hard. Regarding lock-free programming in particular, I think it's worth pointing out that leaving out support for lock-free programming in general excludes an entire realm of code being written--not only library code to be ultimately used by everyday programmers, but kernel code and such as well. Look at the Linux source code, for example. As for the C++0x discussions, I feel that some of the participants of the memory model discussion are experts in the field and understand quite well the issues involved. Sean
I've used a tiny amount of lock-free-like programming here and there, which is to say, code that uses the "compare and swap" idiom (on an IBM OS/390) for a few very limited purposes, and just the "atomic swap" (via a portable library).

I was trying to do this with D a week or two back. I wrote some inline ASM code using "xchg" and "cmpxchg8b". I was able to get xchg working (as far as I can tell) on DMD. The same inline ASM code on GDC (64 bit machine) just threw a BUS error for some reason. I couldn't get cmpxchg8b to do what I expected on either platform, but my assembly skills are weak, and my inline assembly skills are even weaker. (It was my first stab at inline ASM in D.)

1. I have no idea if my code was reasonable or did what I thought but...
2. there might be a gdc / dmd ASM compatibility issue.
3. I think it would be cool if there were atomic swap and ideally, compare and swap type functions in D -- one more thing we could do portably that C++ has to do non-portably.
4. By the way these links contain public domain versions of the "swap pointers atomically" code from my current work location that might be useful:

http://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/doxyhtml/ncbiatomic_8h-source.html
http://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/lxr/source/include/corelib/impl/ncbi_atomic_defs.h

Unfortunately, it looks like they don't define the _CONDITIONALLY versions for the x86 or x86_64 platforms. One of my libraries at work uses the atomic pointer-swapping to implement a lightweight mutex for a library I wrote, and it's a big win.

Any thoughts? It would be neat to play with lock-free algorithms in D, especially since the papers I've read on the subject (Andrei's I think) say that it's much easier to get the simpler ones right in a garbage collected environment.

Kevin Bealer
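For what it's worth, here is a hedged sketch of a 32-bit compare-and-swap in DMD's inline assembler, along the lines being attempted above (using cmpxchg rather than cmpxchg8b for simplicity; untested and x86/DMD-specific):

    // Atomically: if (*dest == expected) { *dest = value; return true; }
    bool cas(uint* dest, uint expected, uint value)
    {
        asm
        {
            mov ECX, dest;
            mov EAX, expected;
            mov EDX, value;
            lock;                // make the following RMW atomic
            cmpxchg [ECX], EDX;  // ZF set on success; EAX gets old *dest
            setz AL;             // DMD returns bool in AL
        }
    }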
Apr 24 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Kevin Bealer (kevinbealer gmail.com)'s article
 Sean Kelly Wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Steven Schveighoffer wrote:
 You can sort of work around it by wrapping the previously volatile statement
 in a function, but it seems like having volatile doesn't really hurt
 anything.  I'm curious to know why it was so bad that it was worth
 removing...
Because after listening in while experts debated how to write multithreaded code safely, it became pretty clear that the chances of using volatile statements correctly are very small, even for experts. It's the wrong approach.
Every tool can be mis-used with insufficient understanding. Look at shared- memory multiprogramming for instance. It's quite easy and understandable to share a few data structures between threads (which I'd assert is the original intent anyway), but common practice among non-experts is to use mutexes to protect code rather than data, and to call across threads willy-nilly. It's no wonder the commonly held belief is that multiprogramming is hard. Regarding lock-free programming in particular, I think it's worth pointing out that leaving out support for lock-free programming in general excludes an entire realm of code being written--not only library code to be ultimately used by everyday programmers, but kernel code and such as well. Look at the Linux source code, for example. As for the C++0x discussions, I feel that some of the participants of the memory model discussion are experts in the field and understand quite well the issues involved.
I've used a tiny amount of lock-free-like programming here and there, which is to say, code that uses the "compare and swap" idiom (on an IBM OS/390) for a few very limited purposes, and just the "atomic swap" (via a portable library). I was trying to do this with D a week or two back. I wrote some inline ASM code using "xchg" and "cmpxchg8b". I was able to get xchg working (as far as I can tell) on DMD. The same inline ASM code on GDC (64 bit machine) just threw a BUS error for some reason. I couldn't get cmpxchg8b to do what I expected on either platform, but my assembly skills are weak, and my inline assembly skills are even weaker. (It was my first stab at inline ASM in D.) 1. I have no idea if my code was reasonable or did what I thought but... 2. there might be a gdc / dmd ASM compatibility issue. 3. I think it would be cool if there were atomic swap and ideally, compare and swap type functions in D -- one more thing we could do portably that C++ has to do non-portably. 4. By the way these links contain public domain versions of the "swap pointers atomically" code from my current work location that might be useful: http://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/doxyhtml/ncbiatomic_8h-source.html http://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/lxr/source/include/corelib/impl/ncbi_atomic_defs.h Unfortunately, it looks like they don't define the _CONDITIONALLY versions for the x86 or x86_64 platforms. One of my libraries at work uses the atomic pointer-swapping to implement a lightweight mutex for a library I wrote, and it's a big win. Any thoughts? It would be neat to play with lock-free algorithms in D, especially since the papers I've read on the subject (Andrei's I think) say that it's much easier to get the simpler ones right in a garbage collected environment.
Tango (and Ares before it) has support for atomic load, store, storeIf (CAS), increment, and decrement. Currently, x86 is the only architecture that's truly atomic; however, other platforms fall back to using synchronized (largely because D doesn't support inline ASM for other platforms and because no one has asked for other platforms to be supported). The API docs are here:

http://www.dsource.org/projects/tango/docs/current/tango.core.Atomic.html

And this is the source file:

http://www.dsource.org/projects/tango/browser/trunk/tango/core/Atomic.d

The unit tests all pass with DMD and I /think/ they pass with GDC as well, but I haven't verified this personally. Also, the docs above are a bit misleading in that increment and decrement operations are actually available for the Atomic struct if T is an integer or a pointer type. The doc tool doesn't communicate that properly because it has issues with "static if".

Oh, and I'd just use the default msync option unless your needs are really specific. The acquire/release options are a bit tricky to use properly in practice.

Sean
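A small usage sketch based on the operation names listed above (load, store, storeIf, increment, decrement); the exact template signatures are an assumption here, so check the linked docs before relying on them:

    import tango.core.Atomic;

    Atomic!(int) counter;

    void bump()
    {
        counter.increment();  // atomic increment
    }

    bool publish(int expected, int next)
    {
        // CAS: store 'next' only if the current value equals 'expected'
        return counter.storeIf(next, expected);
    }

    int read()
    {
        return counter.load();  // fully-fenced load with the default msync
    }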
Apr 24 2008
parent Kevin Bealer <kevinbealer gmail.com> writes:
Sean Kelly Wrote:

 == Quote from Kevin Bealer (kevinbealer gmail.com)'s article
 Sean Kelly Wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Steven Schveighoffer wrote:
 You can sort of work around it by wrapping the previously volatile statement
 in a function, but it seems like having volatile doesn't really hurt
 anything.  I'm curious to know why it was so bad that it was worth
 removing...
Because after listening in while experts debated how to write multithreaded code safely, it became pretty clear that the chances of using volatile statements correctly are very small, even for experts. It's the wrong approach.
Every tool can be mis-used with insufficient understanding. Look at shared- memory multiprogramming for instance. It's quite easy and understandable to share a few data structures between threads (which I'd assert is the original intent anyway), but common practice among non-experts is to use mutexes to protect code rather than data, and to call across threads willy-nilly. It's no wonder the commonly held belief is that multiprogramming is hard. Regarding lock-free programming in particular, I think it's worth pointing out that leaving out support for lock-free programming in general excludes an entire realm of code being written--not only library code to be ultimately used by everyday programmers, but kernel code and such as well. Look at the Linux source code, for example. As for the C++0x discussions, I feel that some of the participants of the memory model discussion are experts in the field and understand quite well the issues involved.
I've used a tiny amount of lock-free-like programming here and there, which is to say, code that uses the "compare and swap" idiom (on an IBM OS/390) for a few very limited purposes, and just the "atomic swap" (via a portable library). I was trying to do this with D a week or two back. I wrote some inline ASM code using "xchg" and "cmpxchg8b". I was able to get xchg working (as far as I can tell) on DMD. The same inline ASM code on GDC (64 bit machine) just threw a BUS error for some reason. I couldn't get cmpxchg8b to do what I expected on either platform, but my assembly skills are weak, and my inline assembly skills are even weaker. (It was my first stab at inline ASM in D.) 1. I have no idea if my code was reasonable or did what I thought but... 2. there might be a gdc / dmd ASM compatibility issue. 3. I think it would be cool if there were atomic swap and ideally, compare and swap type functions in D -- one more thing we could do portably that C++ has to do non-portably. 4. By the way these links contain public domain versions of the "swap pointers atomically" code from my current work location that might be useful: http://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/doxyhtml/ncbiatomic_8h-source.html http://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/lxr/source/include/corelib/impl/ncbi_atomic_defs.h Unfortunately, it looks like they don't define the _CONDITIONALLY versions for the x86 or x86_64 platforms. One of my libraries at work uses the atomic pointer-swapping to implement a lightweight mutex for a library I wrote, and it's a big win. Any thoughts? It would be neat to play with lock-free algorithms in D, especially since the papers I've read on the subject (Andrei's I think) say that it's much easier to get the simpler ones right in a garbage collected environment.
Tango (and Ares before it) has support for atomic load, store, storeIf (CAS), increment, and decrement. Currently, x86 is the only architecture that's truly atomic; however, other platforms fall back to using synchronized (largely because D doesn't support inline ASM for other platforms and because no one has asked for other platforms to be supported). The API docs are here: http://www.dsource.org/projects/tango/docs/current/tango.core.Atomic.html And this is the source file: http://www.dsource.org/projects/tango/browser/trunk/tango/core/Atomic.d The unit tests all pass with DMD and I /think/ they pass with GDC as well, but I haven't verified this personally. Also, the docs above are a bit misleading in that increment and decrement operations are actually available for the Atomic struct if T is an integer or a pointer type. The doc tool doesn't communicate that properly because it has issues with "static if". Oh, and I'd just use the default msync option unless your needs are really specific. The acquire/release options are a bit tricky to use properly in practice. Sean
Thanks Sean -- By now I should know to just check Tango! (It's also probably a good way for me to learn the D inline assembly techniques.) Kevin
Apr 24 2008
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Sean Kelly wrote:
  As for the C++0x discussions, I feel
 that some of the participants of the memory model discussion are experts
 in the field and understand quite well the issues involved.
 
 
 Sean
Are you talking about some actual online discussion? If so, can you point to where it is? (comp.lang.c++ maybe?) Ever since I read about the double-checked locking pattern, I've felt as if the carpet had been pulled out from under my feet (even though I never used the pattern), as it clearly illustrated how tricky memory-model concurrency issues are. Speaking of which, is a memory model specification also being worked out for D, since the concurrent programming aspects of the language are being developed? -- Bruno Medeiros - Software Developer, MSc. in CS/E graduate http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bruno Medeiros wrote:
 Speaking of which, is a memory model specification also being worked out 
 for D, since the concurrent programming aspects of the language are 
 being developed?
Yes.
Apr 27 2008
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 Bruno Medeiros wrote:
 Speaking of which, is a memory model specification also being worked 
 out for D, since the concurrent programming aspects of the language 
 are being developed?
Yes.
Cool. I hope you really bring in the experts on this one, 'cause it sure ain't gonna be easy -- likely much harder than the const/invariant system. That is, unless the semantics can be mostly (or even entirely) copied from the work being done on other languages (like C++0x or Java). -- Bruno Medeiros - Software Developer, MSc. in CS/E graduate http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 28 2008
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Bruno Medeiros wrote:
 Sean Kelly wrote:
  As for the C++0x discussions, I feel
 that some of the participants of the memory model discussion are experts
 in the field and understand quite well the issues involved.


 Sean
Are you talking about some actual online discussion? If so, can you point to where it is? (comp.lang.c++ maybe?)
It was done via a listserv. I don't recall offhand where the archives are.
 Speaking of which, is a memory model specification also being worked out 
 for D, since the concurrent programming aspects of the language are 
 being developed?
I'm guessing there is, but since Walter appears opposed to atomics in the language, your guess is as good as mine as to what it will be. I had been expecting that D would copy C++0x here. Sean
Apr 27 2008
parent Robert Fraser <fraserofthenight gmail.com> writes:
Sean Kelly wrote:
 I'm guessing there is, but since Walter appears opposed to atomics in 
 the language, your guess is as good as mine what it will be.  I had been 
 expecting that D would copy C++0x here.
Given Bartosz's presentation last year, he probably isn't totally opposed to atomics in STM.
Apr 27 2008
prev sibling parent reply 0ffh <frank youknow.what.todo.interNETz> writes:
Walter Bright wrote:
 Because after listening in while experts debated how to do write 
 multithreaded code safely, it became pretty clear that the chances of 
 using volatile statements correctly is very small, even for experts. 
 It's the wrong approach.
Just out of curiosity, which approach would you recommend to ensure that a variable which is updated from an interrupt service routine (and, implicitly, from any other thread) will be read from common memory and not cached in a register? I know there are a few, but which would you recommend? I think ensuring that a memory access happens at every variable access is a straightforward solution (and a good one, if the access is atomic). Regards, frank
Apr 24 2008
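For context, the construct frank is asking about is D1's volatile statement, which forbids the compiler from caching or reordering the enclosed accesses (it says nothing about CPU-level reordering, which is Walter's objection below). A minimal sketch of the ISR-flag case, with illustrative names:

uint flag;   // set from an interrupt service routine

bool flagSet()
{
    uint f;
    volatile { f = flag; }   // the compiler must perform a real load here,
                             // not reuse a value cached in a register
    return f != 0;
}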
parent reply Walter Bright <newshound1 digitalmars.com> writes:
0ffh wrote:
 Just out of curiosity, which approach would you recommend to ensure
 that a variable which is updated from an interrupt service routine
 (and, implicitly, any other thread) will be read from common memory
 and not cached in a register?
 I know there are a few, but which would you recommend?
 I think ensuring that memory access happens at every variable access
 is a straightforward solution (and a good one, if access is atomar).
I suggest wrapping it in a mutex.
Apr 25 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 0ffh wrote:
 Just out of curiosity, which approach would you recommend to ensure
 that a variable which is updated from an interrupt service routine
 (and, implicitly, any other thread) will be read from common memory
 and not cached in a register?
 I know there are a few, but which would you recommend?
 I think ensuring that memory access happens at every variable access
 is a straightforward solution (and a good one, if access is atomar).
I suggest wrapping it in a mutex.
I suppose the obvious question here is: what if I want to create a mutex in D? Sean
Apr 25 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a mutex
 in D?
Why do you need volatile for that?
Apr 25 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a mutex
 in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code. Sean
Apr 25 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a mutex
 in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code.
The optimizer won't move global or pointer references across a function call boundary.
Apr 25 2008
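A sketch of that claim, with hypothetical names (foo is opaque because its body is not visible to the optimizer, e.g. it lives in another object file):

int g;        // shared with another thread
void foo();   // declared only; defined elsewhere

void spin()
{
    while (g == 0)
        foo();   // g must be re-loaded on each iteration, because the
                 // optimizer has to assume foo() may write to it
}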
next sibling parent reply Lars Ivar Igesund <larsivar igesund.net> writes:
Walter Bright wrote:

 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a
 mutex in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code.
The optimizer won't move global or pointer references across a function call boundary.
Is that true for all compilers, or only the Digital Mars ones? -- Lars Ivar Igesund blog at http://larsivi.net DSource, #d.tango & #D: larsivi Dancing the Tango
Apr 26 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Lars Ivar Igesund wrote:
 Walter Bright wrote:
 
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a
 mutex in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code.
The optimizer won't move global or pointer references across a function call boundary.
Is that true for all compilers, or only the Digital Mars ones?
DM ones certainly. Others, I don't know about.
Apr 26 2008
next sibling parent reply Lars Ivar Igesund <larsivar igesund.net> writes:
Walter Bright wrote:

 Lars Ivar Igesund wrote:
 Walter Bright wrote:
 
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a
 mutex in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code.
The optimizer won't move global or pointer references across a function call boundary.
Is that true for all compilers, or only the Digital Mars ones?
DM ones certainly. Others, I don't know about.
So you are saying that you're removing (or not going to implement) a feature due to a restriction in the DM optimizer? -- Lars Ivar Igesund blog at http://larsivi.net DSource, #d.tango & #D: larsivi Dancing the Tango
Apr 26 2008
parent Walter Bright <newshound1 digitalmars.com> writes:
Lars Ivar Igesund wrote:
 So you are saying that you're removing (or not going to implement) a feature
 due to a restriction in the DM optimizer?
"volatile" doesn't work in other C++ compilers for multithreaded code. It's a huge screwup. Not only do the optimizers move things about, but the CPU inself reorders things in ways that move things past mutexes, even if the compiler gets it right. Again, see Scott Meyer's doubled checked locking example. There'll be a way to do lock-free programming. Volatile isn't the right way.
Apr 26 2008
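The hazard being referred to, sketched in D (a deliberately broken example; the names are illustrative):

class Resource {}

Resource instance;   // shared singleton reference

Resource getInstance()
{
    if (instance is null)                  // unsynchronized first check
    {
        synchronized
        {
            if (instance is null)
                instance = new Resource(); // broken: the store of the
                // reference may become visible before the stores that
                // initialize the object, so another thread can observe
                // a non-null but half-constructed Resource
        }
    }
    return instance;
}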
prev sibling parent Charles D Hixson <charleshixsn earthlink.net> writes:
Walter Bright wrote:
 Lars Ivar Igesund wrote:
 Walter Bright wrote:

 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a
 mutex in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code.
The optimizer won't move global or pointer references across a function call boundary.
Is that true for all compilers, or only the Digital Mars ones?
DM ones certainly. Others, I don't know about.
Perhaps that should be a part of the language spec? Or it should at least be documented that this is required to allow for multiprocessing. It sounds like a simple enough feature to implement. (But what do I know? I haven't written a compiler since a class in college... decades ago.)
Apr 26 2008
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a 
 mutex
 in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code.
The optimizer won't move global or pointer references across a function call boundary.
Even if the function is inlined? Sean
Apr 26 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a 
 mutex
 in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code.
The optimizer won't move global or pointer references across a function call boundary.
Even if the function is inlined?
No, but a mutex involves an OS call. Inlining is also easily prevented.
Apr 26 2008
next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a 
 mutex
 in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code.
The optimizer won't move global or pointer references across a function call boundary.
Even if the function is inlined?
No, but a mutex involves an OS call. Inlining is also easily prevented.
An OS call isn't always involved. See, for example: http://en.wikipedia.org/wiki/Futex. Sean
Apr 26 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 An OS call isn't always involved.  See, for example: 
 http://en.wikipedia.org/wiki/Futex.
Then you can write the mutex as your own external function which cannot be inlined.
Apr 26 2008
parent Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 An OS call isn't always involved.  See, for example: 
 http://en.wikipedia.org/wiki/Futex.
Then you can write the mutex as your own external function which cannot be inlined.
And perhaps write portions of that mutex as separate external functions to prevent reordering within the mutex code itself... surely you can see why this isn't terribly appealing. Sean
Apr 27 2008
prev sibling next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Walter,

 No, but a mutex involves an OS call. Inlining is also easily
 prevented.
 
Using atomic ASM ops, a (single-process) mutex can be implemented with no OS interaction at all.
Apr 26 2008
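For instance, a minimal spinlock along the lines BCS describes, built on the hypothetical cas() sketched earlier in the thread (a real implementation would also want a pause or yield in the loop, and a properly fenced release):

uint lockWord;   // 0 = free, 1 = held

void lock()
{
    while (!cas(&lockWord, 0, 1)) {}   // spin until we flip 0 -> 1
}

void unlock()
{
    lockWord = 0;   // naive release -- exactly the kind of unfenced
                    // store the rest of this thread is arguing about
}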
parent Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 Reply to Walter,
 
 No, but a mutex involves an OS call. Inlining is also easily
 prevented.
Using atomic ASM ops, a (single-process) mutex can be implemented with no OS interaction at all.
Those use inline assembler (which is fine).
Apr 26 2008
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 Walter Bright wrote:
 Sean Kelly wrote:
 I suppose the obvious question here is: what if I want to create a 
 mutex
 in D?
Why do you need volatile for that?
To restrict compiler optimizations performed on the code.
The optimizer won't move global or pointer references across a function call boundary.
Even if the function is inlined?
No, but a mutex involves an OS call. Inlining is also easily prevented.
Maybe you two could arrange a time to have a higher bandwidth IM/irc/skype/telephone chat on the subject? This seems important, but this one-line-at-a-time back and forth style of discussion is going nowhere fast. --bb
Apr 26 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 Maybe you two could arrange a time to have a higher bandwidth 
 IM/irc/skype/telephone chat on the subject?  This seems important, but 
 this one-line-at-a-time back and forth style of discussion is going 
 nowhere fast.
For the moment, if you are really concerned about it, write it in the 2 lines of inline assembler. That's what I've done to do lock-free CAS stuff. It's really not a big deal.
Apr 26 2008
parent Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 Maybe you two could arrange a time to have a higher bandwidth 
 IM/irc/skype/telephone chat on the subject?  This seems important, but 
 this one-line-at-a-time back and forth style of discussion is going 
 nowhere fast.
For the moment, if you are really concerned about it, write it in the 2 lines of inline assembler. That's what I've done to do lock-free CAS stuff. It's really not a big deal.
That's easy for x86 in D, but for other platforms it requires using C or a standalone assembler, which is workable but annoying. And regarding the assembler approach in general, I label all the asm blocks as volatile for safety (you fixed a ticket I submitted regarding this a few years back). I know that DMD doesn't optimize within or across asm blocks, but I don't trust that every D compiler does or will do the same, particularly since D doesn't actually have a multithreaded memory model. If it did, I might trust that seeing a 'lock' instruction in x86 inline asm would be enough. Sean
Apr 27 2008
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
== Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
 "Sean Kelly" wrote
 Walter Bright wrote:
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip

 This starts laying the foundation for multiprogramming support:

 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
TLS huh? Nice! So what will replace the volatile statement? As it is, that was the only safe way to perform lock-free operations in D.
You can sort of work around it by wrapping the previously volatile statement in a function, but it seems like having volatile doesn't really hurt anything. I'm curious to know why it was so bad that it was worth removing...
...and the function has to be opaque. And even then, I think there's a risk that something undesirable may happen--I'd have to give it some thought. I'd guess that 'volatile' is being deprecated in favor of some sort of C++0x style atomics, but for the moment D no longer has a solution for this. It's a bit upsetting, particularly since it effectively deprecates the atomic library code I wrote for D some three years ago. As a point of interest, that code is structurally very similar to what the C++0x group decided on just last summer, but it's been around for at least a year and a half longer. Sean
Apr 24 2008
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 TLS huh?  Nice!  So what will replace the volatile statement?  As it is, 
 that was the only safe way to perform lock-free operations in D.
It wasn't safe anyway, because it wasn't implemented. For now, just use synchronized instead.
Apr 24 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 TLS huh?  Nice!  So what will replace the volatile statement?  As it is,
 that was the only safe way to perform lock-free operations in D.
It wasn't safe anyway, because it wasn't implemented. For now, just use synchronized instead.
Um, I thought that the volatile statement effectively turned off optimization in the function containing the volatile block? This wasn't ideal, but it should have done the trick. Sean
Apr 24 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 It wasn't safe anyway, because it wasn't implemented. For now, just use
 synchronized instead.
Um, I thought that the volatile statement effectively turned off optimization in the function containing the volatile block? This wasn't ideal, but it should have done the trick.
Just turning off optimization isn't good enough. The processor can reorder things!
Apr 24 2008
parent Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 It wasn't safe anyway, because it wasn't implemented. For now, just use
 synchronized instead.
Um, I thought that the volatile statement effectively turned off optimization in the function containing the volatile block? This wasn't ideal, but it should have done the trick.
Just turning off optimization isn't good enough. The processor can reorder things!
Of course it can. But there are assembly instructions for that bit. The unaccounted-for problem is/was the compiler. Sean
Apr 24 2008
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Walter Bright" wrote
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip

 This starts laying the foundation for multiprogramming support:

 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
"Hidden methods now get a compile time warning rather than a runtime one." Yay! The pure function description needs a lot more filling out. I'm particularly interested in whether mutable heap data can be created and used from inside a pure function, and how that would work with class constructors. I won't poke you any more, because you did qualify that they aren't really implemented yet :) Nice work! -Steve
Apr 24 2008
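Steve's second question, as code (a sketch of the question, not of defined behavior -- whether this is legal is exactly what the spec doesn't yet say):

pure int[] makeSquares(int n)
{
    auto buf = new int[n];     // mutable heap data created inside a
    foreach (i, ref v; buf)    // pure function -- allowed or not?
        v = cast(int)(i * i);
    return buf;
}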
next sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Steven Schveighoffer wrote:
 "Walter Bright" wrote
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip

 This starts laying the foundation for multiprogramming support:

 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
"Hidden methods now get a compile time warning rather than a runtime one." Yay!
*sigh of relief* -- Bruno Medeiros - Software Developer, MSc. in CS/E graduate http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
prev sibling parent davidl <davidl 126.com> writes:
在 Thu, 24 Apr 2008 23:35:24 +0800,Steven Schveighoffer  
<schveiguy yahoo.com> 写道:

 "Walter Bright" wrote
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.029.zip

 This starts laying the foundation for multiprogramming support:

 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.013.zip
"Hidden methods now get a compile time warning rather than a runtime one."
Shit! I've spent a lot of effort debugging my legacy D 1.0 code while porting it to D 2.0 -- this new feature would have saved me that. Still, hidden-method detection is good: the runtime error was correct, and the compile-time warning is awesome; it was just my original code that was bad :( I think I need to make more changes to my code to take advantage of the D 2.0 features, then commit.
 Yay!

 The pure function description needs a lot more filling out.  I'm
 particularly interested in whether mutable heap data can be created and  
 used
 from inside a pure function, and how that would work with class
 constructors.  I won't poke you any more, because you did qualify that  
 they
 aren't really implemented yet :)

 Nice work!

 -Steve
-- 使用 Opera 革命性的电子邮件客户程序: http://www.opera.com/mail/
Apr 27 2008
prev sibling next sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Walter Bright wrote:

 http://ftp.digitalmars.com/dmd.1.029.zip
...
 http://ftp.digitalmars.com/dmd.2.013.zip
I wanted to install both dmd and dmd2, but they both wanted to use /etc/dmd.conf. So I modified my setup so that dmd2 would instead read a dmd2.conf, which has phobos2:

I moved /usr/bin/dmd2 over to another dir, which I called "dmd2": /usr/libexec/dmd2/dmd

In that directory I created a symlink file: /usr/libexec/dmd2/dmd.conf -> /etc/dmd2.conf

And then I set up a shell wrapper for "dmd2" that calls the relocated binary instead: exec /usr/libexec/dmd2/dmd "$@" (note "$@" rather than "$*", so that quoted arguments survive)

So now I can have my old D1 configuration in dmd.conf and my D2 configuration in dmd2.conf, and have both RPM packages installed at once without the file conflict on /etc/dmd.conf. Maybe something for the compiler to do too? (At least look for dmd2.conf before dmd.conf.)

--anders

PS. wxD is now tested OK with DMD 1.029 and 2.013 -- at least on Linux; as usual, Windows is left. Build with "make DC=dmd" and "make DC=dmd2", respectively.
Apr 24 2008
parent Jesse Phillips <jessekphillips gmail.com> writes:
On Fri, 25 Apr 2008 00:45:58 +0200, Anders F Björklund wrote:

 Walter Bright wrote:
 
 http://ftp.digitalmars.com/dmd.1.029.zip
...
 http://ftp.digitalmars.com/dmd.2.013.zip
I wanted to install both dmd and dmd2, but they both wanted to use /etc/dmd.conf. So I modified my setup so that dmd2 would instead read a dmd2.conf, which has phobos2: I moved /usr/bin/dmd2 over to another dir, which I called "dmd2": /usr/libexec/dmd2/dmd In that directory I created a symlink file: /usr/libexec/dmd2/dmd.conf -> /etc/dmd2.conf And then I set up a shell wrapper for "dmd2" that calls the relocated binary instead: exec /usr/libexec/dmd2/dmd "$@" So now I can have my old D1 configuration in dmd.conf and my D2 configuration in dmd2.conf, and have both RPM packages installed at once without the file conflict on /etc/dmd.conf. Maybe something for the compiler to do too? (At least look for dmd2.conf before dmd.conf.) --anders PS. wxD is now tested OK with DMD 1.029 and 2.013 -- at least on Linux; as usual, Windows is left. Build with "make DC=dmd" and "make DC=dmd2", respectively.
I agree here; I feel the compiler should do more to distinguish between v1 and v2.
Apr 24 2008
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 
 http://www.digitalmars.com/d/2.0/changelog.html
In addition, it seems that now the order of evaluation is less undefined. The following was added in http://www.digitalmars.com/d/2.0/expression.html "The following binary expressions are evaluated in strictly left-to-right order: OrExpression, XorExpression, AndExpression, CmpExpression, ShiftExpression, AddExpression, CatExpression, MulExpression, CommaExpression, OrOrExpression, AndAndExpression " Also added: "Associativity and Commutativity An implementation may rearrange the evaluation of expressions according to arithmetic associativity and commutativity rules as long as, within that thread of execution, no observable different is possible. This rule precludes any associative or commutative reordering of floating point expressions." Walter, note the different->difference typo. Some additions to the float page as well. And a new FAQ question was added: "Can't a sufficiently smart compiler figure out that a function is pure automatically?" http://www.digitalmars.com/d/2.0/faq.html#pure -- Bruno Medeiros - Software Developer, MSc. in CS/E graduate http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
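An illustration of what the stricter ordering buys (snippet mine, not from the changelog): with AddExpression evaluated strictly left to right, the result below is now defined rather than implementation-dependent.

void main()
{
    int i = 1;
    int x = (i = 2) + i;   // left operand first: i becomes 2, then the
                           // right operand reads 2
    assert(x == 4);        // previously this could legally differ
}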
parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Bruno Medeiros wrote:
 
 And a new FAQ question was added:
 "Can't a sufficiently smart compiler figure out that a function is pure 
 automatically?"
 http://www.digitalmars.com/d/2.0/faq.html#pure
 
This FAQ entry was made in response to someone's suggestion that purity be detected automatically by the compiler. But I think the suggestion wasn't to remove the pure attribute and have the compiler detect *all* pure functions; one would still be able to declare functions pure explicitly. That would invalidate points 1 and 3. As for 2: well, just don't do automatic purity detection for virtual functions (unless they are final). -- Bruno Medeiros - Software Developer, MSc. in CS/E graduate http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Bruno Medeiros wrote:
 Bruno Medeiros wrote:
 And a new FAQ question was added:
 "Can't a sufficiently smart compiler figure out that a function is 
 pure automatically?"
 http://www.digitalmars.com/d/2.0/faq.html#pure
This FAQ entry was made in response to the suggestion someone made that pure be automatically detected by the compiler. But I think the suggestion made wasn't to remove the pure attribute, and make the compiler detect *all* pure functions. One would still be able to use the pure function parameter. That would invalidate points 1 and 3. As for 2: well, just don't do automatic pure detection for virtual functions (unless they are final).
Yes, that was indeed one of the things I was thinking. But imagine function A is not declared pure, but just happens to be so. Programmer B discovers that and starts to rely on it as a pure function. The programmer of A later makes an enhancement that kills the purity of A, but he never intended A to be pure, so he doesn't notice or care. Programmer B updates the library and subsequently is heard to utter words not fit for print.

So I think I have to agree that if you're going to have pure functions in a mixed procedural/functional world, then explicit labeling is probably unavoidable. However, it may still be useful to have tools that discover and recommend tagging of functions which are in fact pure. Same goes for nothrow.

Anyway, I would really like for there to be some way to gain the benefits of these attributes without me having to think about it. There are already more than enough dimensions of the problem space to keep in mind when writing programs without adding more, like pure and nothrow do.

--bb
Apr 27 2008
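Bill's scenario, sketched with hypothetical functions (the point being that nothing in the signature marks the contract that gets broken):

// Before: happened to be pure, though never declared so.
// int scale(int x) { return x * 2; }

int callCount;

// After an unrelated "enhancement" -- no longer pure, and nothing
// changed to warn anyone who had been relying on the purity.
int scale(int x) { ++callCount; return x * 2; }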
parent Robert Fraser <fraserofthenight gmail.com> writes:
Bill Baxter wrote:
 However, it may still be useful to have tools that discover and 
 recommend tagging of functions which are in fact pure.  Same goes for 
 nothrow.
Such tools are very possible. JDT can automatically add "final" to every variable it can in Java, so it's not a big leap to say a tool could be implemented for D that would constify/invariantify every variable in your source that it could. Such tools, for the reasons you described (API specification), would be easily abused, though.
Apr 27 2008