
digitalmars.D - Fully transitive const is not necessary

reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
I have been reading through the reddit posts about the const faq, and also 
some posts here by guslay, and I believe Walter needs to re-think his 
belief that fully transitive const is necessary for the future.

First, I think transitive const should be the default.  I am not a fan of 
C++'s head-const, where only the head is const, and all data referenced 
through the head is mutable.  I think this promotes "shoot yourself in the 
foot" code.  It is a valuable tool to have the compiler tell you "hey, you 
wanted this to be const, remember?"  At the same time it is also useful to 
tell the compiler "hey, I want this to not be affected by const, because 
it's not logically part of the state of the object."
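
To make the distinction concrete, here is a minimal sketch in D 2.x syntax 
(the Node class is invented for the example).  With transitive const, the 
qualifier follows every reference, so both assignments below are rejected:

```d
class Node
{
    int value;
    Node next;
}

void f(const(Node) n)
{
    // n.value = 1;       // error: n itself is const
    // n.next.value = 1;  // also an error: const is transitive, so
    //                    // everything reachable through n is const too
}
```

Under C++-style head const, only the first assignment would be an error.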

Second, I'll show how logical const is ALREADY available with D's const, 
just in a less convenient form.  Let's look at an invariant property that is 
not invariant:

class X
{
   private static int _x;  // static: not part of the instance, so not
                           // covered by the invariant 'this'
   invariant int x()
   {
       return _x++;        // legal: only 'this' is invariant here
   }
}

Notice that despite all efforts of the compiler to make sure invariant is 
transitive, it still cannot do any parallelizations or optimizations on the 
call to x.  If it does not have the source code for x, it cannot be sure 
that x isn't doing something like the above.

We can translate this into a form where we keep an associative array that 
maps each instance to a mutable value (an int here, though it could just as 
well be a struct holding the non-const portion of the instance), so each 
instance's mutable data stays separate:

class X
{
    private static int[X] _x;   // per-instance mutable state, stored
                                // outside the (invariant) instances

    this()
    {
        _x[this] = 0;
    }

    ~this()
    {
         _x.remove(this);
    }

    invariant int x()
    {
       int* myx = this in _x;   // 'in' yields a pointer to the value
       return (*myx)++;         // mutation the const system cannot see
    }
}

The problem is that you can think of each function as receiving not only 
its arguments, and possibly a this pointer, but also a context pointer to 
the global namespace, which is never const.  As long as that remains 
mutable, we can store mutable data associated with a class in that space. 
In order to allow multiprogramming, you need to ensure that the function 
does not access that space unless the data is invariant.  But such a 
function is called a 'pure' function, and it can exist WITH logical const 
and invariant.  If it couldn't, then it couldn't exist with today's version 
of const, which allows logical const and invariant as I have demonstrated.

Here is my interpretation of all this:

const is a tool for developers, not for the compiler.  Since const data can 
change at any time, it cannot be used for multithreaded or other 
optimizations.

invariant allows for some optimization by the compiler.  Invariant 
optimization is limited to POD values, because invariant functions can 
always be logically invariant, and there is no way to detect it.

pure is a tool for multiprogramming.  This allows all the fun stuff that 
Walter is always talking about.  pure can co-exist with logical const and 
logical invariant, because if it couldn't, then pure couldn't exist in 
today's version of const.

Since const is a tool for developers, and CANNOT BE USED for optimization, 
let's please have logical const in a form that performs better and is easier 
to implement than the above example.  Something like C++'s mutable (but not 
exactly that, I'll explain below).

Since invariant functions cannot be guaranteed as pure functions, let's have 
logical invariance in a form that performs better and is easier to implement 
than the above example.  Optimizations based on POD types can still be had 
even with logical invariance.

Head const as the default sucks, because const is less useful as a tool in 
that state.  However, as I have shown, head const cannot be avoided, so why 
not support it as the exception to the transitive const rule?

mutable is the keyword C++ uses for this concept, but I do not like this 
word.  In any case, whatever keyword is chosen should be usable just like 
const or invariant:

mutable(X)* refToX;

This would describe a reference to a mutable X.  Adding const to this would 
then make the reference const, but the thing it points to mutable, giving 
us a head-const variable.
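
For illustration, the proposed qualifier might compose like this 
(hypothetical syntax throughout; neither mutable() nor this composition 
exists in D):

```
mutable(X)* refToX;          // mutable reference to a mutable X
const(mutable(X)*) cRefToX;  // const reference to a mutable X:
                             // head const as the explicit exception
```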

But what if X is an alias to const(Y)?  The statement implies that 
mutable(X) makes X mutable, even if it was previously const or invariant, 
but this would be no good.  We need a word that says "this isn't affected 
by const or invariant, but it could already be const or invariant". 
Something that is short and not heavily used in code.  Maybe even some form 
of punctuation.

Walter, I think you need to look at this, because not doing so is going to 
keep D out of the mainstream, as most developers will (and already do) 
complain about the lack of logical const.  I was one of those, but I gave 
up, because I convinced myself that logical const wasn't that important and 
was a design flaw.  But since fully transitive const does not give us the 
benefits that you have been saying it will, why not allow logical const?

At the very least, this proof shows that you need a better reason to require 
fully transitive const/invariant than "I need it for the future."

-Steve 
Apr 01 2008
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 01/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
  Second, I'll show how logical const is ALREADY available with D's const,
  just in a less convenient form.  Let's look at an invariant property that is
  not invariant:

  class X
  {
    private static int _x;
    invariant int x()
    {
        return _x++;
    }
  }
That does not break transitivity. You have misunderstood. "invariant int x()" declares x() to be a function, within whose body the variable "this" is invariant. X._x is a global variable, unrelated to "this". Transitivity is not compromised.
  Notice that despite all efforts of the compiler to make sure invariant is
  transitive, it still cannot do any parallelizations or optimizations on the
  call to x.  If it does not have the source code for x, it cannot be sure
  that x isn't doing something like the above.
However, in the future, you will be able to declare

class X
{
    private static int _x;
    invariant int x()
    {
        return _x++; /*ERROR*/
    }
}

and suddenly - bingo - the offending line will not compile. You gotta think ahead.
Apr 01 2008
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 01/04/2008, Steven Schveighoffer wrote:
  Second, I'll show how logical const is ALREADY available with D's const,
  just in a less convenient form.  Let's look at an invariant property 
 that is
  not invariant:

  class X
  {
    private static int _x;
    invariant int x()
    {
        return _x++;
    }
  }
That does not break transitivity. You have misunderstood. "invariant int x()" declares x() to be a function, within whose body the variable "this" is invariant. X._x is a global variable, unrelated to "this". Transitivity is not compromised.
No, you have missed my point :)  Yes, transitivity is still intact, but Walter is always saying that fully transitive const is necessary for compiler optimizations, and so there is no place for logical const.

My point is, even with fully transitive const, the compiler cannot do optimizations based on the fact that a function is invariant, as it cannot guarantee that the function has no side effects. And in fact, this IS logically const. Logical const does not break transitive const, and despite all efforts to make logical const invalid, it is still present.
  Notice that despite all efforts of the compiler to make sure invariant 
 is
  transitive, it still cannot do any parallelizations or optimizations on 
 the
  call to x.  If it does not have the source code for x, it cannot be sure
  that x isn't doing something like the above.
However, in the future, you will be able to declare

class X
{
    private static int _x;
    invariant int x()
    {
        return _x++; /*ERROR*/
    }
}

and suddenly - bingo - the offending line will not compile. You gotta think ahead.
I should hope this is not the case. My understanding is that invariant and const will remain intact, and pure functions will be introduced to allow this kind of behavior. What new rule are you thinking of that will apply to an invariant function that forces this? Remember that an invariant function just means the instance is invariant, not the static or global data.

My entire point is that pure functions can exist with logical const, so there is no reason (that I know of) that we need fully transitive const. When pure functions are introduced, they simply will not be allowed to use the mutable members of a logically const class/struct. -Steve
Apr 01 2008
prev sibling next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 01/04/2008, Janice Caron <caron800 googlemail.com> wrote:
 However, in the future, you will be able to declare

     class X
     {
         private static int _x;
         invariant int x()
         {

             return _x++; /*ERROR*/
        }
    }

  and suddenly - bingo - the offending line will not compile.
Well, not exactly. I made a fatal typo in that code, and consequently completely screwed it up! So much for forward planning! Let's try again. What I meant was:

class X
{
    private static int _x;
    pure int x()
    {
        return _x++; /*ERROR*/
    }
}

and suddenly - bingo - the offending line will not compile.
Apr 01 2008
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 01/04/2008, Janice Caron wrote:
 However, in the future, you will be able to declare

     class X
     {
         private static int _x;
         invariant int x()
         {

             return _x++; /*ERROR*/
        }
    }

  and suddenly - bingo - the offending line will not compile.
Well, not exactly. I made a fatal typo in that code, and consequently completely screwed it up! So much for forward planning! Let's try again. What I meant was:

class X
{
    private static int _x;
    pure int x()
    {
        return _x++; /*ERROR*/
    }
}

and suddenly - bingo - the offending line will not compile.
OK, now you're making a little more sense :) This still does not require fully transitive const, and so there is no reason to force people who wish to develop with logical const to do so in the hack-ish manner that my example does... -Steve
Apr 01 2008
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Janice Caron (caron800 googlemail.com)'s article
 On 01/04/2008, Janice Caron <caron800 googlemail.com> wrote:
 However, in the future, you will be able to declare

     class X
     {
         private static int _x;
         invariant int x()
         {

             return _x++; /*ERROR*/
        }
    }

  and suddenly - bingo - the offending line will not compile.
Well, not exactly. I made a fatal typo in that code, and consequently completely screwed it up! So much for forward planning! Let's try again. What I meant was:

class X
{
    private static int _x;
    pure int x()
    {
        return _x++; /*ERROR*/
    }
}

and suddenly - bingo - the offending line will not compile.
You know, I'm a huge fan of D, but now we have const, invariant, and pure, all ostensibly for making it possible for D to act more like a functional language, when functional languages don't need all this mess to be parallelizable. Perhaps it's because of the C++0x post from yesterday, but the above is beginning to give me flashbacks to C++, in that I'm starting to feel like D is trying to be the universal solution to all problems, and thereby not an ideal solution for any problem at all. One thing D has really had going for it thus far is conceptual and semantic simplicity, but I'm starting to feel that the const features are causing that simplicity to be lost.

Here's the thing: if I want to write heavily concurrent code I'm not going to use D, because imperative languages, D included, are not structured in a way that inherently lends itself to parallelization. It's certainly possible to write code that could come close to matching a functional language--use 'pure' and 'invariant' everywhere, forego loops in favor of tail recursion (which is hopefully optimized by the compiler to avoid stack issues), etc. But why mimic a functional programming structure in an imperative language when I can simply use a functional language in the first place? I can only think of one reason: in a large company it is often more difficult to choose one's language than to choose one's programming style. That's it. But if that's the case then I probably won't be able to use D anyway--I'll be using C++.

I can truly appreciate the desire to have D scale effectively as programming becomes more concurrent. At the same time, I am beginning to worry that the struggle to achieve this will end up destroying D's primary selling point (in my view)--its simplicity. I know this is all new ground for an imperative language, so there's nothing to do but wait and see... I just wanted to get this out so I could move on with my day :-) Here's to hoping that D doesn't follow in the way of C++.

Sean
Apr 01 2008
next sibling parent "Craig Black" <craigblack2 cox.net> writes:
"Sean Kelly" <sean invisibleduck.org> wrote in message 
news:fstoa0$14ss$1 digitalmars.com...
 == Quote from Janice Caron (caron800 googlemail.com)'s article
 On 01/04/2008, Janice Caron <caron800 googlemail.com> wrote:
 However, in the future, you will be able to declare

     class X
     {
         private static int _x;
         invariant int x()
         {

             return _x++; /*ERROR*/
        }
    }

  and suddenly - bingo - the offending line will not compile.
Well, not exactly. I made a fatal typo in that code, and consequently completely screwed it up! So much for forward planning! Let's try again. What I meant was: class X { private static int _x; pure int x() { return _x++; /*ERROR*/ } } and suddenly - bingo - the offending line will not compile.
You know, I'm a huge fan of D, but now we have const, invariant, and pure, all ostensibly for making it possible for D to act more like a functional language when functional languages don't need all this mess to be parallelizable. Perhaps it's because of the C++ 0x post from yesterday, but the above is beginning to give me flashbacks to C++ in that I'm starting to feel like D is trying to be the universal solution to all problems, and thereby not an ideal solution for any problem at all. One thing D has really had going for it thus far is conceptual and semantic simplicity, but I'm starting to feel that the const features are causing that simplicity to be lost.
Agreed, but this one's tough. I've thought it through a million times, and the arguments for const (of some sort) are very compelling.
 Here's the thing: if I want to write heavily concurrent code I'm
 not going to use D because imperative languages, D included,
 are not structured in a way that inherently lend themselves to
 parallelization.  It's certainly possible to write code that could
 come close to matching a functional language--use 'pure' and
 'invariant' everywhere, forego loops in favor of tail recursion
 (which is hopefully optimized by the compiler to avoid stack
 issues), etc.  But why mimic a functional programming structure
 in an imperative language when I can simply use a functional
 language in the first place?  I can only think of one reason:
 in a large company it is often more difficult to choose one's
 language than to choose one's programming style.  That's it.
 But if that's the case then I probably won't be able to use D
 anyway--I'll be using C++.
As you well know, imperative/OOP languages are much more popular than functional languages. Functional features in D will ease the transition into a more functional style. From what I have read, pure functional languages suffer from severe limitations, just as pure OOP languages do, so a good mix is probably a better solution. And while we are comparing functional languages to D, I'm not so sure that functional languages will be any better at being simple, since non-pure functional languages (Haskell in particular) seem to suffer from feature drift and language complexity just as much as D does.

The growing push toward concurrency is still sketchy. Although it will probably help, I'm not so sure the functional paradigm alone will be "the solution" to the problem. I find it hard to believe that any paradigm will magically make parallelism automatic. I think it will always require some knowledge of the underlying hardware, and of how things work at the low level, to glean optimum performance. Language features such as pure functions may help some with this, but I still think it's going to be increasingly hard to write software that scales well. It is difficult to foresee the scalability issues that will arise as the number of processors per die increases at an exponential rate, and as streaming processors like those found in the Cell, AMD's FireGL, and NVidia's Tesla become mainstream.
 I can truly appreciate the desire to have D scale effectively
 as programming becomes more concurrent.  At the same
 time, I am beginning to worry that the struggle to achieve
 this will end up destroying D's primary selling point (in my
 view)--its simplicity.  I know this is all new ground for an
 imperative language so there's nothing to do but wait and
 see... I just wanted to get this out so I could move on with
 my day :-)  Here's to hoping that D doesn't follow in the
 way of C++.
Time will tell. A reassuring observation about Walter is that he has been known to change directions if something doesn't pan out (although it may take an act of God for this to happen with the const stuff). -Craig
Apr 01 2008
prev sibling parent reply Tim Burrell <tim timburrell.net> writes:
Sean Kelly wrote:
 You know, I'm a huge fan of D, but now we have const, invariant,
 and pure, all ostensibly for making it possible for D to act more
 like a functional language when functional languages don't need
 all this mess to be parallelizable.  Perhaps it's because of the C++
 0x post from yesterday, but the above is beginning to give me
 flashbacks to C++ in that I'm starting to feel like D is trying to
 be the universal solution to all problems, and thereby not an
 ideal solution for any problem at all.  One thing D has really had
 going for it thus far is conceptual and semantic simplicity, but
 I'm starting to feel that the const features are causing that
 simplicity to be lost.
Agreed!
 I can truly appreciate the desire to have D scale effectively
 as programming becomes more concurrent.  At the same
 time, I am beginning to worry that the struggle to achieve
 this will end up destroying D's primary selling point (in my
 view)--its simplicity.  I know this is all new ground for an
 imperative language so there's nothing to do but wait and
 see... I just wanted to get this out so I could move on with
 my day :-)  Here's to hoping that D doesn't follow in the
 way of C++.
If D did move the way of C++ it would be a lot better off than it is now. But, that being said, I agree with you... there's such a huge opportunity here! Instead of going the way C++ did, or the way D is going now (which is even worse), why not take the time to find a really good, simple and effective solution?

For large parallel projects, I have seen some successful uses of Erlang, but most of the parallel development these days is done in Fortran and C++ because of OpenMP (even some big distributed apps are moving to OpenMP via Intel's clustering OpenMP compiler). C++ also has a nice join calculus module available in Boost, not to mention Boost's [new] MPI support.

People aren't using C++ and Fortran because they're the best suited languages... and parallel people are always looking for something better because, like you said, in theory there should be a lot better choices available to us. Functional languages promise neat and tidy parallelization with no side effects, etc., but in reality they're too slow and don't produce optimized code on the machines that parallel developers need to write code for.

What we want is a fast, system-level language that can be targeted by an optimizing compiler, but that is also quick and enjoyable to write code in. If D did have some support for parallel programming I think it would be a major selling point, and I don't think it would need to complicate the language at all. The whole problem here isn't that adding parallel support automatically makes a language ridiculously complex. The real issue is Walter's wacky and complex const system. Somehow he thinks it's going to help with parallel development, so the issues are getting mixed, but in reality they're completely unrelated.
Apr 01 2008
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 01/04/2008, Tim Burrell <tim timburrell.net> wrote:
  Instead of going ... the way D is
  going now ... why not take the time to find a really
  good, simple and effective solution?
<slaps head> Doh! Why didn't anyone think of that before!?
Apr 01 2008
parent Tim Burrell <tim timburrell.net> writes:
Janice Caron wrote:
 On 01/04/2008, Tim Burrell <tim timburrell.net> wrote:
  Instead of going ... the way D is
  going now ... why not take the time to find a really
  good, simple and effective solution?
<slaps head> Doh! Why didn't anyone think of that before!?
I get the sarcasm, and it's funny... but it rings with a bit of truth too. Nothing has been proposed for D and concurrency thus far! I think it's an important discussion too.
Apr 01 2008
prev sibling next sibling parent reply "Craig Black" <craigblack2 cox.net> writes:
 If D did move the way of C++ it would be a lot better off than it is now. 
 But, that being said, I agree with you... there's such a huge opportunity 
 here!  Instead of going the way C++ did, or the way D is going now (which 
 is even worse), why not take the time to find a really good, simple and 
 effective solution?
The D community has been working on this problem for about a year now. I don't know if such a "simple effective" solution exists as you imply. And even if it does, if it takes us another year to figure it out, then that's a year that we could be doing more productive things.
 For large parallel projects, I have seen some successful uses of Erlang, 
 but most of the parallel development these days is done in Fortran and C++ 
 because of OpenMP (even some big distributed apps are moving to OpenMP via 
 Intel's clustering OpenMP compiler).  C++ also has a nice join calculus 
 module available in Boost, not to mention Boost's [new] MPI support.

 People aren't using C++ and Fortran because they're the best suited 
 languages... and parallel people are always looking for something better 
 because, like you said, in theory there should be a lot better choices 
 available to us.  Functional languages promise neat and tidy 
 parallelization with no side effects, etc, but in reality they're too slow 
 and don't produce optimized code on the machines that parallel developers 
 need to write code for.

 What we want is a fast, system level language, that can be targeted by an 
 optimizing compiler, but that is also quick and enjoyable to write code 
 in.
Yes, I think the solution to scalable concurrency will probably be a systems programming language that supports multiple paradigms. D is a good candidate here, being very multi-paradigm. I still don't think writing scalable multithreaded apps will ever be easy.
 If D did have some support for parallel programming I think it would be a 
 major selling point, and I don't think it would need to complicate the 
 language at all.  The whole problem here isn't that adding parallel 
 support automatically makes a language ridiculously complex.

 The real issue is Walter's wacky and complex const system.  Somehow he 
 thinks it's going to help with parallel development, so the issues are 
 getting mixed, but in reality they're completely unrelated.
I'm not so sure it's any more complex than C++ const, but from a developer perspective, I think it may be considered wacky, because it's more difficult to work with. This is due to the fact that it has a completely different design goal from C++'s const, based on a hypothetical compiler optimization benefit that no one seems to fully understand. My suggestion would be to implement a C++-like "logical const" system using the const keyword, and use the invariant keyword for Walter/Andrei's transitive const. Maybe that would give us the best of both worlds. -Craig
Apr 01 2008
next sibling parent Tim Burrell <tim timburrell.net> writes:
Craig Black wrote:
 The D community has been working on this problem for about a year now.  I 
 don't know if such a "simple effective" solution exists as you imply.  And 
 even if it does, if it takes us another year to figure it out, then that's a 
 year that we could be doing more productive things.
Working on const and working on a scheme for doing parallel programming are basically completely unrelated. Yeah you may need one to do the other, but a means for parallel programming isn't born out of proper const support.
 Yes, I think the solution to scalable concurrency will probably be a systems 
 programming language that supports multiple paradigms.  D is a good 
 candidate here, being very multi-paradigm.  I still don't think writing 
 scalable multithreaded apps will ever be easy.
Definitely not. But there are ways to make it easier. Currently D employs none of these. Const will not improve the situation.
 My suggestion would be to implement a C++-like "logical const" system using 
 the const keyword, and use the invariant keyword for Walter/Andrei's 
 transitive const.  Maybe that would give us the best of both worlds.
I think I agree with you. I'd just really like to see this debacle sorted out so some real effort can be made toward D3 and the possibility of parallel programming support.
Apr 01 2008
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Craig Black wrote:
 This is due to the fact that it has a 
 completely different design goal from C++'s const, based on a 
 hypothetical compiler optimization benefit that no one seems to fully 
 understand.
See http://www.digitalmars.com/d/2.0/const-faq.html#const for what the goals are.
Apr 02 2008
parent "Craig Black" <craigblack2 cox.net> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:ft1poo$1js8$1 digitalmars.com...
 Craig Black wrote:
 This is due to the fact that it has a completely different design goal 
 from C++'s const, based on a hypothetical compiler optimization benefit 
 that no one seems to fully understand.
See http://www.digitalmars.com/d/2.0/const-faq.html#const for what the goals are.
Yeah, I understand the concept, but I have doubts as to whether the benefits you speak of will materialize. Multiprogramming is very complex. But I hope it will work as you say, and I do think it's worth a try.

I'm also coming to the realization that D's const is not as bad as everyone is making it out to be. For example, D doesn't provide mutable fields. I thought this was going to be a big problem, but as Janice pointed out, there are trivial workarounds. It's similar to how D doesn't provide bitfields, but it does provide std.bitfield, which works just as well. So I think I'm coming around. -Craig
Apr 02 2008
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Tim Burrell (tim timburrell.net)'s article
...
 People aren't using C++ and Fortran because they're the best suited
 languages... and parallel people are always looking for something better
 because, like you said, in theory there should be a lot better choices
 available to us.  Functional languages promise neat and tidy
 parallelization with no side effects, etc, but in reality they're too
 slow and don't produce optimized code on the machines that parallel
 developers need to write code for.
I'm currently of the opinion that the ideal solution is to use a functional language like Erlang for the concurrency aspect and then to use a language like C for the performance-critical or systems bits. For example, from a statistic I saw somewhere, the Ericsson switch code has approximately as many lines of C code as it does Erlang code (a few hundred K of each), with the structure I've described.
 What we want is a fast, system level language, that can be targeted by
 an optimizing compiler, but that is also quick and enjoyable to write
 code in.
I think we have that with D 1.0.
 If D did have some support for parallel programming I think it would be
 a major selling point, and I don't think it would need to complicate the
 language at all.  The whole problem here isn't that adding parallel
 support automatically makes a language ridiculously complex.
I'd modify that by saying that adding concurrency support makes an imperative language ridiculously complex. The problem (as someone in this NG put it) is that while functional languages describe /what/ should happen, imperative languages describe /how/ it should happen. In instances where the programmer knows more about how to optimize the process than the compiler may, using an imperative language is clearly the correct solution... and systems languages are clearly necessary when systems work is to be done (ie. operations involving direct memory access, etc). But it's fairly well accepted these days that concurrent programming is just too difficult for most programmers if they have to worry about the 'how' part. At best, most such programs achieve only a large-scale granularity, as tasks with logically distinct divisions are manually separated out into jobs for thread pools and the like.

By comparison, because data in most functional languages is immutable and the program structure is inherently function-oriented, it is easy for the environment to spread function invocations across multiple cores, machines, whatever. There is no shared/global data to worry about, no state to synchronize, and no loops to unroll. Combine this with a message passing system for explicit parallelization and the program may take advantage of not only automatic parallelization but explicit parallelization as well.

(For the record, I've looked at languages like Cilk and think they're a pretty neat idea, but still too bogged down in the 'how' aspect. Haven't looked at OpenMP yet though, and I really should do so.)
 The real issue is Walter's wacky and complex const system.  Somehow he
 thinks it's going to help with parallel development, so the issues are
 getting mixed, but in reality they're completely unrelated.
I actually think that the const system could help somewhat. Pure functions, for example, offer guarantees that are fairly similar to what a functional programming language offers. However, for such an approach to compete with a true functional language, the D programmer would have to use pure functions almost exclusively rather than sparingly, which is what I expect. And in the case that the programmer uses pure functions exclusively, what he's doing is mimicking functional programming in a language that isn't naturally suited to it. So in short, I think the pure/const/whatever stuff may actually get D part of the way there, but I also think that it's a bit like using a boat in an automobile race. Sure, you can bolt some wheels to it and drop in an engine, but why do that when you could use a car instead?

Insofar as concurrency is concerned with D, I'd prefer to focus on tight integration with "control" languages such as Erlang, and to add a few simple constructs to make more traditional imperative concurrency easier. Just give me a reason to use D instead of C/C++ for the areas I care about:

* Embedded systems programming
* General systems programming
* Optimization points for programs in less performant languages (Python, Erlang, whatever)

In these arenas, the only mainstream competitors for D that I'm aware of are C and C++ (and maybe Java, if you count it as an embedded language). That's it. And this field isn't likely to expand any time soon. Further, I think we've gotten to the point where there's no longer any compelling reason to write monolithic programs entirely in a single language, so why even try to be the go-to language for every problem? Particularly if doing so risks diluting the strength of one's offering in another area.
I could be totally off base here, but I think that D's advantage over C is its support for sensible OOP, optional garbage collection, Unicode, and elegance/understandability (lack of a preprocessor, delegates, etc), with its semantic simplicity as an additional advantage over C++: it's far easier to create a compiler for D than it is for C++. This means that I feel D is more powerful than C, easier to learn than C++, and in my experience prone to fewer bugs (by virtue of its syntax) than either language. These are its core strengths, and I worry about any feature that threatens to dilute them.

On the other hand, I suppose it's difficult to sell a language to a C/C++ shop without a list of bullet points. C++ folks often insist on logical const features, etc. Also, I could just be worrying about nothing, and all this stuff will turn out to be the bee's knees for D. My initial impression is simply that we're trading away too great an increase in complexity for the benefit offered by the 2.0 rendition of const.

All that said, I do think that CTFE is an absolutely fantastic option for certain programs, and pure functions seem to be a great way of extending this feature quite naturally. If the goal is simply blistering end-to-end runtime performance, then there may well be something to all this mess. I only wish that it could be done in a way that doesn't bring so much baggage along with it. We went from one way to declare functions in D 1.0 to at least four: unlabeled, const, invariant, and pure, all of which cover the same basic conceptual domain.

Sean
Apr 01 2008
parent Tim Burrell <tim timburrell.net> writes:
Sean Kelly wrote:
 I'm currently of the opinion that the ideal solution is to use a functional
 language like Erlang for the concurrency aspect and then to use a language
 like C for the performance-critical or systems bits.  For example, from a
 statistic I saw somewhere, the Ericsson switch code has approximately as
 many lines of C code as it does Erlang code (a few hundred K of each), with
 the structure I've described.
Agreed. But what we really need is a language that's also good at shared-memory parallelism (Erlang's really geared toward shared-nothing setups -- good back in the day, but today a modern compiler for a shared-memory language can perform suitable optimizations). We really only have Fortran and C++ that currently fit the bill -- but perhaps D could be that language?
 I'd modify that by saying that adding concurrency support makes an
 imperative language ridiculously complex.  The problem (as someone in
 this NG put it) is that while functional languages describe /what/ should
 happen, imperative languages describe /how/ it should happen.  In
 instances where the programmer knows more about how to optimize
 the process than the compiler may, using an imperative language is
 clearly the correct solution... and systems languages are clearly necessary
 when systems work is to be done (ie. operations involving direct memory
 access, etc).  But it's fairly well accepted these days that concurrent
 programming is just too difficult for most programmers if they have to
 worry about the 'how' part.  At best, most such programs achieve only
 a large-scale granularity as tasks with logically distinct divisions are
 manually separated out into jobs for thread pools and the like.
I agree with you here on all parts except the claim that an imperative language needs to become complex by adding concurrency support. It's all about how it's done. You're right: we need to be describing what, not how. I think I'd call D a multi-paradigm language in the same way I'd call C++ multi-paradigm; it's all in how you use it. If we had some proper multiprogramming capabilities, it could very well be a description of what, and not how. That'd be the goal, for sure.
 I actually think that the const system could help somewhat.  Pure functions,
 for example, offer guarantees that are fairly similar to what a functional
 programming language offers.  However, for such an approach to compete
 with a true functional language the D programmer would have to use pure
 functions almost exclusively rather than sparely, which is what I expect.  And
 in the case that the programmer uses pure functions exclusively, what he's
 doing is mimicking functional programming in a language that isn't naturally
 suited to it.  So in short, I think the pure/const/whatever stuff may actually
 get D part of the way there, but I also think that it's a bit like using a boat
 in an automobile race.  Sure you can bolt some wheels to it and drop in an
 engine, but why do that when you could use a car instead?
Yeah, I hear what you're saying here. It's a tough thing, because you definitely don't want the bolted-on afterthought feeling. On the other hand, it works pretty well for C++, and since D is directly competing against other system-level languages (like C++), it's really not going to be a competitor in the upcoming parallel battle if something's not figured out.
 In these arenas, the only mainstream competitors for D that I'm aware of
 are C and C++ (and maybe Java, if you count it as an embedded language).
 That's it.  And this field isn't likely to expand any time soon.  Further, I
 think we've gotten to the point where there's no longer any compelling
 reason to write monolithic programs entirely in a single language, so why
 even try to be the go-to language for every problem?  Particularly if doing
 so risks diluting the strength of one's offering in another area.
You make good points. And I agree with everything you say. I just keep wanting to hold onto the idea that it should be possible to provide a reasonably simple way to give parallel programmers a hand when using D. Hell I'd even be happy with an implementation of the upcoming OpenMP 3.0 draft. It's not original, but it'd sure as hell help!
 performance then there may well be something to all this mess.  I only wish
 that it could be done in a way that doesn't bring so much baggage along
 with it.  We went from one way to declare functions in D 1.0 to at least four:
 unlabeled, const, invariant, and pure, all of which cover the same basic
 conceptual domain.
I feel the same way.
Apr 01 2008
prev sibling next sibling parent reply "Craig Black" <craigblack2 cox.net> writes:
Walter/Andrei are under the impression that transitive const is necessary 
for multiprogramming, but they have not presented a detailed explanation 
about why it is necessary.  It sounds like a more detailed explanation of 
the merits of transitive const from Walter/Andrei would help here.

One idea that I had recently was that the const keyword could provide 
logical const (for developers) and the invariant keyword could provide 
transitive const (for the compiler).  I agree with you that for logical 
const, it is practical to have something like C++ mutable for class/struct 
fields.  I use C++ const, and it is practical to be able to subvert it.

-Craig 
Apr 01 2008
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Craig Black" wrote
 One idea that I had recently was that the const keyword could provide 
 logical const (for developers) and the invariant keyword could provide 
 transitive const (for the compiler).
Making D not support logical invariant does not make it impossible to have logically invariant functions, as I have already demonstrated. If there is a way to legally implement logical invariant, then I think the language should not impose that restriction. Imposing a restriction that you cannot have logical invariance is just going to be a nuisance for those who want to do it, and it does not provide any benefits for compiler optimization.

The only place where it is important to ban logical const is for pure functions, as far as I can tell, and that can be dealt with when pure functions are introduced.

-Steve
Apr 01 2008
parent "Craig Black" <craigblack2 cox.net> writes:
"Steven Schveighoffer" <schveiguy yahoo.com> wrote in message 
news:fsu80i$2pj1$1 digitalmars.com...
 "Craig Black" wrote
 One idea that I had recently was that the const keyword could provide 
 logical const (for developers) and the invariant keyword could provide 
 transitive const (for the compiler).
Making D not support logical invariant does not make it impossible to have logically invariant functions as I have already demonstrated. If there is a way to legally implement logical invariant, then I think the language should not impose that restriction. Imposing a restriction that you cannot have logical invariance is just going to be a nuisance for those who want to do it, and it does not provide any benefits for compiler optimization.
You are confident that Walter is wrong about the benefits of transitivity. Honestly, I don't understand it deeply enough to agree or disagree. I do think the const system should be easier for developers, and having logical const will give us that. If we get logical invariance too, then that's fine with me. I was proposing a compromise, since the D community and Walter each seem to have such strong opinions on the matter.
 The only place where it is important to ban logical const is for pure 
 functions as far as I can tell, and that can be dealt with when pure 
 functions are introduced.
I'd wager that Andrei/Walter have probably thought about this quite a lot, and concluded that invariant transitivity was somehow the best solution to it. Still, they could be wrong, and it's worth discussing, since even the smartest of us gets their wires crossed once in a while.

-Craig
Apr 01 2008
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Craig Black wrote:
 Walter/Andrei are under the impression that transitive const is 
 necessary for multiprogramming, but they have not presented a detailed 
 explanation about why it is necessary.  It sounds like a more detailed 
 explanation of the merits of transitive const from Walter/Andrei would 
 help here.
That'll all come when we get the details worked out on how D will support multiprogramming. It's an ongoing effort, and current events are leading us to believe that this is getting extremely important.
Apr 01 2008
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Craig Black wrote:
 Walter/Andrei are under the impression that transitive const is 
 necessary for multiprogramming, but they have not presented a detailed 
 explanation about why it is necessary.  It sounds like a more detailed 
 explanation of the merits of transitive const from Walter/Andrei would 
 help here.
That'll all come when we get the details worked out on how D will support multiprogramming. It's an ongoing effort, and current events are leading us to believe that this is getting extremely important.
If the ultimate goal is support for multiprogramming, then shouldn't the detailed design work start *there*, with how to do great multiprogramming? Rather than with const.

Not saying that you guys have done this, but I know from my own experience doing research that it's easy to get hung up trying to solve a tough but solvable problem that seems relevant for getting from A to B, only to realize in the end that it was not as relevant as I thought.

--bb
Apr 01 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 If the ultimate goal is support for multiprogramming, then shouldn't the 
 detailed design work start *there*, with how to do great 
 multiprogramming?  Rather than with const.
 
 Not saying that you guys have done this, but I know from my own 
 experience doing research that it's easy to get hung up trying to solve 
 a tough but solvable problem that seems relevant for getting from A to 
 B, only to realize in the end that it was not as relevant as I thought.
I think it is fairly obvious that transitive invariant (and const) is key to multiprogramming. The transitive closure of the state of everything reachable through an object is part of the state of that object, and all the troubles with multiprogramming stem from the state of an object changing asynchronously.
Apr 01 2008
next sibling parent reply Jason House <jason.james.house gmail.com> writes:
Walter Bright wrote:

 Bill Baxter wrote:
 If the ultimate goal is support for multiprogramming, then shouldn't the
 detailed design work start *there*, with how to do great
 multiprogramming?  Rather than with const.
 
 Not saying that you guys have done this, but I know from my own
 experience doing research that it's easy to get hung up trying to solve
 a tough but solvable problem that seems relevant for getting from A to
 B, only to realize in the end that it was not as relevant as I thought.
I think it is fairly obvious that transitive invariant (and const) is key to multiprogramming. The transitive closure of the state of everything reachable through an object is part of the state of that object, and all the troubles with multiprogramming stem from the state of an object changing asynchronously.
And how do global variables fit into that? Technically, they're always reachable and never inherit const.
Apr 01 2008
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jason House wrote:
 Walter Bright wrote:
 And how do global variables fit into that?  Technically, they're always
 reachable and never inherit const.
If the global variables are not invariant, then you're going to have synchronization problems when accessing them from multiple threads. If they are invariant, you cannot have sync problems.
Apr 01 2008
parent Jason House <jason.james.house gmail.com> writes:
My point is that global variables violate the guarantee transitive const
provides... My question is how having them as an exception does not invalidate
the whole argument that transitive const is needed for multithreading. I view
global variables the same as head const.

Walter Bright Wrote:

 Jason House wrote:
 Walter Bright wrote:
 And how do global variables fit into that?  Technically, they're always
 reachable and never inherit const.
If the global variables are not invariant, then you're going to have synchronization problems when accessing them from multiple threads. If they are invariant, you cannot have sync problems.
Apr 02 2008
prev sibling next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Jason House <jason.james.house gmail.com> wrote:
 And how do global variables fit into that?  Technically, they're always
  reachable and never inherit const.
Global variables will /not/ be reachable from a pure function. Example:

    int g;

    pure int f()
    {
        return g; /*ERROR*/
    }

The above will not compile, because global variables won't be available to pure functions. So the answer is that you will still be able to use global variables, but only in non-pure functions. I've always thought that using global variables is a very bad idea anyway, and they certainly can cause problems for multithreading.
Apr 02 2008
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Janice Caron wrote:
 On 02/04/2008, Jason House <jason.james.house gmail.com> wrote:
 And how do global variables fit into that?  Technically, they're always
  reachable and never inherit const.
 Global variables will /not/ be reachable from a pure function. Example:

     int g;

     pure int f()
     {
         return g; /*ERROR*/
     }

 The above will not compile, because global variables won't be
 available to pure functions.
That's right, unless they are invariant globals.
Apr 02 2008
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Janice Caron, el  2 de abril a las 08:00 me escribiste:
 On 02/04/2008, Jason House <jason.james.house gmail.com> wrote:
 And how do global variables fit into that?  Technically, they're always
  reachable and never inherit const.
Global variables will /not/ be reachable from a pure function. Example:
And how is that related to transitive const? pure functions could be implemented in D1.0 without const at all...

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
You can try the best you can
If you try the best you can
The best you can is good enough
Apr 03 2008
parent "Janice Caron" <caron800 googlemail.com> writes:
On 03/04/2008, Leandro Lucarella <llucax gmail.com> wrote:
  > Global variables will /not/ be reachable from a pure function.

  And how is that related to transitive const?
It isn't.
 pure functions could be
 implemented in D1.0 without const at all...
Stating the obvious isn't very interesting.
Apr 03 2008
prev sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Janice Caron <caron800 googlemail.com> wrote:
 Global variables will /not/ be reachable from a pure function. Example:

     int g;

     pure int f()
     {
         return g; /*ERROR*/
     }

  The above will not compile, because global variables won't be
  available to pure functions.
However, I suspect that the following will be OK.

    enum g = 42;

    pure int f()
    {
        return g;
    }

I'm only guessing here - I don't have any inside knowledge. But it seems reasonable to me that global compile-time constants declared outside the function ought to be available inside it. I guess we'll just have to wait and see.

But global variables - no.
Apr 02 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Janice Caron wrote:
 However, I suspect that the following will be OK.
 
     enum g = 42;
 
     pure int f()
     {
         return g;
     }
Yes, because the value of g can never change. The way to think about pure functions is that if the arguments are the same, the result will be the same. This precludes reading any global state that could change, and precludes writing to any global state. But global invariants and manifest constants cannot ever change state, and cannot be written to, so using them in a pure function is ok.
Apr 02 2008
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Janice Caron wrote:
 However, I suspect that the following will be OK.

     enum g = 42;

     pure int f()
     {
         return g;
     }
Yes, because the value of g can never change. The way to think about pure functions is that if the arguments are the same, the result will be the same. This precludes reading any global state that could change, and precludes writing to any global state. But global invariants and manifest constants cannot ever change state, and cannot be written to, so using them in a pure function is ok.
So does that mean

    pure int f(const Class c)
    { ...
    }

will *not* be allowed? Because some part of c *could* change, even if it's not f() doing the changing. I.e. another thread could be changing c concurrently.

If so, that seems unnecessarily restrictive to me. Any side effect or indeterminacy of f() in that case is not because of f()'s doing. So f()'s behavior *is* pure even if it takes a const argument. It's not f's fault if the environment it's living in is unstable.

Maybe there need to be two levels of pure? One says this function by itself has no side effects; the other says additionally that the function is invariant with respect to the environment. The former category of functions can all be run in parallel safely, provided there are no other threads modifying the environment simultaneously. The latter category can be run absolutely any time, any order, and any way you want, no matter what else is going on with the environment.

--bb
Apr 02 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 So does that mean
 
   pure int f(const Class c)
   { ...
   }
 
 will *not* be allowed?  Because some part of c *could* change, even if 
 it's not f() doing the changing.  I.e. another thread could be changing 
 c concurrently.
Right. That declaration of f is erroneous.
 If so, that seems unnecessarily restrictive to me.  Any side effect or 
 indeterminacy of f() in that case is not because of f()'s doing.  So 
 f()'s behavior *is* pure even if it takes a const argument.  It's not 
 f's fault if the environment it's living in is unstable.
A function is pure or it isn't; there really isn't any room for "unstable" purity, or the point of purity is lost.

Look at it another way. You can take the value of the bits pushed on the stack as arguments to a pure function, and compare that with previous bits passed to the same function, and if there's a binary bit match, you can skip calling the function and return a previously cached result.

If that cannot be done, then the function is NOT pure.
 Maybe there need to be two levels of pure?  One says this function by 
 itself has no side effects, the other says additionally that the 
 function is invariant with respect to the environment.  The former 
 category of functions can all be run in parallel safely provided there 
 are no other threads modifying the environment simultaneously.
No, you cannot run them simultaneously. The problem with const (instead of invariant) is that the transitive state of the passed object is NOT guaranteed to be the same:

    pure int foo(const C c) { return c.p[0]; }
    class C { int* p; }

    C c;
    c.p = q;
    int i = foo(c);
    q[0] = 3;
    int j = foo(c);

Note that these foo's cannot be run in parallel, despite c never having been changed. I think that notion of purity is not useful.
Apr 02 2008
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 If so, that seems unnecessarily restrictive to me.  Any side effect or 
 indeterminacy of f() in that case is not because of f()'s doing.  So 
 f()'s behavior *is* pure even if it takes a const argument.  It's not 
 f's fault if the environment it's living in is unstable.
A function is pure or it isn't, there really isn't any room for "unstable" purity, or the point of purity is lost. Look at it another way. You can take the value of the bits pushed on the stack as arguments to a pure function, and compare that with previous bits passed to the same function, and if there's a binary bit match, you can skip calling the function and return a previously cached result. If that cannot be done, then the function is NOT pure.
If you want to define it that way then ok. So lets say the kind of function I'm talking about would be labeled "nosfx".
 Maybe there need to be two levels of pure?  One says this function by 
 itself has no side effects, the other says additionally that the 
 function is invariant with respect to the environment.  The former 
 category of functions can all be run in parallel safely provided there 
 are no other threads modifying the environment simultaneously.
 No, you cannot run them simultaneously. The problem with const (instead
 of invariant) is that the transitive state of the passed object is NOT
 guaranteed to be the same:

     pure int foo(const C c) { return c.p[0]; }
     class C { int* p; }

     C c;
     c.p = q;
     int i = foo(c);
     q[0] = 3;
     int j = foo(c);

 Note that these foo's cannot be run in parallel, despite c never having
 been changed. I think that notion of purity is not useful.
They can be run in parallel, as long as you don't stick non-pure mutating operations in between them. That's quite useful for automatically parallelizing tight loops like this:

    nosfx int foo(const C c) { return c.p[0]; }
    ...

    C[] array;
    ...// put bunch of C's in array
    int sum = 0;
    foreach(c; array) {
        sum += foo(c);
    }

Those foreach bodies could be run in parallel (with appropriate reduce logic added to the handling of 'sum' by the compiler), since we know each call to foo() in that loop has no external side effects. This is the kind of thing OpenMP lets you do with directives like "#pragma pfor reduce". Except I don't believe OpenMP has any way to verify that foo() is really side-effect free.

--bb
Apr 02 2008
parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Thu, 03 Apr 2008 03:55:47 +0200, Bill Baxter
<dnewsgroup billbaxter.com> wrote:

 They can be run in parallel, as long as you don't stick non-pure
 mutating operations in between them.

 That's quite useful for automatically parallelizing tight loops like
 this:

 nosfx int foo(const C c) { return c.p[0]; }
 ...

 C[] array;
 ...// put bunch of C's in array
 int sum = 0;
 foreach(c; array) {
     sum += foo(c);
 }

 Those foreach bodies could be run in parallel (with appropriate reduce
 logic added to handling of 'sum' by compiler) since we know each call to
 foo() in that loop has no external side effects.

 This is the kind of thing OpenMP lets you do with directives like
 "#pragma pfor reduce".  Except I don't believe OpenMP has any way to
 verify that foo() is really side-effect free.

 --bb

And could this not be done with real pure functions?

    pure int foo(invariant C c) { return c.p[0]; }

    C[] array;
    ...// put bunch of C's in array
    int sum = 0;
    foreach(c; array) {
        sum += foo(cast(invariant)c);
    }

We know that c will not change, so we can cast it to invariant.

--Simen
Apr 03 2008
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Simen Kjaeraas wrote:
 On Thu, 03 Apr 2008 03:55:47 +0200, Bill Baxter 
 <dnewsgroup billbaxter.com> wrote:
 
 They can be run in parallel, as long as you don't stick non-pure 
 mutating operations in between them.

 That's quite useful for automatically parallelizing tight loops like 
 this:

 nosfx int foo(const C c) { return c.p[0]; }
 ...

 C[] array;
 ...// put bunch of C's in array
 int sum = 0;
 foreach(c; array) {
     sum += foo(c);
 }

 Those foreach bodies could be run in parallel (with appropriate reduce 
 logic added to handling of 'sum' by compiler) since we know each call 
 to foo() in that loop has no external side effects.

 This is the kind of thing OpenMP lets you do with directives like 
 "#pragma pfor reduce".  Except I don't believe OpenMP has any way to 
 verify that foo() is really side-effect free.

 --bb
And could this not be done with real pure functions? pure int foo(invariant C c) { return c.p[0];} C[] array; ...// put bunch of C's in array int sum = 0; foreach(c; array) { sum += foo(cast(invariant)c); } We know that c will not change, so we can cast it to invariant. --Simen
That complicates things if you want to modify the C's in the array after the foreach. --bb
Apr 03 2008
parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Thu, 03 Apr 2008 15:33:20 +0200, Bill Baxter
<dnewsgroup billbaxter.com> wrote:

 Simen Kjaeraas wrote:
 On Thu, 03 Apr 2008 03:55:47 +0200, Bill Baxter
 <dnewsgroup billbaxter.com> wrote:

 They can be run in parallel, as long as you don't stick non-pure
 mutating operations in between them.

 That's quite useful for automatically parallelizing tight loops like
 this:

 nosfx int foo(const C c) { return c.p[0]; }
 ...

 C[] array;
 ...// put bunch of C's in array
 int sum = 0;
 foreach(c; array) {
     sum += foo(c);
 }

 Those foreach bodies could be run in parallel (with appropriate reduce
 logic added to handling of 'sum' by compiler) since we know each call
 to foo() in that loop has no external side effects.

 This is the kind of thing OpenMP lets you do with directives like
 "#pragma pfor reduce".  Except I don't believe OpenMP has any way to
 verify that foo() is really side-effect free.

 --bb

 And could this not be done with real pure functions?

 pure int foo(invariant C c) { return c.p[0]; }

 C[] array;
 ...// put bunch of C's in array
 int sum = 0;
 foreach(c; array) {
     sum += foo(cast(invariant)c);
 }

 We know that c will not change, so we can cast it to invariant.

 --Simen

 That complicates things if you want to modify the C's in the array after
 the foreach.

 --bb

I can't quite see why, as long as we know the foreach has completed running. foo does not care whether the array changes after it's exited, and the array does not consist of invariant Cs; they're only cast to invariant when fed to foo.

-- Simen
Apr 03 2008
prev sibling next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 So does that mean

   pure int f(const Class c)
   { ...
   }

 will *not* be allowed?  Because some part of c *could* change, even if 
 it's not f() doing the changing.  I.e. another thread could be 
 changing c concurrently.
Right. That declaration of f is erroneous.
 If so, that seems unnecessarily restrictive to me.  Any side effect or 
 indeterminacy of f() in that case is not because of f()'s doing.  So 
 f()'s behavior *is* pure even if it takes a const argument.  It's not 
 f's fault if the environment it's living in is unstable.
A function is pure or it isn't, there really isn't any room for "unstable" purity, or the point of purity is lost.
I think there is. A function which accesses a global is inherently impure. But a function which takes a non-invariant argument can, in theory, safely be called from inside a pure function, provided any non-invariant arguments are local variables, or invariant. eg.

    class C
    {
        int numcalls;
        this() { numcalls = 0; }
        void foo() { ++numcalls; } // Not pure - but no global side effects.
    }

    pure int bar(int x)
    {
        C c = new C;
        for (int i = 0; i < x; ++i)
            c.foo();
        return c.numcalls;
    }

Is bar() pure or not? Incidentally, this type of code currently works in CTFE.
 
 Look at it another way. You can take the value of the bits pushed on the 
 stack as arguments to a pure function, and compare that with previous 
 bits passed to the same function, and if there's a binary bit match, you 
 can skip calling the function and return a previously cached result.
 
 If that cannot be done, then the function is NOT pure.
 
 Maybe there need to be two levels of pure?  One says this function by 
 itself has no side effects, the other says additionally that the 
 function is invariant with respect to the environment.  The former 
 category of functions can all be run in parallel safely provided there 
 are no other threads modifying the environment simultaneously.
No, you cannot run them simultaneously. The problem with const (instead of invariant) is that the transitive state of the passed object is NOT guaranteed to be the same:
Apr 03 2008
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Don Clugston" wrote
 Walter Bright wrote:
 Bill Baxter wrote:
 So does that mean

   pure int f(const Class c)
   { ...
   }

 will *not* be allowed?  Because some part of c *could* change, even if 
 it's not f() doing the changing.  I.e. another thread could be changing 
 c concurrently.
Right. That declaration of f is erroneous.
 If so, that seems unnecessarily restrictive to me.  Any side effect or 
 indeterminacy of f() in that case is not because of f()'s doing.  So 
 f()'s behavior *is* pure even if it takes a const argument.  It's not 
 f's fault if the environment it's living in is unstable.
A function is pure or it isn't, there really isn't any room for "unstable" purity, or the point of purity is lost.
 I think there is. A function which accesses a global is inherently
 impure. But a function which takes a non-invariant argument can, in
 theory, safely be called from inside a pure function, provided any
 non-invariant arguments are local variables, or invariant. eg.

     class C
     {
         int numcalls;
         this() { numcalls = 0; }
         void foo() { ++numcalls; } // Not pure - but no global side effects.
     }

     pure int bar(int x)
     {
         C c = new C;
         for (int i = 0; i < x; ++i)
             c.foo();
         return c.numcalls;
     }

 Is bar() pure or not? Incidentally, this type of code currently works
 in CTFE.
Yes, bar is pure, but in order for it to be statically verifiable that it is pure, I think you need to slap a pure tag on C.foo, because without source code, the compiler cannot check foo to see if it accesses any other values. I think Walter's vision is to have pure be statically verified by the compiler, to ensure that one does not erroneously label a non-pure function as pure. In this context, I am very interested to hear the set of rules that allow this.

What I am also interested in is whether general pointers to non-invariant data will be allowed to be used. For example:

    char[] globaldata;

    pure invariant(char)[] transform(invariant(char)[] input)
    {
        char[] result = input.dup;
        // transform result
        return cast(invariant)result;
    }

Will this compile? If so, how does the compiler statically know during the transformation phase that result is pointing to data that is local, since it is on the heap? Is the compiler going to 'tag' such variables as pure-safe? I.e., how does it have the context information about result to know that it is OK to use, when it could have been pointing to globaldata?

-Steve
Apr 03 2008
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Don Clugston wrote:
 Walter Bright wrote:
 A function is pure or it isn't, there really isn't any room for 
 "unstable" purity, or the point of purity is lost.
I think there is. A function which accesses a global is inherently impure. But a function which takes a non-invariant argument can, in theory, safely be called from inside a pure function, provided any non-invariant arguments are local variables, or invariant. E.g.:

class C
{
    int numcalls;
    this() { numcalls = 0; }
    void foo() { ++numcalls; } // Not pure - but no global side effects.
}

pure int bar(int x)
{
    C c = new C;
    for (int i = 0; i < x; ++i) c.foo();
    return c.numcalls;
}

Is bar() pure or not?
I think it is pure, although it might be difficult for the compiler to detect it. Like for CTFE, I think the compiler should start out rather restrictive about what it regards as pure, and over time expand it as it gets more capable.
Apr 03 2008
next sibling parent reply Georg Wrede <georg nospam.org> writes:
Walter Bright wrote:
 Don Clugston wrote:
 Walter Bright wrote:

 A function is pure or it isn't, there really isn't any room for 
 "unstable" purity, or the point of purity is lost.
I think there is. A function which accesses a global is inherently impure. But a function which takes a non-invariant argument can, in theory, safely be called from inside a pure function, provided any non-invariant arguments are local variables, or invariant. E.g.:

class C
{
    int numcalls;
    this() { numcalls = 0; }
    void foo() { ++numcalls; } // Not pure - but no global side effects.
}

pure int bar(int x)
{
    C c = new C;
    for (int i = 0; i < x; ++i) c.foo();
    return c.numcalls;
}

Is bar() pure or not?
I think it is pure, although it might be difficult for the compiler to detect it. Like for CTFE, I think the compiler should start out rather restrictive about what it regards as pure, and over time expand it as it gets more capable.
No. The result of having run the function changes something outside the function, and that makes it impure. It does not matter that what's being changed is hidden behind the bushes in a small tin box.
Apr 03 2008
parent reply Don Clugston <dac nospam.com.au> writes:
Georg Wrede wrote:
 Walter Bright wrote:
 Don Clugston wrote:
 Walter Bright wrote:

 A function is pure or it isn't, there really isn't any room for 
 "unstable" purity, or the point of purity is lost.
I think there is. A function which accesses a global is inherently impure. But a function which takes a non-invariant argument can, in theory, safely be called from inside a pure function, provided any non-invariant arguments are local variables, or invariant. E.g.:

class C
{
    int numcalls;
    this() { numcalls = 0; }
    void foo() { ++numcalls; } // Not pure - but no global side effects.
}

pure int bar(int x)
{
    C c = new C;
    for (int i = 0; i < x; ++i) c.foo();
    return c.numcalls;
}

Is bar() pure or not?
I think it is pure, although it might be difficult for the compiler to detect it. Like for CTFE, I think the compiler should start out rather restrictive about what it regards as pure, and over time expand it as it gets more capable.
No. The result of having run the function changes something outside the function, and that makes it impure.
foo() is definitely not pure. But bar() doesn't change anything outside bar().
* foo() doesn't read or write anything other than its parameters.
* The only parameters bar() provides to foo() are local.
 It does not matter that what's being changed is hidden behind the bushes 
 in a small tin box.
I don't think that's what happening here. A 'partially pure' function like foo() can be wrapped in a pure function. The question is whether the compiler can allow this, without foo() being marked in some way. It works for CTFE, because in CTFE, the compiler has access to the source code; but that's not necessarily true for run time functions. This partial impurity is not transitive! (whereas accessing a global is transitive).
Apr 04 2008
parent Leandro Lucarella <llucax gmail.com> writes:
Don Clugston, el  4 de abril a las 09:52 me escribiste:
The result of having run the function changes something outside the function, and that makes it impure.
foo() is definitely not pure. But bar() doesn't change anything outside bar().
Maybe foo should be marked as ... "local"? I.e. make some distinction between methods that affect globals and methods that only modify locals.
 * foo() doesn't read or write anything other than its parameters.
 * The only parameters bar() provides to foo() are local.
 
It does not matter that what's being changed is hidden behind the bushes in a small tin box.
I don't think that's what happening here. A 'partially pure' function like foo() can be wrapped in a pure function. The question is whether the compiler can allow this, without foo() being marked in some way. It works for CTFE, because in CTFE, the compiler has access to the source code; but that's not necessarily true for run time functions. This partial impurity is not transitive! (whereas accessing a global is transitive).
Well, this applies to functions too. What about:

class Foo
{
    int i;
}

void foo(Foo f) { f.i = 5; }

pure void bar()
{
    Foo f = new Foo;
    foo(f);
}

bar() has no side effects either. Maybe foo should be marked... like "local". But I think it would be much nicer if it could be inferred by the compiler, though as you say it could be a problem if the source code is not available. Maybe this kind of "annotations" could be automatically generated by the compiler and put in the .di "headers"?

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Wake from your sleep, the drying of your tears,
Today we escape, we escape.
Apr 04 2008
prev sibling next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 04/04/2008, Walter Bright <newshound1 digitalmars.com> wrote:
  I think it is pure, although it might be difficult for the compiler to
 detect it. Like for CTFE, I think the compiler should start out rather
 restrictive about what it regards as pure, and over time expand it as it
 gets more capable.
What about this one. Is f pure?

import std.date;

void main()
{
    invariant s = toString(getUTCtime());

    pure string f()
    {
        return s;
    }

    writefln(s);
}

The function f relies only on the string s, which is fully invariant and in scope. Yet f will give a different return value for the same input (void) each time the program is run. I assume the answer is no, but this one seems a bit grey to me. Is it impure because s is not global?
Apr 03 2008
parent Walter Bright <newshound1 digitalmars.com> writes:
Janice Caron wrote:
 What about this one. Is f pure?
 
     import std.date;
 
     void main()
     {
         invariant s = toString(getUTCtime());
 
         pure string f()
         {
             return s;
         }
 
         writefln(s);
     }
 
 The function f relies only on the string s. which is fully invariant
 and in scope. Yet f will give a different return value for the same
 input (void), each time the program is run.
 
 I assume the answer is no, but this one seems a bit grey to me. Is it
 impure because s is not global?
f() is pure because it relies on invariant data. Invariant data can be initialized at runtime, as long as it is before any pure functions attempt to read it.
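A sketch of the pattern Walter describes here: invariant data initialized at run time, in a module constructor, before any pure function reads it. This is illustrative only (it reuses the std.date calls from Janice's example, and whether a module constructor may assign an invariant global was itself still being settled at the time):

```d
import std.date;

// Invariant, but its value is not known until the program starts.
invariant string startTime;

static this()
{
    // Runs once at startup, before any pure function can observe startTime.
    startTime = toString(getUTCtime());
}

pure string f()
{
    return startTime; // OK: f reads only invariant data
}
```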
Apr 04 2008
prev sibling next sibling parent "Janice Caron" <caron800 googlemail.com> writes:
On 04/04/2008, Janice Caron <caron800 googlemail.com> wrote:
         writefln(s);
That would have been a more effective question if I'd written writefln(f); :-)
Apr 03 2008
prev sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2008-04-04 01:36:42 -0400, Walter Bright <newshound1 digitalmars.com> said:

 Don Clugston wrote:
 
 class C
 {
   int numcalls;
   this() { numcalls=0; }
   void foo() { ++numcalls; } // Not pure - but no global side effects.
 }
 
 pure int bar(int x)
 {
     C c = new C;
     for (int i=0; i<x; ++i) c.foo();
     return c.numcalls;
 }
 
 Is bar() pure or not?
I think it is pure, although it might be difficult for the compiler to detect it. Like for CTFE, I think the compiler should start out rather restrictive about what it regards as pure, and over time expand it as it gets more capable.
Personally, I'd rather say foo is pure. I'd define pure as "always giving the same outputs given the same inputs and having no external effect". That would make foo pure, because given any object C, it'll always change C in the same, predictable way. This definition allows parallelizing loops that wouldn't be parallelizable if foo wasn't pure (assuming you label foo as pure), such as this one:

    foreach (C c; arrayOfUniqueC)
        c.foo();

It also means that purity doesn't prevent you from altering an out parameter, and that you don't have to make all your parameters invariant. As long as you only alter what was given to you in the parameters, the compiler knows about these parameters from the call site, and can take the decision to parallelize or not at the call site. In the example above, if you did this:

    C c = new C;
    for (int i = 0; i < 10; ++i)
        c.foo();

it wouldn't be parallelizable, because calling foo alters the state for the next call. But that's okay, because the compiler can figure that out from the call site and won't do the parallelization. In other words, if the result depends on the previous result, you can't parallelize. The same situation can happen with a pure function returning a value:

    C c = new C;
    for (int i = 0; i < 10; ++i)
        c = pureFunction(c);

So in other words, altering a non-const parameter isn't a side effect; it's a predictable effect you can deduce from the function signature, and I think it should be allowed in a pure function.

P.S.: I'm not sure what would happen if arrayOfUniqueC in the example above were to contain two or more pointers to the same object. That would be a problem, and that's why I put the word "unique" in the name to mean there are no duplicates. I'm not sure how the compiler could find out about that, though.
Anyway, the worst that can happen is that the compiler cannot parallelize this particular case without an additional clue; allowing foo to be pure would still permit the optimization in the case of the bar function in Don's example, where in any case the compiler knows there are no duplicate pointers.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Apr 04 2008
prev sibling parent reply Georg Wrede <georg nospam.org> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 
 So does that mean

   pure int f(const Class c)
   { ...
   }

 will *not* be allowed?  Because some part of c *could* change, even if 
 it's not f() doing the changing.  I.e. another thread could be 
 changing c concurrently.
Right. That declaration of f is erroneous.
I'd disagree. Whether a function is pure or not has nothing to do with whether c could change or not.

Take, for example, sin(x). We all know that it's pure, period. Now, whether x changes between two consecutive invocations or not is irrelevant. Of course you get a different result. And you should. But pure, IMHO, means (1) the function is guaranteed to give the same result every time /if/ the input is the same (like sin(x) does), (2) it does not change anything at all anywhere, its only way of affecting life is by its return value, and (3) it doesn't access info from non-constants anywhere.

Now, to the issue between

    (1) pure int f(const Class c)
    (2) pure int f(Class c)

I'd say that, since a pure function promises not to change /anything/ (other than its return value, of course) anywhere, this means a pure function /simply has to implicitly treat its arguments as unchangeable/!

Therefore, depending on what A/W want, either:

 - a pure function has to have const-decorated parameters only (1), and (2) would be a syntax error

or

 - a pure function's parameters are implicitly const, and (1) is then a syntax error

(Not making the latter a syntax error isn't technically impossible, but for those learning the language and the casual readers of, e.g., magazine articles, this would wreak havoc with their understanding of D.)

*Back to concurrency*: if someone changes c in mid-run, then it's the programmer's fault. It's his headache. The function f only guarantees not to change c /by itself/. If the programmer wants to guarantee that c doesn't change during f's invocation, then the invocation of f should be put in a synchronized block. This might be the case if c has a complicated state (with several fields).
 A function is pure or it isn't, there really isn't any room for 
 "unstable" purity, or the point of purity is lost.
 
 Look at it another way. You can take the value of the bits pushed on the 
 stack as arguments to a pure function, and compare that with previous 
 bits passed to the same function, and if there's a binary bit match, you 
 can skip calling the function and return a previously cached result.
 
 If that cannot be done, then the function is NOT pure.
I wonder if that's so simple when we allow non-POD type arguments? Of course, if you're speaking metaphorically here, then that's another thing. But the same literal bit pattern could of course mean the same object referred to, only that now it suddenly contains other data.

To further clarify: if a pure function gets a tree t passed to it, and makes a (potentially very complicated and) deep search into it, then the only thing the function's pureness guarantees is that if the particular nodes and leaves it's looked at stay the same, then it should return the same result the next time. The rest of the tree could undergo severe changes in between, or even during an invocation of the pure function, for all it knows. (Of course this is academic, but it tries to illustrate the issue literally.) In practice we wouldn't want that, but that's not relevant to the Pureness Issue.
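Walter's bit-pattern caching argument (quoted above) can be sketched as a hypothetical memoization wrapper; square and the cache layout are illustrative only, not anything proposed for the language:

```d
// Cache results of a conceptually pure function, keyed by its argument
// bits. For a value type like int, a bit match really is a match; for a
// reference type, the same bits may name an object whose contents have
// since changed - which is exactly the objection raised here.
int[int] cache;

int square(int x) // conceptually pure
{
    return x * x;
}

int memoSquare(int x)
{
    if (auto p = x in cache)
        return *p;               // bit match: skip the call, reuse the result
    return cache[x] = square(x); // otherwise compute and remember
}
```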
Apr 03 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Georg Wrede" wrote
 Walter Bright wrote:
 Bill Baxter wrote:

 So does that mean

   pure int f(const Class c)
   { ...
   }

 will *not* be allowed?  Because some part of c *could* change, even if 
 it's not f() doing the changing.  I.e. another thread could be changing 
 c concurrently.
Right. That declaration of f is erroneous.
I'd disagree. Whether a function is pure or not has nothing to do with whether c could change or not.
The question is, should pure functions be protected from other threads changing c? I think Walter wants this to be true, and for that, c must be invariant. I think Walter is trying to promote a different definition for pure than what you are thinking of.
 Take, for example sin(x). We all know that it's pure, period. Now, whether 
 x changes between two consecutive invocations or not, is irrelevant. Of 
 course you get a different result. And you should.
sin(x) is a completely different animal, because x is not a reference type, it is pushed on the stack, and the pure function has the only reference to it. Nothing can change x while sin is running.
 But pure, IMHO, means (1) the function is guaranteed to give the same 
 result every time /if/ the input is the same (like sin(x) does), (2) it 
 does not change anything at all anywhere, it's only way of affecting life 
 is by its return value, (3) it doesn't access info from non-constants 
 anywhere.

 Now, to the issue between

     (1)pure int f(const Class c)
     (2)pure int f(Class c)

 I'd say that, since a Pure function promises not to change /anything/ 
 (other than its return value, of course) anywhere, this means a pure 
 function /simply has to implicitly treat its arguments as unchangeable/!

 Therefore, depending on what A/W want, either:

  - a pure function has to have const decorated parameters only (1)
  - and (2) would be a syntax error

 or

  - a pure function's parameters are implicitly const, and (1) is then a 
 syntax error
If the arguments must be invariant, then they cannot be implicitly cast. I understand what you are saying, but I think A/W want something different.
 (Not making the latter a syntax error isn't technically impossible, but 
 for those learning the language and the casual readers of ex. magazine 
 articles, this would wreak havoc with their understanding of D.)


 *Back to concurrency*, if someone changes c in mid-run, then it's the 
 programmer's fault. It's his headache. The function f only guarantees not 
 to change c /by itself/.
I think this is what A/W want to prevent. Otherwise, all this touting about pure functions being the way to multiprogram in D is not completely true: you still have to deal with multithreaded issues. Walter has said before he wants multithreaded programming to "just work" without having to do any locking, just like FP languages have.

-Steve
Apr 04 2008
parent reply Georg Wrede <georg nospam.org> writes:
Steven Schveighoffer wrote:
 "Georg Wrede" wrote
Walter Bright wrote:
Bill Baxter wrote:

So does that mean

  pure int f(const Class c)
  { ...
  }

will *not* be allowed?  Because some part of c *could* change, even if 
it's not f() doing the changing.  I.e. another thread could be changing 
c concurrently.
Right. That declaration of f is erroneous.
I'd disagree. Whether a function is pure or not has nothing to do with whether c could change or not.
The question is, should pure functions be protected from other threads changing c? I think Walter wants this to be true, and for that, c must be invariant. I think Walter is trying to promote a different definition for pure than what you are thinking of.
Hmm. (There ought to be a law against stealing other people's words!!! And jail time for changing them subtly and inconspicuously!) Hundreds of (otherwise unnecessary) posts even in this NG are written every month, just because two people argue about something that, in the end, turns out to be just them having different notions of what a word means. And reading those posts wastes everyone's time! And, in that case, it would have been prudent to clarify this as a comment to "What is pure and what is not". Not only for my sake, but for all the readers who've seen this post stay undisputed or uncommented.
Take, for example sin(x). We all know that it's pure, period. Now, whether 
x changes between two consecutive invocations or not, is irrelevant. Of 
course you get a different result. And you should.
sin(x) is a completely different animal, because x is not a reference type, it is pushed on the stack, and the pure function has the only reference to it. Nothing can change x while sin is running.
So you mean that Walter's pure implies that the argument has to be Petrified? (Dammit, I don't even dare to use the normal words for stuff anymore, so I make my own words! Petrified here simply means "we know it won't change". Vague, I know.) Does it also mean that the pure function itself causes this Petrified thing, or is it the programmer's responsibility to Petrify the object before passing it?
But pure, IMHO, means (1) the function is guaranteed to give the same 
result every time /if/ the input is the same (like sin(x) does), (2) it 
does not change anything at all anywhere, it's only way of affecting life 
is by its return value, (3) it doesn't access info from non-constants 
anywhere.

Now, to the issue between

    (1)pure int f(const Class c)
    (2)pure int f(Class c)

I'd say that, since a Pure function promises not to change /anything/ 
(other than its return value, of course) anywhere, this means a pure 
function /simply has to implicitly treat its arguments as unchangeable/!

Therefore, depending on what A/W want, either:

 - a pure function has to have const decorated parameters only (1)
 - and (2) would be a syntax error

or

 - a pure function's parameters are implicitly const, and (1) is then a 
syntax error
If the arguments must be invariant, then they cannot be implicitly cast. I understand what you are saying, but I think A/W want something different.
Arrgh. You lost me here. (I'm getting paranoid: did you write this because it says "implicitly const" above? What if it had been just "petrified", as in "somehow just known not to change while the pure function is running"?)
(Not making the latter a syntax error isn't technically impossible, but 
for those learning the language and the casual readers of ex. magazine 
articles, this would wreak havoc with their understanding of D.)

*Back to concurrency*, if someone changes c in mid-run, then it's the 
programmer's fault. It's his headache. The function f only guarantees not 
to change c /by itself/.
I think this is what A/W want to prevent. Otherwise, all this touting about pure functions being the way to multiprogram in D is not completely true: you still have to deal with multithreaded issues.
Heh, I always thought everybody already knew and agreed on that. Running pure (as in "my" definition ;-( ) functions on atomic data would be nice and fast. No concurrency issues, IMHO. But obviously, with any reference type one has to have some kind of mechanism that guarantees atomicity of access.
 Walter has said before 
 he wants multithreaded programming to "just work" without having to do any 
 locking, just like FP languages have.
That'd be excellent. Problem is, then the data would have to be Petrified all the FP time. This would probably result in a common programming pattern, where things are first set up using non-functional programming, then the data is Petrified (what do I know, explicitly, implicitly?), and then a lot of functional stuff is run, most likely in concurrent processes. Then some non-functional code is run to prepare for the next functional bout, etc. Hmm, might not be too bad a programming pattern. At least it could ease the use of both functional and non-functional code in the same program.
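The set-up/petrify/compute cycle described above can be sketched with D's array .idup, which makes an invariant copy of mutable data (an illustrative sketch only; names are made up for the example):

```d
void cycle()
{
    // Phase 1: imperative set-up with mutable data.
    char[] buf = "hello".dup;
    buf[0] = 'H';

    // Phase 2: petrify - take an invariant snapshot.
    invariant(char)[] frozen = buf.idup;

    // Phase 3: hand frozen to pure/functional code, possibly in several
    // threads at once; no locking needed, since frozen can never change.

    // Phase 4: thaw a mutable copy for the next imperative round.
    char[] next = frozen.dup;
}
```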
Apr 04 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Georg Wrede" wrote
 Steven Schveighoffer wrote:
 "Georg Wrede" wrote
Walter Bright wrote:
Bill Baxter wrote:

So does that mean

  pure int f(const Class c)
  { ...
  }

will *not* be allowed?  Because some part of c *could* change, even if 
it's not f() doing the changing.  I.e. another thread could be changing 
c concurrently.
Right. That declaration of f is erroneous.
I'd disagree. Whether a function is pure or not has nothing to do with whether c could change or not.
The question is, should pure functions be protected from other threads changing c? I think Walter wants this to be true, and for that, c must be invariant. I think Walter is trying to promote a different definition for pure than what you are thinking of.
Hmm. (There ought to be a law against stealing other people's words!!! And jail time for changing them subtly and inconspicuously!) Hundreds of (otherwise unnecessary) posts even in this NG are written every month, just because two people argue about something that, in the end, turns out to be just them having different notions of what a word means. And reading those posts wastes everyone's time!
Sorry. Usually, to convince "the world" of an opinion, you must argue continuously with the single doubter. Otherwise, it looks like you give in :)
 And, in that case, it would have been prudent to clarify this as a comment 
 to "What is pure and what is not". Not only for my sake, but for all the 
 readers who've seen this post stay undisputed or uncommented.
I still don't know the answer to this, as I've never used 'pure' and I'm pretty sure Walter intends to use pure in a different way than it has been used before :)
Take, for example sin(x). We all know that it's pure, period. Now, 
whether x changes between two consecutive invocations or not, is 
irrelevant. Of course you get a different result. And you should.
sin(x) is a completely different animal, because x is not a reference type, it is pushed on the stack, and the pure function has the only reference to it. Nothing can change x while sin is running.
So you mean that Walter's pure implies that the argument has to be Petrified? (Dammit, I don't even dare to use the normal words for stuff anymore, so I make my own words! Petrified here simply means "we know it won't change". Vague, I know.)
Only for reference types. Value types (such as structs without pointers/references, and basic types like int) are copied onto the stack every time they are passed to a function, so they are guaranteed to be unique. That is why sin(x) doesn't need 'Petrified' data: a guaranteed-unique copy of x (presumably a double?) is made.
 Does it also mean that the pure function itself causes this Petrified 
 thing, or is it the programmer's responsibility to Petrify the object 
 before passing it?
I would guess that the caller has to ensure this for reference/pointer types, otherwise, the compiler can't tell whether the data pointed to is not going to change during the function with a different thread. BTW, the keyword that means 'Petrified' is invariant.
But pure, IMHO, means (1) the function is guaranteed to give the same 
result every time /if/ the input is the same (like sin(x) does), (2) it 
does not change anything at all anywhere, it's only way of affecting life 
is by its return value, (3) it doesn't access info from non-constants 
anywhere.

Now, to the issue between

    (1)pure int f(const Class c)
    (2)pure int f(Class c)

I'd say that, since a Pure function promises not to change /anything/ 
(other than its return value, of course) anywhere, this means a pure 
function /simply has to implicitly treat its arguments as unchangeable/!

Therefore, depending on what A/W want, either:

 - a pure function has to have const decorated parameters only (1)
 - and (2) would be a syntax error

or

 - a pure function's parameters are implicitly const, and (1) is then a 
 syntax error
If the arguments must be invariant, then they cannot be implicitly cast. I understand what you are saying, but I think A/W want something different.
Arrgh. You lost me here. (I'm getting paranoid: did you write this because it says "implicitly const" above? What if it had been just "petrified", as in "somehow just known not to change while the pure function is running"?)
I'm saying, if Walter and Andrei define pure functions as ONLY taking invariant data, then (2) must be the syntax error. In fact, to get technical, (1) is a syntax error also; it should be:

    pure int f(invariant Class c);

as const does not guarantee that c will not change, it just guarantees that f will not change c. const in D means 'constant through this view'.

Of course, I could also be wrong, and they may not intend to implement pure functions as requiring invariant reference parameters. In that case, I've totally misunderstood the whole point of having pure functions, and I'd hazard to guess it negates the reason for having invariant in the first place :)
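The 'constant through this view' distinction can be illustrated with a short sketch (2008-era keywords; the variable names are made up for the example):

```d
void views()
{
    char[] m = "abc".dup;             // mutable data

    const(char)[] cview = m;          // const: this view can't write,
                                      // but m can still change underneath it
    invariant(char)[] iview = m.idup; // invariant: an independent copy that
                                      // nobody can ever change

    m[0] = 'x'; // legal: cview now sees the change, iview is unaffected
}
```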
(Not making the latter a syntax error isn't technically impossible, but 
for those learning the language and the casual readers of ex. magazine 
articles, this would wreak havoc with their understanding D.)

*Back to concurrency*, if someone changes c in mid-run, then it's the 
programmer's fault. It's his headache. The function f only guarantees not 
to change c /by itself/.
I think this is what A/W want to prevent. Otherwise, all this touting about pure functions being the way to multiprogram in D is not completely true: you still have to deal with multithreaded issues.
Heh, I always thought everybody already knew and agreed on that. Running pure (as in "my" definition ;-( ) functions on atomic data would be nice and fast. No concurrency issues, IMHO. But obviously, with any reference type one has to have some kind of mechanism that guarantees atomicity of access.
No need to guarantee atomicity for data that will not change. That is why I think invariant reference parameters are necessary.
 Walter has said before he wants multithreaded programming to "just work" 
 without having to do any locking, just like FP languages have.
That'd be excellent. Problem is, then the data would have to be Petrified all the FP time.
Not only all FP time, but all time. If you are executing a pure function in one thread and a non-pure function in another thread, and both have references to the same data, you cannot avoid synchronization issues if the non-pure function has a writable reference while the FP function has a 'Petrified' reference. Sorry to be so confusing. It's really tough to explain things that I'm not totally sure of, but I have no choice, as Walter is never specific about what the rules will be :) We have to infer them from his statements of how pure functions will work.

-Steve
Apr 05 2008
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 05/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
  In fact, to get technical,
 it should be:
 pure int f(invariant Class c);
I wondered about that. It seems to me that if /all/ of the parameters, /and/ the return value, /must/ be invariant (or implicitly castable to invariant, e.g. int), then couldn't

    pure R f(A a, B b, C c, D d)

be syntactic sugar for

    pure invariant(R) f(invariant(A) a, invariant(B) b, invariant(C) c, invariant(D) d)

? (Especially what with "invariant" being a nine-letter keyword and all!)
Apr 05 2008
parent Georg Wrede <georg nospam.org> writes:
Janice Caron wrote:
 On 05/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 
 In fact, to get technical,
it should be:
pure int f(invariant Class c);
I wondered about that. It seems to me that if /all/ of the parameters, /and/ the return value, /must/ be invariant (or implicitly castable to invariant, e.g. int), then couldn't

    pure R f(A a, B b, C c, D d)

be syntactic sugar for

    pure invariant(R) f(invariant(A) a, invariant(B) b, invariant(C) c, invariant(D) d)

? (Especially what with "invariant" being a nine-letter keyword and all!)
(OT) This shows how misleading the "transitive const" moniker is. Obviously, we should name the topic "recursive invariant".
Apr 05 2008
prev sibling parent Georg Wrede <georg nospam.org> writes:
Steven Schveighoffer wrote:

 Sorry.  Usually, to convince "the world" of an opinion, you must argue 
 continuously with the single doubter.  Otherwise, it looks like you give in 
 :)
This being a public forum (like the Forum in ancient Greece), i.e. a place for debating, it's your duty to do that. That was the whole point of having these forums. And ever since then, those who had no clue, as well as most of the audience in general, could then rely on (or at least successfully bet on) the guy or opinion that wins in the end. (Not of course right every time, but often enough.) (( rest of excellent and lucid post not repeated here )) Thanks!
Apr 05 2008
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el  2 de abril a las 15:22 me escribiste:
 Janice Caron wrote:
However, I suspect that the following will be OK.
    enum g = 42;
    pure int f()
    {
        return g;
    }
Yes, because the value of g can never change. The way to think about pure functions is that if the arguments are the same, the result will be the same. This precludes reading any global state that could change, and precludes writing to any global state.
And using most of the I/O functions, like read/write:

pure byte f()
{
    byte b;
    read(0, &b, 1);
    return b;
}

can't be a pure function.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Thirty years from now, Argentina is going to be one big supermarket with 15
shopping carts, because that's how many people will be able to buy anything.
	-- Sidharta Wiki
Apr 03 2008
prev sibling next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Bill Baxter wrote:
 If the ultimate goal is support for multiprogramming, then shouldn't 
 the detailed design work should start *there*, with how to do great 
 multiprogramming?  Rather than with const.

 Not saying that you guys have done this, but I know from my own 
 experience doing research that it's easy to get hung up trying to 
 solve a tough but solvable problem that seems relevant for getting 
 from A to B, only to realize in the end that it was not as relevant as 
 I thought.
I think it is fairly obvious that transitive invariant (and const) is key to multiprogramming. The transitive closure of the state of everything reachable through an object is part of the state of that object, and all the troubles with multiprogramming stem from the state of an object changing asynchronously.
There are two very different aspects to the multiprogramming problem, though:
* make it correct
* make it fast.

As far as I can tell, it's like any optimisation problem -- it's only 5-10% of the code that matters: it's only necessary to use multiple cores efficiently in a small fraction of the code (obviously the entire code needs to be correct). So I'm a little uneasy about arguments for const based on efficiency concerns -- there are many opportunities to worry about stuff which is ultimately irrelevant. I hope that's not happening here.
Apr 02 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Don Clugston wrote:
 As far as I can tell, it's like any optimisation problem -- it's only 
 5-10% of the code that matters: it's only necessary to use multiple 
 cores efficiently in a small fraction of the code (obviously the entire 
 code needs to be correct). So I'm a little uneasy about arguments for 
 const based on efficiency concerns -- there are many opportunities to 
 worry about stuff which is ultimately irrelevant. I hope that's not 
 happening here.
Although better code generation is a positive consequence of const, and is worth mentioning, it isn't the motivation for it. What const and transitive const achieves is more reliable and more demonstrably correct code, which will indirectly improve productivity.
Apr 02 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Don Clugston wrote:
 As far as I can tell, it's like any optimisation problem -- it's only
 5-10% of the code that matters: it's only necessary to use multiple
 cores efficiently in a small fraction of the code (obviously the entire
 code needs to be correct). So I'm a little uneasy about arguments for
 const based on efficiency concerns -- there are many opportunities to
 worry about stuff which is ultimately irrelevant. I hope that's not
 happening here.
Although better code generation is a positive consequence of const, and is worth mentioning, it isn't the motivation for it. What const and transitive const achieves is more reliable and more demonstrably correct code, which will indirectly improve productivity.
I agree that transitive const can achieve more demonstrably correct code, but I don't think it follows that this will necessarily improve productivity. My experience with const in C++ is that it actually reduces productivity, because I spend more time dealing with the misapplication of const than I gain from whatever bugs the application of const may have prevented.

In fact, in all my time as a programmer I can't think of a single instance where I've encountered a bug that could have been prevented through the use of const.

Sean
Apr 02 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 I agree that transitive const can achieve more demonstrably correct code
 but I don't think it follows that this will necessarily improve productivity.
 My experience with const in C++ is that it actually reduces productivity
 because I spend more time dealing with the misapplication of const than
 I gain from whatever bugs the application of const may have prevented.
 In fact, in all my time as a programmer I can't think of a single instance
 where I've encountered a bug that could have been prevented through
 the use of const.
C++ const is more of a suggestion than anything verifiable by the compiler and static analysis. This will become more obvious as C++ tries to compete in the multiprogramming arena. The C++ compiler *cannot* help with multiprogramming issues; the C++ programmer is entirely dependent on visual inspection of the code, making C++ a tedious, labor intensive language for multiprogramming. How many times have you looked at the documentation for a function API, and wondered which of its parameters were input and which were output? When you talk about the "misapplication" of const, does that mean failure to understand what the specific API of a function is?
Apr 02 2008
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 I agree that transitive const can achieve more demonstrably correct code
 but I don't think it follows that this will necessarily improve 
 productivity.
 My experience with const in C++ is that it actually reduces productivity
 because I spend more time dealing with the misapplication of const than
 I gain from whatever bugs the application of const may have prevented.
 In fact, in all my time as a programmer I can't think of a single 
 instance
 where I've encountered a bug that could have been prevented through
 the use of const.
C++ const is more of a suggestion than anything verifiable by the compiler and static analysis. This will become more obvious as C++ tries to compete in the multiprogramming arena. The C++ compiler *cannot* help with multiprogramming issues; the C++ programmer is entirely dependent on visual inspection of the code, making C++ a tedious, labor intensive language for multiprogramming. How many times have you looked at the documentation for a function API, and wondered which of its parameters were input and which were output? When you talk about the "misapplication" of const, does that mean failure to understand what the specific API of a function is?
My guess is he means trying to call a function with a const value where the author forgot to put the const label on an argument. So you go fix that function's signature. And then find that 3 functions it calls also have incorrect signatures. And then find 5 more that those call that are also incorrect. Then you start thinking, wow, this library isn't const-correct at all; maybe I should just back out all my changes and cast away const at the top. Or maybe I should slog through it and fix the library? That kind of thing is all too familiar to C++ programmers, and quite annoying to deal with. So it leaves an impression.

On the other hand, const preventing you from calling some method you shouldn't be calling (and thereby preventing a bug) doesn't leave much of an impression. You just go "duh -- of course I can't call that" and move on. Also, bugs caused by data changing when you didn't expect it to don't usually jump out as due to lack of const. Instead you go "doh! I shouldn't change that there" or "doh! I shouldn't call that function here".

So unfortunately I think the value of const is very difficult to judge using subjective measures. But it's definitely somewhere in between "no use at all" and "absolutely required". :-)

--bb
Apr 02 2008
parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 On the other hand, const preventing you from calling some method you 
 shouldn't be calling (and thereby preventing a bug) doesn't leave much 
 of an impression.  You just go "duh -- of course I can't call that" and 
 move on.  Also bugs caused by data changing when you didn't expect it to 
 don't usually jump out as due to lack of const.  Instead you go "doh! I 
 shouldn't change that there" or "doh! I shouldn't call that function 
 here".  So unfortunately I think the value of const is very difficult to 
 judge using subjective measures.
Misuse of an API is a perennial problem (it has caused Microsoft endless grief trying to maintain backwards compatibility). Transitive const can help with that, as it makes it clear what is "no toucha my mustache!" and what is not.
Apr 02 2008
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 I agree that transitive const can achieve more demonstrably correct code
 but I don't think it follows that this will necessarily improve productivity.
 My experience with const in C++ is that it actually reduces productivity
 because I spend more time dealing with the misapplication of const than
 I gain from whatever bugs the application of const may have prevented.
 In fact, in all my time as a programmer I can't think of a single instance
 where I've encountered a bug that could have been prevented through
 the use of const.
C++ const is more of a suggestion than anything verifiable by the compiler and static analysis.
I know you like to talk about the unreliability of const in C++ because of const_cast and the like, but in my opinion those are theoretical objections, because I've never encountered even a single use of const_cast in actual code. So while I do agree that C++ const may not be useful for static analysis, I also believe that C++ const works just fine for achieving more demonstrably correct code. For example, if I create this class:

    class BoxedInt
    {
        int val;
    public:
        BoxedInt( int v ) : val( v ) {}
        int get() const { return val; }
        void set( int v ) { val = v; }
    };

I am confident that if someone tries to call set() on a const instance of BoxedInt, a compile-time error will occur. To me, this means that C++ const helps to write demonstrably correct code, because the compiler will trap at least some logic errors related to intended program behavior. Sure, the user could theoretically cast away the const-ness of his BoxedInt object and thus break everything, but as far as I'm concerned that's his problem.
 This will become more obvious as C++ tries
 to compete in the multiprogramming arena. The C++ compiler *cannot* help
 with multiprogramming issues; the C++ programmer is entirely dependent
 on visual inspection of the code, making C++ a tedious, labor intensive
 language for multiprogramming.
I agree with this. C++ const is intended to catch logical mistakes and is not useful for multiprogramming.
 How many times have you looked at the documentation for a function API,
 and wondered which of its parameters were input and which were output?
Never. Though it perhaps helps a bit that C++ classes are value types. I know that counter-examples could be provided along the lines of:

    struct C
    {
        char* buf;
    };

    void fn( const C val )
    {
        val.buf[0] = 0;
    }

But I've never seen anyone actually make this kind of mistake. Perhaps I've just been lucky.
 When you talk about the "misapplication" of const, does that mean
 failure to understand what the specific API of a function is?
Most often, the problem I've seen is for const not to be consistently applied to class member functions or within the program itself. It's been a while, so I'm actually fuzzy on what the exact problems were, but I remember modifying a bunch of unrelated code when I created a new member function in some class. If I had to guess, I'd say this had something to do with the non-transitive nature of const in C++ and that the class contained pointers which were being used in questionable ways.

In fact, here's one such example that I know I encountered, though it may not be the exact situation I'm recalling above. It does involve some minor refactoring, so it's not exactly a "misapplication" of const, however:

    class C
    {
        D* d;
        E* e;
    public:
        C() : d( new D ), e( new E ) {}
        D* getD() const { assert( d ); return d; }
        E* getE() const { assert( e ); return e; }
    };

    class D
    {
    public:
        char* getName() const { ... }
        int getAge() { ... }
    };

    void useC( C const& c )
    {
        D* d = c.getD();
        assert( d );
        printf( "%s %d\n", d->getName(), d->getAge() );
    }

I wanted to add a new get method to this class, but noticed that, based on the class' design, there was no reason for it to return pointers. The contained objects were owned by C and they were guaranteed to be non-null, so why not let the class interface reflect that? In exchange, the compiler would catch some stupid mistakes on the user side, and the user would be able to do away with their null checks. So:

    class C
    {
        D* d;
        E* e;
        F* f;
    public:
        C() : d( new D ), e( new E ), f( new F ) {}
        D const& getD() const { return *d; }
        E const& getE() const { return *e; }
        F const& getF() const { return *f; }
    };

The obvious problem being that the class is now returning const references where it could previously return pointers to non-const data. This was entirely fine within the context of what was actually being done, but the methods in D, E, and F weren't labeled as const in a consistent manner. So I basically ended up having to change code all over the place because C++ does not have transitive const. Obviously, the above issue won't exist in D, however, and the other situations I can't recall may be quite similar to this and thus a non-issue in D. I wish I could recall more exactly.

Sean
Apr 02 2008
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 I know you like to talk about the unreliability of const in C++ because of
 const_cast and the like, but in my opinion those are theoretical objections
 because I've never encountered even a single use of const_cast in actual
 code.
It's not even const_cast. It's simply going *one* level of indirection, and all the const checking is gone, gone, gone. You can have const objects in C++, but not const data structures.
 So while I do agree that C++ const may not be useful for static
 analysis, I also believe that C++ const works just fine for achieving more
 demonstrably correct code.  For example, if I create this class:
 
     class BoxedInt
     {
         int val;
     public:
         BoxedInt( int v ) : val( v ) {}
         int get() const { return val; }
         void set( int v ) { val = v; }
     };
 
 I am confident that if someone tries to call set() on a const instance of
 BoxedInt a compile-time error will occur.  To me, this means that C++
 const helps to write demonstrably correct code because the compiler
 will trap at least some logic errors related to intended program behavior.
 Sure, the user could theoretically cast away const-ness of his BoxedInt
 object and thus break everything, but as far as I'm concerned that's his
 problem.
You've got no references in BoxedInt. How many data structures do you use that contain no references? Essentially, const in C++ only works for trivial objects. (I'm not saying trivial objects are not useful, I'm just saying they are trivial.)
 This will become more obvious as C++ tries
 to compete in the multiprogramming arena. The C++ compiler *cannot* help
 with multiprogramming issues; the C++ programmer is entirely dependent
 on visual inspection of the code, making C++ a tedious, labor intensive
 language for multiprogramming.
I agree with this. C++ const is intended to catch logical mistakes and is not useful for multiprogramming.
 How many times have you looked at the documentation for a function API,
 and wondered which of its parameters were input and which were output?
Never.
Ok, but I do it all the time.
 Though it perhaps helps a bit that C++ classes are value types.
They are value types, true, but I don't think that is relevant to the issue of transitive const.
 I know that counter-examples could be provided along the lines of:
 
     struct C
     {
         char* buf;
     };
 
     void fn( const C val )
     {
         val.buf[0] = 0;
     }
 
 But I've never seen anyone actually make this kind of mistake.  Perhaps I've
 just been lucky.
So what you're saying is you follow the *convention* that const in C++ is transitive. I suggest that this convention is the norm, and that normal idioms are ripe fruit for codification into the language rules. I further suggest that since the C++ compiler cannot help you find any of those mistakes, your projects may unknowingly suffer from them. I do know that when I've tried to do COW in C++ (which relies on transitive const), there have been many bugs and it's been hard to get them flushed out.
 When you talk about the "misapplication" of const, does that mean
 failure to understand what the specific API of a function is?
Most often, the problem I've seen is for const to not be consistently applied to class member functions or within the program itself. It's been a while so I'm actually fuzzy on what the exact problems were, but I remember modifying a bunch of unrelated code when I created a new member function in some class. If I had to guess I'd say this had something to do with the non-transitive nature of const in C++ and that the class contained pointers which were being used in questionable ways.
Doesn't something used in a questionable way sound like a bug that const helped you find, even if it was just pointing out something that should be clarified and better documented?
Apr 02 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 I know you like to talk about the unreliability of const in C++ because of
 const_cast and the like, but in my opinion those are theoretical objections
 because I've never encountered even a single use of const_cast in actual
 code.
It's not even const_cast. It's simply going *one* level of indirection, and all the const checking is gone, gone, gone. You can have const objects in C++, but not const data structures.
 So while I do agree that C++ const may not be useful for static
 analysis, I also believe that C++ const works just fine for achieving more
 demonstrably correct code.  For example, if I create this class:

     class BoxedInt
     {
         int val;
     public:
         BoxedInt( int v ) : val( v ) {}
         int get() const { return val; }
         void set( int v ) { val = v; }
     };

 I am confident that if someone tries to call set() on a const instance of
 BoxedInt a compile-time error will occur.  To me, this means that C++
 const helps to write demonstrably correct code because the compiler
 will trap at least some logic errors related to intended program behavior.
 Sure, the user could theoretically cast away const-ness of his BoxedInt
 object and thus break everything, but as far as I'm concerned that's his
 problem.
You've got no references in BoxedInt. How many data structures do you use that contain no references?
No references at all? Perhaps half. But very few of my C++ objects contain references to external data, and references are almost always managed via shared_ptr or the equivalent. I can say with confidence that I've never been bitten by a bug resulting from non-transitive const in C++. That isn't to say that I think there's no need for transitive const--I think it's a great idea-- but only that this hasn't been a problem in the code that I've written.
 Essentially, const in C++ only works for
 trivial objects. (I'm not saying trivial objects are not useful, I'm
 just saying they are trivial.)
I think this statement is open to interpretation. From a theoretical perspective I agree. But with RAII objects managing references and the like, it's rare for even very complex objects to contain raw pointers, and such RAII objects can be made to fake transitivity of const by overloading their const and non-const methods appropriately.
 This will become more obvious as C++ tries
 to compete in the multiprogramming arena. The C++ compiler *cannot* help
 with multiprogramming issues; the C++ programmer is entirely dependent
 on visual inspection of the code, making C++ a tedious, labor intensive
 language for multiprogramming.
I agree with this. C++ const is intended to catch logical mistakes and is not useful for multiprogramming.
 How many times have you looked at the documentation for a function API,
 and wondered which of its parameters were input and which were output?
Never.
Ok, but I do it all the time.
That's fine. I recognize that I'm not singularly representative of the entire programming world :-)
  > Though it perhaps helps a bit that C++ classes are value types.
 They are value types, true, but I don't think that is relevant to the
 issue of transitive const.
 I know that counter-examples could be provided along the lines of:

     struct C
     {
         char* buf;
     };

     void fn( const C val )
     {
         val.buf[0] = 0;
     }

 But I've never seen anyone actually make this kind of mistake.  Perhaps I've
 just been lucky.
So what you're saying is you follow the *convention* that const in C++ is transitive. I suggest that this convention is the norm, and that normal idioms are ripe fruit for codification into the language rules.
I agree.
 I further suggest that since the C++ compiler cannot help you find any
 of those mistakes, your projects may unknowingly suffer from them. I do
 know that when I've tried to do COW in C++ (which relies on transitive
 const), there have been many bugs and it's been hard to get them flushed
 out.
Yup.
 When you talk about the "misapplication" of const, does that mean
 failure to understand what the specific API of a function is?
Most often, the problem I've seen is for const to not be consistently applied to class member functions or within the program itself. It's been a while so I'm actually fuzzy on what the exact problems were, but I remember modifying a bunch of unrelated code when I created a new member function in some class. If I had to guess I'd say this had something to do with the non-transitive nature of const in C++ and that the class contained pointers which were being used in questionable ways.
Doesn't something used in a questionable way sound like a bug that const helped you find, even if it was just pointing out something that should be clarified and better documented?
Sure. I'll definitely agree that the code was poorly written and not at all documented, even if it actually worked as-is. Sean
Apr 03 2008
parent Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 Essentially, const in C++ only works for
 trivial objects. (I'm not saying trivial objects are not useful, I'm
 just saying they are trivial.)
I think this statement is open to interpretation. From a theoretical perspective I agree. But with RAII objects managing references and the like, it's rare for even very complex objects to contain raw pointers, and such RAII objects can be made to fake transitivity of const by overloading their const and non-const methods appropriately.
For a new design, we shouldn't have to fake it. And faking it is complicated, which means that realistically, it isn't going to get done. People will just rely on convention.
Apr 03 2008
prev sibling next sibling parent "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Sean Kelly <sean invisibleduck.org> wrote:
 I know you like to talk about the unreliability of const in C++ because of
  const_cast and the like, but in my opinion those are theoretical objections
  because I've never encountered even a single use of const_cast in actual
  code.
One anecdote is not statistically significant.

The company I work for once wasted six weeks while six programmers tracked down a multithreading bug, an obscure race condition that brought down our server after some random period of time measured in days. We finally found and fixed it, but it turns out that particular bug could never have happened in D. Transitive const would have stopped it dead. And yes, I know, one anecdote is not statistically significant. I just thought I'd mention, there are other stories.

I have used const_cast, but it is rare. And that's kinda the point - the thing about const_cast is not that you're supposed to use it, it's that the other forms of cast won't accidentally cast away constancy. For example:

    class C {};
    const C c;
    C d = static_cast<C>(c); //ERROR

Trouble is, legacy casting still works:

    C e = (C)c; //OK

So the deal is, static_cast<T> is safer than <T> because it preserves const correctness (in C++ terms). const_cast<T> is only necessary for those rare cases where you actually need to change the constancy. At least - that was the theory. In practice, it never worked, because they forgot to deprecate (T).
Apr 02 2008
prev sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 03/04/2008, Janice Caron <caron800 googlemail.com> wrote:
  So the deal is, static_cast<T> is safer than <T>
...than (T)... Sorry.
Apr 02 2008
parent reply "Koroskin Denis" <2korden gmail.com> writes:
0) Do I understand correctly that, having one pure member function, a class should have _all_ its member functions pure?

1) What about random numbers? rand() cannot be pure, since 'rand() + rand() != 2*rand()'. However, functional languages do have rand(). And any function that relies on rand() cannot be pure either!

Maybe rand() should belong to some other kind of functions, which is not pure, but thread-safe. Thread safety is a great attribute that I would like to see on my functions. It's a great contract that gives benefits to compiler AND programmer.

2) Suppose you wrote some math code that operates on matrices:

class Matrix(int rows, int cols)
{
     private float[rows][cols] elements;

     pure float at(int row, int col)
     in
     {
         assert((0 <= row) && (row < rows));
         assert((0 <= col) && (col < cols));
     }
     body
     {
         return elements[row][col];
     }

     static if (cols == rows) {
         pure float determinant()
         {
             float result;
             /* do some calculus */
             return result;
         }
     }
}

template doSomeStuff(int rows, int cols)
{
     pure invariant(Matrix!(rows,cols)) doSomeStuff(invariant Matrix!(rows,cols) a, invariant Matrix!(rows,cols) b)
     {
         // do some calculus
         return result;
     }
}

It works, and you are very glad that your program is parallelized and uses the full power of your multi-processor CPU.

But now you want to make some optimizations to speed it up a little. You add some SSE instructions in assembly (that's OK in pure methods, I hope?). And then you take a step further, and it looks like you calculate the determinant for your Matrix every time, and it consumes quite some time. You move your determinant calculation to the class' ctor and just store the value. But now it turns out that performance degrades dramatically, because now you are forced to calculate the determinant for every Matrix, even temporary ones, during addition, subtraction etc.

A C++ programmer would add a mutable member here:

struct mutable(T)
{
     private T value;

     pure T get()
     {
         return value;
     }

     pure void set(T value)
     {
         mutable* m = cast(mutable*)this;   // yes, it's unsafe, but now the programmer
         m.value = value;                   // is responsible for this, not the compiler

         // if cast-away is not allowed, then asm would do the trick
     }
}

class Matrix(int rows, int cols)
{
     private float[rows][cols] elements;

     static if (rows == cols) {
         mutable!(float) _determinant = mutable!(float)(float.nan);  // nan == not calculated yet
     }

     pure float at(int row, int col)
     in
     {
         assert((0 <= row) && (row < rows));
         assert((0 <= col) && (col < cols));
     }
     body
     {
         return elements[row][col];
     }

     static if (cols == rows) {
         pure float calcDeterminant()
         {
             float result = 0;
             /* some implementation follows */
             return result;
         }

         pure float determinant()
         {
             if (isNaN(_determinant.get())) {   // note: comparing '== float.nan' would always be false
                 synchronized (this) {
                     if (isNaN(_determinant.get())) {
                         _determinant.set(calcDeterminant());
                     }
                 }
             }
             return _determinant.get();
         }
     }
}

This is a *transparent* optimization, and it's the programmer now who guarantees that his code makes no side effects. Yes, the binary representation of the class is changed, but its logical invariance is preserved. If the programmer fails to do it correctly, then it's his fault. By signing his code as pure he claims that the method is thread-safe, doesn't rely on other threads' execution order, that calls to it can be rearranged, and that given the same input it yields the same result.

You might say that this code smells, but it's correct. And it could look slightly better if mutable were not a user-defined template hack, but a language-level feature. You would just have to impose some restrictions on its usage (like: only private members can be mutable, and access to them can be achieved via a read-write lock only; see http://en.wikipedia.org/wiki/Readers-writers_problem).

IMO, mutable is a powerful optimization mechanism that should be used with *great* care.
Apr 03 2008
parent Jason House <jason.james.house gmail.com> writes:
Koroskin Denis Wrote:

 It works, you are very glad that your program is parallelized and uses  
 full power of your multi-processor CPU.
 But now you want to make some optimizations to speed it up a little. You  
 add some SSE instruction in assembly (it's ok in pure methods, I hope?).
 And then you take a step further and it looks like you calculate  
 determinant for your Matrix every time, and it consumes quite some time.  
 You move you determinant calculations to class' ctor and just store the  
 value. But now it turns out that performance degraded dramatically because  
 now you are forced to calculate determinant for every Matrix, even  
 temporary ones, during addition, subtraction etc.
 
 A C++ programmer would add mutable member here:
 
 This is a *transparent* optimization, and it's the programmer now who
 guarantees that his code makes no side effects. Yes, the binary representation
 of the class is changed, but its logical invariance is preserved. If the
 programmer fails to do it correctly, then it's his fault. By signing his
 code as pure he claims that the method is thread-safe, doesn't rely on  
 other threads' execution order, calls to it can be rearranged and given  
 the same input it yields the same result.
 
 You might say that this code smells, but it's correct. And it could look  
 slightly better if mutable was not a user-defined template hack, but a  
 language-level feature. You just should expose some restrictions on its  
 usage (like, only private members can be mutable, access to it can be  
 achieved via read-write lock only  
 http://en.wikipedia.org/wiki/Readers-writers_problem).
 
 IMO, mutable is a powerful optimization mechanism that should be used
 with *great* care.
I think the future of D will not require the mutable keyword as you describe.

    class matrix
    {
        pure int determinant() { ... }
        ...
    }

By simply making the determinant function pure, the compiler is free to do optimization/caching of the result. I haven't been in the habit of giving compilers that much trust...
Apr 03 2008
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Walter Bright" wrote
 Bill Baxter wrote:
 If the ultimate goal is support for multiprogramming, then shouldn't the 
 detailed design work should start *there*, with how to do great 
 multiprogramming?  Rather than with const.

 Not saying that you guys have done this, but I know from my own 
 experience doing research that it's easy to get hung up trying to solve a 
 tough but solvable problem that seems relevant for getting from A to B, 
 only to realize in the end that it was not as relevant as I thought.
I think it is fairly obvious that transitive invariant (and const) is key to multiprogramming. The transitive closure of the state of everything reachable through an object is part of the state of that object, and all the troubles with multiprogramming stem from the state of an object changing asynchronously.
I think it's not so fairly obvious but true that transitive invariant and const are equivalent to logical invariant or const. Please re-read my original example for the proof. This puts a big dent in your argument that logical const doesn't cut it for multiprogramming, because what we have now is a logical const system. -Steve
Apr 02 2008
prev sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 01/04/2008, Craig Black <craigblack2 cox.net> wrote:
 Walter/Andrei are under the impression that transitive const is necessary
  for multiprogramming, but they have not presented a detailed explanation
  about why it is necessary.
It hardly needs a detailed explanation. In fact it's trivial to understand. Watch:

    // in C++
    void f(const C &c)
    {
        c.some.deeply.nested.but.reachable.var++;
    }

In a multithreaded program, a thread switch may occur between the read and the write of the read/write cycle that is ++. If that happens, you're screwed.

    // in D
    void f(const C c)
    {
        c.some.deeply.nested.but.reachable.var++; /*ERROR*/
    }

In D, such dodgy code just won't compile.
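The hazard with ++ can be sketched concretely in C++ (a hypothetical illustration, not code from the thread): the increment is a separate read and write, and only an atomic operation makes that pair indivisible across threads.

```cpp
#include <atomic>
#include <thread>

// Sketch: var++ on a plain int is a read-modify-write; a thread switch
// between the read and the write loses updates. std::atomic makes the
// same increment a single indivisible operation.
int increment_from_two_threads() {
    std::atomic<int> var{0};
    auto work = [&var] {
        for (int i = 0; i < 100000; ++i)
            var.fetch_add(1);   // atomic; plain 'var++' on int would race
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return var.load();
}
```

With a plain `int` and `var++` the returned count would usually fall short of 200000; the atomic version always reaches it.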
  It sounds like a more detailed explanation of
  the merits of transitive const from Walter/Andrei would help here.
Was that not detailed enough for you? It's not rocket science.
  One idea that I had recently was that the const keyword could provide
  logical const
Logical const is mutually incompatible with transitive const. If an object is transitively const, then it means "I promise not to modify anything reachable through this pointer", whereas if an object is logically const, it means that some bits exist which are reachable through the pointer, but which you are allowed to modify. The two promises:

(1) "I promise not to modify anything reachable through this pointer" (transitive const)

(2) "I promise not to modify anything reachable through this pointer, except - oh wait! I might modify these bits here" (logical const)

cannot both be simultaneously true. Multiprogramming needs transitive const. It's that simple.
Apr 02 2008
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Janice Caron wrote:
 On 01/04/2008, Craig Black <craigblack2 cox.net> wrote:
 Walter/Andrei are under the impression that transitive const is necessary
  for multiprogramming, but they have not presented a detailed explanation
  about why it is necessary.
It hardly needs a detailed explanation. In fact it's trivial to understand. Watch:

    // in C++
    void f(const C &c)
    {
        c.some.deeply.nested.but.reachable.var++;
    }

In a multithreaded program, a thread switch may occur between the read and the write of the read/write cycle that is ++. If that happens, you're screwed.

    // in D
    void f(const C c)
    {
        c.some.deeply.nested.but.reachable.var++; /*ERROR*/
    }

In D, such dodgy code just won't compile.
But this dodgy code will:

    void f(const C c) {
        some_global++;
    }

So const isn't a guarantee of thread safety. And watch this:

    void f(const C c) {
        assert( c.we_like_const == c.we_like_const,
                "Oops! Some other thread must have changed c "
                "while we weren't looking!");
    }

Might just assert. Const gives no guarantee that values won't change out from under you when used in a multithreaded environment. And watch this:

    void f(C c) {
        int x = c.get_some_value();
    }

Hey! That probably *is* thread safe, and it didn't even use const at all.

So it seems "const is necessary for multiprogramming" is not quite the whole truth, which was all the original poster was talking about. 'Course that may be a strawman. I don't know that Walter has ever said precisely that. He usually phrases it more like "it will be important for what's coming". I can see invariant having some value in conjunction with pure, but I also suspect that invariant, being so strict, is also something that will not get used a lot outside of simple constants and code carefully designed for running in parallel. Like what Don's said.

--bb
Apr 02 2008
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Bill Baxter <dnewsgroup billbaxter.com> wrote:
  But this dodgy code will

  void f(const C c) {
     some_global++;
  }

  So const isn't a guarantee of thread safety.
Correct. And nobody's saying it is. You, me, Walter, Andrei - we all agree on this point, so it needs no further discussion.

This is simple logic. A => B isn't the same thing as B => A. Unless of course you would argue that everything with four legs is a dog.

(1) const does not guarantee thread safety
(2) thread safety requires const guarantees

Both of these statements are simultaneously true.
  So it seems "const is necessary for multiprogramming" is not quite the
 whole truth, which was all the original poster was talking about. 'Course
 that may be a strawman.  I don't know that Walter has ever said precisely
 that.
And that's the nub of the matter. Folk that have thought the whole pure/multiprocessor thing through are saying "A => B", and detractors are saying "Wait a minute, B !=> A". It's sort-of a strawman, but only in the sense of misunderstanding.
Apr 02 2008
parent reply Christian Kamm <kamm.incasoftware shift-at-left-and-remove-this.de> writes:
Janice Caron wrote:
 (1) const does not guarantee thread safety
 (2) thread safety requires const guarantees
Steven is arguing that thread safety does not require transitive const guarantees. It was already established that pure functions will not have access to global variables. If you add the rule that pure functions cannot access the mutable part of a const object, const and pure work together to provide thread safety just as well as with fully transitive const.

That's where the examples comparing transitive const with global variables to logical const with mutable members originated:

    int x;
    class C
    {
        void foo() const { x++; } // const but can't be pure
        void bar() pure { /* automatically safe */ }
    }

versus

    class C
    {
        mutable int x;
        void foo() const { x++; } // const but can't be pure
        void bar() pure
        { /* can't do anything in here you couldn't have done above */ }
    }
Apr 02 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Christian Kamm
<kamm.incasoftware shift-at-left-and-remove-this.de> wrote:
 Steven is arguing that thread safety does not require transitive const
  guarantees.
 <snip>
Thank you. That was a perfect explanation. I get it now.
  class C
  {
   mutable int x;
   void foo() const { x++; } // const but can't be pure
   void bar() pure
   { /* can't do anything in here you couldn't have done above */ }
  }
But you could equally well write it like this:

    class C
    {
        int x;
        void foo() { x++; }
        void bar() pure
        { /* can't do anything in here you couldn't have done above */ }
    }

If it can change, then don't call it const. Seems a simple enough rule to me.
Apr 02 2008
parent reply Christian Kamm <kamm.incasoftware shift-at-left-and-remove-this.de> writes:
  class C
  {
   mutable int x;
   void foo() const { x++; } // const but can't be pure
   void bar() pure
   { /* can't do anything in here you couldn't have done above */ }
  }
Janice Caron wrote:
 But you could equally well write it like this:
 
     class C
     {
         int x;
         void foo() { x++; }
         void bar() pure
         { /* can't do anything in here you couldn't have done above */ }
     }
 
 If it can change, then don't call it const. Seems a simple enough rule to
 me.
Yes, but now const's transitivity isn't a necessity for 'future multiprogramming features' anymore. Instead, we're discussing a const scheme where mutable members could be allowed, but aren't - for simplicity, consistency or some other reason. I have not used D2 yet, so I'm not sure it is as restricting as some people suggest. From the outside transitive const certainly looks simpler and more useful than C++'s variant. However, if there are several genuine cases where escapes from transitivity would be advantageous, allowing exceptions should be considered.
Apr 02 2008
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Christian Kamm" wrote
  class C
  {
   mutable int x;
   void foo() const { x++; } // const but can't be pure
   void bar() pure
   { /* can't do anything in here you couldn't have done above */ }
  }
Janice Caron wrote:
 But you could equally well write it like this:

     class C
     {
         int x;
         void foo() { x++; }
         void bar() pure
         { /* can't do anything in here you couldn't have done above */ }
     }

 If it can change, then don't call it const. Seems a simple enough rule to
 me.
Yes, but now const's transitivity isn't a necessity for 'future multiprogramming features' anymore. Instead, we're discussing a const scheme where mutable members could be allowed, but aren't - for simplicity, consistency or some other reason. I have not used D2 yet, so I'm not sure it is as restricting as some people suggest. From the outside transitive const certainly looks simpler and more useful than C++'s variant. However, if there are several genuine cases where escapes from transitivity would be advantageous, allowing exceptions should be considered.
The point of this whole thread is that exceptions ARE possible now, so why must they be difficult? -Steve
Apr 02 2008
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 So const isn't a guarantee of thread safety.
You're right, it isn't sufficient. But it is necessary.
 Hey! That probably *is* thread safe, and it didn't even use const at all.
You can certainly write thread-safe code that doesn't use const at all. You just have to be careful not to change it - i.e. the checking will be done by you, the programmer, rather than the compiler.
 I can see invariant having some value in conjunction with pure, but I 
 also suspect that invariant, being so strict, is also something that 
 will not get used a lot outside of simple constants and code carefully 
 designed for running in parallel.  Like what Don's said.
Invariant strings have turned out to be a resounding success (and I was very, very skeptical of that initially). I suspect that more and more programs will gravitate towards using invariant as people get more used to the idea. I know my programs will.
Apr 02 2008
parent reply Lars Ivar Igesund <larsivar igesund.net> writes:
Walter Bright wrote:

 Bill Baxter wrote:
 So const isn't a guarantee of thread safety.
You're right, it isn't sufficient. But it is necessary.
 Hey! That probably *is* thread safe, and it didn't even use const at all.
You can certainly write thread-safe code that doesn't use const at all. You just have to be careful not to change it - i.e. the checking will be done by you, the programmer, rather than the compiler.
 I can see invariant having some value in conjunction with pure, but I
 also suspect that invariant, being so strict, is also something that
 will not get used a lot outside of simple constants and code carefully
 designed for running in parallel.  Like what Don's said.
Invariant strings have turned out to be a resounding success (and I was very, very skeptical of that initially). I suspect that more and more programs will gravitate towards using invariant as people get more used to the idea. I know my programs will.
Do you have any references to this resounding success? -- Lars Ivar Igesund blog at http://larsivi.net DSource, #d.tango & #D: larsivi Dancing the Tango
Apr 03 2008
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Lars Ivar Igesund wrote:
 Walter Bright wrote:
 Invariant strings have turned out to be a resounding success (and I was
 very, very skeptical of that initially). I suspect that more and more
 programs will gravitate towards using invariant as people get more used
 to the idea. I know my programs will.
Do you have any references to this resounding success?
Its success in other languages, for one, and not finding any problems with it when using it in D, for two.
Apr 03 2008
prev sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Lars Ivar Igesund wrote:
 Walter Bright wrote:
 Invariant strings have turned out to be a resounding success (and I was
 very, very skeptical of that initially). I suspect that more and more
 programs will gravitate towards using invariant as people get more used
 to the idea. I know my programs will.
Do you have any references to this resounding success?
I think that what Walter said ("Invariant strings have turned out to be a resounding success") is quite obvious to be true. But note that he said "invariant strings", and not "invariant" by itself. In other words, what he said does not imply that D's invariant/const system has been a resounding success. -- Bruno Medeiros - Software Developer, MSc. in CS/E graduate http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
Bruno Medeiros wrote:
 Lars Ivar Igesund wrote:
 Walter Bright wrote:
 Invariant strings have turned out to be a resounding success (and I was
 very, very skeptical of that initially). I suspect that more and more
 programs will gravitate towards using invariant as people get more used
 to the idea. I know my programs will.
Do you have any references to this resounding success?
I think that what Walter said ("Invariant strings have turned out to be a resounding success") is quite obvious to be true.
A success in what way? And what do they achieve that could not have been achieved via plain old const strings? Sean
Apr 27 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 27/04/2008, Sean Kelly <sean invisibleduck.org> wrote:
 what do they achieve that could not have been
 achieved via plain old const strings?
In a word: (OK - in three words): Copy On Write.

For example, one could write a function

    string escape(string s);

which escaped certain characters (by preceding them with '\' or whatever). Because s is an array of invariant chars, if nothing needs to be escaped, the function is able to return s. This would not be possible with "plain old const strings". To illustrate, suppose the declaration of the function had been

    const(char)[] escape(const(char)[] s);

Then:

    char[] s = "fixed".dup;
    auto t = escape(s);
    assert(t == "fixed");
    s[0] = 'm';
    assert(t == "fixed"); /* FAILS */

With invariant, it just works.
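A rough C++ analogy of the same copy-on-write trick, using shared ownership of immutable data to stand in for D's invariant strings (the names and the escaping rule are hypothetical):

```cpp
#include <memory>
#include <string>

// Sketch: if the string data is immutable and shared, escape() can hand
// back the very same buffer when nothing needs escaping - the caller
// cannot mutate it out from under us.
using istring = std::shared_ptr<const std::string>;

istring escape(const istring& s) {
    if (s->find('\\') == std::string::npos &&
        s->find('"') == std::string::npos)
        return s;                     // nothing to do: no copy at all
    std::string out;
    for (char c : *s) {
        if (c == '\\' || c == '"')
            out.push_back('\\');      // precede with the escape character
        out.push_back(c);
    }
    return std::make_shared<const std::string>(std::move(out));
}
```

When nothing needs escaping, the returned pointer compares equal to the input: the buffer was shared, not copied.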
Apr 27 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
Janice Caron wrote:
 On 27/04/2008, Sean Kelly <sean invisibleduck.org> wrote:
 what do they achieve that could not have been
 achieved via plain old const strings?
In a word: (OK - in three words): Copy On Write. For example, one could write a function

    string escape(string s);

which escaped certain characters (by preceding them with '\' or whatever). Because s is an array of invariant chars, if nothing needs to be escaped, the function is able to return s.
So this is predicated on the idea that the optimal strategy is to assume that library functions will not actually need to make any changes the majority of the time, and that they do COW internally. Fair enough. I agree that this makes it a clear win for Phobos, which is designed around this assumption. Sean
Apr 27 2008
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 So this is predicated on the idea that the optimal strategy is to assume 
 that library functions will not actually need to make any changes the 
 majority of the time, and that they do COW internally.  Fair enough.  I 
 agree that this makes it a clear win for Phobos, which is designed 
 around this assumption.
It turns out to be a very natural way of writing string code. And yes, I was surprised to discover that in-place string modification turned out to be rare. Most operations are slicing/concatenating, which work just fine on invariant strings.
Apr 27 2008
parent reply Sascha Katzner <sorry.no spam.invalid> writes:
Walter Bright wrote:
 Most operations are slicing/concatenating, which work just fine on
 invariant strings.
Correct me if I'm wrong, but if you want to concatenate two invariant strings, you have to allocate a new buffer. Isn't that a possible speed penalty compared to regular strings? With regular strings you can allocate one of the two buffers big enough beforehand and save this allocation; with invariant strings you don't have this option.

...or to rephrase my question, wouldn't a StringBuilder class without invariant strings be much faster?

LLAP,
Sascha
Apr 27 2008
next sibling parent reply Lars Ivar Igesund <larsivar igesund.net> writes:
Sascha Katzner wrote:

 Walter Bright wrote:
 Most operations are slicing/concatenating, which work just fine on
 invariant strings.
Correct me if I'm wrong, but if you want to concatenate two invariant strings, you have to allocate a new buffer. Isn't that a possible speed penalty compared to regular strings?
Indeed it is, and this is the main reason for Tango being so damn fast at text processing. -- Lars Ivar Igesund blog at http://larsivi.net DSource, #d.tango & #D: larsivi Dancing the Tango
Apr 27 2008
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Lars Ivar Igesund wrote:
 Sascha Katzner wrote:
 
 Walter Bright wrote:
 Most operations are slicing/concatenating, which work just fine on
 invariant strings.
Correct me if I'm wrong, but if you want to concatenate two invariant strings, you have to allocate a new buffer. Isn't that a possible speed penalty compared to regular strings?
Indeed it is, and this is the main reason for Tango being so damn fast at text processing.
But note that Tango also is suffering from bugs now that are due to that design choice (i.e. the choice to require pre-allocation for almost all text processing functions). http://www.dsource.org/projects/tango/ticket/787 http://www.dsource.org/projects/tango/ticket/978 --bb
Apr 27 2008
parent reply Lars Ivar Igesund <larsivar igesund.net> writes:
Bill Baxter wrote:

 Lars Ivar Igesund wrote:
 Sascha Katzner wrote:
 
 Walter Bright wrote:
 Most operations are slicing/concatenating, which work just fine on
 invariant strings.
Correct me if I'm wrong, but if you want to concatenate two invariant strings, you have to allocate a new buffer. Isn't that a possible speed penalty compared to regular strings?
Indeed it is, and this is the main reason for Tango being so damn fast at text processing.
But note that Tango also is suffering from bugs now that are due to that design choice (i.e. the choice to require pre-allocation for almost all text processing functions).
Only one of those are related to the above, the second is a different issue (about specification). -- Lars Ivar Igesund blog at http://larsivi.net DSource, #d.tango & #D: larsivi Dancing the Tango
Apr 28 2008
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Lars Ivar Igesund wrote:
 Bill Baxter wrote:
 
 Lars Ivar Igesund wrote:
 Sascha Katzner wrote:

 Walter Bright wrote:
 Most operations are slicing/concatenating, which work just fine on
 invariant strings.
Correct me if I'm wrong, but if you want to concatenate two invariant strings, you have to allocate a new buffer. Isn't that a possible speed penalty compared to regular strings?
Indeed it is, and this is the main reason for Tango being so damn fast at text processing.
But note that Tango also is suffering from bugs now that are due to that design choice (i.e. the choice to require pre-allocation for almost all text processing functions).
Only one of those are related to the above, the second is a different issue (about specification).
Ah. Ok. -bb
Apr 28 2008
prev sibling next sibling parent BCS <ao pathlink.com> writes:
Reply to Sascha,

 Correct me if I'm wrong, but if you want to concatenate two invariant
 strings, you have to allocate a new buffer.
yes, but the invariant part has nothing to do with it. Concatenation always copies.
 LLAP,
 Sascha
Apr 27 2008
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Sascha Katzner wrote:
 Correct me if I'm wrong, but if you want to concatenate two invariant 
 strings, you have to allocate a new buffer. Isn't that a possible speed 
 penalty compared to regular strings? Since if you use regular strings 
 you can allocate one of the two string buffers beforehand big enough and 
 save this allocation, with invariant strings you don't have this option.
 
 ...or to rephrase my question, wouldn't a StringBuilder Class without 
 invariant strings be much faster?
D doesn't *prevent* you from building a string using a mutable character array. The cases where it is faster to do so, however, tend to be modularizable. When the function is done, the result can be cast to an invariant string.
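That pattern can be sketched in C++ (hypothetical names): all mutation stays inside one builder function, which pre-allocates its working buffer, and the caller only ever sees an immutable result.

```cpp
#include <memory>
#include <string>

// Sketch of the 'modularizable' pattern: the string is built in a mutable
// buffer, with one up-front allocation, then published as immutable -
// the C++ analogue of casting a char[] to an invariant string when done.
std::shared_ptr<const std::string> build_greeting(int n) {
    std::string buf;              // mutable while we build
    buf.reserve(6 * n);           // pre-allocate: no repeated reallocations
    for (int i = 0; i < n; ++i)
        buf += "hello ";          // each piece is 6 chars long
    // "cast to invariant": hand ownership to an immutable view
    return std::make_shared<const std::string>(std::move(buf));
}
```

Callers hold only a `shared_ptr<const std::string>`, so the fast mutable phase is invisible outside the function.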
Apr 27 2008
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Sean Kelly wrote:
 Janice Caron wrote:
 On 27/04/2008, Sean Kelly <sean invisibleduck.org> wrote:
 what do they achieve that could not have been
 achieved via plain old const strings?
In a word: (OK - in three words): Copy On Write. For example, one could write a function

    string escape(string s);

which escaped certain characters (by preceding them with '\' or whatever). Because s is an array of invariant chars, if nothing needs to be escaped, the function is able to return s.
So this is predicated on the idea that the optimal strategy is to assume that library functions will not actually need to make any changes the majority of the time, and that they do COW internally. Fair enough. I agree that this makes it a clear win for Phobos, which is designed around this assumption. Sean
And like Walter mentioned before, most modern high-level languages have invariant strings - if I'm not mistaken, Python and Ruby as well. -- Bruno Medeiros - Software Developer, MSc. in CS/E graduate http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Apr 27 2008
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote in message
 It hardly needs a detailed explanation. In fact it's trivial to
 understand. Watch:

    // in C++
    void f(const C &c)
    {
        c.some.deeply.nested.but.reachable.var++;
    }

 In a multithreaded program, a thread switch may occur between the read
 and the write of the read/write cycle that is ++. If that happens,
 you're screwed.

    // in D
    void f(const C c)
    {
        c.some.deeply.nested.but.reachable.var++; /*ERROR*/
    }

 In D, such dodgy code just won't compile.
    int x;
    struct Var
    {
        const int opPostInc() { return x++; }
        // when it is supported
        // int opImplicitCast() { return x; }
    }
    struct Reachable { Var var; }
    struct But { Reachable reachable; }
    struct Nested { But but; }
    struct Deeply { Nested nested; }
    struct Some { Deeply deeply; }
    class C { Some some; }

    void f(const C c)
    {
        c.some.deeply.nested.but.reachable.var++; // OK (compiled with D2)
    }

Notice that the Var struct is semantically equivalent to an int data member (or will be, at least, when opImplicitCast works). The point is that logical const is still possible, even with transitive const, because the global namespace is not const. There is no way around this except with pure functions. Which I think you agree with.

HOWEVER, the point that everyone is arguing is why does logical const make pure functions or functional programming impossible? Clearly, it is ALREADY POSSIBLE to have logical const, and clearly, pure functions are possible! I'm saying transitive const is mathematically equivalent to logical const ALREADY. Please try and grasp that concept.

-Steve
Apr 02 2008
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
  HOWEVER, the point that everyone is arguing is why does logical const make
  pure functions or functional programming impossible?
I thought I'd covered that, but no matter. I'll try again. Logical const is mutually incompatible with transitive const. No object can be simultaneously both fully transitive and have reachable mutable fields. It's just impossible. (Here I define "reachable" as meaning " Clearly, it is ALREADY
  POSSIBLE to have logical const, and clearly, pure functions are possible!
  I'm saying transitive const is mathematically equivalent to logical const
  ALREADY.  Please try and grasp that concept.

  -Steve
Apr 02 2008
parent reply guslay <guslay gmail.com> writes:
Janice Caron Wrote:
 
 Logical const in mutually incompatible with transitive const. No
 object can be simultaneously both fully transitive and have reachable
 mutable fields. It's just impossible. (Here I define "reachable" as
 meaning "
 
What's logical const anyway? The bitwise/mutable definition is skewed in the context of D.

Saying that this is logical const:

    class C
    {
        mutable int a;
        int f() const { ++a; }
    }

while this isn't:

    class C
    {
        static int a;
        int f() const { ++a; }
    }

is kind of a reach.

Propositions are dismissed because they exhibit logical const behavior, like it's some kind of disease. Those propositions merely point out that D const is just logical const anyway (even if it pretends not to be).

At the function level, const/invariant is nothing more than an abstraction. It just means that the method will run in some "protected mode". Some data reachable through a const function are immutable, some are not. Even with the transitive rule. Even without mutable members.

At the data level, const/invariant is enforced (bitwise const).

The problem with C++ const is not that it is logical. The difference is that C++ const is basically just an annotation, while D has some potential for enforcement (data are bitwise const).

D
- Functions => logical const.
- Data => bitwise const.

C++
- Functions => logical const.
- Data => const is just an annotation.
Apr 02 2008
next sibling parent "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, guslay <guslay gmail.com> wrote:
 What's logical const anyway?
By my definition, a C++ class is logically const, if and only if at least one of its member variables is declared using the keyword "mutable". If other people use the phrase to mean something different, that probably explains much of the confusion.
Apr 02 2008
prev sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, guslay <guslay gmail.com> wrote:
  Saying that is logical const :
 <snip>
  while this isn't:
 <snip>
  is kind of a reach.
From where I stand, it's true by definition. But I'll agree to use your definition of logical const for the rest of this post (but don't expect me to use it thereafter). So, by your definition a class which contains member functions which are declared const and which modify global variables, is logically const. Hmm... I called that "globby" in another post.
 Those propositions merely point out that D const is just logical const anyway
(even if it pretends not to be).
D doesn't pretend not to support classes which contain member functions that are declared const and which modify global variables. By your definition, D does not pretend not to be logically const. (Now you see why it's so confusing to have different definitions).
  At the function level, const/invariant is nothing more that an abstraction.
I said that. No disagreement there. All it does is specify the constancy of the hidden function parameter known as "this".
  At the data level, const/invariant is enforced (bitwise const).
Again, I said that. No disagreement there. So apart from a little bit of terminology, we're agreeing.
Apr 02 2008
parent guslay <guslay gmail.com> writes:
Janice Caron Wrote:

 On 02/04/2008, guslay <guslay gmail.com> wrote:
 So apart from a little bit of terminology, we're agreeing.
Yes, it seems that we agree on a lot of things, yet reach different conclusions. I'm not sure yet how much of it is still due to misunderstanding, or just different expectations about the language.

I still think that some valid points have been raised by this thread that have not been answered. I understand that language design is about compromise, I just don't see what is the negative counterpart of allowing mutable. And I certainly don't see how claims such as (from the const article):

"not having mutable facilitates code reviews"

and

"The problem with logical const is that const is no longer transitive. Not being transitive means there is the potential for threading race conditions..."

can be considered something other than gross overstatements in the light of our discussion.
Apr 02 2008
prev sibling next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
Bugger! That last post got sent before I'd finished writing it.

     c.some.deeply.nested.but.reachable.var++; // OK (compiled with D2)
Overriding "opPostInc() const" doesn't prove anything. If you want to make a serious point, just call your function f or foo or something. Using operator overloads to disguise what's going on doesn't make /anything/ clearer.
  The point is that logical const is still possible, even with transitive
  const, because the global namespace is not const.
It seems you and I disagree about the meaning of the phrase "logical const". The ability to manipulate global variables does not constitute logical constancy in my book.
  HOWEVER, the point that everyone is arguing is why does logical const make
  pure functions or functional programming impossible?  Clearly, it is ALREADY
  POSSIBLE to have logical const
We are suffering from a communications difficulty caused by you and I using the same phrase ("logical const") to mean entirely different things. Unless we can agree on a common terminology, we're not going to be able to get anywhere with this discussion.
Apr 02 2008
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 Bugger! That last post got sent before I'd finished writing it.

     c.some.deeply.nested.but.reachable.var++; // OK (compiled with D2)
Overriding "opPostInc() const" doesn't prove anything. If you want to make a serious point, just call you function f or foo or something. Using operator overloads to disguise what's going on doesn't make /anything/ clearer.
You are missing the point. x is part of the class state, because it is accessible through var. It is a mutable part of the const class state because it resides in global (non-const) space. You have to think outside the box.
  The point is that logical const is still possible, even with transitive
  const, because the global namespace is not const.
It seems you and I disagree about the meaning of the phrase "logical const". The ability to manipulate global variables does not constitute logical constancy in my book.
The ability to use global mutable variables as part of the class state is the point.
  HOWEVER, the point that everyone is arguing is why does logical const 
 make
  pure functions or functional programming impossible?  Clearly, it is 
 ALREADY
  POSSIBLE to have logical const
We are suffering from a communications difficulty caused by you and I using the same phrase ("logical const") to mean entirely different things. Unless we can agree on a common terminology, we're not going to be able to get anywhere with this discussion.
The communications gap is not in that I do not understand what logical const is. The communications gap is that you are not understanding that what I have posted is semantically equivalent to logical const. What I am showing is that transitive const is the same as logical const because you can use the always mutable global state as part of the class state. What this translates to is that logical const is sufficient for multi-programming as long as the correct rules are chosen. What I am asking is, because semantically logical const is possible, why is actual logical const not allowed? -Steve
Apr 02 2008
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
  >>  Clearly, it is ALREADY POSSIBLE to have logical const
Clearly it isn't.
  We are suffering from a communications difficulty caused by you and I
  using the same phrase ("logical const") to mean entirely different
  things. Unless we can agree on a common terminology, we're not going
  to be able to get anywhere with this discussion.

  The communications gap is not in that I do not understand what logical
  const is. The communications gap is that you are not understanding that
  what I have posted is semantically equivalent to logical const.
Look, how about this. How about if, instead of calling such classes
"logically const", you instead describe them as "classes whose methods
modify global data"? (Or "globby" for short.) That would keep me happy.

In return, I'll agree to call classes with mutable member variables
"classes with mutable member variables" ("muty" for short).

Hopefully, we can then have a conversation without getting confused, or
arguing over what words mean.
  What I am showing
  is that transitive const is the same as logical const because you can use
  the always mutable global state as part of the class state.  What this
  translates to is that logical const is sufficient for multi-programming as
  long as the correct rules are chosen.
I just don't get what you're saying. It doesn't make sense to me.
  What I am asking is, because semantically logical const is possible
I don't even know what you mean by that.
Apr 02 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 02/04/2008, Steven Schveighoffer wrote:
  We are suffering from a communications difficulty caused by you and I
  using the same phrase ("logical const") to mean entirely different
  things. Unless we can agree on a common terminology, we're not going
  to be able to get anywhere with this discussion.

  The communications gap is not in that I do not understand what logical
  const is. The communications gap is that you are not understanding that
  what I have posted is semantically equivalent to logical const.
Look, how about this. How about if, instead of calling such classes "logically const", you instead describe them as "classes whose methods modify global data"? (or "globby" for short). That would keep me happy. In return, I'll agree to call classes with mutable member variables "classes with mutable member variables". ("muty" for short). Hopefully, we can then have a conversation without getting confused, or arguing over what words mean.
Sure.  Now I'll restate what I have been stating in these terms: globby
classes are EQUIVALENT to logically const classes (or "muty" classes, as you
call them).  Since they are equivalent, and we can have globby classes today
with transitive const, what is the problem with allowing muty classes?  How
would this break the const system?

-Steve
Apr 02 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Sure.  Now I'll restate what I have been stating in these terms:  globby
  classes are EQUIVALENT to logically const classes (or "muty" classes as you
  call them).  Since they are equivalent, and we can have globby classes today
  with transitive const, so what is the problem with allowing muty classes?
  How would this break the const system?
Great! I understood. (I disagree, but I understood).

OK, in what sense are globby classes equivalent to muty classes? Because,
you see, I don't think they are.

When you modify a global variable, you are modifying something /else/,
something other than the class. In no way can this be said to be violating
transitive const. But when you modify a mutable variable in C++ (and I
have to use C++ as my example, because it's not legal in D), then you
/have/ violated transitive const.

That difference makes them seem not equivalent to me.
Apr 02 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 02/04/2008, Steven Schveighoffer wrote:
  Sure.  Now I'll restate what I have been stating in these terms:  globby
  classes are EQUIVALENT to logically const classes (or "muty" classes as you
  call them).  Since they are equivalent, and we can have globby classes today
  with transitive const, so what is the problem with allowing muty classes?
  How would this break the const system?
Great! I understood. (I disagree, but I understood). OK, in what sense are globby classes equivalent to muty classes? Because, you see, I don't think they are. When you modify a global variable, you are modifying something /else/, something other than the class. In no way can this be said to be violating transitive const. But when you modify a mutable variable in C++ (and I have to use C++ as my example, because it's not legal in D), then you /have/ violated transitive const. That difference makes them seem not equivalent to me.
globby classes are equivalent to muty classes because in both cases I am
storing information outside the const scope.  In globby classes, I'm storing
it in a global AA that has an element for each instance of the class that's
created.  In muty classes, I'm storing it in a member variable with the
class.  In all cases, it's not considered to be a state of the class; it's
just that for muty classes, I'm storing it with the class (and reaping the
benefits of not having to manage a globally stored AA).

I can prove that muty classes are equivalent to globby classes:

class muty
{
   mutable X x;
}

is equivalent to

class globby
{
  X x() const {...}
  X x(X newvalue) const {...}
}

Assume the ellipses are implementations that get and set a globally (or
statically) stored x.  Please don't point out that I can't do the same
operations on x in the globby version because it is not a data member; this
is a limitation of the D language, and I can always make functions that do
the operation needed.  The essential proof is that I can re-assign x.

There is no semantic difference.  Both have an x member that is mutable when
the instance is const.  The globby version can be supported with today's
const regime and compiler, so why not also allow the muty version?

As a challenge, come up with a muty type that cannot be implemented as a
globby type in the context of constancy/invariance.

-Steve
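For concreteness, the elided getter/setter bodies could be filled in along these lines. This is only a sketch in the D2-era syntax of this thread: the static AA name `_storage`, the casts, and the lifetime handling are illustrative, not a definitive implementation.

```d
class globby
{
    // per-instance "mutable" storage, kept outside the object itself
    private static X[globby] _storage;

    this()  { _storage[this] = X.init; }
    ~this() { _storage.remove(this); }

    // getter: legal in a const method because _storage is static state,
    // reached through the (never-const) global namespace, not through `this`
    X x() const { return _storage[cast(globby)this]; }

    // setter: likewise only writes static state
    X x(X newvalue) const { return _storage[cast(globby)this] = newvalue; }
}
```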
Apr 02 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 globby classes are equivalent to muty classes because in both cases I am
  storing information outside the const scope.  In globby classes, I'm storing
  it in a global AA that has an element for each instance of the class that's
  created.  In muty classes, I'm storing it in a member variable with the
  class.  In all cases, it's not considered to be a state of the class, it's
  just that for muty classes, I'm storing it with the class (and reaping the
  benefits of not having to manage a globally stored AA).

  I can prove that muty classes are equivalent to globby classes:

  class muty
  {
     mutable X x;
  }

  is equivalent to
  class globby
  {
    X x() const {...}
    X x(X newvalue) const {...}
  }
So you're mimicking mutable fields with a global AA. Gotcha.

There are several problems with this, including:

(1) the global variable is visible to at least the entire module, so
something else might modify it when you're not looking.

(2) are we really sure that modifying an AA is an atomic operation? I'm not.

It's also worth mentioning that you won't be able to modify global
variables in a pure function, so the ability to mimic mutable member
variables is lost at that point.
Apr 02 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 02/04/2008, Steven Schveighoffer wrote:
  globby classes are equivalent to muty classes because in both cases I am
  storing information outside the const scope.  In globby classes, I'm storing
  it in a global AA that has an element for each instance of the class that's
  created.  In muty classes, I'm storing it in a member variable with the
  class.  In all cases, it's not considered to be a state of the class, it's
  just that for muty classes, I'm storing it with the class (and reaping the
  benefits of not having to manage a globally stored AA).

  I can prove that muty classes are equivalent to globby classes:

  class muty
  {
     mutable X x;
  }

  is equivalent to
  class globby
  {
    X x() const {...}
    X x(X newvalue) const {...}
  }
So you're mimicking mutable fields with a global AA. Gotcha.

There are several problems with this, including:

(1) the global variable is visible to at least the entire module, so
something else might modify it when you're not looking.
Who cares? I am in charge of my module, it's not possible for another author to change the AA in a way that I do not define. If anything, this is an argument FOR logical const so I can have the compiler help me prevent this :)
 (2) are we really sure that modifying an AA is an atomic operation? I'm not.
I am really sure that modifying an AA is not an atomic operation, but that has no bearing on the proof. Setting x in the mutable version is also not atomic. They are still equivalent.
 It's also worth mentioning that you won't be able to modify global
 variables in a pure function, so the ability to mimic mutable member
 variables is lost at that point.
Again, if logical const were allowed, pure functions would not be able to
modify the mutable portions of classes, so this is not a difference between
muty and globby.

I think I may not have explained my opinion correctly on pure functions.  I
believe transitive invariance is required for enforceable pure functions.
In order to enforce that, the compiler should be able to tell what is
supposed to be invariant and what is not.  Having logical const be a
possibility does not prevent this, because the compiler can just force the
pure function not to use the mutable portions of the class/struct.  If
mutable member variables were allowed, they would have the same restrictions
for pure functions as any other external mutable variable.

-Steve
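A sketch of the rule being proposed here, written with the hypothetical `mutable` keyword from this thread (none of this is legal D; it only illustrates what the compiler would accept and reject inside a pure method):

```d
class C
{
    mutable int cache;  // hypothetical: excluded from the const/invariant state
    int value;

    pure int g() invariant
    {
        return value;      // OK: reads only the invariant portion
        // return cache;   // would be rejected: pure code may not touch the
                           // mutable portion, just as it may not touch a
                           // mutable global
    }
}
```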
Apr 02 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
  > (2) are we really sure that modifying an AA is an atomic operation? I'm
  > not.

 I am really sure that modifying an AA is not an atomic operation, but that
  has no bearing on the proof.  Setting x in the mutable version is also not
  atomic.
Two instances of the same muty class would each have their own
independent mutable variables. That means that modifications to those
variables don't have to be atomic.

However, two instances of the same globby class would share the /same/
AA, so accesses to that AA would need to be atomic, otherwise the AA
could itself end up with a corrupt memory layout.
Apr 02 2008
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 02/04/2008, Steven Schveighoffer wrote:
  > (2) are we really sure that modifying an AA is an atomic operation? I'm
  > not.

  I am really sure that modifying an AA is not an atomic operation, but that
  has no bearing on the proof.  Setting x in the mutable version is also not
  atomic.
Two instances of the same muty class would each have their own independent mutable variables. That means that modifications to those variables don't have to be atomic. However, two instances of the same globby class would share the /same/ AA, so accesses to that AA would need to be atomic, otherwise, the AA could itself end up with a corrupt memory layout.
OK, sure, so we lock the AA with a mutex lock :)  Again, this has no bearing
on the proof.

-Steve
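The locking gestured at here could be sketched like this (illustrative only; the lock object and the AA name are made up, and the D `synchronized` statement is used to serialize access):

```d
class globby
{
    private static X[globby] _storage;

    X x() const
    {
        // serialize all access to the shared AA so concurrent calls
        // cannot corrupt its internal layout
        synchronized (globby.classinfo)
        {
            return _storage[cast(globby)this];
        }
    }
}
```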
Apr 02 2008
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 The communications gap is not in that I do not understand what logical const 
 is.  The communications gap is that you are not understanding that what I 
 have posted is semantically equivalent to logical const.
A static member is not semantically equivalent to a non-static member, that's why both are supported. A static member is part of global state, not the state of the object itself.
Apr 02 2008
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Janice Caron wrote:
 We are suffering from a communications difficulty caused by you and I
 using the same phrase ("logical const") to mean entirely different
 things. Unless we can agree on a common terminology, we're not going
 to be able to get anywhere with this discussion.
You're right. This thread will go nowhere as long as "logical const" isn't
defined.

My understanding of logical const, and the meaning I use of it, is the C++
notion of a class that uses non-static fields declared as "mutable" to
implement a class that appears to be const from the user's perspective, but
actually has changing field values.

Mutating a static member is not logical const.
Apr 02 2008
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Walter Bright wrote:
 Janice Caron wrote:
 We are suffering from a communications difficulty caused by you and I
 using the same phrase ("logical const") to mean entirely different
 things. Unless we can agree on a common terminology, we're not going
 to be able to get anywhere with this discussion.
You're right. This thread will go nowhere as long as "logical const" isn't defined. My understanding of logical const, and the meaning I use of it, is the C++ notion of a class that uses non-static fields declared as "mutable" to implement a class that appears to be const from the user's perspective, but actually has changing field values. Mutating a static member is not logical const.
By using a global you can come up with something that is operationally
equivalent to that definition of logical const.  So it made sense to also
call that "logical const" as a shorthand for "operationally equivalent to
logical const using globals".  I'm not sure why anyone found that so
terribly confusing.

But there's nothing wrong with being a little more precise, though we still
don't have a good name for "operationally equivalent to logical const by
use of globals".  Other than "globby", I mean.  :-)

--bb
Apr 02 2008
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Steven Schveighoffer wrote:
 "Janice Caron" wrote in message
 The point is that logical const is still possible, even with transitive 
 const, because the global namespace is not const.  There is no way around 
 this except with pure functions.  Which I think you agree.
 
 HOWEVER, the point that everyone is arguing is why does logical const make 
 pure functions or functional programming impossible?  Clearly, it is ALREADY 
 POSSIBLE to have logical const, and clearly, pure functions are possible! 
 I'm saying transitive const is mathematically equivalent to logical const 
 ALREADY.  Please try and grasp that concept.
Globals are a loophole.  Pure will close that loophole.

Allowing mutable members would be another loophole.  Maybe pure can just
close that one too?  It would have to check that any access to a const
object does not touch mutable fields.  It seems possible.  Would it be
impossible for the compiler to check that for some reason?

Probably pure functions will only be able to call other pure functions, so
that rules out the possibility of indirectly accessing mutable data via a
function or method call.  So the only case that needs to be checked is
directly referencing such a mutable field.

It does seem like something that could be tacked on after the rest of
const/invariant/pure is done, though.  Much like C++'s mutable was.

--bb
Apr 02 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Bill Baxter" wrote
 Steven Schveighoffer wrote:
 "Janice Caron" wrote in message
 The point is that logical const is still possible, even with transitive 
 const, because the global namespace is not const.  There is no way around 
 this except with pure functions.  Which I think you agree.

 HOWEVER, the point that everyone is arguing is why does logical const 
 make pure functions or functional programming impossible?  Clearly, it is 
 ALREADY POSSIBLE to have logical const, and clearly, pure functions are 
 possible! I'm saying transitive const is mathematically equivalent to 
 logical const ALREADY.  Please try and grasp that concept.
Globals are a loophole. Pure will close that loophole. Allowing mutable members would be another loophole. Maybe pure can just close that one too? It would have to check that any access to a const object does not touch mutable fields. It seems possible. Would it be impossible for the compiler to check that for some reason? Probably pure functions will only be able to call other pure functions, so that rules out the possibility of indirectly accessing mutable data via a function or method call. So the only case that needs to be checked is directly referencing such a mutable field.
Thanks Bill, this is exactly what I'm trying to say :)
 It does seem like something that could be tacked on after the rest of 
 const/invariant/pure is done, though.  Much like C++'s mutable was.
Absolutely, but I'd at least like Walter to stop saying that transitive
const (read, transitive *keyword* const) is necessary for pure functions :)
If he says "Oh yeah, I guess transitive const isn't necessary, but it's not
a priority to fix it right now", then I'd be happy.

-Steve
Apr 02 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Absolutely, but I'd at least like Walter to stop saying that transitive
  const (read, transitive *keyword* const) is necessary for pure functions :)
  If he says "Oh yeah, I guess transitive const isn't necessary, but it's not
  a priority to fix it right now", then I'd be happy.
But hang on. It's all very well saying "pure functions can access the
non-mutable bits only", but in every case I can think of where I might use
the mutable keyword in C++, the non-mutable subset of the class is
completely useless without the mutable subset.

Can you give me a counterexample of a logically const (muty) class in which
the non-mutable subset is actually useful?
Apr 03 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 02/04/2008, Steven Schveighoffer wrote:
  Absolutely, but I'd at least like Walter to stop saying that transitive
  const (read, transitive *keyword* const) is necessary for pure functions :)
  If he says "Oh yeah, I guess transitive const isn't necessary, but it's not
  a priority to fix it right now", then I'd be happy.
But hang on. It's all very well saying "pure functions can access the non-mutable bits only", but in every case I can think of where I might use the mutable keyword in C++, the non-mutable subset of the class is completely useless without the mutable subset. Can you give me a counterexample of a logically const (muty) class in which the non-mutable subset is actually useful?
Certainly:

class SuperIntensiveCalculator
{
    private mutable int[int] cache;

    int f(int x) const
    {
        int *result = x in cache;
        if(result !is null)
            return *result;
        /* do really intense calculation, cache the value */
    }

    pure int puref(int x) invariant
    {
        /* do really intense calculation, do not use cache */
    }
}

If your instance is invariant, you can call puref, which doesn't use the
cache, but allows the compiler to do its own optimizations (and possibly
memoization).

Yes, I could make cache just a normal member, and remove the const on f(x),
but then I couldn't call f on const instances, which gives me less
functionality.  I could also implement this in globby form, but then the
lookup for the cache goes through 2 AAs, and I have to manually initialize
and destroy the cache for each instance.

-Steve
Apr 03 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 03/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
  > Can you give me a counterexample of a logically const (muty) class in
  > which the non-mutable subset is actually useful?

 Certainly:
 <snip>
In the example you just gave me, the non-mutable subset consisted of the
empty set - i.e. no members whatsoever. I don't call that useful. Given
that, the function pure f could just as easily have been taken outside the
class altogether and been a standalone function.

Do you want to try again?
Apr 03 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 03/04/2008, Steven Schveighoffer wrote:
  > Can you give me a counterexample of a logically const (muty) class in
  > which the non-mutable subset is actually useful?

 Certainly:
 <snip>
In the example you just gave me, the non-mutable subset consisted of the empty set - i.e. no members whatsoever. I don't call that useful. Given that, the function pure f could just have easily have been taken outside the class altogether and been a standalone function. Do you want to try again?
Have you no imagination? :)

class SuperIntensiveCalculator
{
    private mutable int[int] cache;
    private int configParameterThatIsNecessaryForIntenseCalculation;

    int f(int x) const
    {
        int *result = x in cache;
        if(result !is null)
            return *result;
        /* do really intense calculation, cache the value */
    }

    pure int puref(int x) invariant
    {
        /* do really intense calculation, do not use cache */
    }
}
Apr 03 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 03/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Have you no imagination? :)
Apparently.

So to cut a long story short, you're saying that you can have a class in
which a non-pure function caches a result (in a mutable variable), and in
which the pure version of the same function doesn't cache the result.

You might just as well say, you can have a class in which a non-const
function caches a result, and in which the const version of the same
function doesn't. What's the difference?

I guess what I'm trying to get at is that it is never /necessary/ to use
mutable variables. There's always a way of rewriting the code so you don't
need them. In this case:

    class Calculator
    {
        int config;
        int f(int x) const { ... }
        pure int f(int x) invariant { ... }
    }

    class CachingCalculator
    {
        Calculator calculator;
        int[int] cache;
        int f(int x)
        {
            auto p = x in cache;
            if (p !is null) return *p;
            int r = calculator.f(x);
            cache[x] = r;
            return r;
        }
    }

You just have to wean yourself off the addiction to logical const. After
you've gone cold turkey for a bit, you'll just stop wanting to use it.
Apr 03 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 03/04/2008, Steven Schveighoffer wrote:
 Have you no imagination? :)
Apparently. So to cut a long story short, you're saying that you can have a class in which a non-pure function caches a result (in a mutable variable), and in which the pure version of the same function doesn't cache the result. You might just as well say, you can have a class in which a non-const function caches a result, and in which the const version of the same function doesn't. What's the difference?
The difference is that things start breaking when they expect to receive a
const calculator.  For example, if I have a function that looks like this:

int[] calculateSomeValues(const(Calculator) c);

written by some other developer who says "I won't be changing c, and c
defines that f is const, so I'll declare that c is const".  And then I say
"hey, why don't I add a caching feature to Calculator that speeds up the
code tremendously", but now, since I didn't write calculateSomeValues, I
cannot pass the caching version to it.

So what ends up happening is I'll create a globby so I can "work around"
the const system and create faster executing code (the globby has a penalty
in the AA lookup, but we're assuming that each call to f is much more
expensive than that).  The problem is that in this case, const gets in the
way, and doesn't help me at all.  Const is supposed to help multiple
developers work together, not prevent them from doing so.

You could say "so pass the cache in with the calculator", which is fine,
but then I'm losing the whole point of being able to group related objects
together!  Not only that, but now calculateSomeValues has to know at least
that I'm passing some mutable state to it, so it has knowledge of the
implementation, which it shouldn't have to know.  Why not encapsulate the
mutable state in with the argument?

All a mutable member says to the compiler is "this part is not part of the
state of the object, so ignore it for const purposes."  That's all.  Think
of it as an extra argument to all functions which take a class of that
type, and that argument is implicitly passed to all member functions of the
object, just like the 'this' pointer is.
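The breakage described above can be sketched as follows. The names are hypothetical, `mutable` is the proposed (not legal) keyword, and `expensiveCalculation` is an illustrative stand-in:

```d
// Written by another developer, who reasonably demands a const calculator:
int[] calculateSomeValues(const(Calculator) c)
{
    return [c.f(1), c.f(2), c.f(3)];  // only calls the const method f
}

// My attempted improvement:
class CachingCalculator : Calculator
{
    private int[int] cache;            // would need to be `mutable`

    override int f(int x) const
    {
        // Under transitive const this assignment is rejected, so the
        // cache can never be filled in, even though callers of the
        // const interface would never observe a logical difference.
        // cache[x] = expensiveCalculation(x);
        return expensiveCalculation(x);
    }
}
```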
 I guess what I'm trying to get at is that it is never /necessary/ to
 use mutable variables. There's always a way of rewriting the code so
 you don't need them. In this case:

    class Calculator
    {
        int config;
        int f(int x) const { ... }
        pure int f(int x) invariant { ... }
    }

    class CachingCalculator
    {
        Calculator calculator;
        int[int] cache;
        int f(int x)
        {
            auto p = x in cache;
            if (p !is null) return *p;
            int r = calculator.f(x);
            cache[x] = r;
            return r;
        }
    }

 You just have to wean yourself off the addiction to logical const.
 After you've gone cold turkey for a bit, you'll just stop wanting to
 use it.
I don't use const in most of my stuff, since 1) I only use Tango, and 2)
all my D apps, and most of my other apps, are written only by me :)

The problem is that inevitably, I'll want to use someone else's library
whose author decided to use const, and inevitably, they will have
incorrectly implemented it, since they didn't think of all possible ways
someone can use their lib.

-Steve
Apr 03 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 03/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
  For example, if I have a function that looks like this:

  int[] calculateSomeValues(const(Calculator) c);

  written by some other developer who says "I won't be changing c, and c
  defines that f is const, so I'll declare that c is const".
The difference between Calculator and CachingCalculator is a bit like the
difference between File and BufferedFile. We all /know/ that file access is
slow, and so buffering speeds things up. Ditto here. That other developer
should have written

    int[] calculateSomeValues(CachingCalculator c);

and if they were dumb enough to use an underpowered class, then don't call
their function.
  and inevitably, they [other people] will have
  incorrectly implemented it
One doesn't introduce new language features on the assumption that other
people will incorrectly implement other language features.

It's just education, that's all. When using D, you program "the D way". And
that means, you don't declare something const if you know that bits of it
are going to change. That's how it works in D. Look at it like this - D
mandates good habits.
Apr 03 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 03/04/2008, Steven Schveighoffer wrote:
  For example, if I have a function that looks like this:

  int[] calculateSomeValues(const(Calculator) c);

  written by some other developer who says "I won't be changing c, and c
  defines that f is const, so I'll declare that c is const".
The difference between Calculator and CachingCalculator is a bit like the difference between File and BufferedFile. We all /know/ that file access is slow, and so buffering speeds things up. Ditto here. That other developer should have written int[] calculateSomeValues(CachingCalculator c); and if they were dumb enough to use an underpowered class, then don't call their function.
They did not write CachingCalculator, I did.  They have Calculator; I
wanted to make a derived Calculator that uses caching and pass it into the
function.  What if the function simply took an interface?

The issue is that the developer of that function is promising not to change
c, which he doesn't.  He cannot know how I will want to implement c.  In
fact, given the current const regime, the only correct choice here is to
NOT use const on Calculator.f, and therefore on any functions that take the
calculator, as it imposes too many restrictions on whoever is developing
the type that will be passed in.

And the "don't call badly defined functions" scheme is not always possible.
  and inevitably, they [other people] will have
  incorrectly implemented it
One doesn't introduce new language features on the assumption that other people will incorrectly implement other language features.
I'm not introducing a 'new' language feature. Logical const is already available through workarounds. I'm suggesting we make it easier to express logical const than by the hackish workarounds I've posted.
 It's just education, that's all. When using D, you program "the D
 way". And that means, you don't declare something const, if you know
 that bits of it are going to change. That's how it works in D. Look at
 it like this - D mandates good habits.
D const mandates workarounds, which I do not consider to be good habits.
Any time a language feature that is supposed to help developers work
together makes the language harder to work with, that's a red flag to me.

I'm not saying I disagree with const.  I'm just saying its absolute
transitivity is not preventing us from using logical-const-like features,
and so the notion that it is required, or at least the notion that logical
const won't work, is false.

-Steve
Apr 03 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 03/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 They did not write CachingCalculator, I did.  They have Calculator, I wanted
  to make a derived Calculator that uses caching and pass it into the
  function.  What if the function simply took an interface?  The issue is that
  the developer of that function is promising not to change c, which he
  doesn't.  He cannot know how I will want to implement c.  In fact, given the
  current const regime, the only correct choice in this is to NOT use const on
  Calculator.f and therefore on any functions that take the calculator, as it
  imposes too many restrictions on whoever is developing the type that will be
  passed in.
Fair point.
 I'm not introducing a 'new' language feature.  Logical const is already
  available through workarounds.  I'm suggesting we make it easier to express
  logical const than by the hackish workarounds I've posted.
Presumably though, what you call a "hackish workaround" is exactly what the
compiler would have to implement for you if you got what you want. That is,

    class C
    {
        mutable M m;
    }

would just be syntactic sugar for

    class C
    {
        private static M[const(C)] __cache;
        /* and appropriate functions to make it work */
    }

Maybe it's better to have all that explicit, rather than hidden, so that it
becomes obvious there's a price to pay, and that the lookup time for an AA
had better be insignificant compared to the cost of computing an M or it's
not worth it.
Apr 03 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 03/04/2008, Steven Schveighoffer wrote:
  I'm not introducing a 'new' language feature.  Logical const is already
  available through workarounds.  I'm suggesting we make it easier to express
  logical const than by the hackish workarounds I've posted.
Presumably though, what you call a "hackish workaround" is exactly what the
compiler would have to implement for you if you got what you want. That is,

    class C
    {
        mutable M m;
    }

would just be syntactic sugar for

    class C
    {
        private static M[const(C)] __cache;
        /* and appropriate functions to make it work */
    }

Maybe it's better to have all that explicit, rather than hidden, so that it
becomes obvious there's a price to pay, and that the lookup time for an AA
had better be insignificant compared to the cost of computing an M or it's
not worth it.
This isn't what I'm requesting. I'm requesting that

    class C { mutable M m; }

mean exactly what it says: that there is a piece of data stored with the class that is always mutable (to get nitpicky, I don't like the keyword mutable, as it implies that everything reachable through m is mutable, which it may not be). I'm asking for the compiler to treat it as not part of the object state, *as if* it were a variable outside the class, in terms of const. Think of it as an extra argument to all member functions that is not colored with the constancy of the member function.

There is no need for the compiler to implement my hack, I can do that easily enough with a mixin :)  If that was the problem, I would have never started this thread.

-Steve
Apr 03 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 03/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 Mean exactly what it says, that there is a piece of data stored with the
  class that is always mutable (to get nitpicky, I don't like the keyword
  mutable, as it implies that everything reachable through m is mutable, which
  it may not be).  I'm asking for the compiler to treat it as not part of the
  object state, *as if* it were a variable outside the class, in terms of
  const.  Think of it as an extra argument to all member functions that is not
  colored with the constancy of the member function.
Perhaps there is a better solution. Perhaps we could annotate functions which are computationally expensive:

    class SuperIntensiveCalculator
    {
        int f(int x) const expensive { /* do really intense calculation */ }
    }

I like that more. We simply give the compiler a hint that it might be worth caching the result.
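The behaviour such a hint would request can be written by hand today in C++ (a sketch; the `expensive` keyword is hypothetical, and here the caching is done manually with C++'s `mutable`, the squaring a stand-in for real work):

```cpp
#include <map>

class SuperIntensiveCalculator {
public:
    int f(int x) const {
        auto it = cache_.find(x);
        if (it != cache_.end())
            return it->second;      // cache hit: skip the expensive work
        int result = x * x;         // stand-in for the intense calculation
        cache_.emplace(x, result);
        return result;
    }

private:
    mutable std::map<int, int> cache_;  // writable even through const
};
```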
Apr 03 2008
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 03/04/2008, Steven Schveighoffer wrote:
 Mean exactly what it says, that there is a piece of data stored with the
  class that is always mutable (to get nitpicky, I don't like the keyword
  mutable, as it implies that everything reachable through m is mutable, 
 which
  it may not be).  I'm asking for the compiler to treat it as not part of 
 the
  object state, *as if* it were a variable outside the class, in terms of
  const.  Think of it as an extra argument to all member functions that is 
 not
  colored with the constancy of the member function.
Perhaps there is a better solution. Perhaps we could annotate functions which are computationally expensive:

    class SuperIntensiveCalculator
    {
        int f(int x) const expensive { /* do really intense calculation */ }
    }

I like that more. We simply give the compiler a hint that it might be worth caching the result.
This might solve this particular problem, but there are other reasons to have logical const types that are not solved this way. -Steve
Apr 03 2008
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 HOWEVER, the point that everyone is arguing is why does logical const make 
 pure functions or functional programming impossible?  Clearly, it is ALREADY 
 POSSIBLE to have logical const, and clearly, pure functions are possible! 
 I'm saying transitive const is mathematically equivalent to logical const 
 ALREADY.  Please try and grasp that concept.
You do not need const at all to do functional programming. But if you do that, you'll give up all compiler help. What const/invariant does is enable the compiler to enforce certain guarantees. Multiprogramming is notoriously difficult in languages that provide no language guarantees (like C++) and is fairly straightforward in languages that do provide guarantees (like Erlang). There has recently been a huge surge of interest in Erlang and Haskell for multiprogramming, and people are willing to put up with all the other faults in those languages, because they are able to get their multiprograms to work reliably without herculean efforts.
Apr 02 2008
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Good arguments, Steven.
The hand-waving over future benefits enabled by transitive const also 
has me feeling uneasy.  If the benefits are known then it should be 
possible to show some examples.  Clearly, though, as you say the benefit 
is not (at least directly) related to parallel optimizations.  So what 
optimizations does it allow?

So using "pure" is probably going to require that a function A)
touches only const/invariant data and B) does not reference any globals.

So it seems logical, then, that there would need to be a const system in 
place to specify what things are ok for a pure function to use and do. 
Imagine a pure function that manipulates an array of external class 
objects.  The array will need to be of type const(Klass)[] to denote 
that the Klass objects won't be touched.  Or if the pure function wants 
to call a Klass method, it better be a pure method.

But as I think Sean mentioned before, the constness in that case could 
conceivably be checked by the compiler without requiring user 
annotations.  Consider this case though:

   // Hypothetical pure function with implicit/deduced constness
   pure int foo(Klass a) {
      scope b = new Klass;
      Klass[] x;
      x ~= a;
      x ~= b;
      random_shuffle(x);
      x[0].prop = 10;  // is this ok or not?
      return x[0].prop;
   }

The trouble is that it's difficult for the compiler to guess whether to 
treat x as const(Klass)[] or mutable Klass[].   It is possible in this 
case though.  Basically it can infer from the x[0].prop=10 line that the 
container must be mutable, and thus the line  x ~= a is invalid because 
the parameters to a pure function are assumed const.

So hmm, it does seem conceivable to me that the compiler could enforce 
"pure" without needing any explicit const notation.


I also agree with Sean that D is great, and that the simplicity of D is 
one of its biggest selling points.  And also that const currently 
doesn't seem to be helping much in that department.


--bb

Steven Schveighoffer wrote:
 I have been reading through the reddit posts about the const faq, and also 
 some posts here by guslay, and I believe Walter needs to re-think his 
 beliefs about how transitive const is necessary for the future.
 
 First, I think transitive const should be the default.  I am not a fan of 
 C++'s head-const, where only the head is const, and all data referenced 
 through the head is mutable.  I think this promotes "shoot yourself in the 
 foot" code.  It is a valuable tool to have the compiler tell you "hey, you 
 wanted this to be const, remember?"  At the same time it is also useful to 
 tell the compiler "hey, I want this to not be affected by const, because 
 it's not logically part of the state of the object."
 
 Second, I'll show how logical const is ALREADY available with D's const, 
 just in a less convenient form.  Let's look at an invariant property that is 
 not invariant:
 
 class X
 {
    private static int _x;
    invariant int x()
    {
        return _x++;
    }
 }
 
 Notice that despite all efforts of the compiler to make sure invariant is 
 transitive, it still cannot do any parallelizations or optimizations on the 
 call to x.  If it does not have the source code for x, it cannot be sure 
 that x isn't doing something like the above.
 
 We can translate this into a form where we keep an associative array that 
 maps each instance to a mutable struct that can be the non-const portion of 
 the instance, so each instance can be separate:
 
 class X
 {
     private static int[X] _x;
     this()
     {
         _x[this] = 0;
     }
 
     ~this()
     {
          _x.remove(this);
     }
 
     invariant int x()
     {
        int *myx = this in _x;
        return (*myx)++;
     }
 }
 
 The problem is, you can think of each function not only passing the 
 arguments, and possibly a this pointer, but passing a context pointer to the 
 global namespace, which is never const.  As long as that remains mutable, we 
 can store mutable data associated with a class in that space.  In order to 
 allow multiprogramming, you need to ensure that the function does not access 
 that space, unless the data is invariant.  But this is called a 'pure' 
 function, and can exist WITH logical const and invariant.  If it couldn't, 
 then it couldn't exist with today's version of const, which allows logical 
 const and invariant as I have demonstrated.
 
 Here is my interpretation of all this:
 
 const is a tool for developers, not for the compiler.  Since const data can 
 change at any time, it cannot be used for multithreaded or other 
 optimizations.
 
 invariant allows for some optimization on the compiler.  Invariant 
 optimization is limited to POD values, because invariant functions can 
 always be logically invariant, and there is no way to detect it.
 
 pure is a tool for multiprogramming.  This allows all the fun stuff that 
 Walter is always talking about.  pure can co-exist with logical const and 
 logical invariant, because if it couldn't, then pure couldn't exist in 
 today's version of const.
 
 Since const is a tool for developers, and CANNOT BE USED for optimization, 
 let's please have logical const in a form that performs better and is easier 
 to implement than the above example.  Something like C++'s mutable (but not 
 exactly that, I'll explain below).
 
 Since invariant functions cannot be guaranteed as pure functions, let's have 
 logical invariance in a form that performs better and is easier to implement 
 than the above example.  Optimizations based on POD types can still be had 
 even with logical invariance.
 
 Head const by default sucks, and still does, because const is less useful as 
 a tool in that state.  However, as I have shown, head const cannot be 
 avoided, and so why not support it as the exception to the transitive const 
 rule?
 
 mutable is used as the keyword in C++ to describe this type.  However, I do 
 not like this word.  For example, whatever keyword is used, it should be 
 usable just like const or invariant:
 
 mutable(X)* refToX;
 
 This would describe a reference to a mutable X.  adding const to this would 
 then make the reference const, but the thing it points to mutable, giving us 
 a head-const variable.
 
 But what if X is an alias to const(Y)?  The statement implies that 
 mutable(X) makes X mutable, even if it is was previously const or invariant, 
 but this would be no good.  We need a word that says "this isn't affected by 
 const or invariant, but it could already be const or invariant".  Something 
 that is short and not heavily used in code.  Maybe even some form of 
 punctuation.
 
 Walter, I think you need to look at this, because not doing this is going to 
 keep D out of the mainstream, as most developers will (and already do) 
 complain about the lack of logical const.  I was one of those, but I gave 
 up, because I convinced myself that logical const wasn't that important, and 
 was a design flaw.  But since fully transitive const does not give us the 
 benefits that you have been saying it will, why not allow it?
 
 At the very least, this proof shows that you need a better reason to require 
 fully transitive const/invariant than "I need it for the future."
 
 -Steve 
 
 
Apr 01 2008
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 01/04/2008, Bill Baxter <dnewsgroup billbaxter.com> wrote:
    // Hypothetical pure function with implicit/deduced constness
    pure int foo(Klass a) {
       scope b = new Klass;
       Klass[] x;
       x ~= a;
       x ~= b;
       random_shuffle(x);
       x[0].prop = 10;  // is this ok or not?
    }
Just for the record, random_shuffle() is not a pure function, and hence cannot be called from within a pure function.
Apr 01 2008
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
Janice Caron wrote:
 On 01/04/2008, Bill Baxter <dnewsgroup billbaxter.com> wrote:
    // Hypothetical pure function with implicit/deduced constness
    pure int foo(Klass a) {
       scope b = new Klass;
       Klass[] x;
       x ~= a;
       x ~= b;
       random_shuffle(x);
       x[0].prop = 10;  // is this ok or not?
    }
Just for the record, random_shuffle() is not a pure function, and hence cannot be called from within a pure function.
Ah, good point. So make it:

    x = pure_random_shuffle(x);

Still I think that points to an unnecessary restriction on pure functions. A function that _only_ modifies its arguments can be safely called from a pure function when the data being passed is local to the pure function. For example this should be fine:

    void inc(ref int x) { x++; }

    pure int pure_func()
    {
        int x = 0;
        inc(x);
        return x;
    }

--bb
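The point generalizes beyond D: mutation of purely local state is invisible from outside the function. A C++ transliteration of that snippet (purity here is by convention only, since C++ has no pure keyword to enforce it):

```cpp
// inc mutates only its argument...
void inc(int& x) { x++; }

// ...so pure_func's result depends on nothing but its own locals:
// the mutation can never be observed by any caller.
int pure_func() {
    int x = 0;
    inc(x);
    return x;
}
```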
Apr 01 2008
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Bill Baxter wrote:
 So hmm, it does seem conceivable to me that the compiler could enforce 
 "pure" without needing any explicit const notation.
Deducing purity doesn't help when one wants to specify an interface. It would only be useful in verifying conformance to the interface.
Apr 01 2008
prev sibling next sibling parent reply guslay <guslay gmail.com> writes:
Steven Schveighoffer Wrote:

 I have been reading through the reddit posts about the const faq, and also 
 some posts here by guslay, and I believe Walter needs to re-think his 
 beliefs about how transitive const is necessary for the future.
 
I'm glad to see this issue catch on. I'll try to summarize the thoughts on const (in reality, invariant) expressed by others and myself.

-= A const method may exhibit side effects =-

The transitivity of const does not extend beyond the scope of the class. You may call static/global functions and modify static/global data, modify mutable objects accessible through static/global pointers, perform I/O, etc.

-= Const != thread-safe =-

A method with potential side effects is not implicitly thread-safe.

-= Const is not the key to functional programming =-

Immutability is required for FP, but const's "mathematical" guarantees are too weak to provide that ability.

-= Pure function + invariant data is required for FP =-

As many have pointed out, the expected pure function will enable FP. This concept is not yet in place, but I will define it as a method that only calls other pure functions, and performs read-only operations on its class members and static/global data. In other terms, a method with no side effects, that can only modify its input and output. A pure method is const, but a const method is not pure.

-= Const is part of the interface, not part of the implementation =-

Const is part of the interface: you need to have a const method to call on a const object.

It is also an abstraction. In some sense, D const is already logical const. Sure, the bits of the object cannot change, but since /everything else/ can change, the "purity" of constness is defined by the implementation. In other words, const is conceptual. That does not mean it is useless or without teeth, because D const is enforceable (but only on a member-by-member basis). Defining const as bitwise const for the whole object is simplistic.

You can still (imperfectly) fake mutability. Allowing this would be desirable.
    class MyMutableClass
    {
        private         int normal_part;
        private mutable int mutable_part;

        public int const lazyGetter();
    }

    const MyMutableClass obj = new MyMutableClass;

If not, people will find a way to hack around it with ugly tricks.

    int mutable_part;

    class MyHackishClass
    {
        private int normal_part;

        public int const lazyGetter();
    }

    const MyHackishClass obj = new MyHackishClass;

Mutable is an acknowledgment of the current limits of const.

-= Allowing mutable members does not break or weaken the const model =-

D const is powerful because it is enforceable. What mutable does is allow finer-grained control over the powerful const system. It does not weaken it, it controls its scope. Those are orthogonal issues (as far as I have yet to see an instance where having half the fields of an object const, instead of all the fields of the object, limits anything in any way). The internal state of the object will not be modified beyond what the class designer intended and allowed. The extent of const should be an implementation detail.

-= Mutability is required =-

C++'s mutable keyword is not strictly required. Given that constness is almost never enforced in practice, one can cast away constness, modify "constant" data, and move on. Mutable offers a cleaner way to do that in the clear, as part of the class definition. The casting trick cannot be used in D (gladly). But which is worse?

An object is fully initialized, but some parts are lazily evaluated and require mutable members. The object has to be sent to an external entity.

a) Mutable members are not allowed. The object cannot be passed as const. The non-const object is passed to the external function, renouncing any control at all over the immutability of the object.

b) Mutable members are allowed. The object is passed as const. The caller can be confident that the internal state of the object will not be modified beyond the intention of the class designer.

Mutable strengthens the const system.
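MyMutableClass has a direct analogue in C++, where the mutable keyword already exists. A sketch of the lazy getter (the doubling stands in for whatever expensive computation would really be cached):

```cpp
#include <optional>

class MyMutableClass {
public:
    explicit MyMutableClass(int n) : normal_part(n) {}

    // const to callers, yet it may fill in the cached part on first use.
    int lazyGetter() const {
        if (!mutable_part)
            mutable_part = normal_part * 2;  // stand-in for lazy work
        return *mutable_part;
    }

private:
    int normal_part;
    mutable std::optional<int> mutable_part;  // excluded from const-ness
};
```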
Apr 01 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 01/04/2008, guslay <guslay gmail.com> wrote:
  -= A const method may exhibit side effects =-
There is really no such thing as a const method. What we /call/ a const method is just a function, like any other. In any function, some parameters may be const, others not. In what you call a "const method", the hidden variable "this" is typed const - but that has no impact on any other function parameter, nor on any other variable in scope. In fact, I could even do

    class C
    {
        int x;
        void f(C c) const { ++c.x; }
        void g() { f(this); }
    }

and have f() modify a member variable, despite being declared const, simply by doing it through another pointer. So, while what you say is true, it's nothing but a statement of the obvious. It's /how things are supposed to work/. If you have any expectation that things are supposed to behave differently, then you just haven't understood what a const member function is.
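The same demonstration works in C++, where a const member function likewise constrains only `this` and says nothing about other paths to the same object (a minimal sketch):

```cpp
class C {
public:
    int x = 0;

    // const method: 'this' is const C*, but parameter c is not,
    // so the very same object can be mutated through the other path.
    void f(C& c) const { ++c.x; }

    void g() { f(*this); }  // hand f a non-const alias of ourselves
};
```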
  The transitivity of const does not extend beyond the scope of the class.
It would be more accurate to say that the transitivity of const(T) does not extend beyond that which is reachable through T. But T doesn't have to be a class.
 You may call static/global functions and modify static/global data, modify
mutable objects accessible through static/global pointers, perform I/O, etc.
Of course. Again, you're stating the obvious.
  -= Const != thread-safe =-

  A method with potential side effects is not implicitly thread-safe.
Again with the obvious. Is there any point to this? Are you somehow suggesting that anyone on this newsgroup believes otherwise?
  -= Const is not the key to functional programming =-
Yes it is. But const is /part/ of the solution, not the /whole/ of the solution. Saying "const is not the key to functional programming" is a bit like saying "wheels are not the key to making cars". Well, it's true that wheels aren't /enough/ to make a car - an engine would help, for a start - but they do matter. It would be a tough job making a car without wheels. And likewise it would be a tough job making functional programming work without (transitive) const.
  Immutability is required for FP, but const "mathematical" guarantees are too
weak to provide that ability.
I don't understand that sentence. Why is the word mathematical in quotes? I hope you're not suggesting that mathematics is a dodgy discipline, because it's probably the /only/ discipline in the universe which offers solid proofs for its theorems. Even ignoring that, I have no idea what you're trying to say here.
  -= Pure function + invariant data is required for FP =-
That sounds reasonable, though it should really say /transitive/ invariant data.
  As many have pointed out, the expected pure function will enable FP.
OK.
  This concept is not yet in place, but I will define it as a method that
only calls other pure functions, and performs read-only operations on its class
members and static/global data.
INCORRECT. A pure function cannot read static or global data, period.
  -= Const is part of the interface, not part of the implementation =-

  Const part of the interface: you need to have a const method to call on a
const object.

  It is also an abstraction. In some sense, D const is already logical const.
No it isn't.
  Sure, the bits of the object cannot change, but since /everything else/ can
change, the "purity" of constness is defined by the implementation.
Const doesn't claim to be pure.
  It does not mean that it is useless or without teeth, because D const is
enforceable (but only on a member-by-member basis).
On a variable by variable basis, yes. /That's what const means/.
 Allowing this would be desirable.

    class MyMutableClass
    {
       private         int normal_part;
       private mutable int mutable_part;

       public int const lazyGetter();
    }

    const MyMutableClass obj = new MyMutableClass;
The bottom line is that any class which is not /truly/ const, (as in physically, not merely logically), cannot be passed to any function which modifies the mutable parts, and simultaneously stay thread-safe.
  If not, people will find a way to hack around it with ugly tricks.
And their code won't be thread-safe.
  -= Allowing mutable members does not break or weaken the const model =-
I'm afraid it does. A type with mutable members can never be cast - either implicitly or explicitly - to fully const. That breaks everything.
  What mutable does is allowing finer-grained control over the powerful const
system. It does not weaken it, it controls its scope. Those are orthogonal
issues (as far as I have yet to see an instance where having half the fields of
an object const, instead of all the fields of the object, limits anything in
any way).
Really?

    class C
    {
        int x;
        mutable int y; // just pretend this is legal
    }

    void f(const(C) c)
    {
        int temp = c.y;
        c.y = temp + 1;
    }

Now imagine two threads calling f simultaneously with the same c. Imagine a thread switch occurs between the two statements.
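In C++, where mutable members are legal, the accepted discipline is exactly that a mutable member must carry its own synchronization, because const no longer implies read-only. A sketch of the conventional fix for the race above:

```cpp
#include <mutex>

class C {
public:
    int x = 0;

    // const interface, writable state behind it: without the lock,
    // two threads calling next() on one shared object would race on y_.
    int next() const {
        std::lock_guard<std::mutex> lock(m_);
        int temp = y_;
        y_ = temp + 1;
        return temp;
    }

private:
    mutable int y_ = 0;
    mutable std::mutex m_;  // the mutex itself must be mutable too
};
```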
  -= Mutability is required =-
Mutability, you have. It happens when you don't declare a thing const.
  An object is fully initialized, but some parts are lazily evaluated and
require mutable members.
Then it isn't const.
  a) Mutable members are not allowed. The object cannot be passed as const. The
non-const object is passed to the external function, renouncing any control
at all over the immutability of the object.
There are trivial workarounds. Instead of

    class C
    {
        int x;
        mutable int y;
    }

    void f(const(C) c);

just do this:

    class C
    {
        int x;
    }

    class D
    {
        int y;
    }

    void f(const(C) c, D d);

This is just a simple matter of saying what you really mean.
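That split can be sketched in C++ as well (all names illustrative): the const half and the mutable half travel as separate parameters, so the signature itself declares which part may change:

```cpp
struct ConstPart   { int x; };      // the frozen half of the old class
struct MutablePart { int y = 0; };  // the half that was "mutable"

// The caller hands over a read-only view of one half and a writable
// view of the other -- "saying what you really mean" in the signature.
int f(const ConstPart& c, MutablePart& d) {
    d.y++;          // allowed: d is the declared-mutable half
    return c.x;     // c stays untouchable
}
```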
  b) Mutable members are allowed. The object is passed as const. The caller can
be confident that the internal state of the object will not be modified beyond
the intention of the class designer.
...or it might be screwed up entirely as a result of a threading conflict.
Apr 02 2008
next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Janice Caron wrote:
 On 01/04/2008, guslay <guslay gmail.com> wrote:
  -= Const != thread-safe =-

  A method with potential side effects is not implicitly thread-safe.
Again with the obvious. Is there any point to this? Are you somehow suggesting that anyone on this newsgroup believes otherwise?
  -= Const is not the key to functional programming =-
Yes it is. But const is /part/ of the solution, not the /whole/ of the solution. Saying "const is not the key to functional programming" is a bit like saying "wheels are not the key to making cars". Well, it's true that wheels aren't /enough/ to make a car - an engine would help, for as start - but they do matter. It would be a tough job making a car without wheels. And likewise it would be a tough job making functional programming work without (transitive) const.
Analogies can be used to prove a lot of untrue things. Legs are used by pretty much all creatures on the planet to get from point A to point B, so it is obvious that a man-made vehicle will also certainly require legs. Maybe const is like legs.
  This concept is not yet in place, but I will define it as a method that
only calls other pure functions, and performs read-only operations on its class
members and static/global data.
INCORRECT. A pure function cannot read static or global data, period.
Unless the data is invariant. Shouldn't be a problem then.
 Allowing this would be desirable.

    class MyMutableClass
    {
       private         int normal_part;
       private mutable int mutable_part;

       public int const lazyGetter();
    }

    const MyMutableClass obj = new MyMutableClass;
The bottom line is that any class which is not /truly/ const, (as in physically, not merely logically), cannot be passed to any function which modifies the mutable parts, and simultaneously stay thread-safe.
  If not, people will find a way to hack around it with ugly tricks.
And their code won't be thread-safe.
And const will not have prevented it. So what use was its transitivity?
  -= Allowing mutable members does not break or weaken the const model =-
I'm afraid it does. A type with mutable members can never be cast - either implicitly or explicitly - to fully const. That breaks everything.
But a type that depends on globals *can* be made fully const. So why doesn't that break everything?
  What mutable does is allowing finer-grained control over the powerful const
system. It does not weaken it, it controls its scope. Those are orthogonal
issues (as far as I have yet to see an instance where having half the fields of
an object const, instead of all the fields of the object, limits anything in
any way).
Really? class C { int x; mutable int y; // just pretend this is legal } void f(const(C) c) { int temp = y; y = temp + 1; } Now imagine two threads calling f simultaneously with the same c. Imagine a thread switch occurs between the two statements.
Now imagine instead of mutable int it's static int[C]. It's the same loophole.
  b) Mutable members are allowed. The object is passed as const. The caller can
be confident that the internal state of the object will not be modified beyond
the intention of the class designer.
...or it might be screwed up entirely as a result of a threading conflict.
As would also be the case with the globals loophole. Would think and write more, but gotta go now. --bb
Apr 02 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Bill Baxter <dnewsgroup billbaxter.com> wrote:
 INCORRECT. A pure function cannot read static or global data, period.
Unless the data is invariant. Shouldn't be a problem then.
Maybe. I don't know the details. My guess would be that only manifest constants will be allowed. Invariant data can be created with idup, and might not be considered sufficiently pure. We shall have to wait and see.
  And const will not have prevented it.  So what use was its transitivity?
pure would have prevented it, and pure requires transitive const.
  But a type that depends on globals *can* be made fully const.  So why
 doesn't that break everything?
I don't understand the question. What is a "type that depends on globals"? Do you mean something like this:

    string g = "hello";

    class C
    {
        string s;
        this() { s = g; }
    }

I don't think that passing a const(C) to a pure function would break anything. The essential point is not that g's character data can be reached, it's that it can't be reached /through a pointer passed to the function/, and that's what purity requires.
 Now imagine instead of mutable int it's static int[C].  It's the same
 loophole.
It is. But again, static data will not be accessible in pure functions, so the loophole is closed there.
 ...or it might be screwed up entirely as a result of a threading conflict.
As would also be the case with the globals loophole.
which pure closes.
Apr 02 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 pure requires transitive const.
Forgive me if I don't take your word for it. Please prove this. Specifically, why pure fails with logical const (not head const as C++ has).

Assume these are the rules for pure:

- a pure function can only access local variables
- all parameters to a pure function must be logically invariant
- a pure function can only call pure functions or methods.
- a pure method cannot access the mutable portion of a logically invariant data value.

-Steve
Apr 02 2008
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 "Janice Caron" wrote
  > pure requires transitive const.

  Forgive me if I don't take your word for it.  Please prove this.
  Specifically, why pure fails with logical const (not head const as C++ is).
    class Stupid
    {
        int x;
        mutable int y;
    }

    not_pure void f(const(Stupid) stupid) const
    {
        int t = y;
        ++t;
        y = t;
    }

Try calling f from two threads at the same time, imagine a thread switch partway through f.
Apr 02 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 02/04/2008, Steven Schveighoffer wrote:
 "Janice Caron" wrote
  > pure requires transitive const.

  Forgive me if I don't take your word for it.  Please prove this.
  Specifically, why pure fails with logical const (not head const as C++ 
 is).
    class Stupid
    {
        int x;
        mutable int y;
    }

    not_pure void f(const(Stupid) stupid) const
    {
        int t = y;
        ++t;
        y = t;
    }

Try calling f from two threads at the same time, imagine a thread switch partway through f.
I'm assuming you meant f to be part of Stupid? And that not_pure means that you are trying to make f pure, but this is your comment that it doesn't work?

This would not compile under the rules I specified. Note the rules:

- a pure function can only access local variables (I'm going to amend this to say local variables and globally invariant variables).
- all parameters to a pure function must be logically invariant
- a pure function can only call pure functions or methods.
- a pure method cannot access the mutable portion of a logically invariant data value.

Your code violates rules 2 and 4.

-Steve
Apr 02 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
 "Janice Caron" wrote

 On 02/04/2008, Steven Schveighoffer wrote:
>> "Janice Caron" wrote
>> > pure requires transitive const.
>>
>> Forgive me if I don't take your word for it.  Please prove this.
>> Specifically, why pure fails with logical const (not head const as C++
>> is).
 I'm assuming you meant f to be part of Stupid?
No, that was just a typo. Or lazy typing - take your pick. I should have written

    not_pure void f(const(Stupid) stupid)
    {
        int t = stupid.y;
        ++t;
        stupid.y = t;
    }

Try calling f from two threads at the same time, imagine a thread switch partway through f.
  - a pure method cannot access the mutable portion of a logically invariant
  data value.
Well, the easy way to do that is to redesign the class into two classes, one const and the other not. That way, you get the exact same benefit, but in addition, you end up saying what you mean, instead of writing obfuscated code.
Apr 02 2008
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 02/04/2008, Steven Schveighoffer wrote:
 "Janice Caron" wrote

 On 02/04/2008, Steven Schveighoffer wrote:
>> "Janice Caron" wrote
>> > pure requires transitive const.
>>
>> Forgive me if I don't take your word for it.  Please prove this.
>> Specifically, why pure fails with logical const (not head const as C++
>> is).
 I'm assuming you meant f to be part of Stupid?
No, that was just a typo. Or lazy typing - take your pick. I should have written

    not_pure void f(const(Stupid) stupid)
    {
        int t = stupid.y;
        ++t;
        stupid.y = t;
    }
Again, this is not a pure function. You cannot access the mutable portion of a logically invariant data value. This does not disprove that logical const can be used alongside pure.
  - a pure method cannot access the mutable portion of a logically 
 invariant
  data value.
Well, the easy way to do that is to redesign the class into two classes, one const and the other not. That way, you get the exact same benefit, but in addition, you end up saying what you mean, instead of writing obfuscated code.
I can ALREADY write "obfuscated" code as you say it. See my original example. Supporting logical const directly just makes that easier, so I don't have to keep the part of the class that's not part of the state outside the class and "work around" the const system. Note that none of this violates const or functional programming.

The problem with your solution is that I can't just turn part of a class const; how would one specify that anyway? I want to just say:

    class C { ??? int mutablePart; int constPart; }

    C c = new C;
    const(C) cc = c; // mutablePart is still mutable

I don't even know how you can do something like this without templates, and then I am bloating the namespace, and I can't automatically assign one to the other.

-Steve
Apr 02 2008
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Janice Caron wrote:

 
  - a pure method cannot access the mutable portion of a logically invariant
  data value.
Well, the easy way to do that is to redesign the class into two classes, one const and the other not. That way, you get the exact same benefit, but in addition, you end up saying what you mean, instead of writing obfuscated code.
I don't understand how to do what you're saying. Here's the version using mutable:

    class Calc
    {
        private mutable int result_cache;
        int calc_result(int) { ... /* uses result_cache to save results */ }
        ...
    }

    auto x = new Calc;
    int y = x.calc_result(32);

So how do you separate that into two classes? How do I create an instance of that two-class thing? Via a third class? How can Calc access its cache? Does the user have to pass in the cache to every call to calc_result?

The goal is for the caching to be an implementation detail that users need not worry about. The above seems about as straightforward as it gets. Anything else is probably going to come out *more* obfuscated.

--bb
Apr 02 2008
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Bill Baxter" wrote
 Janice Caron wrote:

  - a pure method cannot access the mutable portion of a logically 
 invariant
  data value.
Well, the easy way to do that is to redesign the class into two classes, one const and the other not. That way, you get the exact same benefit, but in addition, you end up saying what you mean, instead of writing obfuscated code.
I don't understand how to do what you're saying. Here's the version using mutable: class Calc { private mutable int result_cache; int calc_result(int) { .../*uses result_cache to save results*/ } ... } auto x = new Calc; int y = x.calc_result(32); So how do you separate that into two classes? How do I create an instance of that two-class thing? Via a third class? How can Calc access its cache? The user has to pass in the cache to every call to calc_result? The goal is for the caching to be an implementation detail that users need not worry about. The above seems about as straightforward as it gets. Any else is probably going to have to come out *more* obfuscated.
I don't think you can even wrap the class, because what about this:

    class CachedCalc
    {
        Calc c;
        int result_cache;
    }

    class ConstCalc
    {
        const(Calc) c;
        int result_cache;
    }

This should work, but because ConstCalc and CachedCalc are not derived classes, it doesn't:

    ConstCalc cc = new CachedCalc();

So the only way to do it is to carry around both the Calc and the cache, and pass the cache in to every call that needs it. I agree this is useless. On top of that, you can't create a constructor that uses a CachedCalc, because you can't re-assign c, thanks to no tail-const class references...

-Steve
Apr 02 2008
prev sibling parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Wed, 02 Apr 2008 16:04:36 +0200, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 - a pure method cannot access the mutable portion of a logically  
 invariant data value.
Wouldn't this basically make it transitive invariant?
Apr 02 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Simen Kjaeraas" wrote
 On Wed, 02 Apr 2008 16:04:36 +0200, Steven Schveighoffer  wrote:

 - a pure method cannot access the mutable portion of a logically 
 invariant data value.
Wouldn't this basically make it transitive invariant?
Yes, which makes my point :) pure must be transitive, but const / invariant by itself does not need to be. -Steve
Apr 02 2008
parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Wed, 02 Apr 2008 16:41:33 +0200, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 "Simen Kjaeraas" wrote
 On Wed, 02 Apr 2008 16:04:36 +0200, Steven Schveighoffer  wrote:

 - a pure method cannot access the mutable portion of a logically
 invariant data value.
Wouldn't this basically make it transitive invariant?
Yes, which makes my point :) pure must be transitive, but const / invariant by itself does not need to be. -Steve
So yes, you can do without transitive const, as long as you define logical const as transitive. I can't quite see what point you're trying to make. -- Simen
Apr 02 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Simen Kjaeraas" wrote
 On Wed, 02 Apr 2008 16:41:33 +0200, Steven Schveighoffer  wrote:

 "Simen Kjaeraas" wrote
 On Wed, 02 Apr 2008 16:04:36 +0200, Steven Schveighoffer  wrote:

 - a pure method cannot access the mutable portion of a logically
 invariant data value.
Wouldn't this basically make it transitive invariant?
Yes, which makes my point :) pure must be transitive, but const / invariant by itself does not need to be. -Steve
So yes, you can do without transitive const, as long as you define logical const as transitive. I can't quite see what point you're trying to make.
No, I'm not defining logical const as transitive. I'm defining that pure is transitive. pure functions have nothing to do with requiring const to be transitive, which is my point. Did you look at my example in the original post? What we have now is semantically equivalent to logical const. -Steve
Apr 02 2008
parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Wed, 02 Apr 2008 16:50:12 +0200, Steven Schveighoffer  
<schveiguy yahoo.com> wrote:

 "Simen Kjaeraas" wrote
 On Wed, 02 Apr 2008 16:41:33 +0200, Steven Schveighoffer  wrote:

 "Simen Kjaeraas" wrote
 On Wed, 02 Apr 2008 16:04:36 +0200, Steven Schveighoffer  wrote:

 - a pure method cannot access the mutable portion of a logically
 invariant data value.
Wouldn't this basically make it transitive invariant?
Yes, which makes my point :) pure must be transitive, but const / invariant by itself does not need to be. -Steve
So yes, you can do without transitive const, as long as you define logical const as transitive. I can't quite see what point you're trying to make.
No, I'm not defining logical const as transitive. I'm defining that pure is transitive. pure functions have nothing to do with requiring const to be transitive, which is my point. Did you look at my example in the original post? What we have now is semantically equivalent to logical const. -Steve
Yes, there are ways to bypass the const system. We know that, and we accept it. Why not just do:

    class foo
    {
        int bar;
        const void baz()
        {
            (cast(foo)this).bar++;
        }
    }

That also works.

-- Simen
Apr 03 2008
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Simen Kjaeraas" wrote
 On Wed, 02 Apr 2008 16:50:12 +0200, Steven Schveighoffer  wrote:

 "Simen Kjaeraas" wrote
 On Wed, 02 Apr 2008 16:41:33 +0200, Steven Schveighoffer  wrote:

 "Simen Kjaeraas" wrote
 On Wed, 02 Apr 2008 16:04:36 +0200, Steven Schveighoffer  wrote:

 - a pure method cannot access the mutable portion of a logically
 invariant data value.
Wouldn't this basically make it transitive invariant?
Yes, which makes my point :) pure must be transitive, but const / invariant by itself does not need to be. -Steve
So yes, you can do without transitive const, as long as you define logical const as transitive. I can't quite see what point you're trying to make.
No, I'm not defining logical const as transitive. I'm defining that pure is transitive. pure functions have nothing to do with requiring const to be transitive, which is my point. Did you look at my example in the original post? What we have now is semantically equivalent to logical const. -Steve
Yes, there are ways to bypass the const system. We know that, and we accept it. Why not just do: class foo { int bar; const void baz() { (cast(foo)this).bar++; } } That also works.
There is a huge difference between my code and your code. My code is well-defined by the language spec, yours is not. I hope to never write undefined code, as I'm not sure it will always work. -Steve
Apr 03 2008
prev sibling next sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2008-04-02 04:14:28 -0400, "Janice Caron" <caron800 googlemail.com> said:

 -= Pure function + invariant data is required for FP =-
That sounds reasonable, though it should really say /transitive/ invariant data.
Hum, I'm wondering if an invariant class couldn't have a mutable member, but access to the mutable member from a const or invariant reference would require synchronized access to keep thread safety guarantees. In other words:

    class X
    {
        mutable int a;

        const void foo()
        {
            writefln(a); // illegal, const this doesn't give access to mutable member.

            synchronized (this)
            {
                writefln(a); // legal, synchronization gives access to mutable member.
                ++a;         // legal since a is mutable.
            }
        }
    }

-- Michel Fortin michel.fortin michelf.com http://michelf.com/
Apr 02 2008
prev sibling parent "Koroskin Denis" <2korden+dmd gmail.com> writes:
On Wed, 02 Apr 2008 12:14:28 +0400, Janice Caron <caron800 googlemail.com> wrote:

 On 01/04/2008, guslay <guslay gmail.com> wrote:
 What mutable does is allowing finer-grained control over the powerful const system. It does not weaken it, it controls its scope. Those are orthogonal issues (as far as I have yet to see an instance where having half the fields of an object const, instead of all the fields of the object, limits anything in any way).
Really?

    class C
    {
        int x;
        mutable int y; // just pretend this is legal
    }

    void f(const(C) c)
    {
        int temp = c.y;
        c.y = temp + 1;
    }

Now imagine two threads calling f simultaneously with the same c. Imagine a thread switch occurs between the two statements.
I like the current const system, and I understand that it would help to do multiprogramming in D. But maybe we should look at the problem from a different point.

The problem you have just shown doesn't correlate with const correctness. Maybe we should introduce some other data status like 'concurrent' or 'thread-safe' to determine that data can be accessed or a method can be called with no side effects. It is a _great_ contract which not only gives the programmer additional guarantees, but it also helps the compiler to make optimizations.

    class C
    {
        // Call this method as you wish, it's safe!
        // Author takes all the responsibility for its safety
        concurrent void f()
        {
            // ... some data manipulation, guarded by mutexes
        }

        // This method is concurrent, too
        // Compiler makes a guarantee it's safe
        synchronized void g()
        {
        }
    }

Neither programmer nor compiler can rely on a method if it behaves like this:

    class C
    {
        int refCount = 0;
        void addRef()
        {
            ++refCount; // not safe
        }
    }

And we cannot mark the method 'pure'. Instead, we could mark it 'concurrent':

    class C
    {
        int refCount = 0;

        // via 'synchronized' modifier
        synchronized void addRef()
        {
            ++refCount;
        }

        // or by hand
        concurrent void addRef_2()
        {
            while (true)
            {
                int refs = refCount;
                if (thread.atomic.cas(&refCount, refs, refs+1))
                {
                    break;
                }
            }
        }
    }

Note that ALL pure methods are concurrent.

This way, we could allow mutable data, but only if _all_ access to it passes through concurrent methods:

    class Square
    {
        private mutable int area;
        private int width;

        this(int width)
        {
            this.width = width;
            this.area = -1; // ok, safe
        }

        invariant int getArea()
        {
            if (area == -1)
            {
                area = width*width; // BAD
                // method cannot be marked as invariant if it modifies mutable variables
            }
            return area;
        }

        invariant int getArea()
        {
            if (area == -1)
            {
                synchronized(this)
                {
                    if (area == -1)
                    {
                        area = width*width; // OK now, no side effects.
                    }
                }
            }
            return area;
        }
    }

Logical const _can_ be used with no side effects, but the programmer should pay attention to this kind of data. What do you think?
Apr 02 2008
prev sibling next sibling parent Jason House <jason.james.house gmail.com> writes:
Just like the main thread, my question was about transitive const and its
support for multithreading. If transitive const only supports it with pure then
the OP is correct...

Janice Caron Wrote:

 On 02/04/2008, Janice Caron <caron800 googlemail.com> wrote:
 Global variables will /not/ be reachable from a pure function. Example:

     int g;

     pure int f()
     {
         return g; /*ERROR*/
     }

  The above will not compile, because global variables won't be
  available to pure functions.
However, I suspect that the following will be OK.

    enum g = 42;

    pure int f()
    {
        return g;
    }

I'm only guessing here - I don't have any inside knowledge. But it seems reasonable to me that global compile-time constants declared outside the function ought to be available inside it. I guess we'll just have to wait and see. But global variables - no.
Apr 02 2008
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
 pure is a tool for multiprogramming.  This allows all the fun stuff that 
 Walter is always talking about.  pure can co-exist with logical const and 
 logical invariant, because if it couldn't, then pure couldn't exist in 
 today's version of const.
Let me expand on this point, because I may not have explained this correctly, and as a result a lot of you are confused at what I'm talking about.

pure functions require transitive invariance. But this does not mean that an invariant object must be transitively invariant for pure to work. For a logically invariant object, a pure function will simply not be able to use the mutable portion of that object.

Put another way: pure requires transitive invariance, but making the keywords invariant and const mean 'transitive' is not *required* for this to happen. What is required is a well defined const system, where the compiler can tell what is const and what is not const.

The problem with C++'s const and pure is that by default const is not transitive. Therefore, the compiler has no way of knowing if the developer needs data referenced by const members to be const or not. If D had this kind of system, then the compiler would have to restrict pure functions to only interact with local value members of a class that did not reference other data.

With const being transitive by default, the compiler can make more assumptions, and have greater assurance that pure has no side effects. But allowing a developer to specify that a piece of a class is not part of the class state does NOT prevent useful pure functions, and does NOT prevent the compiler from enforcing const or pure. i.e. logical const does not invalidate pure functions and does not prevent multiprogramming as Walter has suggested.

To summarize: Yes, I understand pure functions require transitive invariance. Yes, I understand what logical invariance means. I have shown that pure functions are still possible alongside logical invariance, as long as the pure function does not use the mutable portion of the class (which is technically not part of the class state anyway). I have also shown that logical invariance is possible with today's transitive const system, proving that transitive const is semantically equivalent to logical const.

-Steve
Apr 02 2008
prev sibling next sibling parent reply guslay <guslay gmail.com> writes:
Janice Caron Wrote:

 On 01/04/2008, guslay <guslay gmail.com> wrote:
  -= A const method may exhibit side effects =-
 You may call static/global functions and modify static/global data, modify
mutable objects accessible through static/global pointers, perform I/O, etc.
Of course. Again, you're stating the obvious.
It's a walkthrough.
  -= Const != thread-safe =-

  A method with potential side effects is not implicitly thread-safe.
Again with the obvious. Is there any point to this? Are you somehow suggesting that anyone on this newsgroup believes otherwise?
Yes. Sometimes.
  -= Const is not the key to functional programming =-
Yes it is. But const is /part/ of the solution, not the /whole/ of the solution.
That's what I said.
 And likewise it would be a tough job making
 functional programming work without (transitive) const.
I still assume const to be transitive. But transitivity is not absolute.
  Immutability is required for FP, but const "mathematical" guarantees are too
weak to provide that ability.
I don't understand that sentence. Why is the word mathematical in quotes? I hope you're not suggesting that mathematics is a dodgy discipline, because it's probably the /only/ discipline in the universe which offers solid proofs for its theorems. Even ignoring that, I have no idea what you're trying to say here.
Abusive claims are made hinting that const/invariant provide a strong mathematical foundation to support multiprogramming, without details. I am trying to show that the const/invariant model, as applied to qualifying functions (not data), is too weak to prove such a thing. Hence the quotes. I meant no disrespect to your field.
  -= Pure function + invariant data is required for FP =-
That sounds reasonable, though it should really say /transitive/ invariant data.
You are right. As I've stated in the introduction, a really meant invariant functions, not const, and I should have used that terminology.
  This concept is not yet into place, but I will define it has a method that
only call other pure functions, and performs read-only operation on its class
members and static/global data.
INCORRECT. A pure function cannot read static or global data, period.
Let's settle for invariant/enum global data.
  -= Const is part of the interface, not part of the implementation =-
  -= Allowing mutable members does not break or weaken the const model =-
  What mutable does is allowing finer-grained control over the powerful const
system. It does not weaken it, it controls its scope. Those are orthogonal
issues (as far as I have yet to see an instance where having half the fields of
an object const, instead of all the fields of the object, limits anything in
any way).
Really?

    class C
    {
        int x;
        mutable int y; // just pretend this is legal
    }

    void f(const(C) c)
    {
        int temp = c.y;
        c.y = temp + 1;
    }

Now imagine two threads calling f simultaneously with the same c. Imagine a thread switch occurs between the two statements.
You have just made the mistake to assume that const implies thread-safe.
  -= Mutability is required =-
  An object is fully initialized, but some parts are lazyly evaluated and
require mutable members.
Then it isn't const.
Based on a very restrictive definition of what const is.
  a) Mutable members are not allowed. The object cannot be passed as const. The
non-const object is passed to the external function, renouncing to any control
at all on the immutability of the object.
There are trivial workarounds. Instead of

    class C
    {
        int x;
        mutable int y;
    }

    void f(const(C) c);

just do this:

    class C
    {
        int x;
    }

    class D
    {
        int y;
    }

    void f(const(C) c, D d);

This is just a simple matter of saying what you really mean.
I would love to be able to bend framework interfaces at my will.
  b) Mutable members are allowed. The object is passed as const. The caller can
be confident that the internal state of the object will not be modified beyond
the intention of the class designer.
...or it might be screwed up entirely as a result of a threading conflict.
Again, const/invariant != thread safe. Which is obvious, but often forgotten.
Apr 02 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, guslay <guslay gmail.com> wrote:
  You have just made the mistake to assume that const implies thread-safe.
No I didn't. I stated that const is a /necessary/ condition for thread safety, but nowhere did I claim it was a /sufficient/ condition. I assumed (demonstrated, in fact), that mutable implies thread-unsafe, but not that const implies thread-safe. There is a difference.
Apr 02 2008
next sibling parent guslay <guslay gmail.com> writes:
Janice Caron Wrote:

 On 02/04/2008, guslay <guslay gmail.com> wrote:
  You have just made the mistake to assume that const implies thread-safe.
No I didn't. I stated that const is a /necessary/ condition for thread safety, but nowhere did I claim it was a /sufficient/ condition. I assumed (demonstrated, in fact), that mutable implies thread-unsafe, but not that const implies thread-safe. There is a difference.
If const does not imply thread-safety, mutable clearly does not. No one has claimed that. Bizarrely, I was under the impression that you made it sound like it was my position. At least we agree on that.
Apr 02 2008
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Janice Caron" wrote
 On 02/04/2008, guslay wrote:
  You have just made the mistake to assume that const implies thread-safe.
No I didn't. I stated that const is a /necessary/ condition for thread safety, but nowhere did I claim it was a /sufficient/ condition. I assumed (demonstrated, in fact), that mutable implies thread-unsafe, but not that const implies thread-safe. There is a difference.
Here is the full context:
  What mutable does is allowing finer-grained control over the powerful 
 const system. It does not weaken it, it controls its scope. Those are 
 orthogonal issues (as far as I have yet to see an instance where having 
 half the fields of an object const, instead of all the fields of the 
 object, limits anything in any way).
Really?

    class C
    {
        int x;
        mutable int y; // just pretend this is legal
    }

    void f(const(C) c)
    {
        int temp = c.y;
        c.y = temp + 1;
    }

Now imagine two threads calling f simultaneously with the same c. Imagine a thread switch occurs between the two statements.
guslay is saying "mutable gives you finer control over const, does not weaken it..."

You are saying "Really? <example that proves mutable const is not thread-safe>"

I think guslay interpreted your response as a rebuttal to his argument, and so he thought (as I did) that what you were saying is: "Really? Here's an example that is not thread safe (but is if mutable isn't allowed)"

In reality, I think your example has nothing to do with the difference between transitive and logical const at all.

-Steve
Apr 02 2008
prev sibling next sibling parent guslay <guslay gmail.com> writes:
Janice Caron Wrote:

 On 02/04/2008, guslay <guslay gmail.com> wrote:
 What's logical const anyway?
By my definition, a C++ class is logically const, if and only if at least one of its member variables is declared using the keyword "mutable". If other people use the phrase to mean something different, that probably explains much of the confusion.
Ok, let's take your definition. I would like to hear your thoughts on the rest of the post, that is, how D's "totally-not-logical-const" is conceptually no different from "logical const".
Apr 02 2008
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
If you do away with transitive const, you cannot have invariant either. 
Const allows you to reuse code that works on invariants to work with 
mutables, too.

Logical const just utterly breaks the whole system. Every scheme we've 
looked at for logical const winds up in some way breaking invariants. If 
invariants are broken, the advantages of them all just go away. I 
suspect that logical const is like perpetual motion - if you think 
you've solved it, you've just made a mistake somewhere <g>. I also 
suspect that the failure of logical const validates the D scheme of 
invariants as being correct - there shouldn't be any holes in it.

You're right that invariant by itself is not enough to specify a pure 
function, a pure function also cannot read or write global state. But 
invariant is still a necessary condition, it's just not sufficient.

Peoples' troubles with const seem to stem from attempting to defeat it 
in one way or another. While defeating const is legal and trivial in 
C++, D tries to close those doors.
Apr 02 2008
next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 If you do away with transitive const, you cannot have invariant either.
 Const allows you to reuse code that works on invariants to work with
 mutables, too.
 Logical const just utterly breaks the whole system. Every scheme we've
 looked at for logical const winds up in some way breaking invariants. If
 invariants are broken, the advantages of them all just go away. I
 suspect that logical const is like perpetual motion - if you think
 you've solved it, you've just made a mistake somewhere <g>.
My traditional argument in support of logical const is this:

    class C
    {
        mutable mutex monitor;
        std::string name;
    public:
        std::string name() const
        {
            scoped_lock sl( monitor );
            return name;
        }

        void name( std::string const& n )
        {
            scoped_lock sl( monitor );
            name = n;
        }
    };

Here, the mutex has nothing to do with the state of the object and must be modified even during logically non-modifying operations. Similar behavior might be necessary for a logging system in some instances, etc. However, I'm not certain whether this is sufficient to justify logical const in general. As you say--it has some serious problems as well.

Sean
Apr 02 2008
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 My traditional argument in support of logical const is this:
 
     class C
     {
         mutable mutex monitor;
         std::string name;
     public:
         std::string name() const
         {
             scoped_lock sl( monitor );
             return name;
         }
 
         void name( std::string const& n )
         {
             scoped_lock sl( monitor );
             name = n;
         }
     };
 
 Here, the mutex has nothing to do with the state of the object and
 must be modified even during logically non-modifying operations.
 Similar behavior might be necessary for a logging system in some
 instances, etc.  However, I'm not certain whether this is sufficient
 to justify logical const in general.  As you say--it has some serious
 problems as well.
If C.name were invariant, there would be no need for any locks. I don't think this is a good example, because with D's invariant strings there is no need for such locking.
Apr 02 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 My traditional argument in support of logical const is this:

     class C
     {
         mutable mutex monitor;
         std::string name;
     public:
         std::string name() const
         {
             scoped_lock sl( monitor );
             return name;
         }

         void name( std::string const& n )
         {
             scoped_lock sl( monitor );
             name = n;
         }
     };

 Here, the mutex has nothing to do with the state of the object and
 must be modified even during logically non-modifying operations.
 Similar behavior might be necessary for a logging system in some
 instances, etc.  However, I'm not certain whether this is sufficient
 to justify logical const in general.  As you say--it has some serious
 problems as well.
If C.name were invariant, there would be no need for any locks. I don't think this is a good example, because with D's invariant strings there is no need for such locking.
I'm not sure that's true. Mutexes do two things: first, they guarantee atomicity for an arbitrary sequence of instructions, and second, they provide a memory barrier so the result of those instructions is made visible to any other thread which later acquires the same mutex. The invariant label provides a slightly different guarantee: that the data underlying an invariant reference will not ever change. Here's an off-the-cuff example:

    string getInput()
    {
        auto buf = new char[64]; // A
        readln( buf );
        return cast(string) buf;
    }

    class Person
    {
        string name() { return m_name; }
        void name( string n ) { m_name = n; }
        private string m_name;
    }

    auto p = new Person;
    // start Thread B

    // Thread A
    writef( "enter your name: " );
    p.name = getInput();

    // Thread B
    while( !p.name )
        Thread.yield();
    writefln( "Your name is ", p.name );

In the above code, the only memory barrier exists at point A, where the GC acquires a mutex to handle the allocation (let's assume that no IO synchronization occurs for the sake of this example). After this memory barrier, Thread A does this:

* transfers input from the keyboard into buf
* declares buf to be invariant because it will never change
* assigns buf to the name member in p

Meanwhile, Thread B is waiting for name to be non-empty, but specifically, what I believe it's doing is this:

    while p.name.ptr is null
        wait

Then it prints p.name to the screen using the length attribute to determine how many bytes to print. In this example, then, we are relying on the following set of operations to occur sequentially, from a global perspective:

1. the input to be written into buf
2. the length of p.name to be set to the length of buf
3. the ptr of p.name to be set to the ptr of buf

(note that operations 2 and 3 are actually expected to be a single atomic operation, but I've ordered them this way based on how the code in Thread B is written)

However, as you're no doubt aware, the underlying hardware and even the generated code dictate the order in which things actually occur. So it's theoretically possible for the above operations to happen in any order. What the mutex does in my original example is provide for two things:

1. that the assignment of p.name will be logically atomic for users of p
2. that any operation that happens before p.name is assigned in Thread A will be completed before the mutex is released

Thus, with the simple addition of a mutex within the p.name get/set operations above, the code is actually safe and correct (though obviously slower, since Thread B is spinning on a mutex lock).

I'll admit that in most real code, however, issue 1 will not exist, because the buffer returned from getInput will be created via an idup operation, which again involves a mutex-protected GC operation (with the current GC anyway). So the example was a bit contrived, but I feel it does illustrate the point that invariance as an optional attribute isn't a panacea for multiprogramming. At the very least with an imperative language, there must be some means of guaranteeing expected behavior for shared data. One route would be to have a fairly strict memory model (like Java) and another would be to provide constructs whereby the programmer can indicate the order of operations ('volatile' in D). C++0x will be somewhere in the middle by providing specific atomic types and operations without dictating too much about the relative order of other operations (last time I checked anyway--I haven't read the final proposal).

And please forgive me if I'm belaboring the obvious. It was easier to just cover the topic from end to end than to make assumptions about what was mutually understood.

Sean
Apr 02 2008
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Invariants won't remove the need for mutexes and the like, but 
invariants themselves won't need to be wrapped inside mutexes. 
Furthermore, accessing invariants won't need to consider memory fences 
and order of evaluation issues.
Apr 02 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Invariants won't remove the need for mutexes and the like, but
 invariants themselves won't need to be wrapped inside mutexes.
 Furthermore, accessing invariants won't need to consider memory fences
 and order of evaluation issues.
I don't know how to view topics as threaded with this web client so I'll assume you're responding to me. From the above, does this mean that you're taking back what you said previously:
 If C.name were invariant, there would be no need for any locks. I don't
 think this is a good example, because with D's invariant strings there
 is no need for such locking.
Also, I believe my example demonstrated that memory fences and order of evaluation issues are still a concern with invariants. They aren't a magical safe ticket to lock-free programming land.

Let me re-iterate my point by saying that the /only/ feature provided by invariants is that once invariant data is shared through a properly synchronized method, the invariant data is guaranteed not to change out from under the viewer. This is identical to the guarantee provided by returning a copy of the same data when it is requested. Invariant references simply guarantee that any copying will be done up front, automatically, so it does not have to occur manually later on.

In general, I feel that the amount of copying caused by invariants is greater than without them in a typical program, but invariants have the advantage of making this copying automatic. Java strings, for example, are ostensibly invariant (even though this can be subverted by the clever programmer), so experience with them there can be directly applied to invariant strings in D (Java memory model notwithstanding).

Sean
Apr 03 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 If C.name were invariant, there would be no need for any locks. I don't
 think this is a good example, because with D's invariant strings there
 is no need for such locking.
Also, I believe my example demonstrated that memory fences and order of evaluation issues are still a concern with invariants. They aren't a magical safe ticket to lock-free programming land.
While creating an invariant is something that should be done with care, once it is created, locks are no longer needed on it. After all, since it cannot change, where is the need to sync it?
 Let me re-iterate my point by saying that the /only/ feature provided by
 invariants is that once invariant data is shared through a properly
 synchronized method, the invariant is guaranteed not to change out from
 under the viewer.  This is identical to the guarantee provided by returning
 a copy of the same data when it is requested.  Invariant references simply
 guarantee that any copying will be done up front, automatically, so it does
 not have to occur manually later on.
You're right that invariants can give equivalent semantics to always getting a local copy.
 In general, I feel that the amount of copying caused by invariants is greater
 than without them in a typical program,
I'm not convinced of that. Invariants make sharing much more possible and practical, because one doesn't have to worry about making a local copy to protect it from some other alias.
Apr 03 2008
parent Sean Kelly <sean invisibleduck.org> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 If C.name were invariant, there would be no need for any locks. I don't
 think this is a good example, because with D's invariant strings there
 is no need for such locking.
Also, I believe my example demonstrated that memory fences and order of evaluation issues are still a concern with invariants. They aren't a magical safe ticket to lock-free programming land.
While creating an invariant is something that should be done with care, once it is created, locks are no longer needed on it. After all, since it cannot change, where is the need to sync it?
The issue is largely with being sure that the creation process is complete, and the changes made appropriately visible, by the time other threads attempt to view the data. Strings also impose an additional requirement: any shared reference must be set carefully, because the references are the size of two pointers and thus are not set in an atomic manner even on x86. My sample program illustrated both problems. Sadly, not even D's 'volatile' statement can do anything about safe setting of string references, since even a single assignment will still involve at least two separate stores.

Sean
Apr 03 2008
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
== Quote from Sean Kelly (sean invisibleduck.org)'s article
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Sean Kelly wrote:
 My traditional argument in support of logical const is this:

     class C
     {
         mutable mutex monitor;
         std::string name;
     public:
         std::string name() const
         {
             scoped_lock sl( monitor );
             return name;
         }

         void name( std::string const& n )
         {
             scoped_lock sl( monitor );
             name = n;
         }
     };

 Here, the mutex has nothing to do with the state of the object and
 must be modified even during logically non-modifying operations.
 Similar behavior might be necessary for a logging system in some
 instances, etc.  However, I'm not certain whether this is sufficient
 to justify logical const in general.  As you say--it has some serious
 problems as well.
If C.name were invariant, there would be no need for any locks. I don't think this is a good example, because with D's invariant strings there is no need for such locking.
I'm not sure that's true. Mutexes do two things: first, they guarantee atomicity for an arbitrary sequence of instructions and second, they provide a memory barrier so the result of those instructions is made visible to any other thread which later acquires the same mutex. The invariant label provides a slightly different guarantee: that the data underlying an invariant reference will not ever change.
I realized that I didn't mention it in this message so I wanted to follow up. From my perspective, the advantage of invariant strings is that the user doesn't have to worry about COW or anything like that because the duping is done up front. So:

    class Person
    {
        private string m_name;
        void name( string n ) { synchronized m_name = n; }
        string name() { synchronized return m_name; }
    }

If m_name were not invariant, the code would have to look like this to guarantee safety:

    class Person
    {
        private char[] m_name;
        void name( string n ) { synchronized m_name = n.dup; }
        string name() { synchronized return m_name.dup; } // added .dup
    }

Without the explicit copying, the creator of Person would have to rely on documentation to enforce ownership rules (i.e. that passing a string to Person.name indicates a transferral of ownership, and that the user should not retain a reference to this buffer or modify it after the transfer takes place), because there's no way to indicate them in code in D. In C++, as I'm sure you're aware, std::auto_ptr is used to indicate transfer of ownership for dynamic data, or plain old pass by value is used instead.

Sean
Apr 03 2008
prev sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 02/04/2008, Sean Kelly <sean invisibleduck.org> wrote:
 My traditional argument in support of logical const is this:

     class C
     {
         mutable mutex monitor;
         <snip>
     };
But in D, the class C will already have a mutex, by virtue of the fact that it derives from Object.

...which raises an interesting point. Can "synchronized" be used on a const object? What about an invariant object?
Apr 03 2008
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Janice Caron wrote:
 ...which raises an interesting point. Can "synchronized" be used on a
 const object? What about an invariant object?
"No" for an invariant object, because there is no point to locking something that is invariant. Since an invariant can be implicitly cast to const, that means it can't be allowed for const, either.
Apr 03 2008
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
== Quote from Janice Caron (caron800 googlemail.com)'s article
 On 02/04/2008, Sean Kelly <sean invisibleduck.org> wrote:
 My traditional argument in support of logical const is this:

     class C
     {
         mutable mutex monitor;
         <snip>
     };
But in D, the class C will already have a mutex, by virtue of the fact that it derives from Object.
Yup. I did mention object-local logs or other similar things, but mutexes are far and away my most common use of mutable in C++. So no worries here in D.
 ...which raises an interesting point. Can "synchronized" be used on a
 const object? What about an invariant object?
They'll work just fine with const objects in D. Object monitors in D currently live outside the const checking scheme. Sean
Apr 03 2008
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Sean Kelly" wrote
 == Quote from Janice Caron article
 ...which raises an interesting point. Can "synchronized" be used on a
 const object? What about an invariant object?
They'll work just fine with const objects in D. Object monitors in D currently live outside the const checking scheme.
Oh, so you mean Walter couldn't implement mutexes without logical const, huh? I think this part of the system should be ripped out, and you should have to pass mutexes along with objects everywhere you want to ensure thread safety, so that const stays transitive. That's easier and less obfuscated, right? -Steve
Apr 03 2008
parent reply Sean Reque <seanthenewt yahoo.com> writes:
Steven Schveighoffer Wrote:

 "Sean Kelly" wrote
 == Quote from Janice Caron article
 ...which raises an interesting point. Can "synchronized" be used on a
 const object? What about an invariant object?
They'll work just fine with const objects in D. Object monitors in D currently live outside the const checking scheme.
Oh so you mean Walter couldn't implement mutexes without logical const huh? I think this part of the system should be ripped out and you should have to pass mutexes along with objects everywhere you want to ensure thread safety and const is transitive. That's easier and less obfuscated, right? -Steve
The whole point of invariant objects and pure functions in terms of thread safety is that you don't need any form of synchronized data access. If, for instance, a piece of data is guaranteed not to change, then two cores can hold two separate copies of the same data in each of their local caches instead of having to fetch the same data from an external source. Requiring synchronization on all const objects would entirely defeat optimizations like these.

The end result is: don't use const, invariant, or pure if you plan on modifying internal data on an object. A caching calculator, for instance, is inherently not thread safe, because one thread might cause the calculator to write to the cache at the same time another thread reads from it, so the calculator should never be allowed to be const or invariant. If I understand correctly, Walter wants the compiler to be able to make assumptions on an invariant object that would not be possible if it had mutable members.

I think that the ideas of pure and invariant will be very valuable in the end and one of D's strong selling points. It will require more careful library design though, and some designs and features, such as caching calculations or synchronizing memory access, will simply not be usable with it.
Apr 03 2008
next sibling parent Sean Kelly <sean invisibleduck.org> writes:
== Quote from Sean Reque (seanthenewt yahoo.com)'s article
 Steven Schveighoffer Wrote:
 "Sean Kelly" wrote
 == Quote from Janice Caron article
 ...which raises an interesting point. Can "synchronized" be used on a
 const object? What about an invariant object?
They'll work just fine with const objects in D. Object monitors in D currently live outside the const checking scheme.
Oh so you mean Walter couldn't implement mutexes without logical const huh? I think this part of the system should be ripped out and you should have to pass mutexes along with objects everywhere you want to ensure thread safety and const is transitive. That's easier and less obfuscated, right? -Steve
The whole point of invariant objects and pure functions in terms of thread safety is that you don't
need any form of synchronized data access. If, for instance a piece of data is guaranteed not to change, then two cores can hold two separate copies of the same data in each of their local caches instead of having to fetch the same data from an external source. Requiring synchronization on all const objects would entirely defeat optimizations like these.

As I've illustrated previously, invariance doesn't eliminate the need for synchronization. It's true that if two threads have an existing reference to a shared piece of invariant data, then further accesses of that data do not need to be synchronized. But unless the data exists in ROM or was created at application start time, there must be a synchronization point involved when each new thread obtains a reference to this data. If this were not the case, lock-free programming would be largely unnecessary, or at least painfully simple.

The problem with shared-memory multiprogramming is the fact that memory is shared. Functional languages are good for multiprogramming because they don't have any shared data, not because their data is all invariant (though I believe the latter should help for additional automatic parallelization of loops and such, even though that's not commonly done today AFAIK).

Sean
Apr 03 2008
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Sean Reque" wrote
 Steven Schveighoffer Wrote:

 "Sean Kelly" wrote
 == Quote from Janice Caron article
 ...which raises an interesting point. Can "synchronized" be used on a
 const object? What about an invariant object?
They'll work just fine with const objects in D. Object monitors in D currently live outside the const checking scheme.
Oh so you mean Walter couldn't implement mutexes without logical const huh? I think this part of the system should be ripped out and you should have to pass mutexes along with objects everywhere you want to ensure thread safety and const is transitive. That's easier and less obfuscated, right? -Steve
The whole point of invariant objects and pure functions in terms of thread safety is that you don't need any form of synchronized data access. If, for instance a piece of data is guaranteed not to change, then two cores can hold two separate copies of the same data in each of their local caches instead of having to fetch the same data from an external source. Requiring synchronization on all const objects would entirely defeat optimizations like these.
Correct, for pure functions. But for non-pure functions, which is 100% of D2 code right now, you do need synchronization. When pure is introduced, I believe at least 75% of D code will not use it (just a guess, but I believe more D code will be non-pure than pure), and you STILL will need synchronization for those functions. In that case, you still need to synchronize at least const objects, and where do we store the mutex? WITH THE OBJECT! Which means it is not part of the object state, because otherwise you couldn't lock it while the object was const!

I'm not arguing about how mutexes are implemented. I'm pointing out the hypocrisy of how logical const is not allowed for developers, yet Walter uses it in the standard library to implement mutexes.
 The end result is, don't use const, invariant, or pure if you plan on 
 modifying internal data on an object. A caching calculator, for instance, 
 is inherently not thread safe, because one thread might cause the 
 calculator to write to the cache at the same time another thread reads 
 from it, so the calculator should never be allowed to be const or 
 invariant. If I understand correctly, Walter wants the compiler to be able 
 to make assumptions on an invariant object that would not be possible if 
 it had mutable members.
I'd like to do that, but the way D const works, it's very difficult. For example, what if a calculator is expected to be const? (See the other branch of this thread with the arguments between myself and Janice.) As it stands, I cannot pass my caching calculator unless I make the cache a global, which is totally unnecessary; it should be piggy-backed around with the calculator object, but not be part of the object state, just like mutexes are...
 I think that the ideas of pure and invariant will be very valuable in the 
 end and one of D's strong selling points. It will require more careful 
 library design though, and some designs and features, such as caching 
 calculations or synchronizing memory access, will simply not be usable 
 with it.
I do not have an opinion on whether pure and invariant will be huge for D, as I am not experienced enough with FP, but I know for a fact that having logical const will not inhibit this, just as logical const mutexes will not. -Steve
Apr 04 2008
prev sibling next sibling parent reply Jason House <jason.james.house gmail.com> writes:
Walter Bright wrote:

 If you do away with transitive const, you cannot have invariant either.
 Const allows you to reuse code that works on invariants to work with
 mutables, too.
 
 Logical const just utterly breaks the whole system. Every scheme we've
 looked at for logical const winds up in some way breaking invariants. If
 invariants are broken, the advantages of them all just go away. I
 suspect that logical const is like perpetual motion - if you think
 you've solved it, you've just made a mistake somewhere <g>. I also
 suspect that the failure of logical const validates the D scheme of
 invariants as being correct - there shouldn't be any holes in it.
 
 You're right that invariant by itself is not enough to specify a pure
 function, a pure function also cannot read or write global state. But
 invariant is still a necessary condition, it's just not sufficient.
Is it possible to describe what tricks the compiler can do given:

A. transitive const w/ mutable access to globals/statics
B. transitive invariance w/ mutable access to globals/statics
C. transitive const w/ const access to globals/statics
D. transitive invariance w/ invariant access to globals/statics

It'd be extremely helpful to have that kind of discussion in the const FAQ. Without it, I feel like we're all discussing theoretical stuff with little basis for our arguments.

PS: I went to digitalmars and couldn't find the const FAQ. It's not linked to from the normal FAQ and is not included in the left-side bar on the D 2.0 page(s).
 
 Peoples' troubles with const seem to stem from attempting to defeat it
 in one way or another. While defeating const is legal and trivial in
 C++, D tries to close those doors.
Apr 02 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jason House wrote:
 Walter Bright wrote:
 
 If you do away with transitive const, you cannot have invariant either.
 Const allows you to reuse code that works on invariants to work with
 mutables, too.

 Logical const just utterly breaks the whole system. Every scheme we've
 looked at for logical const winds up in some way breaking invariants. If
 invariants are broken, the advantages of them all just go away. I
 suspect that logical const is like perpetual motion - if you think
 you've solved it, you've just made a mistake somewhere <g>. I also
 suspect that the failure of logical const validates the D scheme of
 invariants as being correct - there shouldn't be any holes in it.

 You're right that invariant by itself is not enough to specify a pure
 function, a pure function also cannot read or write global state. But
 invariant is still a necessary condition, it's just not sufficient.
Is it possible to describe what tricks the compiler can do given: A. transitive const w/ mutable access to globals/statics B. transitive invariance w/ mutable access to globals/statics C. transitive const w/ const access to globals/statics D. transitive invariance w/ invariant access to globals/statics It'd be extremely helpful to have that kind of discussion in the const FAQ. Without it, I feel like we're all discussing theoretical stuff with little basis for our arguments.
Sure, but static class members are *not* part of the transitive closure state of the object. They're just global variables.
 PS: I went to digitalmars and couldn't find the const FAQ.  It's not linked
 to from the normal FAQ and is not included in the left-side bar on the D
 2.0 page(s).
That'll happen with the next update.
Apr 02 2008
next sibling parent Jason House <jason.james.house gmail.com> writes:
Walter Bright wrote:

 Jason House wrote:
 Walter Bright wrote:
 
 If you do away with transitive const, you cannot have invariant either.
 Const allows you to reuse code that works on invariants to work with
 mutables, too.

 Logical const just utterly breaks the whole system. Every scheme we've
 looked at for logical const winds up in some way breaking invariants. If
 invariants are broken, the advantages of them all just go away. I
 suspect that logical const is like perpetual motion - if you think
 you've solved it, you've just made a mistake somewhere <g>. I also
 suspect that the failure of logical const validates the D scheme of
 invariants as being correct - there shouldn't be any holes in it.

 You're right that invariant by itself is not enough to specify a pure
 function, a pure function also cannot read or write global state. But
 invariant is still a necessary condition, it's just not sufficient.
Is it possible to describe what tricks the compiler can do given: A. transitive const w/ mutable access to globals/statics B. transitive invariance w/ mutable access to globals/statics C. transitive const w/ const access to globals/statics D. transitive invariance w/ invariant access to globals/statics It'd be extremely helpful to have that kind of discussion in the const FAQ. Without it, I feel like we're all discussing theoretical stuff with little basis for our arguments.
Sure, but static class members are *not* part of the transitive closure state of the object. They're just global variables.
That's why I lumped the two together :) Does saying "sure" mean that we can look forward to seeing some info along those lines?
 PS: I went to digitalmars and couldn't find the const FAQ.  It's not
 linked to from the normal FAQ and is not included in the left-side bar on
 the D 2.0 page(s).
That'll happen with the next update.
Thanks.
Apr 02 2008
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Walter Bright" wrote
 Jason House wrote:
 Walter Bright wrote:

 If you do away with transitive const, you cannot have invariant either.
 Const allows you to reuse code that works on invariants to work with
 mutables, too.

 Logical const just utterly breaks the whole system. Every scheme we've
 looked at for logical const winds up in some way breaking invariants. If
 invariants are broken, the advantages of them all just go away. I
 suspect that logical const is like perpetual motion - if you think
 you've solved it, you've just made a mistake somewhere <g>. I also
 suspect that the failure of logical const validates the D scheme of
 invariants as being correct - there shouldn't be any holes in it.

 You're right that invariant by itself is not enough to specify a pure
 function, a pure function also cannot read or write global state. But
 invariant is still a necessary condition, it's just not sufficient.
Is it possible to describe what tricks the compiler can do given: A. transitive const w/ mutable access to globals/statics B. transitive invariance w/ mutable access to globals/statics C. transitive const w/ const access to globals/statics D. transitive invariance w/ invariant access to globals/statics It'd be extremely helpful to have that kind of discussion in the const FAQ. Without it, I feel like we're all discussing theoretical stuff with little basis for our arguments.
Sure, but static class members are *not* part of the transitive closure state of the object. They're just global variables.
Neither are mutable member variables. -Steve
Apr 03 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 "Walter Bright" wrote
 Sure, but static class members are *not* part of the transitive closure 
 state of the object. They're just global variables.
Neither are mutable member variables.
How can a member not be a member of the state of an object?
Apr 03 2008
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Walter Bright" wrote
 Steven Schveighoffer wrote:
 "Walter Bright" wrote
 Sure, but static class members are *not* part of the transitive closure 
 state of the object. They're just global variables.
Neither are mutable member variables.
How can a member not be a member of the state of an object?
The very act of declaring it mutable tells the compiler "this is not object state." I don't like the keyword mutable; what about 'nonstate' or something like that? It's not really saying "this will always be mutable", it's just saying "this is not part of the object state, so don't apply const casts to it," which happens to mean it's mutable when the rest of the object is const or invariant.

Declaring a nonstate member is basically a way to associate data with an object that is not owned by the object. For example, a cache in a calculator object may be used by the methods of the object, but it's not 'owned' by anything, just like the methods of the object can use global data, or data from the arguments passed to the function. I like to think of nonstate data as extra arguments passed to every method call, in addition to the 'this' pointer. In fact, this could be how it was implemented in the compiler (but I think implementing it inline with the class would have better performance):

    class C
    {
        // this is like a struct passed as an argument to all methods,
        // and is always mutable
        nonstate
        {
            int c;
        }

        // is passed an implicit nonstate pointer to the nonstate values
        int f(int x) const
        {
            return nonstate.c++;
        }
    }

This way, the nonstate part could be laid out in the same memory chunk as the class, and is passed around with the class, but is not technically part of the class. This is similar to how a mutex is associated with a class, lives in the same memory space as the class, but is not considered part of the transitive const closure of the object. There could be a vtable entry that points to the nonstate data, so the layout would move with derived classes.

-Steve
Apr 04 2008
parent reply Leandro Lucarella <llucax gmail.com> writes:
Steven Schveighoffer, el  4 de abril a las 09:24 me escribiste:
 class C
 {
    // this is like a struct passed as an argument to all methods, and is 
 always mutable
    nonstate
    {
        int c;
    }
 
    // is passed an implicit nonstate pointer to the nonstate values
    int f(int x) const
    {
        return nonstate.c++;
    }
 }
Why don't you just do something like this:

    static int[C] nonstate;

    class C
    {
        // is passed an implicit nonstate pointer to the nonstate values
        int f(int x) const
        {
            return nonstate[this] += 1;
        }
    }

If nonstate is not part of the object, why put it in it? I think the cache problem is a very specific problem, and it's OK for it to have a specific solution. Even more, I find it a lot clearer if the cache is outside the class, because it is not part of it (if it's not part of the state of the object, it's not part of the object at all!). So C.f() is clearly not pure, but it is const: it doesn't change the state of the object.

--
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Pack and get dressed before your father hears us, before all hell breaks loose.
Apr 04 2008
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 04/04/2008, Leandro Lucarella <llucax gmail.com> wrote:
 Why don't you just do something like this:
 <snip>
  If nonstate is not part of the object, why to put it in it?
You're having the same problem with Steven's jargon as I had. I found that terminology confusing. Rest assured, we /are/ talking about part of the object. If we come up with a better way of describing it, we'll tell the world. What we're talking about here is a member whose constancy cannot be changed (but whose value maybe can).
Apr 04 2008
parent Leandro Lucarella <llucax gmail.com> writes:
Janice Caron, el  4 de abril a las 17:46 me escribiste:
 On 04/04/2008, Leandro Lucarella <llucax gmail.com> wrote:
 Why don't you just do something like this:
 <snip>
  If nonstate is not part of the object, why to put it in it?
You're having the same problem with Steven's jargon as I had. I found that terminology confusing. Rest assured, we /are/ talking about part of the object. If we come up with a better way of describing it, we'll tell the world. What we're talking about here is a member whose constancy cannot be changed (but whose value maybe can).
What about a real example? -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------------- CAROZO CON FARINGITIS -- Crónica TV
Apr 04 2008
prev sibling next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 04/04/2008, Janice Caron <caron800 googlemail.com> wrote:
 On 04/04/2008, Leandro Lucarella <llucax gmail.com> wrote:
  > Why don't you just do something like this:

 <snip>
  If nonstate is not part of the object, why to put it in it?
You're having the same problem with Steven's jargon as I had. I found that terminology confusing. Rest assured, we /are/ talking about part of the object. If we come up with a better way of describing it, we'll tell the world. What we're talking about here is a member whose constancy cannot be changed (but whose value maybe can).
See the thread "unpaintable..." for an alternative description of the solution.
Apr 04 2008
parent Leandro Lucarella <llucax gmail.com> writes:
Janice Caron, el  4 de abril a las 17:53 me escribiste:
 On 04/04/2008, Janice Caron <caron800 googlemail.com> wrote:
 On 04/04/2008, Leandro Lucarella <llucax gmail.com> wrote:
  > Why don't you just do something like this:

 <snip>
  If nonstate is not part of the object, why to put it in it?
You're having the same problem with Steven's jargon as I had. I found that terminology confusing. Rest assured, we /are/ talking about part of the object. If we come up with a better way of describing it, we'll tell the world. What we're talking about here is a member whose constancy cannot be changed (but whose value maybe can).
See the thread "unpaintable..." for an alternative description of the solution.
I saw it and says nothing new to me. -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------------- This homeless guy asked me for some money the other day. And I was gonna give it to him but then I thought you're just gonna use it on drugs or alcohol. And then I thought, that's what I'm gonna use it on. Why am I judging this poor bastard.
Apr 04 2008
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Leandro Lucarella" wrote
 Steven Schveighoffer, on April 4 at 09:24, you wrote to me:
 class C
 {
    // this is like a struct passed as an argument to all methods, and is
 always mutable
    nonstate
    {
        int c;
    }

    // is passed an implicit nonstate pointer to the nonstate values
    int f(int x) const
    {
        return nonstate.c++;
    }
 }
Why don't you just do something like this:

static int[C] nonstate;

class C
{
    // is passed an implicit nonstate pointer to the nonstate values
    int f(int x) const
    {
        return nonstate[this] += 1;
    }
}
because this solution requires a non-trivial lookup on every access, whereas my solution does not. Plus you have more synchronization to worry about (all instances must share a global mutex on the cache).

The point is, yes, you can solve it that way. But why can't we have a way that has higher performance and fewer threading issues? There is no reason I can see.
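To make the cost comparison concrete, here is a minimal C++ sketch of the two approaches (C++ `mutable` standing in for the proposed D feature; the class and variable names are mine, purely for illustration). The external table pays a hash lookup and a shared lock on every call; the in-object member pays neither.

```cpp
#include <cassert>
#include <mutex>
#include <unordered_map>

class ViaTable;

// Hypothetical external "nonstate" table: every access pays a hash
// lookup, and all instances share one mutex for thread safety.
static std::unordered_map<const ViaTable*, int> nonstate;
static std::mutex nonstate_mutex;

class ViaTable {
public:
    // const method that mutates per-instance data through the table
    int f() const {
        std::lock_guard<std::mutex> lock(nonstate_mutex);
        return nonstate[this]++;   // hash lookup on every call
    }
};

// The proposed alternative: the mutable data lives in the object
// itself, so a const method reaches it directly.
class ViaMember {
    mutable int c = 0;
public:
    int f() const { return c++; }  // direct access, no lookup, no lock
};
```

Both classes behave identically from the caller's point of view; only the cost per call differs.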
 If nonstate is not part of the object, why put it in it? I think the
 cache problem is a very specific problem, which is OK to have a specific
 solution. Even more, I find it a lot clearer if the cache is outside the
 class, because it is not part of it (if it's not part of the state of the
 object, it's not part of the object at all!).
I agree it is confusing, but it's just a tacked-on piece that isn't part of the class, although it lives in the same allocated memory space. What if I put it like:

class C
{
    nonstate
    {
        int c;
    }

    body
    {
        int f(int x) const
        {
            return nonstate.c++;
        }
    }
}

similar to how function contracts split the body from the contract. Does this help your understanding?

What I don't get is how people can understand that static is not part of an instance, yet that goes into the class declaration, but they can't understand this.
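For comparison, the closest thing that exists today is C++'s `mutable` keyword, which is a rough sketch of what the nonstate block would mean (as Janice notes below, C++ lacks transitive const and invariant, so this is only an approximation of the proposal):

```cpp
#include <cassert>

// Rough C++ analogue of the proposed "nonstate" block: `mutable`
// marks a member writable even through a const reference, so a
// const method can still update it.
class C {
    mutable int c = 0;   // plays the role of nonstate.c
public:
    int f(int x) const {
        return c++;      // legal: c is exempt from const
    }
};
```

A caller holding a `const C&` can call `f` repeatedly and observe `c` advancing, which is exactly the "const interface, mutable tacked-on piece" behavior under discussion.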
 So C.f() is clearly not pure, but it's const, it doesn't change the state
 of the object.
Agreed. -Steve
Apr 04 2008
parent "Janice Caron" <caron800 googlemail.com> writes:
On 04/04/2008, Steven Schveighoffer <schveiguy yahoo.com> wrote:
  what I don't get is how people ... can't
  understand this.
It's because you (...and now me...) are describing a totally new concept which does not yet exist. Even the "mutable" keyword in C++ doesn't come close to this, because C++ doesn't have either transitive const or invariant. This really is something new. (Everyone else, see the "unpaintable..." thread for a first stab at an explanation).
Apr 04 2008
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 Steven Schveighoffer wrote:
 "Walter Bright" wrote
 Sure, but static class members are *not* part of the transitive 
 closure state of the object. They're just global variables.
Neither are mutable member variables.
How can a member not be a member of the state of an object?
If you have a tree node, you can have a member that is a link to the parent node. While the child nodes are considered part of the state of the tree node, the parent node is not. (That's why it's conceptually an optional member in such a structure, unlike the child nodes.)

Another example is a GUI widget object that has a link (aka a member) to its Display (a class that represents the display properties where the GUI object will be rendered). The Display member is not part of the state of the GUI widget. It makes perfect sense to want const (or invariant) to restrict its effect to the other members of the GUI widget, but not the Display member (and others which are not part of the logical state of the object).

It all comes down to the familiar concept of composition vs. aggregation. I can give a link to an academic paper that details what I've mentioned, if you're still not convinced.

-- 
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
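The tree-node example can be sketched in C++ (`mutable` again standing in for the logical-const member being argued for; the `Node` type and `reparent` method are my own illustrative names). The children are composition, so const protects them; the parent back link is aggregation, so it stays writable:

```cpp
#include <cassert>
#include <vector>

// The children are part of the node's logical state, so const
// protects them. The parent link is an aggregation-style back
// reference, not logical state, so it is marked mutable.
struct Node {
    std::vector<Node*> children;    // logical state
    mutable Node* parent = nullptr; // back link, exempt from const

    void reparent(Node* p) const {
        parent = p;                 // allowed even on a const Node
    }
};
```

A const `Node` can be re-seated under a new parent without its logical state (the children) ever being touched.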
Apr 27 2008
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Walter Bright" wrote
 If you do away with transitive const, you cannot have invariant either. 
 Const allows you to reuse code that works on invariants to work with 
 mutables, too.
I'm not "doing away with" transitive const. I'm saying we should make the exception that already exists (global variables) easier to implement. I still believe that the default meaning of const should be transitive; this is unlike C++'s const. The exception is when you declare a piece of a class to be mutable: it is then declared to be not part of the state of the class, but a piece of data associated with the class, just as a global AA would associate a class instance (the key) with some data (the value).
 Logical const just utterly breaks the whole system. Every scheme we've 
 looked at for logical const winds up in some way breaking invariants. If 
 invariants are broken, the advantages of them all just go away. I suspect 
 that logical const is like perpetual motion - if you think you've solved 
 it, you've just made a mistake somewhere <g>. I also suspect that the 
 failure of logical const validates the D scheme of invariants as being 
 correct - there shouldn't be any holes in it.
Think of the mutable portion of logically invariant classes as global variables, but with better protection capabilities.

And saying "logical const winds up in some way breaking invariants" doesn't make it true :) In fact, it is false. I have proven that logical const is already possible. You have not pointed out any mistakes I have made.

History is full of examples where people were absolutely sure that something was true, but they turned out to be wrong. I have offered proof for my views. You have offered anecdotes and analogies. Please try harder :)
 You're right that invariant by itself is not enough to specify a pure 
 function, a pure function also cannot read or write global state. But 
 invariant is still a necessary condition, it's just not sufficient.
Exactly, since the mutable portion is not part of the object state, it should be inaccessible to pure functions.
 Peoples' troubles with const seem to stem from attempting to defeat it in 
 one way or another. While defeating const is legal and trivial in C++, D 
 tries to close those doors.
I'm not trying to defeat the const system. I'm showing you that the const system already supports "operationally equivalent" logical const. Since it already supports an equivalent version of logical const, why not support it directly, and make life easier for the coders who want to use it?

I can come up with a set of rules for logical const that allow pure functions, where the compiler can statically verify that a function has no side effects and is not affected by any other function's side effects. I think this is the goal, is it not?

-Steve
Apr 03 2008
prev sibling parent reply guslay <guslay gmail.com> writes:
Walter Bright Wrote:

 
 Logical const just utterly breaks the whole system. 
I think we had different expectations about invariant and pure.

Do you believe that (A),

    invariant A a;
    int x = f_pure(a);
    g_not_pure(a); // potentially mutating mutable members of A
    int y = f_pure(a);
    int z = f_pure(a);

which can be transformed without problem to (B),

    invariant A a;
    int x = f_pure(a);
    g_not_pure(a);
    int y = f_pure(a);
    int z = y;

MUST also be equivalent to (C)?

    invariant A a;
    int x = f_pure(a);
    int y = x;
    int z = x;
    g_not_pure(a);

A = B = C is only possible if invariant A has no mutable part. I agree. Is this your position?

I was expecting a weaker pure model, where only A = B is true (optimization based on purity is only possible inside a contiguous block of pure functions). If this is correct, then I understand your concerns about logical const.
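The A-versus-C distinction can be demonstrated concretely. In this C++ sketch (names follow the post; `mutable` is a stand-in for the hypothetical logical-const member, and "f_pure" is pure in name only), caching the first result as transformation (C) does produces a different answer than re-evaluating as in (A):

```cpp
#include <cassert>

// A has a mutable member; f_pure reads it, g_not_pure mutates it.
struct A {
    mutable int hidden = 0;
};

int  f_pure(const A& a)     { return a.hidden; }  // "pure" in name only
void g_not_pure(const A& a) { a.hidden++; }

// Version (A): every call to f_pure is re-evaluated.
int version_a(const A& a) {
    int x = f_pure(a);
    g_not_pure(a);
    int y = f_pure(a);
    (void)x;
    return y;            // sees the mutation
}

// Version (C): the compiler reuses the first result.
int version_c(const A& a) {
    int x = f_pure(a);
    int y = x;           // cached, never re-evaluated
    g_not_pure(a);
    return y;            // misses the mutation
}
```

Starting from a fresh object, (A) returns 1 while (C) returns 0, so the two transformations are observably different once a mutable member leaks into the "pure" function.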
Apr 03 2008
next sibling parent reply Fawzi Mohamed <fmohamed mac.com> writes:
I understand Steven's proposal, and it is a nice proposal and it could 
work, and indeed it is nice to have a way out of const, but I am 
terribly afraid of this.

Maybe I am wrong (in this case please point out a practical example where 
this is not the case), but what is the use of such data?

The only uses that I can imagine (if the object is really constant) are 
cache or performance related, so information that is not really needed, but 
useful for improving performance.

So naturally "pure" functions would also like to take advantage of it, 
or the usefulness of the whole thing is much diminished.
But then the cache has to be done *very* carefully to avoid corruptions 
due to multiple threads accessing the object...

It is possible but really not easy (and it needs locking or atomic 
operations that probably will not be good for performance), so very 
likely it will be done badly and will lead to very subtle bugs.

I really think that the controlled and already-studied ways to relax 
const are "suspended evaluation" and "undefined settable value 
blocking on read".

I also think that what Walter wants is, as guslay said,

 A = B = C
because you never know (and for performance reasons do not want to know) if another thread is executing a g_not_pure on the same data on which a pure function is working. You do not want to acquire locks before executing a pure function.

Fawzi
Apr 04 2008
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Fawzi Mohamed" wrote
 So naturally "pure" functions would also like to take advantage of it, or 
 the usefulness of the whole thing is much diminished.
 But then the cache has to be done *very* carefully to avoid corruptions 
 due to multiple threads accessing the object...
In fact, if pure does memoization, it too must be aware of threading implications, unless the memoization is per-thread (and I have no idea why it would be).
 It is possible but really not easy (and needs locking or atomic operations 
 that probably will not be good for performance), so very likely it will be 
 done badly and will lead to very subtle bugs.
Anytime you run 2 non-pure functions that access mutable data, be it global or the logical-const portion of an object, you will need synchronization. Adding logical const does not add to or diminish that requirement.

For pure to be statically verifiable by the compiler, a pure function must not be allowed to access any mutable data that is simultaneously accessible through another thread; since it cannot touch that data at all, no synchronization is necessary.
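Fawzi's worry about a carelessly shared cache can be sketched in C++ (the `Expensive` class and its members are my own illustrative names; this assumes C++17 for `std::optional`). Once a const object carries a mutable cache, the object itself must guard that cache, because "const" no longer implies "safe to read from many threads":

```cpp
#include <cassert>
#include <mutex>
#include <optional>

// A memoizing const method must synchronize its own mutable cache:
// two threads calling value() on a shared const object would
// otherwise race on the cache write.
class Expensive {
    mutable std::mutex m;
    mutable std::optional<int> cache;
public:
    int value() const {
        std::lock_guard<std::mutex> lock(m);  // serialize cache access
        if (!cache)
            cache = 6 * 7;  // stand-in for an expensive computation
        return *cache;
    }
};
```

The second call returns the cached result without recomputing, and the mutex is the price paid for letting mutable data hide behind a const interface.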
 I really think that the controlled and already studied ways to relax the 
 const are "suspended evaluation", and "undefined settable value blocking 
 on read".
I'm not familiar with these concepts. Frankly, I'm not sure they are relevant to this discussion (although they might be pertinent to making multiprogramming easier). I'm trying to take an already existing possibility, and make it easier to implement and perform better.
 I also think that indeed what walter wants is as guslay said

 A = B = C
because you never know (and for performance reasons do not want to know) if another thread is executing a g_not_pure on the same data on which a pure function is working.
But if A is invariant, it cannot be changed, so there are no thread concerns. Think of the mutable portion of a logical-const class as being global data (or, as I like to think of it, not part of the class state). A pure function cannot access global data, and so likewise it cannot access the mutable portions of the class. There is no need for synchronization, because the pure function's output will not change if the mutable portion changes.

-Steve
Apr 04 2008
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"guslay" wrote
 Walter Bright Wrote:

 Logical const just utterly breaks the whole system.
 <snip: the (A)/(B)/(C) example from guslay's post above>
If we say that the mutable part of invariant A is not accessible during pure functions, then (A = B = C) holds true. In fact, I would argue that for logical const, the mutable part is not part of the invariant A instance, but is an 'associated' or 'tacked-on' piece that happens to be carried around with A. It is used by A as a helper for various tools such as synchronization, memoization, etc., but it is not part of A's state. Think of it as living in A's namespace, but not in the instance, and so it is inaccessible to pure functions.

-Steve
Apr 04 2008
prev sibling parent reply guslay <guslay gmail.com> writes:
Just a quick reply; I still think this should be discussed elsewhere...

 If we say that the mutable part of invariant A is not accessible during pure
 functions, then (A = B = C) holds true.  In fact, I would argue that for
 logical const, the mutable part is not part of the invariant A instance, but
 is an 'associated' or 'tacked-on' piece that happens to be carried around
 with A.  It is used by A to help for various tools such as syncrhonization,
 memoization, etc, but it is not part of A's state.  Think of it as living in
 A's namespace, but not in the instance, and so it is inaccessible to pure
 functions.
 
 -Steve 
My idea was that you cannot prove A = B = C if pure functions can access mutable members of an invariant object.

- Class A has a mutable _lock member
- f = isLocked(), pure
- g = unlock(), not pure

But now that I think of it, you are right: the same constraints also apply to global state. So to make purity work, you would need to apply the same rules to mutable members as to global state.

Minimally,
- Forbid pure functions to call non-pure functions or to access mutable members directly.
- Forbid pure functions to call non-pure functions or to access mutable global state directly.

More drastically,
- Forbid invariant objects with mutable members from taking any part in pure expressions.
- Forbid invariant objects accessing mutable global state from taking any part in pure expressions.

The only things I see that can break a pure method are:

- If A is an abstract type, the concrete type of the object can subvert the purity analysis.
- Casting inside the pure function.
- An invariant parameter that is not really invariant.
- Aliasing.

All those things are some sort of cheating, and they have nothing to do with mutable members. I still believe!
Apr 04 2008
parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"guslay" wrote
 Just a quick reply, I still this it should be discussed elsewhere...
I like this forum, as it allows people to make points where people who might be interested are usually listening. Walter also at least reads the NG. Outlook Express automatically threads the NG for me, so if there is a thread I'm not interested in, I just ignore it (the web interface sucks, don't use it :) ). Another form of discussion might not be heard by people who have good ideas, and I'd rather everything be open. If people don't want to discuss this, just ignore the thread.
 More drastically,
 - Forbid invariant object with mutable members to take any part in pure 
 expressions.
I don't think this is possible without fully transitive const, as the compiler cannot be sure that a derived class or concrete implementation does not use mutable members. This could be accomplished with extra tagging, i.e. transitive invariant / logical invariant. That being said, I don't think this is necessary.
 To only things I see that can break a pure method are:

 - If A is abstract type, can the concrete type of the object subvert the 
 purity analysis.
Since a pure function can only call other pure functions, purity is fully transitive (and has to be in order to be statically verified), and therefore you cannot subvert the purity analysis in a derived type because you should only be able to implement/override a pure method signature with a pure method. However, the cheats below can do this, so this is really a result of one of the cheats :)
 - Casting inside the pure function.
I think this should be allowed, but undefined, just like casting away const is. There may be cases where casting can allow for a huge optimization, and still be a pure function, but there is no way to statically analyze it.
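C++ already works the way Steve describes: `const_cast` compiles, but modifying the referent is well-defined only when the underlying object was not actually declared const. A minimal sketch (the `observe` function is my own illustrative name):

```cpp
#include <cassert>

// const_cast is "allowed but undefined": this is well-defined only
// because the referent passed in below is a non-const object.
// Applying the same trick to an object declared const would be
// undefined behavior, which is exactly why it defeats static analysis.
int observe(const int& r) {
    return const_cast<int&>(r)++;  // reads, then mutates through const
}
```

The caller sees the old value returned and the variable silently incremented, despite the const signature, which is why a pure function doing this could never be statically verified.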
 - Invariant parameter not really invariant.
Agree.
 - Aliasing.
Not sure what you mean here...
 All those things are some sort of cheating, and those have nothing to do 
 with mutable members.
Yes.
 I still believe!
:) -Steve
Apr 04 2008