
digitalmars.D - Contradictory justification for status quo

reply "deadalnix" <deadalnix gmail.com> writes:
Here is something I've noticed going on various times, recently in 
the memory management threads, but to avoid an already heated 
debate, I'll use enum types as an example.

We have a problem with the way enums are defined. If you have:

enum E { A, B, C }
E e;

We have (1):
final switch(e) with(E) {
     case A:
         // ...
     case B:
         // ...
     case C:
         // ...
}

But also have (2):
typeof(E.A | E.B) == E
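
(To make the tension concrete, here is a minimal sketch, assuming the 
2015 behavior above where binary operators on an enum keep the enum 
type; the flag-style values are mine, chosen to make the clash 
visible:)

import std.stdio;

enum F { A = 1, B = 2, C = 4 }

void main()
{
    F f = F.A | F.B;   // legal under (2): typeof(F.A | F.B) == F
    final switch (f)   // (1) promises every value of F is handled...
    {
        case F.A: writeln("A"); break;
        case F.B: writeln("B"); break;
        case F.C: writeln("C"); break;
    }
    // ...but f == 3 matches no member, so checked builds throw
    // core.exception.SwitchError at runtime.
}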

When raising this, the discussion goes as follows:
  - "If you have (1), we can't have (2), as it breaks the guarantee 
(1) relies on."
  - "(2) is useful. For instance UserID or Color."
  - "Then let's get rid of (1)."
  - "(1) is useful, for instance to achieve X or Y."
  - "Let's create a new type in the language; (1) would be enum 
and (2) would be this new type."
  - "It adds too much complexity to the language."

This very conversation went on in a very lengthy thread a while 
ago (note: for SDC I just dropped (2), made typeof(E.A | E.B) == 
int, and consider it a closed issue).

It can go on forever, as every reason given for every concern 
raised is valid. Yes, adding a new type is probably too much for 
the benefit; yes, (1) and (2) are useful in various scenarios. 
And, by refusing to take a step back and look at the problem as a 
whole, we can turn around forever and never conclude anything, 
benefiting only the status quo.

I've seen this attitude at work on various topics. It is the 
surest and fastest way to end up with C++ without the C source 
compatibility. I'd like us to just stop this attitude.
Feb 25 2015
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-02-25 22:06, deadalnix wrote:
 Here is something I've noticed going on various times, recently in the
 memory management threads, but to avoid an already heated debate,
 I'll use enum types as an example.

 We have a problem with the way enums are defined. If you have:

 enum E { A, B, C }
 E e;

 We have (1):
 final switch(e) with(E) {
      case A:
          // ...
      case B:
          // ...
      case C:
          // ...
 }

 But also have (2):
 typeof(E.A | E.B) == E
How about allowing something like this:

enum E : void { A, B, C }

The above would disallow use case (2). The compiler would 
statically make sure a variable of type "E" can never have any 
value other than E.A, E.B or E.C.

This should be completely backwards compatible since the above 
syntax is currently not allowed. It also doesn't introduce a new 
type, at least not syntactically.

-- 
/Jacob Carlborg
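
(A runnable illustration of the hole this proposal would close; a 
minimal sketch of today's behavior, where nothing confines a variable 
of an enum type to its enumerated members:)

enum E { A, B, C }

void main()
{
    E e = cast(E) 42;  // accepted: e now holds none of A, B or C
    assert(e != E.A && e != E.B && e != E.C);
    // Under the proposed "enum E : void", the compiler would have to
    // reject any construction of an E outside { A, B, C }.
}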
Feb 25 2015
prev sibling next sibling parent "Kagamin" <spam here.lot> writes:
On Wednesday, 25 February 2015 at 21:06:54 UTC, deadalnix wrote:
 This very conversation went on in a very lengthy thread a while
 ago (note: for SDC I just dropped (2), made typeof(E.A | E.B) ==
 int, and consider it a closed issue).
If you keep the type, you can decompose a combined flag set into its 
named members, the way .NET's FlagsAttribute does:

https://msdn.microsoft.com/en-us/library/vstudio/system.flagsattribute.aspx
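
(A sketch of that idea in D: `flagNames` is a hypothetical helper, not 
from the thread, that renders a combined value member-by-member the 
way .NET's FlagsAttribute formatting does. The cast covers compilers 
where the `|` result is typed as the base type rather than the enum:)

import std.conv : to;
import std.stdio;
import std.traits : EnumMembers;

enum Perm { Read = 1, Write = 2, Exec = 4 }

string flagNames(E)(E value)
{
    string result;
    foreach (m; EnumMembers!E)   // walk the declared members
        if (value & m)           // test each flag bit
            result ~= (result.length ? "|" : "") ~ m.to!string;
    return result.length ? result : "none";
}

void main()
{
    auto set = cast(Perm)(Perm.Read | Perm.Write);
    writeln(flagNames(set)); // prints: Read|Write
}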
Feb 25 2015
prev sibling parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wednesday, February 25, 2015 21:06:53 deadalnix via Digitalmars-d wrote:
 Here is something I've noticed going on various times, recently in
 the memory management threads, but to avoid an already heated
 debate, I'll use enum types as an example.

 We have a problem with the way enums are defined. If you have:

 enum E { A, B, C }
 E e;

 We have (1):
 final switch(e) with(E) {
      case A:
          // ...
      case B:
          // ...
      case C:
          // ...
 }

 But also have (2):
 typeof(E.A | E.B) == E

 When raising this, the discussion goes as follows:
   - "If you have (1), we can't have (2), as it breaks the guarantee
 (1) relies on."
   - "(2) is useful. For instance UserID or Color."
   - "Then let's get rid of (1)."
   - "(1) is useful, for instance to achieve X or Y."
   - "Let's create a new type in the language; (1) would be enum
 and (2) would be this new type."
   - "It adds too much complexity to the language."

 This very conversation went on in a very lengthy thread a while
 ago (note: for SDC I just dropped (2), made typeof(E.A | E.B) == int,
 and consider it a closed issue).

 It can go on forever, as every reason given for every concern raised
 is valid. Yes, adding a new type is probably too much for the
 benefit; yes, (1) and (2) are useful in various scenarios. And, by
 refusing to take a step back and look at the problem as a whole, we
 can turn around forever and never conclude anything, benefiting
 only the status quo.

 I've seen this attitude at work on various topics. It is the
 surest and fastest way to end up with C++ without the C source
 compatibility. I'd like us to just stop this attitude.
Personally, I think that the status quo with enums is horrible, that 
no operation should be legal on them which is not guaranteed to 
result in a valid enum value, and that if you want to create a new 
type that can have values other than those enumerated, you should 
create a struct with some predefined static values and not use enums 
at all. But others disagree vehemently (Andrei included). However, as 
it stands, and as you point out, final switch is completely broken, 
which is not a subjective issue at all but rather quite an objective 
one.

The solution that was proposed (by Andrei, I think) the last time I 
was in a discussion on this was to introduce the concept of "final 
enums", so that if you do something like

final enum E { A, B, C }

it's then the case that no operation on E is legal unless it's 
guaranteed to result in an E (with casting being the way out when 
required, of course), and then we would move towards making final 
switch illegal on normal enums. I was going to create a DIP for it 
but forgot.

I still think that it's something that needs to be resolved, though, 
since otherwise enums are ugly to work with (subjective though that 
may be), and final switch is broken.

- Jonathan M Davis
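
(For reference, a minimal sketch of the struct-with-static-values 
pattern described above; `UserID` and its members are illustrative 
names, not an API from the thread:)

struct UserID
{
    uint value;

    // Predefined values play the role of enum members...
    static immutable UserID anonymous = UserID(0);
    static immutable UserID root      = UserID(1);

    // ...but only the operations you define exist; there is no
    // implicit '|', '+', or conversion to smuggle in other values.
}

void main()
{
    auto id = UserID.root;
    assert(id != UserID.anonymous);
    // auto bad = UserID.root | UserID.anonymous; // does not compile
}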
Feb 26 2015
parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Thursday, 26 February 2015 at 09:49:26 UTC, Jonathan M Davis 
wrote:
 On Wednesday, February 25, 2015 21:06:53 deadalnix via 
 Digitalmars-d wrote:
 I'll use enum types as an example.
<snip>
 I've seen this attitude going on on various topics. This is the
 surest and fastest way to end up with C++ without the C source
 compatibility. I'd like that we just stop this attitude.
 Personally, I think that the status quo with enums is horrible 
 ...
I think the real problem clearly goes beyond enums; it's the overall 
approach to changes in the D language itself.

I share deadalnix's worries.

---
Paolo
Feb 26 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 11:10:16 UTC, Paolo Invernizzi 
wrote:
 I think the real problem clearly goes beyond enums; it's the
 overall approach to changes in the D language itself.

 I share deadalnix's worries.

 ---
 Paolo
Yes, I don't care about the specific enum case; in fact, it is one of 
the least offenders, and this is why I chose it as an example here.

What worries me is that whichever way you take the problem, there is 
a good reason not to proceed. And, taken independently, every one of 
these reasons is valid, or at least something reasonable people can 
agree upon. But there is no way we can agree on all of these at once.
Feb 26 2015
next sibling parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thursday, February 26, 2015 20:35:02 deadalnix via Digitalmars-d wrote:
 On Thursday, 26 February 2015 at 11:10:16 UTC, Paolo Invernizzi
 wrote:
 I think the real problem clearly goes beyond enums; it's the
 overall approach to changes in the D language itself.

 I share deadalnix's worries.

 ---
 Paolo
 Yes, I don't care about the specific enum case; in fact, it is one of
 the least offenders, and this is why I chose it as an example here.

 What worries me is that whichever way you take the problem, there is
 a good reason not to proceed. And, taken independently, every one of
 these reasons is valid, or at least something reasonable people can
 agree upon. But there is no way we can agree on all of these at once.
Well, I suspect that each case would have to be examined individually 
to decide upon the best action, but I think that what it comes down 
to is the same problem that we have with getting anything done around 
here: someone has to do it.

For instance, we all agree that std.xml needs to be replaced with 
something range-based and fast rather than what we currently have, 
but no one has made the time or put forth the effort to create a 
replacement and get it through the review process. Someone has to 
step up and do it, or it'll never get done.

With language changes, it's often the same. Someone needs to come up 
with a reasonable solution and then create a PR for it. They then 
have a much stronger position to argue from, and it may get in and 
settle the issue. And it may not get merged, because it still can't 
be agreed upon as the correct solution, but an actual implementation 
carries a lot more weight than an idea, and in most of these 
discussions, we just discuss ideas without actually getting the work 
done.

If we want stuff like this to get done, then more of us need to find 
the time and put forth the effort to actually do it, and be willing 
to have put forth the time and effort only to have our work rejected. 
But many of us are very busy, and we've failed to attract enough new 
contributors to get many big things done by folks who haven't already 
been doing a lot. We probably need to find a way to encourage folks 
to do bigger things than simply making small improvements to Phobos, 
be it writing a potential new module for Phobos or implementing 
critical stuff in the compiler.

- Jonathan M Davis
Feb 26 2015
parent reply "Zach the Mystic" <reachzach gggmail.com> writes:
On Friday, 27 February 2015 at 01:33:58 UTC, Jonathan M Davis 
wrote:
 Well, I suspect that each case would have to be examined 
 individually to
 decide upon the best action, but I think that what it comes 
 down to is the
 same problem that we have with getting anything done around 
 here - someone
 has to do it.
This isn't true at all. Things need to be approved first, then implemented.
 With language changes, it's often the same. Someone needs to 
 come up with a
 reasonable solution and then create a PR for it.  They then 
 have a much
 stronger position to argue from, and it may get in and settle 
 the issue.
I sometimes feel so bad for Kenji, who has come up with several 
reasonable solutions for longstanding problems, *and* implemented 
them, only to have them be frozen for *years* by indecision at the 
top. I'll never believe your side until this changes. You can see 
exactly how D works by looking at how Kenji spends his time. For a 
while he's only been fixing ICEs and other little bugs which he knows 
for certain will be accepted.

I'm not saying any of these top-level decisions are easy, but I don't 
believe you for a second, at least when it comes to the language 
itself. Phobos may be different.
Feb 26 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/26/15 5:48 PM, Zach the Mystic wrote:
 I sometimes feel so bad for Kenji, who has come up with several
 reasonable solutions for longstanding problems, *and* implemented them,
 only to have them be frozen for *years* by indecision at the top.
Yah, we need to be quicker with making decisions, even negative. This requires collaboration from both sides - people shouldn't get furious if their proposal is rejected. Kenji has been incredibly gracious about this. -- Andrei
Feb 26 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Feb 26, 2015 at 05:57:53PM -0800, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 2/26/15 5:48 PM, Zach the Mystic wrote:
I sometimes feel so bad for Kenji, who has come up with several
reasonable solutions for longstanding problems, *and* implemented
them, only to have them be frozen for *years* by indecision at the
top.
Yah, we need to be quicker with making decisions, even negative. This requires collaboration from both sides - people shouldn't get furious if their proposal is rejected. Kenji has been incredibly gracious about this.
[...]

I don't think people would be furious if they knew from the beginning 
that something would be rejected. At least, most reasonable people 
won't, and I'm assuming that the set of unreasonable people who 
contribute major features is rather small (i.e., near cardinality 0).

What *does* make people furious / disillusioned is when they are led 
to believe that their work would be accepted, and then after they put 
in all the effort to implement it, make it mergeable, keep it up to 
date with the moving target of git HEAD, etc., it then gets summarily 
dismissed. Or ignored for months and years, and then suddenly shot 
down. Or worse, get *merged*, only to be reverted later because the 
people who didn't bother giving feedback earlier now show up and 
decide that they don't like the idea after all. (It's a different 
story if post-merge rejection happened because it failed in practice 
-- I think reasonable people would accept that. But post-merge 
rejection because of earlier indecision / silence kills morale really 
quickly. Don't expect to attract major contributors if morale is 
low.)

T

-- 
One disk to rule them all, One disk to find them. One disk to bring 
them all and in the darkness grind them. In the Land of Redmond 
where the shadows lie. -- The Silicon Valley Tarot
Feb 26 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/26/15 6:17 PM, H. S. Teoh via Digitalmars-d wrote:
 On Thu, Feb 26, 2015 at 05:57:53PM -0800, Andrei Alexandrescu via
Digitalmars-d wrote:
 On 2/26/15 5:48 PM, Zach the Mystic wrote:
 I sometimes feel so bad for Kenji, who has come up with several
 reasonable solutions for longstanding problems, *and* implemented
 them, only to have them be frozen for *years* by indecision at the
 top.
Yah, we need to be quicker with making decisions, even negative. This requires collaboration from both sides - people shouldn't get furious if their proposal is rejected. Kenji has been incredibly gracious about this.
[...] I don't think people would be furious if they knew from the beginning that something would be rejected. At least, most reasonable people won't, and I'm assuming that the set of unreasonable people who contribute major features is rather small (i.e., near cardinality 0).
Well, yes, in theory there's no difference between theory and 
practice, etc. What has happened historically (fortunately not as 
much lately) was that statistically most proposals have been simply 
Not Good, while their authors have been Positively Convinced that 
they were Obviously Excellent. (That includes me; statistically most 
ideas I've ever had have been utter crap, but they seldom seemed like 
it in the beginning.) This cycle has happened numerous times. We've 
handled it poorly in the past, and we're working on handling it 
better.
 What *does* make people furious / disillusioned is when they are led to
 believe that their work would be accepted, and then after they put in
 all the effort to implement it, make it mergeable, keep it up to date
 with the moving target of git HEAD, etc., it then gets summarily
 dismissed.  Or ignored for months and years, and then suddenly shot
 down. Or worse, get *merged*, only to be reverted later because the
 people who didn't bother giving feedback earlier now show up and decide
 that they don't like the idea after all.  (It's a different story if
 post-merge rejection happened because it failed in practice -- I think
 reasonable people would accept that.  But post-merge rejection because
 of earlier indecision / silence kills morale really quickly. Don't
 expect to attract major contributors if morale is low.)
Yes, going back on a decision or promise is a failure of leadership. 
For example, what happened with [$] was regrettable. We will do our 
best to avoid such in the future.

I should add, however, that effort in and by itself does not warrant 
approval per se. Labor is a prerequisite of any good accomplishment, 
but is not all that's needed.

I'm following with interest the discussion "My Reference Safety 
System (DIP???)". Right now it looks like a lot of work - a long 
opener, subsequent refinements, good discussion. It also seems just 
that - there's work, but there's no edge to it yet; right now a DIP 
along those ideas is more likely to be rejected than approved. But I 
certainly hope something good will come out of it. What I hope will 
NOT happen is that people come to me with a mediocre proposal going, 
"We've put a lot of Work into this. Well?"

Andrei
Feb 26 2015
next sibling parent reply "Zach the Mystic" <reachzach gggmail.com> writes:
On Friday, 27 February 2015 at 02:58:31 UTC, Andrei Alexandrescu 
wrote:
 I'm following with interest the discussion "My Reference Safety 
 System (DIP???)". Right now it looks like a lot of work - a 
 long opener, subsequent refinements, good discussion. It also 
 seems just that - there's work but there's no edge to it yet; 
 right now a DIP along those ideas is more likely to be rejected 
 than approved. But I certainly hope something good will come 
 out of it. What I hope will NOT happen is that people come to 
 me with a mediocre proposal going, "We've put a lot of Work 
 into this. Well?"
Can I ask you a general question about safety: if you became 
convinced that really great safety would *require* more function 
attributes, what would be the threshold for including them? I'm 
trying to "go the whole hog" with safety, but I'm paying what seems 
to me the necessary price -- more parameter attributes. Some of these 
gains ("out!" parameters, e.g.) seem like they would only apply to 
very rare code, and yet they *must* be there in order for functions 
to "talk" to each other accurately.

Are you interested in accommodating the rare use cases for the sake 
of robust safety, or do you just want to stop at the very common use 
cases ("ref returns", e.g.)? "ref returns" will probably cover more 
than half of all use cases for memory safety. Each smaller category 
will require additions to what a function signature can contain 
(starting with expanding `return` to all reference types, e.g.), 
while covering a smaller number of actual use cases... but on the 
other hand, it's precisely because they cover fewer use cases that 
they will appear so much less often.
Feb 26 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/26/15 8:21 PM, Zach the Mystic wrote:
 Can I ask you a general question about safety: If you became convinced
 that really great safety would *require* more function attributes, what
 would be the threshold for including them? I'm trying to "go the whole
 hog" with safety, but I'm paying what seems to me the necessary price --
 more parameter attributes. Some of these gains ("out!" parameters, e.g.)
 seem like they would only apply to very rare code, and yet they *must*
 be there, in order for functions to "talk" to each other accurately.

 Are you interested in accommodating the rare use cases for the sake of
 robust safety, or do you just want to stop at the very common use cases
 ("ref returns", e.g.)? "ref returns" will probably cover more than half
 of all use cases for memory safety. Each smaller category will require
 additions to what a function signature can contain (starting with
 expanding `return` to all reference types, e.g.), while covering a
 smaller number of actual use cases... but on the other hand, it's
 precisely because they cover fewer use cases that they will appear so
 much less often.
Safety is good to have, and the simple litmus test is: if you slap 
@safe: at the top of all modules and you use no @trusted (or of 
course use it correctly), you should have memory safety, guaranteed.

A feature that is safe except for certain constructs is undesirable. 
Generally, having a large number of corner cases that require special 
language constructs to address is a Bad Sign.

Andrei
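
(The litmus test in code form, a minimal sketch; the module name and 
functions are hypothetical:)

module app;

@safe:  // one line at the top; everything below is now checked

int sum(const int[] xs)
{
    int total;
    foreach (x; xs)
        total += x;
    return total;
}

void main()
{
    assert(sum([1, 2, 3]) == 6);
    // int* p = cast(int*) 0xDEAD; // rejected: not allowed in @safe code
}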
Feb 27 2015
next sibling parent "Zach the Mystic" <reachzach gggmail.com> writes:
On Friday, 27 February 2015 at 14:02:58 UTC, Andrei Alexandrescu 
wrote:
 Safety is good to have, and the simple litmus test is: if you
 slap @safe: at the top of all modules and you use no @trusted
 (or of course use it correctly), you should have memory safety,
 guaranteed.

 A feature that is safe except for certain constructs is 
 undesirable.
It seems like you're agreeing with my general idea of "going the whole hog".
 Generally having a large number of corner cases that require 
 special language constructs to address is a Bad Sign.
But D inherits C's separate compilation model. All these cool 
function and parameter attributes (pure, @safe, return ref, etc.) 
could be kept hidden and just used, and they would Just Work, if D 
didn't have to accommodate separate compilation. From my perspective, 
the only "Bad Sign" is that D has to navigate the tradeoff between:

* concise function signatures
* accurate communication between functions
* enabling separate compilation

It's like you have to sacrifice one to get the other two. Naturally 
I'm not keen on this, so I rush to see how far attribute inference 
for all functions can be taken. Then Dicebot suggests automated .di 
file generation with statically verified matching binaries:

http://forum.dlang.org/post/otejdbgnhmyvbyaxatsk@forum.dlang.org

The point is that I don't feel the ominous burden of a Bad Sign here, 
because of the inevitability of this conflict.
Feb 27 2015
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Feb 27, 2015 at 06:02:57AM -0800, Andrei Alexandrescu via Digitalmars-d
wrote:
[...]
 Safety is good to have, and the simple litmus test is: if you slap
 @safe: at the top of all modules and you use no @trusted (or of course
 use it correctly), you should have memory safety, guaranteed.
[...]

@safe has some pretty nasty holes right now... like:

https://issues.dlang.org/show_bug.cgi?id=5270
https://issues.dlang.org/show_bug.cgi?id=8838
https://issues.dlang.org/show_bug.cgi?id=12822
https://issues.dlang.org/show_bug.cgi?id=13442
https://issues.dlang.org/show_bug.cgi?id=13534
https://issues.dlang.org/show_bug.cgi?id=13536
https://issues.dlang.org/show_bug.cgi?id=13537
https://issues.dlang.org/show_bug.cgi?id=14136
https://issues.dlang.org/show_bug.cgi?id=14138

There are probably other holes that we haven't discovered yet. All in 
all, it's not looking like much of a guarantee right now. It's more 
like a cheese grater.

This is a symptom of the fact that @safe, as currently implemented, 
starts by assuming the whole language is @safe, and then checking for 
exceptions that are deemed unsafe. Since D has become quite a large, 
complex language, many unsafe operations and unsafe combinations of 
features are bound to be overlooked (cf. combinatorial explosion), 
hence there are a lot of known holes and probably just as many, if 
not more, unknown ones. Trying to fix them is like playing 
whack-a-mole: there's always yet one more loophole that we 
overlooked, and that one hole compromises the whole system. Not to 
mention, every time a new language feature is added, @safe is 
potentially compromised by newly introduced combinations of features 
that are permitted by default.

Rather, what *should* have been done is to start with @safe 
*rejecting* everything in the language, and then gradually relaxed to 
permit more operations as they are vetted to be safe on a 
case-by-case basis. That way, instead of having a long list of holes 
in @safe that need to be plugged, we *already* have guaranteed safety 
and just need to allow more safe operations that are currently 
prohibited. @safe bugs should have been of the form "operation X is 
rejected but ought to be legal", rather than "operation X is accepted 
but compromises @safe". In the former case we would already have 
achieved guaranteed safety, but in the latter case, as is the current 
situation, we don't have guaranteed safety, and it's an uphill battle 
to get there (and we don't know if we'll ever arrive).

See: https://issues.dlang.org/show_bug.cgi?id=12941

T

-- 
Verbing weirds language. -- Calvin (& Hobbes)
Feb 27 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/27/15 7:33 AM, H. S. Teoh via Digitalmars-d wrote:
 On Fri, Feb 27, 2015 at 06:02:57AM -0800, Andrei Alexandrescu via
Digitalmars-d wrote:
 [...]
 Safety is good to have, and the simple litmus test is: if you slap
 @safe: at the top of all modules and you use no @trusted (or of course
 use it correctly), you should have memory safety, guaranteed.
[...]

@safe has some pretty nasty holes right now... like:

https://issues.dlang.org/show_bug.cgi?id=5270
https://issues.dlang.org/show_bug.cgi?id=8838
https://issues.dlang.org/show_bug.cgi?id=12822
https://issues.dlang.org/show_bug.cgi?id=13442
https://issues.dlang.org/show_bug.cgi?id=13534
https://issues.dlang.org/show_bug.cgi?id=13536
https://issues.dlang.org/show_bug.cgi?id=13537
https://issues.dlang.org/show_bug.cgi?id=14136
https://issues.dlang.org/show_bug.cgi?id=14138

There are probably other holes that we haven't discovered yet.
Yah, @safe is in need of some good TLC. How about we make it a 
priority for 2.068?
 All in all, it's not looking like much of a guarantee right now.  It's
 more like a cheese grater.

 This is a symptom of the fact that @safe, as currently implemented,
 starts by assuming the whole language is @safe, and then checking for
 exceptions that are deemed unsafe. Since D has become quite a large,
 complex language, many unsafe operations and unsafe combinations of
 features are bound to be overlooked (cf. combinatorial explosion), hence
 there are a lot of known holes and probably just as many, if not more,
 unknown ones.
I'd have difficulty agreeing with this. The issues you quoted don't 
seem to follow a pattern of combinatorial explosion.

On another vein, consider that the Java Virtual Machine has had, for 
many, many years, bugs in its safety, even though it was touted to be 
safe from day one. With each of the major bugs, naysayers claimed 
it's unfixable and that it belies the claim of memory safety.

A @safe function may assume that the code surrounding it has not 
broken memory integrity. Under that assumption, it is required (and 
automatically checked) that it leaves the system with memory 
integrity. This looks like a reasonable stance to me, and something 
I'm committed to work with.
 Trying to fix them is like playing whack-a-mole: there's
 always yet one more loophole that we overlooked, and that one hole
 compromises the whole system. Not to mention, every time a new language
 feature is added, @safe is potentially compromised by newly introduced
 combinations of features that are permitted by default.
There aren't many large features to be added, and at this point, with 
@safe being a major priority, I just find it difficult to understand 
this pessimism.

Probably a good thing to do, whether you're right or overly 
pessimistic, is to fix these bugs. In the worst case we have a 
slightly tighter "cheese grater". In the best case we get to safety.
 Rather, what *should* have been done is to start with @safe *rejecting*
 everything in the language, and then gradually relaxed to permit more
 operations as they are vetted to be safe on a case-by-case basis.
Yah, time travel is always so enticing. What I try to do is avoid telling people sentences that start with "You/We should have". They're not productive. Instead I want to focus on what we should do starting now.
 See: https://issues.dlang.org/show_bug.cgi?id=12941
I'm unclear how this is actionable. Andrei
Feb 27 2015
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Feb 27, 2015 at 07:57:22AM -0800, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 2/27/15 7:33 AM, H. S. Teoh via Digitalmars-d wrote:
On Fri, Feb 27, 2015 at 06:02:57AM -0800, Andrei Alexandrescu via Digitalmars-d
wrote:
[...]
Safety is good to have, and the simple litmus test is: if you slap
@safe: at the top of all modules and you use no @trusted (or of course
use it correctly), you should have memory safety, guaranteed.
[...]

@safe has some pretty nasty holes right now... like:

https://issues.dlang.org/show_bug.cgi?id=5270
https://issues.dlang.org/show_bug.cgi?id=8838
https://issues.dlang.org/show_bug.cgi?id=12822
https://issues.dlang.org/show_bug.cgi?id=13442
https://issues.dlang.org/show_bug.cgi?id=13534
https://issues.dlang.org/show_bug.cgi?id=13536
https://issues.dlang.org/show_bug.cgi?id=13537
https://issues.dlang.org/show_bug.cgi?id=14136
https://issues.dlang.org/show_bug.cgi?id=14138

There are probably other holes that we haven't discovered yet.
Yah, @safe is in need of some good TLC. How about we make it a 
priority for 2.068?
If we're going to do that, let's do it right. Let's outlaw everything 
in @safe and then start expanding it by adding explicitly-vetted 
operations. See below.
All in all, it's not looking like much of a guarantee right now.
It's more like a cheese grater.

This is a symptom of the fact that @safe, as currently implemented,
starts by assuming the whole language is @safe, and then checking for
exceptions that are deemed unsafe. Since D has become quite a large,
complex language, many unsafe operations and unsafe combinations of
features are bound to be overlooked (cf. combinatorial explosion),
hence there are a lot of known holes and probably just as many, if
not more, unknown ones.
I'd have difficulty agreeing with this. The issues you quoted don't seem to follow a pattern of combinatorial explosion.
No, what I meant was that in an "assume safe unless proven otherwise" 
system, there are bound to be holes, because the combinatorial 
explosion of feature combinations makes it almost certain there's 
*some* unsafe combination we haven't thought of yet that the compiler 
currently accepts. And it may be a long time before we discover this 
flaw. This means that the current implementation almost certainly has 
holes (and in fact it has quite a few known ones, and very likely 
more as-yet unknown ones), therefore it's not much of a "guarantee" 
of safety at all.

What I'm proposing is that we reverse that: start with prohibiting 
everything, which is by definition safe, since doing nothing is 
guaranteed to be safe. Then slowly add to it the things that are 
deemed safe after careful review, until it becomes a usable subset of 
the language. This way, we actually *have* the guarantee of safety 
from day one, and all we have to do is to make sure each new addition 
to the list of permitted operations doesn't introduce any new holes. 
And even in the event that one does, the damage is confined, because 
we know exactly where the problem came from: we know that X commits 
in the past, @safe had no holes, and now there's a hole, so git 
bisect will quickly locate the offending change.

Whereas in our current approach, everything is permitted by default, 
which means the safety guarantee is broken *by default*, except where 
we noticed and plugged it. We're starting with a cheese grater and 
plugging the holes one by one, hoping that one day it will become a 
solid plate. Why not start with a solid plate in the first place, and 
make sure we don't accidentally punch holes through it?
 On another vein, consider that the Java Virtual Machine has had for
 many, many years bugs in its safety, even though it was touted to be
 safe from day one. With each of the major bugs, naysayers claimed it's
 unfixable and it belies the claim of memory safety.
Fallacy: Language X did it this way, therefore it's correct to do it this way.
 A @safe function may assume that the code surrounding it has not
 broken memory integrity. Under that assumption, it is required (and
 automatically checked) that it leaves the system with memory
 integrity. This looks like a reasonable stance to me, and something
 I'm committed to work with.
That's beside the point. Whether the surrounding context is assumed 
safe or not has no bearing on whether certain combinations of 
operations inside the @safe function have unsafe semantics because 
the compiler failed to recognize a certain construct as unsafe. The 
latter is what I'm talking about.
Trying to fix them is like playing whack-a-mole: there's always yet
one more loophole that we overlooked, and that one hole compromises
the whole system. Not to mention, every time a new language feature
is added, @safe is potentially compromised by newly introduced
combinations of features that are permitted by default.
There aren't many large features to be added, and at this point, with 
@safe being a major priority, I just find it difficult to understand 
this pessimism.
It's not about the size of a new feature. Every new feature, even a 
seemingly small one, causes an exponential growth in the number of 
feature combinations one may put together, thereby increasing the 
surface area for some combinations to interact in unexpected ways. 
Surely you must know this, since this is why we generally try not to 
add new language features if they don't pull their own weight.

The problem with this is that when compiling @safe code, the compiler 
is not looking at a list of permitted features, but checking a list 
of prohibited features. So by default, new feature X (along with all 
combinations of it with existing language features) is permitted, 
unless somebody took the pains to evaluate its safety in every 
possible context in which it might be used, *and* check for all those 
cases when compiling in @safe mode. Given the size of the language, 
something is bound to be missed. So the safety guarantee may have 
been silently broken, but we're none the wiser until some unfortunate 
user stumbles upon it and takes the time to file a bug. Until then, 
@safe is broken but we don't even know about it.

If, OTOH, the compiler checks against a list of permitted features 
instead, feature X will be rejected in @safe code by default, and we 
would slowly expand the scope of X within @safe code by adding 
specific instances of it to the list of permitted features as we find 
them. If we miss any case, there's no problem -- it gets (wrongly) 
rejected at compile time, but nothing will slip through that might 
break @safe guarantees. We just get a rejects-valid bug report, add 
that use case to the permitted list, and close the bug. Safety is not 
broken throughout the process.

[...]
Rather, what *should* have been done is to start with @safe
*rejecting* everything in the language, and then gradually relaxed to
permit more operations as they are vetted to be safe on a
case-by-case basis.
Yah, time travel is always so enticing. What I try to do is avoid telling people sentences that start with "You/We should have". They're not productive. Instead I want to focus on what we should do starting now.
See: https://issues.dlang.org/show_bug.cgi?id=12941
I'm unclear how this is actionable.
[...]

What about this, if we're serious about @safe actually *guaranteeing* 
anything: after 2.067 is released, we reimplement @safe by making it 
reject every language construct by default. (This will, of course, 
cause all @safe code to no longer compile.) Then we slowly add back 
individual language features to the list of permitted operations in 
@safe code until existing @safe code successfully compiles. That 
gives us a reliable starting point where we *know* that @safe is 
actually, y'know, safe.

Of course, many legal things will now be (wrongly) rejected in @safe 
code, but that's OK, because we will add them to the list of things 
permitted in @safe code as we find them. Meanwhile, @safe actually 
*guarantees* safety. As opposed to the current situation, where @safe 
sorta-kinda gives you memory safety, provided you don't use 
unanticipated combinations of features that the compiler failed to 
recognize as unsafe, or use new features that weren't thoroughly 
checked beforehand, or do something blatantly stupid, or do something 
known to trigger a compiler bug, or ... -- then maybe, fingers 
crossed, you will have memory safety. Or so we hope.

T

-- 
Question authority. Don't ask why, just do it.
Feb 27 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/27/15 1:07 PM, H. S. Teoh via Digitalmars-d wrote:
 What about this, if we're serious about @safe actually *guaranteeing*
 anything: after 2.067 is released, we reimplement @safe by making it
 reject every language construct by default.
I don't think this is practical. It's a huge amount of work over a 
long time. Besides, even with that approach there's still no 
guarantee; implementation bugs are always possible in either 
approach.

I think the closest thing to what you're after is progress and 
preservation proofs on top of a core subset of the language. It would 
be great if somebody wanted to do this.

Andrei
Feb 27 2015
parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Friday, 27 February 2015 at 21:19:31 UTC, Andrei Alexandrescu 
wrote:
 On 2/27/15 1:07 PM, H. S. Teoh via Digitalmars-d wrote:
 What about this, if we're serious about @safe actually *guaranteeing*
 anything: after 2.067 is released, we reimplement @safe by making it
 reject every language construct by default.
I don't think this is practical. It's a huge amount of work over a long time.
That's easy enough to solve, though. The new behaviour can at first 
be opt-in, enabled by a command-line flag (like we already do with 
-dip25). We have an entire release cycle, or longer if we need it, to 
get at least Phobos and druntime to compile. Once that is done, 
people can test their own code with it, and it can be enabled on the 
auto-tester by default.

When some time has passed, we invert the switch - it's then opt-out. 
People can still disable it if they just want to get their code to 
compile without fixing it (or reporting a bug if it's our fault). 
Finally, the old behaviour can be removed altogether.
 Besides, even with that approach there's still no guarantee; 
 implementation bugs are always possible in either approach.
As H.S. Teoh said, these can be detected by git bisect.
 I think the closest thing to what you're after is progress and 
 preservation proofs on top of a core subset of the language. It 
 would be great if somebody wanted to do this.
Wouldn't that effectively mean introducing another kind of `@safe` 
that only allows the use of said core subset? Or a compiler switch? 
How else would it be practicable?
Feb 28 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/28/15 7:53 AM, Marc Schütz wrote:
 On Friday, 27 February 2015 at 21:19:31 UTC, Andrei Alexandrescu wrote:
 On 2/27/15 1:07 PM, H. S. Teoh via Digitalmars-d wrote:
 What about this, if we're serious about @safe actually *guaranteeing*
 anything: after 2.067 is released, we reimplement @safe by making it
 reject every language construct by default.
I don't think this is practical. It's a huge amount of work over a long time.
That's easy enough to solve, though.
I figure there are ways to go about it. I just don't find it practical. It might have been a good idea eight years ago. -- Andrei
Feb 28 2015
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 27 February 2015 at 21:09:51 UTC, H. S. Teoh wrote:
 No, what I meant was that in an "assume safe unless proven 
 otherwise"
 system, there's bound to be holes because the combinatorial 
 explosion of
 feature combinations makes it almost certain there's *some* 
 unsafe
 combination we haven't thought of yet that the compiler 
 currently
 accepts. And it may be a long time before we discover this flaw.
To come back to the original problem, there are various instances of:
 - A is safe and useful in @safe code; let's not make it unsafe!
 - B is safe and useful in @safe code; let's not make it unsafe!

Yet A and B may be unsafe when used together, so one of them should 
be made unsafe. You end up in the same situation as exposed in the 
first post.
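
(A concrete instance of this A/B pattern, a minimal sketch along the 
lines of issue 8838 cited earlier in the thread; 2015-era compilers 
accepted this in @safe code, while newer ones reject the escaping 
return:)

int[] escape() @safe
{
    int[5] a;    // A: a fixed-size array on the stack -- safe by itself
    return a[];  // B: slicing it -- safe by itself, but the returned
                 // slice outlives the stack frame it points into
}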
Feb 27 2015
prev sibling parent "Zach the Mystic" <reachzach gggmail.com> writes:
On Friday, 27 February 2015 at 21:09:51 UTC, H. S. Teoh wrote:
	https://issues.dlang.org/show_bug.cgi?id=12822
	https://issues.dlang.org/show_bug.cgi?id=13442
	https://issues.dlang.org/show_bug.cgi?id=13534
	https://issues.dlang.org/show_bug.cgi?id=13536
	https://issues.dlang.org/show_bug.cgi?id=13537
	https://issues.dlang.org/show_bug.cgi?id=14136
	https://issues.dlang.org/show_bug.cgi?id=14138

There are probably other holes that we haven't discovered yet.
I wanted to say that, besides the first two bugs I tried to address, 
none of the rest in your list involves more than just telling the 
compiler to check for this or that, whatever the case may be, per 
bug. Maybe blanket use of `@trusted` to bypass an over-cautious 
compiler is the only real danger I personally am able to worry about.

I simplified my thinking by dividing everything into "in function" 
and "outside of function". So I ask: within a function, what do I 
need to know to ensure everything is safe? And then: from outside a 
function, what do I need to know to ensure everything is safe? The 
function has inputs and outputs, sources and destinations.
Feb 27 2015
prev sibling next sibling parent "Zach the Mystic" <reachzach gggmail.com> writes:
On Friday, 27 February 2015 at 15:35:46 UTC, H. S. Teoh wrote:
 @safe has some pretty nasty holes right now... like:

 	https://issues.dlang.org/show_bug.cgi?id=5270
 	https://issues.dlang.org/show_bug.cgi?id=8838
My new reference safety system:

http://forum.dlang.org/post/offurllmuxjewizxedab@forum.dlang.org

...would solve the above two bugs. In fact, it's designed precisely 
for bugs like those. Here's your failing use case for bug 5270. I'll 
explain how my system would track and catch the bug:

int delegate() globDg;

void func(scope int delegate() dg)
{
    globDg = dg; // should be rejected but isn't
    globDg();
}

If func is marked @safe and no attribute inference is permitted, this 
would error, as it copies a reference parameter to a global. However, 
let's assume we have inference. The signature would now be inferred 
to:

void func(noscope scope int delegate() dg);

Yeah, it's obviously weird having both `scope` and `noscope`, but 
that's pure coincidence, and moreover, I think the use of `scope` 
here would be made obsolete by my system anyway. (Note also that the 
`noscope` bikeshed has been suggested to be painted `static` instead 
-- it's not about the name, yet... ;-)

void sub()
{
    int x;
    func(() { return ++x; });
}

Well, I suppose this rvalue delegate is allocated on the stack, which 
will have local reference scope. This is where you'd get the safety 
error in the case of attribute inference, as you can't pass a local 
reference to a `noscope` parameter. The rest is just a foregone 
conclusion (added here for completion):

void trashme()
{
    import std.stdio;
    writeln(globDg()); // prints garbage
}

void main()
{
    sub();
    trashme();
}

The next bug, 8838, is a very simple case, I think:

int[] foo() @safe
{
    int[5] a;
    return a[];
}

`a`, being a static array, would have a reference scope depth of 1, 
and when you copy the reference to make a dynamic array in the return 
value, the reference scope inherits that of `a`. Any scope system 
would catch this one, I'm afraid. Mine seems like overkill in this 
case. :-/
Feb 27 2015
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2015 7:33 AM, H. S. Teoh via Digitalmars-d wrote:
 @safe has some pretty nasty holes right now... like:

 	https://issues.dlang.org/show_bug.cgi?id=5270
 	https://issues.dlang.org/show_bug.cgi?id=8838
 	https://issues.dlang.org/show_bug.cgi?id=12822
 	https://issues.dlang.org/show_bug.cgi?id=13442
 	https://issues.dlang.org/show_bug.cgi?id=13534
 	https://issues.dlang.org/show_bug.cgi?id=13536
 	https://issues.dlang.org/show_bug.cgi?id=13537
 	https://issues.dlang.org/show_bug.cgi?id=14136
 	https://issues.dlang.org/show_bug.cgi?id=14138
None of those are a big deal (i.e., fundamental), and they are 
certainly not a justification for throwing the whole thing out and 
starting over. Some of them aren't even in the core language, but in 
incorrect usage of @trusted in Phobos. Just fix them.
Feb 28 2015
prev sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 Safety is good to have, and the simple litmus test is: if you
 slap @safe: at the top of all modules and you use no @trusted
 (or of course use it correctly), you should have memory safety,
 guaranteed.
I have suggested switching to @safe by default:

https://issues.dlang.org/show_bug.cgi?id=13838

Bye,
bearophile
Feb 27 2015
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2015 6:58 PM, Andrei Alexandrescu wrote:
 I should add, however, that effort in and by itself does not warrant approval
 per se. Labor is a prerequisite of any good accomplishment, but is not all
 that's needed.
Yeah, that's always a problem. Ideally, how much work someone put into a proposal should have nothing to do with whether it is incorporated or not. But being human, we feel a tug to incorporate something because someone expended much effort on it. It's always much harder to turn them down.
Feb 26 2015
prev sibling next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 27 February 2015 at 02:58:31 UTC, Andrei Alexandrescu 
wrote:
 I should add, however, that effort in and by itself does not 
 warrant approval per se. Labor is a prerequisite of any good 
 accomplishment, but is not all that's needed.
Everyone's a Marxist when it comes to their own labour :)
Feb 27 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/27/15 1:09 AM, John Colvin wrote:
 On Friday, 27 February 2015 at 02:58:31 UTC, Andrei Alexandrescu wrote:
 I should add, however, that effort in and by itself does not warrant
 approval per se. Labor is a prerequisite of any good accomplishment,
 but is not all that's needed.
Everyone's a Marxist when it comes to their own labour :)
Nice! -- Andrei
Feb 27 2015
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 I'm following with interest the discussion "My Reference Safety 
 System (DIP???)". Right now it looks like a lot of work - a 
 long opener, subsequent refinements, good discussion. It also 
 seems just that - there's work but there's no edge to it yet; 
 right now a DIP along those ideas is more likely to be rejected 
 than approved. But I certainly hope something good will come 
 out of it.
The second scope proposal looks simpler than the first:

http://wiki.dlang.org/User:Schuetzm/scope2

Later, in Rust, they have added some lifetime inference to reduce the 
annotation burden in many cases.

Bye,
bearophile
Feb 28 2015
parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Saturday, 28 February 2015 at 10:40:32 UTC, bearophile wrote:
 Andrei Alexandrescu:

 I'm following with interest the discussion "My Reference 
 Safety System (DIP???)". Right now it looks like a lot of work 
 - a long opener, subsequent refinements, good discussion. It 
 also seems just that - there's work but there's no edge to it 
 yet; right now a DIP along those ideas is more likely to be 
 rejected than approved. But I certainly hope something good 
 will come out of it.
The second scope proposal looks simpler than the first: http://wiki.dlang.org/User:Schuetzm/scope2
Still working on it, but I think we're on the right track. Zach had some really good ideas.
 Later in Rust they have added some lifetime inference to reduce 
 the annotation burden in many cases.
I just modified Walter's RCArray to work with the new proposal. It 
looks almost identical, but now supports safe slicing. (It currently 
lacks a way to actually get an RCArray of a slice, but that can be 
added as an `alias this`.) There's only one `scope` annotation in the 
entire code. Even though the proposal includes a `return` annotation, 
it's not needed in this case.

In general, if it works out like I imagine, there will be almost no 
annotations. They will only ever be necessary for function 
signatures, and then only in `@system` code, `extern`/`export` 
declarations, and occasionally for `@safe` functions. Everything else 
will just work. (@safe will become slightly stricter, but that's a 
good thing.)
Feb 28 2015
prev sibling parent reply "Sativa" <Sativa Indica.org> writes:
On Friday, 27 February 2015 at 02:58:31 UTC, Andrei Alexandrescu 
wrote:
 On 2/26/15 6:17 PM, H. S. Teoh via Digitalmars-d wrote:
 On Thu, Feb 26, 2015 at 05:57:53PM -0800, Andrei Alexandrescu 
 via Digitalmars-d wrote:
 On 2/26/15 5:48 PM, Zach the Mystic wrote:
 I sometimes feel so bad for Kenji, who has come up with 
 several
 reasonable solutions for longstanding problems, *and* 
 implemented
 them, only to have them be frozen for *years* by indecision 
 at the
 top.
Yah, we need to be quicker with making decisions, even negative. This requires collaboration from both sides - people shouldn't get furious if their proposal is rejected. Kenji has been incredibly gracious about this.
[...] I don't think people would be furious if they knew from the beginning that something would be rejected. At least, most reasonable people won't, and I'm assuming that the set of unreasonable people who contribute major features is rather small (i.e., near cardinality 0).
Well, yes, in theory there's no difference between theory and 
practice, etc. What has happened historically (fortunately not as 
much lately) was that statistically most proposals have been simply 
Not Good, while their authors have been Positively Convinced that 
they were Obviously Excellent. (That includes me; statistically most 
ideas I've ever had have been utter crap, but they seldom seemed like 
it in the beginning.) This cycle has happened numerous times. We've 
handled it poorly in the past, and we're working on handling it 
better.
 What *does* make people furious / disillusioned is when they 
 are led to
 believe that their work would be accepted, and then after they 
 put in
 all the effort to implement it, make it mergeable, keep it up 
 to date
 with the moving target of git HEAD, etc., it then gets 
 summarily
 dismissed.  Or ignored for months and years, and then suddenly 
 shot
 down. Or worse, get *merged*, only to be reverted later 
 because the
 people who didn't bother giving feedback earlier now show up 
 and decide
 that they don't like the idea after all.  (It's a different 
 story if
 post-merge rejection happened because it failed in practice -- 
 I think
 reasonable people would accept that.  But post-merge rejection 
 because
 of earlier indecision / silence kills morale really quickly. 
 Don't
 expect to attract major contributors if morale is low.)
Yes, going back on a decision or promise is a failure of leadership. 
For example, what happened with [$] was regrettable. We will do our 
best to avoid such in the future.

I should add, however, that effort in and by itself does not warrant 
approval per se. Labor is a prerequisite of any good accomplishment, 
but is not all that's needed.

I'm following with interest the discussion "My Reference Safety 
System (DIP???)". Right now it looks like a lot of work - a long 
opener, subsequent refinements, good discussion. It also seems just 
that - there's work, but there's no edge to it yet; right now a DIP 
along those ideas is more likely to be rejected than approved. But I 
certainly hope something good will come out of it. What I hope will 
NOT happen is that people come to me with a mediocre proposal going, 
"We've put a lot of Work into this. Well?"

Andrei
I'm curious whether project management software (e.g., MS Project) is 
used to optimize and clarify goals for the D language. If such a 
project were maintained, anyone could download it and see the current 
state of D. The main use would be optimizing tasks and displaying the 
"timeline": if something has been sitting around for a year and is 
blocking other tasks, you can easily see that.

It would obviously be a lot of work to set up such a project. I 
imagine you could write a script to import data from github or 
whatever into the project, and possibly vice versa.
Feb 28 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/28/15 12:20 PM, Sativa wrote:
 I'm curious if project management(e.g., MS Project) is used to optimize
 and clarify goals for the D language?
I've pushed for trello for a good while; it didn't catch on. -- Andrei
Feb 28 2015
next sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Saturday, 28 February 2015 at 21:11:54 UTC, Andrei 
Alexandrescu wrote:
 On 2/28/15 12:20 PM, Sativa wrote:
 I'm curious if project management(e.g., MS Project) is used to 
 optimize
 and clarify goals for the D language?
I've pushed for trello for a good while; it didn't catch on. -- Andrei
Trello would be nice, it even has a good feature request/voting system IIRC.
Feb 28 2015
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-02-28 22:11, Andrei Alexandrescu wrote:

 I've pushed for trello for a good while; it didn't catch on. -- Andrei
There's something called HuBoard [1], project management for Github 
issues. I haven't used it myself, but it might be worth taking a look 
at. Although it looks like you need the issues in Github, while we 
only have pull requests.

[1] https://huboard.com/

-- 
/Jacob Carlborg
Mar 01 2015
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 28 February 2015 at 21:11:54 UTC, Andrei 
Alexandrescu wrote:
 On 2/28/15 12:20 PM, Sativa wrote:
 I'm curious if project management(e.g., MS Project) is used to 
 optimize
 and clarify goals for the D language?
I've pushed for trello for a good while; it didn't catch on. -- Andrei
Note that most of us never had any access to it.
Mar 01 2015
prev sibling next sibling parent reply Jonathan M Davis via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Friday, February 27, 2015 01:48:00 Zach the Mystic via Digitalmars-d wrote:
 On Friday, 27 February 2015 at 01:33:58 UTC, Jonathan M Davis
 wrote:
 Well, I suspect that each case would have to be examined
 individually to
 decide upon the best action, but I think that what it comes
 down to is the
 same problem that we have with getting anything done around
 here - someone
 has to do it.
This isn't true at all. Things need to be approved first, then implemented.
If something is implemented, then there's an actual implementation to 
discuss and accept or reject, and sometimes that leads to the problem 
getting resolved, whereas just discussing it frequently results in 
more discussion rather than anything actually happening. Sure, if a 
decision isn't made before something is implemented, then the odds of 
it getting rejected are higher, and that can be very frustrating, but 
sometimes it's the only way that anything gets done.
 With language changes, it's often the same. Someone needs to
 come up with a
 reasonable solution and then create a PR for it.  They then
 have a much
 stronger position to argue from, and it may get in and settle
 the issue.
I sometimes feel so bad for Kenji, who has come up with several reasonable solutions for longstanding problems, *and* implemented them, only to have them be frozen for *years* by indecision at the top. I'll never believe your side until this changes. You can see exactly how D works by looking at how Kenji spends his time. For a while he's only been fixing ICEs and other little bugs which he knows for certain will be accepted. I'm not saying any of these top level decisions are easy, but I don't believe you for a second, at least when it comes to the language itself. Phobos may be different.
Yes. Sometimes stuff gets rejected or stuck in limbo, but there's 
been plenty that has gotten done because someone like Kenji just 
decided to do it. The fact that stuff is stuck in limbo for years is 
definitely bad - no question about it - but so much more wouldn't 
have been done had no one at least implemented something and tried to 
get it into the compiler or the standard library - especially when 
you're talking about the compiler. Language discussions frequently 
never result in anything, whereas creating a PR for dmd will 
sometimes put things in a position where Walter finally approves it 
(or rejects it) rather than simply discussing it and getting nowhere.

- Jonathan M Davis
Feb 26 2015
parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Thu, 26 Feb 2015 18:13:12 -0800, Jonathan M Davis via Digitalmars-d
wrote:

 whereas creating a PR for dmd will sometimes put things in a position
 where Walter finally approves it (or rejects it) rather than simply
 discussing it and getting nowhere.
oh, i see. i really enjoy multiple `alias this` now, it's a great merge! and that great new tuple syntax... i love it! ah, sorry, i was daydreaming.
Feb 26 2015
parent reply "weaselcat" <weaselcat gmail.com> writes:
On Friday, 27 February 2015 at 02:46:57 UTC, ketmar wrote:
 On Thu, 26 Feb 2015 18:13:12 -0800, Jonathan M Davis via Digitalmars-d wrote:

 whereas creating a PR for dmd will sometimes put things in a position where Walter finally approves it (or rejects it) rather than simply discussing it and getting nowhere.
oh, i see. i really enjoy multiple `alias this` now, it's a great merge! and that great new tuple syntax... i love it! ah, sorry, i was daydreaming.
Didn't multiple alias this get merged? I wasn't even following it.
Feb 26 2015
parent ketmar <ketmar ketmar.no-ip.org> writes:
On Fri, 27 Feb 2015 06:34:40 +0000, weaselcat wrote:

 On Friday, 27 February 2015 at 02:46:57 UTC, ketmar wrote:
 On Thu, 26 Feb 2015 18:13:12 -0800, Jonathan M Davis via Digitalmars-d wrote:

 whereas creating a PR for dmd will sometimes put things in a position where Walter finally approves it (or rejects it) rather than simply discussing it and getting nowhere.
oh, i see. i really enjoy multiple `alias this` now, it's a great merge! and that great new tuple syntax... i love it! ah, sorry, i was daydreaming.
Didn't multiple alias this get merged? I wasn't even following it.
it was blessed and... and then everybody forgot about it.
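
for anyone who missed the saga: below is a minimal sketch of what DIP66-style multiple `alias this` would have bought us. the struct names are made up, and the second declaration is commented out, because no released dmd ever accepted it:

import std.stdio;

struct Wallet { int coins; }
struct Badge  { string name; }

struct Player
{
    Wallet wallet;
    Badge  badge;

    alias wallet this;   // works today: Player implicitly subtypes Wallet
    //alias badge this;  // DIP66's extension: a second subtyping relation;
                         // blessed, implemented, never merged
}

void main()
{
    Player p;
    p.coins = 42;        // forwarded through `alias wallet this`
    writeln(p.coins);    // prints 42
}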
Feb 26 2015
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Zach the Mystic:

 You can see exactly how D works by looking at how Kenji spends 
 his time. For a while he's only been fixing ICEs and other 
 little bugs which he knows for certain will be accepted.
I agree that probably there are often better ways to use Kenji's time for the development of D. Bye, bearophile
Feb 28 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/28/2015 2:31 AM, bearophile wrote:
 Zach the Mystic:

 You can see exactly how D works by looking at how Kenji spends his time. For a
 while he's only been fixing ICEs and other little bugs which he knows for
 certain will be accepted.
I agree that probably there are often better ways to use Kenji's time for the development of D.
Actually, Kenji fearlessly deals with some of the hardest bugs in the compiler that require a deep understanding of how the compiler works and how it is supposed to work. He rarely does trivia. I regard Kenji's contributions as invaluable to the community.
Feb 28 2015
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 Actually, Kenji fearlessly deals with some of the hardest bugs 
 in the compiler that require a deep understanding of how the 
 compiler works and how it is supposed to work. He rarely does 
 trivia. I regard Kenji's contributions as invaluable to the 
 community.
But my point was that there are probably even better things Kenji could do with part of the time he works on D. Bye, bearophile
Mar 01 2015
parent "Zach the Mystic" <reachzach gggmail.com> writes:
On Sunday, 1 March 2015 at 11:30:52 UTC, bearophile wrote:
 Walter Bright:

 Actually, Kenji fearlessly deals with some of the hardest bugs 
 in the compiler that require a deep understanding of how the 
 compiler works and how it is supposed to work. He rarely does 
 trivia. I regard Kenji's contributions as invaluable to the 
 community.
But my point was that there are probably even better things Kenji could do with part of the time he works on D.
I think this once again brings up the issue of what might be called "The Experimental Space" (for which std.experimental is the only official acknowledgment thus far). Simply put, there are things which it would be nice to try out, and which could be conditionally pre-approved depending on how they work in real life. There are a lot of things which would be great to have, if only some field testing could verify that they aren't laden with show-stopping flaws. These represent a whole "middle ground" between pre-approved and rejected.

The middle ground is fraught with tradeoffs -- most prominently that if the field testers find the code useful, it becomes the de facto standard *even if* fatal flaws are discovered in the design. Yet if you tell people honestly, "this may not be the final design," a lot fewer people will be willing to test it.

The Experimental Space must have a whole different philosophy about what it is -- the promises you make, or more accurately don't make, and the courage to reject a bad design even when it is already being used in real-world code. Basically, the experimental space must claim "tentatively approved for D, pending field testing" -- and it must courageously stick to that claim.

That might give Kenji the motivation to implement some interesting new approaches to old problems, knowing that even if they fail in the final analysis, they will at least get a chance to prove themselves first. (Maybe there aren't really that many candidates for this approach anyway, but I thought the idea should be articulated at least.)
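
As a concrete data point, the one module living in that space right now is std.experimental.logger, merged for DMD 2.067 with exactly that "tentatively approved, API may still change" caveat. A minimal usage sketch, assuming 2.067 or later:

import std.experimental.logger;

void main()
{
    // The free functions write through a module-level default logger;
    // while the module sits in std.experimental, this API is
    // explicitly allowed to change based on field testing.
    log("field-testing an experimental module");
    warningf("disk usage at %s%%", 93);
}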
Mar 01 2015
prev sibling parent reply "Zach the Mystic" <reachzach gggmail.com> writes:
On Saturday, 28 February 2015 at 23:03:23 UTC, Walter Bright 
wrote:
 On 2/28/2015 2:31 AM, bearophile wrote:
 Zach the Mystic:

 You can see exactly how D works by looking at how Kenji spends his time. For a while he's only been fixing ICEs and other little bugs which he knows for certain will be accepted.
I agree that probably there are often better ways to use Kenji time for the development of D.
Actually, Kenji fearlessly deals with some of the hardest bugs in the compiler that require a deep understanding of how the compiler works and how it is supposed to work. He rarely does trivia. I regard Kenji's contributions as invaluable to the community.
I don't think anybody disagrees with this. Kenji's a miracle.
Mar 01 2015
parent ketmar <ketmar ketmar.no-ip.org> writes:
On Mon, 02 Mar 2015 00:48:54 +0000, Zach the Mystic wrote:

 I don't think anybody disagrees with this. Kenji's a miracle.
he's like Chuck Norris, only better. ;-)
Mar 01 2015
prev sibling parent reply "Dominikus Dittes Scherkl" writes:
On Thursday, 26 February 2015 at 20:35:04 UTC, deadalnix wrote:
 Yes, I don't care about the specific enum case; in fact, it is one of the lesser offenders, and this is why I chose it as an example here.
Hmm. I still consider it a major issue, and I thought we had agreed to introduce "final enum" to be used with "final switch" - and was slightly disappointed that it didn't go into 2.067. How can we make progress if even the things that have reached consensus are not promoted? Same goes for multiple "alias this".
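
To make concrete what is at stake, here is a minimal sketch of the hole that "final enum" was meant to close. The enum is illustrative, and the proposed syntax appears only in the comment, since it was never implemented:

import std.stdio;

enum Color { Red, Green, Blue }  // a proposed "final enum Color" would
                                 // reject values outside these members

void main()
{
    // Or-ing two members still has type Color, yet the resulting
    // value (1 | 2 == 3) names no member at all.
    Color c = Color.Green | Color.Blue;

    // final switch trusts that c names a member: in a debug build
    // this throws core.exception.SwitchError at run time, and with
    // -release it is undefined behavior.
    final switch (c)
    {
        case Color.Red:   writeln("red");   break;
        case Color.Green: writeln("green"); break;
        case Color.Blue:  writeln("blue");  break;
    }
}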
Feb 27 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2015 2:42 AM, Dominikus Dittes Scherkl wrote:
 How can we make progress,
Look at the changelog; there's tremendous progress.
Feb 28 2015