digitalmars.D - Why is `scope` planned for deprecation?
- Mike (24/24) Nov 10 2014 First of all, what exactly is planned for deprecation? It [1] says
- Manu via Digitalmars-d (5/27) Nov 10 2014 I think the general direction is that scope will be re-purposed as a
- Steven Schveighoffer (21/43) Nov 10 2014 Well, that's a funny thing. I looked it up, apparently using scope to
- Dicebot (13/13) Nov 11 2014 This is a bit complicated. Originally the intention was to deprecate
- ixid (6/7) Nov 11 2014 The ship will have sailed by the time it's ready to fly
- Dicebot (4/11) Nov 11 2014 It is going to take such long time not because no one considers
- bearophile (12/21) Nov 11 2014 I agree it's a very important topic (more important/urgent than
- Manu via Digitalmars-d (8/30) Nov 12 2014 I agree. scope is top of my wishlist these days. Above RC/GC, or
- Nick Treleaven (13/18) Nov 12 2014 I think Rust's lifetimes would be a huge change if ported to D. In Rust
- "Marc Schütz" <schuetzm gmx.net> (15/38) Nov 12 2014 Have you seen my proposal?
- Nick Treleaven (9/24) Nov 13 2014 Looks good. Personally I've been meaning to study your (whole) proposal,...
- "Marc Schütz" <schuetzm gmx.net> (12/51) Nov 13 2014 Const borrowing is not even necessary for that. The problem with
- deadalnix (11/17) Nov 12 2014 Rust is not the first language going that road. The problem is
- Andrei Alexandrescu (5/23) Nov 12 2014 I agree. This is one of those cases in which a good engineering solution...
- Manu via Digitalmars-d (7/35) Nov 13 2014 Are you guys saying you don't feel this proposal is practical?
- deadalnix (3/10) Nov 13 2014 You need to define ownership before defining borrowing.
- Manu via Digitalmars-d (6/16) Nov 13 2014 I don't think this proposal has issues with that.
- deadalnix (9/15) Nov 13 2014 That is a way to define ownership, so it is not a rebuttal of my
- Manu via Digitalmars-d (14/28) Nov 13 2014 I'm super happy you're on board with this. You're often a hard sell :)
- deadalnix (22/37) Nov 13 2014 I don't find it problematic. However, the concept of borrowing
- Araq (5/13) Nov 14 2014 Do you happen to have any concrete reasons for that? An example
- deadalnix (24/39) Nov 14 2014 I'm not sure we understand rust type system to be too complicated
- Walter Bright (2/4) Nov 15 2014 Spoken like a true engineer!
- "Ola Fosheim Grøstad" (9/13) Nov 16 2014 More like a consultant for self-help:
- Walter Bright (35/37) Nov 16 2014 Everyone likes to rag on the Titanic's design, but I've read a fair amou...
- "Ola Fosheim Grøstad" (26/35) Nov 16 2014 «The 20 lifeboats that she did carry could only take 1,178
- Walter Bright (12/18) Nov 16 2014 You can do anything with a language if it is Turing complete.
- "Ola Fosheim Grøstad" (16/29) Nov 16 2014 That's not the point. If you have to avoid features because they
- Walter Bright (14/16) Nov 16 2014 Not at all in my view. It has two miserable failures:
- "Ola Fosheim Grøstad" (15/24) Nov 16 2014 Depends on your view of C, if you view C as step above assembly
- Walter Bright (7/29) Nov 16 2014 If you read my article, the fix does not take away anything.
- "Ola Fosheim Grøstad" (7/10) Nov 16 2014 Yes, but that is just what all other languages had at the time,
- Walter Bright (7/16) Nov 16 2014 Since structs were supported, this rationale does not work.
- "Ola Fosheim Grøstad" (21/23) Nov 18 2014 BTW, this is not entirely correct. It had autoincrement on
- Walter Bright (13/33) Nov 18 2014 Those are not dedicated string instructions. Autoincrement was an addres...
- "Ola Fosheim Grøstad" (30/43) Nov 18 2014 Yes, Motorola 68000 also had those. Very useful combined with
- Walter Bright (20/22) Nov 18 2014 char s[] = "filename.ext";
- "Ola Fosheim Grøstad" (12/25) Nov 19 2014 Wait, we are either discussing the design goals of the original C
- ketmar via Digitalmars-d (3/5) Nov 16 2014 that's why warp is faster than cpp? ;-)
- "Ola Fosheim Grøstad" (4/5) Nov 16 2014 Which implementation of cpp?
- ketmar via Digitalmars-d (9/14) Nov 16 2014 gcc implementation, afair. its slowness was the reason for warping.
- "Ola Fosheim Grøstad" (7/19) Nov 16 2014 Ok, I haven't seen an independent benchmark, but I believe clang
- ketmar via Digitalmars-d (6/8) Nov 16 2014 FSA code is a fsckn mess. either adding dependency of external tool and
- Walter Bright (4/16) Nov 16 2014 Notice the total lack of strlen()'s in Warp.
- "Ola Fosheim Grøstad" (9/14) Nov 16 2014 Why would you need that? You know where the lexeme begins and
- Walter Bright (17/31) Nov 17 2014 The preprocessor stores lots of strings. Things like identifiers, keywor...
- "Ola Fosheim Grøstad" (15/30) Nov 17 2014 Oh, I am not saying that strlen() is a good contemporary
- "Ola Fosheim Grøstad" (4/4) Nov 17 2014 Remember that the alternative to zero-terminated strings at that
- Paulo Pinto (5/10) Nov 17 2014 Black hat hackers, virus and security tools vendors around the
- "Ola Fosheim Grøstad" (24/34) Nov 17 2014 I don't think buffer overflow and string fundamentals are closely
- Paulo Pinto (8/44) Nov 17 2014 I am fully aware how UNIX designers decided to ignore the systems
- "Ola Fosheim Grøstad" (6/10) Nov 17 2014 I wouldn't say that Algol is a systems programming language, and
- Walter Bright (3/5) Nov 17 2014 No, that was not the alternative.
- Walter Bright (7/10) Nov 17 2014 I know what you're saying.
- "Ola Fosheim Grøstad" (32/38) Nov 17 2014 You are twisting and turning so much in discussions that you make
- Walter Bright (13/16) Nov 17 2014 When designing a language data type, you don't design it for "some" oper...
- "Ola Fosheim Grøstad" (17/24) Nov 17 2014 Ok, but I would rather say it like this: the language C doesn't
- Walter Bright (9/17) Nov 17 2014 The combination of the inescapable array-to-ptr decay when calling a fun...
- Paulo Pinto (12/36) Nov 18 2014 Heartbleed is a nice example.
- "Ola Fosheim Grøstad" (23/26) Nov 18 2014 Sure, but these are not strict language issues since the same
- Walter Bright (14/15) Nov 18 2014 To bring up the aviation industry again, they long ago recognized that "...
- "Ola Fosheim Grøstad" (11/17) Nov 18 2014 Please note that I said it was a management issue. Clearly if
- deadalnix (6/33) Nov 18 2014 There are good answers to most of this but most importantly, this
- "Ola Fosheim Grøstad" (18/22) Nov 19 2014 The topic is whether "scope" is planned for deprecation. That has been
- "Ola Fosheim Grøstad" (61/64) Nov 18 2014 I'd rather say that it is the industry that has misappropriated
- Paulo Pinto (16/38) Nov 18 2014 Lint was created in 1979 when it was already clear most AT&T
- "Ola Fosheim Grøstad" (13/30) Nov 18 2014 Sure, but most operating system vendors considered it a strategic
- Paulo Pinto (10/20) Nov 18 2014 Since when do developers use a different systems programming
- "Ola Fosheim Grøstad" (15/21) Nov 18 2014 Depends on what you mean by system programming. I posit that most
- Paulo Pinto (22/46) Nov 18 2014 In the 80's almost everything was system programming, even
- Walter Bright (6/59) Nov 18 2014 I'm sorry to say this, but these rationalizations as to why C cannot add...
- H. S. Teoh via Digitalmars-d (6/11) Nov 18 2014 What's the trivial thing that will solve most buffer overflow problems?
- Walter Bright (2/7) Nov 18 2014 http://www.drdobbs.com/architecture-and-design/cs-biggest-mistake/228701...
- H. S. Teoh via Digitalmars-d (10/20) Nov 18 2014 That's not a trivial change at all -- it will break pretty much every C
- Walter Bright (3/18) Nov 18 2014 No, I proposed a new syntax that would have different behavior:
- H. S. Teoh via Digitalmars-d (7/29) Nov 18 2014 Ah, I see. How would that be different from just declaring an array
- Walter Bright (6/15) Nov 18 2014 foo("string");
- Paulo Pinto (3/117) Nov 18 2014 So useless that it became optional in C11.
- Walter Bright (24/28) Nov 18 2014 Note the Rationale given:
- "Ola Fosheim Grøstad" (12/15) Nov 18 2014 They can add whatever they want.
- Walter Bright (6/12) Nov 18 2014 The proposals I made do not change that in any way, and if K&R designed ...
- "Alo Miehsof Datsørg" (3/21) Nov 18 2014 Argumentative ?!! More like a fucking gaping fucking asshole. His
- "Ola Fosheim Grøstad" (6/9) Nov 19 2014 If you are going ad hominem, please post under your own name. I
- Walter Bright (3/5) Nov 20 2014 Rude posts are not welcome here.
- uri (9/36) Nov 20 2014 Wow that's uncalled for.
- "Alo Miehsof Datsørg" (2/15) Nov 17 2014 Stop wasting time with the mouth breather.
- "Ola Fosheim Grøstad" (3/4) Nov 18 2014 Please write under your full name.
- Paulo Pinto (16/22) Nov 16 2014 My view is of a "kind of" portable macro assembler, even MASM and TASM
- "Ola Fosheim Grøstad" (7/20) Nov 16 2014 Not such a bad idea if you can blend it with regular assembly
- deadalnix (9/25) Nov 16 2014 Sorry but that is dumb, and the fact you are on the D newsgroup
- "Ola Fosheim Grøstad" (26/33) Nov 16 2014 Define what you mean by 100%? By 100% I mean that you can
- Max Samukha (3/7) Nov 20 2014 85% often means being at the bottom of the uncanny valey. 65% or
- deadalnix (13/23) Nov 20 2014 85% is an image rather than an exact number. The point being,
- "Ola Fosheim Grøstad" (24/31) Nov 20 2014 FWIW, among language designers it is usually considered a
- deadalnix (19/53) Nov 20 2014 All of this is beautiful until you try to implement a quicksort
- "Ola Fosheim Grøstad" (41/50) Nov 20 2014 Sure, I am not arguing in favour of functional programming. But
- deadalnix (1/1) Nov 20 2014 You are a goalpost shifting champion, aren't you ?
- "Ola Fosheim Grøstad" (9/10) Nov 20 2014 Nope, it follows up your line of argument, but the
- Walter Bright (2/19) Nov 20 2014 Monads!
- "Ola Fosheim Grøstad" (13/17) Nov 20 2014 […]
- Walter Bright (5/19) Nov 20 2014 That's correct.
- "Ola Fosheim Grøstad" (16/18) Nov 20 2014 Yes, at least in Haskell, but I find monads in Haskell harder to
- Walter Bright (3/8) Nov 20 2014 Exactly my point (and I presume deadalnix's, too).
- Andrei Alexandrescu (8/33) Nov 20 2014 As I like to say, this troika has inflicted a lot of damage on both FP
- Paulo Pinto (9/51) Nov 21 2014 Just like the OOP introductory books that still insist in talking
- Andrei Alexandrescu (12/59) Nov 21 2014 The first public example found by google (oop introduction) lists a
- Abdulhaq (8/15) Nov 21 2014 Hear, hear. One of the problems with many introductions to
- "Ola Fosheim Grøstad" (13/21) Nov 22 2014 Yes, the problem is that you should not teach OOP, but object
- "Ola Fosheim Grøstad" (16/23) Nov 21 2014 Be careful with that attitude. It is an excellent strategy to
- bearophile (5/7) Nov 21 2014 Take also a look at "Clean" language. It doesn't use monads and
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (18/25) Nov 13 2014 It is better solved using static analysis and it is part of a
- "Marc Schütz" <schuetzm gmx.net> (13/42) Nov 13 2014 You mean without additional hints by the programmer? That's not
- "Ola Fosheim Grøstad" (22/42) Nov 13 2014 I think it can happen. D needs a new intermediate layer where you
- Manu via Digitalmars-d (24/29) Nov 13 2014 I don't follow how you associate that opinion with implementation of
- Paulo Pinto (9/59) Nov 13 2014 C++14 is quite nice and C++17 will be even better.
- Daniel Murphy (4/15) Nov 13 2014 I know, it's easy to forget how bad C++ is to work with. The new versio...
- Manu via Digitalmars-d (9/24) Nov 13 2014 Yeah... nar. Not really. Every line of code is at least 3-4 times as
- Walter Bright (4/7) Nov 15 2014 You should submit a presentation proposal to the O'Reilly Software Archi...
- "Ola Fosheim Grøstad" (24/37) Nov 13 2014 I don't like semantics where I have to state that the parameters
- Wyatt (7/10) Nov 13 2014 Unfortunately for your sanity, this isn't going to happen.
- "Ola Fosheim Grøstad" (23/27) Nov 14 2014 D needs to start to focus on providing an assumption-free system
- "Marc Schütz" <schuetzm gmx.net> (12/30) Nov 13 2014 I agree with this in principle, but it is unrealistic for D2.
- Manu via Digitalmars-d (17/32) Nov 13 2014 D has attribute inference, that's like, a thing now.
- "Ola Fosheim Grøstad" (35/50) Nov 13 2014 Yes, these days D arguments go like this:
- Jacob Carlborg (7/9) Nov 14 2014 Can't you use Xcode 6 and set the minimum deploy target to iOS 5.1? If
- "Ola Fosheim Grøstad" (4/8) Nov 14 2014 I don't know yet, but the 5.1 simulator will probably have to run
- Jacob Carlborg (6/9) Nov 15 2014 The simulator bundled with Xcode 5 run on Yosemite but not the one
- deadalnix (3/9) Nov 13 2014 Yes, that is the only sane road forward.
- Walter Bright (3/9) Nov 15 2014 What I find odd about the progress of C++ (11, 14, 17, ...) is that ther...
- Paulo Pinto (7/16) Nov 15 2014 What about templates, compile time reflection, modules and compile time
- Walter Bright (8/15) Nov 15 2014 Competent and prominent C++ coding teams still manage to find complex an...
- Paulo Pinto (2/21) Nov 15 2014 That was quite bad how it happened.
- "Marc Schütz" <schuetzm gmx.net> (29/55) Nov 13 2014 I think I understand now how you want to use templates. You
- "Ola Fosheim Grøstad" (13/14) Nov 13 2014 Well, "move()" should obviously not be a dummy, but just a
- bearophile (6/9) Nov 13 2014 I am sure you are aware that the solution you are talking about
- Andrei Alexandrescu (2/9) Nov 13 2014 In fact I am not! -- Andrei
First of all, what exactly is planned for deprecation? It [1] says "Note that scope for other usages (e.g. scoped variables) is unrelated to this feature and will not be deprecated.", but the example...

  void main()
  {
      A obj;
      {
          scope A a = new A(1);
          obj = a;
      }
      assert(obj.x == 1);  // fails, 'a' has been destroyed
  }

... looks a lot like a scoped variable to me, so it's not clear to me what exactly is planned for deprecation. Please clarify.

Ok, with that out of the way, I get why it is unsafe, but isn't it only unsafe because it has not yet been implemented? Isn't it possible to implement escape analysis and make it a safe and useful feature? This question was asked before, but never received an answer.[2]

Mike

[1] http://dlang.org/deprecate.html#scope%20for%20allocating%20classes%20on%20the%20stack
[2] http://forum.dlang.org/post/k549l4$1s24$1 digitalmars.com
Nov 10 2014
I think the general direction is that scope will be re-purposed as a type modifier for implementing effective borrowing/escape analysis.

...at least, I really really hope that's the plan! :)

On 11 November 2014 09:33, Mike via Digitalmars-d <digitalmars-d puremagic.com> wrote:
> First of all, what exactly is planned for deprecation? [...]
Nov 10 2014
On 11/10/14 6:33 PM, Mike wrote:
> First of all, what exactly is planned for deprecation? [...]

Well, that's a funny thing. I looked it up, and apparently using scope to designate "scope variables" is a thing:

http://dlang.org/attribute.html#scope

"For local declarations, scope implements the RAII (Resource Acquisition Is Initialization) protocol. This means that the destructor for an object is automatically called when the reference to it goes out of scope. The destructor is called even if the scope is exited via a thrown exception, thus scope is used to guarantee cleanup."

Anyone used to using structs for RAII would think WAT? But a long time ago, structs did not have dtors. So I think at that time, scope simply applied only to classes. Note how it specifically says "objects".

What I think it means is, scope declarations for allocating classes will be destroyed when leaving scope, but will not be allocated on the stack. I don't know why this is less dangerous. Perhaps it's destroyed but not deallocated?

But the deprecation says "(e.g. scoped variables) is unrelated to this feature." Seems pretty related. My real guess is that the deprecation message is wrong.

We have scope(exit); I don't see why we would need scope variables as well.

-Steve
Nov 10 2014
This is a bit complicated. Originally the intention was to deprecate scope variables (`scope var = new Class`) completely and make people switch to std.typecons.scoped - primarily because of how fragile and inflexible its implementation was (you can't have scope fields in aggregates, for example).

However, it never actually got deprecated and still kind of works, with no warnings printed by the compiler. Also, I remember Daniel mentioning that he uses it extensively in the DDMD project which, unfortunately, makes full deprecation unlikely.

There is, however, a long-standing desire to re-purpose `scope` as a qualifier for lifetime/ownership semantics, which would make the current uses simply a subset of a full `scope` implementation. But this is a very complicated topic and may take years to fly.
Nov 11 2014
On Tuesday, 11 November 2014 at 15:29:49 UTC, Dicebot wrote:
> But this is a very complicated topic and may take years to fly.

The ship will have sailed by the time it's ready to fly (gloriously mixed metaphors). This would seem like such a fundamental issue, with a big knock-on effect on everything else, that it should surely be prioritized higher than that? I am aware you're not the one setting priorities. =)
Nov 11 2014
On Tuesday, 11 November 2014 at 16:54:10 UTC, ixid wrote:
> The ship will have sailed by the time it's ready to fly (gloriously mixed metaphors). [...] I am aware you're not the one setting priorities. =)

It is going to take such a long time not because no one considers it important, but because designing and implementing such a system is damn hard. Prioritization does not make a difference here.
Nov 11 2014
Dicebot:
> It is going to take such a long time not because no one considers it important, but because designing and implementing such a system is damn hard. Prioritization does not make a difference here.

I agree it's a very important topic (more important/urgent than the GC, also because it reduces the need for the GC). But I think Walter feels this kind of change introduces too much complexity in D (despite the fact that it may eventually become inevitable for D once Rust becomes more popular and programmers get used to that kind of static enforcement).

Regarding the design and implementation difficulties, is it possible to ask for help from one of the people that designed (or closely watched the design of) the similar feature for Rust?

Bye,
bearophile
Nov 11 2014
On 12 November 2014 04:01, bearophile via Digitalmars-d <digitalmars-d puremagic.com> wrote:
> I agree it's a very important topic (more important/urgent than the GC, also because it reduces the need for the GC). [...]

I agree. scope is at the top of my wishlist these days. Above RC/GC, or anything else you hear me talking about. I don't think quality RC is practical without scope implemented, and rvalue temps -> references will finally be solved too. Quite a few things I care about rest on this, but it doesn't seem to be a particularly popular topic :(
Nov 12 2014
On 11/11/2014 18:01, bearophile wrote:
> I agree it's a very important topic (more important/urgent than the GC, also because it reduces the need for the GC). [...]

I think Rust's lifetimes would be a huge change if ported to D. In Rust, user types often need annotations as well as function parameters. People tend to want Rust's guarantees without the limitations. I think D does need some kind of scope attribute verification, but we need to throw out some of the guarantees Rust makes to get an appropriate fit for existing D code.

For example, taking a mutable borrowed pointer to a variable means you can't even *read* the original variable whilst the pointer lives. I think no one would try to make D do that, but Rust's reason for adding it is actually memory safety (I don't quite understand it, but it involves iterator invalidation apparently). It's possible their feature can be refined, but basically 'mut' in Rust really means 'unique'.
Nov 12 2014
On Wednesday, 12 November 2014 at 15:57:18 UTC, Nick Treleaven wrote:
> I think Rust's lifetimes would be a huge change if ported to D. [...] I think D does need some kind of scope attribute verification, but we need to throw out some of the guarantees Rust makes to get an appropriate fit for existing D code.

Have you seen my proposal?

http://wiki.dlang.org/User:Schuetzm/scope

It takes a slightly different approach from Rust. Instead of specifying lifetimes, it uses owners, and it's also otherwise simpler than Rust's system. E.g. there is no full-blown borrow checker (and no need for one).

> For example, taking a mutable borrowed pointer to a variable means you can't even *read* the original variable whilst the pointer lives. [...]

In my proposal, there's "const borrowing". It still allows access to the owner, but not mutation. This is necessary for a safe implementation of move semantics, and to guard against iterator invalidation. It also has other uses, like the problems with "transient ranges", e.g. stdin.byLine(), which overwrite their buffer in popFront(). On the other hand, it's opt-in; by default, owners are mutable while borrowed references exist.
Nov 12 2014
On 12/11/2014 17:16, "Marc Schütz" <schuetzm gmx.net> wrote:
> In my proposal, there's "const borrowing". It still allows access to the owner, but not mutation. [...]

Looks good. Personally I've been meaning to study your (whole) proposal; I think it's a valuable analysis of what problems we could/should solve.

Just from a quick look, I wonder if 'const borrowing' could solve the scoped!C premature destruction problem:

  C c = std.typecons.scoped!C();
  // memory for c has already been freed (need auto c = ...)

If the destruction of Scoped is disallowed whilst an (implicit alias this) borrow for c is alive, the compiler can generate an error.
Nov 13 2014
On Thursday, 13 November 2014 at 16:56:01 UTC, Nick Treleaven wrote:
> Just from a quick look, I wonder if 'const borrowing' could solve the scoped!C premature destruction problem [...]

Const borrowing is not even necessary for that. The problem with `std.typecons.scoped!C` is that it implicitly converts to `C`. Instead, it should only convert to `scope!this(C)`; then the assignment will be rejected correctly:

  // ERROR: type mismatch: C != scope(C)
  C c = std.typecons.scoped!C();

  // ERROR: `c` outlives its owner (temporary)
  scope(C) c = std.typecons.scoped!C();

  // OK: typeof(c) is now scoped!C
  auto c = std.typecons.scoped!C();
Nov 13 2014
On Wednesday, 12 November 2014 at 15:57:18 UTC, Nick Treleaven wrote:
> I think Rust's lifetimes would be a huge change if ported to D. [...] we need to throw out some of the guarantees Rust makes to get an appropriate fit for existing D code.

Rust is not the first language going down that road. The problem is that you get great complexity if you don't want to be too limiting in what you can do. This complexity ultimately ends up costing more than what you gain.

I think the sane road to go down is supporting ownership/borrowing for common cases, and falling back on the GC, or on unsafe constructs, for the rest. One has to admit there is no silver bullet, and shoehorning everything into the same solution is not gonna work.
Nov 12 2014
On 11/12/14 2:10 PM, deadalnix wrote:
> I think the sane road to go down is supporting ownership/borrowing for common cases, and falling back on the GC, or on unsafe constructs, for the rest. [...]

I agree. This is one of those cases in which a good engineering solution may be a lot better than the "perfect" solution (and linear types are not even perfect...).

Andrei
Nov 12 2014
Are you guys saying you don't feel this proposal is practical?

http://wiki.dlang.org/User:Schuetzm/scope

I think it's a very interesting approach, and it comes from a practical point of view. It solves the long-standing issues, like scope return values, in a very creative way.

On 13 November 2014 08:33, Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com> wrote:
> I agree. This is one of those cases in which a good engineering solution may be a lot better than the "perfect" solution (and linear types are not even perfect...). [...]
Nov 13 2014
On Thursday, 13 November 2014 at 09:29:22 UTC, Manu via Digitalmars-d wrote:Are you guys saying you don't feel this proposal is practical? http://wiki.dlang.org/User:Schuetzm/scope I think it's a very interesting approach, and comes from a practical point of view. It solves the long-standing issues, like scope return values, in a very creative way.You need to define ownership before defining borrowing.
Nov 13 2014
On 13 November 2014 19:56, deadalnix via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Thursday, 13 November 2014 at 09:29:22 UTC, Manu via Digitalmars-d wrote:I don't think this proposal has issues with that. The thing at the root of the call tree is the 'owner'. Nothing can escape a scope call-tree, so the owner or allocation policy doesn't matter, and that's the whole point.Are you guys saying you don't feel this proposal is practical? http://wiki.dlang.org/User:Schuetzm/scope I think it's a very interesting approach, and comes from a practical point of view. It solves the long-standing issues, like scope return values, in a very creative way.You need to define ownership before defining borrowing.
Nov 13 2014
On Thursday, 13 November 2014 at 10:32:05 UTC, Manu via Digitalmars-d wrote:I don't think this proposal has issues with that. The thing at the root of the call tree is the 'owner'. Nothing can escape a scope call-tree, so the owner or allocation policy doesn't matter, and that's the whole point.That is a way to define ownership, so that is not a rebuttal of my comment. This makes assumptions about ownership that we may or may not want; I think the proposal is sound overall (I haven't tried to explore all the special-case scenarios, so it is a reserved yes for now) but going forward with this before defining ownership is not a good approach.
Nov 13 2014
On 14 November 2014 09:28, deadalnix via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Thursday, 13 November 2014 at 10:32:05 UTC, Manu via Digitalmars-d wrote:I'm super happy you're on board with this. You're often a hard sell :) What about the definition of 'ownership' do you find problematic? It's clear we're not going to get multiple pointer types (read: owner types) like in Rust... so then ownership strategy will probably remain fairly arbitrary. The point of scope seems to be precisely to set the problem of ownership aside. That leaves ownership at the root of the call-tree to be managed however the user likes. Once we have scope, then we can have meaningful implementations of things like a unique pointer, and finally approach efficient RC. Personally, it has proven to be the most inhibiting barrier left to further development in most areas I care about.I don't think this proposal has issues with that. The thing at the root of the call tree is the 'owner'. Nothing can escape a scope call-tree, so the owner or allocation policy doesn't matter, and that's the whole point.That is a way to define ownership, so that is not a rebuttal of my comment. This makes assumptions about ownership that we may or may not want; I think the proposal is sound overall (I haven't tried to explore all the special-case scenarios, so it is a reserved yes for now) but going forward with this before defining ownership is not a good approach.
Nov 13 2014
On Friday, 14 November 2014 at 03:20:42 UTC, Manu via Digitalmars-d wrote:I'm super happy you're on board with this. You're often a hard sell :) What about the definition of 'ownership' do you find problematic?I don't find it problematic. However, the concept of borrowing (what we do with scope in that proposal) makes assumptions about ownership. So I'd like to see ownership addressed. Ultimately, we can decide that ownership is loosely defined, but that may have very important consequences on the possibility - or not - of introducing ownership. I think ownership is an important concept to have to get the GC world and the non-GC world to interact nicely (which should be a big interest of yours as well).It's clear we're not going to get multiple pointer types (read: owner types) like in Rust... so then ownership strategy will probably remain fairly arbitrary.I think it makes sense to have something for ownership. Rust's error wasn't going down that road, but going down it 100%, which comes at a cost at the interface level that is too high. A simpler ownership system can fall back on the GC or unsafe features when it falls short. I'm confident at this point that we can get most of the benefit of an ownership system with something way simpler than Rust's system if you accept not covering 100% of the scenarios.Once we have scope, then we can have meaningful implementations of things like a unique pointer, and finally approach efficient RC. Personally, it has proven to be the most inhibiting barrier left to further development in most areas I care about.Yes, that is absolutely necessary to have safe RC, and a great tool to make it more efficient. I'm not a fan of unique, as it is, IMO, a dumbed-down version of what ownership can be, with no real upside.
Nov 13 2014
I think it makes sense to have something for ownership. Rust's error wasn't going down that road, but going down it 100%, which comes at a cost at the interface level that is too high. A simpler ownership system can fall back on the GC or unsafe features when it falls short. I'm confident at this point that we can get most of the benefit of an ownership system with something way simpler than Rust's system if you accept not covering 100% of the scenarios.Do you happen to have any concrete reasons for that? An example maybe? Maybe start by explaining in detail how Rust's system is too complex? I'm sure the Rust people will be interested in how you can simplify a (most likely sound) type system that took years to come up with and refine.
Nov 14 2014
On Friday, 14 November 2014 at 14:59:39 UTC, Araq wrote:I'm not sure we understand "Rust's type system is too complicated" the same way. Let's be clear: There is no accidental complexity in Rust's type system. It is sound and very powerful. There is no way I can think of to make it simpler. That being said, there are cases where Rust's type system shines, for instance tree-like data structures with the same lifetime, passing down immutable objects to pure functions and so on. But there are also cases when it becomes truly infamous, like a digraph of objects with disparate lifetimes. Rust made the choice to have this safe memory management that does not rely on the GC, so they have to handle the infamous cases. This requires a rich and complex type system. My point is that we can support the nice cases with something much simpler, while delegating the infamous ones to the GC or unsafe constructs. The good news is that the nice cases are more common than the hard ones (or Rust would be absolutely unusable) so we can reap most of the benefits of a Rust-like approach while introducing much less complexity in the language. From a cost-benefit perspective, this seems like the right way forward to me. To quote the guy from the PL for video games video series, an 85% solution is often preferable.I think it makes sense to have something for ownership. Rust's error wasn't going down that road, but going down it 100%, which comes at a cost at the interface level that is too high. A simpler ownership system can fall back on the GC or unsafe features when it falls short. I'm confident at this point that we can get most of the benefit of an ownership system with something way simpler than Rust's system if you accept not covering 100% of the scenarios.Do you happen to have any concrete reasons for that? An example maybe? Maybe start by explaining in detail how Rust's system is too complex? 
I'm sure the Rust people will be interested in how you can simplify a (most likely sound) type system that took years to come up with and refine.
Nov 14 2014
On 11/14/2014 4:32 PM, deadalnix wrote:To quote the guy from the PL for video games video series, an 85% solution is often preferable.Spoken like a true engineer!
Nov 15 2014
On Sunday, 16 November 2014 at 03:27:54 UTC, Walter Bright wrote:On 11/14/2014 4:32 PM, deadalnix wrote:More like a consultant for self-help: http://www.amazon.com/85%25-Solution-Personal-Accountability-Guarantees/dp/0470500166 Real-world 85% engineered solutions: 1. Titanic 2. Chernobyl 3. Challenger 4. C++ …To quote the guy from the PL for video games video series, an 85% solution is often preferable.Spoken like a true engineer!
Nov 16 2014
On 11/16/2014 3:30 AM, "Ola Fosheim Grøstad"Real-world 85% engineered solutions: 1. TitanicEveryone likes to rag on the Titanic's design, but I've read a fair amount about it, and it's quite an unfair rap. It was, for its day, the safest ship afloat, and did represent a significant step forward in safety: 1. The watertight compartments were innovative and kept the Titanic afloat for hours. Without them, it would have sunk very quickly. The damage the Titanic suffered was very unusual in its extensiveness, and would have sunk any ship of the day. 2. The wireless was new and state of the art; without it the Titanic would have sunk with all aboard without a trace, and what happened to it would have been a great mystery. The fault with the wireless had nothing to do with its engineering, but with its management (the Californian did not keep a 24-hour watch on the radio). 3. The hull steel was inferior by today's standards, but was the best available by the standards of its time. 4. The rudder was inadequate, but little was known at the time about how such large ships would handle, and they didn't exactly have computer simulation software available. 5. The oft-repeated thing about the lifeboats was a little unreasonable. The way ships usually sink it's very difficult to launch any lifeboats successfully. If the ship listed, the boats on the high side could not be launched at all, and if it tilted down at a steeper angle none of them could be launched. The way the Titanic sank, slowly and fairly levelly, enabling nearly all the boats to be launched, was very unusual. The idea was that with the watertight compartments it would sink slowly enough that the boats could be used to ferry the passengers to safety. That in fact would have worked if the Californian had been monitoring the wireless. It's unfair to apply the hubris of hindsight. Apply instead the standards and practices of the foresight, and the Titanic comes off very well. 
It was not designed to drive full speed into an iceberg, and modern ships can't handle that, either. Actually, the Titanic would likely have fared better than modern ships if it didn't try to turn but simply rammed it head on. The watertight compartments would have kept it afloat. For comparison, look what happened to that Italian cruise ship a few years ago. It got a minor hole punched in the side by a rock, rolled over and sank.
Nov 16 2014
On Sunday, 16 November 2014 at 17:46:09 UTC, Walter Bright wrote:Everyone likes to rag on the Titanic's design, but I've read a fair amount about it, and it's quite an unfair rap. It was, for its day, the safest ship afloat, and did represent a significant step forward in safety:«The 20 lifeboats that she did carry could only take 1,178 people, even though there were about 2,223 on board.» http://en.wikipedia.org/wiki/Lifeboats_of_the_RMS_Titanic That's not even an 85% solution; it is a 53% solution.It's unfair to apply the hubris of hindsight. Apply instead the standards and practices of the foresight, and the Titanic comes off very well.I don't know, my grandfather's uncle went with one of the expeditions around Greenland and they did not sink. That ship (Fram) was designed to be frozen into the ice, as it was intended to be used to reach the north pole. The shape of the hull was designed to "pop out" of the ice rather than being pushed down, so that the ship could float over the Arctic as part of the ice. It was later used for several trips, notably the famous trip to reach the south pole. That's a lot closer to a 100% engineering solution!It was not designed to drive full speed into an iceberg, and modern ships can't handle that, either.It was not sane to drive at full speed, I guess. There was a lot of arrogance in the execution around Titanic; both leaving with insufficient lifeboats and driving at full speed suggest a lack of understanding… Returning to programming languages: if I cannot implement 100% of my design with a language then it is a non-solution. 85% is not enough. In business applications people sometimes have to settle for ready-made 85% solutions and change their business practices to get the last 15%, but that is not good enough for systems programming IMO. That's how you think about frameworks, but not how you think about language design (or a system level runtime).
Nov 16 2014
On 11/16/2014 10:27 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:Returning to programming languages: if I cannot implement 100% of my design with a language then it is a non-solution. 85% is not enough.You can do anything with a language if it is Turing complete.In business applications people sometimes have to settle for ready-made 85% solutions and change their business practices to get the last 15%, but that is not good enough for systems programming IMO. That's how you think about frameworks, but not how you think about language design (or system level runtime).Be careful you don't fall into kitchen sink syndrome. Add enough features, and the language becomes unusable. Features are almost never orthogonal, they always interact and interfere with each other. This applies to all engineering, not just languages. There are no 100% solutions. For example, D doesn't support multiple inheritance. This is on purpose. Yes, some C++ programmers believe D is badly broken because of this. I don't at all believe it is unreasonable that one should make adaptations in design in order to use a language successfully. After all, D is explicitly designed to be a "pragmatic" language.
Nov 16 2014
On Sunday, 16 November 2014 at 18:36:01 UTC, Walter Bright wrote:You can do anything with a language if it is Turing complete.That's not the point. If you have to avoid features because they aren't general enough, or have to change the design to fit the language and not the hardware, then the language design becomes a problem.Be careful you don't fall into kitchen sink syndrome. Add enough features, and the language becomes unusable. Features are almost never orthogonal, they always interact and interfere with each other. This applies to all engineering, not just languages. There are no 100% solutions.I think C is pretty close to a 98% solution for system level programming. Granted, it relies on macros to reach that.For example, D doesn't support multiple inheritance. This is on purpose. Yes, some C++ programmers believe D is badly broken because of this. I don't at all believe it is unreasonable that one should make adaptations in design in order to use a language successfully. After all, D is explicitly designed to be a "pragmatic" language.I am not sure if OO-inheritance and virtual functions are all that important for system level programming, but generally features should work across the board if they can be implemented efficiently. E.g. creating "weird" typing rules due to ease of implementation does not sit well with me. Or to put it in simpler terms: figuring out how a programming language works is a necessary investment, but having to figure out how a programming language does not work, and when, is really annoying.
Nov 16 2014
On 11/16/2014 10:52 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:I think C is pretty close to a 98% solution for system level programming.Not at all in my view. It has two miserable failures: 1. C's Biggest Mistake http://www.drdobbs.com/architecture-and-design/cs-biggest-mistake/228701625 This made C far, far more difficult and buggy to work with than it should have been. 2. 0 terminated strings This makes it surprisingly difficult to do performant string manipulation, and also results in excessive memory consumption.Granted, it relies on macros to reach that.And it's a crummy macro system, even for its day. My above remarks should be put in the context of when C was designed. As with the Titanic, it is unfair to apply modern sensibilities to it. But if we were to, a vast amount of C could be dramatically improved without changing its fundamental nature.
Nov 16 2014
On Sunday, 16 November 2014 at 19:24:47 UTC, Walter Bright wrote:This made C far, far more difficult and buggy to work with than it should have been.Depends on your view of C; if you view C as a step above assembly then it makes sense to treat everything as pointers. It is a bit confusing in the beginning since it is more or less unique to C.2. 0 terminated strings This makes it surprisingly difficult to do performant string manipulation, and also results in excessive memory consumption.Whether using sentinels is slow or fast depends on what you want to do, but it arguably saves space for small strings (add a length + alignment and you lose ~6 bytes). Also dealing with a length means you cannot keep everything in registers on simple CPUs. A lexer that takes zero-terminated input is a lot easier to write and make fast than one that uses a length. Nothing prevents you from creating a slice as a struct though.sensibilities to it. But if we were to, a vast amount of C could be dramatically improved without changing its fundamental nature.To me the fundamental nature of C is: 1. I can visually imagine how the code maps onto the hardware 2. I am not bound to a complicated runtime
Nov 16 2014
On 11/16/2014 11:59 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:On Sunday, 16 November 2014 at 19:24:47 UTC, Walter Bright wrote:If you read my article, the fix does not take away anything.This made C far, far more difficult and buggy to work with than it should have been.Depends on your view of C; if you view C as a step above assembly then it makes sense to treat everything as pointers.I've worked enough with C to know that these arguments do not hold up in real code.2. 0 terminated strings This makes it surprisingly difficult to do performant string manipulation, and also results in excessive memory consumption.Whether using sentinels is slow or fast depends on what you want to do, but it arguably saves space for small strings (add a length + alignment and you lose ~6 bytes). Also dealing with a length means you cannot keep everything in registers on simple CPUs. A lexer that takes zero-terminated input is a lot easier to write and make fast than one that uses a length.Nothing prevents you from creating a slice as a struct though.I've tried that, too. Doesn't work - the C runtime library prevents it, as well as every other library.None of the fixes I've suggested impair that in any way.sensibilities to it. But if we were to, a vast amount of C could be dramatically improved without changing its fundamental nature.To me the fundamental nature of C is: 1. I can visually imagine how the code maps onto the hardware 2. I am not bound to a complicated runtime
Nov 16 2014
On Sunday, 16 November 2014 at 20:26:36 UTC, Walter Bright wrote:If you read my article, the fix does not take away anything.Yes, but that is just what all other languages had at the time, so leaving it out was obviously deliberate. I assume they wanted a very simple model where each parameter could fit in a register.I've worked enough with C to know that these arguments do not hold up in real code.But you have to admit that older CPUs/tight RAM does have an effect? Even the 8086 has dedicated string instructions with the ability to terminate on zero (REPNZ)
Nov 16 2014
On 11/16/2014 12:44 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:On Sunday, 16 November 2014 at 20:26:36 UTC, Walter Bright wrote:Since structs were supported, this rationale does not work.If you read my article, the fix does not take away anything.Yes, but that is just what all other languages had at the time, so leaving it out was obviously deliberate. I assume they wanted a very simple model where each parameter could fit in a register.Remember that I wrote successful C and C++ compilers for 16-bit 8086 machines, and programmed on it for a decade. I know about those instructions, and I'm familiar with the tradeoffs. It's not worth it. Besides, C was designed for the PDP-11, which had no such instructions.I've worked enough with C to know that these arguments do not hold up in real code.But you have to admit that older CPUs/tight RAM does have an effect? Even the 8086 has dedicated string instructions with the ability to terminate on zero (REPNZ)
Nov 16 2014
On Sunday, 16 November 2014 at 21:54:40 UTC, Walter Bright wrote:Besides, C was designed for the PDP-11, which had no such instructions.BTW, this is not entirely correct. It had autoincrement on registers. This is the example given on Wikipedia: MOV #MSG,R1 1$: MOVB (R1)+,R0 BEQ DONE .TTYOUT BR 1$ .EXIT MSG: .ASCIZ /Hello, world!/ The full example: http://en.wikipedia.org/wiki/MACRO-11 So the print loop is 4 instructions (I assume .TTYOUT is an I/O instruction); with a length you would have at least 5 instructions and use an extra register, as you would have an additional compare. (As for concat, I almost never use it. In systems programming you mostly append to buffers and flush when the buffer is full. Don't need length for that. Even in javascript and python I avoid regular concat due to the inefficiency of concat versus a buffered join.)
Nov 18 2014
On 11/18/2014 9:01 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:On Sunday, 16 November 2014 at 21:54:40 UTC, Walter Bright wrote:Those are not dedicated string instructions. Autoincrement was an addressing mode that could be used with any register and any instruction, including the stack and program counter (!). Autoincrement/autodecrement gave rise to *p++ and *p-- in C.Besides, C was designed for the PDP-11, which had no such instructions.BTW, this is not entirely correct. It had autoincrement on registers.This is the example given on Wikipedia: MOV #MSG,R1 1$: MOVB (R1)+,R0 BEQ DONE .TTYOUT BR 1$ .EXIT MSG: .ASCIZ /Hello, world!/ The full example: http://en.wikipedia.org/wiki/MACRO-11More than destroyed by every time you have to call strlen().So the print loop is 4 instructions (I assume .TTYOUT is an I/O instruction); with a length you would have at least 5 instructions and use an extra register, as you would have an additional compare..TTYOUT is a macro that expands to code that calls the operating system. The 11 doesn't have I/O instructions.(As for concat, I almost never use it. In systems programming you mostly append to buffers and flush when the buffer is full. Don't need length for that.Uh, you need the length to determine when "the buffer is full".Even in javascript and python I avoid regular concat due to the inefficiency of concat versus a buffered join.)Just try to manipulate paths, filenames, and extensions without using strlen() and strcat(). Your claim that C string code doesn't use strlen() is patently absurd. Besides, you wouldn't be using javascript or python if efficiency mattered.
Nov 18 2014
On Tuesday, 18 November 2014 at 21:07:22 UTC, Walter Bright wrote:Those are not dedicated string instructions. Autoincrement was an addressing mode that could be used with any register and any instruction, including the stack and program counter (!).Yes, Motorola 68000 also had those. Very useful combined with sentinels! ;^) It was one of those things that made the 68K asm feel a bit like a high level language.Autoincrement/autodecrement gave rise to *p++ and *p-- in C.Might have, but not from PDP-11. It came to C from B which predated PDP-11.More than destroyed by every time you have to call strlen().Don't! And factor in the performance loss coming from reading from punched tape… ;-) (Actually sentinels between fields are also better for recovery if you have data corruption in files, although there are many other solutions, but this is a non-problem today.).TTYOUT is a macro that expands to code that calls the operating system. The 11 doesn't have I/O instructions.Ah, ok, so it was a system call.Uh, you need the length to determine when "the buffer is full".For streaming: fixed size, modulo 2. For allocating: worst case allocate, then release.Just try to manipulate paths, filenames, and extensions without using strlen() and strcat(). Your claim that C string code doesn't use strlen() is patently absurd.No, I don't claim that I never used strlen(), but I never used strcat() IIRC, and never had the need to repeatedly call strlen() on long strings. Long strings would usually sit in a struct where there is space for a length. Slightly annoying, but not the big deal. Filenames are easy, just allocate a large fixed size buffer, then fill in. open(). reuse buffer.Besides, you wouldn't be using javascript or python if efficiency mattered.Actually, lately most of my efficiency related programming is done in javascript! I spend a lot of time breaking up javascript code into async calls to get good responsiveness. 
But most of my efficiency-related problems are in browser-engine layout code (not javascript) that I have to work around somehow. Javascript in isolation is getting insanely fast in the last generations of browser JITs. It is almost a bit scary, because that means that we might be stuck with it forever…
Nov 18 2014
On 11/18/2014 1:56 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:Filenames are easy, just allocate a large fixed size buffer, then fill in. open(). reuse buffer.char s[] = "filename.ext"; foo(s[0..8]); But hey, it's simpler, faster, less code, less bug-prone, easier to understand and uses less memory to: 1. strlen 2. allocate 3. memcpy 4. append a 0, call foo, 5. free instead, right? I know you said "just allocate a large fixed size buffer", but I hope you realize that such practice is the root cause of most buffer overflow bugs, because the "640K is enough for anyone" approach just never works. And your "just use a struct" argument also promptly falls apart with: foo("string") Now, I know that you'll never concede destruction, after all, this is the internet, but give it up :-)
Nov 18 2014
On Wednesday, 19 November 2014 at 00:04:50 UTC, Walter Bright wrote:I know you're simply being argumentative when you defend VLAs, a complex and useless feature, and denigrate simple ptr/length pairs as complicated.Wait, we are either discussing the design goals of the original C or the evolved C. VLAs did not fit the original C either, but in the google discussion you find people who find VLAs very useful. It looks a lot better than alloca. The reason it is made optional is to make embedded-C compilers easier to write, I think.But hey, it's simpler, faster, less code, less bug prone, easier to understand and uses less memory to: 1. strlen 2. allocate… Not faster, but if speed is no concern, sure. It seldom is when it comes to filenames.I know you said "just allocate a large fixed size buffer", but I hope you realize that such practice is the root cause of most buffer overflow bugs,strcat() should never have been created, but strlcat is safe.Now, I know that you'll never concede destruction, after all, this is the internet, but give it up :-)I always concede destruction :-)
Nov 19 2014
On Sun, 16 Nov 2014 19:59:52 +0000 via Digitalmars-d <digitalmars-d puremagic.com> wrote:A lexer that takes zero-terminated input is a lot easier to write and make fast than one that uses a length.that's why warp is faster than cpp? ;-)
Nov 16 2014
On Sunday, 16 November 2014 at 22:00:10 UTC, ketmar via Digitalmars-d wrote:that's why warp is faster than cpp? ;-)Which implementation of cpp? (Btw, take a look at lexer.c in DMD :-P)
Nov 16 2014
On Sun, 16 Nov 2014 22:09:00 +0000 via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Sunday, 16 November 2014 at 22:00:10 UTC, ketmar via Digitalmars-d wrote:gcc implementation, afair. its slowness was the reason for warping.that's why warp is faster than cpp? ;-)Which implementation of cpp?(Btw, take a look at lexer.c in DMD :-P)c++ has no good string type, so there is not much choice. as a writer of at least four "serious" scripting languages (and a lot more as "toy" ones) i can tell you that zero-terminated strings are a PITA. the only sane way to write a good lexer is working with a structure which emulates D strings and slicing (if we must parse text from an in-memory buffer, of course).
Nov 16 2014
On Sunday, 16 November 2014 at 22:18:51 UTC, ketmar via Digitalmars-d wrote:On Sun, 16 Nov 2014 22:09:00 +0000 via Digitalmars-d <digitalmars-d puremagic.com> wrote:Ok, I haven't seen an independent benchmark, but I believe clang is faster. But… https://github.com/facebook/warp/blob/master/lexer.d#L173On Sunday, 16 November 2014 at 22:00:10 UTC, ketmar via Digitalmars-d wrote:gcc implementation, afair. its slowness was the reason for warping.that's why warp is faster than cpp? ;-)Which implementation of cpp?a PITA. the only sane way to write a good lexer is working with a structure which emulates D strings and slicing (if we must parse text from an in-memory buffer, of course).Nah, if you know that the file ends with zero then you can build an efficient finite automaton as a classifier.
Nov 16 2014
On Sun, 16 Nov 2014 22:22:42 +0000 via Digitalmars-d <digitalmars-d puremagic.com> wrote:Nah, if you know that the file ends with zero then you can build an efficient finite automaton as a classifier.FSA code is a fsckn mess. either adding a dependency on an external tool and a lot of messy output to the project, or writing that messy code manually. and an FSA is not necessarily faster, as it's bigger and so it trashes the CPU cache.
Nov 16 2014
On 11/16/2014 2:22 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:On Sunday, 16 November 2014 at 22:18:51 UTC, ketmar via Digitalmars-d wrote:Notice the total lack of strlen()'s in Warp.On Sun, 16 Nov 2014 22:09:00 +0000 via Digitalmars-d <digitalmars-d puremagic.com> wrote:Ok, I haven't seen an independent benchmark, but I believe clang is faster. But… https://github.com/facebook/warp/blob/master/lexer.d#L173On Sunday, 16 November 2014 at 22:00:10 UTC, ketmar via Digitalmars-d wrote:gcc implementation, afair. its slowness was the reason for warping.that's why warp is faster than cpp? ;-)Which implementation of cpp?Nah, if you know that the file ends with zero then you can build an efficient finite automaton as a classifier.deadalnix busted that myth a while back with benchmarks.
Nov 16 2014
On Monday, 17 November 2014 at 01:39:38 UTC, Walter Bright wrote:Notice the total lack of strlen()'s in Warp.Why would you need that? You know where the lexeme begins and ends? If we are talking about old architectures you have to acknowledge that storage was at a premium and that the major cost was getting the strings into memory in the first place.I haven't seen it, but it is difficult to avoid lexers being bandwidth-limited these days. Besides, how do you actually implement a lexer without constructing an FA one way or the other?Nah, if you know that the file ends with zero then you can build an efficient finite automaton as a classifier.deadalnix busted that myth a while back with benchmarks.
Nov 16 2014
On 11/16/2014 5:43 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:On Monday, 17 November 2014 at 01:39:38 UTC, Walter Bright wrote:The preprocessor stores lots of strings. Things like identifiers, keywords, string literals, expanded macro text, etc. The C preprocessor I wrote in C years ago is filled with strlen(), as is about every C string processing program ever written. Heck, how do you think strcat() works? (Another problem with strlen() is that the string pointed to is in a different piece of memory, and it'll have to be loaded into the cache to scan for the 0. Whereas with slices, the length data is in the hot cache.)Notice the total lack of strlen()'s in Warp.Why would you need that? You know where the lexeme begins and ends? If we are talking about old architectures you have to acknowledge that storage was at a premium and that the major cost was getting the strings into memory in the first place.It's in the n.g. archives somewhere in a thread about implementing lexers.I haven't seen it,Nah, if you know that the file ends with zero then you can build an efficient finite automaton as a classifier.deadalnix busted that myth a while back with benchmarks.but it is difficult to avoid lexers being bandwidth-limited these days. Besides, how do you actually implement a lexer without constructing an FA one way or the other?That's the wrong question. The question is: does a trailing sentinel result in a faster FA? deadalnix demonstrated that the answer is 'no'. You know, Ola, I've been in the trenches with this problem for decades. Sometimes I still learn something new, as I did with deadalnix's benchmark. But the stuff you are positing is well-trodden ground. There's a damn good reason why D uses slices and not 0-terminated strings.
Nov 17 2014
On Monday, 17 November 2014 at 10:18:41 UTC, Walter Bright wrote:(Another problem with strlen() is that the string pointed to is in a different piece of memory, and it'll have to be loaded into the cache to scan for the 0. Whereas with slices, the length data is in the hot cache.)Oh, I am not saying that strlen() is a good contemporary solution. I am saying that when you have <32KiB RAM total it makes sense to save space by not storing the string length.Well, then it is just words.It's in the n.g. archives somewhere in a thread about implementing lexers.deadalnix busted that myth a while back with benchmarks.I haven't seen it,That's the wrong question. The question is does a trailing sentinel result in a faster FA? deadalnix demonstrated that the answer is 'no'.I hear that, but the fact remains, you do less work. It should therefore be faster. So if it is not, then you're either doing something wrong or you have bubbles in the pipeline on a specific CPU that you fail to fill. On newer CPUs you have a tiny loop buffer for tight inner loops that runs microops without decode, you want to keep the codesize down there. Does it matter a lot in the world of SIMD? Probably not, but then you get a more complex lexer to maintain.deadalnix's benchmark. But the stuff you are positing is well-trodden ground. There's a damn good reason why D uses slices and not 0 terminated strings.I've never said that D should use 0 terminated strings. Now you twist the debate.
Nov 17 2014
Remember that the alternative to zero-terminated strings at that time was to have 2 string types, one with a one-byte length and one with a larger length. So I think C made the right choice for its time, to have a single string type without a length.
Nov 17 2014
On Monday, 17 November 2014 at 11:43:45 UTC, Ola Fosheim Grøstad wrote:Remember that the alternative to zero-terminated strings at that time was to have 2 string types, one with a one byte length and one with a larger length. So I think C made the right choice for it's time, to have a single string type without a length.Black hat hackers, virus and security tools vendors around the world rejoice at that decision... It was anything but right.
Nov 17 2014
On Monday, 17 November 2014 at 12:36:49 UTC, Paulo Pinto wrote:On Monday, 17 November 2014 at 11:43:45 UTC, Ola Fosheim Grøstad wrote:I don't think buffer overflow and string fundamentals are closely related, if used reasonably, but I'm not surprised you favour Pascal's solution of having two string types: one for strings up to 255 bytes and another one for longer strings. Anyway, here is the real reason for how C implemented strings: «None of BCPL, B, or C supports character data strongly in the language; each treats strings much like vectors of integers and supplements general rules by a few conventions. In both BCPL and B a string literal denotes the address of a static area initialized with the characters of the string, packed into cells. In BCPL, the first packed byte contains the number of characters in the string; in B, there is no count and strings are terminated by a special character, which B spelled `*e'. This change was made partially to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in our experience, less convenient than using a terminator. Individual characters in a BCPL string were usually manipulated by spreading the string out into another array, one character per cell, and then repacking it later; B provided corresponding routines, but people more often used other library functions that accessed or replaced individual characters in a string.» http://cm.bell-labs.com/cm/cs/who/dmr/chist.htmlRemember that the alternative to zero-terminated strings at that time was to have 2 string types, one with a one byte length and one with a larger length. So I think C made the right choice for it's time, to have a single string type without a length.Black hat hackers, virus and security tools vendors around the world rejoice of that decision... It was anything but right.
Nov 17 2014
On Monday, 17 November 2014 at 12:49:16 UTC, Ola Fosheim Grøstad wrote:On Monday, 17 November 2014 at 12:36:49 UTC, Paulo Pinto wrote:I am fully aware how UNIX designers decided to ignore the systems programming being done in Algol variants, PL/I variants and many other wannabe systems programming languages that came before C. Which they are repeating again with Go. -- PauloOn Monday, 17 November 2014 at 11:43:45 UTC, Ola Fosheim Grøstad wrote:I don't think buffer overflow and string fundamentals are closely related, if used reasonably, but I'm not surprised you favour Pascal's solution of having two string types: one for strings up to 255 bytes and another one for longer strings. Anyway, here is the real reason for how C implemented strings: «None of BCPL, B, or C supports character data strongly in the language; each treats strings much like vectors of integers and supplements general rules by a few conventions. In both BCPL and B a string literal denotes the address of a static area initialized with the characters of the string, packed into cells. In BCPL, the first packed byte contains the number of characters in the string; in B, there is no count and strings are terminated by a special character, which B spelled `*e'. This change was made partially to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in our experience, less convenient than using a terminator. 
Individual characters in a BCPL string were usually manipulated by spreading the string out into another array, one character per cell, and then repacking it later; B provided corresponding routines, but people more often used other library functions that accessed or replaced individual characters in a string.» http://cm.bell-labs.com/cm/cs/who/dmr/chist.htmlRemember that the alternative to zero-terminated strings at that time was to have 2 string types, one with a one byte length and one with a larger length. So I think C made the right choice for it's time, to have a single string type without a length.Black hat hackers, virus and security tools vendors around the world rejoice of that decision... It was anything but right.
Nov 17 2014
On Monday, 17 November 2014 at 13:39:05 UTC, Paulo Pinto wrote:I am fully aware how UNIX designers decided to ignore the systems programming being done in Algol variants, PL/I variants and many other wannabe systems programming languages that came before C.I wouldn't say that Algol is a systems programming language, and Pascal originally only had fixed width strings! (But Simula actually had decent GC backed string support with substrings pointing to the same buffer and a link to the full buffer from substrings, thus somewhat more advanced than D ;-)
Nov 17 2014
On 11/17/2014 3:43 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:Remember that the alternative to zero-terminated strings at that time was to have 2 string types, one with a one byte length and one with a larger length.No, that was not the alternative.
Nov 17 2014
On 11/17/2014 3:00 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:I am saying that when you have <32KiB RAM total it makes sense to save space by not storing the string length.I know what you're saying. You're saying without evidence that sentinels are faster. They are not. You're saying without evidence that 0 terminated strings use less memory. They do not. (It does not save space when "filename" and "filename.ext" cannot be overlapped.)
Nov 17 2014
On Monday, 17 November 2014 at 19:24:49 UTC, Walter Bright wrote:You're saying without evidence that sentinels are faster. They are not.You are twisting and turning so much in discussions that you make me dizzy. I've been saying that for SOME OPERATIONS they are, too, and that is not without evidence. Just plot it out for a 65xx, 680xx, Z80 etc CPU and it becomes self-evident. Any system level programmer should be able to do it in a few minutes. Using sentinels is a common trick for speeding up algorithms, it has some downsides, and some upsides, but they are used for a reason (either speed, convenience or both). Pretending that sentinels are entirely useless is not a sane line of argument. I use sentinels in many situations and for many purposes, and they can greatly speed up and/or simplify code.You're saying without evidence that 0 terminated strings use less memory. They do not. (It does not save space when "filename" and "filename.ext" cannot be overlapped.)0-terminated and shortstring (first byte being used for length) take the same amount of space, but permanent substring reference slices are very wasteful of memory for low memory situations: 1. you need a ref count on the base buffer (2-4 bytes) 2. you need a pointer to base + 2 offsets (4-12 bytes) And worst of all, you retain the whole buffer even if you only reference a tiny portion of it. Yuk! In such a use scenario you are generally better off with reallocation or compaction. For non-permanent substrings you can still use begin/end pointers. And please no, GC is not the answer, Simula had GC and the kind of strings and substrings you argued for, but it was not intended for system level programming and it was not resource efficient. It was convenient. Scripty style concatenation and substring slicing is fun, but it is not system level programming. System level programming is about taking control over the hardware and using it most efficiently. Abstractions "that lie" mess this up.
Is wasting space on meta information less critical today? YES, OF COURSE! It does matter that we have 100,000 times more RAM available.
Nov 17 2014
On 11/17/2014 1:08 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:I've been saying that for SOME OPERATIONS they are too, and that is not without evidence. Just plot it out for a 65xx, 680xx, Z80 etc CPU and it becomes self-evident. Any system level programmer should be able to do it in a few minutes.When designing a language data type, you don't design it for "some" operations. You design it so that it works best most of the time, or at least let the user decide. You can always add a sentinel for specific cases. But C forces its use for all strings for practical purposes. The design is backwards, and most of the time a sentinel is the wrong choice. BTW, I learned how to program on a 6800. I'm not ignorant of those machines. And frankly, C is too high level for the 6800 (and the other 8 bit CPUs). The idea that C maps well onto those processors is mistaken. Which is hardly surprising, as C was developed for the PDP-11, a 16 bit machine. Yes, I know that people did use C for 8 bit machines.
Nov 17 2014
On Monday, 17 November 2014 at 22:03:48 UTC, Walter Bright wrote:You can always add a sentinel for specific cases. But C forces its use for all strings for practical purposes. The design is backwards, and most of the time a sentinel is the wrong choice.Ok, but I would rather say it like this: the language C doesn't really provide strings, it only provides literals in a particular format. So the literal-format is a trade-off between having something generic and simple and having something more complex and possibly limited (having 255 char limit is not good enough in the long run). I think there is a certain kind of beauty to the minimalistic approach taken with C (well, at least after ANSI-C came about in the late 80s). I like the language better than the libraries…BTW, I learned how to program on a 6800. I'm not ignorant of those machines. And frankly, C is too high level for the 6800 (and the other 8 bit CPUs). The idea that C maps well onto those processors is mistaken.Yes I agree, but those instruction sets are simple. :-) With only 256 bytes of builtin RAM (IIRC) the 6800 was kind of skimpy on memory! We used it in high school for our classes in digital circuitry/projects. (It is very difficult to discuss performance on x86, there is just too much clutter and machinery in the core that can skew results.)
Nov 17 2014
On 11/17/2014 3:15 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:Ok, but I would rather say it like this: the language C doesn't really provide strings, it only provides literals in a particular format. So the literal-format is a trade-off between having something generic and simple and having something more complex and possibly limited (having 255 char limit is not good enough in the long run).The combination of the inescapable array-to-ptr decay when calling a function, coupled with the Standard library which is part of the language that takes char* as strings, means that for all practical purposes C does provide strings, and pretty much forces it on the programmer.I think there is a certain kind of beauty to the minimalistic approach taken with C (well, at least after ANSI-C came about in the late 80s). I like the language better than the libraries…C is a brilliant language. That doesn't mean it hasn't made serious mistakes in its design. The array decay and 0 strings have proven to be very costly to programmers over the decades.
Nov 17 2014
On Tuesday, 18 November 2014 at 02:35:41 UTC, Walter Bright wrote:On 11/17/2014 3:15 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:Heartbleed is a nice example. The amount of money in developer time, delivery software updates to customers and buying new hardware with firmware that cannot be replaced. This is just one case, the CVE List gets updated every day and 90% of the issues are the usual C suspects regarding pointer misuse and out of bounds. Anyone writing C code should by following practices like https://wiki.debian.org/Hardening -- PauloOk, but I would rather say it like this: the language C doesn't really provide strings, it only provides literals in a particular format. So the literal-format is a trade-off between having something generic and simple and having something more complex and possibly limited (having 255 char limit is not good enough in the long run).The combination of the inescapable array-to-ptr decay when calling a function, coupled with the Standard library which is part of the language that takes char* as strings, means that for all practical purposes C does provide strings, and pretty much forces it on the programmer.I think there is a certain kind of beauty to the minimalistic approach taken with C (well, at least after ANSI-C came about in the late 80s). I like the language better than the libraries…C is a brilliant language. That doesn't mean it hasn't made serious mistakes in its design. The array decay and 0 strings have proven to be very costly to programmers over the decades.
Nov 18 2014
On Tuesday, 18 November 2014 at 08:28:19 UTC, Paulo Pinto wrote:This is just one case, the CVE List gets updated every day and 90% of the issues are the usual C suspects regarding pointer misuse and out of bounds.Sure, but these are not strictly language issues since the same developers would turn off bounds-checking at the first opportunity anyway! Professionalism does not involve blaming the tool, it involves picking the right tools and process for the task. Unfortunately the IT industry has over time suffered from a lack of formal education and immature markets. Software is considered to work when it crashes only once every 24 hours; we would not accept that from any other utility. I've never heard anyone in academia claim that C is anything more than a small step up from assembler (i.e. low level), so why allow intermediate-skilled programmers to write C code if you for the same application would not allow an excellent programmer to write the same program in assembly (about the same risk of having a crash). People get what they deserve. Never blame the tool for bad management. You get to pick the tool and the process, right? Neither the tool nor testing will ensure correct behaviour on its own. You have many factors that need to play together (mindset, process and the tool set). If you want a compiler that works, you're probably better off writing it in ML than in C, but people implement it in C. Why? Because they FEEL like it… It is not rational. It is emotional.
Nov 18 2014
On 11/18/2014 4:18 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:Never blame the tool for bad management.To bring up the aviation industry again, they long ago recognized that "blame the pilot" and "blame the mechanics" is not how safe airplanes are made. They are made, in part, by fixing the tools so mistakes cannot happen, as even the best humans keep making mistakes. C is a mistake-prone tool, and suggesting that programmers get better educated about how to use it does not work. As I showed, a great deal of C's propensity for buffer overflows can be eliminated by a TRIVIAL change to the language, one that is fully backwards compatible, and takes NOTHING away from C's power. I've brought this up in conference presentations more than once, and the blank silence I get from C programmers just baffles me. Blaming the tools is often appropriate.
Nov 18 2014
On Tuesday, 18 November 2014 at 19:42:20 UTC, Walter Bright wrote:To bring up the aviation industry again, they long ago recognized that "blame the pilot" and "blame the mechanics" is not how safe airplanes are made. They are made, in part, by fixing the tools so mistakes cannot happen, as even the best humans keep making mistakes.Please note that I said it was a management issue. Clearly if management equip workers with unsafe tools that is bad. But there have always been safer tools available. It has always been possible to do things differently. It has always been possible to do risk assessment and adopt to it, in tools, education and process. I am sure the aviation industry is doing a lot better than the IT industry!Blaming the tools is often appropriate.If you are forced to use one while being asked to run for a deadline, sure.
Nov 18 2014
On Tuesday, 18 November 2014 at 12:18:16 UTC, Ola Fosheim Grøstad wrote:On Tuesday, 18 November 2014 at 08:28:19 UTC, Paulo Pinto wrote:There are good answers to most of this, but most importantly, it does not contain anything actionable and is completely off topic (reminder: the topic of the thread is SCOPE). Readers' time is precious; please don't waste it.This is just one case, the CVE List gets updated every day and 90% of the issues are the usual C suspects regarding pointer misuse and out of bounds.Sure, but these are not a strict language issues since the same developers would turn off bounds-checking at the first opportunity anyway! Professionalism does not involve blaming the tool, it involves picking the right tools and process for the task. Unfortunately the IT industry has over time suffered from a lack of formal education and immature markets. Software is considered to work when it crash only once every 24 hours, we would not accept that from any other utility? I've never heard anyone in academia claim that C is anything more than a small step up from assembler (i.e. low level), so why allow intermediate skilled programmers to write C code if you for the same application would not allow an excellent programmer to write the same program in assembly (about the same risk of having a crash). People get what they deserve. Never blame the tool for bad management. You get to pick the tool and the process, right? Neither the tool or testing will ensure correct behaviour on its own. You have many factors that need to play together (mindset, process and the tool set). If you want a compiler that works, you're probably better off writing it in ML than in C, but people implement it in C. Why? Because they FEEL like it… It is not rational. It is emotional.
Nov 18 2014
On Wednesday, 19 November 2014 at 01:13:04 UTC, deadalnix wrote:There are good answer to most of this but most importantly, this do not contain anything actionable and is completely off topic( reminder, the topic of the thread is SCOPE ).The topic is whether "scope" is planned for deprecation. That has been answered. The thread has been off topic for a long time, so please don't use moderation for silencing an opposing viewpoint from a single party. Unless you are a moderator, but then you should also point to the rules of the forum. If you are a moderator, then you should first and foremost moderate people who go ad hominem without providing an identity. Self-appointed moderation will always lead to a very bad situation. If the forums need moderation, appoint a moderator.Reader's time is precious, please don't waste it.Then don't read. The topic of this thread was never actionable. Sounds like you want a pure developers forum. If you want a community you need a lower threshold general forum. "D" is the general forum until it has been defined to not be so, but you do need a low threshold forum. (I never give in to online bullies in unmoderated media… That gives them a foothold.)
Nov 19 2014
On Tuesday, 18 November 2014 at 02:35:41 UTC, Walter Bright wrote:C is a brilliant language. That doesn't mean it hasn't made serious mistakes in its design. The array decay and 0 strings have proven to be very costly to programmers over the decades.I'd rather say that it is the industry that has misappropriated C, which in my view basically was "typed portable assembly" with very few built-in presumptions by design. This is important when getting control over layout, and this transparency is a quality that only C gives me. BCPL might be considered to have more presumptions (such as string length), being a minimal "bootstrapping subset" of CPL. You always had the ability in C to implement arrays as a variable-sized struct with a length and a trailing data section, so I'd say that C provided type-safe variable-length arrays. Many people don't use it. Many people don't know how to use it. Ok, but then they don't understand that they are programming in a low level language and are responsible for creating their own environment. I think C's standard lib mistakenly created an illusion of high level programming that the language only partially supported. Adding the ability to transfer structs by value as a parameter was probably not worth the implementation cost at the time… Having a "magic struct/tuple" that transfers length or end pointer with the head pointer does not fit the C design. If added it should have been done as a struct, and to make that work you would have to add operator overloading. There's an avalanche effect of features and additional language design issues there. I think K&R deserves credit for being able to say no and stay minimal; I think the Go team deserves the same credit.
As you've experienced with D, saying no is hard because there are often good arguments for features being useful, and it is difficult to say in advance with certainty what kind of avalanche effect adding features has (in terms of semantics, special casing and new needs for additional support/features, time to complete implementation/debugging). So saying no until practice shows that a feature is sorely missed is a sign of good language design practice. The industry wanted portability and high speed and insisted on moving as a flock after C and BLINDLY after C++. Seriously, the media frenzy around C++ was hysterical despite C++ being a bad design from the start. The C++ media noise was worse than with Java IIRC. Media are incredibly shallow when they are trying to sell mags/books based on the "next big thing" and they can accelerate adoption beyond merits. Which both C++ and Java are two good examples of. There were alternatives such as Turbo Pascal, Modula-2/3, Simula, Beta, ML, Eiffel, Delphi and many more. Yet, programmers thought C was cool because it was "portable assembly" and "industry standard" and "fast" and "safe bet". So they were happy with it, because C compilers emitted fast code. And fast was more important to them than safe. Well, they got what they deserved, right? Not adding additional features is not a design mistake if you try hard to stay minimal and don't claim to support high level programming. The mistake is in using a tool as if it supports something it does not. You might be right that K&R set the bar too high for adding extra features. Yet others might be right that D has been too willing to add features. As you know, the perfect balance is difficult to find and it is dependent on the use context, so it materializes after the fact (after implementation). And C's use context has expanded way beyond the original use context where people were not afraid to write assembly.
(But the incomprehensible typing notation for function pointers was a design mistake since that was a feature of the language.)
Nov 18 2014
On Tuesday, 18 November 2014 at 11:15:28 UTC, Ola Fosheim Grøstad wrote:On Tuesday, 18 November 2014 at 02:35:41 UTC, Walter Bright wrote:Lint was created in 1979 when it was already clear most AT&T developers weren't writing correct C code!C is a brilliant language. That doesn't mean it hasn't made serious mistakes in its design. The array decay and 0 strings have proven to be very costly to programmers over the decades.I'd rather say that it is the industry that has misappropriated C, which in my view basically was "typed portable assembly" with very little builtin presumptions by design.I think K&R deserves credit for being able to say no and stay minimal, I think the Go team deserves the same credit.Of course, two of them are from the same team.The industry wanted portability and high speed and insisted moving as a flock after C and BLINDLY after C++. Seriously, the media frenzy around C++ was hysterical despite C++ being a bad design from the start. The C++ media noise was worse than with Java IIRC. Media are incredibly shallow when they are trying to sell mags/books based on the "next big thing" and they can accelerate adoption beyond merits. Which both C++ and Java are two good examples of. There were alternatives such as Turbo Pascal, Modula-2/3, Simula, Beta, ML, Eiffel, Delphi and many more. Yet, programmers thought C was cool because it was "portable assembly" and "industry standard" and "fast" and "safe bet".This was a consequence of UNIX spreading into the enterprise, like we have to endure JavaScript to target the browser, we were forced to code in C to target UNIX. Other OS just followed along, as we started to want to port those big iron utilities to smaller computers. If UNIX had been written in XPTO-LALA, we would all be coding in XPTO-LALA today. -- Paulo
Nov 18 2014
On Tuesday, 18 November 2014 at 12:02:01 UTC, Paulo Pinto wrote:On Tuesday, 18 November 2014 at 11:15:28 UTC, Ola Fosheim Grøstad wrote:Sure, but most operating system vendors considered it a strategic move to ensure availability of high level languages on their mainframes. E.g. Univac provided Algol and gave a significant rebate to the developers of Simula on the purchase of a Univac to ensure that Simula would be available for high level programming.I'd rather say that it is the industry that has misappropriated C, which in my view basically was "typed portable assembly" with very little builtin presumptions by design.Lint was created in 1979 when it was already clear most AT&T developers weren't writing correct C code!Nobody was forced to write code in C to target anything, it was a choice. And a choice that grew out of a focus on performance and the fact that people still dropped down to write machine language quite frequently. Mentality matters. Javascript is different, since it is "the exposed VM" in the browser, but even there you don't have to write in Javascript. You can write in a language that compiles to JavaScript.There were alternatives such as Turbo Pascal, Modula-2/3, Simula, Beta, ML, Eiffel, Delphi and many more. Yet, programmers thought C was cool because it was "portable assembly" and "industry standard" and "fast" and "safe bet".This was a consequence of UNIX spreading into the enterprise, like we have to endure JavaScript to target the browser, we were forced to code in C to target UNIX.
Nov 18 2014
On Tuesday, 18 November 2014 at 13:50:59 UTC, Ola Fosheim Grøstad wrote:On Tuesday, 18 November 2014 at 12:02:01 UTC, Paulo Pinto wrote: .... Nobody were forced to write code in C to target anything, it was a choice. And a choice that grew out of a focus on performance and the fact that people still dropped down to write machine language quit frequently. Mentality matters. Javascript is different, since it is "the exposed VM" in the browser, but even there you don't have to write in Javascript. You can write in a language that compiles to javascript.Since when do developers use a different systems programming language than the one sold by the OS vendor? Who has the pleasure to waste work hours writing FFI wrappers around SDK tools? All successful systems programming languages, even if only for a few years, were tied to a specific OS. -- Paulo
Nov 18 2014
On Tuesday, 18 November 2014 at 14:56:42 UTC, Paulo Pinto wrote:Since when do developers use a different systems programming language than the one sold by the OS vendor? Who has the pleasure to waste work hours writing FFI wrappers around SDK tools? All successful systems programming languages, even if only for a few years, were tied to a specific OS.Depends on what you mean by system programming. I posit that most programs that have been written in C are primarily application level programs. Meaning that you could factor out the C component as a tiny unit and write the rest in another language… Most high level languages provide integration with C. These things are entirely cultural. In the late 80s you could do the same stuff in Turbo Pascal as in C, and integrate with asm with no problem. Lots of decent software for MSDOS was written in TP, such as BBS server software dealing with many connections. On regular micros you didn't have a MMU so there was actually a great penalty for using an unsafe language even during development: the OS would reboot (or you would get the famous guru meditation on Amiga). That sucked.
Nov 18 2014
On Tuesday, 18 November 2014 at 15:36:58 UTC, Ola Fosheim Grøstad wrote:On Tuesday, 18 November 2014 at 14:56:42 UTC, Paulo Pinto wrote:In the 80's almost everything was system programming, even business applications. You are forgetting the UNIX factor again. We only had C available in UNIX systems as a compiled language. HP-UX was the only commercial UNIX I used where we had access to compilers for other languages. So who would pay for third party tooling, especially with the way software used to cost? Then of course, many wanted to do on their CP/M, Spectrum and similar systems the type of coding possible at work or university, which led to Small C and other C-based compilers, thus spreading the language outside UNIX.Since when do developers use a different systems programming language than the one sold by the OS vendor? Who has the pleasure to waste work hours writing FFI wrappers around SDK tools? All successful systems programming languages, even if only for a few years, were tied to a specific OS.Depends on what you mean by system programming. I posit that most programs that have been written in C are primarily application level programs. Meaning that you could factor out the C component as a tiny unit and write the rest in another language… Most high level languages provide integration with C. These things are entirely cultural.In the late 80s you could do the same stuff in Turbo Pascal as in C, and integrate with asm with no problem. Lots of decent software for MSDOS was written in TP, such as BBS server software dealing with many connections.I was doing Turbo Pascal most of the time; by the time I learned C with Turbo C 2.0, Turbo C++ 1.0 was just around the corner, and I only touched pure C again at teachers' and employers' request.On regular micros you didn't have a MMU so there was actually a great penalty for using an unsafe language even during development: the OS would reboot (or you would get the famous guru meditation on Amiga). 
That sucked.Amiga was programmed in Assembly. Except for Amos, we didn't use anything else. -- Paulo
Nov 18 2014
On 11/18/2014 3:15 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:On Tuesday, 18 November 2014 at 02:35:41 UTC, Walter Bright wrote:I'm sorry to say this, but these rationalizations as to why C cannot add a trivial enhancement that takes nothing away and solves most of the buffer overflow problems leaves me shaking my head. (C has added useless enhancements, like VLAs.)C is a brilliant language. That doesn't mean it hasn't made serious mistakes in its design. The array decay and 0 strings have proven to be very costly to programmers over the decades.I'd rather say that it is the industry that has misappropriated C, which in my view basically was "typed portable assembly" with very few built-in presumptions by design. This is important when getting control over layout, and this transparency is a quality that only C gives me. BCPL might be considered to have more presumptions (such as string length), being a minimal "bootstrapping subset" of CPL. You always had the ability in C to implement arrays as a variable-sized struct with a length and a trailing data section, so I'd say that C provided type-safe variable-length arrays. Many people don't use it. Many people don't know how to use it. Ok, but then they don't understand that they are programming in a low-level language and are responsible for creating their own environment. I think C's standard lib mistakenly created an illusion of high-level programming that the language only partially supported. Adding the ability to pass structs by value as parameters was probably not worth the implementation cost at the time… Having a "magic struct/tuple" that transfers a length or end pointer along with the head pointer does not fit the C design. If added, it should have been done as a struct, and to make that work you would have to add operator overloading. There's an avalanche effect of features and additional language design issues there. 
I think K&R deserve credit for being able to say no and stay minimal; I think the Go team deserves the same credit. As you've experienced with D, saying no is hard because there are often good arguments for features being useful, and it is difficult to say in advance with certainty what kind of avalanche effect adding features will have (in terms of semantics, special casing and new needs for additional support/features, time to complete implementation/debugging). So saying no until practice shows that a feature is sorely missed is a sign of good language design practice. The industry wanted portability and high speed and insisted on moving as a flock after C and BLINDLY after C++. Seriously, the media frenzy around C++ was hysterical despite C++ being a bad design from the start. The C++ media noise was worse than with Java, IIRC. Media are incredibly shallow when they are trying to sell mags/books based on the "next big thing", and they can accelerate adoption beyond merits. C++ and Java are both good examples of this. There were alternatives such as Turbo Pascal, Modula-2/3, Simula, Beta, ML, Eiffel, Delphi and many more. Yet programmers thought C was cool because it was "portable assembly" and "industry standard" and "fast" and a "safe bet". So they were happy with it, because C compilers emitted fast code. And fast was more important to them than safe. Well, they got what they deserved, right? Not adding additional features is not a design mistake if you try hard to stay minimal and don't claim to support high-level programming. The mistake is in using a tool as if it supports something it does not. You might be right that K&R set the bar too high for adding extra features. Yet others might be right that D has been too willing to add features. As you know, the perfect balance is difficult to find and is dependent on the use context, so it materializes after the fact (after implementation). 
And C's use context has expanded way beyond the original use context where people were not afraid to write assembly. (But the incomprehensible typing notation for function pointers was a design mistake since that was a feature of the language.)
Nov 18 2014
On Tue, Nov 18, 2014 at 11:45:13AM -0800, Walter Bright via Digitalmars-d wrote: [...]I'm sorry to say this, but these rationalizations as to why C cannot add a trivial enhancement that takes nothing away and solves most of the buffer overflow problems leaves me shaking my head. (C has added useless enhancements, like VLAs.)What's the trivial thing that will solve most buffer overflow problems? T -- Dogs have owners ... cats have staff. -- Krista Casada
Nov 18 2014
On 11/18/2014 12:10 PM, H. S. Teoh via Digitalmars-d wrote:On Tue, Nov 18, 2014 at 11:45:13AM -0800, Walter Bright via Digitalmars-d wrote:http://www.drdobbs.com/architecture-and-design/cs-biggest-mistake/228701625I'm sorry to say this, but these rationalizations as to why C cannot add a trivial enhancement that takes nothing away and solves most of the buffer overflow problems leaves me shaking my head.What's the trivial thing that will solve most buffer overflow problems?
Nov 18 2014
On Tue, Nov 18, 2014 at 12:44:35PM -0800, Walter Bright via Digitalmars-d wrote:On 11/18/2014 12:10 PM, H. S. Teoh via Digitalmars-d wrote:That's not a trivial change at all -- it will break pretty much every C program there is out there. Just think of how much existing C code relies on this conflation between arrays and pointers, and implicit conversions between them. Once you start going down that path, you might as well just start over with a brand new language. Which ultimately leads to D. :-P T -- Microsoft is to operating systems & security ... what McDonalds is to gourmet cooking.On Tue, Nov 18, 2014 at 11:45:13AM -0800, Walter Bright via Digitalmars-d wrote:http://www.drdobbs.com/architecture-and-design/cs-biggest-mistake/228701625I'm sorry to say this, but these rationalizations as to why C cannot add a trivial enhancement that takes nothing away and solves most of the buffer overflow problems leaves me shaking my head.What's the trivial thing that will solve most buffer overflow problems?
Nov 18 2014
On 11/18/2014 1:12 PM, H. S. Teoh via Digitalmars-d wrote:On Tue, Nov 18, 2014 at 12:44:35PM -0800, Walter Bright via Digitalmars-d wrote:No, I proposed a new syntax that would have different behavior: void foo(char a[..])On 11/18/2014 12:10 PM, H. S. Teoh via Digitalmars-d wrote:That's not a trivial change at all -- it will break pretty much every C program there is out there. Just think of how much existing C code relies on this conflation between arrays and pointers, and implicit conversions between them.On Tue, Nov 18, 2014 at 11:45:13AM -0800, Walter Bright via Digitalmars-d wrote:http://www.drdobbs.com/architecture-and-design/cs-biggest-mistake/228701625I'm sorry to say this, but these rationalizations as to why C cannot add a trivial enhancement that takes nothing away and solves most of the buffer overflow problems leaves me shaking my head.What's the trivial thing that will solve most buffer overflow problems?
Nov 18 2014
On Tue, Nov 18, 2014 at 02:14:20PM -0800, Walter Bright via Digitalmars-d wrote:On 11/18/2014 1:12 PM, H. S. Teoh via Digitalmars-d wrote:Ah, I see. How would that be different from just declaring an array struct and using it pervasively? Existing C code would not benefit from such an addition without a lot of effort put into refactoring. T -- If you look at a thing nine hundred and ninety-nine times, you are perfectly safe; if you look at it the thousandth time, you are in frightful danger of seeing it for the first time. -- G. K. ChestertonOn Tue, Nov 18, 2014 at 12:44:35PM -0800, Walter Bright via Digitalmars-d wrote:No, I proposed a new syntax that would have different behavior: void foo(char a[..])On 11/18/2014 12:10 PM, H. S. Teoh via Digitalmars-d wrote:That's not a trivial change at all -- it will break pretty much every C program there is out there. Just think of how much existing C code relies on this conflation between arrays and pointers, and implicit conversions between them.On Tue, Nov 18, 2014 at 11:45:13AM -0800, Walter Bright via Digitalmars-d wrote:http://www.drdobbs.com/architecture-and-design/cs-biggest-mistake/228701625I'm sorry to say this, but these rationalizations as to why C cannot add a trivial enhancement that takes nothing away and solves most of the buffer overflow problems leaves me shaking my head.What's the trivial thing that will solve most buffer overflow problems?
Nov 18 2014
On 11/18/2014 2:35 PM, H. S. Teoh via Digitalmars-d wrote:On Tue, Nov 18, 2014 at 02:14:20PM -0800, Walter Bright via Digitalmars-d wrote:foo("string"); won't work using a struct parameter.On 11/18/2014 1:12 PM, H. S. Teoh via Digitalmars-d wrote: No, I proposed a new syntax that would have different behavior: void foo(char a[..])Ah, I see. How would that be different from just declaring an array struct and using it pervasively?Existing C code would not benefit from such an addition without a lot of effort put into refactoring.Except that the syntax is not viral and can be adopted as convenient. And, as mentioned in the article, people do make the effort to do other, much more complicated, schemes.
Nov 18 2014
On Tuesday, 18 November 2014 at 19:45:12 UTC, Walter Bright wrote:I'm sorry to say this, but these rationalizations as to why C cannot add a trivial enhancement that takes nothing away and solves most of the buffer overflow problems leaves me shaking my head. (C has added useless enhancements, like VLAs.)So useless that it became optional in C11. https://groups.google.com/forum/#!topic/comp.std.c/AoB6LFHcd88
Nov 18 2014
On 11/18/2014 12:53 PM, Paulo Pinto wrote:(C has added useless enhancements, like VLAs.)So useless that it became optional in C11. https://groups.google.com/forum/#!topic/comp.std.c/AoB6LFHcd88Note the Rationale given:
- Putting arbitrarily large arrays on the stack causes trouble in multithreaded programs in implementations where stack growth is bounded.
- There's no way to recover from an out-of-memory condition when allocating a VLA.
- Microsoft declines to support them.
- VLAs aren't used much. There appear to be only three in Google Code, and no VLA parameters. The Linux kernel had one, but it was taken out because there was no way to handle an out-of-space condition. (If anyone can find an example of a VLA parameter in publicly visible production code, please let me know.)
- The semantics of VLA parameters are painful. They're automatically reduced to pointers, with the length information lost. "sizeof" returns the size of a pointer.
- Prototypes of functions with VLA parameters do not have to exactly match the function definition. This is incompatible with C++-style linkage and C++ function overloading, preventing the extension of this feature into C++.
-- John Nagle
Nov 18 2014
On Tuesday, 18 November 2014 at 19:45:12 UTC, Walter Bright wrote:I'm sorry to say this, but these rationalizations as to why C cannot add a trivial enhancement that takes nothing away andThey can add whatever they want. I am arguing against the position that it was a design mistake to keep the semantic model simple and with few presumptions. On the contrary, it was the design goal. Another goal for a language like C is ease of implementation so that you can easily port it to new hardware. The original C was a very simple language. Most decent programmers can create their own C-compiler (but not a good optimizer).(C has added useless enhancements, like VLAs.)VLAs have been available in gcc for a long time. They are not useless, I've used them from time to time.
Nov 18 2014
On 11/18/2014 1:23 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:I am arguing against the position that it was a design mistake to keep the semantic model simple and with few presumptions. On the contrary, it was the design goal. Another goal for a language like C is ease of implementation so that you can easily port it to new hardware.The proposals I made do not change that in any way, and if K&R designed C without those mistakes, it would have not made C more complex in the slightest.VLAs have been available in gcc for a long time. They are not useless, I've used them from time to time.I know you're simply being argumentative when you defend VLAs, a complex and useless feature, and denigrate simple ptr/length pairs as complicated.
Nov 18 2014
On Tuesday, 18 November 2014 at 23:48:27 UTC, Walter Bright wrote:On 11/18/2014 1:23 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:Argumentative ?!! More like a fucking gaping fucking asshole. His posts are the blight of this group.I am arguing against the position that it was a design mistake to keep the semantic model simple and with few presumptions. On the contrary, it was the design goal. Another goal for a language like C is ease of implementation so that you can easily port it to new hardware.The proposals I made do not change that in any way, and if K&R designed C without those mistakes, it would have not made C more complex in the slightest.VLAs have been available in gcc for a long time. They are not useless, I've used them from time to time.I know you're simply being argumentative when you defend VLAs, a complex and useless feature, and denigrate simple ptr/length pairs as complicated.
Nov 18 2014
On Wednesday, 19 November 2014 at 01:35:19 UTC, Alo Miehsof Datsørg wrote:Argumentative ?!! More like a fucking gaping fucking asshole. His posts are the blight of this group.If you are going ad hominem, please post under your own name. I never go ad hominem, and therefore your response will achieve the exact opposite of what you are trying to achieve to ensure that ad hominem does not become an acceptable line of action.
Nov 19 2014
On 11/18/2014 5:35 PM, "Alo Miehsof Datsørg" <Ola.Fosheim.Grostad sucks.goat.ass>" wrote:Argumentative ?!! More like a fucking gaping fucking asshole. His posts are the blight of this group.Rude posts are not welcome here.
Nov 20 2014
On Wednesday, 19 November 2014 at 01:35:19 UTC, Alo Miehsof Datsørg wrote:Argumentative ?!! More like a fucking gaping fucking asshole. His posts are the blight of this group.Wow that's uncalled for. I don't always agree with Ola but his posts are rarely uninformed and often backed up with actual code examples or links supporting his arguments. They generally lead to very interesting discussions on the forum. Cheers, uri
Nov 20 2014
On Monday, 17 November 2014 at 19:24:49 UTC, Walter Bright wrote:On 11/17/2014 3:00 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:Stop wasting time with the mouth breather.I am saying that when you have <32KiB RAM total it makes sense to save space by not storing the string length.I know what you're saying. You're saying without evidence that sentinels are faster. They are not. You're saying without evidence that 0 terminated strings use less memory. They do not. (It does not save space when "filename" and "filename.ext" cannot be overlapped.)
Nov 17 2014
On Tuesday, 18 November 2014 at 04:58:43 UTC, Anonymous Coward wrote:Stop wasting time with the mouth breather.Please write under your full name.
Nov 18 2014
On 16.11.2014 at 20:59, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:On Sunday, 16 November 2014 at 19:24:47 UTC, Walter Bright wrote:My view is of a "kind of" portable macro assembler, even MASM and TASM were more feature rich back in the day. Actually I remember reading a DDJ article about a Texas Instruments Assembler that looked like C, just with one simple expression per line. So you could not do a = b + c * 4; rather r0 = b r1 = c r1 *= 4 r0 += r1 Just an idea, I don't remember any longer how it actually was. -- PauloThis made C far, far more difficult and buggy to work with than it should have been.Depends on your view of C, if you view C as step above assembly then it makes sense to treat everything as pointers. It is a bit confusing in the beginning since it is more or less unique to C.
Nov 16 2014
On Sunday, 16 November 2014 at 22:19:16 UTC, Paulo Pinto wrote:My view is of a "kind of" portable macro assembler, even MASM and TASM were more feature rich back in the day. Actually I remember reading a DDJ article about a Texas Instruments Assembler that looked like C, just with one simple expression per line. So you could not do a = b + c * 4; rather r0 = b r1 = c r1 *= 4 r0 += r1 Just an idea, I don't remember any longer how it actually was.Not such a bad idea if you can blend it with regular assembly mnemonics. When I did the course in machine-near programming at university, I believe I chose to do the exam in Motorola 68000 machine language because I found it no harder than C at the time… (?) I surely would not have done the same with the x86 instruction set, though.
Nov 16 2014
On Sunday, 16 November 2014 at 11:30:01 UTC, Ola Fosheim Grøstad wrote:On Sunday, 16 November 2014 at 03:27:54 UTC, Walter Bright wrote:Sorry but that is dumb, and the fact that you are on the D newsgroup rather than on a 100%-solution language's newsgroup (Java is 100% OOP, Haskell is 100% functional, Rust is 100% linear types, Javascript is 100% callbacks, erlang is 100% concurrent, LISP is 100% meta, BASIC is 100% imperative, python is 100% slow, PHP 100% inconsistent) tells me that not even you believe in your own bullshit.On 11/14/2014 4:32 PM, deadalnix wrote:More like a consultant for self-help: http://www.amazon.com/85%25-Solution-Personal-Accountability-Guarantees/dp/0470500166 Real world 85% engineered solutions: 1. Titanic 2. Chernobyl 3. Challenger 4. C++ …To quote the guy from the PL for video games video serie, a 85% solution often is preferable.Spoken like a true engineer!
Nov 16 2014
On Sunday, 16 November 2014 at 22:55:54 UTC, deadalnix wrote:Sorry but that is dumb, and the fact you are on the D newsgroup rather on 100% solution languages newsgroup (Java is 100% OOP, Haskell is 100% functional, Rust is 100% linear types, Javascript is 100% callbacks, erlang is 100% concurrent, LISP is 100% meta, BASIC is 100% imperative, python is 100% slow, PHP 100% inconsistent) tells me that not even you believe in your own bullshit.Define what you mean by 100%? By 100% I mean that you can implement your system level design without bending it around special cases induced by the language. The term "85% solution" is used for implying that it only provides a solution to 85% of what you want to achieve (like a framework) and that you have to change your goals or go down a painful path to get the last 15%. ASM is 100% (or 0%). You can do anything the hardware supports. C is close to 98%. You can easily get the last 2% by writing asm. HTML5/JS is 80%. You can do certain things efficiently, but other things are plain difficult. Flash/ActionScript is 60%. … What Jonathan Blow apparently wants is a language that is tailored to the typical patterns seen in games programming, so that might mean that e.g. certain allocation patterns are supported, but others not. (Leaving out the 15% that is not used in games programming.) This is characteristic of programming frameworks. I think it is reasonable to push back when D is moving towards becoming a framework. There are at least two factions in the D community. One faction is looking for an application framework and the other faction is looking for a low-level programming language. These two perspectives are not fully compatible.
Nov 16 2014
On Sunday, 16 November 2014 at 03:27:54 UTC, Walter Bright wrote:On 11/14/2014 4:32 PM, deadalnix wrote:85% often means being at the bottom of the uncanny valley. 65% or 95% are preferable.To quote the guy from the PL for video games video serie, a 85% solution often is preferable.Spoken like a true engineer!
Nov 20 2014
On Thursday, 20 November 2014 at 10:24:30 UTC, Max Samukha wrote:On Sunday, 16 November 2014 at 03:27:54 UTC, Walter Bright wrote:85% is an image rather than an exact number. The point being, every construct is good at some things and bad at others. Making them capable of doing everything comes at a great complexity cost, so it is preferable to aim for a solution that copes well with most use cases, and provide alternative solutions for the horrible cases. Many languages make the mistake of thinking something is the holy grail, be it OOP, functional programming or linear types. I do think that it is a better engineering solution to provide decent support for all of these, and in doing so we don't need them to handle 100% of the cases, as we have other language constructs/paradigms that suit the difficult cases better anyway.On 11/14/2014 4:32 PM, deadalnix wrote:85% often means being at the bottom of the uncanny valley. 65% or 95% are preferable.To quote the guy from the PL for video games video serie, a 85% solution often is preferable.Spoken like a true engineer!
Nov 20 2014
On Thursday, 20 November 2014 at 20:15:03 UTC, deadalnix wrote:Many languages make the mistake of thinking something is the holy grail, be it OOP, functional programming or linear types. I do think that it is a better engineering solution to provide decent support for all of these, and in doing so we don't need them to handle 100% of the cases, as we have other language constructs/paradigms that suit the difficult cases better anyway.FWIW, among language designers it is usually considered a desirable trait to have orthogonality between constructs and let them be combinable in expressive ways. This reduces the burden on the user, who then only has to truly understand the key concepts to build a clear mental image of the semantic model. Then you can figure out ways to add syntactical sugar if needed. Having a smaller set of basic constructs makes it easier to prove correctness, which in turn is important for optimization (which depends on the ability to prove equivalence over the pre/post semantics). It makes it easier to prove properties such as "(un)safe". It also makes it easier to later extend the language. Just think about all the areas "fibers" in D affect. It affects garbage collection and memory handling. It affects the ability to do deep semantic analysis. It affects implementation of fast multi-threaded ADTs. One innocent feature can have a great impact. Providing a 70% solution like Go is fine, as they have defined a narrow domain for the language (servers), so as a programmer you don't hit the 30% they left out. But D has not defined a narrow use domain, so as a designer you cannot make up a good rationale for which 15-30% to leave out. Design is always related to a specific use scenario. (I like the uncanny valley metaphor, had not thought about using it outside 3D. Cool association!)
Nov 20 2014
On Thursday, 20 November 2014 at 21:26:18 UTC, Ola Fosheim Grøstad wrote:On Thursday, 20 November 2014 at 20:15:03 UTC, deadalnix wrote:All of this is beautiful until you try to implement a quicksort in Haskell. It is not that functional programming is bad (I actually like it a lot), but there are problems where it is simply the wrong tool. Once you acknowledge that, you have two roads forward: - You create bizarre features to implement quicksort in a functional way. The concept becomes more complex, but some expert gurus will secure their jobs. - Keep your functional features as they are, but allow for other styles which cope better with quicksort. Situation 2 is the practical one. There is no point in creating an awkward hammer that can also screw things in if I can have a hammer and a screwdriver. Obviously, this has a major drawback in that you cannot say to everybody that your favorite style is the one true thing that everybody must use. That is a real bummer for religious zealots, but actual engineers understand that this is a feature, not a bug.Many languages make the mistake of thinking something is the holy grail, be it OOP, functional programming or linear types. I do think that it is a better engineering solution to provide decent support for all of these, and in doing so we don't need them to handle 100% of the cases, as we have other language constructs/paradigms that suit the difficult cases better anyway.FWIW, among language designers it is usually considered a desirable trait to have orthogonality between constructs and let them be combinable in expressive ways. This reduces the burden on the user, who then only has to truly understand the key concepts to build a clear mental image of the semantic model. Then you can figure out ways to add syntactical sugar if needed. 
Nov 20 2014
On Thursday, 20 November 2014 at 21:55:16 UTC, deadalnix wrote:All of this is beautiful until you try to implement a quicksort in Haskell. It is not that functional programming is bad (I actually like it a lot), but there are problems where it is simply the wrong tool.Sure, I am not arguing in favour of functional programming. But it is possible to define a tight core language (or VM) with the understanding that all other "high level" constructs have to be expressed within that core language in the compiler internals. Then you can do all the analysis on that small critical subset of constructs. With this approach you can create/modify all kinds of convenience features without affecting the core semantics that keep it sound and clean. Take for instance the concept of isolates, which I believe we both think can be useful. If the concept of an isolated group of objects is taken to the abstract level and married to a simple core language (or VM) in a sound way, then the more complicated stuff can hopefully be built on top of it. So you get a bottom-up approach to the language that meets the end user. Rather than what happens now, where feature requests seem to be piled on top-down, they ought to be "digested" into something that can grow bottom-up. I believe this is what you try to do with your GC proposal.Obviously, this has a major drawback in that you cannot say to everybody that your favorite style is the one true thing that everybody must use. That is a real bummer for religious zealots, but actual engineers understand that this is a feature, not a bug.Well, I think this holds: 1. Good language creation goes bottom-up. 2. Good language evaluation goes top-down. 3. Good language design is a circular process between 1 and 2. In essence, having a tight "engine" is important (the bottom), but you also need to understand the use context and how it will be used (the top). 
In D the bottom part is not so clear and could use a cleanup, but then the community would have to accept the effects of that propagating to the top. Without defining some use contexts for the language, the debates get very long, because without "data" you cannot do "analysis", and then you end up with "feels right to me", and that is not engineering; it is art, or just what you are used to. And there are more good engineers in the world than good artists… If one can define a single use scenario that is demanding enough to ensure that an evaluation against that scenario will also work for the other, less demanding scenarios, then maybe some more rational discussions about the direction of D as a language would be possible, and you could leave out, say, the 10% that are less useful. When everybody argues from their own line of work and habits… they talk past each other.
Nov 20 2014
You are a goalpost shifting champion, aren't you?
Nov 20 2014
On Thursday, 20 November 2014 at 23:22:40 UTC, deadalnix wrote:You are a goalpost shifting champion, aren't you?Nope, it follows up on your line of argument, but the screwdriver/hammer metaphor is not a good one. You can implement your hammer and your screwdriver at the top if you have lower-level screwdriver/hammer components at the bottom. That is the good part. Piling together hammers and screwdrivers and hoping that nobody is going to miss the remaining 35% involving glue and tape… That is bad.
Nov 20 2014
On 11/20/2014 1:55 PM, deadalnix wrote:All of this is beautiful until you try to implement a quicksort in Haskell. It is not that functional programming is bad (I actually like it a lot) but there are problems where it is simply the wrong tool. Once you acknowledge that, you have two roads forward:

- You create bizarre features to implement quicksort in a functional way. The concept becomes more complex, but some expert gurus will secure their jobs.
- Keep your functional features as they are, but allow for other styles, which cope better with quicksort.

Situation 2 is the practical one. There is no point in creating an awkward hammer that can also screw things in if I can have a hammer and a screwdriver. Obviously, this has a major drawback in the fact that you cannot say to everybody that your favorite style is the one true thing that everybody must use. That is a real bummer for religious zealots, but actual engineers understand that this is a feature, not a bug.Monads!
Nov 20 2014
On Thursday, 20 November 2014 at 22:47:27 UTC, Walter Bright wrote:On 11/20/2014 1:55 PM, deadalnix wrote:[…]All of this is beautiful until you try to implement a quicksort in Haskell.Monads!I think deadalnix meant that you cannot do an in-place quicksort easily in Haskell. A non-mutating quicksort is easy, no need for monads:

quicksort [] = []
quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
    where lesser  = filter (< p) xs
          greater = filter (>= p) xs

https://www.haskell.org/haskellwiki/Introduction#Quicksort_in_Haskell
Nov 20 2014
On 11/20/2014 3:10 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:On Thursday, 20 November 2014 at 22:47:27 UTC, Walter Bright wrote:That's correct.On 11/20/2014 1:55 PM, deadalnix wrote:[…]All of this is beautiful until you try to implement a quicksort in Haskell.Monads!I think deadalnix meant that you cannot do in-place quicksort easily in Haskell.Non-mutating quicksort is easy, no need for monads:

quicksort [] = []
quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
    where lesser  = filter (< p) xs
          greater = filter (>= p) xs

https://www.haskell.org/haskellwiki/Introduction#Quicksort_in_HaskellExcept that isn't really quicksort. Monads are the workaround functional languages use to deal with things that need mutation.
Nov 20 2014
On Friday, 21 November 2014 at 01:09:27 UTC, Walter Bright wrote:Except that isn't really quicksort. Monads are the workaround functional languages use to deal with things that need mutation.Yes, at least in Haskell, but I find monads in Haskell harder to read than regular imperative code. You can apparently cheat a little using libraries; this doesn't look too bad (from Stack Overflow):

import qualified Data.Vector.Generic as V
import qualified Data.Vector.Generic.Mutable as M

qsort :: (V.Vector v a, Ord a) => v a -> v a
qsort = V.modify go where
    go xs | M.length xs < 2 = return ()
          | otherwise = do
              p <- M.read xs (M.length xs `div` 2)
              j <- M.unstablePartition (< p) xs
              let (l, pr) = M.splitAt j xs
              k <- M.unstablePartition (== p) pr
              go l; go $ M.drop k pr

http://stackoverflow.com/questions/7717691/why-is-the-minimalist-example-haskell-quicksort-not-a-true-quicksort
Nov 20 2014
On 11/20/2014 5:27 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com>" wrote:On Friday, 21 November 2014 at 01:09:27 UTC, Walter Bright wrote:Exactly my point (and I presume deadalnix's, too).Except that isn't really quicksort. Monads are the workaround functional languages use to deal with things that need mutation.Yes, at least in Haskell, but I find monads in Haskell harder to read than regular imperative code.
Nov 20 2014
On 11/20/14 5:09 PM, Walter Bright wrote:[…]Except that isn't really quicksort. Monads are the workaround functional languages use to deal with things that need mutation.As I like to say, this troika has inflicted a lot of damage on both FP and those beginning to learn it: * Linear-space factorial * Doubly exponential Fibonacci * (Non)Quicksort These losers appear with depressing frequency in FP introductory texts. Andrei
Nov 20 2014
On Friday, 21 November 2014 at 02:56:09 UTC, Andrei Alexandrescu wrote:[…]As I like to say, this troika has inflicted a lot of damage on both FP and those beginning to learn it: * Linear-space factorial * Doubly exponential Fibonacci * (Non)Quicksort These losers appear with depressing frequency in FP introductory texts. AndreiJust like the OOP introductory books that still insist on talking about Cars and Vehicles, Managers and Employees, Animals and Bees, always using inheritance as code reuse. Barely talking about is-a and has-a, and all the issues about fragile base classes. -- Paulo
Nov 21 2014
On 11/21/14 12:17 AM, Paulo Pinto wrote:[…]Just like the OOP introductory books that still insist on talking about Cars and Vehicles, Managers and Employees, Animals and Bees, always using inheritance as code reuse.The first public example found by google (oop introduction) lists a class Student as the first example of a class: http://www.codeproject.com/Articles/22769/Introduction-to-Object-Oriented-Programming-Concep#Object and IOException inheriting Exception as the first example of inheritance: http://www.codeproject.com/Articles/22769/Introduction-to-Object-Oriented-Programming-Concep#Inheritance The first example for overriding is Complex.ToString: http://www.codeproject.com/Articles/22769/Introduction-to-Object-Oriented-Programming-Concep#OverloadingBarely talking about is-a and has-a, and all the issues about fragile base classes.Even to the extent those old texts have persisted, they are "only" poor style. In contrast, the three FP examples I mentioned are computationally bankrupt. There is really no excuse for teaching them. Andrei
Nov 21 2014
Just like the OOP introductory books that still insist on talking about Cars and Vehicles, Managers and Employees, Animals and Bees, always using inheritance as code reuse. Barely talking about is-a and has-a, and all the issues about fragile base classes. -- PauloHear, hear. One of the problems with many introductions to OOP-paradigmed languages such as C++ is that by having to spend a lot of time explaining how to implement inheritance, the novice reader thinks that OOP is the 'right' approach to solving many problems when in fact other techniques ('prefer composition over inheritance' springs to mind) are far more appropriate. This is one of the primary problems I find in the code of even more experienced programmers.
Nov 21 2014
On Friday, 21 November 2014 at 16:07:20 UTC, Abdulhaq wrote:Hear, hear. One of the problems with many introductions to OOP-paradigmed languages such as C++ is that by having to spend a lot of time explaining how to implement inheritance, the novice reader thinks that OOP is the 'right' approach to solving many problems when in fact other techniques ('prefer composition over inheritance' springs to mind) are far more appropriate. This is one of the primary problems I find in the code of even more experienced programmers.Yes, the problem is that you should not teach OOP, but object-oriented analysis and object-oriented modelling in a language-agnostic fashion… but you need to touch on both structured and object-oriented programming first to create motivation in the student for learning analysis and modelling… The same goes for performance and complexity. You should only cover structured programming/abstraction in the first programming course, with no regard for performance, then touch on performance and algorithmic complexity in the second course, then do complexity proofs in an advanced course. If a single course covers too much ground, students get confused, the learning goals become hazy and you lose half your audience.
Nov 22 2014
On Friday, 21 November 2014 at 02:56:09 UTC, Andrei Alexandrescu wrote:As I like to say, this troika has inflicted a lot of damage on both FP and those beginning to learn it: * Linear-space factorial * Doubly exponential Fibonacci * (Non)Quicksort These losers appear with depressing frequency in FP introductory texts.Be careful with that attitude. It is an excellent strategy to start with the simple implementation and then move on to other techniques in later chapters or more advanced texts. https://www.haskell.org/haskellwiki/The_Fibonacci_sequence https://www.haskell.org/haskellwiki/Memoization Some compilers are even capable of adding memoization/caching behind the scenes which brings naive fibonacci down to O(n) with no change in the source. Also, keep in mind that non-mutating quick sort has the same space/time complexity as the mutating variant. The non-mutating variant is no doubt faster on massively parallel hardware. You can do quicksort on GPUs. The landscape of performance and complexity is not so simple these days.
Nov 21 2014
Walter Bright:Except that isn't really quicksort. Monads are the workaround functional languages use to deal with things that need mutation.Take also a look at "Clean" language. It doesn't use monads and it's a very efficient functional language. Bye, bearophile
Nov 21 2014
On Thursday, 13 November 2014 at 09:29:22 UTC, Manu via Digitalmars-d wrote:Are you guys saying you don't feel this proposal is practical? http://wiki.dlang.org/User:Schuetzm/scope I think it's a very interesting approach, and comes from a practical point of view. It solves the long-standing issues, like scope return values, in a very creative way.It is better solved using static analysis, and it is part of a bigger problem complex where ref counting should be considered. Otherwise you end up writing N versions of the same code. You want the same interface for GC, shared_ptr, unique_ptr, stack_allocated_data etc. Let the compiler do the checking. What does "shared" tell the compiler? It tells it "retain no references after completion of this function". Like with "pure", it should be the opposite. You should tell the compiler "I transfer ownership of this parameter". Then have a generic concept "owned" for parameters that is resolved using templates. Types that can be owned have to provide release() and move(). That would work for GC, shared_ptr and unique_ptr, but not for stack allocated data:

GC ptr: release() and move() are dummies.
shared_ptr: release() decrements, move() just transfers.
unique_ptr: release() destroys, move() transfers.

D has to stop adding crutches. Generalize!
Nov 13 2014
On Thursday, 13 November 2014 at 10:00:10 UTC, Ola Fosheim Grøstad wrote:On Thursday, 13 November 2014 at 09:29:22 UTC, Manu via Digitalmars-d wrote:You mean without additional hints by the programmer? That's not going to happen, realistically, for many reasons, separate compilation being one of them.Are you guys saying you don't feel this proposal is practical? http://wiki.dlang.org/User:Schuetzm/scope I think it's a very interesting approach, and comes from a practical point of view. It solves the long-standings issues, like scope return values, in a very creative way.It is better solved using static analysisand it is part of a bigger problem complex where ref counting should be considered. Otherwise you end up writing N versions of the same code. You want the same interface for GC, shared_ptr, unique_ptr, stack_allocated_data etc. Let the compiler do the checking.Huh? That's exactly what _borrowing_ does. Ref-counting OTOH adds yet another reference type and thereby makes the situation worse.What does "shared" tell the compiler?I guess you mean "scope"?It tells it "retain no references after completion of this function". Like with "pure", it should be opposite. You should tell the compiler "I transfer ownership of this parameter". Then have a generic concept "owned" for parameters that is resolved using templates.That's what deadalnix's proposal does. Though I don't quite see what templates have to do with it.Types that can be owned has to provide release() and move(). That would work for GC, shared_ptr, unique_ptr, but not for stack allocated data: GC ptr: release() and move() are dummies. shared_ptr: release() decrements, move() just transfers unique_ptr: release() destroys, move() transfersFor a new language built from scratch, this might make sense. D is already existing, and needs to work with what it has.D has to stop adding crutches. Generalize!That's what I'm trying to do with my proposal :-)
Nov 13 2014
On Thursday, 13 November 2014 at 10:24:44 UTC, Marc Schütz wrote:On Thursday, 13 November 2014 at 10:00:10 UTC, Ola Fosheim Grøstad wrote:I think it can happen. D needs a new intermediate layer where you can do global flow analysis. compile src -> ast -> partial evaluation -> high level IR -> disk instantiate high level IR -> intermediate IR -> disk global analysis over intermediate IR intermediate IR -> LLVM -> asmIt is better solved using static analysisYou mean without additional hints by the programmer? That's not going to happen, realistically, for many reasons, separate compilation being one of them.Huh? That's exactly what _borrowing_ does. Ref-counting OTOH adds yet another reference type and thereby makes the situation worse.I don't like explicit ref counting, but it is sometimes useful and I think Rust-style ownership is pretty close to unique_ptr which is ref-counting with a max count of 1… You can also view GC as being implicitly ref counted too (it is "counted" during collection).:)What does "shared" tell the compiler?I guess you mean "scope"?My understanding of Deadalnix' proposal is that "owned" objects can only reference other "owned" objects. I think region allocators do better if you start constraining relations by ownership.It tells it "retain no references after completion of this function". Like with "pure", it should be opposite. You should tell the compiler "I transfer ownership of this parameter". Then have a generic concept "owned" for parameters that is resolved using templates.That's what deadalnix's proposal does. Though I don't quite see what templates have to do with it.For a new language built from scratch, this might make sense. D is already existing, and needs to work with what it has.D need to appropriate what C++ has and do it better. Basically it means integrating GC pointers with unique_ptr and shared_ptr. 
If D is going to be stuck on what it has and "fix" it with adding crutches it will go nowhere, and C++ will start to look like a better option…
Nov 13 2014
On 13 November 2014 20:38, via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Thursday, 13 November 2014 at 10:24:44 UTC, Marc Schütz wrote: D need to appropriate what C++ has and do it better. Basically it means integrating GC pointers with unique_ptr and shared_ptr. If D is going to be stuck on what it has and "fix" it with addig crutches it will go nowhere, and C++ will start to look like a better option…I don't follow how you associate that opinion with implementation of scope. I think it's practical and important, and the point is the opposite of what you say from my perspective. scope is the best approach I've heard to address these differences in allocation patterns without asserting any particular policy on the user. Escape analysis is the only solution I know to safely allow pointers to be passed around without having to worry about how they were allocated. By contrast, I have no idea what you're suggesting, or how it's not a 'crutch'... but if it's anything to do with C++, I'm dubious, and kinda frightened. Incidentally, I've recently started a new C++ job, first C++ I've written in some years... (after ~18 years, 12 professionally, full-time C/C++) After having adapted to D and distancing from C++, trying to go back is like some form of inhuman torture! I really don't remember it being as bad as it is... the time away has given me new perspective on how terrible C++ is, and I can say with confidence, there is NOTHING C++ could do to make itself a 'better option' at this point. Judged on common ground, there is no competition. It's only 'the devil you know' case that I think can possibly make an argument for C++.
Nov 13 2014
On Thursday, 13 November 2014 at 11:44:31 UTC, Manu via Digitalmars-d wrote:On 13 November 2014 20:38, via Digitalmars-d <digitalmars-d puremagic.com> wrote:C++14 is quite nice and C++17 will be even better. Then there is the advantage it is available in all OS vendors SDKs, with very nice tooling. However, the hard reality in most corporations is that code bases will be pre-C++98 with its own set of guidelines, if any. -- PauloOn Thursday, 13 November 2014 at 10:24:44 UTC, Marc Schütz wrote: D need to appropriate what C++ has and do it better. Basically it means integrating GC pointers with unique_ptr and shared_ptr. If D is going to be stuck on what it has and "fix" it with addig crutches it will go nowhere, and C++ will start to look like a better option…I don't follow how you associate that opinion with implementation of scope. I think it's practical and important, and the point is the opposite of what you say from my perspective. scope is the best approach I've heard to address these differences in allocation patterns without asserting any particular policy on the user. Escape analysis is the only solution I know to safely allow pointers to be passed around without having to worry about how they were allocated. By contrast, I have no idea what you're suggesting, or how it's not a 'crutch'... but if it's anything to do with C++, I'm dubious, and kinda frightened. Incidentally, I've recently started a new C++ job, first C++ I've written in some years... (after ~18 years, 12 professionally, full-time C/C++) After having adapted to D and distancing from C++, trying to go back is like some form of inhuman torture! I really don't remember it being as bad as it is... the time away has given me new perspective on how terrible C++ is, and I can say with confidence, there is NOTHING C++ could do to make itself a 'better option' at this point. Judged on common ground, there is no competition. 
It's only 'the devil you know' case that I think can possibly make an argument for C++.
Nov 13 2014
"Manu via Digitalmars-d" wrote in message news:mailman.1926.1415879071.9932.digitalmars-d puremagic.com...Incidentally, I've recently started a new C++ job, first C++ I've written in some years... (after ~18 years, 12 professionally, full-time C/C++) After having adapted to D and distancing from C++, trying to go back is like some form of inhuman torture! I really don't remember it being as bad as it is... the time away has given me new perspective on how terrible C++ is, and I can say with confidence, there is NOTHING C++ could do to make itself a 'better option' at this point. Judged on common ground, there is no competition. It's only 'the devil you know' case that I think can possibly make an argument for C++.I know, it's easy to forget how bad C++ is to work with. The new versions have fixed some of the pain points, but only some.
Nov 13 2014
On 13 November 2014 21:54, Daniel Murphy via Digitalmars-d <digitalmars-d puremagic.com> wrote:"Manu via Digitalmars-d" wrote in message news:mailman.1926.1415879071.9932.digitalmars-d puremagic.com...Yeah... nar. Not really. Every line of code is at least 3-4 times as long as it needs to be. It's virtually impossible to see the code through the syntactic noise. I like nullptr, I can get behind that ;) I realised within minutes that it's almost impossible to live without slices. On the plus side, I've already made lots of converts in my new office from my constant ranting :PIncidentally, I've recently started a new C++ job, first C++ I've written in some years... (after ~18 years, 12 professionally, full-time C/C++) After having adapted to D and distancing from C++, trying to go back is like some form of inhuman torture! I really don't remember it being as bad as it is... the time away has given me new perspective on how terrible C++ is, and I can say with confidence, there is NOTHING C++ could do to make itself a 'better option' at this point. Judged on common ground, there is no competition. It's only 'the devil you know' case that I think can possibly make an argument for C++.I know, it's easy to forget how bad C++ is to work with. The new versions have fixed some of the pain points, but only some.
Nov 13 2014
On 11/13/2014 5:55 AM, Manu via Digitalmars-d wrote:I realised within minutes that it's almost impossible to live without slices. On the plus side, I've already made lots of converts in my new office from my constant ranting :PYou should submit a presentation proposal to the O'Reilly Software Architecture Conference! http://softwarearchitecturecon.com/sa2015
Nov 15 2014
On Thursday, 13 November 2014 at 11:44:31 UTC, Manu via Digitalmars-d wrote:I don't follow how you associate that opinion with implementation of scope.I don't like semantics where I have to state that the parameters and the function should be "pure". It should be the opposite. Say, if you have an array on the stack, then I'd like to take a slice of it and send it to a function to compute a sum(). But I don't want the type system to prevent me from doing it because the author of sum() forgot to add "scope" to the parameter. What is the difference between a function that is annotated as "pure" and a function where all input is "scope"? This is backwards! Function signatures should not say "I am playing nice…", that should be the default. They should say "watch out, I'm stealing your stuff!".By contrast, I have no idea what you're suggesting, or how it's not a 'crutch'... but if it's anything to do with C++, I'm dubious, and kinda frightened.C++ is multi-paradigm and focused on backwards compatibility, and is therefore ruled by a mess of conventions and fixes, true.After having adapted to D and distancing from C++, trying to go back is like some form of inhuman torture!C++ is not excellent… Too verbose and grown out of the include/macro system (which suits C better than C++). I find C++ ok when I use it like C with bells (and leave out the whistles).Judged on common ground, there is no competition. It's only 'the devil you know' case that I think can possibly make an argument for C++.I don't know. I only use C++ for things that are suitable for C. Like DSP/realtime. Fortunately I don't have to deal with other people's C++ code. Many C++ frameworks look really ugly, but with C++14 I think I shall be able to make my own code look acceptable (readable).
Nov 13 2014
On Thursday, 13 November 2014 at 12:01:33 UTC, Ola Fosheim Grøstad wrote:I don't like semantics where I have to state that the parameters and the function should be "pure". It should be opposite.Unfortunately for your sanity, this isn't going to happen. Similarly unlikely is multiple pointer types, which Walter has repeatedly shot down. I'd suggest bringing it back up if and when discussion of D3 begins in earnest. -Wyatt
Nov 13 2014
On Thursday, 13 November 2014 at 13:29:00 UTC, Wyatt wrote:Unfortunately for your sanity, this isn't going to happen. Similarly unlikely is multiple pointer types, which Walter has repeatedly shot down. I'd suggest bringing it back up if and when discussion of D3 begins in earnest.D needs to start to focus on providing an assumption-free system level programming language that supports the kind of modelling done for system level programming. I am not sure if adding templates to D was a good idea, but now that you have gone that route to such a large extent, you might as well do it wholesale; better support for templated SYSTEM programming would make sense. Make it your advantage (including deforesting/common subexpression substitution, constraint systems etc). As an application level programming language D stands no chance. More crutches and special casing will not make D a system level programming language. Neither does adding features designed for other languages geared towards functional programming (which is the antithesis of system level programming). Yes, it can be done using a source-to-source upgrade tool. No, attribute inference is not a silver bullet; it means changes to libraries would silently break applications. Yes, function signatures matter. Function signatures are contracts; they need to be visually clear and the semantics have to be easy to grok. No, piling up low-hanging fruits that are not yet ripe is not a great way to do language design.
Nov 14 2014
On Thursday, 13 November 2014 at 12:01:33 UTC, Ola Fosheim Grøstad wrote:On Thursday, 13 November 2014 at 11:44:31 UTC, Manu via Digitalmars-d wrote:I agree with this in principle, but it is unrealistic for D2. This is stuff that can go into a future D3, together with safe by default, pure by default, and maybe immutable by default. But that doesn't mean that it shouldn't be introduced in D2 already, so that we can gain experience with it. That said, it might not be so bad with `scope`. The latest iteration of the proposal has been simplified a lot; scope annotations will mostly be needed for function signatures, and explicit owners are only allowed there. There's also some potential for inference with templates.I don't follow how you associate that opinion with implementation of scope.I don't like semantics where I have to state that the parameters and the function should be "pure". It should be opposite. Say, if you have an array on the stack, then I'd like to take a slice of it and send it to a function to compute a sum(). But, I don't want the type system to prevent me from doing it because the author of sum() forgot to add "scope" to the parameter. What is the difference between a function that is annotated as "pure" and a function where all input is "scope"? This is backwards! Function signatures should not say "I am playing nice…", that should be the default. They should say "watch out, I'm stealing your stuff!".
Nov 13 2014
On 13 November 2014 22:01, via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Thursday, 13 November 2014 at 11:44:31 UTC, Manu via Digitalmars-d wrote:D has attribute inference, that's like, a thing now. Theoretically, the compiler may be able to determine that a reference does not escape, and infer the 'scope' attribute, in many cases. This would be consistent with other attributes.I don't follow how you associate that opinion with implementation of scope.I don't like semantics where I have to state that the parameters and the function should be "pure". It should be opposite. Say, if you have an array on the stack, then I'd like to take a slice of it and send it to a function to compute a sum(). But, I don't want the type system to prevent me from doing it because the author of sum() forgot to add "scope" to the parameter. What is the difference between a function that is annotated as "pure" and a function where all input is "scope"? This is backwards!Function signatures should not say "I am playing nice…", that should be the default. They should say "watch out, I'm stealing your stuff!".But that's already a concrete pattern throughout D. To do something otherwise would be an unexpected deviation from the norm. For the record, I agree with you, but that boat sailed a very long time ago. We must now stick to the pattern that's in place.Many C++ frameworks look really ugly, but with C++14 I think I shall be able to make my own code look acceptable (readable).I don't see anything in C++11/14/17 that looks like they'll salvage the language from the sea of barely decipherable template mess and endless boilerplate. It seems they're getting deeper into that madness, not less. I spent the last 2 days doing some string processing in C++... possibly the least fun I've ever had programming. Somehow I used to find it tolerable!
Nov 13 2014
On Thursday, 13 November 2014 at 13:46:20 UTC, Manu via Digitalmars-d wrote:On 13 November 2014 22:01, via Digitalmars-d <digitalmars-d puremagic.com> wrote:Yes, these days D arguments go like this: A: "I am saying no because it would go against separate compilation units." B: "I am saying yes because we have attribute inference." A: "But when will it be implemented?" B: "After we have resolved all issues in the bugtracker." A: "But C++17 will be out by then!" B: "Please don't compare D to C++, it is a unique language" A: "And Rust will be out too!" B: "Hey, that's a low blow. And unfair! Besides, linear types suck." A: "But 'scope' is a linear type qualifier, kinda?" B: "Ok, we will only do it as a library type then." A: "How does that improve anything?" B: "It changes a lot, it means Walter can focus on ironing out bugs and Andrei will implement it after he has fixed the GC". A: "When will that happen?" B: "After he is finished with adding ref counters to Phobos" A: "I thought that was done?" B: "Don't be unreasonable, Phobos is huge, it takes at least 6 months! Besides, it is obvious that we need to figure out how to do scope before completing ref counting anyway." A: "I agree…Where were we?" B: "I'm not sure. I'll try to find time to write a DIP."On Thursday, 13 November 2014 at 11:44:31 UTC, Manu via Digitalmars-d wrote:D has attribute inference, that's like, a thing now.I don't see anything in C++11/14/17 that looks like they'll salvage the language from the sea of barely decipherable template mess and endless boilerplate. It seems they're getting deeper into that madness, not less.Stuff like auto on return types etc makes it easier and less verbose when dealing with templated libraries. Unfortunately, I guess I can't use it on my next project anyway, since I need to support iOS5.1 which probably means XCode… 4? 
Sigh… That's one of the things that annoy me with C++, the long tail for being able to use the new features.I spent the last 2 days doing some string processing in C++... possibly the least fun I've ever had programming. Somehow I used to find it tolerable!Ack… I try to stick to binary formats. Strings are only fun in languages like Python (and possibly Haskell).
Nov 13 2014
On 2014-11-13 23:00, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:Unfortunately, I guess I can't use it on my next project anyway, since I need to support iOS 5.1, which probably means Xcode… 4? Sigh…Can't you use Xcode 6 and set the minimum deploy target to iOS 5.1? If that's not possible it should be possible to use Xcode 4 and replace the Clang compiler that Xcode uses with a later version. -- /Jacob Carlborg
Nov 14 2014
On Friday, 14 November 2014 at 08:10:22 UTC, Jacob Carlborg wrote:Can't you use Xcode 6 and set the minimum deploy target to iOS 5.1? If that's not possible it should be possible to use Xcode 4 and replace the Clang compiler that Xcode uses with a later version.I don't know yet, but the 5.1 simulator will probably have to run on OS-X 10.6.8 from what I've found on the net. Maybe it is possible to do as you said with clang… Hm.
Nov 14 2014
On 2014-11-14 15:28, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:I don't know yet, but the 5.1 simulator will probably have to run on OS X 10.6.8 from what I've found on the net. Maybe it is possible to do as you said with clang… Hm.The simulator bundled with Xcode 5 runs on Yosemite but not the one bundled with Xcode 4. -- /Jacob Carlborg
Nov 15 2014
On Thursday, 13 November 2014 at 13:46:20 UTC, Manu via Digitalmars-d wrote:D has attribute inference, that's like, a thing now. Theoretically, the compiler may be able to determine that a reference does not escape, and infer the 'scope' attribute, in many cases. This would be consistent with other attributes.Yes, that is the only sane road forward.
Nov 13 2014
On 11/13/2014 3:44 AM, Manu via Digitalmars-d wrote:After having adapted to D and distancing from C++, trying to go back is like some form of inhuman torture! I really don't remember it being as bad as it is... the time away has given me new perspective on how terrible C++ is, and I can say with confidence, there is NOTHING C++ could do to make itself a 'better option' at this point.What I find odd about the progress of C++ (11, 14, 17, ...) is that there has been no concerted effort to make the preprocessor obsolete.
Nov 15 2014
On 16.11.2014 at 05:51, Walter Bright wrote:On 11/13/2014 3:44 AM, Manu via Digitalmars-d wrote:What about templates, compile time reflection, modules and compile time code execution? No need for the pre-processor other than textual inclusion and conditional compilation. -- PauloAfter having adapted to D and distancing from C++, trying to go back is like some form of inhuman torture! I really don't remember it being as bad as it is... the time away has given me new perspective on how terrible C++ is, and I can say with confidence, there is NOTHING C++ could do to make itself a 'better option' at this point.What I find odd about the progress of C++ (11, 14, 17, ...) is that there has been no concerted effort to make the preprocessor obsolete.
Nov 15 2014
On 11/15/2014 11:14 PM, Paulo Pinto wrote:On 16.11.2014 at 05:51, Walter Bright wrote:Competent and prominent C++ coding teams still manage to find complex and tangled uses for the preprocessor that rely on the most obscure details of how the preprocessor works, and then hang their whole codebase on it. I find it baffling, but there it is. I've made some effort to get rid of preprocessor use in the DMD source.What I find odd about the progress of C++ (11, 14, 17, ...) is that there has been no concerted effort to make the preprocessor obsolete.What about templates, compile time reflection, modules and compile time code execution?No need for the pre-processor other than textual inclusion and conditional compilation.Andrei, Herb, and I made a proposal to the C++ committee to introduce 'static if'. It was promptly nailed to the wall and executed by firing squad. :-)
Nov 15 2014
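For context on what 'static if' buys over the preprocessor: conditional compilation can live entirely inside the language, as D's `static if` and `version` already demonstrate. Below is a small sketch using Rust's `#[cfg]` attributes as the analogous mechanism (Rust is chosen only to keep all examples in one language; nothing here is taken from the rejected C++ proposal itself).

```rust
// Conditional compilation without a textual preprocessor: the condition
// is evaluated by the compiler against the build configuration, and the
// non-matching declaration simply does not exist in the program.
#[cfg(target_pointer_width = "64")]
fn word_bits() -> u32 {
    64
}

#[cfg(not(target_pointer_width = "64"))]
fn word_bits() -> u32 {
    32
}

fn main() {
    // Exactly one of the two definitions above was compiled in.
    println!("compiled for {}-bit words", word_bits());
}
```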
On 16.11.2014 at 08:44, Walter Bright wrote:On 11/15/2014 11:14 PM, Paulo Pinto wrote:That was quite bad, how it happened.On 16.11.2014 at 05:51, Walter Bright wrote:Competent and prominent C++ coding teams still manage to find complex and tangled uses for the preprocessor that rely on the most obscure details of how the preprocessor works, and then hang their whole codebase on it. I find it baffling, but there it is. I've made some effort to get rid of preprocessor use in the DMD source.What I find odd about the progress of C++ (11, 14, 17, ...) is that there has been no concerted effort to make the preprocessor obsolete.What about templates, compile time reflection, modules and compile time code execution?No need for the pre-processor other than textual inclusion and conditional compilation.Andrei, Herb, and I made a proposal to the C++ committee to introduce 'static if'. It was promptly nailed to the wall and executed by firing squad. :-)
Nov 15 2014
On Thursday, 13 November 2014 at 10:38:57 UTC, Ola Fosheim Grøstad wrote:On Thursday, 13 November 2014 at 10:24:44 UTC, Marc Schütz wrote:I think I understand now how you want to use templates. You basically want to make the various reference types implement a protocol, and then templatize functions to accept any of those reference types, just like we do with ranges. But this brings with it the downside of templates, namely template bloat. And you need to do additional work to eliminate all the redundant inc/dec calls if you pass an RC reference. All of which is unnecessary for most functions, because a function knows in advance whether it needs to retain the reference or just borrow it, and can be declared accordingly. This means that `scope` acts as an abstraction over the various reference types, be it GC, RC, plain pointer, unique pointer, or some more complicated user defined scheme. This also benefits the GC, by the way. A scope reference doesn't need to be treated as a root, because there will always be at least one other copy of the reference. This means that structures containing only scoped references need not be scanned.Huh? That's exactly what _borrowing_ does. Ref-counting OTOH adds yet another reference type and thereby makes the situation worse.I don't like explicit ref counting, but it is sometimes useful and I think Rust-style ownership is pretty close to unique_ptr which is ref-counting with a max count of 1… You can also view GC as being implicitly ref counted too (it is "counted" during collection).Yes. But they can also be merged into un-owned structures (i.e. the various heaps), at which point they lose their owned-ness. 
This allows the creator of these objects to be agnostic about how the consumers want to handle them.:)What does "shared" tell the compiler?I guess you mean "scope"?My understanding of Deadalnix's proposal is that "owned" objects can only reference other "owned" objects.It tells it "retain no references after completion of this function". Like with "pure", it should be the opposite. You should tell the compiler "I transfer ownership of this parameter". Then have a generic concept "owned" for parameters that is resolved using templates.That's what deadalnix's proposal does. Though I don't quite see what templates have to do with it.I think region allocators do better if you start constraining relations by ownership.Not sure what you mean, but I don't think region allocators can be used for this. They require the object creator to know in advance how long the objects will exist. Or alternatively, the creators need to be informed about that by receiving a reference to the region allocator, at which point we're most likely back at templates.
Nov 13 2014
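The claim above, that `scope` acts as an abstraction over GC, RC, plain and unique pointers without template bloat or inc/dec traffic, can be sketched in Rust terms (illustrative only; function and variable names are invented): a single non-generic function taking a borrowed slice serves stack data, a unique pointer, and a ref-counted pointer alike, and lending the ref-counted one does not touch its count.

```rust
use std::rc::Rc;

// One non-generic function, borrowed parameter: no per-owner
// instantiation, no reference-count traffic at the call site.
fn first_len(words: &[String]) -> usize {
    words.first().map_or(0, |w| w.len())
}

fn main() {
    let on_stack = [String::from("abc")];
    let unique: Box<[String]> = vec![String::from("abcd")].into_boxed_slice();
    let counted: Rc<[String]> = vec![String::from("abcde")].into();

    assert_eq!(first_len(&on_stack), 3);
    assert_eq!(first_len(&unique), 4);  // Box lends its contents
    assert_eq!(first_len(&counted), 5); // Rc lends without inc/dec
    assert_eq!(Rc::strong_count(&counted), 1); // count untouched by the call
}
```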
On Thursday, 13 November 2014 at 10:00:10 UTC, Ola Fosheim Grøstad wrote:GC ptr: release() and move() are dummies.Well, "move()" should obviously not be a dummy, but just a regular assignment that requires the object to be GC allocated… What I am saying is that D needs type-classes for pointers, so that you can write generic functions that are ignorant of specific allocation schemes and specify only their minimum requirements. It basically means that in safe code all raw pointers are "borrowed", and all owned pointers require a specification that is either concrete (gc, shared, unique) or a templated generalization (single ownership, multiple ownership, etc.). It seems a perfect fit for D to extend templates with more power and make good use of it now that C++ adds concepts.
Nov 13 2014
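The "type-classes for pointers" idea can be sketched as a generic function whose bound states only the minimum requirement, leaving it ignorant of the concrete ownership scheme. A hedged Rust illustration (all names invented; each pointer type instantiates its own copy, which is the template-bloat trade-off raised earlier in the thread):

```rust
use std::ops::Deref;
use std::rc::Rc;
use std::sync::Arc;

// The bound `Deref<Target = str>` plays the role of a pointer
// type-class: "anything that can be dereferenced to a str".
fn shout<P: Deref<Target = str>>(p: P) -> String {
    p.to_uppercase()
}

fn main() {
    let gc_like: String = String::from("gc"); // owning string
    let unique: Box<str> = Box::from("unique"); // unique pointer
    let shared: Rc<str> = Rc::from("shared"); // ref-counted
    let atomic: Arc<str> = Arc::from("atomic"); // atomically ref-counted

    assert_eq!(shout(gc_like), "GC");
    assert_eq!(shout(unique), "UNIQUE");
    assert_eq!(shout(shared), "SHARED");
    assert_eq!(shout(atomic), "ATOMIC");
}
```

Each call site works unchanged regardless of allocation scheme, which is the agnosticism Ola is asking for; the cost is one monomorphized instance of `shout` per pointer type.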
Andrei Alexandrescu:I agree. This is one of those cases in which a good engineering solution may be a lot better than the "perfect" solution (and linear types are not even perfect...).I am sure you are aware that the solution you are talking about is rather more complex (for both end programmers and language implementors) than the Rust solution. Bye, bearophile
Nov 13 2014
On 11/13/14 1:41 AM, bearophile wrote:Andrei Alexandrescu:In fact I am not! -- AndreiI agree. This is one of those cases in which a good engineering solution may be a lot better than the "perfect" solution (and linear types are not even perfect...).I am sure you are aware that the solution you are talking about is rather more complex (for both final programmers and language implementators) than the Rust solution.
Nov 13 2014