digitalmars.D - toHash => pure, nothrow, const, safe
- Walter Bright (15/15) Mar 11 2012 Consider the toHash() function for struct key types:
- Alex Rønne Petersen (6/21) Mar 11 2012 It may be a hack, but you know, those have special semantics/meanings in...
- Kapps (4/8) Mar 11 2012 Agreed. Those are already special, so I don't think it hurts to
- bearophile (4/6) Mar 11 2012 At risk of sounding like a troll, I hope from now on Walter will not use...
- Don Clugston (16/43) Mar 12 2012 That was sounding reasonable, but...
- bearophile (5/8) Mar 11 2012 Recently I have suggested to deprecate and later remove the need of opCm...
- H. S. Teoh (26/48) Mar 11 2012 Ah, I see the "just add a new attribute" thing is coming back to bite
- Alex Rønne Petersen (7/53) Mar 11 2012 No. Too late in the design process. I have 20k+ lines of code that rely
- Alex Rønne Petersen (7/69) Mar 11 2012 I should point out that I *do* think the idea is good (i.e. if you want
- Marco Leise (12/16) Mar 12 2012 "@safe pure nothrow" as default could have worked better than manually s...
- Martin Nowak (5/9) Mar 12 2012 What's wrong with auto-inference. Inferred attributes are only strengthening...
- James Miller (11/19) Mar 12 2012 One problem I can think of is relying on the auto-inference can create...
- Martin Nowak (20/28) Mar 12 2012 That sounds intentional.
- James Miller (5/25) Mar 12 2012 My point was more about distant code breaking. Its more to do with
- Walter Bright (5/7) Mar 12 2012 Auto-inference is currently done for lambdas and template functions - wh...
- Alex Rønne Petersen (5/13) Mar 12 2012 Isn't auto-inference for templates a Bad Thing (TM) since it may give
- deadalnix (4/16) Mar 12 2012 As long as you can explicitly specify that too, and that you get a
- Alex Rønne Petersen (5/23) Mar 12 2012 But people might be relying on your API that just so happens to be pure,...
- Jonathan M Davis (20/42) Mar 12 2012 True, but without out, pure, @safe, and nothrow are essentially useless ...
- Alex Rønne Petersen (6/48) Mar 12 2012 That could be solved with a @ctfe attribute or something, no? Like, if
- Jonathan M Davis (23/33) Mar 12 2012 1. That goes completely against how CTFE was designed in that part of th...
- Alex Rønne Petersen (8/41) Mar 12 2012 Though, rarely, functions written with runtime execution in mind
- H. S. Teoh (11/24) Mar 12 2012 [...]
- Alex Rønne Petersen (13/35) Mar 12 2012 I stopped writing inline unit tests in larger code bases. If I do that,
- H. S. Teoh (23/34) Mar 12 2012 [...]
- Alex Rønne Petersen (13/45) Mar 12 2012 That's what I do. I simply moved my unittest blocks to a separate
- Jacob Carlborg (9/19) Mar 12 2012 I agree. I've also started to do more high level testing of some of my
- Walter Bright (3/8) Mar 12 2012 That's exactly how it was intended! It seems like such a small feature, ...
- Martin Nowak (2/5) Mar 12 2012 Everything that's pure should be CTFEable which doesn't imply that you
- Jonathan M Davis (6/12) Mar 12 2012 I don't think that that's quite true. pure doesn't imply @safe, so you c...
- Timon Gehr (3/15) Mar 12 2012 CTFE allows quite some pointer arithmetic, but makes sure it is actually...
- Martin Nowak (10/18) Mar 12 2012 A "@safe pure nothrow const" might be used as "@system".
- Peter Alexander (5/14) Mar 13 2012 Dumb question:
- deadalnix (2/16) Mar 13 2012 That is exactly what I was thinking about.
- Andrei Alexandrescu (4/18) Mar 13 2012 Because in the general case functions call one another so there's no way...
- deadalnix (8/29) Mar 13 2012 This problem is pretty close to garbage collection. Let's use pure as
- Andrei Alexandrescu (12/19) Mar 13 2012 Certain analyses can be done using the so-called worklist approach. The
- deadalnix (3/24) Mar 13 2012 I expect the function we are talking about here not to call almost all
- H. S. Teoh (24/41) Mar 13 2012 [...]
- kennytm (4/27) Mar 13 2012 That's no difference from template functions calling each others right?
- Timon Gehr (11/38) Mar 13 2012 http://d.puremagic.com/issues/show_bug.cgi?id=7205
- Andrei Alexandrescu (5/13) Mar 13 2012 There is. Templates are guaranteed to have the body available. Walter
- so (4/20) Mar 12 2012 A pattern is emerging. Why not analyze it a bit and somehow try
- so (5/8) Mar 12 2012 @mask(wat) const|pure|nothrow|safe
- Martin Nowak (1/4) Mar 12 2012 How about complete inference instead of a hack?
- Jonathan M Davis (13/18) Mar 12 2012 Because that requires having all of the source code. The fact that we ha...
- Martin Nowak (10/13) Mar 12 2012 It doesn't require all source code.
- Walter Bright (4/7) Mar 12 2012 Hello endless bug reports of the form:
- Jacob Carlborg (4/11) Mar 13 2012 We already have that, sometimes :(
- Martin Nowak (3/10) Mar 13 2012 Yeah, you're right. It would easily create confusing behavior.
- Walter Bright (6/7) Mar 13 2012 In general, for modules a and b, all of these should work:
- Martin Nowak (2/7) Mar 14 2012 For '-c' CTFE will already run semantic3 on the other module's functions...
- deadalnix (6/21) Mar 12 2012 I don't really see the point. For Objects, we inherit from Object, which...
- Walter Bright (4/8) Mar 12 2012 Yes, because they are referred to by TypeInfo, and that's fairly useless...
- deadalnix (4/13) Mar 13 2012 I always though that TypeInfo is a poor substitute for metaprograming
- Alex Rønne Petersen (6/22) Mar 13 2012 Yes, and in some cases, it doesn't even work right; i.e. you can declare...
- Steven Schveighoffer (7/11) Mar 14 2012
- Steven Schveighoffer (19/34) Mar 12 2012 What about a new attribute @type (or better name?) that means "this
- bearophile (7/10) Mar 12 2012 I have read the other answers of this thread, and I don't like
- Jonathan M Davis (9/14) Mar 12 2012 D doesn't make writing unit tests easy, since there's an intrinsic amoun...
- Walter Bright (3/5) Mar 12 2012 It can be remarkable how much more use something gets if you just make i...
- H. S. Teoh (18/34) Mar 12 2012 I would argue that D *does* make unit tests easier to write, in that you
- Jonathan M Davis (6/32) Mar 12 2012 I didn't say that D doesn't make writing unit tests easier. I just said ...
- Stewart Gordon (4/10) Mar 12 2012
- Jonathan M Davis (12/26) Mar 12 2012 That really should be too, but work is probably going to have to be done...
- bearophile (5/6) Mar 12 2012 Often in toString I use format() or text(), or to!string(), that current...
- Alex Rønne Petersen (4/10) Mar 12 2012 I fully support that.
- H. S. Teoh (17/44) Mar 12 2012 This is not going to be a choice, because some overrides of toHash calls
- Walter Bright (2/5) Mar 12 2012 Yup. It also seems very hard to figure out a transitional path to it.
- H. S. Teoh (7/13) Mar 12 2012 Perhaps we just need to bite the bullet and do it all in one shot and
- Alex Rønne Petersen (4/15) Mar 12 2012 I have to say this seems like the most sensible approach right now.
- Manu (3/13) Mar 14 2012 MMMmmm, now we're talking!
- Steven Schveighoffer (29/34) Mar 14 2012 It seems most people have ignored my post in this thread, so I'll say it...
- Dmitry Olshansky (5/39) Mar 14 2012 For one, I'm sold on it. And the proposed magic hack can work right now,...
- Martin Nowak (7/11) Mar 14 2012 Why would you want to add explicit annotation for implicit TypeInfo_Stru...
- Steven Schveighoffer (11/21) Mar 14 2012 Because right now, it's a guessing game of whether you wanted an operati...
- Walter Bright (2/3) Mar 12 2012 Good question. What do you suggest?
- bearophile (4/5) Mar 12 2012 I suggest to follow a slow but reliable path, working bottom-up: turn to...
- H. S. Teoh (12/20) Mar 12 2012 [...]
- Don Clugston (2/5) Mar 13 2012 Why can't we just kill that abomination?
- Walter Bright (2/8) Mar 13 2012 Break a lot of existing code?
- bearophile (5/6) Mar 13 2012 Invent a good deprecation strategy for toString?
- Dmitry Olshansky (5/14) Mar 14 2012 And gain efficiency. BTW transition paths were suggested, need to just
- Steven Schveighoffer (9/18) Mar 14 2012 I'm unaware of much code that uses TypeInfo.xtostring to print anything....
- Andrei Alexandrescu (5/17) Mar 12 2012 I think the three others have a special regime because pointers to them
- Martin Nowak (7/11) Mar 13 2012 Adding a special case for AAs is not a good idea but
Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others:

const hash_t toHash();
const bool opEquals(ref const KeyType s);
const int opCmp(ref const KeyType s);

They need to be, as well as const, pure nothrow @safe. The problem is:

1. a lot of code must be retrofitted
2. it's just plain annoying to annotate them

It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones. So I propose instead a bit of a hack: that toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and @safe (if not already marked as @trusted).
Mar 11 2012
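To make the proposal concrete, here is a small sketch (illustrative struct and member bodies, not from the post) of what a correctly attributed AA key type has to look like today, and what the hack would let you write instead:

    // Today: every member has to repeat the attributes by hand.
    struct Key
    {
        int id;

        hash_t toHash() const pure nothrow @safe                 { return id; }
        bool opEquals(ref const Key s) const pure nothrow @safe  { return id == s.id; }
        int opCmp(ref const Key s) const pure nothrow @safe      { return id < s.id ? -1 : id > s.id ? 1 : 0; }
    }

    // Under the proposed hack, the unannotated members below would be treated
    // as if they carried pure nothrow @safe automatically (unless marked @trusted).
    struct Key2
    {
        int id;

        hash_t toHash() const                  { return id; }
        bool opEquals(ref const Key2 s) const  { return id == s.id; }
        int opCmp(ref const Key2 s) const      { return id < s.id ? -1 : id > s.id ? 1 : 0; }
    }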
On 12-03-2012 00:54, Walter Bright wrote:Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s); They need to be, as well as const, pure nothrow safe. The problem is: 1. a lot of code must be retrofitted 2. it's just plain annoying to annotate them It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones. So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and safe (if not already marked as trusted).It may be a hack, but you know, those have special semantics/meanings in the first place, so is it really that bad? Consider also that contract blocks are now implicitly const, etc. -- - Alex
Mar 11 2012
On Sunday, 11 March 2012 at 23:55:34 UTC, Alex Rønne Petersen wrote:It may be a hack, but you know, those have special semantics/meanings in the first place, so is it really that bad? Consider also that contract blocks are now implicitly const, etc.Agreed. Those are already special, so I don't think it hurts to make this change. But I may be missing some implications.
Mar 11 2012
Kapps:Agreed. Those are already special, so I don't think it hurts to make this change. But I may be missing some implications.At risk of sounding like a troll, I hope from now on Walter will not use this kind of strategy to solve all the MANY breaking changes D/DMD will need to face :-) Bye, bearophile
Mar 11 2012
On 12/03/12 00:55, Alex Rønne Petersen wrote:On 12-03-2012 00:54, Walter Bright wrote:Maybe we need @nice or something, to mean pure nothrow @safe.Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s); They need to be, as well as const, pure nothrow @safe. The problem is: 1. a lot of code must be retrofitted 2. it's just plain annoying to annotate themThat was sounding reasonable, but...It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones. So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and @safe (if not already marked as @trusted)....this part is a bit scary. It sounds as though the semantics are a bit fuzzy. There is no way to mark a function as 'impure' or 'does_throw'. But you can annotate with @system.It may be a hack, but you know, those have special semantics/meanings in the first place, so is it really that bad?Agreed, they are in some sense virtual functions. But how would you declare those functions? With "pure nothrow @safe", or with "pure nothrow @trusted"?Consider also that contract blocks are now implicitly const, etc.But the clutter problem isn't restricted to those specific functions. One issue with pure, nothrow is that they have no inverse, so you cannot simply write pure: nothrow: at the top of the file and use 'pure nothrow' by default. The underlying problem is that, when spelt out in full, those annotations uglify the code.
Mar 12 2012
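Don's clutter point in code (a trivial illustration, not from the post): attribute labels can make pure and nothrow the default for everything that follows, but there is no "impure" or "throws" to switch back with, so the label trick only works in files where every later function really has those properties.

    pure: nothrow:                        // applies to every declaration below this point

    int twice(int x) { return 2 * x; }    // fine: really is pure nothrow

    // A function that needs to throw or touch global state can no longer be
    // declared below here; @system can override @safe, but pure and nothrow
    // have no inverse attribute.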
Walter:So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and safe (if not already marked as trusted).Recently I have suggested to deprecate and later remove the need of opCmp for the built-in AAs. Regarding this hack proposal of yours, I don't fully understand its consequences yet. What are the negative sides of this idea? Bye, bearophile
Mar 11 2012
On Sun, Mar 11, 2012 at 04:54:09PM -0700, Walter Bright wrote:Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s); They need to be, as well as const, pure nothrow @safe. The problem is: 1. a lot of code must be retrofitted 2. it's just plain annoying to annotate themAh, I see the "just add a new attribute" thing is coming back to bite you. ;-)It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones. So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and @safe (if not already marked as @trusted).I'm wary of the idea of automatically-imposed attributes on a "special" set of functions... seems a bit arbitrary, and arbitrary things don't tend to stand the test of time. OTOH I can see the value of this. Forcing all toHash's to be pure nothrow @safe makes it much easier to, for example, implement AA's purely in object_.d (which I'm trying to do :-P). You don't have to worry about somebody defining a toHash that does strange things. Same thing with opEquals, etc. It also lets you freely annotate stuff that calls these functions as pure, nothrow, @safe, etc., without having to dig through every function in druntime and phobos to mark all of them. Here's an alternative (and perhaps totally insane) idea: what if, instead of needing to mark functions as pure, nothrow, etc., etc., we ASSUME all functions are pure, nothrow, and @safe unless they're explicitly declared otherwise? IOW, let all D code be pure, nothrow, and @safe by default, and if you want non-pure, or throwing code, or unsafe code, then you annotate the function as impure, throwing, or @system. It goes along with D's general philosophy of safe-by-default, unsafe-if-you-want-to. Or, as a compromise, perhaps the compiler can auto-infer most of the attributes without any further effort from the user. T -- If the comments and the code disagree, it's likely that *both* are wrong. -- Christopher
Mar 11 2012
On 12-03-2012 06:43, H. S. Teoh wrote:On Sun, Mar 11, 2012 at 04:54:09PM -0700, Walter Bright wrote:No. Too late in the design process. I have 20k+ lines of code that rely on the opposite behavior.Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s); They need to be, as well as const, pure nothrow safe. The problem is: 1. a lot of code must be retrofitted 2. it's just plain annoying to annotate themAh, I see the "just add a new attribute" thing is coming back to bite you. ;-)It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones. So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and safe (if not already marked as trusted).I'm wary of the idea of automatically-imposed attributes on a "special" set of functions... seems a bit arbitrary, and arbitrary things don't tend to stand the test of time. OTOH I can see the value of this. Forcing all toHash's to be pure nothrow safe makes is much easier to, for example, implement AA's purely in object_.d (which I'm trying to do :-P). You don't have to worry about somebody defining a toHash that does strange things. Same thing with opEquals, etc.. It also lets you freely annotate stuff that calls these functions as pure, nothrow, safe, etc., without having to dig through every function in druntime and phobos to mark all of them. Here's an alternative (and perhaps totally insane) idea: what if, instead of needing to mark functions as pure, nothrow, etc., etc., we ASSUME all functions are pure, nothrow, and safe unless they're explicitly declared otherwise? IOW, let all D code be pure, nothrow, and safe by default, and if you want non-pure, or throwing code, or unsafe code, then you annotate the function as impure, throwing, or system. It goes along with D's general philosophy of safe-by-default, unsafe-if-you-want-to.Or, as a compromise, perhaps the compiler can auto-infer most of the attributes without any further effort from the user.No, that has API design issues. You can silently break a guarantee you made previously.T-- - Alex
Mar 11 2012
On 12-03-2012 07:04, Alex Rønne Petersen wrote:On 12-03-2012 06:43, H. S. Teoh wrote:I should point out that I *do* think the idea is good (i.e. if you want the "bad" things, that's what you have to declare), but it's just too late now. Also, there might be issues with const and the likes - should the system assume const or immutable or inout or...?On Sun, Mar 11, 2012 at 04:54:09PM -0700, Walter Bright wrote:No. Too late in the design process. I have 20k+ lines of code that rely on the opposite behavior.Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s); They need to be, as well as const, pure nothrow safe. The problem is: 1. a lot of code must be retrofitted 2. it's just plain annoying to annotate themAh, I see the "just add a new attribute" thing is coming back to bite you. ;-)It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones. So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and safe (if not already marked as trusted).I'm wary of the idea of automatically-imposed attributes on a "special" set of functions... seems a bit arbitrary, and arbitrary things don't tend to stand the test of time. OTOH I can see the value of this. Forcing all toHash's to be pure nothrow safe makes is much easier to, for example, implement AA's purely in object_.d (which I'm trying to do :-P). You don't have to worry about somebody defining a toHash that does strange things. Same thing with opEquals, etc.. It also lets you freely annotate stuff that calls these functions as pure, nothrow, safe, etc., without having to dig through every function in druntime and phobos to mark all of them. Here's an alternative (and perhaps totally insane) idea: what if, instead of needing to mark functions as pure, nothrow, etc., etc., we ASSUME all functions are pure, nothrow, and safe unless they're explicitly declared otherwise? IOW, let all D code be pure, nothrow, and safe by default, and if you want non-pure, or throwing code, or unsafe code, then you annotate the function as impure, throwing, or system. It goes along with D's general philosophy of safe-by-default, unsafe-if-you-want-to.-- - AlexOr, as a compromise, perhaps the compiler can auto-infer most of the attributes without any further effort from the user.No, that has API design issues. You can silently break a guarantee you made previously.T
Mar 11 2012
On Mon, 12 Mar 2012 07:06:33 +0100, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:I should point out that I *do* think the idea is good (i.e. if you want the "bad" things, that's what you have to declare), but it's just too late now. Also, there might be issues with const and the likes - should the system assume const or immutable or inout or...?"@safe pure nothrow" as default could have worked better than manually setting it, I agree. @safe can be set at module level, so it is less of an issue to make it the default in your code. The problem with those attributes is not that pure is used more often than impure or nothrow more often than throws, but that they need to be set transitively in function calls. And even though the attributes do no harm to the user of the function (unlike immutable) they can easily be forgotten or left away, because it is tedious to type them. -- Marco
Mar 12 2012
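What Marco describes for @safe, sketched (the module and function names are made up): the attribute can be applied once at module scope and still overridden per function, which is why forgetting it hurts less than forgetting pure or nothrow.

    module example;      // hypothetical module

    @safe:               // everything below defaults to @safe

    int add(int a, int b) { return a + b; }        // @safe with no per-function annotation

    @system void poke(int* p) { *p = 42; }          // opting back out is still possible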
On Mon, 12 Mar 2012 07:04:52 +0100, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:Or, as a compromise, perhaps the compiler can auto-infer most of the attributes without any further effort from the user. No, that has API design issues. You can silently break a guarantee you made previously.What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.
Mar 12 2012
On 12 March 2012 21:08, Martin Nowak <dawg dawgfoto.de> wrote:On Mon, 12 Mar 2012 07:04:52 +0100, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:Or, as a compromise, perhaps the compiler can auto-infer most of the attributes without any further effort from the user. No, that has API design issues. You can silently break a guarantee you made previously.What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.One problem I can think of is relying on the auto-inference can create fragile code. You make a change in one place without concentrating and suddenly a completely different part of your code breaks, because it's expecting pure, or safe code and you have done something to prevent the inference. I don't know how much of a problem that could be, but it's one I can think of. -- James Miller
Mar 12 2012
One problem I can think of is relying on the auto-inference can create fragile code. You make a change in one place without concentrating and suddenly a completely different part of your code breaks, because it's expecting pure, or safe code and you have done something to prevent the inference. I don't know how much of a problem that could be, but it's one I can think of. -- James MillerThat sounds intentional. Say you have a struct with a toHash method. struct Key { hash_t toHash() /* inferred pure */ { } } Say you have a Set that requires a pure toHash. void insert(Key key) pure { immutable hash = key.toHash(); } Now if you change the implementation of Key.toHash then maybe it can no longer be inserted into that Set. If OTOH your set.insert were inferred pure itself, then the impureness would escalate to the set.insert(key) caller. It's about the same logic that would make nothrow more useful. You can omit it most of the time but always have the possibility to enforce it, e.g. at a much higher level.
Mar 12 2012
That sounds intentional. Say you have a struct with a toHash method. struct Key { hash_t toHash() /* inferred pure */ { } } Say you have a Set that requires a pure toHash. void insert(Key key) pure { immutable hash = key.toHash(); } Now if you change the implementation of Key.toHash then maybe it can no longer be inserted into that Set. If OTOH your set.insert were inferred pure itself, then the impureness would escalate to the set.insert(key) caller. It's about the same logic that would make nothrow more useful. You can omit it most of the time but always have the possibility to enforce it, e.g. at a much higher level.My point was more about distant code breaking. It's more to do with unexpected behavior than code correctness in this case. As I said, I could be worrying about nothing though. -- James Miller
Mar 12 2012
On 3/12/2012 1:08 AM, Martin Nowak wrote:What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 12 2012
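A small example (names made up) of the inference Walter describes: because a template function's body is always visible at the instantiation point, its attributes can be inferred there without being written out.

    T twice(T)(T x) { return x + x; }       // no attributes written...

    void caller() pure nothrow @safe
    {
        auto y = twice(21);                 // ...yet twice!int is inferred pure nothrow @safe,
    }                                       // so calling it from attributed code compiles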
On 12-03-2012 10:40, Walter Bright wrote:On 3/12/2012 1:08 AM, Martin Nowak wrote:Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking? -- - AlexWhat's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 12 2012
Le 12/03/2012 13:51, Alex Rønne Petersen a écrit :On 12-03-2012 10:40, Walter Bright wrote:As long as you can explicitly specify that too, and that you get a compile time error when you fail to provide what is explicitly stated, this isn't a problem.On 3/12/2012 1:08 AM, Martin Nowak wrote:Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking?What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 12 2012
On 12-03-2012 14:16, deadalnix wrote:Le 12/03/2012 13:51, Alex Rønne Petersen a écrit :But people might be relying on your API that just so happens to be pure, but then suddenly isn't! -- - AlexOn 12-03-2012 10:40, Walter Bright wrote:As long as you can explicitly specify that too, and that you get a compile time error when you fail to provide what is explicitly stated, this isn't a problem.On 3/12/2012 1:08 AM, Martin Nowak wrote:Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking?What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 12 2012
On Monday, March 12, 2012 14:23:28 Alex Rønne Petersen wrote:On 12-03-2012 14:16, deadalnix wrote:True, but without it, pure, @safe, and nothrow are essentially useless with templates, because far too many templates depend on their arguments for whether they can be pure, @safe, and/or nothrow or not. It's attribute inference for templates that made it possible to use stuff like std.range and std.algorithm in pure functions. Without that, it couldn't be done (at least not without some nasty casting). Attribute inference is necessary for templates. Now, that _does_ introduce the possibility of a template being able to be pure and then not being able to be pure thanks to a change that's made to it or something that it uses, and that makes it impossible for any code using it to be pure. CTFE has the same problem. It's fairly easy to have a function which is CTFEable cease to be CTFEable thanks to a change to it, and no one notices. We've had issues with this in the past. In both cases, I believe that the best solution that we have is to unit test stuff to show that it _can_ be pure, @safe, nothrow, and/or CTFEable if the arguments support it, and then those tests can guarantee that it stays that way in spite of any code changes, since they'll fail if the changes break that. - Jonathan M DavisOn 12/03/2012 13:51, Alex Rønne Petersen wrote:But people might be relying on your API that just so happens to be pure, but then suddenly isn't!On 12-03-2012 10:40, Walter Bright wrote:As long as you can explicitly specify that too, and that you get a compile time error when you fail to provide what is explicitly stated, this isn't a problem.On 3/12/2012 1:08 AM, Martin Nowak wrote:Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking?What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 12 2012
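The testing pattern Jonathan describes, sketched (the function and values are made up): annotate a unittest with the attributes the template is supposed to support, and pin CTFEability with a static assert, so a change that silently loses an inferred attribute turns into a test failure.

    int sum(R)(R r)
    {
        int total;
        foreach (e; r)
            total += e;
        return total;
    }

    pure nothrow @safe unittest
    {
        assert(sum([1, 2, 3]) == 6);    // stops compiling if sum!(int[]) is no longer
    }                                   // inferable as pure nothrow @safe

    static assert(sum([1, 2, 3]) == 6); // stops compiling if sum is no longer CTFEable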
On 12-03-2012 18:38, Jonathan M Davis wrote:On Monday, March 12, 2012 14:23:28 Alex Rønne Petersen wrote:That could be solved with a ctfe attribute or something, no? Like, if the function has ctfe, go through all possible CTFE paths (excluding !__ctfe paths of course) and make sure they are CTFEable.On 12-03-2012 14:16, deadalnix wrote:True, but without out, pure, safe, and nothrow are essentially useless with templates, because far too many templates depend on their arguments for whether they can be pure, safe, and/or nothrow or not. It's attribute inference for templates that made it possible to use something stuff like std.range and std.algorithm in pure functions. Without that, it couldn't be done (at least not without some nasty casting). Attribute inference is necessary for templates. Now, that _does_ introduce the possibility of a template being to be pure and then not being able to be pure thanks to a change that's made to it or something that it uses, and that makes impossible for any code using it to be pure. CTFE has the same problem. It's fairly easy to have a function which is CTFEable cease to be CTFEable thanks to a change to it, and no one notices. We've had issues with this in the past.Le 12/03/2012 13:51, Alex Rønne Petersen a écrit :But people might be relying on your API that just so happens to be pure, but then suddenly isn't!On 12-03-2012 10:40, Walter Bright wrote:As long as you can explicitly specify that too, and that you get a compile time error when you fail to provide what is explicitly stated, this isn't a problem.On 3/12/2012 1:08 AM, Martin Nowak wrote:Isn't auto-inference for templates a Bad Thing (TM) since it may give API guarantees that you can end up silently breaking?What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.In both cases, I believe that the best solution that we have is to unit test stuff to show that it _can_ be pure, safe, nothrow, and/or CTFEable if the arguments support it, and then those tests can guarantee that it stays that way in spite of any code changes, since they'll fail if the changes break that. - Jonathan M Davis-- - Alex
Mar 12 2012
On Monday, March 12, 2012 18:44:06 Alex Rønne Petersen wrote:1. That goes completely against how CTFE was designed in that part of the idea was that you _wouldn't_ have to annotate it. 2. I don't really know how feasible that would be. At minimum, the fact that CTFE works with classes now would probably render it completely infeasible for classes, since they're polymorphic, and the compiler can't possibly know all of the possible types that could be passed to the function. Templates would screw it over too for the exact same reasons that they can have issues with pure, safe, and nothrow. It may or may not be feasible without classes or templates being involved. So, no, I don't think that ctfe would really work. And while I agree that the situation isn't exactly ideal, I don't really see a way around it. Unit tests _do_ catch it for you though. The only thing that they can't catch is whether the template is going to be pure, nothrow, safe, and/or CTFEable with _your_ arguments to it, but as long as it's pure, nothrow, safe, and/or CTFEable with _a_ set of arguments, it will generally be the fault of the arguments when such a function fails to be pure, nothrow, safe, and/or CTFEable as expected. If the unit tests don't hit all of the possible static if-else blocks and all of the possible code paths for CTFE, it could still be a problem, but that just means that the unit tests aren't thorough enough, and more thorough unit tests will fix the problem, as tedious as it may be to do that. - Jonathan M DavisNow, that _does_ introduce the possibility of a template being to be pure and then not being able to be pure thanks to a change that's made to it or something that it uses, and that makes impossible for any code using it to be pure. CTFE has the same problem. It's fairly easy to have a function which is CTFEable cease to be CTFEable thanks to a change to it, and no one notices. We've had issues with this in the past.That could be solved with a ctfe attribute or something, no? Like, if the function has ctfe, go through all possible CTFE paths (excluding !__ctfe paths of course) and make sure they are CTFEable.
Mar 12 2012
On 12-03-2012 18:55, Jonathan M Davis wrote:On Monday, March 12, 2012 18:44:06 Alex Rønne Petersen wrote:Though, rarely, functions written with runtime execution in mind actually Just Work in CTFE. You usually have to change code or special-case things for it to work. In my experience, anyway...1. That goes completely against how CTFE was designed in that part of the idea was that you _wouldn't_ have to annotate it.Now, that _does_ introduce the possibility of a template being to be pure and then not being able to be pure thanks to a change that's made to it or something that it uses, and that makes impossible for any code using it to be pure. CTFE has the same problem. It's fairly easy to have a function which is CTFEable cease to be CTFEable thanks to a change to it, and no one notices. We've had issues with this in the past.That could be solved with a ctfe attribute or something, no? Like, if the function has ctfe, go through all possible CTFE paths (excluding !__ctfe paths of course) and make sure they are CTFEable.2. I don't really know how feasible that would be. At minimum, the fact that CTFE works with classes now would probably render it completely infeasible for classes, since they're polymorphic, and the compiler can't possibly know all of the possible types that could be passed to the function. Templates would screw it over too for the exact same reasons that they can have issues with pure, safe, and nothrow. It may or may not be feasible without classes or templates being involved.I hadn't thought of classes at all. In practice, it's impossible then.So, no, I don't think that ctfe would really work. And while I agree that the situation isn't exactly ideal, I don't really see a way around it. Unit tests _do_ catch it for you though. The only thing that they can't catch is whether the template is going to be pure, nothrow, safe, and/or CTFEable with _your_ arguments to it, but as long as it's pure, nothrow, safe, and/or CTFEable with _a_ set of arguments, it will generally be the fault of the arguments when such a function fails to be pure, nothrow, safe, and/or CTFEable as expected. If the unit tests don't hit all of the possible static if-else blocks and all of the possible code paths for CTFE, it could still be a problem, but that just means that the unit tests aren't thorough enough, and more thorough unit tests will fix the problem, as tedious as it may be to do that. - Jonathan M Davis-- - Alex
Mar 12 2012
On Mon, Mar 12, 2012 at 01:55:33PM -0400, Jonathan M Davis wrote: [...]So, no, I don't think that ctfe would really work. And while I agree that the situation isn't exactly ideal, I don't really see a way around it. Unit tests _do_ catch it for you though. The only thing that they can't catch is whether the template is going to be pure, nothrow, safe, and/or CTFEable with _your_ arguments to it, but as long as it's pure, nothrow, safe, and/or CTFEable with _a_ set of arguments, it will generally be the fault of the arguments when such a function fails to be pure, nothrow, safe, and/or CTFEable as expected. If the unit tests don't hit all of the possible static if-else blocks and all of the possible code paths for CTFE, it could still be a problem, but that just means that the unit tests aren't thorough enough, and more thorough unit tests will fix the problem, as tedious as it may be to do that.[...] Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all. T -- Ruby is essentially Perl minus Wall.
Mar 12 2012
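For reference, the convenience being described looks like this (a trivial example, not from the post): the test sits right next to the function it covers and runs when the program is built with -unittest.

    int clamp(int x, int lo, int hi) { return x < lo ? lo : x > hi ? hi : x; }

    unittest
    {
        assert(clamp(5, 0, 10) == 5);
        assert(clamp(-3, 0, 10) == 0);
        assert(clamp(42, 0, 10) == 10);
    }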
On 12-03-2012 19:04, H. S. Teoh wrote:On Mon, Mar 12, 2012 at 01:55:33PM -0400, Jonathan M Davis wrote: [...]I stopped writing inline unit tests in larger code bases. If I do that, I have to maintain a separate build configuration just for test execution, which is not practical. Furthermore, I want to test my code in debug and release mode, which... goes against having a test configuration. So, I've ended up moving all unit tests to a separate executable that links in all my libraries and runs their tests in debug/release mode. Works much better. I don't feel that unittest in D was really thought through properly for large projects targeting actual end users... -- - AlexSo, no, I don't think that ctfe would really work. And while I agree that the situation isn't exactly ideal, I don't really see a way around it. Unit tests _do_ catch it for you though. The only thing that they can't catch is whether the template is going to be pure, nothrow, safe, and/or CTFEable with _your_ arguments to it, but as long as it's pure, nothrow, safe, and/or CTFEable with _a_ set of arguments, it will generally be the fault of the arguments when such a function fails to be pure, nothrow, safe, and/or CTFEable as expected. If the unit tests don't hit all of the possible static if-else blocks and all of the possible code paths for CTFE, it could still be a problem, but that just means that the unit tests aren't thorough enough, and more thorough unit tests will fix the problem, as tedious as it may be to do that.[...] Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all. T
Mar 12 2012
On Mon, Mar 12, 2012 at 07:41:39PM +0100, Alex Rønne Petersen wrote:On 12-03-2012 19:04, H. S. Teoh wrote:[...][...]Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all.I stopped writing inline unit tests in larger code bases. If I do that, I have to maintain a separate build configuration just for test execution, which is not practical. Furthermore, I want to test my code in debug and release mode, which... goes against having a test configuration.[...] Hmm. Sounds like what you want is not really unittests, but global program startup self-checks. In my mind, unittests is for running specific checks against specific functions, classes/structs inside a module. I frequently write lots of unittests that instantiates all sorts of templates never used by the real program, contrived data objects, etc., that may potentially have long running times, or creates files in the working directory or other stuff like that. IOW, stuff that are not suitable to be used for release builds at all. It's really more of a way of forcing the program to refuse to start during development when a code change breaks the system, so that the developer notices the breakage immediately. Definitely not for the end-user. If I wanted release-build self-consistency checking, then yeah, I'd use a different framework than unittests. As for build configuration, I've given up on make a decade ago for something saner, which can handle complicated build options properly. But that belongs to another topic. T -- Error: Keyboard not attached. Press F1 to continue. -- Yoon Ha Lee, CONLANG
Mar 12 2012
On 12-03-2012 20:08, H. S. Teoh wrote:On Mon, Mar 12, 2012 at 07:41:39PM +0100, Alex Rønne Petersen wrote:That's what I do. I simply moved my unittest blocks to a separate executable.On 12-03-2012 19:04, H. S. Teoh wrote:[...][...]Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all.I stopped writing inline unit tests in larger code bases. If I do that, I have to maintain a separate build configuration just for test execution, which is not practical. Furthermore, I want to test my code in debug and release mode, which... goes against having a test configuration.[...] Hmm. Sounds like what you want is not really unittests, but global program startup self-checks. In my mind, unittests is for running specific checks against specific functions, classes/structs inside amodule. I frequently write lots of unittests that instantiates all sorts of templates never used by the real program, contrived data objects, etc., that may potentially have long running times, or creates files in the working directory or other stuff like that. IOW, stuff that are notYou never know if some code that seems to work fine in debug mode breaks in release mode then (until your user runs into a bug). This is why I want full coverage in all configurations.suitable to be used for release builds at all. It's really more of a way of forcing the program to refuse to start during development when a code change breaks the system, so that the developer notices the breakage immediately. Definitely not for the end-user.Right. That's why my tests are in a separate executable from the actual program.If I wanted release-build self-consistency checking, then yeah, I'd use a different framework than unittests.IMHO unittest works fine for both debug and release, just not inline.As for build configuration, I've given up on make a decade ago for something saner, which can handle complicated build options properly. But that belongs to another topic.I used to use Make for this project, then switched to Waf. It's an amazing build tool.T-- - Alex
Mar 12 2012
On 2012-03-12 19:41, Alex Rønne Petersen wrote:I stopped writing inline unit tests in larger code bases. If I do that, I have to maintain a separate build configuration just for test execution, which is not practical. Furthermore, I want to test my code in debug and release mode, which... goes against having a test configuration.I don't inline my unit test either.So, I've ended up moving all unit tests to a separate executable that links in all my libraries and runs their tests in debug/release mode. Works much better. I don't feel that unittest in D was really thought through properly for large projects targeting actual end users...I agree. I've also started to do more high level testing of some of my command line tools using Cucumber and Aruba. But these test are written in Ruby because of Cucumber and Aruba. http://cukes.info/ https://github.com/cucumber/aruba -- /Jacob Carlborg
Mar 12 2012
On 3/12/2012 11:04 AM, H. S. Teoh wrote:Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all.That's exactly how it was intended! It seems like such a small feature, really just a syntactic convenience, but what a difference it makes.
Mar 12 2012
That could be solved with a ctfe attribute or something, no? Like, if the function has ctfe, go through all possible CTFE paths (excluding !__ctfe paths of course) and make sure they are CTFEable.Everything that's pure should be CTFEable which doesn't imply that you can turn every CTFEable function into a pure one.
Mar 12 2012
On Monday, March 12, 2012 21:36:21 Martin Nowak wrote:That could be solved with a @ctfe attribute or something, no? Like, if the function has @ctfe, go through all possible CTFE paths (excluding !__ctfe paths of course) and make sure they are CTFEable.Everything that's pure should be CTFEable which doesn't imply that you can turn every CTFEable function into a pure one.I don't think that that's quite true. pure doesn't imply @safe, so you could do pointer arithmetic and stuff and the like - which I'm pretty sure CTFE won't allow. And, of course, if you mark a C function as pure or subvert pure through casts, then pure _definitely_ doesn't imply CTFEability. - Jonathan M Davis
Mar 12 2012
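One concrete case for Jonathan's point (the declaration below is illustrative, not a real binding): a C function marked pure may be called from pure D code, but it has no D body, so CTFE can never evaluate it.

    extern (C) pure nothrow double c_hypot(double x, double y);    // hypothetical pure C function

    double len(double x, double y) pure { return c_hypot(x, y); }  // fine at run time

    // enum e = len(3.0, 4.0);   // would be rejected: c_hypot cannot be interpreted at compile time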
On 03/12/2012 09:46 PM, Jonathan M Davis wrote:On Monday, March 12, 2012 21:36:21 Martin Nowak wrote:CTFE allows quite some pointer arithmetic, but makes sure it is actually safe.I don't think that that's quite true. pure doesn't imply safe, so you could do pointer arithmetic and stuff and the like - which I'm pretty sure CTFE won't allow. And, of course, if you mark a C function as pure or subvert pure through casts, then pure _definitely_ doesn't imply CTFEability. - Jonathan M DavisThat could be solved with a ctfe attribute or something, no? Like, if the function has ctfe, go through all possible CTFE paths (excluding !__ctfe paths of course) and make sure they are CTFEable.Everything that's pure should be CTFEable which doesn't imply that you can turn every CTFEable function into a pure one.
Mar 12 2012
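A small example of what Timon means (illustrative only): CTFE will work with pointers into memory it manages, as long as every access stays in bounds.

    int firstElement(int[] arr) pure
    {
        int* p = arr.ptr;     // taking a pointer during CTFE is allowed
        return *p;            // dereferencing is allowed because it is provably in bounds
    }

    static assert(firstElement([7, 8, 9]) == 7);   // evaluated entirely at compile time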
On Mon, 12 Mar 2012 10:40:16 +0100, Walter Bright <newshound2 digitalmars.com> wrote:On 3/12/2012 1:08 AM, Martin Nowak wrote:What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.A "@safe pure nothrow const" might be used as "@system". That means someone using a declaration may have a different view than someone providing the implementation. Those interface boundaries are also a good place for by-hand annotations to provide explicit API guarantees and enforce a correct implementation. Though another issue with inference is that it would require a depth-first-order for the semantic passes. I also hope we still don't mangle inferred attributes.
Mar 12 2012
On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:On 3/12/2012 1:08 AM, Martin Nowak wrote:Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 13 2012
Le 13/03/2012 12:02, Peter Alexander a écrit :On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:That is exactly what I was thinking about.On 3/12/2012 1:08 AM, Martin Nowak wrote:Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 13 2012
On 3/13/12 6:02 AM, Peter Alexander wrote:On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:Because in the general case functions call one another so there's no way to figure which to look at first. AndreiOn 3/12/2012 1:08 AM, Martin Nowak wrote:Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 13 2012
On 13/03/2012 15:46, Andrei Alexandrescu wrote:On 3/13/12 6:02 AM, Peter Alexander wrote:This problem is pretty close to garbage collection. Let's use pure as the example, but it works with the other qualifiers too. Functions are marked pure, impure, or possibly pure (pure provided that all the functions they call are pure). Then you go through all possibly pure functions, and if one calls an impure function, you mark it impure. When a pass marks no function as impure, you can mark all remaining possibly pure functions as pure.On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:Because in the general case functions call one another so there's no way to figure which to look at first. AndreiOn 3/12/2012 1:08 AM, Martin Nowak wrote:Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 13 2012
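A toy sketch (all types and names invented here) of the fixed-point scheme deadalnix describes: start from "possibly pure", repeatedly demote anything that calls something impure, and once a pass changes nothing, everything still undecided is pure.

    import std.algorithm : any;

    enum Purity { isPure, isImpure, possiblyPure }

    struct Func
    {
        string name;
        Purity purity;
        Func*[] callees;
    }

    void inferPurity(Func*[] funcs)
    {
        bool changed = true;
        while (changed)                      // iterate to a fixed point
        {
            changed = false;
            foreach (f; funcs)
            {
                if (f.purity != Purity.possiblyPure)
                    continue;
                if (f.callees.any!(c => c.purity == Purity.isImpure))
                {
                    f.purity = Purity.isImpure;   // calls something impure: demote
                    changed = true;
                }
            }
        }
        foreach (f; funcs)                   // whatever survived the fixed point is pure
            if (f.purity == Purity.possiblyPure)
                f.purity = Purity.isPure;
    }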
On 3/13/12 10:47 AM, deadalnix wrote:This problem is pretty close to garbage collection. Let's use pure as example, but it work with other qualifier too. function are marked pure, impure, or pure given all function called are pure (possibly pure). Then you go throw all possibly pure function and if it call an impure function, they mark it impure. When you don't mark any function as impure on a loop, you can mark all remaining possibly pure functions as pure.Certain analyses can be done using the so-called worklist approach. The analysis can be pessimistic (initially marking all functions as not carrying the property analyzed and gradually proving some do carry it) or optimistic (the other way around). The algorithm ends when the worklist is empty. This approach is well-studied and probably ought more coverage in compiler books. I learned about it in a graduate compiler class. However, the discussion was about availability of the body. A worklist-based approach would need all functions that call one another regardless of module. That makes the analysis interprocedural, i.e. difficult on large codebases. Andrei
Mar 13 2012
Le 13/03/2012 17:06, Andrei Alexandrescu a écrit :On 3/13/12 10:47 AM, deadalnix wrote:I expect the function we are talking about here not to call almost all the codebase. It would be scary.This problem is pretty close to garbage collection. Let's use pure as example, but it work with other qualifier too. function are marked pure, impure, or pure given all function called are pure (possibly pure). Then you go throw all possibly pure function and if it call an impure function, they mark it impure. When you don't mark any function as impure on a loop, you can mark all remaining possibly pure functions as pure.Certain analyses can be done using the so-called worklist approach. The analysis can be pessimistic (initially marking all functions as not carrying the property analyzed and gradually proving some do carry it) or optimistic (the other way around). The algorithm ends when the worklist is empty. This approach is well-studied and probably ought more coverage in compiler books. I learned about it in a graduate compiler class. However, the discussion was about availability of the body. A worklist-based approach would need all functions that call one another regardless of module. That makes the analysis interprocedural, i.e. difficult on large codebases. Andrei
Mar 13 2012
On Tue, Mar 13, 2012 at 11:06:00AM -0500, Andrei Alexandrescu wrote:On 3/13/12 10:47 AM, deadalnix wrote:[...] I have an idea. Instead of making potentially risky changes to the compiler, or changes with unknown long-term consequences, what about an external tool (or a new compiler option) that performs this analysis and saves it into a file, say in json format or something? So we run the analysis on druntime, and it tells us exactly which functions can be marked pure, const, whatever, then we can (1) look through the list to see if functions that *should* be pure aren't, then investigate why and (possibly) fix the problem; (2) annotate all functions in druntime just by going through the list, without needing to manually fix one function, find out it breaks 5 other functions, fix those functions, find another 25 broken, etc.. We can also run this on phobos, cleanup whatever functions aren't marked pure, and then go through the list and annotate everything in one shot. Now that I think of it, it seems quite silly that we should be agonizing over the amount of manual work needed to annotate druntime and phobos, when the compiler already has all the necessary information to automate most of the tedious work. T -- It is not the employer who pays the wages. Employers only handle the money. It is the customer who pays the wages. -- Henry FordThis problem is pretty close to garbage collection. Let's use pure as example, but it work with other qualifier too. function are marked pure, impure, or pure given all function called are pure (possibly pure). Then you go throw all possibly pure function and if it call an impure function, they mark it impure. When you don't mark any function as impure on a loop, you can mark all remaining possibly pure functions as pure.Certain analyses can be done using the so-called worklist approach. The analysis can be pessimistic (initially marking all functions as not carrying the property analyzed and gradually proving some do carry it) or optimistic (the other way around). The algorithm ends when the worklist is empty. This approach is well-studied and probably ought more coverage in compiler books. I learned about it in a graduate compiler class.
Mar 13 2012
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:On 3/13/12 6:02 AM, Peter Alexander wrote:That's no difference from template functions calling each others right? int a()(int x) { return x==0?1:b(x-1); } int b()(int x) { return x==0?1:a(x-1); }On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:Because in the general case functions call one another so there's no way to figure which to look at first. AndreiOn 3/12/2012 1:08 AM, Martin Nowak wrote:Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 13 2012
On 03/13/2012 11:39 PM, kennytm wrote:Andrei Alexandrescu<SeeWebsiteForEmail erdani.org> wrote:http://d.puremagic.com/issues/show_bug.cgi?id=7205 The non-trivial issue is what to do with compile-time reflection in the function body. I think during reflection, the function should appear non-annotated to itself and all functions with inferred attributes it calls transitively through other functions with inferred attributes regardless of whether or not they are later inferred. (currently inference fails spectacularly for anything inside a typeof expression anyway, therefore it is not yet that much of an issue.) pragma(msg, typeof({writeln("hello world");})); // "void function() pure safe"On 3/13/12 6:02 AM, Peter Alexander wrote:That's no difference from template functions calling each others right? int a()(int x) { return x==0?1:b(x-1); } int b()(int x) { return x==0?1:a(x-1); }On Monday, 12 March 2012 at 09:40:15 UTC, Walter Bright wrote:Because in the general case functions call one another so there's no way to figure which to look at first. AndreiOn 3/12/2012 1:08 AM, Martin Nowak wrote:Dumb question: Why not auto-infer when the function body is available, and put the inferred attributes into the automatically generated .di file? Apologies if I've missed something completely obvious.What's wrong with auto-inference. Inferred attributes are only strengthening guarantees.Auto-inference is currently done for lambdas and template functions - why? - because the function's implementation is guaranteed to be visible to the compiler. For other functions, not so, and so the attributes must be part of the function signature.
Mar 13 2012
On 3/13/12 5:39 PM, kennytm wrote:Andrei Alexandrescu<SeeWebsiteForEmail erdani.org> wrote:There is. Templates are guaranteed to have the body available. Walter uses a recursive on-demand approach instead of a worklist approach for inferring attributes (worklists have an issue I forgot about). AndreiBecause in the general case functions call one another so there's no way to figure out which to look at first. AndreiThat's no different from template functions calling each other, right? int a()(int x) { return x==0?1:b(x-1); } int b()(int x) { return x==0?1:a(x-1); }
Mar 13 2012
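For reference, the thread's two-template example spelled out as compilable code. Whether the compiler actually infers purity across the cycle depends on the recursive on-demand approach Andrei mentions, so the pure caller is shown commented out rather than asserted:

    // Both bodies are visible, so on-demand inference can in principle
    // resolve the mutual recursion.
    int a()(int x) { return x == 0 ? 1 : b(x - 1); }
    int b()(int x) { return x == 0 ? 1 : a(x - 1); }

    // If the cycle is inferred pure, a caller like this is accepted;
    // the outcome depends on the compiler, so it is left commented out.
    // pure int caller() { return a(10); }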
On Sunday, 11 March 2012 at 23:54:10 UTC, Walter Bright wrote:Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s); They need to be, as well as const, pure nothrow @safe. The problem is: 1. a lot of code must be retrofitted 2. it's just plain annoying to annotate them It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones. So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and @safe (if not already marked as @trusted).A pattern is emerging. Why not analyze it a bit and somehow try to find a common ground? Then we can generalize it to a single annotation.
Mar 12 2012
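To make the retrofitting cost concrete, this is what one hypothetical key struct has to spell out by hand today (Key is an invented example; hash_t is an alias for size_t):

    struct Key
    {
        int id;

        hash_t toHash() const pure nothrow @safe
        {
            return id;
        }

        bool opEquals(ref const Key s) const pure nothrow @safe
        {
            return id == s.id;
        }

        int opCmp(ref const Key s) const pure nothrow @safe
        {
            return (id > s.id) - (id < s.id);   // bools promote to int
        }
    }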
On Monday, 12 March 2012 at 07:18:06 UTC, so wrote:A pattern is emerging. Why not analyze it a bit and somehow try to find a common ground? Then we can generalize it to a single annotation.mask(wat) const|pure|nothrow|safe wat hash_t toHash() wat bool opEquals(ref const KeyType s) wat int opCmp(ref const KeyType s)
Mar 12 2012
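That mask syntax is only a suggestion, but today's attribute blocks already factor out the repetition within a single struct; a small sketch using the same hypothetical Key type as above:

    struct Key
    {
        int id;

        const pure nothrow @safe
        {
            hash_t toHash() { return id; }
            bool opEquals(ref const Key s) { return id == s.id; }
            int opCmp(ref const Key s) { return (id > s.id) - (id < s.id); }
        }
    }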
So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and @safe (if not already marked as @trusted).How about complete inference instead of a hack?
Mar 12 2012
On Monday, March 12, 2012 09:14:17 Martin Nowak wrote:Because that requires having all of the source code. The fact that we have .di files prevents that. You'd have to be able to guarantee that you can always see the whole source (including the source of anything that the functions call) in order for attribute inference to work. The only reason that we can do it with templates is because we _do_ always have their source, and the fact that non-templated functions must have the attributes in their signatures makes it so that the templated functions don't need their source in order to determine their own attributes. The fact that we can't guarantee that all of the source is available when compiling a particular module seriously hampers any attempts at general attribute inference. - Jonathan M DavisSo I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and @safe (if not already marked as @trusted).How about complete inference instead of a hack?
Mar 12 2012
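A hypothetical module pair makes the point concrete: once downstream code compiles against a hand-written .di file, the compiler has no body to infer from and must take the declared attributes at face value (the module and function names here are made up for illustration):

    // mylib.di -- what downstream code compiles against (no body available)
    module mylib;
    size_t scramble(int x) pure nothrow @safe;

    // mylib.d -- the implementation, which the compiler may never see
    module mylib;
    size_t scramble(int x) pure nothrow @safe
    {
        return cast(size_t)(x * 2654435769u);
    }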
Because that requires having all of the source code. The fact that we have .di files prevents that.It doesn't require all source code. It just means that without source code nothing can be inferred and the attributes fall back to what has been annotated by hand. It could be used to annotate functions at the API level and have the compiler check that transitively. It should behave like implicit conversion to "pure nothrow ..." if the compiler hasn't found them inapplicable. On the downside it has some implications for the compilation model because functions would need to be analyzed transitively. But then again we already do this for CTFE.
Mar 12 2012
On 3/12/2012 1:56 PM, Martin Nowak wrote:It doesn't require all source code. It just means that without source code nothing can be inferred and the attributes fall back to what has been annotated by hand.Hello endless bug reports of the form: "It compiles when I send the arguments to dmd this way but not that way. dmd is broken. D sux."
Mar 12 2012
On 2012-03-13 01:40, Walter Bright wrote:On 3/12/2012 1:56 PM, Martin Nowak wrote:We already have that, sometimes :( -- /Jacob CarlborgIt doesn't require all source code. It just means that without source code nothing can be inferred and the attributes fall back to what has been annotated by hand.Hello endless bug reports of the form: "It compiles when I send the arguments to dmd this way but not that way. dmd is broken. D sux."
Mar 13 2012
On Tue, 13 Mar 2012 01:40:08 +0100, Walter Bright <newshound2 digitalmars.com> wrote:On 3/12/2012 1:56 PM, Martin Nowak wrote:Yeah, you're right. It would easily create confusing behavior.It doesn't require all source code. It just means that without source code nothing can be inferred and the attributes fall back to what has been annotated by hand.Hello endless bug reports of the form: "It compiles when I send the arguments to dmd this way but not that way. dmd is broken. D sux."
Mar 13 2012
On 3/13/2012 4:50 AM, Martin Nowak wrote:Yeah, you're right. It would easily create confusing behavior.In general, for modules a and b, all of these should work: dmd a b dmd b a dmd -c a dmd -c b
Mar 13 2012
In general, for modules a and b, all of these should work: dmd a b dmd b a dmd -c a dmd -c bFor '-c' CTFE will already run semantic3 on the other module's functions. But it would be very inefficient to do that for attributes.
Mar 14 2012
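A small made-up example of that cross-module dependency: even with `dmd -c app.d`, evaluating the enum below forces CTFE to use the other module's function body, so the compiler has to analyze that function from the imported module's source (module and function names invented):

    // othermod.d
    module othermod;
    int triple(int x) { return 3 * x; }

    // app.d
    module app;
    import othermod;
    enum nine = triple(3);   // CTFE across the module boundary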
On 12/03/2012 00:54, Walter Bright wrote:Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s); They need to be, as well as const, pure nothrow @safe. The problem is: 1. a lot of code must be retrofitted 2. it's just plain annoying to annotate them It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones. So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and @safe (if not already marked as @trusted).I don't really see the point. For Objects, we inherit from Object, which can define these. For struct, we have inference, so most of the time attributes will be correct. const pure nothrow @safe are something we want, but is it something we want to enforce?
Mar 12 2012
On 3/12/2012 4:11 AM, deadalnix wrote:For struct, we have inference,? No we don't.so most of the time attributes will be correct. const pure nothrow @safe are something we want, but is it something we want to enforce?Yes, because they are referred to by TypeInfo, and that's fairly useless if it isn't const etc.
Mar 12 2012
On 13/03/2012 01:50, Walter Bright wrote:On 3/12/2012 4:11 AM, deadalnix wrote:Ok, my mistake. So why not dig in that direction?For struct, we have inference,? No we don't.I always thought that TypeInfo is a poor substitute for metaprogramming and compile-time reflection.so most of the time attributes will be correct. const pure nothrow @safe are something we want, but is it something we want to enforce?Yes, because they are referred to by TypeInfo, and that's fairly useless if it isn't const etc.
Mar 13 2012
On 13-03-2012 16:56, deadalnix wrote:On 13/03/2012 01:50, Walter Bright wrote:Yes, and in some cases, it doesn't even work right; i.e. you can declare certain opCmp and opEquals signatures that work fine for ==, >, <, etc but don't get emitted to the TypeInfo metadata, and vice versa. It's a mess. -- - AlexOn 3/12/2012 4:11 AM, deadalnix wrote:Ok, my mistake. So why not dig in that direction?For struct, we have inference,? No we don't.I always thought that TypeInfo is a poor substitute for metaprogramming and compile-time reflection.so most of the time attributes will be correct. const pure nothrow @safe are something we want, but is it something we want to enforce?Yes, because they are referred to by TypeInfo, and that's fairly useless if it isn't const etc.
Mar 13 2012
On Tue, 13 Mar 2012 12:03:22 -0400, Alex Rønne Petersen <xtzgzorex gmail.com> wrote:Yes, and in some cases, it doesn't even work right; i.e. you can declare certain opCmp and opEquals signatures that work fine for ==, >, <, etc but don't get emitted to the TypeInfo metadata, and vice versa. It's a mess.See my post in this thread. It fixes this problem. -Steve
Mar 14 2012
On Sun, 11 Mar 2012 19:54:09 -0400, Walter Bright <newshound2 digitalmars.com> wrote:Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s); They need to be, as well as const, pure nothrow @safe. The problem is: 1. a lot of code must be retrofitted 2. it's just plain annoying to annotate them It's the same problem as for Object.toHash(). That was addressed by making those attributes inheritable, but that won't work for struct ones. So I propose instead a bit of a hack. toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and @safe (if not already marked as @trusted).What about a new attribute @type (or better name?) that means "this function is part of the TypeInfo interface, and has an equivalent xFuncname in TypeInfo_Struct". Then it implicitly inherits all the attributes of that xFuncname (not necessarily defined by the compiler). Then, we can have several benefits: 1. This triggers the compiler to complain if we don't correctly define the function (as specified in TypeInfo_Struct). In other words, it allows the developer to specify "I want this function to go into TypeInfo". 2. It potentially allows additional interface hooks without compiler modification. For example, you could add xfoo in TypeInfo_Struct, and then every struct where you define @type foo() would get a hook there. 3. As you wanted, it eliminates having to duplicate all the attributes. The one large drawback is, you need to annotate all existing functions. We could potentially assume that @type is specified on the functions that currently enjoy automatic inclusion in the TypeInfo_Struct instance. I'd recommend at some point eliminating this hack though. -Steve
Mar 12 2012
Walter:toHash, opEquals, and opCmp as struct members be automatically annotated with pure, nothrow, and safe (if not already marked as trusted).I have read the other answers of this thread, and I don't like some of them. In this case I think this programmer convenience doesn't justify adding one more special case to D purity. So for me it's a -1. Bye, bearophile
Mar 12 2012
On Monday, March 12, 2012 11:04:54 H. S. Teoh wrote:Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all.D doesn't make writing unit tests easy, since there's an intrinsic amount of effort required to write them, just like there is with any code, but it takes away all of the extraneous effort in having to set up a unit test framework and the like. And by removing pretty much anything from the effort which is not actually required, it makes writing unit testing about as easy as it can be. I believe that Walter likes to say that it takes away your excuse _not_ to write them because of how easy it is to write unit tests in D. - Jonathan M Davis
Mar 12 2012
On 3/12/2012 11:10 AM, Jonathan M Davis wrote:I believe that Walter likes to say that it takes away your excuse _not_ to write them because of how easy it is to write unit tests in D.It can be remarkable how much more use something gets if you just make it a bit more convenient.
Mar 12 2012
On Mon, Mar 12, 2012 at 02:10:23PM -0400, Jonathan M Davis wrote:On Monday, March 12, 2012 11:04:54 H. S. Teoh wrote:I would argue that D *does* make unit tests easier to write, in that you can write them in straight D code inline (as opposed to some testing frameworks that require external stuff like Expect, Python, intermixed with native code), so you don't need to put what you're writing on hold while you go off and write unittests. You can just insert a unittest block after the function/class/etc immediately while the code is still fresh in your mind. I often find myself writing unittests simultaneously with real code, since while writing the code I see a possible boundary condition to test for, and immediately put that in a unittest to ensure I don't forget about it later. This improves the quality of both the code and the unittests.Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all.D doesn't make writing unit tests easy, since there's an intrinsic amount of effort required to write them, just like there is with any code, but it takes away all of the extraneous effort in having to set up a unit test framework and the like. And by removing pretty much anything from the effort which is not actually required, it makes writing unit testing about as easy as it can be.I believe that Walter likes to say that it takes away your excuse _not_ to write them because of how easy it is to write unit tests in D.[...] Yep. They're so easy to write in D that I'd be embarrassed to *not* write them. T -- Famous last words: I *think* this will work...
Mar 12 2012
On Monday, March 12, 2012 11:25:41 H. S. Teoh wrote:On Mon, Mar 12, 2012 at 02:10:23PM -0400, Jonathan M Davis wrote:I didn't say that D doesn't make writing unit tests easier. I just said that it doesn't make them _easy_. They're as much work as writing any code is. But by making them easier, D makes them about as easy to write as they can be. Regardless, built-in unit testing is a fantastic feature. - Jonathan m DavisOn Monday, March 12, 2012 11:04:54 H. S. Teoh wrote:I would argue that D *does* make unit tests easier to write, in that you can write them in straight D code inline (as opposed to some testing frameworks that require external stuff like Expect, Python, intermixed with native code), so you don't need to put what you're writing on hold while you go off and write unittests. You can just insert a unittest block after the function/class/etc immediately while the code is still fresh in your mind. I often find myself writing unittests simultaneously with real code, since while writing the code I see a possible boundary condition to test for, and immediately put that in a unittest to ensure I don't forget about it later. This improves the quality of both the code and the unittests.Tangential note: writing unit tests may be tedious, but D's inline unittest syntax has alleviated a large part of that tedium. So much so that I find myself writing as much code in unittests as real code. Which is a good thing, because in the past I'd always been too lazy to write any unittests at all.D doesn't make writing unit tests easy, since there's an intrinsic amount of effort required to write them, just like there is with any code, but it takes away all of the extraneous effort in having to set up a unit test framework and the like. And by removing pretty much anything from the effort which is not actually required, it makes writing unit testing about as easy as it can be.
Mar 12 2012
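A toy illustration of the workflow being described, with the boundary case noticed while writing the function going straight into the unittest beside it (the function and its name are invented for the example):

    import std.array : split;

    size_t wordCount(string s)
    {
        return s.split().length;
    }

    unittest
    {
        assert(wordCount("") == 0);              // edge case spotted while coding
        assert(wordCount("one  two three") == 3);
    }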
On 11/03/2012 23:54, Walter Bright wrote:Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s);<snip> And what about toString? Stewart.
Mar 12 2012
On Tuesday, March 13, 2012 01:15:59 Stewart Gordon wrote:On 11/03/2012 23:54, Walter Bright wrote:That really should be too, but work is probably going to have to be done to make sure that format and std.conv.to can be pure, since they're pretty much required in most toString functions. I believe that changes to toUTF8, toUTF16, and toUTF32 were made recently which are at least a major step in that direction for std.conv.to (since it uses them) and may outright make it so that it can be pure now (I'm not sure if anything else is preventing them from being pure). But I have no idea what the current state of format is with regards to purity, and if the changes to toUTFx weren't enough to make std.conv.to pure for strings, then more will need to be done there as well. - Jonathan M DavisConsider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s);<snip> And what about toString?
Mar 12 2012
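For context, this is the kind of toString under discussion; it can only carry pure and nothrow once to!string (or format) itself does (Point is just an illustrative struct):

    import std.conv : to;

    struct Point
    {
        int x, y;

        string toString() const
        {
            return "(" ~ to!string(x) ~ ", " ~ to!string(y) ~ ")";
        }
    }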
Stewart Gordon:And what about toString?Often in toString I use format() or text(), or to!string(), that currently aren't pure nor nothrow. But in this thread I have seen no answers regarding deprecating the need of opCmp() for hashability. Bye, bearophile
Mar 12 2012
On 13-03-2012 02:28, bearophile wrote:Stewart Gordon:I fully support that. -- - AlexAnd what about toString?Often in toString I use format() or text(), or to!string(), that currently aren't pure nor nothrow. But in this thread I have seen no answers regarding deprecating the need of opCmp() for hashability. Bye, bearophile
Mar 12 2012
On Mon, Mar 12, 2012 at 09:27:41PM -0400, Jonathan M Davis wrote:On Tuesday, March 13, 2012 01:15:59 Stewart Gordon wrote:This is not going to be a choice, because some overrides of toHash call toString.On 11/03/2012 23:54, Walter Bright wrote:That really should be too, but work is probably going to have to be done to make sure that format and std.conv.to can be pure, since they're pretty much required in most toString functions.Consider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s);<snip> And what about toString?I believe that changes to toUTF8, toUTF16, and toUTF32 were made recently which are at least a major step in that direction for std.conv.to (since it uses them) and may outright make it so that it can be pure now (I'm not sure if anything else is preventing them from being pure). But I have no idea what the current state of format is with regards to purity, and if the changes to toUTFx weren't enough to make std.conv.to pure for strings, then more will need to be done there as well.[...] Looks like we need a massive one-shot overhaul of almost all of druntime and a potentially large part of phobos in order to get this pure/@safe/nothrow thing off the ground. There are just too many interdependencies everywhere that there's practically no way to do it incrementally. I tried the incremental approach several times and have given up, 'cos every small change inevitably propagates to functions all over druntime and then some. And I'm not talking about doing just toHash, or just toString either. Any of these functions have complex interdependencies with each other, so it's either fix them ALL, or not at all. T -- Obviously, some things aren't very obvious.
Mar 12 2012
On 3/12/2012 6:40 PM, H. S. Teoh wrote:And I'm not talking about doing just toHash, or just toString either. Any of these functions have complex interdependencies with each other, so it's either fix them ALL, or not at all.Yup. It also seems very hard to figure out a transitional path to it.
Mar 12 2012
On Mon, Mar 12, 2012 at 07:06:51PM -0700, Walter Bright wrote:On 3/12/2012 6:40 PM, H. S. Teoh wrote:Perhaps we just need to bite the bullet and do it all in one shot and check it into master, then deal with the aftermath later. :-) Otherwise it will simply never get off the ground. T -- Two wrongs don't make a right; but three rights do make a left...And I'm not talking about doing just toHash, or just toString either. Any of these functions have complex interdependencies with each other, so it's either fix them ALL, or not at all.Yup. It also seems very hard to figure out a transitional path to it.
Mar 12 2012
On 13-03-2012 03:15, H. S. Teoh wrote:On Mon, Mar 12, 2012 at 07:06:51PM -0700, Walter Bright wrote:I have to say this seems like the most sensible approach right now. -- - AlexOn 3/12/2012 6:40 PM, H. S. Teoh wrote:Perhaps we just need to bite the bullet and do it all in one shot and check it into master, then deal with the aftermath later. :-) Otherwise it will simply never get off the ground. TAnd I'm not talking about doing just toHash, or just toString either. Any of these functions have complex interdependencies with each other, so it's either fix them ALL, or not at all.Yup. It also seems very hard to figure out a transitional path to it.
Mar 12 2012
On 13 March 2012 04:15, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:On Mon, Mar 12, 2012 at 07:06:51PM -0700, Walter Bright wrote:MMMmmm, now we're talking! I've always preferred this approach :POn 3/12/2012 6:40 PM, H. S. Teoh wrote:Perhaps we just need to bite the bullet and do it all in one shot and check it into master, then deal with the aftermath later. :-) Otherwise it will simply never get off the ground.And I'm not talking about doing just toHash, or just toString either. Any of these functions have complex interdependencies with each other, so it's either fix them ALL, or not at all.Yup. It also seems very hard to figure out a transitional path to it.
Mar 14 2012
On Mon, 12 Mar 2012 22:06:51 -0400, Walter Bright <newshound2 digitalmars.com> wrote:On 3/12/2012 6:40 PM, H. S. Teoh wrote:It seems most people have ignored my post in this thread, so I'll say it again: What about an annotation (I suggested @type, it could be anything, but I'll use that as my strawman) that says to the compiler "this is part of the TypeInfo_Struct interface." In essence, when @type is encountered the compiler looks at TypeInfo_Struct (in object.di) for the equivalent xfuncname. Then it uses the attributes of that function pointer (and also the parameter types/count) to compile the given method. It does two things: 1. It provides an indicator a user can use when he *wants* to include that function as part of the typeinfo. Right now, you have to guess, and pray to the compiler gods that your function signature is deemed worthy. 2. It takes all sorts of burden off the compiler to know which functions are "special", and to make assumptions about them. We can implement it now *without* making those function pointers pure/@safe/nothrow/whatever, and people can then experiment with changing it without having to muck with the compiler. As a bonus, it also allows people to experiment with adding more interface methods to structs without having to muck with the compiler. The only drawback is what to do with existing code that *doesn't* have @type on its functions that go into TypeInfo_Struct. There are ways to handle this. My suggestion is to simply treat the current methods as special and assume @type is on those methods. But I would suggest removing that "hack" in the future, with some way to easily tell you where you need to put @type annotations. -SteveAnd I'm not talking about doing just toHash, or just toString either. Any of these functions have complex interdependencies with each other, so it's either fix them ALL, or not at all.Yup. It also seems very hard to figure out a transitional path to it.
Mar 14 2012
On 14.03.2012 16:11, Steven Schveighoffer wrote:On Mon, 12 Mar 2012 22:06:51 -0400, Walter Bright <newshound2 digitalmars.com> wrote:For one, I'm sold on it. And the proposed magic hack can work right now, then it'll just get deprecated in favor of explicit @type. -- Dmitry OlshanskyOn 3/12/2012 6:40 PM, H. S. Teoh wrote:It seems most people have ignored my post in this thread, so I'll say it again: What about an annotation (I suggested @type, it could be anything, but I'll use that as my strawman) that says to the compiler "this is part of the TypeInfo_Struct interface." In essence, when @type is encountered the compiler looks at TypeInfo_Struct (in object.di) for the equivalent xfuncname. Then it uses the attributes of that function pointer (and also the parameter types/count) to compile the given method. It does two things: 1. It provides an indicator a user can use when he *wants* to include that function as part of the typeinfo. Right now, you have to guess, and pray to the compiler gods that your function signature is deemed worthy. 2. It takes all sorts of burden off the compiler to know which functions are "special", and to make assumptions about them. We can implement it now *without* making those function pointers pure/@safe/nothrow/whatever, and people can then experiment with changing it without having to muck with the compiler. As a bonus, it also allows people to experiment with adding more interface methods to structs without having to muck with the compiler. The only drawback is what to do with existing code that *doesn't* have @type on its functions that go into TypeInfo_Struct. There are ways to handle this. My suggestion is to simply treat the current methods as special and assume @type is on those methods. But I would suggest removing that "hack" in the future, with some way to easily tell you where you need to put @type annotations.And I'm not talking about doing just toHash, or just toString either. Any of these functions have complex interdependencies with each other, so it's either fix them ALL, or not at all.Yup. It also seems very hard to figure out a transitional path to it.
Mar 14 2012
In essence, when @type is encountered the compiler looks at TypeInfo_Struct (in object.di) for the equivalent xfuncname. Then it uses the attributes of that function pointer (and also the parameter types/count) to compile the given method.Why would you want to add explicit annotation for implicit TypeInfo_Struct methods? I think @type is a very interesting idea if combined with a string->method lookup in TypeInfo_Struct, but this wouldn't allow for static type checking. If you wanted static type checking then @type could probably refer to Interfaces.
Mar 14 2012
On Wed, 14 Mar 2012 09:27:08 -0400, Martin Nowak <dawg dawgfoto.de> wrote:Because right now, it's a guessing game of whether you wanted an operation to be part of the typeinfo's interface. And many times, the compiler guesses wrong. I've seen countless posts on d.learn saying "why won't AA's call my opEquals or opHash function?" With explicit annotation, you have instructed the compiler "I expect this to be in TypeInfo," so it can take the appropriate actions if it doesn't match.In essence, when @type is encountered the compiler looks at TypeInfo_Struct (in object.di) for the equivalent xfuncname. Then it uses the attributes of that function pointer (and also the parameter types/count) to compile the given method.Why would you want to add explicit annotation for implicit TypeInfo_Struct methods?I think @type is a very interesting idea if combined with a string->method lookup in TypeInfo_Struct, but this wouldn't allow for static type checking.Yes it would. It has access to TypeInfo_Struct in object.di, so it can figure out what the signature should be. -Steve
Mar 14 2012
On 3/12/2012 6:15 PM, Stewart Gordon wrote:And what about toString?Good question. What do you suggest?
Mar 12 2012
Walter:Good question. What do you suggest?I suggest to follow a slow but reliable path, working bottom-up: turn to!string()/text()/format() into pure+nothrow functions, and then later require toString to be pure+nothrow and to have such annotations. Bye, bearophile
Mar 12 2012
On Mon, Mar 12, 2012 at 10:58:18PM -0400, bearophile wrote:Walter:[...] The problem is that there may not *be* a bottom to start from. These functions are all interlinked to each other in various places (spread across a myriad of different overrides of them). I've tried to find one function that I can annotate without needing hundreds of other changes, but alas, they all depend on each other at some level, and every time I end up annotating almost every other function in druntime and the change just gets bigger and bigger. T -- Trying to define yourself is like trying to bite your own teeth. -- Alan WattsGood question. What do you suggest?I suggest to follow a slow but reliable path, working bottom-up: turn to!string()/text()/format() into pure+nothrow functions, and then later require toString to be pure+nothrow and to have such annotations.
Mar 12 2012
On 13/03/12 03:05, Walter Bright wrote:On 3/12/2012 6:15 PM, Stewart Gordon wrote:Why can't we just kill that abomination?And what about toString?Good question. What do you suggest?
Mar 13 2012
On 3/13/2012 4:15 AM, Don Clugston wrote:On 13/03/12 03:05, Walter Bright wrote:Break a lot of existing code?On 3/12/2012 6:15 PM, Stewart Gordon wrote:Why can't we just kill that abomination?And what about toString?Good question. What do you suggest?
Mar 13 2012
Walter:Break a lot of existing code?Invent a good deprecation strategy for toString? The idea of modifying toString isn't new, we have discussed it more than once, and I agree with the general strategy Don appreciates. As far as I know you didn't do much on it mostly because there were other more important things to do, like fixing important bugs, not because people thought the situation was good enough. Maybe now it's a good moment to slow down bug fixing and look for things more important than bugs (where "important" means "can't be done much later"). Bye, bearophile
Mar 13 2012
On 14.03.2012 3:23, Walter Bright wrote:On 3/13/2012 4:15 AM, Don Clugston wrote:And gain efficiency. BTW transition paths were suggested, need to just dig up DIP9 discussions. -- Dmitry OlshanskyOn 13/03/12 03:05, Walter Bright wrote:Break a lot of existing code?On 3/12/2012 6:15 PM, Stewart Gordon wrote:Why can't we just kill that abomination?And what about toString?Good question. What do you suggest?
Mar 14 2012
On Tue, 13 Mar 2012 19:23:25 -0400, Walter Bright <newshound2 digitalmars.com> wrote:On 3/13/2012 4:15 AM, Don Clugston wrote:I'm unaware of much code that uses TypeInfo.xtostring to print anything. write[f][ln] doesn't, and I don't think std.conv.to does either. In other words, killing the "specialness" of toString doesn't mean killing toString methods in all structs. What this does is allow us to not worry about what you annotate your toString methods with, it just becomes a regular method. -SteveOn 13/03/12 03:05, Walter Bright wrote:Break a lot of existing code?On 3/12/2012 6:15 PM, Stewart Gordon wrote:Why can't we just kill that abomination?And what about toString?Good question. What do you suggest?
Mar 14 2012
On 3/12/12 8:15 PM, Stewart Gordon wrote:On 11/03/2012 23:54, Walter Bright wrote:I think the three others have a special regime because pointers to them must be saved for the sake of associative arrays. toString is used only generically, AndreiConsider the toHash() function for struct key types: http://dlang.org/hash-map.html And of course the others: const hash_t toHash(); const bool opEquals(ref const KeyType s); const int opCmp(ref const KeyType s);<snip> And what about toString?
Mar 12 2012
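A small runnable check of what "pointers to them must be saved" means in practice: typeid of a struct yields a TypeInfo_Struct carrying function pointers such as xtoHash, and whether they get wired up depends on the exact member signature, which is the guessing game mentioned earlier in the thread (S is a made-up struct, and no particular outcome is asserted):

    import std.stdio;

    struct S
    {
        size_t toHash() const nothrow @safe { return 42; }
    }

    void main()
    {
        auto ti = typeid(S);               // statically a TypeInfo_Struct
        // Non-null only if the compiler decided this toHash signature
        // belongs in the TypeInfo.
        writeln(ti.xtoHash !is null);
    }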
On Tue, 13 Mar 2012 04:40:01 +0100, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:I think the three others have a special regime because pointers to them must be saved for the sake of associative arrays. toString is used only generically, AndreiAdding a special case for AAs is not a good idea, but these operators are indeed special and should have defined behavior. Requiring pureness for comparison, for example, is good for all kinds of generic algorithms.
Mar 13 2012