
digitalmars.D.learn - Array!T and find are slow

reply "Damian Day" <damianroyday gmail.com> writes:
While benchmarking my program I've found that std.algorithm.find is
very slow on Array!T, because it iterates over the range interface
instead of the underlying plain array.

I've written some search functions which are many times faster.
Is it worth making a pull request?


May 14 2014
next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Wednesday, 14 May 2014 at 14:24:28 UTC, Damian Day wrote:
 I've written some search functions, which are many times 
 faster, is it
 worth making a pull request?
Generally, we strive to make the algorithms in Phobos as fast as possible even in the general case. I didn't look at the issue at hand in any detail, but if the "-O -release -inline" performance when operating on Arrays is considerably worse than when operating directly on the data, we have a problem, as it is likely to impact other ranges as well.

In short, I don't think the best solution is to add this special case to Array!T, but rather to figure out what exactly is wrong with Array!T and/or find() and fix the problem at its source.

Could you post a short benchmark snippet explicitly showing the problem?

Best,
David
May 14 2014
next sibling parent reply "Damian Day" <damianroyday gmail.com> writes:
On Wednesday, 14 May 2014 at 14:54:57 UTC, David Nadlinger wrote:

 Could you post a short benchmark snippet explicitly showing the 
 problem?
Benchmark found here: http://dpaste.dzfl.pl/0058fc8341830
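
For reference, a minimal sketch of the kind of comparison being discussed (the dpaste snippet itself is not reproduced here; container size, iteration count, and module paths are assumptions for current Phobos, not the original benchmark):

import std.algorithm : find;
import std.container : Array;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writeln;

void main()
{
    enum n = 1_000_000;
    Array!int arr;
    int[] slice;
    foreach (i; 0 .. n)
    {
        arr.insertBack(i);
        slice ~= i;
    }

    // Search for a value near the end so the whole range gets walked.
    auto sw = StopWatch(AutoStart.yes);
    foreach (_; 0 .. 100)
        cast(void) find(arr[], n - 1);      // find over Array!int's range
    writeln("Array!int range: ", sw.peek);

    sw.reset();
    foreach (_; 0 .. 100)
        cast(void) find(slice, n - 1);      // find over the plain slice
    writeln("plain slice:     ", sw.peek);
}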
May 14 2014
next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Wednesday, 14 May 2014 at 15:42:13 UTC, Damian Day wrote:
 On Wednesday, 14 May 2014 at 14:54:57 UTC, David Nadlinger 
 wrote:

 Could you post a short benchmark snippet explicitly showing 
 the problem?
Benchmark found here: http://dpaste.dzfl.pl/0058fc8341830
Unfortunately, I don't have the time to look at the details right now, but https://github.com/D-Programming-Language/phobos/pull/1713 might have improved the situation somewhat. David
May 14 2014
parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Wednesday, 14 May 2014 at 17:20:37 UTC, David Nadlinger wrote:
 On Wednesday, 14 May 2014 at 15:42:13 UTC, Damian Day wrote:
 On Wednesday, 14 May 2014 at 14:54:57 UTC, David Nadlinger 
 wrote:

 Could you post a short benchmark snippet explicitly showing 
 the problem?
Benchmark found here: http://dpaste.dzfl.pl/0058fc8341830
Unfortunately, I don't have the time to look at the details right now, but https://github.com/D-Programming-Language/phobos/pull/1713 might have improved the situation somewhat. David
That pull shows that the previous behaviour was to use enforce? Isn't this very expensive, particularly considering that enforce uses lazy non-scope arguments?
May 14 2014
next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Wednesday, 14 May 2014 at 21:20:06 UTC, Kapps wrote:
 That pull shows that the previous behaviour was to use enforce?
 Isn't this very expensive, particularly considering that enforce
 uses lazy non-scope arguments?
Yes.
May 14 2014
prev sibling parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Wed, 14 May 2014 21:20:05 +0000
Kapps via Digitalmars-d-learn <digitalmars-d-learn puremagic.com> wrote:

 That pull shows that the previous behaviour was to use enforce?
 Isn't this very expensive, particularly considering that enforce
 uses lazy non-scope arguments?
Yeah, much as Andrei would hate to hear it (enforce was his idea, and he quite likes the idiom), the fact that lazy is so inefficient makes it arguably bad practice to use it in high performance code. We really need to find a way to get lazy optimized properly so that we _can_ safely use enforce, but for now, it's not a good idea unless the code that you're working on can afford the performance hit.

Honestly, in general, I'd avoid most anything which uses lazy (e.g. that's why I'd use explicit try-catch blocks rather than std.exception.assumeWontThrow - like enforce, it's a nice idea, but it's too expensive at this point).

- Jonathan M Davis
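
A sketch of the assumeWontThrow trade-off mentioned above; parseKnownGood and parseKnownGoodExplicit are placeholder names, not Phobos functions:

import std.conv : to;
import std.exception : assumeWontThrow;

// Using assumeWontThrow: concise, but the expression becomes a lazy
// (delegate) argument, which currently blocks inlining.
int parseKnownGood(string s) nothrow
{
    return assumeWontThrow(s.to!int);
}

// The explicit try-catch alternative: more verbose, but no lazy
// parameter is involved.
int parseKnownGoodExplicit(string s) nothrow
{
    try
    {
        return s.to!int;
    }
    catch (Exception)
    {
        assert(0, "unexpected conversion failure");
    }
}

void main()
{
    assert(parseKnownGood("42") == 42);
    assert(parseKnownGoodExplicit("42") == 42);
}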
May 14 2014
parent reply "Meta" <jared771 gmail.com> writes:
On Wednesday, 14 May 2014 at 22:32:01 UTC, Jonathan M Davis via 
Digitalmars-d-learn wrote:
 Yeah, much as Andrei would hate to hear it (enforce was his 
 idea, and he quite
 likes the idiom), the fact that lazy is so inefficient makes it 
 so that it's
 arguably bad practice to use it in high performance code. We 
 really need to
 find a way to make it so that lazy is optimized properly so 
 that we _can_
 safely use enforce, but for now, it's not a good idea unless 
 the code that
 you're working on can afford the performance hit.

 Honestly, in general, I'd avoid most anything which uses lazy 
 (e.g. that's why
 I'd use explict try-catch blocks rather than use
 std.exception.assumeWontThrow - like enforce, it's a nice idea, 
 but it's too
 expensive at this point).

 - Jonathan M Davis
On the topic of lazy, why *is* it so slow, exactly? I thought it was just shorthand for taking a function that evaluates the expression, and wrapping said expression in that function at the call site. That is, I thought that:

int doSomething(lazy int n)
{
    return n();
}

Was more or less equivalent to:

int doSomething(int function() n)
{
    return n();
}
May 14 2014
next sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Wednesday, 14 May 2014 at 23:50:34 UTC, Meta wrote:
 On the topic of lazy, why *is* it so slow, exactly? I thought 
 it was just shorthand for taking a function that evaluates the 
 expression, and wrapping said expression in that function at 
 the call site. That is, I thought that:

 int doSomething(lazy int n)
 {
     return n();
 }

 Was more or less equivalent to:

 int doSomething(int function() n)
 {
     return n();
 }
It's more equivalent to:

int doSomething(int delegate() n)
{
    return n();
}

And (I could very well be wrong here), I believe that it's expensive because it's not scope and possibly requires a closure. Again, I may very well be wrong.
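
A rough sketch of what a call involving a lazy parameter conceptually lowers to at the call site (doSomethingLowered is an illustrative name; this is not the compiler's literal output):

int doSomething(lazy int n)
{
    return n();
}

// Roughly what the compiler generates: the argument expression is
// wrapped in a delegate that captures the caller's locals.
int doSomethingLowered(int delegate() n)
{
    return n();
}

void main()
{
    int x = 1, y = 2;
    // What the programmer writes: the compiler wraps x + y in a hidden delegate.
    auto a = doSomething(x + y);
    // The hand-written equivalent of that wrapping.
    auto b = doSomethingLowered(() => x + y);
    assert(a == b);
}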
May 14 2014
parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Thu, 15 May 2014 01:29:23 +0000
Kapps via Digitalmars-d-learn <digitalmars-d-learn puremagic.com> wrote:

 On Wednesday, 14 May 2014 at 23:50:34 UTC, Meta wrote:
 On the topic of lazy, why *is* it so slow, exactly? I thought
 it was just shorthand for taking a function that evaluates the
 expression, and wrapping said expression in that function at
 the call site. That is, I thought that:

 int doSomething(lazy int n)
 {
     return n();
 }

 Was more or less equivalent to:

 int doSomething(int function() n)
 {
     return n();
 }
It's more equivalent to:

int doSomething(int delegate() n)
{
    return n();
}

And (I could very well be wrong here), I believe that it's expensive because it's not scope and possibly requires a closure. Again, I may very well be wrong.
Yeah. It generates a delegate. You even use the value internally as a delegate. So, that's definitely part of the problem, though IIRC, there were other issues with it as well; I don't remember them at the moment. The big one IIRC (which may be due to its nature as a delegate) is simply that it can't be inlined, and in many cases, you very much want the code to be inlined (enforce would be a prime example of that).

enforce(cond, "failure");

really should just translate to something close to

if(!cond) throw new Exception("failure");

but it doesn't do anything close to that. And as long as it doesn't, enforce is of questionable value in any code that cares about efficiency.

- Jonathan M Davis
May 14 2014
next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Thursday, 15 May 2014 at 04:37:16 UTC, Jonathan M Davis via 
Digitalmars-d-learn wrote:
 enforce(cond, "failure");

 really should just translate to something close to

 if(!cond) throw new Exception("failure");

 but it doesn't do anything close to that. And as long as it 
 doesn't, enforce
 is of questionable value in any code that cares about 
 efficiency.

 - Jonathan M Davis
As a workaround, I'm sure we could specialize enforce without lazy for built-in types?

BTW: Why *is* enforce lazy again? I don't really see it. It makes more sense for things like "collectException" I guess, but I don't see it for enforce.
May 14 2014
parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Thu, 15 May 2014 05:53:45 +0000
monarch_dodra via Digitalmars-d-learn
<digitalmars-d-learn puremagic.com> wrote:

 As a workaround, I'm sure we could specialize enforce without
 lazy for built-in types?
No. I don't think that that would work. The problem is that you'd have to be able to overload between stuff like "error message" and format("error message: %s", foo), because you don't want the first one to be lazy, whereas you do want the second one to be lazy.
 BTW: Why *is* enforce lazy again? I don't really see it. I makes
 more sense for things like "collectException" I guess, but I
 don't see it for enforce.
It's lazy so that the second argument isn't evaluated unless the condition fails. That might not be obvious in the example

enforce(cond, "failure");

but if you have

enforce(cond, format("failure: %s", foo));

or

enforce(cond, new BarException("blah"));

then it would definitely be costing you something if the second parameter weren't lazy - and in theory, the cost of it not being lazy could be arbitrarily large, because you can pass it arbitrarily complex expressions just so long as their result is the correct type.

Now, given how slow lazy is, maybe it would actually be faster to just have it be strict instead of lazy, but in theory, it's saving you something. And if lazy were actually properly efficient, then it would definitely be saving you something. Regardless, using an explicit if statement is definitely faster, because then you don't have the lazy parameter to worry about _or_ the potentially expensive expression, since the expression would only be evaluated if the condition failed.

- Jonathan M Davis
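
A small sketch of the difference being described; check, checkLazy, and foo are placeholder names, not Phobos functions:

import std.format : format;

// Strict parameter: the message is built on every call,
// even when cond is true.
void check(bool cond, string msg)
{
    if (!cond)
        throw new Exception(msg);
}

// Lazy parameter: the message expression is only evaluated on the
// failure path, at the cost of a hidden delegate at the call site.
void checkLazy(bool cond, lazy string msg)
{
    if (!cond)
        throw new Exception(msg);
}

void main()
{
    int foo = 42;
    check(foo > 0, format("failure: %s", foo));     // format always runs
    checkLazy(foo > 0, format("failure: %s", foo)); // format runs only on failure
}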
May 14 2014
parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Thursday, 15 May 2014 at 06:52:44 UTC, Jonathan M Davis via 
Digitalmars-d-learn wrote:
 On Thu, 15 May 2014 05:53:45 +0000
 monarch_dodra via Digitalmars-d-learn
 <digitalmars-d-learn puremagic.com> wrote:

 As a workaround, I'm sure we could specialize enforce without
 lazy for built-in types?
No. I don't think that that would work. The problem is that you'd have to be able to overload between stuff like "error message" and format("error message: %s", foo), because you don't want the first one to be lazy, whereas you do want the second one to be lazy.
Oh... right. It's the *second* parameter that's lazy. Arguably, the compiler should be able to "see" whether the argument passed is a simple value or an expression though, and optimize accordingly.
May 15 2014
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 5/15/14, 1:31 AM, Jonathan M Davis via Digitalmars-d-learn wrote:
 On Thu, 15 May 2014 01:29:23 +0000
 Kapps via Digitalmars-d-learn <digitalmars-d-learn puremagic.com> wrote:

 On Wednesday, 14 May 2014 at 23:50:34 UTC, Meta wrote:
 On the topic of lazy, why *is* it so slow, exactly? I thought
 it was just shorthand for taking a function that evaluates the
 expression, and wrapping said expression in that function at
 the call site. That is, I thought that:

 int doSomething(lazy int n)
 {
      return n();
 }

 Was more or less equivalent to:

 int doSomething(int function() n)
 {
      return n();
 }
It's more equivalent to:

int doSomething(int delegate() n)
{
    return n();
}

And (I could very well be wrong here), I believe that it's expensive because it's not scope and possibly requires a closure. Again, I may very well be wrong.
Yeah. It generates a delegate. You even use the value internally as a delegate. So, that's definitely part of the problem, though IIRC, there were other issues with it as well; I don't remember them at the moment. The big one IIRC (which may be due to its nature as a delegate) is simply that it can't be inlined, and in many cases, you very much want the code to be inlined (enforce would be a prime example of that).

enforce(cond, "failure");

really should just translate to something close to

if(!cond) throw new Exception("failure");

but it doesn't do anything close to that.
Isn't there a way in D to just expand:

enforce(cond, "failure");

(or something with a similar syntax) to this, at compile-time:

if(!cond) throw new Exception("failure");

I thought D could do this, so enforce should do this instead of using lazy arguments.
May 15 2014
parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Thu, 15 May 2014 08:04:59 -0300
Ary Borenszweig via Digitalmars-d-learn
<digitalmars-d-learn puremagic.com> wrote:

 Isn't there a way in D to just expand:

 enforce(cond, "failure");

 (or something with a similar syntax) to this, at compile-time:

 if(!cond) throw new Exception("failure");

 I thought D could do this, so enforce should do this instead of using
 lazy arguments.
No. enforce is a function, and the only other things that it could be with that syntax would be other callables (e.g. a lambda, delegate, or functor). Mixins are the only constructs that can completely replace themselves with another construct. So, I suppose that you could have a function called enforce that returned a string and be able to do something like

mixin(enforce("cond", `"failure"`));

and you could probably do something similar with a template mixin.

mixin(enforce!(cond, "failure"));

might be possible if both of the template parameters were alias parameters. But there's no way to take one piece of code and completely translate it into another piece of code without mixins. To do that would probably have meant using some kind of macros, and that was specifically avoided in D's design.

And conceptually, using lazy for enforce is a perfect fit. The problem is with the current implementation of lazy.

- Jonathan M Davis
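
A minimal sketch of the string-mixin approach described above; enforceMixin is a hypothetical helper, not part of Phobos:

// Hypothetical helper: builds the check as source text, so the mixin
// expands directly to an if/throw at the call site, with no lazy parameter.
string enforceMixin(string cond, string msg)
{
    return "if (!(" ~ cond ~ ")) throw new Exception(" ~ msg ~ ");";
}

void main()
{
    int cond = 1;
    mixin(enforceMixin("cond != 0", `"failure"`));
}

The call-site syntax is clearly heavier than enforce(cond, "failure"), which is part of why mixins can't replace it with the same syntax.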
May 16 2014
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 16 May 2014 11:36:44 -0400, Jonathan M Davis via  
Digitalmars-d-learn <digitalmars-d-learn puremagic.com> wrote:

 On Thu, 15 May 2014 08:04:59 -0300
 Ary Borenszweig via Digitalmars-d-learn
 <digitalmars-d-learn puremagic.com> wrote:

 Isn't there a way in D to just expand:

 enforce(cond, "failure");

 (or something with a similar syntax) to this, at compile-time:

 if(!cond) throw new Exception("failure");

 I thought D could do this, so enforce should do this instead of using
 lazy arguments.
No. enforce is a function, and the only other things that it could be with that syntax would be other callables (e.g. a lambda, delegate, or functor).
I think the compiler *could* optimize this properly, and that would be an amazing improvement to the compiler, if someone wants to implement it. Essentially, you need to be able to inline enforce (not a problem since it's a template), and then deduce that the lazy calls can just be moved to where they are used - in this case, only once.

This would make a logging library even better too.

-Steve
May 16 2014
parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Fri, 16 May 2014 11:51:31 -0400
Steven Schveighoffer via Digitalmars-d-learn
<digitalmars-d-learn puremagic.com> wrote:

 On Fri, 16 May 2014 11:36:44 -0400, Jonathan M Davis via
 Digitalmars-d-learn <digitalmars-d-learn puremagic.com> wrote:

 On Thu, 15 May 2014 08:04:59 -0300
 Ary Borenszweig via Digitalmars-d-learn
 <digitalmars-d-learn puremagic.com> wrote:

 Isn't there a way in D to just expand:

 enforce(cond, "failure");

 (or something with a similar syntax) to this, at compile-time:

 if(!cond) throw new Exception("failure");

 I thought D could do this, so enforce should do this instead of
 using lazy arguments.
No. enforce is a function, and the only other things that it could be with that syntax would be other callables (e.g. a lambda, delegate, or functor).
I think it *could* optimize properly, and that would be an amazing improvement to the compiler, if someone wants to implement that. Essentially, you need to be able to inline enforce (not a problem since it's a template), and then deduce that the lazy calls can just be moved to where they are used, in this case, only once. This would make a logging library even better too.
Sure, the compiler could be improved to optimize enforce such that

enforce(cond, "failure");

becomes

if(!cond) throw new Exception("failure");

And I think that that's what needs to be done. However, what I understood the question to be was whether there was a construct in the language that we could use where enforce(cond, "failure"); would be automatically converted to if(!cond) throw new Exception("failure"); without the compiler having to do optimizations to make it happen. The closest to that would be mixins, but they can't do it with the same syntax. You'd be required to write something more complex to use mixins to do the job.

But I think that the correct solution is to improve the compiler with regards to lazy. The fact that lazy is so slow is a serious problem, and enforce is just one manifestation of it (albeit the worst because of how often it's used).

- Jonathan M Davis
May 17 2014
parent "w0rp" <devw0rp gmail.com> writes:
On Saturday, 17 May 2014 at 20:06:03 UTC, Jonathan M Davis via 
Digitalmars-d-learn wrote:
 But I think that the correct solution is to improve the 
 compiler with regards
 to lazy. The fact that lazy is so slow is a serious problem, 
 and enforce is
 just one manifestation of it (albeit the worst because of how 
 often it's
 used).

 - Jonathan M Davis
I haven't been bitten by the performance of lazy myself, but I'd +1 improving it, because of how much easier it can make writing code.
May 17 2014
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 14 May 2014 19:50:33 -0400, Meta <jared771 gmail.com> wrote:

 On Wednesday, 14 May 2014 at 22:32:01 UTC, Jonathan M Davis via  
 Digitalmars-d-learn wrote:
 Yeah, much as Andrei would hate to hear it (enforce was his idea, and  
 he quite
 likes the idiom), the fact that lazy is so inefficient makes it so that  
 it's
 arguably bad practice to use it in high performance code. We really  
 need to
 find a way to make it so that lazy is optimized properly so that we  
 _can_
 safely use enforce, but for now, it's not a good idea unless the code  
 that
 you're working on can afford the performance hit.

 Honestly, in general, I'd avoid most anything which uses lazy (e.g.  
 that's why
 I'd use explict try-catch blocks rather than use
 std.exception.assumeWontThrow - like enforce, it's a nice idea, but  
 it's too
 expensive at this point).

 - Jonathan M Davis
On the topic of lazy, why *is* it so slow, exactly?
From what I remember, the issue is that functions with lazy parameters will never be inlined.

-Steve
May 15 2014
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Wednesday, 14 May 2014 at 15:42:13 UTC, Damian Day wrote:
 On Wednesday, 14 May 2014 at 14:54:57 UTC, David Nadlinger 
 wrote:

 Could you post a short benchmark snippet explicitly showing 
 the problem?
Benchmark found here: http://dpaste.dzfl.pl/0058fc8341830
FYI, that implementation is wrong: Array.Range keeps `_a` and `_b`, which represent the "internal" bounds of the payload proper (and not "low" and "length"). You want something along the lines of:

Range searchForward(E)(E needle) if (isImplicitlyConvertible!(E, T))
{
    assert(_data.refCountedStore.isInitialized);
    auto haystack = _data._payload[_a .. _b];
    immutable len = _b - _a;
    for (size_t index = 0; index < len; ++index)
    {
        if (haystack[index] is needle)
            return Range(this, _a + index, _b);
    }
    return Range(this, _b, _b);
}

But even then, as I suggested above, while I think having searchForward could improve the situation, implementing it "in terms of" find would be better: that way, you'd still benefit from some of the highly optimized cases in find.
May 14 2014
prev sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Damian Day:

 Benchmark found here:
 http://dpaste.dzfl.pl/0058fc8341830
Perhaps it's a problem of missed dmd inlining, so I suggest running the benchmark with ldc2.

Bye,
bearophile
May 14 2014
prev sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Wednesday, 14 May 2014 at 14:54:57 UTC, David Nadlinger wrote:
 On Wednesday, 14 May 2014 at 14:24:28 UTC, Damian Day wrote:
 I've written some search functions, which are many times 
 faster, is it
 worth making a pull request?
Generally, we strive to make the algorithms in Phobos as fast as possible even in the general case. I didn't look at the issue at hand in any detail, but if the "-O -release -inline" performance when operating on Arrays is considerably worse than when operating directly on the data, we have a problem, as it is likely to impact other ranges as well.

In short, I don't think the best solution is to add this special case to Array!T, but rather to figure out what exactly is wrong with Array!T and/or find() and fix the problem at its source.

Could you post a short benchmark snippet explicitly showing the problem?

Best,
David
"One" of the issue is the cost of repeatedly indexing Array, when internally it is nothing more than an array. Adding a special case "in" Array.Range allows bypassing the repeated indexing costs, but at the very least, should be implemented "in terms of" find itself, eg: Range searchForward(E)(E needle) if (isImplicitlyConvertible!(E, T)) { assert(_outer._data.refCountedStore.isInitialized); auto haystack = _outer._data._payload[_a .. _b]; haystack = haystack.find(needle); return Range(this._outer, _b - haystack.length, _b); }
May 14 2014
parent reply "David Nadlinger" <code klickverbot.at> writes:
On Wednesday, 14 May 2014 at 17:36:35 UTC, monarch_dodra wrote:
 Adding a special case "in" Array.Range allows bypassing the 
 repeated indexing costs, but at the very least, should be 
 implemented "in terms of" find itself, eg:
Shouldn't the extra indirection just vanish into thin air once opIndex or what have you is inlined? I don't see a conceptual reason for overhead in this case.

Of course, the compiler probably wouldn't be able to infer any of the memchr/… optimizations find does for the array case, but Damian's alternative version doesn't do any of that either.

David
May 14 2014
parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Wednesday, 14 May 2014 at 20:29:52 UTC, David Nadlinger wrote:
 On Wednesday, 14 May 2014 at 17:36:35 UTC, monarch_dodra wrote:
 Adding a special case "in" Array.Range allows bypassing the 
 repeated indexing costs, but at the very least, should be 
 implemented "in terms of" find itself, eg:
Shouldn't the extra indirection just vanish into thin air once opIndex or what have you is inlined? I don't see a conceptual reason for overhead in this case.
Maybe. That said, currently, find doesn't use indexing for RA ranges. It simply uses a front/popFront loop, so *that's* extra work. I just tried an RA/hasLength/hasSlicing case in find. It improves the situation by a factor of 2 on my machine (with dmd), but it is still somewhat slower than extracting the array directly.

I'm usually reluctant to add extra code when the generic case works, but I feel we should make an exception for find. I don't know if the optimizer can optimize away the indirection entirely, when the code for front also does indexing, with bounds checking...

Array comes with deterministic memory management, as well as ranges that aren't invalidated when relocation occurs. Extra overhead is to be expected.
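
For illustration, a sketch of the kind of random-access specialization being described (an assumed reconstruction, not the actual change that was tried):

import std.range : hasLength, hasSlicing, isRandomAccessRange;

// Index-based search for random-access ranges with length and slicing,
// instead of the generic front/popFront loop.
auto findByIndex(R, E)(R haystack, E needle)
    if (isRandomAccessRange!R && hasLength!R && hasSlicing!R)
{
    foreach (i; 0 .. haystack.length)
    {
        if (haystack[i] == needle)
            return haystack[i .. haystack.length];
    }
    return haystack[haystack.length .. haystack.length];
}

unittest
{
    assert(findByIndex([1, 2, 3, 4], 3) == [3, 4]);
    assert(findByIndex([1, 2, 3, 4], 9).length == 0);
}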
 Of course, the compiler probably wouldn't be able to infer any 
 of the memchr/… optimizations find does for the array case, but 
 Damian's alternative version doesn't do any of that either.

 David
The memchr optimization would only trigger for ubyte/byte and char (although a bug prevents Array from working with char: https://issues.dlang.org/show_bug.cgi?id=8824).

But more importantly, there are more flavors of find, such as find with a predicate, or range/range find. Having an implementation provide a thin wrapper might be acceptable. Reimplementation isn't.
May 14 2014
parent Jonathan M Davis via Digitalmars-d-learn writes:
On Wed, 14 May 2014 20:54:19 +0000
monarch_dodra via Digitalmars-d-learn
<digitalmars-d-learn puremagic.com> wrote:

 I'm usually reluctant to add extra code when the generic case
 works, but I feel we should make an exception for find.
We should avoid it where it doesn't gain us much, but the standard library needs to be fast, so if there's a definite performance gain to be made in adding overloads, we should. That being said, I agree that we should be trying to make ranges in general faster rather than optimizing for individual range types such as Array's range type.

Still, if there's a significant performance gain to be made in adding an overload of find for Array's range type, we should at least consider it. UFCS should then make it just work. But that solution should be reserved for cases where there's a fundamental reason why a specialization for Array's range type would be faster, rather than because ranges aren't optimized well enough by the compiler or because std.algorithm.find itself needs improvement. If we can speed up ranges in general, then we gain everywhere, and if we can speed up std.algorithm.find, then it's sped up for many types rather than just for Array's range type.

- Jonathan M Davis
May 14 2014
prev sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Wednesday, 14 May 2014 at 14:24:28 UTC, Damian Day wrote:
 I've found bench-marking my program that std.algorithm.find is
 very slow on Array!T, due to the fact it iterates on a range
 instead of a plain array.

 I've written some search functions, which are many times 
 faster, is it
 worth making a pull request?


BTW, this is a more "general" issue: given a generic algorithm "std.foo", how can I write my own (better optimized) "object.foo", and make sure *that* is called instead?

I initially filed the issue for "retro", while also mentioning that "find" was open to the same improvement:
https://issues.dlang.org/show_bug.cgi?id=12583

This spawned the thread:
http://forum.dlang.org/thread/op.xeuot6g2eav7ka stevens-macbook-pro-2.local

Unfortunately, nothing came of it.
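
A small sketch of the basic dispatch rule involved: with UFCS, a member function wins over a free function of the same name, so a container range can provide its own optimized overload (MyRange and its find are illustrative, not Phobos code):

import std.algorithm : find;

struct MyRange
{
    int[] data;

    // Range primitives.
    bool empty() const { return data.length == 0; }
    int front() const { return data[0]; }
    void popFront() { data = data[1 .. $]; }

    // Member overload: r.find(x) resolves to this before UFCS
    // would reach std.algorithm.find.
    MyRange find(int needle)
    {
        foreach (i; 0 .. data.length)
            if (data[i] == needle)
                return MyRange(data[i .. $]);
        return MyRange(null);
    }
}

void main()
{
    auto r = MyRange([1, 2, 3]);
    auto hit = r.find(2);          // calls MyRange.find
    assert(hit.front == 2);
    auto generic = find(r, 2);     // explicitly calls std.algorithm.find
    assert(generic.front == 2);
}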
May 14 2014
parent "Damian Day" <damianroyday gmail.com> writes:
On Wednesday, 14 May 2014 at 17:57:30 UTC, monarch_dodra wrote:
 BTW, this is a more "general" issue: Given a generic algorithm 
 "std.foo", how can I write my own (better optimized) 
 "object.foo", and make sure *that* is called instead?

 I initially filed the issue for "retro", while indeed 
 mentioning that "find" was also open to the improvement:
 https://issues.dlang.org/show_bug.cgi?id=12583

 This spawned the thread:
 http://forum.dlang.org/thread/op.xeuot6g2eav7ka stevens-macbook-pro-2.local

 Unfortunately, nothing came of it.
It's an interesting problem to solve. Would having a specialized container range, with search, removal, etc. primitives, work better?
May 14 2014