digitalmars.D - GC removeRange/removeRoot API issue
- safety0ff (33/33) May 07 2014 I was working on a Treap implementation to accelerate the GC
- Orvid King via Digitalmars-d (14/47) May 08 2014 What's the reasoning for the current behavior of add/remove range?
- safety0ff (19/21) May 08 2014 I think the behaviour only stems from the simple implementation
I was working on a Treap implementation to accelerate the GC add/remove root/range functions when I realised it is not specified how multiple calls to addRange with the same pointer p (and possibly different size parameters) should be handled.

The current behaviour for add/remove root is reference counted: if you call addRoot with the same pointer N times, the GC only stops scanning the root after N calls to removeRoot.

The current behaviour for removeRange is different: it assumes FIFO order, i.e. ranges are removed in the order they were added. For example:

GC.addRange(p, 64);
GC.addRange(p, 32);
GC.addRange(p, 16);
GC.removeRange(p); // assumes we want to stop scanning p[32..64]
GC.removeRange(p); // assumes we want to stop scanning p[16..32]
GC.removeRange(p); // assumes we want to stop scanning p[0..16]

This behaviour seems like it might lead to non-deterministic behaviour in a system which relies heavily on this API.

My question is: how should multiple calls to add/remove range/root with the same pointer p be handled?

One possibility is to remove ranges conservatively and to add another removeRange function which takes a size parameter to allow non-conservative removal. Another possibility would be to specify that the GC stops scanning a root/range after the first call to remove root/range (i.e. duplicate calls are ignored). This would be a higher-performance option but would also cause breakage.

Either way, we should _specify_ how the duplicate call case should be handled! Please discuss!
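The reference-counted addRoot behaviour described above can be modelled with a short sketch. Note this is an illustration only, not the actual druntime implementation: the RootTable type and its methods are invented here, and the real GC keeps roots in different data structures.

```d
import std.stdio;

/// Hypothetical model of the current addRoot/removeRoot semantics:
/// each addRoot(p) increments a count, and the root is only dropped
/// once removeRoot(p) has been called the same number of times.
struct RootTable
{
    size_t[void*] counts; // root pointer -> number of addRoot calls

    void addRoot(void* p) { counts[p] = counts.get(p, 0) + 1; }

    void removeRoot(void* p)
    {
        if (auto c = p in counts)
        {
            if (--(*c) == 0)
                counts.remove(p);
        }
    }

    bool isRoot(void* p) const { return (p in counts) !is null; }
}

void main()
{
    RootTable t;
    int x;
    t.addRoot(&x);
    t.addRoot(&x);        // duplicate registration
    t.removeRoot(&x);
    assert(t.isRoot(&x)); // still scanned: N adds require N removes
    t.removeRoot(&x);
    assert(!t.isRoot(&x));
    writeln("ok");
}
```

This makes the asymmetry concrete: roots behave like a counted set, while ranges (per the example above) behave like a FIFO queue.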
May 07 2014
What's the reasoning for the current behavior of add/remove range?

This is actually something that I had almost forgotten about in my GC design, so I thank you for reminding me of it :D

After a preliminary think-through of the design, I would end up going with the first possibility, so that precise removal is possible, due to the fact that I intend to allow for entry compaction. By this I mean that if there are, for instance, 6 global object fields in contiguous memory, only 1 call to addRootRange (as it would become named) would be performed.

I would like to note that while entries would only be merged if they are of the same type, all reference types and pointers will be considered to be of the exact same type. The same will be the case with all delegate types, and with all array types. Structs would be compared for equality either by their field bitmaps, or else by their actual type.

On 5/8/14, safety0ff via Digitalmars-d <digitalmars-d puremagic.com> wrote:
> [original post quoted in full - snipped]
May 08 2014
On Thursday, 8 May 2014 at 14:20:58 UTC, Orvid King via Digitalmars-d wrote:
> What's the reasoning for the current behavior of add/remove range?

I think the behaviour only stems from the simple implementation rather than from deliberate design.

After sleeping on the question, I realise there's no way around dealing with duplicates. The new questions are:
- Should there be a way to remove ranges less conservatively (e.g. via an explicit size parameter for removeRange)?
- Should there be a way of 'forcing' removal of a range/root? (E.g. we're about to free() some memory, so force the GC to ignore duplicate add root/range calls and remove it.)

The interface might look like:

void removeRange(void* p, size_t size = 0, bool force = false);

- if size is zero, conservatively remove the range/root
- if force is true, remove all scanning of the range/root
- if size is zero and force is true, all ranges/roots specified by p will be removed

Thoughts?
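The proposed interface can be sketched over a naive array of ranges. Everything here is an assumption for illustration: the RangeTable type is invented, the real GC does not store ranges this way, and "conservatively remove" is read as dropping the smallest matching range so that the remaining registrations keep as much memory under scan as possible.

```d
import std.algorithm : remove;
import std.stdio;

struct Range { void* pos; size_t size; }

/// Hypothetical sketch of the proposed removeRange semantics.
struct RangeTable
{
    Range[] ranges;

    void addRange(void* p, size_t size) { ranges ~= Range(p, size); }

    /// size == 0, !force : conservatively drop the smallest range at p
    /// size != 0, !force : drop one range matching (p, size) exactly
    /// force             : drop every range starting at p
    void removeRange(void* p, size_t size = 0, bool force = false)
    {
        if (force)
        {
            ranges = ranges.remove!(r => r.pos == p);
            return;
        }
        size_t best = size_t.max; // index of the range to drop
        foreach (i, r; ranges)
        {
            if (r.pos != p) continue;
            if (size != 0)
            {
                if (r.size == size) { best = i; break; }
            }
            else if (best == size_t.max || r.size < ranges[best].size)
                best = i;
        }
        if (best != size_t.max)
            ranges = ranges.remove(best);
    }
}

void main()
{
    RangeTable t;
    auto p = new ubyte[64].ptr;
    t.addRange(p, 64);
    t.addRange(p, 32);
    t.addRange(p, 16);

    t.removeRange(p);          // conservative: drops only the 16-byte range
    assert(t.ranges.length == 2);

    t.removeRange(p, 32);      // exact: drops the 32-byte range
    assert(t.ranges.length == 1 && t.ranges[0].size == 64);

    t.removeRange(p, 0, true); // forced: drops every range at p
    assert(t.ranges.length == 0);
    writeln("ok");
}
```

One design note on this sketch: making size and force orthogonal means the "free() is imminent" case needs only force=true, regardless of how many duplicate registrations exist.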
May 08 2014