
digitalmars.D.learn - Is this a good pattern for allocation?

reply "JS" <js.mdnq gmail.com> writes:
I'm trying to avoid GC dependence for my objects.

interface Alloc(T)
{
	T New(A...)(A args);
         //final T Factory(A...)(T, A args) { return new T(args); }
}


class A(T) : Alloc!(A!(T))
{
     ....
     static A!T New()
     {
         return new A!T();    // Use GC for now
         // Factory(A!T);
     }
}

(I'm not sure why the static New satisfies the interface, but dmd 
doesn't complain.)

Using New instead of new removes a dependence step. If I could 
get the factory to work, it could be used by New to help set up 
the object: the factory would just allocate the object, while 
New would set it up properly.


Anyway, is this a bad or good way to do it, and how can I call 
Factory to get a new object? (I know that there is .init and other 
stuff that can be used by Factory/New. I'm just trying to get the 
skeleton to compile and make sure there are not going to be major 
issues in the future.)
Aug 04 2013
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Monday, 5 August 2013 at 01:47:26 UTC, JS wrote:
 Anyway, is this a bad or good way to do it, and how can I call 
 Factory to get a new object? (I know that there is .init and 
 other stuff that can be used by Factory/New. I'm just trying to 
 get the skeleton to compile and make sure there are not going 
 to be major issues in the future.)
I wouldn't use inheritance, but something that involves a template function and std.conv.emplace and handles all of your types uniformly.
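For instance, a minimal sketch of what I mean (the allocate/deallocate names and the use of malloc are only for illustration, not a fixed API):

import std.conv : emplace;
import core.stdc.stdlib : malloc, free;

// Construct a class instance of T in malloc'd memory, bypassing the GC.
T allocate(T, Args...)(Args args) if (is(T == class))
{
    enum size = __traits(classInstanceSize, T);
    void[] mem = malloc(size)[0 .. size];
    return emplace!T(mem, args);
}

void deallocate(T)(T obj) if (is(T == class))
{
    destroy(obj);          // run the destructor
    free(cast(void*) obj); // release the raw memory
}

Then allocate!(A!int)() works for any class, without the class having to inherit from anything.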
Aug 05 2013
parent reply "JS" <js.mdnq gmail.com> writes:
On Monday, 5 August 2013 at 07:15:30 UTC, Tobias Pankrath wrote:
 On Monday, 5 August 2013 at 01:47:26 UTC, JS wrote:
 Anyway, is this a bad or good way to do it, and how can I call 
 Factory to get a new object? (I know that there is .init and 
 other stuff that can be used by Factory/New. I'm just trying to 
 get the skeleton to compile and make sure there are not going 
 to be major issues in the future.)
I wouldn't use inheritance, but something that involves a template function and std.conv.emplace and handles all of your types uniformly.
The purpose of inheritance is to allow one to modify the allocation scheme if necessary and to have a common interface that removes the dependency on any standard library, D, or the GC (e.g., instead of having to change new every place it occurs, one just has to change it either in the factory or in the New method).

I guess you mean that I should use a template as a factory instead of an interface? I'll have to think about it to see what the pros and cons of each are. The interface pattern should include the template pattern though (after all, the interface is parameterized...).
Aug 05 2013
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Monday, 5 August 2013 at 08:11:59 UTC, JS wrote:

 I guess you mean that I should use a template as a factory 
 instead of an interface? I'll have to think about it to see 
 what the pros and cons of each are. The interface pattern 
 should include the template pattern though (after all, the 
 interface is parameterized...).
If you want to swap your allocators at runtime, then an interface is a good solution; they exist for runtime dispatch, after all. However, all your allocators are then forced to be classes. Coincidentally, an article was written about exactly this about a week ago. See http://blog.thecybershadow.net/2013/07/28/low-overhead-components/ .
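To make that concrete, here is a rough sketch of what an interface-based, runtime-swappable allocator could look like (IAllocator and MallocAllocator are made-up names, not an existing API):

import core.stdc.stdlib : malloc, free;

interface IAllocator
{
    void[] allocate(size_t size);
    void deallocate(void[] block);
}

// Every allocator that implements the interface has to be a class.
class MallocAllocator : IAllocator
{
    void[] allocate(size_t size)
    {
        auto p = malloc(size);
        return p is null ? null : p[0 .. size];
    }

    void deallocate(void[] block)
    {
        free(block.ptr);
    }
}

// The allocator in use can be replaced at any point at runtime.
IAllocator currentAllocator;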
Aug 05 2013
parent "JS" <js.mdnq gmail.com> writes:
On Monday, 5 August 2013 at 12:41:19 UTC, Tobias Pankrath wrote:
 On Monday, 5 August 2013 at 08:11:59 UTC, JS wrote:

 I guess you mean that I should use a template as a factory 
 instead of an interface? I'll have to think about it to see 
 what the pros and cons of each are. The interface pattern 
 should include the template pattern though (after all, the 
 interface is parameterized...).
If you want to swap your allocators at runtime, then an interface is a good solution; they exist for runtime dispatch, after all. However, all your allocators are then forced to be classes. Coincidentally, an article was written about exactly this about a week ago. See http://blog.thecybershadow.net/2013/07/28/low-overhead-components/ .
What I would like is some way to choose an approximately optimal allocator that can be modified at run-time if necessary. E.g., small objects can use a slab allocator and large objects can use a buddy allocator, and I can change the allocator type at application startup to something else entirely if necessary for various reasons (performance checking, memory leakage, etc.).

What I would also like is for classes to be able to request a certain allocator (this way, the class writer can attempt to choose the best one). Later on, I can profile different allocators rather easily by just changing the factory and force the classes to use the best one found.

There seem to be a lot of specifics that I'll have to read up on, but I mainly want to avoid programming myself into a corner.
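Just to sketch the shape of that idea (assuming the illustrative IAllocator/MallocAllocator from the previous post; AllocKind, registry, and preferredAllocator are hypothetical names):

enum AllocKind { small, large }

// Filled in at application startup; the concrete allocators can be
// swapped out for profiling, leak checking, etc.
IAllocator[AllocKind] registry;

void setupAllocators()
{
    registry[AllocKind.small] = new MallocAllocator(); // stand-in for a slab allocator
    registry[AllocKind.large] = new MallocAllocator(); // stand-in for a buddy allocator
}

// A class states which kind of allocator it would prefer.
class Widget
{
    enum preferredAllocator = AllocKind.small;
}

// The factory looks up the requested allocator and constructs the object in it.
T make(T, Args...)(Args args) if (is(T == class))
{
    import std.conv : emplace;
    auto alloc = registry[T.preferredAllocator];
    auto mem = alloc.allocate(__traits(classInstanceSize, T));
    return emplace!T(mem, args);
}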
Aug 05 2013