digitalmars.D.learn - How to use D without the GC ?
- Vinod K Chandran (8/8) Jun 11 Hi all,
- matheus (5/6) Jun 11 Similar posts that may help:
- Vinod K Chandran (2/9) Jun 11 Thank you Matheus, let me check that. :)
- Kagamin (8/8) Jun 11 1) arena allocator makes memory manageable with occasional cache
- Vinod K Chandran (3/11) Jun 11 Oh thank you @Kagamin. That's some valuable comments. I will take
- drug007 (2/10) Jun 11
- Steven Schveighoffer (17/25) Jun 11 I could answer the question directly, but it seems others have
- Vinod K Chandran (8/11) Jun 11 Hi Steve,
- monkyyy (3/17) Jun 11 rather then worring about the gc, just have 95% of data on the
- Vinod K Chandran (4/6) Jun 12 How's that even possible ? AFAIK, we need heap allocated memory
- monkyyy (6/14) Jun 12 gui is a bit harder and maybe aim for 70%
- DrDread (6/20) Jun 12 the GC only runs on allocation. if you want to squeeze out the
- Vinod K Chandran (2/4) Jun 12 Thanks for the suggestion. Let me check that idea.
- Sergey (3/8) Jun 12 Btw are you going to use PyD or doing everything manually from
- Vinod K Chandran (6/8) Jun 12 Does PyD active now ? I didn't tested it. My approach is using
- evilrat (86/96) Jun 12 It is probably not that well maintained, but it definitely works
- Ferhat Kurtulmuş (6/18) Jun 12 You can use libonnx via importc to do inference of pytorch models
- Vinod K Chandran (16/19) Jun 12 Oh I see. I did some experiments with nimpy and pybind11. Both
- bachmeier (9/15) Jun 12 On the second point, I would be interested in hearing what you
- bachmeier (24/24) Jun 12 A SafeRefCounted example with main marked @nogc:
- Vinod K Chandran (2/3) Jun 12 Thanks for the sample. It looks tempting! Let me check that.
- Vinod K Chandran (2/17) Jun 12 Why not just use `ptr` ? Why did you `data` with `ptr` ?
- bachmeier (5/27) Jun 12 Try `foo[10] = 1.5` and `foo.ptr[10] = 1.5`. The first correctly
- Vinod K Chandran (12/15) Jun 12 We can use it like this, i think.
- bachmeier (6/22) Jun 12 Yes, you can do that, but then you're replicating what you get
- drug007 (2/29) Jun 12 I think you can use data only because data contains data.ptr
- bachmeier (3/33) Jun 12 Yes, but you get all the benefits of `double[]` for free if you
- drug007 (22/56) Jun 12 I meant you do not need to add `ptr` field at all
- Lance Bachmeier (5/8) Jun 12 I see. You're right. I thought it would be easier for someone new
- Vinod K Chandran (4/7) Jun 12 Thanks, I have read about the possibilities of "using malloc and
- Dukc (11/14) Jun 12 I suspect `SafeRefCounted` (or `RefCounted`) is not the best fit for
- Lance Bachmeier (5/19) Jun 12 Why would it be different from calling malloc and free manually?
- Dukc (12/16) Jun 13 Because with `SafeRefCounted`, you have to decide the size of your
- Dukc (4/9) Jun 13 Now granted, 16MiB (or even smaller amounts, like 256 KiB) sounds big
- Lance Bachmeier (9/22) Jun 13 We must be talking about different things. You could, for
Hi all, I am planning to write some D code without the GC, but I have no prior experience with it. I have experience with languages that use manual memory management, but so far I have only used D with the GC. So I want to know what pitfalls to expect and what things I should watch out for. Also, I want to know which high-level features I will be missing. Thanks in advance.
Jun 11
On Tuesday, 11 June 2024 at 13:00:50 UTC, Vinod K Chandran wrote: [...]

Similar posts that may help:

https://forum.dlang.org/thread/hryadrwplyezihwagiox@forum.dlang.org
https://forum.dlang.org/thread/dblfikgnzqfmmglwdxdg@forum.dlang.org

Matheus.
Jun 11
On Tuesday, 11 June 2024 at 13:35:19 UTC, matheus wrote:
Similar posts that may help:
https://forum.dlang.org/thread/hryadrwplyezihwagiox@forum.dlang.org
https://forum.dlang.org/thread/dblfikgnzqfmmglwdxdg@forum.dlang.org
Matheus.

Thank you Matheus, let me check that. :)
Jun 11
1) arena allocator makes memory manageable, with an occasional cache invalidation problem
2) no hashtable, no problem
3) error handling depends on your code complexity: either you use exceptions or you don't
4) I occasionally use CTFE, where `@nogc` is a nuisance
5) polymorphism can be a little quirky
Jun 11
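A minimal sketch of the arena-allocator idea from point 1 above (not Kagamin's actual code; the API and sizes are illustrative, and it ignores thread safety):

```d
import core.stdc.stdlib : malloc, free;

struct Arena
{
@nogc nothrow:
    ubyte* base;
    size_t capacity;
    size_t used;

    this(size_t capacity)
    {
        base = cast(ubyte*) malloc(capacity);
        this.capacity = base is null ? 0 : capacity;
    }

    ~this() { free(base); }

    // Hand out an uninitialized T[n] from the arena; null if it doesn't fit.
    T[] alloc(T)(size_t n)
    {
        // round the bump pointer up to T's alignment
        size_t start = (used + T.alignof - 1) & ~(T.alignof - 1);
        size_t bytes = n * T.sizeof;
        if (start + bytes > capacity)
            return null;
        used = start + bytes;
        return (cast(T*)(base + start))[0 .. n];
    }

    // Drop everything at once; any slice handed out earlier is now invalid,
    // which is the "occasional cache invalidation problem" mentioned above.
    void reset() { used = 0; }
}

@nogc nothrow void demo()
{
    auto arena = Arena(1024 * 1024);
    double[] xs = arena.alloc!double(100); // no GC, no per-object free
    if (xs !is null) xs[] = 0.0;
    arena.reset();                         // "frees" everything in O(1)
}
```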
On Tuesday, 11 June 2024 at 14:59:24 UTC, Kagamin wrote: [...]

Oh, thank you @Kagamin. Those are some valuable comments. I will take special care.
Jun 11
On 11.06.2024 17:59, Kagamin wrote:
1) arena allocator makes memory manageable, with an occasional cache invalidation problem
2) no hashtable, no problem
[...]

[OT] Could you elaborate on what problems they cause?
Jun 11
On Tuesday, 11 June 2024 at 13:00:50 UTC, Vinod K Chandran wrote: [...]

I could answer the question directly, but it seems others have already done so. I would instead ask the reason for wanting to write D code without the GC.

In many cases, you can write code without *regularly* using the GC (i.e. preallocate, or reuse buffers), but still use the GC in the sense that it is there as your allocator.

A great example is exceptions. Something that has the code `throw new Exception(...)` is going to need the GC in order to build that exception. But if your code is written such that this never (normally) happens, then you aren't using the GC for that code.

So I would call this kind of style writing code that avoids creating garbage. To me, this is the most productive way to minimize GC usage, while still allowing one to use D as it was intended.

-Steve
Jun 11
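A small sketch of the "avoid creating garbage" style Steve describes: the GC stays available as the allocator, but the hot path reuses one preallocated buffer instead of allocating per call (the names here are made up for illustration):

```d
struct LineFormatter
{
    private char[] buf;   // GC-allocated once, reused afterwards

    this(size_t capacity)
    {
        buf = new char[](capacity);   // one GC allocation up front
    }

    // Formats into the reused buffer; no allocation on the steady-state path.
    const(char)[] format(int id, double value)
    {
        import core.stdc.stdio : snprintf;
        auto n = snprintf(buf.ptr, buf.length, "id=%d value=%f", id, value);
        return buf[0 .. (n < 0 ? 0 : n)];
    }
}

void main()
{
    auto fmt = LineFormatter(256);
    foreach (i; 0 .. 1_000_000)
    {
        auto line = fmt.format(i, i * 0.5); // reuses the same buffer each time
        // ... use `line` before the next call overwrites it ...
    }
}
```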
On Tuesday, 11 June 2024 at 16:54:44 UTC, Steven Schveighoffer wrote:
I would instead ask the reason for wanting to write D code without the GC. -Steve

Hi Steve,
Two reasons.
1. I am writing a DLL to use in Python, so I am assuming that manual memory management is better for this project. It will give me finer control.
2. To squeeze out the last bit of performance from D.
Jun 11
On Tuesday, 11 June 2024 at 17:15:07 UTC, Vinod K Chandran wrote: [...]

Rather than worrying about the GC, just have 95% of the data on the stack.
Jun 11
On Wednesday, 12 June 2024 at 01:35:26 UTC, monkyyy wrote:
Rather than worrying about the GC, just have 95% of the data on the stack.

How's that even possible? AFAIK, we need heap-allocated memory in order to make a GUI lib as a DLL. Creating things on the heap and modifying them, that's the nature of my project.
Jun 12
On Wednesday, 12 June 2024 at 16:50:04 UTC, Vinod K Chandran wrote:
How's that even possible? AFAIK, we need heap-allocated memory in order to make a GUI lib as a DLL.

GUI is a bit harder, so maybe aim for 70%. But if you went down the rabbit hole you could have strings live in an "arena" of which the first 5000 chars are a global-scope array; or go full on and just use an array that doesn't expand.
Jun 12
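A tiny sketch of the global string "arena" monkyyy alludes to (the 5000-char size comes from his post; everything else is illustrative and single-threaded):

```d
enum STRING_ARENA_SIZE = 5000;

__gshared char[STRING_ARENA_SIZE] gStringArena;
__gshared size_t gStringUsed;

// Copy `s` into the global arena and return a slice of it, or null if full.
@nogc nothrow char[] internString(scope const(char)[] s)
{
    if (gStringUsed + s.length > gStringArena.length)
        return null;                 // arena exhausted; caller must cope
    auto dst = gStringArena[gStringUsed .. gStringUsed + s.length];
    dst[] = s[];
    gStringUsed += s.length;
    return dst;
}
```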
On Tuesday, 11 June 2024 at 17:15:07 UTC, Vinod K Chandran wrote: [...]

The GC only runs on allocation. If you want to squeeze out the last bit of performance, you should preallocate all buffers anyway, and then GC vs. no GC doesn't matter. Also, just slap `@nogc` on your main function to avoid accidental allocations.
Jun 12
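For what it's worth, this is what that looks like in practice; the commented-out lines are the kind of accidental allocations `@nogc` turns into compile errors (a sketch, not from the thread):

```d
import core.stdc.stdio : printf;

@nogc void main()
{
    int[3] fixed = [1, 2, 3];       // fine: a static array lives on the stack
    printf("%d\n", fixed[0]);

    // int[] dynamic = [1, 2, 3];   // error: array literal may cause a GC allocation
    // auto buf = new int[](10);    // error: cannot use 'new' in @nogc function
}
```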
On Wednesday, 12 June 2024 at 09:44:05 UTC, DrDread wrote:
Also, just slap `@nogc` on your main function to avoid accidental allocations.

Thanks for the suggestion. Let me check that idea.
Jun 12
On Tuesday, 11 June 2024 at 17:15:07 UTC, Vinod K Chandran wrote:
Two reasons. 1. I am writing a DLL to use in Python, so I am assuming that [...]

Btw, are you going to use PyD or do everything manually from scratch?
Jun 12
On Wednesday, 12 June 2024 at 10:16:26 UTC, Sergey wrote:
Btw, are you going to use PyD or do everything manually from scratch?

Is PyD still active? I haven't tested it. My approach is using the "ctypes" library with my DLL. ctypes is the fastest FFI in my experience. I tested Cython, pybind11 and CFFI, but none can beat the speed of ctypes. Currently the fastest experiments were the DLLs created in Odin & C3. Both are non-GC languages.
Jun 12
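For context, the D side of that ctypes approach can be as small as a few `extern(C)` functions compiled into a shared library. This is only a sketch with made-up names, not the poster's actual DLL:

```d
// Build as a shared library, e.g.:  dmd -shared -ofmylib.dll mylib.d
// (If the library never touches the GC or druntime, -betterC is another option;
//  otherwise the D runtime has to be initialized, e.g. via rt_init or DllMain.)
module mylib;

import core.stdc.stdlib : malloc, free;

// Plain C ABI, so Python's ctypes can call these without any binding layer.
extern (C) export double* make_buffer(size_t n) @nogc nothrow
{
    return cast(double*) malloc(n * double.sizeof);
}

extern (C) export void free_buffer(double* p) @nogc nothrow
{
    free(p);
}

extern (C) export double sum_buffer(const(double)* p, size_t n) @nogc nothrow
{
    double total = 0;
    foreach (i; 0 .. n)
        total += p[i];
    return total;
}

/+ Illustrative Python side:
import ctypes
lib = ctypes.CDLL("./mylib.dll")
lib.sum_buffer.restype = ctypes.c_double
lib.sum_buffer.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
+/
```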
On Wednesday, 12 June 2024 at 17:00:14 UTC, Vinod K Chandran wrote: [...]

It is probably not that well maintained, but it definitely works with Python 3.10 and maybe even 3.11. I use it to interface with PyTorch, NumPy and PIL, but my use case is pretty simple: I just write some wrapper Python functions to run inference and pass images back and forth using embedded py_stmts. The only problem is that it seems to leak a lot of PydObjects, so I have to free them manually; even scope doesn't help with that, which is sad.

Example classifier Python:

```python
def inference(image: Image):
    """
    Predicts the image class and returns confidences for every class
    To get the class one can use the following code
    > conf = inference(image)
    > index = conf.argmax()
    > cls = classes[index]
    """
    ch = len(image.getbands())
    has_transparency = image.info.get('transparency', None) is not None
    if ch > 3 or has_transparency:
        image = image.convert("RGB")
    image_tensor = prep_transform(image).float()
    image_tensor = image_tensor.unsqueeze_(0)
    #if torch.cuda.is_available():
    with torch.inference_mode():
        output = model(image_tensor)
    index = output.data.numpy()
    return index
```

and some of the D functions:

```d
ImageData aiGoesBrrrr(string path, int strength = 50)
{
    try
    {
        if (!pymod)
            py_stmts("import sys; sys.path.append('modules/xyz')");
        initOnce!pymod(py_import("xyz.inference"));
        if (!pymod.hasattr("model"))
            pymod.model = pymod.method("load_model", "modules/xyz/pre_trained/weights.pth");

        PydObject ipath = py(path);
        scope(exit) destroy(ipath);

        auto context = new InterpContext();
        context.path = ipath;
        context.py_stmts("
from PIL import Image
image = Image.open(path)
ch = len(image.getbands())
if ch > 3:
    image = image.convert('RGB')
        ");

        // signature: def run(model, imagepath, alpha=45) -> numpy.Array
        PydObject output = pymod.method("run", pymod.model, context.image, 100 - strength);
        context.output = output;
        scope(exit) destroy(output);

        PydObject shape = output.getattr("shape");
        scope(exit) destroy(shape);

        // int n = ...;
        int c = shape[2].to_d!int;
        int w = shape[1].to_d!int;
        int h = shape[0].to_d!int;

        // numpy array
        void* raw_ptr = output.buffer_view().item_ptr([0, 0, 0]);
        ubyte* d_ptr = cast(ubyte*) raw_ptr;
        ubyte[] d_img = d_ptr[0 .. h*w*c];

        return ImageData(d_img.dup, h, w, c);
    }
    catch (PythonException e)
    {
        // oh no...
        auto context = new InterpContext();
        context.trace = new PydObject(e.traceback);
        context.py_stmts("from traceback import format_tb; trace = format_tb(trace)");
        printerr(e.py_message, "\n", context.trace.to_d!string);
    }
    return ImageData.init;
}
```
Jun 12
On Wednesday, 12 June 2024 at 18:58:49 UTC, evilrat wrote: [...]

You can use libonnx via ImportC to do inference of PyTorch models after converting them to *.onnx. This way you won't need Python at all. Please refer to etichetta: https://github.com/trikko/etichetta
Instead of PIL for preprocessing, just use DCV.
Jun 12
On Wednesday, 12 June 2024 at 18:58:49 UTC, evilrat wrote:
The only problem is that it seems to leak a lot of PydObjects, so I have to free them manually; even scope doesn't help with that, which is sad.

Oh I see. I did some experiments with nimpy and pybind11. Both experiments turned out slower than calling the DLL via ctypes. That's why I didn't take much interest in binding with the Python C API. Even Cython is slower compared to ctypes, but it can be used if we call the DLL from Cython and call the Cython code from Python. But then you will have to face some other obstacles. In my case, callback functions are the reason. When using a DLL in Cython, you need to pass a Cython function as a callback, and inside that function you need to convert everything into PyObjects back and forth. That takes time. Imagine you want to do some heavy lifting in a mouse-move event? No one will be happy with a snail's pace. But yeah, Cython is a nice language and we could create an entire GUI lib in Cython, but the execution speed is 2.5X slower than my current C3 DLL.
Jun 12
On Tuesday, 11 June 2024 at 17:15:07 UTC, Vinod K Chandran wrote:
Hi Steve,
Two reasons.
1. I am writing a DLL to use in Python, so I am assuming that manual memory management is better for this project. It will give me finer control.
2. To squeeze out the last bit of performance from D.

On the second point, I would be interested in hearing what you find out. In general, I have not had any luck with speeding things up inside loops using manual memory management. The only thing that's worked is avoiding allocations and reusing already allocated memory.

You're splitting things into GC-allocated memory and manually managed memory. There's also SafeRefCounted, which handles the malloc and free for you.
Jun 12
A SafeRefCounted example with main marked @nogc:

```
import std;
import core.stdc.stdlib;

struct Foo {
    double[] data;
    double * ptr;
    alias data this;

    @nogc this(int n) {
        ptr = cast(double*) malloc(n*double.sizeof);
        data = ptr[0..n];
        printf("Data has been allocated\n");
    }

    @nogc ~this() {
        free(ptr);
        printf("Data has been freed\n");
    }
}

@nogc void main() {
    auto foo = SafeRefCounted!Foo(3);
    foo[0..3] = 1.5;
    printf("%f %f %f\n", foo[0], foo[1], foo[2]);
}
```
Jun 12
On Wednesday, 12 June 2024 at 15:33:39 UTC, bachmeier wrote:
A SafeRefCounted example with main marked @nogc: [...]

Thanks for the sample. It looks tempting! Let me check that.
Jun 12
On Wednesday, 12 June 2024 at 15:33:39 UTC, bachmeier wrote: [...]

Why not just use `ptr`? Why did you use `data` along with `ptr`?
Jun 12
On Wednesday, 12 June 2024 at 18:36:26 UTC, Vinod K Chandran wrote:
Why not just use `ptr`? Why did you use `data` along with `ptr`?

Try `foo[10] = 1.5` and `foo.ptr[10] = 1.5`. The first correctly throws an out of bounds error. The second gives `Segmentation fault (core dumped)`.
Jun 12
On Wednesday, 12 June 2024 at 18:57:41 UTC, bachmeier wrote:
Try `foo[10] = 1.5` and `foo.ptr[10] = 1.5`. The first correctly throws an out of bounds error. The second gives `Segmentation fault (core dumped)`.

We can use it like this, I think:

```
struct Foo {
    double * ptr;
    uint capacity;
    uint length;
    alias data this;
}
```

And then when we use an index, we can perform a bounds check. I am not sure, but I hope this will work.
Jun 12
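A sketch of what that manual route ends up requiring (illustrative, not from the thread): you have to supply the bounds check yourself via `opIndex`, which is the work the `double[]` slice in the earlier example does for free:

```d
import core.stdc.stdlib : malloc, free;

struct Foo
{
@nogc nothrow:
    double* ptr;
    size_t length;

    this(size_t n)
    {
        ptr = cast(double*) malloc(n * double.sizeof);
        length = n;
    }

    ~this() { free(ptr); }

    ref double opIndex(size_t i)
    {
        assert(i < length, "index out of bounds");  // manual bounds check
        return ptr[i];
    }
}

@nogc nothrow void demo()
{
    auto foo = Foo(3);
    foo[0] = 1.5;       // checked via opIndex
    // foo[10] = 1.5;   // would trip the assert instead of corrupting memory
}
```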
On Wednesday, 12 June 2024 at 20:31:34 UTC, Vinod K Chandran wrote: [...]

Yes, you can do that, but then you're replicating what you get for free by taking a slice. You'd have to write your own opIndex, opSlice, etc., and I don't think there's any performance benefit from doing so.
Jun 12
On 12.06.2024 21:57, bachmeier wrote: [...]

I think you can use `data` only, because `data` contains `data.ptr`.
Jun 12
On Wednesday, 12 June 2024 at 20:37:36 UTC, drug007 wrote:
I think you can use `data` only, because `data` contains `data.ptr`.

Yes, but you get all the benefits of `double[]` for free if you do it that way, including the more concise `foo[10]` syntax.
Jun 12
On 12.06.2024 23:56, bachmeier wrote:
Yes, but you get all the benefits of `double[]` for free if you do it that way, including the more concise `foo[10]` syntax.

I meant you do not need to add a `ptr` field at all:

```D
import std;
import core.stdc.stdlib;

struct Foo
{
@nogc:
    double[] data;
    alias data this;

    this(int n)
    {
        auto ptr = cast(double*) malloc(n*double.sizeof);
        data = ptr[0..n];
    }
}

@nogc void main()
{
    auto foo = SafeRefCounted!Foo(3);
    foo[0..3] = 1.5;
    printf("%f %f %f\n", foo[0], foo[1], foo[2]);
    foo.ptr[10] = 1.5; // no need for separate ptr field
}
```
Jun 12
On Wednesday, 12 June 2024 at 21:59:54 UTC, drug007 wrote:
I meant you do not need to add a `ptr` field at all.

I see. You're right. I thought it would be easier for someone new to the language to read more explicit code rather than assuming knowledge about data.ptr. In practice it's better to not have a ptr field.
Jun 12
On Wednesday, 12 June 2024 at 15:21:22 UTC, bachmeier wrote:
You're splitting things into GC-allocated memory and manually managed memory. There's also SafeRefCounted, which handles the malloc and free for you.

Thanks, I have read about the possibilities of "using malloc and free from D" in some other post. I think I need to check that.
Jun 12
bachmeier wrote on 12.6.2024 at 18.21:
You're splitting things into GC-allocated memory and manually managed memory. There's also SafeRefCounted, which handles the malloc and free for you.

I suspect `SafeRefCounted` (or `RefCounted`) is not the best fit for this scenario. The problem with it is that it `malloc`s and `free`s individual objects, which doesn't sound efficient to me. Maybe it performs well if the objects in question are big enough, or if they can be bundled into static arrays so there's no need to refcount individual objects. But even then, you can't just allocate and free dozens or hundreds of megabytes with one call, unlike with the GC or manual `malloc`/`free`. I honestly don't know whether calling malloc/free for, say, each 64 KiB would have performance implications compared to a single allocation.
Jun 12
On Wednesday, 12 June 2024 at 21:36:30 UTC, Dukc wrote:
I suspect `SafeRefCounted` (or `RefCounted`) is not the best fit for this scenario. [...]

Why would it be different from calling malloc and free manually? I guess I'm not understanding, because you put the same calls to malloc and free that you'd otherwise be doing inside this and ~this.
Jun 12
Lance Bachmeier wrote on 13.6.2024 at 1.32:
Why would it be different from calling malloc and free manually? I guess I'm not understanding, because you put the same calls to malloc and free that you'd otherwise be doing inside this and ~this.

Because with `SafeRefCounted`, you have to decide the size of your allocations at compile time, meaning you need to do a varying number of `malloc`s and `free`s to vary the size of your allocation at runtime. Even if you were to use templates to vary the type of the `SafeRefCounted` object based on the size of your allocation, the spec puts an upper bound of 16 MiB on the size of a static array.

So for example, if you have a program that sometimes needs 600 MiB and sometimes needs 1100 MiB, you can in any case allocate all of that in one go with one `malloc` or one `new`, but you'll need at least 38 or 69 `SafeRefCounted` static arrays, and therefore that many `malloc`s, to accomplish the same.
Jun 13
Dukc wrote on 13.6.2024 at 10.18:
So for example, if you have a program that sometimes needs 600 MiB and sometimes needs 1100 MiB, you'll need at least 38 or 69 `SafeRefCounted` static arrays, and therefore that many `malloc`s, to accomplish the same.

Now granted, 16 MiB (or even smaller amounts, like 256 KiB) sounds big enough that it probably isn't making a difference, since it's a long way into multiples of the page size anyway. But I'm not sure.
Jun 13
On Thursday, 13 June 2024 at 07:18:48 UTC, Dukc wrote:
Because with `SafeRefCounted`, you have to decide the size of your allocations at compile time [...]

We must be talking about different things. You could, for instance, call a function in a C library to allocate memory at runtime. That function returns a pointer and you pass it to SafeRefCounted to ensure it gets freed. Nothing is known about the allocation at compile time. This is in fact my primary use case: an opaque struct allocated by a C library, where I don't want to concern myself with freeing it when I'm done with it.
Jun 13
Lance Bachmeier wrote on 14.6.2024 at 4.23:
This is in fact my primary use case: an opaque struct allocated by a C library, where I don't want to concern myself with freeing it when I'm done with it.

Using a raw pointer as the `SafeRefCounted` type like that isn't going to work. `SafeRefCounted` will free only the pointer itself at the end, not the struct it's referring to. If you use some sort of RAII wrapper for the pointer that `free`s it in its destructor, then it'll work. Maybe that's what you meant.
Jun 14
On Friday, 14 June 2024 at 07:52:35 UTC, Dukc wrote:
Using a raw pointer as the `SafeRefCounted` type like that isn't going to work. [...]

See the example I posted elsewhere in this thread: https://forum.dlang.org/post/mwerxaolbkuxlgfepzwc@forum.dlang.org

I defined

```
@nogc ~this() {
    free(ptr);
    printf("Data has been freed\n");
}
```

and that gets called when the reference count hits zero.
Jun 14
bachmeier wrote on 14.6.2024 at 16.48:
See the example I posted elsewhere in this thread: https://forum.dlang.org/post/mwerxaolbkuxlgfepzwc@forum.dlang.org [...]

Oh sorry, missed that.
Jun 14
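Putting this sub-thread together, the pattern being described looks roughly like the following sketch. The `c_thing_new`/`c_thing_free` functions are placeholders for whatever C library is in play, so this won't link as-is; the point is that the wrapper struct frees the opaque handle in its destructor and `SafeRefCounted` decides when that destructor runs:

```d
import std.typecons : SafeRefCounted;

// Hypothetical C API: stand-ins, not a real library.
struct c_thing;                                      // opaque to D
extern (C) @nogc nothrow c_thing* c_thing_new(int size);
extern (C) @nogc nothrow void c_thing_free(c_thing* p);

struct ThingHandle
{
    c_thing* ptr;

    @nogc nothrow this(int size) { ptr = c_thing_new(size); }

    @nogc nothrow ~this()
    {
        if (ptr !is null)
            c_thing_free(ptr);                       // runs when the refcount hits zero
    }
}

void example()
{
    auto h = SafeRefCounted!ThingHandle(64);
    // ... pass h.ptr to the C API; no manual free needed ...
}
```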