
digitalmars.D - How does D compare to Go when it comes to C interop?

reply Pradeep Gowda <pradeep btbytes.com> writes:
I read this post about the difficulties of using go/cgo to 
interface with C code: 
http://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/

How does D do in comparison? specifically in regards to:

1. Call overhead (CGo is 100 times slower than native Go fn call)
2. Memory management. (I guess D is not immune from having to 
manage memory for the C functions it calls.. but is there a 
difference in approach? is it safer in D?)
3. Cgoroutines != goroutines (This one may not apply for D)
4. Static builds (how easy/straight-forward is this in D?)
5. Debugging (ease of accessing C parts when debugging)
Dec 10 2015
next sibling parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Thursday, 10 December 2015 at 13:33:07 UTC, Pradeep Gowda 
wrote:
 I read this post about the difficulties of using go/cgo to 
 interface with C code: 
 http://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/

 How does D do in comparison? specifically in regards to:

 1. Call overhead (CGo is 100 times slower than native Go fn 
 call)
 2. Memory management. (I guess D is not immune from having to 
 manage memory for the C functions it calls.. but is there a 
 difference in approach? is it safer in D?)
 3. Cgorountes != goroutines (This one may not apply for D)
 4. Static builds (how easy/straight-forward is this in D?)
 5. Debugging (ease of accessing C parts when debugging)
1) No overhead, extern(C) in D means C ABI (although no inlining opportunities without LTO, so C -> C and D -> D calls can end up being faster in practice)

2) You can manipulate the C heap in D just the same as in C. GC-allocated memory in D being passed to C requires more care, but is normally not a problem.

3) Not applicable really.

4) D builds very similarly to C, so you shouldn't see any problems here. Loading multiple D shared libraries can cause headaches and/or not work at all depending on platform.

5) Trivial. From the debugger's perspective it's all just functions; you might not even notice the language barrier. All D debuggers are also C debuggers and I doubt that's going to change.
Dec 10 2015
next sibling parent Mike Parker <aldacron gmail.com> writes:
On Thursday, 10 December 2015 at 13:52:57 UTC, John Colvin wrote:
 On Thursday, 10 December 2015 at 13:33:07 UTC, Pradeep Gowda 
 wrote:
 2. Memory management. (I guess D is not immune from having to 
 manage memory for the C functions it calls.. but is there a 
 difference in approach? is it safer in D?)
 2) You can manipulate the C heap in D just the same as in C. 
 GC-allocated memory in D being passed to C requires more care, 
 but is normally not a problem.
I was going to say the exact same thing, but I do think it's worth putting some emphasis on "more care" here. It's true that it normally is not a problem, but it's precisely the abnormal cases that you will miss when you aren't paying attention.
Dec 10 2015
prev sibling parent reply Jack Stouffer <jack jackstouffer.com> writes:
On Thursday, 10 December 2015 at 13:52:57 UTC, John Colvin wrote:
 On Thursday, 10 December 2015 at 13:33:07 UTC, Pradeep Gowda 
 wrote:
 5. Debugging (ease of accessing C parts when debugging)
5) Trivial. From the debugger's perspective it's all just functions; you might not even notice the language barrier. All D debuggers are also C debuggers and I doubt that's going to change.
To add to this, as long as you're on anything but OS X, you're fine. Debugging D on OS X is, to put it plainly, fucked.
Dec 10 2015
next sibling parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Thursday, 10 December 2015 at 14:57:33 UTC, Jack Stouffer 
wrote:
 On Thursday, 10 December 2015 at 13:52:57 UTC, John Colvin 
 wrote:
 On Thursday, 10 December 2015 at 13:33:07 UTC, Pradeep Gowda 
 wrote:
 5. Debugging (ease of accessing C parts when debugging)
5) Trivial. From the debugger's perspective it's all just functions; you might not even notice the language barrier. All D debuggers are also C debuggers and I doubt that's going to change.
To add to this, as long as you're on anything but OS X, you're fine. Debugging D on OS X is, to put it plainly, fucked.
Never had any real problems with it, but I don't expect much from my debuggers. Breakpoint, backtrace, disassemble, register dump. Maybe some stepping about once a year.
Dec 10 2015
parent reply Jack Stouffer <jack jackstouffer.com> writes:
On Thursday, 10 December 2015 at 15:18:18 UTC, John Colvin wrote:
 On Thursday, 10 December 2015 at 14:57:33 UTC, Jack Stouffer 
 wrote:
 On Thursday, 10 December 2015 at 13:52:57 UTC, John Colvin 
 wrote:
 On Thursday, 10 December 2015 at 13:33:07 UTC, Pradeep Gowda 
 wrote:
 5. Debugging (ease of accessing C parts when debugging)
5) Trivial. From the debugger's perspective it's all just functions; you might not even notice the language barrier. All D debuggers are also C debuggers and I doubt that's going to change.
To add to this, as long as you're on anything but OS X, you're fine. Debugging D on OS X is, to put it plainly, fucked.
Never had any real problems with it, but I don't expect much from my debuggers. Breakpoint, backtrace, disassemble, register dump. Maybe some stepping about once a year.
https://issues.dlang.org/show_bug.cgi?id=14927

If you've gotten GDB on OS X to work, please let me know. Trying to debug NULL pointer bugs without a debugger is like breaking down a wall by smashing your head into it over and over.
Dec 10 2015
parent John Colvin <john.loughran.colvin gmail.com> writes:
On Thursday, 10 December 2015 at 15:29:46 UTC, Jack Stouffer 
wrote:
 On Thursday, 10 December 2015 at 15:18:18 UTC, John Colvin 
 wrote:
 On Thursday, 10 December 2015 at 14:57:33 UTC, Jack Stouffer 
 wrote:
 On Thursday, 10 December 2015 at 13:52:57 UTC, John Colvin 
 wrote:
 On Thursday, 10 December 2015 at 13:33:07 UTC, Pradeep Gowda 
 wrote:
 5. Debugging (ease of accessing C parts when debugging)
5) Trivial. From the debugger's perspective it's all just functions; you might not even notice the language barrier. All D debuggers are also C debuggers and I doubt that's going to change.
To add to this, as long as you're on anything but OS X, you're fine. Debugging D on OS X is, to put it plainly, fucked.
Never had any real problems with it, but I don't expect much from my debuggers. Breakpoint, backtrace, disassemble, register dump. Maybe some stepping about once a year.
https://issues.dlang.org/show_bug.cgi?id=14927 If you've gotten GDB on OS X to work, please let me know. Trying to debug NULL pointer bugs without a debugger is like breaking down a wall by smashing your head into it over and over.
When I hit problems like that I swap to lldb. Much less D support, but it works well enough for my limited needs. I don't mind looking at mangled names most of the time. Still, it should definitely be fixed; I understand that other people have different needs/workflows.
Dec 10 2015
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On Thursday, 10 December 2015 at 14:57:33 UTC, Jack Stouffer 
wrote:
 To add to this, as long as you're on anything but OS X, you're 
 fine. Debugging D on OS X is, to put it plainly, fucked.
I have had some good things come out of lldb. Far from perfect, but usable.
Dec 10 2015
prev sibling next sibling parent reply Chris Wright <dhasenan gmail.com> writes:
On Thu, 10 Dec 2015 13:33:07 +0000, Pradeep Gowda wrote:

 I read this post about the difficulties of using go/cgo to interface
 with C code:
 http://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/
 
 How does D do in comparison? specifically in regards to:
 
 1. Call overhead (CGo is 100 times slower than native Go fn call)
C calls use the C calling convention. That's the only runtime difference. The C calling convention is fast, of course, and D's is comparable.

From what you're saying about Go, it sounds like someone implemented the C calling convention at runtime. A few years ago (circa 2008, I think?), I implemented something like this in D (call a function with a series of boxed values, specifically). It was about 150 times slower than just calling the function directly in my brief tests.

You can use packed C structs in D, which is one thing that cgo doesn't allow.
 2. Memory management. (I guess D is not immune from having to manage
 memory for the C functions it calls.. but is there a difference in
 approach? is it safer in D?)
For C interop? You can allocate memory in D and pass it to C functions. This requires some caution. If you send a pointer to a C function that allocates memory on the heap and stores that pointer in that memory, the GC doesn't know about that. If you are writing the code on both sides, you can forbid that pattern. If you are uncertain, you can hold a reference to anything you pass to C code until you know it's safe to discard it.

Finally, if the C code takes ownership of the pointer (and expects to be able to realloc / free it), you must use malloc. realloc and free may well crash if you try to realloc pointers they don't know about.

If you're using D and not calling C functions, you can easily write memory-safe code, and there's a garbage collector to boot.
 3. Cgorountes != goroutines (This one may not apply for D)
On the other hand, coroutines are coroutines, and D has a coroutine implementation (core.thread.Fiber).

In Go, all IO is tightly integrated with the coroutine scheduler. The reason it requires caution to use C from Go is that, if you perform a blocking operation, your entire program will block, unlike what happens if you use Go IO functions. If you are using vibe.d, you have a similar IO model and need similar caution. That's about it, I think.

Since you can call D from C/C++, you can implement coroutine-aware code in C that will yield at the appropriate places.
 4. Static builds (how easy/straight-forward is this in D?)
DUB, the D build and dependency tool, produces static libraries by default.
Dec 10 2015
parent reply Ola Fosheim Grøstad writes:
On Thursday, 10 December 2015 at 16:37:16 UTC, Chris Wright wrote:
 From what you're saying about Go, it sounds like someone 
 implemented the C calling convention at runtime.
Go has a more advanced, memory-efficient and secure runtime than D fibers. So when calling C, Go has to do extra work, e.g. ensure that the stack is large enough for C, etc.
 On the other hand, coroutines are coroutines, and D has a 
 coroutine implementation (core.thread.Fiber).
But the D fiber stacks aren't managed/safe, so in D the programmer is responsible for allocating a stack that is large enough. And that is neither safe nor scalable. D fibers are also assumed not to migrate between threads at the moment.
Dec 10 2015
next sibling parent reply Chris Wright <dhasenan gmail.com> writes:
On Thu, 10 Dec 2015 20:17:57 +0000, Ola Fosheim Grøstad wrote:

 But the D fiber stacks aren't managed/safe, so in D the programmer is
 responsible for allocating a stack that is large enough. And that is
 neither safe or scalable.
Try it out!

---
import core.thread;
import std.stdio;

void main() {
    int i = 0;
    void f() { i++; }
    Fiber[] fibers;
    for (int j = 0; j < 1000; j++) {
        fibers ~= new Fiber(&f, 1_000_000_000);
    }
    foreach (fiber; fibers) {
        fiber.call;
    }
    writeln(i);
}
---

core.thread.Fiber is careful not to make any eager allocations. In most operating systems, you have to go through hoops to demand address space that's backed by memory *right now*. But instead of just not going through those hoops, Fiber is more explicit about asking for lazily filled address space.

On Posix, it will use mmap(2). mmap is specifically for reserving address space. Fiber passes flags for it to be backed by memory lazily (and zero-initialized). On Windows, it uses VirtualAlloc and passes MEM_RESERVE | MEM_COMMIT. The WinAPI docs say: "Actual physical pages are not allocated unless/until the virtual addresses are actually accessed."

druntime doesn't explicitly grow your stack because it's simply unnecessary. The OS's memory manager does it for you.

This doesn't work well for 32-bit processes because, while it doesn't demand memory up front, it does demand address space. But lazily demanding pages from the OS doesn't help. You still have to divide the address space among fiber stacks.
Dec 10 2015
parent reply Ola Fosheim Gr <ola.fosheim.grostad+dlang gmail.com> writes:
On Thursday, 10 December 2015 at 23:25:42 UTC, Chris Wright wrote:
 This doesn't work well for 32-bit processes because, while it 
 doesn't demand memory up front, it does demand address space. 
 But lazily demanding pages from the OS doesn't help. You still 
 have to divide the address space among fiber stacks.
It does not work for 64-bit either. You will:

1. Kill caches and TLB
2. Get bloated page tables.
3. Run out of memory.
4. Run out of stack space.

These 3 approaches work:

1. Allocate all activation records on the heap (Simula/Beta)
2. Use stacks that grow and shrink dynamically (Go)
3. Require no state on stack at yield. (Pony / C++17)
Dec 10 2015
next sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 11 December 2015 at 01:25:24 UTC, Ola Fosheim Gr wrote:
 On Thursday, 10 December 2015 at 23:25:42 UTC, Chris Wright 
 wrote:
 This doesn't work well for 32-bit processes because, while it 
 doesn't demand memory up front, it does demand address space. 
 But lazily demanding pages from the OS doesn't help. You still 
 have to divide the address space among fiber stacks.
It does not work for 64-bit either. You will:

1. Kill caches and TLB
2. Get bloated page tables.
3. Run out of memory.
4. Run out of stack space.
Benchmark needed. It's not like the stack is randomly accessed, so I don't think the TLB argument makes a lot of sense. I mean, yes, if you run tens of thousands of fibers maybe you'll experience that, but that doesn't sound like a very smart plan to boot.
 These 3 approaches work:

 1. Allocate all activation records on the heap (Simula/Beta)
 2. Use stacks that grow and shrink dynamically (Go)
 3. Require no state on stack at yield. (Pony / C++17)
Dec 10 2015
next sibling parent reply Ola Fosheim Gr <ola.fosheim.grostad+dlang gmail.com> writes:
On Friday, 11 December 2015 at 01:43:12 UTC, deadalnix wrote:
 I mean, yes, if you run tens of thousand of fibers maybe you'll 
 experience that, but that doesn't sound like a very smart plan 
 to boot.
If you cannot spin up fibers in the thousands, then it is very limited. If you cannot have frequent short-lived fibers, then it is very limited. If it is limited like that, then you can't make an efficient game server with it, or an efficient web server for persistent connections.
Dec 10 2015
parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 11 December 2015 at 02:08:27 UTC, Ola Fosheim Gr wrote:
 On Friday, 11 December 2015 at 01:43:12 UTC, deadalnix wrote:
 I mean, yes, if you run tens of thousand of fibers maybe 
 you'll experience that, but that doesn't sound like a very 
 smart plan to boot.
If you cannot spin up fibers in the thousands, then it is very limited. If you cannot have frequent shortlived fibers, then it is very limited. If it is limited like that then you can't make an efficient game server with it, or an efficient web server for persistent connections.
Yeah, tell me more about having web servers with many persistent connections, I don't have the opportunity to see too many of these at work.
Dec 10 2015
parent reply Ola Fosheim Grøstad writes:
On Friday, 11 December 2015 at 06:36:47 UTC, deadalnix wrote:
 Yeah, tell me more about having web servers with many persistent 
 connections, I don't have the opportunity to see too many of 
 these at work.
There is a point in this statement?
Dec 11 2015
parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 11 December 2015 at 09:10:42 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 11 December 2015 at 06:36:47 UTC, deadalnix wrote:
 Yeah, tell me more about having web servers with many 
 persistent connections, I don't have the opportunity to see too 
 many of these at work.
There is a point in this statement?
No, I'd just like you to explain to me how to serve web requests at scale, because I have zero clue how that works.
Dec 11 2015
parent Ola Fosheim Grøstad writes:
On Friday, 11 December 2015 at 09:28:03 UTC, deadalnix wrote:
 No, I'd just like you to explain to me how to serve web requests 
 at scale, because I have zero clue how that works.
Only well behaved kids get presents for xmas.
Dec 11 2015
prev sibling parent Chris Wright <dhasenan gmail.com> writes:
On Fri, 11 Dec 2015 01:43:12 +0000, deadalnix wrote:

 Benchmark needed. It's not like the stack is randomly accessed, so I don't
 think the TLB argument makes a lot of sense.
The TLB issue is about the pattern of memory access, and that's got a lot more to do with the user's code than with the stack arrangement. So it's a red herring.
 I mean, yes, if you run tens of thousand of fibers maybe you'll
 experience that, but that doesn't sound like a very smart plan to boot.
I have a use case for hundreds of thousands of fibers, but that's accompanied with a custom scheduler. I expect most of them will only need to be called about once per minute. Incidentally, implementing a custom scheduler was pretty essential for this project, as was canceling coroutines. Go doesn't allow for either. I also wanted to be able to list coroutines and store a piece of data on each, or at least to get a reference to the current coroutine so I could store it somewhere and determine if anything that should have a running coroutine didn't have one. None of this was possible in Go.
Dec 10 2015
prev sibling parent reply Chris Wright <dhasenan gmail.com> writes:
On Fri, 11 Dec 2015 01:25:24 +0000, Ola Fosheim Gr wrote:

 it does not work for 64 bit either. You will:

 1. Kill caches and TLB
Which only affects efficiency, not correctness. And that's only for people who want to use as much as a gigabyte of stack space for every fiber.

Since the TLB is a cache based on usage, that part only applies when you are using pointers to tons of far-flung regions of memory all together. If that's your usage pattern, it doesn't matter whether you're on a native thread stack or you're storing things on the heap or you're using a memory-mapped fiber stack; you're going to have a bad time. The only thing you can do in application code to improve the situation is to use larger pages. For instance, you can pass MAP_HUGETLB to mmap(2), which should enable 2-4MB pages in place of 4-8KB ones (depending on hardware support and kernel configuration).

Finally, the *best* you can do for giving hints to the OS about the intended access pattern with lazy physical memory allocation is mmap, possibly passing MAP_GROWSDOWN, which is pretty much intended for this sort of use case.

For caching, the major problem we see here is that the physical memory behind a very large stack is not contiguous. The only thing you can do to improve your situation is to request larger pages, as with MAP_HUGETLB.
 2. Get bloated page tables.
Also affecting efficiency rather than correctness. Also not in the common case. Also potentially improved with MAP_HUGETLB, depending on OS internals. (The kernel might sometimes have a different store of large pages, or sometimes it may stitch together multiple adjacent normal pages.)
 3. Run out of memory.
 4. Run out of stack space.
At some point, you have to write code against the system you're using, not some idealized computer with infinite resources. You can use very large stacks in your fibers, but you need to ensure that they're short-lived. You can operate recursively on moderately large datasets, but not on arbitrarily large ones. These are considerations you have to take into account in D. You have to consider the same things in Go because memory is a limited resource. Sometimes you can address them in different ways.
 These 3 approaches work:
 
 1. Allocate all activation records on the heap (Simula/Beta)
Or rather, allow a fragmented stack, in both physical and virtual memory. Don't even bother giving the kernel any hints about probable access patterns. This has an obvious negative impact on performance, and that applies to the common case as well as unusual ones.
 2. Use stacks that grow and shrink dynamically (Go)
This has most of the problems you complain about. (Go doesn't even have unlimited stack sizes; see https://golang.org/pkg/runtime/debug/#SetMaxStack .) Furthermore, Go's implementation (based on comments in the source code) requires that every function check that the stack is large enough. Every single function call. Even if that were a good solution, it's not going to be added to D.

In order to keep the stack contiguous, Go *reallocates and copies your entire stack*, then walks through it to fix up every pointer. This is only even possible because you can't store stack pointers on the heap, apparently. (I don't know how that's enforced.) This is, needless to say, expensive. But at least it's amortized, right?

If you want to make this work from D, you would have to do something a bit more awkward. Maybe create a shared memory object, then mmap it multiple times at different sizes. Pass it back and forth between two ranges of virtual memory. It would be ugly, and when you're unlucky, you won't have enough virtual address space in the right places.
 3. Require no state on stack at yield. (Pony / C++17)
Which limits their utility immensely.
Dec 10 2015
parent Ola Fosheim Grøstad writes:
On Friday, 11 December 2015 at 05:05:29 UTC, Chris Wright wrote:
 1. Kill caches and TLB
Which only affects efficiency, not correctness.
That's true, but when you have fibers or coroutines as a paradigm, you do it because it is a convenient way of preserving statefulness. So you want to be able to use it where it makes your code more maintainable. If you only care about efficiency for a very limited scenario, then you don't pick coroutines, you use events. But having many, also short-lived, coroutines makes it easier to write code that is evolving, like simulations.

That makes requiring syscalls like mmap for instantiation way too expensive, although if all your stacks are the same size, you can just use a freelist pool and avoid the syscalls. But that is not a good generic solution. That's a special case.
 memory all together. If that's your usage pattern, it doesn't 
 matter whether you're on a native thread stack or you're 
 storing things on the heap or you're using a memory-mapped 
 fiber stack; you're going to have a bad time.
It matters if you keep wiping caches/TLB by hammering the page tables with changes. It matters if you need to use small pages because you need a guard page at the bottom of the stack in order to avoid checking stack size. It matters if page tables grow in size because you fragment memory deliberately. And you also need to make sure that code probes the guard page before addressing something beyond a potential guard page, etc. (e.g. if you put a large array on the stack).
 You have to consider the same things in Go because memory is a 
 limited resource. Sometimes you can address them in different 
 ways.
Yes, it is a limited resource, especially in typical Go scenarios where you run on shared instances with a fixed small memory size. Which basically makes small default stacks that grow a decent solution, although it does make GC questionable as it leads to significant memory overhead.
 1. Allocate all activation records on the heap (Simula/Beta)
Or rather, allow a fragmented stack, in both physical and virtual memory. Don't even bother giving the kernel any hints about probable access patterns. This has an obvious negative impact on performance, and that applies to the common case as well as unusual ones.
This is basically the model most high-level languages take on the conceptual level; then you do optimizations under the hood. Basically, having the same model for objects, functions, lambdas and coroutines is a big win in many ways. You can still have a LIFO allocator under the hood for "stack-like" allocation.

New features in C++ are taking the everything-is-an-object approach. Lambdas are objects. Coroutines are objects. Is it more difficult to get the highest performance? Yes, but it is memory efficient and conceptually elegant.
 In order to keep the stack contiguous, Go *reallocates and 
 copies your entire stack*, then walks through it to fix up
Yes, but one can easily think of optimizations, e.g. leave open slots so that you statistically often can just extend the stack. Or the opposite: over-allocate and shrink when you know what the stack will be like. One problem with Go there could be the focus on separate compilation; smart behaviour here probably requires full analysis of possible call-chains.
 If you want to make this work from D, you would have to do 
 something a bit more awkward.
One general problem for D is that you can call D from C. If you knew that C code could only be called at the leaves (or rather, keep state at the bottom of the stack), then you also would get more creative freedom.
 3. Require no state on stack at yield. (Pony / C++17)
Which limits their utility immensely.
Not really. It may affect execution speed, but the basic idea is that you establish by static analysis what state is to be retained in the heap object; the rest is put on the regular thread stack.
Dec 11 2015
prev sibling parent Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, 2015-12-10 at 20:17 +0000, Ola Fosheim Grøstad via Digitalmars-d wrote:
[…]
 Go has a more advanced, memory-efficient and secure runtime than 
 D fibers. So when calling C, Go has to do extra work, e.g. ensure 
 that the stack is large enough for C, etc.
Goroutines are great. They are the single thing that makes Go usable. Fibres are for cooperative coroutines; they are a very long way from being the equivalent of goroutines.

std.parallelism has tasks and a scheduler. This is much more like goroutines, but they are not publicly available. A good thing to do would be to build an asynchronous task pool system for D, as exists for Java (Quasar), Rust (Eventual), Groovy (GPars), that is available for all. Fibres are not that system, and should not be coerced to fill that role. Fibres are for cooperative coroutines and should stay that way. What is needed is a task pool that can then harness kernel threads exactly as goroutines do. And Erlang actors for that matter. And GPars dataflow/actors/CSP.

[…]

In the new year, it may be possible for me to join in an activity on this rather than just waffling about it.

-- 
Russel.
Dr Russel Winder      t: +44 20 7585 2200     voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077     xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk    skype: russel_winder
Dec 11 2015
prev sibling next sibling parent reply Gary Willoughby <dev nomad.so> writes:
On Thursday, 10 December 2015 at 13:33:07 UTC, Pradeep Gowda 
wrote:
 I read this post about the difficulties of using go/cgo to 
 interface with C code: 
 http://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/

 How does D do in comparison? specifically in regards to:

 1. Call overhead (CGo is 100 times slower than native Go fn 
 call)
 2. Memory management. (I guess D is not immune from having to 
 manage memory for the C functions it calls.. but is there a 
 difference in approach? is it safer in D?)
 3. Cgorountes != goroutines (This one may not apply for D)
 4. Static builds (how easy/straight-forward is this in D?)
 5. Debugging (ease of accessing C parts when debugging)
Some reading: https://dlang.org/spec/interfaceToC.html http://wiki.dlang.org/D_binding_for_C
Dec 10 2015
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 10 December 2015 at 17:16:43 UTC, Gary Willoughby 
wrote:
 Some reading:

 https://dlang.org/spec/interfaceToC.html
 http://wiki.dlang.org/D_binding_for_C
I can't recall if Ali's book has a section on this, but I'm not a fan of those links.
Dec 10 2015
parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 12/10/2015 09:24 AM, jmh530 wrote:
 On Thursday, 10 December 2015 at 17:16:43 UTC, Gary Willoughby wrote:
 Some reading:

 https://dlang.org/spec/interfaceToC.html
 http://wiki.dlang.org/D_binding_for_C
I can't recall if Ali's book has a section on this, but I'm not a fan of those links.
Unfortunately no, I don't have that topic covered, other than brief mentions of extern(C), extern(C++), and pragma(mangle): http://ddili.org/ders/d.en/modules.html#ix_modules.linkage http://ddili.org/ders/d.en/pragma.html#ix_pragma.mangle,%20pragma However, Mike Parker's book "Learning D" has more than 50 pages on "Connecting D with C": http://wiki.dlang.org/Books Ali
Dec 10 2015
parent jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 10 December 2015 at 19:46:44 UTC, Ali Çehreli wrote:
 However, Mike Parker's book "Learning D" has more than 50 pages 
 on "Connecting D with C":
Well now I need to buy that!
Dec 10 2015
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On Thursday, 10 December 2015 at 13:33:07 UTC, Pradeep Gowda 
wrote:
 I read this post about the difficulties of using go/cgo to 
 interface with C code: 
 http://www.cockroachlabs.com/blog/the-cost-and-complexity-of-cgo/

 How does D do in comparison? specifically in regards to:

 1. Call overhead (CGo is 100 times slower than native Go fn 
 call)
No overhead. The only limitation is that you won't have inlining between C and D; otherwise it is a direct call, just the same as calling a C function from C.
 2. Memory management. (I guess D is not immune from having to 
 manage memory for the C functions it calls.. but is there a 
 difference in approach? is it safer in D?)
You have to be careful not to put pointers to GC-managed memory in the C heap. Otherwise, because of 1, you can just call malloc and free the way C does, with the same effect/performance.
 3. Cgorountes != goroutines (This one may not apply for D)
I don't know much about these.
 4. Static builds (how easy/straight-forward is this in D?)
D and C use a similar compilation model. You can link both together without much trouble.
 5. Debugging (ease of accessing C parts when debugging)
If the C part is compiled with debug information, you'll get everything a C debugger can give you. For the D part, support tends to be not as good but is fairly decent, especially using GDB. It's difficult to say whether it will do for you, but it does enough for what I do with a debugger.
Dec 10 2015