
digitalmars.D - Nim's new cycle collector

reply Adnan <relay.public.adnan outlook.com> writes:
https://forum.nim-lang.org/t/5734#38665

Text:

Progress update:

     The new exception handling implementation is the default with 
--gc:arc (and --gc:orc). [Shipping in 1.2]
     The compiler infers the sink parameter annotation for you, 
there is little need to use it explicitly. [Shipping in 1.2]
     The cycle collector --gc:orc is now much faster. [Not yet in 
an official release.] The implementation does exploit the revived 
acyclic annotation.

That means the existing async designs (stdlib, chronos) work 
without modifications with --gc:orc. (In theory; in practice more 
testing is required.)

If you use ref object like C++'s unique_ptr, the performance is 
on par with (or - because of Nim's superior parameter passing 
semantics - better than) C++'s unique_ptr: all RC operations are 
elided and the cycle collector isn't involved either.

So how does --gc:orc work?

It's a myth that reference counting doesn't work with cyclic 
structures, but it is true that handling cycles reintroduces the 
"tracing" aspects, plus heuristics for when to run the tracing 
and over which subgraphs, in order to make this fast enough. The 
current implementation is based on so-called "trial deletion" 
with Bacon's improvements; read 
https://researcher.watson.ibm.com/researcher/files/us-bacon/Bacon01Concurrent.pdf for more information.
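A minimal sketch of the trial-deletion idea (Python; heavily simplified relative to Bacon's algorithm and to Nim's implementation): tentatively subtract every reference that is internal to a suspected subgraph; any node that then still has a positive count is externally reachable, and everything not reachable from such a node was kept alive only by the cycle.

```python
def cycle_garbage(nodes):
    """Trial deletion over a candidate subgraph.

    `nodes` maps id -> (refcount, list of child ids inside the subgraph).
    Returns the ids that are reachable only from within the subgraph,
    i.e. cyclic garbage. Simplified: ignores concurrent mutation.
    """
    trial = {n: rc for n, (rc, _) in nodes.items()}
    # Step 1: subtract every internal reference (the "trial" deletion).
    for _, (_, children) in nodes.items():
        for c in children:
            trial[c] -= 1
    # Step 2: nodes whose count stayed positive have external owners;
    # everything reachable from them is live.
    live = set()
    stack = [n for n, rc in trial.items() if rc > 0]
    while stack:
        n = stack.pop()
        if n in live:
            continue
        live.add(n)
        stack.extend(nodes[n][1])
    return set(nodes) - live

# a and b reference each other with no external owner; c has only
# external references.
graph = {
    "a": (1, ["b"]),   # a's only reference comes from b
    "b": (1, ["a"]),   # b's only reference comes from a
    "c": (2, []),      # both of c's references come from outside
}
print(sorted(cycle_garbage(graph)))  # -> ['a', 'b']
```

The heuristics the post mentions are about which subgraphs to feed into such a pass, and when, so the cost scales with the suspected subgraphs rather than the whole heap.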

Highlights

     Only traces the subgraphs that were modified since the last 
GC run, somewhat comparable to a generational GC (but in theory 
actually better than one). What you don't touch in memory, the GC 
doesn't touch either; it's independent of the involved heap sizes 
(but not independent of the involved subgraphs (SCCs) ...).
     Our implementation uses a novel "type tag free" optimization 
so that most heap objects do not have to store a type tag in the 
heap, saving 8 bytes of memory, which we use instead to make 
cyclic root removal an O(1) operation. This implies that at 
runtime we exploit even more acyclic cases than previous designs.
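O(1) removal from a roots buffer is plausibly the classic swap-with-last trick, sketched here in Python (the field names and layout are assumptions for illustration, not Nim's actual header format): each registered object remembers its slot in the buffer, so removal needs no scan.

```python
class Obj:
    def __init__(self, name):
        self.name = name
        self.root_idx = -1   # slot in the roots buffer; -1 = not registered

roots = []

def register_root(obj):
    obj.root_idx = len(roots)
    roots.append(obj)

def unregister_root(obj):
    # O(1): move the last root into this object's slot, then shrink.
    last = roots[-1]
    roots[obj.root_idx] = last
    last.root_idx = obj.root_idx
    roots.pop()
    obj.root_idx = -1

a, b, c = Obj("a"), Obj("b"), Obj("c")
for o in (a, b, c):
    register_root(o)
unregister_root(a)              # removes a without scanning the buffer
print([o.name for o in roots])  # -> ['c', 'b']
```

Storing the slot index in the object is presumably where the freed-up 8 bytes of header space come in.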
Apr 28 2020
parent reply Adnan <relay.public.adnan outlook.com> writes:
On Tuesday, 28 April 2020 at 22:42:35 UTC, Adnan wrote:
 https://forum.nim-lang.org/t/5734#38665

 Text:

 [...]
Wanted to share this with D friends to see how they view Nim's strategy of reference counting and how it would fit into D in theory.
Apr 28 2020
next sibling parent Araq <rumpf_a web.de> writes:
On Tuesday, 28 April 2020 at 22:44:48 UTC, Adnan wrote:
 On Tuesday, 28 April 2020 at 22:42:35 UTC, Adnan wrote:
 https://forum.nim-lang.org/t/5734#38665

 Text:

 [...]
 Wanted to share this with D friends to see how they view Nim's strategy of reference counting and how it would fit into D in theory.
Well, nothing I do fits D's design; Nim is quite a different language, with its effect system for global variables and its distinction between traced and untraced pointers. You can always emulate everything that Nim does to some extent in D, but I doubt the result would be something you want to use in production. This is not a criticism of D: D, Nim and Rust are simply more different from each other than e.g. Ruby, Python and Lua are.
Apr 29 2020
prev sibling parent reply Bienlein <ffm2002 web.de> writes:
On Tuesday, 28 April 2020 at 22:44:48 UTC, Adnan wrote:
 On Tuesday, 28 April 2020 at 22:42:35 UTC, Adnan wrote:
 https://forum.nim-lang.org/t/5734#38665

 Text:

 [...]
 Wanted to share this with D friends to see how they view Nim's strategy of reference counting and how it would fit into D in theory.
Objective-C and Swift are two well-known languages that use reference counting. I remember a benchmark someone made in his thesis where he wrote a driver for something in various languages, including Go and Objective-C.

The version in Objective-C was too slow for the requirements of a driver. It turned out that updating counters when adding items to and removing items from collections was the main reason for this.

On the contrary, the version in Go was fast enough, apparently because each of Go's GC runs is short enough not to slow down the application too much.

So to me the question arises whether reference counting in Nim could be a problem for a language that is also intended to be used for systems programming. Eventually, a mix of reference counting and manual memory management would do.
Apr 30 2020
next sibling parent reply Araq <rumpf_a web.de> writes:
On Thursday, 30 April 2020 at 13:49:54 UTC, Bienlein wrote:
 Objective-C and Swift are two well-known languages that use 
 reference counting. I remember some benchmark someone made in 
 his thesis where he wrote a drive for something in various 
 languages including Go and Objective-C.

 The version in Objective-C was too slow for the requirements of 
 a driver. It turned out that changing counters when adding and 
 removing items to collections were the main reason for this.

 On the contrary, the version in Go was fast enough. Apparently 
 because Go's GC runs short enough on every collection 
 invocation not to slow down the application too much.

 So to me the question arises whether reference counting in Nim 
 could be a problem for a language that is also intended to be 
 used for system programming. Eventually, a mix of reference 
 counting and manual memory management would do.
We don't do atomic reference counting, so Swift's results don't apply. If you scroll up to the beginning of https://forum.nim-lang.org/t/5734 you can read more about it.
Apr 30 2020
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/30/20 10:05 AM, Araq wrote:
 On Thursday, 30 April 2020 at 13:49:54 UTC, Bienlein wrote:
 Objective-C and Swift are two well-known languages that use reference 
 counting. I remember some benchmark someone made in his thesis where 
 he wrote a drive for something in various languages including Go and 
 Objective-C.

 The version in Objective-C was too slow for the requirements of a 
 driver. It turned out that changing counters when adding and removing 
 items to collections were the main reason for this.

 On the contrary, the version in Go was fast enough. Apparently because 
 Go's GC runs short enough on every collection invocation not to slow 
 down the application too much.

 So to me the question arises whether reference counting in Nim could 
 be a problem for a language that is also intended to be used for 
 system programming. Eventually, a mix of reference counting and manual 
 memory management would do.
 We don't do atomic reference counting so the results of Swift don't apply. If you can scroll up to the beginning of https://forum.nim-lang.org/t/5734 you can read more about it.
The key to performant reference counting, I think, is to have the compiler elide pairs of increments and decrements that it can determine are unnecessary. That is how Objective-C was able to get away with reasonable performance, even on old iPhones (though ARC wasn't available until iOS 5). And from what I read, Nim is doing that too, correct? It's definitely a good model for D to look at. -Steve
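The pair elision described here could be sketched as a tiny peephole pass over a toy op list (Python; purely illustrative, not how any of these compilers actually represent it): an incref cancels against an immediately following decref of the same variable, since nothing in between can observe the count.

```python
def elide_rc_pairs(ops):
    """Remove matching (incref x, decref x) pairs from a linear op list.

    Toy model: only directly adjacent pairs cancel, so any intervening
    op acts as a barrier that keeps the pair.
    """
    out = []
    for op in ops:
        kind, var = op
        if kind == "decref" and out and out[-1] == ("incref", var):
            out.pop()        # the pair cancels: no net count change
            continue
        out.append(op)
    return out

ops = [
    ("incref", "x"),   # would be emitted for a borrowed argument...
    ("decref", "x"),   # ...and released right after the call: elidable
    ("incref", "y"),
    ("use",    "y"),   # something happens in between: keep this pair
    ("decref", "y"),
]
print(elide_rc_pairs(ops))
# -> [('incref', 'y'), ('use', 'y'), ('decref', 'y')]
```

A real optimizer reasons over control flow and escape information rather than adjacency, but the net effect is the same: matched retain/release traffic disappears.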
Apr 30 2020
prev sibling parent IGotD- <nise nise.com> writes:
On Thursday, 30 April 2020 at 13:49:54 UTC, Bienlein wrote:
 Eventually, a mix of reference counting and manual memory 
 management would do.
There is another language called V. Its author claims that the language will feature some kind of static analysis of lifetimes, like Rust, but also reference counting. The compiler tries to do static analysis in order to figure out when an object goes out of scope; if it fails, it just adds reference counting to the object. This approach is largely transparent: it saves you from the explicitness of Rust but still optimizes where it can. That is just according to the author, though, and so far it is an idea without any realization. I don't know how similar this approach is to Swift.

I'm just guessing here, but I think this is where we are heading when it comes to memory management. Reference counting is usually completely OK for systems programming, but it would be nice to have this kind of optimization.
Apr 30 2020