
digitalmars.D - Mac Apps That Use Garbage Collection Must Move to ARC

reply "JN" <666total wp.pl> writes:
https://developer.apple.com/news/?id=02202015a

Interesting...

Apple is dropping GC in favor of automatic reference counting. 
What are the benefits of ARC over GC? Is it just about 
predictability of resource freeing? Would ARC make sense in D?
Feb 21 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 21 February 2015 at 19:20:48 UTC, JN wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. 
 What are the benefits of ARC over GC? Is it just about 
 predictability of resource freeing? Would ARC make sense in D?
Apple was never able to roll out a good GC for Objective-C and went back to ARC (an overly complex reference-counting system that the compiler is aware of). You get the usual tradeoffs of RC vs. GC:
 - RC is more predictable.
 - RC has less floating garbage, so usually a lower memory footprint.
 - RC usually increases cache pressure, as you need to have the reference count ready and hot.
 - RC behaves (very) poorly when references are shared across cores, as they will compete for cache lines. It tends to be faster in single-threaded mode, but that depends on the type of application (graph manipulation, for instance, tends to behave poorly with RC).
 - RC can leak.
 - RC is unsafe without ownership.
Feb 21 2015
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/21/15 12:22 PM, deadalnix wrote:
 On Saturday, 21 February 2015 at 19:20:48 UTC, JN wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. What
 are the benefits of ARC over GC? Is it just about predictability of
 resource freeing? Would ARC make sense in D?
Apple was never able to roll out a good GC for Objective-C and went back to ARC (an overly complex reference-counting system that the compiler is aware of). You get the usual tradeoffs of RC vs. GC:
 - RC is more predictable.
 - RC has less floating garbage, so usually a lower memory footprint.
 - RC usually increases cache pressure, as you need to have the reference count ready and hot.
 - RC behaves (very) poorly when references are shared across cores, as they will compete for cache lines. It tends to be faster in single-threaded mode, but that depends on the type of application (graph manipulation, for instance, tends to behave poorly with RC).
 - RC can leak.
 - RC is unsafe without ownership.
Apparently Apple has delivered a well-oiled RC implementation and has "won". Most people who develop for both iOS and Android prefer the former. -- Andrei
Feb 21 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/21/2015 12:22 PM, deadalnix wrote:
 You get the usual tradeof of RC vs GC :
   - RC is more predictable.
   - RC has less floating garbage, so usually a lower memory foot print.
   - RC usually increase cache pressure as you need to have reference count
ready
 and hot.
   - RC behave (very) poorly when reference are shared across cores, as they ill
 compete for cache lines. It tends to be faster in single threaded mode, but
that
 depends on the type of application (graph manipulation for instance, tend to
 behave poorly with RC).
   - RC can leak.
   - RC is unsafe without ownership.
- RC is slower overall.
- RC has further performance and code bloat problems when used with exception handling.
Feb 21 2015
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-02-22 03:23, Walter Bright wrote:

 - RC has further performance and code bloat problems when used with
 exception handling
Exceptions in Objective-C are basically like Errors in D: they should not be caught and should terminate the application. Swift doesn't even have exceptions. -- /Jacob Carlborg
Feb 22 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/22/2015 3:19 AM, Jacob Carlborg wrote:
 On 2015-02-22 03:23, Walter Bright wrote:

 - RC has further performance and code bloat problems when used with
 exception handling
Exceptions in Objective-C are basically like Errors in D: they should not be caught and should terminate the application. Swift doesn't even have exceptions.
And I suspect that ARC is why they don't have exceptions.
Feb 22 2015
parent reply Jacob Carlborg <doob me.com> writes:
On 2015-02-22 21:48, Walter Bright wrote:

 And I suspect that ARC is why they don't have exceptions.
Objective-C still has both ARC and exceptions. The documentation [1] says that ARC is not exception-safe by default, but there is a flag to enable it: "-fobjc-arc-exceptions". [1] http://clang.llvm.org/docs/AutomaticReferenceCounting.html#exceptions -- /Jacob Carlborg
Feb 23 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/23/2015 11:57 PM, Jacob Carlborg wrote:
 On 2015-02-22 21:48, Walter Bright wrote:

 And I suspect that ARC is why they don't have exceptions.
Objective-C still has both ARC and exceptions. The documentation [1] says that ARC is not exception-safe by default, but there is a flag to enable it: "-fobjc-arc-exceptions". [1] http://clang.llvm.org/docs/AutomaticReferenceCounting.html#exceptions
From your reference: "Making code exceptions-safe by default would impose severe runtime and code size penalties on code that typically does not actually care about exceptions safety. Therefore, ARC-generated code leaks by default on exceptions, which is just fine if the process is going to be immediately terminated anyway. Programs which do care about recovering from exceptions should enable the option." Note "severe runtime and code size penalties". Just what I said.
Feb 24 2015
prev sibling parent Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 02/22/2015 03:23 AM, Walter Bright wrote:
 - RC is slower overall
This claim isn't true for almost all applications when using a conservative GC, except for programs that produce a lot of garbage and have very few long-lived objects. The memory bandwidth consumed to mark long-lived objects during every collection dominates the GC cost even for small heaps (say 100MB).
Feb 27 2015
prev sibling next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 22 February 2015 at 05:20, JN via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. What are the
 benefits of ARC over GC? Is it just about predictability of resource
 freeing? Would ARC make sense in D?
D's GC is terrible, and after 6 years hanging out in this place, I have seen precisely zero development on the GC front. Nobody can even imagine, let alone successfully implement, a GC that covers realtime use requirements.

On the other hand, if 'scope' is implemented well, D may have some of the best tools in town for a quality ARC implementation. There is a visible way forward for quality RC in D, and I think we could do better than Apple.

I personally think ARC in D is the only way forward. That is an unpopular opinion however... although I think I'm just being realistic ;)
Feb 21 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/21/2015 4:43 PM, Manu via Digitalmars-d wrote:
 D's GC is terrible, and after 6 years hanging out in this place, I
 have seen precisely zero development on the GC front. Nobody can even
 imagine, let alone successfully implement a GC that covers realtime
 use requirements.
Nobody thinks GC is suitable for hard realtime.
 On the other hand, if 'scope' is implemented well, D may have some of
 the best tools in town for quality ARC implementation. There is a
 visible way forward for quality RC in D, and I think we could do
 better than Apple.
With 'return ref', which is now implemented, you can create a memory safe RefCounted type. However, nobody has bothered. Are you up for it? :-)
Feb 21 2015
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Sunday, 22 February 2015 at 02:14:04 UTC, Walter Bright wrote:
 With 'return ref', which is now implemented, you can create a 
 memory safe RefCounted type. However, nobody has bothered. Are 
 you up for it? :-)
wait what? These things should be on the changelog! You can't really complain that people haven't bothered using a secret feature that 1) doesn't seem to be out of beta yet and 2) isn't listed as a new thing here http://dlang.org/changelog.html
Feb 21 2015
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/21/2015 6:20 PM, Adam D. Ruppe wrote:
 On Sunday, 22 February 2015 at 02:14:04 UTC, Walter Bright wrote:
 With 'return ref', which is now implemented, you can create a memory safe
 RefCounted type. However, nobody has bothered. Are you up for it? :-)
wait what? These things should be on the changelog! You can't really complain that people haven't bothered using a secret feature that 1) doesn't seem to be out of beta yet and 2) isn't listed as a new thing here http://dlang.org/changelog.html
It's new in 2.067, which is out in beta. It implements DIP25. http://wiki.dlang.org/DIP25
Feb 21 2015
prev sibling parent "Gary Willoughby" <dev nomad.so> writes:
On Sunday, 22 February 2015 at 02:20:03 UTC, Adam D. Ruppe wrote:
 On Sunday, 22 February 2015 at 02:14:04 UTC, Walter Bright 
 wrote:
 With 'return ref', which is now implemented, you can create a 
 memory safe RefCounted type. However, nobody has bothered. Are 
 you up for it? :-)
wait what? These things should be on the changelog! You can't really complain that people haven't bothered using a secret feature that 1) doesn't seem to be out of beta yet and 2) isn't listed as a new thing here http://dlang.org/changelog.html
This! What is 'return ref'?
Feb 22 2015
prev sibling next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 22 February 2015 at 02:14:04 UTC, Walter Bright wrote:
 On 2/21/2015 4:43 PM, Manu via Digitalmars-d wrote:
 D's GC is terrible, and after 6 years hanging out in this 
 place, I
 have seen precisely zero development on the GC front. Nobody 
 can even
 imagine, let alone successfully implement a GC that covers 
 realtime
 use requirements.
Nobody thinks GC is suitable for hard realtime.
 On the other hand, if 'scope' is implemented well, D may have 
 some of
 the best tools in town for quality ARC implementation. There 
 is a
 visible way forward for quality RC in D, and I think we could 
 do
 better than Apple.
With 'return ref', which is now implemented, you can create a memory safe RefCounted type. However, nobody has bothered. Are you up for it? :-)
Excuse my ignorance (I read the DIP, btw): how does 'return ref' address issues like http://forum.dlang.org/thread/pagpusgpyhlhoipldofs forum.dlang.org#post-ewuwphzmubtmykfsywuw:40forum.dlang.org ?
Feb 21 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/21/2015 10:07 PM, weaselcat wrote:
 Excuse my ignorance,(I read the DIP btw)
 How does 'return ref' address issues like
 http://forum.dlang.org/thread/pagpusgpyhlhoipldofs forum.dlang.org#post-ewuwphzmubtmykfsywuw:40forum.dlang.org
 ?
The short answer is that your ref types never expose a raw pointer; they do it all with refs.
Feb 21 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 22 February 2015 at 06:37:45 UTC, Walter Bright wrote:
 On 2/21/2015 10:07 PM, weaselcat wrote:
 Excuse my ignorance,(I read the DIP btw)
 How does 'return ref' address issues like
 http://forum.dlang.org/thread/pagpusgpyhlhoipldofs forum.dlang.org#post-ewuwphzmubtmykfsywuw:40forum.dlang.org
 ?
The short answer is that your ref types never expose a raw pointer; they do it all with refs.
If you don't plan to use any method on objects, I guess that's fine.
Feb 21 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/21/2015 11:06 PM, deadalnix wrote:
 On Sunday, 22 February 2015 at 06:37:45 UTC, Walter Bright wrote:
 On 2/21/2015 10:07 PM, weaselcat wrote:
 Excuse my ignorance,(I read the DIP btw)
 How does 'return ref' address issues like
 http://forum.dlang.org/thread/pagpusgpyhlhoipldofs forum.dlang.org#post-ewuwphzmubtmykfsywuw:40forum.dlang.org

 ?
The short answer is that your ref types never expose a raw pointer; they do it all with refs.
If you don't plan to use any method on objects, I guess that's fine.
Structs pass the 'this' pointer by ref.
Feb 21 2015
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 22 February 2015 at 06:37:45 UTC, Walter Bright wrote:
 On 2/21/2015 10:07 PM, weaselcat wrote:
 Excuse my ignorance,(I read the DIP btw)
 How does 'return ref' address issues like
 http://forum.dlang.org/thread/pagpusgpyhlhoipldofs forum.dlang.org#post-ewuwphzmubtmykfsywuw:40forum.dlang.org
 ?
The short answer is that your ref types never expose a raw pointer; they do it all with refs.
And a coroutine cannot hold onto a ref?
Feb 22 2015
prev sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 22 February 2015 at 06:07:02 UTC, weaselcat wrote:
 On Sunday, 22 February 2015 at 02:14:04 UTC, Walter Bright 
 wrote:
 On 2/21/2015 4:43 PM, Manu via Digitalmars-d wrote:
 D's GC is terrible, and after 6 years hanging out in this 
 place, I
 have seen precisely zero development on the GC front. Nobody 
 can even
 imagine, let alone successfully implement a GC that covers 
 realtime
 use requirements.
Nobody thinks GC is suitable for hard realtime.
 On the other hand, if 'scope' is implemented well, D may have 
 some of
 the best tools in town for quality ARC implementation. There 
 is a
 visible way forward for quality RC in D, and I think we could 
 do
 better than Apple.
With 'return ref', which is now implemented, you can create a memory safe RefCounted type. However, nobody has bothered. Are you up for it? :-)
Excuse my ignorance (I read the DIP, btw): how does 'return ref' address issues like http://forum.dlang.org/thread/pagpusgpyhlhoipldofs forum.dlang.org#post-ewuwphzmubtmykfsywuw:40forum.dlang.org ?
Responding to myself: I don't think this actually touches on the issue deadalnix raised at all. So I think the
 With 'return ref', which is now implemented, you can create a 
 memory safe RefCounted type.
is not quite correct... in his usage, anyways. I think this would require DIP69, no?
Feb 21 2015
prev sibling next sibling parent reply Benjamin Thaut <code benjamin-thaut.de> writes:
Am 22.02.2015 um 03:13 schrieb Walter Bright:
 Nobody thinks GC is suitable for hard realtime.
I think you should know Manu well enough by now that you know he is not talking about hard realtime but soft realtime instead (e.g. games). There are GCs which handle this situation pretty well, but D's GC is not one of them.
Feb 22 2015
parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sun, 2015-02-22 at 10:21 +0100, Benjamin Thaut via Digitalmars-d
wrote:
 Am 22.02.2015 um 03:13 schrieb Walter Bright:
 Nobody thinks GC is suitable for hard realtime.
 I think you should know Manu well enough by now that you know he is not
 talking about hard realtime but soft realtime instead (e.g. games).
 There are GCs which handle this situation pretty well, but D's GC is not
 one of them.
If the D GC really is quite so bad, why hasn't a cabal formed to create a new GC that is precise, fast and efficient? I suspect Python's RC/GC approach is one architecture, whilst Java G1 is another. (Ignore all previous GCs in OpenJDK, they "suck". Sadly the GCs other than G1 that are interesting on the JVM are proprietary.)
--
Russel.
Feb 22 2015
next sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 22 February 2015 at 09:48:16 UTC, Russel Winder wrote:
 On Sun, 2015-02-22 at 10:21 +0100, Benjamin Thaut via 
 Digitalmars-d
 wrote:
 Am 22.02.2015 um 03:13 schrieb Walter Bright:
 Nobody thinks GC is suitable for hard realtime.
I think you should know manu good enough by now that you know he is not talking about hard realtime but soft realtime instead. (e.g. games) There are GCs which handle this situation pretty well but D's GC is not one of them.
If the D GC really is quite so bad, why hasn't a cabal formed to create a new GC that is precise, fast and efficient? I suspect Python's RC/GC approach is one architecture, whilst Java G1 is another. (Ignore all previous GCs in OpenJDK, they "suck". Sadly the GCs other than G1 that are interesting on the JVM are proprietary.)
GCs are difficult. I don't think they're as bad for soft realtime as some people would lead you to believe, though. A nice advantage of GCs is that you can hold off all collections, and soft realtime applications (games) frequently have pauses long enough for collections. The issue I found with D's GC by default is that it runs far too often, and it's _much_ more efficient to do large collections infrequently than frequent small collections. IIRC this is completely adjustable in 2.067. Just my 2 cents. Also, .NET's GC is under MIT now AFAIK(?). I don't even know what the quality of it is, but .NET is and has been Microsoft's darling.
Feb 22 2015
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-02-22 10:48, Russel Winder via Digitalmars-d wrote:

 If the D GC really is quite so bad, why hasn't a cabal formed to create
 a new GC that is precise, fast and efficient?
It's like with everything else that hasn't been done: no one has cared enough to do something about it. There are some issues that make it harder to implement a good GC in D:

* D allows unsafe operations like unions, casts, and pointer arithmetic
* D needs to be able to interface with C
* Most good GC implementations need some kind of barrier (read or write, I don't remember which). If I recall correctly, there are several people against this in the community

-- /Jacob Carlborg
Feb 22 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/22/2015 3:25 AM, Jacob Carlborg wrote:
 * Most good GC implementations need some kind of barrier (read or write, I don't
 remember which). If I recall correctly, there are several people against this in the
community
Count me among those. In Java, write barriers make sense because Java uses the GC for everything. Pretty much every indirection is a GC reference. This is not at all true with D code. But since the compiler can't know that, it has to insert write barriers for all those dereferences regardless. I suspect it would be a terrible performance hit.
Feb 23 2015
parent reply Jacob Carlborg <doob me.com> writes:
On 2015-02-23 21:30, Walter Bright wrote:

 Count me among those.

 In Java, write barriers make sense because Java uses the GC for
 everything. Pretty much every indirection is a GC reference.

 This is not at all true with D code. But since the compiler can't know
 that, it has to insert write barriers for all those dereferences
 regardless.
The alternative would be to have two kinds of pointers, one for GC-allocated data and one for other kinds of data. But I know you don't like that either. We kind of already have this: class references and regular pointers. But that would tie classes to the GC.
 I suspect it would be a terrible performance hit.
It would be nice to have some numbers backing this up. -- /Jacob Carlborg
Feb 23 2015
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Tuesday, 24 February 2015 at 07:53:52 UTC, Jacob Carlborg 
wrote:
 On 2015-02-23 21:30, Walter Bright wrote:

 Count me among those.

 In Java, write barriers make sense because Java uses the GC for
 everything. Pretty much every indirection is a GC reference.

 This is not at all true with D code. But since the compiler 
 can't know
 that, it has to insert write barriers for all those 
 dereferences
 regardless.
The alternative would be to have two kind of pointers, one for GC allocated data and one for other kind of data. But I know you don't like that either. We kind of already have this, class references and regular pointers. But that would tie classes to the GC.
 I suspect it would be a terrible performance hit.
It would be nice to have some numbers backing this up.
This is the approach taken by Active Oberon and Modula-3. Pointers are GC by default, but can be declared as untraced pointers in code considered system code, like in D. -- Paulo
Feb 24 2015
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
 I suspect it would be a terrible performance hit.
It would be nice to have some numbers backing this up.
This is the approach taken by Active Oberon and Modula-3. Pointers are GC by default, but can be declared as untraced pointers in code considered system code, like in D.
Do they have a concurrent GC and emit barriers for each write to a default pointer? Do they have precise scanning that skips the untraced pointers? Are there meaningful performance comparisons between the two pointer types that would enable us to estimate how costly emitting those barriers in D would be?
Feb 24 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/24/2015 1:30 AM, Tobias Pankrath wrote:
 Are there meaningful performance comparisons
 between the two pointer types that would enable us to estimate
 how costly emitting those barriers in D would be?
Even 10% makes it a no-go. Even 1%. D has to be competitive in the most demanding environments. If you've got a server farm, 1% speedup means 1% fewer servers, and that can add up to millions of dollars.
Feb 24 2015
next sibling parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Tuesday, 24 February 2015 at 09:53:19 UTC, Walter Bright wrote:
 D has to be competitive in the most demanding environments.
But isn't that exactly the point? Garbage collected D is NOT competitive in demanding environments. -Wyatt
Feb 24 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/24/2015 5:28 AM, Wyatt wrote:
 On Tuesday, 24 February 2015 at 09:53:19 UTC, Walter Bright wrote:
 D has to be competitive in the most demanding environments.
But isn't that exactly the point? Garbage collected D is NOT competitive in demanding environments.
Write barriers are not the answer.
Feb 24 2015
prev sibling next sibling parent reply Benjamin Thaut <code benjamin-thaut.de> writes:
Am 24.02.2015 um 10:53 schrieb Walter Bright:
 On 2/24/2015 1:30 AM, Tobias Pankrath wrote:
 Are the meaningful performance comparisons
 between the two pointer types that would enable us to estimate
 how costly emitting those barriers in D would be?
Even 10% makes it a no-go. Even 1%. D has to be competitive in the most demanding environments. If you've got a server farm, 1% speedup means 1% fewer servers, and that can add up to millions of dollars.
You're seeing this completely one-sided. Even if write barriers make code slower by 10%, it's a non-issue if the GC collections get faster by 10% as well. Then on average the program will run at the same speed.
Feb 25 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/25/15 1:27 PM, Benjamin Thaut wrote:
 Am 24.02.2015 um 10:53 schrieb Walter Bright:
 On 2/24/2015 1:30 AM, Tobias Pankrath wrote:
 Are the meaningful performance comparisons
 between the two pointer types that would enable us to estimate
 how costly emitting those barriers in D would be?
Even 10% makes it a no-go. Even 1%. D has to be competitive in the most demanding environments. If you've got a server farm, 1% speedup means 1% fewer servers, and that can add up to millions of dollars.
You're seeing this completely one-sided. Even if write barriers make code slower by 10%, it's a non-issue if the GC collections get faster by 10% as well. Then on average the program will run at the same speed.
Hmmmm... not sure the math works out that way. -- Andrei
Feb 25 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 25 February 2015 at 21:44:05 UTC, Andrei 
Alexandrescu wrote:
 You're seeing this completely one-sided. Even if write barriers make code
 slower by 10%, it's a non-issue if the GC collections get faster by 10% as
 well. Then on average the program will run at the same speed.
Hmmmm... not sure the math works out that way. -- Andrei
Yeah, the math is wrong, but the general idea remains. I don't think it makes sense to completely discard the idea of barriers, especially when it comes to write barriers on the immutable heap. At least that should certainly pay off.
Feb 25 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 1:50 PM, deadalnix wrote:
 On Wednesday, 25 February 2015 at 21:44:05 UTC, Andrei Alexandrescu wrote:
 You're seeing this completely one-sided. Even if write barriers make code
 slower by 10%, it's a non-issue if the GC collections get faster by 10% as
 well. Then on average the program will run at the same speed.
Hmmmm... not sure the math works out that way. -- Andrei
Yeah, the math is wrong, but the general idea remains. I don't think it makes sense to completely discard the idea of barriers, especially when it comes to write barriers on the immutable heap. At least that should certainly pay off.
Part of the equation is that D simply does not use the GC anywhere near as pervasively as Java does, so the benefit/cost ratio is greatly reduced for D.
Feb 25 2015
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Feb 25, 2015 at 04:36:22PM -0800, Walter Bright via Digitalmars-d wrote:
 On 2/25/2015 1:50 PM, deadalnix wrote:
On Wednesday, 25 February 2015 at 21:44:05 UTC, Andrei Alexandrescu wrote:
 You're seeing this completely one-sided. Even if write barriers make
 code slower by 10%, it's a non-issue if the GC collections get faster
 by 10% as well. Then on average the program will run at the same
 speed.
Hmmmm... not sure the math works out that way. -- Andrei
Yeah, the math is wrong, but the general idea remains. I don't think it makes sense to completely discard the idea of barriers, especially when it comes to write barriers on the immutable heap. At least that should certainly pay off.
Part of the equation is that D simply does not use the GC anywhere near as pervasively as Java does, so the benefit/cost ratio is greatly reduced for D.
Do you have data to back that up? I don't know how typical this is, but in my own D code I tend to use arrays a lot, and they do tend to add significant GC load. A recent performance improvement attempt in one of my projects found that collection cycles take up to 40% of total running time (it's a CPU-bound process). Turning off GC collections and manually triggering them at strategic points with lower frequency gave me huge performance improvements, even though the collection cycles are still pretty slow. I'm not sure how write barriers would play into this scenario, though. The overall performance outside of GC collections would probably suffer a bit, but it might be more than made up for by more accurate collection cycles that take only a fraction of the time -- most of the scanned data is live, only a small subset needs to be collected. A generational GC would also greatly improve this particular use case, but that seems really remote in D right now. In any case, a bit more investigation into the actual costs/benefits of write barriers might give us more concrete data to base decisions on, instead of just a blanket dismissal of the whole idea. T -- It always amuses me that Windows has a Safe Mode during bootup. Does that mean that Windows is normally unsafe?
Feb 25 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 4:50 PM, H. S. Teoh via Digitalmars-d wrote:
 On Wed, Feb 25, 2015 at 04:36:22PM -0800, Walter Bright via Digitalmars-d
wrote:
 On 2/25/2015 1:50 PM, deadalnix wrote:
 On Wednesday, 25 February 2015 at 21:44:05 UTC, Andrei Alexandrescu wrote:
 You seeing this completely one sided. Even if write barries make
 code slower by 10% its a non issue if the GC collections get faster
 by 10% as well. Then in average the program will run at the same
 speed.
Hmmmm... not sure the math works out that way. -- Andrei
Yeah the math are wrong, but the general idea remains. I don't think it make sens to completely discard the idea of barriers, especially when it come to write barrier on the immutable heap. At least that should certainly pay off.
Part of the equation is D simply does not use GC anywhere near as pervasively as Java does, so the benefit/cost is greatly reduced for D.
Do you have data to back that up?
I've written a Java compiler and a GC for the Java VM (for Symantec, back in the '90s). I'm familiar with the code generated for Java, and the code generated for D. Yes, I'm pretty comfortable with the assessment of how often pointers are GC pointers and how often they are not.
 I don't know how typical this is, but in my own D code I tend to use
 arrays a lot, and they do tend to add significant GC load. A recent
 performance improvement attempt in one of my projects found that
 collection cycles take up to 40% of total running time (it's a CPU-bound
 process). Turning off GC collections and manually triggering them at
 strategic points with lower frequency gave me huge performance
 improvements, even though the collection cycles are still pretty slow.
Note that you didn't need write barriers for that.
 I'm not sure how write barriers would play into this scenario, though.
 The overall performance outside of GC collections would probably suffer
 a bit, but it might be more than made up for by more accurate collection
 cycles that take only a fraction of the time -- most of the scanned data
 is live, only a small subset needs to be collected. A generational GC
 would also greatly improve this particular use case, but that seems
 really remote in D right now.
Writing a generational collector for D is possible right now with no language changes, it's just that nobody has bothered to do it. Don't need write barriers for it, either.
 In any case, a bit more investigation into
 the actual costs/benefits of write barriers might give us more concrete
 data to base decisions on, instead of just a blanket dismissal of the
 whole idea.
Except that I've actually written GCs that used write barriers. I've been there, done that. Of course, I might still be wrong. If you want to prove me wrong, do the work. You don't need compiler changes to prove yourself right, you can code in the write barriers explicitly.
Feb 25 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 02:48:15 UTC, Walter Bright 
wrote:
 Writing a generational collector for D is possible right now 
 with no language changes, it's just that nobody has bothered to 
 do it. Don't need write barriers for it, either.
How are you planning to track the assignment of a pointer to the young generation into the old generation? Because if you plan to rescan the whole old generation, this is not exactly a generational GC.
Feb 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 7:27 PM, deadalnix wrote:
 On Thursday, 26 February 2015 at 02:48:15 UTC, Walter Bright wrote:
 Writing a generational collector for D is possible right now with no language
 changes, it's just that nobody has bothered to do it. Don't need write
 barriers for it, either.
How are you planning to track the assignment of a pointer to the young generation into the old generation? Because if you plan to rescan the whole old generation, this is not exactly a generational GC.
A lot of benefit simply came from compacting all the remaining used allocations together, essentially defragging the memory.
Feb 25 2015
next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Thursday, 26 February 2015 at 04:08:32 UTC, Walter Bright 
wrote:
 On 2/25/2015 7:27 PM, deadalnix wrote:
 On Thursday, 26 February 2015 at 02:48:15 UTC, Walter Bright 
 wrote:
 Writing a generational collector for D is possible right now 
 with no language
 changes, it's just that nobody has bothered to do it. Don't 
 need write
 barriers for it, either.
How are you planning to track the assignment of a pointer to the young generation into the old generation? Because if you plan to rescan the whole old generation, this is not exactly a generational GC.
A lot of benefit simply came from compacting all the remaining used allocations together, essentially defragging the memory.
Is this implying you've begun work on a compacting D collector, or are you relating to your Java experiences?
Feb 25 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 8:51 PM, weaselcat wrote:
 Is this implying you've begun work on a compacting D collector,
No.
 or are you relating to your Java experiences?
Yes.
Feb 26 2015
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 04:08:32 UTC, Walter Bright 
wrote:
 On 2/25/2015 7:27 PM, deadalnix wrote:
 On Thursday, 26 February 2015 at 02:48:15 UTC, Walter Bright 
 wrote:
 Writing a generational collector for D is possible right now 
 with no language
 changes, it's just that nobody has bothered to do it. Don't 
 need write
 barriers for it, either.
How are you planning to track the assignment of a pointer to the young generation into the old generation? Because if you plan to rescan the whole old generation, this is not exactly a generational GC.
A lot of benefit simply came from compacting all the remaining used allocations together, essentially defragging the memory.
That is not answering the question at all.
Feb 25 2015
prev sibling next sibling parent reply Benjamin Thaut <code benjamin-thaut.de> writes:
Am 26.02.2015 um 05:08 schrieb Walter Bright:
 On 2/25/2015 7:27 PM, deadalnix wrote:
 On Thursday, 26 February 2015 at 02:48:15 UTC, Walter Bright wrote:
 Writing a generational collector for D is possible right now with no
 language
 changes, it's just that nobody has bothered to do it. Don't need write
 barriers for it, either.
How are you planning to track the assignment of a pointer to the young generation into the old generation? Because if you plan to rescan the whole old generation, this is not exactly a generational GC.
A lot of benefit simply came from compacting all the remaining used allocations together, essentially defragging the memory.
What you are describing is a compacting GC, not a generational GC. Please just describe in words how you would do a generational GC without write barriers. Because, just as deadalnix wrote, the problem is tracking pointers within the old generation that point to the new generation.
Feb 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 11:01 PM, Benjamin Thaut wrote:
 What you are describing is a compacting GC and not a generational GC. Please
 just describe in words how you would do a generational GC without write
 barriers. Because just as deadalnix wrote, the problem is tracking pointers
 within the old generation that point to the new generation.
It was a generational GC; I described earlier how it used page faults instead of write barriers. I eventually removed the page fault system because it was faster without it.
Feb 26 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 19:58:56 UTC, Walter Bright 
wrote:
 On 2/25/2015 11:01 PM, Benjamin Thaut wrote:
 What you are describing is a compacting GC and not a 
 generational GC. Please
 just describe in words how you would do a generational GC 
 without write
 barriers. Because just as deadalnix wrote, the problem is 
 tracking pointers
 within the old generation that point to the new generation.
It was a generational GC; I described earlier how it used page faults instead of write barriers. I eventually removed the page fault system because it was faster without it.
Page faults ARE write barriers.
Feb 26 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2015 12:29 PM, deadalnix wrote:
Page faults ARE write barriers.
When we all start debating what the meaning of "is" is, it's time for me to check out.
Feb 26 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 20:56:25 UTC, Walter Bright 
wrote:
 On 2/26/2015 12:29 PM, deadalnix wrote:
Page faults ARE write barriers.
When we all start debating what the meaning of "is" is, it's time for me to check out.
You are the one playing that game. You said earlier that you used the MMU as a write barrier for a GC in Java, and now you are deciding that this is not a write barrier anymore. But indeed, that settles the debate. If you are down to redefining things as not write barriers so as to pretend you can implement what you redefine as a generational GC (which the rest of the world calls a compacting GC), then what is there to discuss? You don't seem to be up to date about recent GC technologies and/or the terminology used in the area.
Feb 26 2015
prev sibling parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 02/26/2015 09:29 PM, deadalnix wrote:
Page faults ARE write barriers.
If done at kernel mode, it's too expensive anyhow.
Feb 27 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 27 February 2015 at 22:32:06 UTC, Martin Nowak wrote:
 On 02/26/2015 09:29 PM, deadalnix wrote:
Page faults ARE write barriers.
If done at kernel mode, it's too expensive anyhow.
As mentioned, that cost does not apply to the immutable part of the heap, for obvious reasons.
Feb 27 2015
prev sibling next sibling parent reply Benjamin Thaut <code benjamin-thaut.de> writes:
Am 26.02.2015 um 20:58 schrieb Walter Bright:
 It was a generational gc, I described earlier how it used page faults
 instead of write barriers. I eventually removed the page fault system
 because it was faster without it.
Page faults are inferior to compiler-generated write barriers, because with a page fault strategy you pay for every write, even if the write does not store a pointer. Compiler-generated write barriers apply only to pointers written through another pointer.
Feb 26 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 21:17:57 UTC, Benjamin Thaut 
wrote:
 Am 26.02.2015 um 20:58 schrieb Walter Bright:
 It was a generational gc, I described earlier how it used page 
 faults
 instead of write barriers. I eventually removed the page fault 
 system
 because it was faster without it.
Page faults are inferior to compiler-generated write barriers, because with a page fault strategy you pay for every write, even if the write does not store a pointer. Compiler-generated write barriers apply only to pointers written through another pointer.
It is a tradeoff. You can implement write barriers in the codegen, in which case you check on every pointer write; the check is cheap but pervasive. Or you can implement them using memory protection, in which case it is WAY more expensive and will trap all writes, but ONLY when needed (it can be turned on and off), and usually you trap once per page. Note that in D you have unions and all kinds of crap like that, so what constitutes writing a pointer is non-obvious, and so the tradeoff is very different than it is in other languages.
Feb 26 2015
parent reply Benjamin Thaut <code benjamin-thaut.de> writes:
Am 27.02.2015 um 00:05 schrieb deadalnix:
 Note that in D, you have union and all kind of crap like that, so what
 is writing a pointer is non obvious and so the tradeof is very different
 than it is in other languages.
To have any chance of implementing a better GC in D, I would simply start off by assuming all code is safe. For code that is not safe, the user would have to make sure it plays nice with the GC. This would also apply to unions which contain pointer types. If you want to write a good GC that supports non-safe features without user input, you don't even have to start, in my opinion.
Feb 26 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 27 February 2015 at 07:09:20 UTC, Benjamin Thaut wrote:
 Am 27.02.2015 um 00:05 schrieb deadalnix:
 Note that in D, you have union and all kind of crap like that, 
 so what
 is writing a pointer is non obvious and so the tradeof is very 
 different
 than it is in other languages.
To have any chance of implementing a better GC in D, I would simply start off by assuming all code is safe. For code that is not safe, the user would have to make sure it plays nice with the GC. This would also apply to unions which contain pointer types. If you want to write a good GC that supports non-safe features without user input, you don't even have to start, in my opinion.
That is a reasonable approach (and indeed, I would assume that system code has to ensure that it does not do something that will confuse the GC). Still, what you can do when compiling AOT is different from what you can do when you JIT. For instance, when you JIT, you can add write barriers and remove them on the fly as needed. When doing AOT, they must be always on or always off.
Feb 27 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-02-26 20:58, Walter Bright wrote:

 It was a generational gc, I described earlier how it used page faults
 instead of write barriers. I eventually removed the page fault system
 because it was faster without it.
Instead you used? -- /Jacob Carlborg
Feb 26 2015
prev sibling parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 02/26/2015 05:08 AM, Walter Bright wrote:
 
 A lot of benefit simply came from compacting all the remaining used
 allocations together, essentially defragging the memory.
Compacting is indeed easy once we have a precise GC, and can be done partially, i.e. objects pointed to by the stack/register are pinned.
Mar 02 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/2/2015 11:38 AM, Martin Nowak wrote:
 Compacting is indeed easy once we have a precise GC, and can be done
 partially, i.e. objects pointed to by the stack/register are pinned.
Also unions.
Mar 02 2015
parent "Martin Nowak" <code dawg.eu> writes:
On Tuesday, 3 March 2015 at 02:05:08 UTC, Walter Bright wrote:
 On 3/2/2015 11:38 AM, Martin Nowak wrote:
 Compacting is indeed easy once we have a precise GC, and can 
 be done
 partially, i.e. objects pointed to by the stack/register are 
 pinned.
Also unions.
Compacting doesn't solve the inherent performance problem of a conservative GC, though. It's the capability to run small incremental collections that makes modern GCs fast. With a conservative GC you always have to mark the complete heap to free even a single object. Shoveling a 1GB heap from main memory through your CPU already takes 250ms, and on top of that comes memory latency for non-sequential traversal and the actual marking.
Mar 03 2015
prev sibling parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, 2015-02-25 at 18:48 -0800, Walter Bright via Digitalmars-d
wrote:
 On 2/25/2015 4:50 PM, H. S. Teoh via Digitalmars-d wrote:
 […]

 Do you have data to back that up?
I've written a Java compiler and a GC for the Java VM (for Symantec, back in the '90s). I'm familiar with the code generated for Java, and the code generated for D.
Have you studied the G1 GC? Any "data" from the 1990s regarding Java, and indeed any other programming language with a lifespan of 20+ years, is suspect, to say the least. Also, what is said above is not data, it is opinion. All too often in these mailing lists, performance issues are argued with pure opinion and handwaving, not to mention mud-slinging. There should be a rule saying that no-one, but no-one, is allowed to make any claims about anything to do with performance without first having actually done a proper experiment and presented actual real data with statistical analysis. Nigh on every GC-related comment in this thread has been a waste of time to read. It must therefore have been a waste of time to write. Time that could have been used doing something constructive. Perhaps the benchmark module now has enough resources behind it to become a tool for running experiments that answer these questions with data rather than unfounded opinion. -- Russel. ============================================================================= Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Feb 25 2015
next sibling parent "Ola Fosheim Grøstad" writes:
On Thursday, 26 February 2015 at 07:05:56 UTC, Russel Winder 
wrote:
 pure opinion and handwaving, not to mention mud-slinging. There 
 should
 be a rule saying that no-one, but no-one, is allowed to make 
 any claims
 about anything to do with performance without first having 
 actually done
 a proper experiment and presented actual real data with 
 statistical
 analysis.
I agree in general, but one can argue about the theoretical best performance based on computer architecture and language features. The fact is, to get good performance you need cache-line-friendly layout. D is stuck with:

1. Fixed C struct layout.
2. Separate compilation units that leave the compiler blind.
3. C backends that are less GC-friendly than Java/JavaScript.
4. No compiler control over multi-threading.
5. Generic programming without compiler-optimized data layout (and that hurts).

It is possible to do "atomic writes" cheaply on x86 if you stick everything on the same cache line and schedule instructions around the SFENCE in a clever manner to prevent pipeline stalls. It is possible to avoid pointers and use indexes, thus limiting the extent of a precise scan. So surely you can create an experiment that gets performance close to the theoretical limit, but it does not tell you how it will work out for a complicated generic-programming-based program built on D semantics and "monkey programming". Computer architecture is also moving. AFAIK, on Intel MIC you get fast RAM close to the core (multi-layered on top) and slower shared RAM. There is also a big difference in memory bus throughput, ranging from ~5-30 GB/s peak on desktop CPUs. But before you measure anything you need to agree on what you want measured. You need a baseline. IMO, the only acceptable baseline is carefully hand-crafted data layout and manual memory management...
Feb 26 2015
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 11:05 PM, Russel Winder via Digitalmars-d wrote:
 Have you studied the G1 GC?
Nope.
 Any "data" from the 1990 regarding Java,
1998 or so. I wrote D's GC some years later.
 and indeed any other programming language with a lifespan of 20+years, is
 suspect, to say the least. Also what is said above is not data, it is
 opinion.
The day C++ compilers routinely generate write barriers is the day I believe they don't have runtime cost.
Feb 26 2015
prev sibling parent reply Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 02/26/2015 01:50 AM, H. S. Teoh via Digitalmars-d wrote:
 I don't know how typical this is, but in my own D code I tend to
 use arrays a lot, and they do tend to add significant GC load. A
 recent performance improvement attempt in one of my projects found
 that collection cycles take up to 40% of total running time (it's a
 CPU-bound process).
Is this project public? I'd like to benchmark it with the new GC growth strategy.
Feb 27 2015
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Feb 27, 2015 at 11:30:40PM +0100, Martin Nowak via Digitalmars-d wrote:
 -----BEGIN PGP SIGNED MESSAGE-----
 Hash: SHA1
 
 On 02/26/2015 01:50 AM, H. S. Teoh via Digitalmars-d wrote:
 I don't know how typical this is, but in my own D code I tend to
 use arrays a lot, and they do tend to add significant GC load. A
 recent performance improvement attempt in one of my projects found
 that collection cycles take up to 40% of total running time (it's a
 CPU-bound process).
Is this project public? I'd like to benchmark it with the new GC growth strategy.
[...] I haven't posted the code anywhere so far, though I probably will in the future. If you like, though, I could send you a tarball for you to play around with. T -- The day Microsoft makes something that doesn't suck is probably the day they start making vacuum cleaners... -- Slashdotter
Feb 27 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 00:36:26 UTC, Walter Bright 
wrote:
 On 2/25/2015 1:50 PM, deadalnix wrote:
 On Wednesday, 25 February 2015 at 21:44:05 UTC, Andrei 
 Alexandrescu wrote:
You are seeing this completely one-sided. Even if write barriers make code slower by 10%, it's a non-issue if the GC collections get faster by 10% as well. Then on average the program will run at the same speed.
Hmmmm... not sure the math works out that way. -- Andrei
Yeah, the math is wrong, but the general idea remains. I don't think it makes sense to completely discard the idea of barriers, especially when it comes to write barriers on the immutable heap. At least that should certainly pay off.
Part of the equation is D simply does not use GC anywhere near as pervasively as Java does, so the benefit/cost is greatly reduced for D.
You seem to avoid the important part of my message: write barriers tend to be very cheap on immutable data, because, as a matter of fact, you don't write immutable data (in fact you do to some extent, but the amount of writes is minimal). There is no reason not to leverage this for D, and Java comparisons are irrelevant on the subject, as Java does not have the concept of immutability. In the same way, we can use the fact that thread-local data is not supposed to refer to other threads' data.
Feb 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 6:57 PM, deadalnix wrote:
You seem to avoid the important part of my message: write barriers tend to be very cheap on immutable data, because, as a matter of fact, you don't write immutable data (in fact you do to some extent, but the amount of writes is minimal).

There is no reason not to leverage this for D, and Java comparisons are irrelevant on the subject, as Java does not have the concept of immutability.

In the same way, we can use the fact that thread-local data is not supposed to refer to other threads' data.
Of course, you don't pay a write barrier cost when you don't write to data; whether it is immutable or not is irrelevant.
Feb 25 2015
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 04:11:42 UTC, Walter Bright 
wrote:
 On 2/25/2015 6:57 PM, deadalnix wrote:
You seem to avoid the important part of my message: write barriers tend to be very cheap on immutable data, because, as a matter of fact, you don't write immutable data (in fact you do to some extent, but the amount of writes is minimal).

There is no reason not to leverage this for D, and Java comparisons are irrelevant on the subject, as Java does not have the concept of immutability.

In the same way, we can use the fact that thread-local data is not supposed to refer to other threads' data.
Of course, you don't pay a write barrier cost when you don't write to data; whether it is immutable or not is irrelevant.
It DOES matter, as users tend to write way more mutable data than immutable data. Pretending otherwise is ridiculous.
Feb 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 9:01 PM, deadalnix wrote:
 On Thursday, 26 February 2015 at 04:11:42 UTC, Walter Bright wrote:
 On 2/25/2015 6:57 PM, deadalnix wrote:
You seem to avoid the important part of my message: write barriers tend to be very cheap on immutable data, because, as a matter of fact, you don't write immutable data (in fact you do to some extent, but the amount of writes is minimal).

There is no reason not to leverage this for D, and Java comparisons are irrelevant on the subject, as Java does not have the concept of immutability.

In the same way, we can use the fact that thread-local data is not supposed to refer to other threads' data.
Of course, you don't pay a write barrier cost when you don't write to data, whether it is immutable or not is irrelevant.
It DOES matter, as users tend to write way more mutable data than immutable data. Pretending otherwise is ridiculous.
I don't really understand your point. Write barriers are emitted for code that is doing a write. This doesn't happen for code that doesn't do writes. For example:

x = 3;     // write barrier emitted for write to x!
y = x + 5; // no write barrier emitted for read of x!

How would making x immutable make (x + 5) faster?
Feb 26 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 20:15:37 UTC, Walter Bright 
wrote:
 I don't really understand your point. Write barriers are 
 emitted for code that is doing a write.
That is exactly the point. When you don't write, you don't pay for write barriers. It is fairly straightforward that the argument that write barriers are expensive and undesirable does not follow for the immutable heap.
Feb 26 2015
prev sibling parent Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 02/25/2015 10:50 PM, deadalnix wrote:
 
I don't think it makes sense to completely discard the idea of barriers, especially when it comes to write barriers on the immutable heap. At least that should certainly pay off.
Before the argument gets lost. http://forum.dlang.org/post/mcqr3s$cmf$1 digitalmars.com
 Write barriers would cost a low single digit, e.g. 3-4%.
 While searching for ways to avoid the cost I found an interesting
alternative to generational GCs.

https://github.com/D-Programming-Language/druntime/pull/1081#issuecomment-69151660
Mar 02 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 1:27 PM, Benjamin Thaut wrote:
You are seeing this completely one-sided. Even if write barriers make code slower by 10%, it's a non-issue if the GC collections get faster by 10% as well. Then on average the program will run at the same speed.
You'll be paying that 10% penalty for every write access, not just for GC data. D is not Java in that D has a lot of objects that are not on the GC heap. Tradeoffs appropriate for Java are not necessarily appropriate for D.
Feb 26 2015
parent reply Benjamin Thaut <code benjamin-thaut.de> writes:
Am 26.02.2015 um 21:39 schrieb Walter Bright:
 On 2/25/2015 1:27 PM, Benjamin Thaut wrote:
You are seeing this completely one-sided. Even if write barriers make code slower by 10%, it's a non-issue if the GC collections get faster by 10% as well. Then on average the program will run at the same speed.
You'll be paying that 10% penalty for every write access, not just for GC data. D is not Java in that D has a lot of objects that are not on the GC heap. Tradeoffs appropriate for Java are not necessarily appropriate for D.
Write barriers only have to be generated for writes of pointers through pointers, so you are not paying a penalty for every write.

class Bar
{
    int x;
    Bar other;

    void method()
    {
        x = 5;        // no write barrier
        other = this; // write barrier
    }
}

Also, the following code will not generate a single write barrier:

void someFunc(uint[] ar)
{
    for (uint* it = ar.ptr; it < ar.ptr + ar.length; it++)
    {
        *it = 5;
    }
}
Feb 26 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2015 1:15 PM, Benjamin Thaut wrote:
 Am 26.02.2015 um 21:39 schrieb Walter Bright:
 You'll be paying that 10% penalty for every write access, not just for
 GC data. D is not Java in that D has a lot of objects that are not on
 the GC heap. Tradeoffs appropriate for Java are not necessarily
 appropriate for D.
Write barries only have to be generated for writes to pointers through pointers.
Of course.
 So you are not paying a penality for every write.
Sigh. That does not change the point of what I wrote.
Feb 26 2015
prev sibling parent Martin Nowak <code+news.digitalmars dawg.eu> writes:
On 02/24/2015 10:53 AM, Walter Bright wrote:
 
 Even 10% makes it a no-go. Even 1%.
Write barriers would cost a low single digit, e.g. 3-4%. While searching for ways to avoid the cost I found an interesting alternative to generational GCs. https://github.com/D-Programming-Language/druntime/pull/1081#issuecomment-69151660
Feb 27 2015
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Tuesday, 24 February 2015 at 09:30:33 UTC, Tobias Pankrath 
wrote:
 I suspect it would be a terrible performance hit.
It would be nice to have some numbers backing this up.
This is the approach taken by Active Oberon and Modula-3. Pointers are GC-managed by default, but can be declared as untraced pointers in code considered system, like in D.
Do they have concurrent gc and emit barriers for each write to a default pointer? Do they have precise scanning and don't scan the untraced pointers? Are the meaningful performance comparisons between the two pointer types that would enable us to estimate how costly emitting those barriers in D would be?
Both Active Oberon and Modula-3 support threading at the language level, so multi-threading is present in their runtimes. The latest documentation available for Active Oberon is from 2002. http://e-collection.library.ethz.ch/view/eth:26082 Originally it was a concurrent mark-and-sweep GC, with a stop-the-world phase for collection. Other algorithms are discussed in the paper. Sadly, ETHZ is done with Oberon, as their startup failed to pick up steam in the industry (selling Component Pascal, an evolution of Oberon-2). As for Modula-3, due to the way the whole DEC, Olivetti, Compaq, HP process went, it isn't easy to find much documentation online. I had a few books. The latest implementation had a concurrent, incremental, generational GC. https://modula3.elegosoft.com/cm3/doc/help/cm3/gc.html 2002 is also around the time that Modula-3 development stopped. Dylan, which I just remembered while writing this, used the MPS collector. http://www.ravenbrook.com/project/mps/doc/2002-01-30/ismm2002-paper/ismm2002.html Sadly the industry went JVM/CLR instead, and only now are we getting back to native systems programming with GC languages. If those languages had been picked up by the industry instead of JVM/CLR, the situation could be quite different. As always, it is a matter of where the money for research gets pumped. -- Paulo
Feb 24 2015
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
 The latest implementation had a concurrent incremental 
 generational GC.

 https://modula3.elegosoft.com/cm3/doc/help/cm3/gc.html
According to this they never had a concurrent or incremental GC on x86.
Feb 24 2015
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Tuesday, 24 February 2015 at 11:08:59 UTC, Tobias Pankrath 
wrote:
 The latest implementation had a concurrent incremental 
 generational GC.

 https://modula3.elegosoft.com/cm3/doc/help/cm3/gc.html
According to this they never had a concurrent or incremental GC on x86.
Sorry about the caps; I couldn't find a better way to add emphasis. Not sure where you found the information about x86, or why it should matter. "The current collector is, by default, INCREMENTAL and GENERATIONAL. The interruptions of service should be very small, and the overall performance should be better than with the previous collectors." "Note that the new optional BACKGROUND collection THREAD is not on by default; this may change in the future." I take this to mean that the latest collector was incremental, with the ability to work concurrently when the background collection thread was enabled, on some CPU architecture, regardless of which one. Modula-3 died when its team kept changing hands between DEC, Olivetti, Compaq and HP. It has hardly had any new development since 2000; its GC would surely look different if development hadn't stopped, and it might even be quite good on x86. -- Paulo
Feb 24 2015
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Tuesday, 24 February 2015 at 12:31:06 UTC, Paulo  Pinto wrote:
 Sorry about the caps, couldn't find a better way to emphasis. 
 Not sure where you found out the information about x86, or why 
 it should matter.
I found an (apparently older) version of the documentation earlier that looked exactly the same, so I didn't bother to read your link carefully enough.
 "The current collector is, by default, INCREMENTAL and 
 GENERATIONAL. The interruptions of service should be very 
 small, and the overall performance should be better than with 
 the previous collectors."
Yes, however from your page now:
 Now  M3novm is the default.
And if you follow the link:
  M3novm implies  M3noincremental and  M3nogenerational.
Maybe that's a documentation error. This was the place where the other version mentioned that x86 is not supported.

While I like that you constantly remind us of the achievements of older programming languages, you often do it with a "that problem was solved in language X 20 years ago" attitude, but almost never elaborate how that solution could be applied to D. When taking a closer look, I often find that those languages solved a similar but different problem, and the solution does not apply to D at all.

For example, the last time, in the discussion on separate compilation, templates and object files, you blamed the C tool chain and pointed to Pascal/Delphi. But they didn't solve the problem, because they didn't face it in the first place, because they didn't have the template and meta-programming capabilities of D.

At the problem at hand: I don't see how Modula-3's distinction between system and default pointer types, or the lessons they learned, helps in any way to improve the current D GC.
Feb 24 2015
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Tuesday, 24 February 2015 at 13:07:38 UTC, Tobias Pankrath 
wrote:
 On Tuesday, 24 February 2015 at 12:31:06 UTC, Paulo  Pinto 
 wrote:
 Sorry about the caps, couldn't find a better way to emphasis. 
 Not sure where you found out the information about x86, or why 
 it should matter.
I found an (apparently older) version of the documentation earlier that looked exactly the same, so I didn't bother to read your link carefully enough.
 "The current collector is, by default, INCREMENTAL and 
 GENERATIONAL. The interruptions of service should be very 
 small, and the overall performance should be better than with 
 the previous collectors."
Yes, however from your page now:
 Now  M3novm is the default.
And if you follow the link:
  M3novm implies  M3noincremental and  M3nogenerational.
Maybe that's a documentation error. This was the place where the other version mentioned that x86 is not supported.

While I like that you constantly remind us of the achievements of older programming languages, you often do it with a "that problem was solved in language X 20 years ago" attitude, but almost never elaborate how that solution could be applied to D. When taking a closer look, I often find that those languages solved a similar but different problem, and the solution does not apply to D at all.

For example, the last time, in the discussion on separate compilation, templates and object files, you blamed the C tool chain and pointed to Pascal/Delphi. But they didn't solve the problem, because they didn't face it in the first place, because they didn't have the template and meta-programming capabilities of D.
Yes, I agree with you. It is just that I would like to see a language like D being adopted at large, so as a language geek who has spent too much time in language research during compiler design classes, I like to pull this information out of the attic.

When knowledge goes away, people get a different understanding of reality; for example, many young developers think C was the very first systems programming language, which isn't the case given the research going on outside AT&T.

I am well aware that those solutions don't cover 100% of D's use cases, but maybe they have enough juice to provide ideas in D's context. It is always a matter of research and funding for the said ideas. If I were in academia, applying these ideas to improve D would be a good source for papers and theses. As such, I cannot do much more than throw them over the wall and see if they inspire someone.
 At the problem at hand: I don't see how Module3's distinction 
 between system and default pointer types or the lessons they 
 learned help in any way to improve the current D GC.
It helps reduce the pressure on the GC-allocated memory, and also allows for giving pointers straight to external code. Maybe given the kind of implicit allocations in D vs Modula-3, it doesn't help.

But yeah, too much noise from a D dabbler, I guess.

--
Paulo
Feb 24 2015
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/23/2015 11:53 PM, Jacob Carlborg wrote:
 On 2015-02-23 21:30, Walter Bright wrote:

 Count me among those.

 In Java, write barriers make sense because Java uses the GC for
 everything. Pretty much every indirection is a GC reference.

 This is not at all true with D code. But since the compiler can't know
 that, it has to insert write barriers for all those dereferences
 regardless.
The alternative would be to have two kinds of pointers, one for GC-allocated data and one for other kinds of data. But I know you don't like that either.
That kinda defeats much of the point to having a GC.
 I suspect it would be a terrible performance hit.
It would be nice to have some numbers backing this up.
I've seen enough benchmarks that purport to show that Java is just as fast as C++, as long as only primitive types are being used and not pointers.

I've done enough benchmarks to know that inserting even one extra instruction in a tight loop has significant consequences. If you don't believe that, feel free to try it and see.

D is not going to have competitive performance with systems programming languages if write barriers are added.
Feb 24 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/24/2015 1:50 AM, Walter Bright wrote:
 On 2/23/2015 11:53 PM, Jacob Carlborg wrote:
 On 2015-02-23 21:30, Walter Bright wrote:
 I suspect it would be a terrible performance hit.
It would be nice to have some numbers backing this up.
I've seen enough benchmarks that purport to show that Java is just as fast as C++, as long as only primitive types are being used and not pointers.
Let me put it another way. You don't believe me about the performance hit. My experience with people who don't believe me is that they won't believe any benchmarks I produce, either. They'll say I didn't do the benchmark right, it is not representative, the data is cherry-picked, nobody would write code that way, etc. I quit writing benchmarks for public consumption for that reason years ago.

It shouldn't be hard for you to put together a benchmark you can believe in.
Feb 24 2015
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 24 February 2015 at 07:53:52 UTC, Jacob Carlborg 
wrote:
 On 2015-02-23 21:30, Walter Bright wrote:

 Count me among those.

 In Java, write barriers make sense because Java uses the GC for
 everything. Pretty much every indirection is a GC reference.

 This is not at all true with D code. But since the compiler 
 can't know
 that, it has to insert write barriers for all those 
 dereferences
 regardless.
The alternative would be to have two kinds of pointers, one for GC-allocated data and one for other kinds of data. But I know you don't like that either. We kind of already have this with class references and regular pointers, but that would tie classes to the GC.
 I suspect it would be a terrible performance hit.
It would be nice to have some numbers backing this up.
The page fault strategy is used by ML family language's GC and they get really good performance out of it. That being said, in ML like language most things are immutable, so they are a
Feb 24 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/24/2015 11:07 AM, deadalnix wrote:
 The page fault strategy is used by ML family language's GC and they get really
 good performance out of it. That being said, in ML like language most things
are
 immutable, so they are a
I wrote a gc for Java that used the page fault strategy. It was slower and so I went with another strategy.
Feb 24 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 24 February 2015 at 23:49:21 UTC, Walter Bright wrote:
 On 2/24/2015 11:07 AM, deadalnix wrote:
 The page fault strategy is used by ML family language's GC and 
 they get really
 good performance out of it. That being said, in ML like 
 language most things are
 immutable, so they are a
I wrote a gc for Java that used the page fault strategy. It was slower and so I went with another strategy.
That is fairly obvious. Java is not exactly putting emphasis on immutability...
Feb 24 2015
prev sibling parent reply Benjamin Thaut <code benjamin-thaut.de> writes:
Am 22.02.2015 um 10:48 schrieb Russel Winder via Digitalmars-d:
 On Sun, 2015-02-22 at 10:21 +0100, Benjamin Thaut via Digitalmars-d
 wrote:
 Am 22.02.2015 um 03:13 schrieb Walter Bright:
 Nobody thinks GC is suitable for hard realtime.
I think you should know Manu well enough by now to know he is not talking about hard realtime but soft realtime (e.g. games). There are GCs which handle this situation pretty well, but D's GC is not one of them.
If the D GC really is quite so bad, why hasn't a cabal formed to create a new GC that is precise, fast and efficient?
There have been countless discussions about D's GC, how bad it is, and how to improve it. But it always turns out that it would be a ton of work, or someone doesn't like the consequences. The key points always are:

1) We need fully precise pointer discovery, even for pointers on the stack.
2) We need write barriers.

1) is a really complex task for a language like D. There is a reason why Java has such a small feature set.

2) For some reason nobody likes write barriers, because the general fear is that they will cost performance, so it was decided not to implement them. (Without actually measuring the performance impact vs the GC improvement.) The problem is that to implement a non-stop-the-world GC you need 2).

So until there is an implementation of both of the mentioned points, there will be no better GC in D.

You can fake 2) with fork on Linux; that's what the CDGC did (see the DConf talk). This works because fork has copy-on-write semantics, but there is no equivalent on Windows. Experiments by Rainer Schuetze to implement similar copy-on-write semantics on Windows have shown major overhead, which is most likely even worse than implementing write barriers themselves.

Experiments implementing a heap-precise GC, again by Rainer Schuetze, have shown that precise heap scanning is slower compared to imprecise scanning.

In my opinion the key problem is that D was designed in a way that requires a GC, but D was not designed in a way to properly support a GC. (shared, immutable and other things basically prevent thread-local pools.)

Kind Regards
Benjamin
Feb 22 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 22 February 2015 at 12:55:17 UTC, Benjamin Thaut wrote:
 1) We need full percise pointer discovery, even for pointers on 
 the stack.
 2) We need write barriers.

 1) Is a really complex task for a language like D. There is a 
 reason why java has so a small feature set.
Worse than complex, you need a strongly typed language. D's type system is ad hoc, aka broken.
 2) For some reason nobody likes write barries because the 
 general fear is, that they will cost performance, so it was 
 decided to not implement them. (Without actually measuring 
 performance impact vs GC improvement)
Barriers in a loop are not good...

3. You need to get rid of destructors et al. from the GC.
 The problem is that, to implement a non stop-the-world-GC you 
 need 2).
Fortunately you can make do with "stop the GC threads" and still have good real-time responsiveness. But that requires:

1. minimal scanning (implies significant language and compiler changes)

2. non-polluting, cache-friendly scanning (implies tunable or slow scan in favour of real-time non-GC threads)
Feb 22 2015
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 22 February 2015 at 12:13, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2/21/2015 4:43 PM, Manu via Digitalmars-d wrote:
 D's GC is terrible, and after 6 years hanging out in this place, I
 have seen precisely zero development on the GC front. Nobody can even
 imagine, let alone successfully implement a GC that covers realtime
 use requirements.
Nobody thinks GC is suitable for hard realtime.
 On the other hand, if 'scope' is implemented well, D may have some of
 the best tools in town for quality ARC implementation. There is a
 visible way forward for quality RC in D, and I think we could do
 better than Apple.
With 'return ref', which is now implemented, you can create a memory safe RefCounted type. However, nobody has bothered. Are you up for it? :-)
I can't overload on 'scope'. How can I create a scope constructor/destructor/postblit that doesn't perform the ref fiddling? On a tangent, can I pass rvalues to ref args now? That will massively sanitise linear algebra (matrix/vector) code big time!
Feb 22 2015
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 22 February 2015 at 16:15:44 UTC, Manu wrote:
 I can't overload on 'scope'. How can I create a scope
 constructor/destructor/postblit that doesn't perform the ref 
 fiddling?

 On a tangent, can I pass rvalues to ref args now? That will 
 massively
 sanitise linear algebra (matrix/vector) code big time!
The return ref thing is a feature that does not pay for itself.
Feb 22 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/22/2015 8:15 AM, Manu via Digitalmars-d wrote:
 I can't overload on 'scope'. How can I create a scope
 constructor/destructor/postblit that doesn't perform the ref fiddling?
I don't understand what you're trying to do.
Feb 22 2015
parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 February 2015 at 06:49, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2/22/2015 8:15 AM, Manu via Digitalmars-d wrote:
 I can't overload on 'scope'. How can I create a scope
 constructor/destructor/postblit that doesn't perform the ref fiddling?
I don't understand what you're trying to do.
struct RCThing
{
    RefType* instance;

    this(this) { IncRef(instance); }
    ~this() { DecRef(instance); }

    this(this) scope {} // <- scope instances don't need ref fiddling
    ~this() scope {}
}

Or various permutations along those lines. Ie, library types may eliminate their ref fiddling when they are scope.
Feb 22 2015
prev sibling next sibling parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Manu via Digitalmars-d"  wrote in message 
news:mailman.7037.1424565826.9932.digitalmars-d puremagic.com...

 I personally think ARC in D is the only way forwards. That is an
 unpopular opinion however... although I think I'm just being realistic
 ;)
A big part of why it's unpopular is that nobody, including you, wants to implement it to see if it's viable.
Feb 21 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 22 February 2015 at 13:53, Daniel Murphy via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 "Manu via Digitalmars-d"  wrote in message
 news:mailman.7037.1424565826.9932.digitalmars-d puremagic.com...

 I personally think ARC in D is the only way forwards. That is an
 unpopular opinion however... although I think I'm just being realistic
 ;)
A big part of why it's unpopular is that nobody, including you, wants to implement it to see if it's viable.
I have no idea where to start. But I think there's a more significant inhibiting factor; even if I were to spend months learning how to have a go at such a thing, I'm already convinced it would be rejected in principle. Why would anyone waste the time while it's so clearly off the table?
Feb 22 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/22/15 8:36 AM, Manu via Digitalmars-d wrote:
 On 22 February 2015 at 13:53, Daniel Murphy via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 "Manu via Digitalmars-d"  wrote in message
 news:mailman.7037.1424565826.9932.digitalmars-d puremagic.com...

 I personally think ARC in D is the only way forwards. That is an
 unpopular opinion however... although I think I'm just being realistic
 ;)
A big part of why it's unpopular is that nobody, including you, wants to implement it to see if it's viable.
I have no idea where to start.
Simple approaches to reference counting are accessible to any software engineer. The right starting point is "I used reference counting in this project, and here are my findings". A position such as the following makes the dialog very difficult:

1. One solution is deemed the only viable one.
2. Details and difficulties are unknown to the proposer.
3. It must be implemented by others, not the proposer.
4. It must be part of the language; any experimentation outside the language is considered an unnecessary waste of time.

Andrei
Feb 22 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 February 2015 at 03:13, Andrei Alexandrescu via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2/22/15 8:36 AM, Manu via Digitalmars-d wrote:
 On 22 February 2015 at 13:53, Daniel Murphy via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 "Manu via Digitalmars-d"  wrote in message
 news:mailman.7037.1424565826.9932.digitalmars-d puremagic.com...

 I personally think ARC in D is the only way forwards. That is an
 unpopular opinion however... although I think I'm just being realistic
 ;)
A big part of why it's unpopular is that nobody, including you, wants to implement it to see if it's viable.
I have no idea where to start.
Simple approaches to reference counting are accessible to any software engineer. The right starting point is "I used reference counting in this project, and here are my findings". A position such as the following makes the dialog very difficult: 1. One solution is deemed the only viable.
Propose how GC will ever be a success? I honestly don't care, I just want a solution that's acceptable. I have tried to convince myself that GC would be fine for half a decade now, but I've run out of patience.

I've been sitting around waiting for years for someone to say how GC will ever be an acceptable solution. How long do I have to wait? Evidence suggests there IS only one viable solution. Granted, that's not proven viable; it has barely been explored (on account of borderline religious opposition).

I can see why GC will never work in D; I cannot see why ARC will never work.
 2. Details and difficulties are unknown to the proposer.
I'm proposing that the *conversation* needs to be taken seriously. Every time I've raised it in the past it's been immediately dismissed and swiped off the table.

I'm not an expert on garbage collection; I have practically nothing to add. I'm also not particularly interested in garbage collection (of any form); I just want it to work. But it doesn't take an expert to recognise that in 6 years, nobody has presented any forward momentum on the GC front, no matter how fantastical.

I can easily visualise a way forward with RC. There's plenty of room for exploration. Sure, it's not trivial, but maybe it's *possible*, as it certainly seems that GC is not.
 3. It must be implemented by others, not the proposer.
It must be discussed before we even think about implementing it. And that is predicated on it not being dismissed on impact.
 4. It must be part of the language; any experimentation outside the language
 is considered an unnecessary waste of time.
RC performance in a lib depends a lot on scope overloads of constructor/destructor/postblit to eliminate ref-fiddling code. The scope proposals were butchered. I'm disappointed with where that went.
Feb 22 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/22/15 5:57 PM, Manu via Digitalmars-d wrote:
 I can easily visualise a way forward with RC.
Then do it. Frankly it seems to me you're doing anything you possibly can to talk yourself out of doing work. Andrei
Feb 22 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 February 2015 at 14:11, Andrei Alexandrescu via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2/22/15 5:57 PM, Manu via Digitalmars-d wrote:
 I can easily visualise a way forward with RC.
Then do it. Frankly it seems to me you're doing anything you possibly can to talk yourself out of doing work.
Excellent technique to immediately invalidate everything someone says. Thanks for that. You dismissed absolutely everything I said, and then imply that I have no right to comment unless I do it myself.

It's got nothing to do with doing work. ARC (or something like it) is almost religiously opposed. We can't even have a reasonable conversation about it, or really explore its implications, before someone (that ideally knows what they're doing) thinks about writing code. There's no room for progress in this environment.

What do you want me to do? I use manual RC throughout my code, but the experience in D today is identical to C++. I can confirm that it's equally terrible to any C++ implementation I've used. We have no tools to improve on it. And whatever, if it's the same as C++, I can live with it. I've had that my whole career (although it's sad, because we could do much better).

The problem, as always, is implicit allocations, and allocations from 3rd party libraries. While the default allocator remains incompatible with workloads I care about, that isolates us from virtually every library that's not explicitly written to care about my use cases. That's the situation in C++ forever. We're familiar with it, and it's shit. I have long hoped we would move beyond that situation in D, but it's a massive up-hill battle to even gain mindshare, regardless of actual implementation.

There's no shared vision on this matter; the word is basically "GC is great, everyone loves it, use an RC lib". How long do we wait for someone to invent a fantastical GC that solves our problems and works in D? I think it's practically agreed that no known design can work, but nobody wants to bite the bullet and accept that fact. We need to admit we have an impassable problem, and then maybe we can consider alternatives openly. Obviously, it would be disruptive, but it might actually work... which is better than where we seem to be heading (with a velocity of zero).
The fact is, people who are capable of approaching this problem in terms of actual code will never even attempt it until there's resounding consensus that it's worth exploring.
Feb 22 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/22/2015 9:53 PM, Manu via Digitalmars-d wrote:
 It's got nothing to do with doing work. ARC (or something like it) is
 almost religiously opposed. We can't even have a reasonable
 conversation about it, or really explore it's implications before
 someone (that ideally know's what they're doing) thinks about writing
 code.
I participated in a very technical thread here on implementing ARC in D. I wanted to make it work - nothing was off the table: language changes, special features, etc.

http://www.digitalmars.com/d/archives/digitalmars/D/draft_proposal_for_ref_counting_in_D_211885.html

It was just unworkable, and nobody who participated in that thread had workable ideas on moving forward with it. Nothing since has come up that changes that. If you've got some ideas, please present them taking into account the issues brought up in that thread.

Also, please take into account that proposals will not get much of a reception if they ignore these points:

1. Increment and decrement, ESPECIALLY DECREMENT, is EXPENSIVE in time and bloat because of exceptions. Swift does it by NOT HAVING EXCEPTIONS. This is not an option for D.

2. As far as I can tell, the idea of flipping a compiler switch so that the GC switches to ref counting is a pipe dream fantasy. You can probably make such a scheme work with a very limited language like Javascript, but it is never going to work with D's support for low level programming. The way RC and GC work is different enough that different user coding techniques will be used for them.

3. Memory safety is a requirement for any ARC proposal for D. Swift ignores memory safety concerns.

4. DIP25, now implemented, is a way to address memory safety in D while using reference counting. Any proposal for ARC needs to, at least, understand that proposal. http://wiki.dlang.org/DIP25
Feb 22 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 23 February 2015 at 06:51:21 UTC, Walter Bright wrote:
 4. DIP25, now implemented, is a way to address memory safety in 
 D while using reference counting. Any proposal for ARC needs 
 to, at least, understand that proposal.
I asked further up in the thread if coroutines can hold onto "return ref", e.g. does the compiler prevent a yield? It would be nice if you and Andrei admitted that you are in the land of complicated linear typing with "return ref".
Feb 23 2015
prev sibling next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 February 2015 at 16:50, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2/22/2015 9:53 PM, Manu via Digitalmars-d wrote:
 It's got nothing to do with doing work. ARC (or something like it) is
 almost religiously opposed. We can't even have a reasonable
 conversation about it, or really explore it's implications before
 someone (that ideally know's what they're doing) thinks about writing
 code.
I participated in a very technical thread here on implementing ARC in D. I wanted to make it work - nothing was off the table, language changes, special features, etc. http://www.digitalmars.com/d/archives/digitalmars/D/draft_proposal_for_ref_counting_in_D_211885.html It was just unworkable, and nobody who participated in that thread had workable ideas on moving forward with it. Nothing since has come up that changes that. If you've got some ideas, please present them taking into account the issues brought up in that thread.
Wow, I missed that one it seems. I'll catch up.
 Also, please take into account; proposals will not get much of a reception
 if they ignore these points:

 1. Increment and decrement, ESPECIALLY DECREMENT, is EXPENSIVE in time and
 bloat because of exceptions. Swift does it by NOT HAVING EXCEPTIONS. This is
 not an option for D.
This is going to sound really stupid... but do people actually use exceptions regularly? I've never used one. When I encounter code that does, I just find it really annoying to debug. I've never 'gotten' exceptions. I'm not sure why error codes are insufficient, other than the obvious fact that they hog the one sacred return value.

D is just a whisker short of practical multiple-return-values. If we cracked that, we could use alternative (superior?) error-state return mechanisms. I'd be really into that.

I'll agree though that this can't be changed at this point in the game. You say that's a terminal case? Generating code to properly implement a decrement chain during unwind impacts on the non-exceptional code path?
 2. As far as I can tell, the idea of flipping a compiler switch and the GC
 switches to ref counting is a pipe dream fantasy. You can probably make such
 a scheme work with a very limited language like Javascript, but it is never
 going to work with D's support for low level programming. The way RC and GC
 work is different enough that different user coding techniques will be used
 for them.
I agree. I would suggest if ARC were proven possible, we would like, switch.
 3. Memory safety is a requirement for any ARC proposal for D. Swift ignores
 memory safety concerns.
What makes RC implicitly unsafe?
Feb 23 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/23/2015 1:50 AM, Manu via Digitalmars-d wrote:
 1. Increment and decrement, ESPECIALLY DECREMENT, is EXPENSIVE in time and
 bloat because of exceptions. Swift does it by NOT HAVING EXCEPTIONS. This is
 not an option for D.
This is going to sound really stupid... but do people actually use exceptions regularly?
It doesn't matter if they do or not. It's a feature of D, and has to be supported. The only time it won't matter is if the intervening code is all 'nothrow'.
 You say that's a terminal case? Generating code to properly implement
 a decrement chain during unwind impacts on the non-exceptional code
 path?
Since you don't believe me :-), write some shared_ptr code in C++ using your favorite compiler, compile it, and take a look at the generated assembler. I've asked you to do this before. It's necessary to understand how exception unwinding works in order to pontificate about ARC.
 3. Memory safety is a requirement for any ARC proposal for D. Swift ignores
 memory safety concerns.
What makes RC implicitly unsafe?
You already know the answer - saving pointers to the RC object's payload that then outlive the RC'd object.
Feb 23 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/23/15 3:27 AM, Walter Bright wrote:
 On 2/23/2015 1:50 AM, Manu via Digitalmars-d wrote:
 1. Increment and decrement, ESPECIALLY DECREMENT, is EXPENSIVE in
 time and
 bloat because of exceptions. Swift does it by NOT HAVING EXCEPTIONS.
 This is
 not an option for D.
This is going to sound really stupid... but do people actually use exceptions regularly?
It doesn't matter if they do or not. It's a feature of D, and has to be supported. The only time it won't matter is if the intervening code is all 'nothrow'.
 You say that's a terminal case? Generating code to properly implement
 a decrement chain during unwind impacts on the non-exceptional code
 path?
Since you don't believe me :-), write some shared_ptr code in C++ using your favorite compiler, compile it, and take a look at the generated assembler. I've asked you to do this before. It's necessary to understand how exception unwinding works in order to pontificate about ARC.
BTW: http://asm.dlang.org Andrei
Feb 23 2015
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 23 February 2015 at 09:51:07 UTC, Manu wrote:
 This is going to sound really stupid... but do people actually 
 use
 exceptions regularly?
I'd say exceptions are exceptional in most code. That being said, unless the compiler can PROVE that no exception is gonna be thrown, you are stuck with having to generate a code path for unwinding that decrements the refcount.

It means that you'll have code bloat (not that bad IMO, unless you are embedded), but more importantly, it means that most increments/decrements can't be optimized away in the regular path, as you must get the expected count in the unwinding path. Moreover, as you get some work to do on the unwind path, it becomes impossible to do various optimizations like tail calls.

I think Walter is right when he says that Swift dropped exceptions because of ARC.
 I've never used one. When I encounter code that does, I just 
 find it
 really annoying to debug. I've never 'gotten' exceptions. I'm 
 not sure
 why error codes are insufficient, other than the obvious fact 
 that
 they hog the one sacred return value.
Return error codes have usually been a usability disaster, for the simple reason that the do-nothing behavior is to ignore the error. The second major problem is that you usually have no idea where the error check is done, forcing the programmer to bubble the error up to where it is meaningful to handle it.
 I'll agree though that this can't be changed at this point in 
 the game.
 You say that's a terminal case? Generating code to properly 
 implement
 a decrement chain during unwind impacts on the non-exceptional 
 code
 path?
Yes, as you can't remove increment/decrement pairs when there are two decrement paths (one increment pairs with two possible decrements).
 I agree. I would suggest if ARC were proven possible, we would 
 like, switch.
I'd like to see ARC support in D, but I do not think it makes sense as a default.
 3. Memory safety is a requirement for any ARC proposal for D. 
 Swift ignores
 memory safety concerns.
What makes RC implicitly unsafe?
Without ownership, one can leak a reference to an RC'ed object that the RC system does not see.
Feb 23 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 24 February 2015 at 10:36, deadalnix via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Monday, 23 February 2015 at 09:51:07 UTC, Manu wrote:
 This is going to sound really stupid... but do people actually use
 exceptions regularly?
I'd say exceptions are exceptional in most code. That being said, unless the compiler can PROVE that no exception is going to be thrown, you are stuck generating a code path for unwinding that decrements the refcount. It means you'll have code bloat (not that bad IMO, unless you are embedded) but, more importantly, it means that most increments/decrements can't be optimized away on the regular path, as the unwinding path must still see the expected count.
Can the unwind path not be aware that the non-exceptional path had an increment optimised away? Surely the unwind only needs to perform matching decrements where an increment was generated... I can easily imagine the burden ARC may place on the exceptional path, but I can't visualise the influence it has on the normal path?
 Moreover, as you get some work to do on the unwind path, it becomes
 impossible to do various optimizations like tail calls.
Tail calls are nice, but I don't think I'd lament the loss, especially when nothrow (which is inferred these days) will give them right back. Embedded code that can't handle the bloat would be required to use nothrow to meet requirements. That's fine, it's a known constraint of embedded programming. Realtime/hot code may also need to use nothrow to guarantee all optimisations are possible... which would probably be fine in most cases? I'd find that perfectly acceptable.
 I think Walter is right when he says that Swift dropped exceptions because
 of ARC.
But what was the definitive deal breaker? Was there one thing, like it's actually impossible... or was it a matter of cumulative small things leading them to make a value judgement? We might make a different value judgement given very different circumstances.
 I've never used one. When I encounter code that does, I just find it
 really annoying to debug. I've never 'gotten' exceptions. I'm not sure
 why error codes are insufficient, other than the obvious fact that
 they hog the one sacred return value.
Returned error codes have usually been a usability disaster, for the simple reason that the do-nothing behavior is to ignore the error.
I generally find this preferable to spontaneous crashing. That is, assuming the do-nothing behaviour with exceptions is for it to unwind all the way to the top, which I think is the comparable 'do nothing' case.

I've pondered using 'throw' in D, but the thing that usually kills it for me is that I can't have a free catch() statement; it needs to be structured with a try. I just want to write catch() at a random line where I want unwinding to stop if anything before it went wrong. I.e., an implicit try{} around all the code in the scope that comes before. I don't know if that would tip me over the line, but it would go a long way to making it more attractive.

I just hate what exceptions do to your code. But also, debuggers are terrible at handling them.
 The second major problem is that you usually have no idea how where the
 error check is done, forcing the programmer to bubble up the error where it
 is meaningful to handle it.
That's true, but I've never felt like exceptions are a particularly good solution to that problem. I don't find they make the code simpler. In fact, I find them to produce almost the same amount of functional code, except with additional indentation and brace spam, syntactic baggage, and bonus allocations. Consider:

  if(tryAndDoThing() == Error.Failed)
    return Error.FailedForSomeReason;
  if(tryNextThing() == Error.Failed)
    return Error.FailedForAnotherReason;

Compared to:

  try
  {
    doThing();
    nextThing();
  }
  catch(FirstKindOfException e)
  {
    throw new Ex(Error.FailedForSomeReason);
  }
  catch(SecondKindOfException e)
  {
    throw new Ex(Error.FailedForAnotherReason);
  }

It's long and bloated (4 lines became 13 lines!), it allocates, is not @nogc, etc. Sure, it might be that you don't always translate inner exceptions to high-level concepts like this and just let the inner exception bubble up... but I often do want to have the meaningful translation of errors, so this must at least be fairly common.
 I'll agree though that this can't be changed at this point in the game.
 You say that's a terminal case? Generating code to properly implement
 a decrement chain during unwind impacts on the non-exceptional code
 path?
Yes, as you can't remove increment/decrement pairs when there are two decrement paths (one increment pairs with two possible decrements).
I don't follow. If you remove the increment, then the decrement is just removed in both places...? I can't imagine how it's different from making sure struct destructors are called, which must already work, right?
 I agree. I would suggest if ARC were proven possible, we would like,
 switch.
I'd like to see ARC support in D, but I do not think it makes sense as a default.
Then we will have 2 distinct worlds. There will be 2 kinds of D code, and they will be incompatible... I think that's worse.
 3. Memory safety is a requirement for any ARC proposal for D. Swift
 ignores
 memory safety concerns.
What makes RC implicitly unsafe?
Without ownership, one can leak a reference to an RC'ed object that the RC system does not see.
Obviously, ARC is predicated on addressing ownership. I think that's very clear.
Feb 25 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 25 February 2015 at 15:55:26 UTC, Manu wrote:
 On 24 February 2015 at 10:36, deadalnix via Digitalmars-d
 I'd like to see ARC support in D, but I do not think it makes 
 sense as a
 default.
Then we will have 2 distinct worlds. There will be 2 kinds of D code, and they will be incompatible... I think that's worse.
There are already at least 2 kinds of D code: @nogc and with GC... Do you want ARC without regular pointers? I think you will find the performance disappointing... I don't mind replacing the current GC with ARC, but only if ARC is used with the same low frequency as shared_ptr in C++...
Feb 26 2015
prev sibling parent Johannes Pfau <nospam example.com> writes:
Am Thu, 26 Feb 2015 01:55:14 +1000
schrieb Manu via Digitalmars-d <digitalmars-d puremagic.com>:

 I agree. I would suggest if ARC were proven possible, we would
 like, switch.
  
I'd like to see ARC support in D, but I do not think it makes sense as a default.
Then we will have 2 distinct worlds. There will be 2 kinds of D code, and they will be incompatible... I think that's worse.
Excuse my ignorance, but I'm no longer sure what everybody in this thread is actually arguing about:

Andrei's WIP DIP74 [1] adds compiler-recognized AddRef/Release calls to classes. The compiler will _automatically_ call these. Of course the compiler can then also detect and optimize dead AddRef/Release pairs. All the exception issues Walter described also apply here, and they also apply to structs with destructors in general. So I'd say DIP74 is basically ARC.

Ironically, structs, which are better suited to RC right now, won't benefit from optimizations. However, it'd be simple to also recognize Release/AddRef in structs to gain the necessary information for optimization. It's not even necessary to call these automatically; they can be called manually from the struct dtor as usual.

So what exactly is the difference between ARC and DIP74? Simply that it's not the default memory management method? I do understand that this makes a huge difference in practice. OTOH, switching all D classes to ARC seems very unrealistic, as it could break code in subtle ways (cycles). But even if one wants ARC as a default, DIP74 is a huge step forward. If library maintainers are convinced that ARC works well, it will become a quasi-default. I'd expect most game-related libraries (SFMLD, derelict) will switch to RC classes.

[1] http://wiki.dlang.org/DIP74
Feb 26 2015
prev sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Monday, 23 February 2015 at 09:51:07 UTC, Manu wrote:
 This is going to sound really stupid... but do people actually 
 use
 exceptions regularly?
 I've never used one. When I encounter code that does, I just 
 find it
 really annoying to debug. I've never 'gotten' exceptions. I'm 
 not sure
 why error codes are insufficient, other than the obvious fact 
 that
 they hog the one sacred return value.
I used to feel like that about exceptions. It was only after a position involving lots of legacy code that they revealed their value.

One (big) problem with error codes is that they do get ignored, much too often. It's like manual memory management: everyone thinks they can do it without errors, but mostly everyone fails at it (me too, and you too). Exceptions make a program crash noisily, so errors can't be ignored. More importantly, ignoring an error code is invisible, while ignoring exceptions requires explicit discarding and some thought. Simply put, correctly handling error codes is more code and more ugly. Ignoring exceptions is no less code than ignoring error codes, but at least it will crash.

Secondly, one real advantage is a pure readability improvement. The normal path looks clean and isn't cluttered with error code checks. Almost everything can fail!

  writeln("Hello");            // can fail
  auto f = File("config.txt"); // can fail

What matters in composite operations is whether all of them succeeded or not. Example: if the sequence of operations A-B-C failed while doing B, you are interested in the fact that A-B-C has failed, but not really that B failed specifically. So you would have to translate error codes from one formalism to another. What happens next is that error codes become conflated in the same namespace and reused in other unrelated places. Hence, error codes from a library leak into code that should be isolated from it.

Lastly, exceptions have a hierarchy and let you distinguish between bugs and input errors by convention. E.g.: Alan just wrote a library function that returns an error code if it fails. The user program by Betty passes it a null pointer. This is a logic error, as Alan disallowed it by contract. As Walter has repeatedly told us, logic errors/bugs are not input errors and the only sane way to handle them is to crash.

But since this function's error interface is an error code, Alan returns something like ERR_POINTER_IS_NULL_CONTRACT_VIOLATED since, well, there's no other choice. Now the logic error code gets conflated with error codes corresponding to input errors (ERR_DISK_FAILED), both will be handled similarly by Betty for sure, and the earth begins to crackle.

Unfortunately, exceptions require exception safety, they may block some optimizations, and they may hamper debugging. That is usually a social blocker for wider exception adoption in C++ circles, but once a group really gets it, like RAII, you won't be able to take it away from them.
Feb 24 2015
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 24 February 2015 at 23:02:14 UTC, ponce wrote:
 One (big) problem about error code is that they do get ignored, 
 much too often. It's like manual memory management, everyone 
 think they can do it without errors, but mostly everyone fail 
 at it (me too, and you too).
Explicit return values for errors are usually annoying, yes, but it is possible to have a language construct that isn't ignorable. That means you have to explicitly state that you are ignoring the error. E.g.

  open_file("file2")?.write("stuff") // only write if file is ready
  open_file("file1")?.write("ffuts") // only write if file is ready
  if ( error ) log_error() // log if some files were not ready

or:

  f = open_file(…)
  g = open_file(…)
  h = open_file(…)
  if( error(f,g,h) ) log_error

Also with async programming, futures/promises, the errors will be delayed, so you might be better off having them as part of the object.
Feb 25 2015
parent reply "ponce" <contact ga3mesfrommars.fr> writes:
 or:

   f = open_file(…)
   g = open_file(…)
   h = open_file(…)
   if( error(f,g,h) ) log_error


 Also with async programming, futures/promises, the errors will 
 be delayed,
That's the problem with futures/promises: you spend your time explaining who waits for what instead of just writing what things do.
 so you might be better off having them as part of the object.
No. If I can't open a file I'd better not create a File object in an invalid state. Invalid states defeat RAII. "So you can't re-enter that mutex as you asked, so I will grant you a scopedLock, but it is in an errored state, so you'd better check that it is valid!"
Feb 26 2015
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 26 February 2015 at 11:28:16 UTC, ponce wrote:
 That's the problem with future/promises, you spent your time 
 explaining who waits for what instead of just writing what 
 things do.
There are many ways to do futures, but I don't think it is all that complicated for the end user in most cases. E.g.

  auto a = request_db1_async();
  auto b = request_db2_async();
  auto c = request_db3_async();
  auto d = compute_stuff_async();
  r = wait_for_all(a,b,c,d);
  if( has_error(r) ) return failure;
 No. If I can't open a file I'd better not create a File object 
 in an invalid state. Invalid states defeats RAII.
This is the attitude I don't like, because it means that you have to use pointers when you could just embed the file-handle. That leads to more allocations and more cache misses.
 So you can't re-enter that mutex as you asked, so I will grant 
 you a scopedLock, but it is in an errored state so you'd better 
 check that it is valid!
A file can always enter an errored state. So can OpenGL. That doesn't mean you have to react immediately in all cases. When you mmap a file you write to the file indirectly without any function calls. The disk could die... but how do you detect that? You wait until you msync() and detect it late. It is more efficient.
Feb 26 2015
parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Thursday, 26 February 2015 at 14:22:01 UTC, Ola Fosheim 
Grøstad wrote:
 No. If I can't open a file I'd better not create a File object 
 in an invalid state. Invalid states defeats RAII.
This is the attitude I don't like, because it means that you have to use pointers when you could just embed the file-handle. That leads to more allocations and more cache misses.
I really don't understand how any of this is related to what we were previously discussing: error handling.
 So you can't re-enter that mutex as you asked, so I will grant 
 you a scopedLock, but it is in an errored state so you'd 
 better check that it is valid!
A file can always enter an errored state. So can OpenGL. That doesn't mean you have to react immediately in all cases.
This is counter to my experience. It doesn't make much sense to go on after an error, in any software that wants some reliability.
Feb 27 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 27 February 2015 at 15:53:18 UTC, ponce wrote:
 On Thursday, 26 February 2015 at 14:22:01 UTC, Ola Fosheim 
 Grøstad wrote:
 No. If I can't open a file I'd better not create a File 
 object in an invalid state. Invalid states defeats RAII.
This is the attitude I don't like, because it means that you have to use pointers when you could just embed the file-handle. That leads to more allocations and more cache misses.
I really don't understand how any of this is related to what we were previously discussing: error handling.
You wrote: «No. If I can't open a file I'd better not create a File object in an invalid state. Invalid states defeats RAII.» If you embed the File object in other objects you also have to deal with the File object being in an invalid state. The alternative is to have discrete objects and nullable pointers to them. Makes sense for a high level programming language like Java, makes no sense for a system programming language.
 It doesn't make much sense to go on after an error, in any
 software that wants some reliability.
It does, when you do async buffering and want performance, e.g. OpenGL. Often it also makes error-handling simpler. Often you don't care about when it failed, you often only care about the "transactional unit" as a whole. It also makes programs more portable. There are big architectural differences when it comes to when errors can be reported. E.g. you don't want to wait for a networked drive to respond before going on. You only want to know if the "closing of the transaction" succeeded or not.
Mar 01 2015
prev sibling next sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 25 February 2015 at 09:02, ponce via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Monday, 23 February 2015 at 09:51:07 UTC, Manu wrote:
 This is going to sound really stupid... but do people actually use
 exceptions regularly?
 I've never used one. When I encounter code that does, I just find it
 really annoying to debug. I've never 'gotten' exceptions. I'm not sure
 why error codes are insufficient, other than the obvious fact that
 they hog the one sacred return value.
I used to feel like that about exceptions. It was only after a position involving lots of legacy code that they revealed their value. One (big) problem with error codes is that they do get ignored, much too often. It's like manual memory management: everyone thinks they can do it without errors, but mostly everyone fails at it (me too, and you too). Exceptions make a program crash noisily, so errors can't be ignored.
This is precisely my complaint though. In a production environment where there are 10's, 100's of people working concurrently, it is absolutely unworkable that code can be crashing for random reasons that I don't care about all the time. I've experienced before these noisy crashes relating to things that I don't care about at all. It just interrupts my work, and also whoever else it is that I have to involve to address the problem before I can continue. That is a real cost in time and money. I find that situation to be absolutely unacceptable. I'll take the possibility that an ignored error code may not result in a hard-crash every time.
 More importantly, ignoring an error code is invisible, while ignoring
 exceptions requires explicit discarding and some thought.
 Simply put, correctly handling error codes is more code and more ugly.
 Ignoring exceptions is no less code than ignoring error codes, but at least
 it will crash.
Right, but as above, this is an expense quantifiable in time and dollars that I just can't find within me to balance by this reasoning. I also prefer my crashes to occur at the point of failure. Exceptions tend to hide the problem in my experience. I find it so ridiculously hard to debug exception-laden code. I have no idea how you're meant to do it; it's really hard to find the source of the problem! The only way I know is to use break-on-throw, which never works in exception-laden code, since exception-happy code typically throws for common error cases which happen all the time too.
 Secondly, one real advantage is pure readability improvement. The normal
 path looks clean and isn't cluttered with error code check. Almost
 everything can fail!

 writeln("Hello");  // can fail
This only fails if runtime state is already broken. Problem is elsewhere. Hard crash is preferred right here.
 auto f = File("config.txt"); // can fail
Opening files is a very infrequent operation, and one of the poster child examples of where you would always check the error return. I'm not even slightly upset by checking the return value from fopen.
 What matter in composite operations is whether all of them succeeded or not.
 Example: if the sequence of operations A-B-C failed while doing B, you are
 interested by the fact A-B-C has failed but not really that B failed
 specifically.
Not necessarily true. You often want to report what went wrong, not just that "it didn't work".
 So you would have to translate error codes from one formalism
 to another. What happens next is that error codes become conflated in the
 same namespace and reused in other unrelated places. Hence, error codes from
 library leak into code that should be isolated from it.
I don't see exceptions are any different in this way. I see internal exceptions bleed to much higher levels where they've totally lost context all the time.
 Lastly, exceptions have a hierarchy and allow to distinguish between bugs
 and input errors by convention.
 Eg: Alan just wrote a function in a library that return an error code if it
 fails. The user program by Betty pass it a null pointer. This is a logic
 error as Alan disallowed it by contract. As Walter repeatedly said us, logic
 errors/bugs are not input errors and the only sane way to handle them is to
 crash.
 But since this function error interface is an error code, Alan return
 something like ERR_POINTER_IS_NULL_CONTRACT_VIOLATED since well, no other
 choice. Now the logic error code gets conflated with error codes
 corresponding to input errors (ERR_DISK_FAILED), and both will be handled
 similarly by Betty for sure, and the earth begin to crackle.
I'm not sure quite what you're saying here, but I agree with Walter. In case of hard logic error, the sane thing to do is crash (ie, assert), not throw... I want my crash at the point of failure, not somewhere else.
 Unfortunately exceptions requires exception safety, they may block some
 optimizations, and they may hamper debugging. That is usually a social
 blocker for more exception adoption in C++ circles, but once a group really
 get it, like RAII, you won't be able to take it away from them.
Performance inhibition is one factor in why I've never used exceptions, but it's certainly not the decisive reason.
Feb 25 2015
next sibling parent "ponce" <contact gam3sfrommars.fr> writes:
On Wednesday, 25 February 2015 at 16:39:38 UTC, Manu wrote:
 This is precisely my complaint though. In a production 
 environment
 where there are 10's, 100's of people working concurrently, it 
 is
 absolutely unworkable that code can be crashing for random 
 reasons
 that I don't care about all the time.
 I've experienced before these noisy crashes relating to things 
 that I
 don't care about at all. It just interrupts my work, and also 
 whoever
 else it is that I have to involve to address the problem before 
 I can
 continue.
I see what the problem can be. My feeling is that it is in part a workplace/codebase problem. Bugs that prevent other people from working aren't usually high on the roadmap. This also happens with assertions, and then we have to disable them to get work done; though assertions create no debate. If the alternative is ignoring error codes, I'm not sure it's better. Anything could happen, and then the database must be cleaned up.
 That is a real cost in time and money. I find that situation to 
 be
 absolutely unacceptable. I'll take the possibility that an 
 ignored
 error code may not result in a hard-crash every time.
True; e.g. you can ignore every OpenGL error, it's kind of hardened against that.
 Right, but as above, this is an expense quantifiable in time and
 dollars that I just can't find within me to balance by this 
 reasoning.
To be fair, this cost has to be balanced with the cost of not finding a defect before sending it to the customer.
 I also prefer my crashes to occur at the point of failure. 
 Exceptions
 tend to hide the problem in my experience.
I would just put a breakpoint on the offending throw. Then it's no different.
 I find it so ridiculously hard to debug exception laden code. I 
 have
 no idea how you're meant to do it, it's really hard to find the 
 source
 of the problem!
I've seen this, and it is true of things that need to retry something periodically with a giant try/catch. A try/catch at the wrong level that "recovers" too many things can also make it harder. Some (most?) things should really fail hard rather than resist errors. I think gamedev is more of an exception, since "continue anyway" might be a successful strategy in some capacity. Games are supposed to be "finished" at release, which is something customers thankfully don't ask of most software.
 Only way I know is to use break-on-throw, which never works in
 exception laden code since exception-happy code typically throw 
 for
 common error cases which happen all the time too.
What I do is break-on-uncaught. Break-on-throw is more often than not hopelessly noisy, like you said. But I only deal with mostly reproducible bugs.
 Opening files is a very infrequent operation, and one of the 
 poster
 child examples of where you would always check the error 
 return. I'm
 not even slightly upset by checking the return value from fopen.
But how happy are you to bubble the error condition up the call stack? I'm of mixed opinion on that: seeing error codes _feels_ nice, and we can say to ourselves "I'm treating the error carefully". Much like we can say to ourselves "I'm carefully managing memory" when we manage memory manually. Somehow I like pedestrian work that really feels like work. But that doesn't mean we do it efficiently or even in the right way, just that we _think_ it's done right.
 What matter in composite operations is whether all of them 
 succeeded or not.
 Example: if the sequence of operations A-B-C failed while 
 doing B, you are
 interested by the fact A-B-C has failed but not really that B 
 failed
 specifically.
Not necessarily true. You often want to report what went wrong, not just that "it didn't work".
Fortunately, exceptions allow you to bring along any information about what went wrong. We often see on the Internet that "errors are best dealt with where they happen". I could not disagree more. Where "fopen" fails, I have no context to know I'm there because I was trying to save the game. Error codes force you to bubble the error up to be able to say "saving the game has failed", and then, since you have used error codes without error strings, you cannot even say "saving the game has failed because fopen could not open <filename>". Now instead of just bubbling up error codes I must bubble up error messages too (I've done it). Great! Errors-should-be-dealt-with-where-they-happen is a complete fallacy.
 I don't see exceptions are any different in this way. I see 
 internal
 exceptions bleed to much higher levels where they've totally 
 lost
 context all the time.
In my experience this is often due to sub-systems saving the ass of others by ignoring errors in the first place, instead of either crashing or rebooting the faulty sub-system.
 I'm not sure quite what you're saying here, but I agree with 
 Walter.
 In case of hard logic error, the sane thing to do is crash (ie,
 assert), not throw...
assert throws Error for this purpose.
 Performance inhibition is a factor in considering that I've 
 never used
 exceptions, but it's certainly not the decisive reason.
And it's a valid concern. Some parts of a program also seldom need exceptions, since they mostly deal with memory and do little I/O.
Feb 25 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 8:39 AM, Manu via Digitalmars-d wrote:
 I'll take the possibility that an ignored
 error code may not result in a hard-crash every time.
If you want some fun, take any system and fill up the disk drive to just short of capacity. Now go about your work using the system. You'll experience all kinds of delightful, erratic behavior, because real world C code tends to ignore write failures and just carries on.
Feb 26 2015
parent Jacob Carlborg <doob me.com> writes:
On 2015-02-26 21:45, Walter Bright wrote:

 If you want some fun, take any system and fill up the disk drive to just
 short of capacity. Now go about your work using the system.

 You'll experience all kinds of delightful, erratic behavior, because
 real world C code tends to ignore write failures and just carries on.
It has happened to me quite often. It's usually no problem. Just (re)move some data and continue. -- /Jacob Carlborg
Feb 26 2015
prev sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Feb 26, 2015 at 02:39:28AM +1000, Manu via Digitalmars-d wrote:
 On 25 February 2015 at 09:02, ponce via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Monday, 23 February 2015 at 09:51:07 UTC, Manu wrote:
 This is going to sound really stupid... but do people actually use
 exceptions regularly?
I wouldn't say I use them *all* the time, but I do use them. In properly-designed code, they should be pretty rare, so I only really have to deal with them in strategic places, not sprinkled throughout the code.
 I've never used one. When I encounter code that does, I just find
 it really annoying to debug. I've never 'gotten' exceptions. I'm
 not sure why error codes are insufficient, other than the obvious
 fact that they hog the one sacred return value.
I used to feel like that with exceptions. It's only after a position involving lots of legacy code that they revealed their value. One (big) problem about error code is that they do get ignored, much too often. It's like manual memory management, everyone think they can do it without errors, but mostly everyone fail at it (me too, and you too). Exceptions makes a program crash noisily so errors can't be ignored.
This is precisely my complaint though. In a production environment where there are 10's, 100's of people working concurrently, it is absolutely unworkable that code can be crashing for random reasons that I don't care about all the time.
It doesn't have to crash if the main loop catches them and logs them to a file where the relevant people can monitor for signs of malfunction.
 I've experienced before these noisy crashes relating to things that I
 don't care about at all. It just interrupts my work, and also whoever
 else it is that I have to involve to address the problem before I can
 continue.
Ideally, it should be the person responsible who's seeing the exceptions, rather than you (if you were not responsible). I see this more as a sign that something is wrong with the deployment process -- doesn't the person(s) committing the change test his changes before committing, which should make production-time exceptions a rare occurrence?
 That is a real cost in time and money. I find that situation to be
 absolutely unacceptable. I'll take the possibility that an ignored
 error code may not result in a hard-crash every time.
And the possibility of malfunction caused by ignored errors leading to (possibly non-recoverable) data corruption is more acceptable?
 More importantly, ignoring an error code is invisible, while
 ignoring exceptions require explicit discarding and some thought.
 Simply put, correctly handling error code is more code and more
 ugly.  Ignoring exception is no less code than ignoring error codes,
 but at least it will crash.
Right, but as above, this is an expense quantifiable in time and dollars that I just can't find within me to balance by this reasoning. I also prefer my crashes to occur at the point of failure. Exceptions tend to hide the problem in my experience. I find it so ridiculously hard to debug exception laden code. I have no idea how you're meant to do it, it's really hard to find the source of the problem!
Isn't the stacktrace attached to the exception supposed to lead you to the point of failure?
 Only way I know is to use break-on-throw, which never works in
 exception laden code since exception-happy code typically throw for
 common error cases which happen all the time too.
"Exception-happy" sounds like wrong use of exceptions. No wonder you have problems with them.
 Secondly, one real advantage is pure readability improvement. The
 normal path looks clean and isn't cluttered with error code check.
 Almost everything can fail!

 writeln("Hello");  // can fail
This only fails if runtime state is already broken. Problem is elsewhere. Hard crash is preferred right here.
The problem is, without exceptions, it will NOT crash! If stdout is full, for example (e.g., the logfile has filled up the disk), it will just happily move along as if nothing is wrong, and at the end of the day, the full disk will cause some other part of the code to malfunction, but half of the relevant logs aren't there because they were never written to disk in the first place, but the code didn't notice because error codes were ignored. (And c'mon, when was the last time you checked the error code of printf()? I never did, and I suspect you never did either.)
 auto f = File("config.txt"); // can fail
Opening files is a very infrequent operation, and one of the poster child examples of where you would always check the error return. I'm not even slightly upset by checking the return value from fopen.
I believe that was just a random example; it's unfair to pick on the specifics. The point is, would you rather write code that looks like this:

    LibAErr_t a_err;
    LibBErr_t b_err;
    LibCErr_t c_err;
    ResourceA *res_a;
    ResourceB *res_b;
    ResourceC *res_c;

    res_a = acquireResourceA();
    if ((a_err = firstOperation(a, b, c)) != LIB_A_OK) {
        freeResourceA();
        goto error;
    }

    res_b = acquireResourceB();
    if ((b_err = secondOperation(x, y, z)) != LIB_B_OK) {
        freeResourceB();
        freeResourceA();
        goto error;
    }

    res_c = acquireResourceC();
    if ((c_err = thirdOperation(p, q, r)) != LIB_C_OK) {
        freeResourceB();
        freeResourceC();  // oops, subtle bug here
        freeResourceA();
        goto error;
    }
    ...

    error:
        // deal with problems here

or this:

    try {
        auto res_a = acquireResourceA();
        scope(failure) freeResourceA();
        firstOperation(a, b, c);

        auto res_b = acquireResourceB();
        scope(failure) freeResourceB();
        secondOperation(x, y, z);

        auto res_c = acquireResourceC();
        scope(failure) freeResourceC();
        thirdOperation(p, q, r);
    } catch (Exception e) {
        // deal with problems here
    }
 What matter in composite operations is whether all of them succeeded
 or not.  Example: if the sequence of operations A-B-C failed while
 doing B, you are interested by the fact A-B-C has failed but not
 really that B failed specifically.
Not necessarily true. You often want to report what went wrong, not just that "it didn't work".
Isn't that what Exception.msg is for? Whereas if func1() calls 3 functions, which respectively return errors of types libAErr_t, libBErr_t, libCErr_t, what should func1() return if, say, the 3rd operation failed? (Keep in mind that if we call functions from 3 different libraries, they are almost guaranteed to return their own error code enums which are never compatible with each other.) Should it return libAErr_t, libBErr_t, or libCErr_t? Or should it do a switch over possible error codes and translate them to a common type libABCErr_t?
From my experience, what usually happens is that func1() will just return a single failure code if *any* of the 3 functions failed -- it's just too tedious and unmaintainable otherwise -- which means you *can't* tell what went wrong, only that "it didn't work".

My favorite whipping boy is the "internal error". Almost every module in my work project has its own error enum, and almost invariably the most common error returned by any function is the one corresponding to "internal error". Any time a function calls another module and it fails, "internal error" is returned -- because people simply don't have the time/energy to translate error codes from one module to another and return something meaningful. So whenever there is a problem, all we know is that "internal error" got returned by some function. As to where the actual problem is, who knows? There are 500 places where "internal error" might have originated from, but we can't tell which of them it might be, because almost *everything* returns "internal error".

Whereas with exceptions, .msg tells you exactly what the error message was. And if the libraries have dedicated exception types, you can even catch each type separately and deal with them accordingly, as opposed to getting a libABCErr_t and then having to map that back to the original error code type in order to understand what the problem was.

I find it hard to believe that you appear to be saying that you have trouble pinpointing the source of the problem with exceptions, whereas you find it easy to track down the problem with error codes. IME it's completely the opposite.
 So you would have to translate error codes from one formalism to
 another. What happens next is that error codes become conflated in
 the same namespace and reused in other unrelated places. Hence,
 error codes from library leak into code that should be isolated from
 it.
I don't see how exceptions are any different in this way. All the time I see internal exceptions bleed up to much higher levels where they've totally lost context.
Doesn't the stacktrace give you the context?
 Lastly, exceptions have a hierarchy and allow to distinguish between
 bugs and input errors by convention.

 Eg: Alan just wrote a function in a library that return an error
 code if it fails. The user program by Betty pass it a null pointer.
 This is a logic error as Alan disallowed it by contract. As Walter
 repeatedly said us, logic errors/bugs are not input errors and the
 only sane way to handle them is to crash.

 But since this function error interface is an error code, Alan
 return something like ERR_POINTER_IS_NULL_CONTRACT_VIOLATED since
 well, no other choice. Now the logic error code gets conflated with
 error codes corresponding to input errors (ERR_DISK_FAILED), and
 both will be handled similarly by Betty for sure, and the earth
 begin to crackle.
I'm not sure quite what you're saying here, but I agree with Walter. In case of hard logic error, the sane thing to do is crash (ie, assert), not throw... I want my crash at the point of failure, not somewhere else.
[...] Again, doesn't the exception stacktrace tell you exactly where the point of failure is? Whereas an error code that has percolated up the call stack 20 levels and undergone various mappings (har har, who does that) or collapsed to generic, non-descript values ("internal error" -- much more likely) is unlikely to tell you anything more than "it didn't work". No information about which of the 500 functions 20 calls down the call graph might have been responsible.

T

-- 
You are only young once, but you can stay immature indefinitely. -- azephrahel
Feb 25 2015
prev sibling parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 2015-02-23 at 19:50 +1000, Manu via Digitalmars-d wrote:
 O[…]
 This is going to sound really stupid... but do people actually use
 exceptions regularly?
 I've never used one. When I encounter code that does, I just find it
 really annoying to debug. I've never 'gotten' exceptions. I'm not sure
 why error codes are insufficient, other than the obvious fact that
 they hog the one sacred return value.
 D is just a whisker short of practical multiple-return-values. If we
 cracked that, we could use alternative (superior?) error state return
 mechanisms. I'd be really into that.
[…] Return codes for value-returning functions only work if the function returns a pair: the return value and the error code. It generally doesn't work to have a single return value serve as both result and error code. C got this fairly wrong, Go gets it fairly right.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
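The Go convention Russel describes, in a minimal sketch (parsePort is an illustrative function, not from the thread): the result slot never has to double as an error sentinel, unlike C's atoi or a Posix-style -1 return.

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort returns the (value, error) pair idiom: the int result carries
// only the result, and failure travels in the separate error value.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("bad port %q: %w", s, err)
	}
	if n < 1 || n > 65535 {
		return 0, fmt.Errorf("port %d out of range", n)
	}
	return n, nil
}

func main() {
	if p, err := parsePort("8080"); err == nil {
		fmt.Println("ok:", p)
	}
	if _, err := parsePort("http"); err != nil {
		fmt.Println("error:", err)
	}
}
```

Contrast with atoi("http") in C, which returns 0 and leaves you unable to distinguish a parse failure from a legitimate zero.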
Feb 23 2015
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 23 February 2015 at 12:30:55 UTC, Russel Winder wrote:
 value and error code. C got this fairly wrong, Go gets it 
 fairly right.
It's the one feature about Go that makes Go code look really ugly... So I guess this is a very subjective issue. Posix is actually pretty consistent by returning "-1", even as a pointer, but if you don't write pure Posix code it becomes confusing.
Feb 23 2015
prev sibling next sibling parent "Matthias Bentrup" <matthias.bentrup googlemail.com> writes:
On Monday, 23 February 2015 at 12:30:55 UTC, Russel Winder wrote:
 On Mon, 2015-02-23 at 19:50 +1000, Manu via Digitalmars-d wrote:
 O[…]
 This is going to sound really stupid... but do people actually 
 use
 exceptions regularly?
 I've never used one. When I encounter code that does, I just 
 find it
 really annoying to debug. I've never 'gotten' exceptions. I'm 
 not sure
 why error codes are insufficient, other than the obvious fact 
 that
 they hog the one sacred return value.
 D is just a whisker short of practical multiple-return-values. 
 If we
 cracked that, we could use alternative (superior?) error state 
 return
 mechanisms. I'd be really into that.
[…] Return codes for value returning functions only work if the function returns a pair, the return value and the error code: it is generally impossible to work with return values that serve the purpose of return value and error code. C got this fairly wrong, Go gets it fairly right.
You wouldn't need new syntax (though I think multiple return values would be a nice addition); I think you can compile try/catch exception syntax into error codes internally.
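Matthias's lowering idea can be mimicked in Go source today (a sketch with hypothetical names, not a compiler transform): inner code "throws" via panic, and a boundary function recovers and converts it into an ordinary error return, so callers see only error codes. Go's own encoding/json package uses this pattern internally.

```go
package main

import (
	"errors"
	"fmt"
)

// mustPositive "throws" by panicking, like exception-style code.
func mustPositive(n int) int {
	if n <= 0 {
		panic(fmt.Sprintf("value %d is not positive", n))
	}
	return n * 2
}

// safeDouble is the try/catch boundary lowered to an error return:
// it recovers any panic and hands the caller a plain error instead.
func safeDouble(n int) (result int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = errors.New(fmt.Sprint(r))
		}
	}()
	return mustPositive(n), nil
}

func main() {
	fmt.Println(safeDouble(21)) // 42 <nil>
	fmt.Println(safeDouble(-1)) // 0 value -1 is not positive
}
```

The normal path stays uncluttered inside mustPositive, while the error-code interface is preserved at the boundary.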
Feb 23 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/23/15 4:30 AM, Russel Winder via Digitalmars-d wrote:
 On Mon, 2015-02-23 at 19:50 +1000, Manu via Digitalmars-d wrote:
 O[…]
 This is going to sound really stupid... but do people actually use
 exceptions regularly?
 I've never used one. When I encounter code that does, I just find it
 really annoying to debug. I've never 'gotten' exceptions. I'm not sure
 why error codes are insufficient, other than the obvious fact that
 they hog the one sacred return value.
 D is just a whisker short of practical multiple-return-values. If we
 cracked that, we could use alternative (superior?) error state return
 mechanisms. I'd be really into that.
[…] Return codes for value returning functions only work if the function returns a pair, the return value and the error code: it is generally impossible to work with return values that serve the purpose of return value and error code. C got this fairly wrong, Go gets it fairly right.
Urgh. Product types masquerading as sum types. Give me a break will ya. -- Andrei
Feb 23 2015
next sibling parent reply Russel Winder via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 2015-02-23 at 10:08 -0800, Andrei Alexandrescu via Digitalmars-d
wrote:
[…]
 Urgh. Product types masquerading as sum types. Give me a break will ya.

 -- Andrei
Uuurrr…. no. As of today I program with the stuff and it works. Coming from Java and Python, I am an exceptions oriented person, but Go has internal consistency on it's attitude to error reporting and you get used to it. Actually, the obsession with error handling at the point of occurrence does lead to getting systems that are less prone to unexpected errors. This is pragmatism not theory.

And being honest most programmers would not know what product or sum types were. Nor would they care. Even if perhaps they ought to.

When I worked with C, I abhorred the, to me, dreadful error codes system. Go really does allow it to work without the same pain.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Feb 23 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/23/15 10:50 AM, Russel Winder via Digitalmars-d wrote:
 On Mon, 2015-02-23 at 10:08 -0800, Andrei Alexandrescu via Digitalmars-d
 wrote:
 […]
 Urgh. Product types masquerading as sum types. Give me a break will ya.
 -- Andrei
Uuurrr…. no. As of today I program with the stuff and it works. Coming from Java and Python, I am an exceptions oriented person, but Go has internal consistency on it's attitude to error reporting and you get used to it. Actually, the obsession with error handling at the point of occurrence does lead to getting systems that are less prone to unexpected errors. This is pragmatism not theory. And being honest most programmers would not know what product or sum types were. Nor would they care. Even if perhaps they ought to. When I worked with C, I abhorred the, to me, dreadful error codes system. Go really does allow it to work without the same pain.
This is a misunderstanding. I was referring to Go's confusion of sum types with product types. -- Andrei
Feb 23 2015
prev sibling parent "Tobias Pankrath" <tobias pankrath.net> writes:
 Urgh. Product types masquerading as sum types. Give me a break 
 will ya. -- Andrei
1. The product solution is more pleasant to work with, if you have no sugar for sum types like pattern matching.

2. It's the same as with exception specifications: product types make ignoring the error path easier and thus are more popular.
Feb 23 2015
prev sibling next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 23 February 2015 at 05:54:06 UTC, Manu wrote:
 On 23 February 2015 at 14:11, Andrei Alexandrescu via 
 Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 2/22/15 5:57 PM, Manu via Digitalmars-d wrote:
 I can easily visualise a way forward with RC.
Then do it. Frankly it seems to me you're doing anything you possibly can to talk yourself out of doing work.
Excellent technique to immediately invalidate everything someone says. Thanks for that. You dismissed absolutely everything I said, and then imply that I have no right to comment unless I do it myself.

It's got nothing to do with doing work. ARC (or something like it) is almost religiously opposed. We can't even have a reasonable conversation about it, or really explore its implications before someone (that ideally knows what they're doing) thinks about writing code. There's no room for progress in this environment.

What do you want me to do? I use manual RC throughout my code, but the experience in D today is identical to C++. I can confirm that it's equally terrible to any C++ implementation I've used. We have no tools to improve on it. And whatever, if it's the same as C++, I can live with it. I've had that my whole career (although it's sad, because we could do much better).

The problem, as always, is implicit allocations, and allocations from 3rd party libraries. While the default allocator remains incompatible with workloads I care about, that isolates us from virtually every library that's not explicitly written to care about my use cases. That's the situation in C++ forever. We're familiar with it, and it's shit. I have long hoped we would move beyond that situation in D, but it's a massive up-hill battle to even gain mindshare, regardless of actual implementation. There's no shared vision on this matter; the word is basically "GC is great, everyone loves it, use an RC lib".

How long do we wait for someone to invent a fantastical GC that solves our problems and works in D? I think it's practically agreed that no known design can work, but nobody wants to bite the bullet and accept the fact. We need to admit we have an impassable problem, and then maybe we can consider alternatives openly. Obviously, it would be disruptive, but it might actually work... which is better than where we seem to be heading (with a velocity of zero).
The fact is, people who are capable of approaching this problem in terms of actual code will never even attempt it until there's resounding consensus that it's worth exploring.
Personally I think what matters is getting D's situation regarding memory management sorted out, regardless of what it will look like in the end.

If I am a bit too quick jumping the gun about GC, it is because I have embraced GC languages in my line of work, so I tend to be aware that not all GCs are made alike, and some, like the ones from e.g. Aonix, are good enough for real-time situations: the kind where someone dies if the GC runs at the wrong moment.

Maybe such GC quality is impossible to achieve in D, I don't know. What I can say is that I cannot use D in my type of work, and keeping up with everything that happens on the JVM, .NET and mobile space already keeps me busy enough.
Feb 22 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 23 February 2015 at 07:19:56 UTC, Paulo Pinto wrote:
 Personally I think what matters is getting D's situation 
 regarding memory management sorted out, regardless of what it 
 will look like in the end.
This is exactly right. Either:

1. The compiler takes care of allocations/deallocations and makes refcounting part of the language implementation (but not necessarily the semantics), or

2. If allocation/deallocation is not the compiler's responsibility, then RC should be a library solution based on efficient, generally useful counter-semantics building blocks.

A compiler solution for RC and manual allocations is a firefighter solution where all similar use cases suffer. I.e. when you want something similar to, but not exactly like, what the compiler provides...
 If I am a bit too quick jumping the gun about GC, is that I 
 have embraced GC languages in my line of work, so I tend to be 
 aware that not all GCs are made alike and some like the ones 
 from e.g. Aonix are good enough for real-time situations: the 
 kind where someone dies if the GC runs at the wrong moment.

 Maybe such GC quality is impossible to achieve in D, I don't 
 know.
Well, hard real time does not mean fast, it means "bounded execution time". GC is not suitable for a lot of reasons when you want to get the most out of the hardware (memory requirements on diskless systems alone are sufficient to disqualify GC). When people use the term "real time" on the forums, they usually just mean hardware-efficient and low latency.
Feb 23 2015
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/22/15 9:53 PM, Manu via Digitalmars-d wrote:
 On 23 February 2015 at 14:11, Andrei Alexandrescu via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 2/22/15 5:57 PM, Manu via Digitalmars-d wrote:
 I can easily visualise a way forward with RC.
Then do it. Frankly it seems to me you're doing anything you possibly can to talk yourself out of doing work.
Excellent technique to immediately invalidate everything someone says. Thanks for that.
It's not immediate - it's a pattern our dialog has followed for years. Essentially I haven't yet managed to communicate with you.
 You dismissed absolutely everything I said, and then imply that I have
 no right to comment unless I do it myself.
 It's got nothing to do with doing work. ARC (or something like it) is
 almost religiously opposed. We can't even have a reasonable
 conversation about it, or really explore it's implications before
 someone (that ideally know's what they're doing) thinks about writing
 code. There's no room for progress in this environment.
I think RC is an important tool on our panoply. More so than Walter. But I have to say you'd do good to understand his arguments better; it doesn't seem you do.
 What do you want me to do? I use manual RC throughout my code, but the
 experience in D today is identical to C++. I can confirm that it's
 equally terrible to any C++ implementation I've used. We have no tools
 to improve on it.
Surely you have written something similar to std::shared_ptr by now. Please paste it somewhere and post a link to it, thanks. That would be a fantastic discussion starter.
 And whatever, if it's the same as C++, I can live with it. I've had
 that my whole career (although it's sad, because we could do much
 better).
 The problem, as always, is implicit allocations, and allocations from
 3rd party libraries. While the default allocator remains incompatible
 with workloads I care about, that isolates us from virtually every
 library that's not explicitly written to care about my use cases.
 That's the situation in C++ forever. We're familiar with it, and it's
 shit. I have long hoped we would move beyond that situation in D

 but it's a massive up-hill battle to even gain mindshare, regardless
 of actual implementation. There's no shared vision on this matter, the
 word is basically "GC is great, everyone loves it, use an RC lib".
It doesn't seem that way to me at all. Improving resource management is right there on the vision page for H1 2015. Right now people are busy fixing regressions for 2.067 (aiming for March 1; probably we won't make that deadline). In that context, posturing and stomping the ground demanding that others work for you right now stands in even starker contrast.
 How long do we wait for someone to invent a fantastical GC that solves
 our problems and works in D?
This elucubration belongs only to you. Nobody's waiting on that. Please read http://wiki.dlang.org/Vision/2015H1 again.
 I think it's practically agreed that no
 known design can work, but nobody wants to bite the bullet and accept
 the fact.
I don't understand where this perception is coming from.
 We need to admit we have an impassable problem, and then maybe we can
 consider alternatives openly. Obviously, it would be disruptive, but
 it might actually work... which is better than where we seem to be
 heading (with a velocity of zero).
Please forgive people who are working on getting 2.067 out for not making things you need their top priority right now.
 The fact is, people who are capable of approaching this problem in
 terms of actual code will never even attempt it until there's
 resounding consensus that it's worth exploring.
Could you please let me know how we can rephrase this paragraph on http://wiki.dlang.org/Vision/2015H1:

=============
Memory Management

We aim to improve D's handling of memory. That includes improving the garbage collector itself and also making D eminently usable with limited or no use of tracing garbage collection. We aim to make the standard library usable in its entirety without a garbage collector. Safe code should not require the presence of a garbage collector.
=============

Could you please let me know exactly what parts you don't understand or agree with, so we can change them. Thanks.

Andrei
Feb 23 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 24 February 2015 at 04:04, Andrei Alexandrescu via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 I think RC is an important tool on our panoply. More so than Walter. But I
 have to say you'd do good to understand his arguments better; it doesn't
 seem you do.
The only argument I can't/haven't addressed is the exception argument, which I have granted to Walter on authority. I think the other issues he's presented (at various times over the years, so I'm not sure what the current state is) can probably be addressed by the language. Scope seems like it could win us a lot of the problem cases alone, but we'd need to experiment with it for a year or so before we'll know.

It does annoy me that I can't comment on the exceptions case, but the fact is I've never typed 'throw' before, and never scrutinised the codegen. I have meant to take some time to familiarise myself with whatever appears out the other end of the compiler for a long time, but all compilers seem to be different, architectures are probably different too, and I just haven't found the time for something that has no use to me other than to argue this assertion with some authority.

Walter has said I should "just compile some code and look at it", which probably sounds fine if you have any idea what you're looking for, or an understanding of the differences between compilers. I don't; it's a fairly deep and involved process to give myself comprehensive knowledge of exceptions, and it has no value to me outside this argument.

That said, I'd still be surprised if it was a terminal argument though. Is it really a hard impasse? Or is it just a big/awkward burden on the exceptional path? Surely the performance of the unwind path is irrelevant? Does it bleed into the normal execution path?

My whole point is: where we seem to have no options for meaningful progress in GC land, as evidenced by years of stagnation, I figure it's the only practical direction we have. I recognise there is some GC action recently, but I haven't seen any activity that changes the fundamental problems?
 What do you want me to do? I use manual RC throughout my code, but the
 experience in D today is identical to C++. I can confirm that it's
 equally terrible to any C++ implementation I've used. We have no tools
 to improve on it.
 Surely you have written something similar to std::shared_ptr by now. Please paste it somewhere and post a link to it, thanks. That would be a fantastic discussion starter.
Not really. I don't really use C++; we generally switched from C++ back to C about 8 years ago. Here's something public of the type I am more familiar with:

https://github.com/TurkeyMan/fuji/blob/master/dist/include/d2/fuji/resource.d

It's not an excellent example; it's quick and dirty, hasn't received much more attention than getting it working, but I think it's fairly representative of a typical pattern, especially when interacting with C code. I'm not aware of any reasonable strategy I could use to eliminate the ref fiddling. 'scope' overloads would have solved the problem nicely.

COM is also an excellent candidate for consideration. If COM works well, then I imagine anything should work. Microsoft's latest C++ presents a model for this that I'm generally happy with: a distinct RC pointer type. We could do better by having implicit cast to scope(T*) (aka borrowing), which C++ can't express; scope(T*) would be to T^ and T* like const(T*) is to T* and immutable(T*).
 And whatever, if it's the same as C++, I can live with it. I've had
 that my whole career (although it's sad, because we could do much
 better).
 The problem, as always, is implicit allocations, and allocations from
 3rd party libraries. While the default allocator remains incompatible
 with workloads I care about, that isolates us from virtually every
 library that's not explicitly written to care about my use cases.
 That's the situation in C++ forever. We're familiar with it, and it's
 shit. I have long hoped we would move beyond that situation in D

 but it's a massive up-hill battle to even gain mindshare, regardless
 of actual implementation. There's no shared vision on this matter, the
 word is basically "GC is great, everyone loves it, use an RC lib".
It doesn't seem that way to me at all. Improving resource management is right there on the vision page for H1 2015. Right now people are busy fixing regressions for 2.067 (aiming for March 1; probably we won't make that deadline). In that context, posturing and stomping the ground that others work for you right now is in even more of a stark contrast.
I haven't said that. Review my first post; I never made any claims about what should/shouldn't be done, or made any demands of any sort. I just don't have any faith that GC can get us where we want to be. I was cautiously optimistic for some years, but I lost faith. I'm also critical (disappointed even) of the treatment of scope. I hope I'm wrong, like I hoped I was wrong about GC...

Lots of people have come in here and said "I won't use D because GC", and I've been defensive against those claims, despite being at high risk of similar tendencies myself. I don't think you can say I didn't give GC a fair chance to prove me wrong.
 How long do we wait for someone to invent a fantastical GC that solves
 our problems and works in D?
This elucubration belongs only to you. Nobody's waiting on that. Pleas read http://wiki.dlang.org/Vision/2015H1 again.
This is a bit of a red herring; the roadmap has no mention of ARC, or a practical substitution for the GC. This discussion was originally about ARC over the GC, specifically.

I firmly understand the push for nogc. I have applauded that effort many times. I have said in the past though that I'm not actually a strong supporter of nogc, and spoke critically of it initially. I'm still not really thrilled, but I do agree it's a necessary building block and I'm very happy it's a key focus, but I'm ultimately concerned it will lead to a place where separation of users into 2 camps (my primary ongoing criticism) is firmly established, and it will be justified by the effort expended to achieve that end.

Again, don't get me wrong, I agree we need this, because at this point I see no other satisfactory outcome, but I'm disappointed that we failed to achieve something more ambitious in terms of memory management (garbage collection in whatever form; gc or rc), and nogc users will lose a subset of the language.

I'm familiar with the world where some/most libraries aren't available to me, because people tend to use the most convenient memory management strategy by default, and that is incompatible with my environment. That is the world we will have in D. It's 'satisfactory', ie, it's workable for me and my people, but it's not ideal; it's a lost opportunity, and it's disappointing. Perhaps I've just been unrealistically hopeful for too long?

Perhaps it will fall to the allocator API to save us from this situation I envision, but I don't have a picture for that in my head. It feels like it will probably be awkward and complicated to me however I try and imagine the end product. The likely result of that will be people not using it (just sticking with default GC allocation) unless they are compelled by a high level of experience, or by their intended user base, again suggesting a separation into 2 worlds. I hope I'm completely wrong. Really!
 The fact is, people who are capable of approaching this problem in
 terms of actual code will never even attempt it until there's
 resounding consensus that it's worth exploring.
Could you please let me know how we can rephrase this paragraph on http://wiki.dlang.org/Vision/2015H1: ============= Memory Management We aim to improve D's handling of memory. That includes improving the garbage collector itself and also making D eminently usable with limited or no use of tracing garbage collection. We aim to make the standard library usable in its entirety without a garbage collector. Safe code should not require the presence of a garbage collector. ============= Could you please let me know exactly what parts you don't understand or agree with so we can change them. Thanks.
I understand what it says, and I generally agree with it. I agree that nogc will give us a C-like foundation to work outwards from, and that is a much better place than where we are today, so I have come to support the direction. I wouldn't complain if efficient RC was on the roadmap, but I agree it's outside the scope for the immediate future.

If it were to be said that I 'disagree' with some part of it, which isn't true, it would be that it risks leading to an end that I'm not sure is in our best interest; we will arrive at C vs C++. As I said above, and many many times in the past, I see the nogc effort leading to a place where libraries are firmly divided between 2 worlds. My criticisms of that kind have never been directly addressed.

My suspicion is that this is mainly motivated by the fact it is the simplest and lowest-level path; it will give the best low-level building-blocks, and I think that's probably a good thing. But I can also imagine more ambitious paths, like replacing the GC with an ambitious ARC implementation as the OP raised (if it's workable; I'm not convinced either way), which would require massive R&D and almost certainly lead to some radical changes. I think resistance is predicated mainly on a rejection of such radical changes, and that's fair enough, but is that rejection of radical change worth dividing D into 2 worlds like C/C++? I don't know of any discussion on this value tradeoff... not to mention the significant loss of language features to the nogc camp.

I'm not asking for any changes. I just gave an opinion to the OP. I can see there is RC work going on now; that's good. I expect (well, am hopeful) it will eventually lead to some form of ARC. We'll see.

Re-reading my first 8 posts, I still feel they are perfectly reasonable.
I'm not sure where exactly it is that I started 'a thing', but my feeling is it was only when you gave a long and somewhat aggressive reply, and then made a cheap attack on my character, that I tended to reply in kind. I also feel like I'm forced to reply to every point, otherwise I perpetuate this caricature you're prescribing to me (where I disappear when it gets 'hard', or rather, more time consuming than I have time for). Which is often true; it does become more time consuming than I have time for... does that mean I'm not entitled to input on this forum? I'll return to my hole now.
Feb 25 2015
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 7:12 AM, Manu via Digitalmars-d wrote:
 It does annoy me that I can't comment on the exceptions case,
That problem is easily correctable. But if you aren't interested in doing the homework, you're stuck with accepting what I say about it :-)
 That said, I'd still be surprised if it was a terminal argument
 though. Is it really a hard impasse? Or is it just a big/awkward
 burden on the exceptional path? Surely the performance of the unwind
 path is irrelevant? Does it bleed into the normal execution path?
I've answered these questions to you already. I've also quoted the ObjectiveC compiler document saying the same thing. At some point you're going to either have to spend some time investigating it yourself or cede the point.
 I recognise there is some GC action recently, but I haven't seen any
 activity that changes the fundamental problems?
DIP25.
 COM is also an excellent candidate for consideration. If COM works
 well, then I imagine anything should work.
 Microsoft's latest C++ presents a model for this that I'm generally
 happy with; distinct RC pointer type. We could do better by having
 implicit cast to scope(T*) (aka, borrowing) which C++ can't express;
 scope(T*) would be to T^ and T* like const(T*) is to T* and
 immutable(T*).
Microsoft's Managed C++ had two pointer types, and it went over like a lead zeppelin.
 There have been lots of people come in here and say "I won't use D
 because GC",
None of those people will be satisfied with improvements to the GC.
 This is a bit of a red herring, the roadmap has no mention of ARC, or
 practical substitution for the GC. This discussion was originally
 about ARC over the GC, specifically.
See the refcounted array thread.
 I hope I'm completely wrong. Really!
I suspect you are hoping for a magic switch: dmd foo.d -arc and voila! Everything will be reference counted rather than GC'd. This is never going to happen with D. I don't see a path to it that does not involve crippling problems and compromises. However, D will become usable with minimal or no use of the GC, by using components that are allocation agnostic, and selection of types that are RC'd. I suspect most programs will wind up using combinations of GC and RC.
 I wouldn't complain if efficient RC was on the roadmap, but I agree
 it's outside the scope for the immediate future.
Refcounted array thread.
 As I said above, and many many times in the past, I see the @nogc
 effort leading to a place where libraries are firmly divided between 2
 worlds. My criticisms of that kind have never been directly addressed.
I don't buy that. The canonical example is std.algorithm, which is a collection of highly useful components that are allocation-strategy agnostic. I have since added splitterLines(), which replaces the old GC-based splitLines() function with an agnostic component. This is an example of the way forward.
Feb 25 2015
next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Thursday, 26 February 2015 at 00:54:57 UTC, Walter Bright 
wrote:
 COM is also an excellent candidate for consideration. If COM 
 works
 well, then I imagine anything should work.
 Microsoft's latest C++ presents a model for this that I'm 
 generally
 happy with; distinct RC pointer type. We could do better by 
 having
 implicit cast to scope(T*) (aka, borrowing) which C++ can't 
 express;
 scope(T*) would be to T^ and T* like const(T*) is to T* and
 immutable(T*).
Microsoft's Managed C++ had two pointer types, and it went over like a lead zeppelin.
Rust currently has four or five pointer types (depending on how you define a pointer) and it seems to be quite popular.
Feb 25 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 5:19 PM, weaselcat wrote:
 Rust currently has four or five pointer types(depending on how you define
 pointer) and it seems to be quite popular.
We'll see. I've already seen some complaints about that aspect.
Feb 25 2015
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 26 February 2015 at 00:54:57 UTC, Walter Bright 
wrote:
 On 2/25/2015 7:12 AM, Manu via Digitalmars-d wrote:
 It does annoy me that I can't comment on the exceptions case,
...
 COM is also an excellent candidate for consideration. If COM 
 works
 well, then I imagine anything should work.
 Microsoft's latest C++ presents a model for this that I'm 
 generally
 happy with; distinct RC pointer type. We could do better by 
 having
 implicit cast to scope(T*) (aka, borrowing) which C++ can't 
 express;
 scope(T*) would be to T^ and T* like const(T*) is to T* and
 immutable(T*).
Microsoft's Managed C++ had two pointer types, and it went over like a lead zeppelin.
This is not true for those of us working on Windows. It just got replaced by C++/CLI, which was an improved version based on feedback about Managed C++. The main difference was the removal of the double-underscore prefixes and making the keywords semantic. The multiple pointer types are still there. https://msdn.microsoft.com/en-us/library/ms379603%28v=vs.80%29.aspx Maybe it failed the goal of having C++ developers fully embrace .NET, but it achieved its goal of providing an easier way to integrate existing C++ code into .NET applications, instead of the P/Invoke dance. The same syntax extensions became the foundation of the second coming of COM, aka WinRT, which is at the kernel of Windows 8. Especially because this, "Creating Windows Runtime Components in C++" https://msdn.microsoft.com/library/windows/apps/hh441569/ is way simpler than this, "Creating a Basic Windows Runtime Component Using Windows Runtime Library" https://msdn.microsoft.com/en-us/library/jj155856%28v=vs.110%29.aspx It might happen that with C++14 and Windows 10, C++/CLI and C++/CX will become irrelevant, but until then, they have been useful for integrating C++ code into .NET and Store applications. -- Paulo
Feb 25 2015
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2015 11:50 PM, Paulo Pinto wrote:
 Maybe it failed the goal of having C++ developers fully embrace .NET, but it
 achieved its goal of providing an easier way to integrate existing C++ code
into
 .NET applications, instead of the P/Invoke dance.
I wasn't referring to technical success. There is no doubt that multiple pointer types technically works. I was referring to acceptance by the community. Back in the old DOS days, there were multiple pointer types (near and far). Programmers put up with that because it was the only way, but they HATED HATED HATED it.
Feb 26 2015
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/22/2015 8:36 AM, Manu via Digitalmars-d wrote:
 I have no idea where to start.
Start by making a ref counted type and see what the pain points are.
Feb 22 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 February 2015 at 07:47, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 2/22/2015 8:36 AM, Manu via Digitalmars-d wrote:
 I have no idea where to start.
Start by making a ref counted type and see what the pain points are.
All my ref counting types fiddle with the ref in every assignment, or every function call and return. Unless the language has some sort of support for ref counting, I don't know how we can do anything about that.
Feb 22 2015
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Monday, 23 February 2015 at 01:38:35 UTC, Manu wrote:
 All my ref counting types fiddle with the ref in every 
 assignment, or every function call and return.
Hmm, the optimizer could potentially tell "inc X; dec X;" is useless and remove it without knowing what it is for.
Feb 22 2015
next sibling parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 February 2015 at 11:41, Adam D. Ruppe via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Monday, 23 February 2015 at 01:38:35 UTC, Manu wrote:
 All my ref counting types fiddle with the ref in every assignment, or
 every function call and return.
Hmm, the optimizer could potentially tell "inc X; dec X;" is useless and remove it without knowing what it is for.
Yeah, except we're talking about libraries, and in that context I often have:

    extern(C) void IncRef(T*);
    extern(C) void DecRef(T*);

The optimiser can't offer anything.
Feb 22 2015
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 23 February 2015 at 01:41:17 UTC, Adam D. Ruppe wrote:
 On Monday, 23 February 2015 at 01:38:35 UTC, Manu wrote:
 All my ref counting types fiddle with the ref in every 
 assignment, or every function call and return.
Hmm, the optimizer could potentially tell "inc X; dec X;" is useless and remove it without knowing what it is for.
It is not that easy. First you need to increment/decrement in an atomic manner (unless we finally decide to fix holes in the type system) so the optimizer is mostly blind. But even if it could (we are not far from being able to do it), in most scenarios it is still an issue as you get potential exception unwinding. The unwind path must find the right reference count in there.
Feb 22 2015
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 23 February 2015 at 01:41:17 UTC, Adam D. Ruppe wrote:
 On Monday, 23 February 2015 at 01:38:35 UTC, Manu wrote:
 All my ref counting types fiddle with the ref in every 
 assignment, or every function call and return.
Hmm, the optimizer could potentially tell "inc X; dec X;" is useless and remove it without knowing what it is for.
INPUT:

    try {
        nonsharedobj._rc++;
        …
    } finally {
        nonsharedobj._rc--;
        if (nonsharedobj._rc == 0) destroy…
    }

OPTIMIZED:

    try {
        …
    } finally {
        if (nonsharedobj._rc == 0) destroy…
    }

----

Thanks to the messed-up modular arithmetic that D has chosen, you cannot, in the general case, assume that a non-shared live object does not have rc==0 due to wrapping integers.
Feb 23 2015
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Monday, 23 February 2015 at 08:27:52 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 23 February 2015 at 01:41:17 UTC, Adam D. Ruppe 
 wrote:
 On Monday, 23 February 2015 at 01:38:35 UTC, Manu wrote:
 All my ref counting types fiddle with the ref in every 
 assignment, or every function call and return.
Hmm, the optimizer could potentially tell "inc X; dec X;" is useless and remove it without knowing what it is for.
INPUT:

    try {
        nonsharedobj._rc++;
        …
    } finally {
        nonsharedobj._rc--;
        if (nonsharedobj._rc == 0) destroy…
    }

OPTIMIZED:

    try {
        …
    } finally {
        if (nonsharedobj._rc == 0) destroy…
    }

----

Thanks to the messed-up modular arithmetic that D has chosen, you cannot, in the general case, assume that a non-shared live object does not have rc==0 due to wrapping integers.
You mean when there are more than 2^64 references to the object?
Feb 23 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/23/2015 12:33 AM, Tobias Pankrath wrote:
 On Monday, 23 February 2015 at 08:27:52 UTC, Ola Fosheim Grøstad wrote:
 Thanks to the messed up modular arithmetics that D has chosen you cannot
 assume the a non-shared live object does not have a rc==0 due to wrapping
 integers, in the general case.
You mean when there are more than 2^64 references to the object?
Yeah, it'll wrap when there are more references than can even theoretically fit in the address space. I'm not worried about it :-)
Feb 23 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 23 February 2015 at 08:50:28 UTC, Walter Bright wrote:
 On 2/23/2015 12:33 AM, Tobias Pankrath wrote:
 On Monday, 23 February 2015 at 08:27:52 UTC, Ola Fosheim 
 Grøstad wrote:
 Thanks to the messed up modular arithmetics that D has chosen 
 you cannot
 assume the a non-shared live object does not have a rc==0 due 
 to wrapping
 integers, in the general case.
You mean when there are more than 2^64 references to the object?
Yeah, it'll wrap when there are more references than can even theoretically fit in the address space. I'm not worried about it :-)
You don't worry about a lot of things that you ought to worry about :-P
Feb 23 2015
prev sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 23 February 2015 at 08:33:59 UTC, Tobias Pankrath 
wrote:
 You mean when there are more than 2^64 references to the object?
I mean that the optimizer does not know what _rc is. The optimizer can only elide what it can prove, by sound logic, not by assumptions.
Feb 23 2015
parent reply "Daniel Murphy" <yebbliesnospam gmail.com> writes:
"Ola Fosheim Grøstad" " wrote in message 
news:hwwotfmkjvwsempqibla forum.dlang.org...

 I mean that the optimizer does not know what _rc is. The optimizer can 
 only elide what it can prove, by sound logic, not by assumptions.
The whole point of compiler-supported RC is that the optimizer can make assumptions.
Feb 23 2015
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 23 February 2015 at 09:01:23 UTC, Daniel Murphy wrote:
 "Ola Fosheim Grøstad" " wrote in message 
 news:hwwotfmkjvwsempqibla forum.dlang.org...

 I mean that the optimizer does not know what _rc is. The 
 optimizer can only elide what it can prove, by sound logic, 
 not by assumptions.
The whole point of compiler-supported RC is that the optimizer can make assumptions.
Yes, but then it makes no sense to tell Manu that he should use a library RC... It is nice to see at least one person admit that D needs to depart from modular arithmetic to solve real world problems... Because that is the implication of your statement. ;)
Feb 23 2015
prev sibling parent reply "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Monday, 23 February 2015 at 01:38:35 UTC, Manu wrote:
 On 23 February 2015 at 07:47, Walter Bright via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 2/22/2015 8:36 AM, Manu via Digitalmars-d wrote:
 I have no idea where to start.
Start by making a ref counted type and see what the pain points are.
All my ref counting types fiddle with the ref in every assignment, or every function call and return. Unless the language has some sort of support for ref counting, I don't know how we can do anything about that.
There's no move constructor in D, so how did you manage that?
Feb 23 2015
parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 23 February 2015 at 20:24, Jakob Ovrum via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Monday, 23 February 2015 at 01:38:35 UTC, Manu wrote:
 On 23 February 2015 at 07:47, Walter Bright via Digitalmars-d

 <digitalmars-d puremagic.com> wrote:
 On 2/22/2015 8:36 AM, Manu via Digitalmars-d wrote:
 I have no idea where to start.
Start by making a ref counted type and see what the pain points are.
All my ref counting types fiddle with the ref in every assignment, or every function call and return. Unless the language has some sort of support for ref counting, I don't know how we can do anything about that.
There's no move constructor in D, so how did you manage that?
I wrote it above.

    struct Thing
    {
        T* instance;

        this(this) { Inc(instance); }
        ~this() { Dec(instance); }

        // this would really assist RC when 'scope' is inferred liberally.
        this(this) scope {}
        ~this() scope {}
    }

In this case, rc is part of the instance; no reason to separate it when RC is not a generalised concept. Of course the structure can be generalised and fiddled/meta-ed to suit purpose in any number of ways. Inc's and Dec's galore! I'm not sure what a move constructor would give me over this.
Feb 23 2015
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 22 February 2015 at 00:43:47 UTC, Manu wrote:
 I personally think ARC in D is the only way forwards. That is an
 unpopular opinion however... although I think I'm just being 
 realistic
 ;)
I've considered adding support for it in SDC in the future, but man, reading the ARC spec feels like a dive into insanity. I've rarely seen such an overcomplicated system.
Feb 21 2015
parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 22 February 2015 at 14:25, deadalnix via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Sunday, 22 February 2015 at 00:43:47 UTC, Manu wrote:
 I personally think ARC in D is the only way forwards. That is an
 unpopular opinion however... although I think I'm just being realistic
 ;)
I've considered adding support for it in SDC in the future, but man, reading the ARC spec feels like a dive into insanity. I've rarely seen such an overcomplicated system.
But it IS a way forwards... can you suggest another way forwards using a sufficiently fancy GC? While there are no visible alternatives (as has been the case for as long as I've been here), I don't think complexity can be considered a road block. Would a theoretical advanced GC be any less complex? What are the complexities? Can we design to address them elegantly?
Feb 22 2015
parent "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 22 February 2015 at 16:37:15 UTC, Manu wrote:
 But it IS a way forwards... can you suggest another way 
 forwards using
 a sufficiently fancy GC? While there are no visible 
 alternatives (as
 has been the case for as long as I've been here), I don't think
 complexity can be considered a road block. Would a theoretical
 advanced GC be any less complex?

 What are the complexities? Can we design to address them 
 elegantly?
It is overcomplex to accommodate the historical elements of ObjC.
Feb 22 2015
prev sibling next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Sunday, 22 February 2015 at 00:43:47 UTC, Manu wrote:
 On 22 February 2015 at 05:20, JN via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. 
 What are the
 benefits of ARC over GC? Is it just about predictability of 
 resource
 freeing? Would ARC make sense in D?
D's GC is terrible, and after 6 years hanging out in this place, I have seen precisely zero development on the GC front. Nobody can even imagine, let alone successfully implement, a GC that covers realtime use requirements.
I lack the skills to be hired by Aonix, but the military seems to think it is OK to have a real-time JVM taking care of missile systems. http://www.spacewar.com/reports/Lockheed_Martin_Selects_Aonix_PERC_Virtual_Machine_For_Aegis_Weapon_System_999.html They surely have considered the situation "GC was called => where did the missile go". It doesn't get more real time than that.
Feb 21 2015
prev sibling next sibling parent reply "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Sunday, 22 February 2015 at 00:43:47 UTC, Manu wrote:
 D's GC is terrible, and after 6 years hanging out in this 
 place, I
 have seen precisely zero development on the GC front.
You must have missed RTInfo, Rainer's precise heap scanner and Sociomantic's concurrent GC.
Feb 22 2015
parent Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 22 February 2015 at 21:31, Jakob Ovrum via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Sunday, 22 February 2015 at 00:43:47 UTC, Manu wrote:
 D's GC is terrible, and after 6 years hanging out in this place, I
 have seen precisely zero development on the GC front.
You must have missed RTInfo, Rainer's precise heap scanner and Sociomantic's concurrent GC.
Rainer's GC was even slower (at the time it was presented), but it offers the advantage that it doesn't leak like a sieve. FWIW, I am in favour of migrating to Rainer's GC, but it doesn't really address any problems. Sociomantic's GC takes advantage of a particular OS; it's not portable.
Feb 22 2015
prev sibling next sibling parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Sunday, 22 February 2015 at 00:43:47 UTC, Manu wrote:
 On 22 February 2015 at 05:20, JN via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. 
 What are the
 benefits of ARC over GC? Is it just about predictability of 
 resource
 freeing? Would ARC make sense in D?
D's GC is terrible, and after 6 years hanging out in this place, I have seen precisely zero development on the GC front.
There has been a flood of GC improvements in druntime from Martin Nowak and Rainer Schuetze over the last few months. Probably nothing that would satisfy your requirements, but it looks like some sizeable speedups nonetheless.
Feb 22 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/22/15 3:39 AM, John Colvin wrote:
 On Sunday, 22 February 2015 at 00:43:47 UTC, Manu wrote:
 On 22 February 2015 at 05:20, JN via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. What
 are the
 benefits of ARC over GC? Is it just about predictability of resource
 freeing? Would ARC make sense in D?
D's GC is terrible, and after 6 years hanging out in this place, I have seen precisely zero development on the GC front.
There has been a flood of GC improvements in druntime from Martin Nowak and Rainer Schuetze over the last few months. Probably nothing that would satisfy your requirements, but it looks like some sizeable speedups nonetheless.
Martin, Rainer - it would be great if you could make public some measurements. -- Andrei
Feb 22 2015
prev sibling parent Martin Nowak <code+news.digitalmars dawg.eu> writes:

On 02/22/2015 01:43 AM, Manu via Digitalmars-d wrote:
 D's GC is terrible, and after 6 years hanging out in this place, I 
 have seen precisely zero development on the GC front. Nobody can
 even imagine, let alone successfully implement a GC that covers
 realtime use requirements.
We have achieved quite a speedup on the GC front (up to 50% faster allocations), and we are working on a few more improvements for the next release (good for at least another 20% speedup). There is even a useful idea for how to implement an incremental GC. I might give a talk about the current state at DConf.
Feb 27 2015
prev sibling next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Saturday, 21 February 2015 at 19:20:48 UTC, JN wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. 
 What are the benefits of ARC over GC? Is it just about 
 predictability of resource freeing? Would ARC make sense in D?
As one of the GC developers explains on Reddit, the GC never worked properly. http://www.reddit.com/r/programming/comments/2wo18p/mac_apps_that_use_garbage_collection_must_move_to/coss311
Feb 21 2015
prev sibling parent reply "Baz" <bb.temp gmx.com> writes:
On Saturday, 21 February 2015 at 19:20:48 UTC, JN wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. 
 What are the benefits of ARC over GC? Is it just about 
 predictability of resource freeing? Would ARC make sense in D?
I'm not here to be liked, so let's throw the bomb: http://www.codingwisdom.com/codingwisdom/2012/09/reference-counted-smart-pointers-are-for-retards.html The guy, despite... well, you know, is right: smart pointers are for beginners. The more experience you have, the more you can manage memory on your own.
Feb 26 2015
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 26 February 2015 at 12:06:53 UTC, Baz wrote:
 On Saturday, 21 February 2015 at 19:20:48 UTC, JN wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. 
 What are the benefits of ARC over GC? Is it just about 
 predictability of resource freeing? Would ARC make sense in D?
Im not here to be liked so let's throw the bomb: http://www.codingwisdom.com/codingwisdom/2012/09/reference-counted-smart-pointers-are-for-retards.html The guy, despite of...well you know, is right, smart ptr are for beginner. The more the experience, the more you can manage memory by your own.
I am behind a firewall, so I cannot read it; still, here goes my take on it. Only people writing code on their own, with full control of 100% of the application code and without any third-party libraries, can manage memory on their own. Add a third-party library without the possibility of changing its code, teams with more than 10 developers, teams that are distributed, teams with a varied skill set, teams with member rotation, overtime, and I don't believe anyone in this context is able to keep in their head the whole set of ownership relations of heap-allocated memory. I have seen it happen lots of times in enterprise projects. Besides, the daily CVE exploit notification list is proof of that. -- Paulo
Feb 26 2015
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 26 February 2015 at 12:06:53 UTC, Baz wrote:
 On Saturday, 21 February 2015 at 19:20:48 UTC, JN wrote:
 https://developer.apple.com/news/?id=02202015a

 Interesting...

 Apple is dropping GC in favor of automatic reference counting. 
 What are the benefits of ARC over GC? Is it just about 
 predictability of resource freeing? Would ARC make sense in D?
Im not here to be liked so let's throw the bomb: http://www.codingwisdom.com/codingwisdom/2012/09/reference-counted-smart-pointers-are-for-retards.html The guy, despite of...well you know, is right, smart ptr are for beginner. The more the experience, the more you can manage memory by your own.
Programmers who think they are so smart they can handle everything are the very first to screw up, because of a lack of self-awareness. Good, now that we've established the author is the retard here, how seriously do we take his rant?
Feb 26 2015