
digitalmars.D - John Carmack applauds D's pure attribute

reply Trass3r <un@known.com> writes:

Feb 25 2012
next sibling parent reply "Nick Sabalausky" <a@a.a> writes:
"Trass3r" <un@known.com> wrote in message news:op.v98sager3ncmek@enigma...

It's not showing the actual quote, can someone paste it?
Feb 25 2012
parent reply "Yao Gomez" <yao.gomez@gmail.com> writes:
On Saturday, 25 February 2012 at 16:08:40 UTC, Nick Sabalausky wrote:
 "Trass3r" <un@known.com> wrote in message news:op.v98sager3ncmek@enigma...

 It's not showing the actual quote, can someone paste it?
It works for me. God bless JavaScript. Anyways, here's the quote:
 Using D for my daily work is not an option, but I applaud their
 inclusion of a "pure" attribute.
Feb 25 2012
next sibling parent Andrej Mitrovic <andrej.mitrovich@gmail.com> writes:
Looks like that GoingNative interview has had some impact. Pretty cool. :)
Feb 25 2012
prev sibling next sibling parent reply "Nick Sabalausky" <a@a.a> writes:
"Yao Gomez" <yao.gomez@gmail.com> wrote in message news:pdyvfpeaigfvorkfnddi@forum.dlang.org...
 It works for me. God bless JavaScript. Anyways, here's the quote:
  Using D for my daily work is not an option, but I applaud their
  inclusion of a "pure" attribute.
Interesting. I wish he'd elaborate on why it's not an option for his daily work.
Feb 25 2012
next sibling parent reply "H. S. Teoh" <hsteoh@quickfur.ath.cx> writes:
On Sat, Feb 25, 2012 at 01:45:34PM -0500, Nick Sabalausky wrote:
 Interesting. I wish he'd elaborate on why it's not an option for his
 daily work.
[...] I can't speak for him, but it could be employer resistance, or perhaps the current volatile state of D. But this is just a wild guess.

T

--
To provoke is to call someone stupid; to argue is to call each other stupid.
Feb 25 2012
next sibling parent Paulo Pinto <pjmlp@progtools.org> writes:
On 25.02.2012 20:05, H. S. Teoh wrote:
 [...] I can't speak for him, but it could be employer resistance, or
 perhaps the current volatile state of D. But this is just a wild guess.
Most likely because D tooling is still not up to speed with what C and C++ provide. Not to mention the console SDKs and the constraints the platform approval processes impose.
Feb 25 2012
prev sibling parent "Nick Sabalausky" <a@a.a> writes:
"H. S. Teoh" <hsteoh@quickfur.ath.cx> wrote in message news:mailman.107.1330196619.24984.digitalmars-d@puremagic.com...
 [...] I can't speak for him, but it could be employer resistance, or
 perhaps the current volatile state of D. But this is just a wild guess.
Yea, or (for all we know) something more inherent to D. In any case it'd be very interesting to find out. I'm sure he'll elaborate at some point, in some place, but I'm quite anxious in the meantime :)

Come to think of it, I'm sure the lack of game console targets would be at least one factor. Not so sure about "employer resistance" though. *Technically*, he's just an employee, not CEO or anything (IIRC), but it's a very unique case: he's been known to have more power there than his "superiors", at least when he chooses to wield it.
Feb 25 2012
prev sibling next sibling parent Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 25-02-2012 19:45, Nick Sabalausky wrote:
 Interesting. I wish he'd elaborate on why it's not an option for his
 daily work.
The state of D on Windows as compared to C++ comes to mind...

--
- Alex
Feb 25 2012
prev sibling parent reply "so" <so@so.so> writes:
On Saturday, 25 February 2012 at 18:47:12 UTC, Nick Sabalausky wrote:
 Interesting. I wish he'd elaborate on why it's not an option for his
 daily work.
Not the design but the implementation; memory management would be the first.
Feb 25 2012
parent reply "Peter Alexander" <peter.alexander.au@gmail.com> writes:
On Saturday, 25 February 2012 at 20:13:42 UTC, so wrote:
 Not the design but the implementation; memory management would be the
 first.
Memory management is not a problem. You can manage memory just as easily in D as you can in C or C++. Just don't use global new, which they'll already be doing.
Feb 25 2012
next sibling parent reply "so" <so@so.so> writes:
On Saturday, 25 February 2012 at 20:26:11 UTC, Peter Alexander wrote:

 Memory management is not a problem. You can manage memory just 
 as easily in D as you can in C or C++. Just don't use global 
 new, which they'll already be doing.
The C++ standard library is not based around a GC. D promises both memory-management possibilities, yet its standard library is, as of now, based around the GC. You are talking about design; when it comes to implementation, last time I checked, not using the standard memory manager also means not using the standard library.

A big codebase in another language is a problem shared by most of us, and that is by far the most significant one. Yet I thought we were talking about "why not switch to D" rather than "why not switch to another language".
Feb 25 2012
parent "Peter Alexander" <peter.alexander.au@gmail.com> writes:
On Saturday, 25 February 2012 at 20:50:53 UTC, so wrote:
 The C++ standard library is not based around a GC. D promises both
 memory-management possibilities, yet its standard library is, as of now,
 based around the GC. [...]
id, like most games developers, do not use the C++ standard library and they certainly wouldn't use the D standard library, so it's not an issue here. You're right in general, though.
Feb 25 2012
prev sibling parent reply Paulo Pinto <pjmlp@progtools.org> writes:
On 25.02.2012 21:26, Peter Alexander wrote:
 Memory management is not a problem. You can manage memory just as easily
 in D as you can in C or C++. Just don't use global new, which they'll
 already be doing.
I couldn't agree more. The GC issue comes up often, but I personally think that the main issue is that the GC needs to be optimized, not that manual memory management is required. Most standard malloc()/free() implementations are actually slower than most advanced GC algorithms.

--
Paulo
Feb 25 2012
next sibling parent reply "Peter Alexander" <peter.alexander.au@gmail.com> writes:
On Saturday, 25 February 2012 at 22:08:31 UTC, Paulo Pinto wrote:
 I couldn't agree more. The GC issue comes up often, but I personally
 think that the main issue is that the GC needs to be optimized, not that
 manual memory management is required. Most standard malloc()/free()
 implementations are actually slower than most advanced GC algorithms.
If you require realtime performance then you don't use either the GC or malloc/free. You allocate blocks up front and use those when you need consistent high performance.

It doesn't matter how optimised the GC is. The eventual collection is inevitable, and if it takes anything more than a small fraction of a second then it will be too slow for realtime use.
Feb 25 2012
parent reply Paulo Pinto <pjmlp@progtools.org> writes:
On 25.02.2012 23:17, Peter Alexander wrote:
 If you require realtime performance then you don't use either the GC or
 malloc/free. You allocate blocks up front and use those when you need
 consistent high performance. [...]
There are realtime GC algorithms, and they are actually in use in systems like the French Ground Master 400 missile radar. There is no more realtime than that. I surely would not like such systems to have a stop-the-world GC.

--
Paulo
Feb 25 2012
next sibling parent reply Andrew Wiley <wiley.andrew.j@gmail.com> writes:
On Sat, Feb 25, 2012 at 4:29 PM, Paulo Pinto <pjmlp@progtools.org> wrote:
 There are realtime GC algorithms, and they are actually in use in
 systems like the French Ground Master 400 missile radar. There is no
 more realtime than that. I surely would not like such systems to have a
 stop-the-world GC.
Can you give any description of how that is done (or any relevant papers), and how it can be made to function reasonably on low end consumer hardware and standard operating systems? Without that, your example is irrelevant.

Azul has already shown that realtime non-pause GC is certainly possible, but only with massive servers, lots of CPUs, and large kernel modifications. And, as far as I'm aware, that still didn't solve the generally memory-hungry behavior of the JVM.
Feb 25 2012
parent reply Paulo Pinto <pjmlp@progtools.org> writes:
On 25.02.2012 23:40, Andrew Wiley wrote:
 Can you give any description of how that is done (or any relevant
 papers), and how it can be made to function reasonably on low end
 consumer hardware and standard operating systems? Without that, your
 example is irrelevant. [...]
Sure.

http://www.militaryaerospace.com/articles/2009/03/thales-chooses-aonix-perc-virtual-machine-software-for-ballistic-missile-radar.html
http://www.atego.com/products/aonix-perc-raven/

--
Paulo
Feb 25 2012
parent reply Andrew Wiley <wiley.andrew.j@gmail.com> writes:
On Sat, Feb 25, 2012 at 5:01 PM, Paulo Pinto <pjmlp@progtools.org> wrote:
 Sure.

 http://www.militaryaerospace.com/articles/2009/03/thales-chooses-aonix-perc-virtual-machine-software-for-ballistic-missile-radar.html
 http://www.atego.com/products/aonix-perc-raven/
Neither of those links have any information on how this actually works. In fact, the docs on Atego's site pretty much state that their JVM is highly specialized and requires programmers to follow very different rules from typical Java, which makes this technology look less and less viable for general usage.

I don't see how this example is relevant for D. I can't find any details on the system you're mentioning, but assuming they developed something similar to Azul, the fundamental problem is that D has to target platforms in general use, not highly specialized server environments with modified kernels and highly parallel hardware. Until such environments come into general use (assuming they do at all; Azul seems to be having trouble getting their virtual memory manipulation techniques merged into the Linux kernel), D can't make use of them, and we're right back to saying that GCs have unacceptably long pause times for realtime applications.
Feb 25 2012
parent reply Paulo Pinto <pjmlp@progtools.org> writes:
On 26.02.2012 00:45, Andrew Wiley wrote:
 Neither of those links have any information on how this actually works.
 In fact, the docs on Atego's site pretty much state that their JVM is
 highly specialized and requires programmers to follow very different
 rules from typical Java, which makes this technology look less and less
 viable for general usage. [...]
In Java's case they are following the Java specification for real time applications:

http://java.sun.com/javase/technologies/realtime/index.jsp

I did not mention any specific algorithm because, like most companies, I am sure Atego patents most of it. Still, a quick search in Google reveals a few papers:

http://research.microsoft.com/apps/video/dl.aspx?id=103698&l=i
http://www.cs.cmu.edu/~spoons/gc/vee05.pdf
http://domino.research.ibm.com/comm/research_people.nsf/pages/bacon.presentations.html/$FILE/Bacon05BravelyTalk.ppt
http://www.cs.technion.ac.il/~erez/Papers/real-time-pldi.pdf
http://www.cs.purdue.edu/homes/lziarek/pldi10.pdf

I know GC use is a bit of a religious debate, but C++ was the very last systems programming language without automatic memory management, and even C++ has got some form of it in C++11.

At least in the desktop area, a decade from now, systems programming in desktop OSes will most likely either make use of reference counting (WinRT or ARC) or use a GC (similar to Spin, Inferno, Singularity, Oberon). This is how I see the trend going, but hey, I am just a simple person and I get to be wrong lots of times.

--
Paulo
Feb 25 2012
next sibling parent reply Alex Rønne Petersen <xtzgzorex@gmail.com> writes:
On 26-02-2012 08:48, Paulo Pinto wrote:
 At least in the desktop area, a decade from now, systems programming in
 desktop OSes will most likely either make use of reference counting
 (WinRT or ARC) or use a GC (similar to Spin, Inferno, Singularity,
 Oberon). [...]
Well, there is Clay, which doesn't use a GC.

--
- Alex
Feb 26 2012
parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 26.02.2012 12:44, schrieb Alex Rønne Petersen:
 On 26-02-2012 08:48, Paulo Pinto wrote:
 Am 26.02.2012 00:45, schrieb Andrew Wiley:
 On Sat, Feb 25, 2012 at 5:01 PM, Paulo Pinto<pjmlp progtools.org> wrote:
 Am 25.02.2012 23:40, schrieb Andrew Wiley:
 On Sat, Feb 25, 2012 at 4:29 PM, Paulo Pinto<pjmlp progtools.org>
 wrote:
 Am 25.02.2012 23:17, schrieb Peter Alexander:


 On Saturday, 25 February 2012 at 22:08:31 UTC, Paulo Pinto wrote:
 Am 25.02.2012 21:26, schrieb Peter Alexander:
 On Saturday, 25 February 2012 at 20:13:42 UTC, so wrote:
 On Saturday, 25 February 2012 at 18:47:12 UTC, Nick Sabalausky
 wrote:

 Interesting. I wish he'd elaborate on why it's not an option
 for his
 daily
 work.
Not the design but the implementation, memory management would be the first.
Memory management is not a problem. You can manage memory just as easily in D as you can in C or C++. Just don't use global new, which they'll already be doing.
I couldn't agree more. The GC issue comes around often, but I personally think that the main issue is that the GC needs to be optimized, not that manual memory management is required. Most standard compiler malloc()/free() implementations are actually slower than most advanced GC algorithms.
If you require realtime performance then you don't use either the GC or malloc/free. You allocate blocks up front and use those when you need consistent high performance. It doesn't matter how optimised the GC is. The eventual collection is inevitable and if it takes anything more than a small fraction of a second then it will be too slow for realtime use.
There are GC realtime algorithms, which are actually in use, in systems like the French Ground Master 400 missile radar system. There is no more realtime than that. I surely would not like that such systems had a pause the world GC.
Can you give any description of how that is done (or any relevant papers), and how it can be made to function reasonably on low end consumer hardware and standard operating systems? Without that, your example is irrelevant. Azul has already shown that realtime non-pause GC is certainly possible, but only with massive servers, lots of CPUs, and large kernel modifications. And, as far as I'm aware, that still didn't solve the generally memory-hungry behaviors of the JVM.
Sure. http://www.militaryaerospace.com/articles/2009/03/thales-chooses-aonix-perc-virtual-machine-software-for-ballistic-missile-radar.html http://www.atego.com/products/aonix-perc-raven/
Neither of those links have any information on how this actually works. In fact, the docs on Atego's site pretty much state that their JVM is highly specialized and requires programmers to follow very different rules from typical Java, which makes this technology look less and less viable for general usage.

I don't see how this example is relevant for D. I can't find any details on the system you're mentioning, but assuming they developed something similar to Azul, the fundamental problem is that D has to target platforms in general use, not highly specialized server environments with modified kernels and highly parallel hardware. Until such environments come into general use (assuming they do at all; Azul seems to be having trouble getting their virtual memory manipulation techniques merged into the Linux kernel), D can't make use of them, and we're right back to saying that GCs have unacceptably long pause times for realtime applications.
In Java's case, they are following the Java specification for real-time applications:
http://java.sun.com/javase/technologies/realtime/index.jsp

I did not mention any specific algorithm because, like most companies, I am sure Atego patents most of it. Still, a quick search in Google reveals a few papers:
http://research.microsoft.com/apps/video/dl.aspx?id=103698&l=i
http://www.cs.cmu.edu/~spoons/gc/vee05.pdf
http://domino.research.ibm.com/comm/research_people.nsf/pages/bacon.presentations.html/$FILE/Bacon05BravelyTalk.ppt
http://www.cs.technion.ac.il/~erez/Papers/real-time-pldi.pdf
http://www.cs.purdue.edu/homes/lziarek/pldi10.pdf

I know GC use is a bit of a religious debate, but C++ was the very last systems programming language without automatic memory management, and even C++ has got some form of it in C++11. At least in the desktop area, a decade from now, systems programming in desktop OSs will most likely either make use of reference counting (WinRT or ARC) or a GC (similar to Spin, Inferno, Singularity, Oberon). This is how I see the trend going, but hey, I am just a simple person and I get to be wrong lots of times. -- Paulo
Well, there is Clay, which doesn't use a GC.
I was unaware of it, thanks for pointing it out. It is still at 0.2 and the newsgroup only has 13 messages, let's see how far it goes. -- Paulo
Feb 26 2012
parent "so" <so so.so> writes:
On Sunday, 26 February 2012 at 12:09:21 UTC, Paulo Pinto wrote:

 It is still at 0.2 and the newsgroup only has 13 messages, lets 
 see how far it goes.
We are almost done with gpu devolution. Once we get unified storage none of this will matter, much like flat to round earth transition. So much wasted for absolutely nothing.
Feb 26 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
Le 26/02/2012 08:48, Paulo Pinto a écrit :
 Am 26.02.2012 00:45, schrieb Andrew Wiley:
 On Sat, Feb 25, 2012 at 5:01 PM, Paulo Pinto<pjmlp progtools.org> wrote:
 Am 25.02.2012 23:40, schrieb Andrew Wiley:
 On Sat, Feb 25, 2012 at 4:29 PM, Paulo Pinto<pjmlp progtools.org>
 wrote:
 Am 25.02.2012 23:17, schrieb Peter Alexander:


 On Saturday, 25 February 2012 at 22:08:31 UTC, Paulo Pinto wrote:
 Am 25.02.2012 21:26, schrieb Peter Alexander:
 On Saturday, 25 February 2012 at 20:13:42 UTC, so wrote:
 On Saturday, 25 February 2012 at 18:47:12 UTC, Nick Sabalausky
 wrote:

 Interesting. I wish he'd elaborate on why it's not an option
 for his
 daily
 work.
Not the design but the implementation, memory management would be the first.
Memory management is not a problem. You can manage memory just as easily in D as you can in C or C++. Just don't use global new, which they'll already be doing.
I couldn't agree more. The GC issue comes around often, but I personally think that the main issue is that the GC needs to be optimized, not that manual memory management is required. Most standard compiler malloc()/free() implementations are actually slower than most advanced GC algorithms.
If you require realtime performance then you don't use either the GC or malloc/free. You allocate blocks up front and use those when you need consistent high performance. It doesn't matter how optimised the GC is. The eventual collection is inevitable and if it takes anything more than a small fraction of a second then it will be too slow for realtime use.
There are realtime GC algorithms, which are actually in use in systems like the French Ground Master 400 missile radar system. There is no more realtime than that. I surely would not like such systems to have a pause-the-world GC.
Can you give any description of how that is done (or any relevant papers), and how it can be made to function reasonably on low end consumer hardware and standard operating systems? Without that, your example is irrelevant. Azul has already shown that realtime non-pause GC is certainly possible, but only with massive servers, lots of CPUs, and large kernel modifications. And, as far as I'm aware, that still didn't solve the generally memory-hungry behaviors of the JVM.
Sure.
http://www.militaryaerospace.com/articles/2009/03/thales-chooses-aonix-perc-virtual-machine-software-for-ballistic-missile-radar.html
http://www.atego.com/products/aonix-perc-raven/
Neither of those links have any information on how this actually works. In fact, the docs on Atego's site pretty much state that their JVM is highly specialized and requires programmers to follow very different rules from typical Java, which makes this technology look less and less viable for general usage.

I don't see how this example is relevant for D. I can't find any details on the system you're mentioning, but assuming they developed something similar to Azul, the fundamental problem is that D has to target platforms in general use, not highly specialized server environments with modified kernels and highly parallel hardware. Until such environments come into general use (assuming they do at all; Azul seems to be having trouble getting their virtual memory manipulation techniques merged into the Linux kernel), D can't make use of them, and we're right back to saying that GCs have unacceptably long pause times for realtime applications.
In Java's case, they are following the Java specification for real-time applications:
http://java.sun.com/javase/technologies/realtime/index.jsp

I did not mention any specific algorithm because, like most companies, I am sure Atego patents most of it. Still, a quick search in Google reveals a few papers:
http://research.microsoft.com/apps/video/dl.aspx?id=103698&l=i
http://www.cs.cmu.edu/~spoons/gc/vee05.pdf
http://domino.research.ibm.com/comm/research_people.nsf/pages/bacon.presentations.html/$FILE/Bacon05BravelyTalk.ppt
http://www.cs.technion.ac.il/~erez/Papers/real-time-pldi.pdf
http://www.cs.purdue.edu/homes/lziarek/pldi10.pdf

I know GC use is a bit of a religious debate, but C++ was the very last systems programming language without automatic memory management, and even C++ has got some form of it in C++11. At least in the desktop area, a decade from now, systems programming in desktop OSs will most likely either make use of reference counting (WinRT or ARC) or a GC (similar to Spin, Inferno, Singularity, Oberon). This is how I see the trend going, but hey, I am just a simple person and I get to be wrong lots of times. -- Paulo
Thank you for the documentation. The more I know about GC, the more I think that the user should be able to choose which GC they want.
Feb 26 2012
prev sibling parent Ali Çehreli <acehreli yahoo.com> writes:
On 02/25/2012 02:29 PM, Paulo Pinto wrote:

 There are GC realtime algorithms, which are actually in use, in systems
 like the French Ground Master 400 missile radar system.
I just can't resist... :) I hope they are not going to keep that software in the French Ground Master 500. ;) Ali [*] http://archive.eiffel.com/doc/manuals/technology/contract/ariane/ <quote> Ariane 5 launcher crashed [due to] a reuse error. The SRI horizontal bias module was reused from a 10-year-old software, the software from Ariane 4. </quote>
Feb 26 2012
prev sibling next sibling parent reply "so" <so so.so> writes:
On Saturday, 25 February 2012 at 22:08:31 UTC, Paulo Pinto wrote:

 Most standard compiler malloc()/free() implementations are 
 actually slower than most advanced GC algorithms.
Explicit allocation/deallocation performance is not that significant; the main problem is that they are unreliable at runtime.
Feb 25 2012
parent Paulo Pinto <pjmlp progtools.org> writes:
Am 25.02.2012 23:17, schrieb so:
 On Saturday, 25 February 2012 at 22:08:31 UTC, Paulo Pinto wrote:

 Most standard compiler malloc()/free() implementations are actually
 slower than most advanced GC algorithms.
Explicit allocation/deallocation performance is not that significant; the main problem is that they are unreliable at runtime.
They seem to be reliable enough to control missile radar systems.
Feb 25 2012
prev sibling next sibling parent Sean Cavanaugh <WorksOnMyMachine gmail.com> writes:
On 2/25/2012 4:08 PM, Paulo Pinto wrote:
 Am 25.02.2012 21:26, schrieb Peter Alexander:
 On Saturday, 25 February 2012 at 20:13:42 UTC, so wrote:
 On Saturday, 25 February 2012 at 18:47:12 UTC, Nick Sabalausky wrote:

 Interesting. I wish he'd elaborate on why it's not an option for his
 daily
 work.
Not the design but the implementation, memory management would be the first.
Memory management is not a problem. You can manage memory just as easily in D as you can in C or C++. Just don't use global new, which they'll already be doing.
I couldn't agree more. The GC issue comes around often, but I personally think that the main issue is that the GC needs to be optimized, not that manual memory management is required. Most standard compiler malloc()/free() implementations are actually slower than most advanced GC algorithms.
Games do basically everything in 33.3 or 16.6 ms intervals (30 or 60 fps respectively). 20 fps and lower is doable, but the input gets extra-laggy very easily and it is visually choppy.

Ideally a GC needs to run in a real-time manner: say, periodically every 10 or 20 seconds and taking at most 10 ms. Continuous would be better, something like 1-2 ms of overhead spread out over 16 or 32 ms. Also, a periodic GC that freezes everything needs to run at a predictable/controllable time, so you can do things like skip AI updates for that frame and keep the frame from being 48 ms or worse. These time constraints are going to limit the heap size of a GC heap to the slower of speed of memory/computation, until the GC can be made into some variety of a real-time collector.

This is less of a problem for games, because you can always allocate non-GC memory with malloc/free or store your textures and meshes exclusively in video memory as D3D/OpenGL resources. The fact that malloc/free and the overhead of refcounting take longer is largely meaningless, because the cost is spread out. If the perf of malloc/free is a problem you can always make more heaps, as the main cost is usually lock contention.

The STL containers are pretty much unusable due to how much memory they waste, how many allocations they require, and the inability to replace their allocators in any meaningful way that allows you to use fixed-size block allocators. Hashes, for instance, require multiple different kinds of allocations, but they are all forced to go through the same allocator. Also, the STL containers tend to allocate huge amounts of slack that is hard to get rid of. Type traits and algorithms are about the only usable parts of the STL.
Feb 25 2012
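The preallocation strategy described above can be sketched in a few lines of C++. This is a minimal, hypothetical free-list pool (not code from any engine mentioned in the thread): all memory comes from one upfront allocation, so per-frame allocate/free is a constant-time pointer swap with no system calls and no GC involvement.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A minimal fixed-size block pool. All storage is grabbed up front;
// free blocks are threaded onto an intrusive singly-linked list, so
// allocate() and deallocate() are O(1) and never touch the OS.
class BlockPool {
    std::vector<unsigned char> storage_;
    void* free_list_ = nullptr;
    std::size_t block_size_;
public:
    BlockPool(std::size_t block_size, std::size_t count)
        : storage_(block_size * count), block_size_(block_size) {
        assert(block_size >= sizeof(void*));
        // Thread each block onto the free list.
        for (std::size_t i = 0; i < count; ++i) {
            void* block = &storage_[i * block_size_];
            *static_cast<void**>(block) = free_list_;
            free_list_ = block;
        }
    }
    void* allocate() {                  // pop the head of the free list
        if (!free_list_) return nullptr; // pool exhausted: caller decides
        void* block = free_list_;
        free_list_ = *static_cast<void**>(block);
        return block;
    }
    void deallocate(void* block) {      // push back onto the free list
        *static_cast<void**>(block) = free_list_;
        free_list_ = block;
    }
};
```

Because reuse is LIFO, a just-freed block is handed out again immediately, which is also friendly to the cache within a frame.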
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2012 2:08 PM, Paulo Pinto wrote:
 Most standard compiler malloc()/free() implementations are actually slower than
 most advanced GC algorithms.
Most straight up GC vs malloc/free benchmarks miss something crucial. A GC allows one to do substantially *fewer* allocations. It's a lot faster to not allocate than to allocate. Consider C strings. You need to keep track of ownership of it. That often means creating extra copies, rather than sharing a single copy. Enter C++'s shared_ptr. But that works by, for each object, allocating a *second* chunk of memory to hold the reference count. Right off the bat, you've got twice as many allocations & frees with shared_ptr than a GC would have.
Feb 25 2012
next sibling parent reply Simon <s.d.hammett gmail.com> writes:
On 25/02/2012 22:55, Walter Bright wrote:
 On 2/25/2012 2:08 PM, Paulo Pinto wrote:
 Most standard compiler malloc()/free() implementations are actually
 slower than
 most advanced GC algorithms.
Most straight up GC vs malloc/free benchmarks miss something crucial. A GC allows one to do substantially *fewer* allocations. It's a lot faster to not allocate than to allocate. Consider C strings. You need to keep track of ownership of it. That often means creating extra copies, rather than sharing a single copy. Enter C++'s shared_ptr. But that works by, for each object, allocating a *second* chunk of memory to hold the reference count. Right off the bat, you've got twice as many allocations & frees with shared_ptr than a GC would have.
http://www.boost.org/doc/libs/1_43_0/libs/smart_ptr/make_shared.html so you don't have to have twice as many allocations. -- My enormous talent is exceeded only by my outrageous laziness. http://www.ssTk.co.uk
Feb 25 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2012 4:01 PM, Simon wrote:
 On 25/02/2012 22:55, Walter Bright wrote:
 Enter C++'s shared_ptr. But that works by, for each object, allocating a
 *second* chunk of memory to hold the reference count. Right off the bat,
 you've got twice as many allocations & frees with shared_ptr than a GC
 would have.
http://www.boost.org/doc/libs/1_43_0/libs/smart_ptr/make_shared.html so you don't have to have twice as many allocations.
There are many ways to do shared pointers, including one where the reference count is part of the object being allocated. But the C++11 standard shared_ptr does an extra allocation.
Feb 25 2012
parent reply Simon <s.d.hammett gmail.com> writes:
On 26/02/2012 03:22, Walter Bright wrote:
 On 2/25/2012 4:01 PM, Simon wrote:
 On 25/02/2012 22:55, Walter Bright wrote:
 Enter C++'s shared_ptr. But that works by, for each object, allocating a
 *second* chunk of memory to hold the reference count. Right off the bat,
 you've got twice as many allocations & frees with shared_ptr than a GC
 would have.
http://www.boost.org/doc/libs/1_43_0/libs/smart_ptr/make_shared.html so you don't have to have twice as many allocations.
There are many ways to do shared pointers, including one where the reference count is part of the object being allocated. But the C++11 standard shared_ptr does an extra allocation.
The stl one is based on boost, so it has make_shared as well: http://en.cppreference.com/w/cpp/memory/shared_ptr and it's in vs 2010 http://msdn.microsoft.com/en-us/library/ee410595.aspx Not that I'm claiming shared pointers are superior to GC. -- My enormous talent is exceeded only by my outrageous laziness. http://www.ssTk.co.uk
Feb 26 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2012 7:04 AM, Simon wrote:
 On 26/02/2012 03:22, Walter Bright wrote:
 On 2/25/2012 4:01 PM, Simon wrote:
 On 25/02/2012 22:55, Walter Bright wrote:
 Enter C++'s shared_ptr. But that works by, for each object, allocating a
 *second* chunk of memory to hold the reference count. Right off the bat,
 you've got twice as many allocations & frees with shared_ptr than a GC
 would have.
http://www.boost.org/doc/libs/1_43_0/libs/smart_ptr/make_shared.html so you don't have to have twice as many allocations.
There are many ways to do shared pointers, including one where the reference count is part of the object being allocated. But the C++11 standard shared_ptr does an extra allocation.
The stl one is based on boost, so it has make_shared as well: http://en.cppreference.com/w/cpp/memory/shared_ptr and it's in vs 2010 http://msdn.microsoft.com/en-us/library/ee410595.aspx Not that I'm claiming shared pointers are superior to GC.
At the GoingNative C++ conference, the guy who is in charge of STL for VS said that it did an extra allocation for the reference count.
Feb 26 2012
parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
On Sun, 26 Feb 2012 20:26:41 +0100, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 2/26/2012 7:04 AM, Simon wrote:
 On 26/02/2012 03:22, Walter Bright wrote:
 On 2/25/2012 4:01 PM, Simon wrote:
 On 25/02/2012 22:55, Walter Bright wrote:
 Enter C++'s shared_ptr. But that works by, for each object,  
 allocating a
 *second* chunk of memory to hold the reference count. Right off the  
 bat,
 you've got twice as many allocations & frees with shared_ptr than a  
 GC
 would have.
http://www.boost.org/doc/libs/1_43_0/libs/smart_ptr/make_shared.html so you don't have to have twice as many allocations.
There are many ways to do shared pointers, including one where the reference count is part of the object being allocated. But the C++11 standard shared_ptr does an extra allocation.
The stl one is based on boost, so it has make_shared as well: http://en.cppreference.com/w/cpp/memory/shared_ptr and it's in vs 2010 http://msdn.microsoft.com/en-us/library/ee410595.aspx Not that I'm claiming shared pointers are superior to GC.
At the GoingNative C++ conference, the guy who is in charge of STL for VS said that it did an extra allocation for the reference count.
It's actually quite nice to combine unique_ptr and shared_ptr. One can lazily create the refcount only when the pointers are shared. Often one can get away with unique ownership. https://gist.github.com/1920202
Feb 26 2012
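The "unique until shared" idea above can be sketched in C++ terms as well. This is a hypothetical wrapper, loosely inspired by (not taken from) the linked gist: ownership starts as a unique_ptr with no refcount allocation, and the control block is only paid for on the first actual share.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Sketch of lazy refcounting: hold a unique_ptr until someone actually
// shares the value, then move ownership into a shared_ptr, allocating
// the control block only at that point. Hypothetical illustration only.
template <class T>
class LazyShared {
    std::unique_ptr<T> unique_;  // set while ownership is unique
    std::shared_ptr<T> shared_;  // set once the value has been shared
public:
    explicit LazyShared(T* p) : unique_(p) {}

    T* get() { return unique_ ? unique_.get() : shared_.get(); }

    bool is_unique() const { return static_cast<bool>(unique_); }

    // First call pays for the control block; later calls just copy.
    std::shared_ptr<T> share() {
        if (unique_)
            shared_ = std::shared_ptr<T>(std::move(unique_));
        return shared_;
    }
};
```

Code paths that never call share() never allocate a refcount at all, which is the common case the thread is pointing at.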
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/27/2012 02:29 AM, Martin Nowak wrote:
 On Sun, 26 Feb 2012 20:26:41 +0100, Walter Bright
 <newshound2 digitalmars.com> wrote:

 On 2/26/2012 7:04 AM, Simon wrote:
 On 26/02/2012 03:22, Walter Bright wrote:
 On 2/25/2012 4:01 PM, Simon wrote:
 On 25/02/2012 22:55, Walter Bright wrote:
 Enter C++'s shared_ptr. But that works by, for each object,
 allocating a
 *second* chunk of memory to hold the reference count. Right off
 the bat,
 you've got twice as many allocations & frees with shared_ptr than
 a GC
 would have.
http://www.boost.org/doc/libs/1_43_0/libs/smart_ptr/make_shared.html so you don't have to have twice as many allocations.
There are many ways to do shared pointers, including one where the reference count is part of the object being allocated. But the C++11 standard shared_ptr does an extra allocation.
The stl one is based on boost, so it has make_shared as well: http://en.cppreference.com/w/cpp/memory/shared_ptr and it's in vs 2010 http://msdn.microsoft.com/en-us/library/ee410595.aspx Not that I'm claiming shared pointers are superior to GC.
At the GoingNative C++ conference, the guy who is in charge of STL for VS said that it did an extra allocation for the reference count.
It's actually quite nice to combine unique_ptr and shared_ptr. One can lazily create the refcount only when the pointers are shared. Often one can get away with unique ownership.
Ok. Btw, if the utility is in charge of allocation, then the refcount can be allocated together with the storage.
 https://gist.github.com/1920202
Neat. Possible improvement (if I understand your code correctly): Don't add the GC range if all possible aliasing is through Ptr.
Feb 27 2012
parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
On Mon, 27 Feb 2012 09:32:27 +0100, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 02/27/2012 02:29 AM, Martin Nowak wrote:
 On Sun, 26 Feb 2012 20:26:41 +0100, Walter Bright
 <newshound2 digitalmars.com> wrote:

 On 2/26/2012 7:04 AM, Simon wrote:
 On 26/02/2012 03:22, Walter Bright wrote:
 On 2/25/2012 4:01 PM, Simon wrote:
 On 25/02/2012 22:55, Walter Bright wrote:
 Enter C++'s shared_ptr. But that works by, for each object,
 allocating a
 *second* chunk of memory to hold the reference count. Right off
 the bat,
 you've got twice as many allocations & frees with shared_ptr than
 a GC
 would have.
http://www.boost.org/doc/libs/1_43_0/libs/smart_ptr/make_shared.html so you don't have to have twice as many allocations.
There are many ways to do shared pointers, including one where the reference count is part of the object being allocated. But the C++11 standard shared_ptr does an extra allocation.
The stl one is based on boost, so it has make_shared as well: http://en.cppreference.com/w/cpp/memory/shared_ptr and it's in vs 2010 http://msdn.microsoft.com/en-us/library/ee410595.aspx Not that I'm claiming shared pointers are superior to GC.
At the GoingNative C++ conference, the guy who is in charge of STL for VS said that it did an extra allocation for the reference count.
It's actually quite nice to combine unique_ptr and shared_ptr. One can lazily create the refcount only when the pointers are shared. Often one can get away with unique ownership.
Ok. Btw, if the utility is in charge of allocation, then the refcount can be allocated together with the storage.
Yeah, I used the pointer for the deleter in the unshared case. Assuming that the allocator interface will require a delegate callback and not a function, it might make more sense to directly stuff it onto the heap. There is one caveat with make_shared: weak_ptrs will keep your object alive because they need the control block. One could do a realloc, though.
 https://gist.github.com/1920202
Neat. Possible improvement (if I understand your code correctly): Don't add the GC range if all possible aliasing is through Ptr.
I hope I did that. Or do you mean when holding a class? I think it's needed for classes because of the monitor, not sure though.
Feb 27 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/27/2012 03:42 PM, Martin Nowak wrote:
...
 https://gist.github.com/1920202
Neat. Possible improvement (if I understand your code correctly): Don't add the GC range if all possible aliasing is through Ptr.
I hope I did that.
import smart_ptr;

struct S {
    Ptr!int a;
    Ptr!double b;
}
static assert(hasAliasing!S);

Therefore, I think your code will add the storage of Ptr!S to the GC, even though it manages all its pointers manually.
 Or do you mean when holding a class. I think it's needed
 for classes because of the monitor, not sure though.
Is it needed for unshared class instances?
Feb 28 2012
parent reply "Martin Nowak" <dawg dawgfoto.de> writes:
On Tue, 28 Feb 2012 09:30:12 +0100, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 02/27/2012 03:42 PM, Martin Nowak wrote:
 ...
 https://gist.github.com/1920202
Neat. Possible improvement (if I understand your code correctly): Don't add the GC range if all possible aliasing is through Ptr.
I hope I did that.
import smart_ptr;

struct S {
    Ptr!int a;
    Ptr!double b;
}
static assert(hasAliasing!S);
All right, I would need a Ptr-aware hasAliasing, that's true. That's one of the few places where C++'s lookup rules for template specializations are useful. std.traits.hasAliasing already has Rebindable hacks.

Either way, I had an idea to split this into two parts:
- unique_ptr, which handles ownership and destruction; could be Unique!(Scoped!T) as well.
- A move-on-shared wrapper that has value semantics unless it gets shared, in which case the value is moved to a refcounted heap store.
Feb 28 2012
parent Klaim - Joël Lamotte <mjklaim gmail.com> writes:
As a (not as experienced as others) game developer, I'd like to share my
point of view (again).

1. smart pointer vs GC war isn't the point, it's a knowledge &
implementation war and isn't related to languages that allow both (as C++ &
D).
2. C++ base code is just there and ready to be used. Only
begining-from-scratch projects can afford to use D
3. D isn't available directly to console hardware. If it is in some ways,
it isn't from the point of view of the vendor.
4. Learning a language has a cost. Any game development company will
totally dodge this kind of cost as much as it can, even if it's more
expensive to stay in the current language. (Also, D's benefit over C++ isn't
really clear at this very moment.)
5. STL isn't a problem anymore, most of the time, see:
http://gamedev.stackexchange.com/questions/268/stl-for-games-yea-or-nay
    My experience with STL in console (NDS) games: it's fine if you know
what you're doing (as someone else already mentioned)


I would totally use D for a new (home or indie) game project, but any company
with history, or that wants to be considered serious from a console vendor's
perspective, would use the languages proposed by console vendors.

So, to help:

1. make cross-platform game development easier at least for accessible
platforms like PC & mobile (that one is "easy" I think)
2. convince game platform vendors to say "you can use D" - at least
3. find a way to "show" what we call a "killer app". That's the only way to
convince game company developers of considering another technology.

Joël Lamotte - Klaim
Feb 28 2012
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 25/02/2012 23:55, Walter Bright a écrit :
 On 2/25/2012 2:08 PM, Paulo Pinto wrote:
 Most standard compiler malloc()/free() implementations are actually
 slower than
 most advanced GC algorithms.
Most straight up GC vs malloc/free benchmarks miss something crucial. A GC allows one to do substantially *fewer* allocations. It's a lot faster to not allocate than to allocate. Consider C strings. You need to keep track of ownership of it. That often means creating extra copies, rather than sharing a single copy. Enter C++'s shared_ptr. But that works by, for each object, allocating a *second* chunk of memory to hold the reference count. Right off the bat, you've got twice as many allocations & frees with shared_ptr than a GC would have.
True, but the problem for video games isn't how much computation you do to allocate, but delivering a frame every few milliseconds. In most cases, it is worth spending more on allocation in exchange for a predictable result, rather than letting the GC do its job. I wonder how true this will remain with multicore and the possibility of a 100% concurrent GC.
Feb 26 2012
parent reply "so" <so so.so> writes:
On Sunday, 26 February 2012 at 15:22:09 UTC, deadalnix wrote:

 True, but the problem of video game isn't how much computation 
 you do to allocate, but to deliver a frame every few 
 miliseconds. In most cases, it worth spending more in 
 allocating but with a predictable result than let the GC does 
 its job.
Absolutely! It cracks me up when i see (in this forum or any other graphics related forums) things like "you can't allocate at runtime!!!" or "you shouldn't use standard libraries!!!". Thing is, you can do both just fine if you just RTFM :)
Feb 26 2012
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Feb 26, 2012 at 04:48:12PM +0100, so wrote:
 On Sunday, 26 February 2012 at 15:22:09 UTC, deadalnix wrote:
 
True, but the problem of video game isn't how much computation you
do to allocate, but to deliver a frame every few miliseconds. In
most cases, it worth spending more in allocating but with a
predictable result than let the GC does its job.
Absolutely! It cracks me up when i see (in this forum or any other graphics related forums) things like "you can't allocate at runtime!!!" or "you shouldn't use standard libraries!!!". Thing is, you can do both just fine if you just RTFM :)
[...] Would this even be an issue on multicore systems where the GC can run concurrently? As long as the stop-the-world parts are below some given threshold. T -- Ruby is essentially Perl minus Wall.
Feb 26 2012
parent reply "so" <so so.so> writes:
On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:

 Would this even be an issue on multicore systems where the GC 
 can run
 concurrently? As long as the stop-the-world parts are below 
 some given
 threshold.
If it is possible to guarantee that, I don't think anyone would bother with manual MM.
Feb 26 2012
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 26.02.2012 17:34, schrieb so:
 On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:

 Would this even be an issue on multicore systems where the GC can run
 concurrently? As long as the stop-the-world parts are below some given
 threshold.
If it is possible to guarantee that i don't think anyone would bother with manual MM.
Well, some game studios seem to be quite happy with XNA, which implies using a GC: http://infinite-flight.com/if/index.html
Feb 26 2012
parent reply Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Sun, Feb 26, 2012 at 11:05 AM, Paulo Pinto <pjmlp progtools.org> wrote:
 Am 26.02.2012 17:34, schrieb so:

 On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:

 Would this even be an issue on multicore systems where the GC can run
 concurrently? As long as the stop-the-world parts are below some given
 threshold.
If it is possible to guarantee that i don't think anyone would bother with manual MM.
Well, some game studios seem to be quite happy with XNA, which implies using a GC: http://infinite-flight.com/if/index.html
I don't really see why you keep bringing up these examples. This is a performance issue, which means you can certainly ignore it and things will still work, just not as well. I've seen 3D games in Java, but they always suffer from an awkward pause at fairly regular intervals. This is why the AAA shops are still writing most of their engines in C++. You will always be able to find examples of developers that simply chose to ignore the issue for one reason or another.

To make it clear, I'm not trying to antagonize you here. I agree that GC is in general a superior technical solution to manual memory management, and given the research going into GC technology, I'm sure that long term it's probably a good idea. However, I disagree with your statement that "the main issue is that the GC needs to be optimized, not that manual memory management is required." Making a GC that can run fast enough to make this sort of thing a non-issue is currently so hard that it can only be done in certain niche situations. That will probably change, but it will probably change over the course of several years.

Manual memory management, however, is here now and dead simple to use so long as the programmer understands the semantics. Programming in that model is harder, but not nearly as bad as, say, thread-based concurrency with race conditions and deadlock. Manual memory management is much simpler to deal with than many other things programmers already take on voluntarily.

When you want your realtime application to behave in a certain way, would you rather spend months or years working on the GC and program in a completely different style to deal with the issue, or use manual memory management *now* and deal with the slightly more difficult programming model? Cost/benefit wise, GC just doesn't make a lot of sense in this sort of scenario unless you have a lot of resources to burn or a specific reason to choose a GC-mandatory platform.
Again, I'm not saying GC is bad, I'm saying that in this area, the cost/benefit ratio doesn't say you should spend your time improving the GC to make things work. For everyone else, GC is great, and I applaud David Simcha's efforts to improve D's GC performance.
Feb 26 2012
next sibling parent "foobar" <foo bar.com> writes:
On Monday, 27 February 2012 at 04:17:24 UTC, Andrew Wiley wrote:
 On Sun, Feb 26, 2012 at 11:05 AM, Paulo Pinto 
 <pjmlp progtools.org> wrote:
 Am 26.02.2012 17:34, schrieb so:

 On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:

 Would this even be an issue on multicore systems where the 
 GC can run
 concurrently? As long as the stop-the-world parts are below 
 some given
 threshold.
If it is possible to guarantee that i don't think anyone would bother with manual MM.
Well, some game studios seem to be quite happy with XNA, which implies using a GC: http://infinite-flight.com/if/index.html
I don't really see why you keep bringing up these examples. This is a performance issue, which means you can certainly ignore it and things will still work, just not as well. I've seen 3d games in Java, but they always suffer from an awkward pause at fairly regular intervals. This is why the AAA shops are still writing most of the engines in C++. You will always be able to find examples of developers that simply chose to ignore the issue for one reason or another.

To make it clear, I'm not trying to antagonize you here. I agree that GC is in general a superior technical solution to manual memory management, and given the research going into GC technology, I'm sure that long term it's probably a good idea. However, I disagree with your statement that "the main issue is that the GC needs to be optimized, not that manual memory management is required." Making a GC that can run fast enough to make this sort of thing a non-issue is currently so hard that it can only be used in certain niche situations. That will probably change, but it will probably change over the course of several years.

Manual memory management, however, is here now and dead simple to use so long as the programmer understands the semantics. Programming in that model is harder, but not nearly as bad as, say, thread-based concurrency with race conditions and deadlock. Manual memory management is much simpler to deal with than many other things programmers already take on voluntarily. When you want your realtime application to behave in a certain way, would you rather spend months or years working on the GC and program in a completely difficult style to deal with the issue, or use manual memory management *now* and deal with the slightly more difficult programming model? Cost/benefit wise, GC just doesn't make a lot of sense in this sort of scenario unless you have a lot of resources to burn or a specific reason to choose a GC-mandatory platform.
Again, I'm not saying GC is bad, I'm saying that in this area, the cost/benefit ratio doesn't say you should spend your time improving the GC to make things work. For everyone else, GC is great, and I applaud David Simcha's efforts to improve D's GC performance.
It does take years, but please note that those referenced papers are already several years old; some are from 2005-6. It doesn't mean D shouldn't support manual memory management, but claiming that GC doesn't work for real-time is a [religious] myth. Clearly the cost of research was already paid years ago, and the algorithms were already documented and tested. OT: one of the papers was written at my university.
Feb 26 2012
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
I keep bringing up these issues because I am a firm believer 
that people who fight against a GC are just fighting a lost 
battle.

Like back in the '80s, when people were fighting over Pascal or 
C versus Assembly. Or in the '90s, C++ versus C.

Now C++ is even used for operating systems, BeOS, Mac OS X 
drivers,
COM/WinRT.

Sure, a systems programming language needs some form of manual 
memory management for "exceptional situations", but 90% of the 
time you will be allocating either reference-counted or GCed 
memory.

What will you do when the major OSs use a systems programming 
language that forces GC or reference counting on you? Which is 
already slowly happening with GC and ARC on Mac OS X and WinRT 
on Windows 8, both mainstream OSs, as well as the Oberon, Spin, 
Mirage, Home, Inferno and Singularity research OSs.

Create your own language to allow you to live in the past?

People who refuse to adapt to the times stay behind; those who 
adapt find ways to profit from the new reality.

But as I said before, that is my opinion, and as a simple human 
I am also prone to errors. Maybe my ideas regarding memory 
management in systems languages are plain wrong; the future 
will tell.

--
Paulo

On Monday, 27 February 2012 at 04:17:24 UTC, Andrew Wiley wrote:
 On Sun, Feb 26, 2012 at 11:05 AM, Paulo Pinto 
 <pjmlp progtools.org> wrote:
 Am 26.02.2012 17:34, schrieb so:

 On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:

 Would this even be an issue on multicore systems where the 
 GC can run
 concurrently? As long as the stop-the-world parts are below 
 some given
 threshold.
If it is possible to guarantee that, I don't think anyone would bother with manual MM.
Well, some game studios seem to be quite happy with XNA, which implies using a GC: http://infinite-flight.com/if/index.html
I don't really see why you keep bringing up these examples. This is a performance issue, which means you can certainly ignore it and things will still work, just not as well. I've seen 3d games in Java, but they always suffer from an awkward pause at fairly regular intervals. This is why the AAA shops are still writing most of the engines in C++. You will always be able to find examples of developers that simply chose to ignore the issue for one reason or another.

To make it clear, I'm not trying to antagonize you here. I agree that GC is in general a superior technical solution to manual memory management, and given the research going into GC technology, I'm sure that long term it's probably a good idea. However, I disagree with your statement that "the main issue is that the GC needs to be optimized, not that manual memory management is required." Making a GC that can run fast enough to make this sort of thing a non-issue is currently so hard that it can only be used in certain niche situations. That will probably change, but it will probably change over the course of several years.

Manual memory management, however, is here now and dead simple to use so long as the programmer understands the semantics. Programming in that model is harder, but not nearly as bad as, say, thread-based concurrency with race conditions and deadlock. Manual memory management is much simpler to deal with than many other things programmers already take on voluntarily. When you want your realtime application to behave in a certain way, would you rather spend months or years working on the GC and program in a completely difficult style to deal with the issue, or use manual memory management *now* and deal with the slightly more difficult programming model? Cost/benefit wise, GC just doesn't make a lot of sense in this sort of scenario unless you have a lot of resources to burn or a specific reason to choose a GC-mandatory platform.
Again, I'm not saying GC is bad, I'm saying that in this area, the cost/benefit ratio doesn't say you should spend your time improving the GC to make things work. For everyone else, GC is great, and I applaud David Simcha's efforts to improve D's GC performance.
Feb 27 2012
parent reply "so" <so so.so> writes:
On Monday, 27 February 2012 at 08:39:54 UTC, Paulo Pinto wrote:
 I keep bringing up these issues because I am a firm believer 
 that people who fight against a GC are just fighting a lost 
 battle.

 Like back in the '80s, when people were fighting over Pascal 
 or C versus Assembly. Or in the '90s, C++ versus C.

 Now C++ is even used for operating systems, BeOS, Mac OS X 
 drivers,
 COM/WinRT.
It is not a fair analogy. Unlike MMM and GC, C++ can do everything C can do and has more sugar. What they were arguing about, I think, is whether or not OO is a solution to everything, and the troubles with its implementation in C++.
 Sure, a systems programming language needs some form of manual 
 memory management for "exceptional situations", but 90% of the 
 time you will be allocating either reference-counted or GCed 
 memory.

 What will you do when the major OSs use a systems programming 
 language that forces GC or reference counting on you? Which 
 is already slowly happening with GC and ARC on Mac OS X and 
 WinRT on Windows 8, both mainstream OSs, as well as the 
 Oberon, Spin, Mirage, Home, Inferno and Singularity research 
 OSs.

 Create your own language to allow you to live in the past?

 People who refuse to adapt to the times stay behind; those who 
 adapt find ways to profit from the new reality.

 But as I said before, that is my opinion, and as a simple 
 human I am also prone to errors. Maybe my ideas regarding 
 memory management in systems languages are plain wrong; the 
 future will tell.

 --
 Paulo
As I said in many threads regarding GC and MMM, it is not about this vs. that. There should be no religious stances. Both have their strengths and failures. What every single discussion on this boils down to is that some people downplay the failures of their religion :)

And that staying behind thing is something I never understand! It is hype, it is marketing! To sell you a product that doesn't deserve half its price! Religion/irrationality has no place in what we do. Show me a better tool, "convince me" it is better, and I will be using that tool. I don't give a damn if it is D or vim I am leaving behind.
Feb 27 2012
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 27 February 2012 at 09:33:32 UTC, so wrote:
 On Monday, 27 February 2012 at 08:39:54 UTC, Paulo Pinto wrote:
 I keep bringing up these issues because I am a firm believer 
 that people who fight against a GC are just fighting a lost 
 battle.

 Like back in the '80s, when people were fighting over Pascal 
 or C versus Assembly. Or in the '90s, C++ versus C.

 Now C++ is even used for operating systems, BeOS, Mac OS X 
 drivers,
 COM/WinRT.
It is not a fair analogy. Unlike MMM and GC, C++ can do everything C can do and has more sugar. What they were arguing about, I think, is whether or not OO is a solution to everything, and the troubles with its implementation in C++.
 Sure, a systems programming language needs some form of manual 
 memory management for "exceptional situations", but 90% of the 
 time you will be allocating either reference-counted or GCed 
 memory.

 What will you do when the major OSs use a systems programming 
 language that forces GC or reference counting on you? Which 
 is already slowly happening with GC and ARC on Mac OS X and 
 WinRT on Windows 8, both mainstream OSs, as well as the 
 Oberon, Spin, Mirage, Home, Inferno and Singularity research 
 OSs.

 Create your own language to allow you to live in the past?

 People who refuse to adapt to the times stay behind; those who 
 adapt find ways to profit from the new reality.

 But as I said before, that is my opinion, and as a simple 
 human I am also prone to errors. Maybe my ideas regarding 
 memory management in systems languages are plain wrong; the 
 future will tell.

 --
 Paulo
As I said in many threads regarding GC and MMM, it is not about this vs. that. There should be no religious stances. Both have their strengths and failures. What every single discussion on this boils down to is that some people downplay the failures of their religion :)

And that staying behind thing is something I never understand! It is hype, it is marketing! To sell you a product that doesn't deserve half its price! Religion/irrationality has no place in what we do. Show me a better tool, "convince me" it is better, and I will be using that tool. I don't give a damn if it is D or vim I am leaving behind.
I also don't have any problem with tools; what matters is what the customer wants and what I can make him happy with, not what the tool flavor of the month is.

Personally, even though it might come across differently in my posts, I also believe there are situations where MMM is much better than GC, or where GC is not even feasible.

The thing is, regardless of what anyone might think, the major desktop OSs are integrating reference counting and GC at the kernel level. Now, when the systems programming languages that are part of these OSs offer these types of memory management, there is no way to choose otherwise, even if one so wishes.

--
Paulo
Feb 27 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On 26/02/2012 17:34, so wrote:
 On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:

 Would this even be an issue on multicore systems where the GC can run
 concurrently? As long as the stop-the-world parts are below some given
 threshold.
If it is possible to guarantee that, I don't think anyone would bother with manual MM.
It is possible on x86, but currently OSs don't provide the primitives required to do so.
Feb 26 2012
prev sibling parent "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Sunday, 26 February 2012 at 15:48:13 UTC, so wrote:
 On Sunday, 26 February 2012 at 15:22:09 UTC, deadalnix wrote:

 True, but the problem of video game isn't how much computation 
 you do to allocate, but to deliver a frame every few 
 miliseconds. In most cases, it worth spending more in 
 allocating but with a predictable result than let the GC does 
 its job.
Absolutely! It cracks me up when I see (in this forum or any other graphics-related forum) things like "you can't allocate at runtime!!!" or "you shouldn't use standard libraries!!!". Thing is, you can do both just fine if you just RTFM :)
Of course you can do both just fine. It doesn't mean it's a good idea. There's also rarely ever any need to. It's not difficult to avoid allocating memory.
Feb 26 2012
prev sibling next sibling parent "Ben Hanson" <Ben.Hanson tikit.com> writes:
On Saturday, 25 February 2012 at 22:55:37 UTC, Walter Bright
wrote:
 Most straight up GC vs malloc/free benchmarks miss something 
 crucial. A GC allows one to do substantially *fewer* 
 allocations. It's a lot faster to not allocate than to allocate.
The trouble is you don't have to look far to find managed apps that visibly pause at random due to garbage collection. I've personally never come across this behaviour with a program that manages its own memory.
 Consider C strings. You need to keep track of ownership of it. 
 That often means creating extra copies, rather than sharing a 
 single copy.

 Enter C++'s shared_ptr. But that works by, for each object, 
 allocating a *second* chunk of memory to hold the reference 
 count. Right off the bat, you've got twice as many allocations 
 & frees with shared_ptr than a GC would have.
Or, you could use std::make_shared().

Given that even C++ struggles to be accepted as a systems programming language, I can't understand why garbage collection is used even in the Phobos library. If systems programmers think that C++ has too much overhead, they sure as hell aren't going to be happy with garbage collection in the low-level libraries.

It's an interesting point that hard real-time programs allocate all their memory up front, mitigating the entire issue, but obviously this is at the extreme end of the scale.

Regards,

Ben
Mar 06 2012
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 26 February 2012 00:55, Walter Bright <newshound2 digitalmars.com> wrote:

 On 2/25/2012 2:08 PM, Paulo Pinto wrote:

 Most standard compiler malloc()/free() implementations are actually
 slower than
 most advanced GC algorithms.
Most straight up GC vs malloc/free benchmarks miss something crucial. A GC allows one to do substantially *fewer* allocations. It's a lot faster to not allocate than to allocate.
Do you really think that's true? Are there any statistics to support that? I'm extremely sceptical of this claim.

I would have surely thought using a GC leads to a significant *increase* in allocations for a few reasons:
  It's easy to allocate, ie, nothing to discourage you
  It's easy to clean up - you don't have to worry about cleanup problems, makes it simpler to use in many situations
  Dynamic arrays are easy - many C++ users will avoid dynamic arrays because the explicit allocation/clean up implies complexity, one will always use the stack, or a fixed array where they can get away with it
  Slicing, concatenation, etc performs bucket loads of implicit GC allocations
  Strings... - C coders who reject the stl will almost always have a separate string heap with very particular allocation patterns, and almost always refcounted
  Phobos/druntime allocate liberally - the CRT almost never allocates

This is my single biggest fear in D. I have explicit control within my own code, but I wonder if many D libraries will be sloppy and over-allocate all over the place, and be generally unusable in many applications.
If D is another language like C where the majority of libraries (including the standard libraries I fear) are unusable in various contexts, then that kinda defeats the purpose. D's module system is one of its biggest selling points.

I think there should be strict phobos allocation policies, and ideally, druntime should NEVER allocate if it can help it.

 Consider C strings. You need to keep track of ownership of it. That often
 means creating extra copies, rather than sharing a single copy.
Rubbish, strings are almost always either refcounted or on the stack for dynamic strings, or have fixed memory allocated within structures. I don't think I've ever seen someone duplicating strings into separate allocations liberally.
 Enter C++'s shared_ptr. But that works by, for each object, allocating a
 *second* chunk of memory to hold the reference count. Right off the bat,
 you've got twice as many allocations & frees with shared_ptr than a GC
 would have.
Who actually uses shared_ptr? Talking about the stl is misleading... an overwhelming number of C/C++ programmers avoid the stl like the plague (for these exact reasons). Performance oriented programmers rarely use STL out of the box, and that's what we're talking about here right? If you're not performance oriented, then who cares about the GC either?
Mar 06 2012
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Manu:

   It's easy to allocate, ie, nothing to discourage you
This is partially a cultural thing, and it's partially caused by what the language offers you. I have seen Ada code where most things are stack-allocated, because doing this is handy in Ada. In D stack-allocated variable-length arrays will help move some heap allocations to the stack. And the usage of scoped class instances has to improve some more.
   Strings... - C coders who reject the stl will almost always have a
 separate string heap with very particular allocation patterns, and almost
 always refcounted
I think in future D will be free to have a special heap for strings. I think that in D source code there is already enough semantics to do this. There is one person who is working on the D GC, so maybe he's interested in this.
 an overwhelming number of C/C++ programmers avoid the stl like the plague (for
 these exact reasons). Performance oriented programmers rarely use STL out
 of the box,
Often I have heard the opposite claims too, like at the recent GoingNative 2012 conference.

Bye,
bearophile
Mar 06 2012
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 03/06/2012 01:27 PM, Manu wrote:
 On 26 February 2012 00:55, Walter Bright <newshound2 digitalmars.com
 <mailto:newshound2 digitalmars.com>> wrote:

     On 2/25/2012 2:08 PM, Paulo Pinto wrote:

         Most standard compiler malloc()/free() implementations are
         actually slower than
         most advanced GC algorithms.


     Most straight up GC vs malloc/free benchmarks miss something
     crucial. A GC allows one to do substantially *fewer* allocations.
     It's a lot faster to not allocate than to allocate.


 Do you really think that's true? Are there any statistics to support that?
 I'm extremely sceptical of this claim.

 I would have surely thought using a GC leads to a significant *increase*
 in allocations for a few reasons:
    It's easy to allocate, ie, nothing to discourage you
If you believe this, why do you raise this issue?
    It's easy to clean up - you don't have to worry about cleanup
 problems, makes it simpler to use in many situations
GC does not prevent memory leaks, it does not support deterministic cleanup, and most implementations perform poorly on certain workloads. You were saying?
    Dynamic arrays are easy - many C++ users will avoid dynamic arrays
 because the explicit allocation/clean up implies complexity, one will
 always use the stack, or a fixed array where they can get away with it
    Slicing,
Slicing never allocates.
 concatenation, etc performs bucket loads of implicit GC
 allocations
a~b

Nothing implicit about that. The only case where memory allocation is implicit is for closures.
    Strings... - C coders who reject the stl will almost always have a
 separate string heap with very particular allocation patterns, and
 almost always refcounted
    Phobos/druntine allocate liberally - the CRT almost never allocates

 This is my single biggest fear in D. I have explicit control within my
 own code, but I wonder if many D libraries will be sloppy and
 over-allocate all over the place, and be generally unusable in many
 applications.
IMHO this fear is unjustified. If the library developers are that sloppy, chances are that the library is not worth using, even when leaving all memory allocation concerns away. (It is likely that you aren't the only programmer familiar with some of the issues.)
 If D is another language like C where the majority of libraries
 (including the standard libraries I fear) are unusable in various
 contexts, then that kinda defeats the purpose. D's module system is one
 of its biggest selling points.

 I think there should be strict phobos allocation policies,
Yes, a function that does not obviously need to allocate shouldn't, and if possible there should be alternatives that do not allocate. Do you have any particular examples where such a policy would be violated in Phobos?
 and ideally, druntime should NEVER allocate if it can help it.
+1. What are examples of unnecessary allocations in druntime?
     Consider C strings. You need to keep track of ownership of it. That
     often means creating extra copies, rather than sharing a single copy.


 Rubbish, strings are almost always either refcounted
Technically, refcounting is a form of GC.
 or on the stack for
 dynamic strings, or have fixed memory allocated within structures. I
 don't think I've ever seen someone duplicating strings into separate
 allocations liberally.
It is impossible to slice a zero-terminated string without copying it in the general case and refcounting slices is not trivial.
     Enter C++'s shared_ptr. But that works by, for each object,
     allocating a *second* chunk of memory to hold the reference count.
     Right off the bat, you've got twice as many allocations & frees with
     shared_ptr than a GC would have.


 Who actually uses shared_ptr? Talking about the stl is misleading... an
 overwhelming number of C/C++ programmers avoid the stl like the plague
 (for these exact reasons). Performance oriented programmers rarely use
 STL out of the box, and that's what we're talking about here right?
Possibly now you are the one who is to provide supporting statistics.
 If you're not performance oriented, then who cares about the GC either?
There is a difference between not performance oriented and performance agnostic. Probably everyone cares about performance to some extent.
Mar 06 2012
prev sibling next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 06.03.2012 16:27, Manu wrote:
 On 26 February 2012 00:55, Walter Bright <newshound2 digitalmars.com
 <mailto:newshound2 digitalmars.com>> wrote:

     On 2/25/2012 2:08 PM, Paulo Pinto wrote:

         Most standard compiler malloc()/free() implementations are
         actually slower than
         most advanced GC algorithms.


     Most straight up GC vs malloc/free benchmarks miss something
     crucial. A GC allows one to do substantially *fewer* allocations.
     It's a lot faster to not allocate than to allocate.


 Do you really think that's true? Are there any statistics to support that?
 I'm extremely sceptical of this claim.

 I would have surely thought using a GC leads to a significant *increase*
 in allocations for a few reasons:
    It's easy to allocate, ie, nothing to discourage you
    It's easy to clean up - you don't have to worry about cleanup
 problems, makes it simpler to use in many situations
    Dynamic arrays are easy - many C++ users will avoid dynamic arrays
 because
you mean like new[]? That's just glorified allocation. STL vector is the dynamic array of C++ and it's being used a lot.

 the explicit allocation/clean up implies complexity, one will
 always use the stack,
 or a fixed array where they can get away with it

Still possible.
  Slicing,

Nope.

 concatenation, etc performs bucket loads of implicit GC
 allocations

It sure does, like it does without GC with reallocs and refcounting.
    Strings... - C coders who reject the stl will almost always have a
 separate string heap with very particular allocation patterns, and
 almost always refcounted
    Phobos/druntine allocate liberally - the CRT almost never allocates
It's just that most of the CRT has incredibly bad usability, partly because it lacks _any_ notion of allocators. And the policy of using statically allocated shared data, like in localtime, srand etc., has shown remarkably bad M-T scalability.
 This is my single biggest fear in D. I have explicit control within my
 own code, but I wonder if many D libraries will be sloppy and
 over-allocate all over the place, and be generally unusable in many
 applications.
 If D is another language like C where the majority of libraries
 (including the standard libraries I fear) are unusable in various
 contexts, then that kinda defeats the purpose. D's module system is one
 of its biggest selling points.

 I think there should be strict phobos allocation policies, and ideally,
 druntime should NEVER allocate if it can help it.
     Consider C strings. You need to keep track of ownership of it. That
     often means creating extra copies, rather than sharing a single copy.


 Rubbish, strings are almost always either refcounted
like COW?

 or on the stack for
 dynamic strings, or have fixed memory allocated within structures. I
 don't think I've ever seen someone duplicating strings into separate
 allocations liberally.
I've seen some, and sometimes memory corruption when people hesitated to copy even when they *do* need to pass a copy.
     Enter C++'s shared_ptr. But that works by, for each object,
     allocating a *second* chunk of memory to hold the reference count.
     Right off the bat, you've got twice as many allocations & frees with
     shared_ptr than a GC would have.


 Who actually uses shared_ptr?
Like everybody? Though with C++11 move semantics, unique_ptr is going to lessen its widespread use. And there are ways to spend fewer than two proper memory allocations per shared_ptr, like keeping a special block allocator for the ref-counters. More importantly, smart pointers are here to stay in C++.

 Talking about the stl is misleading... an
 overwhelming number of C/C++ programmers avoid the stl like the plague
 (for these exact reasons).
Instead of using it properly. It has a fair share of failures but it's not that bad.

 Performance oriented programmers rarely use
 STL out of the box, and that's what we're talking about here right? If
 you're not performance oriented, then who cares about the GC either?
-- Dmitry Olshansky
Mar 06 2012
parent reply Manu <turkeyman gmail.com> writes:
On 6 March 2012 15:10, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 03/06/2012 01:27 PM, Manu wrote:

 concatenation, etc performs bucket loads of implicit GC
 allocations
a~b Nothing implicit about that.
That is the very definition of an implicit allocation. What about the concatenation operator says that an allocation is to be expected?
And what if you do a sequence of concatenations: a ~ b ~ c, now I've even created a redundant intermediate allocation. Will it be cleaned up immediately?
Is there a convenient syntax to concatenate into a target buffer (subverting the implicit allocation)? If the syntax isn't equally convenient, nobody will use it.

 This is my single biggest fear in D. I have explicit control within my
 own code, but I wonder if many D libraries will be sloppy and
 over-allocate all over the place, and be generally unusable in many
 applications.
IMHO this fear is unjustified. If the library developers are that sloppy, chances are that the library is not worth using, even when leaving all memory allocation concerns away. (It is likely that you aren't the only programmer familiar with some of the issues.)
I don't think it is unjustified; this seems to be the rule in C/C++ rather than the exception, and there's nothing in D to suggest this will be mitigated, possibly worsened...
Many libraries which are perfectly usable in any old 'app' are not usable in realtime or embedded apps purely due to their internal design/allocation habits.
Hopefully the D library authors will be more receptive to criticism... but I doubt it. I think it'll be exactly as it is in C/C++ currently.
    Consider C strings. You need to keep track of ownership of it. That
    often means creating extra copies, rather than sharing a single copy.


 Rubbish, strings are almost always either refcounted
Technically, refcounting is a form of GC.
Not really, it doesn't lock up the app at a random time for an indeterminate amount of time.

 or on the stack for
 dynamic strings, or have fixed memory allocated within structures. I
 don't think I've ever seen someone duplicating strings into separate
 allocations liberally.
It is impossible to slice a zero-terminated string without copying it in the general case and refcounting slices is not trivial.
This is when stack buffers are most common in C.

 Who actually uses shared_ptr? Talking about the stl is misleading... an
 overwhelming number of C/C++ programmers avoid the stl like the plague
 (for these exact reasons). Performance oriented programmers rarely use
 STL out of the box, and that's what we're talking about here right?
Possibly now you are the one who is to provide supporting statistics.
Touché :)

 If you're not performance oriented, then who cares about the GC either?

 There is a difference between not performance oriented and performance
 agnostic. Probably everyone cares about performance to some extent.
True.

On 6 March 2012 15:13, Dmitry Olshansky <dmitry.olsh gmail.com> wrote:
 On 06.03.2012 16:27, Manu wrote:

   Phobos/druntine allocate liberally - the CRT almost never allocates
It's just that most of the CRT has incredibly bad usability, partly because it lacks _any_ notion of allocators. And the policy of using statically allocated shared data, like in localtime, srand etc., has shown remarkably bad M-T scalability.
I agree to an extent. Most C APIs tend to expect you to provide the result buffer, and that doesn't seem to be the prevailing pattern in D. Some might argue it's ugly to pass a result buffer in, and I agree to an extent, but I'll take it every time over the library violating my app's allocation patterns.

 Who actually uses shared_ptr?

 Like everybody? Though with c++11 move semantics a unique_ptr is going to
 lessen it's widespread use. And there are ways to spend less then 2 proper
 memory allocations per shared_ptr, like keeping special block allocator for
 ref-counters.
 More importantly smart pointers are here to stay in C++.
Everybody eh.. :) Well, speaking from within the games industry at least, there's a prevailing trend back towards flat C or C-like C++; many lectures and talks on the topic. I have no contact with any C++ programmers that use STL beyond the most trivial containers like vector. Many games companies re-invent some stl-ish thing internally which is less putrid ;)
Additionally, I can't think of many libraries I've used that go hard-out C++. Most popular libraries are very conservative, or even flat C (most old stable libs that EVERYONE uses: zlib, png, jpeg, mad, tinyxml, etc). Havok, PhysX, FMOD, etc are C++, but very light C++: light classes, no STL, etc. Unreal used to use STL... but they fixed it :P
Mar 06 2012
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 06.03.2012 18:10, Manu wrote:
 On 6 March 2012 15:10, Timon Gehr <timon.gehr gmx.ch
 <mailto:timon.gehr gmx.ch>> wrote:

     On 03/06/2012 01:27 PM, Manu wrote:

         concatenation, etc performs bucket loads of implicit GC
         allocations


     a~b

     Nothing implicit about that.


 That is the very definition of an implicit allocation. What about the
 concatenation operator says that an allocation is to be expected?
 And what if you do a sequence of concatenations: a ~ b ~ c, now I've
 even created a redundant intermediate allocation. Will it be cleaned up
 immediately?
Just make an enhancement request ;) Anyway, it's a good point as long as the GC stays sloppy.
 Is there a convenient syntax to concatenate into a target buffer
 (subverting the implicit allocation)? If the syntax isn't equally
 convenient, nobody will use it.

         This is my single biggest fear in D. I have explicit control
         within my
         own code, but I wonder if many D libraries will be sloppy and
         over-allocate all over the place, and be generally unusable in many
         applications.


     IMHO this fear is unjustified. If the library developers are that
     sloppy, chances are that the library is not worth using, even when
     leaving all memory allocation concerns away. (It is likely that you
     aren't the only programmer familiar with some of the issues.)


 I don't think it is unjustified, this seems to be the rule in C/C++
 rather than the exception, and there's nothing in D to suggest this will
 be mitigated, possibly worsened...
 Many libraries which are perfectly usable in any old 'app' are not
 usable in a realtime or embedded apps purely due to its internal
 design/allocation habits.
 Hopefully the D library authors will be more receptive to criticism...
 but I doubt it. I think it'll be exactly as it is in C/C++ currently.

             Consider C strings. You need to keep track of ownership of
         it. That
             often means creating extra copies, rather than sharing a
         single copy.


         Rubbish, strings are almost always either refcounted


     Technically, refcounting is a form of GC.


 Not really, it doesn't lock up the app at a random time for an
 indeterminate amount of time.


         or on the stack for
         dynamic strings, or have fixed memory allocated within structures. I
         don't think I've ever seen someone duplicating strings into separate
         allocations liberally.


     It is impossible to slice a zero-terminated string without copying
     it in the general case and refcounting slices is not trivial.


 This is when stack buffers are most common in C.

         Who actually uses shared_ptr? Talking about the stl is
         misleading... an

         overwhelming number of C/C++ programmers avoid the stl like the
         plague
         (for these exact reasons). Performance oriented programmers
         rarely use
         STL out of the box, and that's what we're talking about here right?


     Possibly now you are the one who is to provide supporting statistics.


 Touche :)

         If you're not performance oriented, then who cares about the GC
         either?


     There is a difference between not performance oriented and
     performance agnostic. Probably everyone cares about performance to
     some extent.


 True.


 On 6 March 2012 15:13, Dmitry Olshansky <dmitry.olsh gmail.com
 <mailto:dmitry.olsh gmail.com>> wrote:

     On 06.03.2012 16:27, Manu wrote:

            Phobos/druntime allocate liberally - the CRT almost never
         allocates


     It's just that most of the CRT has incredibly bad usability, partly
     because it lacks _any_ notion of allocators. And the policy of using
     statically allocated shared data, as in localtime, srand etc., shows
     remarkably bad M-T scalability.


 I agree to an extent. Most C API's tend to expect you to provide the
 result buffer,
...pointer and its supposed length, and then do something sucky if it doesn't fit, like truncate it (strncpy, I'm looking at you!).

 and that doesn't seem to be the prevailing pattern in D.

There are better abstractions than "pass a buffer".
 Some might argue it's ugly to pass a result buffer in, and I agree to an
 extent, but I'll take it every time over the library violating my apps
 allocation patterns.

         Who actually uses shared_ptr?


     Like everybody? Though with C++11 move semantics a unique_ptr is
     going to lessen its widespread use. And there are ways to spend
     fewer than two proper memory allocations per shared_ptr, such as
     keeping a special block allocator for the ref-counters.
     More importantly, smart pointers are here to stay in C++.


 Everybody eh.. :)
 Well speaking from within the games industry at least, there's a
 prevailing trend back towards flat C or C-like C++, many lectures and
 talks on the topic. I have no contact with any C++ programmers that use
 STL beyond the most trivial containers like vector. Many games companies
 re-invent some stl-ish thing internally which is less putrid ;)
 Additionally, I can't think of many libraries I've used that go hard-out
 C++. Most popular libraries are very conservative, or even flat C (most
 old stable libs that EVERYONE uses, zlib, png, jpeg, mad, tinyxml, etc).
Take into account how much discipline and manpower it took. Yet I remember the gory days when libpng segfaulted quite often ;)
 Havok, PhysX, FMOD, etc are C++, but very light C++, light classes, no
 STL, etc.
Havok is pretty old btw; back in the day, the STL implementation + C++ compiler combination used to be slow and crappy, partly because of poor inlining. Hence the so-called "abstraction cost". It's almost the other way around nowadays.
 Unreal used to use STL... but they fixed it :P
-- Dmitry Olshansky
Mar 06 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/6/2012 4:27 AM, Manu wrote:
 On 26 February 2012 00:55, Walter Bright <newshound2 digitalmars.com
     Most straight up GC vs malloc/free benchmarks miss something crucial. A GC
     allows one to do substantially *fewer* allocations. It's a lot faster to
not
     allocate than to allocate.
 Do you really think that's true?
Yes.
 Are there any statistics to support that?
No, just my experience using both.

Consider strings. In C, I'd often have a function that returns a string. The caller then (eventually) free's it. That means the string must have been allocated by malloc. That means that if I want to:

   return "foo";

I have to replace it with:

   return strdup("foo");

It means I can't do the "small string" optimization. It means I cannot return the tail of some other string. I cannot return a malloc'd string that anything else points to. I *must* return a *unique* malloc'd string.

This carries into a lot of data structures, and means lots of extra allocations.

Next problem: I can't do array slicing. I have to make copies instead.

You suggested using ref counting. That's only a partial solution. Setting aside all the problems of getting it right, consider getting a stream of input from a user. With GC, you can slice it and store those slices in a symbol table - no allocations at all. No chance of that without a GC, even with ref counting.
Mar 06 2012
next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Slicing works, it just requires more care. GC makes slicing work pretty much
automatically, though you can end up with severe memory bloat.

On Mar 6, 2012, at 6:25 PM, Walter Bright <newshound2 digitalmars.com> wrote:

 On 3/6/2012 4:27 AM, Manu wrote:
 On 26 February 2012 00:55, Walter Bright <newshound2 digitalmars.com
    Most straight up GC vs malloc/free benchmarks miss something crucial. A GC
    allows one to do substantially *fewer* allocations. It's a lot faster to
    not allocate than to allocate.
 Do you really think that's true?

 Yes.

 Are there any statistics to support that?

 No, just my experience using both.

 Consider strings. In C, I'd often have a function that returns a string.
 The caller then (eventually) free's it. That means the string must have
 been allocated by malloc. That means that if I want to:

    return "foo";

 I have to replace it with:

    return strdup("foo");

 It means I can't do the "small string" optimization. It means I cannot
 return the tail of some other string. I cannot return a malloc'd string
 that anything else points to. I *must* return a *unique* malloc'd string.

 This carries into a lot of data structures, and means lots of extra
 allocations.

 Next problem: I can't do array slicing. I have to make copies instead.

 You suggested using ref counting. That's only a partial solution. Setting
 aside all the problems of getting it right, consider getting a stream of
 input from a user. With GC, you can slice it and store those slices in a
 symbol table - no allocations at all. No chance of that without a GC, even
 with ref counting.
Mar 06 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/6/2012 7:19 PM, Sean Kelly wrote:
 Slicing works, it just requires more care.
You can't mix sliced data and unsliced, unless you have extra data in your structures to track this.
 GC makes slicing work pretty much automatically, though you can end up with
severe memory bloat.
I don't see how slicing produces bloat.
Mar 06 2012
parent reply Sean Kelly <sean invisibleduck.org> writes:
On Mar 6, 2012, at 7:47 PM, Walter Bright wrote:

 On 3/6/2012 7:19 PM, Sean Kelly wrote:
 Slicing works, it just requires more care.

 You can't mix sliced data and unsliced, unless you have extra data in
 your structures to track this.

Ah, I see what you're saying. True.

 GC makes slicing work pretty much automatically, though you can end up
 with severe memory bloat.

 I don't see how slicing produces bloat.

Slice ten bytes out of the middle of a ten MB buffer and the entire
buffer sticks around.
Mar 06 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/6/2012 9:59 PM, Sean Kelly wrote:
 Slice ten bytes out of the middle of a ten MB buffer and the entire buffer
sticks around.
True.
Mar 07 2012
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Mar 07, 2012 at 02:26:28AM -0800, Walter Bright wrote:
 On 3/6/2012 9:59 PM, Sean Kelly wrote:
Slice ten bytes out of the middle of a ten MB buffer and the entire
buffer sticks around.
[...]

Isn't there some way of dealing with this? I mean, if the GC marks the highest & lowest pointers that point to a 10MB block, it should be able to free up the unused parts. (Though I can see how this can be tricky, since the GC would have to understand which pointers involve arrays, so that it doesn't truncate long slices, etc.)


T

-- 
Marketing: the art of convincing people to pay for what they didn't need
before which you can't deliver after.
Mar 07 2012
prev sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Wednesday, 7 March 2012 at 02:25:41 UTC, Walter Bright wrote:
 On 3/6/2012 4:27 AM, Manu wrote:
 On 26 February 2012 00:55, Walter Bright 
 <newshound2 digitalmars.com
    Most straight up GC vs malloc/free benchmarks miss 
 something crucial. A GC
    allows one to do substantially *fewer* allocations. It's a 
 lot faster to not
    allocate than to allocate.
 Do you really think that's true?
Yes.
I think you're both right. GC definitely allows you to do fewer allocations, but as Manu said, it also makes people more allocation-happy.
 Are there any statistics to support that?
No, just my experience using both. Consider strings. In C, I'd often have a function that returns a string. The caller then (eventually) free's it. That means the string must have been allocated by malloc.
I'd say it is bad design to return a malloc'd string. You should take a destination buffer as an argument and put the string there (like strcpy and friends). That way you can do whatever you want. Sean made the good point that using buffers like that can lead to errors, but D's arrays make things a lot safer.
Mar 07 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 3/7/2012 1:09 AM, Peter Alexander wrote:
 On Wednesday, 7 March 2012 at 02:25:41 UTC, Walter Bright wrote:
 On 3/6/2012 4:27 AM, Manu wrote:
 On 26 February 2012 00:55, Walter Bright <newshound2 digitalmars.com
 Most straight up GC vs malloc/free benchmarks miss something crucial. A GC
 allows one to do substantially *fewer* allocations. It's a lot faster to not
 allocate than to allocate.
 Do you really think that's true?
Yes.
I think you're both right. GC does definitely allow you to do less allocations, but as Manu said it also makes people more allocation happy.
I don't regard the latter as a problem with GC.
 Are there any statistics to support that?
No, just my experience using both. Consider strings. In C, I'd often have a function that returns a string. The caller then (eventually) free's it. That means the string must have been allocated by malloc.
I'd say it is bad design to return a malloc'd string. You should take a destination buffer as an argument and put the string there (like strcpy and friends). That way you can do whatever you want.
strcpy() is a known unsafe function. And the problem with passing a buffer is you usually do not know the size in advance. I don't agree with your contention that this is a bad design (for C).
Mar 07 2012
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:

On Mar 6, 2012, at 4:27 AM, Manu <turkeyman gmail.com> wrote:

 On 26 February 2012 00:55, Walter Bright <newshound2 digitalmars.com> wrote:
 On 2/25/2012 2:08 PM, Paulo Pinto wrote:
 Most standard compiler malloc()/free() implementations are actually slower
 than most advanced GC algorithms.

 Most straight up GC vs malloc/free benchmarks miss something crucial. A GC
 allows one to do substantially *fewer* allocations. It's a lot faster to
 not allocate than to allocate.

 Do you really think that's true? Are there any statistics to support that?
 I'm extremely sceptical of this claim.

 I would have surely thought using a GC leads to a significant increase in
 allocations for a few reasons:
   It's easy to allocate, ie, nothing to discourage you
   It's easy to clean up - you don't have to worry about cleanup problems,
 makes it simpler to use in many situations
   Dynamic arrays are easy - many C++ users will avoid dynamic arrays
 because the explicit allocation/clean up implies complexity, one will
 always use the stack, or a fixed array where they can get away with it
   Slicing, concatenation, etc performs bucket loads of implicit GC
 allocations

Concatenation anyway.

   Strings... - C coders who reject the stl will almost always have a
 separate string heap with very particular allocation patterns, and almost
 always refcounted
   Phobos/druntime allocate liberally - the CRT almost never allocates

 This is my single biggest fear in D. I have explicit control within my own
 code, but I wonder if many D libraries will be sloppy and over-allocate
 all over the place, and be generally unusable in many applications.
 If D is another language like C where the majority of libraries (including
 the standard libraries I fear) are unusable in various contexts, then that
 kinda defeats the purpose. D's module system is one of its biggest selling
 points.

 I think there should be strict phobos allocation policies, and ideally,
 druntime should NEVER allocate if it can help it.

druntime already avoids allocations whenever possible. For example,
core.demangle generates its output in-place in a user-supplied buffer.

Regarding allocations in general, it's a matter of design philosophy. Tango,
for example, basically never implicitly allocates. Phobos does. I'd say that
Phobos is safer to program against and easier to use, but Tango affords more
control for the discerning programmer. Personally, I'd like to see fewer
implicit allocations in Phobos, but I think that ship has sailed.

 Consider C strings. You need to keep track of ownership of it. That often
 means creating extra copies, rather than sharing a single copy.

 Rubbish, strings are almost always either refcounted or on the stack for
 dynamic strings, or have fixed memory allocated within structures. I don't
 think I've ever seen someone duplicating strings into separate allocations
 liberally.

I think the C standard library was designed with the intent that strings
would be duplicated during processing, similar to D's native string
operations. It's true that people often use static buffers instead, but this
has also been an enormous source of program bugs. Buffer overflow attacks
wouldn't exist in C if people didn't do this. That said, I do it too. And
I'll readily say that working with strings this way is a huge pain in the
ass. It just has more predictable performance.

 Enter C++'s shared_ptr. But that works by, for each object, allocating a
 *second* chunk of memory to hold the reference count. Right off the bat,
 you've got twice as many allocations & frees with shared_ptr than a GC
 would have.

 Who actually uses shared_ptr? Talking about the stl is misleading... an
 overwhelming number of C/C++ programmers avoid the stl like the plague (for
 these exact reasons). Performance oriented programmers rarely use STL out
 of the box, and that's what we're talking about here right? If you're not
 performance oriented, then who cares about the GC either?

I've used it extensively. Managing memory ownership is one of the most
complicated tasks in a C/C++ app, and in C++ this often means choosing the
appropriate pointer type for the job--auto_ptr for transferral of ownership,
shared_ptr, etc.
Mar 06 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-03-06 17:31, Sean Kelly wrote:
 On Mar 6, 2012, at 4:27 AM, Manu <turkeyman gmail.com
 <mailto:turkeyman gmail.com>> wrote:

 On 26 February 2012 00:55, Walter Bright <newshound2 digitalmars.com
 <mailto:newshound2 digitalmars.com>> wrote:

     On 2/25/2012 2:08 PM, Paulo Pinto wrote:

         Most standard compiler malloc()/free() implementations are
         actually slower than
         most advanced GC algorithms.


     Most straight up GC vs malloc/free benchmarks miss something
     crucial. A GC allows one to do substantially *fewer* allocations.
     It's a lot faster to not allocate than to allocate.


 Do you really think that's true? Are there any statistics to support that?
 I'm extremely sceptical of this claim.

 I would have surely thought using a GC leads to a significant
 *increase* in allocations for a few reasons:
 It's easy to allocate, ie, nothing to discourage you
 It's easy to clean up - you don't have to worry about cleanup
 problems, makes it simpler to use in many situations
 Dynamic arrays are easy - many C++ users will avoid dynamic arrays
 because the explicit allocation/clean up implies complexity, one will
 always use the stack, or a fixed array where they can get away with it
 Slicing, concatenation, etc performs bucket loads of implicit GC
 allocations
Concatenation anyway.
 Strings... - C coders who reject the stl will almost always have a
 separate string heap with very particular allocation patterns, and
 almost always refcounted
 Phobos/druntime allocate liberally - the CRT almost never allocates

 This is my single biggest fear in D. I have explicit control within my
 own code, but I wonder if many D libraries will be sloppy and
 over-allocate all over the place, and be generally unusable in many
 applications.
 If D is another language like C where the majority of libraries
 (including the standard libraries I fear) are unusable in various
 contexts, then that kinda defeats the purpose. D's module system is
 one of its biggest selling points.

 I think there should be strict phobos allocation policies, and
 ideally, druntime should NEVER allocate if it can help it.
druntime already avoids allocations whenever possible. For example,
core.demangle generates its output in-place in a user-supplied buffer.
Regarding allocations in general, it's a matter of design philosophy.
Tango, for example, basically never implicitly allocates. Phobos does. I'd
say that Phobos is safer to program against and easier to use, but Tango
affords more control for the discerning programmer. Personally, I'd like to
see fewer implicit allocations in Phobos, but I think that ship has sailed.
I have not seen any evidence that Tango would be less safe than Phobos.
Tango uses buffers to let the user optionally pre-allocate buffers. But if
the user doesn't, or the buffer is too small, Tango will allocate the
buffer.

-- 
/Jacob Carlborg
Mar 06 2012
prev sibling parent Brad Anderson <eco gnuk.net> writes:
On Tue, Mar 6, 2012 at 5:27 AM, Manu <turkeyman gmail.com> wrote:

 On 26 February 2012 00:55, Walter Bright <newshound2 digitalmars.com>wrote:

 On 2/25/2012 2:08 PM, Paulo Pinto wrote:

 Most standard compiler malloc()/free() implementations are actually
 slower than
 most advanced GC algorithms.
Most straight up GC vs malloc/free benchmarks miss something crucial. A GC allows one to do substantially *fewer* allocations. It's a lot faster to not allocate than to allocate.
 Do you really think that's true? Are there any statistics to support that?
 I'm extremely sceptical of this claim.

 I would have surely thought using a GC leads to a significant *increase* in
 allocations for a few reasons:
 It's easy to allocate, ie, nothing to discourage you
 It's easy to clean up - you don't have to worry about cleanup problems,
 makes it simpler to use in many situations
 Dynamic arrays are easy - many C++ users will avoid dynamic arrays because
 the explicit allocation/clean up implies complexity, one will always use
 the stack, or a fixed array where they can get away with it
 Slicing, concatenation, etc performs bucket loads of implicit GC allocations
 Strings... - C coders who reject the stl will almost always have a separate
 string heap with very particular allocation patterns, and almost always
 refcounted
 Phobos/druntime allocate liberally - the CRT almost never allocates

 This is my single biggest fear in D. I have explicit control within my own
 code, but I wonder if many D libraries will be sloppy and over-allocate all
 over the place, and be generally unusable in many applications.
 If D is another language like C where the majority of libraries (including
 the standard libraries I fear) are unusable in various contexts, then that
 kinda defeats the purpose. D's module system is one of its biggest selling
 points.

 I think there should be strict phobos allocation policies, and ideally,
 druntime should NEVER allocate if it can help it.

 Consider C strings. You need to keep track of ownership of it. That often
 means creating extra copies, rather than sharing a single copy.

 Rubbish, strings are almost always either refcounted or on the stack for
 dynamic strings, or have fixed memory allocated within structures. I don't
 think I've ever seen someone duplicating strings into separate allocations
 liberally.
Many STL implementers have abandoned COW (MSVC/Dinkumware, Clang's libc++, STLPort) and are instead using short-string-optimization. Reference counted strings are seen as an anti-optimization in multithreaded code (especially since rvalue references were added making SSO even faster). [1]
 Enter C++'s shared_ptr. But that works by, for each object, allocating a
 *second* chunk of memory to hold the reference count. Right off the bat,
 you've got twice as many allocations & frees with shared_ptr than a GC
 would have.
Who actually uses shared_ptr? Talking about the stl is misleading... an overwhelming number of C/C++ programmers avoid the stl like the plague (for these exact reasons). Performance oriented programmers rarely use STL out of the box, and that's what we're talking about here right? If you're not performance oriented, then who cares about the GC either?
That's certainly true, from what I hear, in the games industry, but the STL is heavily used outside of games. Regards, Brad Anderson [1] http://www.gotw.ca/publications/optimizations.htm
Mar 06 2012
prev sibling parent reply Andrew Wiley <wiley.andrew.j gmail.com> writes:
On Sat, Feb 25, 2012 at 4:08 PM, Paulo Pinto <pjmlp progtools.org> wrote:
 Am 25.02.2012 21:26, schrieb Peter Alexander:

 On Saturday, 25 February 2012 at 20:13:42 UTC, so wrote:
 On Saturday, 25 February 2012 at 18:47:12 UTC, Nick Sabalausky wrote:

 Interesting. I wish he'd elaborate on why it's not an option for his
 daily
 work.
Not the design but the implementation, memory management would be the first.
Memory management is not a problem. You can manage memory just as easily in D as you can in C or C++. Just don't use global new, which they'll already be doing.
I couldn't agree more. The GC issue comes around often, but I personally think that the main issue is that the GC needs to be optimized, not that manual memory management is required. Most standard compiler malloc()/free() implementations are actually slower than most advanced GC algorithms.
That's not the issue here. The issue is that when your game is required to
render at 60fps, you've got 16.67ms for each frame and no time for a 100ms+
GC cycle. In this environment, it's mostly irrelevant that you'll spend more
time total in malloc than you would have spent in the GC, because you can
only spare the time in small chunks, not large ones.

One simple solution is to avoid all dynamic allocation, but as a few mostly
unanswered NG posts have shown, the compiler is currently implicitly
generating dynamic allocations in a few places, and there's no simple way to
track them down or do anything about them.
Feb 25 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-02-25 23:36, Andrew Wiley wrote:
 On Sat, Feb 25, 2012 at 4:08 PM, Paulo Pinto<pjmlp progtools.org>  wrote:
 Am 25.02.2012 21:26, schrieb Peter Alexander:

 On Saturday, 25 February 2012 at 20:13:42 UTC, so wrote:
 On Saturday, 25 February 2012 at 18:47:12 UTC, Nick Sabalausky wrote:

 Interesting. I wish he'd elaborate on why it's not an option for his
 daily
 work.
Not the design but the implementation, memory management would be the first.
Memory management is not a problem. You can manage memory just as easily in D as you can in C or C++. Just don't use global new, which they'll already be doing.
I couldn't agree more. The GC issue comes around often, but I personally think that the main issue is that the GC needs to be optimized, not that manual memory management is required. Most standard compiler malloc()/free() implementations are actually slower than most advanced GC algorithms.
That's not the issue here. The issue is that when your game is required to render at 60fps, you've got 16.67ms for each frame and no time for 100ms+ GC cycle. In this environment, it's mostly irrelevant that you'll spend more time total in malloc than you would have spent in the GC because you can only spare the time in small chunks, not large ones. One simple solution is to avoid all dynamic allocation, but as a few mostly unanswered NG posts have shown, the compiler is currently implicitly generating dynamic allocations in a few places, and there's no simple way to track them down or do anything about them.
You can remove the GC and you'll get a linker error when it's used. Not the
best way to track them down, but it works.

-- 
/Jacob Carlborg
Feb 26 2012
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Yao Gomez" <yao.gomez gmail.com> wrote in message 
news:pdyvfpeaigfvorkfnddi forum.dlang.org...
 On Saturday, 25 February 2012 at 16:08:40 UTC, Nick Sabalausky wrote:
 "Trass3r" <un known.com> wrote in message 
 news:op.v98sager3ncmek enigma...

It's not showing the actual quote, can someone paste it?
It works for me. God bless Javascript.
With JS I get "Sorry, that page doesn't exist!" (Without JS I get a "Signup for Twitface" screen) So much for wonderful Javascript.
 Anyways, here's the quote:

 Using D for my daily work is not an option, but I applaud thier inclusion 
 of a "pure" attribute. 
Feb 25 2012
prev sibling parent "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Saturday, 25 February 2012 at 16:04:57 UTC, Trass3r wrote:

I think I could be to blame for that.

https://twitter.com/#!/Poita_/status/173106149669875712

Obviously he can't use D for his day to day work because they already have a
massive codebase written in C++. Also, they need to generate PowerPC code
and use lots of platform-specific tools and APIs that are all based around
C++. It would simply be impractical to change over to D.
Feb 25 2012