
digitalmars.D - Microsoft working on new systems language

reply "Barry L." <barry.lapthorn gmail.com> writes:
Hello everyone, first post...

Just saw this:  
http://joeduffyblog.com/2013/12/27/csharp-for-systems-programming/

D (and Rust) get a mention with this quote:  "There are other 
candidates early in their lives, too, most notably Rust and D. 
[...]
talent and community just an arm’s length away."
Dec 28 2013
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/28/13, Barry L. <barry.lapthorn gmail.com> wrote:
 Just saw this:
 http://joeduffyblog.com/2013/12/27/csharp-for-systems-programming/
Hmm it's already down (database error). Is there a cached version?
Dec 28 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/28/13, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 On 12/28/13, Barry L. <barry.lapthorn gmail.com> wrote:
 Just saw this:
 http://joeduffyblog.com/2013/12/27/csharp-for-systems-programming/
Hmm it's already down (database error). Is there a cached version?
It seems this is relevant (from reddit): http://lambda-the-ultimate.org/node/4862
Dec 28 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/28/13, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 On 12/28/13, Barry L. <barry.lapthorn gmail.com> wrote:
 Just saw this:
 http://joeduffyblog.com/2013/12/27/csharp-for-systems-programming/
Hmm it's already down (database error). Is there a cached version?
Found cached version from HN: http://webcache.googleusercontent.com/search?q=cache%3Ajoeduffyblog.com%2F2013%2F12%2F27%2Fcsharp-for-systems-programming%2F&oq=cache%3Ajoeduffyblog.com%2F2013%2F12%2F27%2Fcsharp-for-systems-programming%2F
Dec 28 2013
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 12/28/13, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 Found cached version from HN:
 http://webcache.googleusercontent.com/search?q=cache%3Ajoeduffyblog.com%2F2013%2F12%2F27%2Fcsharp-for-systems-programming%2F&oq=cache%3Ajoeduffyblog.com%2F2013%2F12%2F27%2Fcsharp-for-systems-programming%2F
Well, anyway, there's nothing really "meaty" here. They don't say whether it's a language which will de facto run mostly on Windows (of course it will be), the syntax isn't shown, practically nothing is shown.
Dec 28 2013
next sibling parent reply "Big Tummy" <bigtummy gmail.com> writes:
On Saturday, 28 December 2013 at 13:49:44 UTC, Andrej Mitrovic 
wrote:
 On 12/28/13, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 Found cached version from HN:
 http://webcache.googleusercontent.com/search?q=cache%3Ajoeduffyblog.com%2F2013%2F12%2F27%2Fcsharp-for-systems-programming%2F&oq=cache%3Ajoeduffyblog.com%2F2013%2F12%2F27%2Fcsharp-for-systems-programming%2F
Well anyway there's nothing really "meaty" here. They don't say whether it's a language which will de-facto run mostly on Windows (of course it will be..), the syntax isn't shown, practically nothing is shown.
Sounds like they are building a Windows only version of D.
Dec 28 2013
parent reply Klaim - Joël Lamotte <mjklaim gmail.com> writes:
This is interesting but:
 - they don't talk about generic code?
 - it's not clear if their language solves the build time issue C++ have
(which D solves)
 - if it's not a totally open language specification, then it's a dead-end
to me.
Dec 28 2013
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Saturday, 28 December 2013 at 16:09:45 UTC, Klaim - Joël 
Lamotte wrote:
 This is interesting but:
  - they don't talk about generic code?
  - it's not clear if their language solves the build time issue 
 C++ have
 (which D solves)
  - if it's not a totally open language specification, then it's 
 a dead-end
 to me.
In the comments the author (joeduffy) writes: "we prove programs race-free, and have generics & inheritance." Interesting!
Dec 28 2013
parent "Ola Fosheim Grøstad" writes:


http://channel9.msdn.com/Forums/Coffeehouse/Concurrency-Safe-C-from-TSIMidori-team-Joe-Duffy-etc
Dec 28 2013
prev sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On 28.12.2013 16:43, Klaim - Joël Lamotte wrote:
 This is interesting but:
   - they don't talk about generic code?
They use the same model of genericity as .NET, as explained on the original blog entry.
   - it's not clear if their language solves the build time issue C++
 have (which D solves)
Any language with proper module support solves this problem. C++'s reliance on C toolchains is one reason for such issues.
   - if it's not a totally open language specification, then it's a
 dead-end to me.
I don't care. My tool religion discussions are long gone, and we use the tools requested by customers anyway, not ones chosen by ourselves.

--
Paulo
Dec 29 2013
prev sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 28 Dec 2013 05:49:33 -0800, Andrej Mitrovic  
<andrej.mitrovich gmail.com> wrote:

 On 12/28/13, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 Found cached version from HN:
 http://webcache.googleusercontent.com/search?q=cache%3Ajoeduffyblog.com%2F2013%2F12%2F27%2Fcsharp-for-systems-programming%2F&oq=cache%3Ajoeduffyblog.com%2F2013%2F12%2F27%2Fcsharp-for-systems-programming%2F
Well anyway there's nothing really "meaty" here. They don't say whether it's a language which will de-facto run mostly on Windows (of course it will be..), the syntax isn't shown, practically nothing is shown.
Well, he did say the plan is to open-source the whole thing, so ports to other systems (probably by Xamarin) would be easier than the reverse engineering that Mono had to do back in the days of yore. So at least they're headed in the right direction.

--
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Dec 28 2013
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:

 talent and community just an arm’s length away.
Pretty much the definition of NIH syndrome :)
Dec 28 2013
parent "Chris Cain" <clcain uncg.edu> writes:
On Saturday, 28 December 2013 at 16:10:43 UTC, Dicebot wrote:
 Pretty much the definition of NIH syndrome :)
Exactly. I'm always secretly hoping Microsoft's next programming project involves them using and contributing to LLVM. But then I remember it's Microsoft and they don't do anything that could be beneficial for all. They're always going to reinvent the wheel and their wheel is going to have a different bolt pattern than every other wheel.
Dec 28 2013
prev sibling next sibling parent reply "Steve Teale" <steve.teale britseyeview.com> writes:
On Saturday, 28 December 2013 at 11:13:55 UTC, Barry L. wrote:
 Hello everyone, first post...
Dec 28 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-12-28 18:56, Steve Teale wrote:


Has already been tried. That is, porting D to .NET.

--
/Jacob Carlborg
Dec 28 2013
prev sibling next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 28 Dec 2013 03:13:53 -0800, Barry L. <barry.lapthorn gmail.com> wrote:

 Hello everyone, first post...

 Just saw this:
 http://joeduffyblog.com/2013/12/27/csharp-for-systems-programming/

 D (and Rust) get a mention with this quote:  "There are other candidates early in their lives, too, most notably Rust and D. But hey, my team [...] an arm’s length away."
I want to make a point here that many people come to D looking for [...] or Java, and for the most part (using LDC/GDC) you get exactly that. This language [...] supported by a large corporate entity with mountains of money and a vested interest in making it successful. They can kill bugs and make improvements [...]

This needs to be a wake-up call for the D community. For a long time D has occupied the Programmer Efficient and Safe Native Compiled Language niche more or less unchallenged in any serious way (with a nod to Rust). If Microsoft actually goes through with this (and they will, since the .NET runtime is murderous on mobile device battery performance), the argument for D will get much harder to make. Yes, we can argue the ideology of one technical bullet point versus another, but that misses the point. The vast majority of programmers pick their languages based not on ideological purity, but on the ability to get stuff done quickly. Obviously this is more than just the language; it's also the availability of tutorials and examples. But there isn't much we can do about that at this point. And [...] that D cannot. Cross-library namespace composability is big on my personal list. Or proper shared libraries. Or, etc.

I know that I wanted out of the Microsoft world for performance and cross-platform reasons. However, with this project, especially the interest in cross-platforming it that they seem to be showing, they will have a much easier time getting me back. After all, I came to D from [...]; it wouldn't be hard to go back.

So while we're celebrating that D is mentioned in an article that made the front page of reddit (by virtue of its author being well-respected and the importance of his employer), let us also reflect on what this news most likely means for D. Microsoft can invalidate us almost overnight with mountains of money and the size of their community. Yes, we got an honorable mention; that also means we're on the radars of people who matter...
--
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Dec 28 2013
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On 29.12.2013 06:59, Adam Wilson wrote:
 On Sat, 28 Dec 2013 03:13:53 -0800, Barry L. <barry.lapthorn gmail.com>
 wrote:

 Hello everyone, first post...

 Just saw this:
 http://joeduffyblog.com/2013/12/27/csharp-for-systems-programming/

 D (and Rust) get a mention with this quote:  "There are other
 candidates early in their lives, too, most notably Rust and D. But

 community just an arm’s length away."
 [...]
Well, this is nothing new, I would say. Microsoft Research already has [...] mainstream Windows, except for the MDIL compiler used in WP8.

Now, with the native political side gaining strength after the Vista fiasco, something like this is to be expected, if the wind doesn't change again.

--
Paulo
Dec 29 2013
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 29 Dec 2013 00:45:39 -0800, Paulo Pinto <pjmlp progtools.org> wrote:

 On 29.12.2013 06:59, Adam Wilson wrote:
 [...]

 Well, this is nothing new I would say. Microsoft Research already has [...] mainstream Windows, except for the MDIL compiler used in WP8.

 Now with the native political side gaining strength after the Vista fiasco, it is to be expected something like this, if the wind doesn't change again.

 --
 Paulo

Indeed. However, the difference is that they're going public with it. [...] but now they're going to let the little people play with one. [...] never been a real competitor to D that anybody outside of MSR could actually do anything with. That's really my whole point. But it's an important one. Now we have another option for a Safe, Efficient, Native language that you don't have to be a masochist to use. And it's going to evolve much faster than D...

--
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Dec 29 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-12-29 06:59, Adam Wilson wrote:

 The vast majority of programmers pick their languages based
 not on ideological purity, but on ability to get stuff done quickly.
I get the feeling most developers pick the default language of the platform they develop for. On Windows that means .NET, and most likely [...] are the languages which make it easy to get stuff done quickly for those platforms, because that's where the companies invest their money.

--
/Jacob Carlborg
Dec 29 2013
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 29 December 2013 at 06:00:31 UTC, Adam Wilson wrote:
 I want to make a point here that many people come to D looking for [...] or Java, and for the most part (using LDC/GDC) you get exactly [...]
I think neither Go, D nor this language is as performant as (skilled use of) C/C++.

If the background information is correct, this new language is aiming at safe concurrent programming, just like Go. So they apparently use a scheme of making variables immutable and isolated, with all global variables immutable… I don't think you need to do that in all cases, with transactional memory now available in new processors. Global variables are fast and easy for objects with few interdependencies; with near lock-free mechanisms in place, such as CAS and transactional memory, this is overkill in many scenarios. Simple, tight, unsafe, low-level memory-coherent designs tend to be faster. Guards as language-level constructs, local storage etc. tend to be slower.

But yes, I agree that this language could sweep the feet out from under D and Rust; Go is safe through its application domain. D really needs some work in the low-level area to shine.

[...] programming languages. I think they are not; as long as C/C++ is a better solution for embedded programming, it will remain THE system-level programming language. Which is kind of odd, considering that embedded systems would benefit a lot from a safe programming language (due to the cost/difficulty of updating software installed on deployed hardware). It doesn't matter if it is possible to write C++-like code in a language if the library support and knowhow isn't dominant in the ecosystem. (Like assuming a GC or trying too hard to be cross-platform.)
Dec 29 2013
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Sunday, 29 December 2013 at 13:15:14 UTC, Ola Fosheim Grøstad 
wrote:
 I think neither Go, D or this language is as performant as 
 (skilled use of) C/C++.
This is not true. Assuming skilled use and same compiler backend those are equally performant. D lacks some low-level control C has (which is important for embedded) but it is not directly related to performance.
Dec 29 2013
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 29 December 2013 at 13:46:07 UTC, Dicebot wrote:
 This is not true. Assuming skilled use and same compiler 
 backend those are equally performant. D lacks some low-level 
 control C has (which is important for embedded) but it is not 
 directly related to performance.
That low-level control also matters for performance when you have hard deadlines. E.g. when the GC kicks in, it not only hogs all the threads that participate in GC, it also trashes the caches, unless you have a GC implementation that bypasses the caches. Sustained trashing of caches is bad.

C has low-level, low-resource-usage defaults. While you can do the same in some other languages, they tend to default to more expensive use patterns; D defaults to stuff like GC and thread-local storage. Defaults affect library design, which in turn affects performance. (Thread-local storage requires either an extra indirection through a register or multiple kernel-level page tables per process.)
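For readers unfamiliar with the default being discussed: a minimal sketch (my own, not from the thread) of D's thread-local-by-default module variables, with `__gshared` opting back into a C-style process global:

```d
import core.thread;

int tls;              // thread-local by default: one copy per thread
__gshared int global; // process-global, like a C extern variable

// Returns what a freshly spawned thread observes for each variable.
int[2] observeFromNewThread()
{
    int[2] seen;
    auto t = new Thread({
        seen[0] = tls;    // the new thread gets its own zero-initialized copy
        seen[1] = global; // but shares the single process-wide global
    });
    t.start();
    t.join();
    return seen;
}

void main()
{
    tls = 42;
    global = 42;
    assert(observeFromNewThread() == [0, 42]);
}
```

Every access to `tls` goes through the TLS mechanism Ola mentions; `global` is a plain load/store.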
Dec 29 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/29/13 6:35 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 13:46:07 UTC, Dicebot wrote:
 This is not true. Assuming skilled use and same compiler backend those
 are equally performant. D lacks some low-level control C has (which is
 important for embedded) but it is not directly related to performance.
That low-level control also matters for performance, when you have hard deadlines. E.g. when the GC kicks in, it not only hogs all the threads that participate in GC, it also trash the caches unless you have a GC implementation that bypasses the caches. Sustained trashing of caches is bad.
Yeah how about using deterministic deallocation in the inner loops - that's the only place where it matters.
 C has low-level, low resource usage defaults. While you can do the same
 in some other languages they tend to default to more expensive use
 patterns. Like D defaults to stuff like GC and thread-local-storage.
 Defaults affect library design, which in turn affect performance.
 (thread local storage requires either an extra indirection through a
 register or multiple kernel level page tables per process)
It is my opinion that safety is the best default, at least here; global storage is very often an antipattern in singlethreaded applications and almost always so in multithreaded ones. I think C got it wrong there and D is in better shape.

Andrei
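Andrei's earlier suggestion of deterministic deallocation in the inner loops can be sketched as follows (the function and numbers are mine, not from the thread): allocate once with malloc before the hot loop, free deterministically via scope(exit), and the loop itself never touches the GC:

```d
import core.stdc.stdlib : malloc, free;

// Sum of squares, reusing one manually managed buffer so that the
// inner loops perform no GC allocations at all.
double sumSquares(size_t n)
{
    auto buf = cast(double*) malloc(n * double.sizeof);
    assert(buf !is null);
    scope(exit) free(buf); // deterministic deallocation, no collector involved

    double total = 0;
    foreach (i; 0 .. n)
        buf[i] = cast(double) i * cast(double) i;
    foreach (i; 0 .. n)
        total += buf[i];
    return total;
}

void main()
{
    assert(sumSquares(4) == 14.0); // 0 + 1 + 4 + 9
}
```

Since the GC can only run during a GC allocation, code shaped like this can never be paused by a collection inside the loop.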
Dec 29 2013
parent "CJS" <Prometheus85 hotmail.com> writes:
On Sunday, 29 December 2013 at 15:26:35 UTC, Andrei Alexandrescu 
wrote:
 On 12/29/13 6:35 AM, "Ola Fosheim Grøstad" 
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 13:46:07 UTC, Dicebot wrote:
 This is not true. Assuming skilled use and same compiler 
 backend those
 are equally performant. D lacks some low-level control C has 
 (which is
 important for embedded) but it is not directly related to 
 performance.
That low-level control also matters for performance, when you have hard deadlines. E.g. when the GC kicks in, it not only hogs all the threads that participate in GC, it also trash the caches unless you have a GC implementation that bypasses the caches. Sustained trashing of caches is bad.
Yeah how about using deterministic deallocation in the inner loops - that's the only place where it matters.
If this is indeed true then it sounds like a standard technique people should be aware of. I really hope it's stuck in Ali Cehreli's book (which is awesome) before it's considered completed and released. It would be very nice to have something to point the GC-hating crowd to as a technique and ask them to present examples where the technique isn't enough.
Dec 29 2013
prev sibling parent "Dicebot" <public dicebot.lv> writes:
On Sunday, 29 December 2013 at 14:35:44 UTC, Ola Fosheim Grøstad 
wrote:
 That low-level control also matters for performance, when you 
 have hard deadlines. E.g. when the GC kicks in, it not only 
 hogs all the threads that participate in GC, it also trash the 
 caches unless you have a GC implementation that bypasses the 
 caches. Sustained trashing of caches is bad.
Common misconception. For absolute majority of programs it never gets to make the difference. I have certain experience with those where such difference really matters and often argue about it on this NG. But applying it as general performance criteria is overstatement at best. In practice most user-space applications are likely to be faster in higher level garbage collected language because it allows to spend more time on architecture and algorithms which are always primary bottlenecks.
Dec 29 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2013 5:46 AM, Dicebot wrote:
 D lacks some low-level control C has
For instance? On the other hand, D has an inline assembler and C (without vendor extensions) does not. C doesn't even have (without vendor extensions) alignment control on struct fields.
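For reference, the per-field alignment control Walter mentions looks like this in D (a small sketch of my own; the sizes assume a typical 32/64-bit target where `uint` is 4-byte aligned):

```d
// C (without vendor extensions) has no per-field alignment control;
// in D, align() on fields is part of the language.
struct Packed
{
    align(1):
    ubyte tag;   // no padding inserted after this field
    uint  value;
}

struct Natural
{
    ubyte tag;   // default alignment pads this out before the next field
    uint  value;
}

void main()
{
    static assert(Packed.sizeof == 5);       // 1 + 4, no padding
    static assert(Natural.sizeof == 8);      // 1 + 3 padding + 4
    static assert(Packed.value.offsetof == 1);
}
```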
Dec 29 2013
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, December 29, 2013 12:19:33 Walter Bright wrote:
 On 12/29/2013 5:46 AM, Dicebot wrote:
 D lacks some low-level control C has
For instance? On the other hand, D has an inline assembler and C (without vendor extensions) does not. C doesn't even have (without vendor extensions) alignment control on struct fields.
I would guess that he's referring to some sort of compiler extension that isn't standard C but which is generally available (e.g. __restrict, which was discussed here a while back). And we don't necessarily have all of the stray stuff like that in any of the current D compilers. But since I never use that sort of thing in C/C++, I'm not very familiar with all of the various extensions that are available, let alone what people who really care about performance frequently use.

However, with regards to the language itself, I think that we're definitely on par with (if not better than) C/C++ with regards to low-level control.

- Jonathan M Davis
Dec 29 2013
prev sibling parent "Dicebot" <public dicebot.lv> writes:
On Sunday, 29 December 2013 at 20:19:35 UTC, Walter Bright wrote:
 On 12/29/2013 5:46 AM, Dicebot wrote:
 D lacks some low-level control C has
For instance? On the other hand, D has an inline assembler and C (without vendor extensions) does not. C doesn't even have (without vendor extensions) alignment control on struct fields.
I have nothing to add over what was already discussed in the http://forum.dlang.org/post/mppphhuomfpxyfxsyusp forum.dlang.org and http://forum.dlang.org/post/mailman.479.1386854234.3242.digitalmars-d puremagic.com threads.
Dec 29 2013
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/29/13 5:15 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 06:00:31 UTC, Adam Wilson wrote:
 I want to make a point here that many people come to do looking for

 and for the most part (using LDC/GDC) you get exactly that. This

I think neither Go, D or this language is as performant as (skilled use of) C/C++.
Wait, what? Go excused itself out of the competition, and you'd need to bring some evidence that D is not as fast/tight as C++. I have accumulated quite a bit of evidence the other way without even trying.

This also smacks of "no true Scotsman" (http://en.wikipedia.org/wiki/No_true_Scotsman). Any inefficient C++ code (owing to hidden costs of features like unnecessary copying, rigidity of the language which discourages aggressive optimization refactoring, the many traps for the unwary that make the simplest and most intuitive code often be the least efficient) can be nicely swept under the rug as "unskilled" use. By that same argument there is a "skilled" use of D that avoids creating garbage in inner loops, uses allocating stdlib functions judiciously, etc.

Clearly there's work we need to do, particularly on improving the standard library. But claiming that D code can't be efficient because of some stdlib artifacts is like claiming C++ code can't do efficient I/O because it must use iostreams (which are indeed objectively and undeniably horrifically slow). Neither argument has merit.

Andrei
Dec 29 2013
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 29 December 2013 at 15:22:29 UTC, Andrei Alexandrescu 
wrote:
 Wait, what? Go excused itself out of the competition, and you'd
Agree. I consider Go to be a web-service language atm.
 need to bring some evidence that D is not as fast/tight as C++. 
 I have accumulated quite a bit of evidence the other way 
 without even trying.
One example: performant C++ is actually C with a bit of C++ convenience, so you toss out exception handling, stack unwinding, and even turn off stack frames. With that and allocation pools you can backtrack by simply setting the stack pointer and dropping the pool. C is so barebones that you can do your own coroutines without language support if you wish. As long as you only call nothrow functions you can do this. So you can use slower C++ convenience for initialization and go close-to-the-metal after that.
 the standard library. But claiming that D code can't be 
 efficient because of some stdlib artifacts is like claiming C++ 
 code can't do efficient I/O because it must use iostreams 
 (which are indeed objectively and undeniably horrifically 
 slow). Neither argument has merit.
Well, but people who care about real-time performance in C++ use libraries that stay clear of those areas. The C++ stdlib is more for medium-performance code sections than high-performance code.

The GC trashes caches when it kicks in. That affects real-time threads, where you basically have hard real-time requirements. That means you need more headroom (you can do less signal-processing in an audio real-time thread).
Dec 29 2013
parent reply jerro <jkrempus gmail.com> writes:
On Sun, 29 Dec 2013 17:54:51 +0000, Ola Fosheim Grøstad wrote:

 C is so barebones that you can do your own coroutines without language
 support if you wish.
You can do that in D too. core.thread.Fiber is implemented in D (with a bit of inline assembly), without any special language support.
 The GC trash cashes when it kicks in. 
It can only kick in on allocation. In parts of your code where latency is crucial, just avoid allocating from the garbage collected heap.
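The Fiber usage jerro describes can be sketched like this (a minimal example of my own; the function name is invented for illustration):

```d
import core.thread : Fiber;

// Cooperative coroutines as plain library code: the fiber and its
// caller interleave deterministically via call()/yield().
int[] pingPong()
{
    int[] log;
    auto f = new Fiber({
        log ~= 1;
        Fiber.yield();   // suspend; control returns to the caller
        log ~= 3;
    });

    f.call();            // runs the fiber until the first yield
    log ~= 2;
    f.call();            // resumes after the yield, runs to completion
    assert(f.state == Fiber.State.TERM);
    return log;
}

void main()
{
    assert(pingPong() == [1, 2, 3]);
}
```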
Dec 29 2013
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 29 December 2013 at 18:29:52 UTC, jerro wrote:
 You can do that in D too. core.thread.Fiber is implemented in D 
 (with a
 bit of inline assembly), without any special language support.
Yes, coroutines were a bad example; you probably can do that in many stack-based languages. My point was more that the transparency of C's simple runtime is such that you can easily understand the consequences of such tricks. And the advantage of C(++) is that you can do focused low-level fine-tuning in one compilation unit while using a more standard feature set in the rest of your code, because the part of the runtime you have to consider is quite simple.
 The GC trash cashes when it kicks in.
It can only kick in on allocation. In parts of your code where latency is crucial, just avoid allocating from the garbage collected heap.
I understand that. In a real-time system you might enter such sections of your code maybe 120 times per second, in a different thread which might be killed by the OS if it doesn't complete on time.

It is probably feasible to create a real-time-friendly garbage collector that can cooperate with real-time threads, but it isn't trivial. To get good cache coherency, all cores have to "cooperate" on what memory areas they write/read when you enter timing-critical code sections. The GC jumps all over memory real fast, touching cacheline after cacheline, basically invalidating the cache (the effect depends on the GC/application/CPU/memory bus).
Dec 29 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2013 11:15 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 It is probably feasible to create a real-time friendly garbage collector that
 can cooperate with realtime threads, but it isn't trivial. To get good cache
 coherency all cores have to "cooperate" on what memory areas they write/read to
 when you enter timing critical code sections. GC jumps all over memory real
fast
 touching cacheline after cacheline basically invalidating the cache (the effect
 depends on the GC/application/cpu/memorybus).
I'll reiterate that the GC will NEVER EVER pause your program unless you are actually calling the GC to allocate memory. A loop that does not GC allocate WILL NEVER PAUSE.

Secondly, you can write C code in D. You can stick to making calls to only C's standard library. It WILL NEVER PAUSE. You can do everything you can do in C. You can malloc/free. You don't have to throw exceptions. You don't have to use closures.

This fear and loathing of the GC is, in my opinion, wildly overblown. Granted, you have to know what you're doing to write performant D code. You have to know the patterns of memory allocation happening in the code. Organizing your data structures for best caching is required. But you need to have that expertise to write fast C code, too.
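As a small illustration of the "C code in D" point (the example is mine, not Walter's): D string literals are zero-terminated, so C's standard library can be called directly, with no D runtime machinery involved:

```d
import core.stdc.stdio : printf;
import core.stdc.string : strlen;

void main()
{
    auto msg = "no runtime magic here";
    // String literals carry an implicit trailing '\0', so .ptr is a valid C string.
    assert(strlen(msg.ptr) == msg.length);
    printf("%s\n", msg.ptr); // plain C I/O, no GC allocation anywhere
}
```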
Dec 29 2013
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 29 December 2013 at 20:36:27 UTC, Walter Bright wrote:
 I'll reiterate that the GC will NEVER EVER pause your program 
 unless you are actually calling the GC to allocate memory. A 
 loop that does not GC allocate WILL NEVER PAUSE.
That's fine, except when you have real-time threads. Unless you use non-temporal loads/stores in your GC traversal (e.g. on x86 there are SSE instructions that bypass the cache), your GC might trash the cache for other cores that run real-time threads initiated as callbacks from the OS. These callbacks can happen 120+ times per second, your runtime cannot control them, and they have the highest user-level priority.

Granted, the latest CPUs have a fair amount of level-3 cache, and the most expensive ones may have a big level-4 cache, but I still think it is a concern. Level-1 and level-2 caches are small: 64KB/128KB.
Dec 29 2013
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/29/13 12:47 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 20:36:27 UTC, Walter Bright wrote:
 I'll reiterate that the GC will NEVER EVER pause your program unless
 you are actually calling the GC to allocate memory. A loop that does
 not GC allocate WILL NEVER PAUSE.
That's fine, except when you have real-time threads. So unless you use non-temporal load/save in your GC traversal (e.g. on x86 you have SSE instructions that bypass the cache), your GC might trash the cache for other cores that run real-time threads which are initiated as call-backs from the OS. These callbacks might happen 120+ times per seconds and your runtime cannot control those, they have the highest user-level priority. Granted, the latest CPUs have a fair amount of level 3 cache, and the most expensive ones might have a big level 4 cache, but I still think it is a concern. Level 1 and 2 caches are small: 64KB/128KB.
I think you and others are talking about different things. Walter was referring to never invoking a GC collection, not the performance of the collection process once in progress. Andrei
Dec 29 2013
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2013 12:47 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 20:36:27 UTC, Walter Bright wrote:
 I'll reiterate that the GC will NEVER EVER pause your program unless you are
 actually calling the GC to allocate memory. A loop that does not GC allocate
 WILL NEVER PAUSE.
That's fine, except when you have real-time threads. So unless you use non-temporal load/save in your GC traversal (e.g. on x86 you have SSE instructions that bypass the cache), your GC might trash the cache for other cores that run real-time threads which are initiated as call-backs from the OS. These callbacks might happen 120+ times per seconds and your runtime cannot control those, they have the highest user-level priority. Granted, the latest CPUs have a fair amount of level 3 cache, and the most expensive ones might have a big level 4 cache, but I still think it is a concern. Level 1 and 2 caches are small: 64KB/128KB.
Since you can control if and when the GC runs fairly simply, this is not any sort of blocking issue.
Dec 29 2013
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 29 December 2013 at 21:39:52 UTC, Walter Bright wrote:
 Since you can control if and when the GC runs fairly simply, 
 this is not any sort of blocking issue.
I agree, it is not a blocking issue. It is a cache-trashing issue. So unless the GC is cache-friendly I am concerned about using D for audio-visual apps. Granted, GC would be great for managing graphs in application logic (game AI, music structures etc.), so I am not anti-GC per se.

Let's assume 4 cores, 4MB of level-3 cache, and 512+ MB of AI/game-world data structures. Let's assume 50% of the CPU is spent on graphics, 20% on audio, 10% on texture/mesh loading/building, 10% on AI, and 10% is headroom (OS etc.). So, for simplicity, assume five threads:

thread 1, no GC: audio, real-time hardware
threads 2/3, no GC: OpenGL, "real-time" (designed to keep the GPU from starving)
thread 4, GC: texture/mesh loading/building and game logic

Thread 4 is halted during GC, but threads 1-3 keep running, consuming 70% of the CPU. Threads 1-3 are tuned to keep most of their working set in level-3 cache. However, when the GC kicks in it starts loading 512+ MB over the memory bus at a fast pace; if there is one pointer per 32 bytes, you touch every possible cache line. So the memory bus is under strain, and this pollutes the level-3 cache, which wipes out the look-up tables used by threads 1 and 2, which then have to be loaded back into the cache over the memory bus... Threads 1-3 miss their deadlines, you get some audio-visual defects, and the audio/graphics systems compensate by cutting down on audio-visual features. After a while the system detects that the CPU is under-utilized and turns the high-quality features back on. I feel there is a high risk of getting disturbing, noticeable glitches; if this happens every 10 seconds it is going to be pretty annoying.

I think you need to take care and have a cache-friendly GC strategy tuned for real time. It is possible, though. I don't deny it.
Dec 29 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2013 2:10 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 21:39:52 UTC, Walter Bright wrote:
 Since you can control if and when the GC runs fairly simply, this is not any
 sort of blocking issue.
Your reply doesn't take into account that you can control if and when the GC runs fairly simply. So you can run it at a time when it won't matter to the cache.
Dec 29 2013
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On 29.12.2013 23:27, Walter Bright wrote:
 On 12/29/2013 2:10 PM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 21:39:52 UTC, Walter Bright wrote:
 Since you can control if and when the GC runs fairly simply, this is
 not any
 sort of blocking issue.
Your reply doesn't take into account that you can control if and when the GC runs fairly simply. So you can run it at a time when it won't matter to the cache.
Better give up. I have learned since the Oberon days that GC phobia will take a few generations of programmers to wither away. -- Paulo
Dec 29 2013
prev sibling parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 29 December 2013 at 22:27:43 UTC, Walter Bright wrote:
 Your reply doesn't take into account that you can control if 
 and when the GC runs fairly simply. So you can run it at a time 
 when it won't matter to the cache.
In a computer game the GC should run frequently: you don't want to waste memory that could be used to hold textures on GC headroom. Real-time audio applications should run for an hour or more with absolutely no hiccups and very low latency. Working around the limitations of a naive GC implementation is probably more work than it is worth.
Dec 29 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/29/13 3:14 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 22:27:43 UTC, Walter Bright wrote:
 Your reply doesn't take into account that you can control if and when
 the GC runs fairly simply. So you can run it at a time when it won't
 matter to the cache.
In a computer game GC should run frequently. You don't want to waste memory that could be used to hold textures on GC headroom. Realtime audio applications should run for 1 hour+ with absolutely no hiccups and very low latency. Working around the limitations of a naive GC implementation is probably more work than it is worth.
Then don't use the GC. Andrei
Dec 29 2013
parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 29 December 2013 at 23:58:34 UTC, Andrei Alexandrescu 
wrote:
 Then don't use the GC.
I agree! Thus: any language that makes it hard to not use the GC is not competing with C++ as a performant language. ;-]
Dec 29 2013
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/29/13 4:08 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 23:58:34 UTC, Andrei Alexandrescu wrote:
 Then don't use the GC.
I agree! Thus: any language that makes it hard to not use the GC is not competing with C++ as a performant language. ;-]
Oh brother. Andrei
Dec 29 2013
prev sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Monday, 30 December 2013 at 00:08:10 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 29 December 2013 at 23:58:34 UTC, Andrei 
 Alexandrescu wrote:
 Then don't use the GC.
I agree! Thus: any language that makes it hard to not use the GC is not competing with C++ as a performant language. ;-]
Good job D isn't that language :-)
Dec 30 2013
parent reply "Ola Fosheim Grøstad" writes:
On Monday, 30 December 2013 at 14:59:49 UTC, Peter Alexander
wrote:
 Good job D isn't that language :-)
Yes, that would be great!! :-o

But... new isn't listed among the overloadable operators, and I don't want to set allocators per class; I want to allocate the same class from different pools. How?

Segmented GC would be neat, though (GC limited to a set of objects, with other mechanisms like ref counting for pointers into the segment). Region pools with reference counting on the region would be nice too (you release the entire region and ref-count the region rather than the objects in it).
Dec 30 2013
parent reply "Chris Cain" <clcain uncg.edu> writes:
On Monday, 30 December 2013 at 17:17:06 UTC, Ola Fosheim Grøstad 
wrote:
 But... new isn't listed among overloadable operators, and I 
 don't want to set allocators per class. I want to allocate the 
 same class from different pools. How?
See emplace:
Dec 30 2013
parent reply "Ola Fosheim Grøstad" writes:
On Monday, 30 December 2013 at 17:26:30 UTC, Chris Cain wrote:
 See emplace:

Thanks. I guess that is the equivalent of C++'s in-place (placement) new expression. I had hoped for a solution that is a bit more transparent than creating my own new syntax, though; one that makes replacing allocators across projects less work. *shrug*
Dec 30 2013
next sibling parent reply "Chris Cain" <clcain uncg.edu> writes:
On Monday, 30 December 2013 at 17:36:56 UTC, Ola Fosheim Grøstad 
wrote:
 Thanks. I guess that is the equivalent of c++ in-place new 
 expression.
No problem and yes, that pretty much is the equivalent of C++'s in-place new.
 I hoped for solution that is a bit more transparent than 
 creating my own new syntax though, a solution which makes 
 replacing allocators across projects less work. *shrug*
I'm confused by this. Could you rephrase and/or explain?
Dec 30 2013
parent reply "Ola Fosheim Grøstad" writes:
On Monday, 30 December 2013 at 17:52:33 UTC, Chris Cain wrote:
 I'm confused by this. Could you rephrase and/or explain?
In C++ you can transparently replace new globally, and through a hack (include files) map it to different pools for different sections of your code if you wish. Explicit allocation from pools using the regular new syntax would be nice too, though.
Dec 30 2013
parent reply "Chris Cain" <clcain uncg.edu> writes:
On Monday, 30 December 2013 at 18:02:26 UTC, Ola Fosheim Grøstad 
wrote:
 In cpp you transparently replace new gobally and through a hack 
 (include files) map it to different pools for different 
 sections of your code if you wish. Although explicit allocation 
 from pools using regular new-syntax would be nice too.
Sounds pretty dangerous to me, and I wouldn't really describe it as "transparent" either. If it's working for you in C++, that's great, but I wouldn't count on D adopting such an approach.

I think in the near future there's going to be a standard "allocator interface" (std.allocator) which will allow you to easily use different allocation schemes depending on what you're doing. I'm not sure how well this will work in practice (I'll need to play around with it some before I make a firm judgement), but I have good reason to believe it'll work well, and it'll likely cover your use cases pretty well too.
Dec 30 2013
parent reply "Ola Fosheim Grøstad" writes:
On Monday, 30 December 2013 at 18:22:22 UTC, Chris Cain wrote:
 Sounds pretty dangerous to me. I wouldn't really describe that 
 as "transparent" either. If it's working for you in C++, that's 
 great. I wouldn't count on D adopting such an approach, however.
Well, either that, or using a thread-local variable that sets the current pool; I assume Objective-C does something like that.
 I think in the near future there's going to be a standard 
 "allocator interface" (std.allocator) which will allow you to 
 easily use different allocation schemes depending on what 
 you're doing.
Ok. :) As long as I don't have to pass the allocator around, which is tedious.
Dec 30 2013
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 30 December 2013 at 18:32:05 UTC, Ola Fosheim Grøstad 
wrote:
 Ok. :) as long as I don't have to pass the allocator 
 around,mwhich is tedious.
See this thread for some perspective/info: http://forum.dlang.org/thread/l4btsk$5u8$1 digitalmars.com
Dec 30 2013
parent "Ola Fosheim Grøstad" writes:
On Monday, 30 December 2013 at 18:39:08 UTC, John Colvin wrote:
 See this thread for some perspective/info: 
 http://forum.dlang.org/thread/l4btsk$5u8$1 digitalmars.com
Thanks! That looks promising! Looks like a stack of allocators for new. If it is possible to use them explicitly too, then most of my concerns are covered. I think?
Dec 30 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2013 9:36 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 I hoped for solution that is a bit more transparent than creating my own new
 syntax though, a solution which makes replacing allocators across projects less
 work. *shrug*
Having overloaded global operator new in C++ myself across many projects, I eventually concluded that feature is a bug.
Dec 30 2013
next sibling parent reply "Ola Fosheim Grøstad" writes:
On Monday, 30 December 2013 at 19:12:16 UTC, Walter Bright wrote:
 Having overloaded global operator new in C++ myself across many 
 projects, I eventually concluded that feature is a bug.
I guess it can go wrong if you end up using the wrong pool in different parts of your code when making calls across compilation units (or inlined functions), but if you stay within an encapsulated framework I would think it is safe?
Dec 30 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2013 11:39 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Monday, 30 December 2013 at 19:12:16 UTC, Walter Bright wrote::
 Having overloaded global operator new in C++ myself across many projects, I
 eventually concluded that feature is a bug.
I guess it can go wrong if you end up using the wrong pool in different parts of your code when making calls across compilation units (or inlined functions), but if you stay within an encapsulated framework I would think it is safe?
It causes problems when linking in code developed elsewhere that makes assumptions about new's behavior. Even if you wrote all the code, it suffers from the usual problems of using global variables to set global state that various parts of the code rely on, i.e. encapsulation failure. This is why D does not support the notion.
Dec 30 2013
parent reply "Ola Fosheim Grøstad" writes:
On Monday, 30 December 2013 at 21:21:06 UTC, Walter Bright wrote:
 It causes problems when linking in code developed elsewhere 
 that makes assumptions about new's behavior.
Yes, if you don't do new/delete pairs under the same circumstances you risk having problems.
 Even if you wrote all the code, it suffers from the usual 
 problems of using global variables to set global state that 
 various parts of the code rely on, i.e. encapsulation failure.
But isn't this exactly what the proposed allocation system linked above enables?
Dec 30 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2013 1:39 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Monday, 30 December 2013 at 21:21:06 UTC, Walter Bright wrote:
 It causes problems when linking in code developed elsewhere that makes
 assumptions about new's behavior.
Yes, if you don't do new/delete pairs under the same circumstances you risk having problems.
And people don't do that with C++. That's the whole problem with GLOBAL state.
 Even if you wrote all the code, it suffers from the usual problems of using
 global variables to set global state that various parts of the code rely on,
 i.e. encapsulation failure.
But isn't this exactly what the proposed allocation system linked above enables?
??
Dec 30 2013
parent reply "Ola Fosheim Grøstad" writes:
On Monday, 30 December 2013 at 23:38:39 UTC, Walter Bright wrote:
 And people don't do that with C++. That's the whole problem 
 with GLOBAL state.
Well, it should work ok for autoreleasepools (delete all objects at once) even under bad circumstances, I think.
 But isn't this exactly what the proposed allocation system 
 linked above enables?
??
Maybe I misunderstand, but isn't the purpose of std.allocator to set up allocators that new can call into?
Dec 30 2013
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tuesday, 31 December 2013 at 00:21:21 UTC, Ola Fosheim Grøstad 
wrote:
 Maybe I misunderstand, but isn't the purpose of std.allocator 
 to set up allocators that new can call into?
No, it'll be a standard library interface. I really think the built-in new should be discouraged - it has a lot of disadvantages over using a library setup, and not many advantages.
Dec 30 2013
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 30 Dec 2013 16:45:50 -0800, Adam D. Ruppe 
<destructionator gmail.com> wrote:

 On Tuesday, 31 December 2013 at 00:21:21 UTC, Ola Fosheim Grøstad wrote:
 Maybe I misunderstand, but isn't the purpose of std.allocator to set up 
 allocators that new can call into?
No, it'll be a standard library interface. I really think the built-in new should be discouraged - it has a lot of disadvantages over using a library setup, and not many advantages.
Kind of like unique_ptr/shared_ptr but better. IIRC the GC is supposed to be just another allocator, which is fine by me and won't be unfamiliar to … used, they don't use allocators routinely.

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Dec 30 2013
prev sibling next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Adam D. Ruppe:

 I really think the built-in new should be discouraged - it has 
 a lot of disadvantages over using a library setup, and not many 
 advantages.
It's hard to discourage the usage of a syntactically short and quite accessible feature of the language. Bye, bearophile
Dec 30 2013
prev sibling next sibling parent reply "Chris Cain" <clcain uncg.edu> writes:
On Tuesday, 31 December 2013 at 00:45:51 UTC, Adam D. Ruppe wrote:
 I really think the built-in new should be discouraged - it has 
 a lot of disadvantages over using a library setup, and not many 
 advantages.
Huuuge +1 from me.

A while ago, I heard someone state that allocation and object construction should be separate concepts, and that "new" kind of clobbers the two together. Over time I've come to recognize this as well, and I've really started thinking that a "new" operator is actually a bad thing, despite being a short and convenient way to do that process.

Once you start thinking of allocators as "objects", you realize that using the "new" operator is like using global variables all over the place (which isn't necessarily a bad thing, but it's pretty shocking that no one really made the conscious choice to do so, and most languages have made the choice too transparent when it actually matters quite a lot).
Dec 30 2013
parent "bearophile" <bearophileHUGS lycos.com> writes:
Chris Cain:

 A while ago, I heard someone state that allocation and object 
 construction should be separate concepts and "new" kind of 
 clobbers the two concepts together. Over time I've started 
 recognizing that this as well and I've really started thinking 
 that a "new" operator is actually a bad thing,
I remember some people a while ago discussing the idea of deprecating the "new" in D, not just "delete". Bye, bearophile
Dec 30 2013
prev sibling next sibling parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 31 December 2013 at 00:45:51 UTC, Adam D. Ruppe wrote:
 I really think the built-in new should be discouraged - it has 
 a lot of disadvantages over using a library setup, and not many 
 advantages.
A flexible new, plus good compiler-level memory-management support that cannot be overridden, gives the compiler some optimization opportunities: putting objects on the stack, allocating a single chunk for multiple objects, delaying init, and perhaps avoiding allocation altogether.

So I am personally sceptical of pushing libraries over advanced compiler support. Leaving more room for analysis is usually a good thing. The issue I have with library-level allocation is that the compiler does not know that the code is doing allocation?
Dec 30 2013
parent reply "Chris Cain" <clcain uncg.edu> writes:
On Tuesday, 31 December 2013 at 06:28:40 UTC, Ola Fosheim Grøstad 
wrote:
 A flexible new and good compiler level memory management 
 support that cannot be overridden gives the compiler some 
 optimization opportunities. E.g. putting objects on the stack, 
 allocating a single chunk for multiple objects, delaying init 
 and perhaps avoiding allocation.

 So I am personally sceptical of pushing libraries over advanced 
 compiler support. Leaving more room for analysis is usually a 
 good thing.
Well, that's certainly a good point. There's _probably_ some extra optimization that could be done with a compiler-supported new. Maybe it could produce significantly faster code, but this assumes many things:

1. The compiler writer will actually do this analysis and write the optimization (my bet is that DMD will not do many of the things you suggest).
2. The person writing the code is allocating several times in a deeply nested loop.
3. Despite making the obvious critical error of allocating several times in a deeply nested loop, he must not have made any other significant errors, or those other errors must also be covered by optimizations.
4. The allocation isn't passed around too much (which would defeat the analysis) and isn't actually kept around for some reason.

This makes the proposed optimization _very_ situational. If the situation does occur, the user can probably reach for something like "std.allocator.StackAllocator" to do what he needs; manual optimization in this case isn't too unreasonable. Considering the drawbacks of new and the advantages of library allocation, this additional _potential_ advantage of new isn't enough to recommend it over a library solution here. Again, it's not a bad point, but it's just not compelling enough.

That all said,
 The issue I have with library level allocation is that the 
 compiler does not know that the code is doing allocation?
_Technically_ the compiler could also do some analysis in functions and figure out that it will be allocating somewhere in there and using that allocation later on. Think of replacing library calls once the compiler notices that one is an allocate function. It's pretty dirty and won't actually happen, nor do I suggest it should, but it's still _possible_.
Dec 31 2013
parent reply "Ola Fosheim Grøstad" writes:
On Tuesday, 31 December 2013 at 17:52:56 UTC, Chris Cain wrote:
 Well, that's certainly a good point. There's _probably_ some 
 extra optimizations that could be done with a compiler 
 supported new. Maybe it could make some significantly faster 
 code, but this assumes many things:

 1. The compiler writer will actually do this analysis and write 
 the optimization (my bets are that DMD will likely not do many 
 of the things you suggest).
I think many optimizations become more valuable when you start doing whole-program analysis.
 2. The person writing the code is writing code that is 
 allocating several times in a deeply nested loop.
The premise of efficient high-level/generic programming is that the optimizer will undo naive code. Pseudocode example:

    inline process(inarray, allocator) {
        a = allocator.alloc(Array)
        a.init()
        for e in inarray {
            a.append(foo(e))
        }
        return a
    }

    b = process(emptyarray, myallocator)
    dosomething(b)
    myallocator.free(b)

The optimizer should get rid of all of this. But since alloc() followed by free() most likely leads to side effects, it can't, and you end up with:

    b = myallocator.alloc(1000)
    myallocator.free(b)
 3. Despite the person making the obvious critical error of 
 allocating several times in a deeply nested loop, he must not 
 have made any other significant errors or those other errors 
 must also be covered by optimizations
I disagree that inefficiencies due to high-level programming are a mistake if the compiler has the opportunity to get rid of them. I wish D would target high-level programming in the global scope and low-level programming in limited local scopes. I think few applications need hand optimization globally, except perhaps raytracers and compilers.
 Manual optimization in this case isn't too unreasonable.
I think manual optimization should in most cases be provided by the programmer as compiler hints and constraints.
 Think of replacing library calls when it's noticed that it's an 
 allocate function. It's pretty dirty and won't actually happen 
 nor do I suggest it should happen, but it's actually still also 
 _possible_.
Yes, why not? As long as the programmer has the means to control it. Why not let the compiler choose allocation strategies based on profiling for instance?
Dec 31 2013
parent reply "Chris Cain" <clcain uncg.edu> writes:
On Tuesday, 31 December 2013 at 19:53:29 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 31 December 2013 at 17:52:56 UTC, Chris Cain wrote:
 1. The compiler writer will actually do this analysis and 
 write the optimization (my bets are that DMD will likely not 
 do many of the things you suggest).
I think many optimizations become more valuable when you start doing whole-program analysis.
You're correct, but I think the value only comes if it's actually done, which was my point.
 2. The person writing the code is writing code that is 
 allocating several times in a deeply nested loop.
The premise of efficient high level/generic programming is that the optimizer will undo naive code.
Sure. My point was that it's a very specific situation in which the optimization would work well enough to outweigh the advantages of a library solution. If there were no trade-offs in using a compiler-supported new, then even a tiny smidge of an optimization here and there would be perfectly reasonable. Unfortunately, that's not the case. The only times I think the proposed optimization is significant enough to overcome the trade-off are precisely the situations I described.

Note I'm _not_ arguing that performing optimizations is irrelevant. Like you said, "The premise of efficient high level/generic programming is that the optimizer will undo naive code." But that is _not_ the only facet to consider here. If it were, you'd be correct and we should recommend only using new. But since there are distinct advantages to a library solution and distinct disadvantages to the compiler solution, the fact that you _could_, with effort, make small optimizations on occasion just isn't enough to overturn the other trade-offs you're making.
 3. Despite the person making the obvious critical error of 
 allocating several times in a deeply nested loop, he must not 
 have made any other significant errors or those other errors 
 must also be covered by optimizations
I disagree that that inefficiencies due to high level programming is a mistake if the compiler has opportunity to get rid of it. I wish D would target high level programming in the global scope and low level programming in limited local scopes. I think few applications need hand optimization globally, except perhaps raytracers and compilers.
You seem to be misunderstanding my point again. I'm _not_ suggesting D not optimize as much as possible, and I'm not suggesting everyone "hand optimize" everything. Following my previous conditions, this condition simply says there aren't any other significant problems that would drown out your proposed optimization. So, _if_ the optimization is put in place, and _if_ the code in question is deeply nested enough to take a significant amount of time so that the optimization has a chance to be useful, then we have to ask: "are there any other major problems also taking up significant time?" If the answer is "yes, there are other major problems", your proposed speed-up seems less likely to matter. That's where I was going.
 Manual optimization in this case isn't too unreasonable.
I think manual optimization in most cases should be privided by the programmer as compiler hints and constraints.
In some cases, yes. An "inline" hint, for instance, makes a ton of sense. Are you suggesting that hints should be provided to new? Something like:

`Something thing = new (stackallocate) Something(arg1, arg2);`

If so, it seems like a really roundabout way to do it when you could just write:

`Something thing = stackAlloc.make!Something(arg1, arg2);`

I don't see hints to new being an advantage at all. All it means is that to add additional "hint" allocators you'd have to dive into the compiler (and the language spec), as opposed to easily writing your own as a library.
 Think of replacing library calls when it's noticed that it's 
 an allocate function. It's pretty dirty and won't actually 
 happen nor do I suggest it should happen, but it's actually 
 still also _possible_.
Yes, why not? As long as the programmer has the means to control it. Why not let the compiler choose allocation strategies based on profiling for instance?
Uhh... you don't see the problem with the compiler tying itself to the implementation of a library allocate function? Presumably such a thing would _only_ be done for the default library allocator, since when the programmer says "use std.allocator.StackAllocator" he generally means it. And I find the whole idea of the compiler hard-coding "if the programmer uses std.allocator.DefaultAllocator.allocate then, instead of emitting that function call, do ..." to be more than a bit ugly. Possible, but horrific.
Dec 31 2013
parent "Ola Fosheim Grøstad" writes:
On Tuesday, 31 December 2013 at 20:29:34 UTC, Chris Cain wrote:
 On Tuesday, 31 December 2013 at 19:53:29 UTC, Ola Fosheim 
 Grøstad wrote:
 I think many optimizations become more valuable when you start 
 doing whole-program analysis.
You're correct, but I think the value only comes if it's actually done, which was my point.
Well, there is a comment in the DMD source code that suggest that it is being thought about, at least. :) Anyway, I think threading, locking and memory management are areas that should not be controlled by black boxes. Partially for optimization, but also for partial correctness "proofs".
 But since there are distinct advantages to a library solution 
 and distinct disadvantages to the compiler solution, the fact 
 that you _could_, with effort, make small optimizations on 
 occasion just isn't enough to overturn the other tradeoffs 
 you're making.
The way I see it: programmers today avoid the heap and target the stack. They shouldn't have to; the compiler should handle that. C++ compilers do reasonably well on low-level basic blocks, but poorly on higher levels, so programmers are used to hand-optimizing for that situation. I think many allocations could be dissolved and replaced with passing values in registers, or simply reused with limited reinitialization given better high-level analysis, or maybe have allocations take place at the call site rather than in the repeatedly called function. Automagically!
 You seem to be misunderstanding my point again. I'm _not_ 
 suggesting D not optimize as much as possible and I'm not 
 suggesting everyone "hand optimize" everything. Following my
Well, I think C++-ish programmers today are hand-optimizing everything at the medium level, in a sense. Less so in Python (it is too slow to bother :-)
 I think manual optimization in most cases should be provided
 by the programmer as compiler hints and constraints.
In some cases, yes. An "inline" hint, for instance, makes a ton of sense. Are you suggesting that there should be a hint provided to new?
Actually I want meta-level constructs, like:

- this new will allocate at least 1000 objects that are 100-400 bytes
- all new allocs marked by X beyond this point will be released by position Y
- this new should not be paged if possible
- I don't mind if this new is mapped to disk
- this new is a cache object, destroy whenever you feel like
- this new will never hit another thread
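Constraints like these could also be prototyped at the library level, as hint values handed to an allocation function rather than new syntax. A hypothetical C++ sketch (the tag name and its semantics below are invented for illustration, not from any real allocator library):

```cpp
#include <cstddef>
#include <utility>

// Invented hint tag: "at least this many objects of roughly this size
// are coming", so an allocator could pre-reserve a block up front.
struct Batched {
    std::size_t expected_count;
    std::size_t max_size;
};

template <typename T, typename... Args>
T* make_hinted(Batched hint, Args&&... args) {
    // A real implementation would use the hint to pick a strategy;
    // this sketch just forwards to plain allocation.
    (void)hint;
    return new T(std::forward<Args>(args)...);
}
```

The point being: each new hint is a library type, not a language change.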
 Presumably such a thing would _only_ be done using the default 
 library allocator since when the programmer says "use 
 std.allocator.StackAllocator" he generally means it.
He shouldn't have to. You don't write your own paging or scheduler for the OS either. You complain until it works. :)
Dec 31 2013
prev sibling parent "Jakob Ovrum" <jakobovrum gmail.com> writes:
On Tuesday, 31 December 2013 at 00:45:51 UTC, Adam D. Ruppe wrote:
 On Tuesday, 31 December 2013 at 00:21:21 UTC, Ola Fosheim 
 Grøstad wrote:
 Maybe I misunderstand, but isn't the purpose of std.allocator 
 to set up allocators that new can call into?
No, it'll be a standard library interface. I really think the built-in new should be discouraged - it has a lot of disadvantages over using a library setup, and not many advantages.
I suspect it will still be fine to use in situations where a loose aliasing policy is required (so GC-exclusive characteristics are depended on), such as scripts and certain other application code. The same goes for using the concatenation operators, over `Appender` et al. with a pluggable allocator. For libraries though - definitely should be a red flag.
Dec 31 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 12/30/13 11:12 AM, Walter Bright wrote:
 On 12/30/2013 9:36 AM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 I hoped for solution that is a bit more transparent than creating my
 own new
 syntax though, a solution which makes replacing allocators across
 projects less
 work. *shrug*
Having overloaded global operator new in C++ myself across many projects, I eventually concluded that feature is a bug.
Agreed. Class-level, too. Andrei
Dec 30 2013
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
Broad statements with no explanation are not very enlightening.
Dec 30 2013
prev sibling parent reply "develop32" <develop32 gmail.com> writes:
On Sunday, 29 December 2013 at 23:14:59 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 29 December 2013 at 22:27:43 UTC, Walter Bright 
 wrote:
 Your reply doesn't take into account that you can control if 
 and when the GC runs fairly simply. So you can run it at a 
 time when it won't matter to the cache.
In a computer game GC should run frequently. You don't want to waste memory that could be used to hold textures on GC headroom. Realtime audio applications should run for 1 hour+ with absolutely no hiccups and very low latency. Working around the limitations of a naive GC implementation is probably more work than it is worth.
I work on a somewhat large game using D, there is no GC running because there are no allocations. As far as I know, people tend not to use system primitives like malloc/free in C++ games either as even those are too slow. And why would you store textures in RAM?
Dec 29 2013
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 30 December 2013 at 01:09:26 UTC, develop32 wrote:
 I work on a somewhat large game using D, there is no GC running 
 because there are no allocations.
That's cool!
 As far as I know, people tend not to use system primitives like 
 malloc/free in C++ games either as even those are too slow.
It is not uncommon to use your own allocator or only allocate big chunks, true.
 And why would you store textures in RAM?
Because you want to stream them to the GPU when you walk across the land in a seamless engine?
Dec 29 2013
parent reply "develop32" <develop32 gmail.com> writes:
 Because you want to stream them to the GPU when you walk across 
 the land in a seamless engine?
Indeed, but the arrays that are used for holding the data are allocated (not in GC heap) before the main loop and always reused.
Dec 29 2013
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
 Indeed, but the arrays that are used for holding the data are 
 allocated (not in GC heap) before the main loop and always 
 reused.
Yes, I merely tried to point out that you don't want to spend a lot of memory on dead objects waiting for the GC to kick in. You can use that space for caching data or loading more detailed graphics. Besides, you define a minimum amount of RAM that your game will work with and try to stay within those bounds in order to increase the market for your game, so you never have enough memory for the bottom line, which is what you optimize for… I assumed GC would be useful for the AI/game world. It will benefit from GC because you have complex interdependencies, many different classes and want to be able to experiment. Experimentation and explicit memory deallocation is a likely source of memory leaks… However, this will only work well with a basic GC if the AI/game world representation consists of relatively few objects. So it limits your design space. I agree that GC isn't all that useful for graphics/audio because you have a small set of classes with simple relationships, so I never assumed you would want that.
Dec 29 2013
parent reply "develop32" <develop32 gmail.com> writes:
 I assumed GC would be useful for AI/game world. It will benefit 
 from GC because you have complex interdependencies, many 
 different classes and want to be able to experiment. 
 Experimentation and explicit memory deallocation is a likely 
 source for memory leaks… However this will only work well with 
 a basic GC if the AI/game world representation consists of 
 relatively few objects. So it limits your design space.
that it's not a GC-friendly pattern. But then why use it in a language with a GC? In my engine, a world object (entity) is basically just a number. All of the entity data is stored in multiple components that are just functionless structs held in a few global arrays. All of the game logic is done by separate managers. In the end, AI/game logic uses the same mechanic as texture streaming - reuse of previously allocated memory.
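That entity/component layout can be sketched in a few lines; a rough C++ rendering (names and fields are illustrative, not the actual engine's):

```cpp
#include <cstdint>
#include <vector>

// An entity is only a number; component data lives in flat, reusable
// arrays indexed by that number ("structure of arrays").
using Entity = std::uint32_t;

struct Position { float x, y; };
struct Health   { int hp; };

struct World {
    std::vector<Position> positions;  // one slot per entity
    std::vector<Health>   healths;

    Entity spawn() {
        positions.push_back({0.0f, 0.0f});
        healths.push_back({100});
        return static_cast<Entity>(positions.size() - 1);
    }
};

// A "manager" is a plain function sweeping one component array:
// cache-friendly, and no per-object allocation in the steady state.
void integrate(World& w, float dx) {
    for (auto& p : w.positions) p.x += dx;
}
```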
Dec 29 2013
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 30 December 2013 at 02:18:34 UTC, develop32 wrote:
 In the end, AI/game logic uses the same mechanic as texture 
 streaming - reuse of the previously allocated memory.
That is a possibility of course, but for a heterogenous environment you risk running out of slots. E.g. online games, virtual worlds, sandbox games where users build etc.
Dec 29 2013
parent reply "develop32" <develop32 gmail.com> writes:
 That is a possibility of course, but for a heterogenous 
 environment you risk running out of slots. E.g. online games, 
 virtual worlds, sandbox games where users build etc.
No, not really, just allocate more. The memory is managed by a single closed class, I can do whatever I want with it.
 online games
MMO games are the source of this idea, components are easy to store in DB tables.
 virtual worlds
Remove unneeded render/physics components when entity is out of range, etc.
 sandbox games where users build
No idea how to fix a problem of not enough RAM. The thing is, all of that game logic data takes a really surprisingly small amount of memory.
Dec 29 2013
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 30 December 2013 at 02:44:27 UTC, develop32 wrote:
 The thing is, all of that game logic data takes a really 
 surprisingly small amount of memory.
In that case you probably could use GC, so why don't you?
Dec 29 2013
parent "develop32" <develop32 gmail.com> writes:
On Monday, 30 December 2013 at 02:48:30 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 30 December 2013 at 02:44:27 UTC, develop32 wrote:
 The thing is, all of that game logic data takes a really 
 surprisingly small amount of memory.
In that case you probably could use GC, so why don't you?
Because there is nothing for GC to free in my engine. In other engine architectures it surely can be a possibility. In my experiments it took 3ms to run it at the end of each game loop.
Dec 29 2013
prev sibling parent reply 1100110 <0b1100110 gmail.com> writes:
On 12/29/2013 02:47 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 29 December 2013 at 20:36:27 UTC, Walter Bright wrote:
 I'll reiterate that the GC will NEVER EVER pause your program unless
 you are actually calling the GC to allocate memory. A loop that does
 not GC allocate WILL NEVER PAUSE.
That's fine, except when you have real-time threads.
I'm always astounded how often real-time anything gets thrown around. Yeah sure, some programs are under time constraints. But from the way everyone freaked out you'd think *all* programs are real-time. But in fact it's a very small subset. Hell, it's small enough to be a *special case*.
 So unless you use non-temporal load/save in your GC traversal (e.g. on
 x86 you have SSE instructions that bypass the cache), your GC might
 trash the cache for other cores that run real-time threads which are
 initiated as call-backs from the OS.
Awesome. So now to run a real-time application, you can't have any program that uses a GC running on the same machine. Somehow I don't think that is grounded in reality.
 These callbacks might happen 120+ times per seconds and your runtime
 cannot control those, they have the highest user-level priority.

 Granted, the latest CPUs have a fair amount of level 3 cache, and the
 most expensive ones might have a big level 4 cache, but I still think it
 is a concern. Level 1 and 2 caches are small: 64KB/128KB.
Jan 03 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 4 January 2014 at 07:00:28 UTC, 1100110 wrote:
 But in fact it's a very small subset.  Hell, it's small enough 
 to be a *special case*.
No, real-time applications are not a very small subset. Hard real-time applications are a smaller subset, although most audio applications effectively fall into this category. It is however THE subset where you need the kind of low-level control that C/C++ provides and which D is supposed to provide. For non-real-time applications you can usually get acceptable performance with higher-level languages, if you pick the right language for the task.
 Awesome.  So now to run a real-time application, you can't have 
 any program that uses a GC running on the same machine.
Depends on the characteristics of the GC, if it is cache friendly, how many cache lines it touches per iteration and the real-time application. If you tune a tabular audio-application to full load on a single core and L3 cache size, then fast mark-sweep on a large dataset, frequent cache invalidation in the other threads and high memory-bus activity on the remaining cores is most certainly not desirable. But yes, running multiple high-load programs in parallel is indeed problematic for most high-load real-time applications (games, audio software). And to work around that you need lots of extra logic (such as running with approximations on missed frames where possible or doing buffering based on heuristics).
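For reference, the non-temporal stores mentioned earlier are exposed as SSE2 intrinsics (x86/x86-64 only); a minimal sketch of writing a buffer without pulling the written lines into the cache. Whether a GC's mark phase could actually profit from this is exactly the open question:

```cpp
#include <cstddef>
#include <emmintrin.h>  // SSE2: _mm_stream_si32, _mm_sfence

// Fill an int buffer with non-temporal stores, which bypass the cache
// hierarchy and so avoid evicting other threads' hot data.
void fill_nontemporal(int* dst, int value, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i)
        _mm_stream_si32(dst + i, value);
    _mm_sfence();  // order the streamed stores before later reads/writes
}
```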
Jan 03 2014
prev sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, December 29, 2013 07:22:28 Andrei Alexandrescu wrote:
 Clearly there's work we need to do on improving particularly the
 standard library. But claiming that D code can't be efficient because of
 some stdlib artifacts is like claiming C++ code can't do efficient I/O
 because it must use iostreams (which are indeed objectively and
 undeniably horrifically slow). Neither argument has merit.
D's design pretty much guarantees that it's as fast as C++ as long as the application implementations and compiler implementations are comparable. It's too similar to C++ for it to be otherwise. And D adds several features that can make it faster than C++ fairly easily (e.g. slices or CTFE). The only D feature that I think is of any real concern for speed is the GC, and C++ doesn't even have that, so writing D code the same way that you would C++ code would avoid that problem entirely, and you can take advantage of the GC without seriously harming performance simply by being smart about how you go about using it. The main issue is what your implementation is doing, as it's easy enough to make a program slower in either language.

I think that the real question at this point is how fast idiomatic D is vs idiomatic C++, as D's design pretty much guarantees that it's competitive performance-wise as far as the language itself goes. And if idiomatic D is slower than idiomatic C++, it's likely something that can and will be fixed by improving the standard library. The only risk there IMHO is if we happened to have picked a particular idiom that is just inherently slow (e.g. if ranges were slow by their very nature), and I don't think that we've done that. And D (particularly idiomatic D) is so much easier to use that the increase in programmer productivity likely outweighs whatever minor performance hit the current implementation might incur.

I agree that anyone who thinks that D is not competitive with C++ in terms of performance doesn't know what they're talking about. How you go about getting full performance out of each of them is not necessarily quite the same, but they're so similar (with D adding a number of improvements) that I don't see how D could be fundamentally slower than C++, and if it is, we screwed up big time.

- Jonathan M Davis
Dec 29 2013
prev sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On 29.12.2013 14:15, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:


 languages. I think they are not, as long as C/C++ is a better solution
 for embedded programming it will remain THE system level programming
 language. Which is kind of odd, considering that embedded systems would
 benefit a lot from a safe programming language (due to the
 cost/difficulty of updating software installed on deployed hardware).

 It doesn't matter if it is possible to write C++-like code in a language
 if the library support and knowhow isn't dominant in the ecosystem.
 (Like assuming a GC or trying too hard to be cross-platform).
Any language that can be used to write a full OS stack, excluding the usual stuff that can only be done via Assembly like boot loader and device driver <-> DMA operations, is a systems level programming language. Don't forget many things people assume are C features, are actually non portable compiler extensions. -- Paulo
Dec 29 2013
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Barry L.:

 Just saw this:  
 http://joeduffyblog.com/2013/12/27/csharp-for-systems-programming/
A little more info: https://plus.google.com/+AleksBromfield/posts/SnwtcXUdoyZ http://www.reddit.com/r/programming/comments/1tzk5j/the_m_error_model/ From the article:
our language has excellent support for understanding side 
effects at compile time. Most contract systems demand that 
contract predicates be free of side effects, but have no way of 
enforcing this property. We do. If a programmer tries to write a 
mutating predicate, they will get a compile-time error. When we 
first enabled this feature, we were shocked at the number of 
places where people wrote the equivalent of 
“Debug.Assert(WriteToDisk())”. So, practically speaking, this 
checking has been very valuable.<
Bye, bearophile
Dec 30 2013
next sibling parent reply "JN" <666total wp.pl> writes:
I'm kind of an outsider to this discussion, but take a look how
many games are written using GC-languages, Minecraft is written


even if you wanted to (you can do some of that stuff with NIO
buffers in java but it's a PITA). The best you can do in those
languages usually is to just not allocate stuff during the game.
So arguing that GC is useless for games is an overstatement.
Sure, a game engine of magnitude like Unreal Engine 3 might have
problems with use of GC, but for most other projects it will be
OK.
Dec 30 2013
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
JN:

 take a look how
 many games are written using GC-languages, Minecraft is written

But the Oracle JVM has a GC (more than one) way better than the current D one :-) Bye, bearophile
Dec 30 2013
parent "Brian Rogoff" <brogoff gmail.com> writes:
On Monday, 30 December 2013 at 12:20:36 UTC, bearophile wrote:
 But the Oracle JVM has a GC (more than one) way better then the 
 current D one :-)
Of course. But Java requires a world-class GC and state-of-the-art escape analysis to achieve excellent performance. These would be nice to have in D, especially the super duper GC, but D will be fine IMO once it has a decent precise GC, since it won't be pounding the GC as hard as Java would. I hope that after the ICE-removal release coming up soon, there will be a 'get the precise GC working' release. As far as escape analysis goes, I rather wish D had stolen some ideas from Ada (>95) like 'aliased' and restrictions on address-taking and pointers, but I suppose that will have to wait for D2++. -- Brian
Dec 30 2013
prev sibling next sibling parent reply "develop32" <develop32 gmail.com> writes:
On Monday, 30 December 2013 at 11:23:22 UTC, JN wrote:
 I'm kind of an outsider to this discussion, but take a look how
 many games are written using GC-languages, Minecraft is written

 underneath

 even if you wanted to (you can do some of that stuff with NIO
 buffers in java but it's a PITA). The best you can do in those
 languages usually is to just not allocate stuff during the game.
 So arguing that GC is useless for games is an overstatement.
 Sure, a game engine of magnitude like Unreal Engine 3 might have
 problems with use of GC, but for most other projects it will be
 OK.
As far as I know, Unreal Engine 3 has its own GC implementation for its scripting system.
Dec 30 2013
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 30 December 2013 at 12:20:56 UTC, develop32 wrote:
 As far as I know, Unreal Engine 3 has its own GC implementation
 for its scripting system.
Yes, for heavyweight objects with complex relationships. So it is closer to a segmented GC.
Dec 30 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-12-30 12:23, JN wrote:

 I'm kind of an outsider to this discussion, but take a look how
 many games are written using GC-languages, Minecraft is written


 even if you wanted to (you can do some of that stuff with NIO
 buffers in java but it's a PITA).
You can use malloc and friends via JNI in Java. The SWT source code is full of uses of malloc, although it's a bit more verbose than using it from C or D since Java doesn't have pointers. -- /Jacob Carlborg
Dec 30 2013
prev sibling parent reply "Chris Cain" <clcain uncg.edu> writes:
On Monday, 30 December 2013 at 11:23:22 UTC, JN wrote:
 The best you can do in those
 languages usually is to just not allocate stuff during the game.
Yeah. The techniques to accomplish this in GC-only languages surprisingly mirror some of the techniques where malloc is available, though. For instance, the object pool pattern has the object already allocated and what you do is just ask for an object from the pool and set it up for your needs. When you're done, you just give it back to the pool to be recycled. It's very similar to what you'd do in any other language, but a little more restricted (other languages, like D, might just treat the memory as untyped bytes and the "object pool" would be more flexible and could support any number of types of objects).
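The pool pattern described above can be sketched in a few lines of C++ (a minimal version; real pools add growth, thread safety, reset hooks, etc.):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Minimal object pool: allocate everything once, then recycle, so the
// steady state performs no allocation at all.
template <typename T>
class Pool {
    std::vector<std::unique_ptr<T>> storage_;  // owns every object
    std::vector<T*> free_;                     // objects ready for reuse
public:
    explicit Pool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            storage_.push_back(std::make_unique<T>());
            free_.push_back(storage_.back().get());
        }
    }
    T* acquire() {                 // nullptr when the pool is exhausted
        if (free_.empty()) return nullptr;
        T* obj = free_.back();
        free_.pop_back();
        return obj;
    }
    void release(T* obj) { free_.push_back(obj); }  // give it back
};
```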
Dec 30 2013
parent Jerry <jlquinn optonline.net> writes:
"Chris Cain" <clcain uncg.edu> writes:

 On Monday, 30 December 2013 at 11:23:22 UTC, JN wrote:
 The best you can do in those
 languages usually is to just not allocate stuff during the game.
Yeah. The techniques to accomplish this in GC-only languages surprisingly mirror some of the techniques where malloc is available, though. For instance, the object pool pattern has the object already allocated and what you do is just ask for an object from the pool and set it up for your needs. When you're done, you just give it back to the pool to be recycled. It's very similar to what you'd do in any other language, but a little more restricted (other languages, like D, might just treat the memory as untyped bytes and the "object pool" would be more flexible and could support any number of types of objects).
I find even in C++ that I need to create object pools to speed up our code. Generally this is due to objects that have allocated memory in them, such as vectors. For example (C++):

    class blah {
        std::vector<int> a, b, c, d;
    public:
        // clear() keeps each vector's capacity, so a recycled
        // object skips the reallocations
        void reset() { a.clear(); b.clear(); c.clear(); d.clear(); }
    };

I end up making the object declare a reset function so that it can be recycled without paying for the vector reallocations. This is definitely a useful pattern to not have to rewrite. Jerry
Jan 03 2014
prev sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Monday, 30 December 2013 at 10:27:43 UTC, bearophile wrote:
 http://www.reddit.com/r/programming/comments/1tzk5j/the_m_error_model/
As compared with D:

- unrecoverable errors crash immediately, like they should. I like it since the most sensible reason to catch Error in D is to crash anyway (e.g. in C callbacks).
- hence, the unrecoverable-errors exception hierarchy is not represented.
support for code made in a hurry (something that D shines at particularly).
- same remark about the "try" keyword at the call site when calling a function which throws recoverable errors.
Dec 30 2013
parent Marco Leise <Marco.Leise gmx.de> writes:
Am Mon, 30 Dec 2013 13:52:28 +0000
schrieb "ponce" <contact gam3sfrommars.fr>:

 http://www.reddit.com/r/programming/comments/1tzk5j/the_m_error_model/

 As compared with D:

 - unrecoverable errors crash immediately like they should. I like
 it since the most sensible reason to catch Error in D is to crash
 anyway (in eg. C callbacks).
»Unrecoverable errors are designed for conditions that can't really be handled appropriately from within a software component. [...] Null dereferences, out-of-bounds array accesses, bad downcasts, out-of-memory, contract/assertion violations…«

»[...] all failures are recoverable, but the granularity is much coarser grained than in traditional systems.«

As far as I can tell he is talking about tearing down a failing component (e.g. a library or plugin), not the whole program. I can only assume that he didn't look into bringing C code into the mix. This is different from D where you typically either get an access violation or an attempt at stack unwinding down to D main().

»if one component fails in an unrecoverable way, an external component can observe and/or recover from the failure of that component.«
 - hence, unrecoverable errors exception hierarchy not represented.
Personally I think it is useful to be able to check the type of unrecoverable error to handle out of memory situations or checks in test cases and unit tests.

 support for code made in a hurry (something that D shines
 particularly).
Can both co-exist? E.g. use case 1) "doesn't throw anything ever", use case 2) "throws UtfException when the date string is not valid UTF-8 and DateException when the day of month is out of range", use case 3) no keyword since we are in a hurry.
 - same remark about the "try" keyword at call-site when calling a
 function which throws recoverable errors.
»In fact, if you call a method that might raise a recoverable error, the call must be annotated to indicate that it might throw (the keyword we use is "try").«

I'm not quite sure if that means "directly at the call-site" or "somewhere up in the call stack". What do you think?

--
Marco
Dec 31 2013
prev sibling parent reply "bioinfornatics" <bioinfornatics fedoraproject.org> writes:
On Saturday, 28 December 2013 at 11:13:55 UTC, Barry L. wrote:
 Hello everyone, first post...

 Just saw this:  
 http://joeduffyblog.com/2013/12/27/csharp-for-systems-programming/

 D (and Rust) get a mention with this quote:  "There are other 
 candidates early in their lives, too, most notably Rust and D. 

 talent and community just an arm’s length away."
Is there any conclusion about this? There are 10 pages and most of it talks about the D GC…
Jan 08 2014
next sibling parent "Dwhatever" <not real.com> writes:
On Wednesday, 8 January 2014 at 22:55:24 UTC, bioinfornatics 
wrote:
 On Saturday, 28 December 2013 at 11:13:55 UTC, Barry L. wrote:
 Hello everyone, first post...

 Just saw this:  
 http://joeduffyblog.com/2013/12/27/csharp-for-systems-programming/

 D (and Rust) get a mention with this quote:  "There are other 
 candidates early in their lives, too, most notably Rust and D. 

 talent and community just an arm’s length away."
Is there any conclusion about this? There are 10 pages and most of it talks about the D GC…
Thank you. Microsoft might put together a great language for systems programming, but if it is going to be used outside the Microsoft world, then LLVM will be essential. GCC has previously been used by processor vendors to support languages like C/C++. LLVM is now gradually taking over that role, and I expect LLVM to become the compiler framework of choice. The same really goes for the D language: without LLVM, D will not live on. I don't really know Microsoft's plan here, but I doubt they will release the source and support LLVM, so I guess wide acceptance of this new language will be limited to Microsoft development only. Then we might have people who will make an LLVM
Jan 08 2014
prev sibling parent reply "QAston" <qaston gmail.com> writes:
On Wednesday, 8 January 2014 at 22:55:24 UTC, bioinfornatics 
wrote:
 They are any conclusion about this ?
 they are 10 page and most part talk about D gc…
It is concluded that C (and optionally C++, depending on the speaker) is inherently faster than anything else because C(++) is a "portable assembly language" and therefore it encourages writing fast software. For example, most real C(++) programmers preallocate large blocks of memory for future use instead of allocating space for single variables like most programmers using the discussed language. Cache locality gives huge speed gains to the former group, while the latter group gets diabetes because of syntactic sugar.

It's also worth noting that C(++) programmers use memory more efficiently because they allocate and deallocate memory only when needed - memory is reclaimed by the OS as fast as possible. This can't be achieved by garbage collection, which frees memory in batches.

C(++) is preferred over assembly because it's just as fast or even faster than what you would write manually, yet it allows you to focus on algorithms and data structures instead of low-level details of the machine like cache locality and memory layout. Yet you still retain full control - by using inline asm you can regain some cycles wasted on adhering to calling conventions, etc. The C(++) macro language is superior to what NASM, FASM and others have to offer - it's much simpler to use, and it serves as the preferred way of achieving robust compile-time polymorphism.

C(++) is designed to be a simple and fast language. It can be adopted easily on various architectures because of its many undefined behaviors (which leave wiggle-room for implementers) and lack of a runtime library - you can just use OS calls! There's no way this language can beat C(++), don't even try to fight with years of tradition. Simply join the cult!
Jan 08 2014
parent "Szymon Gatner" <noemail gmail.com> writes:
On Thursday, 9 January 2014 at 00:20:36 UTC, QAston wrote:

 C(++) is designed to be simple [snip]
Good one! ;)
Jan 08 2014