digitalmars.D - GCs in the news
- Russel Winder via Digitalmars-d (15/15) Jul 17 2014 It appears still to be a general meme that performance requires no GC
- Chris (5/13) Jul 17 2014 That's good news in a way. If a big company accepts GC and the Go
- eles (13/18) Jul 17 2014 The same (ie: big companies with GC) holds for C# and Java. They
- currysoup (10/23) Jul 17 2014 It's not about "acceptance", it's about the reality that a GC is
- Daniel Murphy (2/5) Jul 17 2014 Because D has plenty of other things to offer.
- Chris (24/48) Jul 17 2014 Point taken. But as has been said before 90-95% of all apps can
- Chris (2/51) Jul 17 2014 Ah, and there's inline asm too!
- Paulo Pinto (12/47) Jul 17 2014 Easy, like in any language that offers FFI.
- currysoup (15/22) Jul 17 2014 I'm not here to hate on D, the reason I read these forums is
- John (2/11) Jul 17 2014 If D came without GC, it would have replaced C++ a long time ago!
- Brian Rogoff (12/14) Jul 17 2014 That's overly optimistic I think, but I believe that the adoption
- Chris (6/20) Jul 17 2014 Yeah. Best avoid GC in the first place. If GC can stop the world
- "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> (18/32) Jul 19 2014 This claim is being made frequently, but you need to consider
- bachmeier (8/21) Jul 17 2014 The only thing that would have been replaced is the complaints
- Chris (2/25) Jul 17 2014 +1
- Vic (9/11) Jul 17 2014 Agree +1000.
- Peter Alexander (5/7) Jul 17 2014 Much of Phobos already is GC free. The parts that aren't should
- Vic (6/13) Jul 17 2014 If that is true, I may even do a $ bounty to make Phobos GC free.
- H. S. Teoh via Digitalmars-d (18/32) Jul 17 2014 [...]
- Chris (5/50) Jul 17 2014 That's good news! See, we're getting there, just bear with us.
- Dicebot (4/7) Jul 17 2014 Usually GC-free API is added by providing new overloads that take
- deadalnix (2/9) Jul 17 2014 Yes, output ranges are underused by now.
- H. S. Teoh via Digitalmars-d (11/22) Jul 17 2014 Actually, I've realized that output ranges are really only useful when
- Dicebot (15/27) Jul 17 2014 Plain algorithm ranges rarely need to allocate at all so those
- H. S. Teoh via Digitalmars-d (24/52) Jul 17 2014 I think you're missing the input parameter. :)
- Dicebot (3/60) Jul 18 2014 Yes this looks better.
- Walter Bright (3/7) Jul 19 2014 Exactly! The algorithm becomes completely divorced from the memory alloc...
- Walter Bright (10/23) Jul 17 2014 That algorithm takes a string and writes to an output range. This is not...
- H. S. Teoh via Digitalmars-d (15/32) Jul 17 2014 I don't think it will affect existing code (esp. given Walter's stance
- bearophile (9/11) Jul 17 2014 Making various parts of Phobos GC-free doesn't mean that nothing
- Walter Bright (2/12) Jul 17 2014 Boss, dat's pretty much de plan, de plan!
- Chris (28/75) Jul 18 2014 That sounds good to me! This gives me time to upgrade my old code
- Paulo Pinto (6/84) Jul 20 2014 Java has AOT compilers available since the early days. Most
- Russel Winder via Digitalmars-d (14/17) Jul 21 2014 Also, it is not entirely clear that AOT optimization can beat JIT
- deadalnix (3/12) Jul 21 2014 They probably aren't mutually exclusive.
- Paulo Pinto (13/22) Jul 21 2014 Yes it can, if developers bother to do PGO + AOT instead and
- Russel Winder via Digitalmars-d (24/35) Jul 22 2014 I think you have to make good on this claim since the JVM JIT is
- Paulo Pinto (24/59) Jul 22 2014 The JVM JIT was originally targeted to SELF, not Java.
- Russel Winder via Digitalmars-d (44/67) Jul 23 2014 I think you'll find HotSpot evolved from a Smalltalk JIT originally.
- Paulo Pinto (13/35) Jul 23 2014 I will happily use it when it gets to the same execution speed
- Russel Winder via Digitalmars-d (19/34) Jul 23 2014 The way I work with Gradle is to generate Eclipse or IntelliJ IDEA
- Paulo Pinto (11/19) Jul 23 2014 So far I could only find
- Russel Winder via Digitalmars-d (16/27) Jul 24 2014
- John Colvin (5/19) Jul 23 2014 I am suspicious. I understand that a situation can be contrived
- Bienlein (14/17) Jul 23 2014 Yes, that's right. The guys that developed Self (David Ungar et
- Atila Neves (8/31) Jul 23 2014 http://benchmarksgame.alioth.debian.org/
- deadalnix (4/11) Jul 23 2014 It usually does in memory intensive benchmark that aren't
- Russel Winder via Digitalmars-d (14/17) Jul 23 2014 For my data parallel computations, I find C++ with TBB tends to be the
- Brad Anderson (8/11) Jul 23 2014 I'm reminded of when headlines came out saying PyPy was now
- Andrei Alexandrescu (3/5) Jul 23 2014 Uhm, I'm literally right now in a talk on Buck
- Andrei Alexandrescu (3/8) Jul 23 2014 Fresh photo comparing buck with gradle: http://i.imgur.com/uGHdfyq.jpg
- Russel Winder via Digitalmars-d (16/26) Jul 23 2014 or
- Paulo Pinto (12/26) Jul 23 2014 I only tried Gradle because of Android Studio, it makes so bad use of
- Russel Winder via Digitalmars-d (26/36) Jul 24 2014
- Paulo Pinto (5/13) Jul 24 2014 Nope, Gradle, as shown by the CPU usage on the task manager.
- Russel Winder via Digitalmars-d (13/15) Jul 24 2014 I am surprised, but data always trumps opinion.
- Paulo Pinto (5/11) Jul 24 2014 One of the first Google results,
- Russel Winder via Digitalmars-d (15/30) Jul 24 2014 Looks like Android Studio tells Gradle to use the number of threads that
- Paulo Pinto (6/29) Jul 24 2014 In this specific case yes, but as I mentioned there are lots of
- Russel Winder via Digitalmars-d (18/20) Jul 27 2014 It turns out to be a "known fact" even in Gradleware. Hans mentions it
- Chris (3/20) Jul 27 2014 I am nobody.
- Russel Winder via Digitalmars-d (17/26) Jul 27 2014 I was fairly appalled at the response so I have requested ability to
- Andrei Alexandrescu (3/4) Jul 23 2014 He promised his kid they'll go on an adventure with daddy. A really nice...
- Russel Winder via Digitalmars-d (15/20) Jul 24 2014
- deadalnix (3/13) Jul 23 2014 Say hi to Simon :)
- Walter Bright (3/6) Jul 23 2014 Fun fact: the guy who wrote Symantec's JVM JIT, Steve Russell, is the ve...
- Dicebot (13/18) Jul 17 2014 Unless you do some hard real-time barebone stuff it is quite
- bearophile (4/7) Jul 17 2014 I see no proof of this. And not everybody hates GCs.
- Right (1/4) Jul 17 2014
- H. S. Teoh via Digitalmars-d (6/9) Jul 17 2014 [...]
- Ary Borenszweig (4/9) Jul 17 2014 Java is everywhere and it has a GC. Go is starting to be everywhere and
- Russel Winder via Digitalmars-d (40/43) Jul 17 2014 On Thu, 2014-07-17 at 15:11 -0300, Ary Borenszweig via Digitalmars-d
- Right (20/20) Jul 17 2014 I'm rather fond of RAII, I find that I rarely ever need shared
- Kiith-Sa (5/25) Jul 17 2014 UEngine has been rewritten from scratch.
- Right (17/21) Jul 17 2014 UE4 wasn't really rewritten from scratch, was more like, take
- Kagamin (3/18) Jul 19 2014 Though, GC is safer, easier and cheaper than ownership model,
- Walter Bright (7/12) Jul 19 2014 RAII has a lot of costs associated with it that I am often surprised go
- safety0ff (14/16) Jul 20 2014 They are even more fantastic for speeding up programming.
- Andrei Alexandrescu (3/14) Jul 17 2014 http://www.stroustrup.com/C++11FAQ.html#gc-abi
- Ary Borenszweig (2/17) Jul 17 2014 Sorry, but I don't understand your reply by just reading that link.
- Andrei Alexandrescu (2/21) Jul 17 2014 There's work on adding optional GC to C++ starting with C++11. -- Andrei
- Abdulhaq (28/40) Jul 17 2014 I can't think of anyone posting here, to be honest, who wants to
- Andrei Alexandrescu (6/27) Jul 17 2014 Not at all costs! warp creates a little litter during e.g. command line
- Kagamin (6/11) Jul 18 2014 In D you have a choice to use GC or not use it. You would want to
- w0rp (61/61) Jul 17 2014 The key to making D's GC acceptable lies in two factors I believe.
- thedeemon (6/13) Jul 17 2014 That's easy, just make sure your heap never grows over 0.4 MB.
- Brad Anderson (11/39) Jul 17 2014 I agreed with this for awhile but following the conversation here
- Dicebot (4/16) Jul 17 2014 This is not comparable. Lazy input range based solutions do not
- Brad Anderson (6/25) Jul 17 2014 Well the idea is that you then copy into an output range with
- Dicebot (5/10) Jul 17 2014 It is not always possible - sometimes resulting range element
- Chris (7/17) Jul 17 2014 From what I'm getting is that we might have the chance here to
- H. S. Teoh via Digitalmars-d (11/22) Jul 17 2014 As Brad said, it's far easier to go from lazy to eager than the other
- Walter Bright (2/8) Jul 17 2014 Yup. It enables separating the allocation strategy from the algorithm.
- Walter Bright (3/19) Jul 17 2014 They move the allocation point to the top level, rather than the bottom ...
- H. S. Teoh via Digitalmars-d (11/33) Jul 17 2014 Deferring the allocation point to the top level has the advantage of
- Walter Bright (4/11) Jul 17 2014 Andrei's allocator scheme addresses this. It will also allow such decisi...
- Iain Buclaw via Digitalmars-d (10/13) Jul 20 2014 GC in extremely low memory or real time environments.
- Mike (15/31) Jul 20 2014 Yes, Please!
- Remo (9/17) Jul 17 2014 GC or no GC is that the right question ?
It appears still to be a general meme that performance requires no GC and GC means poor performance. The debate has been restarted on the Go mailing list under the banner "go without garbage collector". The response to "will Go remove the garbage collector?" was unequivocal: nope.

-- Russel.
Jul 17 2014
On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via Digitalmars-d wrote:It appears still to be a general meme that performance required no GC and GC mean poor performance. The debate has been restarted on the Go mailing list under the banner "go without garbage collector". The response to will Go remove the garbage collector was somewhat unequivocal: nope.That's good news in a way. If a big company accepts GC and the Go crowd go with it (pardon the pun), then it will find more acceptance (as Paulo pointed out in a different thread).
Jul 17 2014
On Thursday, 17 July 2014 at 09:26:38 UTC, Chris wrote:On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via Digitalmars-d wrote:That's good news in a way. If a big company accepts GC and the Go crowd go with it (pardon the pun), then it will find more acceptance (as Paulo pointed out in a different thread). The same (ie: big companies with GC) holds for C# and Java. They do not aim for the "systems programming language" mantra. (BTW, there are/will be .NET native and gcj). More, in embedded systems (and, generally, in system programming), having a GC or not is not only about speed. It is also about the amount of memory that is used and about predictability. And let's not even talk about the finalizers/destructors and so on. Go went for servers, not for systems. Rename D to Vibe.D and everything will fall in place.
Jul 17 2014
On Thursday, 17 July 2014 at 09:26:38 UTC, Chris wrote:On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via Digitalmars-d wrote:It's not about "acceptance", it's about the reality that a GC is not a universal solution to memory management. Just from watching a few of the DConf 2014 talks, if you want performance you avoid the GC at all costs (even if that means allocating into huge predefined buffers). Once you're going to these lengths to avoid garbage collection it begs the question, why are you even using this language? Within this community the question is rhetorical but to outsiders I feel it's a major concern.It appears still to be a general meme that performance required no GC and GC mean poor performance. The debate has been restarted on the Go mailing list under the banner "go without garbage collector". The response to will Go remove the garbage collector was somewhat unequivocal: nope.That's good news in a way. If a big company accepts GC and the Go crowd go with it (pardon the pun), then it will find more acceptance (as Paulo pointed out in a different thread).
Jul 17 2014
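The "huge predefined buffers" technique currysoup mentions is usually a region (arena) allocator: hand out slices of one big buffer and free everything at once, so no GC allocation happens on the hot path and no collection can be triggered there. A minimal sketch in D; Region and all names here are illustrative, not Phobos API:

    struct Region
    {
        ubyte[] buffer;   // one big predefined buffer, allocated up front
        size_t used;

        void[] allocate(size_t n)
        {
            if (used + n > buffer.length)
                return null;               // arena exhausted
            auto p = buffer[used .. used + n];
            used += n;
            return p;
        }

        void reset() { used = 0; }         // release everything at once
    }

    unittest
    {
        auto arena = Region(new ubyte[64 * 1024]); // the backing buffer could
                                                   // equally come from malloc
        auto chunk = arena.allocate(100);
        assert(chunk.length == 100);
        arena.reset();  // e.g. once per frame; no per-object free, no GC pause
    }

This is the per-frame pattern game code tends to use: nothing is freed individually, the whole arena is reset at a known point.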
"currysoup" wrote in message news:iustbzgyagrlbtnfcton forum.dlang.org...Once you're going to these lengths to avoid garbage collection it begs the question, why are you even using this language?Because D has plenty of other things to offer.
Jul 17 2014
On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:On Thursday, 17 July 2014 at 09:26:38 UTC, Chris wrote:Point taken. But as has been said before 90-95% of all apps can live happily with GC, and if you want, you can still go bare metal with D. The security GC offers should not be underestimated either. With "acceptance" I meant that people see "it cannot be that bad after all for *most* applications". The GC issue is often cited as a D-eal breaker. I understand that there are applications that need total control over the memory. But those apps have always been programmed in C or any other close-to-the-machine language, and even then programmers (in gaming for example) have to use additional tricks and hacks to squeeze out every little bit of performance. What D has to do is to facilitate control over the memory, but I still consider it a systems programming language due to the fact that it has many things to offer as regards the direct interaction with the machine (can you write a device driver in Java? If yes, tell me how, I'm interested).On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via Digitalmars-d wrote:It's not about "acceptance", it's about the reality that a GC is not a universal solution to memory management.It appears still to be a general meme that performance required no GC and GC mean poor performance. The debate has been restarted on the Go mailing list under the banner "go without garbage collector". The response to will Go remove the garbage collector was somewhat unequivocal: nope.That's good news in a way. If a big company accepts GC and the Go crowd go with it (pardon the pun), then it will find more acceptance (as Paulo pointed out in a different thread).Just from watching a few of the DConf 2014 talks, if you want performance you avoid the GC at all costs (even if that means allocating into huge predefined buffers). Once you're going to these lengths to avoid garbage collection it begs the question, why are you even using this language? Within this community the question is rhetorical but to outsiders I feel it's a major concern.Don't know if it's really a "major concern" or the favorite weak spot that C++ et al. guys like to flog to death in order to distract from the many strengths that D has (in comparison with C++ et al.) The answer is always "D has GC, it's the Devil, don't touch it!" Also, let's put a little faith in the brilliant developers behind D, I'm sure there's a huge performance boost for D around the corner.
Jul 17 2014
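On the "you can still go bare metal with D" point: a minimal sketch of bypassing the GC entirely through druntime's C bindings. Buffer size and contents are arbitrary, for illustration only:

    import core.stdc.stdlib : free, malloc;

    void main()
    {
        // Raw C-heap allocation; the GC never sees this memory.
        auto nums = cast(int*) malloc(1024 * int.sizeof);
        if (nums is null) return;
        scope(exit) free(nums);   // deterministic release on scope exit

        foreach (i; 0 .. 1024)
            nums[i] = i;
    }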
On Thursday, 17 July 2014 at 11:15:10 UTC, Chris wrote:On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:Ah, and there's inline asm too!On Thursday, 17 July 2014 at 09:26:38 UTC, Chris wrote:Point taken. But as has been said before 90-95% of all apps can live happily with GC, and if you want, you can still go bare metal with D. The security GC offers should not be underestimated either. With "acceptance" I meant that people see "it cannot be that bad after all for *most* applications". The GC issue is often cited as a D-eal breaker. I understand that there are applications that need total control over the memory. But those apps have always been programmed in C or any other close-to-the-machine language, and even then programmers (in gaming for example) have to use additional tricks and hacks to squeeze out every little bit of performance. What D has to do is to facilitate control over the memory, but I still consider it a systems programming language due to the fact that it has many things to offer as regard the direct interaction drive in Java, if yes, tell me how, I'm interested.On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via Digitalmars-d wrote:It's not about "acceptance", it's about the reality that a GC is not a universal solution to memory management.It appears still to be a general meme that performance required no GC and GC mean poor performance. The debate has been restarted on the Go mailing list under the banner "go without garbage collector". The response to will Go remove the garbage collector was somewhat unequivocal: nope.That's good news in a way. If a big company accepts GC and the Go crowd go with it (pardon the pun), then it will find more acceptance (as Paulo pointed out in a different thread).Just from watching a few of the DConf 2014 talks, if you want performance you avoid the GC at all costs (even if that means allocating into huge predefined buffers). Once you're going to these lengths to avoid garbage collection it begs the question, why are you even using this language? Within this community the question is rhetorical but to outsiders I feel it's a major concern.Don't know if it's really a "major concern" or the favorite weak spot that C++ et. al guys like to flog to death in order to distract from the many strengths that D has (in comparison with C++ et al.) The answer is always "D has GC, it's the Devil, don't touch it!" Also, let's put a little faith in the brilliant developers behind D, I'm sure there's a huge performance boost for D around the corner.
Jul 17 2014
On Thursday, 17 July 2014 at 11:15:10 UTC, Chris wrote:On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:Easy, like in any language that offers FFI. Expose a Driver class with native method declarations, whose implementation is written in Assembly. The Squawk VM used to drive Sun SPOT devices had the device drivers written in Java. There are quite a few other examples in the embedded market, like the MicroEJ platform. That is no different from writing drivers in ANSI C, which provides zero features for hardware interaction. -- PauloOn Thursday, 17 July 2014 at 09:26:38 UTC, Chris wrote:Point taken. But as has been said before 90-95% of all apps can live happily with GC, and if you want, you can still go bare metal with D. The security GC offers should not be underestimated either. With "acceptance" I meant that people see "it cannot be that bad after all for *most* applications". The GC issue is often cited as a D-eal breaker. I understand that there are applications that need total control over the memory. But those apps have always been programmed in C or any other close-to-the-machine language, and even then programmers (in gaming for example) have to use additional tricks and hacks to squeeze out every little bit of performance. What D has to do is to facilitate control over the memory, but I still consider it a systems programming language due to the fact that it has many things to offer as regard the direct interaction drive in Java, if yes, tell me how, I'm interested.
Jul 17 2014
On Thursday, 17 July 2014 at 11:15:10 UTC, Chris wrote:Don't know if it's really a "major concern" or the favorite weak spot that C++ et. al guys like to flog to death in order to distract from the many strengths that D has (in comparison with C++ et al.) The answer is always "D has GC, it's the Devil, don't touch it!" Also, let's put a little faith in the brilliant developers behind D, I'm sure there's a huge performance boost for D around the corner.I'm not here to hate on D, the reason I read these forums is because I love the language. I feel it is a major concern, if I'm starting a project with low latency requirements* I certainly think twice about using D. I think this could apply especially to people outside the community who might not have experienced the benefits D provides. The issue is not there is a GC, it's that the GC is viewed as bad. If the GC was as good as Azul's C4 GC then D would be perfect. I'm not sure if D's memory model supports such a collector though. *According to Don Clugston's talk the default GC can pause for ~250ms which is totally insane for any kind of interactive or near-real-time system. If their concurrent version of the GC could reduce this to 10ms it shows the GC implementation is fairly naive.
Jul 17 2014
On Thursday, 17 July 2014 at 13:30:15 UTC, currysoup wrote:On Thursday, 17 July 2014 at 11:15:10 UTC, Chris wrote:The sequencer that I use executes a loop every 10 ms.*According to Don Clugston's talk the default GC can pause for ~250ms which is totally insane for any kind of interactive or near-real-time system. If their concurrent version of the GC could reduce this to 10ms it shows the GC implementation is fairly naive.
Jul 17 2014
I feel it is a major concern, if I'm starting a project with low latency requirements* I certainly think twice about using D. I think this could apply especially to people outside the community who might not have experienced the benefits D provides. The issue is not there is a GC, it's that the GC is viewed as bad. If the GC was as good as Azul's C4 GC then D would be perfect. I'm not sure if D's memory model supports such a collector though.It doesn't.
Jul 17 2014
On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:It's not about "acceptance", it's about the reality that a GC is not a universal solution to memory management. Just from watching a few of the DConf 2014 talks, if you want performance you avoid the GC at all costs (even if that means allocating into huge predefined buffers). Once you're going to these lengths to avoid garbage collection it begs the question, why are you even using this language? Within this community the question is rhetorical but to outsiders I feel it's a major concern.If D came without GC, it would have replaced C++ a long time ago!
Jul 17 2014
On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:If D came without GC, it would have replaced C++ a long time ago!That's overly optimistic I think, but I believe that the adoption rate would have been far greater for a D without GC, or perhaps with a more GC friendly design, as the GC comes up first or close in every D discussion with prospective adopters. However, it's way too late to change that now. IMO, the way forward involves removing all or most hidden allocations from the D libraries, making programming sans GC easier (@nogc everywhere, a compiler switch, documentation for how to work around the lack of GC, etc.) and a much better, precise GC as part of the D release. Any spec changes necessary to support precision should be in a fast path.
Jul 17 2014
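For reference, the @nogc mentioned above is a function attribute (new in the 2.066 timeframe): the compiler rejects any GC-allocating operation inside the function body. A small sketch; totalLength and join2 are made-up names:

    @nogc nothrow
    size_t totalLength(const(char[])[] words)
    {
        size_t sum;
        foreach (w; words)   // iteration and slicing: no GC involvement
            sum += w.length;
        return sum;
    }

    // This, by contrast, would be rejected at compile time:
    // @nogc string join2(string a, string b) { return a ~ b; } // ~ allocates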
On Thursday, 17 July 2014 at 14:05:02 UTC, Brian Rogoff wrote:On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:Yeah. Best avoid GC in the first place. If GC can stop the world for ~250ms, wouldn't it be possible (just an innocent thought) to tell the GC only to work, if it can guarantee to stay below a certain threshold, and do the rest later (or in a parallel thread)?If D came without GC, it would have replaced C++ a long time ago!That's overly optimistic I think, but I believe that the adoption rate would have been far greater for a D without GC, or perhaps with a more GC friendly design, as the GC comes up first or close in every D discussion with prospective adopters. However, it's way too late to change that now. IMO, the way forward involves removing all or most hidden allocations from the D libraries, making programming sans GC easier ( nogc everywhere, a compiler switch, documentation for how to work around the lack of GC, etc.) and a much better, precise GC as part of the D release. Any spec changes necessary to support precision should be in a fast path.
Jul 17 2014
On Thursday, 17 July 2014 at 14:05:02 UTC, Brian Rogoff wrote:On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:This claim is being made frequently, but you need to consider that D started out as a simpler language than it is today. Many of the distinguishing advantages of D can only be made possible _in a safe way_ when there is a GC. Everyone seems to agree, for example, that array slicing is one of these features. Without a GC, you'd either have to add a complicated reference counting scheme, thus destroying performance and simplicity, or you'd have to rely on the user for ownership management, which is unsafe. (A third way would be borrowing, which D doesn't have (yet).) I also believe that the Range concept was introduced at a later stage in D's history, thus the GC avoidance strategies that are being implemented in Phobos right now weren't available back then. Therefore I cannot agree that D would have been adopted more eagerly without a GC; in fact, the adoption rate would have likely been less, because the language would have been crippled.If D came without GC, it would have replaced C++ a long time ago!That's overly optimistic I think, but I believe that the adoption rate would have been far greater for a D without GC, or perhaps with a more GC friendly design, as the GC comes up first or close in every D discussion with prospective adopters.However, it's way too late to change that now. IMO, the way forward involves removing all or most hidden allocations from the D libraries, making programming sans GC easier (@nogc everywhere, a compiler switch, documentation for how to work around the lack of GC, etc.) and a much better, precise GC as part of the D release. Any spec changes necessary to support precision should be in a fast path.Add borrowing!
Jul 19 2014
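To illustrate the slicing point: a slice is just a view into an existing array, and the GC is what makes returning one safe, because the underlying buffer lives exactly as long as any slice of it. A hypothetical example; firstWord is a made-up helper:

    char[] firstWord(char[] text)
    {
        foreach (i, c; text)
            if (c == ' ')
                return text[0 .. i];   // a view into the caller's buffer, no copy
        return text;
    }

    void main()
    {
        char[] line = "hello world".dup;
        auto word = firstWord(line);
        line = null;               // drop one reference...
        assert(word == "hello");   // ...the GC keeps the buffer alive via word
    }

With manual memory management, nothing in this code says who may free the buffer or when, which is exactly the ownership problem described above.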
On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:The only thing that would have been replaced is the complaints that D has a garbage collector with complaints that D doesn't have the tools and existing libraries of C++. If C++ users were sincere in their claims that they really want to use D, they'd have disabled the garbage collector and used it. I think the GC issue is eating resources that would be better spent elsewhere.It's not about "acceptance", it's about the reality that a GC is not a universal solution to memory management. Just from watching a few of the DConf 2014 talks, if you want performance you avoid the GC at all costs (even if that means allocating into huge predefined buffers). Once you're going to these lengths to avoid garbage collection it begs the question, why are you even using this language? Within this community the question is rhetorical but to outsiders I feel it's a major concern.If D came without GC, it would have replaced C++ a long time ago!
Jul 17 2014
On Thursday, 17 July 2014 at 15:19:59 UTC, bachmeier wrote:On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote:+1On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:The only thing that would have been replaced is the complaints that D has a garbage collector with complaints that D doesn't have the tools and existing libraries of C++. If C++ users were sincere in their claims that they really want to use D, they'd have disabled the garbage collector and used it. I think the GC issue is eating resources that would be better spent elsewhere.It's not about "acceptance", it's about the reality that a GC is not a universal solution to memory management. Just from watching a few of the DConf 2014 talks, if you want performance you avoid the GC at all costs (even if that means allocating into huge predefined buffers). Once you're going to these lengths to avoid garbage collection it begs the question, why are you even using this language? Within this community the question is rhetorical but to outsiders I feel it's a major concern.If D came without GC, it would have replaced C++ a long time ago!
Jul 17 2014
On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote: <snip>If D came without GC, it would have replaced C++ a long time ago!Agree +1000. If GC is so good, why not make it an option, have a base lib w/o GC. If I want GC, I got me JRE. It seems that some in D want to write a better JRE, and that just won't happen ever. Cheers, Vic
Jul 17 2014
On Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:If GC is so good, why not make it an option, have a base lib w/o GC.Much of Phobos already is GC free. The parts that aren't should be easy to convert to use user-supplied buffers. Please add enhancement requests for cases where there isn't a GC-free alternative to a standard library routine.
Jul 17 2014
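An existing Phobos entry point in the user-supplied-buffer style Peter describes is sformat (found in std.string in this era), which formats into a caller-provided buffer instead of allocating; buffer size and values below are arbitrary:

    import std.string : sformat;

    void main()
    {
        char[64] buf;                    // caller-supplied, stack storage
        auto s = sformat(buf[], "x = %s, y = %s", 3, 4);
        assert(s == "x = 3, y = 4");     // s slices buf; no GC allocation
    }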
On Thursday, 17 July 2014 at 17:13:04 UTC, Peter Alexander wrote:On Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:If that is true, I may even do a $ bounty to make Phobos GC free. I may do the same, $ bounty on vibe.d port to GC free. I don't know D enough to be able to do that, but good news to me. Cheers, VicIf GC is so good, why not make it an option, have a base lib w/o GC.Much of Phobos already is GC free. The parts that aren't should be easy to convert to use user-supplied buffers. Please add enhancement requests for cases where there isn't a GC-free alternative to a standard library routine.
Jul 17 2014
On Thu, Jul 17, 2014 at 05:28:01PM +0000, Vic via Digitalmars-d wrote:On Thursday, 17 July 2014 at 17:13:04 UTC, Peter Alexander wrote:[...] Over the last year or so, IIRC, there has been a push (a slow but nonetheless steady push) to make as much of Phobos GC-free as possible. I'd say most (all?) of std.algorithm and std.range should be GC-free by now, and probably many of the others can be made GC-free quite easily with the tools that we now have. AFAIK some work still needs to be done with std.string; Walter for one has started some work to implement range-based equivalents for std.string functions, which would be non-allocating; we just need a bit of work to push things through. DMD 2.066 will have @nogc, which will make it easy to discover which remaining parts of Phobos are still not GC-free. Then we'll know where to direct our efforts. :-) T -- Elegant or ugly code as well as fine or rude sentences have something in common: they don't depend on the language. -- Luca De VitisOn Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:If that is true, I may even do a $ bounty to make Phobos GC free. I may do the same, $ bounty on vibe.d port to GC free. I don't know D enough to be able to do that, but good news to me.If GC is so good, why not make it an option, have a base lib w/o GC.Much of Phobos already is GC free. The parts that aren't should be easy to convert to use user-supplied buffers. Please add enhancement requests for cases where there isn't a GC-free alternative to a standard library routine.
Jul 17 2014
On Thursday, 17 July 2014 at 17:49:24 UTC, H. S. Teoh via Digitalmars-d wrote:On Thu, Jul 17, 2014 at 05:28:01PM +0000, Vic via Digitalmars-d wrote:That's good news! See, we're getting there, just bear with us. This begs the question of course, how will this affect existing code? My code is string intensive.On Thursday, 17 July 2014 at 17:13:04 UTC, Peter Alexander wrote:[...] Over the last year or so, IIRC, there has been a push (a slow but nonetheless steady push) to make as much of Phobos GC-free as possible. I'd say most (all?) of std.algorithm and std.range should be GC-free by now, and probably many of the others can be made GC-free quite easily with the tools that we now have. AFAIK some work still needs to be done with std.string; Walter for one has started some work to implement range-based equivalents for std.string functions, which would be non-allocating; we just need a bit of work to push things through. DMD 2.066 will have nogc, which will make it easy to discover which remaining parts of Phobos are still not GC-free. Then we'll know where to direct our efforts. :-) TOn Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:If that is true, I may even do a $ bounty to make Phobos GC free. I may do the same, $ bounty on vibe.d port to GC free. I don't know D enough to be able to do that, but good news to me.If GC is so good, why not make it an option, have a base lib w/o GC.Much of Phobos already is GC free. The parts that aren't should be easy to convert to use user-supplied buffers. Please add enhancement requests for cases where there isn't a GC-free alternative to a standard library routine.
Jul 17 2014
On Thursday, 17 July 2014 at 17:58:15 UTC, Chris wrote:That's good news! See, we're getting there, just bear with us. This begs the question of course, how will this affect existing code? My code is string intensive.Usually GC-free API is added by providing new overloads that take an output range instance as an argument so no existing code should break (it will still use allocating versions)
Jul 17 2014
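std.format's formattedWrite is an existing instance of the overload pattern Dicebot describes: the caller passes any output range, so the allocation policy stays at the call site. The format string and values here are arbitrary:

    import std.array : appender;
    import std.format : formattedWrite;

    void main()
    {
        auto sink = appender!string();             // the caller picks the sink
        sink.formattedWrite("pi is roughly %.2f", 3.14159);
        assert(sink.data == "pi is roughly 3.14"); // no intermediate string built
    }

The same call works unchanged with a static buffer wrapper or a file writer, which is the whole point of taking an output range.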
On Thursday, 17 July 2014 at 18:08:18 UTC, Dicebot wrote:On Thursday, 17 July 2014 at 17:58:15 UTC, Chris wrote:That's good news! See, we're getting there, just bear with us. This begs the question of course, how will this affect existing code? My code is string intensive.Usually GC-free API is added by providing new overloads that take an output range instance as an argument so no existing code should break (it will still use allocating versions)Yes, output ranges are underused as of now.
Jul 17 2014
On Thu, Jul 17, 2014 at 06:09:49PM +0000, deadalnix via Digitalmars-d wrote:On Thursday, 17 July 2014 at 18:08:18 UTC, Dicebot wrote:Actually, I've realized that output ranges are really only useful when you want to store the final result. For data in mid-processing, you really want to be exporting an input (or higher) range interface instead, because functions that take output ranges are not composable. And for storing final results, you just use std.algorithm.copy, so there's really no need for many functions to take an output range at all. T -- One Word to write them all, One Access to find them, One Excel to count them all, And thus to Windows bind them. -- Mike ChampionOn Thursday, 17 July 2014 at 17:58:15 UTC, Chris wrote:Yes, output ranges are underused by now.That's good news! See, we're getting there, just bear with us. This begs the question of course, how will this affect existing code? My code is string intensive.Usually GC-free API is added by providing new overloads that take an output range instance as an argument so no existing code should break (it will still use allocating versions)
Jul 17 2014
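A sketch of the shape Teoh describes, with arbitrary data: every stage exports a lazy input range, and std.algorithm.copy is the single eager step into a caller-chosen sink.

    import std.algorithm : copy, filter, map;
    import std.array : appender;

    void main()
    {
        auto sink = appender!(int[])();  // the only allocation decision, made here
        [1, 2, 3, 4, 5].filter!(x => x % 2 == 1)  // lazy
                       .map!(x => x * x)          // still lazy
                       .copy(sink);               // eager only at the final step
        assert(sink.data == [1, 9, 25]);
    }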
On Thursday, 17 July 2014 at 18:22:11 UTC, H. S. Teoh via Digitalmars-d wrote:Actually, I've realized that output ranges are really only useful when you want to store the final result. For data in mid-processing, you really want to be exporting an input (or higher) range interface instead, because functions that take output ranges are not composable. And for storing final results, you just use std.algorithm.copy, so there's really no need for many functions to take an output range at all.Plain algorithm ranges rarely need to allocate at all so those are somewhat irrelevant to the topic. What I am speaking about is a variety of utility functions like this:

    S detab(S)(S s, size_t tabSize = 8) if (isSomeString!S)

this allocates the result string. Proper alternative:

    S detab(S)(ref S output, size_t tabSize = 8) if (isSomeString!S);

plus

    void detab(S, OR)(OR output, size_t tab_Size = 8)
        if (isSomeString!S && isSomeString!(ElementType!OR))
Jul 17 2014
On Thu, Jul 17, 2014 at 06:32:58PM +0000, Dicebot via Digitalmars-d wrote:On Thursday, 17 July 2014 at 18:22:11 UTC, H. S. Teoh via Digitalmars-d wrote:I think you're missing the input parameter. :)

    void detab(S, OR)(S s, OR output, size_t tabSize = 8) { ... }

I argue that you can just turn it into this:

    auto withoutTabs(S)(S s, size_t tabSize = 8)
    {
        static struct Result
        {
            ... // implementation here
        }
        static assert(isInputRange!Result);
        return Result(s, tabSize);
    }

    auto myInput = "...";
    auto detabbedInput = myInput.withoutTabs.array;

    // Or:
    MyOutputRange sink; // allocate using whatever scheme you want
    myInput.withoutTabs.copy(sink);

The algorithm itself doesn't need to know where the result will end up -- sink could be stdout, in which case no allocation is needed at all. Or are you talking about in-place modification of the input string? That's a different kettle o' fish. T -- EMACS = Extremely Massive And Cumbersome System
Jul 17 2014
On Friday, 18 July 2014 at 00:08:17 UTC, H. S. Teoh via Digitalmars-d wrote:On Thu, Jul 17, 2014 at 06:32:58PM +0000, Dicebot via Digitalmars-d wrote:Yes this looks better.On Thursday, 17 July 2014 at 18:22:11 UTC, H. S. Teoh via Digitalmars-d wrote:I think you're missing the input parameter. :) void detab(S, OR)(S s, OR output, size_t tabSize = 8) { ... } I argue that you can just turn it into this: auto withoutTabs(S)(S s, size_t tabSize = 8) { static struct Result { ... // implementation here } static assert(isInputRange!Result); return Result(s, tabSize); } auto myInput = "..."; auto detabbedInput = myInput.withoutTabs.array; // Or: MyOutputRange sink; // allocate using whatever scheme you want myInput.withoutTabs.copy(sink); The algorithm itself doesn't need to know where the result will end up -- sink could be stdout, in which case no allocation is needed at all.Actually, I've realized that output ranges are really only useful when you want to store the final result. For data in mid-processing, you really want to be exporting an input (or higher) range interface instead, because functions that take output ranges are not composable. And for storing final results, you just use std.algorithm.copy, so there's really no need for many functions to take an output range at all.Plain algorithm ranges rarely need to allocate at all so those are somewhat irrelevant to the topic. What I am speaking about are variety of utility functions like this: S detab(S)(S s, size_t tabSize = 8) if (isSomeString!S) this allocates result string. Proper alternative: S detab(S)(ref S output, size_t tabSize = 8) if (isSomeString!S); plus void detab(S, OR)(OR output, size_t tab_Size = 8) if ( isSomeString!S && isSomeString!(ElementType!OR) )
Jul 18 2014
On 7/17/2014 5:06 PM, H. S. Teoh via Digitalmars-d wrote:

    MyOutputRange sink; // allocate using whatever scheme you want
    myInput.withoutTabs.copy(sink);

The algorithm itself doesn't need to know where the result will end up -- sink could be stdout, in which case no allocation is needed at all.

Exactly! The algorithm becomes completely divorced from the memory allocation. I believe this is a very powerful technique.
Jul 19 2014
On 7/17/2014 11:32 AM, Dicebot wrote:Plain algorithm ranges rarely need to allocate at all so those are somewhat irrelevant to the topic. What I am speaking about are variety of utility functions like this: S detab(S)(S s, size_t tabSize = 8) if (isSomeString!S) this allocates result string. Proper alternative: S detab(S)(ref S output, size_t tabSize = 8) if (isSomeString!S); plus void detab(S, OR)(OR output, size_t tab_Size = 8) if ( isSomeString!S && isSomeString!(ElementType!OR) )That algorithm takes a string and writes to an output range. This is not very composable. For example, what if one has an input range of chars, rather than a string? And what if one wants to tack more processing on the end? A better interface is the one used by the byChar, byWchar, and byDchar ranges recently added to std.utf. Those accept an input range, and present an input range as "output". They are very composable, and can be stuck in anywhere in a character processing pipeline. They do no allocations, and are completely lazy. The byChar algorithm in particular can serve as an outline for how to do a detab algorithm, most of the code can be reused for that.
Jul 17 2014
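As an illustration of that composability (assuming the 2.066-era std.utf.byChar Walter describes is available; the pipeline contents are arbitrary), a whole character-processing chain can run lazily with no intermediate strings:

    import std.algorithm : equal, filter, map;
    import std.ascii : isWhite, toUpper;
    import std.utf : byChar;

    void main()
    {
        auto loud = "make it loud"
            .byChar                            // lazy UTF-8 code units
            .filter!(c => !c.isWhite)          // still lazy, nothing copied
            .map!(c => cast(char) c.toUpper);  // still no allocation anywhere
        assert(loud.equal("MAKEITLOUD"));
    }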
On Thu, Jul 17, 2014 at 05:58:14PM +0000, Chris via Digitalmars-d wrote:On Thursday, 17 July 2014 at 17:49:24 UTC, H. S. Teoh via Digitalmars-d wrote:[...]I don't think it will affect existing code (esp. given Walter's stance on breaking changes!). Probably the old GC-based string functions will still be around for backwards-compatibility. Perhaps some of them might be replaced with non-GC versions where it can be done transparently, but I'd expect you'd need to rewrite your string code to take advantage of the new range-based stuff. Hopefully the rewrites will be minimal (e.g., pass in an output range as argument instead of getting a returned string, replace allocation-based code with a UFCS chain, etc.). The ideal scenario may very well be as simple as tacking on `.copy(myBuffer)` at the end of a UFCS chain. :-P T -- Genius may have its limitations, but stupidity is not thus handicapped. -- Elbert HubbardAFAIK some work still needs to be done with std.string; Walter for one has started some work to implement range-based equivalents for std.string functions, which would be non-allocating; we just need a bit of work to push things through. DMD 2.066 will have nogc, which will make it easy to discover which remaining parts of Phobos are still not GC-free. Then we'll know where to direct our efforts. :-) TThat's good news! See, we're getting there, just bear with us. This begs the question of course, how will this affect existing code? My code is string intensive.
Jul 17 2014
H. S. Teoh:I don't think it will affect existing code (esp. given Walter's stance on breaking changes!).Making various parts of Phobos GC-free doesn't mean that nothing GC-allocates, it means that Phobos will offer means to use memory provided by the user. There are many situations where using a GC is OK, so both kinds of usages should be supported by Phobos. It should contain nothrow @nogc functions to format and to convert to numbers and strings. It's a matter of offering choice. Bye, bearophile
Jul 17 2014
On 7/17/2014 11:17 AM, H. S. Teoh via Digitalmars-d wrote:I don't think it will affect existing code (esp. given Walter's stance on breaking changes!). Probably the old GC-based string functions will still be around for backwards-compatibility. Perhaps some of them might be replaced with non-GC versions where it can be done transparently, but I'd expect you'd need to rewrite your string code to take advantage of the new range-based stuff. Hopefully the rewrites will be minimal (e.g., pass in an output range as argument instead of getting a returned string, replace allocation-based code with a UFCS chain, etc.). The ideal scenario may very well be as simple as tacking on `.copy(myBuffer)` at the end of a UFCS chain. :-PBoss, dat's pretty much de plan, de plan!
Jul 17 2014
On Thursday, 17 July 2014 at 18:19:04 UTC, H. S. Teoh via Digitalmars-d wrote:On Thu, Jul 17, 2014 at 05:58:14PM +0000, Chris via Digitalmars-d wrote:That sounds good to me! This gives me time to upgrade my old code little by little and use the new approach when writing new code. Phew! By the way, my code is string intensive and I still have some suboptimal (greedy) ranges here and there. But believe it or not, they're no problem at all. The application (a plugin for a screen reader) is fast and responsive* (according to user feedback) like any other screen reader plugin, and it hasn't crashed for ages (thanks to GC?) - knock on wood! I use a lot of lazy ranges too plus some pointer magic for work intensive algorithms. Plus D let me easily model the various relations between text and speech (for other use cases down the road). Maybe it is not a real time system, but it has to be responsive. So far, GC hasn't affected it negatively. Once the online version will be publicly available, I will report how well vibe.d performs. Current results are encouraging. As regards Java, the big advantage of D is that it compiles to a native DLL and all users have to do is to double click on it to install. No "please download JVM" nightmare. I've been there. Users cannot handle it (why should they?), and to provide it as a developer is a waste of time and resources, and it might still go wrong which leaves both the users and the developers angry and frustrated. * The only thing that bothers me is that there seems to be a slight audio latency problem on Windows, which is not D's fault. On Linux it speaks as soon as you press <Enter>.On Thursday, 17 July 2014 at 17:49:24 UTC, H. S. Teoh via Digitalmars-d wrote:[...]I don't think it will affect existing code (esp. given Walter's stance on breaking changes!). Probably the old GC-based string functions will still be around for backwards-compatibility. Perhaps some of them might be replaced with non-GC versions where it can be done transparently, but I'd expect you'd need to rewrite your string code to take advantage of the new range-based stuff. Hopefully the rewrites will be minimal (e.g., pass in an output range as argument instead of getting a returned string, replace allocation-based code with a UFCS chain, etc.). The ideal scenario may very well be as simple as tacking on `.copy(myBuffer)` at the end of a UFCS chain. :-P TAFAIK some work still needs to be done with std.string; Walter for one has started some work to implement range-based equivalents for std.string functions, which would be non-allocating; we just need a bit of work to push things through. DMD 2.066 will have nogc, which will make it easy to discover which remaining parts of Phobos are still not GC-free. Then we'll know where to direct our efforts. :-) TThat's good news! See, we're getting there, just bear with us. This begs the question of course, how will this affect existing code? My code is string intensive.
Jul 18 2014
On Friday, 18 July 2014 at 09:25:46 UTC, Chris wrote:On Thursday, 17 July 2014 at 18:19:04 UTC, H. S. Teoh via Digitalmars-d wrote:Java has AOT compilers available since the early days. Most developers just tend to ignore them, because they are not part of the free package. -- PauloOn Thu, Jul 17, 2014 at 05:58:14PM +0000, Chris via Digitalmars-d wrote:That sounds good to me! This gives me time to upgrade my old code little by little and use the new approach when writing new code. Phew! By the way, my code is string intensive and I still have some suboptimal (greedy) ranges here and there. But believe it or not, they're no problem at all. The application (a plugin for a screen reader) is fast and responsive* (according to user feedback) like any other screen reader plugin, and it hasn't crashed for ages (thanks to GC?) - knock on wood! I use a lot of lazy ranges too plus some pointer magic for work intensive algorithms. Plus D let me easily model the various relations between text and speech (for other use cases down the road). Maybe it is not a real time system, but it has to be responsive. So far, GC hasn't affected it negatively. Once the online version will be publicly available, I will report how well vibe.d performs. Current results are encouraging. As regards Java, the big advantage of D is that it compiles to a native DLL and all users have to do is to double click on it to install. No "please download JVM" nightmare. I've been there. Users cannot handle it (why should they?), and to provide it as a developer is a waste of time and resources, and it might still go wrong which leaves both the users and the developers angry and frustrated. * The only thing that bothers me is that there seems to be a slight audio latency problem on Windows, which is not D's fault. On Linux it speaks as soon as you press <Enter>.On Thursday, 17 July 2014 at 17:49:24 UTC, H. S. Teoh via Digitalmars-d wrote:[...]I don't think it will affect existing code (esp. given Walter's stance on breaking changes!). Probably the old GC-based string functions will still be around for backwards-compatibility. Perhaps some of them might be replaced with non-GC versions where it can be done transparently, but I'd expect you'd need to rewrite your string code to take advantage of the new range-based stuff. Hopefully the rewrites will be minimal (e.g., pass in an output range as argument instead of getting a returned string, replace allocation-based code with a UFCS chain, etc.). The ideal scenario may very well be as simple as tacking on `.copy(myBuffer)` at the end of a UFCS chain. :-P TAFAIK some work still needs to be done with std.string; Walter for one has started some work to implement range-based equivalents for std.string functions, which would be non-allocating; we just need a bit of work to push things through. DMD 2.066 will have nogc, which will make it easy to discover which remaining parts of Phobos are still not GC-free. Then we'll know where to direct our efforts. :-) TThat's good news! See, we're getting there, just bear with us. This begs the question of course, how will this affect existing code? My code is string intensive.
Jul 20 2014
On Sun, 2014-07-20 at 16:40 +0000, Paulo Pinto via Digitalmars-d wrote: […] Java has AOT compilers available since the early days. Most developers just tend to ignore them, because they are not part of the free package.

Also, it is not entirely clear that AOT optimization can beat JIT optimization, at least on the JVM.

-- Russel.
Jul 21 2014
On Monday, 21 July 2014 at 18:31:46 UTC, Russel Winder via Digitalmars-d wrote:On Sun, 2014-07-20 at 16:40 +0000, Paulo Pinto via Digitalmars-d wrote: […]They probably aren't mutually exclusive.Java has AOT compilers available since the early days. Most developers just tend to ignore them, because they are not part of the free package.Also, it is not entirely clear that AOT optimization can beat JIT optimization, at least on the JVM.
Jul 21 2014
On Monday, 21 July 2014 at 18:31:46 UTC, Russel Winder via Digitalmars-d wrote:On Sun, 2014-07-20 at 16:40 +0000, Paulo Pinto via Digitalmars-d wrote: […]Yes it can, if developers bother to do PGO + AOT instead and learn the compiler flags. I used to have a stronger opinion on JIT, but given how many JITs perform and do not actually use the hardware as they, in theory, could, JITs tend to only be an advantage for dynamic languages, not strongly typed ones. With JIT, writing the code in a way that makes the JIT compiler happy is a lost battle, as it depends on the exact same JIT implementation being available on the deployment system. -- Paulo
Jul 21 2014
On Tue, 2014-07-22 at 06:35 +0000, Paulo Pinto via Digitalmars-d wrote: […] Yes it can, if developers bother to do PGO + AOT instead and learn the compiler flags. I used to have a stronger opinion on JIT, but given how many JITs perform and do not actually use the hardware as they, in theory, could, JITs tend to only be an advantage for dynamic languages, not strongly typed ones. With JIT, writing the code in a way that makes the JIT compiler happy is a lost battle, as it depends on the exact same JIT implementation being available on the deployment system.

I think you have to make good on this claim since the JVM JIT is intended for Java which is supposedly a static, strongly typed language. Moreover, evidence from Groovy is the JVM JIT provides only patchy benefit. The biggest benefit all round is invokedynamic for both static and dynamic languages. Java 8 would be nothing without invokedynamic. But maybe we should take this off this list as it is way off topic. Clearly we can use JMH for benchmarking. I have a couple of codes I could use to try things out. So:

1. How to compile and execute to get full AOT *and* switch off the JIT.
2. How to compile and execute to get no AOT and have JIT on full.

then we can begin to compare.

-- Russel.
Jul 22 2014
On Tuesday, 22 July 2014 at 08:10:31 UTC, Russel Winder via Digitalmars-d wrote:On Tue, 2014-07-22 at 06:35 +0000, Paulo Pinto via Digitalmars-d wrote: […]The JVM JIT was originally targeted to SELF, not Java.Moreover, evidence from Groovy is the JVM JIT provides only patchy benefit. The biggest benefit all round is invokedynamic for both static and dynamic languages. Java 8 would be nothing without invokedynamic.Functional programming languages have AOT compilers and they perform quite well, almost to C level in many use cases. As for Groovy, I always felt the implementation was lacking in performance. I avoid touching Gradle.But maybe we should take this off this list as it is way off topic. Clearly we can use JMH for benchmarking. I have a couple of codes I could use to try things out. So: 1. How to compile and execute to get full AOT *and* switch off the JIT. 2. How to compile and execute to get no AOT and have JIT on full. then we can begin to compare.I was discussing JIT vs AOT in abstract. To be able to perform such tests you need: - A programming language X - The state of the art JIT compiler implementation for the given language - The state of the art AOT compiler implementation for the given language I know a few commercial AOT compilers for Java, not sure which one would be the best one to choose. But the proof is Microsoft adding .NET Native to their toolchain, Google replacing Dalvik with AOT and Oracle has added AOT compilation (Substrate) to Graal, the candidate for Hotspot replacement. So apparently they all agree AOT still wins in many scenarios. -- Paulo
Jul 22 2014
On Tue, 2014-07-22 at 10:55 +0000, Paulo Pinto via Digitalmars-d wrote: […] The JVM JIT was originally targeted to SELF, not Java.

I think you'll find HotSpot evolved from a Smalltalk JIT originally. Borland and Symantec had JVM JITs as well, Sun even licenced the Symantec one for a while.

[…] Functional programming languages have AOT compilers and they perform quite well, almost to C level in many use cases.

True. Java/JVM/JIT also performs very well, surpassing C in many cases. Indeed C++ surpasses C in many cases as well.

As for Groovy, I always felt the implementation was lacking in performance.

True. Groovy is a dynamic language not intended for performance computation. However it now has static compilation to JVM bytecodes as well, which leads to it being as fast or sometimes faster than Java.

I avoid touching Gradle.

Your loss! For others: Gradle is becoming the de facto standard build framework for JVM-based things and also Android.

[…] I was discussing JIT vs AOT in abstract.

The trouble is that this isn't a good way of discussing what is a performance issue that can only be decided by comparative benchmarks.

To be able to perform such tests you need: - A programming language X

In the case at hand X = Java.

- The state of the art JIT compiler implementation for the given language

I guess HotSpot is the default here, unless anyone has access to the IBM VM.

- The state of the art AOT compiler implementation for the given language. I know a few commercial AOT compilers for Java, not sure which one would be the best one to choose.

I am not sure which I would go with here as I have little experience of the high cost products. We'd have to get some sponsorship for the benchmarks. I will ask around the folks in the JVM performance community.

But the proof is Microsoft adding .NET Native to their toolchain, Google replacing Dalvik with AOT and Oracle has added AOT compilation (Substrate) to Graal, the candidate for Hotspot replacement.

Graal isn't a replacement for HotSpot but a dynamic compilation technology to work with HotSpot. It is actually a very promising technology, I am looking forward to trying it out.

So apparently they all agree AOT still wins in many scenarios.

Why is it one or the other? Having both AOT and JIT will likely do even better. Hence Graal on HotSpot. Certainly AOT, putting the burden on compilation, ensures there is no start-up overhead, so is a benefit for short running systems. JIT has an initial (often large) overhead but once triggered produces highly performant (localized) code. Java is going to have to find the balance to stay up with the performance needed these days.

-- Russel.
Jul 23 2014
On Wednesday, 23 July 2014 at 08:46:32 UTC, Russel Winder via Digitalmars-d wrote:On Tue, 2014-07-22 at 10:55 +0000, Paulo Pinto via Digitalmars-d wrote: […]I will happily use it when it gets to the same execution speed and hardware usage that Eclipse + ADT currently has.I avoid touching Gradle.Your loss! For others: Gradle is becoming the de facto standard build framework for JVM-based things and also Android.[…]Yes it is. It was presented as such at JavaONE for possible future Java 9+ improvements. I can try to dig out the presentation, if you wish.But the proof is Microsoft adding .NET Native to their toolchain, Google replacing Dalvik with AOT, and Oracle adding AOT compilation (Substrate) to Graal, the candidate to replace HotSpot.Graal isn't a replacement for HotSpot but a dynamic compilation technology to work with HotSpot. It is actually a very promising technology, I am looking forward to trying it out.[...] Why is it one or the other? Having both AOT and JIT will likely do even better. Hence Graal on HotSpot.I agree, in the cases where the toolchain offers both possibilities out of the box and does not force developers to choose among different vendors' toolchains. -- Paulo
Jul 23 2014
On Wed, 2014-07-23 at 09:11 +0000, Paulo Pinto via Digitalmars-d wrote: […]I will happily use it when it gets to the same execution speed and hardware usage that Eclipse + ADT currently has.The way I work with Gradle is to generate Eclipse or IntelliJ IDEA projects if I am going to use Eclipse or IntelliJ IDEA. […]Clearly I need to update my knowledge! […]Graal isn't a replacement for HotSpot but a dynamic compilation technology to work with HotSpot. It is actually a very promising technology, I am looking forward to trying it out.Yes it is. It was presented as such at JavaONE for possible future Java 9+ improvements. I can try to dig out the presentation, if you wish.I agree, in the cases where the toolchain offers both possibilities out of the box and does not force developers to choose among different vendors' toolchains.I am trying to get folk in the JVM benchmarking trade to tell me what the latest SP is on things. -- Russel.
Jul 23 2014
On 23.07.2014 21:27, Russel Winder via Digitalmars-d wrote:On Wed, 2014-07-23 at 09:11 +0000, Paulo Pinto via Digitalmars-d wrote: […]So far I could only find "Looking into the JVM Crystal Ball" http://www.parleys.com/play/524f6b5be4b0a43ac12123a9/about Between 00:40:00 and 00:45:50, compilation gets discussed, including AOT. Not the ones about Graal, though. I am pretty sure I saw a slide with it as part of the Java 9+ wishlist, now I just have to remember if it was actually at JavaONE, Devoxx, FOSDEM or Jax. :\ -- PauloClearly I need to update my knowledge!It was presented as such at JavaONE for possible future Java 9+ improvements. I can try to dig out the presentation, if you wish.
Jul 23 2014
On Wed, 2014-07-23 at 22:58 +0200, Paulo Pinto via Digitalmars-d wrote: […]So far I could only find "Looking into the JVM Crystal Ball" http://www.parleys.com/play/524f6b5be4b0a43ac12123a9/about Between 00:40:00 and 00:45:50, compilation gets discussed, including AOT. Not the ones about Graal, though. I am pretty sure I saw a slide with it as part of the Java 9+ wishlist, now I just have to remember if it was actually at JavaONE, Devoxx, FOSDEM or Jax. :\I'll check this out. I am also getting the folk who represent the LJC on the JCP EC (the LJC is an elected member) to get a definitive statement on the road map. -- Russel.
Jul 24 2014
On Wednesday, 23 July 2014 at 08:46:32 UTC, Russel Winder via Digitalmars-d wrote:On Tue, 2014-07-22 at 10:55 +0000, Paulo Pinto via Digitalmars-d wrote: […]I am suspicious. I understand that a situation can be contrived such that C will lose, but in normal, sensible code the only language I've ever seen reliably beat C is FORTRAN.The JVM JIT was originally targeted to SELF, not Java.I think you'll find HotSpot evolved from a Smalltalk JIT originally. Borland and Symantec had JVM JITs as well, Sun even licensed the Symantec one for a while. […]Functional programming languages have AOT compilers and they perform quite well, almost to C level in many use cases.True. Java/JVM/JIT also performs very well, surpassing C in many cases. Indeed C++ surpasses C in many cases as well.
Jul 23 2014
Yes, that's right. The guys who developed Self (David Ungar et al.) then set out to develop a high-performance typed Smalltalk using the optimization techniques they developed for Self. The Smalltalk system never hit the market, as the development team was acquired by Sun before that could happen. The Smalltalk system they were working on was later released to the public: http://www.strongtalk.org/The JVM JIT was originally targeted to SELF, not Java.The reason I replied to this is that the original technology developed for Self was not a JIT. It was a runtime byte code optimizer that was put into Java under the name HotSpot. Since HotSpot operates at runtime it can optimize things an optimizing compiler could not find at compile time. This is why Java sometimes achieves very good performance and in isolated cases can compete with C.I think you'll find HotSpot evolved from a Smalltalk JIT originally.
Jul 23 2014
On Wednesday, 23 July 2014 at 09:16:57 UTC, John Colvin wrote:On Wednesday, 23 July 2014 at 08:46:32 UTC, Russel Winder via Digitalmars-d wrote:http://benchmarksgame.alioth.debian.org/ There's no good reason for C to beat C++. Even if there were, it would be simple to rewrite the C++ bottleneck in C style. Likewise, there's no good reason for C to beat D either. I was surprised by the Java results once they started beating C at certain benchmarks years ago. But the fact is it does. AtilaOn Tue, 2014-07-22 at 10:55 +0000, Paulo Pinto via Digitalmars-d wrote: […]I am suspicious. I understand that a situation can be contrived such that C will lose, but in normal, sensible code the only language I've ever seen reliably beat C is FORTRAN.The JVM JIT was originally targeted to SELF, not Java.I think you'll find HotSpot evolved from a Smalltalk JIT originally. Borland and Symantec had JVM JITs as well, Sun even licensed the Symantec one for a while. […]Functional programming languages have AOT compilers and they perform quite well, almost to C level in many use cases.True. Java/JVM/JIT also performs very well, surpassing C in many cases. Indeed C++ surpasses C in many cases as well.
Jul 23 2014
On Wednesday, 23 July 2014 at 11:54:19 UTC, Atila Neves wrote:http://benchmarksgame.alioth.debian.org/ There's no good reason for C to beat C++. Even if there were, it would be simple to rewrite the C++ bottleneck in C style. Likewise, there's no good reason for C to beat D either. I was surprised by the Java results once they started beating C at certain benchmarks years ago. But the fact is it does. AtilaIt usually does in memory-intensive benchmarks that aren't multithreaded. Java's GC is a free shot of concurrency that C won't get.
Jul 23 2014
On Wed, 2014-07-23 at 09:16 +0000, John Colvin via Digitalmars-d wrote: […]I am suspicious. I understand that a situation can be contrived such that C will lose, but in normal, sensible code the only language I've ever seen reliably beat C is FORTRAN.For my data parallel computations, I find C++ with TBB tends to be the winner. C, C++ and Fortran (not FORTRAN!) with OpenMP do fairly well. -- Russel.
Jul 23 2014
On Wednesday, 23 July 2014 at 09:16:57 UTC, John Colvin wrote:I am suspicious. I understand that a situation can be contrived such that C will lose, but in normal, sensible code the only language I've ever seen reliably beat C is FORTRAN.I'm reminded of when headlines came out saying PyPy was now faster than C in some cases. I got pretty excited (that's an impressive feat of engineering) but upon looking into it, it turned out it was just inlining better than C because the C code was making a function call into another library. LTCG/LTO wasn't even uncommon at the time and would have easily handled that case had it been enabled.
Jul 23 2014
On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:For others: Gradle is becoming the de facto standard build framework for JVM-based things and also Android.Uhm, I'm literally right now in a talk on Buck (https://github.com/facebook/buck) at OSCON. -- Andrei
Jul 23 2014
On 7/23/14, 11:40 AM, Andrei Alexandrescu wrote:On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:Fresh photo comparing buck with gradle: http://i.imgur.com/uGHdfyq.jpg -- AndreiFor others: Gradle is becoming the de facto standard build framework for JVM-based things and also Android.Uhm, I'm literally right now in a talk on Buck (https://github.com/facebook/buck) at OSCON. -- Andrei
Jul 23 2014
On Wed, 2014-07-23 at 11:45 -0700, Andrei Alexandrescu via Digitalmars-d wrote:On 7/23/14, 11:40 AM, Andrei Alexandrescu wrote:On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:For others: Gradle is becoming the de facto standard build framework for JVM-based things and also Android.Uhm, I'm literally right now in a talk on Buck (https://github.com/facebook/buck) at OSCON. -- AndreiFresh photo comparing buck with gradle: http://i.imgur.com/uGHdfyq.jpg -- AndreiWere any of the Gradleware folk there? That should really scare them. BTW what's with the rabbit and the monkey? -- Russel.
Jul 23 2014
On 23.07.2014 21:23, Russel Winder via Digitalmars-d wrote:On Wed, 2014-07-23 at 11:45 -0700, Andrei Alexandrescu via Digitalmars-d wrote:I only tried Gradle because of Android Studio; it makes such bad use of hardware resources, pegging my i7 and core duo processors, that I returned to Eclipse + ADT on the same day. The situation is so bad it was even mentioned at this Google IO Android developer tools talk. This aborted my attempt to try to use Kotlin instead of C++ on my hobby Android projects. As for our Fortune 500 customers portfolio, the ones using Java are still 100% in a mix of Ant and Maven. -- PauloOn 7/23/14, 11:40 AM, Andrei Alexandrescu wrote:Were any of the Gradleware folk there? That should really scare them. BTW what's with the rabbit and the monkey?On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:Fresh photo comparing buck with gradle: http://i.imgur.com/uGHdfyq.jpg -- AndreiFor others: Gradle is becoming the de facto standard build framework for JVM-based things and also Android.Uhm, I'm literally right now in a talk on Buck (https://github.com/facebook/buck) at OSCON. -- Andrei
Jul 23 2014
On Wed, 2014-07-23 at 21:32 +0200, Paulo Pinto via Digitalmars-d wrote: […]I only tried Gradle because of Android Studio; it makes such bad use of hardware resources, pegging my i7 and core duo processors, that I returned to Eclipse + ADT on the same day.I have not tried Android Studio for anything as yet. It is based on IntelliJ IDEA though (as is PyCharm) and IntelliJ IDEA beats Eclipse hands down for Java and Groovy working (as PyCharm beats Eclipse/PyDev hands down for Python). For me, YMMV.The situation is so bad it was even mentioned at this Google IO Android developer tools talk.I think this will be a JetBrains problem rather than a Gradle problem.This aborted my attempt to try to use Kotlin instead of C++ on my hobby Android projects.Kotlin is great fun, but I only use IntelliJ IDEA for that.As for our Fortune 500 customers portfolio, the ones using Java are still 100% in a mix of Ant and Maven.<shudder/> I gave up Ant when I wrote Gant (*), and avoided Maven until Gradle arrived. Humans should not have to hand write XML ever. (*) Someone forked this to create the Groovy front end to Ant, which must beat the XML one any and every day of the week. -- Russel.
Jul 24 2014
On Thursday, 24 July 2014 at 08:34:30 UTC, Russel Winder via Digitalmars-d wrote:On Wed, 2014-07-23 at 21:32 +0200, Paulo Pinto via Digitalmars-d wrote: […]Nope, Gradle, as shown by the CPU usage on the task manager. -- PauloThe situation is so bad it was even mentioned at this Google IO Android developer tools talk.I think this will be a JetBrains problem rather than a Gradle problem.
Jul 24 2014
On Thu, 2014-07-24 at 09:38 +0000, Paulo Pinto via Digitalmars-d wrote: […]Nope, Gradle, as shown by the CPU usage on the task manager.I am surprised, but data always trumps opinion. -- Russel.
Jul 24 2014
On Thursday, 24 July 2014 at 11:01:35 UTC, Russel Winder via Digitalmars-d wrote:On Thu, 2014-07-24 at 09:38 +0000, Paulo Pinto via Digitalmars-d wrote: […]One of the first Google results: http://askubuntu.com/questions/469709/gradle-compiling-slows-down-my-computer You can find many more out there, covering many different use cases.Nope, Gradle, as shown by the CPU usage on the task manager.I am surprised, but data always trumps opinion.
Jul 24 2014
On Thu, 2014-07-24 at 11:09 +0000, Paulo Pinto via Digitalmars-d wrote:On Thursday, 24 July 2014 at 11:01:35 UTC, Russel Winder via Digitalmars-d wrote:On Thu, 2014-07-24 at 09:38 +0000, Paulo Pinto via Digitalmars-d wrote: […]One of the first Google results: http://askubuntu.com/questions/469709/gradle-compiling-slows-down-my-computerNope, Gradle, as shown by the CPU usage on the task manager.I am surprised, but data always trumps opinion.You can find many more out there, covering many different use cases.Looks like Android Studio tells Gradle to use as many threads as there are cores, so this is an Android Studio problem, not a Gradle problem per se. -- Russel.
Jul 24 2014
On Thursday, 24 July 2014 at 11:35:09 UTC, Russel Winder via Digitalmars-d wrote:On Thu, 2014-07-24 at 11:09 +0000, Paulo Pinto via Digitalmars-d wrote:In this specific case yes, but as I mentioned there are lots of use cases being reported. -- PauloOn Thursday, 24 July 2014 at 11:01:35 UTC, Russel Winder via Digitalmars-d wrote:Looks like Android Studio tells Gradle to use as many threads as there are cores, so this is an Android Studio problem, not a Gradle problem per se.On Thu, 2014-07-24 at 09:38 +0000, Paulo Pinto via Digitalmars-d wrote: […]One of the first Google results: http://askubuntu.com/questions/469709/gradle-compiling-slows-down-my-computer You can find many more out there, covering many different use cases.Nope, Gradle, as shown by the CPU usage on the task manager.I am surprised, but data always trumps opinion.
Jul 24 2014
On Thu, 2014-07-24 at 11:39 +0000, Paulo Pinto via Digitalmars-d wrote: […]In this specific case yes, but as I mentioned there are lots of use cases being reported.It turns out to be a "known fact" even in Gradleware. Hans mentions it specifically in his "vision for the future" document of a month ago. He also mentions that the C/C++ build aspects of Gradle are to be used by the Android NDK folk. I already asked them about including D in the package, but the response was "nobody uses D". So maybe we (I guess this means I) should do a user contributed patch to add D to the whole thing. -- Russel.
Jul 27 2014
On Sunday, 27 July 2014 at 08:24:44 UTC, Russel Winder via Digitalmars-d wrote:On Thu, 2014-07-24 at 11:39 +0000, Paulo Pinto via Digitalmars-d wrote: […]I am nobody.In this specific case yes, but as I mentioned there are lots of use cases being reported.It turns out to be a "known fact" even in Gradleware. Hans mentions it specifically in his "vision for the future" document of a month ago. He also mentions that the C/C++ build aspects of Gradle are to be used by the Android NDK folk. I already asked them about including D in the package, but the response was "nobody uses D".So maybe we (I guess this means I) should do a user contributed patch to add D to the whole thing.
Jul 27 2014
On Sun, 2014-07-27 at 12:51 +0000, Chris via Digitalmars-d wrote:On Sunday, 27 July 2014 at 08:24:44 UTC, Russel Winder via Digitalmars-d wrote:[…]I was fairly appalled at the response, so I have requested the ability to clone the C/C++ stuff so as to add D and send in pull requests. Whatever anyone thinks of Gradle (or SCons) for D, it is looking more and more like Gradle is the route to build on Android. So if we want D on Android, ensuring "buildable with Gradle" is a way of removing a hurdle. -- Russel.He also mentions that the C/C++ build aspects of Gradle are to be used by the Android NDK folk. I already asked them about including D in the package, but the response was "nobody uses D".I am nobody.
Jul 27 2014
On 7/23/14, 12:23 PM, Russel Winder via Digitalmars-d wrote:BTW what's with the rabbit and the monkey?He promised his kid they'll go on an adventure with daddy. A really nice touch. I might steal it for my own talks. -- Andrei
Jul 23 2014
On Wed, 2014-07-23 at 14:37 -0700, Andrei Alexandrescu via Digitalmars-d wrote:On 7/23/14, 12:23 PM, Russel Winder via Digitalmars-d wrote:BTW what's with the rabbit and the monkey?He promised his kid they'll go on an adventure with daddy. A really nice touch. I might steal it for my own talks. -- AndreiExcellent. Perhaps we should make that "a thing". Every speaker must have their "cuddly toy" companion on stage. -- Russel.
Jul 24 2014
On Wednesday, 23 July 2014 at 18:45:23 UTC, Andrei Alexandrescu wrote:On 7/23/14, 11:40 AM, Andrei Alexandrescu wrote:Say hi to Simon :)On 7/23/14, 1:46 AM, Russel Winder via Digitalmars-d wrote:Fresh photo comparing buck with gradle: http://i.imgur.com/uGHdfyq.jpg -- AndreiFor others: Gradle is becoming the de facto standard build framework for JVM-based things and also Android.Uhm, I'm literally right now in a talk on Buck (https://github.com/facebook/buck) at OSCON. -- Andrei
Jul 23 2014
On 7/23/2014 1:46 AM, Russel Winder via Digitalmars-d wrote:I think you'll find HotSpot evolved from a Smalltalk JIT originally. Borland and Symantec had JVM JITs as well, Sun even licensed the Symantec one for a while.Fun fact: the guy who wrote Symantec's JVM JIT, Steve Russell, is the very guy who wrote Optlink!
Jul 23 2014
On Thursday, 17 July 2014 at 17:28:02 UTC, Vic wrote:If that is true, I may even do a $ bounty to make Phobos GC free.Unless you do some hard real-time barebone stuff, it is quite likely you can do with limited usage of the GC. Hiring an experienced D user to make a one-time case study with detailed recommendations can be an option if you are seriously concerned.I may do the same, $ bounty on vibe.d port to GC free.vibe.d has -version=VibedManualMemoryManagement which removes much of the GC usage from its internals. Not 100% @nogc, but some entry point to start with for interested parties.I don't know D enough to be able to do that, but good news to me.Here Don mentions some of the techniques we (Sociomantic) use to minimize GC impact: https://www.youtube.com/watch?v=WmE7ZR1_YKs In the end it comes down to the famous Bjarne quote: "C++ may be the best language for garbage collection because it generates so little garbage". The same can be applied to D with a proper coding style.
Jul 17 2014
Vic:I see no proof of this. And not everybody hates GCs. Bye, bearophileIf D came without GC, it would have replaced C++ a long time ago!Agree +1000.
Jul 17 2014
I hate GC, so there.I see no proof of this. And not everybody hates GCs. Bye, bearophile
Jul 17 2014
On Thu, Jul 17, 2014 at 05:32:36PM +0000, Right via Digitalmars-d wrote:I hate GC, so there.[...] I don't, so here. :D T -- I see that you JS got Bach.I see no proof of this. And not everybody hates GCs.
Jul 17 2014
On 7/17/14, 2:32 PM, Right wrote:I hate GC, so there.Java is everywhere and it has a GC. Go is starting to be everywhere and I don't think everyone hates GCs. :-)I see no proof of this. And not everybody hates GCs. Bye, bearophile
Jul 17 2014
On Thu, 2014-07-17 at 15:11 -0300, Ary Borenszweig via Digitalmars-d wrote: […]Java is everywhere and it has a GC. Go is starting to be everywhere and I don't think everyone hates GCs. :-)I think we need to try and turn this into a more constructive debate, and the above gives a hook. The Go thread is coming to the conclusion that they need a better GC than they currently have. I suspect this will now become a unit of work and that something good will come of it. For many years GC in Java has been a bit of a problem; Java relies on GC, yet the algorithms were always a bit of a compromise and second rate. However Java now has the G1 garbage collector and there is evidence, and a huge amount of hope, that this is actually a turning point. Java exhibits the behaviour of having a lot of very short lived objects, so it becomes crucial to be able to deal with object creation as a very lightweight activity and for very lightweight collection of rapidly useless objects. Java originally went for a generational GC strategy but this has always led to problems, especially in a multicore context. Taking an alternative strategy, G1 has seemingly ameliorated a lot of the problems, leading to a system that is not "stop the world", is multicore and multithread compatible, and works very well such that soft real time is seemingly not a problem. With C++ I am coming to grips with RAII management of the heap. With Java, Groovy, Go and Python I rely on the GC doing a good job. I note though that there is a lot of evidence that the Unreal folk developed a garbage collector for C++ exactly because they didn't want to do the RAII thing. -- Russel.
Jul 17 2014
I'm rather fond of RAII; I find that I rarely ever need shared semantics. I use a custom object model that allows for weak_ptrs to unique_ptrs, which I think removes some cases where people might otherwise be inclined to use shared_ptr. Shared semantics are so rare, in fact, that I would say I hardly use them at all; I go for weeks of coding without creating a shared type, not because I'm trying to do so, but because it just isn't necessary. Which is why GC seems like such a waste. Given my experience in C++, where I hardly need shared memory, I see little use for a GC (or even ARC etc.); all it will do is decrease program performance, make deterministic destruction impossible, and prevent automatic cleanup of non-memory resources. Rust seems to have caught on to what C++ has accomplished here. Oh, and Unreal? Yes, they have a GC type, "UObject". I worked on Unreal at one point; my impression was that this originated back with the original Unreal (circa 1998?), likely caused by the popularity of Java at the time. As for the Unreal code base? Pass on that.
Jul 17 2014
On Thursday, 17 July 2014 at 19:14:06 UTC, Right wrote:I'm rather fond of RAII; I find that I rarely ever need shared semantics. I use a custom object model that allows for weak_ptrs to unique_ptrs, which I think removes some cases where people might otherwise be inclined to use shared_ptr. Shared semantics are so rare, in fact, that I would say I hardly use them at all; I go for weeks of coding without creating a shared type, not because I'm trying to do so, but because it just isn't necessary. Which is why GC seems like such a waste. Given my experience in C++, where I hardly need shared memory, I see little use for a GC (or even ARC etc.); all it will do is decrease program performance, make deterministic destruction impossible, and prevent automatic cleanup of non-memory resources. Rust seems to have caught on to what C++ has accomplished here. Oh, and Unreal? Yes, they have a GC type, "UObject". I worked on Unreal at one point; my impression was that this originated back with the original Unreal (circa 1998?), likely caused by the popularity of Java at the time. As for the Unreal code base? Pass on that.UEngine has been rewritten from scratch. UnrealScript doesn't even exist anymore. It is the new UEngine that depends on GC, and we're talking C++, not UnrealScript here (again, UnrealScript is gone).
Jul 17 2014
UE4 wasn't really rewritten from scratch; it was more like: take UE3, rewrite various parts, add new features, and keep doing that for a few years. The code style isn't modern C++. No lambdas, r-value refs, or unique types; algorithms (everyone just bangs out for loops); the task implementation is laughable; the code is mostly single threaded. Basically verbosity hell. The dependency on GC is the same as in previous versions; they did not fundamentally change the object model in UE4. I think they did work on the GC, so perhaps it is faster /shrug. They only use the GC for certain objects (deriving UObject). Powerful engine? Yes, for sure. If I needed to make a graphically AAA game ASAP I'd use UE4. That doesn't change the fact that the code is nothing impressive. The Blueprint system technically compiles down to UnrealScript bytecode -- but yes, UnrealScript is dead, thankfully.UEngine has been rewritten from scratch. UnrealScript doesn't even exist anymore. It is the new UEngine that depends on GC, and we're talking C++, not UnrealScript here (again, UnrealScript is gone).
Jul 17 2014
On Thursday, 17 July 2014 at 19:14:06 UTC, Right wrote:I'm rather fond of RAII; I find that I rarely ever need shared semantics. I use a custom object model that allows for weak_ptrs to unique_ptrs, which I think removes some cases where people might otherwise be inclined to use shared_ptr. Shared semantics are so rare, in fact, that I would say I hardly use them at all; I go for weeks of coding without creating a shared type, not because I'm trying to do so, but because it just isn't necessary. Which is why GC seems like such a waste. Given my experience in C++, where I hardly need shared memory, I see little use for a GC (or even ARC etc.); all it will do is decrease program performance, make deterministic destruction impossible, and prevent automatic cleanup of non-memory resources. Rust seems to have caught on to what C++ has accomplished here.Though GC is safer, easier and cheaper than an ownership model, which is possible in D too, if you want it.
Jul 19 2014
On 7/17/2014 11:44 AM, Russel Winder via Digitalmars-d wrote:With C++ I am coming to grips with RAII management of the heap. With Java, Groovy, Go and Python I rely on the GC doing a good job. I note though that there is a lot of evidence that the Unreal folk developed a garbage collector for C++ exactly because they didn't want to do the RAII thing.RAII has a lot of costs associated with it that I am often surprised go completely unrecognized by the RAII community: 1. the "dec" operation (i.e. shared_ptr) is expensive 2. the inability to freely mix pointers allocated with different schemes 3. slices become mostly unworkable, and slices are a fantastic way to speed up a program
Jul 19 2014
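A minimal sketch of the slicing Walter means in point 3; the slice semantics here are standard D, only the variable names are illustrative:

    void main()
    {
        string text = "GCs in the news";
        string word = text[0 .. 3];   // "GCs": a pointer/length pair, no copy
        assert(word == "GCs");
        // Under reference counting, `word` would either have to retain all
        // of `text` or be copied; with a GC the slice alone keeps it alive.
    }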
On Saturday, 19 July 2014 at 21:12:44 UTC, Walter Bright wrote:3. slices become mostly unworkable, and slices are a fantastic way to speed up a programThey are even more fantastic for speeding up programming. I think that programmer time isn't included often enough in discussions. I have a program which I used D to quickly prototype and form my baseline implementation. After getting a semi-refined implementation I converted the performance critical part to C++. The D code that survived the rewrite uses slices + ranges, and it's not worth converting that to C++ code (it would be less elegant and isn't worth the time.) The bottom line is that without D's slices, I might not have bothered bringing that small project to the level of completion it has today.
Jul 20 2014
On 7/17/14, 11:11 AM, Ary Borenszweig wrote:On 7/17/14, 2:32 PM, Right wrote:http://www.stroustrup.com/C++11FAQ.html#gc-abi AndreiI hate GC, so there.Java is everywhere and it has a GC. Go is starting to be everywhere and I don't think everyone hates GCs. :-)I see no proof of this. And not everybody hates GCs. Bye, bearophile
Jul 17 2014
On 7/17/14, 3:55 PM, Andrei Alexandrescu wrote:On 7/17/14, 11:11 AM, Ary Borenszweig wrote:Sorry, but I don't understand your reply by just reading that link.On 7/17/14, 2:32 PM, Right wrote:http://www.stroustrup.com/C++11FAQ.html#gc-abi AndreiI hate GC, so there.Java is everywhere and it has a GC. Go is starting to be everywhere and I don't think everyone hates GCs. :-)I see no proof of this. And not everybody hates GCs. Bye, bearophile
Jul 17 2014
On 7/17/14, 12:26 PM, Ary Borenszweig wrote:On 7/17/14, 3:55 PM, Andrei Alexandrescu wrote:There's work on adding optional GC to C++ starting with C++11. -- AndreiOn 7/17/14, 11:11 AM, Ary Borenszweig wrote:Sorry, but I don't understand your reply by just reading that link.On 7/17/14, 2:32 PM, Right wrote:http://www.stroustrup.com/C++11FAQ.html#gc-abi AndreiI hate GC, so there.Java is everywhere and it has a GC. Go is starting to be everywhere and I don't think everyone hates GCs. :-)I see no proof of this. And not everybody hates GCs. Bye, bearophile
Jul 17 2014
On Thursday, 17 July 2014 at 16:56:56 UTC, Vic wrote:On Thursday, 17 July 2014 at 13:29:18 UTC, John wrote: <snip>I can't think of anyone posting here, to be honest, who wants to write a better JRE. The JRE is a virtual machine, and Java compiles to bytecode that is run on the JVM. On the contrary, and in accordance with the core principle that D is a systems programming language, D compiles to (hopefully) highly optimised native machine code. There does exist something of a 'culture clash' where, by the very nature of GCs, there can be not-insignificant pauses in the running of the program that would be inimical to real-time software such as high-res complex games, operating systems, drivers etc. The response to this in the forums is either to improve the GC so that it doesn't ever pause for more than a certain amount of time (e.g. concurrent GCs, or removing the global lock so other threads can continue to run), or to offer alternative memory management approaches such as ARC, which can also have pauses, but at other points as the program runs. Personally I'm a bit disappointed that the good work that has been done on GCs so far doesn't seem to be being picked up and run with, and nor do I see any reasons given as to why that is the case. Andrei was threatening to start another GC at one point, but unfortunately I haven't seen any more of that, and we all know how short of time everyone seems to be these days. Also on a personal note, I see some slightly snarky comments about Java. I just wish it had Qt (I must finish my bindings for Qt) and/or ran on Android! The GC issues are irrelevant for me.If D came without GC, it would have replaced C++ a long time ago!Agree +1000. If GC is so good, why not make it an option, have a base lib w/o GC. If I want GC, I got me JRE. It seems that some in D want to write a better JRE, and that just won't happen ever. Cheers, Vic
Jul 17 2014
On 7/17/14, 2:57 AM, currysoup wrote:On Thursday, 17 July 2014 at 09:26:38 UTC, Chris wrote:Not at all costs! warp creates a little litter during e.g. command line preprocessing and other inconsequential tasks. The core of it is careful not to allocate frequently in inner loops.On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via Digitalmars-d wrote:It's not about "acceptance", it's about the reality that a GC is not a universal solution to memory management. Just from watching a few of the DConf 2014 talks, if you want performance you avoid the GC at all costs (even if that means allocating into huge predefined buffers).It appears still to be a general meme that performance requires no GC and GC means poor performance. The debate has been restarted on the Go mailing list under the banner "go without garbage collector". The response to "will Go remove the garbage collector?" was somewhat unequivocal: nope.That's good news in a way. If a big company accepts GC and the Go crowd go with it (pardon the pun), then it will find more acceptance (as Paulo pointed out in a different thread).Once you're going to these lengths to avoid garbage collection it begs the question, why are you even using this language? Within this community the question is rhetorical, but to outsiders I feel it's a major concern.I agree there's a perception issue. Andrei
Jul 17 2014
On Thursday, 17 July 2014 at 09:57:09 UTC, currysoup wrote:Just from watching a few of the DConf 2014 talks, if you want performance you avoid the GC at all costs (even if that means allocating into huge predefined buffers). Once you're going to these lengths to avoid garbage collection it begs the question, why are you even using this language?In D you have the choice to use the GC or not use it. You would want not to use it if you have a severe performance problem, which may or may not exist. There's no guarantee another language is a silver bullet and will magically solve all problems.
Jul 18 2014
The key to making D's GC acceptable lies in two factors, I believe. 1. Improve the implementation enough so that you will only be impacted by GC in extremely low memory or real time environments. 2. Defer allocation more and more by using ranges and algorithms more, and trust that compiler optimisations will make these fast. The big offender for extra allocations, I believe, is functions which return objects, rather than functions which write to output ranges. The single most common occurrence of this is toString. Instead of writing this...

    string toString() {
        // Allocations the user of the library has no control over.
        return foo.toString() ~ bar.toString() ~ " something else";
    }

I believe you should always, always instead write this.

    // I left out the part with different character types.
    void writeString(OutputRange)(OutputRange outputRange)
    if (isOutputRange!(OutputRange, char)) {
        // Allocations controlled by the user of the library;
        // this template could appear in a @nogc function.
        foo.writeString(outputRange);
        bar.writeString(outputRange);
        "something else".copy(outputRange);
    }

It's perhaps strange at first because you're pre-programmed from other languages, except maybe C++ which uses output streams, to always be allocating temporary objects everywhere, even if all you are doing is writing them to an object. For improving the GC to an acceptable level, I believe collection only needs to execute fast enough that it will fit within a frame comfortably. So for something rendering at 60 FPS you have 1 second / 60 frames ~= 16.6 milliseconds of computation you can do without resulting in a single dropped frame. That means you need to get collection down to something in the 1 ms to 2 ms region. At that point collection time will only impact something which is really pushing the hardware, which would exclude most mobile video games, which are about the complexity of Angry Birds. I firmly believe there's no silver bullet for automatic memory management. Reference counting solutions, including automatic reference counting, will consume less memory than a garbage collector and offer more predictable collection times, but do so at the expense of memory safety and simplicity. You need fatter pointers to manage the reference counts, and you need to carefully deal with reference cycles. In addition, you cannot easily share slices of memory with reference counting, which is an advantage of garbage collection. With GC, you can allocate a string, slice a part of it, hand over the slice to some other object, and you know that the slice will stay around for as long as it's needed. With reference counting, you have to either retain the slice and the whole segment in the same way and allow for the possibility of hidden cycles, or disallow slicing and create copies instead. Slicing with GC is important, because you can create much more efficient programs which take slices based on regex, which we do right now. For the environments which cannot tolerate collection whatsoever, like Sociomantic's real time bidding operations, control of allocation will have to be left to the user. This is where the zero allocation idea behind ranges and algorithms comes into play, because then the code which doesn't allocate, which could potentially be all of std.algorithm, can still be used in those environments, rather than being rendered unusable. Those are my thoughts on it anyway. I probably rambled on too long.
Jul 17 2014
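From the caller's side the pattern looks like the sketch below. It is a self-contained toy following w0rp's scheme (Point, writeString and the format string are illustrative names, not an existing API); appender, isOutputRange and formattedWrite are real Phobos facilities:

    import std.array : appender;
    import std.range : isOutputRange;

    // The type writes itself to any char output range instead of
    // returning a freshly allocated string.
    struct Point
    {
        int x, y;

        void writeString(Range)(ref Range sink)
        if (isOutputRange!(Range, char))
        {
            import std.format : formattedWrite;
            sink.formattedWrite("(%s, %s)", x, y);
        }
    }

    void main()
    {
        auto buf = appender!string();   // the caller picks the allocation strategy
        Point(1, 2).writeString(buf);
        assert(buf.data == "(1, 2)");
    }

A fixed stack buffer or a file-backed output range would satisfy the same interface, which is exactly the control over allocation the post argues for.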
On Thursday, 17 July 2014 at 12:37:10 UTC, w0rp wrote:For improving the GC to an acceptable level, I believe collection only needs to execute fast enough that it will fit within a frame comfortably. So for something rendering at 60 FPS you have 1 second / 60 frames ~= 16.6 milliseconds of computation you can do without resulting in a single dropped frame. That means you need to get collection down to something in the 1 ms to 2 ms region.That's easy, just make sure your heap never grows over 0.4 MB. Seriously, 200 MB of small objects in the heap = 1 second of collection. That's how bad it is now. And here Walter says it won't get much better. Ever. http://www.reddit.com/r/programming/comments/2avdod/dconf_2014_realtime_big_data_in_d_by_don_clugston/
Jul 17 2014
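For reference, deadalnix's 0.4 MB figure is just w0rp's 2 ms budget scaled by that measured rate: if collecting 200 MB of small objects costs roughly 1000 ms, then a 2 ms pause budget allows (2 / 1000) * 200 MB = 0.4 MB of collectable heap.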
On Thursday, 17 July 2014 at 12:37:10 UTC, w0rp wrote:The key to making D's GC acceptable lies in two factors, I believe. 1. Improve the implementation enough so that you will only be impacted by GC in extremely low memory or real time environments. 2. Defer allocation more and more by using ranges and algorithms more, and trust that compiler optimisations will make these fast. The big offender for extra allocations, I believe, is functions which return objects, rather than functions which write to output ranges. The single most common occurrence of this is toString. Instead of writing this...

    string toString() {
        // Allocations the user of the library has no control over.
        return foo.toString() ~ bar.toString() ~ " something else";
    }

I believe you should always, always instead write this.

    // I left out the part with different character types.
    void writeString(OutputRange)(OutputRange outputRange)
    if (isOutputRange!(OutputRange, char)) {
        // Allocations controlled by the user of the library;
        // this template could appear in a @nogc function.
        foo.writeString(outputRange);
        bar.writeString(outputRange);
        "something else".copy(outputRange);
    }

I agreed with this for a while, but following the conversation here <https://github.com/D-Programming-Language/phobos/pull/2149> I'm more inclined to think we should be adding lazy versions of functions where possible rather than versions with OutputRange parameters. It's more flexible that way and can result in even fewer allocations than OutputRange parameters would (i.e. you can have chains of lazy operations and only allocate on the final step, or not at all in some cases). Laziness isn't appropriate or possible everywhere, but it's much easier to go from lazy to eager than the other way around.[...]
Jul 17 2014
On Thursday, 17 July 2014 at 22:06:01 UTC, Brad Anderson wrote:I agreed with this for a while, but following the conversation here <https://github.com/D-Programming-Language/phobos/pull/2149> I'm more inclined to think we should be adding lazy versions of functions where possible rather than versions with OutputRange parameters. It's more flexible that way and can result in even fewer allocations than OutputRange parameters would (i.e. you can have chains of lazy operations and only allocate on the final step, or not at all in some cases). Laziness isn't appropriate or possible everywhere, but it's much easier to go from lazy to eager than the other way around.This is not comparable. Lazy input range based solutions do not make it possible to change allocation strategy, they simply defer the allocation point. Ideally both are needed.[...]
Jul 17 2014
On Thursday, 17 July 2014 at 22:16:10 UTC, Dicebot wrote:On Thursday, 17 July 2014 at 22:06:01 UTC, Brad Anderson wrote:Well the idea is that you then copy into an output range with whatever allocation strategy you want at the end. There is quite a bit of overlap I think. Not complete overlap, and OutputRange accepting functions will still be needed, but I think we should prefer the lazy approach where possible.I agreed with this for a while, but following the conversation here <https://github.com/D-Programming-Language/phobos/pull/2149> I'm more inclined to think we should be adding lazy versions of functions where possible rather than versions with OutputRange parameters. It's more flexible that way and can result in even fewer allocations than OutputRange parameters would (i.e. you can have chains of lazy operations and only allocate on the final step, or not at all in some cases). Laziness isn't appropriate or possible everywhere, but it's much easier to go from lazy to eager than the other way around.This is not comparable. Lazy input range based solutions do not make it possible to change allocation strategy, they simply defer the allocation point. Ideally both are needed.[...]
Jul 17 2014
On Thursday, 17 July 2014 at 22:21:54 UTC, Brad Anderson wrote:Well the idea is that you then copy into an output range with whatever allocation strategy you want at the end. There is quite a bit of overlap I think. Not complete overlap, and OutputRange accepting functions will still be needed, but I think we should prefer the lazy approach where possible.It is not always possible - sometimes the resulting range element must be an already "cooked" object. I do agree it is a powerful default when feasible, though. At the same time, simple output range overloads are much faster to add.
Jul 17 2014
On Thursday, 17 July 2014 at 22:27:52 UTC, Dicebot wrote:On Thursday, 17 July 2014 at 22:21:54 UTC, Brad Anderson wrote:What I'm getting from this is that we might have the chance here to redefine memory usage, as was pointed out by Teoh et al. Reduce allocations as much as possible; avoiding a problem in the first place is better than solving it. It's worth thinking in this direction, cos the GC / RC issue will always boil down to the fact that there is a price to be paid.Well the idea is that you then copy into an output range with whatever allocation strategy you want at the end. There is quite a bit of overlap I think. Not complete overlap, and OutputRange accepting functions will still be needed, but I think we should prefer the lazy approach where possible.It is not always possible - sometimes the resulting range element must be an already "cooked" object. I do agree it is a powerful default when feasible, though. At the same time, simple output range overloads are much faster to add.
Jul 17 2014
On Thu, Jul 17, 2014 at 10:27:51PM +0000, Dicebot via Digitalmars-d wrote:On Thursday, 17 July 2014 at 22:21:54 UTC, Brad Anderson wrote:Example?Well the idea is that you then copy into an output range with whatever allocation strategy you want at the end. There is quite a bit of overlap I think. Not complete overlap, and OutputRange accepting functions will still be needed, but I think we should prefer the lazy approach where possible.It is not always possible - sometimes the resulting range element must be an already "cooked" object.I do agree it is a powerful default when feasible, though. At the same time, simple output range overloads are much faster to add.As Brad said, it's far easier to go from lazy to eager than the other way round, e.g., by sticking .array at the end, or .copy(buf) where buf is allocated according to whatever scheme the user chooses. Since buf is declared by the user, the user is free to use whatever allocation mechanism he wishes; the string algorithm doesn't know nor care what it is (and it shouldn't need to). T -- What do you mean the Internet isn't filled with subliminal messages? What about all those buttons marked "submit"??
Jul 17 2014
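A minimal sketch of the lazy-to-eager pattern Teoh describes, using only standard Phobos symbols (filter, map, array, copy); the data and buffer sizes are arbitrary:

    import std.algorithm : copy, filter, map;
    import std.array : array;

    void main()
    {
        // Building the chain does no work and no allocation beyond the
        // literal; filter and map are both lazy.
        auto squares = [1, 2, 3, 4, 5].filter!(a => a % 2 == 1)
                                      .map!(a => a * a);

        auto onGcHeap = squares.array;   // eager: one GC allocation, the caller's choice
        int[8] buf;
        squares.copy(buf[]);             // eager: into caller-provided stack storage instead
    }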
On 7/17/2014 4:01 PM, H. S. Teoh via Digitalmars-d wrote:As Brad said, it's far easier to go from lazy to eager than the other way round, e.g., by sticking .array at the end, or .copy(buf) where buf is allocated according to whatever scheme the user chooses. Since buf is declared by the user, the user is free to use whatever allocation mechanism he wishes, the string algorithm doesn't know nor care what it is (and it shouldn't need to).Yup. It enables separating the allocation strategy from the algorithm.
Jul 17 2014
On 7/17/2014 3:16 PM, Dicebot wrote:On Thursday, 17 July 2014 at 22:06:01 UTC, Brad Anderson wrote:They move the allocation point to the top level, rather than the bottom or an intermediate level.I agreed with this for a while, but following the conversation here <https://github.com/D-Programming-Language/phobos/pull/2149> I'm more inclined to think we should be adding lazy versions of functions where possible rather than versions with OutputRange parameters. It's more flexible that way and can result in even fewer allocations than OutputRange parameters would (i.e. you can have chains of lazy operations and only allocate on the final step, or not at all in some cases). Laziness isn't appropriate or possible everywhere, but it's much easier to go from lazy to eager than the other way around.This is not comparable. Lazy input range based solutions do not make it possible to change allocation strategy, they simply defer the allocation point. Ideally both are needed.[...]
Jul 17 2014
On Thu, Jul 17, 2014 at 10:33:26PM -0700, Walter Bright via Digitalmars-d wrote:On 7/17/2014 3:16 PM, Dicebot wrote:Deferring the allocation point to the top level has the advantage of letting high-level user code decide what the allocation strategy should be, rather than percolating that decision down the call graph to every low-level function. Of course, it's not always possible to defer this, such as if you need to tell a container which allocator to use. But IMO this should be pushed up to higher-level code whenever possible. T -- Why can't you just be a nonconformist like everyone else? -- YHLOn Thursday, 17 July 2014 at 22:06:01 UTC, Brad Anderson wrote:They move the allocation point to the top level, rather than the bottom or an intermediate level.I agreed with this for a while, but following the conversation here <https://github.com/D-Programming-Language/phobos/pull/2149> I'm more inclined to think we should be adding lazy versions of functions where possible rather than versions with OutputRange parameters. It's more flexible that way and can result in even fewer allocations than OutputRange parameters would (i.e. you can have chains of lazy operations and only allocate on the final step, or not at all in some cases). Laziness isn't appropriate or possible everywhere, but it's much easier to go from lazy to eager than the other way around.This is not comparable. Lazy input range based solutions do not make it possible to change allocation strategy, they simply defer the allocation point. Ideally both are needed.[...]
Jul 17 2014
On 7/17/2014 10:47 PM, H. S. Teoh via Digitalmars-d wrote:Deferring the allocation point to the top level has the advantage of letting high-level user code decide what the allocation strategy should be, rather than percolating that decision down the call graph to every low-level function.Exactly.Of course, it's not always possible to defer this, such as if you need to tell a container which allocator to use. But IMO this should be pushed up to higher-level code whenever possible.Andrei's allocator scheme addresses this. It will also allow such decisions to be made at the high level.
Jul 17 2014
On 17 Jul 2014 13:40, "w0rp via Digitalmars-d" <digitalmars-d puremagic.com> wrote:The key to making D's GC acceptable lies in two factors, I believe. 1. Improve the implementation enough so that you will only be impacted by GC in extremely low memory or real time environments.2. Defer allocation more and more by using ranges and algorithms more, and trust that compiler optimisations will make these fast.How about 1. Make it easier to select which GC you want to use at runtime init. 2. Write an alternate GC aimed at different application uses (ie: real-time) We already have (at least) three GC implementations for D. Regards Iain
Jul 20 2014
On Sunday, 20 July 2014 at 08:41:16 UTC, Iain Buclaw via Digitalmars-d wrote:On 17 Jul 2014 13:40, "w0rp via Digitalmars-d"Yes, please! Being able to specify an alternate memory manager at compile time, link time and/or runtime would be most advantageous, and would probably put an end to the GC-phobia. DIP46 [1] also proposes an interesting alternative to the GC by creating regions at runtime. And given the passion surrounding the GC in this community, if runtime hooks and/or a suitable API for custom memory managers were created and documented, it would invite participation, and an informal, highly competitive contest for the best GC would likely ensue. Mike [1] http://wiki.dlang.org/DIP46The key to making D's GC acceptable lies in two factors, I believe. 1. Improve the implementation enough so that you will only be impacted by GC in extremely low memory or real time environments.2. Defer allocation more and more by using ranges and algorithms more, and trust that compiler optimisations will make these fast.How about 1. Make it easier to select which GC you want to use at runtime init. 2. Write an alternate GC aimed at different application uses (ie: real-time)
Jul 20 2014
On Sunday, 20 July 2014 at 11:44:56 UTC, Mike wrote:Being able to specify an alternate memory manager at compile time, link time and/or runtime would be most advantageous, and would probably put an end to the GC-phobia.AFAIK, the GC is not directly referenced in druntime, so you should already be able to link with a different GC implementation. If you provide all the symbols requested by the code, the linker won't link the default GC module.
Jul 20 2014
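What linking a replacement looks like, roughly: druntime reaches the GC through extern(C) entry points, so a module defining them all pre-empts the default implementation at link time. The names and signatures below are only an approximation of druntime's gcstub of the time and must be checked against the druntime version actually in use (see gcstub/gc.d for the full set):

    import core.stdc.stdlib : free, malloc;

    // Illustrative subset only; a real replacement must provide every
    // entry point the runtime expects.
    extern (C) void gc_init() { /* set up the allocator */ }
    extern (C) void gc_term() { /* tear it down */ }
    extern (C) void* gc_malloc(size_t sz, uint bits = 0)
    {
        return malloc(sz);   // e.g. malloc-without-collection, as gcstub does
    }
    extern (C) void gc_free(void* p)
    {
        free(p);
    }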
On Sunday, 20 July 2014 at 12:07:47 UTC, Kagamin wrote:On Sunday, 20 July 2014 at 11:44:56 UTC, Mike wrote:Yes, I believe you are correct. I also believe there is even a GCStub in the runtime that uses malloc without free. What's missing is API documentation and examples that make such features accessible. Also missing are language/runtime hooks that could allow users to try alternative memory management schemes such as ARC and find what works best for them through experimentation. In short, IMO, D should not embrace one type of automatic memory management; it should make memory management extensible. In time, two or three high quality memory managers will prevail. MikeBeing able to specify an alternate memory manager at compile time, link time and/or runtime would be most advantageous, and would probably put an end to the GC-phobia.AFAIK, the GC is not directly referenced in druntime, so you should already be able to link with a different GC implementation. If you provide all the symbols requested by the code, the linker won't link the default GC module.
Jul 20 2014
On Sunday, 20 July 2014 at 12:30:02 UTC, Mike wrote:Yes, I believe you are correct. I also believe there is even a GCStub in the runtime that uses malloc without free. What's missing is API documentation and examples that make such features accessible.The existing functions should be understandable, so you can document them yourself. If you want to standardize the API, you can write a small wrapper library, which will account for possible internal API changes and map them to your standard API. Examples are up to you, since nobody knows what features you will implement in your GC implementation and what API they should have. You have gcstub as an example, with its GC proxy substitution API.In short, IMO, D should not embrace one type of automatic memory management; it should make memory management extensible. In time, two or three high quality memory managers will prevail.It's a matter of writing an appropriate library and providing it as a dub module. You know best what you want; you are the one to make your wish come to life.
Jul 21 2014
On Thursday, 17 July 2014 at 09:20:36 UTC, Russel Winder via Digitalmars-d wrote:It appears still to be a general meme that performance requires no GC and GC means poor performance. The debate has been restarted on the Go mailing list under the banner "go without garbage collector". The response to "will Go remove the garbage collector?" was somewhat unequivocal: nope.GC or no GC: is that the right question? The quality of the GC implementation is probably more important. "Simpler and faster GC for Go" https://docs.google.com/document/d/1v4Oqa0WwHunqlb8C3ObL_uNQw3DfSY-ztoA-4wWbKcg/pub Another point that gets ignored in such debates is that GC gives a solution for only one problem: memory management. How about other resources; how are they to be managed?
Jul 17 2014
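On Remo's closing question, D's existing answer for non-memory resources is deterministic, GC-independent cleanup via struct destructors and scope guards; a small sketch (File and byLine are standard Phobos, the rest is illustrative):

    import std.stdio : File, writeln;

    void process(string path)
    {
        auto f = File(path, "r");          // File's destructor closes the handle
        scope (exit) writeln("finished");  // runs however the scope is left
        foreach (line; f.byLine())
        {
            // ... work on the line; the cleanup above involves no GC
        }
    }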
On Thursday, 17 July 2014 at 13:02:22 UTC, Remo wrote: <snip>The quality of the GC implementation is probably more important.I disagree; I am a burn victim and don't trust smoke. Ideally it is optional. Cheers, Vic
Jul 17 2014
On Thursday, 17 July 2014 at 17:36:36 UTC, Vic wrote:On Thursday, 17 July 2014 at 13:02:22 UTC, Remo wrote: <snip>Well, it appears to be very hard to make a proper GC. So could all the hate against GC be down to a suboptimal implementation? Anyway, as written before, memory is not the only resource that needs to be managed. So a language needs to offer a solution not only for memory management but for all other resources. In C++ this is called RAII and works reasonably well. Rust looks even more promising to me.The quality of the GC implementation is probably more important.I disagree; I am a burn victim and don't trust smoke.Ideally it is optional.Yes, for me too. The GC must be optional. I hope @nogc will allow this for D.Cheers, Vic
Jul 17 2014