
digitalmars.D - Disable GC entirely

reply Adrian Mercieca <amercieca gmail.com> writes:
Hi,

Is it possible to switch off the GC entirely in D?
Can the GC be switched off completely - including within phobos?

What I am looking for is absolute control over memory management.
I've done some tests with GC on and GC off and the performance with GC is 
not good enough for my requirements.

Thanks.
- Adrian.
Apr 05 2013
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Saturday, 6 April 2013 at 04:16:13 UTC, Adrian Mercieca wrote:
 Hi,

 Is it possible to switch off the GC entirely in D?
 Can the GC be switched off completely - including within phobos?
import core.memory;
GC.disable();

However, most D code - including the runtime and standard library - assumes that a GC is present, and thus may leak memory.
 What I am looking for is absolute control over memory 
 management.
 I've done some tests with GC on and GC off and the performance 
 with GC is
 not good enough for my requirements.
The GC will not be invoked unless you allocate memory (either explicitly, using "new", or using D's features which do so, such as dynamic arrays and closures). If you do not make use of those features, no GC code will run.
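For illustration, a minimal sketch of what that looks like in practice (the buffer sizes are made up; nothing here touches the GC heap once it is disabled):

    import core.memory : GC;
    import core.stdc.stdlib : malloc, free;

    void main()
    {
        GC.disable();                 // no automatic collections from here on

        int[16] scratch;              // fixed-size array lives on the stack
        auto buf = cast(int*) malloc(1024 * int.sizeof);  // C heap, not GC heap
        scope(exit) free(buf);

        // The following WOULD go through the GC allocator (dynamic array append):
        // int[] xs; xs ~= 1;
    }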
Apr 05 2013
parent reply Adrian Mercieca <amercieca gmail.com> writes:
Thanks for your very quick answer Vladimir.

 On Saturday, 6 April 2013 at 04:16:13 UTC, Adrian Mercieca wrote:
 Hi,

 Is it possible to switch off the GC entirely in D? Can the GC be
 switched off completely - including within phobos?
import core.memory; GC.disable(); However, most D code - including the runtime and standard library - assume that a GC is present, and thus may leak memory.
Guess that answers the question; even if I avoid the GC in my own code and disable it as you say, the runtime and standard library will still leak - rendering the whole thing not an option.

So I'll either have to forgo the runtime and standard library and implement everything I need myself without the GC, or else stick to C++. The latter would be a pity because I really like D, but in C++ I have full control and the performance is always good.

In my very simple test, the GC version of my program ran more than twice as slow as the non-GC version. I just cannot accept that kind of performance penalty.

Thanks.
Apr 06 2013
next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Saturday, 6 April 2013 at 08:01:09 UTC, Adrian Mercieca wrote:
 So I'll either have to not use the runtime+standard libraries 
 and
 implement all I'd need myself without GC or else stick to C++. 
 The latter
 would be a pity because I really like D, but then in C++ I have 
 full
 control and the performance is always good.
It is actually even worse, as omitting the runtime cripples the core language noticeably. I was pointed to this cool project, though: https://bitbucket.org/timosi/minlibd
 In my very simple test, the GC version of my program ran more 
 than twice
 slower than the non GC version. I just cannot accept that kind 
 of
 performance penalty.
Raw performance is more or less achievable if you use your own memory pools for data and limit the GC to language constructs (appending to slices, delegates, etc.) - those two approaches can happily coexist in D. Problems start when soft real-time requirements appear.
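A rough sketch of what that coexistence can look like (the struct and the sizes are made up for illustration):

    import core.stdc.stdlib : malloc, free;

    struct Particle { float x, y, vx, vy; }

    void main()
    {
        // Bulk data lives in a manually managed block...
        auto mem = cast(Particle*) malloc(10_000 * Particle.sizeof);
        scope(exit) free(mem);
        Particle[] particles = mem[0 .. 10_000];  // slice over non-GC memory

        // ...while GC-backed language features stay available where convenient.
        string[] log;
        log ~= "spawned particles";               // this append uses the GC allocator
    }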
Apr 06 2013
prev sibling next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 06.04.2013 10:01, schrieb Adrian Mercieca:
 Thanks for your very quick answer Vladimir.

 On Saturday, 6 April 2013 at 04:16:13 UTC, Adrian Mercieca wrote:
 Hi,

 Is it possible to switch off the GC entirely in D? Can the GC be
 switched off completely - including within phobos?
import core.memory; GC.disable(); However, most D code - including the runtime and standard library - assume that a GC is present, and thus may leak memory.
Guess that answers the question; even if I avoid the GC in my own code and disable it as you say, then the runtime and standard library will leak - rendering the whole thing as not-an-option. So I'll either have to not use the runtime+standard libraries and implement all I'd need myself without GC or else stick to C++. The latter would be a pity because I really like D, but then in C++ I have full control and the performance is always good. In my very simple test, the GC version of my program ran more than twice slower than the non GC version. I just cannot accept that kind of performance penalty. Thanks.
D's GC is not as good as those of some other systems programming languages, like Oberon or Active Oberon, to cite two examples of many.

However, does the current performance really impact the type of applications you are writing?

I'm asking because I have always found that the C and C++ communities care too much about micro-optimizations in cases where it does not matter. Coming from a polyglot background, I never managed to grok that.

There are cases, however, where every byte and every ms matter; in those cases you are still better off with C, C++ and Fortran.

--
Paulo
Apr 06 2013
next sibling parent reply Adrian Mercieca <amercieca gmail.com> writes:
Hi
 
 D's GC is not as good as some other system programming languages like
 Oberon or Active Oberon, just to cite two examples from many.
As I said, maybe it's time (IMHO) for D's GC to be addressed - or otherwise dropped.
 
 However, does the current performance really impact the type of
 applications you are writing?
Yes it does; and to be honest, I don't buy into this argument that for certain apps I don't need the speed and all that... why should I ever want a slower app? And if performance were not such an issue, to be perfectly frank, then Java would more than suffice and I would not be looking at D in the first place.

D is supposed to be a better C++ (or at least that's what I have been led to believe - or like to believe)... so it's got to be an improvement all round. It is a better structured and neater language, but if that's going to come at the price of being slower than C++, then at the end of the day it is not an improvement at all.
 
 I'm asking because I always found the C and C++ communities always care
 too much about micro optimizations in cases it does not matter. Coming
 from a polyglot background I never managed to grok that.
 
 However there are cases where every byte and every ms matter, in those
 cases you are still better with C, C++ and Fortran.
But why are you so quick to give up on D being as fast as C++? Wouldn't it be just awesome if D - with its better constructs and all that - were just as fast as C++? Can't someone achieve the best of both worlds?

I feel that D is very close to that: a great, versatile and powerful language... if only the performance were as good as C++'s, then it would be what I have always dreamt of.

Just my 2p worth...
Apr 07 2013
next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Sunday, 7 April 2013 at 09:10:14 UTC, Adrian Mercieca wrote:
 However, does the current performance really impact the type of
 applications you are writing?
Yes it does; and to be honest, I don't buy into this argument that for certain apps I don't need the speed and all that... why should I ever want a slower app? And if performance was not such an issue, to be perfectly frank, then Java would more than suffice and I would not be looking at D in the first place.
The point here is that applications that care about performance don't do dynamic allocations at all. Both the GC and malloc are slow; pools of pre-allocated memory are used instead. Standard-library helpers for those would be nice, but in any case they are GC-agnostic and hardly done any differently than in C++. So it should be possible to achieve performance similar to C/C++ even with the current bad GC, if the application's memory architecture is done right.

It is not a panacea, and sometimes the very existence of a GC harms performance requirements (when not only speed but also latency matters). That is true. But for performance-hungry user applications the situation is pretty acceptable right now. Well, it will be, once an easy way to track accidental gc_malloc calls is added.
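A toy sketch of the pre-allocated pool idea (illustration only, not a production allocator: one block reserved up front, a bump pointer, and a wholesale reset instead of per-object freeing):

    struct Region
    {
        ubyte[] block;   // pre-allocated once, e.g. at startup
        size_t used;

        void* alloc(size_t n)
        {
            n = (n + size_t.sizeof - 1) & ~(size_t.sizeof - 1);  // round up
            if (used + n > block.length) return null;            // pool exhausted
            auto p = block.ptr + used;
            used += n;
            return p;
        }

        void reset() { used = 0; }   // release everything at once, e.g. per frame
    }

    __gshared ubyte[1 << 20] storage;   // 1 MiB reserved up front
    // used e.g. as:
    //   auto region = Region(storage[]);
    //   auto p = region.alloc(256);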
Apr 07 2013
parent reply "Rob T" <alanb ucora.com> writes:
On Sunday, 7 April 2013 at 09:41:21 UTC, Dicebot wrote:
[...]
 applications situation is pretty acceptable right now. Well, it 
 will be, once easy way to track accidental gc_malloc calls is 
 added.
That's the critical missing piece of the puzzle. In effect we need to be able to use a sub-set of D that is 100% GC free. Currently, writing GC-free applications in D may be theoretically possible, but it is simply not a practical option in most situations for most people; it's far too error prone and fragile.

--rt
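A minimal sketch of what such a checked GC-free subset looks like, using the @nogc function attribute (an assumption here: @nogc was only added to the language in compiler releases after this thread):

    @nogc void update(int[] buf, int value)
    {
        foreach (ref b; buf)
            b += value;        // fine: no allocation happens here

        // buf ~= value;       // would be rejected at compile time inside @nogc
    }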
Apr 07 2013
parent Adrian Mercieca <adrian777 onvol.net> writes:
 That's the critical missing piece of the puzzle. In effect we 
 need to be able to use a sub-set of D that is 100% GC free. 
That's it actually - spot on. If only we could write 100% GC free D code... that would be it.

----Android NewsGroup Reader----
http://www.piaohong.tk/newsgroup
Apr 07 2013
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 07.04.2013 11:10, schrieb Adrian Mercieca:
 Hi
 D's GC is not as good as some other system programming languages like
 Oberon or Active Oberon, just to cite two examples from many.
As I said, maybe it's time (IMHO) for D's GC to be addressed - or otherwise dropped.
 However, does the current performance really impact the type of
 applications you are writing?
Yes it does; and to be honest, I don't buy into this argument that for certain apps I don't need the speed and all that... why should I ever want a slower app? And if performance was not such an issue, to be perfectly frank, then Java would more than suffice and I would not be looking at D in the first place. D is supposed to be a better C++ (or at least that's what I have been led to believe - or like to believe)...... so it's got to be an improvement all round. It is a better structured and neater language, but if it's going to come at the price of being slower than C++, then at the end of the day it is not an improvement at all.
The current compilers just don't have the more than 20 years of investment in code optimization that C++ compilers have. You cannot expect to achieve that overnight.
 I'm asking because I always found the C and C++ communities always care
 too much about micro optimizations in cases it does not matter. Coming
 from a polyglot background I never managed to grok that.

 However there are cases where every byte and every ms matter, in those
 cases you are still better with C, C++ and Fortran.
But why are you so quick to give up on D being as fast as C++ ? Wouldn't it be just awesome if D - with its better constructs and all that - was just as fast as C++ ? Can't it just be that someone does achieve the best of both worlds? I feel that D is very close to that: a great, versatile and powerful language... if only the performance was as good as C++'s then it'll be what I have always dreamt of. Just my 2p worth...
I am not giving up on speed. It just happens that I have been coding since 1986, and I am a polyglot programmer who started doing systems programming in the Pascal family of languages before moving into C and C++ land.

Except for some cases, it does not matter if you get an answer in 1s or 2ms, yet most single-language C and C++ developers care about the 2ms case even before starting to code; this is what I don't approve of.

Of course I think that, given time, D compilers will be able to achieve C++-like performance, even with a GC or, who knows, a reference-counted version.

Nowadays the only place I do manual memory management is when writing Assembly code.

--
Paulo
Apr 07 2013
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 04/07/2013 12:59 PM, Paulo Pinto wrote:
 ...

 The current compilers just don't have the amount of investment in more
 than 20 years of code optimization like C++ has. You cannot expect to
 achieve that from one moment to the other.
 ...
GDC certainly has. Parts of the runtime could use some investment.
 Nowadays the only place I do manual memory management is when writing
 Assembly code.
I do not buy that. Maintaining mutable state is a form of manual memory management.
Apr 07 2013
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 08-Apr-2013 00:28, Timon Gehr wrote:
 On 04/07/2013 12:59 PM, Paulo Pinto wrote:
 ...

 The current compilers just don't have the amount of investment in more
 than 20 years of code optimization like C++ has. You cannot expect to
 achieve that from one moment to the other.
 ...
GDC certainly has. Parts of the runtime could use some investment.
Similar understanding here. There is not a single thing that GCC has that D on GDC couldn't have.

--
Dmitry Olshansky
Apr 07 2013
next sibling parent reply "Minas Mina" <minas_mina1990 hotmail.co.uk> writes:
I agree that language support for disabling the GC should exist. 
D, as I understand, is targeting C++ programmers (primarily). 
Those people are concerned about performance. If D, as a systems 
programming language, can't deliver that, they aren't going to 
use it just because it has better templates (to name something).
Apr 07 2013
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
Am 07.04.2013 23:07, schrieb Minas Mina:
 I agree that language support for disabling the GC should exist. D, as I
 understand, is targeting C++ programmers (primarily). Those people are
 concerned about performance. If D as a systems programming language,
 can't deliver that, they aren't going to use it just because it has
 better templates (to name something).
Just as an example: this startup sells Oberon compilers for embedded boards (Cortex-M3 and NXP LPC2000): http://www.astrobe.com/default.htm

You get a normal garbage-collected systems programming language running on bare metal on these systems. Of course it also allows for manual memory management in modules that import the pseudo-module SYSTEM, similar to system code in D.

The company has existed since 1997, so they must be doing something right.

In my modest opinion, this plus improving the GC's current performance would be enough. Or maybe have the option to use reference counting instead of the GC, but not disabling automatic memory management altogether.

--
Paulo
Apr 07 2013
prev sibling parent Adrian Mercieca <adrian777 onvol.net> writes:
"Minas Mina" <minas_mina1990 hotmail.co.uk> Wrote in message:
 I agree that language support for disabling the GC should exist. 
 D, as I understand, is targeting C++ programmers (primarily). 
 Those people are concerned about performance. If D as a systems 
 programming language, can't deliver that, they aren't going to 
 use it just because it has better templates (to name something).
 
Very well put. I want to be able to write programs as fast as C++ ones... in D. D is it for me... I just need to not be hampered by a GC - particularly when its implementation is somewhat lagging.

----Android NewsGroup Reader----
http://www.piaohong.tk/newsgroup
Apr 07 2013
prev sibling parent Paulo Pinto <pjmlp progtools.org> writes:
Am 07.04.2013 22:49, schrieb Dmitry Olshansky:
On 08-Apr-2013 00:28, Timon Gehr wrote:
 On 04/07/2013 12:59 PM, Paulo Pinto wrote:
 ...

 The current compilers just don't have the amount of investment in more
 than 20 years of code optimization like C++ has. You cannot expect to
 achieve that from one moment to the other.
 ...
GDC certainly has. Parts of the runtime could use some investment.
Similar understanding here. There is not a single thing D on GDC couldn't have that GCC has.
Fair enough. I tend to only use dmd to play around with the language.

--
Paulo
Apr 07 2013
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 07.04.2013 22:28, schrieb Timon Gehr:
 On 04/07/2013 12:59 PM, Paulo Pinto wrote:
 ...

 The current compilers just don't have the amount of investment in more
 than 20 years of code optimization like C++ has. You cannot expect to
 achieve that from one moment to the other.
 ...
GDC certainly has. Parts of the runtime could use some investment.
 Nowadays the only place I do manual memory management is when writing
 Assembly code.
I do not buy that. Maintaining mutable state is a form of manual memory management.
I don't follow that. Since 2002 I haven't written any C code, only C++, JVM and .NET languages.

While at CERN, my team (Atlas HLT) had a golden rule: new and delete were only allowed in library code. Application code had to rely on the STL, Boost, or the CERN libraries' own allocation mechanisms. Nowadays 100% of the C++ code I write makes use of reference counting.

In the last month I started to port a toy compiler, written in '99 during my last university year, from Java 1.1 to Java 7, while updating the generated Assembly code and the C-based runtime library. It is the first time since 2002 that I have had to care about manual memory management.

Why should maintaining mutable state be like manual memory management, if you have in place the required GC/reference counting helpers?

--
Paulo
Apr 07 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 04/07/2013 11:11 PM, Paulo Pinto wrote:
 ...

 Why should maintaining mutable state be like manual memory management,
 if you have in place the required GC/reference counter helpers?

 ...
Every time a variable is reassigned, its old value is destroyed.
Apr 07 2013
parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 08.04.2013 00:27, schrieb Timon Gehr:
 On 04/07/2013 11:11 PM, Paulo Pinto wrote:
 ...

 Why should maintaining mutable state be like manual memory management,
 if you have in place the required GC/reference counter helpers?

 ...
Every time a variable is reassigned, its old value is destroyed.
I do have a functional and logic programming background and still fail to see how that is manual memory management.

--
Paulo
Apr 07 2013
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04/08/2013 12:33 AM, Paulo Pinto wrote:
 Am 08.04.2013 00:27, schrieb Timon Gehr:
 On 04/07/2013 11:11 PM, Paulo Pinto wrote:
 ...

 Why should maintaining mutable state be like manual memory management,
 if you have in place the required GC/reference counter helpers?

 ...
Every time a variable is reassigned, its old value is destroyed.
I do have functional and logic programming background and still fail to see how that is manual memory management. -- Paulo
What would be a feature that clearly distinguishes the two?
Apr 07 2013
prev sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Sunday, 7 April 2013 at 22:33:04 UTC, Paulo Pinto wrote:
 Am 08.04.2013 00:27, schrieb Timon Gehr:
 Every time a variable is reassigned, its old value is 
 destroyed.
I do have functional and logic programming background and still fail to see how that is manual memory management.
Mutable state is essentially an optimisation that reuses the same memory for a new value (as opposed to heap allocating the new value). In that sense, mutable state is manual memory management because you have to manage what has access to that memory.
Apr 07 2013
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Sunday, 7 April 2013 at 22:59:37 UTC, Peter Alexander wrote:
 On Sunday, 7 April 2013 at 22:33:04 UTC, Paulo Pinto wrote:
 Am 08.04.2013 00:27, schrieb Timon Gehr:
 Every time a variable is reassigned, its old value is 
 destroyed.
I do have functional and logic programming background and still fail to see how that is manual memory management.
Mutable state is essentially an optimisation that reuses the same memory for a new value (as opposed to heap allocating the new value). In that sense, mutable state is manual memory management because you have to manage what has access to that memory.
If you as a developer don't explicitly call any language API to acquire/release a resource, and it is actually done on your behalf by the runtime, it is not manual memory management.
Apr 07 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 04/08/2013 07:55 AM, Paulo Pinto wrote:
 On Sunday, 7 April 2013 at 22:59:37 UTC, Peter Alexander wrote:
 On Sunday, 7 April 2013 at 22:33:04 UTC, Paulo Pinto wrote:
 Am 08.04.2013 00:27, schrieb Timon Gehr:
 Every time a variable is reassigned, its old value is destroyed.
I do have functional and logic programming background and still fail to see how that is manual memory management.
Mutable state is essentially an optimisation that reuses the same memory for a new value (as opposed to heap allocating the new value). In that sense, mutable state is manual memory management because you have to manage what has access to that memory.
If you as a developer don't call explicitly any language API to acquire/release resource and it is actually done on your behalf by the runtime, it is not manual memory management.
a = b; ^- explicit "language API" call
Apr 08 2013
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 8 April 2013 at 10:13:36 UTC, Timon Gehr wrote:
 On 04/08/2013 07:55 AM, Paulo Pinto wrote:
 On Sunday, 7 April 2013 at 22:59:37 UTC, Peter Alexander wrote:
 On Sunday, 7 April 2013 at 22:33:04 UTC, Paulo Pinto wrote:
 Am 08.04.2013 00:27, schrieb Timon Gehr:
 Every time a variable is reassigned, its old value is 
 destroyed.
I do have functional and logic programming background and still fail to see how that is manual memory management.
Mutable state is essentially an optimisation that reuses the same memory for a new value (as opposed to heap allocating the new value). In that sense, mutable state is manual memory management because you have to manage what has access to that memory.
If you as a developer don't call explicitly any language API to acquire/release resource and it is actually done on your behalf by the runtime, it is not manual memory management.
a = b; ^- explicit "language API" call
I give up
Apr 08 2013
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 7 April 2013 20:59, Paulo Pinto <pjmlp progtools.org> wrote:

 I am not giving up speed. It just happens that I have been coding since
 1986 and I am a polyglot programmer that started doing system programming
 in the Pascal family of languages, before moving into C and C++ land.

 Except for some cases, it does not matter if you get an answer in 1s or
 2ms, however most single language C and C++ developers care about the 2ms
 case even before starting to code, this is what I don't approve.
Bear in mind, most remaining C/C++ programmers are realtime programmers, and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run realtime software. If I chose not to care about 2ms only 8 times, I'd have no time left. I would cut off my left nut for 2ms most working days!

I typically measure execution times in 10s of microseconds; if something measures in milliseconds it's a catastrophe that needs to be urgently addressed... and you're correct, as a C/C++ programmer, I DO design with consideration for sub-ms execution times before I write a single line of code.

Consequently, I have seen the GC burn well into the ms on occasion, and as such, it is completely unacceptable in realtime software. The GC really needs to be addressed in terms of performance; it can't stop the world for milliseconds at a time. I'd be happy to give it ~150us every 16ms, but NOT 2ms every 200ms. Alternatively, some urgency needs to be invested in tools to help programmers track accidental GC allocations.

I cope with D in realtime software by carefully avoiding excess GC usage, which, sadly, means basically avoiding the standard library at all costs. People use concatenations all through the std lib, in the strangest places; I just can't trust it at all anymore. I found a weird one just a couple of days ago in the function toUpperInPlace() (!! it allocates !!), but only when it encountered a UTF-8 sequence, which means I didn't even notice while working in my language! >_<
Imagine it, I would have gotten a bug like "game runs slow in russian", and I would have been SOOOO "what the ****!?", while crunching to ship the product...

That isn't to say I don't appreciate the idea of the GC, if it were efficient enough for me to use. I do use it, but very carefully. If there are only a few GC allocations it's okay at the moment, but I almost always run into trouble when I call virtually any std library function within loops. That's the critical danger in my experience.

Walter's claim is that D's inefficient GC is mitigated by the fact that D produces less garbage than other languages, and this is true to an extent. But given that is the case, to be reliable, it is of critical importance that:
a) the programmer is aware of every allocation they are making; they can't be hidden inside benign-looking library calls like toUpperInPlace.
b) all allocations should be deliberate.
c) helpful messages/debugging features need to be available to track where allocations are coming from. Standardised statistical output would be most helpful.
d) alternatives need to be available for the functions that allocate by nature, or an option for user-supplied allocators, like the STL has, so one can allocate from a pool instead.
e) D is not very good at reducing localised allocations to the stack; this needs some attention (array initialisation is particularly dangerous).
f) the GC could do with budgeting controls. I'd like to assign it 150us per 16ms, and it would defer excess workload to later frames.

 Of course I think given time D compilers will be able to achieve C++ like
 performance, even with GC or who knows, a reference counted version.

 Nowadays the only place I do manual memory management is when writing
 Assembly code.
Apparently you don't write realtime software. I get so frustrated on this forum by how few people care about realtime software, or any architecture other than x86 (no offense to you personally, it's a general observation).

Have you ever noticed how smooth and slick the iPhone UI feels? It runs at 60Hz and doesn't miss a beat. It wouldn't work in D.

Video games can't stutter, audio/video processing can't stutter. These are all important tasks in modern computing. The vast majority of personal computers in the world today are in people's pockets running relatively weak ARM processors, and virtually every user of these devices appreciates the smooth operation of the devices' interfaces. People tend to complain when their device is locking up or stuttering.

These small, weak devices are increasingly becoming responsible for _most_ personal computing tasks these days, and apart from the web, most personal computing tasks are realtime in some way (music/video, skype, etc). It's not a small industry. It is, perhaps, the largest computing industry, and sadly D is not yet generally deployable to the average engineer... only to the D enthusiast prepared to take the time to hold its hand until this important issue is addressed.
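To make the "hidden allocation" point above concrete, here is a short sketch of ordinary-looking D constructs that quietly hit the GC allocator (toUpperInPlace is the case mentioned earlier; the rest are plain language features):

    import std.string;

    void frame(int[] xs)
    {
        xs ~= 42;                        // slice append: may allocate
        auto ys = xs ~ xs;               // concatenation: always allocates
        int n = 3;
        auto dg = () => xs.length + n;   // closure over locals: heap-allocates a context
        char[] s = "straße".dup;
        toUpperInPlace(s);               // allocates when it hits non-ASCII, as noted above
    }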
Apr 07 2013
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 8 April 2013 at 03:13:00 UTC, Manu wrote:
 On 7 April 2013 20:59, Paulo Pinto <pjmlp progtools.org> wrote:

 I am not giving up speed. It just happens that I have been 
 coding since
 1986 and I am a polyglot programmer that started doing system 
 programming
 in the Pascal family of languages, before moving into C and 
 C++ land.

 Except for some cases, it does not matter if you get an answer 
 in 1s or
 2ms, however most single language C and C++ developers care 
 about the 2ms
 case even before starting to code, this is what I don't 
 approve.
Bear in mind, most remaining C/C++ programmers are realtime programmers, and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run realtime software. If I chose not to care about 2ms only 8 times, I'll have no time left. I would cut off my left nut for 2ms most working days! I typically measure execution times in 10s of microseconds, if something measures in milliseconds it's a catastrophe that needs to be urgently addressed... and you're correct, as a C/C++ programmer, I DO design with consideration for sub-ms execution times before I write a single line of code. Consequently, I have seen the GC burn well into the ms on occasion, and as such, it is completely unacceptable in realtime software.
I do understand that. The thing is that, since I have been coding since 1986, I remember people complaining that C and Turbo Pascal were too slow: let's code everything in Assembly. Then C became alright, but C++ and Ada were too slow; god forbid calling virtual methods or doing any operator calls in C++'s case.

Afterwards the same discussion came around with the JVM and .NET environments, which, while making GC widespread, also had the sad side-effect of making younger generations think that safe languages require a VM, when that is not true.

Nowadays template-based code beats C, systems programming is moving to C++ in mainstream OSes, leaving C behind, while some security-conscious areas are adopting Ada and Spark.

So when someone makes claims about the speed benefits that C and C++ currently have, I smile, as I remember having this kind of discussion with C in the role of the too-slow language.
 Walter's claim is that D's inefficient GC is mitigated by the 
 fact that D
 produces less garbage than other languages, and this is true to 
 an extent.
 But given that is the case, to be reliable, it is of critical 
 importance
 that:
 a) the programmer is aware of every allocation they are making, 
 they can't
 be hidden inside benign looking library calls like 
 toUpperInPlace.
 b) all allocations should be deliberate.
 c) helpful messages/debugging features need to be available to 
 track where
 allocations are coming from. standardised statistical output 
 would be most
 helpful.
 d) alternatives need to be available for the functions that 
 allocate by
 nature, or an option for user-supplied allocators, like STL, so 
 one can
 allocate from a pool instead.
 e) D is not very good at reducing localised allocations to the 
 stack, this
 needs some attention. (array initialisation is particularly 
 dangerous)
 f) the GC could do with budgeting controls. I'd like to assign 
 it 150us per
 16ms, and it would defer excess workload to later frames.
No doubt D's GC needs to be improved, but I doubt making D a manually memory-managed language will improve the language's adoption, given that all new systems programming languages use either GC or reference counting as the default memory management.

What you need is a way to do controlled allocations for the few cases where there is no way around it, but this should be reserved for modules with system code and not scattered everywhere.
 Of course I think given time D compilers will be able to 
 achieve C++ like
 performance, even with GC or who knows, a reference counted 
 version.

 Nowadays the only place I do manual memory management is when 
 writing
 Assembly code.
Apparently you don't write realtime software. I get so frustrated on this forum by how few people care about realtime software, or any architecture other than x86 (no offense to you personally, it's a general observation). Have you ever noticed how smooth and slick the iPhone UI feels? It runs at 60hz and doesn't miss a beat. It wouldn't work in D. Video games can't stutter, audio/video processing can't stutter. ....
I am well aware of that, and actually I do follow the game industry quite closely, it being my second interest after systems/distributed computing. And I used to be an IGDA member for quite a few years.

However, I do see a lot of games being pushed out the door in... Yeah, most of them are not AAA, but that does not make them less enjoyable.

I also had the pleasure of being able to use the Native Oberon and AOS operating systems back in the late 90's at the university - desktop operating systems written in GC'd systems programming languages. Sure, you could do manual memory management, but only via the SYSTEM pseudo-module. One of the applications was a video player; just the decoder was written in Assembly.

http://ignorethecode.net/blog/2009/04/22/oberon/

In the end the question is: what would a D version with just manual memory management have as a compelling feature against C++1y and Ada, already established languages with industry standards?

Then again, my lack of experience in the embedded world invalidates what I think might be the right way.

--
Paulo
Apr 07 2013
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 8 April 2013 at 06:35:27 UTC, Paulo Pinto wrote:
 On Monday, 8 April 2013 at 03:13:00 UTC, Manu wrote:
 On 7 April 2013 20:59, Paulo Pinto <pjmlp progtools.org> wrote:

 I am not giving up speed. It just happens that I have been 
 coding since
 1986 and I am a polyglot programmer that started doing system 
 programming
 in the Pascal family of languages, before moving into C and 
 C++ land.

 Except for some cases, it does not matter if you get an 
 answer in 1s or
 2ms, however most single language C and C++ developers care 
 about the 2ms
 case even before starting to code, this is what I don't 
 approve.
Bear in mind, most remaining C/C++ programmers are realtime programmers, and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run realtime software. If I chose not to care about 2ms only 8 times, I'll have no time left. I would cut off my left nut for 2ms most working days! I typically measure execution times in 10s of microseconds, if something measures in milliseconds it's a catastrophe that needs to be urgently addressed... and you're correct, as a C/C++ programmer, I DO design with consideration for sub-ms execution times before I write a single line of code. Consequently, I have seen the GC burn well into the ms on occasion, and as such, it is completely unacceptable in realtime software.
I do understand that, the thing is that since I am coding in 1986, I remember people complaining that C and Turbo Pascal were too slow, lets code everything in Assembly. Then C became alright, but C++ and Ada were too slow, god forbid to call virtual methods or do any operator calls in C++'s case. Afterwards the same discussion came around with JVM and .NET environments, which while making GC widespread, also had the sad side-effect to make younger generations think that safe languages require a VM when that is not true. Nowadays template based code beats C, systems programming is moving to C++ in mainstream OS, leaving C behind, while some security conscious areas are adopting Ada and Spark. So for me when someone claims about the speed benefits of C and C++ currently have, I smile as I remember having this kind of discussions with C having the role of too slow language.
 Walter's claim is that D's inefficient GC is mitigated by the 
 fact that D
 produces less garbage than other languages, and this is true 
 to an extent.
 But given that is the case, to be reliable, it is of critical 
 importance
 that:
 a) the programmer is aware of every allocation they are 
 making, they can't
 be hidden inside benign looking library calls like 
 toUpperInPlace.
 b) all allocations should be deliberate.
 c) helpful messages/debugging features need to be available to 
 track where
 allocations are coming from. standardised statistical output 
 would be most
 helpful.
 d) alternatives need to be available for the functions that 
 allocate by
 nature, or an option for user-supplied allocators, like STL, 
 so one can
 allocate from a pool instead.
 e) D is not very good at reducing localised allocations to the 
 stack, this
 needs some attention. (array initialisation is particularly 
 dangerous)
 f) the GC could do with budgeting controls. I'd like to assign 
 it 150us per
 16ms, and it would defer excess workload to later frames.
No doubt D's GC needs to be improved, but I doubt making D a manual memory managed language will improve the language's adoption, given that all new system programming languages either use GC or reference counting as default memory management. What you need is a way to do controlled allocations for the few cases that there is no way around it, but this should be reserved for modules with system code and not scattered everywhere.
 Of course I think given time D compilers will be able to 
 achieve C++ like
 performance, even with GC or who knows, a reference counted 
 version.

 Nowadays the only place I do manual memory management is when 
 writing
 Assembly code.
Apparently you don't write realtime software. I get so frustrated on this forum by how few people care about realtime software, or any architecture other than x86 (no offense to you personally, it's a general observation). Have you ever noticed how smooth and slick the iPhone UI feels? It runs at 60hz and doesn't miss a beat. It wouldn't work in D. Video games can't stutter, audio/video processing can't stutter. ....
I am well aware of that and actually I do follow the game industry quite closely, being my second interest after systems/distributed computing. And I used to be a IGDA member for quite a few years. However I do see a lot of games being pushed out the door in Yeah most of they are no AAA, but that does make them less enjoyable.
Correction: Yeah, most of them are not AAA, but that does not make them less enjoyable.
Apr 07 2013
prev sibling next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 06:35:27 UTC, Paulo Pinto wrote:
 I do understand that, the thing is that since I am coding in 
 1986, I remember people complaining that C and Turbo Pascal 
 were too slow, lets code everything in Assembly. Then C became 
 alright, but C++ and Ada were too slow, god forbid to call 
 virtual methods or do any operator calls in C++'s case.

 Afterwards the same discussion came around with JVM and .NET 
 environments, which while making GC widespread, also had the 
 sad side-effect to make younger generations think that safe 
 languages require a VM when that is not true.

 Nowadays template based code beats C, systems programming is 
 moving to C++ in mainstream OS, leaving C behind, while some 
 security conscious areas are adopting Ada and Spark.

 So for me when someone claims about the speed benefits of C and 
 C++ currently have, I smile as I remember having this kind of 
 discussions with C having the role of too slow language.
But the important question is: "what has changed?". Was it just a shift in programmer opinion (and they initially mislabeled C code as slow), or was progress in compiler optimizations the real game-changer? Same for GCs and VMs.

It may be perfectly possible to design a GC that suits real-time needs and is fast enough (well, Manu has mentioned some of the requirements it needs to satisfy). But if embedded developers need to wait until a tool stack that advanced is produced for D to use it, that is pretty much the same as saying that D is dead for embedded. Mythical "clever-enough compilers" are good in theory, but the job needs to be done right now.
Apr 08 2013
parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 17:59, Dicebot <m.strashun gmail.com> wrote:

 On Monday, 8 April 2013 at 06:35:27 UTC, Paulo Pinto wrote:

 I do understand that, the thing is that since I am coding in 1986, I
 remember people complaining that C and Turbo Pascal were too slow, lets
 code everything in Assembly. Then C became alright, but C++ and Ada were
 too slow, god forbid to call virtual methods or do any operator calls in
 C++'s case.

 Afterwards the same discussion came around with JVM and .NET
 environments, which while making GC widespread, also had the sad
 side-effect to make younger generations think that safe languages require a
 VM when that is not true.

 Nowadays template based code beats C, systems programming is moving to
 C++ in mainstream OS, leaving C behind, while some security conscious areas
 are adopting Ada and Spark.

 So for me when someone claims about the speed benefits of C and C++
 currently have, I smile as I remember having this kind of discussions with
 C having the role of too slow language.
But important question is "what has changed?". Was it just shift in programmer opinion and they initially mislabeled C code as slow or progress in compiler optimizations was real game-breaker? Same for GC's and VM's. It may be perfectly possible to design GC that suits real-time needs and is fast enough (well, Manu has mentioned some of requirements it needs to satisfy). But if embedded developers need to wait until tool stack that advanced is produced for D to use it - it is pretty much same as saying that D is dead for embedded. Mythical "clever-enough compilers" are good in theory but job needs to be done right now.
D for embedded, like PROPER embedded (microcontrollers, or even raspberry pi maybe?), is one area where most users would be happy to use a custom druntime like the ones presented earlier in this thread, where it's strategically limited in scope and designed not to allocate. 'Really embedded' software tends not to care so much about portability.

A bigger problem is D's executable sizes, which are rather 'plump' to be frank :P
Last time I tried to understand this, one main issue was objectfactory, and the inability to strip out unused classinfo structures (and other junk). Any unused data should be stripped, but D somehow finds reason to keep it all. Also, template usage needs to be relaxed; over-use of templates really bloats the exe. But it's not insurmountable, D could be used in 'proper embedded'.

For 'practically embedded', like phones/games consoles, the EXE size is still an issue, but we mainly care about performance. Shrink the EXE, improve the GC. There are no other showstoppers I'm aware of. D offers you as much control as C++ over the rest of your performance considerations; I think they can be addressed by the programmer.

That said, I'd still KILL for __forceinline! ;) ;)
Apr 08 2013
next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 08:31:29 UTC, Manu wrote:
 D for embedded, like PROPER embedded (microcontrollers, or even 
 raspberry
 pi maybe?) is one area where most users would be happy to use a 
 custom
 druntime like the ones presented earlier in this thread where 
 it's
 strategically limited in scope and designed not to allocate.
Yes, this is one of the important steps in the solution, and some good work has already been done on the topic. The main issue is that it won't be at all convenient unless a second step is also done - making the core language/compiler more friendly to embedded needs, so that you can both implement a custom druntime AND have a solid language. The ability to track/prohibit GC allocations is one part of this. Static array literals are another. Most likely you'll also need to disable RTTI, like it is done in the C++ embedded projects I have seen so far. I have done quite a bit of research on this topic and have a lot to say here :)
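A concrete instance of the array-literal point, as a sketch (the dynamic-array case is guaranteed to allocate; the fixed-size case has historically depended on the compiler):

    void f()
    {
        int[] xs = [1, 2, 3];          // dynamic-array literal: allocated on the GC heap
        int[3] a = [1, 2, 3];          // fixed-size target; older compilers were known
                                       // to still build a GC temporary and copy it
        static immutable int[3] b = [1, 2, 3];  // baked into the data segment, no allocation
    }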
 'Really
 embedded' software tends not to care so much about portability.
 A bigger problem is D's executable size, which are rather 
 'plump' to be
 frank :P
 Last time I tried to understand this, one main issue was 
 objectfactory, and
 the inability to strip out unused classinfo structures (and 
 other junk).
 Any unused data should be stripped, but D somehow finds reason 
 to keep it
 all. Also, template usage needs to be relaxed. Over-use of 
 templates really
 bloats the exe. But it's not insurmountable, D could be used in 
 'proper
 embedded'.
Sure. Actually, executable size is an easy problem to solve given the custom druntime mentioned before. Most of the size of small executables comes from the statically linked, huge druntime. (Simple experiment: use the "-betterC" switch and compile a hello-world program linking only to the C stdlib. Same binary size as the C analog.) Once you have defined a more restrictive language subset and implemented a minimal druntime for it, executable sizes will get better.

Templates are not an issue on their own, but the D front-end is very careless about emitting template symbols (see my recent thread on the topic). Most of them are weak symbols, but hitting certain cases/bugs may bloat the executable without you even noticing.

None of those issues is an unsolvable show-stopper. But there does not seem to be any interest in working in this direction from the current dmd developers (I'd be glad to be mistaken), and the dmd source code sets a rather high entry barrier.

You see, game developers are not the only ones with real-time requirements who are freaking tired of working with 40-year-old obsolete languages :) I am very interested in this topic. Looking forward to watching your DConf presentation recording about the tricks used to adapt D to a game engine, by the way.
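For reference, a minimal sketch of that experiment (file names are placeholders; the flags are the ones discussed here and later in this thread):

    // hello.d - no druntime/Phobos, only the C library
    import core.stdc.stdio : printf;

    extern(C) int main()
    {
        printf("hello\n");
        return 0;
    }

    $ dmd -betterC -defaultlib= hello.d
    $ gcc -o hello_c hello.c
    $ ls -l hello hello_c        # the two binaries come out roughly the same size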
Apr 08 2013
next sibling parent Manu <turkeyman gmail.com> writes:
On 8 April 2013 18:56, Dicebot <m.strashun gmail.com> wrote:

 On Monday, 8 April 2013 at 08:31:29 UTC, Manu wrote:

 D for embedded, like PROPER embedded (microcontrollers, or even raspberry
 pi maybe?) is one area where most users would be happy to use a custom
 druntime like the ones presented earlier in this thread where it's
 strategically limited in scope and designed not to allocate.
Yes, this is one of important steps in solution and some good work has been already done on topic. Main issue is that it won't be any convenient unless second step is done - making core language/compiler more friendly to embedded needs so that you can both implement custom druntime AND have solid language. Ability to track/prohibit GC allocations is one part of this. Static array literals is another. Most likely you'll also need to disable RTTI like it is done in C++/Embedded projects I have seen so far. I have done quite a research on this topic and have a lot to say here :) 'Really
 embedded' software tends not to care so much about portability.
 A bigger problem is D's executable size, which are rather 'plump' to be
 frank :P
 Last time I tried to understand this, one main issue was objectfactory,
 and
 the inability to strip out unused classinfo structures (and other junk).
 Any unused data should be stripped, but D somehow finds reason to keep it
 all. Also, template usage needs to be relaxed. Over-use of templates
 really
 bloats the exe. But it's not insurmountable, D could be used in 'proper
 embedded'.
Sure. Actually, executable size is an easy problem to solve considering custom druntimed mentioned before. Most of size in small executables come from statically linked huge druntime. (Simple experiment: use "-betterC" switch and compile hello-world program linking only to C stdlib. Same binary size as for C analog). Once you have defined more restrictive language subset and implemented minimal druntime for it, executable sizes will get better. Template issue is not an issue on their own, but D front-end is very careless about emitting template symbols (see my recent thread on topic). Most of them are weak symbols but hitting certain cases/bugs may bloat executable without you even noticing that. None of those issues is unsolvable show-stopper. But there does not seem an interest to work in this direction from current dmd developers (I'd be glad to be mistaken) and dmd source code sets rather hard entry barrier.
Yeah, I wish I had the time (or patience) to get involved at that level. Breaking the ice in DMD seems daunting, and there are so many satellite jobs I already can't find the time to work on (like std.simd). I'd love to see a concerted push to solve these 2 key problems scheduled, and to just get them done some time...

 You see, game developers are not the only ones with real-time requirements
 that are freaking tired of working with 40-year obsolete languages :) I am
 very interested in this topic. Looking forward to watching your DConf
 presentation recording about tricks used to adapt it to game engine by the
 way.
Oh god! Now there's expectation! >_<

Yeah... we'll see how that one goes. I'm actually starting to worry I might not have as many exciting experiences to share as people may be hoping... in fact, I'm having quite a lot of trouble making that talk seem interesting even to myself. My current draft feels a bit thin. I hope it's not terrible! ;)

I think my biggest issue is that a slideshow is not a good place for demonstrating code snippets, and it's hard to illustrate my approach (particularly the curly bits) without showing a bunch of code... so I'll end up just describing it maybe? I dunno.
Chances are you're just gonna hear a bunch of the same rants that everyone's heard from me a bunch of times before :P
Apr 08 2013
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 18:56, Dicebot <m.strashun gmail.com> wrote:

 On Monday, 8 April 2013 at 08:31:29 UTC, Manu wrote:

 D for embedded, like PROPER embedded (microcontrollers, or even raspberry
 pi maybe?) is one area where most users would be happy to use a custom
 druntime like the ones presented earlier in this thread where it's
 strategically limited in scope and designed not to allocate.
Yes, this is one of important steps in solution and some good work has been already done on topic. Main issue is that it won't be any convenient unless second step is done - making core language/compiler more friendly to embedded needs so that you can both implement custom druntime AND have solid language. Ability to track/prohibit GC allocations is one part of this. Static array literals is another. Most likely you'll also need to disable RTTI like it is done in C++/Embedded projects I have seen so far. I have done quite a research on this topic and have a lot to say here :)
... so where's your dconf talk then? You can have one of my slots, I'm very interested to hear all about it! ;)
Apr 08 2013
next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 09:31:03 UTC, Manu wrote:
 ... so where's your dconf talk then? You can have one of my 
 slots, I'm very
 interested to hear all about it! ;)
Meh, I am more of a "poor student" type and can't really afford even a one-way plane ticket from Eastern Europe to the USA :( Latvia is the India branch of Europe when it comes to a cheap programming workforce. I'll be waiting for the videos from DConf.

I can provide any information you are interested in via e-mail, though; I'd like to see how my explorations survive a meeting with real requirements.
Apr 08 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/8/13 6:04 AM, Dicebot wrote:
 On Monday, 8 April 2013 at 09:31:03 UTC, Manu wrote:
 ... so where's your dconf talk then? You can have one of my slots, I'm
 very
 interested to hear all about it! ;)
Meh, I am a more of "poor student" type and can't really afford even a one-way plane ticket from Easter Europe to USA :( Latvia is India branch in Europe when it comes to cheap programming workforce. Will be waiting for videos from DConf.
Getting a talk approved would have guaranteed transportation expenses covered. Andrei
Apr 08 2013
parent "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 15:37:23 UTC, Andrei Alexandrescu 
wrote:
 Getting a talk approved would have guaranteed transportation 
 expenses covered.

 Andrei
I'll quote that before DConf 2014 ;)
Apr 08 2013
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/8/13 5:30 AM, Manu wrote:
 ... so where's your dconf talk then? You can have one of my slots, I'm
 very interested to hear all about it! ;)
Just a note - we may be able to accommodate that schedule change if you all move fast. Andrei
Apr 08 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/8/2013 2:30 AM, Manu wrote:
 ... so where's your dconf talk then? You can have one of my slots, I'm very
 interested to hear all about it! ;)
I'm not willing to give up any of your slots!
Apr 10 2013
parent Manu <turkeyman gmail.com> writes:
On 11 April 2013 08:54, Walter Bright <newshound2 digitalmars.com> wrote:

 On 4/8/2013 2:30 AM, Manu wrote:

 ... so where's your dconf talk then? You can have one of my slots, I'm
 very
 interested to hear all about it! ;)
I'm not willing to give up any of your slots!
:) Well, it was mostly a joke, but if he does have a lot to say on the matter, I'm very interested to hear such a talk myself. I'm sure there's room to slot one more in somewhere ;)
Apr 10 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 10:56, Dicebot wrote:

 Sure. Actually, executable size is an easy problem to solve considering
 custom druntimed mentioned before. Most of size in small executables
 come from statically linked huge druntime. (Simple experiment: use
 "-betterC" switch and compile hello-world program linking only to C
 stdlib. Same binary size as for C analog).
That's cheating. It's most likely due to the C standard library being dynamically linked. If you dynamically link with the D runtime and the standard library, you will get the same size for a Hello World in D as in C. Yes, I've tried this with Tango back in the D1 days.

--
/Jacob Carlborg
Apr 08 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 19:31, Jacob Carlborg <doob me.com> wrote:

 On 2013-04-08 10:56, Dicebot wrote:

  Sure. Actually, executable size is an easy problem to solve considering
 custom druntimed mentioned before. Most of size in small executables
 come from statically linked huge druntime. (Simple experiment: use
 "-betterC" switch and compile hello-world program linking only to C
 stdlib. Same binary size as for C analog).
That's cheating. It's most likely due to the C standard library is being dynamically linked. If you dynamically link with the D runtime and the standard library you will get the same size for a Hello World in D as in C. Yes, I've tried this with Tango back in the D1 days.
I don't see how. I noticed that the ancillary data kept along with class definitions and other stuff was quite significant, particularly when a decent number of templates appear. Dynamic linkage of the runtime can't affect that.
Apr 08 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-08 11:39, Manu wrote:

 I don't see how. I noticed that the ancillary data kept along with class
 definitions and other stuff was quite significant, particularly when a
 decent number of templates appear.
 Dynamic linkage of the runtime can't affect that.
That's the result I got when I tried, don't know why. -- /Jacob Carlborg
Apr 08 2013
prev sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 09:31:46 UTC, Jacob Carlborg wrote:
 On 2013-04-08 10:56, Dicebot wrote:

 Sure. Actually, executable size is an easy problem to solve 
 considering
 custom druntimed mentioned before. Most of size in small 
 executables
 come from statically linked huge druntime. (Simple experiment: 
 use
 "-betterC" switch and compile hello-world program linking only 
 to C
 stdlib. Same binary size as for C analog).
That's cheating. It's most likely due to the C standard library is being dynamically linked. If you dynamically link with the D runtime and the standard library you will get the same size for a Hello World in D as in C. Yes, I've tried this with Tango back in the D1 days.
Erm. How so? The same C library is dynamically linked for both the D and C programs, so I am comparing raw binary size honestly here (and it is the same).

If you mean that the size of druntime is not that relevant if you link it dynamically - an embedded application can often be the only program that runs on a given system (the "single executive" concept), so it makes no difference (actually, dynamic linking is not even possible in that case).
Apr 08 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 12:08, Dicebot wrote:

 Erm. How so? Same C library is dynamically linked both for D and C
 programs so I am comparing raw binary size honestly here (and it is the
 same).
You're comparing a D executable, statically linked with its runtime and standard library, to a C executable which is dynamically linked instead. It's not rocket science that if you put more into the executable it will become larger.
 If you mean size of druntime is not that relevant if you link it
 dynamically - embedded application can often be the only program that
 runs on given system ("single executive" concept) and it makes no
 difference (actually, dynamic linking is not even possible in that case).
Then you have to include the size of the C runtime and standard library when comparing. -- /Jacob Carlborg
Apr 08 2013
parent reply "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 11:36:40 UTC, Jacob Carlborg wrote:
 You're comparing a D executable statically linked with its
 runtime and standard library to a C executable that is
 dynamically linked with them instead. It's not rocket science
 that if you put more into the executable it will become larger.
You have got it wrong. I am comparing a D executable with no runtime or standard library (-betterC -defaultlib=) against a C executable. They are roughly the same size, which is good, because it indicates that there is nothing wrong with the plain binary code generation.
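For reference, the experiment looks roughly like this (a minimal sketch; the flags are the ones mentioned above, and exact behaviour may vary between compiler versions):

// hello.d -- the size experiment: no druntime, no Phobos, only the C library.
// Build:   dmd -betterC -defaultlib= hello.d
// Compare: an equivalent C hello-world built with the system C compiler.
extern (C) int printf(const(char)* fmt, ...);

extern (C) int main()
{
    printf("hello\n");
    return 0;
}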
Apr 08 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-08 13:42, Dicebot wrote:

 You have got it wrong. I am comparing D executable with no runtime and
 standard library and C executable (-betterC -defaultlib=). And they are
 roughly the same, what is good, because indicates that there is nothing
 wrong with plain binary code gen.
Aha, I see, my bad. -- /Jacob Carlborg
Apr 08 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 10:31, Manu wrote:

 D for embedded, like PROPER embedded (microcontrollers, or even
 raspberry pi maybe?) is one area where most users would be happy to use
 a custom druntime like the ones presented earlier in this thread where
 it's strategically limited in scope and designed not to allocate.
 'Really embedded' software tends not to care so much about portability.
 A bigger problem is D's executable size, which are rather 'plump' to be
 frank :P
 Last time I tried to understand this, one main issue was objectfactory,
 and the inability to strip out unused classinfo structures (and other
 junk). Any unused data should be stripped, but D somehow finds reason to
 keep it all. Also, template usage needs to be relaxed. Over-use of
 templates really bloats the exe. But it's not insurmountable, D could be
 used in 'proper embedded'.
I agree with the templates; Phobos is full of them. Heck, I created a D-Objective-C bridge that resulted in a 60MB GUI Hello World executable. Full of template and virtual-method bloat. -- /Jacob Carlborg
Apr 08 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 19:26, Jacob Carlborg <doob me.com> wrote:

 On 2013-04-08 10:31, Manu wrote:

  D for embedded, like PROPER embedded (microcontrollers, or even
 raspberry pi maybe?) is one area where most users would be happy to use
 a custom druntime like the ones presented earlier in this thread where
 it's strategically limited in scope and designed not to allocate.
 'Really embedded' software tends not to care so much about portability.
 A bigger problem is D's executable size, which are rather 'plump' to be
 frank :P
 Last time I tried to understand this, one main issue was objectfactory,
 and the inability to strip out unused classinfo structures (and other
 junk). Any unused data should be stripped, but D somehow finds reason to
 keep it all. Also, template usage needs to be relaxed. Over-use of
 templates really bloats the exe. But it's not insurmountable, D could be
 used in 'proper embedded'.
 I agree with the templates; Phobos is full of them. Heck, I created a D-Objective-C bridge that resulted in a 60MB GUI Hello World executable. Full of template and virtual-method bloat.
Haha, yeah I remember discussing that with you some time back when we were discussing iPhone. Rather humorous ;) I do wonder if there's room in D for built-in Obj-C compatibility; extern(ObjC) ;) OSX and iOS are not minor platforms by any measure. At least support for the most common parts of the Obj-C calling convention. D doesn't offer full C++ either, but what it does offer is very useful, and it's important that it's there.
Apr 08 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-08 11:36, Manu wrote:

 Haha, yeah I remember discussing that with you some time back when we
 were discussing iPhone.
 Rather humorous ;)
Yeah :)
 I do wonder if there's room in D for built-in Obj-C compatibility;
 extern(ObjC) ;)
 OSX and iOS are not minor platforms by any measure. At least support for
 the most common parts of the Obj-C calling convention. D doesn't offer
 full C++ either, but what it does offer is very useful, and it's
 important that it's there.
I really think so. Michel Fortin was working on that. When he announced it, I think Walter agreed it would be a good idea to include. http://michelf.ca/projects/d-objc/ -- /Jacob Carlborg
Apr 08 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/8/13 5:26 AM, Jacob Carlborg wrote:
 On 2013-04-08 10:31, Manu wrote:

 D for embedded, like PROPER embedded (microcontrollers, or even
 raspberry pi maybe?) is one area where most users would be happy to use
 a custom druntime like the ones presented earlier in this thread where
 it's strategically limited in scope and designed not to allocate.
 'Really embedded' software tends not to care so much about portability.
 A bigger problem is D's executable size, which are rather 'plump' to be
 frank :P
 Last time I tried to understand this, one main issue was objectfactory,
 and the inability to strip out unused classinfo structures (and other
 junk). Any unused data should be stripped, but D somehow finds reason to
 keep it all. Also, template usage needs to be relaxed. Over-use of
 templates really bloats the exe. But it's not insurmountable, D could be
 used in 'proper embedded'.
I agree with the templates; Phobos is full of them. Heck, I created a D-Objective-C bridge that resulted in a 60MB GUI Hello World executable. Full of template and virtual-method bloat.
Would be interesting to analyze where the bloat comes from. Andrei
Apr 08 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-08 17:29, Andrei Alexandrescu wrote:

 Would be interesting to analyze where the bloat comes from.
I'll see if I can resurrect that code from the D1 grave. -- /Jacob Carlborg
Apr 08 2013
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/8/13 4:31 AM, Manu wrote:
 That said, I'd still KILL for __forceinline! ;) ;)
Probably it's time to plop an enhancement request for an attribute that's recognized by the compiler. Andrei
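Something along these lines, assuming a pragma-based spelling - pragma(inline, true) is an assumption here, not something the compilers contemporary with this thread provide:

// Sketch only: a compiler-recognised request to always inline a function,
// with the compiler expected to complain if it cannot honour it.
pragma(inline, true)
float lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}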
Apr 08 2013
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 16:35, Paulo Pinto <pjmlp progtools.org> wrote:

 On Monday, 8 April 2013 at 03:13:00 UTC, Manu wrote:

 On 7 April 2013 20:59, Paulo Pinto <pjmlp progtools.org> wrote:

  I am not giving up speed. It just happens that I have been coding since
 1986 and I am a polyglot programmer that started doing system programming
 in the Pascal family of languages, before moving into C and C++ land.

 Except for some cases, it does not matter if you get an answer in 1s or
 2ms, however most single language C and C++ developers care about the 2ms
 case even before starting to code, this is what I don't approve.
Bear in mind, most remaining C/C++ programmers are realtime programmers, and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run realtime software (a 16ms frame at 60Hz). If I chose not to care about 2ms only 8 times, I'd have no time left. I would cut off my left nut for 2ms most working days! I typically measure execution times in 10s of microseconds; if something measures in milliseconds it's a catastrophe that needs to be urgently addressed... and you're correct, as a C/C++ programmer, I DO design with consideration for sub-ms execution times before I write a single line of code. Consequently, I have seen the GC burn well into the ms on occasion, and as such, it is completely unacceptable in realtime software.
I do understand that. The thing is, since I have been coding since 1986, I remember people complaining that C and Turbo Pascal were too slow - let's code everything in Assembly. Then C became alright, but C++ and Ada were too slow; god forbid you call virtual methods or overloaded operators in C++'s case.
The C++ state hasn't changed though. We still avoid virtual calls like the plague. One of my biggest design gripes with D, hands down, is that functions are virtual by default. I believe this is a critical mistake, and the biggest one in the language by far. Afterwards the same discussion came around with JVM and .NET environments,
 which while making GC widespread, also had the sad side-effect to make
 younger generations think that safe languages require a VM when that is not
 true.
I agree with this sad trend. D can help address this issue if it breaks free. Nowadays template based code beats C, systems programming is moving to C++
 in mainstream OS, leaving C behind, while some security conscious areas are
 adopting Ada and Spark.
I don't see a significant trend towards C++ in systems code? Where are you looking? The main reason people are leaving C is because they've had quite enough of the inconvenience... 40 years is plenty, thank you! I think the main problem for the latency is that nothing compelling enough really stood in to take the helm.

Liberal use of templates only beats C where memory and bandwidth are unlimited. Sadly, most computers in the world these days are getting smaller, not bigger, so this is not a trend that should be followed. Binary size is, as always, a critical factor in performance (mainly relating to the size of the target's icache). Small isolated templates produce some great wins; over-application of templates results in crippling (and very hard to track/isolate) performance issues. These performance issues are virtually impossible to fight; they tend not to appear on profilers, since they're evenly distributed throughout the code, making the whole program uniformly slow, instead of producing hot-spots, which are much easier to combat. They also have the effect of tricking their authors into erroneously thinking that their code is performing really well, since the profilers show no visible hot spots. Truth is, they didn't bother writing a proper basis for comparison, and as such, they will happily continue to believe their program performs well, or even improves the situation (...most likely verified by testing a single template version of one function over a generalised one that was slower, and not factoring in the uniform slowness of the whole application they have introduced).

I often fear that D promotes its powerful templates too much, and that junior programmers might go even more nuts than in C++. I foresee that strict discipline will be required in the future... :/

So for me when someone claims about the speed benefits of C and C++
 currently have, I smile as I remember having this kind of discussions with
 C having the role of too slow language.
C was mainly too slow due to the immaturity of compilers, and the fact that computers were not powerful enough, or had enough resources to perform decent optimisations. Back in those days I could disassemble basically anything and point at the compilers mistakes. (note, I was programming in the early 90's, so I imagine the situation was innumerably worse in the mid 80's) These days, I disassemble some code to check what the compiler did, and I'm usually surprised when I find a mistake. And when I do, I find it's usually MY mistake, and I tweak the C/C++ code to allow the compiler to do the proper job. With a good suite of intrinsics available to express architecture-specific concepts outside the language, I haven't had any reason to write assembly for years, the compiler/optimiser produce perfect code (within the ABI, which sometimes has problems). Also, 6502 and z80 processors don't lend themselves to generic workloads. It's hard to develop a good general ABI for those machines; you typically want the ABI to be application specific... decent ABI's only started appearing for the 68000 line which had enough registers to implement a reasonable one. In short, I don't think your point is entirely relevalt. It's not the nature of C that was slow in those days, it's mainly the immaturity of the implementation, combined with the fact that the hardware did not yet support the concepts. So the point is fallacious, you basically can't get better performance if you hand-write x86 assembly these days. It will probably be worse. Walter's claim is that D's inefficient GC is mitigated by the fact that D
 produces less garbage than other languages, and this is true to an extent.
 But given that is the case, to be reliable, it is of critical importance
 that:
 a) the programmer is aware of every allocation they are making, they can't
 be hidden inside benign looking library calls like toUpperInPlace.
 b) all allocations should be deliberate.
 c) helpful messages/debugging features need to be available to track where
 allocations are coming from. standardised statistical output would be most
 helpful.
 d) alternatives need to be available for the functions that allocate by
 nature, or an option for user-supplied allocators, like STL, so one can
 allocate from a pool instead.
 e) D is not very good at reducing localised allocations to the stack, this
 needs some attention. (array initialisation is particularly dangerous)
 f) the GC could do with budgeting controls. I'd like to assign it 150us
 per
 16ms, and it would defer excess workload to later frames.
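As an aside on point f): D's GC has no per-frame time budget, but a crude approximation is possible with just the existing core.memory calls (GC.disable, GC.enable, GC.collect) - a minimal sketch, not the budgeting control being asked for, with illustrative callback names:

import core.memory : GC;

// runOneFrame and haveSpareTime are assumed callbacks supplied by the game.
void frameLoop(bool delegate() runOneFrame, bool delegate() haveSpareTime)
{
    GC.disable();                 // no automatic collection mid-frame
    scope (exit) GC.enable();

    while (runOneFrame())
    {
        if (haveSpareTime())      // e.g. the frame finished well under 16ms
            GC.collect();         // pay the (unbounded) collection cost here
    }
}

Note that even with automatic collections disabled, the runtime may still collect when it runs out of memory, so this only shifts the common case; it does not bound the worst case.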
No doubt D's GC needs to be improved, but I doubt making D a manual memory managed language will improve the language's adoption, given that all new system programming languages either use GC or reference counting as default memory management.
I don't advocate making D a manually managed language. I advocate making it a _possibility_. Tools need to be supplied, because it wastes a LOT of time trying to assert that your code (or a subset of your code, i.e. a frame execution loop) is good.
What you need is a way to do controlled allocations for the few cases that
 there is no way around it, but this should be reserved for modules with
 system code and not scattered everywhere.


 Of course I think given time D compilers will be able to achieve C++ like

 performance, even with GC or who knows, a reference counted version.

 Nowadays the only place I do manual memory management is when writing
 Assembly code.
Apparently you don't write realtime software. I get so frustrated on this forum by how few people care about realtime software, or any architecture other than x86 (no offense to you personally, it's a general observation). Have you ever noticed how smooth and slick the iPhone UI feels? It runs at 60hz and doesn't miss a beat. It wouldn't work in D. Video games can't stutter, audio/video processing can't stutter. ....
I am well aware of that, and actually I do follow the game industry quite closely, being my second interest after systems/distributed computing. And I used to be an IGDA member for quite a few years. local optimizations done in C and C++.
 Yeah, most of them are not AAA, but that does not make them less enjoyable.
This is certainly a prevailing trend. The key reason for this is productivity, I think. Game devs are sick of C++. Like, REALLY sick of it. They just don't want to waste their time anymore. Swearing about C++ is a daily talk point. This is an industry basically screaming out for salvation, but you'll find no real consensus on where to go. People are basically dabbling at the moment. They are also led by the platform holders to some extent, MS has a lot of

But yes, also as you say, the move towards 'casual' games, where the performance requirements aren't really critical. In 'big games' though, it's still brutally competitive. If you don't raise the technology/performance bar, your competition will. D is remarkably close to offering salvation... this GC business is one of the final hurdles, I think.
 I also had the pleasure of being able to use the Native Oberon and AOS
 operating systems back in the late 90's at the university, desktop
 operating systems done in GC systems programming languages. Sure you could
 do manual memory management, but only via the SYSTEM pseudo module.

 One of the applications was a video player, just the decoder was written
 in Assembly.

 http://ignorethecode.net/blog/2009/04/22/oberon/


 In the end the question is what would a D version just with manual memory
 management have as compelling feature against C++1y and Ada, already
 established languages with industry standards?

 Then again my lack of experience in the embedded world invalidates what I
 think might be the right way.
C++11 is a joke. Too little, too late if you ask me. It barely addresses the problems it tries to tackle, and a lot of it is really lame library solutions. Also, C++ is too stuck. Bad language design that can never be changed. Its templates are a nightmare in particular, and it'll be stuck with headers forever. I doubt the compile times will ever be significantly improved.

But again, I'm not actually advocating a D without the GC like others in this thread. I'm a realtime programmer, and I don't find the concepts incompatible; they just need tight control, and good debug/analysis tools. If I can timeslice the GC, limit it to ~150us/frame, that would do the trick. I'd pay 1-2% of my frame time for the convenience it offers for sure. I'd also rather it didn't stop the world. If it could collect on one thread while another thread was still churning data, that would really help the situation. Complex though...

It helps that there are basically no runtime allocations in realtime software. This theoretically means the GC should have basically nothing to do! The state of the heap really shouldn't change from frame to frame, and surely that temporal consistency could be used to improve a good GC implementation? (Note: I know nothing about writing a GC.) The main source of realtime allocations in D code comes from array concatenation, and about 95% of that, in my experience, is completely local and could be relaxed onto the stack! But D doesn't do this in most cases (to my constant frustration)... it allocates anyway, even though it can easily determine the allocation is localised.
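To make the concatenation point concrete, a minimal sketch (illustrative names, and assuming the caller knows an upper bound on the result length):

// The concatenation below allocates a new GC array on every call.
string joinPathGC(string dir, string file)
{
    return dir ~ "/" ~ file;
}

// Same result built in a caller-supplied buffer, never touching the GC.
const(char)[] joinPathStack(char[] buf, in char[] dir, in char[] file)
{
    size_t n;
    buf[n .. n + dir.length]  = dir[];  n += dir.length;
    buf[n++] = '/';
    buf[n .. n + file.length] = file[]; n += file.length;
    return buf[0 .. n];               // slice of the caller's buffer
}

The caller keeps the storage on its own stack frame, e.g. char[256] tmp; auto p = joinPathStack(tmp[], dir, file);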
Apr 08 2013
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Monday, 8 April 2013 at 08:21:06 UTC, Manu wrote:
 On 8 April 2013 16:35, Paulo Pinto <pjmlp progtools.org> wrote:

 On Monday, 8 April 2013 at 03:13:00 UTC, Manu wrote:

 On 7 April 2013 20:59, Paulo Pinto <pjmlp progtools.org> 
 wrote:

  I am not giving up speed. It just happens that I have been 
 coding since
 1986 and I am a polyglot programmer that started doing 
 system programming
 in the Pascal family of languages, before moving into C and 
 C++ land.

 Except for some cases, it does not matter if you get an 
 answer in 1s or
 2ms, however most single language C and C++ developers care 
 about the 2ms
 case even before starting to code, this is what I don't 
 approve.
Bear in mind, most remaining C/C++ programmers are realtime programmers, and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run realtime software. If I chose not to care about 2ms only 8 times, I'll have no time left. I would cut off my left nut for 2ms most working days! I typically measure execution times in 10s of microseconds, if something measures in milliseconds it's a catastrophe that needs to be urgently addressed... and you're correct, as a C/C++ programmer, I DO design with consideration for sub-ms execution times before I write a single line of code. Consequently, I have seen the GC burn well into the ms on occasion, and as such, it is completely unacceptable in realtime software.
I do understand that, the thing is that since I am coding in 1986, I remember people complaining that C and Turbo Pascal were too slow, lets code everything in Assembly. Then C became alright, but C++ and Ada were too slow, god forbid to call virtual methods or do any operator calls in C++'s case.
The C++ state hasn't changed though. We still avoid virtual calls like the plague. One of my biggest design gripes with D, hands down, is that functions are virtual by default. I believe this is a critical mistake, and the biggest one in the language by far.
Non-virtual by default is better suited for languages with AOT compilation. Virtual by default, in terms of implementation, is not an issue if the code is JITed, but with AOT compilation you need PGO to be able to inline virtual calls.
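For what it's worth, D does let you opt out explicitly: marking a method (or a whole class) final removes the virtual call, which is the usual mitigation for the virtual-by-default design discussed here. A minimal sketch with illustrative names:

class Mixer
{
    float gain() const { return amp; }            // virtual by default
    final float gainFast() const { return amp; }  // non-virtual, can be inlined
    private float amp = 1.0f;
}

// A final class cannot be subclassed, so calls to its methods can be
// resolved statically as well.
final class FixedMixer
{
    float gain() const { return amp; }
    private float amp = 1.0f;
}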
 Afterwards the same discussion came around with JVM and .NET 
 environments,
 which while making GC widespread, also had the sad side-effect 
 to make
 younger generations think that safe languages require a VM 
 when that is not
 true.
I agree with this sad trend. D can help address this issue if it breaks free.
Even Go and Rust are a help in that direction I would say.
 Nowadays template based code beats C, systems programming is 
 moving to C++
 in mainstream OS, leaving C behind, while some security 
 conscious areas are
 adopting Ada and Spark.
I don't see a significant trend towards C++ in systems code? Where are you looking?
Mainly at the following examples. Microsoft stating that C90 is good enough in their tooling and that C++ is the way forward as the Windows system programming language. At BUILD 2012 there was a brief mention from Herb Sutter, in his presentation about Modern C++, that the kernel team is making the code C++ compliant. I can search for that in the videos, or maybe someone who was there can confirm it. Oh, and the new Windows APIs since XP are mostly COM based, thus C++, because no sane person should try to use COM from C.

The Mac OS X driver subsystem uses a C++ subset. Symbian and BeOS/Haiku are implemented in C++. OS/400 is a mix of Assembly, Modula-2 and C++. Both gcc and clang now use C++ as their implementation language.

Sometimes I think UNIX is what keeps C alive, in a way.
 The main reason people are leaving C is because they've had 
 quite enough of
 the inconvenience... 40 years is plenty thank you!
 I think the main problem for the latency is that nothing 
 compelling enough
 really stood in to take the helm.

 Liberal use of templates only beats C where memory and 
 bandwidth are
 unlimited. Sadly, most computers in the world these days are 
 getting
 smaller, not bigger, so this is not a trend that should be 
 followed.
 Binary size is, as always, a critical factor in performance 
 (mainly
 relating to the size of the targets icache). Small isolated 
 templates
 produce some great wins, over-application of templates results 
 in crippling
 (and very hard to track/isolate) performance issues.
 These performance issues are virtually impossible to fight; 
 they tend not
 to appear on profilers, since they're evenly distributed 
 throughout the
 code, making the whole program uniformly slow, instead of 
 producing
 hot-spots, which are much easier to combat.
 They also have the effect of tricking their authors
 into erroneously thinking that their code is performing really 
 well, since
 the profilers show no visible hot spots. Truth is, they didn't 
 both writing
 a proper basis for comparison, and as such, they will happily 
 continue to
 believe their program performs well, or even improves the 
 situation
 (...most likely verified by testing a single template version 
 of one
 function over a generalised one that was slower, and not 
 factoring in the
 uniform slowless of the whole application they have introduced).

 I often fear that D promotes its powerful templates too much, 
 and that
 junior programmers might go even more nuts than in C++. I 
 foresee that
 strict discipline will be required in the future... :/
I agree there. Since D makes meta-programming too easy when compared with C++, I think some examples are just too clever for average developers.
 So for me when someone claims about the speed benefits of C and 
 C++
 currently have, I smile as I remember having this kind of 
 discussions with
 C having the role of too slow language.
C was mainly too slow due to the immaturity of compilers, and the fact that computers were not powerful enough, or had enough resources to perform decent optimisations. [...]
Yeah, the main issue was immature compilers. Which is still true when targeting 8- and 16-bit processors, as they still have a similar environment, I imagine.
 With a good suite of intrinsics available to express 
 architecture-specific
 concepts outside the language, I haven't had any reason to 
 write assembly
 for years, the compiler/optimiser produce perfect code (within 
 the ABI,
 which sometimes has problems).
I am cleaning up a toy compiler I did in my final year (1999), and I wanted to remove the libc dependency of the runtime, which is quite small anyway, only allowing for int, boolean and string IO. After playing around for some hours writing Assembly from scratch, I decided to use the C compiler as a high-level assembler, disabling the dependency on the C runtime and talking directly to the kernel. It is already good enough to keep me busy with Assembly in the code generator.
 Also, 6502 and z80 processors don't lend themselves to generic 
 workloads.
 It's hard to develop a good general ABI for those machines; you 
 typically
 want the ABI to be application specific... decent ABI's only 
 started
 appearing for the 68000 line which had enough registers to 
 implement a
 reasonable one.

 In short, I don't think your point is entirely relevalt. It's 
 not the
 nature of C that was slow in those days, it's mainly the 
 immaturity of the
 implementation, combined with the fact that the hardware did 
 not yet
 support the concepts.
 So the point is fallacious, you basically can't get better 
 performance if
 you hand-write x86 assembly these days. It will probably be 
 worse.
I do lack real-life experience in the game and real-time areas, but sometimes the complaints about new language features seem to be a thing of older generations not wanting to learn the new ways. But I have been proven wrong a few times, especially when I tend to assume things without proper field experience.
 Walter's claim is that D's inefficient GC is mitigated by the 
 fact that D
 produces less garbage than other languages, and this is true 
 to an extent.
 But given that is the case, to be reliable, it is of critical 
 importance
 that:
 a) the programmer is aware of every allocation they are 
 making, they can't
 be hidden inside benign looking library calls like 
 toUpperInPlace.
 b) all allocations should be deliberate.
 c) helpful messages/debugging features need to be available 
 to track where
 allocations are coming from. standardised statistical output 
 would be most
 helpful.
 d) alternatives need to be available for the functions that 
 allocate by
 nature, or an option for user-supplied allocators, like STL, 
 so one can
 allocate from a pool instead.
 e) D is not very good at reducing localised allocations to 
 the stack, this
 needs some attention. (array initialisation is particularly 
 dangerous)
 f) the GC could do with budgeting controls. I'd like to 
 assign it 150us
 per
 16ms, and it would defer excess workload to later frames.
No doubt D's GC needs to be improved, but I doubt making D a manual memory managed language will improve the language's adoption, given that all new system programming languages either use GC or reference counting as default memory management.
I don't advocate making D a manual managed language. I advocate making it a _possibility_. Tools need to be supplied, because it wastes a LOT of time trying to assert your code (or subsets of your code, ie, an frame execution loop), is good.
Sorry about the confusion.
 What you need is a way to do controlled allocations for the few 
 cases that
 there is no way around it, but this should be reserved for 
 modules with
 system code and not scattered everywhere.


 Of course I think given time D compilers will be able to 
 achieve C++ like

 performance, even with GC or who knows, a reference counted 
 version.

 Nowadays the only place I do manual memory management is 
 when writing
 Assembly code.
Apparently you don't write realtime software. I get so frustrated on this forum by how few people care about realtime software, or any architecture other than x86 (no offense to you personally, it's a general observation). Have you ever noticed how smooth and slick the iPhone UI feels? It runs at 60hz and doesn't miss a beat. It wouldn't work in D. Video games can't stutter, audio/video processing can't stutter. ....
I am well aware of that, and actually I do follow the game industry quite closely, being my second interest after systems/distributed computing. And I used to be an IGDA member for quite a few years. However I do see a lot of games being pushed out the door in local optimizations done in C and C++.
 Yeah, most of them are not AAA, but that does not make them less
 enjoyable.
This is certainly a prevaling trend. The key reason for this is productivity I think. Game devs are sick of C++. Like, REALLY sick of it. Just don't want to waste their time anymore. Swearing about C++ is a daily talk point. This is an industry basically screaming out for salvation, but you'll find no real consensus on where to go. People are basically dabbling at the moment. They are also lead by the platform holders to some extent, MS has a lot of But yes, also as you say, the move towards 'casual' games, where the performance requirements aren't really critical. In 'big games' though, it's still brutally competitive. If you don't raise the technology/performance bar, your competition will. D is remarkably close to offering salvation... this GC business is one of the final hurdles I think.
This is what I see with most system programming languages. The only ones that succeed in the long run were the ones pushed by the platform holders. That is what got me dragged from Turbo Pascal/Delphi land into C and C++, as I wanted to use the default OS languages, even though I preferred the former ones.
 I also had the pleasure of being able to use the Native Oberon 
 and AOS
 operating systems back in the late 90's at the university, 
 desktop
 operating systems done in GC systems programming languages. 
 Sure you could
 do manual memory management, but only via the SYSTEM pseudo 
 module.

 One of the applications was a video player, just the decoder 
 was written
 in Assembly.

 http://ignorethecode.net/blog/2009/04/22/oberon/


 In the end the question is what would a D version just with 
 manual memory
 management have as compelling feature against C++1y and Ada, 
 already
 established languages with industry standards?

 Then again my lack of experience in the embedded world 
 invalidates what I
 think might be the right way.
C++11 is a joke. Too little, too late if you ask me. It barely addresses the problems it tries to tackle, and a lot of it is really lame library solutions. Also, C++ is too stuck. Bad language design that can never be changed. It's templates are a nightmare in particular, and it'll be stuck with headers forever. I doubt the compile times will ever be significantly improved.
I agree with you there, but the industry seems to be following along anyway.
 But again, I'm not actually advocating a D without the GC like 
 others in
 this thread. I'm a realtime programmer, and I don't find the 
 concepts
 incompatible, they just need tight control, and good 
 debug/analysis tools.
 If I can timeslice the GC, limit it to ~150us/frame, that would 
 do the
 trick. I'd pay 1-2% of my frame time for the convenience it 
 offers for sure.
 I'd also rather it didn't stop the world. If it could collect 
 on one thread
 while another thread was still churning data, that would really 
 help the
 situation. Complex though...
 It helps that there are basically no runtime allocations in 
 realtime
 software. This theoretically means the GC should have basically 
 nothing to
 do! The state of the heap really shouldn't change from frame to 
 frame, and
 surely that temporal consistency could be used to improve a 
 good GC
 implementation? (Note: I know nothing about writing a GC)
 The main source of realtime allocations in D code come from
 array concatenation, and about 95% of that, in my experience, 
 are
 completely local and could be relaxed onto the stack! But D 
 doesn't do this
 in most cases (to my constant frustration)... it allocates 
 anyway, even
 thought it can easily determine the allocation is localised.
Agreed. Thanks for the explanation, it is always quite interesting to read your counterarguments. -- Paulo
Apr 08 2013
prev sibling next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Mon, 8 Apr 2013 17:57:57 +1000
Manu <turkeyman gmail.com> wrote:
 
 But yes, also as you say, the move towards 'casual' games, where the
 performance requirements aren't really critical.
 In 'big games' though, it's still brutally competitive. If you don't
 raise the technology/performance bar, your competition will.
 
I can't help wondering how big the "big games" world really is anyway, though. I know there's huge sums of money involved, both cost and revenue, and lots of developers, but...well, let me put it this way:

Maybe I'm just projecting my own tastes into this, or maybe this is just because I don't have sales/profits/etc charts for the last 10-20 years to examine, but lately I'm finding it difficult to believe that "AAA" games aren't becoming (or already) a mere niche, much like high-performance sports cars. (Ie, big money, but small market.)

Part of this is because, as I see it, the "big/AAA games" *as they used to exist* up until around the early 2000's don't seem to be around much anymore. The big business development companies have, for the most part, split their non-sports product lines into two main areas:

1. Mobile, Casual IP tie-ins, "Free-2-Play", etc.
2. Interactive movies.

Note that *neither* of those two categories includes the sorts of games the "big games/AAA developers" were commonly making from around the late 80's to about 2000 or so. Those sorts of games are now almost exclusively the realm of the indie (although there are still some exceptions, mainly from Japanese developers - which incidentally is why I still respect the Japanese games industry more than their western counterparts).

Now, of those two categories currently made by the big name developers, only the second category, "Interactive movies", actually consists of AAA/big-budget titles.

So my question is, who really plays the current crop of AAA/big-budget titles, and can it really be considered much more than a niche?

First off, they cost $60. Plus $100's for hardware (a standard "email and MS Word" machine isn't going to cut it). And often either no or minimal demos. And it typically takes at least half-an-hour to even reach any core gameplay (yes, I've timed it). So right there it's already looking a bit more "muscle car" than "sedan". High cost-of-entry.

So is it the "core" gamers buying the modern AAA/big-budget titles? Honestly, I'm not seeing it. From what I can tell, these days they're mostly switching over to indie games. As for why that's happening, I figure "core" gamers are core gamers *because* they play videogames. Modern AAA/big-budget titles are *not* videogames except in a very loose sense, and core gamers *do* frequently take issue with them. Modern AAA/big-budget titles are interactive movies, not videogames, because their focus is story, dialog and cinematics, not gameplay. So core gamers have been moving *away* from AAA/big-budget titles and towards indie games.

So is it the "casual" crowd buying the modern AAA/big-budget titles? Definitely not. They're the ones who tend to be intimidated by 3D environments and game controllers and spend their time on Words With Friends, Wii Waggle, PopCap, etc., rarely spend much money on gaming, and rarely venture outside iOS, Android and Web.

I know there is and will always be an audience for the modern AAA/big-budget cinematic interactive-movie "games". But I don't see where there's a *large non-niche* audience for them. There's maybe the multiplayer-FPS enthusiasts, but that's a bit of a niche itself. And I don't see a whole lot else. It's the "Italian sports-cars" of videogaming: just a big-budget niche.
Apr 08 2013
parent reply Manu <turkeyman gmail.com> writes:
On 9 April 2013 11:32, Nick Sabalausky
<SeeWebsiteToContactMe semitwist.com>wrote:

 On Mon, 8 Apr 2013 17:57:57 +1000
 Manu <turkeyman gmail.com> wrote:
 But yes, also as you say, the move towards 'casual' games, where the
 performance requirements aren't really critical.
 In 'big games' though, it's still brutally competitive. If you don't
 raise the technology/performance bar, your competition will.
I can't help wondering how big the "big games" world really is anyway, though. I know there's huge sums of money involved, both cost and revenue, and lots of developers, but...well, let me put it this way: Maybe I'm just projecting my own tastes into this, or maybe this is just because I don't have sales/profits/etc charts for the last 10-20 years to examine, but lately I'm finding it difficult to believe that "AAA" games aren't becoming (or already) a mere niche, much like high-performance sports cars. (Ie, big money, but small market.) Part of this is because, as I see it, the "big/AAA games" *as they used to exist* up until around the early 2000's don't seem to be around much anymore. The big business development companies have, for the most part, split their non-sports product lines into two main areas: 1. Mobile, Casual IP tie-ins, "Free-2-Play", etc. 2. Interactive movies.
Where is Call of Duty? Grand Theft Auto? Starcraft? World of Warcraft? Is Uncharted an interactive movie? What about Tomb Raider? Mario? Zelda? Gears of War? Bioshock? They have lots of cinematic presentation, but clearly complex and involved gameplay. God of War? Note that *neither* of those two categories include the sorts of games the
 "big games/AAA  developers" were commonly making from around late-80's to
 about 2000 or so. Those sorts of games are now almost exclusively the realm
 of the indie (Although there are still some exceptions, mainly from
 Japanese developers - which incidentally is why I still respect the
 Japanese games industry more than their western counterparts.)
None of those games I list above are in any way 'indy'. That's certainly not an exhaustive list. Granted, the last couple (~2ish) years have been suffering, the industry is sick, and it's also the end of a hardware generation. New consoles will come soon, the bar will rise, along with a slew of new titles the major publishers have been holding back, to launch with the new systems to try and win the early-generation system war. Now, of those two categories currently made by the big name developers,
 only the second category, "Interactive movies", are actually AAA/big-budget
 titles.

 So my question is, who really plays the current crop of AAA/big-budget
 titles, and can it really be considered much more than a niche?
The volume of game releases has been decreasing in recent years. This is due to a lot of factors, and the industry is suffering at the moment. But that does not suggest people are playing video games less. They're only playing _fewer_ video games. Fewer developers are selling much higher quantities. If your studio is not in the top tier, it does suck to be you right now... First off, they cost $60. Plus $100's for hardware (a standard "email
 and MS Word" machine isn't going to cut it). And often either no
 or minimal demos. And it typically takes at least half-an-hour to even
 reach any core gameplay (Yes, I've timed it). So right there it's
 already looking a bit more "muscle car" than "sedan". High
 cost-of-entry.
Games console generations have a 5-10 year lifespan, so it's not exactly an annual investment. And they're cheaper than phones and iPads, amazingly! Demos are a known problem that will be addressed by the coming hardware generation. I think it's only 'niche' by total volume. The percentage of 'core'/'hardcore' gamers is decreasing, but that's because the overall census is increasing. There are a lot more gamers now. Girls are gamers now! 51% of the population who were not previously in the statistics... So is it the "core" gamers buying the modern AAA/big-budget titles?
 Honestly, I'm not seeing it. From what I can tell, these days they're
 mostly switching over to indie games. As for why that's happening, I figure
 "Core" gamers are core gamers *because* they play videogames. Modern
 AAA/big-budget titles, are *not* videogames except in a very loose sense,
 and core gamers *do* frequently take issue with them. Modern
  AAA/big-budget titles are interactive movies, not videogames, because
 their focus is story, dialog and cinematics, not gameplay. So core gamers
 have been moving *away* from AAA/big-budget titles and towards indie games.
Tell me Call of Duty and friends don't sell. They make squillions. There are fewer successful titles than in recent years, and that's largely because the industry is sick, and kinda antiquated... You may be right, traditionally self-identified *core* gamers are moving indy, because it's an innovative sector that's offering what they like about video games. But they've had their place taken by... 'normal people', you know, the kinds of people that go to the cinema and watch movies. There's a lot of them, and they still buy lots of games. Core gamers still buy games too, even if they don't enjoy them as much these days. So is it the "casual" crowd buying the modern AAA/big-budget titles?
 Definitely not. They're the ones who tend to be intimidated by 3D
 environments and game controllers and spend their time on Words With
 Friends, Wii Waggle, PopCap, etc., rarely spend much money on gaming and
 rarely venture outside iOS, Android and Web.
What's your point here? Yes, it's a MASSIVE emerging market. But since they buy fewer games than an enthusiast, and the games cost 99c rather than $60, only the 5 biggest titles in the appstore are wildly successful. It's probably worse than 'big games' in terms of the number of participants that can do well. There is a lower cost of entry, and since the industry is collapsing, there are a lot of unemployed/disgruntled developers trying the indy thing out. This is great, mind you! The chronically risk-averse industry has been short on innovation recently. The more quality engineers from major studios go the indy direction, though, the more you can expect pressure to deliver top-notch presentation even in 'casual' titles. I think we'll see technological bar-raising seepage from ex-AAA developers trying to gain a foothold in the new industry. I know there is and will always be an audience for the modern
 AAA/big-budget cinematic interactive-movie "games". But I don't see where
 there's a *large non-niche* audience for them. There's maybe the
 multiplayer-FPS enthusiasts, but that's a bit of a niche itself. And I
 don't see a whole lot else. It's the "Italian sports-cars" of videogaming:
 Just a big-budget niche.
Well, a 'niche' that's bigger than the entire film industry is not a niche that one can ignore. I don't think the number of big-games players has significantly decreased though. It might have peaked, I haven't checked numbers like that for a fair while.

What has happened is that the concept of the 'video games' industry has dramatically expanded. Basically everyone is a 'gamer' now, and everyone has some 99c games in their pocket. It's a whole new industry that is being explored, but since the statistics are all conflated into 'the video games industry', I can see how you might see it that way.

The business model is changing shape too, thanks to the indy/casual thing. $60 is silly (and I'd love it if I could get games for $60 btw, we pay $90-$120 here. yanks get fucking everything subsidised!), people aren't happy to spend $60 on entertainment anymore, when something that can equally entertain them costs 99c. I reckon AAA will start to sell in installments/episodes, or start leveraging in-game sales of stuff a lot more to reduce the cost of entry. But the budgets are so huge that it's slow and very conservative to experiment with new business models. If someone tries something new and makes a mistake, their studio will go out of business. It'll be the mid-tier, the A and B+ games, that nail that first.

One thing I've heard reports of, contrary to your assertion that AAA is losing participants to the more innovative indy area, is that 'casual games' are actually bridging previously non-gamers into big games. People who may have been intimidated previously are becoming more comfortable with games in general, and are joining in on the TV screen. Particularly true for new women gamers who managed to generally avoid it before. This will have an interesting effect on the format of AAA offerings. You suggest games are becoming interactive movies... this isn't an accident, they sell extremely well! And I think this trend will continue as a larger percentage of the audience are becoming women... I don't think Call of Duty or Grand Theft Auto are going anywhere just yet though.
Apr 09 2013
next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 9 Apr 2013 20:15:53 +1000
Manu <turkeyman gmail.com> wrote:
 On 9 April 2013 11:32, Nick Sabalausky
 I can't help wondering how big the "big games" world really is
 anyway, though. I know there's huge sums of money involved, both
 cost and revenue, and lots of developers, but...well, let me put it
 this way:

 Maybe I'm just projecting my own tastes into this, or maybe this is
 just because I don't have sales/profits/etc charts for the last
 10-20 years to examine, but lately I'm finding it difficult to
 believe that "AAA" games aren't becoming (or already) a mere niche,
 much like high-performance sports cars. (Ie, big money, but small
 market.)

 Part of this is because, as I see it, the "big/AAA games" *as they
 used to exist* up until around the early 2000's don't seem to be
 around much anymore. The big business development companies have,
 for the most part, split their non-sports product lines into two
 main areas:

 1. Mobile, Casual IP tie-ins, "Free-2-Play", etc.
 2. Interactive movies.
Where is Call of Duty? Grand Thieft Auto? Starcraft? World of Warcraft? Is Uncharted an interactive movie? What about Tomb Raider? Mario? Zelda? Gears of War? Bioshock? They have lots of cinematic presentation, but clearly complex and involved gameplay. God of War?
First of all, I should emphasize that I'm not making an actual statement of "big/AAA titles have a tiny audience". Just that, due to a combination of factors, I can't help getting the feeling that they're either headed towards being niche, or are perhaps at least somewhat closer to niche than they're typically considered. Like I said previously, these could very well be unfounded impressions.

Secondly, with those "two main categories" I pointed out, I did say "for the *most* part". Clearly there are exceptions. With that in mind:

- Call of Duty?: Can't speak for the original series, but for Modern Warfare, yes, most definitely. I do actually like what I've played of Modern Warfare (and was very surprised by that), but that doesn't change that it's most definitely an interactive movie.

- Grand Theft Auto?: This one's an oddball since it's sort of two games in one: a sandbox title and, if you actually do the missions, then yes, definitely an interactive movie.

- Starcraft?: Starcraft is 15 years old, so it isn't an example of a modern AAA title in the first place.

- World of Warcraft?: That would probably fall in the "casual" category. It's not an interactive movie AFAIK, but my understanding is that it's not much of a videogame, either. If you ignore the theme, it's closer to "Second Life" or maybe "Words With Friends" than it is to "Warcraft" or a cinematics extravaganza. But the D&D theme admittedly makes that considerably less-than-obvious.

- Is Uncharted an interactive movie?: What I played of the first one (from what I can remember, it was a while ago) didn't seem to be. Although the gameplay did feel a bit weak (FWIW). The demo of the second certainly seemed to fall into "interactive movie" though.

- What about Tomb Raider?: I wouldn't have any idea. When I tried the original fifteen-plus years ago, I was so turned off (within minutes) by the barely-usable controls that I've never paid the series any attention since.

- Mario?: Depends which one. Galaxy, yes. Sunshine, yes to a somewhat lesser extent. 64, not particularly, no (but it was starting to head that direction). New Mario and 8/16-bit Mario, definitely no.

- Zelda?: Skyward Sword, definitely yes, it's Harry Potter with a green suit. The DS one, mostly yes. Twilight Princess and Wind Waker, borderline (I did quite like Wind Waker, though). Pretty much anything else, no.

- Gears of War?: Wouldn't know, I don't have a 360. But if it's anything like Bulletstorm, then yes.

- Bioshock?: From what I could stand to sit through, yes. And from the playthrough video of Bioshock Infinite in the PSN store, yes. (Right from the beginning I was wishing so much the player would kill that side-kick lady so she'd shut the hell up.)

- God of War?: If the first one is any indication, yes. It's borderline "Dragon's Lair without cell shading".
 They have lots of cinematic presentation, but clearly complex and
 involved gameplay.
In the ones I identified as "interactive movie", cinematic presentation deeply permeates the entire experience, gameplay and all. As one example, even during the "gameplay" sections NPCs very frequently won't shut the hell up. "Blah blah blah BLAH BLAH blah blah BLAH." (Side note: Journey, while not much of a game, was a notable breath of fresh air in that regard. I respected the fact that *it* respected *me* enough to not spoon-feed me every inane detail the writer could think of.) Furthermore, any "game" that takes literally 30+ minutes to reach the real meat of uninterrupted gameplay most definitely counts as "interactive movie". This includes, just as a few off-the-top-of-my-head examples: Assassin's Creed 2 (god what a turd, never played the other versions though), Zelda Skyward Sword (takes at least a full 2 hours to reach anything remotely resembling real Zelda gameplay), and Bulletstorm (Was the dialog written by a fifth-grader? And does the redneck NPC ever shut up? And as an unrelated side note: why have we gone back to using view-bob again? FPSes stopped doing that in the late 90's for a very good reason - it was literally nauseating - and still is. If I wanted nauseating camerawork with pre-pubescent dialog, I'd watch a J.J. Abrams movie.)
 
 The volume of games release is decreasing in recent years. This is
 due to a lot of factors, and the industry is suffering at the moment.
Again, perhaps unfounded, but I can't help wondering if the industry's suffering is directly related to the enormous amounts of resources they keep throwing at cinematics and photo-realistic rendering. If the industry has been pushing that stuff so hard...and the industry is suffering...it's time to examine whether or not there might be a connection. Maybe there aren't as many people with $60 worth of interest in such games as they thought. Or maybe there are. But in any case, it's something the industry needs to reflect on, if they aren't doing so already.
 
 Games console generations have a 5-10 year lifespan, so it's not
 exactly an annual investment.
But it's still an up-front cost with a real potential for sticker-shock, and despite Sony's claims to the contrary, their primary use is just games. So it's a non-trivial cost-of-entry which hinders casual-audience adoption.
 And they're cheaper than phones and iPad's amazingly!
Well, those things are outrageously expensive anyway, what with the combination of "palm-sized battery-powered super-computer", plus "telecom greed" (telecom is known for being one of the most anti-consumer, anti-competitive, greed-driven industries in the world), plus Apple's trademark cost inflation (for a significant number of the devices out there, anyway). And in any case, most people already have such devices for non-game purposes. So gaming on them usually has a very low cost-of-entry compared to dedicated gaming devices. Therefore: "casual audience friendly".
 Demos are a known problem, that will be addressed
 by the coming hardware generation.
Ummm...what?!? The current generation is already more than perfectly capable of handling demos just fine. The only problem is that many (obviously not all, but still, "many") AAA titles choose not to do them. You could talk about demo size restrictions, but that's an artificial store-imposed limitation, not a hardware one. Fuck, I've got *entire* full games (plural) on my PS3's HDD with room to spare, and downloading them (over WiFi no less) was no problem (and yes, legitimately). There is absolutely nothing about demos for the next hardware generation *to* address. Either the studios/publishers put them out or they don't. Hell even the previous generation had demos, albeit disc-based ones (which *was* a notable problem in certain ways).
 I think it's only 'niche' by total volume. The percentage of
 'core'/'hardcore' gamers is decreasing, but that's because the overall
 sensis is increasing. There are a lot more gamers now. Girls are
 gamers now! 51% of the population who were not previously in the
 statistics...
 
Perhaps so, an interesting point. But then if that's so, why would the industry be suffering? If there's really that many more real gamers now, ones that like the big-budget cinematic stuff, shouldn't that mean enough increased sales to keep things going well? Or maybe there really are more gamers who like that stuff than before, but an even *greater* increase in the developers' interest?
 So is it the "core" gamers buying the modern AAA/big-budget titles?
 Honestly, I'm not seeing it. From what I can tell, these days
 they're mostly switching over to indie games. As for why that's
 happening, I figure "Core" gamers are core gamers *because* they
 play videogames. Modern AAA/big-budget titles, are *not* videogames
 except in a very loose sense, and core gamers *do* frequently take
 issue with them. Modern AAA/big-budget titles are interactive
 movies, not videogames, because their focus is story, dialog and
 cinematics, not gameplay. So core gamers have been moving *away*
 from AAA/big-budget titles and towards indie games.
Tell me Call of Duty and friends don't sell. They make squillions.
Yea, they do. And so does Porsche. And yet most drivers have never touched a high-performance car and never will. But how much room is there in the market for big-budget games that *do* sell fantastically? Room enough for *some* obviously, the Italian sports cars of the gaming world, but is there really room for much more than a few?
 There are less successful titles than in recent years, and that's
 largely because the industry is sick, and kinda antiquated...
The question then is: What made it sick? My suspicion, and again this is only a suspicion, is that at least part of it is the production of big-budget cinematic extravaganzas overshooting demand. It wouldn't be too surprising for the industry to suffer if they're all trying to shoot the moon, so to speak. And it would seem that they are trying to, from what you've said about top-tier being so competitive.
 You may be right, traditionally self-identified *core* gamers are
 moving indy, because it's an innovative sector that's offering what
 they like about video games. But they've had their place taken by...
 'normal people', you know, the kinds of people that go to the cinema
 and watch movies. There's a lot of them, and they still buy lots of
 games.
Maybe so, but I'm unconvinced that they buy a lot of AAA titles per-person.
 So is it the "casual" crowd buying the modern AAA/big-budget titles?
 Definitely not. They're the ones who tend to be intimidated by 3D
 environments and game controllers and spend their time on Words With
 Friends, Wii Waggle, PopCap, etc., rarely spend much money on
 gaming and rarely venture outside iOS, Android and Web.
What's your point here?
Just saying that I don't think the "casual" gaming crowd is buying a whole ton of AAA cinematic titles. Some, sure, but significantly more than that? I'd be surprised.
 
 I know there is and will always be an audience for the modern
 AAA/big-budget cinematic interactive-movie "games". But I don't see
 where there's a *large non-niche* audience for them. There's maybe
 the multiplayer-FPS enthusiasts, but that's a bit of a niche
 itself. And I don't see a whole lot else. It's the "Italian
 sports-cars" of videogaming: Just a big-budget niche.
Well, a 'niche' that's bigger than the entire film industry is not a niche that one can ignore.
Bigger than the film industry? I know that's true in terms of revenue, and even profits IIRC, but in terms of audience size, ie number of people? I've never heard that claimed, and it would surprise me. Remember, niche doesn't mean small profits, just small audience. And I never said anything about ignoring it ;)
 I don't think the number of big-games players has significantly
 decreased though. It might have peaked, I haven't checked numbers
 like that for a fair while.
 What has happened, is the concept of the 'video games' industry has
 dramatically expanded. Basically everyone is a 'gamer' now, and
 everyone has some 99c games in their pocket.
 It's a whole new industry that is being explored, but since the
 statistics are all conflated into 'the video games industry', I can
 see how you might see it that way.
 
Well, even looking exclusively at Console/PC, I still get the impression that AAA titles are becoming a harder and harder sell. But like I said, I don't have actual figures here, so I could be mistaken.
 The business model is changing shape too, thanks to the indy/casual
 thing. $60 is silly (and I'd love it if I could get games for $60
 btw, we pay $90-$120 here.
Ouch! (Out of curiosity, would that be Europe, Australia, New Zealand, something else? I've heard that New Zealand in particular it can be very difficult to get games quickly and inexpensively. But then I've heard a lot of grumbling about Europe's VAT, too.) I did pay $80 for Conker's Bad Fur Day, back many years ago. And right before its cost came down, too ;) 'Course that high cost was for entirely different reasons (N64) than in your case.
 yanks get fucking everything subsidised!),
 people aren't happy to spend $60 on entertainment anymore, when
 something that can equally entertain them costs 99c.
 I reckon AAA will start to sell in installments/episodes,
Some of them have already been trying out episodic (Half-Life 2, Sonic 4 - the latter is underrated IMO). Actually, from what I can tell, there's been a *lot* of talk about moving to episodic gaming for some time now. So that wouldn't surprise me either. OTOH, I would have expected to see more of it by now already, but maybe there's just been a lot of inertia in the way.
 or start
 leveraging in-game sales of stuff a lot more to reduce the cost of
 entry.
That's definitely been taking off. Don't know if it's working or not, but that sort of thing is all over the PSN Store. I'd be surprised if it isn't working though, because it does seem to make a lot of sense in many ways. Closely related to this, have you read the latest Game Developer Magazine (April 2013)? There's a lot of very interesting, and convincing, discussion about Free-2-Play and F2P-inspired models having a lot of promise for AAA gaming, and some implication that the current (old) AAA model may be a sinking (or at least outdated and slightly leaky) ship. "Money Issues" on page 4 was particularly interesting.
 One thing I've heard reports of, contrary to your assertion that AAA
 is losing participants to the more innovative indy area, is 'casual
 games' are actually bridging previously non-gamers into big games.
Really? That was Nintendo's intended strategy with the Wii 1, but I'd heard they never got much conversion ratio (and I know they wound up alienating a lot of core gamers and developers in the process). The people would buy a Wii just for Wii Sports or Wii Fit and then never buy much else. So that's interesting if the conversion is happening with indie games...and unfortunate for N, as it suggests they should have been more indie-friendly and download-friendly like 360, PSN and Steam. Which I could have told them from the start, but meh, they never listen to me ;)
 You suggest games are becoming
 interactive movies... this isn't an accident, they sell extremely
 well! And I think this trend will continue as a larger percentage of
 the audience are becoming women... I don't think Call of Duty or
 Grand Theft Auto are going anywhere just yet though.
 
That does suggest though that such people still aren't really interested in videogames. They're just interested in the hot new type of movie.
Apr 09 2013
parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 04/09/2013 04:43 PM, Nick Sabalausky wrote:
 - Starcraft?: Starcraft is 15 years old, so it isn't an example of a
    modern AAA title in the first place.
StarCraft II came out a few years ago and sold very well. They also just released the second installment of it within the past month or so, and considering it is essentially an over-priced expansion pack, it also sold very well.
 In the ones I identified as "interactive movie", cinematic presentation
 deeply permeates the entire experience, gameplay and all.
Translation: Wearing your grumpy-old-man goggles, you dismiss games that feature lots of cinematics as "interactive movies", even though there is plenty of core gameplay to be had. There *are* games that are essentially interactive movies, like Heavy Rain for example, or LA Noire, but putting shooters like BioShock Infinite or GTA (when doing the missions) in this category is ridiculous.
Apr 10 2013
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 10 Apr 2013 08:38:26 -0400
Jeff Nowakowski <jeff dilacero.org> wrote:

 On 04/09/2013 04:43 PM, Nick Sabalausky wrote:
 - Starcraft?: Starcraft is 15 years old, so it isn't an example of a
    modern AAA title in the first place.
StarCraft II came out a few years ago and sold very well. They also just released the second installment of it within the past month or so, and considering it is essentially an over-priced expansion pack, it also sold very well.
 In the ones I identified as "interactive movie", cinematic
 presentation deeply permeates the entire experience, gameplay and
 all.
Translation: Wearing your gumpy-old-man goggles, you dismiss games that feature lots of cinematics as "interactive movies", even though there is plenty of core gameplay to be had.
"Dismissing" isn't the right word for it (Although I have gone straight from "teenage angst" to what can be interpreted as "old dude crotchetyness"). Like I said, I do like CoD: Modern Warfare (at least what I've played). I'd also confidently toss the Splinter Cell series in the "interactive movie" boat, and yet that's actually one of my all-time favorite series (at least 1-3, and to a slightly lesser extent 4, wouldn't know about 5). Same goes for the Portal games (although I would have *very much* preferred they had actually included a "fast-forward / skip-ahead" feature for all the scripted sequences. Every movie in existence can handle that no problem, it's a *basic* *expected* feature, why can't a videogame with a whole team of programmers actually manage it? A true FF/Rewind obviously has technical hurdles for real-time cinematics, but a "skip" sure as fuck doesn't). I guess I haven't been entirely clear, but the complaints I do have about what I've been calling "interactive movies" are that: A. I think the non-indie industry has been focusing way too much on them, to the detriment of the medium itself, and possibly the health of the industry (and yes, to the determent of my own opinion on modern videogaming as well). It's strongly analogous to the irrationally high obsession with "3D" in the mid 90's: 3D isn't bad, but it was WAAAAAY over-obsessed, and it certainly isn't the *only* good way to go. A *good* 2D game would have sold well: Rayman and Castlevania: SoTN proved that. The problem was, publishers and developers pulled this notion that "Gamers will only buy 3D" *completely* out of their asses, with absolutely zero meaningful data to back it up, and instead shoveled out load after load of mostly-bad, and mostly-ugly 3D games. I still consider that easily the worst console generation. "Cinematic" is very much the new "3D". Everything still applies, and history is repeating itself. B. Most of them (from what I've seen) are very poorly done. Just to rehash some examples: - Skyward Sword is easily one of the worst Zeldas ever made. Same with the last Metroid (the one from the Ninja Gaiden reboot developers). Personally I thought Metroid Prime 3 had taken the series straight downhill too, but I guess I'm alone in that. - Assassin's Creed (at least if AC2 is any indication) is one of the absolute worst multimedia productions ever created, period. It's just inane BS after inane BS after more inane BS. You may as well watch a soap. - And the first 45 minutes of Bulletstorm is wretched as well. The "walking on the skyscraper's wall" *could* have been absolutely fantastic - *if* there had actually been anything to *do* besides listen to horrible dialog while walking to the next cutscene. Portal and Splinter Cell did their story/dialog/presentation very well (despite Portal's asinine *forcing* of it, which is a royal PITA when you just want to play the puzzles), but quality in "interactive movie" storytelling is extremely rare in general. And the thing is, if you can't do a feature *well*, then it doesn't belong in the finished game, period. I guess I've rambled again clouding my main points but basically: Cinematic/Story/etc is the new 3D: It's not inherently bad, but it's usually done bad, and even if it weren't done badly it's way too heavily focused/obsessed on and over-used, to the detriment of the medium and possibly the industry.
 There *are* games that are essentially interactive movies, like Heavy 
 Rain for example, or LA Noire, but putting shooters like BioShock 
 Infinite or GTA (when doing the missions) in this category is
 ridiculous.
Well yea, Quantic Dream goes WAAAAAY off into the "interactive movie" realm. (Ex: Indigo Prophecy started out looking promising but quickly devolved into one long quicktime event). Quantic Dream is basically the new Digital Pictures or...whoever made Dragon's Lair.

Keep in mind, I'm using "interactive movie" largely for lack of a better term. "Videogame" definitely isn't the right word for them. But at the same time, these "interactive movie" things tend to swing back and forth (within the very same game) between "more of a game than a *true* interactive movie" and "literally *less* interactive than a Hollywood movie, because you can't interact with a cutscene *and* you can rarely fast-forward past it". (And then there's...dumb...shits like Nintendo that *do* put in a skip feature, for *some* cutscenes, and then deliberately *disable* it on any save-game that hasn't gotten at least that far. Seriously, they could write a book on how to be an asshole developer.) And for the record, in case anyone at Valve, Irrational, or Human Head ever reads this: A cutscene that you can walk
Apr 10 2013
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 10.04.2013 19:14, schrieb Nick Sabalausky:
 On Wed, 10 Apr 2013 08:38:26 -0400
 Jeff Nowakowski <jeff dilacero.org> wrote:

 On 04/09/2013 04:43 PM, Nick Sabalausky wrote:
 - Starcraft?: Starcraft is 15 years old, so it isn't an example of a
     modern AAA title in the first place.
StarCraft II came out a few years ago and sold very well. They also just released the second installment of it within the past month or so, and considering it is essentially an over-priced expansion pack, it also sold very well.
 In the ones I identified as "interactive movie", cinematic
 presentation deeply permeates the entire experience, gameplay and
 all.
Translation: Wearing your gumpy-old-man goggles, you dismiss games that feature lots of cinematics as "interactive movies", even though there is plenty of core gameplay to be had.
"Dismissing" isn't the right word for it (Although I have gone straight from "teenage angst" to what can be interpreted as "old dude crotchetyness"). Like I said, I do like CoD: Modern Warfare (at least what I've played). I'd also confidently toss the Splinter Cell series in the "interactive movie" boat, and yet that's actually one of my all-time favorite series (at least 1-3, and to a slightly lesser extent 4, wouldn't know about 5). Same goes for the Portal games (although I would have *very much* preferred they had actually included a "fast-forward / skip-ahead" feature for all the scripted sequences. Every movie in existence can handle that no problem, it's a *basic* *expected* feature, why can't a videogame with a whole team of programmers actually manage it? A true FF/Rewind obviously has technical hurdles for real-time cinematics, but a "skip" sure as fuck doesn't). I guess I haven't been entirely clear, but the complaints I do have about what I've been calling "interactive movies" are that: A. I think the non-indie industry has been focusing way too much on them, to the detriment of the medium itself, and possibly the health of the industry (and yes, to the determent of my own opinion on modern videogaming as well). It's strongly analogous to the irrationally high obsession with "3D" in the mid 90's: 3D isn't bad, but it was WAAAAAY over-obsessed, and it certainly isn't the *only* good way to go. A *good* 2D game would have sold well: Rayman and Castlevania: SoTN proved that. The problem was, publishers and developers pulled this notion that "Gamers will only buy 3D" *completely* out of their asses, with absolutely zero meaningful data to back it up, and instead shoveled out load after load of mostly-bad, and mostly-ugly 3D games. I still consider that easily the worst console generation. "Cinematic" is very much the new "3D". Everything still applies, and history is repeating itself. B. Most of them (from what I've seen) are very poorly done. Just to rehash some examples: - Skyward Sword is easily one of the worst Zeldas ever made. Same with the last Metroid (the one from the Ninja Gaiden reboot developers). Personally I thought Metroid Prime 3 had taken the series straight downhill too, but I guess I'm alone in that. - Assassin's Creed (at least if AC2 is any indication) is one of the absolute worst multimedia productions ever created, period. It's just inane BS after inane BS after more inane BS. You may as well watch a soap. - And the first 45 minutes of Bulletstorm is wretched as well. The "walking on the skyscraper's wall" *could* have been absolutely fantastic - *if* there had actually been anything to *do* besides listen to horrible dialog while walking to the next cutscene. Portal and Splinter Cell did their story/dialog/presentation very well (despite Portal's asinine *forcing* of it, which is a royal PITA when you just want to play the puzzles), but quality in "interactive movie" storytelling is extremely rare in general. And the thing is, if you can't do a feature *well*, then it doesn't belong in the finished game, period. I guess I've rambled again clouding my main points but basically: Cinematic/Story/etc is the new 3D: It's not inherently bad, but it's usually done bad, and even if it weren't done badly it's way too heavily focused/obsessed on and over-used, to the detriment of the medium and possibly the industry.
 There *are* games that are essentially interactive movies, like Heavy
 Rain for example, or LA Noire, but putting shooters like BioShock
 Infinite or GTA (when doing the missions) in this category is
 ridiculous.
Well yea, Quantic Dream goes WAAAAAY off into the "interactive movie" realm. (Ex: Indigo Prophesy started out looking promising but quickly devolved into one long quicktime event). Quantic Dream is basically the new Digital Pictures or...whoever made Dragon's Lair. Keep in mind, I'm using "interactive movie" largely for lack of a better term. "Videogame" definitely isn't the right word for them. But at the same time, these "interactive movie" things tend to swing back and forth (within the very same game) between "more of a game than a *true* interactive movie" and "literally *less* interactive than a Hollywood movie, because you can't interact with a cuscene *and* you can rarely fast-forward past it". (And then there's...dumb...shits like Nintendo that *do* put in a skip feature, for *some* cutscenes, and then deliberately *disable* it on any save-game that hasn't gotten at least that far. Seriously, they could write a book on how to be an asshole developer.) And for the record, in case anyone at Valve, Irrational, or Human Head ever reads this: A cutscene that you can walk
This is what makes me happy while travelling on the bus and train: https://play.google.com/store/apps/details?id=com.larvalabs.gurk -- Paulo
Apr 10 2013
next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 10 Apr 2013 19:35:27 +0200
Paulo Pinto <pjmlp progtools.org> wrote:
 
 This is what makes me happy while travelling on the bus and train:
 
 https://play.google.com/store/apps/details?id=com.larvalabs.gurk
 
Cool. Heh, I love these lines in the description: "this ain't your parent's RPG... it's your grandparent's!" "Oh yes indeed, there will be pixels!" :) Shit, I really need to finish up my current work project and try to get back to some indie game dev again.
Apr 10 2013
parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 17:48:39 UTC, Nick Sabalausky 
wrote:
 Cool. Heh, I love these lines in the description:

 "this ain't your parent's RPG... it's your grandparent's!"
 "Oh yes indeed, there will be pixels!"

 :)

 Shit, I really need to finish up my current work project and 
 try to get
 back to some indie game dev again.
You should probably like some recently founded Kickstarter RPG stars ;)
Apr 10 2013
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 10, 2013 at 07:35:27PM +0200, Paulo Pinto wrote:
 Am 10.04.2013 19:14, schrieb Nick Sabalausky:
[...]
Keep in mind, I'm using "interactive movie" largely for lack of a
better term. "Videogame" definitely isn't the right word for them.
But at the same time, these "interactive movie" things tend to swing
back and forth (within the very same game) between "more of a game
than a *true* interactive movie" and "literally *less* interactive
than a Hollywood movie, because you can't interact with a cuscene
*and* you can rarely fast-forward past it". (And then
there's...dumb...shits like Nintendo that *do* put in a skip feature,
for *some* cutscenes, and then deliberately *disable* it on any
save-game that hasn't gotten at least that far. Seriously, they could
write a book on how to be an asshole developer.) And for the record,
in case anyone at Valve, Irrational, or Human Head ever reads this: A
cutscene that you can walk around in while you wait is still a

This is what makes me happy while travelling on the bus and train: https://play.google.com/store/apps/details?id=com.larvalabs.gurk
[...] Yeah!!! I recently played Gurk II (which according to reviews is even better than the original Gurk), and totally loved it!! It was so nostalgic that it inspired me to fire up my trusty old dosbox and relive the good ole ultima 4&5 days. :-) Granted, I *did* discover to my dismay that the pixel graphics and tinny sounds of the *original* ultima 4&5 are a lot worse than how my memory recalls they were (the pain was somewhat relieved upon installing the graphics upgrade patch) -- but man, the gameplay was excellent. The NPC dialogues were obviously trivial, monster AI was trivially predictable, and there are tons of loopholes that you can exploit -- but the important thing was, it was FUN. I could totally immerse myself in the virtual world and forget about the pixels and loopholes. I'm afraid I can't say the same for most modern games with their fancy 3D graphics, CD-quality sound, superior AI, etc.. The fun factor is just missing, even though all the superficial elements -- graphics, sounds, AIs, storylines -- are all far more developed. Gurk II captures some of the fun gameplay of the original (pre-7) ultimas, and proves that a modern game without 3D graphics and a multimillion budget *can* be fun (and, judging from the reviews, it's selling pretty well too). T -- INTEL = Only half of "intelligence".
Apr 10 2013
parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 10 Apr 2013 10:58:03 -0700
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote:

 On Wed, Apr 10, 2013 at 07:35:27PM +0200, Paulo Pinto wrote:
 
 This is what makes me happy while travelling on the bus and train:
 
 https://play.google.com/store/apps/details?id=com.larvalabs.gurk
[...] Yeah!!! I recently played Gurk II (which according to reviews is even better than the original Gurk), and totally loved it!! It was so nostalgic that it inspired me to fire up my trusty old dosbox and relive the good ole ultima 4&5 days. :-) Granted, I *did* discover to my dismay that the pixel graphics and tinny sounds of the *original* ultima 4&5 are a lot worse than how my memory recalls they were (the pain was somewhat relieved upon installing the graphics upgrade patch) -- but man, the gameplay was excellent. The NPC dialogues were obviously trivial, monster AI was trivially predictable, and there are tons of loopholes that you can exploit -- but the important thing was, it was FUN. I could totally immerse myself in the virtual world and forget about the pixels and loopholes. I'm afraid I can't say the same for most modern games with their fancy 3D graphics, CD-quality sound, superior AI, etc.. The fun factor is just missing, even though all the superficial elements -- graphics, sounds, AIs, storylines -- are all far more developed. Gurk II captures some of the fun gameplay of the original (pre-7) ultimas, and proves that a modern game without 3D graphics and a multimillion budget *can* be fun (and, judging from the reviews, it's selling pretty well too).
This all reminds me, if you have an NDS, or access to one, you may want to try "Retro Game Challenge". Actually, I highly recommend it. (Disclaimer: That's the English version of it. When I played it no English version had been announced, and it looked far too awesome to be something likely to get localized, so I went through the Japanese version, "Game Center CX: Arino's Challenge". So I don't know what differences there may be). It takes you chronologically through the 1980's with a series of brand new 8-bit-style games, all extremely authentic, culminating with an 8-bit-style JRPG. Absolutely fantastic game. I only wish there had been a non-portable version so I could play it on a nice big TV...and that I knew more Japanese so I could actually tell what the hell anyone was saying ;)
Apr 10 2013
prev sibling parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 04/10/2013 01:14 PM, Nick Sabalausky wrote:
 Well yea, Quantic Dream goes WAAAAAY off into the "interactive movie"
 realm.
Because that's what the game is. There's nothing wrong with it if you like it, and many people do.
 Keep in mind, I'm using "interactive movie" largely for lack of a
 better term. "Videogame" definitely isn't the right word for them.
They're games, and they use the video medium. Video games. The rest of your post is mostly just a rant about what you personally like in video games/"interactive movies". You are of course entitled to an opinion, but the grumpy old man ranting gets old.
Apr 10 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/10/13 4:35 PM, Jeff Nowakowski wrote:
 On 04/10/2013 01:14 PM, Nick Sabalausky wrote:
 Well yea, Quantic Dream goes WAAAAAY off into the "interactive movie"
 realm.
Because that's what the game is. There's nothing wrong with it if you like it, and many people do.
 Keep in mind, I'm using "interactive movie" largely for lack of a
 better term. "Videogame" definitely isn't the right word for them.
They're games, and they use the video medium. Video games. The rest of your post is mostly just a rant about what you personally like in video games/"interactive movies". You are of course entitled to an opinion, but the grumpy old man ranting gets old.
FWIW there's this Neal Stephenson novel "Diamond Age" taking place in the future. That book's vision is that all modern movies are "ractive" (short for "interactive") using networked real players and a loose script, whereas old movies are "passive"s and only watched by a few older people.

Andrei
Apr 10 2013
parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 04/10/2013 04:44 PM, Andrei Alexandrescu wrote:
 FWIW there's this Neal Stephenson novel "Diamond Age" taking place in
 the future. That book's vision is that all modern movies are "ractive"
 (short from "interactive") using networked real players and a loose
 script, whereas old movies are "passive"s and only watched by a few
 older people.
Personally I don't think passive movies will ever go away. Many times you just want to relax and view the story instead of being part of it. People have been pitching "interactive TV" for a very long time, but passive TV still dominates.
Apr 10 2013
next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 10 Apr 2013 17:33:51 -0400
Jeff Nowakowski <jeff dilacero.org> wrote:

 On 04/10/2013 04:44 PM, Andrei Alexandrescu wrote:
 FWIW there's this Neal Stephenson novel "Diamond Age" taking place
 in the future. That book's vision is that all modern movies are
 "ractive" (short from "interactive") using networked real players
 and a loose script, whereas old movies are "passive"s and only
 watched by a few older people.
Personally I don't think passive movies will ever go away. Many times you just want to relax and view the story instead of being part of it. People have been pitching "interactive TV" for a very long time, but passive TV still dominates.
I'd tend to agree. I've always been huge on videogames (for whatever definition of "videogame" ;) ), but after all the mental work of coding all day even I'm usually more inclined to just veg out with something passive. Just don't want to have to "do" any more.

'Course, this suggests it may depend on occupation. After a day of rote manual labor, or anything tedious, I'd probably be itching to do something involving thought (but maybe that's just me).

I have noticed that programming and videogames both scratch the same mental itch, at least for me. If I've been doing a lot of one, I'm less motivated to do the other.
Apr 10 2013
next sibling parent "Rob T" <alanb ucora.com> writes:
On Wednesday, 10 April 2013 at 22:02:09 UTC, Nick Sabalausky 
wrote:
 I have noticed that programming and videogames both scratch the 
 same
 mental itch, at least for me. If I've been doing a lot of one, 
 I'm less
 motivated to do the other.
I recently reached that exact same conclusion too, but in my case I don't think they both satisfy the exact same "itch". For example I tend to enjoy FPS games like counter-strike, and even when I'm tired from programming all day, if I drink a beer or two I can play for a few hours no problem. After programming all day, however, I'll tend to avoid RTS games even though I enjoy them. Passive movies are great for a complete shut down, almost like going to sleep, which I sometimes do when watching them.

--rt
Apr 10 2013
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 10, 2013 at 06:02:05PM -0400, Nick Sabalausky wrote:
[...]
 I'd tend to agree. I've always been huge on videogames (for whatever
 definition of "videogame" ;) ), but after all the mental work of code
 all day even I'm usually more inclined to just veg out with something
 passive. Just don't want to have to "do" any more.
 
 'Course, this suggests it may depend on occupation. A day of route
 manual labor, or anything tedious I'd probably be itching to do
 something involving thought (but maybe that's just me).
 
 I have noticed that programming and videogames both scratch the same
 mental itch, at least for me. If I've been doing a lot of one, I'm
 less motivated to do the other.
I wonder if this is why I enjoy retro games more -- they require less concentration and lots of fun can be had for not too much effort. I find that a lot of modern games seem to require a lot of concentration -- keeping track of a convoluted storyline, keeping track of one's 3D surroundings, being on one's toes to react quickly at surprise enemy attacks, etc.. After a full day's worth of coding, that's the last thing I want to be doing. Much better to relax with something that can be played in a more relaxed/casual way. Maybe that's why casual games are such a big thing nowadays. OTOH, though, I find that sometimes I wish to get away from the pain of having to deal with some really stupid code, and I'd escape for a few minutes with some very mentally-challenging games (like block-shuffling puzzles, which according to one analysis[1] are PSPACE-complete, that is, possibly harder than NP-complete problems!). I guess maybe it tickles the same mental itch as coding. :) [1] http://www.antiquark.com/2004/12/complexity-of-sliding-block-puzzles.html T -- If creativity is stifled by rigid discipline, then it is not true creativity.
Apr 10 2013
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 10 Apr 2013 15:29:25 -0700
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote:
 
 I wonder if this is why I enjoy retro games more -- they require less
 concentration and lots of fun can be had for not too much effort. I
 find that a lot of modern games seem to require a lot of
 concentration -- keeping track of a convoluted storyline, keeping
 track of one's 3D surroundings, being on one's toes to react quickly
 at surprise enemy attacks, etc.. After a full day's worth of coding,
 that's the last thing I want to be doing. Much better to relax with
 something that can be played in a more relaxed/casual way.
 
Strange, I find the exact opposite to be true. I always felt this summed it up perfectly: http://semitwist.com/download/img/funny/digitalunrest-2008-09-29.jpg

(That said, I never thought MM9 was *as* hard as people made it out to be. At first it seemed the same as all the older MegaMans, and then it wasn't long before I could get through the whole thing in about an hour. Still one of the best games ever made, though. But if you want a *really* hard MegaMan, try "MegaMan & Bass". I'm totally stuck in that.)

The last 10 or so years, big-budget games have tended to be designed specifically so that anyone can get to the end without much effort. The lack of challenge makes them tedious and boring. For example, the Mario and Zelda games have done nothing but get progressively easier since the 80's (compare the battle system in the original zelda to *any* 3D zelda - the former is an addictive challenge, the latter is mindless button-mashing/waggle and *vastly* easier.) New Mario is fun, but notably easier than Mario 1/2/3/64. And then there's the old Kid Icarus. *Phew!* - that's not for the faint of heart. Most people don't even know that it has zelda/Metroid-like dungeons or horizontal levels because they never got past level 3.

As far as "keeping track of a convoluted storyline", I rarely pay attention to the stories/dialog/characters/etc anyway. There are exceptions (like 2D JRPGs or Disgaea), but most games I just skip through the dialog (9 times out of 10 it's both uninteresting and irrelevant to the gameplay), and when a game doesn't let me skip a cutscene or scripted event I'll just grab a drink or snack or hit the can if I need to, or otherwise just hit "Switch Inputs" and find something not-too-horrible on TV while I wait for the tell-tale sound of a level being loaded off disc.
Apr 10 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Apr 11, 2013 at 02:39:01AM -0400, Nick Sabalausky wrote:
 On Wed, 10 Apr 2013 15:29:25 -0700
 "H. S. Teoh" <hsteoh quickfur.ath.cx> wrote:
 
 I wonder if this is why I enjoy retro games more -- they require
 less concentration and lots of fun can be had for not too much
 effort. I find that a lot of modern games seem to require a lot of
 concentration -- keeping track of a convoluted storyline, keeping
 track of one's 3D surroundings, being on one's toes to react quickly
 at surprise enemy attacks, etc.. After a full day's worth of coding,
 that's the last thing I want to be doing. Much better to relax with
 something that can be played in a more relaxed/casual way.
 
Strange, I find the exact opposite to true. I always felt this summed it up perfectly: http://semitwist.com/download/img/funny/digitalunrest-2008-09-29.jpg (That said, I never thought MM9 was *as* hard as people made it out to be. At first it seemed the same as all the older megaman's, and then it wasn't long before I could get through the whole thing in about an hour. Still one of the best games ever made, though. But if you want a *really* hard MegaMan, try "MegaMan & Bass". I'm totally stuck in that.) The last 10 or so years, big-budget games have tended to be designed specifically so that anyone can get to the end without much effort. The lack of challenge makes them tedious and boring.
OK, now I'm not so sure what I meant anymore, because I find this tedium and boredom really tiring, whereas something like, say, the ancient Lode Runner with its incredibly complicated time-sensitive maneuvers is actually stimulating and, paradoxically enough, relaxing. OTOH, things like Quake and other FPSes I find exhausting, even if they're no more than mindless shoot-everything-that-moves deals. Maybe the difference lies in the simplicity of rules in the older 2D games -- yes they can be challenging but the mechanics are easy to grasp, whereas in 3D environments, the complexity of movement possibilities can be overwhelming.

Big-budget hold-your-hand "games", OTOH, are tiring in another way, in a click-through ads kinda way. I have very little patience for anything with video clips, 'cos I'd rather be doing stuff instead of watching a video (I might as well watch youtube instead, etc.), yet I feel like I can't really get "into" the game if I don't endure through all those clips, 'cos I might miss some interesting game-world exposition or important story twist, etc.. So the result is that it's very tiring.

But maybe this all just reflects my personal biases, and has nothing to do with what is "objectively" tiring / difficult / etc..
 For example, the Mario and Zelda games have done nothing but get
 progressively easier sine the 80's (compare the battle system in the
 original zelda to *any* 3D zelda - the former is an addictive
 challenge, the latter is mindless button-mashing/waggle and *vastly*
 easier.) New Mario is fun, but notably easier than Mario 1/2/3/64. And
 then there's the old Kid Icarus. *Phew!* - that's not for the faint of
 heart. Most people don't even know that it has zelda/Metroid-like
 dungeons or horizontal levels because they never got past level 3.
Hmm. I beat nethack. Several times. I don't know of any other game that is as difficult to beat! But OTOH, its difficulty comes not from hand-eye coordination, but from the block-shuffling-puzzle type of inherent difficulty -- you have all the time in the world to think before making your next move, but your decision could mean the difference between life and death (i.e. the loss of the last 40 hours of gameplay, due to permadeath). I guess personally I prefer that kind of challenge to the how-fast-can-you-react kind.
 As far as "keeping track of a convoluted storyline", I rarely pay
 attention to the stories/dialog/characters/etc anyway. There are
 exceptions (like 2D JRPGs or Disgaea), but most games I just skip
 through the dialog (9 times out of 10 it's both uninteresting and
 irrelevant to the gameplay), and when a game doesn't let me skip a
 cutscene or scripted event I'll just grab a drink or snack or hit the
 can if I need to, or otherwise just hit "Switch Inputs" and find
 something not-too-horrible on TV while I wait for the tell-tale sound
 of a level being loaded off disc.
But you see, that's precisely the kind of thing that wears me out. I feel like I'm not getting the max out of the game if I don't watch all the cutscenes / read all the dialogs, but then I have to endure through the whole thing when it's poorly written and then it's not enjoyable anymore. This is one of the things I really liked about the older Ultimas: the technology was such that dialogs were minimal, but that meant that they got the point across without needing to sit through long cutscenes / sift through convoluted dialogues. The trigger keywords were more-or-less obvious, so you just hit town, chat up the few obviously-important NPCs using the obviously important keywords, get the info you need, and move out and do stuff.

The free-exploration style of the old Ultimas was also something I liked very much. I find sequence-breakers in many modern games very mimesis-breaking, especially when it's something trivial like exploring and beating an area before talking to the NPC who was supposed to tell you to go there, thereby breaking some poorly-written script that assumes you haven't been there yet. Forcefully railroaded games I also find annoying (why is that gigantic boulder sitting there on the road blocking my way for no good reason other than that the game devs don't want me to go there yet? how does talking to an NPC magically make that boulder vanish into thin air?). I much prefer open-exploration games where you have to actively search out stuff and discover what you have to do, rather than just being strung along by the arbitrary sequence the game devs decided must be how the story will pan out.

What *really* cinches it for me is when a well-written storyline is made to unfold *while* allowing free exploration (and multiple possible solution paths) at the same time. This gives me the freedom to plan ahead -- take advantage of the open exploration to prepare for what I anticipate is coming, so that I can beat the boss monster my way. Hidden secret bonuses that can only be found via free exploration are also something I really enjoy.

I guess I just like active entertainment over passive entertainment (I don't even own a TV!).

T

--
Error: Keyboard not attached. Press F1 to continue. -- Yoon Ha Lee, CONLANG
Apr 11 2013
parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Thu, 11 Apr 2013 10:24:14 -0700
"H. S. Teoh" <hsteoh quickfur.ath.cx> wrote:
 On Thu, Apr 11, 2013 at 02:39:01AM -0400, Nick Sabalausky wrote:
 The last 10 or so years, big-budget games have tended to be designed
 specifically so that anyone can get to the end without much effort.
 The lack of challenge makes them tedious and boring.
OK, now I'm not so sure what I meant anymore, because I find this tedium and bore really tiring, whereas something like, say, the ancient Lode Runner with its incredibly complicated time-sensitive maneuvres is actually stimulating and, paradoxically enough, relaxing. OTOH, things like Quake and other FPSes I find exhausting, even if they're no more than mindless shoot-everything-that-moves deals. Maybe the difference lies in the simplicity of rules in the older 2D games -- yes they can be challenging but the mechanics are easy to grasp, whereas in 3D environments, the complexity of movement possibilities can be overwhelming.
Ahh, I see what you mean, and I can relate. Maybe part of it is sensory overload. There's a lot more to take in. And there's more visual/auditory information to process and mentally filter out all the details to reach the "core" elements like "this is an enemy, shoot here", "this is an area of interest, go here", "this is dangerous, avoid" etc. And like you say, freedom of movement.
 Big-budget hold-your-hand "games", OTOH, are tiring in another way,
 in a click-through ads kinda way. I have very little patience for
 anything with video clips, 'cos I rather be doing stuff instead of
 watching a video (I might as well watch youtube instead, etc.), yet I
 feel like I can't really get "into" the game if I don't endure
 through all those clips, 'cos I might miss some interesting
 game-world exposition or important story twist, etc.. So the result
 is that it's very tiring.
 
Interesting points, yea. Personally, I don't feel afraid of missing out on such things unless it's a JRPG (whether action JRPG or menu-based) or it demonstrates a high degree of storytelling quality (*and* grabs my interest) right from the start, like Disgaea, Splinter Cell 1-3, Izuna, or Max Payne. (Just as examples.)
 Hmm. I beat nethack. Several times. I don't know of any other game
 that is as difficult to beat! But OTOH, its difficulty comes not from
 hand-eye coordination, but from the block-shuffling-puzzle type of
 inherent difficulty -- you have all the time in the world to think
 before making your next move, but your decision could mean the
 difference between life and death (i.e. the loss of the last 40 hours
 of gameplay, due to permadeath). I guess personally I prefer that
 kind of challenge to the how-fast-can-you-react kind.
 
I like both kinds :) At least, provided that the "how-fast-can-you-react" also requires active thinking and accurate execution, too, as in a good bullet-hell shmup or MegaMan, Contra, etc.
 But you see, that's precisely the kind of thing that wears me out. I
 feel like I'm not getting the max out of the game if I don't watch all
 the cutscenes / read all the dialogs, but then I have to endure
 through the whole thing when it's poorly written and then it's not
 enjoyable anymore. This is one of the things I really liked about the
 older Ultimas: the technology was such that dialogs were minimal, but
 that meant that they got the point across without needing to sit
 through long cutscenes / sift through convoluted dialogues. The
 trigger keywords were more-or-less obvious, so you just hit town,
 chat up the few obviously-important NPCs using the obviously
 important keywords, get the info you need, and move out and do stuff.
 
I never really played the Ultimas (I've always been drawn more to JRPGs than to the D&D/Tolkien-esque western RPGs). Although I do have a vague recollection of spending a few minutes in some 3D Ultima on DOS. But I do get what you're saying: I *love* Zelda 2-style dialog: "Stop and rest here." "Sorry. I know nothing." "Only the hammer can destroy a roadblock." They cut straight to the chase and then shut up. I *love* that. Then the 16-bit ones expanded a bit and added a nice dash of character, but not overdone and still generally good. But modern NPCs talk the way my Dad does: They'll give you half their life story and entire social and emotional profile before finally getting to the point, and then they'll restate the damn point twenty times. Cliffs notes, people! ;)
 The free-exploration style of the old Ultimas was also something I
 liked very much. I find sequence-breakers in many modern games very
 mimesis-breaking, especially when it's something trivial like
 exploring and beating an area before talking to the NPC who was
 supposed to tell you to go there, thereby breaking some
 poorly-written script that assumes you haven't been there yet.
 Forcefully railroaded games I also find annoying (why is that
 gigantic boulder sitting there on the road blocking my way for no
 good reason other than that the game devs don't want me to go there
 yet? how does talking to an NPC magically make that boulder vanish
 into thin air?). I much prefer open-exploration games where you have
 to actively search out stuff and discover what you have to do, rather
 than just being strung along by the arbitrary sequence the game devs
 decided must be how the story will pan out.
 
Yea. I did get used to things like "talking to the right NPC magically advances time and triggers events" back in the 16-bit days, so I don't personally mind that except when it's done really poorly.

But, actually playing the game *myself*, and using my *own* brain to get through is exactly what I always found compelling about videogames. Ie, *I'm* the one overcoming the obstacles, not the player-character doing it on my behalf. So when modern games present me with a problem and then outright *deny* me the opportunity to actually solve it by solving it *for* me, that kinda pisses me off. It was *supposed* to be interactive, not passive! If your game is just going to solve its own obstacles, then don't deceive me by claiming it's more of an interactive game than a passive movie spiced with some token movement controls. Ever see the second episode of Futurama, with the Moon theme park? Just like Fry, I want a lunar rover, but I just get a patronizing "Whaler's on the Moon" ride instead.

I do like both open-exploration and linear games, but in either case, the game has to let *me* play it. It can't just simply start playing itself whenever it's afraid I might be too dumb to succeed at anything requiring actual thought or skill. That's the *fun* part! The rest is window-dressing.
 What *really* cinches it for me is when a well-written storyline is
 made to unfold *while* allowing free exploration (and multiple
 possible solution paths) at the same time.
*cough* Splinter Cell 3 *wink wink, nudge nudge* Gaming perfection.
 This gives me the freedom
 to plan ahead -- take advantage of the open exploration to prepare
 for what I anticipate is coming, so that I can beat the boss monster
 my way. Hidden secret bonuses that can only be found via free
 exploration is also something I really enjoy.
 
 I guess I just like active entertainment over passive entertainment (I
 don't even own a TV!).
 
Heh :) Even if I absolutely hated passive entertainment, I'd still have a TV just so I wouldn't have to game at my desk or my computer. I spend so much time working on this thing I'd feel like a brain-in-a-vat if it was my entertainment box, too. But I do love a lot of passive entertainment, too (I'm a complete anime addict. And 90's/early-2000's SciFi is pretty damn addictive, too, as are other minimally-dramatic shows like Monk, Hunter, MacGyver.)
Apr 11 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/10/2013 3:02 PM, Nick Sabalausky wrote:
 I have noticed that programming and videogames both scratch the same
 mental itch, at least for me. If I've been doing a lot of one, I'm less
 motivated to do the other.
Oddly for me, programming video games kinda wrecked my enjoyment of them. I keep seeing the man behind the curtain instead of the fantasy :-)
Apr 10 2013
parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 10 Apr 2013 19:42:57 -0700
Walter Bright <newshound2 digitalmars.com> wrote:

 On 4/10/2013 3:02 PM, Nick Sabalausky wrote:
 I have noticed that programming and videogames both scratch the same
 mental itch, at least for me. If I've been doing a lot of one, I'm
 less motivated to do the other.
Oddly for me, programming video games kinda wrecked my enjoyment of them. I keep seeing the man behind the curtain instead of the fantasy :-)
Long ago, when I was an anxious budding indie-game dev (before I got side-tracked on web), I trained myself to analyze games on their various levels: technical, gameplay mechanics, aesthetics, interface, etc. Afterwards, I never did learn how to turn that part of my brain off ;). Personally, though, I find that process entertaining as well, so I haven't found it to hinder my ability to enjoy a good game.

I realize some people may scoff at seeing that last sentence coming from me, but there really are a lot of games that I do enjoy very much overall - even when there are things I think could, or even should, have been done better. I just tend to find "what was done wrong" much more interesting to discuss than "what was done well". Probably because "things done right" are problems that have been solved, whereas "things done wrong" are problems to be solved and signal areas with a ripe potential for improvement.
Apr 10 2013
prev sibling parent "Rob T" <alanb ucora.com> writes:
On Wednesday, 10 April 2013 at 21:33:52 UTC, Jeff Nowakowski 
wrote:
 On 04/10/2013 04:44 PM, Andrei Alexandrescu wrote:
 FWIW there's this Neal Stephenson novel "Diamond Age" taking 
 place in
 the future. That book's vision is that all modern movies are 
 "ractive"
 (short from "interactive") using networked real players and a 
 loose
 script, whereas old movies are "passive"s and only watched by 
 a few
 older people.
Personally I don't think passive movies will ever go away. Many times you just want to relax and view the story instead of being part of it. People have been pitching "interactive TV" for a very long time, but passive TV still dominates.
I agree, 9 x out of 10 I watch a movie precisely because I want to shut my brain down and relax, interacting with a game may be entertaining at times but it is sometimes just too much work when you're very tired. Also we're talking about a specific genre of entertainment that not everyone will enjoy no matter how much effort is put into it or what your age may be. --rt
Apr 10 2013
prev sibling next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 10 Apr 2013 16:35:56 -0400
Jeff Nowakowski <jeff dilacero.org> wrote:

 On 04/10/2013 01:14 PM, Nick Sabalausky wrote:
 Well yea, Quantic Dream goes WAAAAAY off into the "interactive
 movie" realm.
Because that's what the game is. There's nothing wrong with it if you like it, and many people do.
Look, don't twist my words around. I didn't say there was anything wrong about Quantic Dream doing that. Heck, I didn't make any complaint about Quantic Dream at all. Yea, *I* don't like the directions they've been going, and I do think the industry as a whole focuses too much on story/etc, but I never said anything that amounts to "It's wrong that Quantic Dream does it", and that's because I *don't* feel that way about it.
 Keep in mind, I'm using "interactive movie" largely for lack of a
 better term. "Videogame" definitely isn't the right word for them.
They're games,
For many (admittedly, not all) of them, I really don't believe "games" is an accurate term (Don't misinterpret that into a statement of "Only true 'games' are legitimate" because I never said such a thing.) They have interactive sections, and they are entertainment, but being interactive entertainment does not inherently imply "game". Keep in mind, even sandbox titles, which are definitely not remotely "interactive movie" or cinematic at all (at least any of the ones I've seen), have long been debated as to whether or not they are "games". And note that nobody ever said that was a bad thing. It might be a bad thing if the industry focused too heavily on them, but that would be a completely different complaint.
 and they use the video medium. Video games. The rest
 of your post is mostly just a rant about what you personally like in
 video games/"interactive movies". You are of course entitled to an
 opinion, but the grumpy old man ranting gets old.
I do keep venturing into side-topics (so I like to critique media, so what?), but I get the impression that you're consistently trying to twist my main points around into some nastiness that I'm not actually saying, and make me out to be some "hates everything" grouch when I've very clearly *praised* certain things too, even certain "interactive movies" (for lack of a better term) right here in this very sub-thread.

And really, is it so damn horrible to have and voice a negative opinion on something? Let's all pretend everything in the world is objectively wonderful, because only optimism and compliments should ever be tolerated! Sheesh.
Apr 10 2013
next sibling parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 04/10/2013 05:22 PM, Nick Sabalausky wrote:
 For many (admittedly, not all) of them, I really don't believe "games"
 is an accurate term (Don't misinterpret that into a statement of "Only
 true 'games' are legitimate" because I never said such a thing.)
But that's essentially what you *are* saying by downplaying the gameplay that lies at the heart of the "interactive movies" you've used as examples. It's the "No True Scotsman" fallacy. Let's take a statement from your original post:

"Modern AAA/big-budget titles are interactive movies, not videogames, because their focus is story, dialog and cinematics, not gameplay."

Which is untrue when it comes to games like BioShock or GTA. At the end of the day both games are mostly shooters along with other gameplay elements (like driving in GTA), and you will spend most of your time playing the game and not watching cinematics. I gave you a canonical example of what would be an interactive movie, and you tried to wave it away because it really was an interactive movie.
 It might be a bad thing if the industry focused too heavily on them,
 but that would be a completely different complaint.
Which has been the essence of your complaint, based on how games used to be and your particular tastes, sounding a lot like a grumpy old man who thinks the industry is suffering because they don't make them like they used to: "Maybe I'm just projecting my own tastes into this, or maybe this is just because I don't have sales/profits/etc charts for the last 10-20 years to examine, but lately I'm finding it difficult to believe that "AAA" games aren't becoming (or already) a mere niche, much like high-performance sports cars. (Ie, big money, but small market.) Part of this is because, as I see it, the "big/AAA games" *as they used to exist* up until around the early 2000's don't seem to be around much anymore."
 And really, is it so damn horrible to have and voice a negative opinion
 on something?
Not at all, but when the constant refrain is grumpy-old-man ranting, it is pretty horrible.
Apr 10 2013
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 10 Apr 2013 18:52:58 -0400
Jeff Nowakowski <jeff dilacero.org> wrote:

 On 04/10/2013 05:22 PM, Nick Sabalausky wrote:
 For many (admittedly, not all) of them, I really don't believe
 "games" is an accurate term (Don't misinterpret that into a
 statement of "Only true 'games' are legitimate" because I never
 said such a thing.)
But that's essentially what you *are* saying by downplaying the gameplay that lies at the heart of the "interactive movies" you've used as examples.
That's because the heart of such games *isn't* the gameplay, it's the storytelling. I'm not downplaying anything that the developers themselves aren't already downplaying.
 It's the "No True Scotsman" fallacy.
No, you're just very persistent in trying to turn it into the "No True Scotsman" fallacy. I'm merely using terminology to distinguish between story-driven titles and gameplay-driven titles. *YOU'RE* the one who's falsely insisting that what I meant was "Only the one type is legitimate", despite my numerous statements to the contrary. How many times do I have to tell you in various wordings, "I'm *not* using 'interactive movie' pejoratively" before you'll stop trying to tell me what I meant?
 Let's take a
 statement from your original post:
 
 "Modern  AAA/big-budget titles are interactive movies, not
 videogames, because their focus is story, dialog and cinematics, not
 gameplay."
 
 Which is untrue when it comes to games like BioShock or GTA. At the
 end of the day both games are mostly shooters along with other
 gameplay elements (like driving in GTA), and you will spend most of
 your time playing the game and not watching cinematics.
So we disagree on the categorization of a few titles. Big freaking deal.
 I gave you a
 canonical example of what would be an interactive movie, and you
 tried to wave it away because it really was an interactive movie.
 
That's a complete mischaracterization, and I find it interesting that you've claimed that while *completely* ignoring my very clear statement of: "Keep in mind, I'm using "interactive movie" largely for lack of a better term." Yes, obviously Heavy Rain is a canonical example of "interactive movie", and for goodness sake, I *AGREED* with you and yet you're still complaining.
 It might be a bad thing if the industry focused too heavily on them,
 but that would be a completely different complaint.
Which has been the essence of your complaint,
Now you're just flat-out quoting me out-of-context. Here it is with the proper context re-added:
Keep in mind, even sandbox titles, which are definitely not
remotely "interactive movie" or cinematic at all (at least any
of the ones I've seen), have long been debated as to whether or
not they are "games". And note that nobody ever said that was a
bad thing. It might be a bad thing if the industry focused too
heavily on them, but that would be a completely different complaint.
What that means when it's *not* deliberately twisted around is:
 The following are two completely *different* claims:

 A. Not being a "game" is an inherently bad thing.

 B. Too much industry-wide focus on XXXX (for whatever XXXX) is a
 bad thing.

 I am claiming B and *NOT* A. Stop trying to tell me I'm claiming A.
See?
 based on how games used
 to be and your particular tastes, sounding a lot like a grumpy old
 man who thinks the industry is suffering because they don't make them
 like they used to:
 
 "Maybe I'm just projecting my own tastes into this, or maybe this is 
 just because I don't have sales/profits/etc charts for the last 10-20 
 years to examine, but lately I'm finding it difficult to believe that 
 "AAA" games aren't becoming (or already) a mere niche, much like 
 high-performance sports cars. (Ie, big money, but small market.)
 
 Part of this is because, as I see it, the "big/AAA games" *as they
 used to exist* up until around the early 2000's don't seem to be
 around much anymore."
 
Oh for crap's sake. Yes, newer AAA/big-business games, on average, *do* direct significantly more of their emphasis on story/dialog/cinematic feel/etc than older ones. I was being diplomatic before, but that's really undeniable. Do you think all that comes at no cost in development resources? (Rhetorical, of course. I'm pointing out it's rhetorical so I don't get accused of hyperbole or of actually suggesting that you did think it didn't cost extra resources.) So that requires more sales for sustainability, and then I went on with my reasoning about diminishing audience - clearly marked with disclaimers about my lack of certainty (which you've conveniently quoted for me and also conveniently ignored). And now you come along, slap the big generic "grumpy old man" "don't make them like they used to" labels over the whole thing, and now I'm supposed to believe not only that your "poisoning the well" tactics somehow *aren't* a logical fallacy, but also that I'm the one being categorically dismissive?
 And really, is it so damn horrible to have and voice a negative
 opinion on something?
Not at all, but when the constant refrain is grumpy-old-man ranting, it is pretty horrible.
Convenient then how the negative opinions just happen to be of your horrible grumpy-old-man variety rather than the types you would accept as the "not at all horrible" negative opinions. Next time I'll make sure anything I dislike isn't something you'll decide to imagine a grumpy old man might agree with. True, I admitted to some grumpy-old-man-ness, but I'm not the one abusing it for ad hominem ammunition.
Apr 11 2013
parent reply Jeff Nowakowski <jeff dilacero.org> writes:
On 04/11/2013 04:17 AM, Nick Sabalausky wrote:
 No, you're just very persistent in trying to turn it into the "No True
 Scotsman" fallacy. I'm merely using terminology to distinguish between
 story-driven titles and gameplay-driven titles.
Then you could call them "story-driven games" instead of "interactive movies", and also acknowledge that the gameplay element is still a strong component. Your insistence on denying the massive amounts of gaming elements that are still part of these titles shows you have an ax to grind, backed up by the fact that you even started your argument by saying your personal tastes may have been informing your theories.
 So we disagree on the categorization of a few titles. Big freaking deal.
Since it's the heart of your argument, it is a big deal.
 Yes, obviously Heavy Rain is a canonical example of "interactive
 movie", and for goodness sake, I *AGREED* with you and yet you're still
 complaining.
You just have a funny way of agreeing, what I'll call disagreeable agreeing.
 Oh for crap's sake. Yes, newer AAA/big-business games, on average, *do*
 direct significantly more of their emphasis on story/dialog/cinematic
 feel/etc than older ones.
Yes, there's no doubt about that, and do you know *why* they do this? It's because, just like movies, these big budget cinematic games tend to sell a whole lot more, both in quantity and dollar volume. And just like the movies, it's also a big risk. But they are still games, and it's the gamers who flock to these blockbuster titles. As an aside, the interesting thing about GTA, especially GTA3, is that the budget wasn't about the movie elements, of which there were few. It was about creating an immersive *environment*. It's really the artwork that costs so much money. There was also a story arc, but you can find stories in games going back decades. As to why the industry is "sick", in Manu's terms, it's probably just competition with other forms of entertainment given the mobile explosion. The games industry did very well post 2000, despite the move to cinematic experiences.
 And now you come along, slap the big generic "grumpy old man" "don't
 make them like they used to" labels over the whole thing, and now I'm
 supposed to believe not only that your "poisoning the well" tactics
 somehow *aren't* a logical fallacy, but also that I'm the one being
 categorically dismissive?
Yet your pet theory does amount to how they don't make them like they used to, and maybe that's the reason the industry is failing, which sounds a lot like a grumpy-old-man complaint, doesn't it? Along with your usual ranting, of course. Last post for me.
Apr 11 2013
parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Thu, 11 Apr 2013 06:57:05 -0400
Jeff Nowakowski <jeff dilacero.org> wrote:

 Your insistence on denying the massive amounts of 
 gaming elements that are still part of these titles shows you have an
 ax to grind,
 
Generic gameplay is not "massive amounts of gaming elements", but that's not what you're *really* interested in discussing is it? You're *still* starting from your purely-fabricated assertion that I'm just trying to be nasty, which *you've* decided to be true *solely because* you've decided it to be true. Then you blatantly ignore everything I say to the contrary and repeatedly use your own personal attacks as *their own* proof. Clearly you're not interested in even attempting a remotely rational discussion. The only thing you care to do is repeatedly bang your tired old "You're being a grumpy old man" drum of a complete non-argument. I'm sorry I've given you the benefit of the doubt as to your true intent in all this and actually read all of your asinine claims up to now, but I'm not bothering reading the rest of this clearly twisted post of yours, nor will I be reading anything more from you in this thread. Maybe you'll be willing to be sensible in other discussions.
Apr 11 2013
prev sibling parent "Zach the Mystic" <reachzach gggggmail.com> writes:
On Wednesday, 10 April 2013 at 21:22:30 UTC, Nick Sabalausky 
wrote:
 On Wed, 10 Apr 2013 16:35:56 -0400
 Jeff Nowakowski <jeff dilacero.org> wrote:
 Keep in mind, I'm using "interactive movie" largely for lack 
 of a
 better term. "Videogame" definitely isn't the right word for 
 them.
They're games,
For many (admittedly, not all) of them, I really don't believe "games" is an accurate term (Don't misinterpret that into a statement of "Only true 'games' are legitimate" because I never said such a thing.) They have interactive sections, and they are entertainment, but being interactive entertainment does not inherently imply "game". Keep in mind, even sandbox titles, which are definitely not remotely "interactive movie" or cinematic at all (at least any of the ones I've seen), have long been debated as to whether or not they are "games". And note that nobody ever said that was a bad thing. It might be a bad thing if the industry focused too heavily on them, but that would be a completely different complaint.
I was frustrated with the all-inclusive term "videogame" until I realized that spoken languages (not to mention programming ones) change over time. The technical definition of "game" is one thing, but if a language starts using a term for something else, eventually that just becomes the definition. I think the original reason it caught on was because video games have a childlike wonder about them which reminds people of "playing". But now that the term's caught on, it's not going away. Therefore video games need not be games, in the traditional sense that they must have rules. All life is a game... and the people are merely players! That's the new sense of the word I think.
Apr 10 2013
prev sibling parent Manu <turkeyman gmail.com> writes:
On 11 April 2013 06:35, Jeff Nowakowski <jeff dilacero.org> wrote:

 On 04/10/2013 01:14 PM, Nick Sabalausky wrote:

 Well yea, Quantic Dream goes WAAAAAY off into the "interactive movie"
 realm.
Because that's what the game is. There's nothing wrong with it if you like it, and many people do.
My girlfriend played through Heavy Rain recently, and really liked it. Keep in mind, I'm using "interactive movie" largely for lack of a
 better term. "Videogame" definitely isn't the right word for them.
They're games, and they use the video medium. Video games. The rest of your post is mostly just a rant about what you personally like in video games/"interactive movies". You are of course entitled to an opinion, but the grumpy old man ranting gets old.
It's possible they're closer to the traditional term 'video game' than most that came before ;) .. We finally got there!
Apr 10 2013
prev sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 9 Apr 2013 20:15:53 +1000
Manu <turkeyman gmail.com> wrote:
 On 9 April 2013 11:32, Nick Sabalausky
 I can't help wondering how big the "big games" world really is
 anyway, though. I know there's huge sums of money involved, both
 cost and revenue, and lots of developers, but...well, let me put it
 this way:

 Maybe I'm just projecting my own tastes into this, or maybe this is
 just because I don't have sales/profits/etc charts for the last
 10-20 years to examine, but lately I'm finding it difficult to
 believe that "AAA" games aren't becoming (or already) a mere niche,
 much like high-performance sports cars. (Ie, big money, but small
 market.)

 Part of this is because, as I see it, the "big/AAA games" *as they
 used to exist* up until around the early 2000's don't seem to be
 around much anymore. The big business development companies have,
 for the most part, split their non-sports product lines into two
 main areas:

 1. Mobile, Casual IP tie-ins, "Free-2-Play", etc.
 2. Interactive movies.
Where is Call of Duty? Grand Theft Auto? Starcraft? World of Warcraft? Is Uncharted an interactive movie? What about Tomb Raider? Mario? Zelda? Gears of War? Bioshock? They have lots of cinematic presentation, but clearly complex and involved gameplay. God of War?
First of all, I should emphasize that I'm not making an actual statement of "big/AAA titles have a tiny audience". Just that, due to a combination of factors, I can't help getting the feeling that they're either headed towards being niche, or are perhaps at least somewhat closer to niche than they're typically considered. Like I said previously, these could very well be unfounded impressions. Secondly, with those "two main categories" I pointed out, I did say "for the *most* part". Clearly there are exceptions. With that in mind:

- Call of Duty?: Can't speak for the original series, but for Modern Warfare, yes, most definitely. I do actually like what I've played of Modern Warfare (and was very surprised by that), but that doesn't change that it's most definitely an interactive movie.

- Grand Theft Auto?: This one's an oddball since it's sort of two games in one: A sandbox title and, if you actually do the missions, then yes, definitely an interactive movie.

- Starcraft?: Starcraft is 15 years old, so it isn't an example of a modern AAA title in the first place.

- World of Warcraft?: That would probably fall in the "casual" category. It's not an interactive movie AFAIK, but my understanding is that it's not much of a videogame, either. If you ignore the theme, it's closer to "Second Life" or maybe "Words With Friends" than it is to "Warcraft" or a cinematics extravaganza. But the D&D theme admittedly makes that considerably less-than-obvious.

- Is Uncharted an interactive movie?: What I played of the first one (from what I can remember, it was a while ago) didn't seem to be. Although the gameplay did feel a bit weak (FWIW). The demo of the second certainly seemed to fall into "interactive movie" though.

- What about Tomb Raider?: I wouldn't have any idea. When I tried the original fifteen-plus years ago, I was so turned off (within minutes) by the barely-usable controls that I've never paid the series any attention since.

- Mario?: Depends which one. Galaxy, yes. Sunshine, yes to a somewhat lesser extent. 64, not particularly, no (but it was starting to head that direction). New Mario and 8/16-bit Mario, definitely no.

- Zelda?: Skyward Sword, definitely yes, it's Harry Potter with a green suit. The DS one, mostly yes. Twilight Princess and Wind Waker, borderline (I did quite like Wind Waker, though). Pretty much anything else, no.

- Gears of War?: Wouldn't know, I don't have a 360. But if it's anything like Bulletstorm, then yes.

- Bioshock?: From what I could stand to sit through, yes. And from the playthrough video of Bioshock Infinity in the PSN store, yes. (Right from the beginning I was wishing so much the player would kill that side-kick lady so she'd shut the hell up.)

- God of War?: If the first one is any indication, yes. It's borderline "Dragon's Lair without cell shading".
 They have lots of cinematic presentation, but clearly complex and
 involved gameplay.
In the ones I identified as "interactive movie", cinematic presentation deeply permeates the entire experience, gameplay and all. As one example, even during the "gameplay" sections NPCs very frequently won't shut the hell up. "Blah blah blah BLAH BLAH blah blah BLAH." (Side note: Journey, while not much of a game, was a notable breath of fresh air in that regard. I respected the fact that *it* respected *me* enough to not spoon-feed me every inane detail the writer could think of.) Furthermore, any "game" that takes literally 30+ minutes to reach the real meat of uninterrupted gameplay most definitely counts as "interactive movie". This includes, just as a few off-the-top-of-my-head examples: Assassin's Creed 2 (god what a turd, never played the other versions though), Zelda Skyward Sword (takes at least a full 2 hours to reach anything remotely resembling real Zelda gameplay), and Bulletstorm (Was the dialog written by a fifth-grader? And does the redneck NPC ever shut up? And as an unrelated side note: why have we gone back to using view-bob again? FPSes stopped doing that in the late 90's for a very good reason - it was literally nauseating - and still is. If I wanted nauseating camerawork with pre-pubescent dialog, I'd watch a J.J. Abrams movie.)
 
 The volume of games release is decreasing in recent years. This is
 due to a lot of factors, and the industry is suffering at the moment.
Again, perhaps unfounded, but I can't help wondering if the industry's suffering is directly related to the enormous amounts of resources they keep throwing at cinematics and photo-realistic rendering. If the industry has been pushing that stuff so hard...and the industry is suffering...it's time to examine whether or not there might be a connection. Maybe there aren't as many people with $60 worth of interest in such games as they thought. Or maybe there are. But in any case, it's something the industry needs to reflect on, if they aren't doing so already.
 
 Games console generations have a 5-10 year lifespan, so it's not
 exactly an annual investment.
But it's still an up-front cost with a real potential for sticker-shock, and despite Sony's claims to the contrary, their primary use is just games. So it's a non-trivial cost-of-entry which hinders casual-audience adoption.
 And they're cheaper than phones and iPad's amazingly!
Well, those things are outrageously expensive anyway, what with the combination of "palm-sized battery-powered super-computer", plus "telecom greed" (telecom is known for being one of the most anti-consumer, anti-competitive, greed-driven industries in the world), plus Apple's trademark cost inflation (for a significant number of the devices out there, anyway). And in any case, most people already have such devices for non-game purposes. So gaming on them usually has a very low cost-of-entry compared to dedicated gaming devices. Therefore: "casual audience friendly".
 Demos are a known problem, that will be addressed
 by the coming hardware generation.
Ummm...what?!? The current generation is already more than perfectly capable of handling demos just fine. The only problem is that many (obviously not all, but still, "many") AAA titles choose not to do them. You could talk about demo size restrictions, but that's an artificial store-imposed limitation, not a hardware one. Fuck, I've got *entire* full games (plural) on my PS3's HDD with room to spare, and downloading them (over WiFi no less) was no problem (and yes, legitimately). There is absolutely nothing about demos for the next hardware generation *to* address. Either the studios/publishers put them out or they don't. Hell even the previous generation had demos, albeit disc-based ones (which *was* a notable problem in certain ways).
 I think it's only 'niche' by total volume. The percentage of
 'core'/'hardcore' gamers is decreasing, but that's because the overall
 census is increasing. There are a lot more gamers now. Girls are
 gamers now! 51% of the population who were not previously in the
 statistics...
 
Perhaps so, an interesting point. But then if that's so, why would the industry be suffering? If there's really that many more real gamers now, ones that like the big-budget cinematic stuff, shouldn't that mean enough increased sales to keep things going well? Or maybe there really are more gamers who like that stuff than before, but an even *greater* increase in the developers' interest?
 So is it the "core" gamers buying the modern AAA/big-budget titles?
 Honestly, I'm not seeing it. From what I can tell, these days
 they're mostly switching over to indie games. As for why that's
 happening, I figure "Core" gamers are core gamers *because* they
 play videogames. Modern AAA/big-budget titles, are *not* videogames
 except in a very loose sense, and core gamers *do* frequently take
 issue with them. Modern AAA/big-budget titles are interactive
 movies, not videogames, because their focus is story, dialog and
 cinematics, not gameplay. So core gamers have been moving *away*
 from AAA/big-budget titles and towards indie games.
Tell me Call of Duty and friends don't sell. They make squillions.
Yea, they do. And so does Porsche. And yet most drivers have never touched a high-performance car and never will. But how much room is there in the market for big-budget games that *do* sell fantastically? Room enough for *some* obviously, the Italian sports cars of the gaming world, but is there really room for much more than a few?
 There are less successful titles than in recent years, and that's
 largely because the industry is sick, and kinda antiquated...
The question then is: What made it sick? My suspicion, and again this is only a suspicion, is that at least part of it is the production of big-budget cinematic extravaganzas overshooting demand. It wouldn't be too surprising for the industry to suffer if they're all trying to shoot the moon, so to speak. And it would seem that they are trying to, from what you've said about top-tier being so competitive.
 You may be right, traditionally self-identified *core* gamers are
 moving indy, because it's an innovative sector that's offering what
 they like about video games. But they've had their place taken by...
 'normal people', you know, the kinds of people that go to the cinema
 and watch movies. There's a lot of them, and they still buy lots of
 games.
Maybe so, but I'm unconvinced that they buy a lot of AAA titles per-person.
 So is it the "casual" crowd buying the modern AAA/big-budget titles?
 Definitely not. They're the ones who tend to be intimidated by 3D
 environments and game controllers and spend their time on Words With
 Friends, Wii Waggle, PopCap, etc., rarely spend much money on
 gaming and rarely venture outside iOS, Android and Web.
What's your point here?
Just saying that I don't think the "casual" gaming crowd is buying a whole ton of AAA cinematic titles. Some, sure, but significantly more than that? I'd be surprised.
 
 I know there is and will always be an audience for the modern
 AAA/big-budget cinematic interactive-movie "games". But I don't see
 where there's a *large non-niche* audience for them. There's maybe
 the multiplayer-FPS enthusiasts, but that's a bit of a niche
 itself. And I don't see a whole lot else. It's the "Italian
 sports-cars" of videogaming: Just a big-budget niche.
Well, a 'niche' that's bigger than the entire film industry is not a niche that one can ignore.
Bigger than the film industry? I know that's true in terms of revenue, and even profits IIRC, but in terms of audience size, ie number of people? I've never heard that claimed, and it would surprise me. Remember, niche doesn't mean small profits, just small audience. And I never said anything about ignoring it ;)
 I don't think the number of big-games players has significantly
 decreased though. It might have peaked, I haven't checked numbers
 like that for a fair while.
 What has happened, is the concept of the 'video games' industry has
 dramatically expanded. Basically everyone is a 'gamer' now, and
 everyone has some 99c games in their pocket.
 It's a whole new industry that is being explored, but since the
 statistics are all conflated into 'the video games industry', I can
 see how you might see it that way.
 
Well, even looking exclusively at Console/PC, I still get the impression that AAA titles are becoming a harder and harder sell. But like I said, I don't have actual figures here, so I could be mistaken.
 The business model is changing shape too, thanks to the indy/casual
 thing. $60 is silly (and I'd love it if I could get games for $60
 btw, we pay $90-$120 here.
Ouch! (Out of curiosity, would that be Europe, Australia, New Zealand, something else? I've heard that New Zealand in particular it can be very difficult to get games quickly and inexpensively. But then I've heard a lot of grumbling about Europe's VAT, too.) I did pay $80 for Conker's Bad Fur Day, back many years ago. And right before its cost came down, too ;) 'Course that high cost was for entirely different reasons (N64) than in your case.
 yanks get fucking everything subsidised!),
 people aren't happy to spend $60 on entertainment anymore, when
 something that can equally entertain them costs 99c.
 I reckon AAA will start to sell in installments/episodes,
Some of them have already been trying out episodic (Half-Life 2, Sonic 4 - the latter is underrated IMO). Actually, from what I can tell, there's been a *lot* of talk about moving to episodic gaming for some time now. So that wouldn't surprise me either. OTOH, I would have expected to see more of it by now already, but maybe there's just been a lot of inertia in the way.
 or start
 leveraging in-game sales of stuff a lot more to reduce the cost of
 entry.
That's definitely been taking off. Don't know if it's working or not, but that sort of thing is all over the PSN Store. I'd be surprised if it isn't working though, because it does seem to make a lot of sense in many ways. Closely related to this, have you read the latest Game Developer Magazine (April 2013)? There's a lot of very interesting, and convincing, discussion about Free-2-Play and F2P-inspired models having a lot of promise for AAA gaming, and some implication that the current (old) AAA model may be a sinking (or at least outdated and slightly leaky) ship. "Money Issues" on page 4 was particularly interesting.
 One thing I've heard reports of, contrary to your assertion that AAA
 is losing participants to the more innovative indy area, is 'casual
 games' are actually bridging previously non-gamers into big games.
Really? That was Nintendo's intended strategy with the Wii 1, but I'd heard they never got much conversion ratio (and I know they wound up alienating a lot of core gamers and developers in the process). The people would buy a Wii just for Wii Sports or Wii Fit and then never buy much else. So that's interesting if the conversion is happening with indie games...and unfortunate for N, as it suggests they should have been more indie-friendly and download-friendly like 360, PSN and Steam. Which I could have told them from the start, but meh, they never listen to me ;)
 You suggest games are becoming
 interactive movies... this isn't an accident, they sell extremely
 well! And I think this trend will continue as a larger percentage of
 the audience are becoming women... I don't think Call of Duty or
 Grand Theft Auto are going anywhere just yet though.
 
That does suggest though that such people still aren't really interested in videogames. They're just interested in the hot new type of movie.
Apr 09 2013
parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 9 Apr 2013 16:44:53 -0400
Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> wrote:
 
 [...giant snip...]
Sorry 'bout the duplicate post. My client seems to have gone haywire.
Apr 09 2013
prev sibling parent reply "Rob T" <alanb ucora.com> writes:
On Monday, 8 April 2013 at 08:21:06 UTC, Manu wrote:
 The C++ state hasn't changed though. We still avoid virtual 
 calls like the
 plague.
 One of my biggest design gripes with D, hands down, is that 
 functions are
 virtual by default. I believe this is a critical mistake, and 
 the biggest
 one in the language by far.
My understanding of this is that while all of your class functions will be virtual by default, the compiler will reduce them to non-virtual unless you actually override them, and to override by mistake is difficult because you have to specify the "override" keyword to avoid a compiler error. I'd like to see that understanding confirmed as it was only implied in here: http://dlang.org/overview.html For extra safety you have to specify "final" which would be a pain if that's what you want by default, but I'm not so sure it's really necessary if the compiler really does optimize virtual functions away. BTW, the red code/green code concept sounds like the most promising route towards a generalized solution. I'll try and find the time to watch it as well. --rt
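For reference, a minimal sketch of the semantics being debated here (hypothetical names, current compiler behaviour assumed):

class Widget
{
    int w, h;

    // Virtual by default: any derived class may override this.
    int area() { return w * h; }

    // Explicit opt-out: cannot be overridden, can be called/inlined directly.
    final int perimeter() { return 2 * (w + h); }
}

class Square : Widget
{
    // 'override' must be stated; misspelling the name is a compile error
    // because there would be nothing to override.
    override int area() { return w * w; }
}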
Apr 08 2013
next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 09 Apr 2013 05:09:05 +0200
"Rob T" <alanb ucora.com> wrote:
 BTW, the red code/green code concept sounds like the most
 promising route towards a generalized solution. I'll try and find
 the time to watch it as well.
From what I could tell from a brief glance, it sounded like it's more comparable to @system/@trusted rather than @system/@safe. Still, it could be useful, and then compiler-verified "super-greens" (like @safe) for certain things could potentially be created and tied into it.
Apr 08 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-04-09 05:09, Rob T wrote:

 My understanding of this is that while all of your class functions will
 be virtual by default, the compiler will reduce them to non-virtual
 unless you actually override them, and to override by mistake is
 difficult because you have to specify the "override" keyword to avoid a
 compiler error.
Currently DMD does not devirtualize (or whatever it's called) methods. -- /Jacob Carlborg
Apr 08 2013
prev sibling next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Tuesday, 9 April 2013 at 03:09:09 UTC, Rob T wrote:
 My understanding of this is that while all of your class 
 functions will be virtual by default, the compiler will reduce 
 them to non-virtual unless you actually override them, and to 
 override by mistake is difficult because you have to specify 
 the "override" keyword to avoid a compiler error.

 I'd like to see that understanding confirmed as it was only 
 implied in here:
 http://dlang.org/overview.html

 For extra safety you have to specify "final" which would be a 
 pain if that's what you want by default, but I'm not so sure 
 it's really necessary if the compiler really does optimize 
 virtual functions away.

 BTW, the red code/green code concept sounds like the most 
 promising route towards a generalized solution. I'll try and 
 find the time to watch it as well.

 --rt
Slightly the other way around. "override" only makes sure that you have something to override. You can omit it and virtual dispatch will still happen with no error. The error happens only when you mark with override a method which does not exist in the base class / interface. "Virtuality" can be optimized away for final methods and for symbols that don't get exposed for linkage (so that the compiler can check all sources and verify that no override happens). It is not done in dmd currently, of course.
Apr 09 2013
parent reply Timothee Cour <thelastmammoth gmail.com> writes:
 You can omit it and virtual dispatch will still happen with no error. Error
happens only when you mark with override method which does not exist in base
class / interface.
not anymore: CT error when doing so: Deprecation: overriding base class function without using override attribute is deprecated On Tue, Apr 9, 2013 at 1:00 AM, Dicebot <m.strashun gmail.com> wrote:
 On Tuesday, 9 April 2013 at 03:09:09 UTC, Rob T wrote:
 My understanding of this is that while all of your class functions will be
 virtual by default, the compiler will reduce them to non-virtual unless you
 actually override them, and to override by mistake is difficult because you
 have to specify the "override" keyword to avoid a compiler error.

 I'd like to see that understanding confirmed as it was only implied in
 here:
 http://dlang.org/overview.html

 For extra safety you have to specify "final" which would be a pain if
 that's what you want by default, but I'm not so sure it's really necessary
 if the compiler really does optimize virtual functions away.

 BTW, the red code/green code concept sounds like the most promising route
 towards a generalized solution. I'll try and find the time to watch it as
 well.

 --rt
Slightly other way around. "override" only makes sure that you have something to override. You can omit it and virtual dispatch will still happen with no error. Error happens only when you mark with override method which does not exist in base class / interface. "virtuality" can be optimized away from final methods and for symbols that don't get exposed for linkage (so that compiler can check all sources and verify that no override happens). It is not done in dmd currently, of course.
Apr 09 2013
parent reply "Dicebot" <m.strashun gmail.com> writes:
On Tuesday, 9 April 2013 at 08:14:43 UTC, Timothee Cour wrote:
 not anymore: CT error when doing so:
 Deprecation: overriding base class function without using 
 override
 attribute is deprecated
yet another breaking change introduced in git master? Works in 2.062 as I have described : http://dpaste.1azy.net/19f18c72
Apr 09 2013
parent reply Timothee Cour <thelastmammoth gmail.com> writes:
 yet another breaking change introduced in git master? Works in 2.062 as I have
described : http://dpaste.1azy.net/19f18c72
'override' is needed to override a base class method AFAIK, not a base interface method. The code you posted still works because you're using a base interface. On Tue, Apr 9, 2013 at 1:23 AM, Dicebot <m.strashun gmail.com> wrote:
 On Tuesday, 9 April 2013 at 08:14:43 UTC, Timothee Cour wrote:
 not anymore: CT error when doing so:
 Deprecation: overriding base class function without using override
 attribute is deprecated
yet another breaking change introduced in git master? Works in 2.062 as I have described : http://dpaste.1azy.net/19f18c72
Apr 09 2013
parent reply "Dicebot" <m.strashun gmail.com> writes:
Huh, you are right. Why the difference? It is confusing.
Apr 09 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-09 10:54, Dicebot wrote:
 Huh, you are true. Why the difference? It is confusing.
You're not overriding, you're implementing. If you misspell a method when implementing the interface it will complain anyway since the interface isn't implemented. That's how it has always worked. -- /Jacob Carlborg
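A minimal sketch of that distinction (hypothetical names): implementing an interface method needs no 'override', overriding a base class method does, and a misspelled interface method is caught because the interface is left unimplemented:

interface Hasher
{
    uint hash(string s);
}

class Base
{
    void log(string msg) {}
}

class MyHasher : Base, Hasher
{
    uint hash(string s) { return cast(uint) s.length; } // implements Hasher: no 'override' needed
    override void log(string msg) {}                    // overrides Base.log: 'override' required
}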
Apr 09 2013
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 9 April 2013 13:09, Rob T <alanb ucora.com> wrote:

 On Monday, 8 April 2013 at 08:21:06 UTC, Manu wrote:

 The C++ state hasn't changed though. We still avoid virtual calls like the
 plague.
 One of my biggest design gripes with D, hands down, is that functions are
 virtual by default. I believe this is a critical mistake, and the biggest
 one in the language by far.
My understanding of this is that while all of your class functions will be virtual by default, the compiler will reduce them to non-virtual unless you actually override them, and to override by mistake is difficult because you have to specify the "override" keyword to avoid a compiler error.
Thus successfully eliminating non-open-source libraries from D...
Making a dependency on WPO is a big mistake.

 I'd like to see that understanding confirmed as it was only implied in here:
 http://dlang.org/overview.html

 For extra safety you have to specify "final" which would be a pain if
 that's what you want by default, but I'm not so sure it's really necessary
 if the compiler really does optimize virtual functions away.

 BTW, the red code/green code concept sounds like the most promising route
 towards a generalized solution. I'll try and find the time to watch it as
 well.

 --rt
Apr 09 2013
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 04/09/2013 12:18 PM, Manu wrote:
 ...

 Thus successfully eliminating non-open-source libraries from D...
 Making a dependency on WPO is a big mistake.
 ...
Inheritance is usually a bad way to reuse library functionality anyway.
Apr 09 2013
parent reply Manu <turkeyman gmail.com> writes:
On 9 April 2013 21:02, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 04/09/2013 12:18 PM, Manu wrote:

 ...


 Thus successfully eliminating non-open-source libraries from D...
 Making a dependency on WPO is a big mistake.
 ...
Inheritance is usually a bad way to reuse library functionality anyway.
Who said anything about inheritance? What if I want to call a method? len = x.length for instance? Properties are almost always really trivial leaf functions. But they're all virtual calls! O_O
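To illustrate the cost being described (a hypothetical class): a trivial accessor is a virtual call unless it is explicitly marked final:

class Buffer
{
    private size_t len, cap;

    // Virtual by default: every call goes through the vtable and
    // generally cannot be inlined across module boundaries.
    size_t length() { return len; }

    // Marked final: a direct call the compiler is free to inline.
    final size_t capacity() { return cap; }
}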
Apr 09 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 04/09/2013 01:09 PM, Manu wrote:
 On 9 April 2013 21:02, Timon Gehr <timon.gehr gmx.ch
 <mailto:timon.gehr gmx.ch>> wrote:

     On 04/09/2013 12:18 PM, Manu wrote:

         ...


         Thus successfully eliminating non-open-source libraries from D...
         Making a dependency on WPO is a big mistake.
         ...


     Inheritance is usually a bad way to reuse library functionality anyway.


 Who said anything about inheritance? What if I want to call a method?
 len = x.length for instance? Properties are almost always really trivial
 leaf functions. But they're all virtual calls! O_O
If you do not want to add overrides, then there is no dependency on WPO.
Apr 09 2013
parent reply Manu <turkeyman gmail.com> writes:
Eh? How so? Overrides may or may not come from anywhere...
Actually, a DLL may introduce an override that's not present at link time.
Even WPO can't save it.


On 9 April 2013 22:12, Timon Gehr <timon.gehr gmx.ch> wrote:

 On 04/09/2013 01:09 PM, Manu wrote:

 On 9 April 2013 21:02, Timon Gehr <timon.gehr gmx.ch
 <mailto:timon.gehr gmx.ch>> wrote:

     On 04/09/2013 12:18 PM, Manu wrote:

         ...


         Thus successfully eliminating non-open-source libraries from D...
         Making a dependency on WPO is a big mistake.
         ...


     Inheritance is usually a bad way to reuse library functionality
 anyway.


 Who said anything about inheritance? What if I want to call a method?
 len = x.length for instance? Properties are almost always really trivial
 leaf functions. But they're all virtual calls! O_O
If you do not want to add overrides, then there is no dependency on WPO.
Apr 09 2013
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04/09/2013 02:30 PM, Manu wrote:
 Eh? How so? Overrides may or may not come from anywhere...
 Actually, a DLL may introduce an override that's not present at link
 time. Even WPO can't save it.
 ...
That would not be a case of 'there are no overrides'.
Apr 09 2013
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
09-Apr-2013 14:18, Manu wrote:
 On 9 April 2013 13:09, Rob T <alanb ucora.com <mailto:alanb ucora.com>>
 wrote:

     On Monday, 8 April 2013 at 08:21:06 UTC, Manu wrote:


         The C++ state hasn't changed though. We still avoid virtual
         calls like the
         plague.
         One of my biggest design gripes with D, hands down, is that
         functions are
         virtual by default. I believe this is a critical mistake, and
         the biggest
         one in the language by far.


     My understanding of this is that while all of your class functions
     will be virtual by default, the compiler will reduce them to
     non-virtual unless you actually override them, and to override by
     mistake is difficult because you have to specify the "override"
     keyword to avoid a compiler error.


 Thus successfully eliminating non-open-source libraries from D...
 Making a dependency on WPO is a big mistake.
final class Foo{ //no inheritance
final: //no virtuals
...
}

2 extra words and you are done. The only problem I see is that there is no way to "undo" final on a few methods later...

-- 
Dmitry Olshansky
Apr 09 2013
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-04-09 16:50, Dmitry Olshansky wrote:

 final class Foo{ //no inheritance
 final: //no virtuals
 ...
 }

 2 extra words and you are done. The only problem I see is that there is
 no way to "undo" final on a few methods later...
Isn't "final" on the class enough. No point in having virtual methods if nothing can inherit from the class. -- /Jacob Carlborg
Apr 09 2013
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 00:50, Dmitry Olshansky <dmitry.olsh gmail.com> wrote:

 09-Apr-2013 14:18, Manu wrote:

 On 9 April 2013 13:09, Rob T <alanb ucora.com <mailto:alanb ucora.com>>

 wrote:

     On Monday, 8 April 2013 at 08:21:06 UTC, Manu wrote:


         The C++ state hasn't changed though. We still avoid virtual
         calls like the
         plague.
         One of my biggest design gripes with D, hands down, is that
         functions are
         virtual by default. I believe this is a critical mistake, and
         the biggest
         one in the language by far.


     My understanding of this is that while all of your class functions
     will be virtual by default, the compiler will reduce them to
     non-virtual unless you actually override them, and to override by
     mistake is difficult because you have to specify the "override"
     keyword to avoid a compiler error.


 Thus successfully eliminating non-open-source libraries from D...
 Making a dependency on WPO is a big mistake.
 final class Foo{ //no inheritance
 final: //no virtuals
 ...
 }

 2 extra words and you are done. The only problem I see is that there is no
 way to "undo" final on a few methods later...
Yes, it cannot be undone. And any junior/tired/forgetful programmer will accidentally write slow code all over the place, and nobody will ever have any idea that they've done it. It's very dangerous.
Apr 09 2013
next sibling parent reply "Rob T" <alanb ucora.com> writes:
On Tuesday, 9 April 2013 at 16:49:04 UTC, Manu wrote:

 final class Foo{ //no inheritance
 final: //no virtuals
 ...
 }

 2 extra words and you are done. The only problem I see is that 
 there is no
 way to "undo" final on a few methods later...
final class Foo
{
   final //no virtuals
   {
   ...
   } // final ends, virtual methods below?
   ....
}
 And any junior/tired/forgetful programmer will
 accidentally write slow code all over the place, and nobody 
 will ever have
 any idea that they've done it. It's very dangerous.
I suppose, OTOH forgetting to make methods virtual can lead to another problem. In general, perhaps we're still talking about the same problem as with how to ensure that portions of code do not contain unwanted features of the language. If you want to ban the use of virtual functions, there ought to be a way to do it easily, same for writing code that does not contain features of the language that require a GC, etc. --rt
Apr 09 2013
parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 04:29, Rob T <alanb ucora.com> wrote:

 On Tuesday, 9 April 2013 at 16:49:04 UTC, Manu wrote:

  final class Foo{ //no inheritance
 final: //no virtuals
 ...
 }

 2 extra words and you are done. The only problem I see is that there is no
 way to "undo" final on a few methods later...

final class Foo
{
   final //no virtuals
   {
   ...
   } // final ends, virtual methods below?
   ....
}
My point exactly. That is completely fucked. final on the class means you can't derive it anymore (what's the point of a class?), and the manual final blocks are totally prone to error. In my experience, 90% of functions are not (or rather, should not be) virtual. The common (and well performing) case should be default. Any language with properties can't have virtual-by-default. Seriously, .length or any other trivial property that can no longer be inlined, or even just called. And human error aside, do you really want to have to type final on every function? Or indent every line an extra tab level? And any junior/tired/forgetful programmer will
 accidentally write slow code all over the place, and nobody will ever have
 any idea that they've done it. It's very dangerous.
I suppose, OTOH forgetting to make methods virtual can lead to another problem.
No it can't. override is explicit. You get an error if you forget to make a function virtual. And even if that was a risk (it's not), I'd take that any day of the week. In general, perhaps we're still talking about the same problem as with how
 to ensure that portions of code do not contain unwanted features of the
 language. If you want to ban the use of virtual functions, there ought to
 be a way to do it easily, same for writing code that does not contain
 features of the language that require a GC, etc.
I don't want to ban the use of virtual functions. I want to ban the use of virtual functions that aren't marked virtual explicitly! ;) Likewise, I like the GC, I just want to be able to control it. Disable auto-collect, explicitly issue collect calls myself at controlled moments, and give the collect function a maximum timeout where it will yield, and then resume where it left off next time I call it.
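What core.memory offers today is roughly the following (a sketch; the incremental/timeout collection asked for above is not currently part of the runtime API):

import core.memory;

void frame()
{
    // per-frame work that should not be interrupted by a collection
}

void main()
{
    GC.disable();              // no automatic collections from here on

    foreach (i; 0 .. 1000)
    {
        frame();
        if (i % 100 == 0)
            GC.collect();      // collect only at moments we choose
    }

    GC.enable();
}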
Apr 09 2013
next sibling parent reply "Rob T" <alanb ucora.com> writes:
On Wednesday, 10 April 2013 at 04:32:52 UTC, Manu wrote:
 final on the class means you can't derive it anymore (what's 
 the point of a
 class?),
I think that you'd place final on a derived class, not a base class. So it can make perfect sense, although final on the methods of a final class makes little sense so it should probably be a compiler error.
 and the manual final blocks are totally prone to error.
 In my experience, 90% of functions are not (or rather, should 
 not be)
 virtual. The common (and well performing) case should be 
 default.
Believe it or not, but I actually have been in the reverse position more than once, so which way the default should go is debatable. In your case I can certainly see why you'd prefer the opposite as I've done RT programming before and will be doing it again. What could perhaps work is a module level specifier that indicates what the defaults should be as a matter of use case preference, but I can see that going horribly wrong. The question I have, is why use a class if you do not need to use virtual functions? I think the problem goes a bit deeper than the defaults being opposite of what you want. I expect that you'd probably want struct inheritance or something like that but cannot get it from D?
 Any language with properties can't have virtual-by-default.
 Seriously, .length or any other trivial property that can no 
 longer be
 inlined, or even just called.
 And human error aside, do you really want to have to type final 
 on every
 function? Or indent every line an extra tab level?
Mark your properties as final? [...]
 I don't want to ban the use of virtual functions. I want to ban 
 the use of
 virtual functions that aren't marked virtual explicitly! ;)

 Likewise, I like the GC, I just want to be able to control it.
 Disable auto-collect, explicitly issue collect calls myself at 
 controlled
 moments, and give the collect function a maximum timeout where 
 it will
 yield, and then resume where it left off next time I call it.
I agree 100% and have that need too. I'd go further and also prefer the ability to optionally ban certain language features from use from within selective parts of my code base. As you say, I do not actually want to outright ban the GC or any other language feature (I really do use them!!!), it's only the desire to be able to have much better control over it for the situations that demand precision and certainty. Having control over D and the GC like what we're talking about in here can turn D into a seriously awesome systems language unlike any other. --rt
Apr 09 2013
parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 15:08, Rob T <alanb ucora.com> wrote:

 On Wednesday, 10 April 2013 at 04:32:52 UTC, Manu wrote:

 final on the class means you can't derive it anymore (what's the point of
 a
 class?),
I think that you'd place final on a derived class, not a base class. So it can make perfect sense, although final on the methods of a final class makes little sense so it should probably be a compiler error. and the manual final blocks are totally prone to error.
 In my experience, 90% of functions are not (or rather, should not be)
 virtual. The common (and well performing) case should be default.
Believe it or not, but I actually have been in the reverse position more than once, so which way the default should go is debatable. In your case I can certainly see why you'd prefer the opposite as I've done RT programming before and will be doing it again. What could perhaps work is a module level specifier that indicates what the defaults should be as a matter of use case preference, but I can see that going horribly wrong. The question I have, is why use a class if you do not need to use virtual functions? I think the problem goes a bit deeper than the defaults being opposite of what you want. I expect that you'd probably want struct inheritance or something like that but cannot get it from D?
I do use virtual functions, that's the point of classes. But most functions are not virtual. More-so, most functions are trivial accessors, which really shouldn't be virtual. OOP by design recommends liberal use of accessors, ie, properties, that usually just set or return a variable. When would you ever want @property size_t size() { return size; } to be a virtual call?

A base class typically offers a sort of template of something, implementing as much shared/common functionality as possible, but which you might extend, or make more specific in some very controlled way. Typically the base functionality and associated accessors deal with variable data contained in the base class.

The only situations I can imagine where most functions would be virtual are either a) ridiculously small classes with only 2 functions (at least you'll only type 'virtual' once or twice in this case), or b) some sort of OOP-tastic widget that 'can do anything!' or tries to be a generalisation of an 'anything', which is, frankly, horrible and immature software design, and basically the entire reason OOP is developing a name as being a terrible design pattern in the first place... I wouldn't make the latter case an implicit recommendation through core language design... but apparently that's just me ;)

No I don't want struct inheritance (although that would be nice! but I'm okay with aggregating and 'alias this'), that would insist on using 'ref' everywhere, and you can't create ref locals, so you can't really use structs conveniently this way. Classes are reference types, that's the point. I know what a class is, and I'm happy with all aspects of the existing design, except this one thing.

Can you demonstrate a high level class, ie, not a primitive tool, but the sort of thing a programmer would write in their daily work where all/most functions would be virtual? I can paste almost any class I've ever written, there is usually 2-4 virtuals, among 20-30 functions.

 Any language with properties can't have virtual-by-default.
 Seriously, .length or any other trivial property that can no longer be
 inlined, or even just called.
 And human error aside, do you really want to have to type final on every
 function? Or indent every line an extra tab level?
Mark your properties as final?
That's 90% of the class! You are familiar with OOP right? :) Almost everything is an accessor... I usually have 2 virtual functions, update() and draw(), or perhaps there's a purpose specific doWork() type of function to perform the derived object's designated function, but basically everything else is trivial accessors, or base class concepts that make absolutely no sense to override. I also work with lots of C++ middleware, and the virtuals are usually tightly controlled and deliberately minimised, and there's a good reason for this too; the fewer virtuals that the user is expected to override, the simpler it is to understand and work with your class! Additionally, it's a nice self-documenting feature; at a glance you can see the virtuals, ie, what you need to do to make use of a 3rd party OOP API. [...]
 I don't want to ban the use of virtual functions. I want to ban the use of

 virtual functions that aren't marked virtual explicitly! ;)

 Likewise, I like the GC, I just want to be able to control it.
 Disable auto-collect, explicitly issue collect calls myself at controlled
 moments, and give the collect function a maximum timeout where it will
 yield, and then resume where it left off next time I call it.
I agree 100% and have that need too. I'd go further and also prefer the ability to optionally ban certain language features from use from within selective parts of my code base. As you say, I do not actually want to outright ban the GC or any other language feature (I really do use them!!!), it's only the desire to be able to have much better control over it for the situations that demand precision and certainty.
Precisely. Having control over D and the GC like what we're talking about in here can
 turn D into a seriously awesome systems language unlike any other.
Correct, it's not quite a systems language while the GC does whatever it wants. But D needs the GC to be considered a 'modern', and generally productive language.
Apr 09 2013
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 April 2013 at 06:03:08 UTC, Manu wrote:
 [...]

 I do use virtual functions, that's the point of classes. But 
 most functions
 are not virtual. More-so, most functions are trivial accessors, 
 which
 really shouldn't be virtual.
 OOP by design recommends liberal use of accessors, ie, 
 properties, that
 usually just set or return a variable. When would you ever want 
 @property
 size_t size() { return size; } to be a virtual call?
Yes, if you want to change its behavior in a derived class. One nice feature of properties is that you can trigger actions when assigning/reading from properties. This is used heavily in OO GUI and DB code in other languages.
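For example (a hypothetical GUI-flavoured sketch), a derived class can hook a property setter so that an assignment also triggers an action:

class Control
{
    protected int width_;

    @property void width(int w) { width_ = w; }
}

class Button : Control
{
    // Overriding the setter means 'b.width = 100;' also triggers a relayout.
    override @property void width(int w)
    {
        width_ = w;
        relayout();
    }

    void relayout() { /* recompute layout */ }
}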
 Can you demonstrate a high level class, ie, not a primitive 
 tool, but the
 sort of thing a programmer would write in their daily work 
 where all/most
 functions would be virtual?
I have lots of code from JVM and .NET languages with such examples. OO code in the enterprise world is a beauty in itself, regardless of the language.
 Likewise, I like the GC, I just want to be able to control it.
 Disable auto-collect, explicitly issue collect calls myself 
 at controlled
 moments, and give the collect function a maximum timeout 
 where it will
 yield, and then resume where it left off next time I call it.
I agree 100% and have that need too. I'd go further and also prefer the ability to optionally ban certain language features from use from within selective parts of my code base. As you say, I do not actually want to outright ban the GC or any other language feature (I really do use them!!!), it's only the desire to be able to have much better control over it for the situations that demand precision and certainty.
Precisely. Having control over D and the GC like what we're taking about in here can
 turn D into a seriously awesome systems language unlike any 
 other.
Correct, it's not quite a systems language while the GC does whatever it wants. But D needs the GC to be considered a 'modern', and generally productive language.
Maybe something like VisualVM would help as well. -- Paulo
Apr 10 2013
parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 17:01, Paulo Pinto <pjmlp progtools.org> wrote:

 On Wednesday, 10 April 2013 at 06:03:08 UTC, Manu wrote:

 [...]


 I do use virtual functions, that's the point of classes. But most
 functions
 are not virtual. More-so, most functions are trivial accessors, which
 really shouldn't be virtual.
 OOP by design recommends liberal use of accessors, ie, properties, that
 usually just set or return a variable. Wen would you ever want  property
 size_t size() { return size; } to be a virtual call?
Yes, if you want to change its behavior in a derived class.
That really shouldn't be encouraged. Give me an example? One nice feature of properties is that you can trigger actions when
 assigning/reading from properties.
That doesn't make the property virtual, that makes the virtual that the property calls virtual. You can't have a derived class redefining the function of a trivial accessor. If it has a side effect that is context specific, then it would call through to a separate virtual. And this would be a controlled and deliberate case, ie, 1 in 100, not the norm. This is very used in OO GUI and DB code in other languages. I know, it's an abomination, and the main reason OOP is going out of fashion. Can you demonstrate a high level class, ie, not a primitive tool, but the
 sort of thing a programmer would write in their daily work where all/most
 functions would be virtual?
I have lots of code from JVM and .NET languages with such examples. OO code in the enterprise world is a beauty in itself, regardless of the language.
That's not an example. I want to see a class where every function SHOULD be overridden... That sounds like a nightmare, how can anyone other than the author ever expect to understand it? The fewer and more deliberately controlled the virtuals, the better, by almost every measure I can imagine.
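A sketch of the shape being argued for here (hypothetical names): the accessor itself stays final, and any context-specific side effect goes through one deliberate, clearly marked virtual hook:

class Resource
{
    private size_t size_;

    // The accessor stays final: trivially inlinable, never overridden.
    final @property size_t size() { return size_; }

    final void resize(size_t n)
    {
        size_ = n;
        onResize();   // the one deliberate virtual extension point
    }

    protected void onResize() {}
}

class TrackedResource : Resource
{
    protected override void onResize() { /* e.g. log or notify */ }
}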
Apr 10 2013
parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Wed, 10 Apr 2013 09:56:01 +0100, Manu <turkeyman gmail.com> wrote:

 On 10 April 2013 17:01, Paulo Pinto <pjmlp progtools.org> wrote:

 On Wednesday, 10 April 2013 at 06:03:08 UTC, Manu wrote:

 [...]


 I do use virtual functions, that's the point of classes. But most
 functions
 are not virtual. More-so, most functions are trivial accessors, which
 really shouldn't be virtual.
 OOP by design recommends liberal use of accessors, ie, properties, that
 usually just set or return a variable. Wen would you ever want  
  property
 size_t size() { return size; } to be a virtual call?
Yes, if you want to change its behavior in a derived class.
That really shouldn't be encouraged. Give me an example?
I wrote some crypto code which had a property for the hash size, each derived class returned its own size. Now, in this case the property size() was called rarely, but it's still a valid example.

That said, I agree we don't generally want properties and other short-and-should-be-inlined methods to be virtual by default.  But.. is D really doing that?  I mean /actually/ really doing it.

I have read most of this thread in passing and I've seen the earlier discussion about classes in libraries etc and TBH I am not sure how/what D does in those situations.  Presumably if D does not have the source for the library then the library class would have to have all methods virtual - just in case it was derived from - and I guess this is an issue that needs solving somehow.  But I don't know if this is the case, or if this is even an actual problem.

That's not the most common case however; the common case is a class you're compiling with the rest of your classes, and in this case D is not going to make your base class methods (which have not explicitly been overridden) virtual; they should be non-virtual and optimised/inlined/etc.  They just start out "virtually" virtual until the compiler is finished compiling them and all derived classes, at which point all non-overridden base methods should be non-virtual.

It would be interesting to see what the compiler actually does if, for example, you compile a file with a class then in a separate compile/link step compile a derived class and link.  Does D manage to non-virtual all non-overridden methods?  If so, can it do the same with a lib/dll combo?
 That's not an example. I want to see a class where every function SHOULD  
 be
 overloaded... That sounds like a nightmare, how can anyone other than the
 author ever expect to understand it? The fewer and more deliberately
 controlled the virtuals, the better, by almost every measure I can  
 imagine.
This isn't the reasoning behind making virtual the default for methods. The reason for doing it was simply so the compiler could be responsible for figuring it out without you having to label it manually. You're already labelling the derived class method "override" and that should be enough. This was a choice of convenience and ultimately performance, because you won't accidentally make a method virtual that doesn't need to be.

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 10 2013
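For concreteness, a rough D sketch of the shape Regan describes above. The
names (Digest, hashSize, SHA256Digest) are invented for illustration; this
is not his actual code:

abstract class Digest
{
    // Deliberately overridable: each algorithm reports its own size.
    abstract @property size_t hashSize() const;
    abstract void put(const(ubyte)[] data);

    // Trivial shared helpers like this gain nothing from being
    // overridable and can safely be final.
    final bool fits(size_t bufferLength) const { return bufferLength >= hashSize; }
}

class SHA256Digest : Digest
{
    override @property size_t hashSize() const { return 32; }
    override void put(const(ubyte)[] data) { /* ... */ }
}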
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 19:44, Regan Heath <regan netmail.co.nz> wrote:

 On Wed, 10 Apr 2013 09:56:01 +0100, Manu <turkeyman gmail.com> wrote:

  On 10 April 2013 17:01, Paulo Pinto <pjmlp progtools.org> wrote:
  On Wednesday, 10 April 2013 at 06:03:08 UTC, Manu wrote:
  [...]
 I do use virtual functions, that's the point of classes. But most
 functions
 are not virtual. More-so, most functions are trivial accessors, which
 really shouldn't be virtual.
 OOP by design recommends liberal use of accessors, ie, properties, that
 usually just set or return a variable. When would you ever want  property
 size_t size() { return size; } to be a virtual call?
Yes, if you want to change its behavior in a derived class.
That really shouldn't be encouraged. Give me an example?
 I wrote some crypto code which had a property for the hash size, each derived class returned its own size. Now, in this case the property size() was called rarely, but it's still a valid example.
But this is a trivial class, with 2 methods, size() and computeHash(). It's also a primitive tool that should probably live in a standard library. It's not the sort of code people are writing on a daily basis. That said, I agree we don't generally want properties and other
 short-and-should-be-inlined methods to be virtual by default.  But.. is D
 really doing that?  I mean /actually/ really doing it.
Yes. People don't write final on everything. It's not a habit, and it's not even recommended anywhere. I have read most of this thread in passing and I've seen the earlier
 discussion about classes in libraries etc and TBH I am not sure how/what D
 does in those situations.  Presumably if D does not have the source for the
 library then the library class would have to have all methods virtual -
 just in case it was derived from and I guess this is an issue that needs
 solving somehow.  But, I don't know if this is the case, or if this is even
 an actual problem.
It's a problem. And I don't believe it's 'solvable'. The only solution I can imagine is to not force the compiler into that completely unknowable situation in the first place, by making virtual explicit to begin with. That's not the most common case however, the common case is a class you're
 compiling with the rest of your classes and in this case D is not going to
 make your base class methods (which have not explicitly been overridden)
 virtual, they should be non-virtual and optimised/inlined/etc.  They just
 start out "virtually" virtual until the compiler is finished compiling them
 and all derived classes, at that point all non-override base methods should
 be non-virtual.
I don't see how this is possible, unless the instance was created in the same local scope, and that's extremely unlikely. Even if it has the source for my class, it can't ever know that the pointer I hold is not a base to a more derived class. It could be returned from a DLL or anything. Everyone keeps talking like this is possible. Is it? I can't imagine any way that it is. It would be interesting to see what the compiler actually does if, for
 example, you compile a file with a class then in a separate compile/link
 step compile a derived class and link.  Does D manage to non-virtual all
 non-overridden methods?  If so, can it do the same with a lib/dll combo?
The compiler makes a virtual call. If a static lib, or a DLL is involved, I don't see how the optimisation could ever be valid? That's not an example. I want to see a class where every function SHOULD be
 overloaded... That sounds like a nightmare, how can anyone other than the
 author ever expect to understand it? The fewer and more deliberately
 controlled the virtuals, the better, by almost every measure I can
 imagine.
This isn't the reasoning behind making virtual the default for methods. The reason for doing it was simply so the compiler could be responsible for figuring it out without you having to label it manually.
Yes but the compiler can't figure it out. It's not logically possible, it must always assume that it could potentially be derived somewhere... because it could. You're already labelling the derived class method "override" and that
 should be enough.  This was a choice of convenience and ultimately
 performance because you won't accidentally make a method virtual that
 doesn't need to be.
Sorry, I don't follow. How does this help?
Apr 10 2013
parent reply "Regan Heath" <regan netmail.co.nz> writes:
Ok, lets Rewind for a sec.  I need some educating on the actual issue and  
I want to go through the problem one case at a time..




class A
{
   public int isVirt()  { return 1; }
   public int notVirt() { return 2; }
}

and compile it with another class..

class B : A
{
   override public int isVirt() { return 5; }
}

and this main..

void main()
{
   A a = new B();
   a.isVirt();    // compiler makes a virtual call
   a.notVirt();   // but not here
}

Right?




void main()
{
   A a = new A();
   a.isVirt();     // not a virtual call
   a.notVirt();    // neither is this
}

Right?




class B : A
{
   override public int isVirt() { return 5; }
}

and..

void main()
{
   A a = new B();
   a.isVirt();    // compiler makes a virtual call
   a.notVirt();   // we're saying it has to make a virtual call here too
}

So, when the compiler produced the library it compiled all methods of A as  
virtual because it could not know if they were to be overridden in  
consumers of the library.

So, if the library created an A and passed it to you, all method calls  
would have to be virtual.

And, in your own code, if you derive B from A, then calls to A base class  
methods will be virtual.

Right?

R
Apr 10 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 20:29, Regan Heath <regan netmail.co.nz> wrote:

 Ok, lets Rewind for a sec.  I need some educating on the actual issue and
 I want to go through the problem one case at a time..




 class A
 {
   public int isVirt()  { return 1; }
   public int notVirt() { return 2; }
 }

 and compile it with another class..

 class B : A
 {
   override public int isVirt() { return 5; }
 }

 and this main..

 void main()
 {
   A a = new B();
   a.isVirt();    // compiler makes a virtual call
   a.notVirt();   // but not here
 }

 Right?




 void main()
 {
   A a = new A();
   a.isVirt();     // not a virtual call
   a.notVirt();    // neither is this
 }

 Right?




 class B : A
 {
   override public int isVirt() { return 5; }
 }

 and..

 void main()
 {
   A a = new B();
   a.isVirt();    // compiler makes a virtual call
   a.notVirt();   // we're saying it has to make a virtual call here too
 }

 So, when the compiler produced the library it compiled all methods of A as
 virtual because it could not know if they were to be overridden in
 consumers of the library.

 So, if the library created an A and passed it to you, all method calls
 would have to be virtual.

 And, in your own code, if you derive B from A, then calls to A base class
 methods will be virtual.

 Right?

 R
All your examples all hinge on the one case where the optimisation may possibly be valid, a is _allocated in the same scope_.

Consider:

void func(A* a)
{
  a.isVirt(); // is it? I guess...
  a.notVirt(); // what about this?
}

We don't even know what a B is, or whether it even exists.
All we know is that A's methods are virtual by default, and they could be overridden somewhere else. It's not possible to assume otherwise.
Apr 10 2013
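A sketch of the explicit opt-out both sides keep circling, using the same A
/ isVirt / notVirt names from the examples above: this is just D's existing
final keyword, shown here for illustration, not a proposal from the thread.

class A
{
    int isVirt()        { return 1; } // may be overridden somewhere unseen: stays a vtable call
    final int notVirt() { return 2; } // cannot be overridden anywhere, DLLs included
}

void func(A a)
{
    a.isVirt();  // indirect call: the compiler can't rule out a derived override
    a.notVirt(); // direct call / inlinable: guaranteed by 'final', no whole-program analysis needed
}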
parent "Regan Heath" <regan netmail.co.nz> writes:
On Wed, 10 Apr 2013 11:39:20 +0100, Manu <turkeyman gmail.com> wrote:
 All your examples all hinge on the one case where the optimisation may
 possibly be valid, a is _allocated in the same scope_.
Correct, that was the whole point. I am trying to establish where the "problem" arises, starting from the most basic examples :)
 Consider:
The example below is, I presume, supposed to be a function call in user code where A is defined in a library, correct? If so, I agree with the issues stated. If not, if A, and B, and any other classes derived from A are all compiled at the same time(*) and we're producing an exe then the compiler has all the information it needs to decide which methods MUST be virtual, and all others should be non-virtual. This is the point I was trying to establish with my simpler examples :) (*) Yes, perhaps you compile a.d then b.d separately - this may result in the same problem case as a library, or it may not - I don't have the understanding/data to say one way or the other.
 void func(A* a)
 {
  a.isVirt(); // is it? I guess...
  a.notVirt(); // what about this?
 }

 We don't even know what a B is, or whether it even exists.
Assuming A and B come from a library and are not present in the source we are compiling with this function, yes.
 All we know is that A's methods are virtual by default, and they could be
 overridden somewhere else. It's not possible to assume otherwise.
If we're compiling A, and some number of sub-classes of A, and producing an exe, then from the definition of A, B, and all sub-classes we know which methods have been overridden, those are virtual, all others are non-virtual. R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 10 2013
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
Sorry, void func(A* a) should be void func(A a). C++ took me for a moment ;)


On 10 April 2013 20:39, Manu <turkeyman gmail.com> wrote:

 On 10 April 2013 20:29, Regan Heath <regan netmail.co.nz> wrote:

 Ok, lets Rewind for a sec.  I need some educating on the actual issue and
 I want to go through the problem one case at a time..




 class A
 {
   public int isVirt()  { return 1; }
   public int notVirt() { return 2; }
 }

 and compile it with another class..

 class B : A
 {
   override public int isVirt() { return 5; }
 }

 and this main..

 void main()
 {
   A a = new B();
   a.isVirt();    // compiler makes a virtual call
   a.notVirt();   // but not here
 }

 Right?




 void main()
 {
   A a = new A();
   a.isVirt();     // not a virtual call
   a.notVirt();    // neither is this
 }

 Right?




 class B : A
 {
   override public int isVirt() { return 5; }
 }

 and..

 void main()
 {
   A a = new B();
   a.isVirt();    // compiler makes a virtual call
   a.notVirt();   // we're saying it has to make a virtual call here too
 }

 So, when the compiler produced the library it compiled all methods of A
 as virtual because it could not know if they were to be overridden in
 consumers of the library.

 So, if the library created an A and passed it to you, all method calls
 would have to be virtual.

 And, in your own code, if you derive B from A, then calls to A base class
 methods will be virtual.

 Right?

 R
All your examples all hinge on the one case where the optimisation may possibly be valid, a is _allocated in the same scope_.

Consider:

void func(A* a)
{
  a.isVirt(); // is it? I guess...
  a.notVirt(); // what about this?
}

We don't even know what a B is, or whether it even exists.
All we know is that A's methods are virtual by default, and they could be overridden somewhere else. It's not possible to assume otherwise.
Apr 10 2013
prev sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
My understanding as based on dlang.org:

On Wednesday, 10 April 2013 at 10:29:05 UTC, Regan Heath wrote:
 Ok, lets Rewind for a sec.  I need some educating on the actual 
 issue and I want to go through the problem one case at a time..




 class A
 {
   public int isVirt()  { return 1; }
   public int notVirt() { return 2; }
 }

 and compile it with another class..

 class B : A
 {
   override public int isVirt() { return 5; }
 }

 and this main..

 void main()
 {
   A a = new B();
   a.isVirt();    // compiler makes a virtual call
   a.notVirt();   // but not here
 }

 Right?
No. A is not final. A has no internal linkage. It can be inherited from in other compilation unit. notVirt is virtual. Other answers are matching ;)
Apr 10 2013
parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Wed, 10 Apr 2013 11:48:18 +0100, Dicebot <m.strashun gmail.com> wrote:

 My understanding as based on dlang.org:

 On Wednesday, 10 April 2013 at 10:29:05 UTC, Regan Heath wrote:
 Ok, lets Rewind for a sec.  I need some educating on the actual issue  
 and I want to go through the problem one case at a time..




 class A
 {
   public int isVirt()  { return 1; }
   public int notVirt() { return 2; }
 }

 and compile it with another class..

 class B : A
 {
   override public int isVirt() { return 5; }
 }

 and this main..

 void main()
 {
   A a = new B();
   a.isVirt();    // compiler makes a virtual call
   a.notVirt();   // but not here
 }

 Right?
No.
Hmm..
 A is not final.
True. But, I don't see how this matters.
 A has no internal linkage. It can be inherited from in other compilation  
 unit.
False. In this first example we are compiling A and B together (into an exe - I left that off) so the compiler has all sources and all uses of all methods of A (and B).
 notVirt is virtual.
It may actually be (I don't know) but it certainly does not have to be (compiler has all sources/uses) and my impression was that it /should/ not be. R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 10 2013
next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 10:53:26 UTC, Regan Heath wrote:
 Hmm..

 A is not final.
True. But, I don't see how this matters.
 A has no internal linkage. It can be inherited from in other 
 compilation unit.
False. In this first example we are compiling A and B together (into an exe - I left that off) so the compiler has all sources and all uses of all methods of A (and B).
 notVirt is virtual.
It may actually be (I don't know) but it certainly does not have to be (compiler has all sources/uses) and my impression was that it /should/ not be. R
If it is compiled all at once and compiled into an executable binary then yes, your examples are valid and the compiler _MAY_ omit virtual.

But
a) DMD doesn't do it as far as I am aware.
b) It is a quite uncommon and restrictive build setup.
Apr 10 2013
parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Wed, 10 Apr 2013 11:59:32 +0100, Dicebot <m.strashun gmail.com> wrote:

 On Wednesday, 10 April 2013 at 10:53:26 UTC, Regan Heath wrote:
 Hmm..

 A is not final.
True. But, I don't see how this matters.
 A has no internal linkage. It can be inherited from in other  
 compilation unit.
False. In this first example we are compiling A and B together (into an exe - I left that off) so the compiler has all sources and all uses of all methods of A (and B).
 notVirt is virtual.
It may actually be (I don't know) but it certainly does not have to be (compiler has all sources/uses) and my impression was that it /should/ not be. R
 If it is compiled all at once and compiled into an executable binary then yes, your examples are valid and the compiler _MAY_ omit virtual.
Exactly the point I was trying to make. I wanted to establish the point at which the design problems (what D defines/intends to do) arise, vs when the implementation problems arise (DMD not doing what D intends).
 But
 a) DMD doesn't do it as far as I am aware.
Maybe, maybe not. I have no idea. My understanding of the design decision is that DMD will eventually do it.
 b) It is a quite uncommon and restrictive build setup.
Maybe at present.

Lets assume DMD can remove virtual when presented with all sources compiled in one-shot.

Lets assume it cannot if each source is compiled separately. Is that an insurmountable problem? A design problem? Or, is it simply an implementation issue. Could an obj file format be designed to allow DMD to perform the same optimisation in this case, as in the one-shot case. My impression is that this should be solvable.

So, that just leaves the library problem. Is this also insurmountable? A design problem? Or, is it again an implementation issue. Can D not mark exported library methods as virtual/non-virtual? When user code derives from said exported class could D not perform the same optimisation for that class? I don't know enough about compilation to answer that. But, I can see how if the library itself manifests an object of type A - which may actually be an internal derived sub-class of A - there are clearly issues. But, could DMD not have two separate definitions for A, and use one for objects manifested from the library, and another locally for user derived classes? I don't know, these are all just ideas I have on the subject. :p

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 10 2013
next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 11:09:30 UTC, Regan Heath wrote:
 ...
 R
Yes, this is an insurmountable design problem, because it forces you to have an application that is not allowed to use dll's, not allowed to split part of its functionality into static libraries, and requires full recompilation on the smallest change. Sounds rather useless, to be honest. I don't see how changing the obj format may help.
Apr 10 2013
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 21:09, Regan Heath <regan netmail.co.nz> wrote:

 On Wed, 10 Apr 2013 11:59:32 +0100, Dicebot <m.strashun gmail.com> wrote:

  On Wednesday, 10 April 2013 at 10:53:26 UTC, Regan Heath wrote:
 Hmm..

  A is not final.

 True.  But, I don't see how this matters.

  A has no internal linkage. It can be inherited from in other
 compilation unit.
False. In this first example we are compiling A and B together (into an exe - I left that off) so the compiler has all sources and all uses of all methods of A (and B). notVirt is virtual.

 It may actually be (I don't know) but it certainly does not have to be
 (compiler has all sources/uses) and my impression was that it /should/ not
 be.

 R
If it is compiled all at once and compiled into an executable binary then yes, your examples are valid and the compiler _MAY_ omit virtual.
Exactly the point I was trying to make. I wanted to establish the point at which the design problems (what D defines/intends to do) arise, vs when the implementation problems arise (DMD not doing what D intends). But
 a) DMD doesn't do it as far as I am aware.
Maybe, maybe not. I have no idea. My understanding of the design decision is that DMD will eventually do it.
I feel like I'm being ignored. It's NOT POSSIBLE. b) It is a quite uncommon and restrictive build setup.

 Maybe at present.

 Lets assume DMD can remove virtual when presented with all sources
 compiled in one-shot.

 Lets assume it cannot if each source is compiled separately.  Is that an
 insurmountable problem?  A design problem?  Or, is it simply an
 implementation issue.  Could an obj file format be designed to allow DMD to
 perform the same optimisation in this case, as in the one-shot case.  My
 impression is that this should be solvable.

 So, that just leaves the library problem.  Is this also insurmountable?  A
 design problem?  Or, is it again an implementation issue.  Can D not mark
 exported library methods as virtual/non-virtual?  When user code derives
 from said exported class could D not perform the same optimisation for that
 class?  I don't know enough about compilation to answer that.  But, I can
 see how if the library itself manifests an object of type A - which may
 actually be an internal derived sub-class of A, there are clearly issues.
  But, could DMD not have two separate definitions for A, and use one for
 objects manifested from the library, and another locally for user derived
 classes?  I don't know, these are all just ideas I have on the subject.  :p
That sounds overly complex and error prone. I don't believe the source of an object is actually trackable in the way required to do that. I can't conceive any solution of this sort is viable. And even if it were theoretically possible, when can we expect to see it implemented? It's not feasible.

The _problem_ is that functions are virtual by default. It's a trivial problem to solve, however it's a major breaking change, so it will never happen.

Hence my passing comment that spawned this whole thread, I see it as the single biggest critical mistake in D, and I'm certain it will never be changed. I've made my peace, however disappointing it is to me.
Apr 10 2013
next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 11:31:08 UTC, Manu wrote:
 I feel like I'm being ignored. It's NOT POSSIBLE.
Well, this is not 100% true. It is possible if you just say "hey, I prohibit you from using dlls or any other way to leak symbols from the binary, and please no separate compilation in any form". But I doubt one can consider that a viable option.
Apr 10 2013
prev sibling next sibling parent "Regan Heath" <regan netmail.co.nz> writes:
On Wed, 10 Apr 2013 12:30:56 +0100, Manu <turkeyman gmail.com> wrote:

 On 10 April 2013 21:09, Regan Heath <regan netmail.co.nz> wrote:

 On Wed, 10 Apr 2013 11:59:32 +0100, Dicebot <m.strashun gmail.com>  
 wrote:

  On Wednesday, 10 April 2013 at 10:53:26 UTC, Regan Heath wrote:
 Hmm..

  A is not final.

 True.  But, I don't see how this matters.

  A has no internal linkage. It can be inherited from in other
 compilation unit.
False. In this first example we are compiling A and B together (into an exe - I left that off) so the compiler has all sources and all uses of all methods of A (and B). notVirt is virtual.

 It may actually be (I don't know) but it certainly does not have to be
 (compiler has all sources/uses) and my impression was that it  
 /should/ not
 be.

 R
If it is compiled all at once and compiled into executable binary than yes, you examples are valid and compiler _MAY_ omit virtual.
Exactly the point I was trying to make. I wanted to establish the point at which the design problems (what D defines/intends to do) arise, vs when the implementation problems arise (DMD not doing what D intends). But
 a) DMD doesn't do it as far as I am aware.
Maybe, maybe not. I have no idea. My understanding of the design decision is that DMD will eventually do it.
I feel like I'm being ignored. It's NOT POSSIBLE.
You're not. The issue here is my understanding of the problem (and compilation etc in general) and why you believe it's an insurmountable problem. I am trying to both understand the issue and explore possible solutions. R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 10 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/10/13 7:30 AM, Manu wrote:
 The _problem_ is that functions are virtual by
 default. It's a trivial problem to solve, however it's a major breaking
 change, so it will never happen.
I agree. We may as well save our breath on this one.
 Hence my passing comment that spawned this whole thread, I see it as the
 single biggest critical mistake in D, and I'm certain it will never be
 changed. I've made my peace, however disappointing it is to me.
I disagree with the importance assessment, but am soothed by your being at peace. Andrei
Apr 10 2013
next sibling parent "Minas Mina" <minas_mina1990 hotmail.co.uk> writes:
On Wednesday, 10 April 2013 at 15:38:49 UTC, Andrei Alexandrescu 
wrote:
 On 4/10/13 7:30 AM, Manu wrote:
 The _problem_ is that functions are virtual by
 default. It's a trivial problem to solve, however it's a major 
 breaking
 change, so it will never happen.
I agree. We may as well save our breath on this one.
 Hence my passing comment that spawned this whole thread, I see 
 it as the
 single biggest critical mistake in D, and I'm certain it will 
 never be
 changed. I've made my peace, however disappointing it is to me.
I disagree with the importance assessment, but am soothed by your being at peace. Andrei
Why is virtual by default a problem? You could have non-virtual by default and would live happily until a day where you forget to declare the base class destructor virtual. Then you spent a lot of time trying to find why you are leaking memory. In C++ you have to be aware all the time not to forget something and screw everything. D is more forgiving, at a small cost of performance. So I don't buy the non-virtual by default argument. If your profiler tells you that a particular virtual function is the bottleneck, go on and make it final. That's why profilers exist.
Apr 10 2013
prev sibling next sibling parent reply "Minas Mina" <minas_mina1990 hotmail.co.uk> writes:
On Wednesday, 10 April 2013 at 15:38:49 UTC, Andrei Alexandrescu
wrote:
 On 4/10/13 7:30 AM, Manu wrote:
 The _problem_ is that functions are virtual by
 default. It's a trivial problem to solve, however it's a major 
 breaking
 change, so it will never happen.
I agree. We may as well save our breath on this one.
 Hence my passing comment that spawned this whole thread, I see 
 it as the
 single biggest critical mistake in D, and I'm certain it will 
 never be
 changed. I've made my peace, however disappointing it is to me.
I disagree with the importance assessment, but am soothed by your being at peace. Andrei
Why is virtual by default a problem?

You could have non-virtual by default and would live happily until a day where you forget to declare the base class destructor virtual. Then you spend a lot of time trying to find why you are leaking memory. In C++ you have to be aware all the time not to forget something and screw everything. D is more forgiving, at a small cost of performance.

So I don't buy the non-virtual by default argument. If your profiler tells you that a particular virtual function is the bottleneck, go on and make it final. That's why profilers exist.
Apr 10 2013
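A small aside on the destructor point: in D the runtime finalises an object
through its dynamic type, so the scenario described above does not need a
virtual keyword on the destructor at all. A minimal sketch with invented
names, just to show the behaviour being contrasted with C++:

import std.stdio;

class Base
{
    ~this() { writeln("Base dtor"); }
}

class Derived : Base
{
    ~this() { writeln("Derived dtor"); }
}

void main()
{
    Base b = new Derived;
    destroy(b); // finalisation uses the dynamic type: prints
                // "Derived dtor" then "Base dtor" -- no 'virtual' needed
}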
next sibling parent Manu <turkeyman gmail.com> writes:
On 11 April 2013 02:29, Minas Mina <minas_mina1990 hotmail.co.uk> wrote:

 On Wednesday, 10 April 2013 at 15:38:49 UTC, Andrei Alexandrescu
 wrote:

  On 4/10/13 7:30 AM, Manu wrote:
 The _problem_ is that functions are virtual by
 default. It's a trivial problem to solve, however it's a major breaking
 change, so it will never happen.
I agree. We may as well save our breath on this one. Hence my passing comment that spawned this whole thread, I see it as the
 single biggest critical mistake in D, and I'm certain it will never be
 changed. I've made my peace, however disappointing it is to me.
I disagree with the importance assessment, but am soothed by your being at peace. Andrei
Why is virtual by default a problem?
Seriously? There's like 100 posts in this thread. You could have non-virtual by default and would live happily
 until a day where you forget to declare the base class destructor
 virtual. Then you spend a lot of time trying to find why you are
 leaking memory.
That's never happened to me. On the contrary, I'm yet to see another programmer properly apply final throughout a class... In C++ you have to be aware all the time not to forget something
 and screw everything. D is more forgiving, at a small cost of
 performance.
'Small cost'? D is a compiled systems language, performance is not unimportant. And how do you quantify 'small'? Scattered dcache/icache misses are the worst possible hazard. So I don't buy the non-virtual by default argument. If your
 profiler tells you that a particular virtual function is the
 bottleneck, go on and make it final. That's why profilers exist.
Thanks for wasting my time! I already spend countless working hours looking at a profiler. I'd like to think I might waste less time doing that in the future.

Additionally, the profiler won't tell you the virtual function is the bottleneck, it'll be the calling function that shows the damage, and in the event the function is called from multiple/many places (as trivial accessors are), it won't show up at all as a significant cost in any one place, it'll be evenly spread, which is the worst possible sort of performance hazard. Coincidentally, this called-from-many-locations situation is the time when it's most likely to cause icache/dcache misses! It's all bad, and very hard to find/diagnose.

In C++, when I treat all the obvious profiler hot-spots, I start manually trawling through source files at random, looking for superfluous virtuals and if()'s. In D, I'm now encumbered by an additional task: since it's not MARKED virtual, I can't instantly recognise it, or reason about whether it actually should be (or was intended to be) virtual, so now I have to perform an additional tedious process of diagnosing for each suspicious function whether it actually IS or should-be virtual before I can mark it final. And this doesn't just apply to the few existing functions marked virtual as in C++, now this new process applies to EVERY function. *sigh*

The language shouldn't allow programmers to make critical performance blunders of that sort. I maintain that virtual-by-default is a critical error.

In time, programmers will learn to be cautious/paranoid, and 'final' will dominate your code window.
Apr 10 2013
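For illustration, one way the "final will dominate your code window"
discipline can look in today's D is a single attribute label rather than a
keyword on every member. The Entity class and its members are invented for
this sketch:

class Entity
{
    private float px, py;

    // The deliberately overridable surface, kept small and near the top.
    void update(float dt) { }

final: // everything from here down is non-virtual and freely inlinable
    @property float x() const { return px; }
    @property float y() const { return py; }
}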
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 11 April 2013 02:59, Manu <turkeyman gmail.com> wrote:

 In time, programmers will learn to be cautious/paranoid, and 'final' will
 dominate your code window.
Or more realistically, most programmers will continue to be oblivious, and we'll enjoy another eternity of the same old problem where many 3rd party libraries written on a PC are unusable on resource-limited machines, and people like me will waste innumerable more hours re-inventing wheels in-house, because the programmer of a closed-source library either didn't know, or didn't give a shit. None of it would be a problem if he just had to type virtual when he meant it... the action would even assist in invoking conscious thought about whether that's actually what he wants to do, or if there's a better design. </okay, really end rant>
Apr 10 2013
parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 10.04.2013 19:06, schrieb Manu:
 On 11 April 2013 02:59, Manu <turkeyman gmail.com
 <mailto:turkeyman gmail.com>> wrote:

     In time, programmers will learn to be cautious/paranoid, and 'final'
     will dominate your code window.


 Or more realistically, most programmers will continue to be oblivious,
 and we'll enjoy another eternity of the same old problem where many 3rd
 party libraries written on a PC are unusable on resource-limited
 machines, and people like me will waste innumerable more hours
 re-inventing wheels in-house, because the programmer of a closed-source
 library either didn't know, or didn't give a shit.

 None of it would be a problem if he just had to type virtual when he
 meant it... the action would even assist in invoking conscious thought
 about whether that's actually what he wants to do, or if there's a
 better design.
 </okay, really end rant>
Manu, maybe something you might not be aware:

- Smalltalk
- Eiffel
- Lisp
- Java
- Self
- Dylan
- Julia
- Objective-C
- JavaScript

Are just a few examples of languages with virtual semantics for method call. Some of those only offer virtual dispatch actually.

Some of them were developed in an age of computer systems that would make today's embedded systems look like High Performance Computing servers.

Julia is actually a new kid on block, hardly one year old, and already achieves C parity in many benchmarks.

So I think how much could be a problem of D's compilers and not the virtual by default concept in itself.

--
Paulo
Apr 10 2013
parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On 2013-04-10, 19:46, Paulo Pinto wrote:


 Manu, maybe something you might not be aware:

 - Smalltalk
 - Eiffel
 - Lisp
 - Java
 - Self
 - Dylan
 - Julia
 - Objective-C
 - JavaScript

 Are just a few examples of languages with virtual semantics for method  
 call. Some of those only offer virtual dispatch actually.

 Some of them were developed in an age of computer systems that would  
 make today's embedded systems look like High Performance Computing  
 servers.
The fact that successful languages have been created where virtual dispatch is the default, or even the only possibility, does not mean that virtual dispatch is not slower than non-virtual, nor, especially, that this inefficiency is not a problem in some fields. Sure, games have been written in most of these languages. AAA titles today have somewhat stricter needs, and virtual dispatch by default is most definitely a problem there.
 Julia is actually a new kid on block, hardly one year old, and already  
 achieves C parity in many benchmarks.
On their website (http://julialang.org/), they show two such benchmarks, both of which seem to be exactly the kind where virtual dispatch is not going to be a problem when you have a JIT.
 So I think how much could be a problem of D's compilers and not the  
 virtual by default concept in itself.
Look once more at that list. They're all dynamically typed, have JITs or the like (possibly with the exception of Eiffel). In other words, they do devirtualization at runtime. -- Simen
Apr 10 2013
parent Paulo Pinto <pjmlp progtools.org> writes:
Am 10.04.2013 20:32, schrieb Simen Kjaeraas:
 On 2013-04-10, 19:46, Paulo Pinto wrote:


 Manu, maybe something you might not be aware:

 - Smalltalk
 - Eiffel
 - Lisp
 - Java
 - Self
 - Dylan
 - Julia
 - Objective-C
 - JavaScript

 Are just a few examples of languages with virtual semantics for method
 call. Some of those only offer virtual dispatch actually.

 Some of them were developed in an age of computer systems that would
 make today's embedded systems look like High Performance Computing
 servers.
The fact that successful languages have been created where virtual dispatch is the default, or even the only possibility, does not mean that virtual dispatch is not slower than non-virtual, nor, especially, that this inefficiency might be a problem in some fields. Sure, games have been written in most of these languages. AAA titles today have somewhat stricter needs, and virtual dispatch by default is most definitely a problem there.
 Julia is actually a new kid on block, hardly one year old, and already
 achieves C parity in many benchmarks.
On their website (http://julialang.org/), they show two such benchmarks, both of which seem to be exactly the kind where virtual dispatch is not going to be a problem when you have a JIT.
 So I think how much could be a problem of D's compilers and not the
 virtual by default concept in itself.
Look once more at that list. They're all dynamically typed, have JITs or the like (possibly with the exception of Eiffel). In other words, they do devirtualization at runtime.
You are right, I was just discussing virtual dispatch in general. Yeah, maybe it is really not that desirable in languages with AOT compilation.

--
Paulo
Apr 10 2013
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On 11 April 2013 01:38, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org>wrote:

 On 4/10/13 7:30 AM, Manu wrote:

 Hence my passing comment that spawned this whole thread, I see it as the
single biggest critical mistake in D, and I'm certain it will never be
 changed. I've made my peace, however disappointing it is to me.
I disagree with the importance assessment, but am soothed by your being at peace.
Well, I personally have no other issues with D that I would call 'critical mistakes', this is it... and it is a pretty big one. From a performance point of view, it's very dangerous, and I've also illustrated a whole bunch of other reasons why I think it's a mistake irrespective of performance. Also Andrej's recent point was interesting. Any other gripes I have are really just incomplete features, like rvalue -> ref (or scope's incomplete implementation as I've always imagined it), allocations where they don't need to be, better gc control, and general lack of consideration for other architectures. I can see movement on all those issues, they'll come around when they've finished baking. Meanwhile, lots of things that were missing/buggy have been fixed in the last year, D is feeling loads more solid recently, at least to me. I've been able to do most things I need to do with relatively little friction.
Apr 10 2013
prev sibling next sibling parent "Minas Mina" <minas_mina1990 hotmail.co.uk> writes:
On Wednesday, 10 April 2013 at 15:38:49 UTC, Andrei Alexandrescu
wrote:
 On 4/10/13 7:30 AM, Manu wrote:
 The _problem_ is that functions are virtual by
 default. It's a trivial problem to solve, however it's a major 
 breaking
 change, so it will never happen.
I agree. We may as well save our breath on this one.
 Hence my passing comment that spawned this whole thread, I see 
 it as the
 single biggest critical mistake in D, and I'm certain it will 
 never be
 changed. I've made my peace, however disappointing it is to me.
I disagree with the importance assessment, but am soothed by your being at peace. Andrei
Why is virtual by default a problem? You could have non-virtual by default and would live happily until a day where you forget to declare the base class destructor virtual. Then you spent a lot of time trying to find why you are leaking memory. In C++ you have to be aware all the time not to forget something and screw everything. D is more forgiving, at a small cost of performance. So I don't buy the non-virtual by default argument. If your profiler tells you that a particular virtual function is the bottleneck, go on and make it final. That's why profilers exist.
Apr 10 2013
prev sibling parent "Minas Mina" <minas_mina1990 hotmail.co.uk> writes:
On Wednesday, 10 April 2013 at 15:38:49 UTC, Andrei Alexandrescu 
wrote:
 On 4/10/13 7:30 AM, Manu wrote:
 The _problem_ is that functions are virtual by
 default. It's a trivial problem to solve, however it's a major 
 breaking
 change, so it will never happen.
I agree. We may as well save our breath on this one.
 Hence my passing comment that spawned this whole thread, I see 
 it as the
 single biggest critical mistake in D, and I'm certain it will 
 never be
 changed. I've made my peace, however disappointing it is to me.
I disagree with the importance assessment, but am soothed by your being at peace. Andrei
Why is virtual by default a problem? You could have non-virtual by default and would live happily until a day where you forget to declare the base class destructor virtual. Then you spent a lot of time trying to find why you are leaking memory. In C++ you have to be aware all the time not to forget something and screw everything. D is more forgiving, at a small cost of performance. So I don't buy the non-virtual by default argument. If your profiler tells you that a particular virtual function is the bottleneck, go on and make it final. That's why profilers exist.
Apr 10 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 4/10/13, Manu <turkeyman gmail.com> wrote:
 The _problem_ is that functions are virtual by default.
 It's a trivial problem to solve, however it's a major breaking change, so
 it will never happen.
I wouldn't say never. In fact, it might go hand-in-hand with changing how protection attributes affect virtuality (currently they do, I'd argue they shouldn't)

For example one argument against allowing private and package methods the ability to be virtual is performance, namely these methods are now non-virtual and would suddenly become virtual if we allowed private/package overrides (because users typically don't mark private/package methods as final).

However if we at the same time introduced a virtual keyword, then the private/package methods would remain non-virtual.

What would break are public methods which are overriden but don't use the virtual keyword. So it's a breaking change but at least you won't get any performance degradation by accident.

Another reason I like the idea of a virtual keyword is that it documents the method better. The user of a library would clearly see a method can be overriden because it's marked as virtual, rather than having to guess whether the method was left non-final on purpose or by accident (accidents like that can happen..).
Apr 10 2013
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On 10 April 2013 23:15, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:

 On 4/10/13, Manu <turkeyman gmail.com> wrote:
 The _problem_ is that functions are virtual by default.
 It's a trivial problem to solve, however it's a major breaking change, so
 it will never happen.
I wouldn't say never.
... don't get my hopes up! In fact, it might go hand-in-hand with changing how protection
 attributes affect virtuality (currently they do, I'd argue they
 shouldn't)

 For example one argument against allowing private and package methods
 the ability to be virtual is performance, namely these methods are now
 non-virtual and would suddenly become virtual if we allowed
 private/package overrides (because users typically don't mark
 private/package methods as final).

 However if we at the same time introduced a virtual keyword, then the
 private/package methods would remain non-virtual.

 What would break are public methods which are overriden but don't use
 the virtual keyword. So it's a breaking change but at least you won't
 get any performance degradation by accident.
Rather, you'll gain innumerably, thanks to every property/accessor now being non-virtual as it should be. You make a compelling argument, although I'm easily sold on such matters!
It could be staggered in over a few releases... ie, in one release, 'virtual' is introduced - does nothing - encouraged to update code, next release, complain that missing virtual is deprecated, next release, turn it on proper; compile errors... Walter would have a meltdown I think :P
Another reason I like the idea of a virtual keyword is that it
 documents the method better. The user of a library would clearly see a
 method can be overriden because it's marked as virtual, rather than
 having to guess whether the method was left non-final on purpose or by
 accident (accidents like that can happen..).
Hmm, this sounds familiar... where have I heard that before? ;)
Apr 10 2013
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 4/10/13, Manu <turkeyman gmail.com> wrote:
 On 10 April 2013 23:15, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
 On 4/10/13, Manu <turkeyman gmail.com> wrote:
 It's a trivial problem to solve, however it's a major breaking change,
 so
 it will never happen.
I wouldn't say never.
... don't get my hopes up!
Just take a look at the upcoming changelog: https://github.com/D-Programming-Language/d-programming-language.org/pull/303 (You can clone the repo and run git fetch upstream pull/303/head:pull303 && git checkout pull303) There is a ton of breaking language changes. Pretty much every release is a breaking one in one way or another.
Apr 10 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-10 18:49, Andrej Mitrovic wrote:

 Just take a look at the upcoming changelog:

 https://github.com/D-Programming-Language/d-programming-language.org/pull/303
It's great to see that we will have, by the looks of it, a proper changelog for the next release.

-- 
/Jacob Carlborg
Apr 10 2013
prev sibling parent Manu <turkeyman gmail.com> writes:
On 11 April 2013 02:49, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:

 On 4/10/13, Manu <turkeyman gmail.com> wrote:
 On 10 April 2013 23:15, Andrej Mitrovic <andrej.mitrovich gmail.com>
wrote:
 On 4/10/13, Manu <turkeyman gmail.com> wrote:
 It's a trivial problem to solve, however it's a major breaking change,
 so
 it will never happen.
I wouldn't say never.
... don't get my hopes up!
Just take a look at the upcoming changelog: https://github.com/D-Programming-Language/d-programming-language.org/pull/303 (You can clone the repo and run git fetch upstream pull/303/head:pull303 && git checkout pull303) There is a ton of breaking language changes. Pretty much every release is a breaking one in one way or another.
O_O There's HEAPS of breaking changes in there! Okay, you've successfully re-ignited me ;)
Apr 10 2013
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 20:53, Regan Heath <regan netmail.co.nz> wrote:

 False.  In this first example we are compiling A and B together (into an
 exe - I left that off) so the compiler has all sources and all uses of all
 methods of A (and B).
And if the program calls LoadLibrary() somewhere? notVirt is virtual.

 It may actually be (I don't know) but it certainly does not have to be
 (compiler has all sources/uses) and my impression was that it /should/ not
 be.
It doesn't have all sources. It could load a library. That can never be guaranteed.
Apr 10 2013
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 April 2013 at 11:00:17 UTC, Manu wrote:
 On 10 April 2013 20:53, Regan Heath <regan netmail.co.nz> wrote:

 False.  In this first example we are compiling A and B 
 together (into an
 exe - I left that off) so the compiler has all sources and all 
 uses of all
 methods of A (and B).
And if the program calls LoadLibrary() somewhere? notVirt is virtual.

 It may actually be (I don't know) but it certainly does not 
 have to be
 (compiler has all sources/uses) and my impression was that it 
 /should/ not
 be.
It doesn't have all sources. It could load a library. That can never be guaranteed.
That is the main reason why most JITs do code rewriting every time the world changes. For example, on Hotspot, depending on how many subclasses you have loaded, many virtual calls are actually direct calls, until this is no longer possible and code regeneration is required.

This of course works way better on the server side, or with layered JITs like Java 7 has.

--
Paulo
Apr 10 2013
prev sibling next sibling parent "Regan Heath" <regan netmail.co.nz> writes:
On Wed, 10 Apr 2013 12:00:04 +0100, Manu <turkeyman gmail.com> wrote:

 On 10 April 2013 20:53, Regan Heath <regan netmail.co.nz> wrote:

 False.  In this first example we are compiling A and B together (into an
 exe - I left that off) so the compiler has all sources and all uses of  
 all
 methods of A (and B).
And if the program calls LoadLibrary() somewhere? notVirt is virtual.

 It may actually be (I don't know) but it certainly does not have to be
 (compiler has all sources/uses) and my impression was that it /should/  
 not
 be.
It doesn't have all sources. It could load a library. That can never be guaranteed.
So, your counter example is this..

[shared source]
class A {}

[example DLL]
class B : A {}  compiled along with source for A.

[example application]
class C : A {}  compiled along with source for A.

main() {} calls LoadLibrary which loads the above DLL, the DLL manifests an A, which is actually a B, e.g.

A fromDLL = ...exported DLL function...

or perhaps we construct a B

A fromDLL = new B();

And if we were to manifest our own C, derived from A

A local = new C();

the methods which would be virtual/non-virtual might very well differ. So how can the compiler know what to do when it sees:

void foo(A a) { a.method(); }

Correct?

I must admit I don't know how a class is exported from a DLL. Presumably the method address is exported. But, does the vtbl get exported, or do virtual methods all map to a wrapper method which indexes the vtbl?

In any case, it seems to me that the underlying issue is the desire to have 2 separate definitions for A. One from the DLL, and one local to the application. And, for calls to be optimised on the local one in the usual fashion, while maintaining the DLL-mandated virtual/non-virtual methods at the same time.

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 10 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/10/2013 4:00 AM, Manu wrote:
 It doesn't have all sources. It could load a library. That can never be
guaranteed.
Currently, a virtual call is done with:

    (*vtbl[i])(args...)

It makes me wonder if the following would be faster:

    method_fp = vtbl[i];
    if (method_fp == &method)
        method(args...)
    else
        (*method_fp)(args...)

Anyone care to dummy this up and run a benchmark?

It's also possible to, for the per-class static data, have a flag that says "isfinal", which is set to false at runtime whenever a new is done for a derived class. Then, the method test above becomes:

    if (isfinal)
Apr 10 2013
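The snippet above is pseudo-code at the codegen level; a rough user-level
approximation of the same "compare against the expected target" check can
be sketched in D with delegate .funcptr, purely to get ballpark numbers.
Base, Derived and the iteration count are invented here; a real experiment
would be done inside the compiler and measured on non-x86 targets as well.

import std.stdio;

class Base    { int f() { return 1; } }
class Derived : Base { override int f() { return 2; } }

// Plain virtual dispatch, for comparison.
int plainCall(Base b) { return b.f(); }

// Guess the common target; take a direct path on a hit, fall back on a miss.
int speculativeCall(Base b)
{
    auto dg = &b.f;                                    // funcptr comes from b's vtable
    if (cast(void*) dg.funcptr is cast(void*) &Base.f)
        return 1;                                      // stand-in for an inlined Base.f body
    return dg();                                       // miss: ordinary indirect call
}

void main()
{
    Base[] objs = [new Base, new Derived];
    long a, b;
    foreach (i; 0 .. 10_000_000)
    {
        a += plainCall(objs[i & 1]);
        b += speculativeCall(objs[i & 1]);
    }
    writeln(a, " ", b); // time each loop separately to compare the two paths
}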
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 On 4/10/2013 4:00 AM, Manu wrote:
 It doesn't have all sources. It could load a library. That can 
 never be guaranteed.
Currently, a virtual call is done with: (*vtbl[i])(args...) It makes me wonder if the following would be faster: method_fp = vtbl[i]; if (method_fp == &method) method(args...) else (*method_fp)(args...) Anyone care to dummy this up and run a benchmark?
It's important to repeat experiments, but I think it's also interesting to take a look at the very extensive amount of experiments already done on such matters since Self virtual machines. The relevant part of the source code of open source JavaVM is worth looking at. http://en.wikipedia.org/wiki/Inline_caching http://www.azulsystems.com/blog/cliff/2010-04-08-inline-caches-and-call-site-optimization http://extras.springer.com/2000/978-3-540-67660-7/papers/1628/16280258.pdf http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.34.3108&rep=rep1&type=pdf The benchmarking should be done with a both few different little synthetic programs, and one largish program that used virtual calls a lot. Bye, bearophile
Apr 10 2013
prev sibling parent Manu <turkeyman gmail.com> writes:
On 11 April 2013 09:23, Walter Bright <newshound2 digitalmars.com> wrote:

 On 4/10/2013 4:00 AM, Manu wrote:

 It doesn't have all sources. It could load a library. That can never be
 guaranteed.
Currently, a virtual call is done with: (*vtbl[i])(args...) It makes me wonder if the following would be faster: method_fp = vtbl[i]; if (method_fp == &method) method(args...) else (*method_fp)(args...) Anyone care to dummy this up and run a benchmark?
**On a non-x86 machine.

It would depend on how derived the class is, but I reckon there's a good chance it would be slower. In the end, the work is the same, but you've introduced another hazard on top, a likely branch misprediction.

I think the common case with classes is that you're calling through a base-pointer, so virtual calls become more expensive, and the non-virtuals may save a cache miss, but gain a misprediction. I suspect the cache miss (a more costly hazard) would occur less often than the misprediction, which, given a greater volume, might add up to be similar.

This is also very, very hard to profile realistically. Isolated test cases can't reveal the truth on this matter, and in the end, we remain in the same place, where non-virtual-by-default is clearly better.

It's also possible to, for the per-class static data, have a flag that says
 "isfinal", which is set to false at runtime whenever a new is done for a
 derived class. Then, the method test above becomes:

     if (isfinal)
Branch still exists, vcall is still required if you have a base pointer. Accessors might benefit, but incur the cost of a branch. Aside from virtual (or lack thereof), if() is possibly the second most dangerous keyword ;) It's getting better with time though. Super-long pipeline architectures are slowly dying off.
Apr 10 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-10 11:44, Regan Heath wrote:

 They just start out "virtually" virtual until
 the compiler is finished compiling them and all derived classes, at that
 point all non-override base methods should be non-virtual.
The compiler doesn't perform that optimization, at all. -- /Jacob Carlborg
Apr 10 2013
next sibling parent "Regan Heath" <regan netmail.co.nz> writes:
On Wed, 10 Apr 2013 12:51:18 +0100, Jacob Carlborg <doob me.com> wrote:

 On 2013-04-10 11:44, Regan Heath wrote:

 They just start out "virtually" virtual until
 the compiler is finished compiling them and all derived classes, at that
 point all non-override base methods should be non-virtual.
The compiler doesn't perform that optimization, at all.
Ok. I've always assumed that it would tho, at some stage. Or is that not the intention? R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 10 2013
prev sibling parent =?utf-8?Q?Simen_Kj=C3=A6r=C3=A5s?= <simen.kjaras gmail.com> writes:
On Wed, 10 Apr 2013 13:51:18 +0200, Jacob Carlborg <doob me.com> wrote:

 On 2013-04-10 11:44, Regan Heath wrote:

 They just start out "virtually" virtual until
 the compiler is finished compiling them and all derived classes, at that
 point all non-override base methods should be non-virtual.
The compiler doesn't perform that optimization, at all.
And, as Manu has tried pointing out, it can't. -- Simen
Apr 10 2013
prev sibling next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 06:03:08 UTC, Manu wrote:
 A base class typically offers a sort of template of something, 
 implementing
 as much shared/common functionality as possible, but which you 
 might
 extend, or make more specific in some very controlled way.
 Typically the base functionality and associated accessors deal 
 with
 variable data contained in the base-class.
I believe that template mixins + structs are much more natural way to express this concept. Basically, if you need inheritance only for code reuse, you don't need inheritance and all polymorphic overhead. D provides some good tools to shift away from that traditional approach. Those can and should be improved, but I think the whole concept "classes are polymorphic virtual reference types, structs are plain aggregates" is very solid and area of struct-only development needs to be explored a bit more.
Apr 10 2013
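A minimal sketch of the "template mixins + structs" reuse mentioned above,
with invented names, just to show shared implementation without inheritance
or vtables:

mixin template BoundsChecked()
{
    size_t length;
    bool inBounds(size_t i) const { return i < length; }
}

struct FixedBuffer
{
    mixin BoundsChecked;   // code reuse, no polymorphic overhead
    ubyte[256] data;
}

struct GrowableBuffer
{
    mixin BoundsChecked;
    ubyte[] data;
}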
parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 18:11, Dicebot <m.strashun gmail.com> wrote:

 On Wednesday, 10 April 2013 at 06:03:08 UTC, Manu wrote:

 A base class typically offers a sort of template of something,
 implementing
 as much shared/common functionality as possible, but which you might
 extend, or make more specific in some very controlled way.
 Typically the base functionality and associated accessors deal with
 variable data contained in the base-class.
I believe that template mixins + structs are much more natural way to express this concept. Basically, if you need inheritance only for code reuse, you don't need inheritance and all polymorphic overhead. D provides some good tools to shift away from that traditional approach. Those can and should be improved, but I think the whole concept "classes are polymorphic virtual reference types, structs are plain aggregates" is very solid and area of struct-only development needs to be explored a bit more.
... nar, I don't think so.
A class is a class, I'm not arguing for anything that's kinda-like-a-class, I'm talking about classes.

The fact that I (and sensible 3rd party libraries I would choose to use) minimise the number of virtuals makes perfect sense. It's faster, it's easier to understand, you can see what functions you need to override to use the object effectively at a glance, behaviour is more predictable since there are fewer points of variation...

Nobody has yet shown me an example of a typical class where it would make ANY sense that all (or even most) methods be virtual. (Again, not talking about small/trivial or foundational/container classes, people don't write these every day, they write them once, and use them for a decade, and they probably live in the standard library)
Apr 10 2013
next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 09:12:33 UTC, Manu wrote:
 ... nar, I don't think so.
 A class is a class, I'm not arguing for anything that's 
 kinda-like-a-class,
 I'm talking about classes.
The question is then "what is class?". Because the very reason to have class is to have guaranteed polymorphic behavior, so that working with object via its base will always make sense without any fears about what behavior can be overriden. But that is mostly needed in OOP hell with few practical cases like plugins. If essentially coupling data and methods is needed, that is what struct does. I am not arguing that everything should be virtual, I am arguing that you actually need classes. It is not C++ and, in my opinion, structs should be much more common entities than classes, especially in performance-hungry code.
Apr 10 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 19:19, Dicebot <m.strashun gmail.com> wrote:

 On Wednesday, 10 April 2013 at 09:12:33 UTC, Manu wrote:

 ... nar, I don't think so.
 A class is a class, I'm not arguing for anything that's
 kinda-like-a-class,
 I'm talking about classes.
The question is then "what is class?". Because the very reason to have class is to have guaranteed polymorphic behavior, so that working with object via its base will always make sense without any fears about what behavior can be overriden. But that is mostly needed in OOP hell with few practical cases like plugins.
I think I've lost you here... this doesn't make sense. Where did I say virtual was bad, and that it shouldn't exist? And how does my suggestion affect any guarantee of polymorphic behaviour? How is working with an object via its base, where only a small subset of the functions are designed to be overridden, any less convenient? It makes no difference. What's more convenient is that it's much more obvious what the user is meant to override, and what it should do (presuming the rest of the API is designed in a sensible way). The self-documenting nature of the virtual keyword is nice. It's also _MUCH FASTER_!

 If essentially coupling data and methods is needed, that is what struct
 does. I am not arguing that everything should be virtual, I am arguing that
 you actually need classes. It is not C++ and, in my opinion, structs should
 be much more common entities than classes, especially in performance-hungry
 code.
Structs are inconvenienced by other stuff. You can't create a ref local, thus you can't use a struct as a class conveniently. It also has no virtual table, so how do you extend it conveniently? I'm not talking about structs, I'm talking about classes.
Apr 10 2013
parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 09:35:12 UTC, Manu wrote:
 I think I've lost you here... this doesn't make sense.
Same here, I am afraid it is just too hard for me to imagine the type of architecture used in your project, so I'd better refrain from giving stupid advice :) Beg my pardon.
Apr 10 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-10 11:19, Dicebot wrote:

 The question is then "what is class?". Because the very reason to have
 class is to have guaranteed polymorphic behavior, so that working with
 object via its base will always make sense without any fears about what
 behavior can be overriden. But that is mostly needed in OOP hell with
 few practical cases like plugins.

 If essentially coupling data and methods is needed, that is what struct
 does. I am not arguing that everything should be virtual, I am arguing
 that you actually need classes. It is not C++ and, in my opinion,
 structs should be much more common entities than classes, especially in
 performance-hungry code.
I often want reference types, but not necessarily polymorphic types.

What I want:

* Heap GC allocated
* Be able to store references

What I don't want:

* Pointers (don't guarantee heap allocated)
* ref parameters (can't store the reference)
* Implement reference counting

-- 
/Jacob Carlborg
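One rough approximation available today (just a sketch, the names are made up): a final class gives a GC-allocated reference type that can't be derived from, though it still carries the usual class baggage (vtable pointer and monitor field).

final class Node
{
    int value;
    Node next;              // references can be stored freely
}

void push(ref Node head, int value)
{
    auto n = new Node;      // heap, GC allocated
    n.value = value;
    n.next = head;
    head = n;
}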
Apr 10 2013
parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 12:46:28 UTC, Jacob Carlborg wrote:
 I often want reference types, but not necessarily polymorphic 
 types.

 What I want:

 * Heap GC allocated
 * Be able to store references

 What I don't want:

 * Pointers (don't guarantee heap allocated)
 * ref parameters (can't store the reference)
 * Implement reference counting
I find this more of an issue with "ref" being a second-class citizen instead of a proper type qualifier.
Apr 10 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/10/2013 2:12 AM, Manu wrote:
 Nobody has yet showed me an example of a typical class where it would make ANY
 sense that all (or even most) methods be virtual. (Again, not talking about
 small/trivial or foundational/container classes, people don't write these every
 day, they write them once, and use them for a decade, and they probably like in
 the standard library)
Expression, Statement, Type, and Dsymbol in the compiler sources.
Apr 10 2013
parent reply Manu <turkeyman gmail.com> writes:
On 11 April 2013 11:11, Walter Bright <newshound2 digitalmars.com> wrote:

 On 4/10/2013 2:12 AM, Manu wrote:

 Nobody has yet showed me an example of a typical class where it would
 make ANY
 sense that all (or even most) methods be virtual. (Again, not talking
 about
 small/trivial or foundational/container classes, people don't write these
 every
 day, they write them once, and use them for a decade, and they probably
 like in
 the standard library)
Expression, Statement, Type, and Dsymbol in the compiler sources.
The bases? Do you write those classes every day, or are they a tool that you've been using for decade(/s)?
Apr 10 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/10/2013 6:20 PM, Manu wrote:
 On 11 April 2013 11:11, Walter Bright <newshound2 digitalmars.com
 <mailto:newshound2 digitalmars.com>> wrote:

     On 4/10/2013 2:12 AM, Manu wrote:

         Nobody has yet showed me an example of a typical class where it would
         make ANY
         sense that all (or even most) methods be virtual. (Again, not talking
about
         small/trivial or foundational/container classes, people don't write
         these every
         day, they write them once, and use them for a decade, and they probably
         like in
         the standard library)


     Expression, Statement, Type, and Dsymbol in the compiler sources.


 The bases? Do you write those classes every day, or are they a tool that you've
 been using for decade(/s)?
I modify them constantly. They aren't foundational/container classes, in that they are very specific to the compiler's needs.
Apr 10 2013
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/10/13 2:02 AM, Manu wrote:
 I do use virtual functions, that's the point of classes. But most
 functions are not virtual. More-so, most functions are trivial
 accessors, which really shouldn't be virtual.
I'd say a valid style is to use free functions for non-virtual methods. UFCS will take care of caller syntax. Andrei
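A small sketch of that style (names invented for the example): only the overridable part stays in the class, the non-virtual helper becomes a free function, and UFCS keeps the call site looking the same.

class Shape
{
    abstract double area();          // the genuinely polymorphic part
}

class Circle : Shape
{
    double radius;
    this(double r) { radius = r; }
    override double area() { return 3.14159 * radius * radius; }
}

// free function instead of a non-virtual member
bool largerThan(Shape s, double threshold)
{
    return s.area() > threshold;
}

void main()
{
    auto c = new Circle(1.0);
    auto big = c.largerThan(2.0);    // UFCS: reads like a member call
}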
Apr 10 2013
parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 22:37, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org>wrote:

 On 4/10/13 2:02 AM, Manu wrote:

 I do use virtual functions, that's the point of classes. But most
 functions are not virtual. More-so, most functions are trivial
 accessors, which really shouldn't be virtual.
I'd say a valid style is to use free functions for non-virtual methods. UFCS will take care of caller syntax.
Valid, perhaps. But would you really recommend that design pattern? It seems a little obscure for no real reason. Breaks the feeling of the OO encapsulation principle somewhat. I've started using UFCS more recently, but I'm still wary of overuse leading to unnecessary obscurity.
Apr 10 2013
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 April 2013 at 12:44:34 UTC, Manu wrote:
 On 10 April 2013 22:37, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org>wrote:

 On 4/10/13 2:02 AM, Manu wrote:

 I do use virtual functions, that's the point of classes. But 
 most
 functions are not virtual. More-so, most functions are trivial
 accessors, which really shouldn't be virtual.
I'd say a valid style is to use free functions for non-virtual methods. UFCS will take care of caller syntax.
Valid, perhaps. But would you really recommend that design pattern? It seems a little obscure for no real reason. Breaks the feeling of the OO encapsulation principle somewhat. I've started using UFCS more recently, but I'm still wary of overuse leading to unnecessary obscurity.
It depends on what model of OO you refer to. I have been reading lately about multi-method usage in languages like Dylan and Lisp, which is similar to UFCS, although more powerful because all parameters are used when deciding which method to bind; a feature that for whatever reason did not make it into the mainstream.

--
Paulo
Apr 10 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/10/13 8:44 AM, Manu wrote:
 On 10 April 2013 22:37, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org <mailto:SeeWebsiteForEmail erdani.org>>
 wrote:

     On 4/10/13 2:02 AM, Manu wrote:

         I do use virtual functions, that's the point of classes. But most
         functions are not virtual. More-so, most functions are trivial
         accessors, which really shouldn't be virtual.


     I'd say a valid style is to use free functions for non-virtual
     methods. UFCS will take care of caller syntax.


 Valid, perhaps. But would you really recommend that design pattern?
 It seems a little obscure for no real reason. Breaks the feeling of the
 OO encapsulation principle somewhat.
It may as well be a mistake that nonvirtual functions are at all part of a class' methods. This has been quite painfully seen in C++ leading to surprising conclusions: http://goo.gl/dqZrr.
 I've started using UFCS more recently, but I'm still wary of overuse
 leading to unnecessary obscurity.
UFCS is a "slam dunk" feature - simple and immensely successful. The only bummer is that UFCS arrived on the scene late. If I designed D's classes today, I'd only allow overridable methods and leave everything else to free functions. Andrei
Apr 10 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 11 April 2013 02:08, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org>wrote:

 On 4/10/13 8:44 AM, Manu wrote:

 On 10 April 2013 22:37, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

 On 4/10/13 2:02 AM, Manu wrote:

 I do use virtual functions, that's the point of classes. But most
 functions are not virtual. More-so, most functions are trivial
 accessors, which really shouldn't be virtual.

 I'd say a valid style is to use free functions for non-virtual
 methods. UFCS will take care of caller syntax.

 Valid, perhaps. But would you really recommend that design pattern?
 It seems a little obscure for no real reason. Breaks the feeling of the
 OO encapsulation principle somewhat.
It may as well be a mistake that nonvirtual functions are at all part of a class' methods. This has been quite painfully seen in C++ leading to surprising conclusions: http://goo.gl/dqZrr.
Hmm, some interesting points. Although I don't think I buy what he's selling. It looks like over-complexity for the sake of it to me. I don't buy the real-world benefit. At least not more so than the obscurity it introduces (breaking the location of function definitions apart), and of course, C++ doesn't actually support this syntactically, it needs UFCS. Granted, the principle applies far better to D, i.e. it actually works...

 If I designed D's classes today, I'd only allow overridable methods and
 leave everything else to free functions.

Why? Sorry, that article didn't sell me. Maybe I need to sit and simmer on it for a bit longer though. I like static methods (I prefer them to virtuals!) ;)
If I had the methods split, some inside the class at one indentation level, and most outside at a different level, it would annoy me, for OCD reasons. But I see no real advantage one way or the other in D, other than a cosmetic one.
Apr 10 2013
next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 17:33:55 UTC, Manu wrote:
 Why? Sorry, that article didn't sell me. Maybe I need to sit 
 and simmer on
 it for a bit longer though. I like static methods (I prefer 
 them to
 virtuals!) ;)
I agree with Andrei here and it is one of those rare moments when his opinion actually matches embedded needs :P Virtual functions are powerful and dangerous; the more they are separated from the other code the better. Programming in such a way is quite an innovative paradigm shift, but I think it is superior to the current C++-derived approach and it is a place where UFCS truly shines.

Actually, if you can afford to spend some time - send me a sample or description of a design you are trying to create the C++ way and I'll consider it a personal challenge to implement it the D way, with extended comments on why it is better :P I believe it _should_ be possible.
Apr 10 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/10/13 1:33 PM, Manu wrote:
     It may as well be a mistake that nonvirtual functions are at all
     part of a class' methods. This has been quite painfully seen in C++
     leading to surprising conclusions: http://goo.gl/dqZrr.


 Hmm, some interesting points. Although I don't think I buy what he's
 selling.
That article ain't sellin'. It's destroyin'. It destroys dogma that had been uncritically acquired by many. That puts it in a nice category alongside e.g. http://goo.gl/2kBy0 - boy did that destroy.
 It looks like over-complexity for the sake of it to me. I don't buy the
 real-world benefit. At least not more so than the obscurity it
 introduces (breaking the location of function definitions apart), and of
 course, C++ doesn't actually support this syntactically, it needs UFCS.
 Granted, the principle applies far better to D, ie, actually works...


     If I designed D's classes today, I'd only allow overridable methods
     and leave everything else to free functions.


 Why?
With UFCS the only possible utility of member functions is clarifying the receiver in a virtual dispatch. Even that's not strictly necessary (as some languages confirm). Andrei
Apr 10 2013
parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 4/10/13, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 On 4/10/13 1:33 PM, Manu wrote:
     It may as well be a mistake that nonvirtual functions are at all
     part of a class' methods. This has been quite painfully seen in C++
     leading to surprising conclusions: http://goo.gl/dqZrr.


 Hmm, some interesting points. Although I don't think I buy what he's
 selling.
That article ain't sellin'. It's destroyin'.
UFCS can't be the ultimate solution, for example:

class S(T)
{
    alias Type = SomeTypeModifier!T;

    static if (isSomePredicate!Type)
    {
        int x;
        void foo() { /* do something with x */ }
    }
    else
    {
        float y;
        void bar() { /* do something with y */ }
    }
}

How do you implement this with UFCS? It wouldn't look nice if you had to put those methods outside, while keeping the data inside. You'd have to duplicate static ifs outside and inside the class.
Apr 10 2013
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/10/13 2:26 PM, Andrej Mitrovic wrote:
 On 4/10/13, Andrei Alexandrescu<SeeWebsiteForEmail erdani.org>  wrote:
 On 4/10/13 1:33 PM, Manu wrote:
      It may as well be a mistake that nonvirtual functions are at all
      part of a class' methods. This has been quite painfully seen in C++
      leading to surprising conclusions: http://goo.gl/dqZrr.


 Hmm, some interesting points. Although I don't think I buy what he's
 selling.
That article ain't sellin'. It's destroyin'.
UFCS can't be the ultimate solution, for example:

class S(T)
{
    alias Type = SomeTypeModifier!T;

    static if (isSomePredicate!Type)
    {
        int x;
        void foo() { /* do something with x */ }
    }
    else
    {
        float y;
        void bar() { /* do something with y */ }
    }
}

How do you implement this with UFCS? It wouldn't look nice if you had to put those methods outside, while keeping the data inside. You'd have to duplicate static ifs outside and inside the class.
Agreed. Andrei
Apr 10 2013
prev sibling next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
Am 10.04.2013 18:08, schrieb Andrei Alexandrescu:
 On 4/10/13 8:44 AM, Manu wrote:
 On 10 April 2013 22:37, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org <mailto:SeeWebsiteForEmail erdani.org>>
 wrote:

     On 4/10/13 2:02 AM, Manu wrote:

         I do use virtual functions, that's the point of classes. But most
         functions are not virtual. More-so, most functions are trivial
         accessors, which really shouldn't be virtual.


     I'd say a valid style is to use free functions for non-virtual
     methods. UFCS will take care of caller syntax.


 Valid, perhaps. But would you really recommend that design pattern?
 It seems a little obscure for no real reason. Breaks the feeling of the
 OO encapsulation principle somewhat.
It may as well be a mistake that nonvirtual functions are at all part of a class' methods. This has been quite painfully seen in C++ leading to surprising conclusions: http://goo.gl/dqZrr.
 I've started using UFCS more recently, but I'm still wary of overuse
 leading to unnecessary obscurity.
UFCS is a "slam dunk" feature - simple and immensely successful. The only bummer is that UFCS arrived to the scene late. If I designed D's classes today, I'd only allow overridable methods and leave everything else to free functions. Andrei
Everyone seems to be getting them; it is as if, after realizing that in many cases aggregation is better than inheritance, multi-methods are also seen as a better way to attach behaviour to objects.

--
Paulo
Apr 10 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/10/13 1:50 PM, Paulo Pinto wrote:
 Everyone seems to be having them, it is as if after realizing that in
 many cases aggregation is better than inheritance, multi-methods is also
 a better way to add attach behaviour to objects.
My perception is that there's an exponential falloff as follows: 99% of all cases are handled by single dispatch; 0.9% are handled by double dispatch; 0.09% are handled by triple dispatch; ... I'd tell any language designer - don't worry about multiple dispatch. Spend that time on anything else, it'll be better invested. Andrei
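For reference, this is the kind of thing meant by double dispatch, emulated with two single dispatches in the usual visitor shape (all names invented for the example):

interface Event   { void dispatch(Handler h); }
interface Handler { void handle(MouseEvent e); void handle(KeyEvent e); }

class MouseEvent : Event { void dispatch(Handler h) { h.handle(this); } }
class KeyEvent   : Event { void dispatch(Handler h) { h.handle(this); } }

class Logger : Handler
{
    void handle(MouseEvent e) { /* mouse-specific handling */ }
    void handle(KeyEvent e)   { /* key-specific handling */ }
}

void main()
{
    Event e = new MouseEvent;
    e.dispatch(new Logger);   // selects on both the event type and the handler
}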
Apr 10 2013
parent Paulo Pinto <pjmlp progtools.org> writes:
Am 10.04.2013 20:16, schrieb Andrei Alexandrescu:
 On 4/10/13 1:50 PM, Paulo Pinto wrote:
 Everyone seems to be having them, it is as if after realizing that in
 many cases aggregation is better than inheritance, multi-methods is also
 a better way to add attach behaviour to objects.
My perception is that there's an exponential falloff as follows: 99% of all cases are handled by single dispatch; 0.9% are handled by double dispatch; 0.09% are handled by triple dispatch; ... I'd tell any language designer - don't worry about multiple dispatch. Spend that time on anything else, it'll be better invested. Andrei
I agree with you, I doubt there are that many cases where more than double dispatch is required. But they make nice language puzzles. :)

--
Paulo
Apr 10 2013
prev sibling parent reply "xenon325" <x m.net> writes:
On Wednesday, 10 April 2013 at 16:08:53 UTC, Andrei Alexandrescu 
wrote:
 It may as well be a mistake that nonvirtual functions are at 
 all part of a class' methods. This has been quite painfully 
 seen in C++ leading to surprising conclusions: 
 http://goo.gl/dqZrr.
"Non-Member Functions Improve Encapsulation" is invalid for D because of implicit friends. It was discussed before: http://forum.dlang.org/post/op.wbyg2ywyeav7ka localhost.localdomain -- Alexander
Apr 10 2013
parent "Rob T" <alanb ucora.com> writes:
On Thursday, 11 April 2013 at 04:23:07 UTC, xenon325 wrote:
 On Wednesday, 10 April 2013 at 16:08:53 UTC, Andrei 
 Alexandrescu wrote:
 It may as well be a mistake that nonvirtual functions are at 
 all part of a class' methods. This has been quite painfully 
 seen in C++ leading to surprising conclusions: 
 http://goo.gl/dqZrr.
"Non-Member Functions Improve Encapsulation" is invalid for D because of implicit friends. It was discussed before: http://forum.dlang.org/post/op.wbyg2ywyeav7ka localhost.localdomain -- Alexander
In some ways, implicit friends are analogous to implicit virtual. With implied virtual you can at least state "final" for a reversal, but with implied friends there's no way out other than through band-aid measures which cause headaches.

Really, the generality of the problem is analogous to the reasons why you do not allow implicit data typing or, worse, implicit declarations. So in D a fundamental rule is being violated. IMO, everything should be explicit unless the intention can be calculated to be 100% certain (i.e., no possible alternatives). For example "auto" in D is fine, because the data typing is certain, but the data type of a value in a JSON struct being defined by the value is wrong, e.g. maybe the 0 (integer) was supposed to be 0.0 (real).

BTW, the UFCS solution to non-virtual methods creates the old C++ problem of constantly re-typing the same symbol name of the class for every external member function. Not fun. Some syntactic sugar may help, perhaps clever use of templates, I don't know.

--rt
Apr 11 2013
prev sibling parent reply "Rob T" <alanb ucora.com> writes:
On Wednesday, 10 April 2013 at 06:03:08 UTC, Manu wrote:
 Can you demonstrate a high level class, ie, not a primitive 
 tool, but the
 sort of thing a programmer would write in their daily work 
 where all/most
 functions would be virtual?
 I can paste almost any class I've ever written, there is 
 usually 2-4
 virtuals, among 20-30 functions.
In my case it was a C++ virtual class used to supply a common interface to various database libraries. This is not a usual thing, so your point is valid, and I'll agree that most often your classes will have proportionally far fewer virtual functions overall. It's mostly the base classes that will contain the most virtual functions, but derived classes generally outnumber them.
 Mark your properties as final?
That's 90% of the class! You are familiar with OOP right? :) Almost everything is an accessor...
Based on what I've learned from this thread, to get the best performance I'll have to wrap up almost all my D classes with "final", or take the more painful alternative route and move all non-virtual functions into UFCS. I can understand the argument in favor of UFCS as the "final" solution, however it's something I'd have to try out first before making a conclusion. Off hand it seems like more work (an example with static ifs was shown already), and for code structuring and readability it seems to me it won't be helpful. Again these are my first impressions without actually trying it out, so who knows, it may work well despite my concerns.
 Correct, it's not quite a systems language while the GC does 
 whatever it
 wants. But D needs the GC to be considered a 'modern', and 
 generally
 productive language.
The GC issue is a recurring one (how many threads on this topic?) because the current implementation directly interferes with the stated goals of D being a systems language. Not only can the GC be fixed in simple ways (e.g. just give us programmers more control over how and when it does its job), but we can do one better than just improving the GC: mark sections of code as off limits to anything that may allocate, and, better still in more general terms, prevent the use of a feature (or features) of the language that is not appropriate for a marked section of code. That'll make me very happy.

--rt
Apr 10 2013
parent Manu <turkeyman gmail.com> writes:
On 11 April 2013 06:50, Rob T <alanb ucora.com> wrote:

 On Wednesday, 10 April 2013 at 06:03:08 UTC, Manu wrote:

 Can you demonstrate a high level class, ie, not a primitive tool, but the
 sort of thing a programmer would write in their daily work where all/most
 functions would be virtual?
 I can paste almost any class I've ever written, there is usually 2-4
 virtuals, among 20-30 functions.
In my case it was a C++ virtual class used to supply a common interface to various database libraries. This is not a usual thing, so your point is valid, and I'll agree that most often your classes will have proportionally far less virtual functions overall. It's mostly the base classes that will contain the most virtual functions, but derived classes generally outnumber them.
 Mark your properties as final?
That's 90% of the class! You are familiar with OOP right? :) Almost everything is an accessor...
Based on what I've learned from this thread, to get the best performance I'll have to wrap up almost all my D classes with "final", or take the more painful alternative route and move all non virtual functions into UFCS. I can understand the argument in favor if UFCS as the "final" solution, however it's something I'd have to try out first before making a conclusion. Off hand it seem like more work (an example with static if's was shown already), and for code structuring and readability it seems to me it won't be helpful. Again these are my first impressions without actually trying it out, so who knows, it may work well despite my concerns.
 Correct, it's not quite a systems language while the GC does whatever it
 wants. But D needs the GC to be considered a 'modern', and generally
 productive language.
The GC issue is a recurring one (how many threads on this topic?) because the current implementation directly interferes with the stated goals of D being a systems language. Not only can the GC be fixed in simple ways (eg just give us programmers more control over how and when it does its job), but we can do one better than just improving the GC, and it's through marking sections of code as off limits to anything that may allocate, and even better than that in more general terms, prevent the use a feature (or features) of the language that is not appropriate for a marked section of code. That'll make me very happy.
I won't complain about this, but it'll prevent you from being able to call into a very significant portion of the standard library. Browse through it, especially the most basic of tools, like std.string, basically everything allocates somewhere! I'm not that enthusiastic about fracturing my code into sections that can make use of the library, and sections that just can't. A lot of work could be done to make the library not allocate I'm sure, increasing the amount of library available in these isolated sections maybe... but I'd rather see work done to add finer control of the GC, so I can operate it in an acceptable manner.
Apr 10 2013
prev sibling parent "Rob T" <alanb ucora.com> writes:
On Wednesday, 10 April 2013 at 04:32:52 UTC, Manu wrote:
 moments, and give the collect function a maximum timeout where 
 it will
 yield, and then resume where it left off next time I call it.
Maximum collect period is perhaps the most significant missing feature of all. Having that alone would probably solve most of the complaints.

The other thing is not having any control over when the thing decides to make a run. It runs while allocating large data sets without a reason to do so, slowing things down by as much as 3x, although perhaps it has no way to know that there's no chance for anything to be collected without help from the programmer. What I have to do is disable the GC, do the allocations, then re-enable, but without the maximum timeout you can get a very large pause after re-enabling, in the range of seconds, which is completely unacceptable for some applications.

--rt
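That disable/allocate/re-enable pattern looks roughly like this in D (a minimal sketch; loadLevel is an invented name):

import core.memory : GC;

void loadLevel()
{
    GC.disable();                 // no collections during the allocation burst
    scope (exit) GC.enable();     // the deferred work can land here, so pick the moment

    // ... allocate the large data set ...

    // optionally: GC.collect(); at a point where a pause is acceptable
}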
Apr 09 2013
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
09-Apr-2013 20:48, Manu wrote:
 On 10 April 2013 00:50, Dmitry Olshansky <dmitry.olsh gmail.com
 <mailto:dmitry.olsh gmail.com>> wrote:

     09-Apr-2013 14:18, Manu wrote:

         On 9 April 2013 13:09, Rob T <alanb ucora.com
         <mailto:alanb ucora.com> <mailto:alanb ucora.com
         <mailto:alanb ucora.com>>>

         wrote:

              On Monday, 8 April 2013 at 08:21:06 UTC, Manu wrote:


                  The C++ state hasn't changed though. We still avoid virtual
                  calls like the
                  plague.
                  One of my biggest design gripes with D, hands down, is that
                  functions are
                  virtual by default. I believe this is a critical
         mistake, and
                  the biggest
                  one in the language by far.


              My understanding of this is that while all of your class
         functions
              will be virtual by default, the compiler will reduce them to
              non-virtual unless you actually override them, and to
         override by
              mistake is difficult because you have to specify the "override"
              keyword to avoid a compiler error.


         Thus successfully eliminating non-open-source libraries from D...
         Making a dependency on WPO is a big mistake.


     final class Foo{ //no inheritance
     final: //no virtuals
     ...
     }

     2 extra words and you are done. The only problem I see is that there
     is no way to "undo" final on a few methods later...


 Yes, it can not be un-done. And any junior/tired/forgetful programmer
 will accidentally write slow code all over the place, and nobody will
 ever have any idea that they've done it. It's very dangerous.
Yup. Well, for that matter I had a wild proposal on the back-burner to ditch the whole OOP part of D and/or redesign it. The reasons roughly go like this:

What we have is an enforced model with a bunch of arbitrary choices that fall short of the "one size fits all" promise: single inheritance, all virtual (yet private is always final), a GC-ed infinite-lifetime model baked in, reference semantics (that might be fine), "interface is not an Object", etc. Basically it's a set of choices that doesn't give you any control over internals (lifetime, object layout, virtuality, ABI). Yet it presents all of them together in a fixed box with half-usable knobs like:

- hidden v-table (always!)
- obligatory TypeInfo
- the lame copy-paste monitor mutex from Java
- etc.

Yet what I think is needed is getting the orthogonal concepts:

- custom polymorphic behavior (WHEN you need it and HOW you need it)
- being able to plug into a COM of your choice (GObject, MS COM, XPCOM, your own object model etc.)
- optional "pay as you go" reflection (extends on the previous point)
- control over ABI (with potential for true late binding)
- life-time policy: RC, GC, anything custom including manual management

All of this should be CUSTOMIZABLE and DECOUPLED! Give people the frigging control over the OOP breed they want to use. Providing people a toolkit, not a one-button black box (and that button keeps getting stuck!), would be awesome.

Say I want a v-table and late binding for a set of methods, and want a particular ABI for that. And I want it manually managed/statically allocated. No TypeInfo and no half-assed reflection (= bloat). There is no such thing in D (nor in C++ for that matter, as it has no "fit this ABI" either). It feels like I'm back to C + preprocessor or ASM. And again, to have great interop with an OS/engine/framework it has to be able to follow some PLATFORM-SPECIFIC object model, like the Obj-C one. (We sort of have it with M$ COM, but again why only M$, and why is it built into the _language_ itself?)

If the compiler front-end manages to deliver on "multiple alias this" and other features then it could be done (half-decently) in a library. Of course, some generic compiler support could help here and there.

Quiz: why do D structs look like a powerhouse that has all of the tricks, while classes look like poor lame Java wannabes?

-- 
Dmitry Olshansky
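As a sketch of what "polymorphism only where you ask for it" can look like with the existing struct toolbox (all names invented for the example), a hand-rolled table of function pointers gives late binding with layout, lifetime and allocation fully in the user's hands:

struct Renderer
{
    // the "vtable", chosen and sized by hand
    void function(ref Renderer) drawFn;
    void function(ref Renderer) flushFn;

    void draw()  { drawFn(this); }
    void flush() { flushFn(this); }
}

void glDraw(ref Renderer r)  { /* GL-specific drawing */ }
void glFlush(ref Renderer r) { /* GL-specific flush */ }

Renderer makeGLRenderer()
{
    return Renderer(&glDraw, &glFlush);
}

void main()
{
    auto r = makeGLRenderer();   // could just as well be statically allocated
    r.draw();
    r.flush();
}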
Apr 09 2013
parent reply "Rob T" <alanb ucora.com> writes:
On Tuesday, 9 April 2013 at 18:38:55 UTC, Dmitry Olshansky wrote:
[...]
 All of this should be CUSTOMIZABLE and DECOUPLED! Give people 
 the frigging control over the OOP breed they want to use.

 Providing people a toolkit not a one-button black box (and that 
 button keeps getting stuck!) would be awesome.
What we have is a traditional monolithic language and compiler which is rather inflexible and not as useful as it could be if it were instead a library of components that you can pick and choose from to assemble the language sub-set and compiler components that you need for a particular application. If you don't like a component, you should be able to plug a different one in or write your own. Same with the features of the language. I should be able to say "No I don't want dynamic arrays" in this part of my code and the compiler will refuse to compile them in. Perhaps I should even be able to create my own language features that natively compile. Furthermore, as a library of components, you should be able to hook them into your applications for general use, such as creating a JIT compiler for a sub-set of the language. --rt
Apr 09 2013
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
09-Apr-2013 23:39, Rob T wrote:
 On Tuesday, 9 April 2013 at 18:38:55 UTC, Dmitry Olshansky wrote:
 [...]
 All of this should be CUSTOMIZABLE and DECOUPLED! Give people the
 frigging control over the OOP breed they want to use.

 Providing people a toolkit not a one-button black box (and that button
 keeps getting stuck!) would be awesome.
What we have is a traditional monolithic language and compiler which is rather inflexible and not as useful as it could be if it were instead a library of components that you can pick and choose from to assemble the language sub-set and compiler components that you need for a particular application. If you don't like a component, you should be able to plug a different one in or write your own.
Love the positive radical nature of your post ;) But - it can't hot-swap components, it would break guarantees: say a library expects ref-counted objects and you hot-swapped that for GC ones, and so on and so forth. The sane way is sub-setting by restrictions - no changes and no extras. At least you can use the libraries if your set of restrictions is a superset of theirs.
 Same with the features of the language. I should be able to say "No I
 don't want dynamic arrays" in this part of my code and the compiler will
 refuse to compile them in.
Fine and nice. See threads about nogc.
 Perhaps I should even be able to create my
 own language features that natively compile.
Too bad, and it doesn't scale. Sub-setting works; pulling in custom built-ins doesn't. What could work is seeking smaller sets of "meta" or "proto" features in which we can implement the others (in the std lib) at no loss in expressiveness, performance etc. I claim that the whole of OOP should be the first one to be broken down into a few smallish proto-features. But as a start we'd better fix the AA-in-the-library disaster (by completing the move) though.

-- 
Dmitry Olshansky
Apr 09 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 05:12, Manu wrote:

 Bear in mind, most remaining C/C++ programmers are realtime programmers,
 and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run
 realtime software.
 If I chose not to care about 2ms only 8 times, I'll have no time left. I
 would cut off my left nut for 2ms most working days!
 I typically measure execution times in 10s of microseconds, if something
 measures in milliseconds it's a catastrophe that needs to be urgently
 addressed... and you're correct, as a C/C++ programmer, I DO design with
 consideration for sub-ms execution times before I write a single line of
 code.
 Consequently, I have seen the GC burn well into the ms on occasion, and
 as such, it is completely unacceptable in realtime software.

 The GC really needs to be addressed in terms of performance; it can't
 stop the world for milliseconds at a time. I'd be happy to give it
 ~150us every 16ms, but NOT 2ms every 200ms.
 Alternatively, some urgency needs to be invested in tools to help
 programmers track accidental GC allocations.
An easy workaround is to remove the GC and when you use the GC you'll get linker errors. Not pretty but it could work.
 I cope with D in realtime software by carefully avoiding excess GC
 usage, which, sadly, means basically avoiding the standard library at
 all costs. People use concatenations all through the std lib, in the
 strangest places, I just can't trust it at all anymore.
 I found a weird one just a couple of days ago in the function
 toUpperInPlace() (!! it allocates !!), but only when it encountered a
 utf8 sequence, which means I didn't even notice while working in my
 language! >_<
 Imagine it, I would have gotten a bug like "game runs slow in russian",
 and I would have been SOOOO "what the ****!?", while crunching to ship
 the product...
To address this particular case, without having looked at the code, you do know that it's possible that the length of a Unicode string changes when converting between upper and lower case for some languages. With that in mind, it might not be a good idea to have an in place version of toUpper/Lower at all.
 d) alternatives need to be available for the functions that allocate by
 nature, or an option for user-supplied allocators, like STL, so one can
 allocate from a pool instead.
Have you seen this, links at the bottom: http://3d.benjamin-thaut.de/?p=20 -- /Jacob Carlborg
Apr 08 2013
parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 17:21, Jacob Carlborg <doob me.com> wrote:

 On 2013-04-08 05:12, Manu wrote:

  Bear in mind, most remaining C/C++ programmers are realtime programmers,
 and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run
 realtime software.
 If I chose not to care about 2ms only 8 times, I'll have no time left. I
 would cut off my left nut for 2ms most working days!
 I typically measure execution times in 10s of microseconds, if something
 measures in milliseconds it's a catastrophe that needs to be urgently
 addressed... and you're correct, as a C/C++ programmer, I DO design with
 consideration for sub-ms execution times before I write a single line of
 code.
 Consequently, I have seen the GC burn well into the ms on occasion, and
 as such, it is completely unacceptable in realtime software.

 The GC really needs to be addressed in terms of performance; it can't
 stop the world for milliseconds at a time. I'd be happy to give it
 ~150us every 16ms, but NOT 2ms every 200ms.
 Alternatively, some urgency needs to be invested in tools to help
 programmers track accidental GC allocations.
An easy workaround is to remove the GC and when you use the GC you'll get linker errors. Not pretty but it could work.
Hehe, yeah I'm aware of these tricks. But I'm not really keen to be doing that. Like I said before, I'm not actually interested in eliminating the GC, I just want it to be usable. I like the concept of a GC, and I wish I could trust it. This requires me spending time using it and gathering experience, and perhaps making a noise about my pains here from time to time ;)

 I cope with D in realtime software by carefully avoiding excess GC
 usage, which, sadly, means basically avoiding the standard library at
 all costs. People use concatenations all through the std lib, in the
 strangest places, I just can't trust it at all anymore.
 I found a weird one just a couple of days ago in the function
 toUpperInPlace() (!! it allocates !!), but only when it encountered a
 utf8 sequence, which means I didn't even notice while working in my
 language! >_<
 Imagine it, I would have gotten a bug like "game runs slow in russian",
 and I would have been SOOOO "what the ****!?", while crunching to ship
 the product...
To address this particular case, without having looked at the code, you do know that it's possible that the length of a Unicode string changes when converting between upper and lower case for some languages. With that in mind, it might not be a good idea to have an in place version of toUpper/Lower at all.
... I don't think that's actually true. Can you suggest such a character in any language? I think they take that sort of thing into careful consideration when designing the codepoints for a character set.
But if that is the case, then a function called toUpperInPlace is flawed by design, because it would be incapable of doing what it says it does. I'm not convinced that's true though.

 d) alternatives need to be available for the functions that allocate by
 nature, or an option for user-supplied allocators, like STL, so one can
 allocate from a pool instead.

 Have you seen this, links at the bottom: http://3d.benjamin-thaut.de/?p=20
I hadn't. Interesting to note that I experience all the same critical issues listed at the bottom :) Most of them seem quite fixable, it just needs some focused attention...
My biggest issue, not mentioned there though, is that when the datasets get large, the collects take longer, and they are not synced with the game, leading to regular intermittent spikes that result in regular lost frames. A stuttering framerate is the worst possible kind of performance problem.
Apr 08 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 10:15, Manu wrote:

 ... I don't think that's actually true. Can you suggest such a character
 in any language? I think they take that sort of thing into careful
 consideration when designing the codepoints for a character set.
 But if that is the case, then a function called toUpperInPlace is flawed
 by design, because it would be incapable of doing what it says it does.
 I'm not convinced that's true though.
The German double "s" (ß) in uppercase form should be "SS". That consists of two characters. There's also something similar with the Turkic "I" with a dot.

Here's the full list of special casings:

http://www.unicode.org/Public/UNIDATA/SpecialCasing.txt

You should also read this:

http://forum.dlang.org/thread/kcppa1$30b9$1 digitalmars.com

Shows some nasty corner cases with Unicode. Short summary: encodings are PITA.

-- 
/Jacob Carlborg
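A quick way to see the point (a sketch; whether toUpper applies the full special-casing table depends on the Phobos version used, so treat the printed numbers as illustrative):

import std.stdio;
import std.range : walkLength;
import std.uni : toUpper;

void main()
{
    string s = "straße";
    string u = s.toUpper();                       // "STRASSE" under full special casing
    writeln(s.walkLength, " -> ", u.walkLength);  // 6 -> 7 code points: can't always fit in place
}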
Apr 08 2013
parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 19:06, Jacob Carlborg <doob me.com> wrote:

 On 2013-04-08 10:15, Manu wrote:

  ... I don't think that's actually true. Can you suggest such a character
 in any language? I think they take that sort of thing into careful
 consideration when designing the codepoints for a character set.
 But if that is the case, then a function called toUpperInPlace is flawed
 by design, because it would be incapable of doing what it says it does.
 I'm not convinced that's true though.
 The German double "s" (ß) in uppercase form should be "SS". That consists
 of two characters. There's also something similar with the Turkic "I" with
 a dot.

 Here's the full list of special casings:

 http://www.unicode.org/Public/UNIDATA/SpecialCasing.txt

 You should also read this:

 http://forum.dlang.org/thread/kcppa1$30b9$1 digitalmars.com

 Shows some nasty corner cases with Unicode.

 Short summary: encodings are PITA.
... bugger! :/ Well I guess that function just needs to be amended to not-upper-case-ify those troublesome letters? Shame.
Apr 08 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-08 11:14, Manu wrote:

 ... bugger! :/
 Well I guess that function just needs to be amended to
 not-upper-case-ify those troublesome letters? Shame.
I haven't looked at the source code so I don't know if this is the actual problem. But theoretically this will be a problem for every function trying to do this. In general I would not be particularly happy with a function that doesn't work properly. But since these cases are so few it might be ok.
Apr 08 2013
prev sibling parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 08.04.2013 05:12, Manu wrote:
 The GC really needs to be addressed in terms of performance; it can't stop
 the world for milliseconds at a time. I'd be happy to give it ~150us every
 16ms, but NOT 2ms every 200ms.
 Alternatively, some urgency needs to be invested in tools to help
 programmers track accidental GC allocations.
I'm not sure if these have been proposed already in this long thread, but 2 very small patches could help a lot for realtime applications:

1. a thread local flag to disallow and detect GC allocations
2. a flag per thread to specify that the thread should not be paused by the GC during collections.

The latter would then put the responsibility on the programmer to ensure that no references are mutated in the non-pausing thread that don't exist anywhere else (to avoid the collection of objects used in the realtime thread). As anything in a realtime thread usually has to be pre-allocated anyway, that doesn't put a lot more constraints on it but to ensure having references to the pre-allocated data in other threads or global state.
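Purely as a hypothetical sketch of how the two proposed flags might read at the call site; neither call exists in druntime, the names are invented here only to illustrate the proposal, so they are left as comments:

void realtimeLoop()
{
    // hypothetical flag 1: trap any accidental GC allocation made by this thread
    //     GC.disallowThreadAllocations(true);
    // hypothetical flag 2: exclude this thread from stop-the-world pauses; the
    // programmer then guarantees its references are rooted elsewhere
    //     GC.excludeThreadFromPause(true);

    foreach (frame; 0 .. 1000)
    {
        // work only with pre-allocated data here
    }
}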
Apr 09 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 16:14, Rainer Schuetze <r.sagitario gmx.de> wrote:

 On 08.04.2013 05:12, Manu wrote:

 The GC really needs to be addressed in terms of performance; it can't stop
 the world for milliseconds at a time. I'd be happy to give it ~150us every
 16ms, but NOT 2ms every 200ms.
 Alternatively, some urgency needs to be invested in tools to help
 programmers track accidental GC allocations.
I'm not sure if these have been proposed already in this long thread, but 2 very small patches could help a lot for realtime applications: 1. a thread local flag to disallow and detect GC allocations 2. a flag per thread to specify that the thread should not be paused by the GC during collections. The latter would then put the responsibility on the programmer to ensure that no references are mutated in the non-pausing thread that don't exist anywhere else (to avoid the collection of objects used in the realtime thread). As anything in a realtime thread usually has to be pre-allocated anyway, that doesn't put a lot more constraints on it but to ensure having references to the pre-allocated data in other threads or global state.
It's all rather useless without some powerful tools for tracking down leaks, and unintended allocations though. There will surely be bugs with this idea, and finding them will be a nightmare.
Apr 09 2013
parent Rainer Schuetze <r.sagitario gmx.de> writes:
On 10.04.2013 08:26, Manu wrote:
 On 10 April 2013 16:14, Rainer Schuetze <r.sagitario gmx.de> wrote:

 On 08.04.2013 05:12, Manu wrote:

 The GC really needs to be addressed in terms of performance; it can't stop
 the world for milliseconds at a time. I'd be happy to give it ~150us every
 16ms, but NOT 2ms every 200ms.
 Alternatively, some urgency needs to be invested in tools to help
 programmers track accidental GC allocations.
I'm not sure if these have been proposed already in this long thread, but 2 very small patches could help a lot for realtime applications: 1. a thread local flag to disallow and detect GC allocations 2. a flag per thread to specify that the thread should not be paused by the GC during collections. The latter would then put the responsibility on the programmer to ensure that no references are mutated in the non-pausing thread that don't exist anywhere else (to avoid the collection of objects used in the realtime thread). As anything in a realtime thread usually has to be pre-allocated anyway, that doesn't put a lot more constraints on it but to ensure having references to the pre-allocated data in other threads or global state.
It's all rather useless without some powerful tools for tracking down leaks, and unintended allocations though. There will surely be bugs with this idea, and finding them will be a nightmare.
It needs some kind of manual ownership handling that's more or less similar to what you do in C++ right now. IMO it allows working around some bad consequences of the mere existence of the GC without giving up its benefits in other threads. I don't expect a GC any time soon with pause times acceptable for realtime tasks. I'd be happy if they were acceptable for standard interactive programs (hopefully the concurrent version can be ported to Windows as well).
Apr 10 2013
prev sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 06:14:24 UTC, Rainer Schuetze 
wrote:
 I'm not sure if these have been proposed already in this long 
 thread, but 2 very small patches could help a lot for realtime 
 applications:

 1. a thread local flag to disallow and detect GC allocations
 2. a flag per thread to specify that the thread should not be 
 paused by the GC during collections.

 The latter would then put the responsibility on the programmer 
 to ensure that no references are mutated in the non-pausing 
 thread that don't exist anywhere else (to avoid the collection 
 of objects used in the realtime thread). As anything in a 
 realtime thread usually has to be pre-allocated anyway, that 
 doesn't put a lot more constraints on it but to ensure having 
 references to the pre-allocated data in other threads or global 
 state.
One concept is abused quite nicely in the Erlang VM/GC - there is one GC instance per process (fiber) and it gets its own pre-allocated memory pool for all its needs. That memory is prohibited from escaping to other processes. When the process lifespan is short enough (and spawning a hell of a lot of small processes is the Erlang way), true garbage collection is never called; the whole memory block is just sent back to the pool on process termination. I have tested it a few times and such an approach allowed me to meet some soft real-time requirements.

Don't know if something like that can be abused in D with its fibers and the scope storage class.
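A rough D analogue of that per-process pool idea (only a sketch, names invented for the example): give each task its own bump-the-pointer region and hand the whole block back when the task ends, so nothing ever needs to be collected.

struct Region
{
    ubyte[] buffer;
    size_t used;

    void[] allocate(size_t size)
    {
        assert(used + size <= buffer.length, "region exhausted");
        auto mem = buffer[used .. used + size];
        used += size;
        return mem;
    }

    void reset() { used = 0; }      // "send the block back to the pool"
}

void runTask(ref Region r)
{
    auto scratch = cast(int[]) r.allocate(1024 * int.sizeof);
    scratch[] = 0;                  // the task works only inside its region
}

void main()
{
    auto region = Region(new ubyte[](1024 * 1024));
    runTask(region);
    region.reset();                 // the whole lifetime ends at once, no GC cycle needed
}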
Apr 10 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 18:07, Dicebot <m.strashun gmail.com> wrote:

 On Wednesday, 10 April 2013 at 06:14:24 UTC, Rainer Schuetze wrote:

 I'm not sure if these have been proposed already in this long thread, but
 2 very small patches could help a lot for realtime applications:

 1. a thread local flag to disallow and detect GC allocations
 2. a flag per thread to specify that the thread should not be paused by
 the GC during collections.

 The latter would then put the responsibility on the programmer to ensure
 that no references are mutated in the non-pausing thread that don't exist
 anywhere else (to avoid the collection of objects used in the realtime
 thread). As anything in a realtime thread usually has to be pre-allocated
 anyway, that doesn't put a lot more constraints on it but to ensure having
 references to the pre-allocated data in other threads or global state.
One concept is abused quite nicely in Erlang VM/GC - there is one gc instance per process(fiber) and it gets own pre-allocated memory pool for all its needs. That memory is prohibited to escape to other processes. When process lifespan is short enough (and it is Erlang Way, spawning hell lot of small processes), true garbage collection is never called, whole memory block is just sent back to pool on process termination. I have tested it few times and such approach allowed to meet some soft real-time requirements. Don't know if something like that can be abused in D with its fibers and scope storage class.
That sounds horribly non-deterministic. What if you have 256mb of ram, and no pagefile, and you fill it up till you have 1mb headroom spare?
Apr 10 2013
parent reply "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 08:57:55 UTC, Manu wrote:
 That sounds horribly non-deterministic. What if you have 256mb 
 of ram, and
 no pagefile, and you fill it up till you have 1mb headroom 
 spare?
It is Erlang, it is not meant to be run on 256Mb RAM ;) It kind of solves the issue of response latency for GC-enabled software on powerful enterprise servers. Because with stop-the-world GC you can't do it, does not matter how powerful your hardware is. Does not help game dev and small-scale embedded though, that for sure ;)
Apr 10 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 19:07, Dicebot <m.strashun gmail.com> wrote:

 On Wednesday, 10 April 2013 at 08:57:55 UTC, Manu wrote:

 That sounds horribly non-deterministic. What if you have 256mb of ram, and
 no pagefile, and you fill it up till you have 1mb headroom spare?
It is Erlang, it is not meant to be run on 256Mb RAM ;) It kind of solves the issue of response latency for GC-enabled software on powerful enterprise servers. Because with stop-the-world GC you can't do it, does not matter how powerful your hardware is. Does not help game dev and small-scale embedded though, that for sure ;)
Well there's always the standing question though, why is the JVM so much faster than D? They produce a squillion times more garbage than D, yet they're much much faster, and they get used for realtime software; it's not so bad.
Apr 10 2013
next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 09:15:26 UTC, Manu wrote:
 Well there's always the standing question though, why is the JVM so much faster than D?
 They produce a squillion times more garbage than D, yet they're much much faster, and they get used for realtime software; it's not so bad.
Erm, because they are not faster? I have performance tested vibe.d against them and had roughly the same performance and latency (GC collection cycles, gah!). And Erlang won all of them in terms of latency on a powerful machine.
Apr 10 2013
parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 19:22, Dicebot <m.strashun gmail.com> wrote:

 On Wednesday, 10 April 2013 at 09:15:26 UTC, Manu wrote:


 Well there's always the standing question though, why is the JVM so much faster than D?
 They produce a squillion times more garbage than D, yet they're much much faster, and they get used for realtime software; it's not so bad.

Erm, because they are not faster? I have performance tested vibe.d against them and had roughly the same performance and latency (GC collection cycles, gah!). And Erlang won all of them in terms of latency on a powerful machine.
How much garbage were you collecting? How long were the collect times? Did you see the same relationship between the volume of garbage in them and in D? What was the performance relationship with respect to garbage volume? Was it linear?
Apr 10 2013
parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 09:33:13 UTC, Manu wrote:
 How much garbage were you collecting? How long were the collect 
 times? Did
 you see the same relationship between the volume of garbage in them and in D?
 What was the performance relationship with respect to garbage 
 volume? was
 it linear?
In D? Almost none. It was manual memory management with the GC enabled side-by-side. If you want to compare the D GC with the JVM GC, don't pretend you are comparing D with Java. The problem with the D GC is that collection cycles can still hit sometimes even if there is almost zero garbage, and it does not matter that nothing is actually collected - the very existence of the context switch and collection cycle creates a latency spike.
Apr 10 2013
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 April 2013 at 09:15:26 UTC, Manu wrote:
 On 10 April 2013 19:07, Dicebot <m.strashun gmail.com> wrote:

 On Wednesday, 10 April 2013 at 08:57:55 UTC, Manu wrote:

 That sounds horribly non-deterministic. What if you have 
 256mb of ram, and
 no pagefile, and you fill it up till you have 1mb headroom 
 spare?
It is Erlang, it is not meant to be run on 256Mb RAM ;) It kind of solves the issue of response latency for GC-enabled software on powerful enterprise servers. Because with stop-the-world GC you can't do it, does not matter how powerful your hardware is. Does not help game dev and small-scale embedded though, that for sure ;)
Well there's always the standing question though, why is the JVM so much faster than D? They produce a squillion times more garbage than D, yet they're much much faster, and they get used for realtime software; it's not so bad.
First of all they require the use of safe references. Pointer manipulation is reserved to unsafe regions, which allows for more aggressive GC algorithms. Secondly you have years of GC research invested into those runtimes. Finally they don't offer a single GC, but tunable versions. Additionally the garbage might be less than what you think, because you may use "new" but the JIT will actually do a stack allocation if it sees the object will be dead at the end of scope. Some Java related information, http://www.oracle.com/technetwork/java/javase/tech/g1-intro-jsp-135488.html http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html -- Paulo
Apr 10 2013
parent Manu <turkeyman gmail.com> writes:
On 10 April 2013 21:18, Paulo Pinto <pjmlp progtools.org> wrote:

 On Wednesday, 10 April 2013 at 09:15:26 UTC, Manu wrote:

 On 10 April 2013 19:07, Dicebot <m.strashun gmail.com> wrote:

  On Wednesday, 10 April 2013 at 08:57:55 UTC, Manu wrote:
  That sounds horribly non-deterministic. What if you have 256mb of ram,
 and
 no pagefile, and you fill it up till you have 1mb headroom spare?
It is Erlang, it is not meant to be run on 256Mb RAM ;) It kind of solves the issue of response latency for GC-enabled software on powerful enterprise servers. Because with stop-the-world GC you can't do it, does not matter how powerful your hardware is. Does not help game dev and small-scale embedded though, that for sure ;)
Well there's always the standing question though, why is the JVM so much faster than D? They produce a squillion times more garbage than D, yet they're much much faster. Even in realtime software, it's not so bad.
First of all they require the use of safe references. Pointer manipulation is reserved to unsafe regions, which allows for more aggressive GC algorithms. Secondly you have years of GC research invested into those runtimes. Finally they don't offer a single GC, but tunable versions. Additionally the garbage might be less than what you think, because you may use "new" but the JIT will actually do a stack allocation if it sees the object will be dead at the end of scope.
Good point. It'd be really great if D implemented optimisations of this sort too one of these days. There's a lot of such opportunities waiting for some attention. I'd be very interested to see what sort of practical difference they make.

 Some Java related information,
 http://www.oracle.com/technetwork/java/javase/tech/g1-intro-jsp-135488.html
 http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html
Apr 10 2013
prev sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 April 2013 at 09:07:53 UTC, Dicebot wrote:
 On Wednesday, 10 April 2013 at 08:57:55 UTC, Manu wrote:
 That sounds horribly non-deterministic. What if you have 256mb 
 of ram, and
 no pagefile, and you fill it up till you have 1mb headroom 
 spare?
It is Erlang, it is not meant to be run on 256Mb RAM ;) It kind of solves the issue of response latency for GC-enabled software on powerful enterprise servers. Because with stop-the-world GC you can't do it, does not matter how powerful your hardware is. Does not help game dev and small-scale embedded though, that for sure ;)
Actually there is a game studio in Germany, Wooga that uses Erlang for their game servers, they are quite happy to have left C++ land. http://www.slideshare.net/wooga/from-0-to-1000000-daily-users-with-erlang -- Paulo
Apr 10 2013
parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 09:33:03 UTC, Paulo Pinto wrote:
 Actually there is a game studio in Germany, Wooga that uses 
 Erlang for their game servers, they are quite happy to have 
 left C++ land.

 http://www.slideshare.net/wooga/from-0-to-1000000-daily-users-with-erlang

 --
 Paulo
No doubts, I was speaking about client-side. Server-side game-dev is more about servers than about game-dev ;) And Erlang is pretty good there.
Apr 10 2013
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 18:07, Dicebot <m.strashun gmail.com> wrote:

 On Wednesday, 10 April 2013 at 06:14:24 UTC, Rainer Schuetze wrote:

 I'm not sure if these have been proposed already in this long thread, but
 2 very small patches could help a lot for realtime applications:

 1. a thread local flag to disallow and detect GC allocations
 2. a flag per thread to specify that the thread should not be paused by
 the GC during collections.

 The latter would then put the responsibility on the programmer to ensure
 that no references are mutated in the non-pausing thread that don't exist
 anywhere else (to avoid the collection of objects used in the realtime
 thread). As anything in a realtime thread usually has to be pre-allocated
 anyway, that doesn't put a lot more constraints on it but to ensure having
 references to the pre-allocated data in other threads or global state.
One concept is abused quite nicely in Erlang VM/GC - there is one gc instance per process(fiber) and it gets own pre-allocated memory pool for all its needs. That memory is prohibited to escape to other processes. When process lifespan is short enough (and it is Erlang Way, spawning hell lot of small processes), true garbage collection is never called, whole memory block is just sent back to pool on process termination. I have tested it few times and such approach allowed to meet some soft real-time requirements. Don't know if something like that can be abused in D with its fibers and scope storage class.
Actually, now that I think about it though, I do something like this a lot in games. I have a block of memory that only lasts the life of a single frame, any transient allocations go in there, and they are never released, the pointer is just reset and it overwrites the buffer each frame.
Allocation is virtually instant: void* alloc(size_t bytes) { offset += bytes; return buffer[offset-bytes..offset]; }
and releasing: offset = 0;
This is a great realtime allocator! ;)
I usually have separate allocation functions to do this, but in D there is the problem that all the implicit allocations (array concatenation/strings/etc) can't have their alloc source controlled :/
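A minimal D sketch of such a frame allocator (the names and buffer handling here are illustrative, not taken from the original code) could look like this:

struct FrameAllocator
{
    ubyte[] buffer;   // pre-allocated once at startup (e.g. from malloc)
    size_t offset;

    // bump-the-pointer allocation: just advance the offset
    // (alignment handling omitted for brevity)
    void[] alloc(size_t bytes)
    {
        assert(offset + bytes <= buffer.length, "frame heap exhausted");
        offset += bytes;
        return buffer[offset - bytes .. offset];
    }

    // called once per frame: "releases" everything by rewinding
    void reset() { offset = 0; }
}

Allocation is an add and a bounds check, and reset() at the end of the frame frees everything at once.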
Apr 10 2013
parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 10 April 2013 at 09:06:40 UTC, Manu wrote:
 Actally, now that I think about it though, I do something like 
 this a lot
 in games.
 I have a block of memory that only lasts the life of a single 
 frame, any
 transient allocations go in there, and they are never released, 
 the pointer
 is just reset and it overwrites the buffer each frame.
 Allocation is virtually instant: void* alloc(size_t bytes) { 
 offset +=
 bytes; return buffer[offset-bytes..offset]; }
 and releasing: offset = 0;
 This is a great realtime allocator! ;)
 I usually have separate allocation functions to do this, but in 
 D there is
 the problem that all the implicit allocations (array
 concatenation/strings/etc) can't have their alloc source 
 controlled :/
Well, avoid all implicit allocations and you can do something like that already right now. My initial comment was inspired by recent uprising of "scope" discussion - it fits the concept allowing such implementation to be not only fast but also type-safe.
Apr 10 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/7/2013 3:59 AM, Paulo Pinto wrote:
 The current compilers just don't have the amount of investment in more than 20
 years of code optimization like C++ has. You cannot expect to achieve that from
 one moment to the other.
This is incorrect, as dmd, gdc, and ldc all use the backends of C++ compilers, and the code generated is as good as that of the corresponding C++ compiler.
Apr 10 2013
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 April 2013 at 22:50:48 UTC, Walter Bright wrote:
 On 4/7/2013 3:59 AM, Paulo Pinto wrote:
 The current compilers just don't have the amount of investment 
 in more than 20
 years of code optimization like C++ has. You cannot expect to 
 achieve that from
 one moment to the other.
This is incorrect, as dmd, gdc, and ldc all use the backends of C++ compilers, and the code generated is as good as that of the corresponding C++ compiler.
Correct, assuming the frontend organizes the intermediate information in a way that the backend can make the best use of it, or am I wrong?
Apr 10 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/10/2013 11:57 PM, Paulo Pinto wrote:
 On Wednesday, 10 April 2013 at 22:50:48 UTC, Walter Bright wrote:
 On 4/7/2013 3:59 AM, Paulo Pinto wrote:
 The current compilers just don't have the amount of investment in more than 20
 years of code optimization like C++ has. You cannot expect to achieve that from
 one moment to the other.
This is incorrect, as dmd, gdc, and ldc all use the backends of C++ compilers, and the code generated is as good as that of the corresponding C++ compiler.
Correct, assuming the frontend organizes the intermediate information in a way that the backend can make the best use of it, or am I wrong?
Of course.
Apr 11 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/6/2013 3:10 AM, Paulo Pinto wrote:
 However there are cases where every byte and every ms matter, in those cases
you
 are still better with C, C++ and Fortran.
This is not correct. If you care about every byte, you can make D code just as performant. And by caring about every byte, you'll need to become as familiar with how D generates code, and the cost of what various features entail, as you would be in C++.
Apr 10 2013
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 10 April 2013 at 22:48:06 UTC, Walter Bright wrote:
 On 4/6/2013 3:10 AM, Paulo Pinto wrote:
 However there are cases where every byte and every ms matter, 
 in those cases you
 are still better with C, C++ and Fortran.
This is not correct. If you care about every byte, you can make D code just as performant. And by caring about every byte, you'll need to become as familiar with how D generates code, and the cost of what various features entail, as you would be in C++.
Same can be said for JVM and .NET languages.

Some of our consulting projects are conversions of C++ code into one of the said technologies. We usually achieve performance parity with the existing application.

With C, C++ and Fortran it is easier to achieve a certain performance level without effort, while the other languages require a bit of effort: knowing the runtime, writing GC-friendly data structures and algorithms, and doing performance analysis. But it is achievable as well.

Many developers don't want to do this, hence my statement.

--
Paulo
Apr 10 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/10/2013 11:55 PM, Paulo Pinto wrote:
 Some of our consulting projects are conversion of C++ code into one of the said
 technologies. We usually achieve performance parity with the existing
application.

 With C, C++ and Fortran it is easier to achieve a certain performance level
 without effort, while the other languages require a bit of effort knowing the
 runtime, writing GC friendly data structures and algorithms, and doing
 performance analysis, but it is achievable as well.

 Many developers don't want to do this, hence my statement.
I've seen enough "performant" C++ code to disagree with your statement. If they knew what was going on with how C++ implemented their constructions, they could get a lot better performance. The second problem with writing performant C and C++ code is the difficulty of refactoring code to try different data structures & algorithms. Generally, one picks a design up front, and never changes it because it is too hard to change it.
Apr 11 2013
parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Thursday, 11 April 2013 at 08:03:53 UTC, Walter Bright wrote:
 On 4/10/2013 11:55 PM, Paulo Pinto wrote:
 Some of our consulting projects are conversion of C++ code 
 into one of the said
 technologies. We usually achieve performance parity with the 
 existing application.

 With C, C++ and Fortran it is easier to achieve a certain 
 performance level
 without effort, while the other languages require a bit of 
 effort knowing the
 runtime, writing GC friendly data structures and algorithms, 
 and doing
 performance analysis, but it is achievable as well.

 Many developers don't want to do this, hence my statement.
I've seen enough "performant" C++ code to disagree with your statement. If they knew what was going on with how C++ implemented their constructions, they could get a lot better performance. The second problem with writing performant C and C++ code is the difficulty of refactoring code to try different data structures & algorithms. Generally, one picks a design up front, and never changes it because it is too hard to change it.
Fair enough. I left daily C coding on the job in 2002 and C++ in 2005, so I can hardly consider myself an expert in optimization tricks in those languages. Nowadays I seldom get the opportunity to write new C++ code on the job.

--
Paulo
Apr 11 2013
prev sibling parent reply "Rob T" <alanb ucora.com> writes:
On Saturday, 6 April 2013 at 08:01:09 UTC, Adrian Mercieca wrote:
 In my very simple test, the GC version of my program ran more 
 than twice
 slower than the non GC version. I just cannot accept that kind 
 of
 performance penalty.

 Thanks.
I have run into similar problems with D and understand what you are looking for; so far the only solution is to change the way you write your code. Automated memory management has pros and cons, you are witnessing the cons, and I don't know if a better GC can really solve all of the cons.

In my case I have been able to mostly get around the problem by strategically disabling the GC during active memory allocations, and then re-enabling it when all or most of the allocations are completed. In effect I'm doing manual memory management all over again because the automated version fails to do a good enough job. Using this technique I've been able to get near C++ performance speeds.

Part of the problem is that the GC implementation is simply not suitable for performance code and lacks fine tuning abilities (that I'm aware of). Without a decent brain, it does stupid things, so when you have a lot of allocations going on but no deallocations, the GC seems to be running needlessly, slowing down the application by as much as 3x.
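As a rough sketch of the technique (core.memory is the real API; the class and the loop are only illustrative):

import core.memory : GC;

class Node { Node next; int payload; }

void buildWorld()
{
    GC.disable();              // suspend collection cycles during heavy allocation
    scope (exit)
    {
        GC.enable();
        GC.collect();          // run one collection at a point we choose
    }

    Node head;
    foreach (i; 0 .. 1_000_000)
    {
        auto n = new Node;     // no collection cycle can interrupt this loop
        n.payload = i;
        n.next = head;
        head = n;
    }
}

--rt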
Apr 06 2013
parent reply Adrian Mercieca <amercieca gmail.com> writes:
Thanks for your response
 
 In my case I have been able to mostly get around the problem by
 strategically disabling the GC during active memory allocations, and
 then re-enabling when all or most of the allocations are completed. In
 effect I'm doing manual memory management all over again because the
 automated version fails to do a good enough job. Using this technique
 I've been able to get near C++ performance speeds.
Incidentally, when you got this speed, what compiler were you using? dmd?
 
 Part of the problem is that the GC implementation is simply not suitable
 for performance code and lacks fine tuning abilities (that I'm aware
 of). Without a decent brain, it does stupid things, so when you have a
 lot of allocations going on but no deallocations, the GC seems to be
 running needlessly slowing down the application by as much as 3x.
Maybe it's time the GC implementation is addressed - or otherwise, the whole concept of GC in D should be dropped. To be honest, I'm perfectly happy with RAII in C++ and since D supports that fully (even better IMHO), I don't really see that much point for GC in a language that is vying to become a top systems language. D without a GC and as fast as C++ ............... that would be it - the ultimate IMHO.
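For what it's worth, GC-free RAII already works in D much as it does in C++. A minimal sketch (the Buffer type here is purely illustrative):

import core.stdc.stdlib : malloc, free;

struct Buffer
{
    void*  ptr;
    size_t len;

    this(size_t n)
    {
        ptr = malloc(n);     // manual allocation, the GC never sees it
        len = n;
    }

    ~this()
    {
        free(ptr);           // released deterministically at end of scope
    }

    @disable this(this);     // forbid copies so ownership stays unique
}

void useBuffer()
{
    auto buf = Buffer(4096);
    // ... use buf.ptr ...
}   // buf's destructor runs here, just like C++ RAII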
Apr 07 2013
parent reply "Rob T" <alanb ucora.com> writes:
On Sunday, 7 April 2013 at 09:02:25 UTC, Adrian Mercieca wrote:
 Incidentally, when you got this speed, what compiler were you 
 using? dmd?
I was (and still am) using the latest released DMD compiler. Here's the original thread where I presented the problem and the solution. You probably should read through it to understand what needs to be done. http://forum.dlang.org/thread/waarzqtfcxuzhzdelhtt@forum.dlang.org
 Maybe it's time the GC implementation is addressed - or 
 otherwise, the
 whole concept of GC in D should be dropped. To be honest, I'm 
 perfectly
 happy with RAII in C++ and since D supports that fully (even 
 better IMHO),
 I don't really see that much point for GC in a language that is 
 vying to
 become a top systems language.

 D without a GC and as fast as C++ ............... that would be 
 it - the
 ultimate IMHO.
Ideally, I think what we need is 1) a better GC, since the pros of using one are very significant, and 2) the ability to selectively mark sections of code as "off limits" to all GC-dependent code. What I mean by this is that the compiler will refuse to compile any code that makes use of automated memory allocations inside a @noheap-marked section of code.

There's been a proposal to do this that really ought to be taken seriously
http://d.puremagic.com/issues/show_bug.cgi?id=5219

You'll see there are also related proposals for better fine tuning through attribute-marked sections of code in general, which is another item that I would like to see implemented one day. Please vote it up if you agree.

--rt
Apr 07 2013
parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 04:41, Rob T <alanb ucora.com> wrote:

 Ideally, I think what we need is 1) a better GC since the pros with using
 one are very significant, and 2) the ability to selectively mark sections
 of code as "off limits" to all GC dependent code. What I mean by this is
 that the compiler will refuse to compile any code that makes use of
 automated memory allocations for a  noheap marked section of code.
I wonder if UDA's could be leveraged to implement this in a library?
UDA's can not permute the type, so I guess it's impossible to implement something like @noalloc that behaves like nothrow in a library...
I wonder what it would take, it would be generally interesting to move some of the built-in attributes to UDA's if the system is rich enough to express it.

As a side thought though, the information about whether a function can allocate could be known implicitly by the compiler if it chose to track that detail. I wonder if functions could gain a constant property so you can assert on that detail in your own code?
ie:

void myFunction()
{
   // does stuff...
}


{
   // ...code that i expect not to allocate...

   static assert(!myFunction.canAllocate);

   myFunction();
}

This way, I know for sure my code is good, and if I modify the body of myFunction at some later time (or one of its sub-calls is modified), for instance, to make an allocating library call, then I'll know about it the moment I make the change.

Then again, I wonder if a formal attribute @noalloc would be useful in the same way as nothrow? The std library would be enriched with that information... issues like the one where toUpperInPlace() was allocating (which is clearly a bug, it's not 'in place' if it's allocating), should have ideally been caught at the time of authoring the function.
Eliminating common sources of programmer errors and thus reducing bug counts is always an interesting prospect... and it would offer a major tool towards this thread's topic :)
Apr 07 2013
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 06:30, Manu wrote:

 I wonder if UDA's could be leveraged to implement this in a library?
 UDA's can not permute the type, so I guess it's impossible to implement
 something like  noalloc that behaves like  nothrow in a library...
 I wonder what it would take, it would be generally interesting to move
 some of the built-in attributes to UDA's if the system is rich enough to
 express it.

 As a side though though, the information about whether a function can
 allocate could be known implicitly by the compiler if it chose to track
 that detail. I wonder if functions could gain a constant property so you
 can assert on that detail in your own code?
 ie:

 void myFunction()
 {
    // does stuff...
 }


 {
    // ...code that i expect not to allocate...

    static assert(!myFunction.canAllocate);

    myFunction();
 }

 This way, I know for sure my code is good, and if I modify the body of
 myFunction at some later time (or one of its sub-calls is modified), for
 instance, to make an allocating library call, then i'll know about it
 the moment I make the change.
Scott Meyers had a talk about what he called red code/green code. It was supposed to statically enforce that green code cannot call red code. Then what is green code is completely up to you, if it's memory safe, thread safe, GC free or similar. I don't remember the conclusion and what could be implemented like this, but here's the talk: http://www.google.com/url?sa=t&rct=j&q=scott%20meyers%20red%20green%20code&source=web&cd=1&cad=rja&ved=0CCsQtwIwAA&url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DJfu9Kc1D-gQ&ei=fXJiUfC3FuSB4gS41IHADQ&usg=AFQjCNGtKwLcr2jNjsC4RJ0_5k8WmAFzTw&bvm=bv.44770516,d.bGE -- /Jacob Carlborg
Apr 08 2013
next sibling parent reply "eles" <eles eles.com> writes:
On Monday, 8 April 2013 at 07:35:59 UTC, Jacob Carlborg wrote:
 On 2013-04-08 06:30, Manu wrote:
 Scott Meyers had a talk about what he called red code/green 
 code. It was supposed to statically enforce that green code 
 cannot call red code. Then what is green code is completely up 
 to you, if it's memory safe, thread safe, GC free or similar.
That kind of genericity will be just wonderful in some cases. For example, one could make sure, at compilation, that interrupt code does not call sleeping code when it comes to Linux kernel programming. I wonder, however, if one could re-define the notions of green/red several times in a project. Maybe per-module basis?
Apr 08 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-08 09:46, eles wrote:

 That kind of genericity will be just wonderful in some cases. For
 example, one could make sure, at compilation, that interrupt code does
 not call sleeping code when it comes to Linux kernel programming.

 I wonder, however, if one could re-define the notions of green/red
 several times in a project. Maybe per-module basis?
Of course. If I recall correctly you just annotate a type or similar:

struct ThreadSafe (T)
{
    T t;
}

void bar (ThreadSafe!(Foo) foo) { }

Now "bar" will only accept a thread safe "Foo". -- /Jacob Carlborg
Apr 08 2013
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On 8 April 2013 17:35, Jacob Carlborg <doob me.com> wrote:

 On 2013-04-08 06:30, Manu wrote:

  I wonder if UDA's could be leveraged to implement this in a library?
 UDA's can not permute the type, so I guess it's impossible to implement
 something like  noalloc that behaves like  nothrow in a library...
 I wonder what it would take, it would be generally interesting to move
 some of the built-in attributes to UDA's if the system is rich enough to
 express it.

 As a side though though, the information about whether a function can
 allocate could be known implicitly by the compiler if it chose to track
 that detail. I wonder if functions could gain a constant property so you
 can assert on that detail in your own code?
 ie:

 void myFunction()
 {
    // does stuff...
 }


 {
    // ...code that i expect not to allocate...

    static assert(!myFunction.canAllocate);

    myFunction();
 }

 This way, I know for sure my code is good, and if I modify the body of
 myFunction at some later time (or one of its sub-calls is modified), for
 instance, to make an allocating library call, then i'll know about it
 the moment I make the change.
 Scott Meyers had a talk about what he called red code/green code. It was supposed to statically enforce that green code cannot call red code. Then what is green code is completely up to you, if it's memory safe, thread safe, GC free or similar. I don't remember the conclusion and what could be implemented like this, but here's the talk: http://www.youtube.com/watch?v=Jfu9Kc1D-gQ

That sounds awesome. I'll schedule it for later on! :P
Apr 08 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/8/13 3:35 AM, Jacob Carlborg wrote:
 Scott Meyers had a talk about what he called red code/green code. It was
 supposed to statically enforce that green code cannot call red code.
 Then what is green code is completely up to you, if it's memory safe,
 thread safe, GC free or similar.

 I don't remember the conclusion and what could be implemented like this,
 but here's the talk:

 http://www.google.com/url?sa=t&rct=j&q=scott%20meyers%20red%20green%20code&source=web&cd=1&cad=rja&ved=0CCsQtwIwAA&url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DJfu9Kc1D-gQ&ei=fXJiUfC3FuSB4gS41IHADQ&usg=AFQjCNGtKwLcr2jNjsC4RJ0_5k8WmAFzTw&bvm=bv.44770516,d.bGE
Article: http://www.artima.com/cppsource/codefeaturesP.html It's one of Scott's better works but it went underappreciated. I think it would be worthwhile looking into how to implement such with D's features (notably attributes). Andrei
Apr 08 2013
next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
08-Apr-2013 19:17, Andrei Alexandrescu пишет:
 On 4/8/13 3:35 AM, Jacob Carlborg wrote:
 Scott Meyers had a talk about what he called red code/green code. It was
 supposed to statically enforce that green code cannot call red code.
 Then what is green code is completely up to you, if it's memory safe,
 thread safe, GC free or similar.

 I don't remember the conclusion and what could be implemented like this,
 but here's the talk:

 http://www.google.com/url?sa=t&rct=j&q=scott%20meyers%20red%20green%20code&source=web&cd=1&cad=rja&ved=0CCsQtwIwAA&url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DJfu9Kc1D-gQ&ei=fXJiUfC3FuSB4gS41IHADQ&usg=AFQjCNGtKwLcr2jNjsC4RJ0_5k8WmAFzTw&bvm=bv.44770516,d.bGE
Article: http://www.artima.com/cppsource/codefeaturesP.html It's one of Scott's better works but it went underappreciated. I think it would be worthwhile looking into how to implement such with D's features (notably attributes).
I guess that the implementation is far behind the beauty of the concept.

IIRC I once proposed something to the same effect as the red/green code. I was trying to see what kind of general feature could supersede @safe/@trusted/@system, pure and nothrow. The end result of that exercise for me was that there are exactly 2 orthogonal features:
- tags on code with policies that manage the relation of these (much like routing policy)
- a tool to restrict blocks of code to a certain subset of the language (here goes nothrow, @nogc, pure - various sets of restrictions)

The problem is picking cool syntax, implementation :o) and doing some crazy field testing.
 Andrei
-- Dmitry Olshansky
Apr 08 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-04-08 17:17, Andrei Alexandrescu wrote:

 Article: http://www.artima.com/cppsource/codefeaturesP.html

 It's one of Scott's better works but it went underappreciated. I think
 it would be worthwhile looking into how to implement such with D's
 features (notably attributes).
Didn't know there was an article. -- /Jacob Carlborg
Apr 08 2013
prev sibling parent Martin Nowak <code dawg.eu> writes:
On 04/08/2013 05:17 PM, Andrei Alexandrescu wrote:
 Article: http://www.artima.com/cppsource/codefeaturesP.html
Great, I already searched for that article for quite a while to support an enhancement request. http://d.puremagic.com/issues/show_bug.cgi?id=9511
Apr 08 2013
prev sibling next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 04:30:56 UTC, Manu wrote:
 I wonder if UDA's could be leveraged to implement this in a 
 library?
 UDA's can not permute the type, so I guess it's impossible to 
 implement
 something like  noalloc that behaves like  nothrow in a 
 library...
 I wonder what it would take, it would be generally interesting 
 to move some
 of the built-in attributes to UDA's if the system is rich 
 enough to express
 it.
Both the blessing and the curse of UDA's is that they are not part of the type and thus not part of mangling. I think it is possible to provide a library implementation of @nogc for cases when all source code is available, but for external libraries and separate compilation it will become a matter of trust, which is hardly good.
Apr 08 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 09:53, Dicebot wrote:

 Both blessing and curse of UDA's is that they are not part of type and
 thus mangling.
Can't you just annotate a type: class Foo { } struct ThreadSafe (T) { T t; } ThreadSafe!(Foo) foo; -- /Jacob Carlborg
Apr 08 2013
parent reply "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 09:15:50 UTC, Jacob Carlborg wrote:
 On 2013-04-08 09:53, Dicebot wrote:

 Both blessing and curse of UDA's is that they are not part of 
 type and
 thus mangling.
Can't you just annotate a type: class Foo { } struct ThreadSafe (T) { T t; } ThreadSafe!(Foo) foo;
Yes, and that allows some powerful stuff on its own, I am aware of it (and actually use it sometimes). But a) it is not a UDA ;) b) You are forced to make the function templated to mark it as @nogc. Bad.
Apr 08 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 11:17, Dicebot wrote:

 b) You are forced to make function templated to mark it as  nogc. Bad.
It depends on how it's used. If it's enough to annotate a type the function doesn't need to be templated. This should work just fine: class Foo { } struct ThreadSafe (T) { T t; } ThreadSafe!(Foo) foo; void process (ThreadSafe!(Foo) foo) { /* process foo */ } -- /Jacob Carlborg
Apr 08 2013
parent reply "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 09:41:05 UTC, Jacob Carlborg wrote:
 On 2013-04-08 11:17, Dicebot wrote:

 b) You are forced to make function templated to mark it as 
  nogc. Bad.
It depends on how it's used. If it's enough to annotate a type the function doesn't need to be templated. This should work just fine: class Foo { } struct ThreadSafe (T) { T t; } ThreadSafe!(Foo) foo; void process (ThreadSafe!(Foo) foo) { /* process foo */ }
Hm, so you propose to use something like Malloced!Data / Stack!Data instead of marking the whole function with @nogc? Interesting, I have never thought about this approach, may be worth trying as a proof-of-concept.
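A rough shape such a proof-of-concept could take (std.conv.emplace and core.stdc.stdlib are real; everything else here is illustrative, not an existing library type):

import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

// owns a T constructed in malloc'd memory, destroyed and freed deterministically
struct Malloced(T)
{
    T* ptr;

    ~this()
    {
        if (ptr !is null)
        {
            destroy(*ptr);   // run T's destructor
            free(ptr);
            ptr = null;
        }
    }

    @disable this(this);     // single owner, no accidental copies

    alias ptr this;          // usable wherever a T* is expected
}

Malloced!T malloced(T, Args...)(Args args)
    if (is(T == struct))
{
    Malloced!T m;
    m.ptr = cast(T*) malloc(T.sizeof);
    emplace(m.ptr, args);    // placement-construct T in the raw memory
    return m;
}

The GC never sees such an allocation, while functions taking Malloced!T document in their signature that they don't expect GC-managed data.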
Apr 08 2013
next sibling parent Manu <turkeyman gmail.com> writes:
On 8 April 2013 20:10, Dicebot <m.strashun gmail.com> wrote:

 On Monday, 8 April 2013 at 09:41:05 UTC, Jacob Carlborg wrote:

 On 2013-04-08 11:17, Dicebot wrote:

  b) You are forced to make function templated to mark it as  nogc. Bad.

 It depends on how it's used. If it's enough to annotate a type the
 function doesn't need to be templated. This should work just fine:

 class Foo { }

 struct ThreadSafe (T)
 {
     T t;
 }

 ThreadSafe!(Foo) foo;

 void process (ThreadSafe!(Foo) foo) { /* process foo */ }
Hm, so you propose to use something like Malloced!Data / Stack!Data instead of marking whole function with nogc? Interesting, I have never though about this approach, may be worth trying as proof-of-concept.
It's such a dirty hack though ;) .. that does not give the impression, or confidence that the language addresses the problem. Actually, quite the opposite. If I saw that, I'd be worried...
Apr 08 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 12:10, Dicebot wrote:

 Hm, so you propose to use something like Malloced!Data / Stack!Data
 instead of marking whole function with  nogc? Interesting, I have never
 though about this approach, may be worth trying as proof-of-concept.
I don't know. The thread safe example probably works better with an annotated type than @nogc. But the problem is still how to make it not use the GC. I mean, the red code/green code talk is more about statically enforcing some property of your code. If you cannot accomplish that property in your code, regardless of whether it's statically enforced or not, I don't think that talk will help. -- /Jacob Carlborg
Apr 08 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/8/2013 5:06 AM, Jacob Carlborg wrote:
 I don't know. The thread safe example probably works better with an annotated
 type than nogc.
@nogc would have to be enforced by the compiler. Furthermore, it's viral in that much of the runtime library would have to be gone through and marked @nogc, otherwise it would be fairly useless. It's a fairly intrusive change.
Apr 10 2013
parent reply Manu <turkeyman gmail.com> writes:
On 11 April 2013 10:53, Walter Bright <newshound2 digitalmars.com> wrote:

 On 4/8/2013 5:06 AM, Jacob Carlborg wrote:

 I don't know. The thread safe example probably works better with an
 annotated
 type than nogc.
nogc would have to be enforced by the compiler. Furthermore, it's viral in that much of the runtime library would have to be gone through and marked nogc, otherwise it would be fairly useless. It's a fairly intrusive change.
As a key user, I can't say I'd even want to use this. I'd rather just see time put into improving the GC, making it more practical/flexible ;)
Apr 10 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/10/2013 8:36 PM, Manu wrote:
 As a key user, I can't say I'd even want to use this. I'd rather just see time
 put into improving the GC, making it more practical/flexible ;)
Me too. I shudder at the ugliness of pervasive @nogc attributes.
Apr 10 2013
prev sibling parent Martin Nowak <code dawg.eu> writes:
On 04/08/2013 06:30 AM, Manu wrote:
 I wonder if UDA's could be leveraged to implement this in a library?
 UDA's can not permute the type, so I guess it's impossible to implement
 something like  noalloc that behaves like  nothrow in a library...
 I wonder what it would take, it would be generally interesting to move
 some of the built-in attributes to UDA's if the system is rich enough to
 express it.
It's one of the most interesting use-cases for attributes so I hope to see this at some point. http://d.puremagic.com/issues/show_bug.cgi?id=9511
Apr 09 2013
prev sibling next sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Saturday, 6 April 2013 at 04:16:13 UTC, Adrian Mercieca wrote:
 Hi,

 Is it possible to switch off the GC entirely in D?
 Can the GC be switched off completely - including within phobos?

 What I am looking for is absolute control over memory 
 management.
 I've done some tests with GC on and GC off and the performance 
 with GC is
 not good enough for my requirements.
You can switch off the GC, but then things will leak, as the core language, druntime, and Phobos all use the GC in many cases. What I do is just avoid the functions that allocate, and rewrite the ones I need. I also use a modified druntime that prints callstacks when a GC allocation occurs, so I know if it happens by accident.
Apr 06 2013
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Peter Alexander:

 I also use a modified druntime that prints callstacks when a GC 
 allocation occurs, so I know if it happens by accident.
Is it possible to write a patch to activate those prints with a compiler switch? Bye, bearophile
Apr 06 2013
parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Saturday, 6 April 2013 at 11:01:09 UTC, bearophile wrote:
 Peter Alexander:

 I also use a modified druntime that prints callstacks when a 
 GC allocation occurs, so I know if it happens by accident.
Is it possible to write a patch to activate those prints with a compiler switch?
Yes, but I don't have time to do that right now.
Apr 06 2013
parent reply "Rob T" <alanb ucora.com> writes:
On Saturday, 6 April 2013 at 21:29:20 UTC, Peter Alexander wrote:
 On Saturday, 6 April 2013 at 11:01:09 UTC, bearophile wrote:
 Peter Alexander:

 I also use a modified druntime that prints callstacks when a 
 GC allocation occurs, so I know if it happens by accident.
Is it possible to write a patch to activate those prints with a compiler switch?
Yes, but I don't have time to do that right now.
We lack decent tools to even understand what the GC is doing, so that's the sort of thing that should be built directly into D rather than as a patch. --rt
Apr 06 2013
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Saturday, 6 April 2013 at 22:29:42 UTC, Rob T wrote:
 We lack decent tools to even understand what the GC is doing,
https://github.com/CyberShadow/Diamond D1-only due to lack of interest.
Apr 06 2013
prev sibling parent "Martin Nowak" <code dawg.eu> writes:
 We lack decent tools to even understand what the GC is doing, 
 so that's the sort of thing that should be built directly into 
 D rather than as a patch.

 --rt
GCStats isn't yet done, but would be a good start.
Apr 07 2013
prev sibling next sibling parent reply "Martin Nowak" <code dawg.eu> writes:
 I also use a modified druntime that prints callstacks when a GC 
 allocation occurs, so I know if it happens by accident.
I'd happily welcome any patches that get rid of GC usage in druntime.
Apr 07 2013
parent "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Sunday, 7 April 2013 at 20:20:52 UTC, Martin Nowak wrote:
 I also use a modified druntime that prints callstacks when a 
 GC allocation occurs, so I know if it happens by accident.
I'd happily welcome any patches that get rid of GC usage in druntime.
I meant that it prints a callstack whenever *anything* uses the GC, not just druntime.
Apr 07 2013
prev sibling parent Manu <turkeyman gmail.com> writes:
On 6 April 2013 19:51, Peter Alexander <peter.alexander.au gmail.com> wrote:

 You can switch off the GC, but then things will leak as the core language,
 druntime, and phobos all use the GC is many cases.

 What I do is just avoid the functions that allocate, and rewrite the ones
 I need. I also use a modified druntime that prints callstacks when a GC
 allocation occurs, so I know if it happens by accident.
This needs to be a feature in the standard library that you can turn on... or a compiler option (version?) that will make it complain at compile time when you call functions that may produce hidden allocations.
Apr 07 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-04-06 06:16, Adrian Mercieca wrote:
 Hi,

 Is it possible to switch off the GC entirely in D?
 Can the GC be switched off completely - including within phobos?

 What I am looking for is absolute control over memory management.
 I've done some tests with GC on and GC off and the performance with GC is
 not good enough for my requirements.
There's a GC-free fork of Phobos and druntime floating around on github. Links at the bottom: http://3d.benjamin-thaut.de/?p=20 -- /Jacob Carlborg
Apr 06 2013
prev sibling next sibling parent reply "Manipulator" <volcz kth.se> writes:
I just re-read the "Doom3 Source Code Review" by Fabien Sanglard 
(http://fabiensanglard.net/doom3/) and apparently they don't use 
the Standard C++ library. "The engine does not use the Standard 
C++ Library: All containers (map,linked list...) are 
re-implemented but libc is extensively used."
I certainly feel that there is room for improvement, like 
optimizing the GC, defining a GC-free subset of Phobos etc. But it 
seems that if you're writing really performance-critical realtime 
software you most likely have to implement everything bottom-up to 
get that level of control.

Secondly it seems like it's most often cheaper to just throw 
faster hardware at a problem.
"You can do tricks to address any one of them; but I pretty 
strongly believe that with all of these things that are 
troublesome in graphics, rather than throwing really complex 
algorithms at them, they will eventually fall to raw processing 
power."(http://fabiensanglard.net/doom3/interviews.php)

My 2p.
Apr 08 2013
parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 18:22, Manipulator <volcz kth.se> wrote:

 I just re-read the "Doom3 Source Code Review" by Fabien Sanglard (
 http://fabiensanglard.net/doom3/)
 and apparently they don't use the Standard C++ library. "The engine does
 not use the Standard C++ Library: All containers (map,linked list...) are
 re-implemented but libc is extensively used."
 I certainly feel that there is room for improvement, like optimizing the
 GC, define a GC-free subset of Phobos etc. But it seems like if you're
 writing really performance critical realtime software most likely you've to
 implement everything bottom up to get the level of control.
I think it's important to realise though, that id software (and most game devs for that matter) only implement their own containers/etc because C++ utterly failed them. My whole point here is to make sure D gets it right in the std lib, so people don't have to waste that time. Rolling your own containers (or working upon any non-standard libs) leads to generally incompatible code. Try plugging library X with its own set of containers into application Y.

If D fails on this front, I think it will fail as a candidate for these developers; it will not be selected by realtime developers trying to escape C++. I suspect Andrei for one knows this, and that's why the D containers are so... barely existing. The language is not yet ready to say with confidence how they should look.

Performance critical shouldn't be incompatible with the notion of a standard library. It just means the standard library authors didn't give a shit, and I hope that doesn't transfer into D long-term... :/

 Secondly it seems like it's most often cheaper to just throw faster
 hardware at a problem.
 "You can do tricks to address any one of them; but I pretty strongly
 believe that with all of these things that are troublesome in graphics,
 rather than throwing really complex algorithms at them, they will
 eventually fall to raw processing power."
 (http://fabiensanglard.net/doom3/interviews.php)
You saw the AMD guy who recently came out and said "Moore's law is over", right? ;)
Remember that you're reading an article by a PC developer. The thing about phones and games consoles (particularly games consoles, since they have a 10-ish year lifetime... phones are more like 6 months, but they'll slow with time), and now sunglasses (god knows what next!), is that they're fixed hardware. The developers are required to compete to get the most out of a fixed machine. The language/libs need to help with this and not waste everyone's time. If the library is particularly bad, the developers will rewrite it themselves, at the expense of time, sanity, and portability.

I've reinvented more wheels in my career than I care to imagine. Actually, it's probably not a far-fetched call to say I've spent MOST of my working life re-inventing wheels... just because a standard or popular library was written by a PC developer for x86 :(
I don't believe D can afford to be that short-sighted. The future is not high performance x86 PCs, that time is _already_ 5 years behind us.
Apr 08 2013
parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Mon, 08 Apr 2013 09:58:15 +0100, Manu <turkeyman gmail.com> wrote:
 I suspect Andrei for one knows this, and that's why the D containers are
 so... barely existing. The language is not yet ready to say with  
 confidence
 how they should look.
That, and before you can design the containers you need a concrete allocator interface design. Actually, this is likely the same blocker for GC-free D as well.

D should have a set of global allocator hooks. If it did, you could easily catch unexpected allocations in tight loops and realtime code. If it did, GC-free D would be trivial - just replace the default GC based allocator with a malloc/free one, or any other scheme you like.

The hooks would ideally pass __FILE__ and __LINE__ information down from the call site in debug mode, etc.
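A hypothetical shape for such hooks (none of these names exist in druntime; this is only a sketch of the idea):

import core.stdc.stdio  : printf;
import core.stdc.stdlib : malloc, realloc, free;

// imaginary global hook table that the runtime would route all heap traffic through
struct AllocatorHooks
{
    void* function(size_t size, string file, size_t line) allocate;
    void* function(void* p, size_t size, string file, size_t line) reallocate;
    void  function(void* p, string file, size_t line) deallocate;
}

void* traceAlloc(size_t size, string file, size_t line)
{
    // log the call site so unexpected allocations in realtime code stand out
    printf("alloc %zu bytes at %.*s:%zu\n", size, cast(int) file.length, file.ptr, line);
    return malloc(size);
}

void* traceRealloc(void* p, size_t size, string file, size_t line)
{
    return realloc(p, size);
}

void traceFree(void* p, string file, size_t line)
{
    free(p);
}

__gshared AllocatorHooks gAllocator =
    AllocatorHooks(&traceAlloc, &traceRealloc, &traceFree);

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/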
Apr 08 2013
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 8 April 2013 20:57, Regan Heath <regan netmail.co.nz> wrote:

 On Mon, 08 Apr 2013 09:58:15 +0100, Manu <turkeyman gmail.com> wrote:

 I suspect Andrei for one knows this, and that's why the D containers are
 so... barely existing. The language is not yet ready to say with
 confidence
 how they should look.
That, and before you can design the containers you need a concrete allocator interface design. Actually, this is likely the same blocker for GC-free D as well. D should have a set of global allocator hooks.
True. I've been saying for a long time that I'd really like filesystem hooks too while at it! If it did, you could easily catch unexpected allocations in tight loops and
 realtime code.  If it did, GC-free D would be trivial - just replace the
 default GC based allocator with a malloc/free one, or any other scheme you
 like.
D doesn't have a delete keyword, which is fairly important if you want to manually manage memory... The hooks would ideally pass __FILE__ and __LINE__ information down from
 the call site in debug mode, etc.
Apr 08 2013
next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 11:14:13 UTC, Manu wrote:
 D doesn't have a delete keyword, which is fairly important if 
 you want to
 manually manage memory...
I'd argue that. If the concept of a global allocator is added, new/delete can be done as library solutions (similar to the current emplace/destroy). Defining those in the language may have some benefits but is definitely not a requirement.
Apr 08 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-04-08 13:14, Manu wrote:

 D doesn't have a delete keyword, which is fairly important if you want
 to manually manage memory...
It's only deprecated (or possibly not even that yet). It's going to be replaced with a "destroy" function or similar. Then there's always "free". -- /Jacob Carlborg
Apr 08 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-08 12:57, Regan Heath wrote:

 D should have a set of global allocator hooks.  If it did, you could
 easily catch unexpected allocations in tight loops and realtime code.
 If it did, GC-free D would be trivial - just replace the default GC
 based allocator with a malloc/free one, or any other scheme you like.
You can already do that: https://github.com/D-Programming-Language/druntime/tree/master/src/gcstub -- /Jacob Carlborg
Apr 08 2013
parent reply "Dicebot" <m.strashun gmail.com> writes:
On Monday, 8 April 2013 at 12:22:04 UTC, Jacob Carlborg wrote:
 On 2013-04-08 12:57, Regan Heath wrote:

 D should have a set of global allocator hooks.  If it did, you 
 could
 easily catch unexpected allocations in tight loops and 
 realtime code.
 If it did, GC-free D would be trivial - just replace the 
 default GC
 based allocator with a malloc/free one, or any other scheme 
 you like.
You can already do that: https://github.com/D-Programming-Language/druntime/tree/master/src/gcstub
Not the same. Using stub gc can help finding unneeded gc allocations, but it is not designed to be generic allocator - API kind of assumes it actually is a GC. It is a starting place, but not a solution.
Apr 08 2013
parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Mon, 08 Apr 2013 13:28:20 +0100, Dicebot <m.strashun gmail.com> wrote:

 On Monday, 8 April 2013 at 12:22:04 UTC, Jacob Carlborg wrote:
 On 2013-04-08 12:57, Regan Heath wrote:

 D should have a set of global allocator hooks.  If it did, you could
 easily catch unexpected allocations in tight loops and realtime code.
 If it did, GC-free D would be trivial - just replace the default GC
 based allocator with a malloc/free one, or any other scheme you like.
You can already do that: https://github.com/D-Programming-Language/druntime/tree/master/src/gcstub
Not the same. Using stub gc can help finding unneeded gc allocations, but it is not designed to be generic allocator - API kind of assumes it actually is a GC. It is a starting place, but not a solution.
True, but it does help Manu with his problem of detecting unexpected GC allocations in realtime loops though.

I've always hated the fact that C++ has 2 memory models, new/delete and malloc/free, and I've never liked new/delete because it doesn't allow anything like realloc - why can't I reallocate an array of char or wchar_t??

So, in my ideal world - if I needed manual memory management - I would want to be able to supply one set of allocator routines malloc, realloc, free (minimum) and have the runtime use those for all memory (heap only I guess) allocation. So, new would obtain the object memory from my routine, then do its own (placement) construction in that memory block - for example. It would also be nice to be able to override new, for object tracking or similar purposes. As has been said, we no longer have delete. But, we might want to track GC collection or finalisation, again for object tracking or similar. If we get destroy we might want to hook that also.

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 08 2013
parent reply "Minas Mina" <minas_mina1990 hotmail.co.uk> writes:
On Monday, 8 April 2013 at 13:01:18 UTC, Regan Heath wrote:

 I've always hated the fact that C++ has 2 memory models 
 new/delete and malloc/free and I've never liked new/delete 
 because it doesn't allow anything like realloc - why can't I 
 reallocate an array of char or wchar_t??
Try using malloc on something that contains a type with a constructor inside and you'll see. The constructor will not be called :) malloc and friends are bad heritage due to C compatibility. I stay away from them.
Apr 08 2013
parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Mon, 08 Apr 2013 15:24:06 +0100, Minas Mina  
<minas_mina1990 hotmail.co.uk> wrote:

 On Monday, 8 April 2013 at 13:01:18 UTC, Regan Heath wrote:

 I've always hated the fact that C++ has 2 memory models new/delete and  
 malloc/free and I've never liked new/delete because it doesn't allow  
 anything like realloc - why can't I reallocate an array of char or  
 wchar_t??
Try using malloc on something that contains a type with a constructor inside and you'll see. The constructor will not be called :)
Are you talking about C++ or D here?

Memory allocation and object construction are separate concepts, malloc is for the former, not the latter. "new" on the other hand ties them both together .. except that "placement" new can be used to separate them again :) As in..

#include <stdio.h>
#include <stdlib.h>
#include <new>

class AA
{
public:
  AA()  { printf("AA\n"); }
  ~AA() { printf("~AA\n"); }
};

class BB
{
  AA a;
public:
  BB()  { printf("BB\n"); }
  ~BB() { printf("~BB\n"); }
};

int main()
{
  void *mem = malloc(sizeof(BB));
  BB *b = new (mem) BB();   // placement new: construct in the malloc'd block
  b->~BB();                 // destroy explicitly; "delete b" would be wrong on malloc'd memory
  free(mem);
}

The above outputs: AA BB ~BB ~AA as expected. malloc is used for memory allocation and placement new for construction.
 malloc and friends are bad heritage due to C compatibility. I stay away  
 from them.
Mixing the 2 in the same piece of code is just asking for bugs - like trying to free something allocated with "new[]" for example.. *shudder*. There is nothing "bad heritage" about malloc, or more especially realloc, you try doing the equivalent of realloc with new .. it's not possible. R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 08 2013
parent reply "Minas Mina" <minas_mina1990 hotmail.co.uk> writes:
Sorry, if it wasn't clear. I was talking about C++.

Even if you are not mixing the two, you can still get f*** up.

#include <iostream>
#include <cstdlib>
using namespace std;

struct S
{
	S()
	{
		cout << "S()\n";
	}
};

int main()
{
	S *s = new S(); // constructor is called
	
	S *s2 = (S*)malloc(sizeof(S)); // constructor is NOT called
}

So you see an innocent struct type and you decide to use malloc 
instead of new... If it has a constructor/destructor/... those 
will not be called. It's just asking for trouble.
Apr 08 2013
parent "Regan Heath" <regan netmail.co.nz> writes:
On Mon, 08 Apr 2013 16:16:07 +0100, Minas Mina  
<minas_mina1990 hotmail.co.uk> wrote:

 Sorry, if it wasn't clear. I was talking about C++.

 Even if you are not mixing the two, you can still get f*** up.

 struct S
 {
 	S()
 	{
 		cout << "S()\n";
 	}
 };

 int main()
 {
 	S *s = new S(); // constructor is called
 	
 	S *s2 = (S*)malloc(sizeof(S)); // constructor is NOT called
 }

 So you see an innocent struct type and you decide to use malloc instead  
 of new... If it has a constructor/destructor/... those will not be  
 called. It's just asking for trouble.
This is exactly what I was talking about.. you're expecting memory allocation to perform object construction.. it doesn't, never has, never will. :p

There is a reason malloc returns void*, you know: it doesn't return an initialised S*, it doesn't return an initialised anything, not even an initialised char*, it just returns a block of memory which you are expected to initialise yourself.

Separate the 2 concepts of memory allocation and object construction/initialisation in your head and you'll never make this mistake again :)

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 08 2013
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Mon, 08 Apr 2013 11:57:08 +0100
schrieb "Regan Heath" <regan netmail.co.nz>:

 On Mon, 08 Apr 2013 09:58:15 +0100, Manu <turkeyman gmail.com> wrote:
 I suspect Andrei for one knows this, and that's why the D
 containers are so... barely existing. The language is not yet ready
 to say with confidence
 how they should look.
That, and before you can design the containers you need a concrete allocator interface design. Actually, this is likely the same blocker for GC-free D as well. D should have a set of global allocator hooks. If it did, you could easily catch unexpected allocations in tight loops and realtime code. If it did, GC-free D would be trivial - just replace the default GC based allocator with a malloc/free one, or any other scheme you like.
IIRC stuff like closures and dynamic array appending rely on the GC and it wouldn't be trivial to change that to a normal alloc/free allocator.

A good & simple start would be a -vgc switch, similar to -vtls, which prints out all hidden memory allocations. Custom allocators are still important for the library (toString etc). Apart from that I would just stay away from language features which allocate. For example, instead of using the concatenation operators I'd just use something like appender (which should then support custom allocators).
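For the concatenation case that would look roughly like this (std.array.appender is real; the function and the reserve size are only illustrative). It still allocates through the GC, but in one controlled, amortised step rather than on every ~=:

import std.array : appender;

string joinWords(string[] words)
{
    auto buf = appender!string();
    buf.reserve(1024);          // one up-front allocation instead of one per append

    foreach (w; words)
    {
        buf.put(w);             // appends into the reserved buffer
        buf.put(' ');
    }
    return buf.data;
}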
Apr 08 2013
parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Mon, 08 Apr 2013 19:02:19 +0100, Johannes Pfau <nospam example.com>  
wrote:

 Am Mon, 08 Apr 2013 11:57:08 +0100
 schrieb "Regan Heath" <regan netmail.co.nz>:

 On Mon, 08 Apr 2013 09:58:15 +0100, Manu <turkeyman gmail.com> wrote:
 I suspect Andrei for one knows this, and that's why the D
 containers are so... barely existing. The language is not yet ready
 to say with confidence
 how they should look.
That, and before you can design the containers you need a concrete allocator interface design. Actually, this is likely the same blocker for GC-free D as well. D should have a set of global allocator hooks. If it did, you could easily catch unexpected allocations in tight loops and realtime code. If it did, GC-free D would be trivial - just replace the default GC based allocator with a malloc/free one, or any other scheme you like.
IIRC stuff like closures and dynamic array appending rely on a gc and it wouldn't be trivial to change that to a normal alloc/free allocator.
True, my comment actually contained 2 ideas, neither of which I explained properly :p

The first is allocator hooks underneath the GC. These would allow Manu to catch unexpected allocations, or provide the GC with memory from a pre-allocated block etc. In this case we'd probably want the allocator to control when the GC performed collection - to avoid it in realtime code. I think this rather simple idea would give a lot more control and flexibility to applications with realtime requirements. As Manu himself said (IIRC) he doesn't mind the GC, what he minds is the hidden allocations which themselves cause a delay, and I would imagine he would mind even more if a collection was triggered by one :p

The second is replacing the GC-based allocator entirely. Without a GC we would have problems catching the 'free' for objects like closures and arrays (as we'd have no reference to them in user code upon which to call 'destroy' or similar). So, we'd either have to replace the GC with reference counting /and/ have some compiler support for triggers on references leaving scope, for example. Or, we could require code to call 'destroy' on everything to be freed and force users to keep references to closures and arrays and 'destroy' them. There are likely cases where the user code never sees the closure reference; in this case the allocator itself could perform some object tracking. We could aid this by having an allocation 'type' allowing the allocator to track only closure or array allocations, for example, and ignore user allocations where it assumes the user code will call 'destroy' on the reference (as delete/free would be called in C/C++).
 A good & simple start would be a -vgc switch, similar to -vtls which
 prints out all hidden memory allocations. Custom allocators are still
 important for the library (toString etc). Apart from that I would just
 stay away from language features which allocate. For example instead of
 using the concatenate operators I'd just use something like appender
 (which should then support custom allocators).
Did you see Manu's problem case? I can't recall the method name, but it was not one which suggested it would allocate, and it didn't in the general case - but in a very specific case it did. As it stands, it's very hard to tell which language features allocate (without code inspection of the compiler and standard library), so they're hard to avoid. R
-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
Apr 09 2013
parent reply Johannes Pfau <nospam example.com> writes:
On Tue, 09 Apr 2013 11:29:09 +0100,
"Regan Heath" <regan netmail.co.nz> wrote:
 A good & simple start would be a -vgc switch, similar to -vtls which
 prints out all hidden memory allocations. Custom allocators are
 still important for the library (toString etc). Apart from that I
 would just stay away from language features which allocate. For
 example instead of using the concatenate operators I'd just use
 something like appender (which should then support custom
 allocators).
Did you see Manu's problem case? I can't recall the method name but it was not one which suggested it would allocate, and it didn't in the general case but in a very specific case it did. As it stands, it's very hard to tell which language features allocate (without code inspection of the compiler and standard library) so it's hard to avoid. R
toUpperInPlace IIRC? Yes, -vgc would not directly help here as toUpperInPlace is a library function. But I think this is a library / documentation bug:

1: We should explicitly document if a function is not supposed to allocate
2: If a function is called "inPlace" but can still allocate, it needs a huge red warning.

Those kinds of things can be solved partially with correctly documented functions. But hidden allocations caused by the language (closures, array literals) are imho more dangerous, and -vgc could help there. Without -vgc it's very difficult to verify whether some code could call into the GC.

Btw: implementing -vgc shouldn't be too difficult: we have to check all runtime hooks ( http://wiki.dlang.org/Runtime_Hooks ) for allocations, then check all places in dmd where calls to those hooks are emitted.
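To make the "hidden language allocation" point concrete, a small sketch (not actual -vgc output, just the kind of construct such a switch would have to report):

// Neither function mentions `new`, yet both allocate from the GC.
int delegate() makeCounter()
{
    int n = 0;
    return () => ++n;        // closure: makeCounter's frame moves to the GC heap
}

int[] firstThree()
{
    return [1, 2, 3];        // array literal: allocated via a druntime hook
}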
Apr 09 2013
parent reply Benjamin Thaut <code benjamin-thaut.de> writes:
Am 09.04.2013 14:00, schrieb Johannes Pfau:
 On Tue, 09 Apr 2013 11:29:09 +0100,
 "Regan Heath" <regan netmail.co.nz> wrote:
 A good & simple start would be a -vgc switch, similar to -vtls which
 prints out all hidden memory allocations. Custom allocators are
 still important for the library (toString etc). Apart from that I
 would just stay away from language features which allocate. For
 example instead of using the concatenate operators I'd just use
 something like appender (which should then support custom
 allocators).
Did you see Manu's problem case? I can't recall the method name but it was not one which suggested it would allocate, and it didn't in the general case but in a very specific case it did. As it stands, it's very hard to tell which language features allocate (without code inspection of the compiler and standard library) so it's hard to avoid. R
toUpperInPlace IIRC? Yes, -vgc would not directly help here as toUpperInPlace is a library function. But I think this is a library / documentation bug: 1: We should explicitly document if a function is not supposed to allocate 2: If a function is called "inPlace" but can still allocate, it needs a huge red warning. Those kind of things can be solved partially with correctly documented functions. But hidden allocations caused by the language (closures, array literals) are imho more dangerous and -vgc could help there. Without -vgc it's very difficult to verify if some code could call into the gc. Btw: implementing -vgc shouldn't be too difficult: We have to check all runtime hooks ( http://wiki.dlang.org/Runtime_Hooks ) for allocations, then check all places in dmd where calls to those hooks are emitted.
It's actually very easy to find hidden allocations: if you remove the GC entirely from the runtime, hidden allocations will cause linker errors.

Kind Regards
Benjamin Thaut
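For illustration, a few lines whose compiler lowerings call into druntime and, from there, into the GC; strip the GC out of the runtime and linking anything containing them fails with undefined references (the hook names in the comments are approximate and depend on the druntime version; see the Runtime_Hooks wiki page):

class Foo { int x; }

// None of these lines compiles to a plain malloc; each becomes a call
// to a druntime hook that allocates from the GC, so a GC-less runtime
// turns them into link-time errors rather than silent allocations.
void hiddenAllocations()
{
    auto f = new Foo();      // class allocation hook (_d_newclass)
    int[] a = [1, 2, 3];     // array literal hook
    a ~= 4;                  // array append hook
    auto b = a ~ a;          // array concatenation hook
}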
Apr 10 2013
parent reply Manu <turkeyman gmail.com> writes:
On 10 April 2013 23:45, Benjamin Thaut <code benjamin-thaut.de> wrote:

 On 09.04.2013 14:00, Johannes Pfau wrote:

  On Tue, 09 Apr 2013 11:29:09 +0100,
 "Regan Heath" <regan netmail.co.nz> wrote:

 A good & simple start would be a -vgc switch, similar to -vtls which
 prints out all hidden memory allocations. Custom allocators are
 still important for the library (toString etc). Apart from that I
 would just stay away from language features which allocate. For
 example instead of using the concatenate operators I'd just use
 something like appender (which should then support custom
 allocators).
Did you see Manu's problem case? I can't recall the method name but it was not one which suggested it would allocate, and it didn't in the general case but in a very specific case it did. As it stands, it's very hard to tell which language features allocate (without code inspection of the compiler and standard library) so it's hard to avoid. R
toUpperInPlace IIRC? Yes, -vgc would not directly help here as toUpperInPlace is a library function. But I think this is a library / documentation bug: 1: We should explicitly document if a function is not supposed to allocate 2: If a function is called "inPlace" but can still allocate, it needs a huge red warning. Those kind of things can be solved partially with correctly documented functions. But hidden allocations caused by the language (closures, array literals) are imho more dangerous and -vgc could help there. Without -vgc it's very difficult to verify if some code could call into the gc. Btw: implementing -vgc shouldn't be too difficult: We have to check all runtime hooks ( http://wiki.dlang.org/Runtime_Hooks ) for allocations, then check all places in dmd where calls to those hooks are emitted.
It's actually very easy to find hidden allocations. If you remove the GC entirely from the runtime, hidden allocations will cause linker errors.
Not a particularly user-friendly approach. I'd rather think of some proper tools/mechanisms to help in this area :)
Apr 10 2013
parent Johannes Pfau <nospam example.com> writes:
On Thu, 11 Apr 2013 00:11:05 +1000,
Manu <turkeyman gmail.com> wrote:

 Btw: implementing -vgc shouldn't be too difficult: We have to
 check all runtime hooks
 ( http://wiki.dlang.org/Runtime_Hooks )
 for allocations, then check all places in dmd where calls to those
 hooks are emitted.
It's actually very easy to find hidden allocations. If you remove the GC entirely from the runtime, hidden allocations will cause linker errors.
Not a particularly user-friendly approach. I'd rather think of some proper tools/mechanisms to help in this area :)
I like "test-vgc.d(9) vgc[CONCAT]: (a ~ b) causes gc allocation" a lot more than "undefined reference to gc_alloc" ;-) I posted a proof of concept pull request here: https://github.com/D-Programming-Language/dmd/pull/1886 It needs some work but maybe it will be ready for 2.063. Would be great if both of you could comment there as you probably have most experience in avoiding the GC :-)
Apr 11 2013
prev sibling parent reply Benjamin Thaut <code benjamin-thaut.de> writes:
On 06.04.2013 06:16, Adrian Mercieca wrote:
 Hi,

 Is it possible to switch off the GC entirely in D?
 Can the GC be switched off completely - including within phobos?

 What I am looking for is absolute control over memory management.
 I've done some tests with GC on and GC off and the performance with GC is
 not good enough for my requirements.

 Thanks.
 - Adrian.
It is possible, but it heavily cripples the language and requires modifications to druntime.

See: http://3d.benjamin-thaut.de/?p=20

Also see my GC free version of druntime:
https://github.com/Ingrater/druntime

My GC free version of phobos (heavily crippled):
https://github.com/Ingrater/phobos

And my little GC free "standard library":
https://github.com/Ingrater/thBase

It's quite fun doing it when you want to learn new things, but I would not recommend doing so in a real world project.

-- 
Kind Regards
Benjamin Thaut
Apr 10 2013
parent reply "ixid" <nuaccount gmail.com> writes:
 It is possible, but it heavily cripples the language and 
 requires modifications to druntime.

 See: http://3d.benjamin-thaut.de/?p=20

 Also see my GC free version of druntime:
 https://github.com/Ingrater/druntime

 My GC free version of phobos (heavily crippled):
 https://github.com/Ingrater/phobos

 And my little GC free "standard library":
 https://github.com/Ingrater/thBase

 It's quite fun doing it when you want to learn new things, but I 
 would not recommend doing so in a real world project.
Given what you have learned about GC-free D, how practical would it be to modify D's core to work without garbage collection, and then build libraries on top of this which make their GC use explicit? It seems to me that this is a level of modularity that should have been built in from the start.
Apr 10 2013
parent Benjamin Thaut <code benjamin-thaut.de> writes:
On 10.04.2013 18:05, ixid wrote:
 It is possible, but it heavily cripples the language and requires
 modifications to druntime.

 See: http://3d.benjamin-thaut.de/?p=20

 Also see my GC free version of druntime:
 https://github.com/Ingrater/druntime

 My GC free version of phobos (heavily crippled):
 https://github.com/Ingrater/phobos

 And my little GC free "standard library":
 https://github.com/Ingrater/thBase

 It's quite fun doing it when you want to learn new things, but I would
 not recommend doing so in a real world project.
Given what you have learned about GC-free D how practical would it be to modify D's core to work without garbage collection and then build libraries on top of this which make their GC use explicit? It seems to me that this is a level of modularity that should have been built in from the start.
For Druntime this is possible without many issues. For phobos it gets more complicated, especially for functions which have to allocate, like string processing functions and other things. In my experience the API of a library looks different if it uses a GC versus not using a GC. The advantage of a GC is that you can make the API more slim, easier to understand and "safe", because you know that you can rely on the GC.

Some parts of phobos are already written in a way that does not require a GC. E.g. std.random, std.traits, std.typetuple and others are usable right away.

The problem I see with such an approach is that everyone who is providing a library would basically have to maintain two versions of the library: one without a GC and one with a GC. But with some compiler support it would be perfectly doable, it would just be more work.

Kind Regards
Benjamin Thaut
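A made-up pair of signatures to illustrate that API difference (the function names are hypothetical, not from phobos):

import std.range : put;

// With a GC the function can simply return freshly allocated memory:
string fullName(string first, string last)
{
    return first ~ " " ~ last;      // hidden GC allocation
}

// Without a GC the caller supplies the destination instead, e.g. any
// output range (an Appender, a fixed buffer, a custom allocator's sink):
void fullNameInto(Sink)(ref Sink sink, string first, string last)
{
    put(sink, first);
    put(sink, ' ');
    put(sink, last);
}

// Usage with an Appender, for example:
//   auto app = appender!string();
//   fullNameInto(app, "John", "Doe");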
Apr 10 2013