digitalmars.D - new D2.0 + C++ language
- Weed (132/132) Mar 18 2009 Hi!
- bearophile (5/11) Mar 18 2009 No, thanks...
- Weed (6/21) Mar 18 2009 The proposal will be able support multiprocessing - for it provided a
- BCS (6/10) Mar 18 2009 Never delete anything?
- Weed (5/21) Mar 19 2009 Mmm
- naryl (3/8) Mar 19 2009 I wouldn't be so sure about CPU:
- Weed (5/14) Mar 19 2009 You should not compare benchmarks - they depend on the quality of the
- naryl (3/16) Mar 19 2009
- Weed (3/17) Mar 19 2009 I suggest that reference counting for -debug.
- Denis Koroskin (2/21) Mar 19 2009 Yeah, ref-count your objects in debug and let the memory leak in release...
- Weed (6/31) Mar 20 2009 Not leak - that may be a reference to non-existent object.
- BCS (12/35) Mar 19 2009 So do I.
- Weed (14/52) Mar 20 2009
- Christopher Wright (2/19) Mar 18 2009 You cannot alter the reference count of an immutable variable.
- Weed (2/24) Mar 19 2009 Why?
- Christopher Wright (5/28) Mar 19 2009 Because it's immutable!
- Weed (4/19) Mar 19 2009 Precisely. I wrote the cost for that: 1 dereferencing + inc/dec of count...
- Christopher Wright (7/26) Mar 19 2009 It's more expensive than dereferencing. If your const object points to
- Weed (12/37) Mar 20 2009 It is designed not so. There will be a hidden dereferencing:
- Christopher Wright (3/12) Mar 20 2009 Okay, a language level feature, or a wrapper struct. That would work. If...
- BCS (2/6) Mar 20 2009 Who deletes those structs and when?
- Weed (2/13) Mar 20 2009 When an object is deleted the struct also is deleted
- Craig Black (5/19) Mar 18 2009 Multiprocessing can only improve performance for tasks that can run in p...
- Sergey Gromov (6/37) Mar 18 2009 I think that the "shared" memory concept in D2 is introduced
- Robert Jacques (10/41) Mar 18 2009 *Sigh*, you do know people run cluster & multi-threaded Java apps all th...
- Weed (6/33) Mar 19 2009 Who?
- Robert Jacques (17/50) Mar 19 2009 *sigh* All memory allocation must make some kernel calls. D's GC makes
- Weed (10/46) Mar 19 2009 + Sometimes allocation and freeing of memory in an arbitrary
- BCS (4/8) Mar 19 2009 malloc is not a system call. malloc make systems calls (IIRC mmap) once ...
- Simen Kjaeraas (5/13) Mar 19 2009 Then use the stub GC or disable the GC, then re-enable it when
- Weed (6/25) Mar 20 2009
- Simen Kjaeraas (5/27) Mar 20 2009 If so, you have allocated a lot of things you shouldn't have, or otherwi...
- Weed (7/39) Mar 20 2009
- BCS (10/16) Mar 20 2009 Are you saying that you have a program with a time critical section that...
- BCS (4/21) Mar 20 2009 I can't think of a case where having the GC running would be a problem w...
- BCS (5/9) Mar 19 2009 This issue is in no way special to GC systems, IIRC malloc has no upper ...
- Christopher Wright (2/6) Mar 19 2009 So you are optimizing for the uncommon case?
- Weed (2/9) Mar 20 2009 GC is an attempt of optimizing for the uncommon case )
- Christopher Wright (15/24) Mar 20 2009 I don't think so. Programmers have more important things to do than
- Weed (12/38) Mar 20 2009 I do not agree. I am quite easy to give tracking the creation and
- BCS (10/39) Mar 20 2009 Small applications are NOT the normal case.
- Christopher Wright (13/17) Mar 20 2009 Libraries will often have no need for data structures with complex
- Rainer Deyke (6/11) Mar 20 2009 If you spend hundreds of milliseconds on garbage collection every ten
- BCS (6/18) Mar 20 2009 If you spend a few 0.1ths of a ms every 10 ms on reference counting, sma...
- Christopher Wright (16/26) Mar 20 2009 I was pulling numbers out of my ass. If I wanted to do a proper job, I
- BCS (10/18) Mar 20 2009 http://libsigsegv.sourceforge.net/
- Rainer Deyke (10/13) Mar 20 2009 GC is useless for resource management. RAII solves the resource
- Christopher Wright (11/26) Mar 21 2009 I believe Python is using reference counting with a garbage collector,
- Sergey Gromov (4/5) Mar 21 2009 I think this is an overstatement. It's only abstract write buffers
- Rainer Deyke (24/27) Mar 21 2009 OpenGL objects (textures/shader programs/display lists).
- Sergey Gromov (3/32) Mar 22 2009 Thanks for the explanation, it really helps to keep this picture in
- Craig Black (17/63) Mar 20 2009 I admit to knowing nothing about clusters, so my point does not apply to...
- Kagamin (2/5) Mar 18 2009 Garbage collection can be turned off already, you can get rid of it just...
- Kagamin (2/9) Mar 18 2009 ...well, you already has it with structure constructors...
- Weed (5/15) Mar 18 2009 Remember that we have already discussed this here several times, and
- Weed (3/5) Mar 18 2009 colorized example:
- Yigal Chripun (11/11) Mar 19 2009 what you suggest is C++ with better syntax, *NOT* a variant of D. for th...
- Weed (4/15) Mar 20 2009 No! Only because of the "value semantic" returns used pointers instead
- bearophile (21/22) Mar 20 2009 Thank you for the link, I did know only "A Modest Proposal: C++ Resyntax...
- Jarrett Billingsley (5/11) Mar 20 2009 bugs like if(a =3D b).
- bearophile (15/19) Mar 20 2009 Let's say D has a workaround to patch most of that C-syntax hole :-)
- Daniel Keep (7/31) Mar 21 2009 { int -> int } // function
- Weed (2/7) Mar 21 2009 at least to something like this idea? )
- Piotrek (5/13) Mar 21 2009 The idea could be ok but have you written a compiler or specification?
- Weed (9/22) Mar 21 2009 My experience in the creation of the compilers is reduced to half-read
- Piotrek (10/13) Mar 21 2009 OK. I tell you what I think. D is a well designed language. What you
- bearophile (6/8) Mar 21 2009 D is surely not a puzzle language :-)
- Piotrek (18/24) Mar 21 2009 Haha, I found good discussion on reddit
- bearophile (17/19) Mar 21 2009 K language seems worse to me, this is a full raytracer that saves in pgm...
- Christopher Wright (9/26) Mar 21 2009 I inferred from your original post that you had written such a compiler....
Hi!

I want to propose a dialect of the D2.0 language suitable for use wherever C/C++ is used today. The main goal is a language like D that follows C++'s "zero-overhead principle":

- it does not contain garbage collection;
- it allows extensive use of the stack for objects (as in C++);
- it uses the syntax and object hierarchy of D.

Code in this language is almost as dangerous as C++ code - that is the necessary cost of the performance gain.

A compiler for this language does not exist, and it is unlikely that I will be able to write one. In any case, I want to discuss the idea before thinking about a compiler. I will just give an example of its use, without describing the syntax in full, because I am not proposing anything new to anyone who remembers C++ and D. Ask questions!

/*
   Demonstration of a new dialect of the D language.
   It describes only what differs from D2.0.
   This language is compatible with the C ABI and (probably) the D ABI,
   but not with C++.
*/

/*
   Structures and classes are completely alike, except that structs have
   controlled alignment and lack polymorphism. Structs are POD and fully
   compatible with C structs.
*/
struct S
{
    int var0;
    int var1;
    int var2;

    // Struct constructors work exactly as in classes:
    this() { var1 = 5; }
}

interface I
{
    void incr();
}

// Structs are POD. They support inheritance without polymorphism,
// and they support interfaces too.
struct SD : S, I
{
    int var3;
    int var4;

    void incr() { ++var3; ++var4; }

    /*
       Struct constructors are similar to class constructors.
       Calling the base constructor super() is required.
    */
    this()
    {
        super();
        var4 = 8;
    }
}

class C : I
{
    int var;

    void incr() { ++var; }

    /*
       Instead of overloading the "=" operator for classes and structs,
       there is a constructor that, like the C++ copy constructor, accepts
       only an object of the same type in its parameters.

       It differs from D in that the copy constructor may modify the
       source object (for example, to copy objects linked into a linked
       list).

       Unlike the C++ constructor, it first makes a bitwise copy of the
       original object and then runs an additional postblit, exactly as in
       D2.0. This makes copying faster than in C++.

       About references: when compiling with the "-debug" option, the
       compiler builds the binary with reference counting. Walter Bright
       criticizes this approach here:
       http://www.digitalmars.com/d/2.0/faq.html#reference-counting
       But if the language has no GC, reference counting is a good way to
       make sure that the object a reference points to still exists. The
       cost - one extra pointer dereference plus a check of the counter -
       applies only when compiling with "-debug"!
    */
    this( ref C src )
    {
        var = src.var;
    }
}

class CD : C
{
    real var2;
    void dumb_method() {}
}

void func()
{
    /*
       Heap classes are addressed through a pointer. The "*" is needed to
       distinguish classes on the heap from classes on the stack, i.e.
       classes and structs are created exactly as in C++.
    */
    CD cd_stack;            // creates a class on the stack
    CD* cd_heap = new CD;   // creates a class on the heap; new returns
                            // a pointer (as in C++)
    C* c_heap = new C;
    C c_stack;

    // Copying objects (as in C++)
    cd_stack = *cd_heap;
    *cd_heap = cd_stack;

    /*
       Copying pointers to objects: "c_heap" now points to the "cd_heap"
       object, and the object "c_heap" previously pointed to is not freed
       (there is no GC and no smart-pointer template is used).
       This is a memory leak!
    */
    c_heap = cd_heap;

    /*
       "Slicing" demo: the parent part is copied out of the derived
       class, which has extra fields and methods. The "real var2" field
       does not exist in "c_stack" and will not be copied:
    */
    c_stack = *cd_heap;

    /*
       Attempt to place an object of type C into the derived object of
       type CD. The field "real var2" is not filled in from the C object,
       so it now contains garbage:
    */
    cd_stack = c_stack;
    cd_stack.var2;  // <- garbage data
}
Mar 18 2009
Weed:
> I want to offer the dialect of the language D2.0, suitable for use
> where are now used C/C++. Main goal of this is making language like D,
> but corresponding "zero-overhead principle" like C++: ... The code on
> this language almost as dangerous as a code on C++ - it is a necessary
> cost for increasing performance.

No, thanks... And regarding performance, eventually a lot of it will come from good use of multiprocessing, which in real-world programs may require pure functions and immutable data. D2 already has those, while C++ is less lucky.

Bye,
bearophile
Mar 18 2009
bearophile wrote:
> Weed:
>> I want to offer the dialect of the language D2.0, suitable for use
>> where are now used C/C++. [...]
> No, thanks... And regarding performance, eventually a lot of it will
> come from good use of multiprocessing,

The proposal will be able to support multiprocessing - for that it provides reference counting in the debug versions of binaries. If you know a better way for a language *without GC* to guarantee the existence of an object without overhead - I am ready to listen!

> which in real-world programs may require pure functions and immutable
> data.

I do not see any problems with this.

> D2 already has those, while C++ is less lucky.
Mar 18 2009
Reply to Weed,

> If you know the best way for language *without GC* guaranteeing the
> existence of an object without overhead - I have to listen!

Never delete anything?

One of the arguments for GC is that it might well have /less/ overhead than any other practical way of managing dynamic memory. Yes, you can be very careful in keeping track of pointers (not practical) or use smart pointers and such (might end up costing more than GC), but neither is particularly nice.
Mar 18 2009
BCS wrote:
> One of the arguments for GC is that it might well have /less/ overhead
> than any other practical way of managing dynamic memory.

Mmm. When I say "overhead" I mean the cost of execution, not the cost of programming.

> Yes you can be very careful in keeping track of pointers (not
> practical) or use smart pointers and such (might end up costing more
> than GC)

I do not agree: GC overspends CPU or memory. Typically both.
Mar 19 2009
Weed wrote:
> BCS wrote:
>> Yes you can be very careful in keeping track of pointers (not
>> practical) or use smart pointers and such (might end up costing more
>> than GC)
> I do not agree: GC overspends CPU or memory. Typically both.

I wouldn't be so sure about CPU:
http://shootout.alioth.debian.org/debian/benchmark.php?test=all&lang=gdc&lang2=gpp&box=1
Mar 19 2009
naryl wrote:
> I wouldn't be so sure about CPU:
> http://shootout.alioth.debian.org/debian/benchmark.php?test=all&lang=gdc&lang2=gpp&box=1

You should not compare benchmarks - they depend on the quality of the test code. What is important is that the language provides the opportunity to write a program efficiently.
Mar 19 2009
Weed wrote:
> You should not compare benchmarks - they depend on the quality of the
> test code.

Then find a way to prove that GC costs more CPU time than explicit memory management and/or reference counting.
Mar 19 2009
naryl wrote:
> Then find a way to prove that GC costs more CPU time than explicit
> memory management and/or reference counting.

I suggest reference counting for -debug only. Yes, it slows things down a bit - just like invariant{}, in{}, out(){} and assert().
Mar 19 2009
On Thu, 19 Mar 2009 19:54:10 +0300, Weed <resume755 mail.ru> wrote:

> I suggest reference counting for -debug only. Yes, it slows things
> down a bit - just like invariant{}, in{}, out(){} and assert().

Yeah, ref-count your objects in debug and let the memory leak in release!
Mar 19 2009
Denis Koroskin wrote:
> Yeah, ref-count your objects in debug and let the memory leak in
> release!

Not a leak - a reference to a non-existent object. By the same logic: the invariant{} construct does not reveal every problem with a class once the code is compiled for release. Should invariant{} be abandoned because of that? Yes, this language is as dangerous as C++.
Mar 20 2009
Hello Weed,

> When I say "overhead" I mean the cost of execution, not the cost of
> programming

So do I. I figure that unless it saves me more time than it costs /all/ the users, run-time cost trumps.

> I do not agree: GC overspends CPU or memory. Typically both.

Ditto naryl on CPU.

As for memory, unless the thing overspends into swap and does so very quickly (many pages per second), I don't think it matters. This is because most of the extra will not be part of the resident set, so the OS will start paging it out to keep some free pages. This is basically free until you have the CPU or HDD locked hard at 100%.

The other half is that the overhead of reference counting and the like will itself cost memory (you have to store the count somewhere) and might also have bad effects regarding cache misses.
Mar 19 2009
BCS wrote:
>> When I say "overhead" I mean the cost of execution, not the cost of
>> programming
> So do I. I figure that unless it saves me more time than it costs
> /all/ the users, run-time cost trumps.

This is a philosophical dispute. Good and frequently used code can be written once and then used for 10 years in 50 applications across 10000 installations. There, the cost of programming may be less than the cost of the end users' time and hardware.

> As for memory, unless the thing overspends into swap [...] the
> overhead of reference counting and the like will itself cost memory
> (you have to store the count somewhere) and might also have bad
> effects regarding cache misses.

Once again I repeat: forget about reference counting - it is only for debugging purposes. I think this addition should be switchable by a compiler option and not included in the resulting code.

Ref-counting is needed for multithreaded programs, where there is a risk of getting and using a reference to an object that another thread has already killed. This situation needs to be recognized and turned into a run-time error. It is an addition to the thread synchronization that D already provides.
Mar 20 2009
Reply to Weed,

> This is a philosophical dispute. Good and frequently used code can be
> written once and then used for 10 years in 50 applications across
> 10000 installations. There, the cost of programming may be less than
> the cost of the end users' time and hardware.

You are agreeing with me.

> Once again I repeat: forget about reference counting - it is only for
> debugging purposes. [...] Ref-counting is needed for multithreaded
> programs, where there is a risk of getting and using a reference to an
> object that another thread has already killed. This situation needs to
> be recognized and turned into a run-time error.

As I understand the concept, "reference counting" is a form of GC. It has nothing to do with threading. The point is to keep track of how many references, in any thread, there are to a dynamic resource, and to free it when there are no more. Normally (as in, if you are not doing things wrong) you never release/free/delete a reference-counted resource yourself, so it never even checks whether it has been deleted. Also, because the count is attached to the referenced resource, it can't perform that check: the count is deleted right along with the resource. For that concept (the only meaning of the term "reference counting" I know of), the idea of turning it off for non-debug builds is silly. Are you referring to something else?
Mar 20 2009
BCS wrote:
> As I understand the concept, "reference counting" is a form of GC. It
> has nothing to do with threading. The point is to keep track of how
> many references, in any thread, there are to a dynamic resource, and
> to free it when there are no more. [...] Are you referring to
> something else?

In the proposed language it is a way to detect the mistake of deleting an object in one thread while another thread still holds a reference to it. In other words, it is a way to provide proof that the reference refers to an object rather than to emptiness or garbage.
Mar 20 2009
Reply to Weed,

> In the proposed language it is a way to detect the mistake of deleting
> an object in one thread while another thread still holds a reference
> to it.

OK, then quit calling it reference counting, because everyone will think of something else when you call it that. Also, what you are proposing is not specific to threads. The problem of deleting things early is just as much a problem, and looks exactly the same, in non-threaded code.
Mar 20 2009
Weed wrote:
> The proposal will be able to support multiprocessing - for that it
> provides reference counting in the debug versions of binaries. If you
> know a better way for a language *without GC* to guarantee the
> existence of an object without overhead - I am ready to listen!

You cannot alter the reference count of an immutable variable.
Mar 18 2009
Christopher Wright wrote:
> You cannot alter the reference count of an immutable variable.

Why?
Mar 19 2009
Weed wrote:
>> You cannot alter the reference count of an immutable variable.
> Why?

Because it's immutable! Unless you're storing a dictionary of objects to reference counts somewhere, that is. Which would be hideously slow and a pain to use. Not that reference counting is fun.
Mar 19 2009
Christopher Wright wrote:
> Because it's immutable! Unless you're storing a dictionary of objects
> to reference counts somewhere, that is. Which would be hideously slow
> and a pain to use.

Precisely. I gave the cost of that: one dereference plus an increment/decrement of the counter. OK, reference counting can be made switchable by a compiler option. In any case it is not included in release code - it is only for -debug!
Mar 19 2009
Weed wrote:
> Precisely. I gave the cost of that: one dereference plus an
> increment/decrement of the counter.

It's more expensive than dereferencing. If your const object points to its reference count, then the reference count is also const, so you can't alter it. So the best you can possibly do is one hashtable lookup every time you alter the reference count of a non-mutable variable. That is a huge overhead, much more so than garbage collection.
Mar 19 2009
Christopher Wright wrote:
> It's more expensive than dereferencing. If your const object points to
> its reference count, then the reference count is also const, so you
> can't alter it.

It is not designed that way. There will be a hidden dereference:

    const ref Obj object  ->  struct {
                                  Obj* object;  ->  Obj object;
                                  int counter;
                              };

Such a struct is created for every object. "this" inside the object returns a pointer to the struct. Every access through a reference to the object goes through this dereference. Creating a reference to the object in a code block increments the counter, and destroying the reference automatically decrements it. Also, on entry into a try{} block the values of the reference counters are saved, so the block can exit correctly if an exception is thrown.

> So the best you can possibly do is one hashtable lookup every time you
> alter the reference count of a non-mutable variable. That is a huge
> overhead, much more so than garbage collection.
Mar 20 2009
Weed wrote:Christopher Wright :Okay, a language level feature, or a wrapper struct. That would work. If it's a library level feature, there's a problem of usage.It's more expensive than dereferencing. If your const object points to its reference count, then the reference count is also const, so you can't alter it.It is designed not so. There will be a hidden dereferencing: const ref Obj object -> struct{ Obj* object; -> Obj object; int counter; };
Mar 20 2009
Reply to Weed,It is designed not so. There will be a hidden dereferencing: const ref Obj object -> struct{ Obj* object; -> Obj object; int counter; };Who deletes those structs and when?
Mar 20 2009
BCS wrote:Reply to Weed,When an object is deleted, the struct is also deleted.It is designed not so. There will be a hidden dereferencing: const ref Obj object -> struct{ Obj* object; -> Obj object; int counter; };Who deletes those structs and when?
Mar 20 2009
bearophile Wrote:Weed:Multiprocessing can only improve performance for tasks that can run in parallel. So far, every attempt to do this with GC (that I know of) has ended up slower, not faster. Bottom line, if GC is the bottleneck, more CPU's won't help. For applications where GC performance is unacceptable, we either need a radically new way to do GC faster, rely less on the GC, or drop GC altogether. However, in D, we can't get rid of the GC altogether, since the compiler relies on it. But we can use explicit memory management where it makes sense to do so. -CraigI want to offer the dialect of the language D2.0, suitable for use where are now used C/C++. Main goal of this is making language like D, but corresponding "zero-overhead principle" like C++: ... The code on this language almost as dangerous as a code on C++ - it is a necessary cost for increasing performance.No, thanks... And regarding performance, eventually it will come a lot from a good usage of multiprocessing, that in real-world programs may need pure functions and immutable data. That D2 has already, while C++ is less lucky. Bye, bearophile
Mar 18 2009
Wed, 18 Mar 2009 13:48:55 -0400, Craig Black wrote:bearophile Wrote:I think that the "shared" memory concept in D2 is introduced specifically to improve multi-processing GC performance. There is going to be a thread-local GC for every thread allocating memory, and, since thread-local will be the default allocation strategy, most memory will be GCed without synchronizing with other threads.Weed:Multiprocessing can only improve performance for tasks that can run in parallel. So far, every attempt to do this with GC (that I know of) has ended up slower, not faster. Bottom line, if GC is the bottleneck, more CPU's won't help. For applications where GC performance is unacceptable, we either need a radically new way to do GC faster, rely less on the GC, or drop GC altogether. However, in D, we can't get rid of the GC altogether, since the compiler relies on it. But we can use explicit memory management where it makes sense to do so. -CraigI want to offer the dialect of the language D2.0, suitable for use where are now used C/C++. Main goal of this is making language like D, but corresponding "zero-overhead principle" like C++: ... The code on this language almost as dangerous as a code on C++ - it is a necessary cost for increasing performance.No, thanks... And regarding performance, eventually it will come a lot from a good usage of multiprocessing, that in real-world programs may need pure functions and immutable data. That D2 has already, while C++ is less lucky. Bye, bearophile
Mar 18 2009
On Wed, 18 Mar 2009 13:48:55 -0400, Craig Black <cblack ara.com> wrote:bearophile Wrote:*Sigh*, you do know people run cluster & multi-threaded Java apps all the time right? I'd recommend reading about concurrent GCs http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)#Stop-the-world_vs._incremental_vs._concurrent. By the way, traditional malloc has rather horrible multi-threaded performance as 1) it creates lots of kernel calls and 2) requires a global lock on access. Yes, there are several alternatives available now, but the same techniques work for enabling multi-threaded GCs. D's shared/local model should support thread local heaps, which would improve all of the above.Weed:Multiprocessing can only improve performance for tasks that can run in parallel. So far, every attempt to do this with GC (that I know of) has ended up slower, not faster. Bottom line, if GC is the bottleneck, more CPU's won't help. For applications where GC performance is unacceptable, we either need a radically new way to do GC faster, rely less on the GC, or drop GC altogether. However, in D, we can't get rid of the GC altogether, since the compiler relies on it. But we can use explicit memory management where it makes sense to do so. -CraigI want to offer the dialect of the language D2.0, suitable for use where are now used C/C++. Main goal of this is making language like D, but corresponding "zero-overhead principle" like C++: ... The code on this language almost as dangerous as a code on C++ - it is a necessary cost for increasing performance.No, thanks... And regarding performance, eventually it will come a lot from a good usage of multiprocessing, that in real-world programs may need pure functions and immutable data. That D2 has already, while C++ is less lucky. Bye, bearophile
Mar 18 2009
Robert Jacques wrote:D2.0 with GC also creates lots of kernel calls!Multiprocessing can only improve performance for tasks that can run in parallel. So far, every attempt to do this with GC (that I know of) has ended up slower, not faster. Bottom line, if GC is the bottleneck, more CPU's won't help. For applications where GC performance is unacceptable, we either need a radically new way to do GC faster, rely less on the GC, or drop GC altogether. However, in D, we can't get rid of the GC altogether, since the compiler relies on it. But we can use explicit memory management where it makes sense to do so. -Craig*Sigh*, you do know people run cluster & multi-threaded Java apps all the time right? I'd recommend reading about concurrent GCs http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)#Stop-the-world_vs._incremental_vs._concurrent. By the way, traditional malloc has rather horrible multi-threaded performance as 1) it creates lots of kernel calls and 2) requires a global lock on access.Who?Yes, there are several alternatives available now, but the same techniques work for enabling multi-threaded GCs. D's shared/local model should support thread local heaps, which would improve all of the above.That does not prevent pre-creating objects or reserving memory for them in advance. (This is what the GC does, but a programmer would do it better.)
Mar 19 2009
On Thu, 19 Mar 2009 07:32:18 -0400, Weed <resume755 mail.ru> wrote:Robert Jacques writes:*sigh* All memory allocation must make some kernel calls. D's GC makes fewer calls than a traditional malloc. Actually, modern malloc replacements imitate the way GCs allocate memory since it's a lot faster. (Intel's threading building blocks mentions this as part of its marketing and performance numbers, so modern mallocs are probably not that common)D2.0 with GC also creates lots of kernel calls!Multiprocessing can only improve performance for tasks that can run in parallel. So far, every attempt to do this with GC (that I know of) has ended up slower, not faster. Bottom line, if GC is the bottleneck, more CPU's won't help. For applications where GC performance is unacceptable, we either need a radically new way to do GC faster, rely less on the GC, or drop GC altogether. However, in D, we can't get rid of the GC altogether, since the compiler relies on it. But we can use explicit memory management where it makes sense to do so. -Craig*Sigh*, you do know people run cluster & multi-threaded Java apps all the time right? I'd recommend reading about concurrent GCs http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)#Stop-the-world_vs._incremental_vs._concurrent. By the way, traditional malloc has rather horrible multi-threaded performance as 1) it creates lots of kernel calls Traditional malloc requires taking a global lock (and as point 1, often a kernel lock. Again, fixing this issue is one of Intel's TBB's marketing/performance points)and 2) requires a global lock on access.Who?I think the point you're trying to make is that a GC is more memory intensive. Actually, since fast modern mallocs and GC share the same underlying allocation techniques, they have about the same memory usage, etc. 
Of course, a traditional malloc with aggressive manual control can often return memory to the kernel in a timely manner, so a program's memory allocation better tracks actual usage as opposed to the maximum. Doing so is very performance intensive and GCs can return memory to the system too (Tango's does if I remember correctly).Yes, there are several alternatives available now, but the same techniques work for enabling multi-threaded GCs. D's shared/local model should support thread local heaps, which would improve all of the above.It does not prevent pre-create the objects, or to reserve memory for them in advance. (This is what makes the GC, but a programmer would do it better)
Mar 19 2009
Robert Jacques wrote:*sigh* All memory allocation must make some kernel calls. D's GC makes fewer calls than a traditional malloc. Actually, modern malloc replacements imitate the way GCs allocate memory since it's a lot faster. (Intel's threading building blocks mentions this as part of its marketing and performance numbers, so modern mallocs are probably not that common)+ Sometimes allocation and freeing of memory at an arbitrary, unpredictable time is unacceptable (in game development or realtime software, for example; it has been discussed there a hundred million times, I guess).Traditional malloc requires taking a global lock (and as point 1, often a kernel lock. Again, fixing this issue is one of Intel's TBB's marketing/performance points)and 2) requires a global lock on access.Who?I think the point you're trying to make is that a GC is more memory intensive.Yes, there are several alternatives available now, but the same techniques work for enabling multi-threaded GCs. D's shared/local model should support thread local heaps, which would improve all of the above.It does not prevent pre-create the objects, or to reserve memory for them in advance. (This is what makes the GC, but a programmer would do it better)Actually, since fast modern mallocs and GC share the same underlying allocation techniques, they have about the same memory usage, etc. Of course, a traditional malloc with aggressive manual control can often return memory to the kernel in a timely manner, so a program's memory allocation better tracks actual usage as opposed to the maximum. Doing so is very performance intensive and GCs can return memory to the system too (Tango's does if I remember correctly).I think it works like this: at runtime, malloc is controlled by the OS, and so it is the OS that optimizes memory allocation for programs; the OS has more facilities to do this. A GC would be just an extra layer there. A language that does not contain a GC (or contains it optionally) is needed.
Many C++ programmers avoid D only because of this.
Mar 19 2009
Reply to Weed,I think so: during the performance of malloc is controlled by the OS. And it is so does the optimization of memory allocation for programs. And OS has more facilities to do this. GC there will be just extra layer.malloc is not a system call. malloc makes system calls (IIRC mmap) once in a while to ask for more memory to be mapped in, but I think that D's GC also works that way.
Mar 19 2009
Weed <resume755 mail.ru> wrote:Then use the stub GC or disable the GC, then re-enable it when you have the time to run a sweep (yes, you can).I think the point you're trying to make is that a GC is more memory intensive.+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)A need language that does not contain a GC (or contains optional). Many C++ programmers do not affect the D only because of this.While GC in D is not optional, it can be stubbed out or disabled, and malloc/free used in its place. What more is it you ask for?
Mar 19 2009
Simen Kjaeraas writes:Weed <resume755 mail.ru> wrote:Then memory will be overrunThen use the stub GC or disable the GC, then re-enable it when you have the time to run a sweep (yes, you can).I think the point you're trying to make is that a GC is more memory intensive.+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)Then some parts of the language will stop working (dynamic arrays, and possibly delegates)A need language that does not contain a GC (or contains optional). Many C++ programmers do not affect the D only because of this.While GC in D is not optional, it can be stubbed out or disabled,and malloc/free used in its place. What more is it you ask for?I need an optional GC and complete freedom to use the stack.
Mar 20 2009
Weed <resume755 mail.ru> wrote:Simen Kjaeraas writes:If so, you have allocated a lot of things you shouldn't have, or otherwise would have the same problem using manual allocation.Weed <resume755 mail.ru> wrote:Then a memory overrunThen use the stub GC or disable the GC, then re-enable it when you have the time to run a sweep (yes, you can).I think the point you're trying to make is that a GC is more memory intensive.+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)Yes. So don't use those parts, or disable the GC and enable it when you have the time.Then some part of the language will stop working (dynamic arrays, and possibly delegates)A need language that does not contain a GC (or contains optional). Many C++ programmers do not affect the D only because of this.While GC in D is not optional, it can be stubbed out or disabled,
Mar 20 2009
Simen Kjaeraas writes:Weed <resume755 mail.ru> wrote:No; as far as I know, some parts of the language do not allow manual memory release. (See below)Simen Kjaeraas writes:If so, you have allocated a lot of things you shouldn't have, or otherwise would have the same problem using manual allocation.Weed <resume755 mail.ru> wrote:Then a memory overrunThen use the stub GC or disable the GC, then re-enable it when you have the time to run a sweep (yes, you can).I think the point you're trying to make is that a GC is more memory intensive.+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)In the real world that is not possible without blocking those parts at the compiler level (via a compiler option).Yes. So don't use those parts,Then some part of the language will stop working (dynamic arrays, and possibly delegates)A need language that does not contain a GC (or contains optional). Many C++ programmers do not affect the D only because of this.While GC in D is not optional, it can be stubbed out or disabled,or disable the GC and enable it when you have the time.Again, the memory will be overrun)
Mar 20 2009
Reply to Weed,Simen Kjaeraas writes:Are you saying that you have a program with a time critical section that allocates 100s of MB of memory? If so, you have other problems to fix. If that is not the case, then disable the GC, run your critical/RT section allocating a few kB/MB, and when you exit that section re-enable the GC and clean up. This won't create a memory overrun unless you allocate huge amounts of memory or forget to re-enable the GC. Short version: the only places I can think of where having the GC run would cause problems either can't be allowed to run for very long or hardly allocate RAM at all. To argue this point you will have to give a specific use case.or disable the GC and enable it when you have the time.Again, the memory will be overrun)
Mar 20 2009
Reply to Weed,Simen Kjaeraas writes:I can't think of a case where having the GC running would be a problem where allocating memory at all would not (malloc/new/most any allocator is NOT cheap)Weed <resume755 mail.ru> wrote:Then a memory overrunThen use the stub GC or disable the GC, then re-enable it when you have the time to run a sweep (yes, you can).I think the point you're trying to make is that a GC is more memory intensive.+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)
Mar 20 2009
Reply to Weed,+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)This issue is in no way special to GC systems; IIRC malloc has no upper limit on its run time. Yes GC has some down sides, yes non GC has some down sides. Take your pick or use a language that lets you do both (like D).
Mar 19 2009
Weed wrote:+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)So you are optimizing for the uncommon case?
Mar 19 2009
Christopher Wright wrote:Weed wrote:GC is an attempt at optimizing for the uncommon case )+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)So you are optimizing for the uncommon case?
Mar 20 2009
Weed wrote:Christopher Wright :I don't think so. Programmers have more important things to do than write memory management systems. My boss would not be happy if I produced an application that leaked memory at a prodigious rate, and he would not be happy if I spent much time at all on memory management. With the application I develop at work, we cache some things. These would have to be reference counted or deleted and recomputed every time. Reference counting is a lot of tedious developer effort. Recomputing is rather expensive. Deleting requires tedious developer effort and determining ownership of everything. This costs time and solves no problems for the customers. And the best manual memory management that I am likely to write would not be faster than a good garbage collector. What sort of applications do you develop? Have you used a garbage collector in a large application?Weed wrote:GC is an attempt of optimizing for the uncommon case )+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)So you are optimizing for the uncommon case?
Mar 20 2009
Christopher Wright wrote:You should use a language with GC in that case.I don't think so. Programmers have more important things to do than write memory management systems. My boss would not be happy if I produced an application that leaked memory at a prodigious rate, and he would not be happy if I spent much time at all on memory management.GC is an attempt of optimizing for the uncommon case )+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)So you are optimizing for the uncommon case?With the application I develop at work, we cache some things. These would have to be reference counted or deleted and recomputed every time. Reference counting is a lot of tedious developer effort. Recomputing is rather expensive. Deleting requires tedious developer effort and determining ownership of everything. This costs time and solves no problems for the customers.I do not agree. For me it is quite easy to track the creation and deletion of objects on the stack and on the heap; I do not see a problem there. Although there is an alternative - C++, but not D. And you do not need reference counting for all the objects in the program; normally there are far fewer objects that need it than objects that do not. (Therefore, I propose to extend stack usage for storing and passing objects.) Unfortunately, I have not yet thought of another way to manage memoryAnd the best manual memory management that I am likely to write would not be faster than a good garbage collector. What sort of applications do you develop?games, image processingHave you used a garbage collector in a large application?I do not write really large applications
Mar 20 2009
Reply to Weed,Christopher Wright writes:that type of case IS the normal caseYou should use language with GC in this case.I don't think so. Programmers have more important things to do than write memory management systems. My boss would not be happy if I produced an application that leaked memory at a prodigious rate, and he would not be happy if I spent much time at all on memory management.GC is an attempt of optimizing for the uncommon case )+ Sometimes allocation and freeing of memory in an arbitrary unpredictable time unacceptable. (in game development or realtime software, for example. One hundred million times discussed about it there, I guess)So you are optimizing for the uncommon case?Small applications are NOT the normal case. Trying to design a language for large apps (one of the things D is targeted at) based on what works in small apps is like saying "I know what works for bicycles so now I'll design a railroad train". Yes, there are programs where manual memory management is easy, but they are generally considered to be few and far between in real life. Many of them really have no need for memory management at all as they die before they would run out of RAM anyway.And the best manual memory management that I am likely to write would not be faster than a good garbage collector. What sort of applications do you develop?games, images processingHave you used a garbage collector in a large application?I do not write really large applications
Mar 20 2009
Weed wrote:Christopher Wright :Libraries will often have no need for data structures with complex lifestyles. There are exceptions, of course, but that's what I have generally found to be the case. For the bulk of image processing, you can just throw the memory management problem to the end user. Games have strict performance requirements that a stop-the-world type of garbage collector violates. Specifically, a full collection would cause an undue delay of hundreds of milliseconds on occasion. If this happens once every ten seconds, your game has performance problems. This is not true of pretty much any other type of application. Games usually have scripting languages that might make use of a garbage collector, though. And there is research into realtime garbage collectors that would be suitable for use in games.What sort of applications do you develop?games, images processing
Mar 20 2009
Christopher Wright wrote:Games have strict performance requirements that a stop-the-world type of garbage collector violates. Specifically, a full collection would cause an undue delay of hundreds of milliseconds on occasion. If this happens once every ten seconds, your game has performance problems. This is not true of pretty much any other type of application.If you spend hundreds of milliseconds on garbage collection every ten seconds, you spend multiple percent of your total execution time on garbage collection. I wouldn't consider that acceptable anywhere. -- Rainer Deyke - rainerd eldwood.com
Mar 20 2009
Hello Rainer,Christopher Wright wrote:If you spend a few 0.1ths of a ms every 10 ms on reference counting, smart pointers or passing around other meta data, it's just as bad. I have no data, but even worse would be, as I expect is true, if non-GC apps end up being architected differently to make memory management easier (if so, you can bet money it won't be faster as a result).Games have strict performance requirements that a stop-the-world type of garbage collector violates. Specifically, a full collection would cause an undue delay of hundreds of milliseconds on occasion. If this happens once every ten seconds, your game has performance problems. This is not true of pretty much any other type of application.If you spend hundreds of milliseconds on garbage collection every ten second, you spend multiple percent of your total execution time on garbage collection. I wouldn't consider that acceptable anywhere.
Mar 20 2009
Rainer Deyke wrote:Christopher Wright wrote:I was pulling numbers out of my ass. If I wanted to do a proper job, I would have built a large application and modified druntime to get proper timings. 0.1 seconds out of every ten is a small amount to pay for the benefits of garbage collection in most situations. (Most GUI applications are idle most of the time anyway.) I did, however, specifically make the point that it's unacceptable in some situations. These situations may be your situations. Even so, the garbage collector might not be that slow. (And for what it's doing, that seems pretty fast to me.) It would be cool if the GC could watch for what pages have been written to since the last collection and only bother looking through them. That would require some additional accounting. On Windows, there's a system call GetWriteWatch that works in that regard, but on Linux, the only solution I've seen is marking the memory readonly and trapping SIGSEGV. That would be pretty expensive.Games have strict performance requirements that a stop-the-world type of garbage collector violates. Specifically, a full collection would cause an undue delay of hundreds of milliseconds on occasion. If this happens once every ten seconds, your game has performance problems. This is not true of pretty much any other type of application.If you spend hundreds of milliseconds on garbage collection every ten second, you spend multiple percent of your total execution time on garbage collection. I wouldn't consider that acceptable anywhere.
Mar 20 2009
Hello Christopher,It would be cool if the GC could watch for what pages have been written to since the last collection and only bother looking through them. That would require some additional accounting. On Windows, there's a system call GetWriteWatch that works in that regard, but on Linux, the only solution I've seen is marking the memory readonly and trapping SIGSEGV. That would be pretty expensive.http://libsigsegv.sourceforge.net/ """ What is libsigsegv? This is a library for handling page faults in user mode. A page fault occurs when a program tries to access to a region of memory that is currently not available. Catching and handling a page fault is a useful technique for implementing: ... * generational garbage collectors, """
Mar 20 2009
Christopher Wright wrote:I was pulling numbers out of my ass.That's what I assumed. I'm a game developer. I use GC.0.1 seconds out of every ten is a small amount to pay for the benefits of garbage collection in most situations.GC is useless for resource management. RAII solves the resource management problem, in C++ and D2. GC is a performance optimization on top of that. If the GC isn't faster than simple reference counting, then it serves no purpose, because you could use RAII with reference counting for the same effect. (No, I don't consider circular references a problem worth discussing.) -- Rainer Deyke - rainerd eldwood.com
Mar 20 2009
Rainer Deyke wrote:Christopher Wright wrote:I believe Python is using reference counting with a garbage collector, with the collector intended to solve the circular reference problem, so apparently Guido van Rossum thinks it's a problem worth discussing. And my opinion of reference counting is, if it requires no programmer intervention, it's just another garbage collector. Reference counting would probably be a win overall if a reference count going to zero would only optionally trigger a collection -- you're eliminating the 'mark' out of 'mark and sweep'. Though I would still demand a full mark-and-sweep, just not as often. Nontrivial data structures nearly always have circular references.I was pulling numbers out of my ass.That's what I assumed. I'm a game developer. I use GC.0.1 seconds out of every ten is a small amount to pay for the benefits of garbage collection in most situations.GC is useless for resource management. RAII solves the resource management problem, in C++ and D2. GC is a performance optimization on top of that. If the GC isn't faster than simple reference counting, then it serves no purpose, because you could use RAII with reference counting for the same effect. (No, I don't consider circular references a problem worth discussing.)
Mar 21 2009
Sat, 21 Mar 2009 00:59:22 -0600, Rainer Deyke wrote:GC is useless for resource management.I think this is an overstatement. It's only abstract write buffers where GC really doesn't work, like std.stream.BufferedFile. In any other resource management case I can think of GC works fine.
Mar 21 2009
Sergey Gromov wrote:I think this is an overstatement. It's only abstract write buffers where GC really doesn't work, like std.stream.BufferedFile. In any other resource management case I can think of GC works fine.OpenGL objects (textures/shader programs/display lists). SDL surfaces. Hardware sound buffers. Mutex locks. File handles. Any object with a non-trivial destructor. Any object that contains or manages one of the above. Many of the above need to be released in a timely manner. For example, it is a serious error to free a SDL surface after closing the SDL video subsystem, and closing the SDL video subsystem is the only way to close the application window under SDL. Non-deterministic garbage collection cannot work. Others don't strictly need to be released immediately after use, but should still be released as soon as reasonably possible to prevent resource hogging. The GC triggers when the program is low on system memory, not when the program is low on texture memory. By my estimate, in my current project (rewritten in C++ after abandoning D due to its poor resource management), about half of the classes manage resources (directly or indirectly) that need to be released in a timely manner. The other 50% does not need RAII, but also wouldn't benefit from GC in any area other than performance. -- Rainer Deyke - rainerd eldwood.com
Mar 21 2009
Sat, 21 Mar 2009 20:16:07 -0600, Rainer Deyke wrote:Sergey Gromov wrote:Thanks for the explanation, it really helps to keep this picture in mind.I think this is an overstatement. It's only abstract write buffers where GC really doesn't work, like std.stream.BufferedFile. In any other resource management case I can think of GC works fine.OpenGL objects (textures/shader programs/display lists). SDL surfaces. Hardware sound buffers. Mutex locks. File handles. Any object with a non-trivial destructor. Any object that contains or manages one of the above. Many of the above need to be released in a timely manner. For example, it is a serious error to free a SDL surface after closing the SDL video subsystem, and closing the SDL video subsystem is the only way to close the application window under SDL. Non-deterministic garbage collection cannot work. Others don't strictly need to be released immediately after use, but should still be released as soon as reasonably possible to prevent resource hogging. The GC triggers when the program is low on system memory, not when the program is low on texture memory. By my estimate, in my current project (rewritten in C++ after abandoning D due to its poor resource management), about half of the classes manage resources (directly or indirectly) that need to be released in a timely manner. The other 50% does not need RAII, but also wouldn't benefit from GC in any area other than performance.
Mar 22 2009
"Robert Jacques" <sandford jhu.edu> wrote in message news:op.uq0ng1we26stm6 sandford.myhome.westell.com... On Wed, 18 Mar 2009 13:48:55 -0400, Craig Black <cblack ara.com> wrote: I admit to knowing nothing about clusters, so my point does not apply to them. Also note that I didn't say GC was not useful. I said GC can be a bottleneck. If it is a bottleneck (on a single computer), throwing more CPU's at it doesn't help. Why? The big performance problem with GC is with large applications that allocate a lot of memory. In these apps, modern GC's are constantly causing page faults because they are touching too much memory. I look forward to the day where all the GC problems are solved, and I believe it will come. It would be really nice to have a faster GC in D. However, I don't see how each processor working on a separate heap will solve the problem of the GC causing page faults. But maybe I missed something. BTW, I don't use traditional malloc. I use nedmalloc and the performance is quite good. -Craig

bearophile Wrote: *Sigh*, you do know people run cluster & multi-threaded Java apps all the time, right? I'd recommend reading about concurrent GCs: http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)#Stop-the-world_vs._incremental_vs._concurrent. By the way, traditional malloc has rather horrible multi-threaded performance as 1) it creates lots of kernel calls and 2) requires a global lock on access. Yes, there are several alternatives available now, but the same techniques work for enabling multi-threaded GCs. D's shared/local model should support thread-local heaps, which would improve all of the above.

Weed: Multiprocessing can only improve performance for tasks that can run in parallel. So far, every attempt to do this with GC (that I know of) has ended up slower, not faster. Bottom line, if GC is the bottleneck, more CPU's won't help.
For applications where GC performance is unacceptable, we either need a radically new way to do GC faster, rely less on the GC, or drop GC altogether. However, in D, we can't get rid of the GC altogether, since the compiler relies on it. But we can use explicit memory management where it makes sense to do so. -Craig

I want to offer the dialect of the language D2.0, suitable for use where C/C++ are now used. Main goal of this is making a language like D, but corresponding to the "zero-overhead principle" like C++: ... The code in this language is almost as dangerous as code in C++ - it is a necessary cost for increasing performance.

No, thanks... And regarding performance, eventually it will come a lot from good usage of multiprocessing, which in real-world programs may need pure functions and immutable data. That D2 already has, while C++ is less lucky. Bye, bearophile
Mar 20 2009
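Craig's "explicit memory management where it makes sense" can take the form of region (arena) allocation. The following is a minimal bump-allocator sketch of our own (not nedmalloc's actual design): objects are carved out of one pre-allocated block and released all at once, avoiding both GC scans and per-object free() calls.

```cpp
#include <cstddef>
#include <vector>

// Minimal arena: a cursor bumped through one pre-allocated buffer.
class Arena {
    std::vector<std::byte> buf_;
    std::size_t used_ = 0;
public:
    explicit Arena(std::size_t bytes) : buf_(bytes) {}

    // Round the cursor up to the requested alignment and hand out raw storage.
    void* allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used_ + align - 1) & ~(align - 1);
        if (p + n > buf_.size()) return nullptr;  // out of space
        used_ = p + n;
        return buf_.data() + p;
    }

    void reset() { used_ = 0; }  // "frees" everything in O(1)
    std::size_t used() const { return used_; }
};
```

The trade-off matches the thread's theme: allocation is a couple of arithmetic operations and deallocation is bulk-only, so this only fits phased workloads.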
Weed Wrote: - It does not contain garbage collection and - allows active use of the stack for objects (as in C++) - It uses syntax and a tree of objects taken from D

Garbage collection can be turned off already; you can get rid of it just by a minor modification of druntime. Stack allocation is also a minor modification to the compiler; you just need to add syntactical support for it.
Mar 18 2009
Kagamin Wrote: Weed Wrote: - It does not contain garbage collection and - allows active use of the stack for objects (as in C++) - It uses syntax and a tree of objects taken from D ... you just need to add syntactical support for it. ...well, you already have it with struct constructors...
Mar 18 2009
Kagamin: Kagamin Wrote: Weed Wrote: - It does not contain garbage collection and - allows active use of the stack for objects (as in C++) - It uses syntax and a tree of objects taken from D ... you just need to add syntactical support for it. ...well, you already have it with struct constructors... Yes. Remember that we have already discussed this here several times, and came to the conclusion (?) that emulation of "value semantics" by structs is unreasonably difficult.
Mar 18 2009
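For readers wondering what makes emulating value semantics with structs so hard, one concrete obstacle (an illustration of ours, not code from the thread) is the classic C++ slicing problem: copying a polymorphic object through its base type silently discards the derived part. D sidesteps this by making classes reference types, which is exactly the design point Weed's dialect would have to confront.

```cpp
#include <string>

struct Base {
    virtual ~Base() = default;
    virtual std::string name() const { return "Base"; }
};

struct Derived : Base {
    std::string name() const override { return "Derived"; }
};

// Pass-by-value copies into a plain Base: the override is lost (slicing).
std::string by_value(Base b) { return b.name(); }

// Pass-by-reference keeps the dynamic type intact.
std::string by_ref(const Base& b) { return b.name(); }
```

With a `Derived` argument, `by_value` reports "Base" while `by_ref` reports "Derived"; a value-semantics dialect needs extra machinery (cloning, discriminated storage) to avoid this.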
Weed: Hi! Colorized example: http://paste.dprogramming.com/dpd6j5co
Mar 18 2009
What you suggest is C++ with better syntax, *NOT* a variant of D. For that, look at: http://en.wikipedia.org/wiki/Significantly_Prettier_and_Easier_C%2B%2B_Syntax C++ has the wrong semantics, which D fixes. No sane person who moved to D would ever want to go back to C++ and its huge pile of issues. C++ implements only very basic mechanisms for OOP, and even that is done poorly. Also, your point of view about performance and GC is very much outdated and completely wrong. I will not go into GC implementation details since others did that already. All I'll say is that C++ will eventually get GC too. It was planned to be added in C++0x (the new standard that's planned for 2009) but was postponed because of lack of time and tight deadlines. To quote Wikipedia: <quote> Transparent garbage collection C++0x will not feature transparent garbage collection directly. Instead, the C++0x standard will include features that will make it easier to implement garbage collection in C++. Full garbage collection support has been remanded to a later version of the standard or a Technical Report. </quote> In fact, if you look at the new C++ standard you'll see that they pretty much copied (poorly - see C++ delegates) most current D features. -- Yigal
Mar 19 2009
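Yigal's aside about C++ copying D's delegates "poorly" can be made concrete. In D, `&obj.method` yields a delegate (object pointer plus function pointer) out of the box; in C++ one assembles the equivalent by hand from std::function and a capturing lambda. `make_delegate` below is our own hypothetical helper name, not a standard API.

```cpp
#include <functional>

struct Counter {
    int n = 0;
    int add(int x) { return n += x; }
};

// Hand-rolled "delegate": binds the object by reference into a callable.
std::function<int(int)> make_delegate(Counter& c) {
    return [&c](int x) { return c.add(x); };
}
```

Calling the result mutates the bound object, just as a D delegate bound to `&c.add` would; the difference is that C++ needs the wrapper (with its type-erasure overhead) while D builds the pair into the language.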
Yigal Chripun: what you suggest is C++ with better syntax, *NOT* a variant of D. for that look at: http://en.wikipedia.org/wiki/Significantly_Prettier_and_Easier_C%2B%2B_Syntax No! It is only because of the "value semantics" that it uses pointers instead of references for pointing to objects.C++ has the wrong semantics which D fixes. No sane person that moved to D would ever want to go back to C++ and its huge pile of issues. C++ implements only very basic mechanisms for OOP and even that is done poorly. Also, Your point of view about performance and GC is very much outdated and completely wrong. I will not go into GC implementation details since others did that already. All I'll say is that C++ will eventually get GC too. it was planned to be added in C++0x (the new standard that's planned for 2009) but was postponed because of lack of time and tight deadlines. to quote Wikipedia: <qoute> Transparent garbage collection C++0x will not feature transparent garbage collection directly. Instead, the C++0x standard will include features that will make it easier to implement garbage collection in C++. Only optional
Mar 20 2009
Yigal Chripun: what you suggest is C++ with better syntax, *NOT* a variant of D. for that look at: http://en.wikipedia.org/wiki/Significantly_Prettier_and_Easier_C%2B%2B_Syntax Thank you for the link; I knew only "A Modest Proposal: C++ Resyntaxed". In some situations that SPECS syntax is more readable than D syntax. Function having an int argument and returning pointer to float: (int -> ^float) Pointer to function having an int and float argument returning nothing: ^(int, float -> void) Note that SPECS uses ^ := and = as in Pascal. Pointer syntax of Pascal is better, and the := and = often avoid C bugs like if(a = b). But probably D needs to tell apart functions and delegates too, so that syntax isn't enough. And I think now it's not easy to change the meaning of ^ in D :-) So a possibility (keeping the usual * pointer syntax): {int => int} Delegate: {{int => int}} That can also offer a syntax for anonymous functions/delegates: {int x => x*x} {x => x*x} {{x => x*x}} Bye, bearophile
Mar 20 2009
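The declaration-readability gap bearophile describes has a direct C++ analogue (illustrative names of ours): a raw C-style function-pointer declaration reads inside-out, while a type alias reads left to right, much like the SPECS form `(int -> ^float)`.

```cpp
// A function matching the SPECS example: takes an int, returns float*.
static float storage = 2.5f;
float* get_float(int) { return &storage; }

// C style: the declarator must be parsed inside-out.
float* (*raw_ptr)(int) = get_float;

// Alias style: name first, shape after -- closer to (int -> ^float).
using IntToFloatPtr = float* (*)(int);
IntToFloatPtr aliased = get_float;
```

Both spellings denote the same pointer type; only the notation differs, which is exactly the argument being made about SPECS versus C syntax.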
On Fri, Mar 20, 2009 at 10:31 PM, bearophile <bearophileHUGS lycos.com> wrote: Note that SPECS uses ^ := and = as in Pascal. Pointer syntax of Pascal is better, and the := = often avoid the C bugs like if(a = b). Which isn't a problem in D ;) That can also offer a syntax for anonymous functions/delegates: {int x => x*x} {x => x*x} {{x => x*x}} That's actually pretty nice.
Mar 20 2009
Jarrett Billingsley: Let's say D has a workaround to patch most of that C-syntax hole :-) And I'll never like C pointer syntax. Pointer syntax of Pascal is better, and the := = often avoid the C bugs like if(a = b). Which isn't a problem in D ;) That's actually pretty nice. An alternative syntax that avoids the two nested {{}}: Lambda functions: {int x -> x*x} {x -> x*x} {float x, float y -> x*y} Lambda delegates: {int x => x*x} {x => x*x} {float x, float y => x*y} I may even like that :-) Bye, bearophile
Mar 20 2009
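The proposed {x -> x*x} / {x => x*x} split corresponds closely to a distinction C++ lambdas already make (a correspondence we are drawing, not one from the thread): a capture-less lambda converts to a plain function pointer, while a capturing lambda carries state, like a delegate.

```cpp
#include <functional>

// Capture-less lambda: converts to a plain function pointer,
// playing the role of the proposed {int x -> x*x} "lambda function".
int (*square_fn)(int) = [](int x) { return x * x; };

// Capturing lambda: carries the bound k, like a "lambda delegate".
std::function<int(int)> make_scaler(int k) {
    return [k](int x) { return k * x; };
}
```

This is also why Daniel's point stands: the function/delegate difference is about captured state, and syntax alone (`->` vs `=>`) doesn't make that obvious.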
bearophile wrote: Jarrett Billingsley: { int -> int } // function { this int -> int } // delegate Not saying I support this syntax; just proposing an alternative. The way I see it, there's no reason why functions are -> and delegates are =>; the difference is non-obvious. -- Daniel Let's say D has a workaround to patch most of that C-syntax hole :-) And I'll never like C pointer syntax. Pointer syntax of Pascal is better, and the := = often avoid the C bugs like if(a = b). Which isn't a problem in D ;) That's actually pretty nice. An alternative syntax that avoids the two nested {{}}: Lambda functions: {int x -> x*x} {x -> x*x} {float x, float y -> x*y} Lambda delegates: {int x => x*x} {x => x*x} {float x, float y => x*y} I may even like that :-) Bye, bearophile
Mar 21 2009
Weed: Hi! I want to offer the dialect of the language D2.0, suitable for use where C/C++ are now used. Main goal of this is making a language like D, but corresponding to the "zero-overhead principle" like C++: at least to something like this idea? )
Mar 21 2009
Weed wrote: Weed: Hi! I want to offer the dialect of the language D2.0, suitable for use where C/C++ are now used. Main goal of this is making a language like D, but corresponding to the "zero-overhead principle" like C++: at least to something like this idea? ) The idea could be OK, but have you written a compiler or specification? Or is it wishful thinking, like let's make the language that's productive to the skies while faster than asm? ;) Cheers
Mar 21 2009
Piotrek: Weed wrote: Hi! I want to offer the dialect of the language D2.0, suitable for use where C/C++ are now used. Main goal of this is making a language like D, but corresponding to the "zero-overhead principle" like C++: at least to something like this idea? ) The idea could be OK, but have you written a compiler or specification? My experience in creating compilers is limited to a half-read copy of "Compilers: Principles, Techniques, and Tools". It was easier to describe the differences from D than to write a full specification - I hoped to receive pointers to some fundamental problems, but everything there seems to be good (excluding the holy war about GC, of course). Or is it wishful thinking, like let's make the language that's productive to the skies while faster than asm? ;) No. ) I'm not suggesting anything new; everything suggested consists of time-tested things.
Mar 21 2009
Weed wrote: No. ) I'm not suggesting anything new; everything suggested consists of time-tested things. OK. I tell you what I think. D is a well-designed language. What you suggest is some kind of hack to that language. I don't think there's much interest in it. As you said, you don't have much experience in writing compilers (neither do I), but you should know how hard it is to keep the design points consistent in the way a language works. Walter spent many years on it. From my point of view he does his best (of course there are bugs, but when I write something in D I'm so glad I don't have to do it in something else). Cheers
Mar 21 2009
Piotrek: (of course there are bugs, but when I write something in D I'm so glad I don't have to do it in something else). D is surely not a puzzle language :-) http://prog21.dadgum.com/38.html Well, writing D templates in functional style and a lot of string mixins is a puzzle (even if they are less puzzling than some things you have to do in Forth). AST macros too can become puzzles, but I think if well designed they can be more natural to use than the current templates & string mixins. Bye, bearophile
Mar 21 2009
bearophile wrote: Piotrek: D is surely not a puzzle language :-) http://prog21.dadgum.com/38.html Haha, I found a good discussion on reddit: http://www.reddit.com/r/programming/comments/7vnm0/puzzle_languages/ Just a couple of nice citations. J language example: inter2=: ([: (<. inter pc) |:)^:2 fit=: [: >.`<./ , fit1=: [: >.`<./ (,~ (,~ -))~ Does it look like the Matrix? And the one that made me laugh the most: "CSS is the puzzliest puzzle that I ever puzzled" Everyone should try CSS :D Cheers
Mar 21 2009
Piotrek: Haha, I found a good discussion on reddit. On the other hand, D too for me becomes a puzzle language when I use many string mixins or templates in functional style. D macros will possibly improve that situation some. Piotrek: J language example: K language seems worse to me; this is a full raytracer that saves to PGM (in C++ it's about 120 lines of code written in normal style): http://www.nsl.com/k/ray/ray.k U:{x%_sqrt x _dot x} S:{[r;s]:[0>d:_sqr[s 1]+_sqr[b:v _dot r 1]-v _dot v:s[0]-*r;0i;0>t:b+e:_sqrt d;0i;0<u:b-e;u;t]} I:{[r;h;o]:[~4:*o;:[~S[r;*o]<*h;h;h _f[r]/o 1];~h[0]>l:S[r]o;h;(l;U r[0]-o[0]-l*r 1)]} T:{[r;o;d;z;l]:[0i=*h:I[r;z]o;0.;~0>g:h[1]_dot l;0.;0i=*I[(r[0]+(r[1]**h)+d*h 1;-l);z]o;-g;0.]} 2.}/+4_vs!16} R:{[k;n]"P5\n",(5:n,n),"\n255\n",_ci _.5+15.9375*N[n*1.;C[k;0 -1 0.]1.]'+| [n _vs!n*n;0;|:]} C:{[k;c;r]:[k=1;(c;r);((c;r*3);(,(c;r)),C[k-1;;r%2]'+c+-3 3[2_vs 2 3 6 7]*r%_sqrt 12)]} \t q:R[3]32 "temp.pgm"6:q \"C:\\Program Files\\IrfanView\\i_view32.exe" temp.pgm APL-like languages are a dead-end... Bye, bearophile
Mar 21 2009
Weed wrote: Piotrek: I inferred from your original post that you had written such a compiler. I think probably the first thing to do, if you are serious about this, is to choose one essential feature of your dialect and implement that. Then there will be something concrete to discuss. That isn't the normal modus operandi around here, but there doesn't seem to be much support for your suggestions. I think this might be less to do with your ideas and more that I can't really envision what you're talking about and how code would look with your dialect. Weed wrote: My experience in creating compilers is limited to a half-read copy of "Compilers: Principles, Techniques, and Tools". It was easier to describe the differences from D than to write a full specification - I hoped to receive pointers to some fundamental problems, but everything there seems to be good (excluding the holy war about GC, of course). Weed: The idea could be OK, but have you written a compiler or specification? Hi! I want to offer the dialect of the language D2.0, suitable for use where C/C++ are now used. Main goal of this is making a language like D, but corresponding to the "zero-overhead principle" like C++: at least to something like this idea? )
Mar 21 2009