
D - Marketplace for D?

reply "Peter Hercek" <vvp no.post.spam.sk> writes:
What is the marketplace for D?
I mean: even if it is not expected to be a commercial product,
 it should have target customers. I have read the description
 of target users here: http://www.digitalmars.com/d
But it does not look sufficient to me, especially if we compare D
 with Java or C#. Those languages also have GC (which is a cool
 feature, and I think required for any new language). The only
 significant difference that remains is that D does not need a
 runtime. This looks cool, but I think it is not as cool as it
 looks. The main benefit of the "no runtime" feature is that you do
 not need to distribute the runtime environment's executables. On
 the other hand, it makes D binaries platform dependent.
With no runtime, I would say a good target for D could be embedded
 systems, but these are typically real-time oriented, and D has no
 way to allocate from a memory pool that is not garbage collected,
 where one could take care of memory management oneself. I'm not
 sure whether an incremental/coloring garbage collector is enough
 for at least some RT systems (after all, it adds a constant-time
 overhead to each pointer dereference). Preallocating everything
 may not be an option either.
So I would think like this in the long term:
1) if I'm going to write a business, PC, or server application,
   where time or disk space is not critical, I would rather use
   Java or .NET, because the runtime gives me a chance to run it on
   a different platform without recompiling
2) when I'm going to write an RT application, I am still stuck with
   C or C++, because D is only garbage collected
3) I can use D for an application which is not time critical but
   which has limited disk space or some other unusual reason why I
   cannot use a runtime

Result: Well, Java has some performance problems (mostly due to
 some by-now "legacy" reasons), and MS can screw up its
 implementations with proprietary extensions or by requesting fees
 for ECMA .NET framework implementations (if I'm correct, ECMA does
 not require free implementations, only nondiscriminatory fees or
 something like that - maybe this gives MS a way to attack e.g. the
 Mono implementation later, once .NET is accepted). Are these two
 assets (slow Java and unreliable MS marketing) the only chance for
 D? Or is the marketplace described in point 3 big enough?

It looks to me that it makes *big* sense to allow manual memory
 allocation/deallocation as an option in D. This way one could use
 D for e.g. whole firmware, with the time-critical threads avoiding
 GC and the other threads using GC. (Well, if compilers will exist.)


There are some good ideas in the D syntax, like e.g. the override
 keyword.
I would also get rid of the C-style cast operator (a type in
 parentheses). In my opinion this is one of the poorest operators
 in C and C++.
It should be something like:
    cast(Type) UnaryExpression
The point is that this operator can be dangerous, and if it does
 not have a keyword, it's hard to search for it in an editor.
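
For illustration, a made-up snippet showing what I mean (the class
 names are hypothetical; only the cast(Type) form is the proposal):

    class Base { }
    class Derived : Base { }

    void f(Base b)
    {
        // C/C++ style - easy to overlook and hard to grep for:
        //   Derived d = (Derived) b;
        // proposed style - the keyword makes every cast searchable:
        Derived d = cast(Derived) b;
    }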

Best regards,
       Peter Hercek.
Feb 01 2003
parent reply Ilya Minkov <midiclub 8ung.at> writes:
Welcome to the crew.


Peter Hercek wrote:
 What is the marketplace for D?
You mean the target usage? D allows for rapid development of high-performance applications, basically like C++, but possibly faster, and in a more unified and safer way. "Shooting your own feet made complicated."
 I mean: even if it is not expected to be a commercial product,
  it should have target customers. I have read the description
  of target users here: http://www.digitalmars.com/d
It might just as well be a commercial product, like some C++ implementations are. But there will be good free ones for sure, like DMD and OpenD.
 But it does not look sufficient to me, especially if we compare D
  with Java or C#. Those languages also have GC (which is a cool
  feature, and I think required for any new language). The only
  significant difference that remains is that D does not need a
  runtime. This looks cool, but I think it is not as cool as it
  looks. The main benefit of the "no runtime" feature is that you
  do not need to distribute the runtime environment's executables.
  On the other hand, it makes D binaries platform dependent.
D has a standard library. D software should compile on different platforms without source code changes. You can also make platform-dependent optimisations. So platform dependence is at the binary level only, which is not necessarily a bad thing.

There need not even be a binary platform dependence. It is quite possible to make a special VM for D, possibly based upon the Mono project, which is a .NET runtime. It should be possible to use a .NET-compatible JIT to create binary-cross-platform D software; it doesn't require the rest of the framework. I'm happy to tell you that I'm investigating in this direction. I also argue that having a compiler at hand gives a great number of features, like "eval()", and also allows for optimisations in cases where constants only become defined at runtime.
 With no runtime, I would say a good target for D could be embedded
  systems, but these are typically real-time oriented, and D has no
  way to allocate from a memory pool that is not garbage collected,
  where one could take care of memory management oneself.
It does. See the Phobos library's GC module, and the discussion further down in this newsgroup.
 I'm not sure whether an incremental/coloring garbage collector is
  enough for at least some RT systems (after all, it adds a
  constant-time overhead to each pointer dereference). Preallocating
  everything may not be an option either.
Incremental mark/sweep doesn't have that overhead, and coloring is non-intrusive. BTW, the overhead of coloring systems is not on pointer dereferences, but *only* on pointer writes, which is usually minor. The mark/sweep GC in D is expected to be optimised to the point that it does not lead to noticeable pauses either. Mind that Walter pointed out that in C, malloc/free can also in principle take arbitrary time, and I bet a coloring collector could make real-time characteristics even better than C's, since all freeing would be done slowly in the background. Of course you could do the same in C yourself, leading to the same result, but the purpose of D is to simplify development.

I recommend that you read the thread "garbage collection is bad... other coments" started by Raphael Baptista, who is a game developer (and who is apparently obsessed with writing and debugging memory management spaghetti code), where all of this has been gone through.
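
To make the "only pointer writes" point concrete, here is a conceptual sketch in modern D syntax of a tri-colour write barrier (the type and function names are made up; this is not D's actual collector):

    enum Colour { White, Grey, Black }  // White = unscanned, Black = fully scanned

    struct Node
    {
        Colour colour;
        Node*[] refs;   // outgoing pointers held by this node
    }

    // Every pointer store goes through this barrier; plain reads and
    // dereferences are untouched, which is why the overhead is small.
    void writeBarrier(Node* parent, size_t slot, Node* child)
    {
        // If an already-scanned (black) node gains a pointer to an
        // unscanned (white) node, re-shade the child grey so the
        // marker revisits it before the collection cycle finishes.
        if (parent.colour == Colour.Black && child.colour == Colour.White)
            child.colour = Colour.Grey;
        parent.refs[slot] = child;
    }
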
 So I would think like this in the long term:
 1) if I'm going to write a business, PC, or server application,
    where time or disk space is not critical, I would rather use
    Java or .NET, because the runtime gives me a chance to run it on
    a different platform without recompiling
 2) when I'm going to write an RT application, I am still stuck
    with C or C++, because D is only garbage collected
What does D lack that C/C++ has? You can use all of the same practices in D! You can disable the GC and do manual memory management. I am currently arguing for two separate heaps, one of which is not processed by garbage collection. The feature requires careful use, but I think it is useful for those who are still unsure.
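
Roughly what that looks like, sketched in present-day D (the core.memory and core.stdc.stdlib modules assumed here are today's names, not the Phobos of 2003):

    import core.memory : GC;
    import core.stdc.stdlib : free, malloc;

    void timeCriticalWork()
    {
        GC.disable();                 // no collection pauses in here
        scope (exit) GC.enable();

        // A manually managed buffer the collector never touches.
        auto buf = cast(ubyte*) malloc(4096);
        scope (exit) free(buf);

        // ... real-time work using buf ...
    }
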
 3) I can use D for an application which is not time critical but
    which has limited disk space or some other unusual reason why I
    cannot use a runtime
D applications are VERY VERY FAST!
 Result: Well, Java has some performance problems (mostly due to
  some by-now "legacy" reasons), and MS can screw up its
  implementations with proprietary extensions or by requesting fees
  for ECMA .NET framework implementations (if I'm correct, ECMA does
  not require free implementations, only nondiscriminatory fees or
  something like that - maybe this gives MS a way to attack e.g. the
  Mono implementation later, once .NET is accepted). Are these two
  assets (slow Java and unreliable MS marketing) the only chance for
  D? Or is the marketplace described in point 3 big enough?
... Almost everything is written in C++ ... And D is a good alternative to it.
 It looks to me that it makes *big* sense to allow manual memory
  allocation/deallocation as an option in D. This way one could use
  D for e.g. whole firmware, with the time-critical threads avoiding
  GC and the other threads using GC. (Well, if compilers will
  exist.)
^^^^ ;) ^^^^ There shall be a GCC port... and though GCC is slow, it exists everywhere. And there might be a "limited" port which outputs C code and thus works everywhere.

  There are some good ideas in the D syntax, like e.g. the override
  keyword.
 I would also get rid of the C-style cast operator (a type in
  parentheses). In my opinion this is one of the poorest operators
  in C and C++.
 It should be something like:
     cast(Type) UnaryExpression
 The point is that this operator can be dangerous, and if it does
  not have a keyword, it's hard to search for it in an editor.
Exactly this already exists, and Walter intends to get rid of the C-style "anonymous" cast as well, since it complicates parsing and interferes with some expressions. Reading the manual often helps - even me. Have fun with D. -i.
Feb 01 2003
next sibling parent "Walter" <walter digitalmars.com> writes:
"Ilya Minkov" <midiclub 8ung.at> wrote in message
news:b1hcks$2tm8$1 digitaldaemon.com...
 Mind that Walter pointed out that in C, malloc/free can also in
 principle take arbitrary time,
I used to think garbage collection was bad, too, until I tried it. I think GC has an undeservedly bad reputation. For example, the DMDScript implementation of JavaScript that I wrote is TWICE as fast as its nearest competitor, Microsoft JScript, and up to TWENTY times faster than Mozilla's. Yet DMDScript internally is implemented as a mark/sweep garbage collected interpreter; Mozilla's is not, and I suspect JScript is reference counted (because COM is).
Feb 01 2003
prev sibling parent reply "Peter Hercek" <vvp no.post.spam.sk> writes:
Hi Ilya,

Thanks for the answer, you dispelled some of my worries. I'm not
 afraid of GC; I like it, and it is one reason I want to move to D.
 I only wanted to express that without a non-GC-ed memory pool, D
 may have acceptance problems if it is not pushed by a big company.
 Making it a standard would help; without a standard it probably
 does not have a chance at all. From this point of view, Walter
 made an excellent decision in not allowing development of more
 compilers before the language is "settled".
I did read the thread "garbage collection is bad... other coments"
 even before I wrote my original post. But my post was not about
 whether GC is good or bad; it was about who the D language is
 targeted at, and how to make it more widely accepted in the
 programmers' community given the competitors (I would like to get
 rid of some of the cruft in C and C++ too :) ). Because until you
 responded, I had thought the only interesting difference was the
 runtime requirement. And making statements that natively compiled
 code is significantly quicker than JIT-ed code requires some
 courage, especially when we take the development of JIT compilers
 into account and realize that .NET allows a "precompiled program
 image cache" which is persistent between separate process runs.

Do you have a link where the incremental GC is described?
 I have only read about a GC which had both write and read guards,
 and the program spent 10% of execution time just on the read checks
 (not counting all the other GC execution time). Also, incremental
 GC seems to be a very tough problem, especially in a multiprocessor
 environment ... ironically ... where it should be needed most.

I'm tempted to believe that preallocation or an incremental GC
 should be enough for most RT or RT-like applications, but I'm sure
 a lot of guys will not think the same way, and they will stay with
 C or C++.

I like very much that you guys decided to get rid of the
 preprocessor and to make the grammar as simple and conflict-free as
 possible. This will become important for tool implementers: e.g.
 editors with IntelliSense and "online" *syntactic* or even some
 semantic checks, CASE tool integration, ... and one needs this.

Peter Hercek.
Feb 02 2003
parent reply Ilya Minkov <midiclub tiscali.de> writes:
Hello.


  D may have acceptance problems if it is not pushed by a big
  company. Making it a standard would help; without a standard it
  probably does not have a chance at all. From this point of view,
  Walter made an excellent decision in not allowing development of
  more compilers before the language is "settled".
I don't think it requires a standard, since Walter is the one and only, and he makes it the standard. Standards are only useful when there are tons of people arguing, as in the C and C++ case. Every other standard is a farce, even the one in Microsoft's case, which was only meant to show that they don't intend to hold a monopoly on some parts of the system (though for others they do).

He did not forbid development either. D for Linux (DLI) deviates a bit and allows constructs which are illegal in DMD or the standard. And it would be really cool if GCC frontend development could be more productive, but I guess they're just climbing over their first problems yet. Unlike DLI, it would be synchronised to Walter's source; it would merely be a glue layer. And I am considering hooking D to .NET, using Walter's source as well. Lacking awareness of D somewhat slows down development of other high-quality compilers.
  the only interesting difference was the runtime requirement. And
  making statements that natively compiled code is significantly
  quicker than JIT-ed code requires some courage, especially ...
Of course JIT VM execution speed could reach that of natively compiled software, but it requires careful design. No Java bytecode->native compiler performs as well as Java source->native compilation; the distributable representation has to retain more information. Maybe .NET (CLI) does exactly that. That's why I'm investigating it. ;) If the distributable representation allows for powerful optimisation, it is merely a tradeoff between execution speed and compilation time. And if profiling information is saved in it, this can very well be taken into account.

There are also cases where having even a mediocre compiler at hand can speed up code by an order of magnitude, for example when a great number of constants are set at startup and then affect lots of branches, so the code behaves almost as if interpreted. There is evidence for this: look up the "Tick C" compiler. The results are clearly impressive, often scoring much better than GCC -O2, although the compiler used for dynamic compilation is real crap. But then again, it requires high-level analysis of what should be recompiled when, or "hints" in the program.
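
To illustrate the kind of case meant here (a made-up D sketch, not the Tick C system itself): values fixed once at startup drive a hot routine, and a runtime compiler could fold them into the generated code.

    // Parameters known only once the program has started (e.g. read
    // from a config file), but constant afterwards.
    struct FilterConfig
    {
        int taps;           // number of filter taps
        double[] weights;   // one weight per tap
    }

    // Generic version: the loop bound and weights are opaque to the
    // static compiler, so every call pays for the loop and indexing.
    double apply(const FilterConfig cfg, const double[] window)
    {
        double sum = 0;
        foreach (i; 0 .. cfg.taps)
            sum += cfg.weights[i] * window[i];
        return sum;
    }

    // What a runtime specialiser could effectively emit once it knows
    // taps == 3 and the concrete weights: no loop, no indirection.
    double applySpecialised(const double[] window)
    {
        return 0.25 * window[0] + 0.5 * window[1] + 0.25 * window[2];
    }
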
 Do you have a link where the incremental GC is described?
  I have only read about a GC which had both write and read guards,
  and the program spent 10% of execution time just on the read
  checks (not counting all the other GC execution time). Also,
  incremental GC seems to be a very tough problem, especially in a
  multiprocessor environment ... ironically ... where it should be
  needed most.
It was probably a Java GC? Those are lame. And intrusive.

A mark/sweep collector does not have any guards. It simply stops program execution and marks all live objects; after that, execution continues as normal and garbage is recycled in the background. Marking is only invoked at memory allocation, or at the programmer's discretion, so it is only mildly intrusive and causes no slowdown otherwise. "Conservative" mark/sweep GCs used with C are orders of magnitude more intrusive than the one D shall get soon, because they lack type information.

A three-colour collector needs only pointer write guards; I have already described it in "Re: GC is bad...". OCaml uses a variant of it. It is a slowdown, but not intrusive, and it keeps the memory footprint small.

For papers on different methods of garbage collection, look at http://www.memorymanagement.org/ - I just can't remember where I took the information from, but I guess it's all there. In the thread "Improving garbage collection", Evan suggests this document for reading: http://cs.anu.edu.au/~Steve.Blackburn/pubs/papers/beltway-pldi-2002.pdf He said that he changed sides after reading it. -i.
Feb 03 2003
parent "Peter Hercek" <vvp no.post.spam.sk> writes:
Hi Ilya,

Well, I have not looked in here almost for a week :)

As far as standardization goes - I hope it will work as you
 describe. I think that the possibility to change compilers and
 still compile successfully is very important.

"Ilya Minkov" <midiclub tiscali.de> wrote in message
news:b1lq6u$2ffk$1 digitaldaemon.com...
[cut]---------------------------------------------------------
  > Do you have a link where the incremental GC is described?
  >  I have only read about a GC which had both write and read
  >  guards, and the program spent 10% of execution time just on
  >  the read checks (not counting all the other GC execution time).
  >  Also, incremental GC seems to be a very tough problem,
  >  especially in a multiprocessor environment ... ironically ...
  >  where it should be needed most.

 It was probably a Java GC? Those are lame. And intrusive.
[cut]---------------------------------------------------------
No, it was some collector used in Lisp (I think - I'm not sure about this - it was also an implementation for a single processor only). Later, after your previous post, I saw a paper about a collector with write guards only, but it was very long and very complicated and for multiprocessor environments only (nothing like a quick introduction to how it approximately works - it read more like "spend two days with me" :-) ).

Thank you for the links.

Peter.
Feb 07 2003