
digitalmars.D - D vs. C#

Mike <michi_w2001 yahoo.de> writes:
Hi,

I have some advanced knowledge of programming with C and C++.
While I like C for its simplicity and speed, it lacks some important
functionality (like OO). I'm not very fond of C++, since it is quite clumsy.
(But you know all that already)

Anyway, I was looking for a new programming language for little projects. I
looked into the specs of the D language and became quite fond of it. I also
came across C#, which at first glance seems to have similar goals.

I am not experienced enough to compare the two simply on the basis of their
specifications. I tried finding some comparison on the internet but failed to
find anything more recent than 2003.

I was wondering about the advantages of either language, and in which case
one is more appropriate than the other, and I hope you can help me out!


Many thanks in advance,
Mike
Oct 20 2007
Kyle Furlong <kylefurlong gmail.com> writes:
Mike wrote:
 Hi,
 
 I have some advanced knowledge of programming with C and C++.
 While I like C for its simplicity and speed, it lacks some important
functionality (like OO). I'm not very fond of C++, since it is quite clumsy.
(But you know all that already)
 
 Anyway, I was looking for a new programming language for little projects. I
looked into the specs of the D language and became quite fond of it. I also
came across C#, which at first glance seems to have similar goals.

 I am not experienced enough to compare the two simply on the basis of their
specifications. I tried finding some comparison on the internet but failed to
find anything more recent than from 2003.
 
 I was wondering about the advantages of either language, and in which
case one is more appropriate than the other and I hope you can help me out!
 
 
 Many thanks in advance,
 Mike
C# isn't native; it runs in a VM like Java. While it does do some JITing, D and other compiled languages will always be faster, so D is a step up from C++ in that respect. Oh, and if it matters to you, D templates and metaprogramming blow C#'s out of the water and enable real performance increases.
Oct 20 2007
parent "Dave" <Dave_member pathlink.com> writes:
"Kyle Furlong" <kylefurlong gmail.com> wrote in message 
news:ffdqe7$pvi$1 digitalmars.com...
 Mike wrote:
 Hi,

 I have some advanced knowledge of programming with C and C++.
 While I like C for its simplicity and speed, it lacks some important 
 functionality (like OO). I'm not very fond of C++, since it is quite 
 clumsy. (But you know all that already)

 Anyway, I was looking for a new programming language for little projects. 
 I looked into the specs of the D language and became quite fond of it. 

 I am not experienced enough to compare the two simply on the basis of 
 their specifications. I tried finding some comparison on the internet but 
 failed to find anything more recent than from 2003.

 I was wondering about the advantages of either language, and in
 which case one is more appropriate than the other and I hope you can help
 me out!


 Many thanks in advance,
 Mike
C# isn't native; it runs in a VM like Java. While it does do some JITing, D and other compiled languages will always be faster, so D is a step up from C++ in that respect. Oh, and if it matters to you, D templates and metaprogramming blow C#'s out of the water and enable real performance increases.
D also gives you direct pointer access when you need it (without C#'s unsafe blocks, where pointers have to be 'fixed') and easy use of C lib. routines. The price for most of that is a non-moving GC, but I think those can be developed to rival the speed of the moving GCs. D's built-in array slicing mitigates a lot of the need for a super-fast GC as well.
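To illustrate that last point about slicing, a minimal D sketch (the buffer contents are made up for the example):

import std.stdio;

void main()
{
    char[] buf = "hello world".dup;  // one allocation
    char[] word = buf[6 .. $];       // a view into buf: no copy, no new GC work
    word[0] = 'W';                   // writes through to the original buffer
    writefln(buf);                   // prints "hello World"
}

Because slices share the underlying array, a lot of string handling never touches the allocator at all.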
Oct 20 2007
Bill Baxter <dnewsgroup billbaxter.com> writes:
Mike wrote:
 Hi,
 
 I have some advanced knowledge of programming with C and C++.
 While I like C for its simplicity and speed, it lacks some important
functionality (like OO). I'm not very fond of C++, since it is quite clumsy.
(But you know all that already)
 
 Anyway, I was looking for a new programming language for little projects. I
looked into the specs of the D language and became quite fond of it. I also
came across C#, which at first glance seems to have similar goals.

 I am not experienced enough to compare the two simply on the basis of their
specifications. I tried finding some comparison on the internet but failed to
find anything more recent than from 2003.
 
 I was wondering about the advantages of either language, and in which
case one is more appropriate than the other and I hope you can help me out!
D is the better choice if:
1) you care about getting every last bit of performance out of your code
(http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=dlang&lang2=csharp), or
2) you care about your apps not requiring a 20MB runtime/VM to work,
and maybe 3) you care about portability, though I think Mono makes that a non-issue?
On the C# side, the community is obviously much bigger, which translates to a greater likelihood that someone will already have code you can steal that does what you need; on that front C# certainly knocks D out of the water.
And of course, pick D if 4) you want to be one of the "cool" kids. :-)

--bb
Oct 20 2007
Anders F Björklund <afb algonet.se> writes:
Mike wrote:

 I have some advanced knowledge of programming with C and C++. While I
 like C for its simplicity and speed, it lacks some important
 functionality (like OO). I'm not very fond of C++, since it is quite
 clumsy. (But you know all that already)
Lots of us escapees from C++ here are using D, so don't expect an unbiased rundown of C#'s pros/cons when compared to the D language.
 Anyway, I was looking for a new programming language for little
 projects. I looked into the specs of the D language and became quite
 fond of it.

You might also want to check out Vala, which offers C#-like syntax but compiles to native code. More info at http://live.gnome.org/Vala (requires GLib and uses the GObject system) --anders
Oct 20 2007
Yigal Chripun <yigal100 gmail.com> writes:
Mike wrote:
 Hi,
 
 I have some advanced knowledge of programming with C and C++.
 While I like C for its simplicity and speed, it lacks some important
functionality (like OO). I'm not very fond of C++, since it is quite clumsy.
(But you know all that already)
 
 Anyway, I was looking for a new programming language for little projects. I
looked into the specs of the D language and became quite fond of it. I also
came across C#, which at first glance seems to have similar goals.

 I am not experienced enough to compare the two simply on the basis of their
specifications. I tried finding some comparison on the internet but failed to
find anything more recent than from 2003.
 
 I was wondering about the advantages of either language, and in which
case one is more appropriate than the other and I hope you can help me out!
 
 
 Many thanks in advance,
 Mike
A few reasons why I wouldn't pick C#:

1) It's a poor imitation of Java. Java may have its cons, but at least they try to be consistent with the design of the language - every new feature goes through a JSR. This may make the language evolve at a slower rate, though. Compare that to the MS approach of including the kitchen sink: the language has many features meant for corner cases, which makes it bloated. On the other hand, other features were discarded, like covariance of return types, which is heavily used in Java land.

2) Who needs another proprietary language?? That's the most important issue for me. The mono project is a VERY stupid idea. It's a lost cause and a wasted effort - trying to play catch-up to MS while it constantly introduces breaking changes to the spec. Just look at another project that needs to be compatible with MS - Samba. They need to jump through hoops to make it work - and that project is a necessary evil to make Linux and Windows boxes work together. The best way to look at this is through history; check MS' past actions. For example, look at VB: the new VB.net is a very different beast than the classic VB. From what I hear, half the VB coders like the new features a lot, while the other half claim MS ruined the language. No one asked the community what they want. I also bought software written with .net v1.1 which was abandoned due to .net v2 not being compatible with previous versions, forcing the company to rewrite the software from scratch, which they didn't do due to lack of resources.

3) I don't see enough commitment from MS to the .net platform. Sun, for example, is fully committed to its Java platform, as you can see for yourself - most (if not all) of their software is written in Java. Can you say the same about MS? I don't think so. Besides various toy utilities, not even one major piece of software from MS is written with .net. In my book, if even MS itself prefers C++ for its products, why should I think anything different?

_conclusion_ - if you want to run on a VM, use Java: it's open source and free. If you're looking for something more high-level, there are many other languages built on top of the JVM; I'd recommend checking Scala, for example. You could also use a dynamic language such as Python or Ruby (I personally like Ruby's syntax more). If you want a language that compiles to native code, providing you with all the power of C++ but with all the niceties of a modern language with a GC, then take a deep look at D. Note that D is open source, except for the back-end of the official compiler made by Walter; however, there is at least one other working compiler which uses gcc as its back-end. All round, D is a much improved version of C++ with many new features already built in in a consistent way.

just my thoughts..
Oct 20 2007
David Brown <dlang davidb.org> writes:
On Sun, Oct 21, 2007 at 01:25:55AM +0200, Yigal Chripun wrote:

 1) it's a poor imitation of Java. Java may have its cons but at least they 
 try to be consistent with the design of the language - every new feature 
 goes through a JSR. this may make the language to evolve in a slower rate 
 though. compare that to the MS approach of including the kitchen sink. the 
 language has many features meant for corner cases which makes it bloated.
 on the other hand other features were discarded like the covariance of 
 return types, which is heavily used in Java land.
C# has generics, which cover many of the cases of templates. They have full support from the VM, so they execute efficiently and safely.
 2) who needs another proprietary language?? that's the most important issue 
 for me. the mono project is a VERY stupid idea. it's a lost cause and a 
 wasted effort - trying to play catch-up to MS while it constantly 
 introduces breaking changes to the spec.
Why stupid? I've done development under mono and found it to work quite well. Where they play catchup is not the language but Microsoft's ever expanding proprietary libraries. The C# language itself is a standard with multiple implementations. Its tradeoff choices may not be appropriate for all applications (using a VM most importantly).

I think for the most part, D is a better language.

David
Oct 20 2007
Yigal Chripun <yigal100 gmail.com> writes:
David Brown wrote:
 On Sun, Oct 21, 2007 at 01:25:55AM +0200, Yigal Chripun wrote:
 
 1) it's a poor imitation of Java. Java may have its cons but at least 
 they try to be consistent with the design of the language - every new 
 feature goes through a JSR. this may make the language to evolve in a 
 slower rate though. compare that to the MS approach of including the 
 kitchen sink. the language has many features meant for corner cases 
 which makes it bloated.
 on the other hand other features were discarded like the covariance of 
 return types, which is heavily used in Java land.
C# has generics, which cover many of the cases of templates. They have full support from the VM, so they execute efficiently and safely.
From what I see, anything that needs fixing is being fixed within the Java community, and I personally trust their decisions a lot more than MS's. Adding properties to a language isn't considered "fixing" it; it's just a convenience feature. On the other hand, dropping covariance of return types is a very big mistake. There are others of course, but that one really pissed me off.
 2) who needs another proprietary language?? that's the most important 
 issue for me. the mono project is a VERY stupid idea. it's a lost 
 cause and a wasted effort - trying to play catch-up to MS while it 
 constantly introduces breaking changes to the spec.
Why stupid? I've done development under mono and found it to work quite well.
Well, who is to prevent MS from publishing a new "standard" every year? As I mentioned in my original post, I as a consumer got burnt on this exact issue. I paid $70 for a piece of software I planned to use until the end of my degree, and that's quite expensive for a student in Israel. That piece of software was scrapped only due to the inability to port it to a newer version of .net in order to support Vista and improve speed. MS didn't provide any way for them to upgrade except for re-writing the whole thing, which the company just didn't have the resources for. Conveniently for MS, too, as the company was making a product that competed with OneNote - and even today there are several features that they implemented much better than MS. That's just one way MS pushes small ISVs that compete with it off the market.
 
 Where they play catchup is not the language but Microsoft's ever expanding
 proprietary libraries.
You can't really separate a language from its standard library; almost every piece of code depends on it. Have you used printf in your C program? Well, that's part of the standard C library. Try to write an application without the standard library, and I assure you you won't get far - unless of course you're writing a kernel and need to implement printf by yourself, and I don't think most application developers will go for that. Hence, a change in the standard library has the same effect as a change in the language itself. That's why Java takes a very cautious approach towards changing/deprecating parts of its standard lib. Today, I can take a legacy Java 1.1 application and, with minimal changes and probably a flag to the VM, run it on a modern Java 6 VM. That's a program written about 10 years ago. You can't say the same thing about a program written with any of MS's tools.
 

 The C# language itself is a standard with multiple implementations. Its tradeoff
 choices may not be appropriate for all applications (using a VM most importantly).
 
Would you call Microsoft's document format a standard? Being a standard means being accepted as the default by all parties, not just by one.
 I think for the most part, D is a better language.
I agree fully with that.
 
 David
Oct 20 2007
Reiner Pope <some address.com> writes:
Yigal Chripun wrote:
 David Brown wrote:
 On Sun, Oct 21, 2007 at 01:25:55AM +0200, Yigal Chripun wrote:

 1) it's a poor imitation of Java. Java may have its cons but at least 
 they try to be consistent with the design of the language - every new 
 feature goes through a JSR. this may make the language to evolve in a 
 slower rate though. compare that to the MS approach of including the 
 kitchen sink. the language has many features meant for corner cases 
 which makes it bloated.
 on the other hand other features were discarded like the covariance 
 of return types, which is heavily used in Java land.
C# has generics, which cover many of the cases of templates. They have full support from the VM, so they execute efficiently and safely.
From what I see, anything that needs fixing is being fixed within the Java community, and I personally trust their decisions a lot more than MS's. Adding properties to a language isn't considered "fixing" it; it's just a convenience feature. On the other hand, dropping covariance of return types is a very big mistake. There are others of course, but that one really pissed me off.
C# also brings some genuine additions, such as nullable types. I don't feel as dismissive of properties as you do because, in conjunction with operator overloading, they lead to a much cleaner (IMO) code look and feel than Java. Instead of writing

    foo.setAmount(foo.getAmount().add(5));

you get the much cleaner

    foo.Amount = foo.Amount + 5;

which you are in fact allowed to rewrite to

    foo.Amount += 5;

C# also has lexical closures, and it allows you to compile new code at runtime -- a useful optimization which can be used, for instance, for regexes.

 -- Reiner
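For comparison, here is a rough D sketch of the same idea (the Foo class and its field are invented for the example); in D, suitably written member functions can be called with property syntax:

class Foo
{
    private int amount;
    int Amount() { return amount; }                    // getter
    int Amount(int value) { return amount = value; }   // setter
}

void test()
{
    auto foo = new Foo;
    foo.Amount = foo.Amount + 5;   // rewritten to foo.Amount(foo.Amount() + 5)
    // note: foo.Amount += 5 is not rewritten through the setter,
    // which is exactly the gap the next post complains about
}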
Oct 21 2007
next sibling parent reply "Janice Caron" <caron800 googlemail.com> writes:
I don't think the original poster cares about Java though. The question was about D versus C#.


I love D. It's getting close to everything I want a language to be.
The features it doesn't have now, it might have in the future, because
through discussion on this newsgroup, we, the users, get a say.



And the name "C#" makes it sound like the language is geared toward writing music, but it isn't. That's just pretentious. Seems
like just another aspect of Microsoft's attempts at world domination.
And certainly, you won't get a say in any decision they make.
Oct 21 2007
Yigal Chripun <yigal100 gmail.com> writes:
Janice Caron wrote:
 I don't think the original poster cares about Java though. The question
 was about D versus C#.

 I love D. It's getting close to everything I want a language to be.
 The features it doesn't have now, it might have in the future, because
 through discussion on this newsgroup, we, the users, get a say.

 And the name "C#" makes it sound like the language is geared toward
 writing music, but it isn't. That's just pretentious. Seems like just
 another aspect of Microsoft's attempts at world domination. And
 certainly, you won't get a say in any decision they make.
I completely agree with the above post. Regarding your first point: I mentioned Java, and also a bunch of other languages, because I wanted to list all the possibilities I consider worth checking. I didn't even mention functional languages yet... Maybe Scheme is a good candidate? I don't know. Personally I like the way D combines the best practices of all styles in a proper way.
Oct 21 2007
Ary Manzana <ary esperanto.org.ar> writes:
Janice Caron wrote:

 And the name "C#" makes it sound like the language is geared toward
 writing music, but it isn't.
I've written a chord-position calculator for a guitar in Java, and I'm making heavy use of reflection.
Oct 21 2007
Yigal Chripun <yigal100 gmail.com> writes:
Reiner Pope wrote:
 Yigal Chripun wrote:
 David Brown wrote:
 On Sun, Oct 21, 2007 at 01:25:55AM +0200, Yigal Chripun wrote:

 1) it's a poor imitation of Java. Java may have its cons but at 
 least they try to be consistent with the design of the language - 
 every new feature goes through a JSR. this may make the language to 
 evolve in a slower rate though. compare that to the MS approach of 
 including the kitchen sink. the language has many features meant for 
 corner cases which makes it bloated.
 on the other hand other features were discarded like the covariance 
 of return types, which is heavily used in Java land.
C# has generics, which cover many of the cases of templates. They have full support from the VM, so they execute efficiently and safely.

From what I see, anything that needs fixing is being fixed within the Java community, and I personally trust their decisions a lot more than MS's. Adding properties to a language isn't considered "fixing" it; it's just a convenience feature. On the other hand, dropping covariance of return types is a very big mistake. There are others of course, but that one really pissed me off.
C# also brings some genuine additions, such as nullable types. I don't feel as dismissive of properties as you do because, in conjunction with operator overloading, they lead to a much cleaner (IMO) code look and feel than Java. Instead of writing foo.setAmount(foo.getAmount().add(5)); you get the much cleaner foo.Amount = foo.Amount + 5; which you are in fact allowed to rewrite to foo.Amount += 5; C# also has lexical closures, and it allows you to compile new code at runtime -- a useful optimization which can be used, for instance, for regexes. -- Reiner
Regarding properties - even in D they aren't perfect: until there's a consistent interface, meaning no differences between a field and a property, I would avoid them. For example, you can't do

    someObject.someProperty++;

I'm sure that this one is on the list, though.

Generally speaking, I regard properties as a convenience feature, because you demonstrated yourself how to implement the same code in Java. Properties also allow for more abuse: in strictly OOP design you should avoid getters/setters for every field. The best OOP design is: if you want to perform an action on some object, send it a message and let it perform the action by itself, rather than providing a getter/setter and performing the action outside the object. Example:
---
object.doSomething(params);
---
is more OOP-correct than:
---
a = object.getField();
b = doSomething(a, params);
object.setField(b);
---
Also compare D's foreach loop with Ruby's "collection.each block", which is much better from an OOP design point of view. The only place where you _need_ every field to have a getter/setter is if you're writing a bean that will be processed by some automated tool, like the visual designer of an IDE. Properties allow easy misuse of that.

With lexical closures you mean delegates, right?

So in fact all the features you've mentioned are either part of the library, or could be part of the library (nullable types), and none provides a true fix for something wrong in Java. All those features are ways to have shorter syntax than Java, i.e. niceties. Most could be achieved in Java, and some will probably be added to Java (there's a debate regarding closures). So in fact nothing is broken in Java; it just has a verbose syntax that frankly I don't like either. It all comes down to the prettiness of the syntax, not the design of the semantics of the language itself. On the flip side, how can I get covariant return types in C# without changing the language?
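As a rough sketch of that Ruby-style iteration in D (IntList and its methods are invented for the example) - the collection applies the caller's block, a delegate, to each element itself:
---
import std.stdio;

class IntList
{
    private int[] items;

    void add(int x) { items ~= x; }

    // the object owns the traversal; callers only supply behavior
    void each(void delegate(int) block)
    {
        foreach (x; items)
            block(x);
    }
}

void main()
{
    auto list = new IntList;
    list.add(1);
    list.add(2);
    list.each(delegate(int x) { writefln("%s", x); });
}
---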
Oct 21 2007
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 10/21/07, Janice Caron <caron800 googlemail.com> wrote:
 On 10/21/07, Yigal Chripun <yigal100 gmail.com> wrote:
 object.doSomething(params);
 ---
 is more OOP correct than:
 ---
 a = object.getField();
 b = doSomething(a, params);
 object.setField(b);
I think that's wrong.
Also, the fact that one can write a getter function without a setter function allows one to define properties which are read-only to the outside world, but read-write to the containing object, which again is something you can't do with a plain member variable.
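A minimal D sketch of that pattern (the Counter class is invented for the example):

class Counter
{
    private int count;              // writable only inside the class
    int value() { return count; }   // public getter; no setter exists
    void increment() { ++count; }
}

Outside code can read counter.value but has no way to assign it, while Counter's own methods can still update count.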
Oct 21 2007
Julio César Carrascal Urquijo writes:
Janice Caron wrote:
 Also, the fact that one can write a getter function without a setter
 function allows one to define properties which are read-only to the
 outside world, but read-write to the containing object, which again is
 something you can't do with a plain member variable.
class Foo
{
    private string m_bar = "Hello";

    public readonly string Bar = "Hello";

    // or

    public string Baz
    {
        get { return m_bar; }
    }
}

The first one is read-only everywhere. The second is read-only on the outside, but member functions can change the value of m_bar. I still miss read-only local variables, though.

-- 
Julio César Carrascal Urquijo
http://www.artelogico.com/
Oct 21 2007
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On 10/21/07, Julio César Carrascal Urquijo <jcarrascal gmail.com> wrote:

I think you can in /every/ language. I only posted that point because I disagreed with Yigal who said that properties were a bad thing.
Oct 21 2007
Yigal Chripun <yigal100 gmail.com> writes:
Janice Caron wrote:
 On 10/21/07, Julio César Carrascal Urquijo <jcarrascal gmail.com> wrote:

 I think you can in /every/ language. I only posted that point because I
 disagreed with Yigal who said that properties were a bad thing.
I didn't say they are a bad thing. I said they allow for easy misuse and breaking of OOP encapsulation. Therefore they are a useful shortcut for experienced programmers, but they don't add new functionality to the language and therefore do not fix anything in the language design.
Oct 21 2007
Yigal Chripun <yigal100 gmail.com> writes:
Janice Caron wrote:
 On 10/21/07, Janice Caron <caron800 googlemail.com> wrote:
 On 10/21/07, Yigal Chripun <yigal100 gmail.com> wrote:
 object.doSomething(params);
 ---
 is more OOP correct than:
 ---
 a = object.getField();
 b = doSomething(a, params);
 object.setField(b);
I think that's wrong.
Also, the fact that one can write a getter function without a setter function allows one to define properties which are read-only to the outside world, but read-write to the containing object, which again is something you can't do with a plain member variable.
You're correct in your comment, but that just shows that you're a true C++ programmer and completely missed my point. I haven't said that fields are better than properties. What I meant was the use of encapsulation from the OOP perspective. In pure OOP, method calls are considered messages to objects, and the objects have attached behavior that handles their inner state. If you want to manipulate an object's inner state, the proper OOP way is to have the object contain such behavior itself, so all you need to do is send the object a message, rather than get its inner state with a getter and perform the action yourself. The latter breaks encapsulation from an OOP perspective. Properties make it easy for a programmer to break said encapsulation and implement the latter design instead of the former. That's not to say that getters should be avoided; on the contrary, please do use them when _appropriate_.
Oct 21 2007
Ary Manzana <ary esperanto.org.ar> writes:
Yigal Chripun wrote:

The only thing I don't like about Java is that generics are not true generics. The compiler erases the types during compilation, so they are lost at runtime. So, for example:

class Zoo {
    void foo(List<Dog> dogs) { }
    void foo(List<Cat> cats) { }
}

won't compile, because the method "foo(List)" is duplicated. Also, you can't have generics of primitive types...
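In D, by contrast, the two overloads are legal, because each template instantiation is a distinct real type. A minimal sketch (Dog, Cat and the List template are stand-ins invented for the example):

import std.stdio;

class Dog {}
class Cat {}

struct List(T) { T[] items; }

void foo(List!(Dog) dogs) { writefln("dogs"); }
void foo(List!(Cat) cats) { writefln("cats"); }  // fine: a different type

void main()
{
    List!(Dog) d;
    List!(Cat) c;
    foo(d);             // prints "dogs"
    foo(c);             // prints "cats"
    List!(int) numbers; // primitive type arguments work, too
}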
Oct 21 2007
Julio César Carrascal Urquijo writes:
Yigal Chripun wrote:

 from what i see, anything that needs fixing is being fixed within the 
 Java community and I personally trust their decisions a lot more than 
 MS. Adding properties to a languages isn't considered "fixing" it, it's
 just a convenience feature. On the other hand, removing covariance of 
 return types is a very big mistake. there are others of course, but that 
 one really pissed me off.
C#, on the other hand, shields you from implementation details that really shouldn't concern your code:

Int32 a1 = 1, a2 = 1;
Int32 b1 = 128, b2 = 128;
Console.WriteLine(a1 == a2);    // True
Console.WriteLine(b1 == b2);    // True

Java-the-language:

Integer a1 = 1, a2 = 1;
Integer b1 = 128, b2 = 128;
System.out.println(a1 == a2);    // true
System.out.println(b1 == b2);    // false

This is because Java caches singletons for the boxed values from -128 to 127, while you'll get different objects for 128 and up.

Also, Generics don't guarantee that a collection will only contain elements of the same type:

List<Integer> lint = new ArrayList();
List lobj = lint;
lobj.add("hello");    // WTF?

And how come in Java you have to check your enums for null before using them? :D

void foo(Color c) {
    if (c == null)
        ; // throw something
    switch (c) {
        case red: break;
        case green: break;
        case blue: break;
        default: break;
    }
}

Java-the-vm is a great implementation, and I expect with anticipation a lot of useful languages to evolve on top of it, but Java-the-language isn't really a great example of language design.
 2) who needs another proprietary language?? that's the most important 
 issue for me. the mono project is a VERY stupid idea. it's a lost 
 cause and a wasted effort - trying to play catch-up to MS while it 
 constantly introduces breaking changes to the spec.
stupid? I've done development under mono and found it to work quite well.
 Well, who is to prevent MS from publishing a new "standard" every year?
And who is to prevent Sun from publishing a new "standard" every year? They might have open-sourced their implementation, but they still control the certifications that allow you to call your implementation *JAVA*.
 would you call Microsoft's document format standard? being a standard 
 means being accepted as the default by all parties, not just by one.
Do you mean Microsoft's OOXML? Yes, it was accepted by ECMA last December. I think that qualifies as a "standard". http://www.ecma-international.org/publications/standards/Ecma-376.htm
 I think for the most part, D is a better language.
i agree fully with that.
 David
Anyway, C# is a very nice language and it has a very comprehensive library with independent implementations. Mono's implementation even has a more liberal license than Java's GPL. Of course I still prefer D when speed is important, but the huge class library in Mono is certainly appealing. Maybe in a couple of years Tango will be there and this conversation can be dropped.

-- 
Julio César Carrascal Urquijo
http://www.artelogico.com/
Oct 21 2007
Sean Kelly <sean f4.ca> writes:
Julio César Carrascal Urquijo wrote:
 Yigal Chripun wrote:

 from what i see, anything that needs fixing is being fixed within the 
 Java community and I personally trust their decisions a lot more than 
 MS. Adding properties to a languages isn't considered "fixing" it, it's
 just a convenience feature. On the other hand, removing covariance of 
 return types is a very big mistake. there are others of course, but 
 that one really pissed me off.
 C#, on the other hand, shields you from implementation details that really
 shouldn't concern your code:

 Int32 a1 = 1, a2 = 1;
 Int32 b1 = 128, b2 = 128;
 Console.WriteLine(a1 == a2);    // True
 Console.WriteLine(b1 == b2);    // True

 Java-the-language:

 Integer a1 = 1, a2 = 1;
 Integer b1 = 128, b2 = 128;
 System.out.println(a1 == a2);    // true
 System.out.println(b1 == b2);    // false

 This is because Java caches singletons for the boxed values from -128 to
 127, while you'll get different objects for 128 and up.

 Also, Generics don't guarantee that a collection will only contain
 elements of the same type:

 List<Integer> lint = new ArrayList();
 List lobj = lint;
 lobj.add("hello");    // WTF?

 And how come in Java you have to check your enums for null before using
 them? :D

 void foo(Color c) {
     if (c == null)
         ; // throw something
     switch (c) {
         case red: break;
         case green: break;
         case blue: break;
         default: break;
     }
 }

 Java-the-vm is a great implementation, and I expect with anticipation a
 lot of useful languages to evolve on top of it, but Java-the-language
 isn't really a great example of language design.
Technically, the auto-boxing issue is related to the implementation rather than the language design (unless that behavior is actually in the spec). Generics, however, are utter garbage. They are useful as a convenience tool for not manually casting, and that's about it. Opinions may differ, of course.

Sean
Oct 21 2007
Julio César Carrascal Urquijo writes:
Sean Kelly wrote:
 Technically, the auto-boxing issue is related to the implementation 
 rather than the language design (unless that behavior is actually in the 
 spec).  Generics, however, are utter garbage.  They are useful as a 
 convenience tool for not manually casting and that's about it. Opinions
 may differ, of course.

 
 
 Sean
Yes, it is in "The Java Language Specification, Third Edition" at least:

    If the value p being boxed is true, false, a byte, a char in the range
    \u0000 to \u007f, or an int or short number between -128 and 127, then
    let r1 and r2 be the results of any two boxing conversions of p. It is
    always the case that r1 == r2.

-- 
Julio César Carrascal Urquijo
http://www.artelogico.com/
Oct 21 2007
Robert Fraser <fraserofthenight gmail.com> writes:
Julio César Carrascal Urquijo Wrote:

 Java-the-vm is a great implementation and I expect with anticipation a 
 lot of useful languages to evolve on top of it, but Java-the-language 
 isn't really a great example of language design.
C# was designed basically to be a better version of Java, since the paradigms and ideas work well for large corporate codebases, and to that extent it worked. I've done some work in C# and liked several of its features (it's a shame D doesn't have them), and I never shed a tear for checked exceptions.

I don't agree, however, that "Java-the-language isn't really a great example of language design." For a large codebase, Java is an extremely well-designed language (despite its couple of quirks), which is why it has been so successful, and continues to be so successful.

I think a lot of people with C/C++ backgrounds look at Java and cite the numerous problems with it, many of which can be wrapped up into the statement "It takes me two pages of code in Java to do what I can in one in C++" or "the performance sucks". The latter of those issues is mostly a non-issue, since if you're running a company with a large server-side software platform and 300 developers, or designing a new cellular phone you want everyone to easily deploy applications on, the software engineering benefits of Java far outweigh the slight additional hardware costs.

The first point (that coding in C/C++ is easier than in Java) is a more interesting one, but I think it stems from this implicit assumption that Java is just C++ with all the low-level, complex, and cool stuff ripped out. However, I think Java is more like Smalltalk with C/C++ syntax forced onto it. You _can_ write Java like you would C++ and end up with a total mess (look at Descent for a Java port of the DMD front-end... you'll see what I mean), but at its core, Java is meant to be written in a more object-oriented style than even C++, and the "objects-for-everything" idea actually works. It's just something you need to subscribe to pretty completely; if you're complaining about having to type "new" all over the place or the lack of free functions, you are not one with the zen of OO programming.

Of course, I'm a huge Smalltalk fan, and have only actually worked to any great extent (more than a few smaller projects) in Java and Perl (the latter of which has scarred me permanently), so I may be a bit biased. In fact, even when I look at D, I'm looking at it from the perspective of a highly OO programming style.
Oct 21 2007
parent =?ISO-8859-1?Q?Julio_C=E9sar_Carrascal_Urquijo?= writes:
Robert Fraser wrote:


 C# was designed basically to be a better version of Java, since the
 paradigms and ideas work well for large corporate codebases, and to that
 extent it worked. I've done some work in C# and liked several of its
 features (it's a shame D doesn't have them), and I never shed a tear for
 checked exceptions.
Of course it's strange to call Java a bad example of language design when C# had the benefit of hindsight of Java. But the hindsight argument doesn't apply to these three points. The problem I'm trying to point out is that Java-the-language was pretty much dormant since 1995, and then they added these *features* all at once, with the same broken behavior that they now have to support for who knows how many years. That's what I call bad language design.
 Of course, I'm a huge Smalltalk fan, and only actually worked to any great 
 extent (more than a few smaller projects) in Java and Perl (the latter of 
 which has scarred me permanently), so I may be a bit biased. In fact, even 
 when I look at D, I'm looking at it from the perspective of a highly OO 
 programming style.
I'm still learning Smalltalk (Squeak). I'm at the point where I can understand what a piece of code does, but I still don't get what it is that I gain by writing code in this style. Any pointers to obtain the Zen experience? :D

-- 
Julio César Carrascal Urquijo
http://www.artelogico.com/
Oct 21 2007
David Brown <dlang davidb.org> writes:
On Sat, Oct 20, 2007 at 04:49:39PM -0400, Mike wrote:

 I was wondering about the advantages of either language, and in
 which case one is more appropriate than the other and I hope you can help
 me out!
A couple of comparisons I can think of:

 - C# has nothing quite like D's nested classes; you end up keeping the
   parent class as a field in the child class, which is clumsy.

 - C# uses generics. Somewhat less flexible than templates, but less prone
   to strange errors. Some people find them more difficult to understand;
   I find them clearer.

 - Calling C functions is more work in C#, and it isn't as good at
   manipulating data structures directly.

 - The .NET libraries are much richer, a lot more like Tango.

Dave
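A minimal D sketch of the nested-class point (Outer and Inner are invented names): the inner instance carries an implicit reference to its enclosing object, so no explicit parent field is needed.

class Outer
{
    int x = 42;

    class Inner
    {
        // Outer's members are reachable through the implicit outer reference
        int twice() { return x * 2; }
    }
}

void main()
{
    auto o = new Outer;
    auto i = o.new Inner;   // tie this Inner instance to o
    assert(i.twice() == 84);
}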
Oct 20 2007
Ary Manzana <ary esperanto.org.ar> writes:
David Brown wrote:
 On Sat, Oct 20, 2007 at 04:49:39PM -0400, Mike wrote:
  - The .NET libraries are much richer, a lot more like Tango.
Especially the SortedList, which is... a dictionary! :-P (Well, that's the part I hate most about their library: collections... and also that the red-black tree is an internal class.)
Oct 20 2007
Jussi Jumppanen <jussij zeusedit.com> writes:
Yigal Chripun Wrote:

 3) i don't see enough commitment from MS to the .net platform. 
Just give it a few years. I think Microsoft's longer term vision is to have .NET everywhere and I mean everywhere.
Oct 21 2007
Walter Bright <newshound1 digitalmars.com> writes:
Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
Oct 21 2007
Derek Parnell <derek nomail.afraid.org> writes:
On Sun, 21 Oct 2007 19:19:39 -0700, Walter Bright wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
It's easier to change the functionality built into a VM than it is for
hard-coded silicon.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/10/2007 12:55:43 PM
Oct 21 2007
Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 On Sun, 21 Oct 2007 19:19:39 -0700, Walter Bright wrote:
 
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
It's easier to change the functionality built into a VM than it is for hard-coded silicon.
Since the VM ultimately runs on that silicon, it's hard to see how.
Oct 21 2007
Derek Parnell <derek nomail.afraid.org> writes:
On Sun, 21 Oct 2007 22:06:44 -0700, Walter Bright wrote:

 Derek Parnell wrote:
 On Sun, 21 Oct 2007 19:19:39 -0700, Walter Bright wrote:
 
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
It's easier to change the functionality built into a VM than it is for hard-coded silicon.
Since the VM ultimately runs on that silicon, it's hard to see how.
I suspect that we are talking about different things.

When I say "VM" I'm referring to a Virtual Machine, that is, a CPU instruction set that is emulated by software. Because it is a software-based emulation, it is easier/cheaper/faster to modify than silicon chips. The fact that a VM (the software) runs on a real machine is totally irrelevant to the reasons for having the VM.

For example, I might have a VM that enables me to run Commodore-64 executable files on my Intel PC, or another VM that runs Knuth's MIX instruction set. In many cases a VM is an idealized machine being emulated, and compilers can create object code for the idealized machine. This is then run on real machines of totally different architectures. If the idealized machine is enhanced, only the VM is updated, and the silicon chips running the VM don't have to be replaced. A real boon if you are selling software for the various proprietary CPUs embedded in devices in the mass consumer market.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/10/2007 5:14:37 PM
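To make the idea concrete, here is a toy D sketch of such an idealized machine - a made-up four-opcode stack machine, not the instruction set of any real VM:

import std.stdio;

// opcodes of the made-up machine
enum : ubyte { PUSH, ADD, PRINT, HALT }

void run(ubyte[] code)
{
    int[] stack;
    size_t pc = 0;
    for (;;)
    {
        switch (code[pc++])
        {
            case PUSH:  stack ~= code[pc++]; break;   // operand follows the opcode
            case ADD:   stack[$ - 2] += stack[$ - 1];
                        stack = stack[0 .. $ - 1];
                        break;
            case PRINT: writefln("%s", stack[$ - 1]); break;
            case HALT:  return;
            default:    assert(0);
        }
    }
}

void main()
{
    // the same "object code" runs unchanged on any host with a port of run()
    ubyte[] program = [PUSH, 2, PUSH, 3, ADD, PRINT, HALT];
    run(program);   // prints 5
}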
Oct 22 2007
Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 On Sun, 21 Oct 2007 22:06:44 -0700, Walter Bright wrote:
 
 Derek Parnell wrote:
 On Sun, 21 Oct 2007 19:19:39 -0700, Walter Bright wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
It's easier to change the functionality built into a VM than it is for hard-coded silicon.
Since the VM ultimately runs on that silicon, it's hard to see how.
I suspect that we are talking about different things. When I say "VM" I'm referring to a Virtual Machine, that is, a CPU instruction set that is emulated by software. Because it is a software based emulation, it is easier/cheaper/faster to modify that silicon chips. The fact that a VM (the software) runs on a real machine is totally irrelevant to the reasons for having the VM.
I mean a VM like the Java VM or .net VM.
 For example, I might have a VM that enables me to run Commodore-64
 executable files on my Intel PC. Or another VM that runs Knuth's MIX
 instruction set. In many cases a VM is an idealized machine being emulated,
 and compilers can create object code for the idealized machine. This is
 then run on real machines of totally different architectures. If the
 idealized machine is enhanced, only the VM is updated and the silicon chips
 running the VM don't have to be replaced. A real boon if you are selling
 software for the various proprietary CPUs embedded in devices to the mass
 consumer market.
If the source code is portable, i.e. there is no undefined or implementation defined behavior, there's no reason that the VM object code should be more portable than the source. (And remember all the troubles with Java VMs behaving differently?)
Oct 22 2007
David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 02:14:29AM -0700, Walter Bright wrote:

 If the source code is portable, i.e. there is no undefined or 
 implementation defined behavior, there's no reason that the VM object code 
 should be more portable than the source. (And remember all the troubles 
 with Java VMs behaving differently?)
In the smartphone market, source is almost never distributed. Having multiple architectures would require every software vendor to support every desired architecture. Having a VM allows them to easily distribute a product that works on all phones instead of one. For this market, at least, native code isn't really even an option. Of course, since nearly all smart phones use a single processor (ARM), this really isn't applicable. The smart phone people do like the sandbox aspect as well, though. David
Oct 22 2007
Robert Fraser <fraserofthenight gmail.com> writes:
Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.
Oct 21 2007
next sibling parent reply "Dave" <Dave_member pathlink.com> writes:
"Robert Fraser" <fraserofthenight gmail.com> wrote in message 
news:ffh727$1trc$1 digitalmars.com...
 Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.
Other than some runtime reflection that can be done with a VM (and not a static binary), I think the only concrete advantage is to compile one set of bytecode for all platforms instead of one binary for each platform. But then bytecode can be more easily "decompiled" and copied too. Either way, someone has to develop either a VM or a compiler for each platform. APIs are really more a function of a library than of a VM, IMO.

Runtime reflection aside, I can't think of anything a VM can do that a static compiler couldn't, with the possible (but largely unproven) exception of sometimes generating better code because of access to runtime info. In fact, static compilers / libraries can do most of that too if needed (and they can do it w/o the extra runtime overhead of profiling and re-compiling) by compiling some heuristics right into the binary.
Oct 21 2007
David Brown <dlang davidb.org> writes:
On Sun, Oct 21, 2007 at 11:21:37PM -0500, Dave wrote:

 Runtime reflection aside, I can't think of anything a VM can do that a 
 static compiler couldn't with the possible (but largely unproven) exception 
 of sometimes generating better code because of access to runtime info.
I believe most already do this kind of analysis. I'm not sure it helps, since there is plenty of other overhead to using a VM, so it probably just makes the VM use less costly. David
Oct 21 2007
parent reply "Dave" <Dave_member pathlink.com> writes:
"David Brown" <dlang davidb.org> wrote in message 
news:mailman.497.1193030905.16939.digitalmars-d puremagic.com...
 On Sun, Oct 21, 2007 at 11:21:37PM -0500, Dave wrote:

 Runtime reflection aside, I can't think of anything a VM can do that a 
 static compiler couldn't with the possible (but largely unproven) 
 exception of sometimes generating better code because of access to 
 runtime info.
I believe most already do this kind of analysis. I'm not sure it helps, since there is plenty of other overhead to using a VM, so it probably just makes the VM use less costly.
What I meant by 'largely unproven' is that when truly runtime-only info (like machine load) is taken into account, it is hard to prove that using it to generate different machine code actually makes a difference, but IIRC I've seen claims to that effect.

For the more reproducible kind of runtime info (like the model of x86 CPU), one static compiler that can compile binaries to make use of that is Intel's. It has a switch that will compile several sets of code and will run "the best" set depending on the chip.
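A hand-rolled version of that dispatch trick in D might look like this minimal sketch (the function names are invented; it assumes druntime's core.cpuid module for the sse2 query):

import core.cpuid : sse2;

// two implementations of the same routine (array lengths assumed equal)
void sumScalar(float[] a, float[] b)
{
    foreach (i, ref x; a)
        x += b[i];
}

void sumSse2(float[] a, float[] b)
{
    // a hand-vectorized variant would go here
    sumScalar(a, b);
}

// chosen once at startup, like the compiler's generated dispatch stub
void function(float[], float[]) sum;

static this()
{
    sum = sse2 ? &sumSse2 : &sumScalar;
}

After that, every call goes through sum() and pays only an indirect-call penalty, with no profiling or re-compilation at runtime.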
Oct 21 2007
Walter Bright <newshound1 digitalmars.com> writes:
Dave wrote:
 For the more reproducable kind of runtime info. (like model of x86 CPU), 
 one static compiler that can compile binaries to make use of that is 
 Intel. It has a switch that will compile several sets of code and will 
 run "the best" set depending on the chip.
I've been doing that since the 80's (generated code would have different paths for floating point depending on the hardware).
Oct 22 2007
Christopher Wright <dhasenan gmail.com> writes:
Dave wrote:
 
 "Robert Fraser" <fraserofthenight gmail.com> wrote in message 
 news:ffh727$1trc$1 digitalmars.com...
 Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.
Other than some runtime reflection that can be done with a VM (and not a static binary), I think the only concrete advantage is to compile one set of bytecode for all platforms instead of one binary for each platform. But then bytecode can be more easily "decompiled" and copied too. Either way someone has to develop either a VM or a compiler for each platform. API's are really more a function of a library than a VM, IMO. Runtime reflection aside, I can't think of anything a VM can do that a static compiler couldn't with the possible (but largely unproven) exception of sometimes generating better code because of access to runtime info. For example, static compilers / libraries can do most of that too if needed (and they can do it w/o the extra runtime overhead of profiling and re-compiling) by compiling some heuristics right in to the binary.
One possibility is to do profiling while the application is running and do further optimizations based on that. The questions are, is the VM performance hit worse than the optimizations, and is there a compelling reason not to do those optimizations always?
Oct 22 2007
parent reply "Dave" <Dave_member pathlink.com> writes:
"Christopher Wright" <dhasenan gmail.com> wrote in message 
news:ffi6lh$1cn5$1 digitalmars.com...
 Dave wrote:
 "Robert Fraser" <fraserofthenight gmail.com> wrote in message 
 news:ffh727$1trc$1 digitalmars.com...
 Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.
Other than some runtime reflection that can be done with a VM (and not a static binary), I think the only concrete advantage is to compile one set of bytecode for all platforms instead of one binary for each platform. But then bytecode can be more easily "decompiled" and copied too. Either way someone has to develop either a VM or a compiler for each platform. API's are really more a function of a library than a VM, IMO. Runtime reflection aside, I can't think of anything a VM can do that a static compiler couldn't with the possible (but largely unproven) exception of sometimes generating better code because of access to runtime info. For example, static compilers / libraries can do most of that too if needed (and they can do it w/o the extra runtime overhead of profiling and re-compiling) by compiling some heuristics right in to the binary.
One possibility is to do profiling while the application is running and do further optimizations based on that. The questions are, is the VM performance hit worse than the optimizations, and is there a compelling reason not to do those optimizations always?
That's what Sun Hotspot does, but I've rarely seen the results come out better than what a static compiler w/ the "-O2" switch can do, and I've often seen them come out worse. IIRC (for example) the Jet "Ahead of Time" Java compiler can often outperform the Sun VM.

Not that all this really matters for *most* code, where just compiling to native code is a big enough win. But I have seen the old 80-20 rule at work -- cases where 80% of the time is spent in 20% of the code, and the effort goes into making that 20% run faster -- so it's not a moot point either.
Oct 22 2007
Robert Fraser <fraserofthenight gmail.com> writes:
Dave Wrote:

 One possibility is to do profiling while the application is running and do 
 further optimizations based on that. The questions are, is the VM 
 performance hit worse than the optimizations, and is there a compelling 
 reason not to do those optimizations always?
That's what Sun Hotspot does, but I've rarely seen where the results are better than what a static compiler w/ the "-O2" switch can do and often seen where they are worse. IIRC (for example) the Jet "Ahead of Time" Java compiler can often outperform the Sun VM. Not that all this really matters for *most* code however, where just compiling to native code is a big enough win. But I have seen the old 80-20 rule at work -- cases where 80% of the time is spent on 20% of the code trying to make it run faster -- so it's not a moot point either.
Right now in-flight optimization rarely makes code that runs faster, but it's a new technology. In 10 years, I'm guessing that most code will run equally fast under a VM as native, and another 10 and the VM will be superior. Especially as multi-core architectures become more popular, I think this will be a big issue (since the VM can automatically parallelize loops, etc.).
Oct 22 2007
next sibling parent "Dave" <Dave_member pathlink.com> writes:
"Robert Fraser" <fraserofthenight gmail.com> wrote in message 
news:ffj0pl$iuq$1 digitalmars.com...
 Dave Wrote:

 One possibility is to do profiling while the application is running and 
 do
 further optimizations based on that. The questions are, is the VM
 performance hit worse than the optimizations, and is there a compelling
 reason not to do those optimizations always?
That's what Sun Hotspot does, but I've rarely seen where the results are better than what a static compiler w/ the "-O2" switch can do and often seen where they are worse. IIRC (for example) the Jet "Ahead of Time" Java compiler can often outperform the Sun VM. Not that all this really matters for *most* code however, where just compiling to native code is a big enough win. But I have seen the old 80-20 rule at work -- cases where 80% of the time is spent on 20% of the code trying to make it run faster -- so it's not a moot point either.
Right now in-flight optimization rarely makes code that runs faster, but it's a new technology. In 10 years, I'm guessing that most code will run equally fast under a VM as native, and another 10 and the VM will be superior. Especially as multi-core architectures become more popular, I think this will be a big issue (since the VM can automatically parallelize loops, etc.).
I've (literally) heard that same thing for the last 10 years. Sun's Hotspot has been in constant development for about that long too, not to mention probably several times the amount of research $ on VM's rather than static compilers. Same w/ .NET which started out life as Visual J++. For static multi-core/multi-thread optimization there is OpenMP and also the Intel and AMD MT and math libs. Sun has had Java and multi-CPU machines in mind since day one, back when they were one of the few large vendors of those types of systems. Vendors like Sun are probably a decade ahead of commodity Intel machines when it comes to hardware and operating system architecture, and they're the ones developing the high-end VM's. I think it's probably at the point now where just about any improvement made to VM's could be matched by the same improvement in static compilers and/or static libraries.
Oct 22 2007
Walter Bright <newshound1 digitalmars.com> writes:
Robert Fraser wrote:
 Right now in-flight optimization rarely makes code that runs faster,
 but it's a new technology. In 10 years, I'm guessing that most code
 will run equally fast under a VM as native, and another 10 and the VM
 will be superior. Especially as multi-core architectures become more
 popular, I think this will be a big issue (since the VM can
 automatically parallelize loops, etc.).
2 years ago, I attended a Java seminar by a Java expert who predicted that in 10 years, Java code would run as fast as C code. Since it's still 10 years out, it must be like chasing a mirage <g>.
Oct 22 2007
Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Robert Fraser wrote:
 Right now in-flight optimization rarely makes code that runs faster,
 but it's a new technology. In 10 years, I'm guessing that most code
 will run equally fast under a VM as native, and another 10 and the VM
 will be superior. Especially as multi-core architectures become more
 popular, I think this will be a big issue (since the VM can
 automatically parallelize loops, etc.).
2 years ago, I attended a Java seminar by a Java expert who predicted that in 10 years, Java code would run as fast as C code. Since it's still 10 years out, it must be like chasing a mirage <g>.
Deja moo! I've heard that bull before. <g> I think the whole thing's a fallacy. The only real advantage I can see that a JIT compiler has, is in being able to inline dynamically loaded functions. It may also have some minor advantages in cache efficiency. (In both cases, this is actually an advantage of JIT linking, not JIT compilation). In reality, speed optimization only matters inside the innermost loops, and you get the big speed gains by algorithm changes (even small ones). A JIT compiler would seem to have an inherent disadvantage whenever the bytecode contains less information than was created in the compiler's semantic analysis. This is certainly true of the Java/.NET bytecode, which is far too low level.
Oct 23 2007
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 Robert Fraser wrote:
 Right now in-flight optimization rarely makes code that runs faster,
 but it's a new technology. In 10 years, I'm guessing that most code
 will run equally fast under a VM as native, and another 10 and the VM
 will be superior. Especially as multi-core architectures become more
 popular, I think this will be a big issue (since the VM can
 automatically parallelize loops, etc.).
2 years ago, I attended a Java seminar by a Java expert who predicted that in 10 years, Java code would run as fast as C code. Since it's still 10 years out, it must be like chasing a mirage <g>.
Maybe he meant that in 10 years, Java code would run as fast as C code does *now*. :P And that is certainly to be expected. -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Oct 24 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Fraser wrote:
 Walter Bright Wrote:
 I've never been able to discover what the fundamental advantage of
 a VM is.
I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.
That isn't an advantage of the VM. It's an advantage of a language that has no implementation-defined or undefined behavior. Given that, the same portability results are achieved.
Oct 21 2007
next sibling parent reply David Brown <dlang davidb.org> writes:
On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:
 Robert Fraser wrote:
 Walter Bright Wrote:
 I've never been able to discover what the fundamental advantage of
 a VM is.
I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.
That isn't an advantage of the VM. It's an advantage of a language that has no implementation-defined or undefined behavior. Given that, the same portability results are achieved.
It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target. Dave
Oct 21 2007
next sibling parent reply Roberto Mariottini <rmariottini mail.com> writes:
David Brown wrote:
 On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:
[...]
 That isn't an advantage of the VM. It's an advantage of a language 
 that has no implementation-defined or undefined behavior. Given that, 
 the same portability results are achieved.
It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target.
And not only that: if my product is compiled for Java-CLDC it will work on any cell phone that supports CLDC, based on any kind of processor/architecture, including those I don't know of, even those that don't exist today and will be made in the future. Ciao
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Roberto Mariottini wrote:
 David Brown wrote:
 On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:
[...]
 That isn't an advantage of the VM. It's an advantage of a language 
 that has no implementation-defined or undefined behavior. Given that, 
 the same portability results are achieved.
It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target.
And not only that: if my product is compiled for Java-CLDC it will work on any cell phone that supports CLDC, based on any kind of processor/architecture, including those I don't know of, even those that don't exist today and will be made in the future.
Javascript is distributed in source code, and executes on a variety of machines. A VM is not necessary to achieve portability to machines unknown. What is necessary is a portable language design.
Oct 22 2007
parent reply Roberto Mariottini <rmariottini mail.com> writes:
Walter Bright wrote:
 Roberto Mariottini wrote:
[...]
 And not only that: if my product is compiled for Java-CLDC it will 
 work on any cell phone that supports CLDC, based on any kind of 
 processor/architecture, including those I don't know of, even 
 those that don't exist today and will be made in the future.
Javascript is distributed in source code, and executes on a variety of machines. A VM is not necessary to achieve portability to machines unknown. What is necessary is a portable language design.
Obviously, this is valid only if you want to distribute the sources, and sometimes you can't (e.g. royalties, sublicensing and the like). We are still talking only of the implementation, not the language itself. I consider the Javascript environment as a high level VM, and I still think that a compiled Javascript would be unusable. Even if technically the big difference between portable and non-portable resides in the language and the standard libraries, I think that not considering the hundreds of working VMs that exist today is narrow thinking. To force developers to distribute their sources excludes a big part of the software world as it is today. Making D compilable for the Java VM today would make it immediately portable to tens of platforms (and hundreds of cell phone models), today. Ciao
Oct 22 2007
next sibling parent reply "Dave" <Dave_member pathlink.com> writes:
"Roberto Mariottini" <rmariottini mail.com> wrote in message 
news:ffi95a$1ihb$1 digitalmars.com...
 Making D compilable for the Java VM today would make it immediately 
 portable to tens of platforms (and hundreds of cell phone models), today.
Making a D to C translator for that might actually make more sense, given the design of D and that all of the platforms would likely have a C compiler available. Then all that would be missing would be ease of distributing a single set of bytecode. Then again, binaries couldn't be reverse engineered as easily as bytecode either. In any case, GDC may have quite a few of those chips covered before either a D bytecode compiler or C2D was done <g>. Do the standard Java GUI libraries work the same for all cell phones, or in general does each cell phone vendor have their own specialized library? Walter had a great point earlier as well -- Is Java really "write once, run anywhere" especially where GUI's are concerned? I recall a lot of complaints where some things tended to work differently depending on the VM / platform but maybe those cases are rare nowadays.
 Ciao 
Oct 22 2007
parent Roberto Mariottini <rmariottini mail.com> writes:
Dave wrote:
 
 "Roberto Mariottini" <rmariottini mail.com> wrote in message 
 news:ffi95a$1ihb$1 digitalmars.com...
 Making D compilable for the Java VM today would make it immediately 
 portable to tens of platforms (and hundreds of cell phone models), 
 today.
Making a D to C translator for that might actually make more sense, given the design of D and that all of the platforms would likely have a C compiler available. Then all that would be missing would be ease of distributing a single set of bytecode. Then again, binaries couldn't be reverse engineered as easily as bytecode either. In any case, GDC may have quite a few of those chips covered before either a D bytecode compiler or C2D was done <g>.
I know of no cell phone with a C compiler today.
 Do the standard Java GUI libraries work the same for all cell phones, or 
 in general does each cell phone vendor have their own specialized 
 library?
MIDP and CDC are strict standards to which cell phone producers adhere. There are some vendor extensions, but they have had little success: the point of Java ME programming is to make your application/game work on any cell phone, so it is in the developer's interest to strictly apply the standard.
 Walter had a great point earlier as well -- Is Java really 
 "write once, run anywhere" especially where GUI's are concerned? I 
 recall a lot of complaints where some things tended to work differently 
 depending on the VM / platform but maybe those cases are rare nowadays.
Java really is "write once, run anywhere": I've never found a GUI portability problem. The problems are the programmers who don't write portable code (and this is independent of Java: you can write non-portable code in any language). Ciao
Oct 23 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Roberto Mariottini wrote:
 Walter Bright wrote:
 Javascript is distributed in source code, and executes on a variety of 
 machines. A VM is not necessary to achieve portability to machines 
 unknown. What is necessary is a portable language design.
Obviously, this is valid only if you want to distribute the sources, and sometimes you can't (e.g. royalties, sublicensing and the like).
Distributing Java class files is not secure, as good decompilers exist for them. Might as well distribute source.
 We are still talking only of the implementation, not the language 
 itself. I consider the Javascript environment as a high level VM, and I 
 still think that a compiled Javascript would be unusable.
It's true that Javascript uses a VM, but it doesn't use a standardized VM for which one distributes precompiled binaries too. Javascript is always compiled/interpreted directly from source code, and source code is how it's distributed.
 Even if technically the big difference between portable and non-portable 
 resides in the language and the standard libraries, I think that not 
 considering the hundreds of working VMs that exist today is 
 narrow thinking.
Considering that a C compiler exists for a far broader range of devices than VMs do, all that is needed is for the language to a) be popular or b) have huge resources from a company like Sun to finance development of all those VMs. Sun could just as easily have provided a generic back end & library.
 To force developers to distribute their sources excludes a big part of 
 the software world as it is today.
Not the Java world - decompilers are common and effective.
 Making D compilable for the Java VM today would make it immediately 
 portable to tens of platforms (and hundreds of cell phone models), today.
The Java VM is insufficiently powerful to use as a back end for D. It can't even do C.
Oct 22 2007
next sibling parent reply Ary Manzana <ary esperanto.org.ar> writes:
Walter Bright wrote:
 To force developers to distribute their sources excludes a big part of 
 the software world as it is today.
Not the Java world - decompilers are common and effective.
You can obfuscate the bytecode, which makes it very difficult to analyze and change it. (check for example ProGuard)
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Ary Manzana wrote:
 Walter Bright wrote:
 To force developers to distribute their sources excludes a big part 
 of the software world as it is today.
Not the Java world - decompilers are common and effective.
You can obfuscate the bytecode, which makes it very difficult to analyze and change it. (check for example ProGuard)
ProGuard just renames the identifiers. A source code obfuscator can do the same thing.
Oct 22 2007
parent Roberto Mariottini <rmariottini mail.com> writes:
Walter Bright wrote:
 Ary Manzana wrote:
 Walter Bright wrote:
 To force developers to distribute their sources excludes a big part 
 of the software world as it is today.
Not the Java world - decompilers are common and effective.
You can obfuscate the bytecode, which makes it very difficult to analyze and change it. (check for example ProGuard)
ProGuard just renames the identifiers. A source code obfuscator can do the same thing.
Well, ProGuard is a bit more advanced, but obviously a source code obfuscator can _always_ do more. A source code obfuscator, by the way, is much more complex than ProGuard. Ciao
Oct 23 2007
prev sibling next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Walter Bright Wrote:

 Considering that a C compiler exists for a far broader range of devices 
 than VMs do, all that is needed is for the language to a) be popular 
 or b) have huge resources from a company like Sun to finance 
 development of all those VMs. Sun could just as easily have 
 provided a generic back end & library.
I'm not so sure about that. For example, I did some development for BlackBerry devices, which don't have a native code generator (or spec) available outside RIM. All external BlackBerry applications must be deployed in Java. This has the added advantage of security and reliability, since there's no way an errant application can break the entire device, and allows RIM to change the instruction set architecture at any time. Of course, that distribute-binaries-as-source thing would work, too, but imagine sticking a whole lexer/parser/semantic/code generator on a mobile device... that processing power is better spent actually executing the application.
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Fraser wrote:
 Walter Bright Wrote:
 
 Considering that a C compiler exists for a far broader range of
 devices than VMs do, all that is needed is for the language to
 a) be popular or b) have huge resources from a
 company like Sun to finance development of all those VMs. Sun could
 just as easily have provided a generic back end & library.
I'm not so sure about that. For example, I did some development for BlackBerry devices, which don't have a native code generator (or spec) available outside RIM. All external BlackBerry applications must be deployed in Java.
Why couldn't RIM provide a back end as easily as a Java VM? Like I said, a simple back end could be as easy as:

    push operand
    push operand
    call ADD
    pop result

Notice how close that looks to Java bytecode! But it'll still execute much faster. (A tiny sketch of such a back end follows at the end of this post.)
 This has the added advantage of security
 and reliability, since there's no way an errant application can break
 the entire device, and allows RIM to change the instruction set
 architecture at any time.
If the language has no pointers, and RIM provides the compiler for it, that is just as secure.
 Of course, that distribute-binaries-as-source thing would work, too,
 but imagine sticking a whole lexer/parser/semantic/code generator on
 a mobile device... that processing power is better spent actually
 executing the application.
It's about 500K of rom needed. And the code will run several times faster, even with a simplistic code generator, which will make up for it.
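To make that concrete, here's a toy sketch of such a back end in D. Everything here is invented for illustration (compileRPN isn't from any library, and this is nobody's real code generator); it just turns an RPN token stream into the pseudo-assembly above:

    import std.stdio;
    import std.string;  // split

    void emit(char[] line) { writefln("    %s", line); }

    // One token in, one instruction out -- that's the whole "back end".
    void compileRPN(char[] expr)
    {
        foreach (tok; split(expr))
        {
            if (tok == "+")
                emit("call ADD");     // pops two operands, pushes the sum
            else
                emit("push " ~ tok);  // an operand: just push it
        }
        emit("pop result");
    }

    void main()
    {
        compileRPN("2 3 +");  // emits: push 2 / push 3 / call ADD / pop result
    }

A real back end layers instruction selection and register allocation on top, but the skeleton really is that small.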
Oct 22 2007
next sibling parent Sean Kelly <sean f4.ca> writes:
Walter Bright wrote:
 Robert Fraser wrote:
 Walter Bright Wrote:

 Considering that a C compiler exists for a far broader range of
 devices than VMs do, all the motivation that is needed is the
 language needs to be a) popular or b) have huge resources from a
 company like Sun to finance development of all those VMs. Sun could
 just as easily have provided a generic back end & library.
I'm not so sure about that. For example, I did some development for BlackBerry devices, which don't have a native code generator (or spec) available outside RIM. All external BlackBerry applications must be deployed in Java.
Why couldn't RIM provide a back end as easily as a Java VM? Like I said, a simple back end could be as easy as:

    push operand
    push operand
    call ADD
    pop result

Notice how close that looks to Java bytecode! But it'll still execute much faster.
The only advantage I've been able to think of for a VM is language interoperability. The advantage of a VM over just an established calling convention and such is that it has better and more "native" support for garbage collected languages. This was the point of .NET so far as I'm aware (i.e. it was a COM replacement). Sean
Oct 22 2007
prev sibling parent reply David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:

 It's about 500K of rom needed. And the code will run several times faster, 
 even with a simplistic code generator, which will make up for it.
Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes. David
Oct 22 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
David Brown wrote:
 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:
 
 It's about 500K of rom needed. And the code will run several times 
 faster, even with a simplistic code generator, which will make up for it.
Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.
It is a native compiler if it directly executes Java bytecodes!
Oct 22 2007
parent David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 07:46:52PM -0700, Walter Bright wrote:
 David Brown wrote:
 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:
 It's about 500K of rom needed. And the code will run several times 
 faster, even with a simplistic code generator, which will make up for it.
Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.
It is a native compiler if it directly executes Java bytecodes!
But it doesn't have to. Some phones will execute them directly, some will JIT or even simulate them. David
Oct 22 2007
prev sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
David Brown Wrote:

 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:
 
 It's about 500K of rom needed. And the code will run several times faster, 
 even with a simplistic code generator, which will make up for it.
Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.
It doesn't, but I think it might be stretching my NDA to explain how it actually works.
Oct 22 2007
parent reply David Brown <dlang davidb.org> writes:
On Tue, Oct 23, 2007 at 12:43:08AM -0400, Robert Fraser wrote:
David Brown Wrote:

 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:
 
 It's about 500K of rom needed. And the code will run several times faster, 
 even with a simplistic code generator, which will make up for it.
Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.
It doesn't, but I think it might be stretching my NDA to explain how it actually works.
Ok, vast oversimplification, but if it is ARM based, it is probably Jazelle based, which, depending on the implementation, can either directly execute bytecodes or provide a lot of support for JIT. See <http://www.arm.com/products/esd/jazelle_home.html>, which has plenty of non-NDA stuff people can read. David
Oct 22 2007
parent Robert Fraser <fraserofthenight gmail.com> writes:
David Brown wrote:
 On Tue, Oct 23, 2007 at 12:43:08AM -0400, Robert Fraser wrote:
 David Brown Wrote:

 On Mon, Oct 22, 2007 at 03:53:33PM -0700, Walter Bright wrote:

 It's about 500K of rom needed. And the code will run several times 
faster, even with a simplistic code generator, which will make up for it. Doubtful that it would be faster, since the processor that they use directly executes Java bytecodes.
It doesn't, but I think it might be stretching my NDA to explain how it actually works.
Ok, vast oversimplification, but if it is arm based, it is probably Jazelle based, which, depending on implementation can either directly execute bytecodes or has a lot of support for JIT. <http://www.arm.com/products/esd/jazelle_home.html> which has plenty of non-NDA stuff people can read. David
BlackBerry doesn't use ARM... But this is getting quite off-topic.
Oct 23 2007
prev sibling parent reply Roberto Mariottini <rmariottini mail.com> writes:
Walter Bright wrote:
 Roberto Mariottini wrote:
 Walter Bright wrote:
 Javascript is distributed in source code, and executes on a variety 
 of machines. A VM is not necessary to achieve portability to machines 
 unknown. What is necessary is a portable language design.
Obviously, this is valid only if you want to distribute the sources, and sometimes you can't (e.g. royalties, sublicensing and the like).
Distributing Java class files is not secure, as good decompilers exist for them. Might as well distribute source.
This is something lawyers don't know. I have seen a couple of non-source Java library licenses.
 We are still talking only of the implementation, not the language 
 itself. I consider the Javascript environment as a high level VM, and 
 I still think that a compiled Javascript would be unusable.
It's true that Javascript uses a VM, but it doesn't use a standardized VM for which one distributes precompiled binaries too. Javascript is always compiled/interpreted directly from source code, and source code is how its distributed.
That's why I've said "High Level" VM. I see Javascript and Java as two equivalent VMs (+ standard libraries).
 Even if technically the big difference between portable and 
 non-portable resides in the language and the standard libraries, I 
 think that not considering the hundreds of working VMs that exist 
 today is narrow thinking.
Considering that a C compiler exists for a far broader range of devices than VMs do, all the motivation that is needed is the language needs to be a) popular or b) have huge resources from a company like Sun to finance development of all those VMs. Sun could just as easily have provided a generic back end & library.
I've never said that VMs are better than C. I'm saying that VMs are there today, and they work, today. They work the "Compile-Once-Run-Everywhere" way.
 To force developers to distribute their sources excludes a big part of 
 the software world as it is today.
Not the Java world - decompilers are common and effective.
Don't say it to your attorney.
 Making D compilable for the Java VM today would make it immediately 
 portable to tens of platforms (and hundreds of cell phone models), 
 today.
The Java VM is insufficiently powerful to use as a back end for D. It can't even do C.
I thought that the Java VM was Turing-complete :-) I'm not an expert on compilers and VMs, so I believe you. It's a pity that I can't use D on those cell phones :-( Ciao
Oct 23 2007
parent reply Reiner Pope <some address.com> writes:
Roberto Mariottini wrote:
 Walter Bright wrote:
 The Java VM is insufficiently powerful to use as a back end for D. It 
 can't even do C.
I thought that the Java VM was Turing-complete :-) I'm not an expert of compilers and VMs, so I believe you. It's a pity that I can't use D on those cell phones :-( Ciao
D can segfault; Java can't. Thus D is more powerful. :-) -- Reiner
Oct 23 2007
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Reiner Pope wrote:
 
 D can segfault; Java can't. Thus D is more powerful. :-)
 
    -- Reiner
Lol nice! I'm gonna quote you on that one :P -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Oct 24 2007
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
David Brown wrote:
 It's still a VM advantage.  It helps the model where there are many
 developers who only distribute binaries.  If they are distributing for a
 VM, they only have to distribute a single binary.  Otherwise, they still
 would have to recompile for every possible target.
With a portable language, it is not necessary to distribute binaries. You can distribute the *source* code! Then, the user can just recompile it on the fly (this can be automated so the user never has to actually invoke the compiler). Just like how Javascript is distributed as source.
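As a sketch of how automated that can be, here's a hypothetical "run from source" wrapper in D. The -run switch really exists in dmd; the wrapper tool itself and its behavior are made up for illustration:

    import std.process;  // system()
    import std.stdio;

    // Hypothetical wrapper: the user launches an app by its source file,
    // and compilation happens behind the scenes.
    int main(char[][] args)
    {
        if (args.length < 2)
        {
            writefln("usage: %s app.d", args[0]);
            return 1;
        }
        // A real tool would cache the built binary and rebuild only when
        // the source changes; this just compiles and runs every time.
        return system("dmd -run " ~ args[1]);
    }

The user types the program's name and never sees the compiler.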
Oct 22 2007
next sibling parent reply Anders F Björklund <afb algonet.se> writes:
Walter Bright wrote:

 It's still a VM advantage.  It helps the model where there are many
 developers who only distribute binaries.  If they are distributing for a
 VM, they only have to distribute a single binary.  Otherwise, they still
 would have to recompile for every possible target.
With a portable language, it is not necessary to distribute binaries. You can distribute the *source* code! Then, the user can just recompile it on the fly (this can be automated so the user never has to actually invoke the compiler). Just like how Javascript is distributed as source.
Too bad that D isn't such a language then? One "version" for each platform, and no autoconf or other helpers to cope with differences... As much as I do like D, the C language is *much* more portable - at least between the different GNU platforms (i.e. including MinGW too). --anders
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Anders F Björklund wrote:
 Walter Bright wrote:
 With a portable language, it is not necessary to distribute binaries. 
 You can distribute the *source* code! Then, the user can just 
 recompile it on the fly (this can be automated so the user never has 
 to actually invoke the compiler). Just like how Javascript is 
 distributed as source.
 Too bad that D isn't such a language then? One "version" for each platform, and no autoconf or other helpers to cope with differences...
These are not problems with the language, but with the relative lack of resources applied to the dev tools.
 As much as I do like D, the C language is *much* more portable - at
 least between the different GNU platforms (i.e. including MinGW too).
If D had its own VM, the same issue would exist, because you'd have to have staff to port the VM to all those platforms and debug them.
Oct 22 2007
parent Anders F Björklund <afb algonet.se> writes:
Walter Bright wrote:

 Too bad that D isn't such a language then? One "version" for each 
 platform, and no autoconf or other helpers to cope with differences...
These are not problems with the language, but with the relative lack of resources applied to the dev tools.
Agreed, not in the (extended) implementation of the language itself - just in the language specification and standard library. Same result. I just wish there had been a better solution to the linux/Unix/Posix versioning. --anders
Oct 22 2007
prev sibling parent reply Joel Lucsy <jjlucsy gmail.com> writes:
Walter Bright wrote:
 With a portable language, it is not necessary to distribute binaries. 
 You can distribute the *source* code! Then, the user can just recompile 
 it on the fly (this can be automated so the user never has to actually 
 invoke the compiler). Just like how Javascript is distributed as source.
.Net does not run in a VM; it is JIT compiled down to machine code. Assemblies *are* essentially source code. And, I believe, in most cases Javascript is either run on a VM or JIT compiled just like .Net. And I suspect most browsers currently don't do JIT. -- Joel Lucsy "The dinosaurs became extinct because they didn't have a space program." -- Larry Niven
Oct 22 2007
parent reply Radu <radu.racariu void.space> writes:
Joel Lucsy wrote:
 Walter Bright wrote:
 With a portable language, it is not necessary to distribute binaries. 
 You can distribute the *source* code! Then, the user can just 
 recompile it on the fly (this can be automated so the user never has 
 to actually invoke the compiler). Just like how Javascript is 
 distributed as source.
.Net does not run in a VM; it is JIT compiled down to machine code. Assemblies *are* essentially source code. And, I believe, in most cases Javascript is either run on a VM or JIT compiled just like .Net. And I suspect most browsers currently don't do JIT.
Any system or collection of services that translates an abstract instruction set to a concrete one is a virtual machine: http://en.wikipedia.org/wiki/Virtual_machine (Process virtual machine). Leaving aside the MS propaganda, the implementation of such a system can be done as an interpreter, a JIT, a combination of both plus a runtime profiler, or as AOT + JIT (+ interpreter). Currently MS's .Net uses a JIT and sometimes an AOT (ngen) implementation (they work together), while the Sun Java implementation uses a combination of an interpreter, a JIT and a runtime profiler. Java has a larger set of implementations, including AOT + JIT (the JET compiler), AOT only or AOT + interpreter (GCJ), interpreter (SableVM), and JIT (Cacao). *AOT: http://en.wikipedia.org/wiki/AOT_compiler *JIT: http://en.wikipedia.org/wiki/Just-in-time_compilation
Oct 22 2007
parent reply Christopher Wright <dhasenan gmail.com> writes:
Radu wrote:
 Joel Lucsy wrote:
 Walter Bright wrote:
 With a portable language, it is not necessary to distribute binaries. 
 You can distribute the *source* code! Then, the user can just 
 recompile it on the fly (this can be automated so the user never has 
 to actually invoke the compiler). Just like how Javascript is 
 distributed as source.
.Net does not run in a VM; it is JIT compiled down to machine code. Assemblies *are* essentially source code. And, I believe, in most cases Javascript is either run on a VM or JIT compiled just like .Net. And I suspect most browsers currently don't do JIT.
Any system or collection of services that translates an abstract instruction set to a concrete one is a virtual machine:
<nitpick> Rather, it is a virtual machine if it executes abstract instructions, or interprets them for immediate execution. Compilers aren't VMs. </nitpick>
Oct 22 2007
parent reply Joel Lucsy <jjlucsy gmail.com> writes:
Christopher Wright wrote:
 <nitpick>
 Rather, it is a virtual machine if it executes abstract instructions, or 
 interprets them for immediate execution. Compilers aren't VMs.
 </nitpick>
<grumbling> Bah, in that case DMD is an AOT VM, as it compiles abstract instructions (the D language). Maybe it's just me, but I really don't see the distinction between where it gets compiled. Either you do it before distribution, or, like the .Net runtime, it does it on the client side. Without an interpreter, the compiled code is run directly. The .Net runtime from MS talks directly to the Win32 dlls. The CAS will block certain calls, thereby looking like it's a VM, but really it's not. There is no "virtualization" going on. And if you think there is, I task you to show me where. </grumbling> -- Joel Lucsy "The dinosaurs became extinct because they didn't have a space program." -- Larry Niven
Oct 22 2007
parent Radu <radu.racariu void.space> writes:
Joel Lucsy wrote:
 Christopher Wright wrote:
 <nitpick>
 Rather, it is a virtual machine if it executes abstract instructions, 
 or interprets them for immediate execution. Compilers aren't VMs.
 </nitpick>
<grumbling> Bah, in that case DMD is an AOT VM, as it compiles abstract instructions (the D language). Maybe it's just me, but I really don't see the distinction between where it gets compiled. Either you do it before distribution, or, like the .Net runtime, it does it on the client side. Without an interpreter, the compiled code is run directly. The .Net runtime from MS talks directly to the Win32 dlls. The CAS will block certain calls, thereby looking like it's a VM, but really it's not. There is no "virtualization" going on. And if you think there is, I task you to show me where. </grumbling>
A compiler is still a compiler, as it translates human-readable code into machine-readable code; you really can't blur that line. What a JIT VM does is translate one form of machine-readable code into another, concrete one in "realtime", at the point of execution. And JIT is an implementation detail of a process virtual machine; the virtualization lives in that JIT and runtime, as it verifies CIL, parses/compiles/optimizes it into x86 opcodes, and applies different policies to how that code runs. Any process VM talks directly with the host OS and permits access to/from the controlled execution environment to the host one (with the required security checks); hell, even the machine VMs (VMware, Parallels) do that now with network shares, drag&drop and Unity. If you really want to pretend that .Net is some kind of compiler back end, then you must admit that C compilers are also JITs, in the light of how Ubuntu does its application distribution: you have packages with abstract code (mostly C and C++), an AOT VM (GCC), and there you go, one hell of an AOT VM :)
Oct 23 2007
prev sibling parent reply Michael P <dontspam me.com> writes:
You say it's more portable but you need a VM and often a compiler instead of
just having a compiler.

Distributing a single binary could be achieved by encrypting the source code and
sending it to your client. Then you could have a compiler that knows the key to
it (or a compiler that gets the key from a server), so it takes the code,
decrypts it, and compiles it at the same time.

The problem with today's VMs is that they are slow (and it's not just a myth, try
it for yourself!). The argument that people will not notice the difference is not
true. Just starting a VM often takes far too long. That they often bring a huge
standard API is very good, but that is because it's necessary to get people to
start programming in them (people are lazy).

VMs don't solve anything IMHO; it's just easier to use them than not to
(greater security and control over what is happening at runtime, and so on).

I did some Pascal programming on a PDA itself (a Palm Pilot) and it worked
perfectly. I didn't have to learn new tricks or anything like that (unlike
Symbian's C++ API).

As a side note, I'm taking a C++ course at my university and the lectures go
something like this: bla, bla, bla, undefined behaviour, bla, bla, bla,
undefined, bla, bla, bla, undefined. So code can work with one compiler but not
with another. In fact our final exam will be about avoiding pitfalls, not how to
"code" in C++. This is a good example of a language that is painful to port to
different platforms.

David Brown Wrote:

 On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:
 Robert Fraser wrote:
 Walter Bright Wrote:
 I've never been able to discover what the fundamental advantage of
 a VM is.
I'm sure there are a lot of advantages, but here's one I can think of off the top of my head: say you're designing an application for mobile devices/smartphones. There are a lot of different phones out there, probably all with different APIs (or no native API accessible outside the company that made the phone), but if your software is running in a VM it'll run the same everywhere. Now say you're a cell phone manufacturer introducing a new smartphone -- adding the VM makes all the software written to use that platform instantly compatible with your device.
That isn't an advantage of the VM. It's an advantage of a language that has no implementation-defined or undefined behavior. Given that, the same portability results are achieved.
It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target. Dave
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Michael P wrote:
 Distributing a single binary could be achieved by encrypting the source
 code and sending it to your client. Then you could have a compiler
 that knows the key to it (or a compiler that gets the key from a
 server), so it takes the code, decrypts it, and compiles it at the
 same time.
I once went through the design of encrypting source, and concluded it wasn't very practical (look at CSS for DVDs!). Also, it's pretty clear that VM bytecode does a lousy job of obfuscating source - good Java byte code decompilers exist. You might as well distribute source - after running it through a comment stripper, of course.
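A comment stripper, incidentally, is a few dozen lines. Here's a minimal sketch in D -- it only knows // and /* */ (real D source also has nesting /+ +/ comments, and string literals that can contain comment-like text, which this ignores):

    import std.stdio;

    char[] stripComments(char[] src)
    {
        char[] result;
        size_t i = 0;
        while (i < src.length)
        {
            if (i + 1 < src.length && src[i] == '/' && src[i + 1] == '/')
            {
                while (i < src.length && src[i] != '\n')
                    i++;  // drop everything up to the end of line
            }
            else if (i + 1 < src.length && src[i] == '/' && src[i + 1] == '*')
            {
                i += 2;
                while (i + 1 < src.length && !(src[i] == '*' && src[i + 1] == '/'))
                    i++;  // drop the comment body
                i += 2;   // and the closing */
            }
            else
                result ~= src[i++];  // ordinary character: keep it
        }
        return result;
    }

    void main()
    {
        writefln(stripComments("int x; // gone\nint /* gone */ y;".dup));
    }

The stripping is the easy part; the problem, as with CSS, is that the decryption key has to ship to the same people you're hiding the source from.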
Oct 22 2007
next sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 21:27:47 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Michael P wrote:
 Distributing a single binary could be achieved by encrypting the source
 code and sending it to your client. Then you could have a compiler
 that knows the key to it (or a compiler that gets the key from a
 server), so it takes the code, decrypts it, and compiles it at the
 same time.
I once went through the design of encrypting source, and concluded it wasn't very practical (look at CSS for DVDs!). Also, it's pretty clear that VM bytecode does a lousy job of obfuscating source - good Java byte code decompilers exist. You might as well distribute source - after running it through a comment stripper, of course.
Source code obfuscators for most interpreted languages exist, as well as obfuscators for bytecode (VM languages) which make decompilation very hard. -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 22 Oct 2007 11:27:47 -0700, Walter Bright wrote:

 Michael P wrote:
 Distributing a single binary could be achieved by encrypting the source
 code and sending it to your client. Then you could have a compiler
 that knows the key to it (or a compiler that gets the key from a
 server), so it takes the code, decrypts it, and compiles it at the
 same time.
I once went through the design of encrypting source, and concluded it wasn't very practical (look at CSS for DVDs!). Also, it's pretty clear that VM bytecode does a lousy job of obfuscating source - good Java byte code decompilers exist. You might as well distribute source - after running it through a comment stripper, of course.
I work daily with a language called Progress. It is a 4GL-style language used primarily with large databases. Anyhow, it 'compiles' to a type of p-Code and we distribute our apps using its encrypted source facility. The run-time application server executes the p-Code in a VM. We have been doing this since 1994. It is very fast and applications are transportable to other architectures without changing the source code. I've moved applications from System V (Olivetti) to VAX-VMS to Red Hat without having to even recompile. It is practical to encrypt source code. VM's can be bloody fast. One can distribute portable applications without compromising intellectual property. I regard your point of view as blinkered. It seems to me that your opinion could be paraphrased as "if we had a perfect world we wouldn't have to solve problems". There is a role for VM languages and there is a role for native-code languages. It is not an either/or situation. -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Oct 22 2007
next sibling parent Jascha Wetzel <firstname mainia.de> writes:
Derek Parnell wrote:
 I regard your point of view as blinkered. It seems to me that your opinion
 could be paraphrased as "if we had a perfect world we wouldn't have to
 solve problems". There is a role for VM languages and there is a role for
 native-code languages. It is not an either/or situation.
true, of course. it's all a question of choosing the right tools for the task. IMHO, the problem is that interpreted and VM languages have become so popular that they are often deployed in the wrong places. people start writing anything in the language they like or know best, regardless of whether it's the right tool for the job, and half-baked arguments are being used to justify that. it has become necessary to point out what native tools can do at any given opportunity. this especially includes rectifying several myths about VMs.
Oct 23 2007
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 On Mon, 22 Oct 2007 11:27:47 -0700, Walter Bright wrote:
 I once went through the design of encrypting source, and concluded it 
 wasn't very practical (look at CSS for DVDs!). Also, it's pretty clear 
 that VM bytecode does a lousy job of obfuscating source - good Java byte 
 code decompilers exist.

 You might as well distribute source - after running it through a comment 
 stripper, of course.
I work daily with a language called Progress. It is a 4GL-style language used primarily with large databases. Anyhow, it 'compiles' to a type of p-Code and we distribute our apps using its encrypted source facility. The run-time application server executes the p-Code in a VM. We have been doing this since 1994. It is very fast and applications are transportable to other architectures without changing the source code.
That's achievable with a language that is defined with portable semantics in mind. A VM doesn't contribute to it.
 I've moved
 applications from System V (Olivetti) to VAX-VMS to Red Hat without having
 to even recompile.
Since you didn't have to change the source code, either, it doesn't make much difference if recompilation was necessary or not.
 It is practical to encrypt source code.
Since the people you are trying to hide it from must have the decryption keys in order to use it, it is inherently insecure. All it takes is one motivated person to reverse engineer it and release a crack, and then *all* of the source is available to *everyone*. It happens with DRM stuff all the time.
 VM's can be bloody fast.
They can be fast enough, but they'll never be faster than native code.
 One can distribute portable applications without compromising intellectual
 property.
All it takes is one motivated hacker, and then *all* of your stuff is compromised.
 I regard your point of view as blinkered. It seems to me that your opinion
 could be paraphrased as "if we had a perfect world we wouldn't have to
 solve problems". There is a role for VM languages and there is a role for
 native-code languages. It is not an either/or situation.
I've implemented both VMs and native compilers; I know intimately how they work. I don't believe that the claims made for VMs are justified. BTW, because of the way the Java VM bytecodes are defined, they are particularly easy to decompile.
Oct 27 2007
prev sibling parent Charles D Hixson <charleshixsn earthlink.net> writes:
Walter Bright wrote:
 Robert Fraser wrote:
 Walter Bright Wrote:
 I've never been able to discover what the fundamental advantage of
 a VM is.
I'm sure there are a lot of advantages, but here's one I can think of ...
That isn't an advantage of the VM. It's an advantage of a language that has no implementation-defined or undefined behavior. Given that, the same portability results are achieved.
I'm not sure what the reason is, but programs in languages running on VM's (or otherwise interpreted) seem to be much better at introspection. This isn't just Java. LISP had to work quite hard to become a compiled language, but interpreters were available quickly, and the ability to do introspection easily during interpretation was a large chunk of the reason. N.B.: This doesn't mean that compiled languages can't introspect. After all, if you analyze everything down to assembler, it's all the same instructions. But it appears to be a lot more difficult.
Oct 22 2007
prev sibling next sibling parent Jussi Jumppanen <jussij zeusedit.com> writes:
Walter Bright Wrote:

 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
When I said everywhere, I guess I was thinking more in terms of software. For example, I've heard stories that the latest Microsoft SQL Server lets you embed .NET code (C# stored procedures, for instance). I suspect C# will become the embedded scripting language of choice for all Microsoft software, as it gives the scripts access to the .NET framework.
Oct 21 2007
prev sibling next sibling parent reply 0ffh <spam frankhirsch.net> writes:
Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a VM is.
I don't think there are any very fundamental advantages. But there sure seem to be a few things that make them attractive for some people. The most convincing of these revolve around neither run-time nor compile-time, but around write-time issues. In short: They try to make the language implementor's life easy. We see: The exact opposite of what you're trying to achieve (using C++). Regards, Frank Appendix: "Reasons for having a VM" 1. They are a way to separate the compiler back-end from the rest of the compiler. Clearly you wouldn't have to implement the VM in this scenario. 2. As far as the oldest VM I know designed for a specific language to be executed in is concerned: "UCSD p-System began around 1977 as the idea of UCSD's Kenneth Bowles, who believed that the number of new computing platforms coming out at the time would make it difficult for new programming languages to gain acceptance." (or that's what Wikipedia says). 3. From hxxp://en.wikipedia.org/wiki/P-code_machine: "a) For porting purposes. It is much easier to write a small (compared to the size of the compiler) p-code interpreter for a new machine, as opposed to changing a compiler to generate native code for the same machine. b) For quickly getting a compiler up and running. Generating machine code is one of the more complicated parts of writing a compiler. By comparison, generating p-code is much easier. c) Size constraints. Since p-code is based on an ideal virtual machine, many times the resulting p-code is much smaller than the same program translated to machine code. d) For debugging purposes. Since p-code is interpreted, the interpreter can apply many additional runtime checks that are harder to implement with native code."
Oct 22 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
0ffh wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a 
 VM is.
I don't think there are any very fundamental advantages. But there sure seem to be a few things that make them attractive for some people. The most convincing of these revolve around neither run-time nor compile-time, but around write-time issues. In short: They try to make the language implementor's life easy. We see: The exact opposite of what you're trying to achieve (using C++). Regards, Frank Appendix: "Reasons for having a VM" 1. They are a way to separate the compiler back-end from the rest of the compiler. Clearly you wouldn't have to implement the VM in this scenario.
This has been done other ways - see gcc, where all the language front ends share a common optimizer/back end. David Friedman used that to implement gdc. LLVM is another project aiming to do the same thing.
 2. As far as the oldest VM I know designed for a specific language to be 
 executed in is concerned: "UCSD p-System began around 1977 as the idea 
 of UCSD's Kenneth Bowles, who believed that the number of new computing 
 platforms coming out at the time would make it difficult for new 
 programming languages to gain acceptance." (or that's what Wikipedia says).
I know about the P-system. It was ahead of its time.
 3. From hxxp://en.wikipedia.org/wiki/P-code_machine:
 "a) For porting purposes. It is much easier to write a small (compared 
 to the size of the compiler) p-code interpreter for a new machine, as 
 opposed to changing a compiler to generate native code for the same 
 machine.
Interpreted VMs tend to suck. The good ones include JITs, which are full-blown compiler optimizers and back ends. Even the code from a brain-dead simple code generator will run 10x faster than an interpreter.
  b) For quickly getting a compiler up and running. Generating machine 
 code is one of the more complicated parts of writing a compiler. By 
 comparison, generating p-code is much easier.
Generating *good* code is hard. Most CPU instruction sets are actually not much more complex than p-code, if you're not trying to generate optimal code. You can do RPN stack machine code generation for the x86 very simply, for example. Heck, you can generate code that is a stream of *function calls* for each operation (often called 'threaded code') - there's a tiny sketch of that at the end of this post.
  c) Size constraints. Since p-code is based on an ideal virtual machine, 
 many times the resulting p-code is much smaller than the same program 
 translated to machine code.
P-code does tend to be smaller, that's true. Except that the VM's bloat tends to way overwhelm any size savings in the executable code.
  d) For debugging purposes. Since p-code is interpreted, the interpreter 
 can apply many additional runtime checks that are harder to implement 
 with native code."
That's a crock.
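Here's roughly what that threaded-code idea looks like, sketched in D (all the names are invented for illustration; the point is just that the whole "VM" collapses to a loop over function pointers):

    import std.stdio;

    int[16] stack;
    int sp;

    void push2() { stack[sp++] = 2; }
    void push3() { stack[sp++] = 3; }
    void add()   { sp--; stack[sp - 1] += stack[sp]; }
    void print() { writefln("%d", stack[sp - 1]); }

    void main()
    {
        // "2 3 + print" compiled into a stream of function calls:
        void function()[] code = [ &push2, &push3, &add, &print ];

        foreach (op; code)
            op();  // the entire "interpreter" is this loop
    }

Inlining those calls and picking registers buys more speed from there, but none of it needs a VM.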
Oct 22 2007
next sibling parent reply 0ffh <spam frankhirsch.net> writes:
Walter Bright wrote:
 [...] You can do RPN stack machine code generation for the x86 very
 simply, for example.
Yup, I've done my Forth. An experience I found very instructive. =)
 Heck, you can generate code that is a stream  of *function calls* for
 each operation (often called 'threaded code').
Heck, the metaprogramming capabilities are a dream! It's just that *tiny* bit too low level for my tastes. Regards, Frank
Oct 22 2007
parent 0ffh <spam frankhirsch.net> writes:
0ffh wrote about Forth (sorry for the self-quote!):
 It's just that *tiny* bit too low level for my tastes.
Actually, this was my latest try at solving this: http://wiki.dprogramming.com/uploads/Drat/grace2.zip Regards, Frank
Oct 22 2007
prev sibling parent David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 02:29:04AM -0700, Walter Bright wrote:

  d) For debugging purposes. Since p-code is interpreted, the interpreter 
 can apply many additional runtime checks that are harder to implement with 
 native code."
That's a crock.
It's not that they're harder to implement, but a question of who you trust to be making the runtime tests. A VM can perform runtime tests, and if you trust the VM, you can run untrusted programs and trust that they won't misbehave (at least in theory, assuming your VM is perfect). You can't do this if you're trusting that the checks were done by whatever compiler compiled the program you're running. Both .NET and Java do this. They allow a kind of sandboxed operation of untrusted code. It's almost impossible to do this with native code, since most of the type information is gone at that point. While still in the VM, pointer types can be tested, and casts and such forbidden. David
Oct 22 2007
prev sibling parent reply Christopher Wright <dhasenan gmail.com> writes:
0ffh wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a 
 VM is.
I don't think there are any very fundamental advantages. But there sure seem to be a few things that make them attractive for some people. The most convincing of these revolve around neither run-time nor compile-time, but around write-time issues. In short: They try to make the language implementor's life easy. We see: The exact opposite of what you're trying to achieve (using C++). Regards, Frank Appendix: "Reasons for having a VM" 1. They are a way to separate the compiler back-end from the rest of the compiler. Clearly you wouldn't have to implement the VM in this scenario. 2. As far as the oldest VM I know designed for a specific language to be executed in is concerned: "UCSD p-System began around 1977 as the idea of UCSD's Kenneth Bowles, who believed that the number of new computing platforms coming out at the time would make it difficult for new programming languages to gain acceptance." (or that's what Wikipedia says). 3. From hxxp://en.wikipedia.org/wiki/P-code_machine: "a) For porting purposes. It is much easier to write a small (compared to the size of the compiler) p-code interpreter for a new machine, as opposed to changing a compiler to generate native code for the same machine. b) For quickly getting a compiler up and running. Generating machine code is one of the more complicated parts of writing a compiler. By comparison, generating p-code is much easier. c) Size constraints. Since p-code is based on an ideal virtual machine, many times the resulting p-code is much smaller than the same program translated to machine code. d) For debugging purposes. Since p-code is interpreted, the interpreter can apply many additional runtime checks that are harder to implement with native code."
Aside from the benefits of dubious reality, why not just emit LLVM code? It simplifies your backend at the expense of a longer compile, but still generates native code (for Intel-based, PowerPC, ARM, Thumb, SPARC, and Alpha processors, anyway). And if you really want it, there's a JIT compiler for those.
Oct 22 2007
parent 0ffh <spam frankhirsch.net> writes:
Christopher Wright wrote:
 0ffh wrote:
 1. They are a way to separate the compiler back-end from the rest of 
 the compiler. Clearly you wouldn't have to implement the VM in this 
 scenario.
 [...]
Aside from the benefits of dubious reality, why not just emit LLVM code? It simplifies your backend at the expense of a longer compile, but still generates native code (for Intel-based, PowerPC, ARM, Thumb, SPARC, and Alpha processors, anyway). And if you really want it, there's a JIT compiler for those.
I'd think that's covered by point 1. Regards, Frank
Oct 22 2007
prev sibling next sibling parent reply Lutger <lutger.blijdestijn gmail.com> writes:
Walter Bright wrote:
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
I thought the .NET platform was developed with the intent to replace COM? And, by extension, to complement and/or replace the C way of cross-talking between languages for application development.
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Lutger wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a 
 VM is.
I thought the .NET platform was developed with the intent to replace COM?
I don't know what MS's reasons were, but it seems strange to replace COM with something inaccessible from C++ (the main language used to interface with COM).
 And, by extension, to complement and/or replace the C way of 
 cross-talking between languages for application development.
Except that .net cannot talk to C or C++ code, which are the usual languages for applications. All that languages need to interoperate is a standard calling convention, not a wholly different environment.
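For what it's worth, that's a one-declaration story in D. A minimal sketch (printf really is the C runtime function; nothing else here is assumed):

    // Declare the C function with the C calling convention, then call it.
    extern (C) int printf(char* fmt, ...);

    void main()
    {
        printf("native interop, no VM: %d\n", 6 * 7);
    }

That's the entire "marshalling layer": there isn't one. The calling convention is the contract.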
Oct 22 2007
next sibling parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 21:22:50 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Except that .net cannot talk to C or C++ code, which are the usual languages
for applications.
C# supports calling native functions directly (P/Invoke). You can also define the exact layout of structures, and thus share data structures with native code. -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 21:41:26 +0300, Vladimir Panteleev
<thecybershadow gmail.com> wrote:

 On Mon, 22 Oct 2007 21:22:50 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Except that .net cannot talk to C or C++ code, which are the usual languages
for applications.
C# supports calling native functions directly (P/Invoke). You can also define the exact layout of structures, and thus share data structures with native code.
Here's some documentation on it: http://msdn2.microsoft.com/en-us/library/aa288468(VS.71).aspx Note that you can also specify one of several calling conventions: Cdecl, Stdcall, Thiscall (which allows some basic OOP simulation) and Winapi (same as Stdcall on Windows). -- Best regards, Vladimir mailto:thecybershadow gmail.com
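The native side of that layout contract, sketched in D for comparison (D's align is the analogue here, not C#'s StructLayout; the C struct it mirrors is hypothetical):

    import std.stdio;

    // Mirrors a hypothetical C "struct { int id; short flags; }" exactly;
    // align(1) keeps the compiler from inserting padding between fields.
    struct Header
    {
        align(1):
            int   id;
            short flags;
    }

    void main()
    {
        // 4 + 2 = 6 bytes packed, so the raw bytes can cross the
        // language boundary as-is, with no marshalling step.
        writefln("Header.sizeof = %d", Header.sizeof);
    }

When both sides agree on layout and calling convention, "interop" is just a function call.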
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 21:41:26 +0300, Vladimir Panteleev
 <thecybershadow gmail.com> wrote:
 
 On Mon, 22 Oct 2007 21:22:50 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 Except that .net cannot talk to C or C++ code, which are the
 usual languages for applications.
C# supports calling native functions directly (P/Invoke). You can also define the exact layout of structures, and thus share data structures with native code.
Here's some documentation on it: http://msdn2.microsoft.com/en-us/library/aa288468(VS.71).aspx Note that you can also specify one of several calling conventions: Cdecl, Stdcall, Thiscall (which allows some basic OOP simulation) and Winapi (same as Stdcall on Windows).
Thanks for the reference. It says that the parameters must go through "marshalling", which means they go through a translation layer.
Oct 22 2007
parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 23 Oct 2007 00:39:51 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 21:41:26 +0300, Vladimir Panteleev
 <thecybershadow gmail.com> wrote:

 On Mon, 22 Oct 2007 21:22:50 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:

 Except that .net cannot talk to C or C++ code, which are the
 usual languages for applications.
C# has P/Invoke, which allows you to call native functions directly. You can also define the exact layout of structures, and thus share data structures with native code.
Here's some documentation on it: http://msdn2.microsoft.com/en-us/library/aa288468(VS.71).aspx Note that you can also specify one of several calling conventions: Cdecl, Stdcall, Thiscall (which allows some basic OOP simulation) and Winapi (same as Stdcall on Windows).
Thanks for the reference. It says that the parameters must go through "marshalling", which means they go through a translation layer.
Quite so - I hope the JIT compiler generates proper optimal native code, though. (sorry, saw the e-mail reply before the NG reply) -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Walter Bright" wrote
 Lutger wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a VM 
 is.
I thought the .NET platform was developed with the intent to replace COM?
I don't know what MS's reasons were
I think the reason was pretty obvious... replace Java :) Sun slapped MS's hand when it tried to differentiate Java, so MS wanted to not depend on Java anymore. As far as COM, I don't think they ever wanted to replace it because that would make all the millions of lines of code already written with COM useless. MS's main goal in anything they do is, and always has been, backwards compatibility. Why do you think Windows is so damn bloated compared to Linux? Because it all has to be able to still run Windows 3.1 crap.
but it seems strange to replace COM with something inaccessible from C++ 
(the main language used to interface with COM).
.net is accessible from C++. It's called C++.net :) With .net, you can implement "managed" or garbage collected C++ classes, and from there you can call either normal C/C++ classes or other .net language-based classes/functions. I usually do this because it's much easier (in my mind) than going through P/Invoke. IMO, D does a much better job of importing C functions, and is much more understandable. As far as interfacing with C++, .net has D beat because it can use the classes directly from a C++.net class. But I think, as you do, that this is more trouble than it's worth.
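(To illustrate the struct-layout side of this in D terms - a hedged sketch, where the C function c_fill_point is hypothetical: because D, like C, fixes the in-memory layout of structs, a pointer can be handed across the language boundary directly.)

    // Matches the C side's:  struct Point { int x; int y; };
    struct Point { int x; int y; }

    // Hypothetical C function:  void c_fill_point(struct Point* p);
    extern (C) void c_fill_point(Point* p);

    void main()
    {
        Point p;
        c_fill_point(&p);  // the struct is shared in place - no marshalling
    }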
 And by extension, complementing and / or replacing the C way of 
 cross-talking between languages for application development.
Except that .net cannot talk to C or C++ code, which are the usual languages for applications. All languages need to interoperate are a standard calling convention, not a wholly different environment.
The whole point of .net is to allow ANY language to generate the SAME bytecode. For example, you can have a C++.net class, calling a VB.net class, which calls a COBOL.net class (yes, COBOL.net exists, I can't believe it either). It's actually a really neat idea, but in practice, you generally only use one language anyways, and interfacing with old code usually means you have to write wrappers or reimplement the code, so it's not like you can magically merge legacy stuff with .net without a ton of work. So yes, you can talk to C or C++ using .net, but it's not always pretty. -Steve
Oct 22 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 The whole point of .net is to allow ANY language to generate the SAME 
 bytecode.
That's a means to an end. But there are other means to the same end. A VM is a very expensive and inefficient way to get there, and even then the results run slowly (relative to native code).
Oct 22 2007
prev sibling next sibling parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 05:19:39 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 I've never been able to discover what the fundamental advantage of a VM is.
Some of the things which are only possible, or a good deal easier, to use/implement with VMs:

1) code generation - used very seldom; it might be used for runtime-specified cases where top performance is required (e.g. genetic programming?)

2) VMs make modularity much easier in that you don't have to recompile all modules ("plugins") on all platforms, which is often not possible with projects whose core supports many platforms, but most developers don't have access to all supported platforms.

3) very flexible reflection - like being able to derive from classes in other modules. Though this can be done in native languages by including enough metadata, most compiled languages don't.

4) compilation is not a simple process for most computer users out there. If you want to provide a simple, cross-platform end-user application, it's much easier to use a VM - the VM vendor has taken care of porting the VM to all those platforms, and you don't need to bother maintaining source code portability, make/autoconf/etc. files, and compilation instructions (dependencies, etc.) (ok, most computer users out there use Windows, and many non-Windows users know how to compile a program, but the point stands :P)

5) it's much easier to provide security/isolation for VM languages. Although native code isolation can be done using hardware, it's complicated and inefficient. This allows VM languages to be safely embedded in places such as web pages (Flash for ActionScript, applets for Java, Silverlight for .NET).

-- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 05:19:39 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 I've never been able to discover what the fundamental advantage of
 a VM is.
Some of the things which are only possible, or a good deal easier, to use/implement with VMs: 1) code generation - used very seldom; it might be used for runtime-specified cases where top performance is required (e.g. genetic programming?)
Are you referring to a JIT? JITs aren't easier to implement than a compiler back end.
 2) VMs make modularity much easier in that you don't have to
 recompile all modules ("plugins") on all platforms, which is often
 not possible with projects whose core supports many platforms, but
 most developers don't have access to all supported platforms.
Problem is solved by defining your ".class" file to be compressed source code. Dealing with back end bugs on platform X is no different from dealing with VM bugs on platform X. Java is infamous for "compile once, debug everywhere".
 3) very flexible reflection - like being able to derive from classes
 in other modules. Though this can be done in native languages by
 including enough metadata, most compiled languages don't.
I think this is possible with compiled languages, but nobody has done it yet.
 4) compilation is not a simple process for most computer users out
 there.
Since the VM includes a JIT (a compiler) and runs it transparently to the user, there's no reason that compiler couldn't compile source code into native code transparently to the user.
 If you want to provide a simple, cross-platform end-user
 application, it's much easier to use a VM - the VM vendor has taken
 care of porting the VM to all those platforms,
And the language vendor would have taken care of getting a compiler for those platforms!
 and you don't need to
 bother maintaining source code portability, make/autoconf/etc. files,
 and compilation instructions (dependencies, etc.) (ok, most computer
 users out there use Windows, and many non-Windows users know how to
 compile a program, but the point stands :P)
This can be automated as well. BTW, non-programmers run compilers all the time - after all, how else could they run a javascript app?
 5) it's much easier to provide security/isolation for VM languages.
 Although native code isolation can be done using hardware, it's
 complicated and inefficient.
The virtualization hardware works very well! It's complex, but it is far more efficient than a VM is. In fact, you're likely to be running on a hardware virtualized machine anyway!
 This allows VM languages to be safely
 embedded in places such as web pages (Flash for ActionScript, applets
 for Java, Silverlight for .NET).
It is not necessary to have a VM to achieve this. If you design a language that does not have arbitrary pointers, and you control the code generation, you can sandbox it in software every bit as effectively. This is why, for example, the Java JITs don't compromise their security model.
Oct 22 2007
parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 23 Oct 2007 00:54:40 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 05:19:39 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:

 I've never been able to discover what the fundamental advantage of
 a VM is.
Some of the things which are only possible, or a good deal easier, to use/implement with VMs: 1) code generation - used very seldom; it might be used for runtime-specified cases where top performance is required (e.g. genetic programming?)
Are you referring to a JIT? JITs aren't easier to implement than a compiler back end.
I'm referring to using the standard library to emit code. This allows you to generate arbitrary code at runtime, without having to bundle a compiler or compiler components with your program. Integration with existing code is also available, so you could create an on-the-fly class that is derived from a "hard-coded" class in the application. The use case I mentioned is genetic programming - a technique where genetic evolution algorithms are applied to bytecode programs, and in this case it is desirable for the generated programs to run at maximum speed without compromising the host's stability.
 2) VMs make modularity much easier in that you don't have to
 recompile all modules ("plugins") on all platforms, which is often
 not possible with projects whose core supports many platforms, but
 most developers don't have access to all supported platforms.
Problem is solved by defining your ".class" file to be compressed source code. Dealing with back end bugs on platform X is no different from dealing with VM bugs on platform X. Java is infamous for "compile once, debug everywhere".
Yes, though I didn't mention debugging. Otherwise, see below.
 3) very flexible reflection - like being able to derive from classes
 in other modules. Though this can be done in native languages by
 including enough metadata, most compiled languages don't.
I think this is possible with compiled languages, but nobody has done it yet.
I believe DDL was going in that direction.
 4) compilation is not a simple process for most computer users out
 there.
Since the VM includes a JIT (a compiler) and runs it transparently to the user, there's no reason that compiler couldn't compile source code into native code transparently to the user.
Indeed. In fact, most of the issues I mentioned can be solved by distributing source code instead of intermediary bytecode. Actually, if you compare the Java/.NET VM with a hypothetical system which compiles the source code and runs the binary on the fly, the difference is pretty low - it's just that bytecode is one level lower than source code (and source code parsing/lexing would slow down compilation to native code by some degree).

I don't think it would be hard to turn D into a VM just like .NET - just split the front-end from the back-end, make the front-end serialize the AST, and distribute a back-end that reads ASTs, "JITs" them, links to Phobos/other libraries and runs them. You could even scan the AST for unsafe code (pointers, some types of casts), add that with forced bounds checking, and you have a "safe" D VM/compiler. So, I'd like to ask - what exactly are we debating again? :)

When comparing VMs (systems that compile to bytecode) to just distributing the source code (potentially wrapping it in a bundle or framework that can automatically compile and run the source for the user), the latter inherits all the disadvantages of the VM (slow on first start, as the source code has to be compiled; the source or some other high-level source structures can be extracted; etc.). The only obvious advantage is that the source is readily available in case it's necessary to debug the application, but Java already has the option to include the source in the .jar file (although this causes it to include code in both bytecode and source).

If we assume that all bytecode or source is compiled before it's run (nothing is interpreted), as should happen in a "perfect" VM, the term "VM" loses much of its original meaning. The only thing left is the restrictions imposed on the language (no unsafe constructs like pointers) and means to operate on the AST (reflection, code generation, etc.) Taking that into consideration, comparing a perfect "VM" with distributing native code seems to make slow start-up and the bulky VM runtime the only disadvantages of using VMs. (Have I abstracted so much that I'm forgetting something important here?)
 5) it's much easier to provide security/isolation for VM languages.
 Although native code isolation can be done using hardware, it's
 complicated and inefficient.
The virtualization hardware works very well! It's complex, but it is far more efficient than a VM is. In fact, you're likely to be running on a hardware virtualized machine anyway!
Unfortunately, virtualization extensions are not available on all platforms - and implementing sandboxing on platforms where it's not supported by hardware would be quite complicated (involving disassembly, recompilation or interpretation). VirtualBox is a nice part-open-source virtualization product, and they stated that the software virtualization they implemented is faster than today's hardware virtualization.
 This allows VM languages to be safely
 embedded in places such as web pages (Flash for ActionScript, applets
 for Java, Silverlight for .NET).
It is not necessary to have a VM to achieve this. If you design a language that does not have arbitrary pointers, and you control the code generation, you can sandbox it in software every bit as effectively. This is why, for example, the Java JITs don't compromise their security model.
This requires that the code is given at a level high enough where this is enforceable - that is, either at source or bytecode/AST level. I also thought of another point (though it only stands against distributing native code binaries, not self-compiling source code): 6) Bytecode can be compiled to optimized code for the specific environment it is run on (processor vendor and family). It's not a big plus, just a "nice" advantage. -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
Indeed. In fact, most of the issues I mentioned can be solved by
 distributing source code instead of intermediary bytecode. Actually,
 if you compare the Java/.NET VM with a hypothetical system which
 compiles the source code and runs the binary on the fly, the
 difference is pretty low - it's just that bytecode is one level lower
 than source code (and source code parsing/lexing would slow down
 compilation to native code by some degree).
To some degree, yes. You can address this, though, by pre-tokenizing the source.
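(To sketch what "pre-tokenizing the source" could look like - a minimal, hedged example in modern D, with the token categories deliberately simplified for illustration:)

    import std.ascii : isAlpha, isAlphaNum, isDigit, isWhite;

    enum TokKind { identifier, number, symbol }
    struct Token { TokKind kind; string text; }

    // Lex once at packaging time; the distributed "binary" would then
    // be the serialized Token[] array rather than raw source text.
    Token[] lex(string src)
    {
        Token[] toks;
        size_t i = 0;
        while (i < src.length)
        {
            if (isWhite(src[i])) { ++i; continue; }
            size_t start = i;
            TokKind kind;
            if (isAlpha(src[i]))
            {
                while (i < src.length && isAlphaNum(src[i])) ++i;
                kind = TokKind.identifier;
            }
            else if (isDigit(src[i]))
            {
                while (i < src.length && isDigit(src[i])) ++i;
                kind = TokKind.number;
            }
            else { ++i; kind = TokKind.symbol; }
            toks ~= Token(kind, src[start .. i]);
        }
        return toks;
    }

Shipping the token stream skips the lexing pass on the user's machine while staying one level above bytecode.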
 I don't think it would be hard to turn D into a VM just like .NET -
 just split the front-end from the back-end, make the front-end
 serialize the AST and distribute a back-end that reads ASTs, "JITs"
 them, links to Phobos/other libraries and runs them. You could even
 scan the AST for unsafe code (pointers, some types of casts), add
 that with forced bounds checking, and you have a "safe" D
 VM/compiler. So, I'd like to ask - what exactly are we debating
 again? :)
 
 When comparing VMs (systems that compile to bytecode) to just
 distributing the source code (potentially wrapping it in a bundle or
 framework that can automatically compile and run the source for the
user), the latter inherits all the disadvantages of the VM (slow on
 first start, as the source code has to be compiled; the source or
 some other high-level source structures can be extracted; etc.). The
 only obvious advantage is that the source is readily available in
 case it's necessary to debug the application, but Java already has
 the option to include the source in the .jar file (although this
 causes it to include code in both bytecode and source).
 
If we assume that all bytecode or source is compiled before it's run
 (nothing is interpreted), as should happen in a "perfect" VM, the
 term "VM" loses much of its original meaning. The only thing left is
 the restrictions imposed on the language (no unsafe constructs like
 pointers) and means to operate on the AST (reflection, code
 generation, etc.) Taking that into consideration, comparing a perfect
 "VM" with distributing native code seems to make slow start-up and
 the bulky VM runtime the only disadvantages of using VMs. (Have I
abstracted so much that I'm forgetting something important here?)
I don't think you've forgotten anything important.
 
 5) it's much easier to provide security/isolation for VM
 languages. Although native code isolation can be done using
 hardware, it's complicated and inefficient.
The virtualization hardware works very well! It's complex, but it is far more efficient than a VM is. In fact, you're likely to be running on a hardware virtualized machine anyway!
Unfortunately, virtualization extensions are not available on all platforms - and implementing sandboxing on platforms where it's not supported by hardware would be quite complicated (involving disassembly, recompilation or interpretation).
I agree, but not in the case where source code is distributed and the compiler is controlled by the box, not the programmer.
 VirtualBox is a nice
 part-open-source virtualization product, and they stated that the
 software virtualization they implemented is faster than today's
 hardware virtualization.
I find that hard to believe.
 This allows VM languages to be safely embedded in places such as
 web pages (Flash for ActionScript, applets for Java, Silverlight
 for .NET).
It is not necessary to have a VM to achieve this. If you design a language that does not have arbitrary pointers, and you control the code generation, you can sandbox it in software every bit as effectively. This is why, for example, the Java JITs don't compromise their security model.
This requires that the code is given at a level high enough where this is enforceable - that is, either at source or bytecode/AST level.
Right.
 I also thought of another point (though it only stands against
 distributing native code binaries, not self-compiling source code): 
 6) Bytecode can be compiled to optimized code for the specific
 environment it is run on (processor vendor and family). It's not a
 big plus, just a "nice" advantage.
That's often been touted, but it doesn't seem to produce much in the way of real results.
Oct 22 2007
next sibling parent reply David Brown <dlang davidb.org> writes:
On Mon, Oct 22, 2007 at 08:54:01PM -0700, Walter Bright wrote:

 VirtualBox is a nice
 part-open-source virtualization product, and they stated that the
 software virtualization they implemented is faster than today's
 hardware virtualization.
I find that hard to believe.
Simon Peyton-Jones spoke at the recent HOPL (History of Programming Languages) conference. His group was originally trying to come up with better hardware to directly execute the abstract machine used in Haskell. The problem they found is that, because of the enormous advances in general-purpose processors, even a simple simulation of the virtual machine on a modern PC ran faster than the hardware machine they could build. I'm not sure if this applies to the x86 that VirtualBox simulates, but it could easily be the case for something like the JVM. A software JVM on a fast desktop machine is much faster than a hardware JVM on a small embedded system. David
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
David Brown wrote:
 Simon Peyton-Jones spoke at the recent HOPL (History of Programming
 Languages) conference.  His group was originally trying to come up with
 better hardware to directly execute the abstract machine used in haskell.
The problem they found is that because of the enormous advances in general
 purpose processors, even a simple simulation of the virtual machine on a
 modern PC ran faster than the hardware machine they could build.
 
I'm not sure if this applies to the x86 that VirtualBox simulates, but it
 could easily be the case for something like JVM.  A software JVM on a fast
 desktop machine is much faster than a hardware JVM on a small embedded
 system.
It sounds like they discovered that fast hardware (a fast desktop machine) runs faster than slow hardware (a small embedded system)! But it doesn't sound like they showed that, on the same machine, software ran faster than hardware.
Oct 23 2007
parent reply 0ffh <spam frankhirsch.net> writes:
Walter Bright wrote:
 It sounds like they discovered that fast hardware (a fast desktop 
 machine) runs faster than slow hardware (small embedded system)! But it 
 doesn't sound like on the same machine they showed that software ran 
 faster than hardware.
It sounds like they discovered that, for any reasonable amount of money to spend on hardware, they'd be better off with cheap, fast, but "wrong" off-the-shelf hardware and software emulation than with some expensive custom-made piece of metal that runs their stuff natively. Regards, Frank
Oct 23 2007
next sibling parent reply "David Wilson" <dw botanicus.net> writes:
On 23/10/2007, 0ffh <spam frankhirsch.net> wrote:
 Walter Bright wrote:
 It sounds like they discovered that fast hardware (a fast desktop
 machine) runs faster than slow hardware (small embedded system)! But it
 doesn't sound like on the same machine they showed that software ran
 faster than hardware.
It sounds like they discovered that, for any reasonable amount of money to spend on hardware, they'd be better off with cheap, fast, but "wrong" off-the-shelf hardware and software emulation than with some expensive custom-made piece of metal that runs their stuff natively.
This is *not* the case. Current hardware virtualization cannot reach the performance of software paravirtualization for various reasons. Hardware nested page table support being the most often mentioned - many common ops are still very expensive under current Vanderpool/Pacifica, some worse than cooperative virt. See e.g. http://project-xen.web.cern.ch/project-xen/xen/hardware.html (search for "memory management"). Current virtualization hardware tech was pushed out the door to take advantage of the virtualization bubble last year/two ago. It is very wrong to assume that just because there is "hardware support" it's going to be "faster" (for some easily quantifiable value of faster), useful, or both. Thanks, David. "A little knowledge is dangerous"
 Regards, Frank
Oct 23 2007
parent reply 0ffh <spam frankhirsch.net> writes:
David Wilson wrote:
 On 23/10/2007, 0ffh <spam frankhirsch.net> wrote:
It sounds like they discovered that, for any reasonable amount of money to spend on hardware, they'd be better off with cheap, fast, but "wrong" off-the-shelf hardware and software emulation than with some expensive custom-made piece of metal that runs their stuff natively.
This is *not* the case. Current hardware virtualization cannot reach the performance of software paravirtualization for various reasons. Hardware nested page table support being the most often mentioned - many common ops are still very expensive under current Vanderpool/Pacifica, some worse than cooperative virt. See e.g. http://project-xen.web.cern.ch/project-xen/xen/hardware.html (search for "memory management"). Current virtualization hardware tech was pushed out the door to take advantage of the virtualization bubble last year/two ago. It is very wrong to assume that just because there is "hardware support" it's going to be "faster" (for some easily quantifiable value of faster), useful, or both.
There is just *no* *effing* *way* anything could be faster without hardware support compared to with. The ultimate hardware support is putting your sw into silicon. No way around here, just forget it. Complain to glod. It's just like that. Regards, Frank
Oct 23 2007
parent reply BCS <ao pathlink.com> writes:
Reply to 0ffh,

 David Wilson wrote:
 
 On 23/10/2007, 0ffh <spam frankhirsch.net> wrote:
 
It sounds like they discovered that, for any reasonable amount of money to spend on hardware, they'd be better off with cheap, fast, but "wrong" off-the-shelf hardware and software emulation than with some expensive custom-made piece of metal that runs their stuff natively.
 
This is *not* the case. Current hardware virtualization cannot reach the performance of software paravirtualization for various reasons. Hardware nested page table support being the most often mentioned - many common ops are still very expensive under current Vanderpool/Pacifica, some worse than cooperative virt. See e.g. http://project-xen.web.cern.ch/project-xen/xen/hardware.html (search for "memory management"). Current virtualization hardware tech was pushed out the door to take advantage of the virtualization bubble last year/two ago. It is very wrong to assume that just because there is "hardware support" it's going to be "faster" (for some easily quantifiable value of faster), useful, or both.
There is just *no* *effing* *way* anything could be faster without hardware support compared to with. The ultimate hardware support is putting your sw into silicon. No way around here, just forget it. Complain to glod. It's just like that. Regards, Frank
OTOH Intel can spend a LOT more time and money getting their chips fast than most people can. If you can throw that kind of resources at it, you can make it faster. If all you can do is put it on a PCI card, forget it. Somewhere in between, they switch places. The question is where. It could be that right now, that point isn't yet practical.
Oct 23 2007
parent reply 0ffh <spam frankhirsch.net> writes:
BCS wrote:
 Reply to 0ffh,
 There is just *no* *effing* *way* anything could be faster without
 hardware support compared to with. The ultimate hardware support
 is putting your sw into silicon. No way around here, just forget it.
 Complain to glod. It's just like that.
 Regards, Frank
OTOH Intel can spend a LOT more time and money getting their chips fast than most people can. If you can throw that kind of resources at it, you can make it faster. If all you can do is put it on a PCI card, forget it. Somewhere in between, they switch places. The question is where. It could be that right now, that point isn't yet practical.
Right now I ain't talking practical anymore. There was a clear challenge regarding basic truths. Basic truth is: Hardware is faster... :-P Regards, Frank
Oct 23 2007
parent reply BCS <ao pathlink.com> writes:
Reply to 0ffh,

 BCS wrote:
 
 Reply to 0ffh,
 
 There is just *no* *effing* *way* anything could be faster without
 hardware support compared to with. The ultimate hardware support
 is putting your sw into silicon. No way around here, just forget it.
 Complain to glod. It's just like that.
 Regards, Frank
OTOH Intel can spend a LOT more time and money getting their chips fast than most people can. If you can throw that kind of resources at it, you can make it faster. If all you can do is put it on a PCI card, forget it. Somewhere in between, they switch places. The question is where. It could be that right now, that point isn't yet practical.
Right now I ain't talking practical anymore. There was a clear challenge regarding basic truths. Basic truth is: Hardware is faster... :-P
I agree. But another basic truth is: if it ain't practical, /we/ ain't go'na get it. Yet.
 
 Regards, Frank
 
Oct 23 2007
parent reply 0ffh <spam frankhirsch.net> writes:
BCS wrote:
 Reply to 0ffh,
 Right now I ain't talking practical anymore. There was a clear
 challenge regarding basic truths. Basic truth is: Hardware is
 faster... :-P
I agree. But another basic truth is: if it ain't practical, /we/ ain't go'na get it. Yet.
Well, I still have my FPGA kit.... heh! =) Regards, Frank
Oct 23 2007
parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
0ffh Wrote:

 BCS wrote:
 Reply to 0ffh,
 Right now I ain't talking practical anymore. There was a clear
 challenge regarding basic truths. Basic truth is: Hardware is
 faster... :-P
I agree. But another basic truth is; if it ain't practical, /we/ ain't go'na get it. Yet.
Well, I still have my FPGA kit.... heh! =) Regards, Frank
I was going to raise that point. I think FPGAs are still too small, slow and expensive to compete at the moment, but perhaps in the future. Transmeta had a similar idea but they lost to the might of the Intel / AMD war. (http://en.wikipedia.org/wiki/Transmeta). How are you finding your kit? I was thinking about getting some kit a while back. Have you got a D 'compiler' for it, or indeed anything that turns higher-level code (other than VHDL) directly into hardware? Regards, Bruce.
Oct 23 2007
next sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Bruce Adams wrote:
 0ffh Wrote:
 
 BCS wrote:
 Reply to 0ffh,
 Right now I ain't talking practical anymore. There was a clear
 challenge regarding basic truths. Basic truth is: Hardware is
 faster... :-P
I agree. But another basic truth is; if it ain't practical, /we/ ain't go'na get it. Yet.
Well, I still have my FPGA kit.... heh! =) Regards, Frank
I was going to raise that point. I think FPGAs are still too small, slow and expensive to compete at the moment but perhaps in the future. Transmeta had a similar idea but they lost to the might of the intel / AMD war. (http://en.wikipedia.org/wiki/Transmeta). How are you finding your kit? I was thinking about getting some kit a while back. Have you got a D 'compiler' for it or indeed anything that turns higher level code (other than VHDL) directly into hardware? Regards, Bruce.
This is going way off-topic, but I remember reading a paper once where a group of researchers had taken some fairly well-optimised open-source speech recognition engine, and re-implemented it using an FPGA. Comparing against the software implementation running on a pretty fast machine, the FPGA blew it out of the water. It was something on the order of 10 times faster, on a fraction of the power, memory and clock speed. But yes, different tools for different jobs. I'm just pointing out that FPGAs have been used to improve performance beyond what could be done with a general purpose system. -- Daniel
Oct 24 2007
prev sibling parent 0ffh <spam frankhirsch.net> writes:
Bruce Adams wrote:
 0ffh Wrote:
 Well, I still have my FPGA kit.... heh! =)
I was going to raise that point. I think FPGAs are still too small, slow and expensive to compete at the moment but perhaps in the future.
Actually, I find FPGAs quite affordable already. It's just that the development software used to be incredibly expensive, but that changed a few years ago (at least for some manufacturers).
 How are you finding your kit?
It's fun but not much use without a hardware guy around. Luckily I know a few... =)
 I was thinking about getting some kit a while back. Have you
 got a D 'compiler' for it or indeed anything that turns higher level
 code (other than VHDL) directly into hardware?
No, I'm using Verilog and dreaming I had something /much/ better. VHDL is sure not it. I kinda like ABEL (no kiddin) because it's just so incredibly easy to use, but people keep telling me it scales badly or whatever. Regards, Frank
Oct 24 2007
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
0ffh wrote:
 Walter Bright wrote:
 It sounds like they discovered that fast hardware (a fast desktop 
 machine) runs faster than slow hardware (small embedded system)! But 
 it doesn't sound like on the same machine they showed that software 
 ran faster than hardware.
It sounds like they discovered, that for any resonable amount of money to spend on hardware, they'd be better of with cheap, fast, but "wrong" off- the-shelf hardware and software emulation than with some expensive custom made piece of metal that runs their stuff natively.
That's often true for embedded systems.
Oct 23 2007
prev sibling parent "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 23 Oct 2007 06:54:01 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 VirtualBox is a nice
 part-open-source virtualization product, and they stated that the
 software virtualization they implemented is faster than today's
 hardware virtualization.
 
 I find that hard to believe.
I should have included this link to support this: http://www.virtualbox.org/wiki/Developer_FAQ See the 2nd Q/A pair. -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 23 2007
prev sibling parent Reiner Pope <some address.com> writes:
Vladimir Panteleev wrote:
 This allows VM languages to be safely
 embedded in places such as web pages (Flash for ActionScript, applets
 for Java, Silverlight for .NET).
It is not necessary to have a VM to achieve this. If you design a language that does not have arbitrary pointers, and you control the code generation, you can sandbox it in software every bit as effectively. This is why, for example, the Java JITs don't compromise their security model.
This requires that the code is given at a level high enough where this is enforceable - that is, either at source or bytecode/AST level.
However, work has been done on Proof Carrying Code and Typed Assembly Language so that you can distribute code as assembly, and still have the security policy enforceable. I don't know how developed this is, though. -- Reiner
Oct 23 2007
prev sibling next sibling parent reply Julio César Carrascal Urquijo writes:
Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a VM is.
The only advantage a VM has over native code that I see is security. I'm not talking about "this process can't write to another process's memory"; I'm talking about "this process can't write to the hard disk, only to Isolated Storage - but this one can, because it's signed with a Thawte certificate that the VM trusts". This is a lot more than disallowing pointer arithmetic. I'm not aware of any compiled language that has managed to do this. On the other hand, most .NET developers ignore CAS (Code Access Security) in their apps, so it doesn't seem like a great advantage anyway. -- Julio César Carrascal Urquijo http://www.artelogico.com/
Oct 22 2007
parent Christopher Wright <dhasenan gmail.com> writes:
Julio César Carrascal Urquijo wrote:
 Walter Bright wrote:
 I've never been able to discover what the fundamental advantage of a 
 VM is.
The only advantage a VM has over native code that I see is security. I'm not talking about "this process can't write to another process's memory"; I'm talking about "this process can't write to the hard disk, only to Isolated Storage - but this one can, because it's signed with a Thawte certificate that the VM trusts".
This policy should be carried out at the operating system level for any reasonable assurance of security.
 This is a lot more than disallowing pointer arithmetic. I'm not aware of 
 any compiled language that has managed to do this.
C + SELinux? If your language doesn't have a VM, the VM can't check any certificates, only the OS. The reverse is not true -- your OS can check VM-bound applications' certificates, depending on how VM applications are launched and whether the VM cooperates. Though in SELinux, you don't have certificates; you have a complex set of permissions, essentially, that some really dedicated person has to come up with.
 On the other hand, most .NET developers ignore CAS (Code Access 
 Security) in their apps, so it doesn't seem like a great advantage anyway.
Nobody uses SELinux, either, so that's okay.
Oct 23 2007
prev sibling parent reply Clay Smith <clayasaurus gmail.com> writes:
Walter Bright wrote:
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
The ability to have multiple languages targeting the same VM, as well as lowering the barrier for a language to become cross-platform. I think the future of computing may be to allow the programmer to choose whatever compiled language they want, and eventually have all languages compiled down to the same 'bytecode' so they can all interoperate with each other.
Oct 22 2007
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
"Clay Smith" wrote
 Walter Bright wrote:
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
The ability to have multiple languages targeting the same VM, as well as lowering the barrier for a language to become cross-platform. I think the future of computing may be to allow the programmer to choose whatever compiled language they want, and eventually have all languages compiled down to the same 'bytecode' so they can all interoperate with each other.
http://en.wikipedia.org/wiki/.NET_Languages BTW, I don't think the result is an advantage, as in practice, the language is more important than the object format, so you still end up only using one language anyway. And all the languages must use the .net library to be compatible. -Steve
Oct 23 2007
parent Clay Smith <clayasaurus gmail.com> writes:
Steven Schveighoffer wrote:
 "Clay Smith" wrote
 Walter Bright wrote:
 Jussi Jumppanen wrote:
 I think Microsoft's longer term vision is to have .NET everywhere
 and I mean everywhere.
I've never been able to discover what the fundamental advantage of a VM is.
The ability to have multiple languages targeting the same VM, as well as lowering the barrier for a language to become cross-platform. I think the future of computing may be to allow the programmer to choose whatever compiled language they want, and eventually have all languages compiled down to the same 'bytecode' so they can all interoperate with each other.
http://en.wikipedia.org/wiki/.NET_Languages BTW, I don't think the result is an advantage, as in practice, the language is more important than the object format, so you still end up only using one language anyway. And all the languages must use the .net library to be compatible. -Steve
The .NET languages do look really promising in this regard; the problem is that Microsoft only supports its own platform. Maybe Mono might catch up, but if Microsoft decides to be evil, it could easily pull the rug out from under Mono by changing specs or using their army of lawyers. What I'm envisioning will be something that is not tied to any one platform or corporation and will be open source, with no one claiming to own the technology. Of course, it would require all programmers to agree on a common ground, so it is probably unrealistic.
Oct 23 2007
prev sibling next sibling parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
Walter Bright Wrote:

 Roberto Mariottini wrote:
 David Brown wrote:
 On Sun, Oct 21, 2007 at 10:08:26PM -0700, Walter Bright wrote:
[...]
 That isn't an advantage of the VM. It's an advantage of a language 
 that has no implementation-defined or undefined behavior. Given that, 
 the same portability results are achieved.
It's still a VM advantage. It helps the model where there are many developers who only distribute binaries. If they are distributing for a VM, they only have to distribute a single binary. Otherwise, they still would have to recompile for every possible target.
And not only that: if my product is compiled for Java-CLDC it will work on any cell phone that supports CLDC, based on any kind of processor/architecture, including those I don't know of, including even those that today don't exist and will be made in the future.
Javascript is distributed in source code, and executes on a variety of machines. A VM is not necessary to achieve portability to machines unknown. What is necessary is a portable language design.
Imagine it as a compatibility layer or a shared library. If my OS supports POSIX I can develop for POSIX. If I develop for Windows as well I have to learn and use other APIs. A VM is just a special kind of API that provides a language backend and interpreter.
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Bruce Adams wrote:
 Imagine it as a compatibility layer or a shared library. If my OS
 supports POSIX I can develop for POSIX. If I develop for windows as
 well I have to learn and use other APIs. A VM is just a special kind
 of API that provides a language backend and interpreter.
It can be thought of that way, it's just entirely unnecessary to achieve those goals, and throws in a bunch of problems:

1) perennial performance issues

2) incompatibility and lack of interoperability with native languages

3) gigantic runtimes needed
Oct 22 2007
next sibling parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Mon, 22 Oct 2007 21:19:30 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 3) gigantic runtimes needed
IMO it's better to have one copy of a gigantic runtime than having parts of it statically linked in every EXE, causing lots of repeated code in a product with lots of binaries (such as an operating system). .NET executables are much smaller compared to most native executables (where the runtime is statically linked) - so, knowing that .NET will only gain more popularity in the future, I find a one-time 20MB download preferable to re-downloading the same components with every new program. Now that Microsoft is including it in their new operating systems, Vista users will just benefit from a smaller download size. -- Best regards, Vladimir mailto:thecybershadow gmail.com
Oct 22 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 21:19:30 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 3) gigantic runtimes needed
IMO it's better to have one copy of a gigantic runtime than having parts of it statically linked in every EXE, causing lots of repeated code in a product with lots of binaries (such as an operating system). .NET executables are much smaller compared to most native executables (where the runtime is statically linked) - so, knowing that .NET will only gain more popularity in the future, I find a one-time 20MB download preferable to re-downloading the same components with every new program. Now that Microsoft is including it in their new operating systems, Vista users will just benefit from a smaller download size.
This problem is addressed by DLLs (Windows) and shared libraries (Linux).
Oct 22 2007
next sibling parent reply "Vladimir Panteleev" <thecybershadow gmail.com> writes:
On Tue, 23 Oct 2007 00:56:28 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Vladimir Panteleev wrote:
 On Mon, 22 Oct 2007 21:19:30 +0300, Walter Bright
 <newshound1 digitalmars.com> wrote:

 3) gigantic runtimes needed
IMO it's better to have one copy of a gigantic runtime than having parts of it statically linked in every EXE, causing lots of repeated code in a product with lots of binaries (such as an operating system). .NET executables are much smaller compared to most native executables (where the runtime is statically linked) - so, knowing that .NET will only gain more popularity in the future, I find a one-time 20MB download preferable to re-downloading the same components with every new program. Now that Microsoft is including it in their new operating systems, Vista users will just benefit from a smaller download size.
This problem is addressed by DLLs (Windows) and shared libraries (Linux).
Except they're not really as easy to use. With .NET, you can derive from a class in a compiled assembly without having access to the source. You just add the assembly to the project's dependencies and import the namespace with "using". In C, you must use the included .h files (and .h files are a pain to maintain anyway, since you must maintain the declaration and implementation separately, but that's not news to you). You must still use .lib and .di files with D and such - although they can be automated in the build process, it's still a hassle. Besides that, statically linking in the runtime seems to be a too-common practice, as "DLL hell" has been a discouragement for dynamically-linked libraries in the past (side-by-side assemblies are supposed to remedy that, though). I guess the fault is not in the DLLs themselves, it's how people and Microsoft used them... -- Best regards, Vladimir mailto:thecybershadow gmail.com
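(To make the D-side workflow concrete - a minimal sketch with hypothetical file and function names; dmd's -H switch can generate such interface files automatically:)

    // mylib.di - the "header" shipped alongside the compiled mylib.lib;
    // it carries declarations only, no function bodies.
    module mylib;
    int transmogrify(int x);

A client then just writes "import mylib;" and links against the library, e.g. "dmd app.d mylib.lib" - which works, but is exactly the two-artifact hassle described above.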
Oct 22 2007
next sibling parent reply Jascha Wetzel <firstname mainia.de> writes:
Vladimir Panteleev wrote:
 Except, they're not really as easy to use.
 
 With .NET, you can derive from a class in a compiled assembly without having
access to the source. You just add the assembly in the project's dependencies
and import the namespace with "using". In C, you must use the included .h files
(and .h files are a pain to maintain anyway since you must maintain the
declaration and implementation separately, but that's not news to you). You
must still use .lib and .di files with D and such - although they can be
automated in the build process, it's still a hassle. 
 
 Besides that, statically linking in the runtime seems to be a too common
practice, as "DLL hell" has been a discouragement for dynamically-linked
libraries in the past (side-by-side assemblies is supposed to remedy that
though). I guess the fault is not in the DLLs themselves, it's how people and
Microsoft used them... 
 
That is correct, but the obvious solution to that problem is to support the OO paradigm in dynamic linking. That is, we don't need a VM, we need DDL. Had C++ standardized its ABI, this problem would probably not exist today.
Oct 23 2007
parent reply "Chris Miller" <chris dprogramming.com> writes:
On Tue, 23 Oct 2007 07:28:42 -0400, Jascha Wetzel <firstname mainia.de>  
wrote:

 Vladimir Panteleev wrote:
 Except, they're not really as easy to use.
  With .NET, you can derive from a class in a compiled assembly without  
 having access to the source. You just add the assembly in the project's  
 dependencies and import the namespace with "using". In C, you must use  
 the included .h files (and .h files are a pain to maintain anyway since  
 you must maintain the declaration and implementation separately, but  
 that's not news to you). You must still use .lib and .di files with D  
 and such - although they can be automated in the build process, it's  
 still a hassle.  Besides that, statically linking in the runtime seems  
 to be a too common practice, as "DLL hell" has been a discouragement  
 for dynamically-linked libraries in the past (side-by-side assemblies  
 is supposed to remedy that though). I guess the fault is not in the  
 DLLs themselves, it's how people and Microsoft used them...
That is correct, but the obvious solution to that problem is to support the OO paradigm in dynamic linking. That is, we don't need a VM, we need DDL. Had C++ standardized its ABI, this problem would probably not exist today.
http://www.codesourcery.com/cxx-abi/ I don't know the whole deal, but I guess some decided not to go by this; I don't even know if DMC does or not.
Oct 23 2007
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Chris Miller wrote:
   http://www.codesourcery.com/cxx-abi/
 I don't know the whole deal, but I guess some decided not to go by this; 
 I don't even know if DMC does or not.
DMC++ follows the Microsoft C++ ABI under Windows.
Oct 23 2007
prev sibling parent reply Don Clugston <dac nospam.com.au> writes:
Chris Miller wrote:
 On Tue, 23 Oct 2007 07:28:42 -0400, Jascha Wetzel <firstname mainia.de> 
 wrote:
 
 Vladimir Panteleev wrote:
 Except, they're not really as easy to use.
  With .NET, you can derive from a class in a compiled assembly 
 without having access to the source. You just add the assembly in the 
 project's dependencies and import the namespace with "using". In C, 
 you must use the included .h files (and .h files are a pain to 
 maintain anyway since you must maintain the declaration and 
 implementation separately, but that's not news to you). You must 
 still use .lib and .di files with D and such - although they can be 
 automated in the build process, it's still a hassle.  Besides that, 
 statically linking in the runtime seems to be a too common practice, 
 as "DLL hell" has been a discouragement for dynamically-linked 
 libraries in the past (side-by-side assemblies is supposed to remedy 
 that though). I guess the fault is not in the DLLs themselves, it's 
 how people and Microsoft used them...
That is correct, but the obvious solution to that problem is to support the OO paradigm in dynamic linking. That is, we don't need a VM, we need DDL. Had C++ standardized its ABI, this problem would probably not exist today.
http://www.codesourcery.com/cxx-abi/ I don't know the whole deal, but I guess some decided not to go by this; I don't even know if DMC does or not.
That was added after the fact. Unfortunately, the ABIs for Linux 64 and Windows 64 are diverging. It's ridiculous. I don't know if D will be able to support a common ABI across both platforms.
Oct 23 2007
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Don Clugston wrote:
 Unfortunately, the ABIs for Linux 64 and Windows 64 are diverging. It's 
 ridiculous. I don't know if D will be able to support a common ABI 
 across both platforms.
dmd already supports two different ABIs - win32 and linux 32. There are numerous subtle differences in calling conventions, alignment, register usage, as well as the major one of name mangling.
Oct 24 2007
parent Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Don Clugston wrote:
 Unfortunately, the ABIs for Linux 64 and Windows 64 are diverging. 
 It's ridiculous. I don't know if D will be able to support a common 
 ABI across both platforms.
dmd already supports two different ABIs - win32 and linux 32. There are numerous subtle differences in calling conventions, alignment, register usage, as well as the major one of name mangling.
The Linux64/Win64 difference is worse, though. It's possible to have a pure asm function which will work on both Linux32 and Win32; that's not possible for the 64 bit case. But I'm most worried about the requirements for system exception handling.
Oct 26 2007
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Vladimir Panteleev wrote:
 With .NET, you can derive from a class in a compiled assembly without
 having access to the source. You just add the assembly in the
 project's dependencies and import the namespace with "using". In C,
 you must use the included .h files (and .h files are a pain to
 maintain anyway since you must maintain the declaration and
 implementation separately, but that's not news to you).
Yes, but that's a language bug, not anything inherent to native compilers.
 You must
 still use .lib and .di files with D and such - although they can be
 automated in the build process, it's still a hassle.
D has the potential to do better, it's just that it's a bit mired in the old school.
 Besides that, statically linking in the runtime seems to be a too
 common practice, as "DLL hell" has been a discouragement for
 dynamically-linked libraries in the past (side-by-side assemblies is
 supposed to remedy that though). I guess the fault is not in the DLLs
 themselves, it's how people and Microsoft used them...
The solution to this is to have automatically generated versions for each build of a DLL/shared library. I imagine that .net does the same thing for assemblies.
Oct 23 2007
next sibling parent Kyle Furlong <kylefurlong gmail.com> writes:
Walter Bright wrote:
 Vladimir Panteleev wrote:
 With .NET, you can derive from a class in a compiled assembly without
 having access to the source. You just add the assembly in the
 project's dependencies and import the namespace with "using". In C,
 you must use the included .h files (and .h files are a pain to
 maintain anyway since you must maintain the declaration and
 implementation separately, but that's not news to you).
Yes, but that's a language bug, not anything inherent to native compilers.
 You must
 still use .lib and .di files with D and such - although they can be
 automated in the build process, it's still a hassle.
D has the potential to do better, it's just that it's a bit mired in the old school.
What do you envision as better for the future? Or were you just speaking hypothetically? Will link compatibility be kept for 2.0, 3.0 etc?
 
 Besides that, statically linking in the runtime seems to be a too
 common practice, as "DLL hell" has been a discouragement for
 dynamically-linked libraries in the past (side-by-side assemblies is
 supposed to remedy that though). I guess the fault is not in the DLLs
 themselves, it's how people and Microsoft used them...
The solution to this is to have automatically generated versions for each build of a DLL/shared library. I imagine that .net does the same thing for assemblies.
Oct 23 2007
prev sibling parent davidl <davidl 126.com> writes:
On Wed, 24 Oct 2007 08:54:16 +0800, Walter Bright
<newshound1 digitalmars.com> wrote:

 Vladimir Panteleev wrote:
 With .NET, you can derive from a class in a compiled assembly without
 having access to the source. You just add the assembly in the
 project's dependencies and import the namespace with "using". In C,
 you must use the included .h files (and .h files are a pain to
 maintain anyway since you must maintain the declaration and
 implementation separately, but that's not news to you).
Yes, but that's a language bug, not anything inherent to native compilers.
 You must
 still use .lib and .di files with D and such - although they can be
 automated in the build process, it's still a hassle.
D has the potential to do better, it's just that it's a bit mired in the old school.
 Besides that, statically linking in the runtime seems to be a too
 common practice, as "DLL hell" has been a discouragement for
 dynamically-linked libraries in the past (side-by-side assemblies is
 supposed to remedy that though). I guess the fault is not in the DLLs
 themselves, it's how people and Microsoft used them...
The solution to this is to have automatically generated versions for each build of a DLL/shared library. I imagine that .net does the same thing for assemblies.
The solution is banning those guys from creating DLLs/shared libraries with changing interfaces. They just have no idea of what DLLs are and how DLLs should behave. Generating versions is a bad idea. Consider Firefox with its tons of plugins: almost all the plugins I use actually work well with *any* Firefox version; it just bothers me to change the version number in the jar file. That's because Firefox's APIs and JavaScript are fixed, so the interface the plugins rely on is fixed. That's basically how DLLs should be. The interface reflects the design; I can't imagine a good design yielding a changing interface. -- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
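(A hedged sketch of the fixed-interface idea in D terms - the names are hypothetical: the DLL exports a small, frozen extern(C) surface, and everything behind it may change freely between builds.)

    // plugin.d - compiled into a DLL; only this frozen surface is exported.
    export extern (C) int plugin_api_version() { return 1; }
    export extern (C) void plugin_run()
    {
        // implementation details can change between builds, as long as
        // the exported signatures above stay the same
    }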
Oct 23 2007
prev sibling parent reply serg kovrov <sergk mailinator.com> writes:
Walter Bright wrote:
 This problem is addressed by DLLs (Windows) and shared libraries (Linux).
I wanted to ask a long time ago: will the D runtime be available as a dll/so? Sorry if this was asked/answered before; I didn't manage to find an answer. -- serg
Oct 23 2007
parent Walter Bright <newshound1 digitalmars.com> writes:
serg kovrov wrote:
 Walter Bright wrote:
 This problem is addressed by DLLs (Windows) and shared libraries (Linux).
I wanted to ask a long time ago: will the D runtime be available as a dll/so?
Eventually, yes. It just lacks someone working on it.
Oct 23 2007
prev sibling next sibling parent "Janice Caron" <caron800 googlemail.com> writes:
On 10/22/07, Walter Bright <newshound1 digitalmars.com> wrote:
 3) gigantic runtimes needed
This one is the killer for me. Java is huge. .NET is even bigger. I'm just not interested in putting that much bloat onto my machine just to run the odd one or two programs.
Oct 22 2007
prev sibling parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
Walter Bright Wrote:

 Bruce Adams wrote:
 Imagine it as a compatibility layer or a shared library. If my OS
 supports POSIX I can develop for POSIX. If I develop for windows as
 well I have to learn and use other APIs. A VM is just a special kind
 of API that provides a language backend and interpreter.
It can be thought of that way, it's just entirely unnecessary to achieve those goals, and throws in a bunch of problems: 1) perennial performance issues
The difference is what you are optimising the performance of. A dynamic language using a VM is optimising the programmer's performance by allowing them to skip the compilation step, at the expense of slower code.
 2) incompatibility and inoperability with native languages
This is partly by design. A VM operating as a sandbox should not be able to go down to the hardware level. However I think good interoperability has been demonstrated. Most scripting languages sport a way of writing extensions. These must be executed by the VM somehow. And then there's SWIG for automating the generation of wrappers.
 3) gigantic runtimes needed
An interpreter itself is relatively small. I can only assume that a lot of the bloat is down to bad coding. If you look at games these days they weigh in at a ridiculous 4Gb install. No amount of uncompressed data for performance gain excuses that. I suspect it's the same sloppy coding for VMs on a smaller scale. It would not surprise me to see much smaller (and more elegantly designed) run-times on devices such as PDAs, where the bloat cannot be tolerated. <asbestos suit> I wonder if the compile-time side of D might benefit from running inside a VM when people start to do really evil and complicated things with it. </asbestos suit>
Oct 22 2007
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Bruce Adams wrote:
 Walter Bright Wrote:
 
 Bruce Adams wrote:
 Imagine it as a compatibility layer or a shared library. If my OS
  supports POSIX I can develop for POSIX. If I develop for windows
 as well I have to learn and use other APIs. A VM is just a
 special kind of API that provides a language backend and
 interpreter.
It can be thought of that way, it's just entirely unnecessary to achieve those goals, and throws in a bunch of problems: 1) perennial performance issues
The difference is what you are optimising the performance of. A dynamic language using a VM is optimising the programmer's performance by allowing them to skip the compilation step, at the expense of slower code.
I bet D compiles faster to native code <g>. In any case, I was talking about performance of apps, not the edit/compile/debug loop.
 2) incompatibility and inoperability with native languages
This is partly by design. A VM operating as a sandbox should not be able to go down to the hardware level. However I think good interoperability has been demonstrated. Most scripting languages sport a way of writing extensions. These must be executed by the VM somehow. And then there's SWIG for automating the generation of wrappers.
VMs go through some sort of marshalling and compatibility layer to connect to the outside world. Native languages can connect directly.
 3) gigantic runtimes needed
An interpreter itself is relatively small. I can only assume that a lot of the bloat is down to bad coding. If you look at games these days they weigh in at a ridiculous 4Gb install. No amount of uncompressed data for performance gain excuses that. I suspect it's the same sloppy coding for VMs on a smaller scale. It would not surprise me to see much smaller (and more elegantly designed) run-times on devices such as PDAs, where the bloat cannot be tolerated.
The reason the gigantic runtimes are needed is because the VM has to carry around with it essentially an entire small operating system's worth of libraries. They all have to be there, not just the ones the app actually uses. The VM winds up duplicating much of the functionality of the underlying OS APIs.
 <asbestos suit> I wonder if the compile time side of D might benefit
 from running inside a VM when people start to do really evil and
 complicated things with it. </asbestos suit>
I don't think D compile times have been a problem <g>.
Oct 22 2007
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Bruce Adams wrote:
 
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 
It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb
Oct 23 2007
parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
Bill Baxter Wrote:

 Bruce Adams wrote:
 
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 
It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb
That's partly my point. A lot of that could be achieved programmatically or with better compression. You get map and model files (effectively data structures representing maps and models) that are huge and hugely inefficient with it, describing low-level details with little or no abstraction. E.g. a pyramid might be made of points rather than recognising the pyramid as an abstraction. Some bright sparks have decided to use XML as their data format. It's only a little bigger and only takes a little extra time to parse. This costs little on a modern machine but can hardly be considered compact and efficient.
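As a toy illustration of that abstraction argument (the types and layout are invented for the example, not taken from any real engine), compare storing five raw vertices per pyramid with storing the pyramid itself and expanding it only on load:

struct Vec3 { float x, y, z; }

// Parametric form: five floats describe the whole shape, and every
// pyramid in a level can share this one description.
struct Pyramid
{
    Vec3  base;    // centre of the square base
    float side;    // base edge length
    float height;  // apex height above the base

    // Expand to raw geometry only when the renderer needs it.
    Vec3[] vertices() const
    {
        float h = side / 2;
        return [Vec3(base.x - h, base.y, base.z - h),
                Vec3(base.x + h, base.y, base.z - h),
                Vec3(base.x + h, base.y, base.z + h),
                Vec3(base.x - h, base.y, base.z + h),
                Vec3(base.x, base.y + height, base.z)];
    }
}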
Oct 23 2007
parent reply Nathan Reed <nathaniel.reed gmail.com> writes:
Bruce Adams wrote:
 Bill Baxter Wrote:
 
 Bruce Adams wrote:
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 
It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb
That's partly my point. A lot of that could be achieved programmatically or with better compression. You get map and model files (effectively data structures representing maps and models) that are huge and hugely inefficient with it, describing low-level details with little or no abstraction. E.g. a pyramid might be made of points rather than recognising the pyramid as an abstraction. Some bright sparks have decided to use XML as their data format. It's only a little bigger and only takes a little extra time to parse. This costs little on a modern machine but can hardly be considered compact and efficient.
Map and model file formats for most modern games that I know of *do* provide a way to factor out common geometry elements, so you only store one copy of the geometry for a streetlight (say) rather than repeating it for every streetlight in the game world. Even so, a modern game involves a hell of a lot of content. That's just the way it is. I'm not sure how compressed that data is on the hard drive. It's possible that they could shrink the data significantly with more attention to compression. However, that probably adversely impacts level loading times which are already long enough (I was playing the latest installment of Half-Life the other day, and seeing approx. 20-30 second load times). Despite your opinion about uncompressed data for performance's sake, a lot of gamers *would* rather the game take up 4GB of space than add to the load times. Thanks, Nathan Reed
Oct 23 2007
parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
Nathan Reed Wrote:

 Bruce Adams wrote:
 Bill Baxter Wrote:
 
 Bruce Adams wrote:
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 
It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb
That's partly my point. A lot of that could be achieved programmatically or with better compression. You get map and model files (effectively data structures representing maps and models) that are huge and hugely inefficient with it, describing low-level details with little or no abstraction. E.g. a pyramid might be made of points rather than recognising the pyramid as an abstraction. Some bright sparks have decided to use XML as their data format. It's only a little bigger and only takes a little extra time to parse. This costs little on a modern machine but can hardly be considered compact and efficient.
Map and model file formats for most modern games that I know of *do* provide a way to factor out common geometry elements, so you only store one copy of the geometry for a streetlight (say) rather than repeating it for every streetlight in the game world. Even so, a modern game involves a hell of a lot of content. That's just the way it is. I'm not sure how compressed that data is on the hard drive. It's possible that they could shrink the data significantly with more attention to compression. However, that probably adversely impacts level loading times which are already long enough (I was playing the latest installment of Half-Life the other day, and seeing approx. 20-30 second load times). Despite your opinion about uncompressed data for performance's sake, a lot of gamers *would* rather the game take up 4GB of space than add to the load times. Thanks, Nathan Reed
Don't get hung up on the geometry example. My example generator is broken. It is my contention that both the performance and the compactness can be improved, given the time and effort. I imagine it varies a lot from shop to shop, but typically, from what I hear, they are working to tight deadlines with poor processes. Hopefully they still at least use the rule "get it right, then get it fast", but they miss off the "then get it small" at the end. The huge install sizes and huge patches to supposedly "complete" games are one result of this. Battlefield 2 is painfully slow to load each tiny level and yet still has a 4Gb install. It's almost a part of the package now. If someone released a game that only needed a CD and not a DVD, a lot of people would (wrongly) assume it was less feature-rich than the DVD version. Take a look at a good shareware game and you see more of the full craft at work, partly because download sizes are restrictive (though less so than they were).
Oct 23 2007
parent Robert Fraser <fraserofthenight gmail.com> writes:
Bruce Adams wrote:
 Nathan Reed Wrote:
 
 Bruce Adams wrote:
 Bill Baxter Wrote:

 Bruce Adams wrote:
 An interpreter itself is relatively small. I can only assume that a
 lot of the bloat is down to bad coding. If you look at games these
 days they weigh in at a ridiculous 4Gb install. No amount of
 uncompressed data for performance gain excuses that. 
It's not the code that makes modern games eat up 4Gb of space, it's the textures, animations, 3D models, audio, video cut scenes, etc. The code is a pretty small part of that. --bb
That's partly my point. A lot of that could be achieved programmatically or with better compression. You get map and model files (effectively data structures representing maps and models) that are huge and hugely inefficient with it, describing low-level details with little or no abstraction. E.g. a pyramid might be made of points rather than recognising the pyramid as an abstraction. Some bright sparks have decided to use XML as their data format. It's only a little bigger and only takes a little extra time to parse. This costs little on a modern machine but can hardly be considered compact and efficient.
Map and model file formats for most modern games that I know of *do* provide a way to factor out common geometry elements, so you only store one copy of the geometry for a streetlight (say) rather than repeating it for every streetlight in the game world. Even so, a modern game involves a hell of a lot of content. That's just the way it is. I'm not sure how compressed that data is on the hard drive. It's possible that they could shrink the data significantly with more attention to compression. However, that probably adversely impacts level loading times which are already long enough (I was playing the latest installment of Half-Life the other day, and seeing approx. 20-30 second load times). Despite your opinion about uncompressed data for performance's sake, a lot of gamers *would* rather the game take up 4GB of space than add to the load times. Thanks, Nathan Reed
Don't get hung up on the geometry example. My example generator is broken. It is my contention that both the performance and the compactness can be improved, given the time and effort. I imagine it varies a lot from shop to shop, but typically, from what I hear, they are working to tight deadlines with poor processes. Hopefully they still at least use the rule "get it right, then get it fast", but they miss off the "then get it small" at the end. The huge install sizes and huge patches to supposedly "complete" games are one result of this. Battlefield 2 is painfully slow to load each tiny level and yet still has a 4Gb install. It's almost a part of the package now. If someone released a game that only needed a CD and not a DVD, a lot of people would (wrongly) assume it was less feature-rich than the DVD version. Take a look at a good shareware game and you see more of the full craft at work, partly because download sizes are restrictive (though less so than they were).
I'm guessing it's not cost-efficient to spend development time on minimizing file size, since most PC gamers probably don't care. At $30/hour of developer time, it's hard to justify investing in something that's a non-issue to most of the audience... although, with online distribution systems like Steam so popular, it's becoming a bigger issue now.
Oct 23 2007
prev sibling next sibling parent Radu <radu.racariu void.space> writes:
Mike wrote:
 Hi,

 I have some advanced knowledge of programming with C and C++.
 While I like C for its simplicity and speed, it lacks some important
functionality (like OO). I'm not very fond of C++, since it is quite clumsy.
(But you know all that already)

 Anyway, I was looking for a new programming language for little projects. I
looked into the specs of the D language and became quite fond of it. Anyway, I

 I am not experienced enough to compare the two simply on the basis of their
specifications. I tried finding some comparison on the internet but failed to
find anything more recent than from 2003.

   
Hi Mike, both languages and their respective development arsenals are well suited for a specific pool of tasks. C# comes with a vast set of libraries, good documentation and a large community. Purely as a language I consider it mediocre, a better Java as others put it, but this doesn't necessarily subtract from its potential value when combined with the library, IDE, documentation, community and industry support (jobs...). D, on the other hand, is a better, cool language with some great potential. Even if right now its library set is not as vast and orthogonal, progress is made every day on improving the situation. IDEs and debuggers are being developed (some already usable), and packaging, build and distribution solutions are provided. Its community, even if small, packs a lot of smart people, a situation rarely seen on the Java or .Net side :) and you learn *a lot* from them. Now, purely addressing your needs for a language geared towards developing small utilities, you have two options (in my opinion, of course): if you are distributing your programs to a limited audience like a corporate department, or to an environment you can easily control, .net + C# will serve you well. On the other hand, if you are distributing your apps to a larger audience, especially if you are doing it for profit, .Net will cost you dollars! Its large framework, its sometimes perceived lack of responsiveness, plus the heavy resource hogging (a common sin of the VM, be it Java or .Net) can be roadblocks for attracting potential users. Here D and its compiled nature can help you greatly: the lack of a vast library is an advantage now as you can control what goes in, the performance is better (if you code correctly) and your users are happy. You will be happier too, as the language puts a smile on your face most of the time :). That being said, I hope I was of some help. Regards, Radu
Oct 22 2007
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:

These are the main advantages of C# I have personally seen (beside the
IDE, the STD lib, the more widespread usage, the standard GUI toolkit,
etc):


C# AAs (tested on dotnet 3.0) are up to many times faster than D AAs,
especially when the memory used by the AA becomes large (like 50 MB).
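For anyone who wants to reproduce the comparison on the D side, here is a minimal sketch of such an AA stress test (the key pattern, the sizes and the std.datetime.stopwatch timing are my own choices, and that module postdates the Phobos of this thread; this is not bearophile's actual benchmark):

import std.stdio;
import std.datetime.stopwatch : AutoStart, StopWatch;

void main()
{
    enum N = 2_000_000;              // enough entries for a multi-MB table
    int[long] aa;

    auto sw = StopWatch(AutoStart.yes);
    foreach (long i; 0 .. N)
        aa[i * 7919] = cast(int) i;  // scattered keys
    writefln("insert: %s ms", sw.peek.total!"msecs");

    sw.reset();
    long hits = 0;
    foreach (long i; 0 .. N)
        if (auto p = (i * 7919) in aa)   // `in` yields a pointer or null
            hits += *p;
    writefln("lookup: %s ms (%s hits)", sw.peek.total!"msecs", hits);
}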

C# code can be automatically parallelized by the runtime, so on multiple
cores even small proggies with loops can end up being quite a bit faster
(up to almost two times faster, in some tests I have done), and you don't
need to change your sourcecode to do that (in C++ you can use OpenMP, but
you have to modify the code and you have to be careful to avoid breaking
your code in many interesting ways).

On dotnet you can use several languages (C# among them); sometimes you can even mix them to write your programs.

(Note: the shootout used Mono, not dotnet; dotnet is probably much faster).

Bye,
bearophile
Oct 24 2007
next sibling parent Radu <radu.racariu void.space> writes:
bearophile wrote:

C# code can be automatically parallelized by the runtime, so on multiple
cores even small proggies with loops can end up being quite a bit faster
(up to almost two times faster, in some tests I have done), and you don't
need to change your sourcecode to do that (in C++ you can use OpenMP, but
you have to modify the code and you have to be careful to avoid breaking
your code in many interesting ways).
   
Some C++ compilers claim to do that too, mainly the Intel compilers (http://www.intel.com/cd/software/products/asmo-na/eng/compilers/cwin/279578.htm). I see no reason why D can't do that as well, especially with the new constructs coming in 2.0.
Oct 24 2007
prev sibling next sibling parent reply Joel Lucsy <jjlucsy gmail.com> writes:
bearophile wrote:

C# code can be automatically parallelized by the runtime, so on multiple
cores even small proggies with loops can end up being quite a bit faster
(up to almost two times faster, in some tests I have done), and you don't
need to change your sourcecode to do that (in C++ you can use OpenMP, but
you have to modify the code and you have to be careful to avoid breaking
your code in many interesting ways).
Oh? Since when? I know for a fact it doesn't, especially since MS has a new library they are constructing for .Net 3.5. Some links are: http://msdn.microsoft.com/msdnmag/issues/07/10/PLINQ/default.aspx http://en.wikipedia.org/wiki/Task_Parallel_Library#TPL -- Joel Lucsy "The dinosaurs became extinct because they didn't have a space program." -- Larry Niven
Oct 24 2007
parent reply bearophile <bearophileHUGS lycos.com> writes:
Joel Lucsy Wrote:
 Oh? Since when? I know for a fact it doesn't, especially since MS has a 
 new library they are constructing for .Net 3.5.
I have done real tests of the nbody problem (from the Shootout site) of C# code on a beta version of dotnet 3.0, and it runs almost two times faster than the D version (and the C++ version without parallel annotations); I have done those tests with a friend :-) I have seen with my own eyes that it uses both cores of the CPU while running, while the D version uses only one. So maybe we are talking about slightly different things... There are many ways to mess things up with quick tests, so I won't push this topic more :-) Bye, bearophile
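For reference, this is roughly the shape of such a parallel force loop in D, sketched with the later std.parallelism module (which did not exist when this thread was written); the physics is simplified and all names are mine:

import std.math : sqrt;
import std.parallelism : parallel;

struct Body { double x, y, z, vx, vy, vz, mass; }

void step(Body[] bodies, double dt)
{
    // Velocity pass: each iteration reads positions (fixed during this
    // pass) and writes only its own element, so it can run across cores.
    foreach (ref b; parallel(bodies))
    {
        foreach (j; 0 .. bodies.length)
        {
            double dx = bodies[j].x - b.x;
            double dy = bodies[j].y - b.y;
            double dz = bodies[j].z - b.z;
            double d2 = dx * dx + dy * dy + dz * dz;
            if (d2 == 0) continue;   // skip the body itself
            double f = bodies[j].mass * dt / (d2 * sqrt(d2));
            b.vx += dx * f;
            b.vy += dy * f;
            b.vz += dz * f;
        }
    }
    foreach (ref b; bodies)          // serial position pass
    {
        b.x += b.vx * dt;
        b.y += b.vy * dt;
        b.z += b.vz * dt;
    }
}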
Oct 24 2007
next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
bearophile Wrote:

 Joel Lucsy Wrote:
 Oh? Since when? I know for a fact it doesn't, especially since MS has a 
 new library they are constructing for .Net 3.5.
I have done real tests of the nbody problem (from the Shootout site) of C# code on a beta version of dotnet 3.0, and it runs almost two times faster than the D version (and the C++ version without parallel annotations); I have done those tests with a friend :-) I have seen with my own eyes that it uses both cores of the CPU while running, while the D version uses only one. So maybe we are talking about slightly different things... There are many ways to mess things up with quick tests, so I won't push this topic more :-) Bye, bearophile
Indeed, automatic parallelization is a strong argument for VMs right now, since it doesn't require code modification. At the conference, Walter mentioned that D might be getting automatic parallelization of pure functions, though. I'm just afraid this won't apply to member functions, and since my programming style is so OO, it won't help me.
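For context, this is the kind of function that proposal targets: a D 2.0 pure function may not touch global state, so calls on disjoint data can in principle be scheduled on different cores (the function below is my own toy example, not from the conference talk):

// `pure` guarantees the result depends only on the arguments, with no
// hidden reads or writes of globals - exactly what an auto-parallelizing
// compiler needs in order to prove two calls independent.
pure long sumOfSquares(immutable(int)[] row)
{
    long total = 0;
    foreach (v; row)
        total += cast(long) v * v;
    return total;
}

In the D 2.x that eventually shipped, pure can annotate member functions as well, with `this` treated as just another argument.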
Oct 24 2007
parent Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
Robert Fraser Wrote:

 bearophile Wrote:
 
 Joel Lucsy Wrote:
 Oh? Since when? I know for a fact it doesn't, especially since MS has a 
 new library they are constructing for .Net 3.5.
I have done real tests of the nbody problem (from the Shootout site) of C# code on a beta version of dotnet 3.0, and it runs almost two times faster than the D version (and the C++ version without parallel annotations); I have done those tests with a friend :-) I have seen with my own eyes that it uses both cores of the CPU while running, while the D version uses only one. So maybe we are talking about slightly different things... There are many ways to mess things up with quick tests, so I won't push this topic more :-) Bye, bearophile
Indeed, automatic parallelization is a strong argument for VMs right now, since it doesn't require code modification. At the conference, Walter mentioned that D might be getting automatic parallelization of pure functions, though. I'm just afraid this won't apply to member functions, and since my programming style is so OO, it won't help me.
Why is that an argument for VMs? There's no reason a compiler backend can't emit code that parallelises across multiple CPUs. M$ has a lot of money and bodies to throw at the problem, so it's managed to get something up and running quickly. We open sourcers need to pull our fingers out and leverage our synergies. :)
Oct 24 2007
prev sibling parent "Dave" <Dave_member pathlink.com> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:ffodsg$13q2$1 digitalmars.com...
 Joel Lucsy Wrote:
 Oh? Since when? I know for a fact it doesn't, especially since MS has a
 new library they are constructing for .Net 3.5.
I have done real tests of the nbody problem (from the Shootout site) of C# code on a beta version of dotnet 3.0, and it runs almost two times faster than the D version (and the C++ version without parallel annotations); I have done those tests with a friend :-) I have seen with my own eyes that it uses both cores of the CPU while running, while the D version uses only one. So maybe we are talking about slightly different things... There are many ways to mess things up with quick tests, so I won't push this topic more :-)
That sounds interesting - do you have a download link for the new runtime? Lately MS has been allowing the general public to download that type of stuff. I just got a dual-core machine and would like to see how it runs. Last I looked, they had a v3.0 and now v3.5 beta Framework, but that just used the v2.xxx runtime. Thanks.
 Bye,
 bearophile 
Oct 24 2007
prev sibling parent reply Bruce Adams <tortoise_74 yeah.who.co.uk> writes:
bearophile Wrote:


These are the main advantages of C# I have personally seen (beside the
IDE, the STD lib, the more widespread usage, the standard GUI toolkit,
etc):

C# AAs (tested on dotnet 3.0) are up to many times faster than D AAs,
especially when the memory used by the AA becomes large (like 50 MB).

C# code can be automatically parallelized by the runtime, so on multiple
cores even small proggies with loops can end up being quite a bit faster
(up to almost two times faster, in some tests I have done), and you don't
need to change your sourcecode to do that (in C++ you can use OpenMP, but
you have to modify the code and you have to be careful to avoid breaking
your code in many interesting ways).

On dotnet you can use several languages (C# among them); sometimes you
can even mix them to write your programs.

(Note: the shootout used Mono, not dotnet; dotnet is probably much faster).
 
 Bye,
 bearophile
Can you post a link? As far as I can see from my random googling, M$ have a library-based solution. The VM is doing nothing special. Take for example: http://msdn.microsoft.com/msdnmag/issues/07/10/Futures/default.aspx Basically they seem to have a few useful primitives (at the library level) like parallel.for and "delegate", which is more akin to a thread-level unix fork() than to D delegates. Another useful abstraction is a scheduler, though this "dispatcher" seems limited to use in UIs. Another couple of useful abstractions are futures and replicable classes. Regards, Bruce.
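As a rough D analogue of the futures primitive (sketched with the later std.parallelism module, not the MS library above; slowSum is an invented stand-in for real work):

import std.parallelism : task, taskPool;
import std.stdio;

long slowSum(long n)
{
    long s = 0;
    foreach (i; 0 .. n)
        s += i;
    return s;
}

void main()
{
    // A future: start the computation on a worker thread...
    auto fut = task!slowSum(100_000_000L);
    taskPool.put(fut);

    // ...do unrelated work here while it runs...

    // ...and block only at the point where the result is needed.
    writeln(fut.yieldForce);
}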
Oct 25 2007
parent "Dave" <Dave_member pathlink.com> writes:
"Bruce Adams" <tortoise_74 yeah.who.co.uk> wrote in message 
news:ffph3r$6ji$1 digitalmars.com...
 bearophile Wrote:


 These are the main advantages of C# I have personally seen (beside the 
 IDE, the STD lib, the more widespread usage, the standard GUI toolkit, 
 etc):

 C# AAs (tested on dotnet 3.0) are up to many times faster than D AAs, 
 especially when the memory used by the AA becomes large (like 50 MB).

 C# code can be automatically parallelized by the runtime, so on multiple 
 cores even small proggies with loops can end up being quite a bit faster 
 (up to almost two times faster, in some tests I have done), and you don't 
 need to change your sourcecode to do that (in C++ you can use OpenMP, but 
 you have to modify the code and you have to be careful to avoid breaking 
 your code in many interesting ways).

 On dotnet you can use several languages (C# among them); sometimes you 
 can even mix them to write your programs.

 (Note: the shootout used Mono, not dotnet; dotnet is probably much faster).

 Bye,
 bearophile
Can you post a link? As far as I can see from my random googling, M$ have a library-based solution. The VM is doing nothing special. Take for example: http://msdn.microsoft.com/msdnmag/issues/07/10/Futures/default.aspx Basically they seem to have a few useful primitives (at the library level) like parallel.for and "delegate", which is more akin to a thread-level unix fork() than to D delegates. Another useful abstraction is a scheduler, though this "dispatcher" seems limited to use in UIs. Another couple of useful abstractions are futures and replicable classes.
I searched the MS site for a beta of the .NET runtime as well with no luck. I'd like to see a link also.
 Regards,

 Bruce. 
Oct 25 2007