digitalmars.D - D vs VM-based platforms
- lubosh (3/3) Apr 30 2007 Hi all,
- Ary Manzana (26/31) Apr 30 2007 Me too. :-)
- Jascha Wetzel (10/15) Apr 30 2007 This point is actually only about standard libraries, not VMs.
- Walter Bright (12/17) Apr 30 2007 The thing is, you don't need a VM to get such portability. You need a
- Jascha Wetzel (12/34) Apr 30 2007 to have a format for distribution that's still abstract but not human
- Walter Bright (4/9) Apr 30 2007 There's no point to that, since there are very good bytecode => java
- Sean Kelly (8/18) Apr 30 2007 What I find amazing is that a good bytecode => source translator for
- Jascha Wetzel (5/15) Apr 30 2007 isn't that mainly because Java's .class files also contain declarations?
- Jan Claeys (9/18) Apr 30 2007 But what people call a "VM" is in fact an interpreter or a (JIT)
- Sean Kelly (13/17) Apr 30 2007 One issue with run-time optimization is its impact on performance. A
- Jascha Wetzel (5/25) Apr 30 2007 even if JIT does equally well, it's basically O(n) vs. O(1), n being the
- Jan Claeys (18/36) Apr 30 2007 Well, in practice most Python code just runs on the Python bytecode
- Daniel Keep (22/26) Apr 30 2007 The really interesting stuff on Python is happening over at the PyPy[1]
- Jan Claeys (8/17) May 01 2007 It's a very interesting project, but RPython is not the same language as
- Paul Findlay (11/14) May 02 2007 AFAIK, just the PyPy compiler for python (amongst other tools) is writte...
- gareis (18/22) Apr 30 2007 And then you're sacrificing two or three cores to run one thread, rather...
- Dave (13/39) Apr 30 2007 But the optimizations are the same (basically), and the best and brighte...
- Sean Kelly (8/22) May 01 2007 That's because (I suspect) most of Sun's big customers use their static
- Walter Bright (3/6) Apr 30 2007 That's the theory. In practice, Python programmers who need performance
- Bill Baxter (18/25) Apr 30 2007 In practice with Python I think what happens is more like:
- Jan Claeys (15/23) Apr 30 2007 Just like some people write libraries in Fortran or assembler or some
- Daniel Keep (18/30) Apr 30 2007 Or out of sheer bloody-mindedness.
- Stephen Waits (5/8) May 01 2007 You should qualify this - I'm guessing you mean for a single dot
- Daniel Keep (16/26) May 01 2007 Sorry; yes, you're right: it's for a single dot product.
- Benji Smith (6/14) May 01 2007 I'm also assuming that's for some low-dimensionality vector? I'd
- Daniel Keep (17/34) May 01 2007 3D single-precision. The problem seems to be a combination of unaligned
- 0ffh (4/6) May 01 2007 Anyways, I admit I miss the "inline" keyword - fortunately it can be
- Benji Smith (28/30) Apr 30 2007 Some of the benefits of using a VM platform:
- Brad Roberts (12/49) Apr 30 2007 Interesting. Paraphrasing your reply: These are benefits of VM's, but
- Walter Bright (18/50) Apr 30 2007 That's an attribute of the language, not the VM. COM does the same thing...
- Benji Smith (56/100) May 01 2007 Actually, COM illustrates my point quite nicely. If you're going to
- BCS (16/16) May 01 2007 Reply to Benji,
- Benji Smith (13/33) May 01 2007 Sure. Fair enough. You *could* maybe do all of that stuff with native
- Walter Bright (32/105) May 01 2007 Not if the language is designed to be COM-compatible from the start. You...
- Benji Smith (22/54) May 01 2007 Aha. Very interesting point. I hadn't thought of that.
- Sean Kelly (8/14) May 01 2007 Since the name "virtual machine" implies the virtualization of a
- Benji Smith (6/23) May 01 2007 I agree. A good virtual machine will provide all of the features of a
- Walter Bright (22/56) May 01 2007 It turns out that Java and C# both can map directly onto COM, so if one
- Fredrik Olsson (11/21) May 05 2007 Visual Basic.
- Tom (7/44) Apr 30 2007 You people can list a million of (mostly) theoretical benefits in having...
- Mike Parker (24/30) May 01 2007 I strongly disagree that Java and C# are 'damn slow'. Have you seen some...
- Tom (11/45) May 01 2007 I've seen no big games written in Java/C# (so I can't really hold a posi...
- Jan Claeys (9/11) May 01 2007 "Virtual machines" (implemented in software) & "real machines"
- Don Clugston (8/24) Apr 30 2007 I think a big reason for .NET was the Itanium. It was going to make it
- Walter Bright (8/10) Apr 30 2007 Java originally was intended to be for embedded systems with very tight
- Sean Kelly (10/15) Apr 30 2007 Java runs on a VM largely because it allows proprietary applications to
- lubosh (5/9) Apr 30 2007 I don't mind to do build for every target platform. There's a lot of I/O...
- Don Clugston (8/20) Apr 30 2007 I don't think JIT as performed by Java and .NET makes any sense; it's
- Mike Parker (3/5) May 01 2007 You might be interested in this article:
- Bruno Medeiros (6/10) May 03 2007 Ah, interesting, so that's why the installtion of the the .NET framework...
- Pragma (4/15) May 03 2007 But what's truly ridiculous is that .NET has exactly *one* target platfo...
- Jari-Matti Mäkelä (9/24) May 03 2007 Hehe, on slashdot a 'you must be new here' reply would be modded +5
- Sean Kelly (10/17) May 03 2007 To be fair, Ms does target ARM as well, for its handheld devices.
- Jari-Matti Mäkelä (3/14) May 03 2007 I was a bit over dramatic. It might also be the only way to get rid of
- Anders Bergh (6/11) May 03 2007 Don't forget about PowerPC and IA-64. XNA lets you write games for the
- James Dennett (8/29) May 03 2007 Being a standard doesn't mean that it's free of patent
- Pragma (7/33) May 04 2007 True enough. Perhaps this is your reply sailing right over my head, but...
- Jari-Matti Mäkelä (3/8) May 04 2007 Oh, that. I've probably spent one year too many compiling Gentoo, it did...
- Joel Lucsy (9/10) May 03 2007 Oh? So you're saying the optimized code coming out of the NGEN sequence
- Pragma (4/13) May 04 2007 Good point. -1 for me for not recalling what started this particular po...
- Bruno Medeiros (5/19) May 06 2007 Yup, that's was I was going to say, platform != CPU configuration. ^^
- Boris Kolar (4/5) May 02 2007 Both JVM and CLR (.NET) are badly designed. Both platforms are too tight...
- Trish Jones (4/9) May 02 2007 Have you checked out the work of Ian Piumarta? He has done some very int...
Hi all, I wonder what you all think about the future of programming platforms. Honestly, I feel quite refreshed to re-discover native compilation in D again. It seems so much more lightweight than the .NET framework or Java. Why is there so much push on the market (Microsoft, Sun) for executing source code within virtual machines? Do we really need yet another layer between hardware and our code? What's your opinion? I wonder how much of a stir D would cause if it had a nice, powerful standardized library and a really good IDE (like VS.NET).
Apr 30 2007
lubosh wrote:
> Hi all, I wonder what you all think about the future of programming platforms. Honestly, I feel quite refreshed to re-discover native compilation in D again.

Me too. :-)

> It seems so much more lightweight than .NET framework or Java. Why there's so much push on the market (Microsoft, Sun) for executing source code within virtual machines? Do we really need yet another layer between hardware and our code? What's your opinion? I wonder how much stir up would D cause if it would have nice and powerful standardized library and really good IDE (like VS.NET)

I think the thought is: if in every machine there is a virtual machine with a 10Gb standard library, then, although it's slower than native code (but we're improving it each day!), writing software is much easier and faster. Why? Because you already have most of the common functions and classes written for you: xml, streams, collections, network, etc. This also means that if your public method receives a "List", because it's standard, everyone understands it quickly. Also the standard library can be improved, so each program improves as well. Further, you have reflection, which gives you tremendous power to extend your code with plugins (like in the Eclipse framework).

But... every time I open an app and it takes one to two minutes to start, I remember the good old native code, and that's why I'd like D to become more popular. And I know the language itself isn't enough these days, and that a good standard library is a must (Phobos and Tango), as well as a really good IDE. There isn't a "really good" IDE yet, but it's only a matter of time. Take a look at what the next release of Descent will have: http://www.dsource.org/projects/descent/browser/trunk/descent.ui/screenshots/descent_ddbg.jpg?format=raw
Apr 30 2007
Ary Manzana wrote:
> Why? Because you already have most of the common functions and classes written for you: xml, streams, collections, network, etc. This also means that if your public method recieves a "List", because it's standard, everyone understands it quickly. Also the standard library can be improved, so each program improves as well.

This point is actually only about standard libraries, not VMs. As I see it, VMs really are only about portability. Portability in theory also means better (more individual) code optimization. VMs also make compilers a lot simpler: the difficult, platform-dependent part of code optimization lies in the VM. Features like reflection add versatility and eat a lot of resources. But little of that is actually dependent on the VM concept. Reflection can be done natively, as well (also see FlectioneD).
Apr 30 2007
Jascha Wetzel wrote:
> This point is actually only about standard libraries, not VMs. As i see it, VMs actually are only about portability. Portability in theory also means better (more individual) code optimzation. VMs also make compilers a lot simpler. the difficult, platform dependent part of code optimzation lies in the VM.

The thing is, you don't need a VM to get such portability. You need a language that doesn't have implementation-defined or undefined behavior. It's *already* abstracted away from the target machine; why add another layer of abstraction? I just don't get the reason for a VM. It seems like a solution looking for a problem.

As for "makes building compilers easier", that is solved by defining an intermediate representation (no VM needed), building front ends that write that intermediate representation, and building separate optimizers and back ends that turn the intermediate representation into machine code. This is an old idea, and it works fine (see gcc!).
Apr 30 2007
Walter Bright wrote:
> It's *already* abstracted away from the target machine, why add another layer of abstraction?

To have a format for distribution that's still abstract but not human readable. But I agree that VMs are rather obsolete. One could as well ship intermediate code and finish compilation at first start or at installation. Ideally, that last phase of compilation could take optional processor units like SSE into account. Actually, I think this, or a multi-target binary format that allows for alternate code units at function level, would be a very effective approach to these issues. The latter could be implemented by having the compiler generate several versions of a function and letting a detection unit decide at startup which version to link.
Apr 30 2007
Jascha Wetzel wrote:
>> It's *already* abstracted away from the target machine, why add another layer of abstraction?
> to have a format for distribution that's still abstract but not human readable.

There's no point to that, since there are very good bytecode => java source translators. Running your source through a comment stripper would be about as good.
Apr 30 2007
Walter Bright wrote:
> Jascha Wetzel wrote:
>> to have a format for distribution that's still abstract but not human readable.
> There's no point to that, since there are very good bytecode => java source translators. Running your source through a comment stripper would be about as good.

What I find amazing is that a good bytecode => source translator for .NET will often produce the original source code... exactly. I assume the information is all present for reflection purposes, but I've never been able to get over it. The first time it was shown to me I expected to see some half-readable mess, not a photographic duplicate of the original source code.

Sean
Apr 30 2007
Walter Bright wrote:
> There's no point to that, since there are very good bytecode => java source translators. Running your source through a comment stripper would be about as good.

Isn't that mainly because Java's .class files also contain declarations? In general it shouldn't be so easy to translate intermediate code back to source code, especially if general optimizations have already been applied.
Apr 30 2007
On Mon, 30 Apr 2007 10:06:47 -0700, Walter Bright <newshound1 digitalmars.com> wrote:
> I just don't get the reason for a VM. It seems like a solution looking for a problem. As for the "makes building compilers easier", that is solved by defining an intermediate representation (don't need a VM), and building front ends to write to that intermediate representation, building separate optimizers and back ends to turn the intermediate representation into machine code. This is an old idea, and works fine (see gcc!).

But what people call a "VM" is in fact an interpreter or a (JIT) compiler for such an "intermediate representation"... ;-)

And I think in the case of dynamic languages like Python, a JIT compiler can often create much better code at run-time than a compiler could when compiling before run-time.

-- JanC
Apr 30 2007
Jan Claeys wrote:
> And I think in the case of dynamic languages like Python, a JIT-compiler often can create much better code at run-time than a compiler could do when compiling it before run-time.

One issue with run-time optimization is its impact on performance. A traditional compiler can take as long as it wants to exhaustively optimize an application, while a JIT compiler may only optimize in a way that does not hurt application responsiveness or performance. At SDWest last year, there was a presentation on C++ vs. Java performance, and one of the most significant factors was that most Java JIT compilers perform little if any optimization, while C++ compilers optimize exhaustively. That said, JIT optimization is still a relatively new practice, and with more cores being added to computers these days it's entirely possible that a JIT optimizer could run on one or more background CPUs and do much better than today.

Sean
Apr 30 2007
Sean Kelly wrote:
> That said, JIT optimization is still a relatively new practice, and with more cores being added to computers these days it's entirely possible that a JIT optimizer could run on one or more background CPUs and do much better than today.

Even if JIT does equally well, it's basically O(n) vs. O(1), n being the number of runs of the program. Unless the advantage of dynamic optimization outweighs the cost of runtime compilation, it's unlikely to be more efficient than pre-runtime compilation.
Apr 30 2007
On Mon, 30 Apr 2007 11:51:07 -0700, Sean Kelly <sean f4.ca> wrote:
> One issue with run-time optimization is its impact on performance. A traditional compiler can take as long as it wants to exhaustively optimize an application, while a JIT-compiler may only optimize in a way that does not hurt application responsiveness or performance. [...] That said, JIT optimization is still a relatively new practice, and with more cores being added to computers these days it's entirely possible that a JIT optimizer could run on one or more background CPUs and do much better than today.

Well, in practice most Python code just runs on the Python bytecode interpreter (and in most other cases on the Java or .NET VMs), and with good reason. Some code runs faster when using the third-party 'psyco' JIT compiler (which only exists for x86 anyway), while other code gains nothing from it (and thus gets slower due to the additional compilation step). Fortunately you can also tell this JIT at runtime what you want compiled to native code and what not.

OTOH, I think every attempt so far to compile Python code into native machine code beforehand has resulted in code that runs up to 100x _slower_ than the interpreter(!). ;-)

The "problem" with Python is that it's dynamic, and so *nothing* is known about anything that touches something outside the current module...

-- JanC
Apr 30 2007
Jan Claeys wrote:
> Well, in practice most Python code just runs on the Python bytecode interpreter (and in most other cases on the Java or .NET VMs), and with a good reason. ...

The really interesting stuff on Python is happening over at the PyPy[1] project. They're basically trying to write a Python interpreter in a restricted subset of Python called RPython, which can then be translated into other formats like C or LLVM. One of the really weird things is that you can run various transformations over the RPython code to change how it works without ever having to rewrite any of the actual code. The classic example of this is integrating Stackless Python into the interpreter by basically throwing a switch.

It's all very cool, and really hard to understand. :P

-- Daniel

[1] http://codespeak.net/pypy/

-- 
int getRandomNumber() {
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}
http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP http://hackerkey.com/
Apr 30 2007
On Tue, 01 May 2007 11:25:06 +1000, Daniel Keep <daniel.keep.lists gmail.com> wrote:
> The really interesting stuff on Python is happening over at the PyPy[1] project.

It's a very interesting project, but RPython is not the same language as Python, and they left out some of the things that make compiling Python to native code so difficult...

> It's all very cool, and really hard to understand. :P

Right, I didn't try to understand the details yet. ;)

-- JanC
May 01 2007
Jan Claeys wrote:
> It's a very interesting project, but RPython is not the same language as Python, and they left out some of the things that make compiling Python to native code so difficult...

AFAIK, just the PyPy compiler for Python (amongst other tools) is written in RPython. It can compile all normal Python code (well, all of it that is currently supported). This is the same approach as Squeak takes for Smalltalk [1] and Rubinius [2] for Ruby. What you described more accurately reflects Shedskin [3], a Python-to-C++ compiler that only supports a subset of Python.

 - Paul

1: http://www.squeak.org/Features/TheSqueakVM/
2: http://en.wikipedia.org/wiki/Rubinius
3: http://mark.dufour.googlepages.com/home
May 02 2007
Sean Kelly wrote:
> ... That said, JIT optimization is still a relatively new practice, and with more cores being added to computers these days it's entirely possible that a JIT optimizer could run on one or more background CPUs and do much better than today.

And then you're sacrificing two or three cores to run one thread, rather than sacrificing most of the developer's computational power at compile time and running as efficiently (or more so) with at most one core per thread.

Now, if the compiler cached its optimizations on disk, you could potentially get similar optimizations after some large number of runs. However, in the interim the program would run slower than the optimized precompiled code, and would start slower than the JIT code that didn't cache its optimizations (because that caching takes disk time, and that's one of the most expensive resources).

Of course, your runtime compiler can optimize for your user's current CPU, even if that changes. I suppose you could create binaries optimized for each CPU and have a script determine which is appropriate for the current CPU, but that's spending disk space (also quite scarce) in exchange for CPU time (relatively abundant). I don't know how to solve this problem, but it's an interesting one.
Apr 30 2007
gareis wrote:
> Sean Kelly wrote:
>> That said, JIT optimization is still a relatively new practice, and with more cores being added to computers these days it's entirely possible that a JIT optimizer could run on one or more background CPUs and do much better than today.
> And then you're sacrificing two or three cores to run one thread, rather than sacrificing most of the developer's computational power at compile time and running as efficiently (or more so) with at most one core per thread.

But the optimizations are the same (basically), and the best and brightest have been at it for years. I'd venture a guess that more has been / still is being spent on VM research than on static compiler research. Over roughly the past 10 years, I've seen several articles promising Java would exceed C and Fortran in 'a year or two'. A couple of years later I also recall finding some pretty large performance regressions between major releases of their Java VM. I still think 1.3 does some things better than 1.6, and it's been, what, 5 years? Interestingly, Sun is still improving their static compiler tools though.

> Now, if the compiler cached its optimizations on disk, you could potentially get similar optimizations after some large number of runs.

Sun's Hotspot does this (but it's not explicitly cached on disk).

> However, in the interim the program would run slower than the optimized precompiled code, and would start slower than the JIT code that didn't cache its optimizations (because that caching takes disk time, and that's one of the most expensive resources).

That's why Sun has both a 'client' and a 'server' VM.

> Of course, your runtime compiler can optimize for your user's current CPU, even if that changes. I suppose you could create binaries optimized for each CPU and have a script determine which is appropriate for the current CPU, but that's spending disk space (also quite scarce) in exchange for CPU time (relatively abundant). I don't know how to solve this problem, but it's an interesting one.

I know Intel and (IIRC) to a lesser extent MS VS2005 C/C++, as well as the Sun and HP compilers, will compile this right into the binary for you (and then the best code is picked at runtime). Seems to work pretty well from what I've seen.
Apr 30 2007
Dave wrote:
> But the optimizations are the same (basically), and the best and brightest have been at it for years. [...] Interestingly, Sun is still improving their static compiler tools though.

That's because (I suspect) most of Sun's big customers use their static compilers.

On Java speed... I still debug in emacs instead of using Sun Studio because the latter is irritatingly slow. Java may have the potential to produce fast code, but I wish that were more evident in the performance of the Java UI apps I've used. This may be entirely a problem with Swing or whatever, but appearances count.

Sean
May 01 2007
Jan Claeys wrote:
> And I think in the case of dynamic languages like Python, a JIT-compiler often can create much better code at run-time than a compiler could do when compiling it before run-time.

That's the theory. In practice, Python programmers who need performance will develop a hybrid Python/C++ app, with the slow stuff recoded in C++.
Apr 30 2007
Walter Bright wrote:
> That's the theory. In practice, Python programmers who need performance will develop a hybrid Python/C++ app, with the slow stuff recoded in C++.

In practice with Python I think what happens is more like:

1) Make sure you're not doing something stupid. If not...
2) Try psyco (a kind of JIT) (http://psyco.sourceforge.net). If that doesn't help (it never has for me)...
3) Rewrite slow parts in Pyrex (http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/). If that's not feasible then...
4) Rewrite it as a native code module with Boost::Python or SWIG or just the raw C API. Or write a native shared library and use ctypes (http://python.net/crew/theller/ctypes/) to access it.

If you're doing numerical code then there are a couple of things you can try before resorting to rewriting: numexpr (http://www.scipy.org/SciPyPackages/NumExpr) and scipy.weave (http://www.scipy.org/Weave).

And now of course you also have the option of rewriting the slow parts in D, thanks to Kirk.

--bb
Apr 30 2007
On Mon, 30 Apr 2007 13:44:05 -0700, Walter Bright <newshound1 digitalmars.com> wrote:
> That's the theory. In practice, Python programmers who need performance will develop a hybrid Python/C++ app, with the slow stuff recoded in C++.

Just like some people write libraries in Fortran or assembler or some vector processor language because C and C++ and D are "too slow". ;-)

There is one commonly used JIT compiler for Python ('psyco'), and it is actually useful in some cases, while I haven't seen a single Python-to-native-code compiler that makes code that's actually faster than the interpreter in most cases...

Python's strength is its "dynamism" and its ability to adapt to "unexpected" changes at run-time. And the fact that Python developers write extensions in other languages when speed is really important and 'psyco' doesn't help proves that compiling Python to native code before it's run is not really a useful option.

-- JanC
Apr 30 2007
Jan Claeys wrote:
> Just like some people write libraries in Fortran or assembler or some vector processor language because C and C++ and D are "too slow". ;-)

Or out of sheer bloody-mindedness. Funny thing: turns out SSE is actually *slower* for doing a dot product than regular old x87 code!

> There is one commonly used JIT-compiler for Python ('psyco') and it is actually useful in some cases [...] And the fact that Python developers write extensions in other languages if speed is really important and 'psyco' doesn't help proves that compiling Python to native code before it's run is not really a useful option.

That's what I like about Python; it's a massively expressive language that doesn't get in your way if you need the speed.

Incidentally, it's called "dynamicysm". *DRINK*

-- Daniel
Apr 30 2007
Daniel Keep wrote:
> Funny thing, turns out SSE is actually *slower* for doing a dot product than regular old x87 code!

You should qualify this - I'm guessing you mean for a single dot product? If so, this is the case in most vector coprocessors, as load/store overhead can easily outweigh the gains in vectorization.

--Steve
May 01 2007
Stephen Waits wrote:
> You should qualify this - I'm guessing you mean for a single dot product? If so, this is the case in most vector coprocessors, as load/store overhead can easily outweigh the gains in vectorization.

Sorry; yes, you're right: it's for a single dot product.

I'm surprised at this because of the sheer number of articles I ran across touting "faster" dot product functions using SSE. I have a feeling these people have never bothered to actually *benchmark* their "faster" functions :P

-- Daniel
May 01 2007
Daniel Keep wrote:
> Sorry; yes, you're right: it's for a single dot product. I'm surprised at this because of the sheer number of articles I ran across touting "faster" dot product functions using SSE. I have a feeling these people have never bothered to actually *benchmark* their "faster" functions :P

I'm also assuming that's for some low-dimensionality vector? I'd likewise guess that there's some sweet spot where dot product calculation is faster with SSE, even for a single pair of vectors, if the vectors are of sufficient dimensionality.

--benji
May 01 2007
Benji Smith wrote:
> I'm also assuming that's for some low-dimensionality vector? I'd likewise guess that there's some sweet spot where dot product calculation is faster with SSE, even for a single pair of vectors, if the vectors are of sufficient dimensionality.

3D single-precision. The problem seems to be a combination of unaligned loads, and the trickery you have to resort to in order to sum the XMM register horizontally. There's a dot product instruction in SSE4, but I don't have a CPU that supports it. :P

It also doesn't help that the compiler will inline the FPU functions, but won't inline the SSE ones.

-- Daniel
May 01 2007
Daniel Keep wrote:
> It also doesn't help that the compiler will inline the FPU functions, but won't inline the SSE ones.

Anyways, I admit I miss the "inline" keyword - fortunately it can be roughly emulated with mixin templates. :)

Regards, Frank
May 01 2007
Walter Bright wrote:
> I just don't get the reason for a VM. It seems like a solution looking for a problem.

Some of the benefits of using a VM platform:

1) Dynamic classloading. Linking is greatly simplified, and my code doesn't need to be written differently depending on whether I'm linking dynamically or statically.

2) Better tools for profiling, debugging, reflection, and runtime instrumentation than are typically available for natively-compiled languages.

3) Better memory management: with the memory manager located in the VM, rather than in the application code, the collection of garbage is much more well-defined. Since all classes are loaded into the same VM instance, there's only a single heap. Consequently, there's never an issue of what happens when an object passes from one module to another (as can be the case when a native library passes an object into the main application, or vice versa).

4) Better security/sandboxing. If you write a pluggable application in C++, how will you restrict plugin authors from monkeying with your application data structures? In the JVM or the CLR, the VM provides security mechanisms to restrict the functionality of sandboxed code. A particular CLR assembly might, for example, be restricted from accessing the file system or the network connection. You can't do that with native code.

Sure, it's possible for natively-compiled languages to offer most of the same bells and whistles as dynamic languages or VM-based platforms. But, in the real world, those abstractions are usually difficult to implement in native code, so they become available much more readily in a virtual machine.

--benji
Apr 30 2007
Benji Smith wrote:
> Walter Bright wrote:
>> I just don't get the reason for a VM. It seems like a solution looking for a problem.
> Some of the benefits of using a VM platform: <snip>

Interesting. Paraphrasing your reply: these are benefits of VMs -- but no, they're not. The above list of 'benefits' are some things that current VM implementations, the languages that sit on top of them, and the provided libraries that sit on top of those all add up to provide. They're very much not attributes of the VM underneath, nor of VMs in general. Please be careful when attributing causal effects. A favorite phrase: correlation is not causation.

Later, Brad
Apr 30 2007
Brad Roberts wrote:
> Benji Smith wrote:
>> 1) Dynamic classloading. Linking is greatly simplified, and my code doesn't need to be written differently depending on whether I'm linking dynamically or statically.

That's an attribute of the language, not the VM. COM does the same thing with natively compiled languages.

>> 2) Better tools for profiling, debugging, reflection, and runtime instrumentation than are typically available for natively-compiled languages.

I attribute this to two things, none of which are a characteristic of a VM: 1) Java is a very easy language to parse, with well defined semantics. This makes it easy to develop such tools for it. C++, on the other hand, is disastrously difficult to parse. 2) The two VMs out there have billions and billions of dollars sunk into them to create tools, no matter how easy/hard that might be.

>> 3) Better memory management: with the memory manager located in the VM, rather than in the application code, the collection of garbage is much more well-defined. Since all classes are loaded into the same VM instance, there's only a single heap. Consequently, there's never an issue of what happens when an object passes from one module to another (as can be the case when a native library passes an object into the main application, or vice versa).

I wrote a GC for Java, back in the day. Doing a good GC is dependent on the right language semantics; having a VM has nothing to do with it. D works with add-on DLLs by sharing a single instance of the GC.

>> 4) Better security/sandboxing. If you write a pluggable application in C++, how will you restrict plugin authors from monkeying with your application data structures? In the JVM or the CLR, the VM provides security mechanisms to restrict the functionality of sandboxed code. A particular CLR assembly might, for example, be restricted from accessing the file system or the network connection. You can't do that with native code.

Every single VM based system, from javascript to Word macros, has turned into a vector for compromising a system. That's why I run email with javascript, etc., all turned off. It's why I don't use Word. I know about the promises of security, but I don't believe it holds up in practice.

>> Sure, it's possible for natively-compiled languages to offer most of the same bells and whistles as dynamic languages or VM-based platforms. But, in the real world, those abstractions are usually difficult to implement in native code, so they become available much more readily in virtual machine.

I believe you are seeing the effects of billions of dollars being invested in those VMs, not any fundamental advantage.
Apr 30 2007
Walter Bright wrote:Brad Roberts wrote:Actually, COM illustrates my point quite nicely. If you're going to write a COM-compatible library, you have to plan for it from the start, inheriting from IUnknown, creating GUIDs, and setting up reference counting functionality. Likewise, a consumer of a COM library has to know it uses COM semantics, since the application code will have to query the interface using COM-specific functions. Even in D, if I want to write code in a DLL (or call code from a DLL), my ***CODE*** has to be aware of the existence of the DLL. In Java, the code I write is identical, whether I'm calling methods on my own classes, calling methods on classes packaged up in a 3rd party library, or packaging up my own library for distribution to other API consumers. Since the VM provides all of the classloading functionality, the application code and the library code is completely agnostic of calling & linking conventions. Of course, the disadvantage of this is that there's no such thing as static linking. *Everything* is linked dynamically. But at least I don't have to rewrite my code just to create (or consume) a library.Benji Smith wrote:That's an attribute of the language, not the VM. COM does the same thing with natively compiled languages.1) Dynamic classloading. Linking is greatly simplified, and my code doesn't need to be written differently depending on whether I'm linking dynamically or statically.You can use the "billions of dollars" excuse if you like, but the "easy to parse" excuse doesn't hold water. Notice, I'm not talking about refactoring tools or code-coverage tools, or anything like that. I'm talking about profiling, debugging, reflection, and runtime instrumentation. Take debugging, for example... It's possible to hook a debugger to an already-running instance of the JVM on a remote machine. And you can do that without a special debug build of the application. 
The application binaries always contain the necessary symbols for debugging, so it's always possible to debug applications. The JVM has a debugging API which provides methods for suspending and resuming execution, walking the objects on the heap, querying objects on the stack, evaluating expressions, setting normal and conditional breakpoints, replacing or redefining entire class definitions in-place (without restarting the application). Essentially, the JVM already includes the complete functionality of a full-featured debugger. The debugging API is just a mechanism for controlling the debugger from a 3rd-party application, like a debugging GUI. Without a VM, I don't know how you could get a debugger implemented just by connecting some GUI code to a debugging API. The x86 doesn't have a debugger built-in. The same thing is true of profiling, reflection, and instrumentation. It has *nothing* to do with the semantics of the language, or with the syntax being "easy to parse". It has everything to do with the fact that a VM can provide hooks for looking inside itself. A non-virtual machine doesn't do that.I attribute this to two things, none of which are a characteristic of a VM: 1) Java is a very easy language to parse, with well defined semantics. This makes it easy to develop such tools for it. C++, on the other hand, is disastrously difficult to parse. 2) The two VMs out there have billions and billions of dollars sunk into them to create tools, no matter how easy/hard that might be.2) Better tools for profiling, debugging, reflection, and runtime instrumentation than are typically available for natively-compiled languages.I won't argue this one, since I don't know much about D's shared GC implementation.1) I wrote a GC for Java, back in the day. Doing a good GC is dependent on the right language semantics, having a VM has nothing to do with it. 
D works with add on DLLs by sharing a single instance of the GC.3) Better memory management: with the memory manager located in the VM, rather than in the application code, the collection of garbage is much more well-defined. Since all classes are loaded into the same VM instance, there's only a single heap. Consequently, there's never an issue of what happens when an object passes from one module to another (as can be the case when a native library passes an object into the main application, or vice versa).You may argue that certain VMs (I suppose JavaScript and VBA) have implemented their security functionality poorly. But the core concept, if implemented correctly (as in the JVM and the CLR) allows a hosting application to load a plugin and restrict the functionality of the executable code within that plugin, preventing it from accessing certain platform features or resources. Natively-compiled code can't even *hope* to enforce that kind of isolation. --benjiEvery single VM based system, from javascript to Word macros, has turned into a vector for compromising a system. That's why I run email with javascript, etc., all turned off. It's why I don't use Word. I know about the promises of security, but I don't believe it holds up in practice.4) Better security/sandboxing. If you write a pluggable application in C++, how will you restrict plugin authors from monkeying with your application data structures? In the JVM or the CLR, the VM provides security mechanisms to restrict the functionality of sandboxed code. A particular CLR assembly might, for example, be restricted from accessing the file system or the network connection. You can't do that with native code.
May 01 2007
Reply to Benji, [...] Most of your rebuttal basically says that programs in a VM can do X without the coder having to do something different than they would if they didn't do X. There is (lots of big problems aside) a simple solution to this problem in native code: don't allow the coder NOT to do X. Require that all classes be COM objects, always compile in debugging and profiling symbols, heck, maybe even a full net-centric debugger. It seems to me (and I could be wrong) that the way the VM languages get all of these advantages is by not making them options; they are requirements. The only case that all of that doesn't cover is sandboxing. Well, something is going to have to run in native code, so why not make native code safe? Allow a process to spawn off a thread that is native code but sandboxed: some OS APIs don't work, it has read-only or no access to some part of RAM that the rest of the process can access. In short: if security is such a big deal, why is the VM doing it instead of the OS?
May 01 2007
BCS wrote:Reply to Benji, [...] Most of your rebuttal basically says that programs in VM can do X without the coder having to do something different than they would if they didn't do X. There is (lots of big problems aside) a simple solution to this problem in native code: Don't allow the coder to NOT do X, requiter that all classes be COM objects, always compile in debugging and profiling symbols, heck maybe even a full net-centric debugger. It seams to me (and I could be wrong) that the way that the VM languages get all of these advantages of these options is by not making them options, they are requirements. The only case that all of that doesn't cover is sand boxing. Well, something is going to have to run in native code, so why not make native code safe? Allow a process to span off a thread that is native code but sand boxed: some OS API's don't work, it has a Read Only or no access to some part of ram that the rest of the process can access. In short If security is such a big deal, why is the VM doing it instead of the OS?Sure. Fair enough. You *could* maybe do all of that stuff with native code, if only someone had ever implemented it. ...Shrug... Rather than speculating on what's theoretically possible in a natively compiled platform, I'm pointing out some of the advantages that exist *today* in VM-based platforms. I never claimed those advantages outweighed the considerable advantages of native compilation. I'm just saying...there are some features that are *currently* being routinely provided in VM platforms that don't yet exist when you're compiling code to a native platform. Jeez. Talk about throwing stones in glass houses... --benji
May 01 2007
Benji Smith wrote:Walter Bright wrote:Not if the language is designed to be COM-compatible from the start. You don't need a VM to inherit from IUnknown, create GUIDs, or do reference counting.Brad Roberts wrote:Actually, COM illustrates my point quite nicely. If you're going to write a COM-compatible library, you have to plan for it from the start, inheriting from IUnknown, creating GUIDs, and setting up reference counting functionality.Benji Smith wrote:That's an attribute of the language, not the VM. COM does the same thing with natively compiled languages.1) Dynamic classloading. Linking is greatly simplified, and my code doesn't need to be written differently depending on whether I'm linking dynamically or statically.Likewise, a consumer of a COM library has to know it uses COM semantics, since the application code will have to query the interface using COM-specific functions. Even in D, if I want to write code in a DLL (or call code from a DLL), my ***CODE*** has to be aware of the existence of the DLL. In Java, the code I write is identical, whether I'm calling methods on my own classes, calling methods on classes packaged up in a 3rd party library, or packaging up my own library for distribution to other API consumers. Since the VM provides all of the classloading functionality, the application code and the library code is completely agnostic of calling & linking conventions. Of course, the disadvantage of this is that there's no such thing as static linking. *Everything* is linked dynamically. But at least I don't have to rewrite my code just to create (or consume) a library.If you designed a language around COM, you'd get all that stuff for free, too. I agree that using COM in C++ is a bit clunky, but after all, C++ was designed before there was COM.Take debugging, for example... It's possible to hook a debugger to an already-running instance of the JVM on a remote machine. And you can do that without a special debug build of the application. 
The application binaries always contain the necessary symbols for debugging, so it's always possible to debug applications.If you want, you can always compile your native app with debug symbols on.The JVM has a debugging API which provides methods for suspending and resuming execution, walking the objects on the heap, querying objects on the stack, evaluating expressions, setting normal and conditional breakpoints, replacing or redefining entire class definitions in-place (without restarting the application). Essentially, the JVM already includes the complete functionality of a full-featured debugger. The debugging API is just a mechanism for controlling the debugger from a 3rd-party application, like a debugging GUI. Without a VM, I don't know how you could get a debugger implemented just by connecting some GUI code to a debugging API. The x86 doesn't have a debugger built-in.Most debuggers are able to attach themselves to running processes. The CPU itself does contain specific hardware to support debugging.The same thing is true of profiling, reflection, and instrumentation. It has *nothing* to do with the semantics of the language, or with the syntax being "easy to parse". It has everything to do with the fact that a VM can provide hooks for looking inside itself. A non-virtual machine doesn't do that.Many profilers are able to hook into executables that have symbolic debug info present (Intel's comes to mind). Reflection can be done natively - D will get there. Instrumentation - depends on what instrumentation is done. You can't do line-by-line code coverage analysis without recompiling with such turned on, even with Java, because the bytecode simply doesn't contain that information.The x86 processors have 4 rings of hardware protection built in. The idea is to do the isolation in hardware, not software, and it does work (one process crashing can't bring down another process). Where it fails is where Windows runs all processes at ring 0. 
This is a terrible design mistake. The CPU *is* designed to provide the sandboxing that a VM can provide. Also, as VMware has demonstrated, the virtualization of hardware can provide complete sandbox capability. Another example of this sort of hardware sandboxing is if you run 16 bit DOS code under Windows. The virtualization software sets up a "DOS box" which is completely controlled by hardware, so any interrupts, I/O port instructions, etc., are intercepted by the hardware and transferred to software that decides what to do, whether to allow/deny, etc. These capabilities are all there in the hardware. The fact that systems software often fails to use it is no more of a fundamental flaw than the fact that all the VM systems are so routinely compromised that people run their mail and browsers with scripting disabled.Every single VM based system, from javascript to Word macros, has turned into a vector for compromising a system. That's why I run email with javascript, etc., all turned off. It's why I don't use Word. I know about the promises of security, but I don't believe it holds up in practice.You may argue that certain VMs (I suppose JavaScript and VBA) have implemented their security functionality poorly. But the core concept, if implemented correctly (as in the JVM and the CLR) allows a hosting application to load a plugin and restrict the functionality of the executable code within that plugin, preventing it from accessing certain platform features or resources. Natively-compiled code can't even *hope* to enforce that kind of isolation.
May 01 2007
Walter Bright wrote:Benji Smith wrote:Aha. Very interesting point. I hadn't thought of that. Is there such a language? Or is this just hypothetical?Actually, COM illustrates my point quite nicely...Not if the language is designed to be COM-compatible from the start. You don't need a VM to inherit from IUnknown, create GUIDs, or do reference counting. If you designed a language around COM, you'd get all that stuff for free, too. I agree that using COM in C++ is a bit clunky, but after all, C++ was designed before there was COM.Cool. I didn't know that.Without a VM, I don't know how you could get a debugger implemented just by connecting some GUI code to a debugging API. The x86 doesn't have a debugger built-in.Most debuggers are able to attach themselves to running processes. The CPU itself does contain specific hardware to support debugging.Many profilers are able to hook into executables that have symbolic debug info present (Intel's comes to mind). Reflection can be done natively - D will get there. Instrumentation - depends on what instrumentation is done. You can't do line-by-line code coverage analysis without recompiling with such turned on, even with Java, because the bytecode simply doesn't contain that information.Lots of great info. Thanks. I didn't know that the x86 had support for profiling, debugging, sandboxing, etc. I'd actually argue, though, that these kinds of features are actually VM features, even if they have actually been implemented on silicon. Since these kinds of functions provide an outside observer with a view into the machine's internals, I think they're more naturally implemented in a virtual machine (and VMs will, no doubt, be the environments where the most interesting research is conducted into new techniques for profiling, debugging, instrumentation, etc). 
If you want these kinds of meta-platform features baked into silicon, or solidified in your platform, you either need to wait twenty years for the market to prove their viability, or you can get them in next year's VM technologies. --benji PS: Keep in mind, I'm playing devil's advocate here, not because I have anything against compilation for a native platform, but because I think there are lots of interesting innovation in the VM universe that could be useful to D.Natively-compiled code can't even *hope* to enforce that kind of isolation.The x86 processors have 4 rings of hardware protection built in. The idea is to do the isolation in hardware, not software, and it does work (one process crashing can't bring down another process). Where it fails is where Windows runs all processes at ring 0. This is a terrible design mistake. The CPU *is* designed to provide the sandboxing that a VM can provide. Also, as VMware has demonstrated, the virtualization of hardware can provide complete sandbox capability.
May 01 2007
Benji Smith wrote:Lots of great info. Thanks. I didn't know that the x86 had support for profiling, debugging, sandboxing, etc. I'd actually argue, though, that these kinds of features are actually VM features, even if they have actually been implemented on silicon.Since the name "virtual machine" implies the virtualization of a machine, it seems reasonable that a good VM would provide all the features normally found in a non-virtual (ie. real) machine. Why should these features be offered only in software? Particularly at a time where hardware support for VMs is being explicitly added to hardware to improve performance? Sean
May 01 2007
Sean Kelly wrote:Benji Smith wrote:I agree. A good virtual machine will provide all of the features of a real machine. The opposite, though, is not necessarily true. A real machine doesn't necessarily provide all of the features of a typical virtual machine. --benjiLots of great info. Thanks. I didn't know that the x86 had support for profiling, debugging, sandboxing, etc. I'd actually argue, though, that these kinds of features are actually VM features, even if they have actually been implemented on silicon.Since the name "virtual machine" implies the virtualization of a machine, it seems reasonable that a good VM would provide all the features normally found in a non-virtual (ie. real) machine. Why should these features be offered only in software? Particularly at a time where hardware support for VMs is being explicitly added to hardware to improve performance? Sean
May 01 2007
Benji Smith wrote:Walter Bright wrote:were to build a native compiler for them, that can be done. I didn't design D to map directly onto COM because COM is a dying technology.Benji Smith wrote: If you designed a language around COM, you'd get all that stuff for free, too. I agree that using COM in C++ is a bit clunky, but after all, C++ was designed before there was COM.Aha. Very interesting point. I hadn't thought of that. Is there such a language? Or is this just hypothetical?The problem is it's simply easier to just write a VM. But when you've got a billion dollars to spend, there's no need to take the easy route.The x86 processors have 4 rings of hardware protection built in. The idea is to do the isolation in hardware, not software, and it does work (one process crashing can't bring down another process). Where it fails is where Windows runs all processes at ring 0. This is a terrible design mistake. The CPU *is* designed to provide the sandboxing that a VM can provide. Also, as VMware has demonstrated, the virtualization of hardware can provide complete sandbox capability.Lots of great info. Thanks. I didn't know that the x86 had support for profiling, debugging, sandboxing, etc.I'd actually argue, though, that these kinds of features are actually VM features, even if they have actually been implemented on silicon. Since these kinds of functions provide an outside observer with a view into the machine's internals, I think they're more naturally implemented in a virtual machine (and VMs will, no doubt, be the environments where the most interesting research is conducted into new techniques for profiling, debugging, instrumentation, etc).These features existed in the x86 since the mid 1980's, a decade before the Java VM and 15 years before the CLR. 
Mainframe hardware virtualization has existed for much longer.If you want these kinds of meta-platform features baked into silicon, or solidified in your platform, you either need to wait twenty years for the market to prove their viability, or you can get them in next year's VM technologies.Hardware sandboxing on the x86 has been around at least since the infamous 286 "penalty box". The 286 was Intel's first try at hardware virtualization, and a lot of mistakes were made. The 386 got it right, and the first fruits of that came in Windows-386, which provided multiple virtual DOS sessions. The original 8086 had no virtualization capability, and as a result, it was a *terrible* platform for software development. Any errant program could pull down the whole system. With the 286 came 'protected mode', where errant pointers were trapped by the hardware. It was the first sandboxing for x86.PS: Keep in mind, I'm playing devil's advocate here, not because I have anything against compilation for a native platform, but because I think there are lots of interesting innovation in the VM universe that could be useful to D.Software VM features can certainly drive forward adoption of hardware features. They always have <g>.
May 01 2007
Benji Smith wrote:
> Walter Bright wrote:
>> If you designed a language around COM, you'd get all that stuff for free, too. I agree that using COM in C++ is a bit clunky, but after all, C++ was designed before there was COM.
> <snip>
> Aha. Very interesting point. I hadn't thought of that. Is there such a language? Or is this just hypothetical?

Visual Basic. The Visual Basic versions over time are pretty much a mirror of the capabilities of COM as implemented by Microsoft over time, including inheriting the limitations; the reason you cannot inherit a class from another class in Visual Basic is simply that you cannot inherit a component from another component in COM. And the interfaces and classes (components) you create in Visual Basic are usable COM interfaces and components from C++, or whatever you like.

// Fredrik
May 05 2007
Benji Smith wrote:
> Walter Bright wrote:
>> I just don't get the reason for a VM. It seems like a solution looking for a problem.
> Some of the benefits of using a VM platform: <snip>

You people can list a million (mostly theoretical) benefits of having a VM. Java/.NET apps will continue to be damn slow despite these statements (Java the most). That is the simple and self-evident truth. Aside from that, the idea of having a CPU core for the exclusive use of a VM is a *total* waste. I don't trust in hardware solutions for software problems. Just my opinion. :)

Tom;
Apr 30 2007
Tom wrote:
> You people can list a million of (mostly) theoretical benefits in having a VM. Java/.NET apps will continue to be damn slow despite of these statements (Java the most). That is the simple and self-evident truth. Aside from, the idea of having a CPU core for the exclusive use of a VM is a *total* waste. I don't trust in hardware solutions for software problems.

of the games out there being developed in both languages? This is an argument that will last into infinity, I'm sure. There are people who are doing so. The fact that you don't doesn't make it less true that they do. I've used Java for a variety of applications. I have a good feel for what I think it is and isn't suitable for. What is and isn't beneficial is highly subjective. And really, someone who has never taken the time to roll their sleeves up and dive into a language can really only speculate about it. How many times have we seen C++ programmers dis D after glancing at the feature comparison list without ever writing a line of D code? When you have actually used a language in anger, you have a much better perspective as to what its strengths and weaknesses are. The benefits they see are not theoretical. To most Java programmers I know, speed is rarely a concern (though it does pop up occasionally, particularly with trig functions). If they weren't satisfied with the performance characteristics they wouldn't be using it. They are more often concerned with distribution, or the market penetration of a particular JRE version. Java and .NET both have a place. The benefits users see from them may or may not be related to the existence of a VM, but those who do use the languages usually do see benefits of some kind. Otherwise they'd all be using C or C++.
May 01 2007
Mike Parker wrote:Tom wrote:this ground). Though I've seen *A LOT* of server/client apps done in Java. The speed *IS* a concern, believe me. They ARE definitely slow in comparison to C/C++ apps. On the other hand, I remember a great game that was written in a mix of C++/Python, and was REALLY GOOD and fast: Blade of darkness was its name IIRC. Though, the speed code was C++, so...You people can list a million of (mostly) theoretical benefits in having a VM. Java/.NET apps will continue to be damn slow despite these statements (Java the most). That is the simple and self-evident truth. Aside from that, the idea of having a CPU core for the exclusive use of a VM is a *total* waste. I don't trust in hardware solutions for software problems.of the games out there being developed in both languages? This is an argument that will last into infinity, I'm sure. There are people who doing so. The fact that you don't doesn't make it less true that they do. I've used Java for a variety of applications.I have a good feel for what I think it is and isn't suitable for. What is and isn't beneficial is highly subjective. And really, someone who has never taken the time to roll their sleeves up and dive into a language can really only speculate about it. How many times have we seen C++ programmers dis D after glancing at the feature comparison list without ever writing a line of D code? When you have actually used a language in anger, you have a much better perspective as to what its strengths and weaknesses are.Ehm, I work with Java/Perl the better part of the time. So, I think I've rolled up my sleeves a lot with it. :)The benefits they see are not theoretical. To most Java programmers I know, speed is rarely a concern (though it does pop up occasionally, particularly with trig functions). If they weren't satisfied with the performance characteristics they wouldn't be using it. 
They are more often concerned with distribution, or the market penetration of a particular JRE version.I can't deny the benefits, and they're not ALL theoretical. Though, Java has a lot of drawbacks in the performance market. It's really good (yet slow but good) for server side apps.Java and .NET both have a place. The benefits users see from them may or may not be related to the existence of a VM, but those who do use the languages usually do see benefits of some kind. Otherwise they'd all be using C or C++.Of course, and coming from the C++ world, that's why I like D so much.
May 01 2007
Tom wrote:Mike Parker escribió:Ok, I know Perl is specialized for this type of thing (with many of the libs. written in C), but for small programs handling large chunks of data, Java has rarely been a consideration in the shops I've recently worked at. And believe me, it's not for lack of trying because it's easier to find decent Java hackers than good C or Perl hackers, IME. I remember actually scripting something like: if(file_size > X) java -server -XmsY -XmxZ App else java -client App and having to experiment to set X, Y and Z, and Perl still worked better. What a PITA. More of the same w/ .NET (speed critical stuff in native C++), although the .NET GC is very good and generally hard to beat with hand-crafted mem. mgmt. (again IME).Tom wrote:position on this ground). Though I've seen *A LOT* of server/client apps done in Java. The speed *IS* a concern, believe me. They ARE definitely slow in comparison to C/C++ apps.You people can list a million of (mostly) theoretical benefits in having a VM. Java/.NET apps will continue to be damn slow despite of these statements (Java the most). That is the simple and self-evident truth. Aside from, the idea of having a CPU core for the exclusive use of a VM is a *total* waste. I don't trust in hardware solutions for software problems.some of the games out there being developed in both languages? This is an argument that will last into infinity, I'm sure. There are people in doing so. The fact that you don't doesn't make it less true that they do. I've used Java for a variety of applications.On the other hand, I remember a great game that was written in a mix of C++/Python, and was REALLY GOOD and fast: Blade of darkness was its name IIRC. Though, the speed code was C++, so...I have a good feel for what I think it is and isn't suitable for. What is and isn't beneficial is highly subjective. And really, someone who has never taken the time to roll their sleeves up and dive into a language can really only speculate about it. 
How many times have we seen C++ programmers dis D after glancing at the feature comparison list without ever writing a line of D code? When you have actually used a language in anger, you have a much better perspective as to what its strengths and weaknesses are.Ehm, I work with Java/Perl the better part of the time. So, I think I've roll my sleeves a lot with it. :)The benefits they see are not theoretical. To most Java programmers I know, speed is rarely a concern (though it does pop up occasionally, particularly with trig functions). If they weren't satisfied with the performance characteristics they wouldn't be using it. They are more often concerned with distribution, or the market penetration of a particular JRE version.I can't deny the benefits, and they're not ALL theoretical. Though, Java has a lot of drawbacks in the performance market. It's really good (yet slow but good) for server side apps.Java and .NET both have a place. The benefits users see from them may or may not be related to the existence of a VM, but those who do use the languages usually do see benefits of some kind. Otherwise they'd all be using C or C++.Of course, and coming from the C++ world, that's why I like D so much.
May 01 2007
Dave wrote:Tom wrote:[...]Mike Parker wrote:Tom wrote:I love Perl, but once the project surpasses X lines of code (i.e. gets big enough), dynamic typing is just prohibitive. yet. Then, if I had etc.), I would choose D without hesitation. ;) Tom; (Tomás Rossi)position on this ground). Though I've seen *A LOT* of server/client apps done in Java. The speed *IS* a concern, believe me. They ARE definitely slow in comparison to C/C++ apps.Ok, I know Perl is specialized for this type of thing (with many of the libs. written in C), but for small programs handling large chunks of data, Java has rarely been a consideration in the shops I've recently worked at. And believe me, it's not for lack of trying because it's easier to find decent Java hackers than good C or Perl hackers, IME. I remember actually scripting something like: if(file_size > X) java -server -XmsY -XmxZ App else java -client App and having to experiment to set X, Y and Z, and Perl still worked better. What a PITA. More of the same w/ .NET (speed critical stuff in native C++), although the .NET GC is very good and generally hard to beat with hand-crafted mem. mgmt. (again IME).
May 01 2007
Op Tue, 01 May 2007 02:55:44 -0300 schreef Tom <tom nospam.com>:You people can list a million of (mostly) theoretical benefits in having a VM."Virtual machines" (implemented in software) & "real machines" (implemented in hardware, aka "CPUs") are both "machines". VMs have the advantage that they are easier & faster to change and also cheaper to (re)produce, that's also why every modern CPU starts life as a VM during its design & development. -- JanC
May 01 2007
Ary Manzana wrote:lubosh wrote:I think a big reason for .NET was the Itanium. It was going to make it possible to write x86 apps which would run without modification when we all switched to Itanium. We needed a virtual machine to isolate us from the thing which was likely to change (the CPU). Java had a VM so it could run on SPARC (now dead), Alpha (now dead), Itanium (never really alive), PowerPC, and x86. Instead, x86 asm now runs natively on the latest Macs.Hi all, I wonder what you all think about the future of programming platforms. about D. Honestly, I feel quite refreshed to re-discover native compilation in D again.Me too. :-) It seems so much more lightweight than the .NET framework or Java. Why is there so much push on the market (Microsoft, Sun) for executing source code within virtual machines? Do we really need yet another layer between hardware and our code?
Apr 30 2007
Don Clugston wrote:Java had a VM so it could run on SPARC (now dead), Alpha (now dead), Itanium (never really alive), PowerPC, and x86.Java originally was intended to be for embedded systems with very tight memory requirements, and having an interpreter is an easy way to squeeze more functionality into it. It's also hard to write a back end, so writing an interpreter instead is quicker and gets you to market faster. Also, early Javas were interpreter only. JITs didn't come until much later, and the first one wasn't developed by Sun, it was developed by Symantec.
Apr 30 2007
lubosh wrote:Hi all, I wonder what you all think about the future of programming platforms. I've Honestly, I feel quite refreshed to re-discover native compilation in D again. It seems so much more lightweight than .NET framework or Java. Why there's so much push on the market (Microsoft, Sun) for executing source code within virtual machines? Do we really need yet another layer between hardware and our code? What's your opinion? I wonder how much stir up would D cause if it would have nice and powerful standardized library and really good IDE (like VS.NET)Java runs on a VM largely because it allows proprietary applications to be run on any platform with a supporting VM. The alternative would be to distribute code in source form and have the user build locally, or to pre-build for every target platform (which is not always feasible). By contrast, the primary reason for .NET running in a VM is language interoperability (since .NET is a COM replacement). I would say that a VM-based D would be useful in the same situations, though I don't have a need for this myself. Sean
Apr 30 2007
Sean Kelly Wrote:Java runs on a VM largely because it allows proprietary applications to be run on any platform with a supporting VM. The alternative would be to distribute code in source form and have the user build locally, or to pre-build for every target platform (which is not always feasible).I don't mind building for every target platform. There's a lot of I/O and CPU overhead in initializing the JIT compiler and compiling at runtime. Users are constantly complaining about start-up times. That's why Microsoft provides a utility called NGEN, which produces native binaries from .NET bytecode so JIT compilation won't be needed. The whole .NET framework is practically NGENed during installation. I understand JIT compilation is not going away, especially for dynamic languages such as Python, but I'm not sure if we can really squeeze that much more from JIT compilation of statically-typed languages. If there are not going to be significant performance gains in comparison to running pre-compiled programs, then I suppose we're just adding one unnecessary layer and Java and .NET are going in the wrong direction. I'm just looking for answers as to whether JIT compilation for statically-typed languages is doomed or has any hope. Lubos
Apr 30 2007
lubosh wrote:Sean Kelly Wrote:I don't think JIT as performed by Java and .NET makes any sense; it's performed far too late. However, the fast fourier transform code in www.fftw.org is a stunning example of an alternative. It compiles several algorithms, and profiles each of them. Then it links in the fastest one. You have to be able to JIT the algorithm; JITing the code generation step is useless.Java runs on a VM largely because it allows proprietary applications to be run on any platform with a supporting VM. The alternative would be to distribute code in source form and have the user build locally, or to pre-build for every target platform (which is not always feasible).I don't mind to do build for every target platform. There's a lot of I/O and CPU overhead initializing JIT compiler and compiling source runtime. Users are constantly complaining about start-up times. That's why Microsoft provided utility called NGEN which is producing native binaries of .NET bytecode so JIT compilation won't be needed. Whole .NET framework is practically NGENed during installation. I understand JIT compilation is not going away, especially for dynamic languages such as Python but I'm not sure if we can really squeeze that much more from JIT compilation of statically-typed languages. If there are not going to be significant performance gains in comparison to running pre-compiled programs, then I suppose we're just adding one unnecessary layer and JAVA+.NET are going wrong direction. I'm just looking for answers if JIT compilation for statically-typed languages is doomed or has any hope.
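The FFTW-style planner Walter describes (run each candidate on real data, keep the winner) fits in a few lines; a toy Java sketch, where the two candidate implementations are hypothetical stand-ins that compute the same sum:

```java
// Toy version of a "measure each candidate, link in the fastest" planner.
// Real FFTW plans over many generated FFT codelets; here two hand-written
// summation loops stand in for them.
import java.util.function.ToDoubleFunction;

public class Planner {
    static double sumForward(double[] a) {
        double s = 0;
        for (double v : a) s += v;
        return s;
    }

    static double sumUnrolled(double[] a) {
        double s0 = 0, s1 = 0;
        int i = 0;
        for (; i + 1 < a.length; i += 2) { s0 += a[i]; s1 += a[i + 1]; }
        if (i < a.length) s0 += a[i];
        return s0 + s1;
    }

    // Time each candidate once on a sample input and return the fastest.
    static ToDoubleFunction<double[]> pickFastest(double[] sample) {
        java.util.List<ToDoubleFunction<double[]>> candidates =
                java.util.List.of(Planner::sumForward, Planner::sumUnrolled);
        ToDoubleFunction<double[]> best = null;
        long bestNanos = Long.MAX_VALUE;
        for (ToDoubleFunction<double[]> f : candidates) {
            long t0 = System.nanoTime();
            f.applyAsDouble(sample);
            long dt = System.nanoTime() - t0;
            if (dt < bestNanos) { bestNanos = dt; best = f; }
        }
        return best; // the "plan": call this one from now on
    }

    public static void main(String[] args) {
        double[] a = new double[1000];
        java.util.Arrays.fill(a, 1.0);
        System.out.println((long) pickFastest(a).applyAsDouble(a));
    }
}
```

The point of the sketch is the granularity: the selection happens once, at the algorithm level, rather than re-JITting instruction streams on every run.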
Apr 30 2007
Don Clugston wrote:I don't think JIT as performed by Java and .NET makes any sense; it's performed far too late.You might be interested in this article: http://www-128.ibm.com/developerworks/java/library/j-rtj2/index.html
May 01 2007
lubosh wrote:Sean Kelly Wrote: That's why Microsoft provided utility called NGEN which is producing native binaries of .NET bytecode so JIT compilation won't be needed. Whole .NET framework is practically NGENed during installation.Ah, interesting, so that's why the installation of the .NET framework takes a rather long time; mystery explained. -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
May 03 2007
Bruno Medeiros wrote:lubosh wrote:But what's truly ridiculous is that .NET has exactly *one* target platform. -- - EricAnderton at yahooSean Kelly Wrote: That's why Microsoft provided utility called NGEN which is producing native binaries of .NET bytecode so JIT compilation won't be needed. Whole .NET framework is practically NGENed during installation.Ah, interesting, so that's why the installtion of the the .NET framework takes a rather long time, mistery explained.
May 03 2007
Pragma wrote:Bruno Medeiros wrote:Hehe, on slashdot a 'you must be new here' reply would be modded +5 informative :P Yeah, of course it makes sense. Let's abstract away the underlying hardware & operating system and lock people on this new highly portable platform with IP stuff, patents and DMCA. Problem solved. It's interesting to see how much effort MS has put into .NET platform and language research (well, except Java for some unknown reason :P) lately. I don't think they will be giving it all away for free.lubosh wrote:But what's truly ridiculous is that .NET has exactly *one* target platform.Sean Kelly Wrote: That's why Microsoft provided utility called NGEN which is producing native binaries of .NET bytecode so JIT compilation won't be needed. Whole .NET framework is practically NGENed during installation.Ah, interesting, so that's why the installtion of the the .NET framework takes a rather long time, mistery explained.
May 03 2007
Jari-Matti Mäkelä wrote:Yeah, of course it makes sense. Let's abstract away the underlying hardware & operating system and lock people on this new highly portable platform with IP stuff, patents and DMCA. Problem solved.To be fair, Ms does target ARM as well, for its handheld devices. Though I wonder if those devices have a full .NET VM. In any case, pre-generating binary code is obviously more efficient, so why not use it for a VM? The original point of .NET is a COM replacement anyway, regardless of how things have been spun.It's interesting to see how much effort MS has put into .NET platform and language research (well, except Java for some unknown reason :P) lately. I don't think they will be giving it all away for free.They have to. The CLI is an open standard. They may choose to sell their implementation of it of course, but they can't forbid anyone from implementing a compatible VM. Sean
May 03 2007
Sean Kelly wrote:Jari-Matti Mäkelä wrote:I was a bit over dramatic. It might also be the only way to get rid of legacy x86 support, if ever possible.Yeah, of course it makes sense. Let's abstract away the underlying hardware & operating system and lock people on this new highly portable platform with IP stuff, patents and DMCA. Problem solved.To be fair, Ms does target ARM as well, for its handheld devices. Though I wonder if those devices have a full .NET VM. In any case, pre-generating binary code is obviously more efficient, so why not use it for a VM? The original point of .NET is a COM replacement anyway, regardless of how things have been spun.
May 03 2007
Don't forget about PowerPC and IA-64. XNA lets you write games for the Xbox 360. I think .NET is more of an effort to kill Java rather than a replacement for COM, though. On 5/3/07, Sean Kelly <sean f4.ca> wrote:To be fair, Ms does target ARM as well, for its handheld devices. Though I wonder if those devices have a full .NET VM. In any case, pre-generating binary code is obviously more efficient, so why not use it for a VM? The original point of .NET is a COM replacement anyway, regardless of how things have been spun.-- Anders
May 03 2007
Sean Kelly wrote:Jari-Matti Mäkelä wrote:Being a standard doesn't mean that it's free of patent problems, so it may not be freely implementable. Patents *do* allow you a monopoly on devices implementing their claims. (Though recent US Supreme Court rulings might help to reduce the lunacy that has been ruling the software industry of late.) -- JamesYeah, of course it makes sense. Let's abstract away the underlying hardware & operating system and lock people on this new highly portable platform with IP stuff, patents and DMCA. Problem solved.To be fair, Ms does target ARM as well, for its handheld devices. Though I wonder if those devices have a full .NET VM. In any case, pre-generating binary code is obviously more efficient, so why not use it for a VM? The original point of .NET is a COM replacement anyway, regardless of how things have been spun.It's interesting to see how much effort MS has put into .NET platform and language research (well, except Java for some unknown reason :P) lately. I don't think they will be giving it all away for free.They have to. The CLI is an open standard. They may choose to sell their implementation of it of course, but they can't forbid anyone from implementing a compatible VM.
May 03 2007
Jari-Matti Mäkelä wrote:Pragma wrote:True enough. Perhaps this is your reply sailing right over my head, but I was commenting more about how the .NET installer spends all this effort NGEN-ing the CLI distribution on install (supposedly anyway). If they're deploying to just one target platform, why wouldn't they just pre-compile before release? But you have a point - they're obviously not trying to solve any portability problems. -- - EricAnderton at yahooBruno Medeiros wrote:Hehe, on slashdot a 'you must be new here' reply would be modded +5 informative :P Yeah, of course it makes sense. Let's abstract away the underlying hardware & operating system and lock people on this new highly portable platform with IP stuff, patents and DMCA. Problem solved. It's interesting to see how much effort MS has put into .NET platform and language research (well, except Java for some unknown reason :P) lately. I don't think they will be giving it all away for free.lubosh wrote:But what's truly ridiculous is that .NET has exactly *one* target platform.Sean Kelly Wrote: That's why Microsoft provided utility called NGEN which is producing native binaries of .NET bytecode so JIT compilation won't be needed. Whole .NET framework is practically NGENed during installation.Ah, interesting, so that's why the installtion of the the .NET framework takes a rather long time, mistery explained.
May 04 2007
Pragma wrote:Perhaps this is your reply sailing right over my head, but I was commenting more about how the .NET installer spends all this effort NGEN-ing the CLI distribution on install (supposedly anyway). If they're deploying to just one target platform, why wouldn't they just pre-compile before release?Oh, that. I've probably spent one year too many compiling Gentoo, it didn't even come to my head until some time after pressing the 'Send'. :)
May 04 2007
Pragma wrote:But what's truly ridiculous is that .NET has exactly *one* target platform.Oh? So you're saying the optimized code coming out of the NGEN sequence for a P4 CPU will be identical to the code for a P3 CPU? And what about Itaniums (or any other CPU) running 64 bit? I'm pretty sure Microsoft is counting those variations as "platforms". -- Joel Lucsy "The dinosaurs became extinct because they didn't have a space program." -- Larry Niven
May 03 2007
Joel Lucsy wrote:Pragma wrote:Good point. -1 for me for not recalling what started this particular portion of the thread. ;) -- - EricAnderton at yahooBut what's truly ridiculous is that .NET has exactly *one* target platform.Oh? So you're saying the optimized code coming out of the NGEN sequence for a P4 CPU will be identical to the code for a P3 CPU? And what about Itaniums (or any other CPU) running 64 bit? I'd pretty sure Microsoft is counting those variations as "platforms".
May 04 2007
Pragma wrote:Joel Lucsy wrote:Yup, that's what I was going to say: platform != CPU configuration. ^^ -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#DPragma wrote:Good point. -1 for me for not recalling what started this particular portion of the thread. ;)But what's truly ridiculous is that .NET has exactly *one* target platform.Oh? So you're saying the optimized code coming out of the NGEN sequence for a P4 CPU will be identical to the code for a P3 CPU? And what about Itaniums (or any other CPU) running 64 bit? I'd pretty sure Microsoft is counting those variations as "platforms".
May 06 2007
lubosh Wrote:Do we really need yet another layer between hardware and our code?Both JVM and CLR (.NET) are badly designed. Both platforms are too tightly tied guarantees or performance gains. Fundamentally, we don't need another layer between hardware and code. But since the design of our typical hardware (like x86) is not very good either, a VM can actually improve both performance (hardware sandboxing, for example, does not perform very well and doesn't allow enough granularity) and security (native code is extremely difficult to analyze from a security point of view). The VM then basically becomes what your hardware should be. I'm generally in favor of lightweight VMs that hide hardware deficiencies and differences. Such a VM can improve code compactness, allow for more aggressive inlining, provide security and reliability guarantees,... Another significant advantage is that it would greatly reduce the complexity of generating code at runtime (and generally promote a more layered approach to computation, like Lisp-like features).
May 02 2007
Have you checked out the work of Ian Piumarta? He has done some very interesting work on 'live' compilation of dynamic languages to native code. He is currently working with Alan Kay on their next-generation Smalltalk, but the technology seems to be applicable to most languages. Links to a lot of info can be found in this blog post: http://www.equi4.com/jcw/files/bcf5635ccbc5b6ab916a38ef7aaa844b-139.html Boris Kolar Wrote:Both JVM and CLR (.NET) are badly designed. Both platforms are too tightly security guarantees or performance gains. Fundamentally, we don't need another layer between hardware and code. But since design of our typical hardware (like x86) is not very good either, VM can actually improve both performance (hardware sandboxing, for example, does not perform very well and doesn't allow enough granularity) and security (native code is extremely difficult to analyze from security point of view). VM then basically becomes what your hardware should be. I'm generally in favor of lightweight VMs that hide hardware deficiencies and differences. Such VM can improve code compactness, allow for more aggressive inlining, provide security and reliability guarantees,... Another significant advantage is that it would greatly reduce complexity of generating code at runtime (and generally promote a more layered approach to computation, like Lisp-like features).
May 02 2007