digitalmars.D - Stroustrup's talk on C++0x
- Bill Baxter (8/8) Aug 19 2007 A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web.
- Lutger (4/13) Aug 19 2007 With opera you can just click on it and it works, if you don't want to
- Saaa (2/7) Aug 19 2007
- Bill Baxter (12/14) Aug 19 2007 :-). I think it's a firewall issue. I read the troubleshooting infos
- Saaa (6/14) Aug 19 2007 I somehow doubt this will ever happen :D
- Jb (6/22) Aug 20 2007 I second the uTorrent recommendation. By far the best client I've used.
- Chris Nicholson-Sauls (6/25) Aug 20 2007 Am I the only person who actually uses... BitTorrent, as my BitTorrent c...
- Bill Baxter (10/40) Aug 21 2007 Well the troubleshooting links pointed to by utorrent were spot-on. It
- Regan Heath (12/53) Aug 21 2007 uTorrent is my favourite client, it is small, fast, fully featured but
- eao197 (9/12) Aug 19 2007 BTW, there is a C++0x overview in Wikipedia:
- Walter Bright (2/7) Aug 19 2007 Looks like C++ is adding D features thick & fast!
- Bill Baxter (5/13) Aug 19 2007 Yeh, from the way Stroustrup was talking I really wouldn't be surprised
- Sean Kelly (11/23) Aug 20 2007 It actually has to be finished by year end 2008, and they have committed...
- Walter Bright (6/12) Aug 20 2007 C++0x started out with the stated purpose of just a few core language
- Bill Baxter (22/36) Aug 20 2007 It probably gave them a nudge, but on the other hand, as is abundantly
- Walter Bright (22/47) Aug 22 2007 More than that. The active people on the C++ committee are well aware of...
- Bill Baxter (27/35) Aug 22 2007 I decided to download his GC for C++ recently to give it a try. I was
- Walter Bright (7/32) Aug 23 2007 Nearly all of them are, and D has quite a bit that isn't even on the
- Bill Baxter (20/27) Aug 23 2007 I'm not sure what you mean by that, but the feature that I liked most
- Walter Bright (20/40) Aug 25 2007 Yup:
- Bill Baxter (4/51) Aug 25 2007 Ok, but does that work if you want it to work with a built-in type too?
- Walter Bright (2/4) Aug 26 2007 No, it doesn't currently work with builtin types. But see Sean's approac...
- Sean Kelly (9/50) Aug 26 2007 The obvious disadvantage to this approach is that is requires
- Walter Bright (4/13) Aug 26 2007 This is a brilliant idea. It would make for a great article! Can I press...
- James Dennett (7/21) Aug 26 2007 Is this largely comparable to C++0x's enable_if (except that, as
- Sean Kelly (3/17) Aug 26 2007 Sure thing. :-)
- James Dennett (5/24) Aug 23 2007 They're far, far more than that: more akin to an enhanced
- Stephen Waits (9/19) Aug 23 2007 FWIW, I corresponded with Bjarne a little over 3 years ago. I asked him...
- Walter Bright (2/10) Aug 25 2007 I know Bjarne, and he's a class act. I have the greatest respect for him...
- BCS (3/20) Aug 20 2007 Does that make the c++ crowd's main objective to remain as the dominant ...
- Bill Baxter (8/32) Aug 20 2007 I doubt it. I think the C++ crowd's main objective is to turn C++ into
- BCS (4/42) Aug 20 2007 Yah, I see your point. However some times the best way to improve somthi...
- James Dennett (23/37) Aug 20 2007 Most of these features have been in development for years;
- Bruno Medeiros (6/47) Aug 21 2007 I think Walter wasn't saying that C++0x features were inspired or based
- Walter Bright (10/12) Aug 22 2007 D hasn't invented many truly *new* features. What it has done, however,
- eao197 (17/23) Aug 20 2007 Yes! But C++ is doing that without breaking existing codebase. So
- Walter Bright (9/34) Aug 20 2007 The trouble with the new features is they don't fix the inscrutably
- Uno (5/7) Aug 20 2007 GCC 4.3 has some of the coming standard features already implemented
- eao197 (14/46) Aug 20 2007 It reminds me 'Worse is Better'
- Walter Bright (2/4) Aug 22 2007 Every new revision to the C++ standard kills off more C++ vendors.
- janderson (4/10) Aug 20 2007 Another awesome (and annoying at the same time) thing about D, real-time...
- Craig Black (38/62) Aug 20 2007 Agreed. The standard is moving faster and includes more improvements th...
- Ingo Oeser (45/56) Aug 21 2007 Yes, I already had some discussions and proposal about this in another
- Downs (20/67) Aug 22 2007 -----BEGIN PGP SIGNED MESSAGE-----
- Ingo Oeser (31/42) Aug 23 2007 Ok, with your tools, I could already CODE like that right now
- 0ffh (7/12) Aug 23 2007 I do not agree. I've been writing system level software on a number of
- Ingo Oeser (22/34) Aug 26 2007 But they had quite different properties/semantics. You should have notic...
- 0ffh (15/26) Aug 26 2007 Sure did. Doesn't prevent the compiler from supporting it.
- Ingo Oeser (17/35) Aug 26 2007 I know. I'm a system programmer, too :-)
- Lutger (4/8) Aug 26 2007 Why must every D compiler implement them? I thought intrinsics are part
- 0ffh (9/12) Aug 26 2007 I concur with Lutger on this:
- 0ffh (5/6) Aug 26 2007 YAY!!!
- Carlos Santander (15/33) Aug 26 2007 I remember that Walter once said that all in Phobos under std/ was stand...
- Ingo Oeser (7/15) Aug 28 2007 Yes, that's my point.
- Downs (17/34) Aug 24 2007 -----BEGIN PGP SIGNED MESSAGE-----
- Walter Bright (6/8) Aug 22 2007 D will be addressing the problem by moving towards supporting pure
- serg kovrov (5/8) Aug 23 2007 Very interesting, could you tell more regarding automatically
- Walter Bright (3/10) Aug 23 2007 Andrei and I covered this in our presentation on D at the D conference,
- Stephen Waits (3/6) Aug 23 2007 I heard the songs of angels in my mind when I read this.
- Sean Kelly (6/16) Aug 25 2007 And with inline asm and volatile, an atomic operations package is fairly...
- Brad Roberts (9/28) Aug 25 2007 As long as you don't care about the performance of calling a function fo...
- anonymous (7/12) Aug 20 2007 First of all, thanks for the link. To me as a non-professional
- Jari-Matti Mäkelä (9/21) Aug 20 2007 I would put my hopes on the macros, type system and other metaprogrammin...
- eao197 (12/29) Aug 20 2007 If someone really need flexible macro- and metaprogramming-subsystem it ...
- Bruno Medeiros (6/12) Aug 20 2007 That's my thoughts exactly on LISP's "power to create your own
- Jari-Matti Mäkelä (15/49) Aug 20 2007 Is it better for each C++ coder to write his own domain-specific build t...
- eao197 (16/38) Aug 20 2007 What kind of system programming? Writting compilers and writting drivers...
- Gregor Kopp (3/4) Aug 20 2007 That is really annoying!
- eao197 (6/10) Aug 20 2007 I hope this issue will be solved during D Conference.
- Jari-Matti Mäkelä (22/66) Aug 20 2007 If you look at e.g. linux sources there's some heavy use preprocessor
- eao197 (13/21) Aug 20 2007 I don't know about game development, but I can mention another area:
- Robert Fraser (4/12) Aug 20 2007 Really? JIT compilers (LaTtE, Sun Java 5+, probably IBMs JVMs .Net runti...
- Stéphan Kochen (9/11) Aug 20 2007 This is something LLVM [1] tries to fix, I think. I skimmed over some
- Carlos Santander (4/17) Aug 20 2007 --
- eao197 (7/18) Aug 20 2007 Will be? It is! :)
- Carlos Santander (5/25) Aug 20 2007 But it's not getting ugly or scary... Big difference... ;)
- Jascha Wetzel (12/25) Aug 20 2007 bjarne stroustroup is talking about ongoing discussion about GC features...
- Sean Kelly (7/10) Aug 20 2007 I don't :-) Bjarne may have created C++, but he hasn't had any real
- Robert Fraser (2/16) Aug 20 2007 You seem to forget that D is evolving, too. C++ might get a lot of the c...
- eao197 (12/34) Aug 20 2007 I didn't. From my point of view, permanent envolvement is a main D's
- Charles D Hixson (12/27) Aug 20 2007 To me it seems that D's main current problem is lack of
- Walter Bright (4/16) Aug 22 2007 I don't understand this. You could as well say that C++98 is obsolete
- Bill Baxter (4/24) Aug 22 2007 ..but C++98's features that were missing from D are still missing (both
- Walter Bright (3/8) Aug 23 2007 Like what? Virtual base classes? Argument dependent lookup? #include
- Bill Baxter (14/23) Aug 23 2007 The things that have me banging my head most often are
- Bill Baxter (21/44) Aug 23 2007 Sorry for the self-follow-up, but I just wanted to add that really C++
- Regan Heath (11/56) Aug 24 2007 Funny, after reading you post I was thinking that you would provide a
- Bill Baxter (4/65) Aug 24 2007 Really I'd rather have something that gives a little more control.
- Walter Bright (2/14) Aug 23 2007 These will all be addressed in 2.0.
- Bill Baxter (3/18) Aug 24 2007 Hot diggity. Looking forward to it.
- Reiner Pope (26/47) Aug 23 2007 All except Concepts.
- Reiner Pope (8/63) Aug 23 2007 I see Walter has now said elsewhere in this thread that 'concepts aren't...
- Walter Bright (3/6) Aug 23 2007 I don't know what that means - interfaces are already user-defined.
- Reiner Pope (59/68) Aug 24 2007 I'm not sure what you mean. But what I refer to is the part of the spec
- Oskar Linde (32/97) Aug 24 2007 Neither am I...
- Walter Bright (8/31) Aug 26 2007 You can write the templates as:
- eao197 (18/33) Aug 23 2007 AFAIK, C++0x doesn't break compatibility with C++98. So if I teach
- Walter Bright (41/60) Aug 25 2007 There is more breakage from 1.0 to 2.0, but the changes required are
- eao197 (45/91) Aug 26 2007 The first of all -- thanks for your patience!
- Don Clugston (3/5) Aug 29 2007 It's very amusing to read how Walter described D 1.0, seven years ago. I...
- eao197 (6/11) Aug 29 2007 Unfortunately I've seen D's evolution, perhaps, from 2002 or 2003. I...
- Walter Bright (3/5) Aug 29 2007 I don't know any language in wide use that is stable (i.e. not
- kris (3/10) Aug 29 2007 I guess there's "stable" and there's "stable"? The history of Simula67
- eao197 (10/15) Aug 29 2007 I mean changes in languages which break compatibility with previous code...
- Walter Bright (23/39) Aug 30 2007 C++ has been around for 20+ years now. I'll grant that for maybe 2 of
- Leandro Lucarella (38/68) Aug 30 2007 Forget about C++ for a second. Try with Python. It is an stable, or at
- 0ffh (7/12) Aug 30 2007 I rather think, that a "new major version" of any language that "doesn't
- eao197 (16/27) Aug 30 2007
- Downs (14/25) Aug 31 2007 -----BEGIN PGP SIGNED MESSAGE-----
- Don Clugston (8/24) Sep 04 2007 Actually, I think new features that make old code obsolete (even if it s...
- eao197 (20/24) Sep 04 2007 If you get 500 compile errors in old 10KLOC project its annoying.
- Jari-Matti Mäkelä (9/25) Sep 07 2007 Oh, btw, Java 1.5 did break old code. I used to use Gentoo during the
- 0ffh (10/17) Sep 07 2007 Well, yeah, maybe (apart from what Jari-Matti said about Java 1.5 breaki...
- janderson (9/19) Aug 20 2007 To me this show why D may be the "better" language syntacticly in the
A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html I recommend hitting pause on the video and then go get some lunch while it buffers up enough that you won't get hiccups. Or if you can figure out how to get those newfangled torrent thingys to work, that's probably a good option too. --bb
Aug 19 2007
Bill Baxter wrote:A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.htmlThanks for the link, missed it.I recommend hitting pause on the video and then go get some lunch while it buffers up enough that you won't get hiccups. Or if you can figure out how to get those newfangled torrent thingys to work, that's probably a good option too. --bbWith Opera you can just click on it and it works, if you don't want to figure things out.
Aug 19 2007
D programming people who don't understand torrents... btw, pausing wasn't necessary here.I recommend hitting pause on the video and then go get some lunch while it buffers up enough that you won't get hiccups. Or if you can figure out how to get those newfangled torrent thingys to work, that's probably a good option too. --bb
Aug 19 2007
Saaa wrote:D programming people who don't understand torrents...:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.btw. here pausing wasn't necessary hereOk. Well, it's probably just slow over here because I've got to pull it over the trans-Pacific pipes. --bb
Aug 19 2007
I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend uTorrent and tell you it's probably not your software but your hardware firewall that needs tinkering. I had to forward a port, but if I understand it correctly, newer routers with UPnP will work without any hassle.D programming people who don't understand torrents...:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
Aug 19 2007
"Saaa" <empty needmail.com> wrote in message news:fab0tc$1b0k$1 digitalmars.com...I second the uTorrent recomendation. By far the best client i've used. Bill, if you do try it, open the 'Speed Guide' from the options menu, you can test to see whether the port is open / forwarded correctly from there. jbI somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.D programming people who don't understand torrents...:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
Aug 20 2007
Saaa wrote:Am I the only person who actually uses... BitTorrent, as my BitTorrent client? :) http://www.bittorrent.com/download I haven't had any issues with it, though that doesn't mean no one will. Azureus/2.x is good too... the new version is an abomination. (In My Humble Opinion) -- Chris Nicholson-SaulsI somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.D programming people who don't understand torrents...:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
Aug 20 2007
Chris Nicholson-Sauls wrote:Saaa wrote:Well the troubleshooting links pointed to by utorrent were spot-on. It takes you right to a place that can give you step-by-step instructions about how to set up a huge number of different broadband routers. The others I tried just said vague things about needing to open up a port without suggesting how -- or suggesting I talk to my "system administratior". That said, now that thanks to utorrent I've got the hole punched through my firewall, probably any client will work fine for me. --bbAm I the only person who actually uses... BitTorrent, as my BitTorrent client? :) http://www.bittorrent.com/download I haven't had any issues with it, though that doesn't mean no one will. Azureus/2.x is good too... the new version is an abomination. (In My Humble Opinion)I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.D programming people who don't understand torrents...:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
Aug 21 2007
Bill Baxter wrote:Chris Nicholson-Sauls wrote:uTorrent is my favourtire client, it is small, fast, fully featured but setup in such a way as to be simple enough to use if you're new at this sort of thing. Torrents don't require you to have an open inbound port but without one you cannot receive connections from other peers. You can still connect to other peers, unless they too have no open ports, in which case you cannot form any connection with them and as a result you may get lower speeds. Just the other day I downloaded OpenOffice using a torrent, the download was fast, probably faster than getting it directly from any single website. ReganSaaa wrote:Well the troubleshooting links pointed to by utorrent were spot-on. It takes you right to a place that can give you step-by-step instructions about how to set up a huge number of different broadband routers. The others I tried just said vague things about needing to open up a port without suggesting how -- or suggesting I talk to my "system administratior". That said, now that thanks to utorrent I've got the hole punched through my firewall, probably any client will work fine for me.Am I the only person who actually uses... BitTorrent, as my BitTorrent client? :) http://www.bittorrent.com/download I haven't had any issues with it, though that doesn't mean no one will. Azureus/2.x is good too... the new version is an abomination. (In My Humble Opinion)I somehow doubt this will ever happen :D http://www.heise-security.co.uk/articles/82481 I can only recommend utorrent and tell you its probably not your software but hardware firewall which needs tinkling. I had to forward a port, but if I understand it correctly: newer routers with upnp will work without any hassle.D programming people who don't understand torrents...:-). I think it's a firewall issue. I read the troubleshooting infos that come with a couple of bittorrent clients, and they all point to firewalls as the problem. 
One bittorrent client actually managed to cause all networking on my machine to shut down whenever I tried to turn it on. There's probably some way to get it working but... no thanks. Wake me up when there's a client that works as seamlessly as Skype. And no, I'm not going to install a whole browser just to try out its bittorrent client.
Aug 21 2007
On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter <dnewsgroup billbaxter.com> wrote:A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C%2B%2B0x%20-%20An%20Overview.htmlBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to consider what advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only fast compilation and GC. -- Regards, Yauheni Akhotnikau
Aug 19 2007
eao197 wrote:BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?) over C++0x? May be only high speed compilation and GC.Looks like C++ is adding D features thick & fast!
Aug 19 2007
Walter Bright wrote:eao197 wrote:Yeh, from the way Stroustrup was talking I really wouldn't be surprised if they haven't finished the spec by year-end 2009. So, Walter, are you planning to update DMC when the spec is finished? --bbBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?) over C++0x? May be only high speed compilation and GC.Looks like C++ is adding D features thick & fast!
Aug 19 2007
Bill Baxter wrote:Walter Bright wrote:It actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features. As for the C++ 0x additions themselves, if D did not exist I might be excited. As it is, I can only cringe at the syntax in some of those examples and hope things turn out better than I fear they will. Seaneao197 wrote:Yeh, from the way Stroustrup was talking I really wouldn't be surprised if they haven't finished the spec by year-end 2009.BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?) over C++0x? May be only high speed compilation and GC.Looks like C++ is adding D features thick & fast!
Aug 20 2007
Sean Kelly wrote:It actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features.C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Aug 20 2007
Walter Bright wrote:Sean Kelly wrote:It actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features.C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.It probably gave them a nudge, but on the other hand, as is abundantly clear here on this newsgroup, everybody has a favorite feature. So if you throw a bunch of engineers and language designers into a room, the natural tendency is towards trying to add everything and the kitchen sink. Competition from other languages (Python, and Ruby, too) undoubtedly influenced people's votes when it came time to decide whether it was more important to have feature X or get the revision out sooner. It is pretty scary, though, to hear Stroustrup saying that the C++ text books will need to become thicker than they already are, and they are already about 3x as big as K&R's original book on C. The one feature (or lack thereof) that surprises me about C++0x is nested functions. They're one of my favorite things about D, but they don't seem to be a part of C++0x. There can't be any fundamental reason for it, since I've heard g++ supports them. Maybe lambdas will serve that purpose? As for standards vs standards-compliant compilers, note that MS still hasn't made a C99 compiler, 8 years after the standard. And implementing *that* standard looks like an undergrad homework assignment compared to what compiler writers will have to go through for C++0x. --bb
Aug 20 2007
Bill Baxter wrote:Walter Bright wrote:More than that. The active people on the C++ committee are well aware of D. Many have attended my presentations on D, correspond with me about it, and lurk in this n.g. Most of them will deny the influence, however, so feel free to decide what to believe <g>.I think it's the success of D that lit the fire.It probably gave them a nudge,but on the other hand, as is abundantly clear here on this newsgroup, everybody has a favorite feature. So if you throw a bunch of engineers and language designers into a room, the natural tendency is towards trying to add everything and the kitchen sink.One thing the C++ committee is good about is the features they have added *are* targeted at glaring shortcomings. They really are not throwing in the kitchen sink. How well those shortcomings are addressed, however, is another matter. For example, look at the C++ proposal for doing a very limited form of compile time function evaluation, then compare it with D's.Python, and Ruby, too) undoubtedly influenced people's votes when it came time to decide whether it was more important to have feature X or get the revision out sooner.GC is a prime example of that; C++ could no longer dismiss it. (And Hans Boehm, who I admire a lot, did a spectacular job of dealing with every objection to adding GC.)It is pretty scary, though, to hear Stroustrup saying that the C++ text books will need to become thicker than they already are, which was already about 3x as big as K&R's original book on C.There are two phases to learning C++: 1) learning the language 2) learning all the idioms and conventions used to avoid the shortcomings (One example we've discussed here recently is the slicing problem.)The one feature (or lack thereof) that surprises me about C++0x is nested functions. They're one of my favorite things about D, but they don't seem to be a part of C++0x. There can't be any fundamental reason for it, since I've heard g++ supports them. 
Maybe lambdas will serve that purpose?I was surprised to see lambdas without nested functions.As for standards vs standards-compliant compilers, note that MS still hasn't made a C99 compiler, 8 years after the standard. And implementing *that* standard looks like an undergrad homework assignment compared to what compiler writers will have to go through for C++0x.It took 5 years for a C++98 compliant compiler to emerge. Extrapolating to C++09, that would be 2014 to get features that existed in D years ago. I obviously gave up waiting for such features from C++ long ago.
Aug 22 2007
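For readers comparing the two proposals, a minimal sketch of the D compile-time function evaluation referred to above (the `factorial` example is illustrative, using current D syntax):

```d
// An ordinary function, written once...
int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// ...is evaluated by the compiler whenever it is used in a context that
// requires a compile-time constant. No special annotation is needed.
enum fact5 = factorial(5);

// Verified before the program ever runs.
static assert(fact5 == 120);

void main() {}
```

The C++0x constexpr proposal, by contrast, restricted such functions to essentially a single return statement.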
Walter Bright wrote:Bill Baxter wrote:Walter Bright wrote:GC is a prime example of that; C++ could no longer dismiss it. (And Hans Boehm, who I admire a lot, did a spectacular job of dealing with every objection to adding GC.)I decided to download his GC for C++ recently to give it a try. I was amazed to find that the documentation is really quite bad from a user point of view. And what little user doc there was was mostly about the C interface. If you care about implementation, there's tons to read, but just not if you're interested in actually *using* it. I expected a little more pleasant user experience given how long it's been around, how much I hear about it here and there, and how often I've heard C++ people say that you don't need GC in the language because you can just download Boehm's library.It took 5 years for a C++98 compliant compiler to emerge. Extrapolating to C++09, that would be 2014 to get features that existed in D years ago. I obviously gave up waiting for such features from C++ long ago.Well, that's true, but when comparing availability C++09 vs D, you should perhaps be a little more forgiving, given that D isn't quite done either. Sure, some C++09 features are available in D now, but some are also available in g++ now, I believe. And there are some features slated for C++ 09 that aren't on the roadmap for D at all (like concepts and thread stuff), which might appear in some C++ compiler before they appear in D. Furthermore, I'm pretty sure some partially conforming C++98 compilers existed before the end of 93, so what I'm trying to say with all this is that if you're a programmer who's willing to work with an incompatible language that has an ever-evolving spec, then you're probably also willing to use a bleeding edge C++ compiler that only partially supports the C++09 spec. So there may be less of a wait than 2014 for the sort of bleeding edgers who would be interested in D in the first place.
But either way it's still infinitely more waiting than "download and use it right now" -- the current situation with D. --bb
Aug 22 2007
Bill Baxter wrote:Walter Bright wrote:Nearly all of them are, and D has quite a bit that isn't even on the horizon for C++. I should draw up a chart...It took 5 years for a C++98 compliant compiler to emerge. Extrapolating to C++09, that would be 2014 to get features that existed in D years ago. I obviously gave up waiting for such features from C++ long ago.Well, that's true, but when comparing availability C++09 vs D, you should perhaps be a little more forgiving, given that D isn't quite done either. Sure, some C++09 features are available in D now,but some are also available in g++ now, I believe. And there are some features slated for C++ 09 that aren't on the roadmap for D at all (like conceptsConcepts aren't a whole lot more than interface specialization, which is already supported in D.and thread stuff), which might appear in some C++ compiler before they appear D. Furthermore, I'm pretty sure some partially conforming C++98 compilers existed before the end of 93,Partial, sure, including mine <g>.so what I'm trying to say with all this is that if you're a programmer who's willing to work with an incompatible language that is has an ever-evolving spec, then you're probably also willing to use a bleeding edge C++ compiler that only partially supports the C++09 spec. So there may be less of a wait than 2014 for the sort of bleeding edgers who would be interested in D in the first place. But either way its still infinitely more waiting than "download and use it right now" -- the current situation with D.Yes. And D 2.0 isn't standing still, either.
Aug 23 2007
Walter Bright wrote:Bill Baxter wrote:I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D, but I think defining and implementing concepts should be as easy as defining and implementing a run-time interface. Duck typing is nice, but if you look at even a scripting language founded on the idea, like Python, you'll find that where people are putting large systems together, they're also creating and using tools like zope.interface to get back some of the benefits of type checking. At the end of the day, even with duck typing there are some requirements I have to fulfill to use my object with your function. You want to be able to specify those things and have the compiler check them. --bbbut some are also available in g++ now, I believe. And there are some features slated for C++ 09 that aren't on the roadmap for D at all (like conceptsConcepts aren't a whole lot more than interface specialization, which is already supported in D.
Aug 23 2007
Bill Baxter wrote:Walter Bright wrote:Yup:

interface KewlIteratorConcept { T opPostInc(); U opSlice(); }

class KewlContainer : KewlIteratorConcept
{
    T opPostInc() { ... }
    U opSlice() { ... }
}

class WrongContainer { }

template Foo(T : KewlIteratorConcept) { ... }

KewlContainer k;
WrongContainer w;
Foo!(k); // ok
Foo!(w); // error, w is not a KewlIteratorConcept

Bill Baxter wrote:I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D,but some are also available in g++ now, I believe. And there are some features slated for C++ 09 that aren't on the roadmap for D at all (like conceptsConcepts aren't a whole lot more than interface specialization, which is already supported in D.
Aug 25 2007
Walter Bright wrote:Bill Baxter wrote:Ok, but does that work if you want it to work with a built-in type too? Will a float be recognized as supporting opPostInc? --bbWalter Bright wrote:Yup: interface KewlIteratorConcept { T opPostInc(); U opSlice(); } class KewlContainer : KewlIteratorConcept { T opPostInc() { ... } U opSlice() { ... } } class WrongContainer { } template Foo(T : KewlIteratorConcept) { ... } KewlContainer k; WrongContainer w; Foo!(k); // ok Foo!(w); // error, w is not a KewlIteratorConceptBill Baxter wrote:I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D,but some are also available in g++ now, I believe. And there are some features slated for C++ 09 that aren't on the roadmap for D at all (like conceptsConcepts aren't a whole lot more than interface specialization, which is already supported in D.
Aug 25 2007
Bill Baxter wrote:Ok, but does that work if you want it to work with a built-in type too? Will a float be recognized as supporting opPostInc?No, it doesn't currently work with builtin types. But see Sean's approach!
Aug 26 2007
Walter Bright wrote:Bill Baxter wrote:The obvious disadvantage to this approach is that it requires implementation of an interface by the creator of the object. More often, I use an additional value parameter to specialize against: template Foo(T, bool isValid : true = PassesSomeTest!(T)) {} This also works for non-class types. I'm not sure I like the syntax quite as much as concepts here, but it's good enough that I haven't really missed them. SeanWalter Bright wrote:Yup: interface KewlIteratorConcept { T opPostInc(); U opSlice(); } class KewlContainer : KewlIteratorConcept { T opPostInc() { ... } U opSlice() { ... } } class WrongContainer { } template Foo(T : KewlIteratorConcept) { ... }Bill Baxter wrote:I'm not sure what you mean by that, but the feature that I liked most about it is interface checking. So A) Being able to document someplace that if you want to use my KewlContainer you must implement the KewlIteratorConcept which means, say, you support opPostInc() and opSlice() (for dereferencing as x[]). and then once that is documented B) being able to say that my class implements that concept and have the compiler check that indeed it does. I suppose there may be some way to do all that in current D,but some are also available in g++ now, I believe. And there are some features slated for C++ 09 that aren't on the roadmap for D at all (like conceptsConcepts aren't a whole lot more than interface specialization, which is already supported in D.
Aug 26 2007
Sean Kelly wrote:The obvious disadvantage to this approach is that it requires implementation of an interface by the creator of the object. More often, I use an additional value parameter to specialize against: template Foo(T, bool isValid : true = PassesSomeTest!(T)) {} This also works for non-class types. I'm not sure I like the syntax quite as much as concepts here, but it's good enough that I haven't really missed them.This is a brilliant idea. It would make for a great article! Can I press you to write it? Doesn't have to be long, just explain the concept(!) and flesh it out with a few examples.
Aug 26 2007
Walter Bright wrote:Sean Kelly wrote:Is this largely comparable to C++0x's enable_if (except that, as I understand it, D appears to be more flexible in how the compile-time test can work/be expressed)? enable_if certainly covers many of the simple use cases for Concepts (though not so elegantly as C++0x Concepts do). -- JamesThe obvious disadvantage to this approach is that it requires implementation of an interface by the creator of the object. More often, I use an additional value parameter to specialize against: template Foo(T, bool isValid : true = PassesSomeTest!(T)) {} This also works for non-class types. I'm not sure I like the syntax quite as much as concepts here, but it's good enough that I haven't really missed them.This is a brilliant idea. It would make for a great article! Can I press you to write it? Doesn't have to be long, just explain the concept(!) and flesh it out with a few examples.
Aug 26 2007
Walter Bright wrote:Sean Kelly wrote:Sure thing. :-) SeanThe obvious disadvantage to this approach is that it requires implementation of an interface by the creator of the object. More often, I use an additional value parameter to specialize against: template Foo(T, bool isValid : true = PassesSomeTest!(T)) {} This also works for non-class types. I'm not sure I like the syntax quite as much as concepts here, but it's good enough that I haven't really missed them.This is a brilliant idea. It would make for a great article! Can I press you to write it? Doesn't have to be long, just explain the concept(!) and flesh it out with a few examples.
Aug 26 2007
Walter Bright wrote:Bill Baxter wrote:For marketing purposes, maybe ;)Walter Bright wrote:Nearly all of them are, and D has quite a bit that isn't even on the horizon for C++. I should draw up a chart...It took 5 years for a C++98 compliant compiler to emerge. Extrapolating to C++09, that would be 2014 to get features that existed in D years ago. I obviously gave up waiting for such features from C++ long ago.Well, that's true, but when comparing availability C++09 vs D, you should perhaps be a little more forgiving, given that D isn't quite done either. Sure, some C++09 features are available in D now,They're far, far more than that: more akin to an enhanced version of Haskell's typeclasses. -- Jamesbut some are also available in g++ now, I believe. And there are some features slated for C++ 09 that aren't on the roadmap for D at all (like conceptsConcepts aren't a whole lot more than interface specialization, which is already supported in D.
Aug 23 2007
Walter Bright wrote:Bill Baxter wrote:FWIW, I corresponded with Bjarne a little over 3 years ago. I asked him for his opinion of D. He refused to give an opinion on the grounds that he didn't want to get into a flamewar about "Walter's language". I wrote him back to make sure he understood that I wasn't looking for a fight. I simply respected him and was curious about his opinion, but that I also understand why, in his position, he cannot comment on such things. --SteveWalter Bright wrote:More than that. The active people on the C++ committee are well aware of D. Many have attended my presentations on D, correspond with me about it, and lurk in this n.g. Most of them will deny the influence, however, so feel free to decide what to believe <g>.I think it's the success of D that lit the fire.It probably gave them a nudge,
Aug 23 2007
Stephen Waits wrote:FWIW, I corresponded with Bjarne a little over 3 years ago. I asked him for his opinion of D. He refused to give an opinion on the grounds that he didn't want to get into a flamewar about "Walter's language". I wrote him back to make sure he understood that I wasn't looking for a fight. I simply respected him and was curious about his opinion, but that I also understand why, in his position, he cannot comment on such things.I know Bjarne, and he's a class act. I have the greatest respect for him.
Aug 25 2007
Reply to Walter,Sean Kelly wrote:Does that make the C++ crowd's main objective to remain the dominant language? Maybe somebody needs to enforce term limits on programming languages.It actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features.C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Aug 20 2007
BCS wrote:Reply to Walter,I doubt it. I think the C++ crowd's main objective is to turn C++ into something that doesn't suck like a dozen turbine jets strapped together with duct tape. They want it to be a better language for themselves, because they have to use it every day. I'm thinking specifically of generic and meta-programming functionality. That's what looks like will get the most benefit from the new language additions.Sean Kelly wrote:Does that make the c++ crowd's main objective to remain as the dominant language?It actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features.C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.Maybe somebody needs to enforce term limit on programming languages."You don't vote for kings." -- King Arthur, Monty Python and the Holy Grail.
Aug 20 2007
Reply to Bill,BCS wrote:Nice <G>Reply to Walter,I doubt it. I think the C++ crowd's main objective is to turn C++ into something that doesn't suck like a dozen turbine jets strapped together with duct tape.Sean Kelly wrote:Does that make the c++ crowd's main objective to remain as the dominant language?It actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features.C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.They want it to be a better language for themselves, because they have to use it every day. I'm thinking specifically of generic and meta-programming functionality. That's what looks like it will get the most benefit from the new language additions.Yah, I see your point. However, sometimes the best way to improve something is to take it out back and shoot it. Not add more jet engines and duct tape.Maybe somebody needs to enforce term limit on programming languages."You don't vote for kings." -- King Arthur, Monty Python and the Holy Grail.
Aug 20 2007
Walter Bright wrote:Sean Kelly wrote:Most of these features have been in development for years; it's desires to improve C++ that have lit these fires, just as your urge to create D was based on other ideas about how to improve on C++98. For many involved in language design, it's language features that are more tempting than new library functionality. I don't see much in C++0x that has much claim to being inspired by D. I look forward to type deduction with auto, but that dates from the 80's. Concepts will be great, but those have most overlap with Haskell's typeclasses, not mirrored in D. The new for syntax reflects many languages (D, Perl, Java, sh, others) in some ways. GC for C++ predates D. The smart pointers have no counterpart in D, yet. D has cool metaprogramming facilities, and does some other things nicely, but C++ faces more competition. It would, however, seem reasonable for C++ to pick up on good features of D when they are a match for C++, and so forth. -- JamesIt actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features.C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Aug 20 2007
James Dennett wrote:Walter Bright wrote:I think Walter wasn't saying that C++0x features were inspired or based on D, just that D sped up the adoption of those features. -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#DSean Kelly wrote:Most of these features have been in development for years; it's desires to improve C++ that have lit these fires, just as your urge to create D was based on other ideas about how to improve on C++98. For many involved in language design, it's language features that are more tempting than new library functionality. I don't see much in C++0x that has much claim to being inspired by D. I look forward to type deduction with auto, but that dates from the 80's. Concepts will be great, but those have most overlap with Haskell's typeclasses, not mirrored in D. The new for syntax reflects many languages (D, Perl, Java, sh, others) in some ways. GC for C++ predates D. The smart pointers have no counterpart in D, yet. D has cool metaprogramming facilities, and does some other things nicely, but C++ faces more competition. It would, however, seem reasonable for C++ to pick up on good features of D when they are a match for C++, and so forth. -- JamesIt actually has to be finished by year end 2008, and they have committed to getting the standard done on time even if it means dropping features. In fact, last I heard, a few features were indeed being dropped for lack of time, but I can't recall what they were. I haven't been keeping that close an eye on the C++ standardization process recently, aside from the new memory model and atomic features.C++0x started out with the stated purpose of just a few core language tweaks, and a bunch of new libraries. Sometime in the last couple of years, that was abandoned wholesale and a big raft of complex new features were proposed. I think it's the success of D that lit the fire.
Aug 21 2007
Bruno Medeiros wrote:I think Walter wasn't saying that C++0x features were inspired or based on D, just that D speed up the adoption of those features.D hasn't invented many truly *new* features. What it has done, however, is dramatically demonstrate that: 1) they fit well in a language that is very close to C++ 2) they dramatically improve productivity When one can point to real, live, *relevant* implementations of a feature, it tends to be convincing. After all, one can actually fly it rather than dreaming about paper airplanes. The further a language is from C++, the easier it is to dismiss a feature of that language as irrelevant.
Aug 22 2007
On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright <newshound1 digitalmars.com> wrote:eao197 wrote:Yes! But C++ is doing that without breaking the existing codebase. So a significant number of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDEs and libraries. I'm afraid that would play against D :( Current C++ is far behind D, but D is not stable, not mature, not as well equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 years ahead) C++ will have things like lambdas and autos (and tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++: at that time D would compete with new functional languages (like Haskell and OCaml). -- Regards, Yauheni AkhotnikauBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x? Maybe only high-speed compilation and GC.Looks like C++ is adding D features thick & fast!
Aug 20 2007
eao197 wrote:On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright <newshound1 digitalmars.com> wrote:The trouble with the new features is they don't fix the inscrutably awful syntax of complex C++ code, in fact, they make it worse. C++ will further become an "experts only" language.eao197 wrote:Yes! But C++ is doing that without breaking existing codebase. So significant amount of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDE and libraries. I'm affraid that would play against D :(BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is iteresting to know which advantages will have D (2.0? 3.0? 4.0?) over C++0x? May be only high speed compilation and GC.Looks like C++ is adding D features thick & fast!Current C++ is far behind D, but D is not stable, not mature, not equiped by tools/libraries as C++. So it will took several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 year ahead) C++ will have things like lambdas and autos (and tons of libraries and army of programmers), what will be D 'killer feature' to attract C++ programmers? And not only C++, at this time D would compete of functional languages (like Haskell and OCaml).The C++ standard will have those features. C++ compilers? Who knows. It took five years for C++98 to get implemented. C++'s problems are still in place, though. Like no modules, verbose and awkward syntax, very long learning curve, very difficult to do the simplest metaprogramming, etc.
Aug 20 2007
The C++ standard will have those features. C++ compilers? Who knows. It took five years for C++98 to get implemented.GCC 4.3 has some of the coming standard features already implemented (like variadic templates). ConceptGCC has working concepts. So there is a chance at least one compiler will be available when the new standard comes out. Uno
Aug 20 2007
On Mon, 20 Aug 2007 20:44:22 +0400, Walter Bright <newshound1 digitalmars.com> wrote:eao197 wrote:It reminds me of 'Worse is Better' (http://en.wikipedia.org/wiki/Worse_is_Better). I'm not a C++ expert, but I haven't had any serious problems with C++. And such features allow me to write C++ more productively and use all my codebase. So I'm afraid many experienced C++ programmers will remain with C++. Because of that, D must be focused on a different programmer audience, toOn Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright <newshound1 digitalmars.com> wrote:The trouble with the new features is they don't fix the inscrutably awful syntax of complex C++ code, in fact, they make it worse. C++ will further become an "experts only" language.eao197 wrote:Yes! But C++ is doing that without breaking the existing codebase. So a significant number of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDEs and libraries. I'm afraid that would play against D :(BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x? Maybe only high-speed compilation and GC.Looks like C++ is adding D features thick & fast!Yes, but now there are only a few C++ compiler vendors (unlike in '98). There is hope that GCC will have almost all new C++ features in the near future. -- Regards, Yauheni AkhotnikauCurrent C++ is far behind D, but D is not stable, not mature, not as well equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 years ahead) C++ will have things like lambdas and autos (and tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++: at that time D would compete with some of the functional languages (like Haskell and OCaml).The C++ standard will have those features. C++ compilers? Who knows. 
It took five years for C++98 to get implemented. C++'s problems are still in place, though. Like no modules, verbose and awkward syntax, very long learning curve, very difficult to do the simplest metaprogramming, etc.
Aug 20 2007
eao197 wrote:Yes, but now there are only few C++ compiler vendors (unlike 98). There is hope that GCC will have almost all new C++ features in near future.Every new revision to the C++ standard kills off more C++ vendors.
Aug 22 2007
Walter Bright wrote:The C++ standard will have those features. C++ compilers? Who knows. It took five years for C++98 to get implemented. C++'s problems are still in place, though. Like no modules, verbose and awkward syntax, very long learning curve, very difficult to do the simplest metaprogramming, etc.Another awesome (and at the same time annoying) thing about D is real-time development -- well, not quite, but you know what I mean. -Joel
Aug 20 2007
"eao197" <eao197 intervale.ru> wrote in message news:op.txc0txbtsdcfd2 eao197nb2.intervale.ru...On Mon, 20 Aug 2007 10:05:26 +0400, Walter Bright <newshound1 digitalmars.com> wrote:Agreed. The standard is moving faster and includes more improvements than I previously expected. What's more, major compilers have already begun working on these new features. Some of these features will be available in GCC 4.3 and VC++ 2008 (expected February). I would not be surprised if most of the standard features are implemented in compilers by 2010 as you suggest. However, the benefit of D over C++ is still cleaner, more powerful syntax in general. C++ might be getting more capability, but even so, it will not match D in clean expressive power. Additionally, two and a a half years is also a long time for D to advance as well, and D's progression is very fast. Overall I have been very happy with D's progress over the past few years, both in compiler and library development. That said, there are a number of things that I think would help aid the adoption of D over the next few years. First, D needs to at the very least match the features that are added to C++ with regards to parallelism and concurrency. Another thing that will aid in the adoption of D is to iron out whatever issues or percieved issues there are with the D 2.0 so that it will be accepted by the D community enough for library writers to migrate their code. A dead horse perhaps, but I still think it would serve D well to have better C++ integration. Granted this is a tough problem, as Walter emphasizes, but so was integrating Managed .NET C++ with native C++, which Microsoft was able to do rather well. Experience has taught me that there is always a solution to issues like this, but sometimes requires us to think about the problem in a different way. Other than that, fixing compiler bugs is probably the most important thing for D right now. 
I am especially looking forward to fixes that will make __traits usable (if that's still what it's called). One particular feature of personal interest is better support for structs (ctors, dtors, etc.). This will help with complex mathematical data structures that I use that must be uber-efficient. As for these newfangled macros, D is so powerful already that I don't really know exactly what they will give us over what we already have. But perhaps I haven't given this as much thought as others have. -Craigeao197 wrote:Yes! But C++ is doing that without breaking the existing codebase. So a significant number of C++ programmers needn't look to D -- they will have new advanced features without dropping their old tools, IDEs and libraries. I'm afraid that would play against D :( Current C++ is far behind D, but D is not stable, not mature, not as well equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 years ahead) C++ will have things like lambdas and autos (and tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++: at that time D would compete with new functional languages (like Haskell and OCaml). -- Regards, Yauheni AkhotnikauBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x? Maybe only high-speed compilation and GC.Looks like C++ is adding D features thick & fast!
Aug 20 2007
Craig Black wrote:First, D needs to at the very least match the features that are added to C++ with regards to parallelism and concurrency.Yes, I already had some discussions and proposals about this in another thread.

- A more explicit loop notation which can express map(), reduce(), filter(). These are well-understood and known idioms, which explicitly state the data dependencies with just a single keyword. OpenMP basically does just this, plus some thread management for advanced stuff.
  -> Easy to add now, with support improved later via newer library functions.
  -> Also allows the compiler to do the optimisation via auto-vectorisation for simpler cases (like shorter loop bodies).

- Please remove the inXX() and outXX() intrinsics. They are one-liners in asm on x86 and not present on many architectures.

- The asm construct should be backend dependent. This will aid its optimal integration into the surrounding D code. Reason: asm is very seldom used these days and usually states one of 3 things:
  1. The optimizer of my compiler sucks
  2. I'm l33t
  3. I write something which cannot/should not be expressed in D, because it is highly machine dependent and a NOP on many machines (e.g. inp()/outp())
  Short term, 1. and 2. are acceptable, but long term only 3. is. Case 3. is "write once and never touch again" but has to integrate very tightly into the surrounding code and thus has to answer many questions:
  - How many delay slots are still free?
  - Which registers are spilled, read, written?
  - Where is the result?
  - Which CPU units are used/unused?
  - In what state is the pipeline of that unit?
  ...
  Even GCC asm syntax cannot express all of this yet, AFAIK. A lot of platform-specific, hard assembly stuff is written in GCC asm syntax. So something at least as powerful, which defines the block as opaquely as a mixin, might be more useful here. Maybe something opaque like the mixin() statement, which passes everything straight through to the backend, would be better -- especially for DSP architectures.

Other than that, fixing compiler bugs is probably the most important thing for D right now. I am especially looking forward to fixes that will make __traits usable (if that's still what it's called).Yes, progress there is most exciting for me at the moment and I think the developers do a good job there.One particular feature of personal interest is better support for structs (ctors, dtors, etc.). This will help with complex mathematical data structures that I use that must be uber-efficient.Ctors which only assign and MUST assign all values might be very useful. Static initializers with C99 syntax will be very welcome, too.As for these newfangled macros, D is so powerful already, I don't really know exactly what this will give us over what we already have. But perhaps I haven't given this as much thought as others have.Are there any articles about the current macro design decisions? Best Regards Ingo Oeser
Aug 21 2007
Ingo Oeser wrote:Craig Black wrote:See also my "Some tools for D" post. I implemented those with iterators, but they're trivial to do as freestanding functions.First, D needs to at the very least match the features that are added to C++ with regards to parallelism and concurrency.Yes, I already had some discussions and proposal about this in another thread. - More explicit loop notion which can do map(), reduce(), filter().These are well understood and known idioms, which explicitly state how the data dependency is with just a single keyword. OpenMP basically does just this plus some thread management for advanced stuff. -> Easy to add now and improve later support with newer library functions. -> Allows also to let the compiler do the optimisation via auto vectorisation for simpler cases (like shorter loop bodies).GCC has autovectorization support in principle, so whenever the gdc maintainer gets around to fixing the tree format for statements, gdc will be able to take advantage of this.- Please remove the inXX() and outXX() intrinsics. They are oneliners in asm on X86 and not present on many architectures.I don't know those; what do they do?- asm construct should be backend dependent.[snip asm stuff]Definitely agreed.Other than that, fixing compiler bugs is probably the most important thing for D right now.I am especially looking forward to fixes that will makeAlso agreed. 
_Please_.__traits usable (if that's still what it's called).Yes, progress there is most exciting for me at the moment and I think the developers do a good job there.One particular feature of personal interest is better support for structs (ctors, dtors, etc.)This will help with complex mathematical data structures that I use that must be uber-efficient.ctors which only assign and MUST assign all values might be very useful. Static initializers with C99 syntax will be very welcome, too.As for these newfangled macros, D is so powerful already, I don't really know exactly what this will give us over what we already have. But perhaps I haven't given this as much thought as others have.Are there any articles about the current macro design decisions? Best Regards Ingo Oeser --downs
Aug 22 2007
Downs wrote:Ingo Oeser wrote:Ok, with your tools, I could already CODE like that right now and remove the "import iter;" later. Thanks for that :-) But I want to express that there isn't any defined order of execution, as long as data flow is correct. And of course I would like them to be overloadable, to compose and distribute them driven by data flow. I just like to express the data flow more explicitly instead of depending on the optimizer to figure it out itself. Let's help the optimizer greatly here.- More explicit loop notion which can do map(), reduce(), filter().See also my "Some tools for D" post. I implemented those with iterators, but they're trivial to do as freestanding functions.GCC has autovectorization support in principle, so whenever the gdc maintainer gets around to fixing the tree format for statements, gdc will be able to take advantage of this.I know. But this is just a side effect for me, and it is the compiler guessing by itself that I do map(), reduce(), filter(). The optimizer is more useful to the programmer doing (memory) copy elimination and lifetime analysis, so the programmer can write more readable code by using more temporaries.They provide port I/O (IN PORT and OUT PORT) on x86; in Intel syntax, for reference:

OUT PORT:
    mov dx,IOport
    mov al,Value
    out dx,al

IN PORT:
    mov dx,IOport
    in al,dx
    mov ReturnValue,al

These are of course privileged operations, and the compiler doesn't actually know your privilege level. So this kind of intrinsic is just nonsense. It is also nonsense in a standard library, since >99% of all programs out there don't need to do that, and the rest code it either in assembly or use a special API for that. Best Regards Ingo Oeser- Please remove the inXX() and outXX() intrinsics. They are oneliners in asm on X86 and not present on many architectures.I don't know those; what do they do?
Aug 23 2007
Ingo Oeser wrote [about port i/o]:These are of course privileged operations and the compiler doesn't actually know your privilege level. So this kind of intrinsic is just nonsense. It is also nonsense in a standard library, since >99% of all programs out there don't need to do that and the rest code it either in assembly or use a special API for that.I do not agree. I've been writing system-level software on a number of architectures for more than 15 years, mostly in C, and of course these functions are used. Also, I never even heard of any "special API" for port i/o, and wonder what such a thing might be needed for (unless your compiler is crippled). Regards, Frank
Aug 23 2007
0ffh wrote:Ingo Oeser wrote [about port i/o]:These are of course privileged operations and the compiler doesn't actually know your privilege level. So this kind of intrinsic is just nonsense. It is also nonsense in a standard library, since >99% of all programs out there don't need to do that and the rest code it either in assembly or use a special API for that.I do not agree. I've been writing system-level software on a number of architectures for more than 15 years, mostly in C, and of course these functions are used.But they have quite different properties/semantics. You should have noticed that some architectures don't even have them and treat all I/O as special memory accesses. E.g. "There exists no such thing as port-based I/O on AVR32" and "The ARM doesn't have special IO access instructions". Or "On MIPS I/O ports are memory mapped, so we access them using normal load/store instructions" Just to quote some random comments from include/asm-*/io.h from Linux. Oh, and some port I/O needs to be slowed down, byte-swapped, or include barriers. How should an intrinsic handle that?Also, I never even heard of any "special API" for port i/o, and wonder what such a thing might be needed for (unless your compiler is crippled).I mean a special API for ENABLING port I/O for your task. How does the intrinsic know whether you can actually DO port I/O in that task? Inline assembler functions are much better suited to that task. And how to do that in assembler is usually in the example collection of your system manual. Even better is to treat that stuff as special memory and define unified I/O memory accessors somewhere in a library. That stuff isn't fast anyway, so the class overhead might not be too significant here, as long as it is bounded. Best Regards Ingo Oeser
Aug 26 2007
Ingo Oeser wrote:0ffh wrote: But they have quite different properties/semantics. You should have noticed that some architectures don't even have them and treat all I/O as special memory accesses.Sure did. Doesn't prevent the compiler from supporting it. Some compilers support FP on architectures without FP. Good thing, too!Oh, and some port I/O needs to be slowed down, byte-swapped, or include barriers. How should an intrinsic handle that?You normally use them as primitives and build more complex funs around them.I mean a special API for ENABLING port I/O for your task. How does the intrinsic know whether you can actually DO port I/O in that task?It can't, and why should it? BTW not all architectures (and all modes) need any special enabling.Inline assembler functions are much better suited to that task.I have to use inline asm every time? IIRC DMD doesn't inline funs with inline asm, so no good.That stuff isn't fast anyway, so the class overhead might not be too significant here, as long as it is bounded.I wouldn't count on it. Anyway, my basic point is: Why throw it out just because "99% don't need it anyway"? It's there already, and some people are quite happy about it! It does not in any way make DMD less usable for the "99%", so why do you insist on treading on the "1%" minority? Regards, Frank
Aug 26 2007
0ffh wrote:Ingo Oeser wrote:My problem is that this behavior isn't defined.Oh, and some port I/O needs to be slowed down, byte-swapped, or include barriers. How should an intrinsic handle that?You normally use them as primitives and build more complex funs around them.I know. I'm a system programmer, too :-)I mean a special API for ENABLING port I/O for your task. How does the intrinsic know whether you can actually DO port I/O in that task?It can't, and why should it? BTW not all architectures (and all modes) need any special enabling.GDC does, so DMD might get it one day.Inline assembler functions are much better suited to that task.I have to use inline asm every time? IIRC DMD doesn't inline funs with inline asm, so no good.Why throw it out just because "99% don't need it anyway"? It's there already, and some people are quite happy about it! It does not in any way make DMD less usable for the "99%", so why do you insist on treading on the "1%" minority?Because every D compiler HAS to implement it for every architecture. And it has to fake it somehow on architectures which don't have port I/O. How to fake that correctly is simply not defined (in contrast to IEEE FP math). Making inline assembly better (e.g. giving the compiler more knowledge about the instructions and their constraints, making it inlineable) is the more useful goal. These compiler refinements then work for ALL instructions on EVERY architecture and may someday even be more powerful than GCC's inline assembler syntax. This will reduce x86ism in D. DMD can learn A LOT from GCC in that area. Best Regards Ingo Oeser
Aug 26 2007
Ingo Oeser wrote:Because every D compiler HAS to implement it for every architecture. And it has to fake it somehow on architectures, which don't have port i/o. How to fake that correctly, is simply not defined (in contrast to IEEE FP math).Why must every D compiler implement them? I thought intrinsics are part of the Phobos / DMD implementation of D, not of the D language itself - contrary to FP math.
Aug 26 2007
Ingo Oeser wrote:Because every D compiler HAS to implement it for every architecture.I concur with Lutger on this: IIRC intrinsics are a compiler thing, not a language thing. They are routinely used for compiler-specific stuff, I take that as a hint.Making inline assembly better [...] is the more useful goal. [...] This will reduce X86ism in D. DMD can learn A LOT from GCC in that area.Ach, blast! I don't care to fight any more over this... come macros I don't even need inlineable functions anymore! ALL HAIL WALTER AND HIS AST MACROS! :-))) They might be just the kick ass feature to kick off the final take off... Regards, frank
Aug 26 2007
0ffh wrote:ALL HAIL WALTER AND HIS AST MACROS! :-)))YAY!!! Regards, frank p.s. Sorry for self-reply, I re-read it and couldn't resist! =)
Aug 26 2007
0ffh wrote:Ingo Oeser wrote:I remember that Walter once said that everything in Phobos under std/ was standard, as a part of the D standard (I hope I'm getting my words right.) Thus, as Ingo said, a standard D compiler has to implement those things. This (not specifically inp/outp, but Phobos in general) was a problem when there were licensing issues (I wonder if there still are some), as other D implementations would not be able to provide those features. In this case, the architecture-specific parts would be the issue to overcome. In a way, it would be like expecting all OSes to have a registry. The sound solution was to put the Windows Registry stuff under std.windows. A D compiler that doesn't run on Windows wouldn't need to provide those modules. The same could be done for these intrinsics: put them in std.arch.x86.intrinsics, or something like that.Because every D compiler HAS to implement it for every architecture.I concur with Lutger on this: IIRC intrinsics are a compiler thing, not a language thing. They are routinely used for compiler-specific stuff, I take that as a hint.-- Carlos Santander BernalMaking inline assembly better [...] is the more useful goal. [...] This will reduce X86ism in D. DMD can learn A LOT from GCC in that area.Ach, blast! I don't care to fight any more over this... come macros, I won't even need inlineable functions anymore! ALL HAIL WALTER AND HIS AST MACROS! :-))) They might be just the kick-ass feature to kick off the final take-off... Regards, frank
Aug 26 2007
Carlos Santander wrote:I remember that Walter once said that everything in Phobos under std/ was standard, as a part of the D standard (I hope I'm getting my words right.) Thus, as Ingo said, a standard D compiler has to implement those things.Yes, that's my point.In a way, it would be like expecting all OSes to have a registry. The sound solution was to put the Windows Registry stuff under std.windows. A D compiler that doesn't run on Windows wouldn't need to provide those modules. The same could be done for these intrinsics: put them in std.arch.x86.intrinsics, or something like that.Yes, that sounds sane enough. And if you implement a D compiler for AVR, you don't need to implement that stuff. Best Regards Ingo Oeser
Aug 28 2007
Ingo Oeser wrote:Downs wrote:FWIW, you can already run a MT foreach variation on the tools.threadpool class, and it wouldn't be that hard to write a tree reduce that does the same .. but I see your point and agree. In the end, programming languages should come as close as possible to capturing your intentions with a given piece of code, which is why this kind of metadata would be extremely useful to a clever compiler. --downs Ingo Oeser wrote:Ok, with your tools, I could already CODE like that right now and remove the "import iter;" later. Thanks for that :-) But I want to express that there isn't any defined order of execution, as long as data flow is correct. And of course I would like them overloadable to compose and distribute them driven by data flow. I just like to express the data flow more explicitly instead of depending on the optimizer to figure it out himself. Let's help the optimizer greatly here.- More explicit loop notion which can do map(), reduce(), filter().See also my "Some tools for D" post. I implemented those with iterators, but they're trivial to do as freestanding functions.
Aug 24 2007
Craig Black wrote:First, D needs to at the very least match the features that are added to C++ with regards to parallelism and concurrency.D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model. Also, D already implements a superset of some of C++0x's synchronization primitives.
Aug 22 2007
Walter Bright wrote:D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model.Very interesting, could you tell more regarding automatically parallelizable functions? -- serg
Aug 23 2007
serg kovrov wrote:Walter Bright wrote:Andrei and I covered this in our presentation on D at the D conference, which we'll post here soon.D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model.Very interesting, could you tell more regarding automatically parallelizable functions?
Aug 23 2007
Walter Bright wrote:D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model.I heard the songs of angels in my mind when I read this. --Steve
Aug 23 2007
Walter Bright wrote:Craig Black wrote:And with inline asm and volatile, an atomic operations package is fairly easy to implement in D (most easily for x86 for obvious reasons). I really think D is in fairly good shape for concurrent programming even without a carefully established multithread-aware memory model. SeanFirst, D needs to at the very least match the features that are added to C++ with regards to parallelism and concurrency.D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model. Also, D already implements a superset of some of C++0x's synchronization primitives.
Aug 25 2007
On Sat, 25 Aug 2007, Sean Kelly wrote:Walter Bright wrote:As long as you don't care about the performance of calling a function for a single asm operation or writing asm { ... } at each callsite for the atomic operations. The problem is that dmd won't inline functions with inline asm. GDC will, so all isn't lost. Luckily, a future 2.0 feature, macros, will make it easy to shove the asm inline. But yes, it's possible. But it's no better than c++ on that front.. it's only on par. Later, BradCraig Black wrote:And with inline asm and volatile, an atomic operations package is fairly easy to implement in D (most easily for x86 for obvious reasons). I really think D is in fairly good shape for concurrent programming even without a carefully established multithread-aware memory model. SeanFirst, D needs to at the very least match the features that are added to C++ with regards to parallelism and concurrency.D will be addressing the problem by moving towards supporting pure functions, which are automatically parallelizable. I think this will be much more powerful than C++'s model. Also, D already implements a superset of some of C++0x's synchronization primitives.
Aug 25 2007
eao197 Wrote: [...]BTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.First of all, thanks for the link. To me as a non-professional programmer, D is much clearer and more cleanly structured. Coming from Pascal/Delphi, it is far easier to understand. It also has a bit of the air of Perl: there is more than one way to do it. In C++, there seems to be only one right way, but that is hard to understand.
Aug 20 2007
eao197 wrote:On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter <dnewsgroup billbaxter.com> wrote:I would put my hopes on the macros, type system and other metaprogramming stuff. Those are areas in which C++ doesn't really shine. I think "We, the Gods, decided to give you 2 new keywords now - if you ask nicely, the next release might have 3 more." vs "We give you all the power to create your own constructs." becomes more apparent now that C++ has started taking its steps towards Lisp, expressiveness-wise. Still, if C++1x implements these too, there won't be much need for D anymore besides as a syntactic "skin" for the C++ ugliness.A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.htmlBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.
Aug 20 2007
On Mon, 20 Aug 2007 16:35:41 +0400, Jari-Matti Mäkelä <jmjmak utu.fi.invalid> wrote:eao197 wrote:If someone really needs a flexible macro- and metaprogramming subsystem, it is better to look at Nemerle.On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter <dnewsgroup billbaxter.com> wrote:I would put my hopes on the macros, type system and other metaprogramming stuff.A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.htmlBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.Those are areas in which C++ doesn't really shine.IMHO, macro and metaprogramming are areas which C++ simply does not need. It is much easier to write some small code-generation script in Perl/Ruby/Python and include its result into C++ via '#include'."We give you all the power to create your own constructs."I'm afraid it would lead to another Lisp-like failure: each Lisper writes his own domain-specific language to solve exactly the same problem. -- Regards, Yauheni Akhotnikau
Aug 20 2007
eao197 wrote:That's my thoughts exactly on LISP's "power to create your own constructs" issue! -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D"We give you all the power to create your own constructs."I'm afraid it would lead to another Lisp-like failure: each Lisper writes his own domain-specific language to solve exactly the same problem.
Aug 20 2007
eao197 wrote:On Mon, 20 Aug 2007 16:35:41 +0400, Jari-Matti Mäkelä <jmjmak utu.fi.invalid> wrote:It isn't well suited for system programming.eao197 wrote:If someone really needs a flexible macro- and metaprogramming subsystem, it is better to look at Nemerle.On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter <dnewsgroup billbaxter.com> wrote:I would put my hopes on the macros, type system and other metaprogramming stuff.A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.htmlBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.Is it better for each C++ coder to write his own domain-specific build tool to solve exactly the same problem than to bolt that functionality onto the core language?Those are areas in which C++ doesn't really shine.IMHO, macro and metaprogramming are areas which C++ simply does not need. It is much easier to write some small code-generation script in Perl/Ruby/Python and include its result into C++ via '#include'.If C++ had had enough compile-time capabilities, many of the new feature proposals would have been implementable at the library level and thus available on most currently available compilers, without the transitioning costs/delays we see now. What's so bad about DSLs - they're a typical programming idiom in Lisp, just as functions or assignments are in BCPL-like languages. Also, I think the number of algorithm implementations for Lisp can be pretty much explained by the age and popularity of the language. 
In case you haven't noticed, there are already 18+ GUI toolkit bindings/implementations (according to wiki4d) for D and D is 41 years younger than Lisp."We give you all the power to create your own constructs."I'm afraid it would lead to another Lisp-like failure: each Lisper writes his own domain-specific language to solve exactly the same problem.
Aug 20 2007
On Mon, 20 Aug 2007 17:31:59 +0400, Jari-Matti Mäkelä <jmjmak utu.fi.invalid> wrote:What kind of system programming? Writing compilers and writing drivers are examples of system programming, but Nemerle is well suited for the first, not the second. Does low-level system programming (drivers and the like) really need macros and/or metaprogramming?It isn't well suited for system programming.I would put my hopes on the macros, type system and other metaprogramming stuff.If someone really needs a flexible macro- and metaprogramming subsystem, it is better to look at Nemerle.There are build tools more modern than make (like SCons) which make pre-compile-time code generation a smoother task. And after years of C++ development I've come to the conclusion that using C++ plus some scripting language (like Ruby) is more flexible, easy, and fast than using only one main language (like C++/Java).Is it better for each C++ coder to write his own domain-specific build tool to solve exactly the same problem than to bolt that functionality onto the core language?Those are areas in which C++ doesn't really shine.IMHO, macro and metaprogramming are areas which C++ simply does not need. It is much easier to write some small code-generation script in Perl/Ruby/Python and include its result into C++ via '#include'.In case you haven't noticed, there are already 18+ GUI toolkit bindings/implementations (according to wiki4d) for D and D is 41 years younger than Lisp.Do you think that 18+ GUI bindings is a good thing? I know that D has two standard libraries and this is not good at all :( -- Regards, Yauheni Akhotnikau
Aug 20 2007
eao197 wrote:I know that D has two standard libraries and this is not good at all :(That is really annoying! I think that this could break the neck of D.
Aug 20 2007
On Mon, 20 Aug 2007 18:21:13 +0400, Gregor Kopp <gregor.kopp chello.at> wrote:eao197 wrote:I hope this issue will be solved during D Conference. -- Regards, Yauheni AkhotnikauI know that D has two standard libraries and this is not good at all :(That is really annoying! I think that this could break the neck of D.
Aug 20 2007
eao197 wrote:On Mon, 20 Aug 2007 17:31:59 +0400, Jari-Matti Mäkelä <jmjmak utu.fi.invalid> wrote:If you look at e.g. the Linux sources, there's some heavy use of preprocessor macros. With proper language support that could be made less error-prone and more productive. Even some formal proofing might become possible. Nemerle isn't bad as a language, but it doesn't scale to low-level stuff as long as it runs on a VM. In my opinion, having a language that scales from driver/kernel level to end-user GUI apps and OTOH from asm opcode level to high-level FP constructs isn't a bad idea. For example, game development is one area where more speed and abstractions are always welcome. I suppose most application programs will run in a VM in the distant future, but at the moment at least my PCs are crying for mercy if I try to run bigger Java apps (running code analysis for a relatively small Java GUI app took about 2 hours in Eclipse). Flash applets aren't any better: a 300x200 mpeg2 movie can choke the system even though mplayer plays several simultaneous 720p streams just fine. The only .Net app I've used was a video card control panel on Windows. It felt too unresponsive to be usable in the long term.What kind of system programming? Writing compilers and writing drivers are examples of system programming, but Nemerle is well suited for the first, not the second. Does low-level system programming (drivers and the like) really need macros and/or metaprogramming?It isn't well suited for system programming.I would put my hopes on the macros, type system and other metaprogramming stuff.If someone really needs a flexible macro- and metaprogramming subsystem, it is better to look at Nemerle.I agree. But I was on another abstraction level. Things like the active and atomic keywords in C++0x are features that are much easier to implement with builtin macro functionality than with 3rd-party preprocessors / build tools.There are build tools more modern than make (like SCons) which make pre-compile-time code generation a smoother task. And after years of C++ development I've come to the conclusion that using C++ plus some scripting language (like Ruby) is more flexible, easy, and fast than using only one main language (like C++/Java).Is it better for each C++ coder to write his own domain-specific build tool to solve exactly the same problem than to bolt that functionality onto the core language?Those are areas in which C++ doesn't really shine.IMHO, macro and metaprogramming are areas which C++ simply does not need. It is much easier to write some small code-generation script in Perl/Ruby/Python and include its result into C++ via '#include'.Probably not.In case you haven't noticed, there are already 18+ GUI toolkit bindings/implementations (according to wiki4d) for D and D is 41 years younger than Lisp.Do you think that 18+ GUI bindings is a good thing?I know that D has two standard libraries and this is not good at all :(I know :/
Aug 20 2007
On Mon, 20 Aug 2007 18:39:44 +0400, Jari-Matti Mäkelä <jmjmak utu.fi.invalid> wrote:Nemerle isn't bad as a language, but it doesn't scale to low-level stuff as long as it runs on a VM. In my opinion, having a language that scales from driver/kernel level to end-user GUI apps and OTOH from asm opcode level to high-level FP constructs isn't a bad idea. For example, game development is one area where more speed and abstractions are always welcome.I don't know about game development, but I can mention another area: telecommunications. An SMS/MMS gateway requires many low-level bit/byte transformation operations and a lot of high-level logic like transaction routing. But I've learnt that more verbose code, written only with standard language features, is much more maintainable than more compact code written with some domain-specific extensions. But maybe it is just my karma :) -- Regards, Yauheni Akhotnikau
Aug 20 2007
I suppose most application programs will run in a VM in the distant future, but at the moment at least my PCs are crying for mercy if I try to run bigger Java apps (running code analysis for a relatively small Java GUI app took about 2 hours in Eclipse). Flash applets aren't any better: a 300x200 mpeg2 movie can choke the system even though mplayer plays several simultaneous 720p streams just fine. The only .Net app I've used was a video card control panel on Windows. It felt too unresponsive to be usable in the long term.Really? JIT compilers (LaTTe, Sun's Java 5+, probably IBM's JVMs, the .Net runtime, Mono...) compile things to native code, so besides a little compilation overhead at startup, they should run just as fast as native code, if not faster. I think one of the problems as far as speed is concerned is that the languages were not designed for pure efficiency, and make heavy use of heap allocation, etc. That said, there are already some applications that run faster on a JIT VM because the VM can do certain optimizations that a native compiler can't, and, further, can do them transparently (without programmer interaction). Especially as multi-cores become more prevalent, VMs will be able to automatically vectorize/parallelize loops, which means that, if nothing else, they provide the power to spare the "average developer", who might not know so much about optimization or parallelism, from those horrors. Flash is a different story, as I doubt ActionScript is JITed. As far as Eclipse goes, the IDE does a _lot_ behind the scenes (it compiles any changes every time you stop typing for a couple seconds, marks semantic errors & resolves bindings as you type, etc., etc.), so on slower computers it might sometimes feel a little sluggish. Java's String class is also partly to blame (40 bytes of heap-allocated overhead for every string is a lot), but that's more an issue with the coding style & standard library than with VMs in general.
Aug 20 2007
Robert Fraser wrote:That said, there are already some applications that run faster on a JIT VM because the VM can do certain optimizations that a native compiler can't, and, further, can do them transparently (without programmer interaction). Especially as multi-cores become more prevalent, VMs will be able to automatically vectorize/parallelize loops, which means that, if nothing else, they provide the power to spare the "average developer" who might not know so much about optimization or parallelism, from those horrors.This is something LLVM [1] tries to fix, I think. I skimmed over some articles about it, and they talk about run-time optimization of native code. It's all over the site; a lot of interesting jargon to me. :)Flash is a different story, as I doubt ActionScript is JITed. As far as Eclipse goes, the IDE does a _lot_ behind the scenes (it compiles any changes every time you stop typing for a couple seconds, marks semantic errors & resolves bindings as you type, etc., etc.), so on slower computers it might sometimes feel a little sluggish. Java's String class is also partly to blame (40 bytes of heap-allocated overhead for every string is a lot), but that's more an issue with the coding style & standard library than with VMs in general.Recently, Adobe donated the source of their ActionScript VM and (there it is) JIT compiler to Mozilla. The project is called Tamarin [2]. Lazy web, signing off. :) [1] http://llvm.org/ [2] http://mozilla.org/projects/tamarin/
Aug 20 2007
eao197 wrote:On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter <dnewsgroup billbaxter.com> wrote:C++0x will be an enormous, ugly, and scary language...A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.htmlBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0xIt is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.-- Carlos Santander Bernal
Aug 20 2007
On Mon, 20 Aug 2007 18:41:13 +0400, Carlos Santander <csantander619 gmail.com> wrote:eao197 wrote:Will be? It is! :) But it is here, and will be here. And D is just growing. -- Regards, Yauheni AkhotnikauOn Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter <dnewsgroup billbaxter.com> wrote:C++0x will be an enormous, ugly, and scary language...A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.htmlBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x
Aug 20 2007
eao197 wrote:On Mon, 20 Aug 2007 18:41:13 +0400, Carlos Santander <csantander619 gmail.com> wrote:Hehe, true.eao197 wrote:Will be? It is! :)On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter <dnewsgroup billbaxter.com> wrote:C++0x will be an enormous, ugly, and scary language...A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.htmlBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0xBut it is here, and will be here. And D is just growing.But it's not getting ugly or scary... Big difference... ;) -- Carlos Santander Bernal
Aug 20 2007
eao197 wrote:On Sun, 19 Aug 2007 23:36:07 +0400, Bill Baxter <dnewsgroup billbaxter.com> wrote:Bjarne Stroustrup is talking about an ongoing discussion about GC features getting into the standard. but obviously, none of C++'s real problems will be fixed in C++0x, since that would require removing features, not adding them. on the other hand, they didn't even get the ABI/linking issues into the standard. backward compatibility is bad. C++ and Windows are the most prominent examples of that. it's better to have cuts every now and then and provide separate tools that ease the transition. i don't see any of D's potential diminished by C++0x. i think it's curious how much time Bjarne Stroustrup spends explaining how constrained C++'s language design process is.A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.htmlBTW, there is a C++0x overview in Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B0x It is interesting to know which advantages D (2.0? 3.0? 4.0?) will have over C++0x. Maybe only high-speed compilation and GC.
Aug 20 2007
Jascha Wetzel wrote:i think it's curious how much time Bjarne Stroustrup spends explaining how constrained C++'s language design process is.I don't :-) Bjarne may have created C++, but he hasn't had any real control over the language for perhaps the last fifteen years. Still, Bjarne is the one people look to when wondering why C++ doesn't have some feature they consider important (as this interview can attest). What else can he do but explain, again, why things are the way they are? Sean
Aug 20 2007
eao197 Wrote:Yes! But C++ is doing that without breaking the existing codebase. So a significant number of C++ programmers needn't look to D -- they will get new advanced features without dropping their old tools, IDEs, and libraries. I'm afraid that would play against D :( Current C++ is far behind D, but D is not as stable, mature, or equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (it is only 2.5 years ahead) C++ has things like lambdas and autos (and tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++: by that time D would also compete with newer functional languages (like Haskell and OCaml).You seem to forget that D is evolving, too. C++ might get a lot of the cool D features (albeit with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.
Aug 20 2007
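For concreteness, the "lambdas and autos" mentioned above are the headline C++0x additions. A minimal sketch of how they look (C++0x eventually shipped as C++11; `sum_of_squares` and `count_even` are just illustrative names):

```cpp
#include <algorithm>
#include <vector>

// 'auto' deduces the iterator type, replacing the verbose
// std::vector<int>::const_iterator spelling of C++98.
int sum_of_squares(const std::vector<int>& v) {
    int total = 0;
    for (auto it = v.begin(); it != v.end(); ++it) {
        total += *it * *it;
    }
    return total;
}

// A lambda passed inline where C++98 required a named functor
// or a free function.
int count_even(const std::vector<int>& v) {
    return static_cast<int>(
        std::count_if(v.begin(), v.end(),
                      [](int x) { return x % 2 == 0; }));
}
```

Both features are type-deduction conveniences: neither changes what the compiled code does, only how much the programmer has to spell out.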
On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser <fraserofthenight gmail.com> wrote:eao197 Wrote:Yes! But C++ is doing that without breaking the existing codebase. So a significant number of C++ programmers needn't look to D -- they will get new advanced features without dropping their old tools, IDEs and libraries. I'm afraid that will play against D :( Current C++ is far behind D, but D is not stable, not mature, and not as well equipped with tools/libraries as C++. So it will take several years to make D competitive with C++ in that area. But if in 2010 (only 2.5 years ahead) C++ has things like lambdas and autos (plus tons of libraries and an army of programmers), what will be D's 'killer feature' to attract C++ programmers? And not only C++; by then D will also compete with new functional languages (like Haskell and OCaml).You seem to forget that D is evolving, too. C++ might get a lot of the cool D features (albeit with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.
I didn't. From my point of view, constant evolution is one of D's main problems. I can't start using D at my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet. To outperform C++ in 2009-2010, D must have full strength now and must stay stable for some years to prove that strength in some killer applications. -- Regards, Yauheni Akhotnikau
Aug 20 2007
eao197 wrote:On Mon, 20 Aug 2007 23:26:33 +0400, Robert Fraser <fraserofthenight gmail.com> wrote:To me it seems that D's main current problem is lack of dependable libraries. A secondary problem is lack of run-time flexibility (ala Python, etc.), but that may be intractable in a language that intends to be fast. Well... the libraries problem is intractable, also. Just, perhaps, less so. OTOH, it is crucial that new releases not break working libraries. If they do it will not only prevent the accumulation over time of working libraries, but will also discourage people from working on them.eao197 Wrote:I didn't. From my point of view, permanent envolvement is a main D's problem. I can't start use D on my work regulary because D and Tango is not stable enough. I can't start teach students D because D 1.0 is obsolete and D 2.0 is not finished yet. To outperform C++ in 2009-2010 D must have full strength now and must be stable during some years to proof that strength in some killer applications....
Aug 20 2007
eao197 wrote:On Mon, 20 Aug 2007 23:26:33 +0400, Robert FraserI don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.You seem to forget that D is evolving, too. C++ might get a lot of the cool D features (albiet with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.I didn't. From my point of view, permanent envolvement is a main D's problem. I can't start use D on my work regulary because D and Tango is not stable enough. I can't start teach students D because D 1.0 is obsolete and D 2.0 is not finished yet.To outperform C++ in 2009-2010 D must have full strength now and must be stable during some years to proof that strength in some killer applications.C++0x's new features are essentially all present in D 1.0.
Aug 22 2007
Walter Bright wrote:eao197 wrote:..but C++98's features that were missing from D are still missing (both good and bad ones). --bbOn Mon, 20 Aug 2007 23:26:33 +0400, Robert FraserI don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.You seem to forget that D is evolving, too. C++ might get a lot of the cool D features (albiet with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.I didn't. From my point of view, permanent envolvement is a main D's problem. I can't start use D on my work regulary because D and Tango is not stable enough. I can't start teach students D because D 1.0 is obsolete and D 2.0 is not finished yet.To outperform C++ in 2009-2010 D must have full strength now and must be stable during some years to proof that strength in some killer applications.C++0x's new features are essentially all present in D 1.0.
Aug 22 2007
Bill Baxter wrote:Walter Bright wrote:Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.C++0x's new features are essentially all present in D 1.0...but C++98's features that were missing from D are still missing (both good and bad ones).
Aug 23 2007
Walter Bright wrote:Bill Baxter wrote:The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting. 2) lack of a way to return a reference. 3) From what I can tell "const ref" doesn't work for parameters in D 2.0. Oh, and 4) real struct constructors. Just a syntactic annoyance, but still an annoyance. --bbWalter Bright wrote:Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.C++0x's new features are essentially all present in D 1.0...but C++98's features that were missing from D are still missing (both good and bad ones).
Aug 23 2007
Bill Baxter wrote:Walter Bright wrote:Sorry for the self-follow-up, but I just wanted to add that really C++ smart pointers are themselves kind of clunky due to the fact that _all_ you have access to is that operator*/operator-> thing. So for instance if you make a boost::shared_ptr<std::map>, you end up always having to dereference to do anything interesting involving operator overloads. mymap["foo"] doesn't work; you need to use (*mymap)["foo"]. What you really want most of the time is something more like "smart references". This kind of thing is coming close to being possible with the reflection stuff some people are doing. Basically shared_ptr!(T) would do introspection on T and populate itself with basic forward-to-T implementations of all of T's methods. But that seems kind of heavyweight to me. All you really want to do is define a fallback -- when the compiler sees foo[x] and foo is a shared_ptr!(T), there should be a way to tell it to check T for an opIndex if the shared_ptr itself doesn't have one. That would handle the access syntax. But that still leaves the destructor/copy constructors necessary to get a real smart pointer.Bill Baxter wrote:The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting.Walter Bright wrote:Like what? Virtual base classes? Argument dependent lookup? #include files? 
C++ can keep them <g>.C++0x's new features are essentially all present in D 1.0...but C++98's features that were missing from D are still missing (both good and bad ones).2) lack of a way to return a reference.This would also be less critical given a way to fall-back to a member's implementation.3) From what I can tell "const ref" doesn't work for parameters in D 2.0. Oh, and 4) real struct constructors. Just a syntactic annoyance, but still an annoyance.--bb
Aug 23 2007
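The (*mymap)["foo"] complaint above is easy to see concretely. A minimal sketch using std::shared_ptr (boost::shared_ptr behaves the same way here; `demo` is just an illustrative name):

```cpp
#include <map>
#include <memory>
#include <string>
#include <utility>

// The smart pointer does not forward operator[] to the pointee,
// so the pointer must be dereferenced explicitly.
int demo() {
    typedef std::map<std::string, int> Map;
    std::shared_ptr<Map> mymap(new Map);

    // mymap["foo"] = 1;    // does NOT compile: shared_ptr has no operator[]
    (*mymap)["foo"] = 1;    // works, but needs the explicit dereference

    // operator-> forwards member access only, not operator overloads.
    mymap->insert(std::make_pair(std::string("bar"), 2));

    return (*mymap)["foo"] + (*mymap)["bar"];
}
```

This is exactly the klunkiness described above: member calls forward cleanly through operator->, but every overloaded operator on the pointee requires (*ptr) first.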
Bill Baxter wrote:Bill Baxter wrote:Funny, after reading your post I was thinking that you would provide a way to fall back by returning a reference :P eg. ref T opDereference() { return ptr; } which would then automatically be called when using [], ., etc. on a T* I guess we wait and see what Walter cooks up for us in 2.0 :) ReganWalter Bright wrote:Sorry for the self-follow-up, but I just wanted to add that really C++ smart pointers are themselves kind of clunky due to the fact that _all_ you have access to is that operator*/operator-> thing. So for instance if you make a boost::shared_ptr<std::map>, you end up always having to dereference to do anything interesting involving operator overloads. mymap["foo"] doesn't work; you need to use (*mymap)["foo"]. What you really want most of the time is something more like "smart references". This kind of thing is coming close to being possible with the reflection stuff some people are doing. Basically shared_ptr!(T) would do introspection on T and populate itself with basic forward-to-T implementations of all of T's methods. But that seems kind of heavyweight to me. All you really want to do is define a fallback -- when the compiler sees foo[x] and foo is a shared_ptr!(T), there should be a way to tell it to check T for an opIndex if the shared_ptr itself doesn't have one. That would handle the access syntax. But that still leaves the destructor/copy constructors necessary to get a real smart pointer.Bill Baxter wrote:The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting.Walter Bright wrote:Like what? Virtual base classes? Argument dependent lookup? 
#include files? C++ can keep them <g>.C++0x's new features are essentially all present in D 1.0...but C++98's features that were missing from D are still missing (both good and bad ones).2) lack of a way to return a reference.This would also be less critical given a way to fall-back to a member's implementation.
Aug 24 2007
Regan Heath wrote:Bill Baxter wrote:Really I'd rather have something that gives a little more control. Returning a reference is like pulling down your pants in public. --bbBill Baxter wrote:Funny, after reading you post I was thinking that you would provide a way to fallback by returning a reference :P eg. ref T opDereference() { return ptr; } which would then automatically be called when using [] . etc on a T* I guess we wait and see what Walter cooks up for us in 2.0 :)Walter Bright wrote:Sorry for the self-follow-up, but I just wanted to add that really C++ smart pointers are themselves kind of klunky due to the fact that _all_ you have access to is that operator*/operator-> thing. So for instance if you make a boost::shared_ptr<std::map>, you end up always having to dereference to do anything interesting involving operator overloads. mymap["foo"] doesn't work, you need to use (*mymap)["foo"]. What you really want most of the time is something more like "smart references". This kind of thing is coming close to possibility with the reflection stuff some people are doing. Basically shapred_ptr!(T) would do introspection on T and populate itself with basic foward-to-T implementations of all of T's methods. But that seems kind of heavyweight to me. All you really want to do is define a fallback -- when the compiler sees foo[x] and foo is a shared_ptr!(T), there should be a way to tell it to check T for an opIndex if the shared_ptr itself doesn't have one. That would handle the access syntax. But that still leaves the destructor/copy constructors necessary to get a real smart pointer.Bill Baxter wrote:The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. 
I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting.Walter Bright wrote:Like what? Virtual base classes? Argument dependent lookup? #include files? C++ can keep them <g>.C++0x's new features are essentially all present in D 1.0...but C++98's features that were missing from D are still missing (both good and bad ones).2) lack of a way to return a reference.This would also be less critical given a way to fall-back to a member's implementation.
Aug 24 2007
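The "smart reference" fallback discussed above -- forwarding opIndex to the pointee when the wrapper lacks one -- has to be written by hand in C++. A minimal sketch with no ownership semantics (`forwarding_ref` is a hypothetical name; a real smart pointer would add refcounting on top):

```cpp
#include <map>
#include <string>

// A wrapper that manually forwards operator[] to the wrapped object,
// so users can write wrapper["foo"] instead of (*wrapper)["foo"].
// Sketched for map-like pointees (it relies on T::mapped_type).
template <class T>
class forwarding_ref {
    T* p;  // non-owning: this sketch does no refcounting
public:
    explicit forwarding_ref(T* ptr) : p(ptr) {}

    // The hand-written fallback: forward opIndex to the pointee.
    template <class K>
    typename T::mapped_type& operator[](const K& key) { return (*p)[key]; }

    // Ordinary member access still forwards through operator->.
    T* operator->() { return p; }
};
```

Every forwarded operator needs its own hand-written stub like operator[] above, which is exactly why a compiler-supported fallback (or reflection-generated forwarding) would be attractive.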
Bill Baxter wrote:The things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting. 2) lack of a way to return a reference. 3) From what I can tell "const ref" doesn't work for parameters in D 2.0. Oh, and 4) real struct constructors. Just a syntactic annoyance, but still an annoyance.These will all be addressed in 2.0.
Aug 23 2007
Walter Bright wrote:Bill Baxter wrote:Hot diggity. Looking forward to it. --bbThe things that have me banging my head most often are 1) the few things preventing an implementation of smart pointers [destructors, copy constructors and opDot]. There are some cases where you just want to refcount objects. This is the one hole in D that I haven't heard any reasonable workaround for. I don't necessarily _want_ copy constructors in general but they seem to be necessary for implementing automatic reference counting. 2) lack of a way to return a reference. 3) From what I can tell "const ref" doesn't work for parameters in D 2.0. Oh, and 4) real struct constructors. Just a syntactic annoyance, but still an annoyance.These will all be addressed in 2.0.
Aug 24 2007
Walter Bright wrote:eao197 wrote:All except Concepts. I know there was a small discussion of Concepts here after someone posted a Doug Gregor video on Concepts, but other than that they haven't really got much attention. I know that a lot of the problems they solve in simplifying template error messages can be done alternatively in D with static-if, is() and now __traits, in conjunction with the 'static unittest' idiom, but even then, I think C++0x Concepts give a nicer syntax for expressing exactly what you want, and they also allow overloading on Concepts (which AFAIK there is no way to emulate in D). Two characteristic examples (the first one is in would-be D with Concepts): // if D had Concepts void sort(T :: RandomAccessIteratorConcept)(T t) {...} // currently void sort(T)(T t) { static assert(IsRandomAccessIterator!(T), T.stringof ~ " isn't a random access iterator"); ... } alias sort!(MinimalRandomAccessIterator) _sort__UnitTest; It isn't syntactically clean, so people won't be encouraged to support this idiom, and it doesn't allow the Concepts features of overloading or concept maps (I think concept maps can be emulated, but they currently break IFTI). I'm interested in knowing your thoughts/plans for this. -- ReinerOn Mon, 20 Aug 2007 23:26:33 +0400, Robert FraserI don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.You seem to forget that D is evolving, too. C++ might get a lot of the cool D features (albiet with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.I didn't. From my point of view, permanent envolvement is a main D's problem. I can't start use D on my work regulary because D and Tango is not stable enough. 
I can't start teach students D because D 1.0 is obsolete and D 2.0 is not finished yet.To outperform C++ in 2009-2010 D must have full strength now and must be stable during some years to proof that strength in some killer applications.C++0x's new features are essentially all present in D 1.0.
Aug 23 2007
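Overloading on a predicate -- the Concepts capability claimed above to be hard to emulate -- can at least be approximated in C++ with the enable_if/SFINAE technique (hand-rolled here so the sketch is self-contained; Boost and later C++11 provide the same utility, and `category`/`is_pointer` are illustrative names):

```cpp
// enable_if: has a nested 'type' only when the condition is true.
template <bool B, class T = void> struct enable_if {};
template <class T> struct enable_if<true, T> { typedef T type; };

// The "concept": a compile-time predicate over T.
template <class T> struct is_pointer     { static const bool value = false; };
template <class T> struct is_pointer<T*> { static const bool value = true;  };

// Two overloads gated by the predicate. Substitution failure removes
// the non-matching one from the candidate set (SFINAE), so exactly
// one overload survives for any argument type.
template <class T>
typename enable_if<is_pointer<T>::value, int>::type
category(T) { return 1; }   // chosen only for pointer types

template <class T>
typename enable_if<!is_pointer<T>::value, int>::type
category(T) { return 0; }   // chosen otherwise
```

This is clumsier than first-class Concepts (the predicate leaks into the signature, and error messages stay poor), which is the thrust of the comparison above.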
Reiner Pope wrote:Walter Bright wrote:I see Walter has now said elsewhere in this thread that 'concepts aren't a whole lot more than interface specialization, which is already supported in D.' True; what I'm really wondering, though, is 1. Will specialisation be "fixed" to work with IFTI? 2. Will there be a way to support user-defined specialisations, for instance once which don't depend on the inheritance hierarchy? -- Reinereao197 wrote:All except Concepts. I know there was a small discussion of Concepts here after someone posted a Doug Gregor video on Concepts, but other than that they haven't really got much attention. I know that a lot of the problems they solve in simplifying template error messages can be done alternatively in D with static-if, is() and now __traits, in conjunction with the 'static unittest' idiom, but even then, I think C++0x Concepts give a nicer syntax for expressing exactly what you want, and they also allow overloading on Concepts (which AFAIK there is no way to emulate in D). Two characteristic examples (the first one is in would-be D with Concepts): // if D had Concepts void sort(T :: RandomAccessIteratorConcept)(T t) {...} // currently void sort(T)(T t) { static assert(IsRandomAccessIterator!(T), T.stringof ~ " isn't a random access iterator"); ... } alias sort!(MinimalRandomAccessIterator) _sort__UnitTest; It isn't syntactically clean, so people won't be encouraged to support this idiom, and it doesn't allow the Concepts features of overloading or concept maps (I think concept maps can be emulated, but they currently break IFTI). I'm interested in knowing your thoughts/plans for this. -- ReinerOn Mon, 20 Aug 2007 23:26:33 +0400, Robert FraserI don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.You seem to forget that D is evolving, too. 
C++ might get a lot of the cool D features (albiet with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.I didn't. From my point of view, permanent envolvement is a main D's problem. I can't start use D on my work regulary because D and Tango is not stable enough. I can't start teach students D because D 1.0 is obsolete and D 2.0 is not finished yet.To outperform C++ in 2009-2010 D must have full strength now and must be stable during some years to proof that strength in some killer applications.C++0x's new features are essentially all present in D 1.0.
Aug 23 2007
Reiner Pope wrote:1. Will specialisation be "fixed" to work with IFTI?You can simply specialize the parameter to the function.2. Will there be a way to support user-defined specialisations, for instance ones which don't depend on the inheritance hierarchy?I don't know what that means - interfaces are already user-defined.
Aug 23 2007
Walter Bright wrote:Reiner Pope wrote:I'm not sure what you mean. But what I refer to is the part of the spec (the templates page, under Function Templates) that says "Function template type parameters that are to be implicitly deduced may not have specializations:" and gives the example: void Foo(T : T*)(T t) { ... } int x,y; Foo!(int*)(x); // ok, T is not deduced from function argument Foo(&y); // error, T has specialization Perhaps you mean that you can write void Foo(T)(T* t) { ... } ... int x; Foo(&x); Sure. But the following doesn't work: void Foo(T)(T t) { ... } void Foo(T)(T* t) { /* different implementation for this specialisation */ } ... int x; Foo(x); Foo(&x); // ambiguous and using template parameter specialisation, IFTI breaks.They are, but they only allow you to stipulate requirements on the type's place in the inheritance hierarchy. Two things that inheritance doesn't cover are structural conformance and complicated predicates. Structural conformance is clearly important simply because templates make it possible and it avoids the overheads of inheriting from an interface. This is what C++ Concepts have over D interface specialisation. As to complicated predicates, I refer to the common idiom in D templates which looks like the following: template Foo(T) { static assert(SomeComplicatedRequirement!(T), "T doesn't meet condition"); ... // implementation } (SomeComplicatedRequirement is something inexpressible with the inheritance system; something like "a static array with a size that is a multiple of 1KB") Some people have suggested (Don Clugston, from memory) that failing the static assert should cause the compiler to try another template overload. I thought this would be easier if you allowed custom specialisations on templates. This would allow the above idiom to turn into something like template Foo(T :: SomeComplicatedRequirement) { ... 
} (The rest is just how I think it should work) The user-defined specialisation would be an alias which must define two templates which can answer the two questions: - does a given type meet the requirements of this specialisation? - is this specialisation a superset or subset of this other specialisation, or can't you tell? (giving the partial ordering rules) This allows user-defined predicates to fit in neatly with partial ordering of templates. -- Reiner2. Will there be a way to support user-defined specialisations, for instance once which don't depend on the inheritance hierarchy?I don't know what that means - interfaces are already user-defined.
Aug 24 2007
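For contrast, the overload set shown above does work in C++: partial ordering of function templates picks the T* version for a pointer argument, with no dispatcher needed (a minimal sketch):

```cpp
// C++ does not hit the ambiguity described above: when both overloads
// deduce successfully, partial ordering prefers the more specialized
// one, so Foo(&x) selects the T* version.
template <class T> int Foo(T)  { return 0; }  // general version
template <class T> int Foo(T*) { return 1; }  // more specialized: pointer types
```

This is the behavior D's IFTI lacked at the time, and why the hand-written dispatcher workaround shown above was needed.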
Reiner Pope wrote:Walter Bright wrote:Neither am I... [snip]Reiner Pope wrote:I'm not sure what you mean.1. Will specialisation be "fixed" to work with IFTI?You can simply specialize the parameter to the function.Sure. But the following doesn't work: void Foo(T)(T t) { ... } void Foo(T)(T* t) { /* different implementation for this specialisation */ } ... int x; Foo(x); Foo(&x); // ambiguous and using template parameter specialisation, IFTI breaks.This is the workaround I've been using: void Foo_(T: T*)(T* a) { writefln("ptr"); } void Foo_(T)(T a) { writefln("non-ptr"); } // dispatcher void Foo(T)(T x) { Foo_!(T)(x); } void main() { int x; Foo(x); Foo(&x); }They are, but they only allow you to stipulate requirements on the types place in the inheritance hierarchy. Two things that inheritance doesn't cover is structural conformance, and complicated predicates. Structural conformance is clearly important simply because templates make it possible and it avoids the overheads of inheriting from an interface. This is what C++ Concepts have on D interface specialisation. As to complicated predicates, I refer to the common idiom in D templates which looks like the following: template Foo(T) { static assert(SomeComplicatedRequirement!(T), "T doesn't meet condition"); ... // implementation } (SomeComplicatedRequirement is something inexpressible with the inheritance system; something like "a static array with a size that is a multiple of 1KB") Some people have suggested (Don Clugston, from memory) that failing the static assert should cause the compiler to try another template overload. I thought this would be easier if you allowed custom specialisations on templates. This would allow the above idiom to turn into something like template Foo(T :: SomeComplicatedRequirement) { ... }2. 
Will there be a way to support user-defined specialisations, for instance once which don't depend on the inheritance hierarchy?I don't know what that means - interfaces are already user-defined.The user-defined specialisation would be an alias which must define two templates which can answer the two questions: - does a given type meet the requirements of this specialisation? - is this specialisation a superset or subset of this other specialisation, or can't you tell? (giving the partial ordering rules) This allows user-defined predicates to fit in neatly with partial ordering of templates.My suggestion has been the following: template Foo(T : <compile time expression yielding boolean value>), where the expression may depend on T. E.g: template Foo(T: RandomIndexableContainer!(T)) { ... } template RandomIndexableContainer(T) { const RandomIndexableContainer = HasMember!(T, "ValueType") && HasMember!(T, "length") && HasMember!(T, "opIndex",int); } Even something like this should be possible: struct RandomIndexableContainerConcept {...} template Foo(T: Implements!(T, RandomIndexableContainerConcept)) { } or something. This suggestion lacks the partial ordering of specializations, but those could be probably imposed on a case by case basis by nesting the conditions. -- Oskar
Aug 24 2007
Reiner Pope wrote:Perhaps you mean that you can write void Foo(T)(T* t) { ... } ... int x; Foo(&x); Sure. But the following doesn't work: void Foo(T)(T t) { ... } void Foo(T)(T* t) { /* different implementation for this specialisation */ } ... int x; Foo(x); Foo(&x); // ambiguous and using template parameter specialisation, IFTI breaks.You can write the templates as: void Foo(T)(T t) { ... } void Foo(T, dummy=void)(T* t) { /* different implementation for this specialisation */ } Not so pretty, but it works.As to complicated predicates, I refer to the common idiom in D templates which looks like the following:Sean Kelly had a solution for that of the form:More often, I use an additional value parameter to specialize against: template Foo(T, bool isValid : true = PassesSomeTest!(T)) {}
Aug 26 2007
On Thu, 23 Aug 2007 10:14:39 +0400, Walter Bright <newshound1 digitalmars.com> wrote:eao197 wrote:AFAIK, C++0x doesn't break compatibility with C++98. So if I teach students C++98 now, they can use C++0x later. Moreover, they can use all their C++98 code in C++0x. Now I see D 2.0 as a very different language from D 1.0.On Mon, 20 Aug 2007 23:26:33 +0400, Robert FraserI don't understand this. You could as well say that C++98 is obsolete and C++0x is not finished yet.You seem to forget that D is evolving, too. C++ might get a lot of the cool D features (albeit with ugly syntax), but by that time, D might have superpowers incomprehensible to the C++ mind.I didn't. From my point of view, constant evolution is one of D's main problems. I can't start using D at my work regularly because D and Tango are not stable enough. I can't start teaching students D because D 1.0 is obsolete and D 2.0 is not finished yet.Yes, but C++ doesn't require programmers to change their language, tools and libraries. Such a change requires a lot of time and effort. Such effort could be applied to current projects instead of switching to D. But if D could offer something else, something completely missing from C++0x (like non-null references/pointers, some kind of functional programming (pattern matching) and so on), then switching would be much more attractive. I know that you work very hard on D, but D 1.0 took almost 7 years. D 2.0 started in 2007, so could the final D 2.0 come in 2014? -- Regards, Yauheni AkhotnikauTo outperform C++ in 2009-2010 D must have full strength now and must be stable during some years to prove that strength in some killer applications.C++0x's new features are essentially all present in D 1.0.
Aug 23 2007
eao197 wrote:AFAIK, C++0x doesn't break compatibility with C++98. So if I teach students C++98 now they could use C++0x. Moreover they could use in C++0x all their C++98 code.It's not a perfect superset, but the breakage is very small.Now I see D 2.0 as very different language from D 1.0.There is more breakage from 1.0 to 2.0, but the changes required are straightforward to find and correct.D 1.0 provides a lot of things completely missing in C++0x:
1) unit tests
2) documentation generation
3) modules
4) string mixins
5) template string & floating point parameters
6) compile time function execution
7) contract programming
8) nested functions
9) inner classes
10) delegates
11) scope statement
12) try-finally statement
13) static if
14) exported templates that are implementable
15) compilation speeds that are an order of magnitude faster
16) unambiguous template syntax
17) easy creation of tools that need to parse D code
18) synchronized functions
19) template metaprogramming that can be done by mortals
20) comprehensive support for array slicing
21) inline assembler
22) no crazy quilt dependent/non-dependent 2 level lookup rules that major compilers still get wrong and for which I still regularly get 'bug' reports because DMC++ does it according to the Standard
23) standard I/O that runs several times faster
24) portable sizes for types
25) guaranteed initialization
26) out function parameters
27) imaginary types
28) forward referencing of declarations
Yes, but C++ doesn't require programmers to change their language, tools and libraries. Such change requires a lot of time and effort. Such effort could be applied to the current projects instead of switching to D. 
But, if D could afford something else, something that completely missing from C++0x (like non-null reference/pointers, some kind of functional programming (pattern-matching) and so on) than such switching would be much more attractive.To outperform C++ in 2009-2010 D must have full strength now and must be stable during some years to proof that strength in some killer applications.C++0x's new features are essentially all present in D 1.0.I know that you work very hard on D, but D 1.0 took almost 7 years. D 2.0 started in 2007, so final D 2.0 could be in 2014?Even if it does take that long, D 1.0 is still far ahead, and is available now. To see how much more productive D is, compare Kirk McDonald's amazing PyD http://pyd.dsource.org/dconf2007/presentation.html with Boost Python. To see what D can do that C++ can't touch, see Don Clugston's incredible optimal code generator at http://s3.amazonaws.com/dconf2007/Don.ppt
Aug 25 2007
First of all -- thanks for your patience! On Sun, 26 Aug 2007 10:35:47 +0400, Walter Bright <newshound1 digitalmars.com> wrote:Yes, but I mean changes not only in syntax, but in program design. And see yet another comment on that below.Now I see D 2.0 as very different language from D 1.0.There is more breakage from 1.0 to 2.0, but the changes required are straightforward to find and correct.In November 2006, in a Russian developers' forum, I noted [1] the following advantages of D:
1) fixed and portable data type sizes (byte, short, ...);
2) type properties (like .min, .max, ...);
3) all variables and members have default init values;
4) local variables can't be defined without initial values;
5) type inference in 'auto' declaration and in foreach;
6) unified type casting with 'cast';
7) strict 'typedef' and relaxed 'alias';
8) arrays have 'length' property and slicing operations;
9) exception in switch if no appropriate 'case';
10) string values in 'case';
11) static constructors and destructors for classes/modules;
12) class invariants;
13) unit tests;
14) static assert;
15) Error as root for all exception classes;
16) scope constructs;
17) nested classes, structs, functions;
18) there are no macros; all symbols mean exactly what they mean;
19) typesafe variadic functions;
20) floats and strings as template parameters;
21) template parameters specialization. 
There are a lot of intersections in our lists ;)D 1.0 provides a lot of things completely missing in C++0x: 1) unit tests 2) documentation generation 3) modules 4) string mixins 5) template string & floating point parameters 6) compile time function execution 7) contract programming 8) nested functions 9) inner classes 10) delegates 11) scope statement 12) try-finally statement 13) static if 14) exported templates that are implementable 15) compilation speeds that are an order of magnitude faster 16) unambiguous template syntax 17) easy creation of tools that need to parse D code 18) synchronized functions 19) template metaprogramming that can be done by mortals 20) comprehensive support for array slicing 21) inline assembler 22) no crazy quilt dependent/non-dependent 2 level lookup rules that major compilers still get wrong and for which I still regularly get 'bug' reports because DMC++ does it according to the Standard 23) standard I/O that runs several times faster 24) portable sizes for types 25) guaranteed initialization 26) out function parameters 27) imaginary types 28) forward referencing of declarationsC++0x's new features are essentially all present in D 1.0.Yes, but C++ doesn't require programmers to change their language, tools and libraries. Such change requires a lot of time and effort. Such effort could be applied to the current projects instead of switching to D. But if D could offer something else, something completely missing from C++0x (like non-null references/pointers, some kind of functional programming (pattern matching) and so on), then switching would be much more attractive.As I can see from your D conf presentation, D 2.0 is at the beginning of a long road. I have seen from your presentation what D will provide as an ultimate answer to C++ and some other languages. As for me, D 2.0 is a descendant of D (almost as D is a descendant of C++). 
So it is better to think that we now have a modern language, D 1.0, and will have a better language, D 2.0, in time (maybe it is better to choose a new name for D 2.0, something like D-Bright ;) ). And now the key factor in making D successful is creating D 1.0 tools, libraries, docs and applications, and showing how D 1.0 outperforms C++ and others. If we do this, then D 2.0 will arrive on prepared ground. So it is time for pragmatists to focus on D 1.0 and let language enthusiasts play with D 2.0 prototypes. [1] http://www.rsdn.ru/forum/message/2222569.aspx -- Regards, Yauheni Akhotnikau

I know that you work very hard on D, but D 1.0 took almost 7 years. D 2.0 started in 2007, so the final D 2.0 could be in 2014?

Even if it does take that long, D 1.0 is still far ahead, and is available now.
Aug 26 2007
eao197 wrote:I know that you work very hard on D, but D 1.0 took almost 7 years. D 2.0 started in 2007, so final D 2.0 could be in 2014?It's very amusing to read how Walter described D 1.0, seven years ago. It wasn't going to have templates, for example.
Aug 29 2007
On Wed, 29 Aug 2007 15:56:29 +0400, Don Clugston <dac nospam.com.au> wrote:

eao197 wrote: I know that you work very hard on D, but D 1.0 took almost 7 years. D 2.0 started in 2007, so the final D 2.0 could be in 2014?

It's very amusing to read how Walter described D 1.0, seven years ago. It wasn't going to have templates, for example.

Unfortunately, I have watched D's evolution since, perhaps, 2002 or 2003. It looks as if D has never been a stable language. -- Regards, Yauheni Akhotnikau
Aug 29 2007
eao197 wrote: Unfortunately, I have watched D's evolution since, perhaps, 2002 or 2003. It looks as if D has never been a stable language.

I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.
Aug 29 2007
Walter Bright wrote: eao197 wrote: Unfortunately, I have watched D's evolution since, perhaps, 2002 or 2003. It looks as if D has never been a stable language.

I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.

I guess there's "stable" and there's "stable"? The history of Simula 67 illustrates what can happen when a language is nailed to the wall :)
Aug 29 2007
On Wed, 29 Aug 2007 23:15:26 +0400, Walter Bright <newshound1 digitalmars.com> wrote:

eao197 wrote: Unfortunately, I have watched D's evolution since, perhaps, 2002 or 2003. It looks as if D has never been a stable language.

I don't know any language in wide use that is stable (i.e. not changing). A stable language is a dead language.

I mean changes in languages which break compatibility with previous code. AFAIK, successful languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to the language and a new major version didn't break existing code (for example: Java, Python, even C++ sometimes). -- Regards, Yauheni Akhotnikau
Aug 29 2007
eao197 wrote: I mean changes in languages which break compatibility with previous code. AFAIK, successful languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to the language and a new major version didn't break existing code (for example: Java, Python, even C++ sometimes).

C++ has been around for 20+ years now. I'll grant that for maybe 2 of those years (10%) it was stable. C++ has the rather dubious distinction of it being very hard to get two different compilers to compile non-trivial code without some sort of code customization. As evidence of that, just browse the STL and Boost sources. While the C++ standard itself has been stable for a couple of years at a time (C++98, C++03), its being nearly impossible to implement has meant the implementations have been unstable. For example, name lookup rules vary significantly *today* even among the major compilers. I regularly get bug reports that DMC++ does it wrong, even though it actually does it according to the Standard, and it's other compilers that get it wrong.

On the other hand, when C++ has been stable, it rapidly lost ground relative to other languages. The recent about-face in rationale and flurry of core language additions to C++0x is evidence of that.

I haven't programmed long term in the other languages, so I don't have a good basis for commenting on their stability. I have been programming in C++ since 1987. It's pretty normal to take a C++ project from the past and have to dink around with it to get it to compile with a modern compiler. The odds of taking a few thousand lines of C++ pulled off the web that's set up to compile with C++ Brand X and getting it to compile with C++ Brand Y without changes are about 0%.
Aug 30 2007
Walter Bright wrote to me on August 30 at 00:07:

C++ has been around for 20+ years now. I'll grant that for maybe 2 of those years (10%) it was stable. [...] I haven't programmed long term in the other languages, so I don't have a good basis for commenting on their stability. I have been programming in C++ since 1987. It's pretty normal to take a C++ project from the past and have to dink around with it to get it to compile with a modern compiler. The odds of taking a few thousand lines of C++ pulled off the web that's set up to compile with C++ Brand X and getting it to compile with C++ Brand Y without changes are about 0%.

Forget about C++ for a second. Try Python. It is a stable, or at least *predictable*, language. Its evolution is well structured, so you know you will have no surprises, and you know the language will evolve. Python is *really* community driven (besides the BDFL [1] ;). It has a formal proposal system for making changes to the language: PEPs [2]. When a PEP is approved, it's included in the next version and can be used *optionally* (if it could break backward compatibility). For example, you can use the "future" behavior of division now:

>>> 10/3
3
>>> from __future__ import division
>>> 10/3
3.3333333333333335

In the next Python version, the new feature is included without the need to import __future__, and the old behavior is deprecated (for example, with libraries, when something changes, in the first version you can ask for the new feature, in the second version the new feature is the default but you can fall back to the old behavior, and in the third version the old behavior is completely removed).

[1] http://en.wikipedia.org/wiki/BDFL
[2] http://www.python.org/dev/peps/

You are talking about 20 years. D evolves on a daily basis, and the worst part is that this evolution happens without any formal procedure. Forking D 2.0 was a huge improvement in this matter, but I think there is more work to be done before D can succeed as a long-term language (or at least be trusted). Another good step forward would be to maintain Phobos (or whatever the standard library will be :P) as an open source project. You could create a repository (please use git! :) so people can track its development and send patches more easily. The same goes for the D frontend. It's almost impossible for someone who is used to collaborating on open source projects to do it with D. And that's a shame...
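The __future__ mechanism Leandro describes is easy to see directly. A minimal sketch (run under Python 3, where PEP 238's true division is already the default, so the import is a harmless no-op; under Python 2 it was the opt-in switch):

```python
# PEP 238 changed "/" from floor division to true division.
# Under Python 2 this import enabled the new behavior early;
# under Python 3 true division is the default and the import is a no-op.
from __future__ import division

print(10 / 3)   # true division yields a float
print(10 // 3)  # floor division survives under its own operator
```

This staged rollout (optional via `__future__`, then default, then the old behavior removed) is exactly the deprecation path the post describes for libraries.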
-- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ .------------------------------------------------------------------------, \ GPG: 5F5A8D05 // F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05 / '--------------------------------------------------------------------' Pa' ella cociné, pa' ella lavé, pa' ella soñe Paella completa, $2,50 Pero, la luz mala me tira, y yo? yo soy ligero pa'l trote La luz buena, está en el monte, allá voy, al horizonte
Aug 30 2007
eao197 wrote: I mean changes in languages which break compatibility with previous code. AFAIK, successful languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to the language and a new major version didn't break existing code (for example: Java, Python, even C++ sometimes).

I rather think that a "new major version" of any language that "doesn't break existing code" could hardly justify its new major version number. A complete rewrite of the compiler, e.g., would justify a major new compiler version, but not even a teeny-minor new language version. And D /does have/ a stable language version, D1. Regards, Frank
Aug 30 2007
On Thu, 30 Aug 2007 15:44:25 +0400, 0ffh <spam frankhirsch.net> wrote:

eao197 wrote: I mean changes in languages which break compatibility with previous code. AFAIK, successful languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to the language and a new major version didn't break existing code (for example: Java, Python, even C++ sometimes).

I rather think that a "new major version" of any language that "doesn't break existing code" could hardly justify its new major version number. A complete rewrite of the compiler, e.g., would justify a major new compiler version, but not even a teeny-minor new language version. And D /does have/ a stable language version, D1.

http://d.puremagic.com/issues/show_bug.cgi?id=302 -- a very strange bug for a _stable_ version. Try to imagine a _stable_ Eiffel with broken DesignByContract support :-/ -- Regards, Yauheni Akhotnikau
Aug 30 2007
eao197 wrote: http://d.puremagic.com/issues/show_bug.cgi?id=302 -- a very strange bug for a _stable_ version. Try to imagine a _stable_ Eiffel with broken DesignByContract support :-/

I have to agree on this. Sometimes, when I write code and run into a feature that's documented but not implemented yet (GC, I'm looking at you), or supposed to be working but broken in strange ways, I can't help thinking D isn't nearly 1.0 yet, let alone 2.0.
Aug 31 2007
0ffh wrote: I rather think that a "new major version" of any language that "doesn't break existing code" could hardly justify its new major version number. A complete rewrite of the compiler, e.g., would justify a major new compiler version, but not even a teeny-minor new language version.

Actually, I think new features that make old code obsolete (even if it still compiles and works perfectly) are even more of a problem -- breaking "mental compatibility" has been a problem for both C++ and D. If you get 500 compile errors you need to fix, that's annoying and tedious. But when your code uses a technique that still works, but isn't supported by recent libraries, you're locked into the past forever.
Sep 04 2007
On Tue, 04 Sep 2007 12:34:14 +0400, Don Clugston <dac nospam.com.au> wrote:

If you get 500 compile errors you need to fix, that's annoying and tedious.

If you get 500 compile errors in an old 10 KLOC project, it's annoying. If you get 500 compile errors in each of tens of legacy projects, that is much more than simply 'annoying and tedious'.

But when your code uses a technique that still works, but isn't supported by recent libraries, you're locked into the past forever.

There is a good example in the C++ world: the ACE library. It was started a long time ago, it has been ported to various systems, it has outlived many changes in the language, and it has suffered from different compilers. Because of that, ACE uses C++ almost as "C with classes", even without exceptions. In comparison with modern, (over)designed C++ libraries (like Crypto++ or parts of Boost), ACE is an ugly old monster. But it has no real competitors in C++, and it allows me to write complex software more easily than if I tried to write parts of ACE in modern C++ myself. So I don't think that the old ACE library locks me in the past (even if I can't use STL and exceptions with ACE). IMHO, the real power of any language is its code base -- all the projects which have been developed using the language. And any action which discriminates against legacy code decreases the language's power. -- Regards, Yauheni Akhotnikau
Sep 04 2007
eao197 wrote: I mean changes in languages which break compatibility with previous code. AFAIK, successful languages always had some periods (usually 2-3 years, sometimes more) when there were no additions to the language and a new major version didn't break existing code (for example: Java, Python, even C++ sometimes).

Oh, btw, Java 1.5 did break old code. I used to use Gentoo during the transition phase, so I had some experience compiling stuff. :) There were at least a couple of commonly used libraries and programs that broke. One minor problem was the new 'enum' keyword. Of course, at least Sun's Java compiler allows compiling in 1.4 mode too. I think Gentoo nowadays has a common practice of compiling each Java program using the oldest compatible compiler profile for best compatibility. IIRC there were also some incompatible ABI changes because of generics.
Sep 07 2007
0ffh wrote: I rather think that a "new major version" of any language that "doesn't break existing code" could hardly justify its new major version number. A complete rewrite of the compiler, e.g., would justify a major new compiler version, but not even a teeny-minor new language version.

Well, yeah, maybe (apart from what Jari-Matti said about Java 1.5 breaking code). But anyway, adding something to a language without breaking old code only works so often. C++ tried to add to C without breaking code (it still does, but it tried) and you can see what came of it. New language features tend to need new syntax. If you want to remain compatible, you'll have to find a way to introduce that new syntax without breaking the old one. This is usually quite hard to achieve without making the new syntax either cumbersome, or fragile and hard to grok. Regards, Frank
Sep 07 2007
Bill Baxter wrote: A lot of you probably saw Bjarne Stroustrup's talk on C++0x on the web. If not, here's the link: http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html I recommend hitting pause on the video and then going to get some lunch while it buffers up enough that you won't get hiccups. Or if you can figure out how to get those newfangled torrent thingys to work, that's probably a good option too. --bb

To me this shows why D may be the "better" language syntactically in the long run. While legacy code is a great thing, it is also a weight around C++'s neck. D still has the flexibility to take on many of these good features that would be improbable in C++ due to all the parties involved. Although I hope that D takes a serious look at the new, much more complicated CPU architectures, because I'm afraid that is one area where it could be left behind. -Joel
Aug 20 2007