digitalmars.D - Massive loss for D on Tiobe
- Georg Wrede (9/9) May 06 2009 D made the May headline on Tiobe: "Programming language D suffers sharp
- grauzone (3/18) May 06 2009 D2.0.
- dsimcha (7/16) May 06 2009 This fully convinces me that the Tiobe index should not be taken at face...
- Carlos Smith (18/21) May 06 2009 Yes.
- Eldar Insafutdinov (4/19) May 06 2009 Both hands raised for that...
- Vincenzo Ampolo (21/23) May 07 2009 From dmd.2.029/dmd/src/dmd/backendlicense.txt
- Walter Bright (13/16) May 06 2009 Of course it's unbelievable. This change didn't happen over a year's
- superdan (2/22) May 06 2009 i sorta prefer grau douche zone's theory. makes no sense 'cept in da fra...
- grauzone (2/2) May 06 2009 You're offending me. Please stop this immediately.
- superdan (3/6) May 06 2009 wut happened to `flame on', hercules?
- grauzone (4/13) May 07 2009 When it comes to "everyone being better off", what about stopping
- superdan (3/16) May 07 2009 expletives deleted? then u missed the subject. subject was yer bashing d...
- grauzone (7/23) May 07 2009 I don't dislike D2. (OK, except for some parts like const & immutable.)
- Vincenzo Ampolo (4/6) May 07 2009 +1
- Don (2/8) May 07 2009 I think you'll like the next DMD release.
- Jarrett Billingsley (2/11) May 07 2009 Oman, will it be another 0.178? I can only hope..
- bearophile (3/3) May 06 2009 Ignore the Tiobe index. It's trash.
- BCS (5/17) May 06 2009 take a look at the graph for RPG(OS/400) and D
- Walter Bright (8/9) May 06 2009 Here's some more food for thought. Tiobe says they do a search for "xxx
- Denis Koroskin (2/11) May 06 2009 I got just 184 000 for "D programming"
- Daniel Keep (5/23) May 06 2009 Just did a google search, got 186k.
- Georg Wrede (12/25) May 06 2009 Tried Google:
- Georg Wrede (11/32) May 06 2009 http://en.wikipedia.org/wiki/IBM_RPG
- Derek Parnell (11/29) May 07 2009 I wonder if they have accidentally included "RolePlayingGame" programming...
- Walter Bright (10/12) May 07 2009 I sent an email to Tiobe, and received a nice reply from Paul Jansen,
- Vincenzo Ampolo (10/13) May 06 2009 IMHO it's just marketing.
- Alix Pexton (8/23) May 07 2009 There was a small drop last month, and a note saying that hits for
- Nick B (6/6) May 07 2009 Hi
- Bartosz Milewski (2/13) May 26 2009
- Andrei Alexandrescu (3/4) May 26 2009 Has anyone reddit'ed it yet?
- Andrei Alexandrescu (4/5) May 26 2009 http://www.reddit.com/r/programming/comments/8ngwn/racefree_multithreadi...
- Jason House (17/18) May 26 2009 We've been teased for 6 months or more. I'm hoping the details will com...
- Bartosz Milewski (1/1) May 27 2009 You pretty much nailed it. The ownership scheme will be explained in mor...
- Tim Matthews (29/29) May 27 2009 This may seem slightly OT but in your blog "I will use syntax similar to...
- Robert Fraser (5/38) May 27 2009 I think most of Bartoz's readers are C++ users. The "I will use syntax
- Jason House (3/36) May 27 2009 Don't read into it. I took it as being more readable for non-D users. An...
- Tim Matthews (5/8) May 27 2009 Fuck that! I am very experienced in C++ , C# and some in java and read
- Jason House (17/33) May 27 2009 The article implies some level of flow analysis. Has Walter come around ...
- Jason House (4/45) May 28 2009 I'm really surprised by the lack of design discussion in this thread. It...
- Denis Koroskin (2/13) May 28 2009 It's plain easier to discuss bicycle shed color, because everyone is exp...
- grauzone (8/8) May 28 2009 1. Everyone agrees anyway, that emulating fork() is the best idea to
- dsimcha (5/13) May 28 2009 Yeah, unfortunately for something as complex as what's being proposed, I...
- Tim Matthews (8/14) May 28 2009 I have a few things I would like to discuss but I feel you are going to
- Jason House (4/19) May 28 2009 I won't bite your head off, or anyone else's. I'm sorry if a prior post ...
- Robert Jacques (19/30) May 28 2009 Well, there's been a fair amount of previous related discussion. I've
- Steven Schveighoffer (14/20) May 28 2009 For the most part, this really academic threading stuff is beyond me. I...
- Leandro Lucarella (19/28) May 28 2009 I just find the new "thread-aware" design of D2 so complex, so twisted
- Andrei Alexandrescu (25/53) May 28 2009 On the contrary, we all (Bartosz, Walter, myself and probably other
- BCS (10/26) May 28 2009 I get the impression, from what little I known about threading, that it ...
- Andrei Alexandrescu (4/6) May 28 2009 That is correct, just that it's 40 years late. Right now everything is
- BCS (5/15) May 28 2009 I'm talking at the ASM level (not the language model level) and as oppos...
- Andrei Alexandrescu (11/29) May 28 2009 What happens is that memory is less shared as cache hierarchies go
- BCS (7/13) May 28 2009 I'm thinking implementation not model. How is the message passing implem...
- Sean Kelly (13/26) May 29 2009 I think it depends on whether the message is intraprocess or
- Daniel Keep (23/38) May 28 2009 This is all very interesting. I've recently been playing with a little
- Michel Fortin (30/39) May 30 2009 While message-passing might be useful for some applications, I have a
- Andrei Alexandrescu (10/48) May 30 2009 Depends on what you want to do to those arrays. If concurrent writing is...
- Michel Fortin (35/54) May 30 2009 If you include passing unique pointers to shared memory in your
- Sean Kelly (13/43) May 30 2009 Perhaps at an implementation level in some instances, yes. But look at
- Denis Koroskin (6/12) May 28 2009 That's true.
- Robert Jacques (7/29) May 28 2009 I agree that Andrei's right, but your example is wrong. The Cell's SPU a...
- Denis Koroskin (3/37) May 28 2009 I wanted to stress that multicore PUs tent to have their own local memor...
- Robert Jacques (9/53) May 28 2009 Well, I thought you were making a different point. Really, the Cell SPU ...
- Bartosz Milewski (8/16) May 28 2009 I understand where you stand. You are looking at where the state-of-the-...
- Andrei Alexandrescu (55/110) May 28 2009 I understand that. However, I don't understand how the comment applies
- Leandro Lucarella (10/15) May 29 2009 I agree. Maybe is just unjustified fear, but I see D2 being to concurren...
- bearophile (5/7) May 29 2009 Sometimes you need lot of time to find what a simple implementation can ...
- Leandro Lucarella (11/19) May 29 2009 Exactly. I think D had a good model of "steal good proven stuff that oth...
- Andrei Alexandrescu (9/24) May 29 2009 With its staunch default isolation, I think D is already making a
- Bartosz Milewski (16/77) May 29 2009 It's a very sweeping statement. I just looked at the TIOBE index and cou...
- Andrei Alexandrescu (43/169) May 29 2009 We can safely ditch Tiobe, but I agree that functional languages aren't
- Bartosz Milewski (1/1) May 29 2009 Can you believe it? I was convinced that my response was lost because th...
- Bartosz Milewski (10/14) May 30 2009 This is the missing second reply to Andrei. I'm posting parts of it beca...
- grauzone (9/11) May 30 2009 Everyone knows that D is full of half-baked ideas. We're not using D
- Michel Fortin (34/53) May 30 2009 Bartosz, you're arguing that your proposal isn't that complex compared
- Bartosz Milewski (1/1) May 29 2009 I don't think the item-by-item pingpong works well in the newsgroup. Let...
- Andrei Alexandrescu (5/12) May 29 2009 I'm sure it's a good idea, particularly if others will participate as
- bearophile (6/7) May 30 2009 Beside multiprocessing (that I am ignorant to comment on still), I can s...
- Andrei Alexandrescu (6/14) May 30 2009 Correct. We've been trying valiantly to introduce unique in the type
- Leandro Lucarella (27/54) May 28 2009 I guess all depends on the kind of fine granularity you want. I work on
- Bartosz Milewski (4/10) May 28 2009 Probably the majority of users either don't use multithreading (yet) or ...
- dsimcha (17/27) May 28 2009 only for very simple tasks. My stated goal is not to force such users to...
- Jason House (6/20) May 28 2009 My hobby project is a multi-threaded game-playing AI. My current scheme ...
- Bartosz Milewski (3/11) May 28 2009 I don't have much to say about that because it's a known problem and it h...
- Jason House (4/19) May 28 2009 Far from it! I'm stumbling through in an attempt to teach myself the bla...
- Bartosz Milewski (3/8) May 30 2009 These will either be implemented in the library (inline assembly) or as ...
- BCS (4/8) May 28 2009 As in threaded min-max? Have you got anything working? I known from expe...
- Jason House (2/14) May 28 2009 No. Min-max is only good for theory. I'm also not doing alpha-beta which...
- Tim Matthews (3/5) May 28 2009 Can you elaborate on this? I think of the word macro as a C preprocessor...
- Denis Koroskin (2/7) May 28 2009 I believe he is talking about AST macros that are postponed until D3 bec...
- Tim Matthews (3/13) May 28 2009 OK thanks I see now because macros have that extra flexibility over
- BCS (2/9) May 28 2009 AST macros. Look up Walter et al's talk from the D conference
- Sean Kelly (16/25) May 28 2009 That was basically the complaint about the const design for D2, and
- bearophile (8/9) May 28 2009 I think shifting to concurrent programming is now the right choice, all ...
- Bartosz Milewski (3/8) May 28 2009 It's not that bad. I actually wrote the examples in D and then replaced ...
- Bartosz Milewski (3/11) May 29 2009 I don't have much to say about that because it's a known problem and it h...
D made the May headline on Tiobe: "Programming language D suffers sharp fall". You can say that again: D went down 5 places, to below languages like RPG(OS/400) and ABAP! D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason? Can this loss induce people to abandon D, and others to not take it up, leading to cumulative losses in the coming months? What do we have to do to prevent this?
May 06 2009
Georg Wrede wrote:D made the May headline on Tiobe: "Programming language D suffers sharp fall". You can say that again, D went down 5 places, to below languges like RPG(OS/400) and ABAP! D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it's lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason?? Can this loss induce people to abandon D, and others to not take it up, leading to cumulating losses in the coming months? What do we have to do to prevent this?D2.0. Now flame away.
May 06 2009
== Quote from Georg Wrede (georg.wrede iki.fi)'s articleD made the May headline on Tiobe: "Programming language D suffers sharp fall". You can say that again, D went down 5 places, to below languges like RPG(OS/400) and ABAP! D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it's lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason?? Can this loss induce people to abandon D, and others to not take it up, leading to cumulating losses in the coming months? What do we have to do to prevent this?This fully convinces me that the Tiobe index should not be taken at face value. How does a language hit an all time high and a multiyear low within 2 months of each other? At best the Tiobe index is an unbiased but extremely high variance estimator of language popularity, and meaningful results can only be produced by averaging results over a period much longer than a month. At worst, it's so biased that it's just plain garbage.
May 06 2009
"Georg Wrede" <georg.wrede iki.fi> wrote:Can this loss induce people to abandon D, and others to not take it up, leading to cumulating losses in the coming months? What do we have to do to prevent this?Yes. Give D1 a future. Shift focus to D1 (D2 is experimental). Make D1 really usable in the workplace. Being called stable is not enough. Produce a grammar for the language. This will give it a definition on which everybody will align. Fix any inconsistencies in the language. Choose one (1) license for all DigitalMars D-related stuff. Go truly Open Source. No strings attached. I was absolutely thrilled to read that D has gone Open Source. Then I was quite unthrilled after I read the official license. This episode had a negative impact on D. Get rid of OMF... D has a future ...
May 06 2009
Carlos Smith Wrote:"Georg Wrede" <georg.wrede iki.fi> a écritCan this loss induce people to abandon D, and others to not take it up, leading to cumulating losses in the coming months? What do we have to do to prevent this?Yes. Give D1 a future. Shift focus on D1 (D2 is experimental). Make D1 really usable in the workplace. Being called stable is not enough.D1's sources are completely open now; submit patches, make it more stable. That's what people are actually doing.Get rid of OMF...Both hands raised for that...D has a future ...Oh yeah!
May 06 2009
Eldar Insafutdinov wrote:D1 has complete open sources now, submit patches, make it more stable. That's what people are actually doing.From dmd.2.029/dmd/src/dmd/backendlicense.txt: "The Software is copyrighted and comes with a single user license, and may not be redistributed. If you wish to obtain a redistribution license, please contact Digital Mars." This is neither Open Source nor Free Software, IMHO. So D2, the "bleeding edge" which could be very interesting for development, is not free software (at least the backend). I recall that an open backend is needed to port a compiler to other platforms (x86_64, armel, sparc, powerpc, etc.). Let's look at D1. I don't see any "backend" directory (backend not released?), and in dmd.1.030/dmd/license.txt there is again: "The Software is copyrighted and comes with a single user license, and may not be redistributed. If you wish to obtain a redistribution license, please contact Digital Mars." Inside dmd/src/dmd there are gpl.txt and artistic.txt. So the DMD frontend is GPLv1 (well, quite an old version of the GPL, but at least it's GPL!).
May 07 2009
Georg Wrede wrote:D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it's lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason??Of course it's unbelievable. This change didn't happen over a year's time, it happened in one month. This means that the methodology Tiobe uses changed, or the search portals changed their hit count algorithm. Notice http://www.tiobe.com/index.php/content/paperinfo/tpci/tpci_definition.htm where they automatically discount all D hits by 10%. They don't do that for C. For example, of the first 100 hits of "D programming" on Google, I found only 6 that were not about D, two of which were already excluded by Tiobe's algorithm. That's 4%, not 10%. I found 3 non-C ones for "C programming" in the first 100. That's 3%, not 0%.
May 06 2009
Walter Bright Wrote:Georg Wrede wrote:i sorta prefer grau douche zone's theory. makes no sense 'cept in da framework where he's a retarded dumbass & evil to boot. but u gotta respect the man. he's waited so patiently fer dis opportunity to suck collective cock. gotta give it 2 da man.D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it's lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason??Of course it's unbelievable. This change didn't happen over a year's time, it happened in one month. This means that the methodology Tiobe uses changed, or the search portals changed their hit count algorithm. Notice http://www.tiobe.com/index.php/content/paperinfo/tpci/tpci_definition.htm where they automatically discount all D hits by 10%. They don't do that for C. For example, of the first 100 hits of "D programming" on google, I found only 6 that were not about D, two of which was already excluded by Tiobe's algorithm. That's 4%, not 10%. I found 3 non-C ones for "C programming" in the first 100. That's 3%, not 0%.
May 06 2009
You're offending me. Please stop this immediately. Thank you.
May 06 2009
grauzone Wrote:You're offending me. Please stop this immediately. Thank you.wut happened to `flame on', hercules? anyway just killfile me. i dun change handles. better yet. stop being an ass. we'd all be way better off. suit yerself.
May 06 2009
superdan wrote:grauzone Wrote:When it comes to "everyone being better off", what about stopping talking like a 16 year old rapper on a hormone trip? Grow up. Thank you.You're offending me. Please stop this immediately. Thank you.wut happened to `flame on', hercules? <expletives deleted>
May 07 2009
grauzone Wrote:superdan wrote:expletives deleted? then u missed the subject. subject was yer bashing d2 more often than a teen has a boner. that is da problem, not my expletives. ok, yer highness, we fuckin' get it. u dun like d2. u made ur point several times now move on with life. if u r too hung up u say idiotic crap like this with tiobe & d2. better have a foul mouth & a clean mind than vice versa. so u grow up friend. til then, at least dun flamebait. dun do da crime if u can't do da time.grauzone Wrote:When it comes to "everyone being better off", what about stopping talking like a 16 year old rapper on a hormone trip? Grow up.You're offending me. Please stop this immediately. Thank you.wut happened to `flame on', hercules? <expletives deleted>
May 07 2009
superdan wrote:grauzone Wrote:I don't dislike D2. (OK, except for some parts like const & immutable.) I'm just thinking that what D actually needs, is a stable implementation, and not more features.superdan wrote:expletives deleted? then u missed the subject. subject was yer bashing d2 more often than <expletives deleted>grauzone Wrote:When it comes to "everyone being better off", what about stopping talking like a 16 year old rapper on a hormone trip? Grow up.You're offending me. Please stop this immediately. Thank you.wut happened to `flame on', hercules? <expletives deleted>better have a foul mouth & a clean mind than vice versa.Sorry, the "I have a foul mouth, but what I'm really saying is highly intellectual and deep, and thus everyone criticizing me is actually dumb" turn doesn't work on me. Grow up.
May 07 2009
grauzone wrote:I'm just thinking that what D actually needs, is a stable implementation, and not more features.+1 And... i think grauzone is right, Superdan, your language seems offensive to me too. Please stop a not needed flame.
May 07 2009
Vincenzo Ampolo wrote:grauzone wrote:I think you'll like the next DMD release <g>.I'm just thinking that what D actually needs, is a stable implementation, and not more features.+1
May 07 2009
On Thu, May 7, 2009 at 10:30 AM, Don <nospam nospam.com> wrote:Vincenzo Ampolo wrote:Oman, will it be another 0.178? I can only hope..grauzone wrote:I think you'll like the next DMD release <g>.I'm just thinking that what D actually needs, is a stable implementation, and not more features.+1
May 07 2009
Ignore the Tiobe index. It's trash. Bye, bearophile
May 06 2009
Reply to Georg,D made the May headline on Tiobe: "Programming language D suffers sharp fall". You can say that again, D went down 5 places, to below languges like RPG(OS/400) and ABAP! D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it's lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason?? Can this loss induce people to abandon D, and others to not take it up, leading to cumulating losses in the coming months? What do we have to do to prevent this?Take a look at the graph for RPG(OS/400) and D http://www.tiobe.com/index.php/paperinfo/tpci/RPG_(OS_400).html http://www.tiobe.com/index.php/paperinfo/tpci/D.html Something/someone is gaming the system.
May 06 2009
BCS wrote:something/someone is gameing the system.Here's some more food for thought. Tiobe says they do a search for "xxx programming".

"C programming"        2,000,000   19.537
"Pascal programming"     136,000     .776
"D programming"          187,000     .628

This doesn't add up. Also, I tried "D programming" an hour ago and got 437,000 hits. ???
May 06 2009
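[To make the mismatch in Walter's figures concrete: dividing each reported hit count by its index share gives the implied "hits per percentage point". If the ranking were a pure function of hit counts, the three ratios would roughly agree; this back-of-the-envelope check (written in D; the numbers are taken directly from the post above) shows a roughly 3x spread:]

```d
import std.stdio;

void main() {
    // Hits per percentage point of Tiobe share, from the figures above.
    // If share were proportional to hits, these would be about equal.
    writefln("C:      ~%.0f hits/point", 2_000_000 / 19.537); // roughly 102,000
    writefln("Pascal: ~%.0f hits/point",   136_000 /  0.776); // roughly 175,000
    writefln("D:      ~%.0f hits/point",   187_000 /  0.628); // roughly 298,000
}
```

The spread suggests either per-language weighting (such as the 90% adjustment Walter mentions) or simply noisy hit counts from the search engines.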
On Thu, 07 May 2009 01:52:23 +0400, Walter Bright <newshound1 digitalmars.com> wrote:BCS wrote:I got just 184 000 for "D programming"something/someone is gameing the system.Here's some more food for thought. Tiobe says they do a search for "xxx programming". "C programming" 2,000,000 19.537 "Pascal programming" 136,000 .776 "D programming" 187,000 .628 This doesn't add up. Also, I tried "D programming" an hour ago and got 437,000 hits. ???
May 06 2009
Denis Koroskin wrote:On Thu, 07 May 2009 01:52:23 +0400, Walter Bright <newshound1 digitalmars.com> wrote:Just did a google search, got 186k. I think we should stop basing D's worth as a language on what Tiobe says. Sticks and stones may break D's bones, but Tiobe has to sleep eventually... -- DanielBCS wrote:I got just 184 000 for "D programming"something/someone is gameing the system.Here's some more food for thought. Tiobe says they do a search for "xxx programming". "C programming" 2,000,000 19.537 "Pascal programming" 136,000 .776 "D programming" 187,000 .628 This doesn't add up. Also, I tried "D programming" an hour ago and got 437,000 hits. ???
May 06 2009
Walter Bright wrote:BCS wrote:Tried Google:

  184,000 for "D programming"
  166,000 for "3d programming"
1,590,000 for "C++ programming"
1,950,000 for "C programming"
  125,000 for "Pascal programming"
  292,000 for "Delphi programming"
2,920,000 for "Java programming"

But

  114,000 for "abap programming"
   44,400 for "rpg programming"

something/someone is gameing the system.Here's some more food for thought. Tiobe says they do a search for "xxx programming". "C programming" 2,000,000 19.537 "Pascal programming" 136,000 .776 "D programming" 187,000 .628 This doesn't add up. Also, I tried "D programming" an hour ago and got 437,000 hits. ???
May 06 2009
BCS wrote:Reply to Georg,http://en.wikipedia.org/wiki/IBM_RPG http://en.wikipedia.org/wiki/Abap Tears and grief, RPG(OS/400) is a language for *punch cards*. And ABAP is Gerry's answer to *COBOL*. And Finland just lost to the *USA* in hockey, right after we beat Canada. Jansen (of Tiobe) should probably not go wild adjusting the knobs. It may erode their credibility. But blaming him doesn't tidy our nest either. The D landscape in front of the programmer in search of a C++ replacement isn't what it should be. The language is ten years old, but you'd never guess. (Except from outdated stuff on each website.)D made the May headline on Tiobe: "Programming language D suffers sharp fall". You can say that again, D went down 5 places, to below languges like RPG(OS/400) and ABAP! D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it's lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason?? Can this loss induce people to abandon D, and others to not take it up, leading to cumulating losses in the coming months? What do we have to do to prevent this?take a look at the graph for RPG(OS/400) and D http://www.tiobe.com/index.php/paperinfo/tpci/RPG_(OS_400).html http://www.tiobe.com/index.php/paperinfo/tpci/D.html something/someone is gameing the system.
May 06 2009
On Wed, 6 May 2009 21:02:58 +0000 (UTC), BCS wrote:Reply to Georg,I wonder if they have accidentally included "RolePlayingGame" programming in the RPG category? For "RPG Programming" I get 47,500 hits. For "RPG Programming" + OS/400 I get 9,840 hits. On the surface, it seems that the Tiobe figures are very "rubbery", if not outright dishonest. -- Derek Parnell Melbourne, Australia skype: derek.j.parnellD made the May headline on Tiobe: "Programming language D suffers sharp fall". You can say that again, D went down 5 places, to below languges like RPG(OS/400) and ABAP! D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it's lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason?? Can this loss induce people to abandon D, and others to not take it up, leading to cumulating losses in the coming months? What do we have to do to prevent this?take a look at the graph for RPG(OS/400) and D http://www.tiobe.com/index.php/paperinfo/tpci/RPG_(OS_400).html
May 07 2009
Derek Parnell wrote:On the surface, it seems that the Tiobe figures a very "rubbery", if not outright dishonest.I sent an email to Tiobe, and received a nice reply from Paul Jansen, who runs the index. He showed me his numbers, and the biggest factor in the drop in D's ranking was a large drop in hits from Yahoo's engine. Why that would be neither of us knows. He agreed that the 90% "adjustment" factor needed to be revisited. Anyhow, I don't believe there is anything dishonest going on. It's just the erratic nature of what search engines report as the "number of hits". Google's varies wildly all over the place. Who knows what is going on at Yahoo.
May 07 2009
Georg Wrede wrote:D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it's lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason??IMHO it's just marketing. Do you still want to raise that 0.628% share? If yes:

1) Use D.
2) Help with other projects and do not create one-man projects.
3) Hope that the "big" projects merge into a few useful, well-supported ones (like Phobos and Tango in D2) (maybe compilers should do it too?).
May 06 2009
Georg Wrede wrote:D made the May headline on Tiobe: "Programming language D suffers sharp fall". You can say that again, D went down 5 places, to below languges like RPG(OS/400) and ABAP! D's loss seems unbelievable. D now has a 0.628% share, which is even less than what it's lost (-0.82%) in the last 12 months. What could be the reasons for it? Is it even possible to figure out any reason?? Can this loss induce people to abandon D, and others to not take it up, leading to cumulating losses in the coming months? What do we have to do to prevent this?There was a small drop last month, and a note saying that hits for DTrace had been eliminated as false positives for the D Programming Language. If it had been possible for DTrace to be such a false positive, I am curious about what other false positives could be affecting other languages, but I definitely think it is a blow to the index's credibility. A...
May 07 2009
Hi

It seems that Bartosz's latest post, dated April 26th, is missing from his blog. See: http://bartoszmilewski.wordpress.com/

Nick B.
May 07 2009
The post is back, rewritten and with some code teasers. Nick B Wrote:Hi It seems that Bartosz's latest post, dated April 26 th is missing from his blog. See : http://bartoszmilewski.wordpress.com/ Nick B.
May 26 2009
Bartosz Milewski wrote:The post is back, rewritten and with some code teasers.Has anyone reddit'ed it yet? Andrei
May 26 2009
Bartosz Milewski wrote:The post is back, rewritten and with some code teasers.http://www.reddit.com/r/programming/comments/8ngwn/racefree_multithreading_in_a_hypothetical_language/ Vote up! Andrei
May 26 2009
Bartosz Milewski wrote:The post is back, rewritten and with some code teasers.We've been teased for 6 months or more. I'm hoping the details will come out quickly now! Here's what I took away from the article:

* Goal is to have minimal code changes for single-threaded code
* unique and lent are two new transitive type constructors
* lockfree is a new storage class (to guarantee sequential consistency)
* The new := operator is used for move semantics (if appropriate for the type)
* Objects can be declared as self-owned

I think a deep understanding of exactly what the final design is requires understanding the ownership scheme, which isn't described yet. unique is to invariant as lent is to const. Function arguments can be declared as lent and accept both unique and non-unique types (just like const can accept immutable and non-immutable types). Lent basically means what I think scope was intended to mean for function arguments. I'm happy to finally see unique in the type system since it really felt like a gaping hole to me.
May 26 2009
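[For readers skimming the thread, the unique/lent interplay summarized above might look roughly like this in the proposal's hypothetical syntax. None of this compiles with any current DMD; Message and send are made-up names, and the exact rules are Bartosz's to pin down:]

```d
class Message { int payload; }

void send(lent Message m);  // lent: borrows only, accepts unique or ordinary refs

void example() {
    unique Message msg = new Message;  // uniquely owned: safe to move across threads
    send(msg);        // ok: a unique value can be passed where lent is expected
    unique Message dst;
    dst := msg;       // proposed move operator: msg relinquishes ownership
    // msg is now considered moved; reading it again would be a compile error
}
```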
You pretty much nailed it. The ownership scheme will be explained in more detail in the next two installments, which are almost ready.
May 27 2009
This may seem slightly OT but in your blog "I will use syntax similar to that of the D programming language, but C++ and Java programmers shouldn’t have problems following it."

class MVar<T> {
private:
    T _msg;
    bool _full;
public:
    // put: asynchronous (non-blocking)
    // Precondition: MVar must be empty
    void put(T msg) {
        assert (!_full);
        _msg := msg; // move
        _full = true;
        notify();
    }

    // take: If empty, blocks until full.
    // Removes the message and switches state to empty
    T take() {
        while (!_full)
            wait();
        _full = false;
        return := _msg;
    }
}

auto mVar = new MVar<owner::self, int>;

Why not MVar!(owner::self, int)? Why go back to ambiguous templates? Apart from the move operator it looks like C++ to me. Sorry if this doesn't make sense but I've missed a few previous posts.
May 27 2009
Tim Matthews wrote:This may seem slightly OT but in your blog "I will use syntax similar to that of the D programming language, but C++ and Java programmers shouldn’t have problems following it." class MVar<T> { private: T _msg; bool _full; public: // put: asynchronous (non-blocking) // Precondition: MVar must be empty void put(T msg) { assert (!_full); _msg := msg; // move _full = true; notify(); } // take: If empty, blocks until full. // Removes the message and switches state to empty T take() { while (!_full) wait(); _full = false; return := _msg; } } auto mVar = new MVar<owner::self, int>; Why not MVar!(owner::self, int)? Why go back to ambiguous templates? Apart from the move operator it looks like c++ to me. Sorry if this doesn't make sense but I've missed a few previous posts.I think most of Bartosz's readers are C++ users. The "I will use syntax similar to that of the D programming language" was probably put there in a first draft and after revision it was changed to more C++y example code, but the sentence wasn't removed.
May 27 2009
Tim Matthews Wrote:This may seem slightly OT but in your blog "I will use syntax similar to that of the D programming language, but C++ and Java programmers shouldn’t have problems following it." class MVar<T> { private: T _msg; bool _full; public: // put: asynchronous (non-blocking) // Precondition: MVar must be empty void put(T msg) { assert (!_full); _msg := msg; // move _full = true; notify(); } // take: If empty, blocks until full. // Removes the message and switches state to empty T take() { while (!_full) wait(); _full = false; return := _msg; } } auto mVar = new MVar<owner::self, int>; Why not MVar!(owner::self, int)? Why go back to ambiguous templates? Apart from the move operator it looks like c++ to me. Sorry if this doesn't make sense but I've missed a few previous posts.Don't read into it. I took it as being more readable for non-D users. Angle brackets are more recognizable, even for those that don't code in any of the languages mentioned. D's syntax is good, just not widespread. Notice the lack of a template<typename T> that's required for C++; instead, the template argument is after the class name. There's also no constructor or initializers, which would be bugs in C++. It still looks like tweaked D code.
May 27 2009
Jason House wrote:Don't read into it. I took it as being more readable for non-D users. Angle brackets are more recognizable, even for those that don't code in any of the languages mentioned. D's syntax is good, just not widespread. Notice the lack of a template<typename T> that's required for C++; instead, the template argument is after the class name. There's also no constructor or initializers, which would be bugs in C++. It still looks like tweaked D code.Fuck that! I am very experienced in C++, C# and some in Java, and read some of his articles a long time ago. I was just pointing out that design decision and that "Similar to D but C++/Java users will be OK" message.
May 27 2009
The article implies some level of flow analysis. Has Walter come around on this topic? As far as considering a variable moved, I believe the following should be reasonable:
• Any if statement (or else clause) containing a move
• Any switch statement containing a move for any case
• Any fall-through cases where the prior case moved the variable
• Any function call not using a lent argument for the variable
• Moving inside a loop should be illegal
An explicit is null check should be able to bypass these rules. There are probably ways to loosen the looping rule, such as when there is a way to guarantee the moved variable won't be read from again. Very similar rules can be used for detecting initialization of (unique) variables. A variable can be considered initialized if:
• Both the if and else must initialize the variable
• All cases in a switch must initialize the variable
• It is an out parameter in a function call
• Loops can't initialize variables (relaxation: can init if guaranteed to run at least once)
Those rules should be sufficiently simple to implement and extremely tolerable for programmers. Inevitably, I missed a case, but I hope the idea is clear, and that whatever I overlooked does not add complexity. Bartosz Milewski Wrote:The post is back, rewritten and with some code teasers. Nick B Wrote:Hi It seems that Bartosz's latest post, dated April 26th, is missing from his blog. See : http://bartoszmilewski.wordpress.com/ Nick B.
May 27 2009
I'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety that defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?) unpopular const architecture. Maybe people are waiting for Walter to go through all the hard work of implementing this stuff before complaining that it's crap and proclaiming what Walter should have done in the first place? This seems really unfair to Walter. Then again, I see no indication of Walter wanting anything else. Jason House Wrote:The article implies some level of flow analysis. Has Walter come around on this topic? As far as considering a variable moved, I believe the following should be reasonable • Any if statement (or else clause) containing a move • Any switch statement containing a move for any case • Any fall-through cases where the prior case moved the variable • Any function call not using a lent argument for the variable • Moving inside a loop should be illegal An explicit is null check should be able to bypass these rules. There are probably ways to loosen the looping rule such as if there is a way to guarantee the moved variable won't be read from again. Very similar rules can be used for detecting initialization of (unique) variables. A variable can be considered initialized if: • Both the if and else must initialize a variable • All cases in a switch must initialize a variable • Out parameter in a function call • Loops can't initialize variables relaxation: can init if guaranteed to run at least once Those rules should be sufficiently simple to implement and extremely tolerable for programmers. Inevitably, I missed a case, but I hope the idea is clear, and that whatever I overlooked does not add complexity. 
Bartosz Milewski Wrote:The post is back, rewritten and with some code teasers. Nick B Wrote:Hi It seems that Bartosz's latest post, dated April 26 th is missing from his blog. See : http://bartoszmilewski.wordpress.com/ Nick B.
May 28 2009
On Thu, 28 May 2009 16:45:42 +0400, Jason House <jason.james.house gmail.com> wrote:I'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety that defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?) unpopular const architecture. Maybe people are waiting for Walter to go through all the hard work of implementing this stuff before complaining that it's crap and proclaiming what Walter should have done in the first place? This seems really unfair to Walter. Then again, I see no indication of Walter wanting anything else.It's plain easier to discuss bicycle shed color, because everyone is an expert in it.
May 28 2009
1. Everyone agrees anyway that emulating fork() is the best idea to deal with multithreading and synchronization.
2. We have yet to see how an implementation of the proposed design will work out. This means Walter has to implement it. Reading blog entries about it is almost a bigger waste of time than discussing it in this newsgroup.
3. Not that many people are interested in D2.
4. Bikeshed colors
May 28 2009
== Quote from grauzone (none example.net)'s article1. Everyone agrees anyway, that emulating fork() is the best idea to deal with multithreading and synchronization. 2. We'll yet have to see how an implementation of the proposed design will work out. This means Walter has to implement it. Reading blog entries about it is almost a bigger waste of time than discussing in this newsgroup. 3. Not that many people are interested in D2. 4. Bikeshed colorsYeah, unfortunately for something as complex as what's being proposed, I have a hard time understanding/forming an opinion of it until I've gotten my hands dirty and actually tried to use it a little. Just reading about it in the abstract, it's hard to really form much of an opinion on it.
May 28 2009
Jason House wrote:I'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety that defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?) unpopular const architecture. Maybe people are waiting for Walter to go through all the hard work of implementing this stuff before complaining that it's crap and proclaiming what Walter should have done in the first place? This seems really unfair to Walter. Then again, I see no indication of Walter wanting anything else.I have a few things I would like to discuss, but I feel you are going to reply again with something like "don't go there, the syntax is too dangerous for you" (you can really offend people with comments like that, and you should get to know them first). I also feel you are going to keep top-replying unless someone tells you not to do so. Quit complaining and get to your points, design recommendations, ideas, etc..
May 28 2009
Tim Matthews Wrote:Jason House wrote:I won't bite your head off, or anyone else's. I'm sorry if a prior post on this NG came across that way. That wasn't my intent.I'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety that defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?) unpopular const architecture. Maybe people are waiting for Walter to go through all the hard work of implementing this stuff before complaining that it's crap and proclaiming Walter should have done in the first place? This seems really unfair to Walter. Then again, I see no indication of Walter wanting anything else.I have a few things I would like to discuss but I feel you are going to reply again with something like "do go there, the syntax is too dangerous for you" (you can really offend people with comments like that and you should get to know them first)I also feel you are going to keep top-replying unless someone tells you not to do so quit complaining and get to your points, design recommendations, ideas etc..I top-posted because what I had to say had very little to do with the message I replied to. Many of my posts lately have aimed at trying to encourage collaboration. Maybe I'm going about it the wrong way. I know I'm a nobody, but I'm still trying in my own way to have a positive impact on D. So far, I think I'm just pissing people off.
May 28 2009
On Thu, 28 May 2009 08:45:42 -0400, Jason House <jason.james.house gmail.com> wrote:I'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety that defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?) unpopular const architecture. Maybe people are waiting for Walter to go through all the hard work of implementing this stuff before complaining that it's crap and proclaiming what Walter should have done in the first place? This seems really unfair to Walter. Then again, I see no indication of Walter wanting anything else.Well, there's been a fair amount of previous related discussion. I've placed a proposal up on Wiki4D (http://www.prowiki.org/wiki4d/wiki.cgi?OwnershipTypesInD), though since it was assembled from a bunch of personal notes on the subject and uses Walter's old suggestion of using 'scope' instead of Bartosz's 'lent', it's a bit confusing. I'm planning on re-working it, but other deadlines come first. There's also been a lot of talk about message passing/future/promise/task/actor/agent based concurrency, data parallel models such as bulk synchronous programming (BSP) or GPU programming, and auto-parallelization of pure functions. About the only thing needed from the type system to implement either of these models is the ability for uniques/mobiles to do a do-si-do type move (which should be supported by ref unique). And BSP/GPU stuff is way too bleeding edge to support in the language proper yet. Honestly, I think people are holding back in part because Bartosz has only started to reveal a threading scheme and so are waiting for him to complete it, before proverbially ripping it apart.
May 28 2009
On Thu, 28 May 2009 08:45:42 -0400, Jason House <jason.james.house gmail.com> wrote:I'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety that defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?) unpopular const architecture.For the most part, this really academic threading stuff is beyond me. It took me long enough to understand threading with mutex locks... In any case, it didn't seem from the post that this was coming to D. It seemed like it was for a language Bartosz was working on besides D, the syntax doesn't even look close. Is this planned for D2 or D3? Or not at all? I remember Walter saying he didn't want to add umpteen different type constructor keywords, even unique, because of the confusion it would cause. In any case, once I decided it wasn't D related, I ignored it just like I usually ignore bearophile's "look at what the obscureX language does" posts (no offense bearophile). -Steve
May 28 2009
Jason House, on May 28 at 08:45 you wrote:I'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety that defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?) unpopular const architecture.I just find the new "thread-aware" design of D2 so complex, so twisted that I don't even know where to start. I think the solution is way worse than the problem here. That's why I don't comment at all. I think D duplicates functionality. For "safe" concurrency I use processes and IPC (I have even more guarantees than D could ever give me). That's all I need. I don't need huge complexity in the language for that. And I think the D2 concurrency model is still way too low level. I would like D2 better if it was focused on macros, for example.Maybe people are waiting for Walter to go through all the hard work of implementing this stuff before complaining that it's crap and proclaiming what Walter should have done in the first place?No, I don't see any point in saying what I said above, because I don't think anything will change. If I didn't like some little detail, that could be worth discussing because it has some chance to change Walter's/Bartosz's minds, but saying "I think the whole model is way too complex" doesn't help much IMHO =) -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ----------------------------------------------------------------------------
May 28 2009
Leandro Lucarella wrote:Jason House, on May 28 at 08:45 you wrote:On the contrary, we all (Bartosz, Walter, myself and probably other participants) think this would be valuable feedback. We'll always have some insecurity that we cut the pie the wrong way, and therefore we're continuously on the lookout for well-argued positives or negatives. Those could lead to very useful quantitative discussions a la "X, Y, and Z together are way too complex, but X' and Z seem palatable and get 90% of the territory covered". I like Bartosz's design, it's sound (as far as I can tell) and puts the defaults in the right place so there's a nice pay-as-you-need quality to it. There are two details that are wanting. One is that I'm not sure we want high-level race freedom so badly that we're prepared to pay that kind of price for it. Message passing is more likely to work well (better than lock-based concurrency) on contemporary and future processors. Then there's a design for solving low-level races that is much simpler and solves the nastiest part of the problem, so I wonder whether that would be more suitable. We also have immutable sharing that should help. Given this landscape, do we want to focus on high-level race elimination that badly? I'm not sure. Second, there is no regard for language integration. Bartosz says syntax doesn't matter and that he's flexible, but what that really means is that no attention has been paid to language integration. There is more to language integration than just syntax (and then even syntax is an important part of it). AndreiI'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety that defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?) 
unpopular const architecture.I just find the new "thread-aware" design of D2 so complex, so twisted that I don't even know where to start. I think the solution is way worse than the problem here. That's why I don't comment at all. I think D duplicate functionality. For "safe" concurrency I use processes and IPC (I have even more guarantees that D could ever give me). That's all I need. I don't need a huge complexity in the language for that. And I think D2 concurrency model is still way too low level. I would like D2 better if it was focussed on macros for example.Maybe people are waiting for Walter to go through all the hard work of implementing this stuff before complaining that it's crap and proclaiming Walter should have done in the first place?No, I don't see any point in saying what I said above, because I don't think anything will change. If I didn't like some little detail, that could worth discussing because it has any chance to change Walter/Bartoz mind, but saying "I think all the model is way too complex" don't help much IMHO =)
May 28 2009
Hello Leandro,Jason House, on May 28 at 08:45 you wrote:I'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety that defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?) unpopular const architecture.I just find the new "thread-aware" design of D2 so complex, so twisted that I don't even know where to start. I think the solution is way worse than the problem here. That's why I don't comment at all.I get the impression, from what little I know about threading, that it is likely you are underestimating the complexity of the threading problem. I get the feeling that *most* non-experts do (either that, or they just assume it is more complex than they want to deal with).I think D duplicates functionality. For "safe" concurrency I use processes and IPC (I have even more guarantees than D could ever give me). That's all I need. I don't need huge complexity in the language for that. And I think the D2 concurrency model is still way too low level.You are crazy! processes+IPC only works well if either the OS supports very fast IPC (IIRC none do, aside from shared memory, and now we are back where we started) or the processing between interactions is very long. Everything is indicating that shared memory multi-threading is where it's all going.
May 28 2009
BCS wrote:Everything is indicating that shared memory multi-threading is where it's all going.That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
May 28 2009
Reply to Andrei,BCS wrote:I'm talking at the ASM level (not the language model level) and as opposed to each thread running in its own totally isolated address space. Am I wrong in assuming that most languages use user mode (not kernel mode) shared memory for inter thread communication?Everything is indicating that shared memory multi-threading is where it's all going.That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
May 28 2009
BCS wrote:Reply to Andrei,What happens is that memory is less shared as cache hierarchies go deeper. It was a great model when there were a couple of processors hitting on the same memory because it was close to reality. Cache hierarchies reveal the hard reality that memory is shared to a decreasing extent and that each processor would rather deal with its own memory. Incidentally, message-passing-style protocols are prevalent in such architectures even at low level. It follows that message passing is not only an attractive model for programming at large, but also a model that's closer to machine than memory sharing. AndreiBCS wrote:I'm talking at the ASM level (not the language model level) and as opposed to each thread running in its own totally isolated address space. Am I wrong in assuming that most languages use user mode (not kernel mode) shared memory for inter thread communication?Everything is indicating that shared memory multi-threading is where it's all going.That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
May 28 2009
Reply to Andrei,It follows that message passing is not only an attractive modelI'm thinking implementation, not model. How is the message passing implemented? OS system calls (probably on top of kernel level shared memory)? User space shared memory? Special hardware?for programming at large, but also a model that's closer to machine than memory sharing.I think I see what you're getting at... even for shared memory on a deep cache, the cache invalidation system /is/ your message path.Andrei
May 28 2009
BCS wrote:Reply to Andrei,I think it depends on whether the message is intraprocess or interprocess. In the first case, I expect message passing would probably be done via user space shared memory if possible (things get a bit weird with per-thread heaps). In the latter case, a kernel api would probably be used if possible--perhaps TIPC or something related to MPI. It's the back door bit that's at issue right now. Should the language provide full explicit support for the intraprocess message passing? ie. move semantics, memory protection, etc?It follows that message passing is not only an attractive modelI'm thinking implementation not model. How is the message passing implemented? OS system calls (probably on top of kernel level shared memory)? user space shared memory? Special hardware? If you can't getYeah kinda. Look at NUMA machines, for example (SPARC, etc). I expect that NUMA architectures will become increasingly common in the coming years, and it makes total sense to try and build a language that expects such a model.for programming at large, but also a model that's closer to machine than memory sharing.I think I see what your getting at,.. even for shared memory on a deep cache; the cache invalidation system /is/ your message path.
May 29 2009
Andrei Alexandrescu wrote:BCS wrote:This is all very interesting. I've recently been playing with a little toy language I'm designing. It's a postfix language, so I'm fairly certain no one will ever want to even look at it. :P But when I was designing it, I was adamant that it should do safe parallelism. I worked out that I could get everything other than deadlock safety by giving everything value semantics (using copy-on-write for anything larger than an atomic value.) Add in references that remember their "owner" thread and can only be dereferenced by that single thread, and then note that the global dict and stack are just values themselves and hence copied not referenced when you create a new thread. The only method of communication between threads is using message channels. This could be quite slow if you try to pass a very large data structure (since everything always gets copied), so you can create it on the heap via a reference, then "disown" the reference and assign it to another thread. That way you can get the efficiency of pass-by-reference without inter-thread aliasing. I also have a plan for making the language deadlock-free by either re-expressing all locks as blocking messages such that the interpreter knows who is blocking who, or by going all-out and just using transactions. But this is all just me stuffing about with a completely impractical language. What's being done for D is much more interesting. :)... Am I wrong in assuming that most languages use user mode (not kernel mode) shared memory for inter thread communication?What happens is that memory is less shared as cache hierarchies go deeper. It was a great model when there were a couple of processors hitting on the same memory because it was close to reality. Cache hierarchies reveal the hard reality that memory is shared to a decreasing extent and that each processor would rather deal with its own memory. 
Incidentally, message-passing-style protocols are prevalent in such architectures even at low level. It follows that message passing is not only an attractive model for programming at large, but also a model that's closer to machine than memory sharing.
May 28 2009
On 2009-05-28 12:52:06 -0400, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> said:What happens is that memory is less shared as cache hierarchies go deeper. It was a great model when there were a couple of processors hitting on the same memory because it was close to reality. Cache hierarchies reveal the hard reality that memory is shared to a decreasing extent and that each processor would rather deal with its own memory. Incidentally, message-passing-style protocols are prevalent in such architectures even at low level. It follows that message passing is not only an attractive model for programming at large, but also a model that's closer to machine than memory sharing.While message-passing might be useful for some applications, I have a hard time seeing how it could work for others. Try split processing of a 4 GB array over 4 processors, or implement multi-threaded access to an in-memory database. Message passing by copying all the data might happen at the very low-level, but shared memory is more the right abstraction for these cases. There's a reason why various operating systems support shared memory between different processes: sometimes it's easier to deal with shared memory than messaging, even with all the data races you have to deal with. Shared memory becoming more and more implemented as message passing at the very low level might indicate that some uses of shared memory will migrate to message passing at the application level and get some performance gains, but I don't think message passing will ever completely replace shared memory for dealing with large data sets. It's more likely that shared memory will become a scarce resource for some systems while it'll continue to grow for others. That said, I'm no expert in this domain. But I believe D should have good support for both shared memory and message passing. 
I also take note that having a good shared memory model could prove very useful when writing on-disk databases or file systems, not just RAM-based data structures. You could have objects representing disk sectors or file segments, and the language's type system would help you handle the locking part of the equation. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
May 30 2009
Michel Fortin wrote:On 2009-05-28 12:52:06 -0400, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> said:Depends on what you want to do to those arrays. If concurrent writing is limited (e.g. quicksort) then there's no need to copy. Then many times you want to move (hand over) data from one thread to another. Here something like unique would help because you can safely pass pointers without worrying about subsequent contention.What happens is that memory is less shared as cache hierarchies go deeper. It was a great model when there were a couple of processors hitting on the same memory because it was close to reality. Cache hierarchies reveal the hard reality that memory is shared to a decreasing extent and that each processor would rather deal with its own memory. Incidentally, message-passing-style protocols are prevalent in such architectures even at low level. It follows that message passing is not only an attractive model for programming at large, but also a model that's closer to machine than memory sharing.While message-passing might be useful for some applications, I have a hard time seeing how it could work for others. Try split processing of a 4 Gb array over 4 processors, or implement multi-threaded access to an in-memory database. Message passing by copying all the data might happen at the very low-level, but shared memory is more the right abstraction for these cases.There's a reason why various operating systems support shared memory between different processes: sometime it's easier to deal with shared memory than messaging, even with all the data races you have to deal with.Of course sometimes shared memory is a more natural fit. 
My argument is that that's the rare case.Shared memory becoming more and more implemented as message passing at the very low level might indicate that some uses of shared memory will migrate to message passing at the application level and get some performance gains, but I don't think message passing will ever completely replace shared memory for dealing with large data sets. It's more likely that shared memory will become a scarse resource for some systems while it'll continue to grow for others. That said, I'm no expert in this domain. But I believe D should have good support both shared memory and message passing. I also take note that having a good shared memory model could prove very useful when writting on-disk databases or file systems, not just RAM-based data structures. You could have objects representing disk sectors or file segments, and the language's type system would help you handle the locking part of the equation.I think shared files with interlocking can support such cases with ease. Andrei
May 30 2009
On 2009-05-30 09:36:19 -0400, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> said:While message-passing might be useful for some applications, I have a hard time seeing how it could work for others. Try split processing of a 4 GB array over 4 processors, or implement multi-threaded access to an in-memory database. Message passing by copying all the data might happen at the very low-level, but shared memory is more the right abstraction for these cases.Depends on what you want to do to those arrays. If concurrent writing is limited (e.g. quicksort) then there's no need to copy. Then many times you want to move (hand over) data from one thread to another. Here something like unique would help because you can safely pass pointers without worrying about subsequent contention.If you include passing unique pointers to shared memory in your definition of "message passing", then yes, you can work with "message passing", and yes 'unique' would help a lot to ensure safety. But then you still need to have shared memory between threads: it's just that by making the pointer 'unique' we're ensuring that no more than one thread at a time is accessing that particular piece of data in the shared memory. I'm still convinced that we should offer a good way to access shared mutable data. It'll have to have limits on what the language can enforce, and I'm not sure what they should be. For instance, while I see a need for 'lockfree', its footprint in language complexity seems a little big for a half-unsafe manual performance enhancement capability, so I'm undecided on that one. Implicit synchronization of shared object member functions seems like a good idea, however. I'll be waiting a little more to see what Bartosz has to say about expressing object ownership before making other comments.Shared memory is rare between processes, but not between threads. At least, in my experience. Shared memory is something you want to use whenever you can't afford copying data. 
Message passing is often implemented using shared memory, especially between threads. That said, sometimes you don't have a choice, you need to copy the data (to the GPU or to somewhere else on the network). Also, shared memory is something you want for storage systems meant to be accessible concurrently by many threads, like a database, a cache, a filesystem, etc. Those are shared storage systems, and message passing doesn't really help them much since we're talking about storage here, not communication. Do you really want to say that multithreaded access to stored data is rare? -- Michel Fortin michel.fortin michelf.com http://michelf.com/There's a reason why various operating systems support shared memory between different processes: sometimes it's easier to deal with shared memory than messaging, even with all the data races you have to deal with.Of course sometimes shared memory is a more natural fit. My argument is that that's the rare case.
May 30 2009
Michel Fortin wrote:On 2009-05-28 12:52:06 -0400, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> said:Perhaps at an implementation level in some instances, yes. But look at Folding home, etc. The approach is based on message passing. Just if you were Folding OnOnePCOnly then you might pass references to array regions around instead of copying the data. I suppose what I'm getting at is that an interface doesn't typically necessitate a particular implementation.What happens is that memory is less shared as cache hierarchies go deeper. It was a great model when there were a couple of processors hitting on the same memory because it was close to reality. Cache hierarchies reveal the hard reality that memory is shared to a decreasing extent and that each processor would rather deal with its own memory. Incidentally, message-passing-style protocols are prevalent in such architectures even at low level. It follows that message passing is not only an attractive model for programming at large, but also a model that's closer to machine than memory sharing.While message-passing might be useful for some applications, I have a hard time seeing how it could work for others. Try split processing of a 4 Gb array over 4 processors, or implement multi-threaded access to an in-memory database. Message passing by copying all the data might happen at the very low-level, but shared memory is more the right abstraction for these cases.There's a reason why various operating systems support shared memory between different processes: sometime it's easier to deal with shared memory than messaging, even with all the data races you have to deal with. Shared memory becoming more and more implemented as message passing at the very low level might indicate that some uses of shared memory will migrate to message passing at the application level and get some performance gains, but I don't think message passing will ever completely replace shared memory for dealing with large data sets. 
It's more likely that shared memory will become a scarce resource for some systems while it'll continue to grow for others.Well sure. At some level, sharing is going to be happening even in message-passing oriented applications. The issue is more about which approach to solving problems a language "encourages" than about what the language allows. D will always allow all sorts of wickedness because it's a systems language. But that doesn't mean this stuff has to be the central feature of the language.
May 30 2009
On Thu, 28 May 2009 20:32:29 +0400, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:BCS wrote:That's true. For example, we develop for PS3, and its 7 SPU cores have 256KiB of TLS each (which is as fast as L2 cache) and no direct shared memory access. Shared memory needs to be requested via asynchronous memcpy requests, and this scheme doesn't work well with OOP: even after you transfer some object, its vtbl etc. still points to shared memory. We had a hard time rearranging our data so that an object and everything it owns (and points to) is stored sequentially in a single large block of memory. This also resulted in replacing most of the pointers with relative offsets. Parallelization is hard, but the result is worth the trouble.Everything is indicating that shared memory multi-threading is where it's all going.That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
May 28 2009
On Thu, 28 May 2009 12:45:41 -0400, Denis Koroskin <2korden gmail.com> wrote:On Thu, 28 May 2009 20:32:29 +0400, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:I agree that Andrei's right, but your example is wrong. The Cell's SPUs are SIMD vector processors, not general CPUs. I also work with vector processors (NVIDIA's CUDA) but every software/hardware iteration gets further and further away from pure vector processing. Rumor has it that NVIDIA's next chip will be MIMD instead of SIMD.BCS wrote:That's true. For example, we develop for PS3, and its 7 SPU cores have 256KiB of TLS each (which is as fast as L2 cache) and no direct shared memory access. Shared memory needs to be requested via asynchronous memcpy requests, and this scheme doesn't work well with OOP: even after you transfer some object, its vtbl etc. still points to shared memory. We had a hard time rearranging our data so that an object and everything it owns (and points to) is stored sequentially in a single large block of memory. This also resulted in replacing most of the pointers with relative offsets. Parallelization is hard, but the result is worth the trouble.Everything is indicating that shared memory multi-threading is where it's all going.That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
May 28 2009
On Thu, 28 May 2009 21:07:57 +0400, Robert Jacques <sandford jhu.edu> wrote:On Thu, 28 May 2009 12:45:41 -0400, Denis Koroskin <2korden gmail.com> wrote:I wanted to stress that multicore PUs tend to have their own local memory (small but fast) and little or no global (shared) memory access (it is inefficient and error-prone - race conditions et al.) I believe the SIMD/MIMD discussion is irrelevant here. It's all about the Shared vs. Distributed Memory Model. MIMD devices can be both (http://en.wikipedia.org/wiki/MIMD)On Thu, 28 May 2009 20:32:29 +0400, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:I agree that Andrei's right, but your example is wrong. The Cell's SPUs are SIMD vector processors, not general CPUs. I also work with vector processors (NVIDIA's CUDA) but every software/hardware iteration gets further and further away from pure vector processing. Rumor has it that NVIDIA's next chip will be MIMD instead of SIMD.BCS wrote:That's true. For example, we develop for PS3, and its 7 SPU cores have 256KiB of TLS each (which is as fast as L2 cache) and no direct shared memory access. Shared memory needs to be requested via asynchronous memcpy requests, and this scheme doesn't work well with OOP: even after you transfer some object, its vtbl etc. still points to shared memory. We had a hard time rearranging our data so that an object and everything it owns (and points to) is stored sequentially in a single large block of memory. This also resulted in replacing most of the pointers with relative offsets. Parallelization is hard, but the result is worth the trouble.Everything is indicating that shared memory multi-threading is where it's all going.That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
May 28 2009
On Thu, 28 May 2009 13:36:28 -0400, Denis Koroskin <2korden gmail.com> wrote:On Thu, 28 May 2009 21:07:57 +0400, Robert Jacques <sandford jhu.edu> wrote:Well, I thought you were making a different point. Really, the Cell SPU is the only current PU with the design you're talking about. All commercial CPUs and GPUs have very large global memory buses. Every blog and talk I've read/attended has painted the SPU in a very negative light, at least with regard to the programming model. (Which makes sense, since it's sorta like non-cache-coherent NUMA, which pretty much everyone decided is a bad idea.)On Thu, 28 May 2009 12:45:41 -0400, Denis Koroskin <2korden gmail.com> wrote:I wanted to stress that multicore PUs tend to have their own local memory (small but fast) and little or no global (shared) memory access (it is inefficient and error-prone - race conditions et al.) I believe the SIMD/MIMD discussion is irrelevant here. It's all about the Shared vs. Distributed Memory Model. MIMD devices can be both (http://en.wikipedia.org/wiki/MIMD)On Thu, 28 May 2009 20:32:29 +0400, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:I agree that Andrei's right, but your example is wrong. The Cell's SPUs are SIMD vector processors, not general CPUs. I also work with vector processors (NVIDIA's CUDA) but every software/hardware iteration gets further and further away from pure vector processing. Rumor has it that NVIDIA's next chip will be MIMD instead of SIMD.BCS wrote:That's true. For example, we develop for PS3, and its 7 SPU cores have 256KiB of TLS each (which is as fast as L2 cache) and no direct shared memory access. Shared memory needs to be requested via asynchronous memcpy requests, and this scheme doesn't work well with OOP: even after you transfer some object, its vtbl etc. still points to shared memory. We had a hard time rearranging our data so that an object and everything it owns (and points to) is stored sequentially in a single large block of memory. 
This also resulted in replacing most of the pointers with relative offsets. Parallelization is hard, but the result is worth the trouble.Everything is indicating that shared memory multi-threading is where it's all going.That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
May 28 2009
Andrei Alexandrescu Wrote:BCS wrote:I understand where you stand. You are looking at where the state-of-the-art hardware is going and that makes perfect sense. There is, however, a co-evolution of hardware and software that is not a simple software-follows-hardware relationship (remember the era of RISC processors?). I'm looking at programming languages and I don't see that away-from-shared-memory trend--neither in mainstream languages, nor in newer languages like Scala, nor in the operating systems. There are many interesting high-level paradigms like message passing, futures, actors, etc.; and I'm sure there will be more in the future. D has a choice: bet the store on one of these paradigms (like ML did on message passing or Erlang on actors), try to build solid foundations for a multi-paradigm language, or do nothing. I am trying to build the foundations. The examples that I'm exploring in my posts are data structures that support higher-level concurrency: channels, message queues, lock-free objects, etc. I want to build a system where high-level concurrency paradigms are reasonably easy to implement. Let's look at the alternatives. Nobody thinks seriously of making message passing a language feature in D. The Erlangization of D wouldn't work because D does not provide guarantees of address space separation (and providing such guarantees would cripple the language beyond repair). Another option we discussed was to provide specialized, well-tested message queues in the library and count on programmers' discipline not to stray away from the message passing paradigm. Without type-system support, though, such queues would have to be either unsafe (no guarantee that the client doesn't have unprotected aliases to messages), or restrict messages to very simple data structures (pass-by-value and maybe some very clever unique pointer/array template). 
The latter approach introduces huge complexity into the library, essentially making user-defined extensions impossible (unless the user's name is Andrei ;-) ). Let's not forget that right now D is _designed_ to support shared-memory programming. Every D object has a lock, it supports synchronized methods, the keyword "shared" is being introduced, etc. It doesn't look like D is moving away from shared memory. It looks more like it's adding some window dressing to the pre-existing mess and biding its time. I haven't seen a comprehensive plan for D to tackle concurrency and I'm afraid that if D doesn't take a solid stance right now, it will miss the train.Everything is indicating that shared memory multi-threading is where it's all going.That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. Andrei
May 28 2009
Bartosz Milewski wrote:Andrei Alexandrescu Wrote:I understand that. However, I don't understand how the comment applies to the situation at hand.BCS wrote:I understand where you stand. You are looking at where the state-of-the-art hardware is going and that makes perfect sense. There is, however, a co-evolution of hardware and software that is not a simple software-follow-hardware (remember the era of RISC processors?).Everything is indicating that shared memory multi-threading is where it's all going.That is correct, just that it's 40 years late. Right now everything is indicating that things are moving *away* from shared memory. AndreiI'm looking at programming languages and I don't see that away-from-shared-memory trend--neither in mainstream languages, nor in newer languages like Scala, nor in the operating systems.Scala doesn't know what to do about threads. The trend I'm seeing is that functional languages are getting increasing attention, and that's exactly because they never share mutable memory. As far as I can see, languages that are based on heavy shared mutation are pretty much dead in the water. We have the chance to not be so.There are many interesting high-level paradigms like message passing, futures, actors, etc.; and I'm sure there will be more in the future. D has a choice of betting the store on one of these paradigms (like ML did on message passing or Erlang on actors) or to try, build solid foundations for a multi-paradigm language or do nothing. I am trying to build the foundations.Building foundations is great. What I'm seeing, however, is one very heavy strong pillar put in a place that might become the doghouse. I'm not at all sure the focus must be put on high-level race avoidance, particularly given that the cost in perceived complexity is this high.The examples that I'm exploring in my posts are data structures that support higher level concurrency: channels, message queues, lock-free objects, etc. 
I want to build a system where high-level concurrency paradigms are reasonably easy to implement. Let's look at the alternatives. Nobody thinks seriously of making message passing a language feature in D. The Erlangization of D wouldn't work because D does not provide guarantees of address space separation (and providing such guarantees would cripple the language beyond repair).Why wouldn't we think of making message passing a language feature in D? Why does D need erlangization to support message passing?Another option we discussed was to provide specialized, well tested message queues in the library and count on programmers' discipline not to stray away from the message passing paradigm.That's not what I discussed. I think there is an interesting point that you've been missing, so please allow me to restate it. What I discussed was a holistic approach in which language + standard library provides a trusted computing base. Consider Java's new (for arrays) and C's malloc. The new function cannot be defined in Java because it would require unsafe manipulation underneath, so it is defined by its runtime support library. That runtime support is implemented in the likes of C. However, because the runtime support is part of Java, Java does in fact have dynamic memory allocation - and nobody blinks an eye. C has famously had a mantra of self-sufficiency: its own support libraries have been written in C, which is pretty remarkable - and almost unprecedented at the time C was defined. For example, C's malloc is written in C, but (and here's an important detail) at some point it becomes _nonportable_ C. So even C has to cross a barrier of some sort at some point. How is this related to the discussion at hand? You want to put all concurrency support in the language. That is, you want to put enough power into the language to be able to typecheck a variety of concurrent programming primitives and patterns. 
This approach is blind to the opportunity of defining some of these primitives in the standard library, in unsafe/unportable D, yet offering safe primitives to the user. In the process, the user is not hurt because she still has access to the primitives. What she can't do is define her own primitives in safe D. But I think that's as useless a pursuit as adding keywords to C to allow one to implement malloc() in safe C.Without type-system support, though, such queues would have to be either unsafe (no guarantee that the client doesn't have unprotected aliases to messages), or restrict messages to very simple data structures (pass-by-value and maybe some very clever unique pointer/array template). The latter approach introduces huge complexity into the library, essentially making user-defined extensions impossible (unless the user's name is Andrei ;-) ).Complexity will be somewhere. The problem is, you want to put much of it in the language, and that will hit the language user too. Semantic checking will always be harder on everyone than having a human being sit down and implement a provably safe library in ways that the compiler can't prove.Let's not forget that right now D is _designed_ to support shared-memory programming. Every D object has a lock, it supports synchronized methods, the keyword "shared" is being introduced, etc. It doesn't look like D is moving away from shared memory. It looks more like it's adding some window dressing to the pre-existing mess and biding its time. I haven't seen a comprehensive plan for D to tackle concurrency and I'm afraid that if D doesn't take a solid stance right now, it will miss the train.I think we can safely ditch this argument. Walter had no idea what to do about threads when he defined D, so he put in whatever he saw and understood from Java. He'd be the first to say that - actually, he wouldn't tell me so, he _told_ me so. 
To me, adding concurrency capabilities to D is nothing like adding window dressing on top of whatever crap is there. Java and C++ are in trouble, and doing what they do doesn't strike me as a good bet. You're right about missing the train, but I think you and I are talking about different trains. I don't want to embark on the steam-powered train. Andrei
May 28 2009
Andrei Alexandrescu, on May 28 at 19:52 you wrote:To me, adding concurrency capabilities to D is nothing like adding window dressing on top of whatever crap is there. Java and C++ are in trouble, and doing what they do doesn't strike me as a good bet. You're right about missing the train, but I think you and I are talking about different trains. I don't want to embark on the steam-powered train.I agree. Maybe it's just unjustified fear, but I see D2 being to concurrency what C++ was to templates: a great new idea, terribly hard for mortals to use and understand. For some time people used to think templates were complex because they had to be, but I think D could prove that wrong ;) -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ----------------------------------------------------------------------------
May 29 2009
Leandro Lucarella:I agree. Maybe it's just unjustified fear, but I see D2 being to concurrency what C++ was to templates.Sometimes you need a lot of time to find out what a simple implementation can be. Often someone has to pay the price of being the first one to implement something :-] This is bad if you mix it with the desire to keep backwards compatibility. Bye, bearophile
May 29 2009
bearophile, on May 29 at 13:39 you wrote:Leandro Lucarella:Exactly. I think D had a good model of "steal good proven stuff that other languages got right". With this, I think it's taking a new path of being a pioneer, and chances are it will get it wrong (I don't mean to be offensive with this; I'm just speaking statistically) and suffer the mistake for a long, long time because of backward compatibility. -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ----------------------------------------------------------------------------I agree. Maybe it's just unjustified fear, but I see D2 being to concurrency what C++ was to templates.Sometimes you need a lot of time to find out what a simple implementation can be. Often someone has to pay the price of being the first one to implement something :-] This is bad if you mix it with the desire to keep backwards compatibility.
May 29 2009
Leandro Lucarella wrote:bearophile, on May 29 at 13:39 you wrote:With its staunch default isolation, I think D is already making a departure from the traditional imperative languages (which extend their imperative approach to concurrent programming). The difference is that it takes what I think is a sound model (interprocess isolation) and augments it with the likes of shared and Bartosz's work. So my perception is that it's less likely to get things regrettably wrong. But then you never know. AndreiLeandro Lucarella:Exactly. I think D had a good model of "steal good proven stuff that other languages got right". With this, I think it's taking a new path of being a pioneer, and chances are it will get it wrong (I don't mean to be offensive with this; I'm just speaking statistically) and suffer the mistake for a long, long time because of backward compatibility.I agree. Maybe it's just unjustified fear, but I see D2 being to concurrency what C++ was to templates.Sometimes you need a lot of time to find out what a simple implementation can be. Often someone has to pay the price of being the first one to implement something :-] This is bad if you mix it with the desire to keep backwards compatibility.
May 29 2009
Andrei Alexandrescu Wrote:Scala doesn't know what to do about threads.That's my impression too, although Scala's support for actors leaves D in the dust.The trend I'm seeing is that functional languages are getting increasing attention, and that's exactly because they never share mutable memory. As far as I can see, languages that are based on heavy shared mutation are pretty much dead in the water. We have the chance to not be so.It's a very sweeping statement. I just looked at the TIOBE index and couldn't find _any_ functional languages in the top 20.I'm not at all sure the focus must be put on high-level race avoidance, particularly given that the cost in perceived complexity is this high.The complexity argument is tenuous. It might look like my proposal is complex because I'm dropping the whole system in at once. But it's a solid, well thought-out system. What I see happening in D is the creeping complexity resulting from sloppy design. We've been talking for years about closing some gaping holes in the design of arrays, slices, immutable, qualifier polymorphism--the list goes on--and there's little progress. There is no solid semantics for scope and shared. A solid solution to those issues will look complex too.Why wouldn't we think of making message passing a language feature in D?Because we don't have even the tiniest proposal describing it, not to mention a design.Why does D need erlangization to support message passing?Because the strength of the Erlang model is the isolation of processes. Take away isolation and it's no better than Scala or Java. Granted, having to explicitly mark objects for sharing in D is a big help. Here we agree.What I discussed was a holistic approach in which language + standard library provides a trusted computing base.Have you thought about how to eliminate data races in your holistic approach? Will "shared" be forbidden in SafeD? Will library-based message-passing channels (and actors?) 
only accept simple value types and immutables? Andrei, you are the grand wizard of squeezing powerful abstractions out of a kludge of a language that is C++ with its ad-hoc support for generics. It's impressive and very useful, but it's also hermetic. By contrast, generic programming in D is relatively easy because of the right kind of support built into the language (compile-time interpreter). I trust that you could squeeze powerful multithreading abstractions out of D, even if the language/type system doesn't offer much support. But it will be hermetic. Prove me wrong by implementing a message queue using the current D2 (plus some things that are still in the pipeline).How is this related to the discussion at hand? You want to put all concurrency support in the language. That is, you want to put enough power into the language to be able to typecheck a variety of concurrent programming primitives and patterns. This approach is blind to the opportunity of defining some of these primitives in the standard library, in unsafe/unportable D, yet offering safe primitives to the user. In the process, the user is not hurt because she still has access to the primitives. What she can't do is define her own primitives in safe D. But I think that's as useless a pursuit as adding keywords to C to allow one to implement malloc() in safe C.That's a bad analogy. I'm proposing the tightening of the type system, not the implementation of weak atomics. A better analogy would be adding immutable/const to the language. Except that I don't think const-correctness is as important as the safety of shared-memory concurrency.I'm being very careful not to hit the language user. You might have noticed that my primitive channel, the MVar, is less complex than a D2 implementation (it doesn't require "synchronized"). 
And the compiler will immediately tell you if you use it in an unsafe way.Without type-system support, though, such queues would have to be either unsafe (no guarantee that the client doesn't have unprotected aliases to messages), or restrict messages to very simple data structures (pass-by-value and maybe some very clever unique pointer/array template). The latter approach introduces huge complexity into the library, essentially making user-defined extensions impossible (unless the user's name is Andrei ;-) ).Complexity will be somewhere. The problem is, you want to put much of it in the language, and that will hit the language user too.Semantic checking will always be harder on everyone than a human being who sits down and implements a provably safe library in ways that the compiler can't prove.Be careful with such arguments. Somebody might use them to discredit immutability.I realize that. Except that the "shared" concept was added very recently.Let's not forget that right now D is _designed_ to support shared-memory programming. Every D object has a lock, it supports synchronized methods, the keyword "shared" is being introduced, etc. It doesn't look like D is moving away from shared memory. It looks more like it's adding some window dressing to the pre-existing mess and bidding its time. I haven't seen a comprehensive plan for D to tackle concurrency and I'm afraid that if D doesn't take a solid stance right now, it will miss the train.I think we can safely ditch this argument. Walter had no idea what to do about threads when he defined D, so he put whatever he saw and understood in Java. He'd be the first to say that - actually, he wouldn't tell me so, he _told_ me so.To me, adding concurrency capabilities to D is nothing like adding window dressing on top of whatever crap is there. Java and C++ are in trouble, and doing what they do doesn't strike me as a good bet.So far D has been doing exactly what Java and C++ are doing. My proposal goes way beyond that. 
But if you mean D should not support shared-memory concurrency or give it only lip service, then you really have to come up with something revolutionary to take its place. This would obviously not make it into D2 or into your book. So essentially D2 would be doomed in the concurrency department.You're right about missing the train, but I think you and I are talking about different trains. I don't want to embark on the steam-powered train.Should we embark on a vapor-powered train then ;-)
May 29 2009
Bartosz Milewski wrote:Andrei Alexandrescu Wrote:Scala actors are a library.Scala doesn't know what to do about threads.That's my impression too, although Scala's support for actors leaves D in the dust.We can safely ditch Tiobe, but I agree that functional languages aren't mainstream. There are two trends though. One is that for most of its existence Haskell has had about 12 users. It has definitely turned an exponential elbow during recent years. Similar trends are to be seen for ML, OCaml, and friends. The other trend is that all of today's languages are scrambling to add support for pure functional programming.The trend I'm seeing is that functional languages are getting increasing attention, and that's exactly because they never share mutable memory. As far as I can see, languages that are based on heavy shared mutation are pretty much dead in the water. We have the chance to not be so.It's a very sweeping statement. I just looked at the TIOBE index and couldn't find _any_ functional languages in the top 20.What do those holes have to do with the problem at hand? I'm seeing implementation bugs, not design holes. I'd love them to be fixed as much as the next guy, but I don't think we're looking at issues that would be complex (except for scope, which sucks; I never claimed there was a solution to that). All of the other features you mention have no holes I know of in their design.I'm not at all sure the focus must be put on high-level race avoidance, particularly given that the cost in perceived complexity is this high.The complexity argument is tenuous. It might look like my proposal is complex because I'm dropping the whole system in at once. But it's a solid, well thought-out system. What I see happening in D is the creeping complexity resulting from sloppy design. We've been talking for years about closing some gaping holes in the design of arrays, slices, immutable, qualifier polymorphism--the list goes on--and there's little progress. 
There is no solid semantics for scope and shared. A solid solution to those issues will look complex too.That doesn't mean we shouldn't think of it.Why wouldn't we think of making message passing a language feature in D?Because we don't have even a tiniest proposal describing it, not to mention a design.So we should be thinking about it, right?Why does message passing need erlangization to support message passing?Because the strength of the Erlang model is the isolation of processes. Take away isolation and it's no better than Scala or Java. Granted, having to explicitly mark objects for sharing in D is a big help. Here we agree.Shared will be allowed in SafeD but it will be the responsibility of user code to ensure high-level race elimination. Shared will eliminate low-level races. I think it's worth contemplating a scenario in which message passing is restricted to certain types.What I discussed was a holistic approach in which language + standard library provides a trusted computing base.Have you thought about how to eliminate data races in your holistic approach? Will "shared" be forbidden in SafeD? Will library-based message-passing channels (and actors?) only accept simple value types and immutables?Andrei, you are the grand wizard of squeezing powerful abstractions out of a kludge of a language that is C++ with its ad-hoc support for generics. It's impressive and very useful, but it's also hermetic. By contrast, generic programming in D is relatively easy because of the right kind of support built into the language (compile-time interpreter).Please no ad hominem, flattering or not.I trust that you could squeeze powerful multithreading abstraction out of D, even if the language/type system doesn't offer much of a support. But it will be hermetic. 
Prove me wrong by implementing a message queue using the current D2 (plus some things that are still in the pipeline).I don't have the time, but I think it's worth looking into what a message queue implementation should look like.It's a good analogy because you want to make possible the implementation of threading primitives in portable D. I am debating whether that is a worthy goal. I doubt it is.How is this related to the discussion at hand? You want to put all concurrency support in the language. That is, you want to put enough power into the language to be able to typecheck a variety of concurrent programming primitives and patterns. This approach is blindsided to the opportunity of defining some of these primitives in the standard library, in unsafe/unportable D, yet offering safe primitives to the user. In the process, the user is not hurt because she still has access to the primitives. What she can't do is define their own primitives in safe D. But I think that's as useless a pursuit as adding keywords to C to allow one to implement malloc() in safe C.That's a bad analogy. I'm proposing the tightening of the type system, not the implementation of weak atomics. A better analogy would be adding immutable/const to the language. Except that I don't think const-correctness is as important as the safety of shared-memory concurrency.Let them discredit it and we'll see how strong their argument is.I'm being very careful not to hit the language user. You might have noticed that my primitive channel, the MVar, is less complex than a D2 implementation (it doesn't require "synchronized"). And the compiler will immediately tell you if you use it in an unsafe way.Without type-system support, though, such queues would have to be either unsafe (no guarantee that the client doesn't have unprotected aliases to messages), or restrict messages to very simple data structures (pass-by-value and maybe some very clever unique pointer/array template). 
The latter approach introduces huge complexity into the library, essentially making user-defined extensions impossible (unless the user's name is Andrei ;-) ).Complexity will be somewhere. The problem is, you want to put much of it in the language, and that will hit the language user too.Semantic checking will always be harder on everyone than a human being who sits down and implements a provably safe library in ways that the compiler can't prove.Be careful with such arguments. Somebody might use them to discredit immutability.My problem is that I think it goes way beyond that straight in the wrong directions. It goes on and on and on about how to make deadlock-oriented programming less susceptible to races. I don't care about deadlock-oriented programming. I want to stay away from deadlock-oriented programming. I don't understand why I need a hecatomb of concepts and notions that help me continue using a programming style that is unrecommended.I realize that. Except that the "shared" concept was added very recently.Let's not forget that right now D is _designed_ to support shared-memory programming. Every D object has a lock, it supports synchronized methods, the keyword "shared" is being introduced, etc. It doesn't look like D is moving away from shared memory. It looks more like it's adding some window dressing to the pre-existing mess and bidding its time. I haven't seen a comprehensive plan for D to tackle concurrency and I'm afraid that if D doesn't take a solid stance right now, it will miss the train.I think we can safely ditch this argument. Walter had no idea what to do about threads when he defined D, so he put whatever he saw and understood in Java. He'd be the first to say that - actually, he wouldn't tell me so, he _told_ me so.To me, adding concurrency capabilities to D is nothing like adding window dressing on top of whatever crap is there. 
Java and C++ are in trouble, and doing what they do doesn't strike me as a good bet. So far D has been doing exactly what Java and C++ are doing. My proposal goes way beyond that. But if you mean D should not support shared-memory concurrency, or give it only lip service, then you really have to come up with something revolutionary to take its place. This would obviously not make it into D2 or into your book. So essentially D2 would be doomed in the concurrency department. Message passing and functional style have been around for a while and form an increasingly compelling alternative to mutable sharing. We can support deadlock-oriented programming in addition to these for those who want it, but I don't think it's an area where we need to pay an arm and a leg for eliminating high-level races. I just think it's the wrong problem to work on. Andrei
May 29 2009
Can you believe it? I was convinced that my response was lost because the stupid news reader on the Digital Mars web site returned an error (twice, hence two posts). I diligently rewrote the riposte from scratch and tried to post it. It flunked again! Now I'm not sure whether it will appear in the newsgroup after an hour. (By the way, I refined my arguments.)
May 29 2009
This is the missing second reply to Andrei. I'm posting parts of it because it may help understand my position better. I wouldn't dismiss Scala out of hand. The main threading model in Scala is the (library-supported) actor model. Isn't that what you're proposing for D? Except that Scala has much better support for functional programming. The languages [C++ and Java] may be dead in the water (although still the overwhelming majority of programmers use them), but I don't see the idea of shared-memory concurrency dying any time soon. I bet it will be the major programming model for the next ten years. What will come after that, nobody knows. The complexity argument: My proposal looks complex because I am dropping the whole comprehensive solution on the D community all at once. I would be much warier of the kind of creeping complexity resulting from incremental ad-hoc solutions. For instance, the whole complexity of immutability hasn't been exposed yet. If it were, there would be a much larger insurgency among D users. You know what I'm talking about--invariant constructors. My proposal goes into nooks and crannies and, of course, that makes it look more complex than it really is. Not to mention that there could be a lot of ideas that would lead to simplifications. I sometimes throw in various options for discussion. Take my proposal for unique objects. I could have punted on the need for "lent". Maybe nobody would ask for it? Compare "unique" with "scope"--nobody knows the target semantics of "scope". It's a half-baked idea, but nobody's protesting. Try to define the semantics of array slices and you'll see eyes glazing. We know we have to fix them, but we don't know how (array[new]?). Another half-baked idea. Are slices simple or complex? Define the semantics of "shared". Or should we implement it first and hope that the complexity won't creep in when we discover its shortcomings? Why wouldn't we think of making message passing a language feature in D? Well, we could, but why?
We don't have to add any new primitives to the language to implement message queues. We would have to *eliminate* some features to make message passing safe. For instance, we'd have to eliminate "shared". Is that an option? Why does message passing need erlangization? The power of Erlang lies in guaranteed process isolation. If we don't guarantee that, we are in the same league as Java or C++. What I discussed was a holistic approach in which language + standard library provides a trusted computing base. I like that very much. But I see the library as enabling certain useful features, and the type system as disabling the dangerous ones. You can't disable features through a library.
May 30 2009
For instance, the whole complexity of immutability hasn't been exposed yet. What? I thought immutable was already quite complex. Compare "unique" with "scope"--nobody knows the target semantics of "scope". It's a half-baked idea, but nobody's protesting. Everyone knows that D is full of half-baked ideas. We're not using D because it's a beautiful or elegant language - we use it because it makes life easier. Slices and arrays are half-baked, but they are much simpler and easier to use than the corresponding C/C++ solutions. We're also using D because it's so C/C++ like. D is to C what C++ should have been to C. Other than that, there are already languages that could have taken D's job: Delphi-Pascal, Ada, Modula... If D stops making life easier, it will be the death of D.
May 30 2009
On 2009-05-30 13:00:14 -0400, Bartosz Milewski <bartosz-nospam relisoft.com> said: The complexity argument: My proposal looks complex because I am dropping the whole comprehensive solution on the D community all at once. I would be much warier of the kind of creeping complexity resulting from incremental ad-hoc solutions. For instance, the whole complexity of immutability hasn't been exposed yet. If it were, there would be a much larger insurgency among D users. You know what I'm talking about--invariant constructors. My proposal goes into nooks and crannies and, of course, that makes it look more complex than it really is. Not to mention that there could be a lot of ideas that would lead to simplifications. I sometimes throw in various options for discussion. Take my proposal for unique objects. I could have punted on the need for "lent". Maybe nobody would ask for it? Compare "unique" with "scope"--nobody knows the target semantics of "scope". It's a half-baked idea, but nobody's protesting. Try to define the semantics of array slices and you'll see eyes glazing. We know we have to fix them, but we don't know how (array[new]?). Another half-baked idea. Are slices simple or complex? Bartosz, you're arguing that your proposal isn't that complex compared to the strange semantics of other parts of the language, and I agree... I should even say that what you propose offers a solution for fixing these other parts of the language. It's funny how what's needed to make multithreading safe is pretty much the same as what is needed to make immutable constructors and array slices safe; let's take a look: A constructor for a unique object is all you need to build an immutable one: move the unique pointer to an immutable pointer and you're sure no one has a mutable pointer to it.
Of course, to implement unique constructors, you need 'lent' (or 'scope', whichever keyword we prefer) so you can call functions that will alter the unique object and its members without escaping a reference. As for slices, as long as your slice is 'unique', you can enlarge it without side effects (relocating the slice in memory won't affect any other slice because you're guaranteed there aren't any), making a 'unique T[]' as good as an equivalent container... or should I say even safer, since enlarging a non-unique container might be as bad as enlarging a slice (the container may reallocate and disconnect from all its slices). You could also later transform 'unique T[]' to 'immutable T[]', or to a mutable 'T[]', but then you shouldn't be able to grow it without making a duplicate first, to avoid undesirable side effects. So instead of fighting over what's too complex and what isn't by looking at each hole of the language in isolation, I think it's time to look at the various problems as a whole. I believe all those half-baked ideas point to the same underlying deficiency: the lack of a safe unique type (which then requires move semantics and 'lent'/'scope' constraints). C++ will get half of that soon (unique_ptr), but it will still be missing 'lent', so it won't be as safe. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
May 30 2009
I don't think the item-by-item ping-pong works well in the newsgroup. Let's separate our discussion into separate threads. One philosophical, about the future of concurrency. Another about the immediate future of concurrency in D2. And a separate one about my proposed system, in the parallel universe where we all agree that for the next 10 years shared-memory concurrency will be the dominating paradigm.
May 29 2009
Bartosz Milewski wrote: I don't think the item-by-item ping-pong works well in the newsgroup. Let's separate our discussion into separate threads. One philosophical, about the future of concurrency. Another about the immediate future of concurrency in D2. And a separate one about my proposed system, in the parallel universe where we all agree that for the next 10 years shared-memory concurrency will be the dominating paradigm. I'm sure it's a good idea, particularly if others will participate as well. I warn that I'll be at a conference Sat-Thu and I don't have much time even apart from that. Andrei
May 29 2009
Andrei Alexandrescu: I just think it's the wrong problem to work on.< Besides multiprocessing (which I am still too ignorant to comment on), I can see other purposes for having a way to tell the type system that there exists only one reference/pointer to mutable data, along with ways to safely change ownership of such a pointer. It can also be used by the compiler to optimize in various situations. It is good when the type system gives you a formal way to state a constraint that the programmer wants to put in the program anyway (often just stated in comments, if the language doesn't offer such a higher-level feature). For example, a type system can be a big help in avoiding null object references in a program, saving the programmer a lot of time (eventually D will need this feature); in such situations a more refined type system reduces the time/complexity of writing correct programs (though in 20-line programs you may just not use such features). Bye, bearophile
May 30 2009
bearophile wrote: Andrei Alexandrescu: I just think it's the wrong problem to work on.< Besides multiprocessing (which I am still too ignorant to comment on), I can see other purposes for having a way to tell the type system that there exists only one reference/pointer to mutable data and ways to safely change ownership of such a pointer. It can also be used by the compiler to optimize in various situations. Correct. We've been trying valiantly to introduce unique in the type system two times, the first time in 2007. Our conclusion back then was that unique brings more problems than it solves. Last meeting the same pattern ensued. Problems with unique cropped up faster than they were solved. Andrei
May 30 2009
BCS, el 28 de mayo a las 15:57 me escribiste: Hello Leandro, I guess it all depends on the kind of fine granularity you want. I work on a distributed application, so threading is not very tempting for me; I get to use the multi-cores by splitting the work among processes, not threads, because I need "location-transparency" (I don't care if the process I'm communicating with runs in the same computer or not, so things like "move semantics" are not interesting for me). Sometimes I need some threading support, for example to be able to receive queries and do some I/O intensive stuff in the same thread, but the thread-communication I need for that is so trivial that using simple mutexes works just fine. And I never needed so much performance as to think about lock-free communication either (mutexes are really fast in Linux). I guess threading complexity is proportional to the complexity of the design. If your design is simple, concurrency is simple. It is really hard to get a deadlock in a simple design. Races are a little trickier, but they are inherent to some kinds of problems, so there is not a lot to do about that; you have to go problem by problem and try to get a good design to handle them well. Jason House, el 28 de mayo a las 08:45 me escribiste: I get the impression, from what little I know about threading, that it is likely you are underestimating the complexity of the threading problem. I get the feeling that *most* non-experts do (either that, or they just assume it's more complex than they want to deal with). I'm really surprised by the lack of design discussion in this thread. It's amazing how there can be huge bursts of discussion on which keyword to use (e.g. manifest constants), but then complete silence about major design decisions like thread safety, which defines new transitive states and a bunch of new keywords. The description even made parallels to the (previously?)
unpopular const architecture. I just find the new "thread-aware" design of D2 so complex, so twisted, that I don't even know where to start. I think the solution is way worse than the problem here. That's why I don't comment at all. The latter is my case =) I think D duplicates functionality. For "safe" concurrency I use processes and IPC (I have even more guarantees than D could ever give me). That's all I need. I don't need huge complexity in the language for that. And I think the D2 concurrency model is still way too low level. You are crazy! Processes+IPC only works well if either the OS supports very fast IPC (IIRC none do, aside from shared memory, and now we are back where we started) or the processing between interactions is very long. Everything indicates that shared-memory multi-threading is where it's all going. Maybe; I'm just saying why I don't comment on the D2 concurrency model. I find it too complex for my needs (i.e. for what I know, I won't give my opinion about things I don't know/use). -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ----------------------------------------------------------------------------
May 28 2009
Leandro Lucarella Wrote: BCS, el 28 de mayo a las 15:57 me escribiste: Maybe, I'm just saying why I don't comment on the D2 concurrency model. I find it too complex for my needs (i.e. for what I know, I won't give my opinion about things I don't know/use). Probably the majority of users either don't use multithreading (yet) or use it only for very simple tasks. My stated goal is not to force such users to learn the whole race-free type system. In most cases things "just work" by default, and the compiler catches any accidental race conditions. The complex part is for library writers who have very demanding needs. Unfortunately, I have to describe the whole shebang in my blog, otherwise people won't believe that the system is workable and that it satisfies their high expectations. I should probably write a simple tutorial that would show how to use my system for simple tasks.
May 28 2009
== Quote from Bartosz Milewski (bartosz-nospam relisoft.com)'s article: Leandro Lucarella Wrote: BCS, el 28 de mayo a las 15:57 me escribiste: Maybe, I'm just saying why I don't comment on the D2 concurrency model. I find it too complex for my needs (i.e. for what I know, I won't give my opinion about things I don't know/use). Probably the majority of users either don't use multithreading (yet) or use it only for very simple tasks. My stated goal is not to force such users to learn the whole race-free type system. In most cases things "just work" by default, and the compiler catches any accidental race conditions. The complex part is for library writers who have very demanding needs. Unfortunately, I have to describe the whole shebang in my blog, otherwise people won't believe that the system is workable and that it satisfies their high expectations. I should probably write a simple tutorial that would show how to use my system for simple tasks. This would be much appreciated. I try to read your blogs, which are geared toward hardcore multithreading people. I know just enough about multithreading to understand why it's a hard problem, so I usually get to about the second paragraph before I feel lost. I would love to see a version that offers simple examples of how the new multithreading might be useful to the kinds of people (like me) who understand the basics of multithreading and write multithreaded code in the very simple cases, but are not experts in concurrency, etc. For my purposes, I'm more interested in the mildly complicated things that are made simple, not the highly complicated things that are made possible.
May 28 2009
Bartosz Milewski Wrote: Leandro Lucarella Wrote: My hobby project is a multi-threaded game-playing AI. My current scheme uses a shared search tree with lockless updates of search results. Besides the general ability to use your scheme for what I've already done, I'm also interested in how to overhaul the garbage collector and implement lockless hashtables (see high-scale-lib on sf.net). I was interested in doing some of that infrastructure and contributing, but so far I've had no luck getting something as simple as weak references into druntime :( The complex part is for library writers who have very demanding needs. Unfortunately, I have to describe the whole shebang in my blog, otherwise people won't believe that the system is workable and that it satisfies their high expectations. Yeah, I'm waiting for more details, like which fences are introduced by the lockless SC requirements. The high-scale-lib is virtually fence free. I should probably write a simple tutorial that would show how to use my system for simple tasks.
May 28 2009
Jason House Wrote: Bartosz Milewski Wrote: My hobby project is a multi-threaded game-playing AI. My current scheme uses a shared search tree with lockless updates of search results. Besides the general ability to use your scheme for what I've already done, I'm also interested in how to overhaul the garbage collector and implement lockless hashtables (see high-scale-lib on sf.net). I see, you're a hardcore lock-free programmer. All you can expect from D is Sequential Consistency--nothing fancy like C++ weak atomics. But that's for the better. The complex part is for library writers who have very demanding needs. Unfortunately, I have to describe the whole shebang in my blog, otherwise people won't believe that the system is workable and that it satisfies their high expectations. Yeah, I'm waiting for more details, like which fences are introduced by the lockless SC requirements. The high-scale-lib is virtually fence free. I don't have much to say about that because it's a known problem and it has already been solved in Java. I can tell you what is required on x86: use xchg for writes, and that's all. I think Walter has already implemented it, because he asked me the same question.
May 28 2009
Bartosz Milewski Wrote: Jason House Wrote: Far from it! I'm stumbling through in an attempt to teach myself the black art. I'm probably in my 3rd coding of the project. The first incarnation had no threads. The 2nd used message passing. The current one is lockless, but still a work in progress. Bartosz Milewski Wrote: My hobby project is a multi-threaded game-playing AI. My current scheme uses a shared search tree with lockless updates of search results. Besides the general ability to use your scheme for what I've already done, I'm also interested in how to overhaul the garbage collector and implement lockless hashtables (see high-scale-lib on sf.net). I see, you're a hardcore lock-free programmer. All you can expect from D is Sequential Consistency--nothing fancy like C++ weak atomics. But that's for the better. What about cmpxchg (AKA compare-and-swap)? It occurs in a lot of algorithms. Also, "lock inc" is fundamental to my use of lockless variables. I don't have much to say about that because it's a known problem and it has already been solved in Java. I can tell you what is required on x86: use xchg for writes, and that's all. I think Walter has already implemented it, because he asked me the same question. The complex part is for library writers who have very demanding needs. Unfortunately, I have to describe the whole shebang in my blog, otherwise people won't believe that the system is workable and that it satisfies their high expectations. Yeah, I'm waiting for more details, like which fences are introduced by the lockless SC requirements. The high-scale-lib is virtually fence free.
May 28 2009
Jason House Wrote: Are you sure it's worth the effort? It's extremely hard to get lock-free right, and it often doesn't offer as much speedup as you'd expect. Well, in D it might, because it still doesn't use thin locks. I see, you're a hardcore lock-free programmer. All you can expect from D is Sequential Consistency--nothing fancy like C++ weak atomics. But that's for the better. Far from it! I'm stumbling through in an attempt to teach myself the black art. I'm probably in my 3rd coding of the project. The first incarnation had no threads. The 2nd used message passing. The current one is lockless, but still a work in progress. What about cmpxchg (AKA compare-and-swap)? It occurs in a lot of algorithms. Also, "lock inc" is fundamental to my use of lockless variables. These will either be implemented in the library (inline assembly) or as compiler intrinsics. It's not hard.
May 30 2009
Reply to Jason, My hobby project is a multi-threaded game-playing AI. My current scheme uses a shared search tree with lockless updates of search results. As in threaded min-max? Have you got anything working? I know from experience that this one's a cast-iron SOB. http://arrayboundserror.blogspot.com/search/label/min%20max
May 28 2009
BCS Wrote: Reply to Jason, My hobby project is a multi-threaded game-playing AI. My current scheme uses a shared search tree with lockless updates of search results. As in threaded min-max? Have you got anything working? I know from experience that this one's a cast-iron SOB. http://arrayboundserror.blogspot.com/search/label/min%20max No. Min-max is only good for theory. I'm also not doing alpha-beta, which is successful in chess. I'm doing UCT and mostly aim to play the game of "go". UCT uses statistical bounds instead of hard heuristics. It also has less uniform tree exploration.
May 28 2009
Leandro Lucarella wrote: I would like D2 better if it was focused on macros, for example. Can you elaborate on this? I think of the word macro as a C preprocessor feature, which is no longer needed in D.
May 28 2009
On Thu, 28 May 2009 19:59:00 +0400, Tim Matthews <tim.matthews7 gmail.com> wrote: Leandro Lucarella wrote: I would like D2 better if it was focused on macros, for example. Can you elaborate on this? I think of the word macro as a C preprocessor feature, which is no longer needed in D. I believe he is talking about AST macros, which are postponed until D3 because the current focus has shifted to concurrency.
May 28 2009
Denis Koroskin wrote: On Thu, 28 May 2009 19:59:00 +0400, Tim Matthews <tim.matthews7 gmail.com> wrote: Leandro Lucarella wrote: I would like D2 better if it was focused on macros, for example. Can you elaborate on this? I think of the word macro as a C preprocessor feature, which is no longer needed in D. I believe he is talking about AST macros, which are postponed until D3 because the current focus has shifted to concurrency. OK, thanks, I see now, because macros have that extra flexibility over templates/mixins. Very useful, and I agree; so is parallelism/concurrency.
May 28 2009
Hello Tim, Leandro Lucarella wrote: I would like D2 better if it was focused on macros, for example. Can you elaborate on this? I think of the word macro as a C preprocessor feature, which is no longer needed in D. AST macros. Look up the talk by Walter et al. from the D conference.
May 28 2009
== Quote from Leandro Lucarella (llucax gmail.com)'s article: Jason House, el 28 de mayo a las 08:45 me escribiste: That was basically the complaint about the const design for D2, and it did end up being simplified. I also think it would have been simplified further if anyone knew how to do so without losing any required functionality. Regarding the shared proposal so far, I think D will always support sharing memory across processes, so the issue is really where to post the sign that says "here be monsters." Bartosz has come up with a model that would provide complete (?) verifiable data integrity, and therefore makes the domain of "safe" shared-memory programming as large as possible (deadlocks aside, of course). However, the overarching question in my mind is whether we really want to build so much support into the language for something that is intended to be used sparingly at best. I can just see someone saying "so you have all these new keywords and all this stuff and you're saying that despite all this I'm really not supposed to use any of it?" This is an area where community feedback would be very valuable, I'd think. Maybe people are waiting for Walter to go through all the hard work of implementing this stuff before complaining that it's crap and proclaiming what Walter should have done in the first place? No, I don't see any point in saying what I said above, because I don't think anything will change. If I didn't like some little detail, that could be worth discussing because it would have some chance of changing Walter/Bartosz's mind, but saying "I think the whole model is way too complex" doesn't help much IMHO =)
May 28 2009
Denis Koroskin: I believe he is talking about AST macros that are postponed until D3 because current focus has shifted to concurrency.< I think shifting to concurrent programming is now the right choice; all other modern languages are doing the same, because people have more and more cores sleeping in their computers. But data-parallelism too needs more care/focus in D (I have discussed it at length when I presented the Chapel language, for example, and elsewhere). So far Bartosz has not discussed this very large (and important for future D users) topic enough. Those things are also much easier for me to understand and use (and I think for other people too). Thread/Actor/Agent/etc. parallelism alone is NOT going to be enough for the numeric computing community (or for my numeric needs). Some support for data-parallelism is currently probably more important than macros for D2. Bye, bearophile
May 28 2009
Andrei Alexandrescu Wrote: Second, there is no regard to language integration. Bartosz says syntax doesn't matter and that he's flexible, but what that really means is that no attention has been paid to language integration. There is more to language integration than just syntax (and even syntax is an important part of it). It's not that bad. I actually wrote the examples in D and then replaced !() with angle brackets to make them readable to non-D programmers. BTW, Scala doesn't use angle brackets. It uses square brackets [] for template arguments and parens () for array access. Interesting choice.
May 28 2009