
digitalmars.D - The future of concurrent programming

reply Henrik <zodiachus gmail.com> writes:
Today's rant on Slashdot is about parallel programming and why support for
multiple cores is only rarely seen in programs. There are a lot of different
opinions on why we haven't seen a veritable rush to adopt parallelized
programming strategies, some of which include:

* Multiple cores haven't been available/affordable all that long, programmers
just need some time to catch up.
* Parallel programming is hard to do (as we lack the proper programming tools
for it). We need new concepts, new tools, or simply a new generation of
programming languages created to handle parallelization from the start.
* Parallel programming is hard to do (as we tend to think in straight lines,
lacking the proper cognitive faculties to parallelize problem solving). We must
accept that this is an inherently difficult thing for us, and that there never
will be an easy solution.
* We have both the programming tools needed and the cognitive capacity to deal
with them, only the stupidity of the current crop of programmers or their
inability to adapt stands in the way. Wait a generation and the situation will
have sorted itself out.

I know concurrent programming has been a frequent topic in the D community
forums, so I would be interested to hear the community's opinions on this. What
will the future of parallel programming look like?
Are new concepts and tools that support parallel programming needed, or just a
new way of thinking? Will the old-school programming languages fade away, as
some seem to suggest, to be replaced by HOFLs (Highly Optimized Functional
Languages)? Where will/should D be in all this? Is it a doomed language if it
doesn't incorporate an efficient way of dealing with this (natively)?


Link to TFA: http://developers.slashdot.org/developers/07/05/29/0058246.shtml


/// Henrik
May 29 2007
next sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Henrik wrote:
 Today's rant on Slashdot is about parallel programming and why the support for
multiple cores in programs is only rarely seen. [...]
 
 Link to TFA: http://developers.slashdot.org/developers/07/05/29/0058246.shtml
 
 /// Henrik
I think it's a combination of a lot of things.

Firstly, our languages suck. I know about Erlang, but Erlang is an alien language to most of us: not just the weird syntax, but it's "functional", too. Tim Sweeney once remarked that Haskell would be a god-send for game programming if only they got rid of the weird syntax[1].

Secondly, people think very linearly. Some people break the mould and seem to do well thinking in parallel, but it's damned hard. The fact that, up until now, most of us were working with single-core CPUs, which meant there was little point in using parallelism for performance reasons, isn't helping things either.

Let's not forget that our tools suck, too. I've tried to debug misbehaving multithreaded code before; I now avoid writing MT code wherever possible.

I think the comment about programmers being stupid is just wrong. People only learn what they're taught (either by someone else or by themselves). If they're never taught how to write good parallel code, you can't suddenly expect them to turn around and start doing so. Hell, after four and a half years of university, I've never needed to write a single line of MT code for a subject. Does that mean I'm stupid?

Before programmers can really start to get into parallel code, I think several things have to happen. First, we need some new concepts for talking about parallel code. Hell, maybe they already exist; but until they're widely used by programmers, they may as well not. Second, we need a good, efficient C-style language to implement them and demonstrate how to use them and why they're useful. It being C-style is absolutely critical; look how many people *haven't* switched over to "superior" languages like Erlang and Haskell. My money is on "it is different and scary" being the primary reason. We also need better tools for things like debugging.

So yeah; I think concurrent/parallel programming *is* too hard. Programmers aren't omniscient; just because you throw more cores at us doesn't mean we automatically know how to use them :P

</$0.02>

-- Daniel

[1] And change it so that it wasn't lazily evaluated, at least by default.

-- 
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D
i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP
http://hackerkey.com/
May 29 2007
prev sibling next sibling parent freeagle <dalibor.free gmail.com> writes:
Henrik wrote:
 Today's rant on Slashdot is about parallel programming and why the support for
multiple cores in programs is only rarely seen. [...]
 
 /// Henrik
I think current languages are enough for concurrent programming. I'm planning to work on a project that will use concurrency from the ground up (it will be a multimedia library).

I'm also interested in game/graphics development. I've read a few articles that talk about using multiple threads and multi-core CPUs in such environments. The ideas published were based on the current generation of programming languages, and they provide methods/approaches for coding a multi-threaded game engine whose performance rises nearly linearly with additional CPU cores.

So I think the problem with MT applications is that people haven't yet adapted to thinking in parallel. They don't divide the problem correctly into parts that can be executed concurrently. Therefore they use a lot of locking mechanisms, which lead to other, and harder to solve, problems, like deadlocks. What may be lacking are not features in current programming languages, but tools that would help with designing such applications.

freeagle
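freeagle's point about dividing the problem so that locks aren't needed at all can be sketched quite directly. A minimal illustration (Python for brevity; `parallel_sum` and its parameters are illustrative, not from any post in this thread):

```python
import threading

def parallel_sum(data, parts=4):
    """Split the work so each thread owns its slice and its own result
    slot: with no shared mutable state, no locks are needed."""
    results = [0] * parts

    def work(i):
        results[i] = sum(data[i::parts])  # each thread writes only slot i

    threads = [threading.Thread(target=work, args=(i,)) for i in range(parts)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)
```

The decomposition, not the language, is doing the work here: each thread's inputs and outputs are disjoint, so the deadlock question never arises.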
May 29 2007
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
Henrik wrote:
 
 I know concurrent programming has been a frequent topic in the D community
forums, so I would be interested to hear the community's opinions on this. What
will the future of parallel programming look like?
 Are new concepts and tools that support parallel programming needed, or just a
new way of thinking? Will the old-school programming languages fade away, as
some seem to suggest, to be replaced by HOFLs (Highly Optimized Functional
Languages)? Where will/should D be in all this? Is it a doomed language if it
doesn't incorporate an efficient way of dealing with this (natively)?
It won't be via explicit threading, mutexes, etc. I suspect that will largely be left to library programmers and people who have very specific requirements.

I'm not sure that we've seen the new means of concurrent programming yet, but there are a lot of options which have the right idea (some of which are 40 years old). For now, I'd be happy with a version of CSP that works in-process as easily as it does across a network (this has come up in the Tango forums in the past, but we've all been too busy with the core library to spend much time on such things).

Transactions are another idea, though the common implementation of software transactional memory (cloning objects and such) isn't really ideal. I think this will initially be most useful for fairly low-level work--kind of an LL/SC on steroids.

Sean
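The in-process CSP style Sean wishes for can be sketched with a pair of queues standing in for channels. This is a hedged illustration only (Python for brevity; `worker` and `square_all` are made-up names, and real CSP implementations add channel selection, typing, and network transparency):

```python
import queue
import threading

def worker(inbox, outbox):
    """A CSP-style process: it shares no state and communicates only by
    receiving and sending messages over its channels."""
    while True:
        item = inbox.get()
        if item is None:           # sentinel value: shut down
            break
        outbox.put(item * item)    # do some work, send the result back

def square_all(values):
    """Run a worker 'process' and exchange messages with it over channels."""
    inbox, outbox = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(inbox, outbox))
    t.start()
    for v in values:
        inbox.put(v)
    inbox.put(None)
    t.join()
    return [outbox.get() for _ in values]
```

Because the worker never touches the caller's data directly, there is nothing to lock; the channel is the only synchronization point.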
May 29 2007
parent reply Mike Capp <mike.capp gmail.com> writes:
== Quote from Sean Kelly (sean f4.ca)'s article

 Transactions are another idea, though the common
 implementation of software transactional memory
 (cloning objects and such) isn't really ideal.
Would genuine compiler guarantees regarding const (or invariant, or final, or whatever it's called today) reduce the need for cloning?
May 29 2007
parent reply "David B. Held" <dheld codelogicconsulting.com> writes:
Mike Capp wrote:
 == Quote from Sean Kelly (sean f4.ca)'s article
 
 Transactions are another idea, though the common
 implementation of software transactional memory
 (cloning objects and such) isn't really ideal.
Would genuine compiler guarantees regarding const (or invariant, or final, or whatever it's called today) reduce the need for cloning?
Word-based STM doesn't require cloning except when necessary to preserve logical consistency, and then it doesn't require whole-object cloning. On the other hand, it may not always be as efficient because it only knows about words and not objects. It's all a trade-off. Dave
May 29 2007
parent Brad Roberts <braddr puremagic.com> writes:
David B. Held wrote:
 Mike Capp wrote:
 == Quote from Sean Kelly (sean f4.ca)'s article

 Transactions are another idea, though the common
 implementation of software transactional memory
 (cloning objects and such) isn't really ideal.
Would genuine compiler guarantees regarding const (or invariant, or final, or whatever it's called today) reduce the need for cloning?
Word-based STM doesn't require cloning except when necessary to preserve logical consistency, and then it doesn't require whole-object cloning. On the other hand, it may not always be as efficient because it only knows about words and not objects. It's all a trade-off. Dave
Objects (or memory locations) that aren't changing don't get cloned. Constants are a stronger case of something not changing (because it can't by language rules). So, const (or invariant, or final) really doesn't assist in STM in any way.
May 29 2007
prev sibling next sibling parent reply Pragma <ericanderton yahoo.removeme.com> writes:
Henrik wrote:
 Today's rant on Slashdot is about parallel programming and why the support for
multiple cores in programs is only rarely seen. [...]
 
 /// Henrik
The way I've often thought of it is that we're lacking the higher-level constructs needed to take advantage of what modern processors have to offer. My apologies for not offering an exact solution, but rather my feelings on the matter. Who knows, maybe someone already has a syntax for what I'm attempting to describe?

I liken the problem to the way that OOP redefined how we build large-scale systems. The change was so profound that it would be difficult and cumbersome to use a purely free-function design past a certain degree of complexity. Likewise, with parallelism, we're still kind of at the free-function level with semaphores, mutexes and threads. Concepts like "transactional memory" are on the right path, but there's more to it than that.

What is needed is something "higher level" that is easily grokked by the programmer, yet just as optimizable by the compiler. Something like an "MT package definition" that allows us to bind code and data to a certain heap, processor, thread priority or whatever, so that parallelism happens in a controlled yet abstract way. Kind of like what the GC has done for eliminating calls to delete()/free(), such a scheme should free our hands and minds in a similar way.

The overall idea I have is to whisper to the compiler about the kinds of things we'd like to see, instead of working with so much minutia all the time. Let the compiler worry about how to cross heap boundaries and insert semaphores/mutexes/queues/whatever when contexts mix; it's make-work, and error-prone stuff, which is what the compiler is for.

Now while you could do this stuff with compiler options, I think we need to be far more expressive than "-optimize-the-hell-out-of-it-for-MT"; it needs to be in the language itself. That way you can say things like "these modules are on a transactional heap, for at most 2 processors" and "these modules must have their own heap, and can use n processors", all within the same program.

At the same time, you could also say "parallelize this foreach statement", "single-thread this array operation", or "move this instance into package Foo's heap (whatever that is)". The idea is to say what we really want done, and trust the compiler (and runtime library, complete with multi-heap support and process/thread scheduling) to do it for us.

Sure, you'd loose a lot of fine-grained control with such an approach, but as new processors are produced with exponentially more cores than the generation before, we're going to yearn for something more sledgehammer-like.

-- 
- EricAnderton at yahoo
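"Parallelize this foreach statement" is roughly the shape that later library-level constructs took: the caller states *what* should run in parallel and a pool decides how. A minimal sketch under that assumption (Python for brevity; `parallel_foreach` is a hypothetical name, not an API from this thread):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_foreach(func, items, workers=4):
    """Apply func to every item; the pool decides how iterations are
    spread across threads, and the result order is preserved."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))
```

Note that the loop body carries no threading detail at all, which is exactly the "whisper to the compiler" division of labor Pragma describes, just done by a library.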
May 29 2007
next sibling parent Pragma <ericanderton yahoo.removeme.com> writes:
Pragma wrote:
 Sure, you'd loose a lot of fine-grained control with such an approach, 
Erm.. rather /lose/ a lot of control. -- - EricAnderton at yahoo
May 29 2007
prev sibling parent reply freeagle <dalibor.free gmail.com> writes:
Why do people think there is a need for another language/paradigm to
solve the concurrency problem? OSes have dealt with parallelism for decades,
without special-purpose languages; just plain C and C++. Just check Task
Manager in Windows and you'll notice there are 100+ threads running.
If Microsoft can manage it with current languages, why can't we?

freeagle
May 29 2007
next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
freeagle wrote:
 Why do people think there is a need for another language/paradigm to
 solve the concurrency problem? OSes have dealt with parallelism for decades,
 without special-purpose languages; just plain C and C++. Just check Task
 Manager in Windows and you'll notice there are 100+ threads running.
 If Microsoft can manage it with current languages, why can't we?
 
 freeagle
We can; it's just hard as hell and thoroughly unenjoyable.

Like I said before: I can and have written multithreaded code, but it's so utterly painful that I avoid it wherever possible.

It's like trying to wash a car with a toothbrush and one of those giant novelty foam hands. Yeah, you could do it, but wouldn't it be really nice if someone would go and invent the sponge and wash-cloth?

-- Daniel

-- 
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D
i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP
http://hackerkey.com/
May 29 2007
parent reply Regan Heath <regan netmail.co.nz> writes:
Daniel Keep Wrote:
 freeagle wrote:
 Why do people think there is a need for another language/paradigm to
 solve the concurrency problem? OSes have dealt with parallelism for decades,
 without special-purpose languages; just plain C and C++. Just check Task
 Manager in Windows and you'll notice there are 100+ threads running.
 If Microsoft can manage it with current languages, why can't we?
 
 freeagle
We can; it's just hard as hell and thoroughly unenjoyable. Like I said before: I can and have written multithreaded code, but it's so utterly painful that I avoid it wherever possible.
I must be strange then because after 5+ years of multithreaded programming it's the sort I prefer to do. Each to their own I guess.

I think perhaps it's something that can be learnt, but it takes a bit of time, similar in fact to learning to program in the first place. I enjoy the challenge of it and I think once you understand the fundamental problems/rules/practices of multithreaded development it becomes almost easy, almost.
 It's like trying to wash a car with a toothbrush and one of those giant
 novelty foam hands.  Yeah, you could do it, but wouldn't it be really
 nice if someone would go and invent the sponge and wash-cloth?
I think with some higher-level constructs it becomes easier. For example, one of the main problems you face is deadlocks. A common cause of a deadlock is:

class A {
    void foo() {
        synchronized(this) { ... }
    }
}

void main() {
    A a = new A();
    synchronized(a) {
        a.foo();
    }
}

This causes a deadlock if you cannot lock the same object twice, even from the same thread. In my previous job we used a mutex object that allowed the same thread to lock the same object any number of times, counting the number of lock calls and requiring an equal number of unlock calls. This idea makes life a lot easier. No more deadlocks of this type.

That just leaves the deadlock you get when you say:

synchronized(a) { synchronized(b) { .. } }

and in another thread:

synchronized(b) { synchronized(a) { .. } }

Given the right (or rather wrong) timing this can result in a deadlock of both threads. This situation is less common simply because it's less common for 2 blocks of code in 2 different threads to need 2 or more mutexes _at the same time_. Or, at least, that is my experience.

In my previous job we went so far as to intentionally terminate the process if a deadlock was detected, and could then give the exact file:line of the last lock request, making it fairly trivial to debug. We could do this because our server process had a 2nd process watching over it, restarting it at a moment's notice.

It's things like these which make multithreaded programming much easier for developers to get their heads around.

I wonder what D's synchronized statement does? Does it allow multiple locks from the same thread? It should, especially given that it is impossible to forget the unlock call (that in itself is a great boon to multithreaded development).

Another good idea is to provide a ThreadPool construct; do we have one of those floating around (pun intended)? The idea being that when you need a thread you ask the pool for one and it supplies it, then once you're done with it you release it back into the pool for the next piece of code to pick it up and run with it (get it, run with it, like a ball... well I thought it was funny).

Just a few ideas. Like several people have posted, the next great idea is probably waiting to be thought up! But in the meantime we can make life a little easier, one step at a time.

Regan Heath
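The counting (re-entrant) mutex Regan describes fits in a few lines. A sketch, assuming nothing beyond what the post states (Python here; its built-in `threading.RLock` already behaves this way, and `CountingLock` is just an illustrative wrapper that exposes the count):

```python
import threading

class CountingLock:
    """Re-entrant mutex: the owning thread may lock it repeatedly, and it
    is only released after an equal number of unlock calls."""
    def __init__(self):
        self._lock = threading.RLock()
        self.depth = 0          # lock calls minus unlock calls by the owner

    def lock(self):
        self._lock.acquire()
        self.depth += 1

    def unlock(self):
        self.depth -= 1
        self._lock.release()

# The same thread locking twice does not deadlock:
m = CountingLock()
m.lock()
m.lock()        # would deadlock here with a plain, non-recursive mutex
m.unlock()
m.unlock()
```

This removes the self-deadlock case entirely; the two-mutex ordering deadlock discussed next still needs a separate discipline.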
May 29 2007
next sibling parent Regan Heath <regan netmail.co.nz> writes:
Some ideas/terms for those that are interested:

http://en.wikipedia.org/wiki/Lock_%28computer_science%29
http://en.wikipedia.org/wiki/Mutual_exclusion
http://en.wikipedia.org/wiki/Critical_section
http://en.wikipedia.org/wiki/Semaphore_%28programming%29
http://en.wikipedia.org/wiki/Spinlock
http://en.wikipedia.org/wiki/Seqlock

as you can see there are many ways to implement the humble 'synchronized'
statement.  I wonder which D uses?

Regan
May 29 2007
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
Regan Heath wrote:
 Daniel Keep Wrote:

 We can; it's just hard as hell and thoroughly unenjoyable.  Like I said
 before: I can and have written multithreaded code, but it's so utterly
 painful that I avoid it wherever possible.
I must be strange then because after 5+ years of multithreaded programming it's the sort I prefer to do. Each to their own I guess.
Same here. That said, I still believe this is something the user shouldn't generally have to think about. Let's say you want to sort a large array. Is it better to do so using a single thread or multiple threads? What if the app must adapt to use whatever resources are available, be that 1 CPU or 16 CPUs?

We are quickly heading towards an area where few competent multithreaded programmers even have experience. It's not terribly difficult to target 2-4 CPUs, because the number is sufficiently small that multi-threaded programs still look much like single-threaded programs. But what if the target machine has 16 CPUs? 32? The more parallel machines get, the worse explicit multi-threading fits.

Eventually, I will want the compiler/VM to figure most of it out for me, or at least let me explicitly designate functions as atomic, etc. Cilk is a decent example of how a C-like language could be adapted for multi-threading, but I still consider it an example of yesterday's solution, not tomorrow's.

causes a deadlock above because you cannot lock the same object twice, even
from the same thread.
Locks are recursive in D (thank goodness). But that only solves one of the two problems you mention.
 In my previous job we used a mutex object that allowed the same thread to lock
the same object any number of times, counting the number of lock calls and
requiring an equal number of unlock calls.  This idea makes life a lot easier. 
No more deadlocks of this type.
Yup, this is how D works.
 That just leaves the deadlock you get when you say:
 
 synchronized(a) { synchronized(b) { .. } }
 
 and in another thread:
 
 synchronized(b) { synchronized(a) { .. } }
 
 Given the right (or rather wrong) timing this can result in a deadlock of both
threads.  This situation is less common simply because it's less common for 2
blocks of code in 2 different threads to need 2 or more mutexes _at the same
time_.  Or, at least, that is my experience.
This is why some people (like Herb Sutter) say that object-oriented programming is inherently incompatible with explicit multi-threading. Any call into unknown code risks deadlock, and the whole point of OOP is generalizing problems in a manner that requires calling into unknown code. It's no wonder that message passing (as with CSP) seems to be gaining traction.
 I wonder what D's synchronized statement does?  Does it allow multiple locks
from the same thread?  It should, especially given that it is impossible to
forget the unlock call (that in itself is a great boon to multithreaded
development).
Yes it does.
 Another good idea is to provide a ThreadPool construct, do we have one of
those floating around (pun intended).  The idea being that when you need a
thread you ask the pool for one and it supplies it, then once you're done with
it you release it back into the pool for the next piece of code to pick it up
and run with it (get it, run with it, like a ball... well I thought it was
funny).
Tango will almost certainly have one prior to its 1.0 release. It already has a ThreadGroup object, but this is more for simply grouping multiple threads than for providing a general means of performing async tasks. I'm not yet sure how extensive the multi-threading support added to Tango by 1.0 will be, but definitely more than it has now.

Sean
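A pool along the lines Regan and Sean discuss can be hand-rolled in a dozen lines. A sketch only (Python; the eventual Tango ThreadPool may look nothing like this, and `ThreadPool`/`submit` here are illustrative names):

```python
import queue
import threading

class ThreadPool:
    """Minimal pool: worker threads block on a shared task queue, so a
    caller 'borrows' a thread simply by submitting work to it."""
    def __init__(self, size=4):
        self._tasks = queue.Queue()
        for _ in range(size):
            threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            func, args, done = self._tasks.get()
            func(*args)
            done.set()              # signal completion to the caller

    def submit(self, func, *args):
        done = threading.Event()
        self._tasks.put((func, args, done))
        return done                 # the caller may wait on this
```

The "release it back into the pool" step happens implicitly: when a worker finishes a task it simply loops back to the queue for the next one.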
May 29 2007
parent Regan Heath <regan netmail.co.nz> writes:
Sean Kelly Wrote:
 Regan Heath wrote:
 Daniel Keep Wrote:
>>>
 We can; it's just hard as hell and thoroughly unenjoyable.  Like I said
 before: I can and have written multithreaded code, but it's so utterly
 painful that I avoid it wherever possible.
I must be strange then because after 5+ years of multithreaded programming it's the sort I prefer to do. Each to their own I guess.
Same here. That said, I still believe this is something the user shouldn't generally have to think about. Let's say you want to sort a large array. Is it better to do so using a single thread or multiple threads? What if the app must adapt to use whatever resources are available, be that 1 CPU or 16 CPUs? We are quickly heading towards an area where few competent multithreaded programmers even have experience. It's not terribly difficult to target 2-4 CPUs because the number is sufficiently small that multi-threaded programs still look much like single-threaded programs. But what if the target machine has 16 CPUs? 32? The more parallel machines get the worse explicit multi-threading fits. Eventually, I will want the compiler/VM to figure most of it out for me, or at least let me explicitly designate functions as atomic, etc. Cilk is a decent example of how a C-like language could be adapted for multi-threading, but I still consider it an example of yesterday's solution, not tomorrow's.
I can see your point... it's definitely something to think about.
 Given the right (or rather wrong) timing this can result in a deadlock of both
threads.  This situation is less common simply because it's less common for 2
blocks of code in 2 different threads to need 2 or more mutexes _at the same
time_.  Or, at least, that is my experience.
This is why some people (like Herb Sutter) say that object-oriented programming is inherently incompatible with explicit multi-threading. Any call into unknown code risks deadlock, and the whole point of OOP is generalizing problems in a manner that requires calling into unknown code. It's no wonder that message passing (as with CSP) seems to be gaining traction.
I need to do some reading about CSP and message passing. I haven't poked my head outside my comfy shell for a good while now.

In the case I mention above you can at least solve it by giving each mutex an id, or priority. Upon acquisition you ensure that no other mutex of lower priority is currently held; if one is, you release both and re-acquire in the correct order (high to low or low to high, whichever you decide; all that matters is that there is an order defined and adhered to in all cases).

Another higher-level construct to consider for Tango perhaps?
 I wonder what D's synchronized statement does?  Does it allow multiple locks
from the same thread?  It should, especially given that it is impossible to
forget the unlock call (that in itself is a great boon to multithreaded
development).
Yes it does.
Ahh, good to know. I suspected it would but was a bit lazy by not testing it before I posted, thanks for the confirmation.
 Another good idea is to provide a ThreadPool construct, do we have one of
those floating around (pun intended).  The idea being that when you need a
thread you ask the pool for one and it supplies it, then once you're done with
it you release it back into the pool for the next piece of code to pick it up
and run with it (get it, run with it, like a ball... well I thought it was
funny).
Tango will almost certainly have one prior to its 1.0 release. It already has a ThreadGroup object, but this is more for simply grouping multiple threads than it is for providing a general means for performing async. tasks. I'm not yet sure just how extensive multi-threading support will be added to Tango by 1.0, but definitely more than it has now.
Good to know. I'm just about to start a new job, but once I settle in I might have some time to help out, if you want any. I have sooo many hobbies (the latest of which is snooker!) that I don't want to promise anything.

Regan Heath
May 29 2007
prev sibling next sibling parent reply BCS <BCS pathlink.com> writes:
Regan Heath wrote:
 That just leaves the deadlock you get when you say:
 
 synchronized(a) { synchronized(b) { .. } }
 
 and in another thread:
 
 synchronized(b) { synchronized(a) { .. } }
 
What D needs is:

synchronized(a, b) // locks both a and b, but not until it can get both

Now what about when the locks are in different functions.... :b
May 29 2007
parent Regan Heath <regan netmail.co.nz> writes:
BCS Wrote:
 Regan Heath wrote:
 That just leaves the deadlock you get when you say:
 
 synchronized(a) { synchronized(b) { .. } }
 
 and in another thread:
 
 synchronized(b) { synchronized(a) { .. } }
 
What D needs is: synchronized(a, b) // locks both a and b, but not until it can get both. Now what about when the locks are in different functions.... :b
Exactly. In my reply to Sean I mentioned a possible solution which is perhaps more robust and flexible:

<quote me>
In the case I mention above you can at least solve it by giving each mutex an id, or priority. Upon acquisition you ensure that no other mutex of lower priority is currently held; if one is, you release both and re-acquire in the correct order (high to low or low to high, whichever you decide; all that matters is that there is an order defined and adhered to in all cases).
</quote>

In other words you solve it by defining an order of acquisition in the implementation itself, so the programmer cannot make that mistake.

Regan Heath
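Regan's ordering scheme is easy to sketch: pick any total order on the mutexes (object identity will do) and always acquire in that order, so two threads needing the same pair can never hold them in opposite orders. A hedged illustration (Python; `lock_in_order` and `unlock_all` are hypothetical helper names):

```python
import threading

def lock_in_order(*locks):
    """Acquire every lock in one global order (here: by object id), which
    rules out the opposite-order deadlock between two threads."""
    for lk in sorted(locks, key=id):
        lk.acquire()

def unlock_all(*locks):
    """Release in the reverse of the acquisition order."""
    for lk in sorted(locks, key=id, reverse=True):
        lk.release()
```

Because the order is computed inside the helper, callers can name the locks in any order; the "synchronized(a, b)" syntax BCS proposes could be implemented the same way under the hood.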
May 29 2007
prev sibling parent James Dennett <jdennett acm.org> writes:
Regan Heath wrote:
 Daniel Keep Wrote:
 freeagle wrote:
 Why do people think there is a need for another language/paradigm to
 solve concurrent problem? OSes deal with parallelism for decades,
 without special purpose languages. Just plain C, C++. Just check Task
 manager in windows and you'll notice there's about 100+ threads running.
 If microsoft can manage it with current languages, why we cant?

 freeagle
We can; it's just hard as hell and thoroughly unenjoyable. Like I said before: I can and have written multithreaded code, but it's so utterly painful that I avoid it wherever possible.
I must be strange then because after 5+ years of multithreaded programming it's the sort I prefer to do. Each to their own I guess. I think perhaps it's something that can be learnt, but it takes a bit of time, similar in fact to learning to program in the first place. I enjoy the challenge of it and I think once you understand the fundamental problems/rules/practices with multithreaded development it becomes almost easy, almost.
It seems that most people, on gaining a deep understanding of multi-threaded programming and concurrent design, find that it is hugely more complicated to do well than designs which do not need concurrency, in most situations. They also find that making effective use of a large number of processors is a very difficult problem (except in the case of so-called "embarrassingly parallel" tasks). Many problems split into a number of naturally parallelizable parts, and exploiting that isn't very hard, but efficiency and true scalability is a lot more work than just creating some threads and using message passing and/or synchronization for shared state. I've seen a lot of code written by a lot of professionals, and the multi-threaded code generally has close to an order of magnitude more defects than the single-threaded code. You may actually be proficient, but sadly most of them also think that they are proficient. The better ones tend to be very wary of concurrency -- not that they avoid it, but they take great care when working with parallelism. -- James
May 29 2007
prev sibling next sibling parent reply Jeff Nowakowski <jeff dilacero.org> writes:
freeagle wrote:
 Why do people think there is a need for another language/paradigm to 
 solve concurrent problem? OSes deal with parallelism for decades, 
 without special purpose languages. Just plain C, C++. Just check Task 
 manager in windows and you'll notice there's about 100+ threads running.
Why limit yourself to hundreds of threads when you can have thousands? http://www.sics.se/~joe/apachevsyaws.html -Jeff
May 29 2007
parent reply Sean Kelly <sean f4.ca> writes:
Jeff Nowakowski wrote:
 freeagle wrote:
 Why do people think there is a need for another language/paradigm to 
 solve concurrent problem? OSes deal with parallelism for decades, 
 without special purpose languages. Just plain C, C++. Just check Task 
 manager in windows and you'll notice there's about 100+ threads running.
Why limit yourself to hundreds of threads when you can have thousands?
Because context switching is expensive. Running thousands of threads on a system with only a few CPUs may use more time simply switching between threads than it does executing the thread code. Sean
May 29 2007
next sibling parent BCS <BCS pathlink.com> writes:
Sean Kelly wrote:
 Jeff Nowakowski wrote:
 
 freeagle wrote:

 Why do people think there is a need for another language/paradigm to 
 solve concurrent problem? OSes deal with parallelism for decades, 
 without special purpose languages. Just plain C, C++. Just check Task 
 manager in windows and you'll notice there's about 100+ threads running.
Why limit yourself to hundreds of threads when you can have thousands?
Because context switching is expensive. Running thousands of threads on a system with only a few CPUs may use more time simply switching between threads than it does executing the thread code. Sean
Why burn cycles on the context switch? If the CPU had a "back door" to swap the register values out to a second bank of registers, then a context switch could run in the time it takes to drain and refill the internal pipelines. This would require a separate control system to manage the scheduling, but that would have some interesting uses in and of itself (drop user/kernel mode in favor of user/kernel CPUs).
May 29 2007
prev sibling parent reply Jeff Nowakowski <jeff dilacero.org> writes:
Sean Kelly wrote:
 Because context switching is expensive.  Running thousands of threads on 
 a system with only a few CPUs may use more time simply switching between 
 threads than it does executing the thread code.
Did you look at the throughput in the graph I linked to? Erlang has its own concept of threads that don't map directly to the OS. Look again: http://www.sics.se/~joe/apachevsyaws.html -Jeff
May 29 2007
parent reply Sean Kelly <sean f4.ca> writes:
Jeff Nowakowski wrote:
 Sean Kelly wrote:
 Because context switching is expensive.  Running thousands of threads 
 on a system with only a few CPUs may use more time simply switching 
 between threads than it does executing the thread code.
Did you look at the throughput in the graph I linked to? Erlang has it's own concept of threads that don't map directly to the OS. Look again: http://www.sics.se/~joe/apachevsyaws.html
Sorry, I misunderstood. For some reason I thought you were saying Apache could scale to thousands of threads. In any case, D has something roughly akin to Erlang's threads with Mikola Lysenko's StackThreads and Tango's Fibers. Sean
May 29 2007
parent reply "David B. Held" <dheld codelogicconsulting.com> writes:
Sean Kelly wrote:
 [...]
 Sorry, I misundertood.  For some reason I thought you were saying Apache 
 could scale to thousands of threads.  In any case, D has something 
 roughly akin to Erlang's thread with Mikola Lysenko's StackThreads and 
 Tango's Fibers.
Erlang's threads are better than fibers because they are pre-emptive. However, this is only possible because Erlang runs on a VM. Context-switching in the VM is much cheaper than in the CPU (ironically enough), which means that D isn't going to get near Erlang's threads except on a VM that supports it (somehow I doubt the JVM or CLR come close). Fibers are nice when you don't need pre-emption, but having to think about pre-emption makes the parallelism intrude on your problem-solving, which is what we would like to avoid. Dave
May 29 2007
next sibling parent Paul Findlay <r.lph50+d gmail.com> writes:
 Fibers are nice when you don't need pre-emption, but having to think
 about pre-emption makes the parallelism intrude on your problem-solving,
 which is what we would like to avoid.
Do you know of any good guides or "design patterns" for when using explicit pre-emption? - Paul
May 30 2007
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
David B. Held wrote:
 Sean Kelly wrote:
 [...]
 Sorry, I misundertood.  For some reason I thought you were saying 
 Apache could scale to thousands of threads.  In any case, D has 
 something roughly akin to Erlang's thread with Mikola Lysenko's 
 StackThreads and Tango's Fibers.
Erlang's threads are better than fibers because they are pre-emptive. However, this is only possible because Erlang runs on a VM. Context-switching in the VM is much cheaper than in the CPU (ironically enough), which means that D isn't going to get near Erlang's threads except on a VM that supports it (somehow I doubt the JVM or CLR come close). Fibers are nice when you don't need pre-emption, but having to think about pre-emption makes the parallelism intrude on your problem-solving, which is what we would like to avoid.
If I understand you correctly, I don't think either is a clear win. Preemptive multithreading, be it in a single kernel thread or in multiple kernel threads, requires mutexes to protect shared data. Cooperative multithreading does not, but requires explicit yielding instead. So it's mostly a choice between deadlocks and starvation. However, if the task is "fire and forget" then preemption is a clear win, since that eliminates the need for mutexes, while cooperation still requires yielding. I like that Sun's pthread implementation in Solaris will spawn both user and kernel threads based on the number of CPUs available. It saves the programmer from having to think too much about it, and guarantees a decent distribution of load across available resources. I'm not aware of any other OS that does this though. Sean
May 30 2007
parent reply eao197 <eao197 intervale.ru> writes:
On Wed, 30 May 2007 18:01:26 +0400, Sean Kelly <sean f4.ca> wrote:

  Erlang's threads are better than fibers because they are pre-emptive.  
 However, this is only possible because Erlang runs on a VM.  
 Context-switching in the VM is much cheaper than in the CPU (ironically  
 enough), which means that D isn't going to get near Erlang's threads  
 except on a VM that supports it (somehow I doubt the JVM or CLR come  
 close).
  Fibers are nice when you don't need pre-emption, but having to think  
 about pre-emption makes the parallelism intrude on your  
 problem-solving, which is what we would like to avoid.
If I understand you correctly, I don't think either are a clear win. Preemptive multithreading, be it in a single kernel thread or in multiple kernel threads, require mutexes to protect shared data. Cooperative multithreading does not, but requires explicit yielding instead. So it's mostly a choice between deadlocks and starvation.
AFAIK, there isn't shared data in Erlang -- processes in the Erlang VM (threads, in D terms) communicate with one another by sending and receiving messages. And the message-passing mechanism in the Erlang VM is very efficient. -- Regards, Yauheni Akhotnikau
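The no-shared-data style can be approximated outside Erlang too. A minimal Python sketch, with a Queue standing in for an Erlang mailbox (`echo_process` and the message layout are invented for illustration; real Erlang mailboxes are per-process and support selective receive):

```python
import threading
import queue

# Each "process" owns a private mailbox (a Queue); the only sharing
# between threads is by passing messages through mailboxes.
def echo_process(mailbox):
    while True:
        reply_to, msg = mailbox.get()
        if msg == "stop":
            break
        reply_to.put(("echo", msg))  # reply with a message, never shared state

echo_box = queue.Queue()
my_box = queue.Queue()
t = threading.Thread(target=echo_process, args=(echo_box,))
t.start()
echo_box.put((my_box, "hello"))  # send
reply = my_box.get()             # blocking receive
echo_box.put((None, "stop"))
t.join()
print(reply)  # ('echo', 'hello')
```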
May 30 2007
parent Brad Anderson <brad dsource.org> writes:
eao197 wrote:
 On Wed, 30 May 2007 18:01:26 +0400, Sean Kelly <sean f4.ca> wrote:
 
  Erlang's threads are better than fibers because they are
 pre-emptive. However, this is only possible because Erlang runs on a
 VM. Context-switching in the VM is much cheaper than in the CPU
 (ironically enough), which means that D isn't going to get near
 Erlang's threads except on a VM that supports it (somehow I doubt the
 JVM or CLR come close).
  Fibers are nice when you don't need pre-emption, but having to think
 about pre-emption makes the parallelism intrude on your
 problem-solving, which is what we would like to avoid.
If I understand you correctly, I don't think either are a clear win. Preemptive multithreading, be it in a single kernel thread or in multiple kernel threads, require mutexes to protect shared data. Cooperative multithreading does not, but requires explicit yielding instead. So it's mostly a choice between deadlocks and starvation.
AFAIK, there isn't shared data in Erlang -- processes in the Erlang VM (threads, in D terms) communicate with one another by sending and receiving messages. And the message-passing mechanism in the Erlang VM is very efficient.
This is true, for the most part. However, you can have shared data among Erlang processes. Mnesia, the distributed database system (in-memory, on-disk, or both) that ships with Erlang, is an example of an app that allows shared data. It's fairly battle-tested in regards to locking, dirty reads/writes, etc. BA
May 30 2007
prev sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
David B. Held wrote:
 ...
 Fibers are nice when you don't need pre-emption, but having to think
 about pre-emption makes the parallelism intrude on your problem-solving,
 which is what we would like to avoid.
 
 Dave
Have you seen Stackless Python? It uses cooperative multithreading, but I've never had to write an explicit yield. The way it works is that you do all communication, and all blocking actions, via channels. The channels transfer control of the CPU as well as data, meaning that you don't really need to think about what's going on behind the scenes. It all just kinda works. One of these days, I'll get around to doing stackless in D... -- Daniel -- int getRandomNumber() { return 4; // chosen by fair dice roll. // guaranteed to be random. } http://xkcd.com/ v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP http://hackerkey.com/
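A rough sketch of the idea in plain Python, with tasklets as generators and a toy synchronous `Channel` (all names hypothetical; real Stackless channels are built into its scheduler). Note how a channel operation hands both the value and the CPU to the other side, so no one ever writes an explicit yield-for-scheduling:

```python
from collections import deque

# A channel pairs up blocked senders and receivers; the scheduler
# transfers control whenever both sides are ready.
class Channel:
    def __init__(self):
        self.senders = deque()    # (tasklet, value) waiting to send
        self.receivers = deque()  # tasklets waiting to receive

def scheduler(tasklets):
    ready = deque((t, None) for t in tasklets)
    while ready:
        task, value = ready.popleft()
        try:
            op, chan, *payload = task.send(value)  # run until next channel op
        except StopIteration:
            continue
        if op == "send":
            if chan.receivers:
                ready.append((chan.receivers.popleft(), payload[0]))
                ready.append((task, None))
            else:
                chan.senders.append((task, payload[0]))
        elif op == "recv":
            if chan.senders:
                sender, v = chan.senders.popleft()
                ready.append((task, v))
                ready.append((sender, None))
            else:
                chan.receivers.append(task)

out = []

def producer(ch):
    for i in range(3):
        yield ("send", ch, i)     # blocks until a receiver is ready

def consumer(ch):
    for _ in range(3):
        v = yield ("recv", ch)    # blocks until a sender is ready
        out.append(v)

ch = Channel()
scheduler([consumer(ch), producer(ch)])
print(out)  # [0, 1, 2]
```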
May 30 2007
parent "David B. Held" <dheld codelogicconsulting.com> writes:
Daniel Keep wrote:
 
 David B. Held wrote:
 ...
 Fibers are nice when you don't need pre-emption, but having to think
 about pre-emption makes the parallelism intrude on your problem-solving,
 which is what we would like to avoid.
Have you seen Stackless Python? That uses cooperative multithreading, but I've never had to write an explicit yield. [...]
Well, cooperative multithreading is different from fibers, because fibers are actually coroutines, so yield is almost always explicit. And yes, I think we need all the parallel libraries we can get, so knock yourself out. ;) Dave
May 30 2007
prev sibling next sibling parent Thomas de Grivel <billitch epita.fr> writes:
freeagle Wrote:

 Why do people think there is a need for another language/paradigm to 
 solve concurrent problem? OSes deal with parallelism for decades, 
 without special purpose languages. Just plain C, C++. Just check Task 
 manager in windows and you'll notice there's about 100+ threads running.
 If microsoft can manage it with current languages, why we cant?
Maybe because we may want to parallelize arbitrary bits of code, much like URBI does by introducing statement combinators other than ';'. They introduce syntax like
 whenever (ball.visible) {
   head.rotX += ball.alpha   &   head.rotY += ball.tetha;
 }
Notice how the two statements are combined with '&' and not ';', which means that they are to run *simultaneously*. URBI is event driven: the 'whenever' keyword indicates the block is to be executed every time the statement is true. There are other combinators to force a task to start before another, express mutual exclusion, etc. This is so easy to understand and use that kids can already play with it. URBI was designed to control robots, which implies an event-driven paradigm, but to me it shows that powerful parallelization primitives can be expressed using very simple syntax. I believe much parallel-computing research is still ongoing, but now aimed more at implementation than at theory, which was largely worked out a few decades ago. Parallel-computing primitives are well known to researchers but are just not taught, since until recently parallel hardware was only available to big corporations/universities and large computing clusters. Now every new CPU has multiple cores and we have to introduce parallel-programming concepts into "everyday programming", but I'm sure there's a clever and simple way to achieve it, in the style of garbage collection vs. manual memory handling. Like garbage collection, such primitives are *very* complex tools to design and to understand fully, but they can be made really easy to use, as D's approach to GC shows. -- Thomas de Grivel Epita 2009
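URBI's '&' combinator can be approximated with a hypothetical `par()` helper that runs two closures on threads and joins them; `Head`, `alpha` and `theta` are toy stand-ins for URBI's robot state:

```python
import threading

class Head:
    def __init__(self):
        self.rotX = 0.0
        self.rotY = 0.0

def par(*stmts):
    # '&': run all branches simultaneously; finish when every branch does.
    threads = [threading.Thread(target=s) for s in stmts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

head = Head()
alpha, theta = 1.5, 2.5
# head.rotX += ball.alpha  &  head.rotY += ball.theta
par(lambda: setattr(head, "rotX", head.rotX + alpha),
    lambda: setattr(head, "rotY", head.rotY + theta))
print(head.rotX, head.rotY)  # 1.5 2.5
```

A ';' combinator would just be sequential calls; the point is that parallel composition becomes as cheap to write as sequential composition.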
May 29 2007
prev sibling parent reply Dave <Dave_member pathlink.com> writes:
freeagle wrote:
 Why do people think there is a need for another language/paradigm to 
 solve concurrent problem? OSes deal with parallelism for decades, 
 without special purpose languages. Just plain C, C++. Just check Task 
 manager in windows and you'll notice there's about 100+ threads running.
 If microsoft can manage it with current languages, why we cant?
 
 freeagle
That's a good point... Services and daemons have been running on multi-cpu machines for years, and AFAICT even multi-core machines prior to Intel Core 2 didn't require "a whole new paradigm" [or even a whole new compiler or new libraries]. For language support, Java, its runtime and std. lib. for example were designed with concurrency in mind, and I'm not so sure much more can reasonably be done with an imperative language (although I'm sure some improvements could be made). How will the new Intel chip architecture really drastically change that? I recently read an article that many server applications are already "close to ideally suited" to the new CPU architectures, and "should immediately benefit" from them. (If they can "immediately benefit" from the multi-core designs, then it's not a leap to suppose that these "old" techniques must still hold a good deal of merit. Why re-invent the wheel?). Sure, there are some areas like "lock-free hashtables", threading libs. and such that will need to be done differently to get the _most_ benefit from the new architectures, but I don't think it will or need to go much beyond that (I'm not so sure that other more complex things like Garbage Collection will need to be re-developed to take advantage of multi-core). What I think will need to change for the most part is how _some_ "fat client" applications are developed, but that is becoming less relevant in this era of "thin-client" computing. IIS, Apache, Oracle, SQL Server and the rest take care of most of the concurrent operation worries for us <g> OTOH, maybe multi-core becoming available on a typical client will create a resurgence of demand for "fat-client". - Dave
May 29 2007
parent Sean Kelly <sean f4.ca> writes:
Dave wrote:
 
 How will the new Intel chip architecture really drastically change that? 
 I recently read an article that many server applications are already 
 "close to ideally suited" to the new CPU architectures, and "should 
 immediately benefit" from them.
Most of them are. I think the problem will be more with user apps, particularly games.
 (If they can "immediately benefit" from the multi-core designs, then 
 it's not a leap to suppose that these "old" techniques must still hold a 
 good deal of merit. Why re-invent the wheel?).
Because the traditional means of multi-threading will not scale well to systems with a large number of CPUs (in my opinion). By large I mean at least 16, which people at Intel have said we'll be using within a few years. That doesn't give software developers much lead time to figure out how to easily use all that hardware.
 What I think will need to change for the most part will be how _some_ 
 "fat client" applications are developed, but that is becoming less 
 relevant in this era of "thin-client" computing. IIS, Apache, Oracle, 
 SQL Server and the rest take care of most of the concurrent operation 
 worries for us <g> OTOH, maybe multi-core becoming available on a 
 typical client will create a resurgence of demand for "fat-client".
How many "thin client" applications do you use? I don't use any, unless you count web forums. If the "thin client" idea invades the desktop I suspect it will be as common for desktops to be running both the client and the server as it will to run only the client with a remote server. And that still leaves out games, which have been driving personal computer development for almost 20 years. I suppose that means I think "fat clients" are the more likely scenario. Sean
May 29 2007
prev sibling next sibling parent janderson <askme me.com> writes:
Henrik wrote:
 Todays rant on Slashdot is about parallel programming and why the support for
multiple cores in programs is only rarely seen. There are a lot of different
opinions on why we havent seen a veritable rush to adopt parallelized
programming strategies, some which include:
 
 * Multiple cores haven't been available/affordable all that long, programmers
just need some time to catch up.
 * Parallel programming is hard to do (as we lack the proper programming tools
for it). We need new concepts, new tools, or simply a new generation of
programming languages created to handle parallelization from start.
 * Parallel programming is hard to do (as we tend to think in straight lines,
lacking the proper cognitive faculties to parallelize problem solving). We must
accept that this is an inherently difficult thing for us, and that there never
will be an easy solution.
 * We have both the programming tools needed and the cognitive capacity to deal
with them, only the stupidity of the current crop of programmers or their
inability to adapt stand in the way. Wait a generation and the situation will
have sorted itself out.
 
 I know concurrent programming has been a frequent topic in the D community
forums, so I would be interested to hear the communitys opinions on this. What
will the future of parallel programming look like? 
 Are new concepts and tools that support parallel programming needed, or just a
new way of thinking? Will the old school programming languages fade away, as
some seem to suggest, to be replaced by HOFL:s (Highly Optimized Functional
Languages)? Where will/should D be in all this? Is it a doomed language if it
does incorporate an efficient way of dealing with this (natively)?
 
 
 Link to TFA: http://developers.slashdot.org/developers/07/05/29/0058246.shtml
 
 
 /// Henrik
 
I think if D were to grow parallel legs, I think it would be a great incentive for people to make the switch. -Joel
May 29 2007
prev sibling next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Henrik Wrote:

 Todays rant on Slashdot is about parallel programming and why the
... [snip] At work, I'm using SEDA: http://www.eecs.harvard.edu/~mdw/papers/mdw-phdthesis.pdf Although it's designed primarily for internet services (and indeed, I'm crafting an internet service...), I'm using it for a lot more than just the server portions of the program, and I plan to use it in future (non-internet-based) solutions. The general idea behind something like this is "fire and forget". There is a set of "stages", each with a queue of events, and a thread pool varying in size depending on load (all managed behind the scenes). The creator of a stage needs only to receive an event and process it, possibly (usually) pushing other events onto other stages, where they will be executed when there's time. Stages are highly modular, and tend to serve only one, or a small group of, functions, but each stage is managed behind the scenes with monitoring tools that increase and decrease thread count according to load. The advantage of such a system is that it allows the designer to think in a very "single-threaded" mindset. You deal with a single event, and when you're done processing it, you let someone else (or a few other people) deal with the results. It also encourages encapsulation and modularity. The disadvantage? It's not suited to all types of software. It's ideal for server solutions, and I could see its use in various GUI apps, but it might be hard to force an event-driven model onto something like a game. Combined with something like the futures paradigm, though, I can see this being very helpful for allowing multi-threaded code to be written like single-threaded code. Now to port it to D...
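A minimal Python sketch of such a stage (the `Stage` class and handlers are invented for illustration, and SEDA's load-based pool resizing is omitted): each stage owns an event queue and a small worker pool, and a handler processes one event at a time, possibly emitting events for the next stage.

```python
import threading
import queue

class Stage:
    def __init__(self, handler, next_stage=None, workers=2):
        self.events = queue.Queue()
        self.handler = handler
        self.next_stage = next_stage
        for _ in range(workers):
            threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            event = self.events.get()
            # The handler sees one event at a time: a "single-threaded" mindset.
            for result in self.handler(event):
                if self.next_stage:
                    self.next_stage.events.put(result)
            self.events.task_done()

out, lock = [], threading.Lock()

def store_handler(event):       # final stage: just collect results
    with lock:
        out.append(event)
    return []                   # no downstream events

store = Stage(store_handler, workers=1)
parse = Stage(lambda ev: [ev.upper()], next_stage=store, workers=2)

for word in ["a", "b", "c"]:
    parse.events.put(word)      # "fire and forget"
parse.events.join()             # wait for stage 1 to drain...
store.events.join()             # ...then stage 2
print(sorted(out))  # ['A', 'B', 'C']
```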
May 29 2007
next sibling parent reply Pragma <ericanderton yahoo.removeme.com> writes:
Robert Fraser wrote:
 Henrik Wrote:
 
 Todays rant on Slashdot is about parallel programming and why the
... [snip] At work, I'm using SEDA: http://www.eecs.harvard.edu/~mdw/papers/mdw-phdthesis.pdf
Wow. Anyone doubting if this makes a difference should compare the graphs on pages 18 and 25.
 
 Although it's designed primarily for internet services (and indeed, I'm
crafting an internet service...), I'm using
 it for a lot more than just the server portions of the program, and I plan to
use it in future (non-internet-based)
 solutions.
 
 The general idea behind something like this is "fire and forget". There is a
set of "stages", each with a queue of
 events, and a thread pool varying in size depending on load (all managed
behind the scenes). The creator of a stage
 needs only to receive an event and process it, possibly (usually), pushing
other events onto other stages, where they
 will be executed when there's time. Stages are highly modular, and tend to
serve only one, or a small group of,
 functions, but each stage is managed behind the scenes with monitoring tools
that increase and decrease thread count
 respective to thread load.
 
 The advantage of such a system is it allows the designer to think in a very
"single-threaded" mindset. You deal with
 a single event, and when you're done processing it, you let someone else (or a
few other people) deal with the
 results. It also encourages encapsulation and modularity.
 
 The disadvantage? It's not suited for all types of software. It's ideal for
server solutions, and I could see its use
 in various GUI apps, but it might be hard to force an event-driven model onto
something like a game.
Actually, just about everything in most modern 3d games is event driven, except for the renderer. FWIW, the renderer simply redraws the screen on a zen timer, based on what's sitting in the render queue, so it can easily be run in parallel with the event pump. The event pump, in turn, modifies the render queue. The only difference between a game and a typical GUI app is that even modest event latency can be a showstopper. The renderer *must* run on time (or else you drop frames), I/O events must be handled quickly (or input feels sluggish), game/entity events must be fast, and render queue contention must be kept very low.
 
 Combined with something like the futures paradigm, though, I can see this
being very helpful for allowing
 multi-threaded code to be written like single-threaded code.
 
 Now only to port it to D...
-- - EricAnderton at yahoo
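The event-pump/render-queue split described above can be sketched as a simple producer/consumer pair in Python (sentinel-terminated rather than timer-driven, so the sketch ends deterministically; all names are hypothetical):

```python
import threading
import queue

render_q = queue.Queue()
frames = []

def event_pump():
    # The pump reacts to game events and pushes draw commands.
    for cmd in ["clear", "draw_ball", "draw_head"]:
        render_q.put(cmd)
    render_q.put(None)          # end of scene updates

def renderer():
    # The renderer runs in parallel, drawing whatever is queued.
    frame = []
    while True:
        cmd = render_q.get()
        if cmd is None:
            break
        frame.append(cmd)
    frames.append(frame)

threads = [threading.Thread(target=event_pump),
           threading.Thread(target=renderer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(frames)  # [['clear', 'draw_ball', 'draw_head']]
```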
May 29 2007
parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Those are just comparing a thread-for-each-request model with a single-threaded
model, showing the single-threaded model doing quite a bit better at high
loads. The interesting graphs are on page 126, where the SEDA-based server is
compared against Apache and Flash, and to a lesser extent 133, which shows
pings against a single-threaded model vs. a SEDA model.

Whatever the case, though, I was just thinking of this as a
concurrent-programming architecture that may feel more familiar to coders used
to single-threaded development.

I didn't know games were so highly event-driven... I guess I always think of
game programming by reflecting on my own short experience with it: an
interconnected mess. I still wouldn't suggest SEDA for game programming, since
its main focus is adaptive overload control, which shouldn't be needed in a
well-controlled system. Networked gaming or AI might benefit from it, though.

May 29 2007
parent Sean Kelly <sean f4.ca> writes:
Robert Fraser wrote:
 
 I didn't know games were so highly event-driven... I guess I always think of
game programming by reflecting on my own short experience with it: an
interconnected mess. I still wouldn't suggest SEDA for game programming, since
its main focus is adaptive overload control, which shouldn't be needed in a
well-controlled system. Networked gaming or AI might benefit from it, though.
There have been one or two interesting articles on Valve's updated Half-Life engine. They've done a fairly decent job of making it more parallel and have some good demos to show the difference. I don't have any links offhand, though I recall one article being on http://www.arstechnica.com Sean
May 29 2007
prev sibling parent BCS <BCS pathlink.com> writes:
Robert Fraser wrote:
 
 .... [snip]
 
 At work, I'm using SEDA:
 
 http://www.eecs.harvard.edu/~mdw/papers/mdw-phdthesis.pdf
 
[...]
 
 The general idea behind something like this is "fire and forget". There is a
set of "stages", each with a queue of events, and a thread pool varying in size
depending on load (all managed behind the scenes). The creator of a stage needs
only to receive an event and process it, possibly (usually), pushing other
events onto other stages, where they will be executed when there's time. Stages
are highly modular, and tend to serve only one, or a small group of, functions,
but each stage is managed behind the scenes with monitoring tools that increase
and decrease thread count respective to thread load.
 
[...]
 
 Now only to port it to D...
This sounds somewhat like an idea I had a while ago: build a thread-safe queue for delegates taking void and returning arrays of more delegates of the same type. Then have a bunch of threads spin on this loop: while(true) queue.Enqueue(queue.Dequeue()()); Each function is single-threaded, but if multi-threaded stuff is needed, return several delegates, one for each thread. Race conditions and sequencing would still be an issue, but some administrative rules might mitigate some of that. One advantage of it is that it is somewhat agnostic about thread count.
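A Python approximation of that delegate-queue loop, with callables standing in for D delegates (`leaf` and `fork` are invented example tasks): workers repeatedly pop a task, run it, and enqueue whatever follow-up tasks it returns.

```python
import threading
import queue

work = queue.Queue()          # thread-safe queue of "delegates"
out, lock = [], threading.Lock()

def worker():
    while True:
        task = work.get()
        if task is None:      # sentinel: shut this worker down
            work.task_done()
            break
        for follow_up in task():   # a task returns more tasks
            work.put(follow_up)
        work.task_done()

def leaf(n):
    def run():
        with lock:
            out.append(n)
        return []             # no follow-up work
    return run

def fork(n):
    def run():
        # "return several delegates, one for each thread"
        return [leaf(n), leaf(n + 1)]
    return run

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
work.put(fork(1))
work.put(fork(10))
work.join()                   # wait for all tasks and their follow-ups
for _ in threads:
    work.put(None)
for t in threads:
    t.join()
print(sorted(out))  # [1, 2, 10, 11]
```

Note the thread-count agnosticism: the same program runs unchanged with 1 or 40 workers, which is the property BCS points out.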
May 29 2007
prev sibling next sibling parent reply Daniel919 <Daniel919 web.de> writes:
Hi, what do you think about approaches like
Intel Threading Building Blocks ?
http://www.intel.com/cd/software/products/asmo-na/eng/threading/294797.htm

"It uses common C++ templates and coding style to eliminate tedious 
threading implementation work."

Anyone has made experiences with it ?
May 29 2007
parent Sean Kelly <sean f4.ca> writes:
Daniel919 wrote:
 Hi, what do you think about approaches like
 Intel Threading Building Blocks ?
 http://www.intel.com/cd/software/products/asmo-na/eng/threading/294797.htm
 
 "It uses common C++ templates and coding style to eliminate tedious 
 threading implementation work."
 
 Anyone has made experiences with it ?
It looks like a good library, but I've never actually used it. I imagine we'll get a lot of similar things in D before long. Sean
May 30 2007
prev sibling next sibling parent BCS <BCS pathlink.com> writes:
Henrik wrote:
 Today's rant on Slashdot is about parallel programming and why the support for
multiple cores in programs is only rarely seen. There are a lot of different
opinions on why we haven't seen a veritable rush to adopt parallelized
programming strategies, some of which include:
 
 * Multiple cores haven't been available/affordable all that long, programmers
just need some time to catch up.
 * Parallel programming is hard to do (as we lack the proper programming tools
for it). We need new concepts, new tools, or simply a new generation of
programming languages created to handle parallelization from start.
 * Parallel programming is hard to do (as we tend to think in straight lines,
lacking the proper cognitive faculties to parallelize problem solving). We must
accept that this is an inherently difficult thing for us, and that there never
will be an easy solution.
 * We have both the programming tools needed and the cognitive capacity to deal
with them, only the stupidity of the current crop of programmers or their
inability to adapt stand in the way. Wait a generation and the situation will
have sorted itself out.
 
 I know concurrent programming has been a frequent topic in the D community
forums, so I would be interested to hear the community's opinions on this. What
will the future of parallel programming look like?
 Are new concepts and tools that support parallel programming needed, or just a
new way of thinking? Will the old-school programming languages fade away, as
some seem to suggest, to be replaced by HOFLs (Highly Optimized Functional
Languages)? Where will/should D be in all this? Is it a doomed language if it
doesn't incorporate an efficient way of dealing with this (natively)?
 
 
 Link to TFA: http://developers.slashdot.org/developers/07/05/29/0058246.shtml
 
 
 /// Henrik
 
My issue with functional languages is that, in the end, what the CPU does is imperative. Why then write stuff in a functional language? If it is a concession to our ability to understand things, why functional as the abstraction? I find it harder to think in a functional manner than in an imperative manner.

If we need an abstraction between us and the CPU, I would rather make it an "operative" or "relational" abstraction (something like UML, only higher level). This would provide the abstractions needed for things other than just concurrency, like memory management (no deletes, as in GC'ed languages, but with all of the "manual" memory management handled by the compiler) or data access (pure getter functions would disappear and "wormhole" functions would appear for common complex access chains).

I guess what I'm saying is that I see functional languages as sitting between the level that the computer needs to work at and the level that the programmer actually thinks at. They're too high level to let the programmer "bit-twiddle", but not high enough to let the programmer abstract to the level that would really let the compiler take over the boring but critical stuff.

But I rant :)
May 29 2007
prev sibling next sibling parent Robert Fraser <fraserofthenight gmail.com> writes:
Heh; it's not too bad once you get used to it. The most important thing to keep
in mind is that all the concurrent programming features people hate (locks,
etc.) are usually unnecessary and possibly even bad for performance. I read
somewhere that something like 60% of locks (in Java, specifically, but this
probably applies to most imperative languages), are actually bad for
performance, since it takes longer to acquire all those locks than to allocate
immutable objects and use AtomicReferences, etc (memory's cheaper than
processing power these days). Of course, that's not possible in all cases, but
when it is it's a great boon.

Even if you can't just pass stuff around/use immutables for something, needing
to use more than a few locks/mutexes is generally indicative of tight coupling
between modules.

Often, the fewer locks/mutexes you use, the better your performance and peace
of mind. I haven't tried any multi-threaded coding in D, but with a whole set
of atomic variables and transaction support in Tango, it should end up being
pretty painless.

Daniel Keep Wrote:

 
 
 freeagle wrote:
 Why do people think there is a need for another language/paradigm to
 solve the concurrency problem? OSes have dealt with parallelism for decades,
 without special-purpose languages - just plain C and C++. Just check Task
 Manager in Windows and you'll notice there are about 100+ threads running.
 If Microsoft can manage it with current languages, why can't we?
 
 freeagle
We can; it's just hard as hell and thoroughly unenjoyable. Like I said before: I can and have written multithreaded code, but it's so utterly painful that I avoid it wherever possible.

It's like trying to wash a car with a toothbrush and one of those giant novelty foam hands. Yeah, you could do it, but wouldn't it be really nice if someone would go and invent the sponge and wash-cloth?

	-- Daniel

-- 
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP
http://hackerkey.com/
May 29 2007
prev sibling parent Mikola Lysenko <mclysenk mtu.edu> writes:
Henrik Wrote:

 * Multiple cores haven't been available/affordable all that long, programmers
just need some time to catch up.
 * Parallel programming is hard to do (as we lack the proper programming tools
for it). We need new concepts, new tools, or simply a new generation of
programming languages created to handle parallelization from start.
 * Parallel programming is hard to do (as we tend to think in straight lines,
lacking the proper cognitive faculties to parallelize problem solving). We must
accept that this is an inherently difficult thing for us, and that there never
will be an easy solution.
 * We have both the programming tools needed and the cognitive capacity to deal
with them, only the stupidity of the current crop of programmers or their
inability to adapt stand in the way. Wait a generation and the situation will
have sorted itself out.
 
I'm going to go out on a limb and suggest a 5th possibility:

* Parallel programming is easy for today's programmers.

Writing code to do two things at once is not at all challenging. Just fork two threads, let them do their thing, and collect the result whenever. Of course, I don't mean to trivialize threaded code - it is certainly a task that requires enormous skill. The difficulty lies not within parallel execution, but instead within the communication. Therefore, a better statement would be:

* Concurrent programming is hard for today's programmers.

In the Communicating Sequential Processes (CSP) sense of the word, concurrent programs are made out of two basic elements: processes and connections. From a formal perspective, this is just a generalization of the idea of a call stack - programs can now split into multiple stacks and communicate among themselves. Instead of a call hierarchy, we have a graph of interconnected processes.

From this definition, it should be obvious that concurrency is not just an issue for threaded code; rather, it is relevant to ordinary stuff like GUIs, videogames and operating systems. Concurrency is an everyday phenomenon that we have all dealt with in some way or another - it's just that few programmers bother to put a name on it. This is a damn shame, since there is so much we can gain from understanding programs in these terms.

Some powerful examples are given in Rob Pike's NewSqueak presentation:
http://video.google.com/videoplay?docid=810232012617965344&q=rob+pike+newsqueak+google

-Mikola Lysenko
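The processes-and-connections view can be sketched with ordinary threads and queues. Below, Python queue.Queue objects stand in for CSP channels, and each process is a plain function; the two-stage pipeline is an invented example in the spirit of Pike's demos, not anything from NewSqueak itself:

```python
import queue
import threading

# CSP-style: processes are plain functions, connections are queues.
# A generator process feeds a filter process, which feeds the main thread.
def generate(out):
    for n in range(2, 20):
        out.put(n)
    out.put(None)                  # end-of-stream marker

def keep_odd(inp, out):
    while (n := inp.get()) is not None:
        if n % 2 == 1:
            out.put(n)
    out.put(None)                  # propagate end-of-stream

a, b = queue.Queue(), queue.Queue()
threading.Thread(target=generate, args=(a,)).start()
threading.Thread(target=keep_odd, args=(a, b)).start()

odds = []
while (n := b.get()) is not None:
    odds.append(n)
print(odds)                        # [3, 5, 7, 9, 11, 13, 15, 17, 19]
```

Note that neither process knows or cares what sits on the other end of its queue: the program is a graph of connected stacks rather than a single call hierarchy, which is exactly the generalization described above.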
May 30 2007