
digitalmars.D - Re: Thoughts on parallel programming?

reply Sean Kelly <sean invisibleduck.org> writes:
Don Wrote:

 Sean Kelly wrote:
 Walter Bright Wrote:
 
 Russel Winder wrote:
 At the heart of all this is that programmers are taught that algorithm
 is a sequence of actions to achieve a goal.  Programmers are trained to
 think sequentially and this affects their coding.  This means that
 parallelism has to be expressed at a sufficiently high level that
 programmers can still reason about algorithms as sequential things. 

[...] inherent nature of how we think.

Distributed programming is essentially a bunch of little sequential programs that interact, which is basically how people cooperate in the real world. I think that is by far the most intuitive of any concurrent programming model, though it's still a significant conceptual shift from the traditional monolithic imperative program.

The Erlang people seem to say that a lot. The thing they omit to say, though, is that it is very, very difficult in the real world! Consider managing a team of ten people. Getting them to be ten times as productive as a single person is extremely difficult -- virtually impossible, in fact.

True enough. But it's certainly more natural to think about than mutex-based concurrency, automatic parallelization, etc. In the long term there may turn out to be better models, but I don't know of one today. Also, there are other goals for such a design than increasing computation speed: decreased maintenance cost, system reliability, etc. Erlang processes are equivalent to objects in C++ or Java, with the added benefit of asynchronous execution in instances where an immediate response (i.e. RPC) is not required. Performance gain is a direct function of how often this is true. But even where it's not, the other benefits exist.
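To make the analogy concrete: here's a minimal sketch (in Python rather than Erlang or D, and purely illustrative) of the "little sequential programs that interact" idea. Each actor is a plain sequential loop with a private mailbox; senders post messages asynchronously and never block waiting for a reply, which is the "no RPC required" case described above. The `Actor` class and the poison-pill shutdown convention are my own assumptions, not anyone's real API.

```python
import queue
import threading

class Actor:
    """A sequential worker that processes messages from its own mailbox.

    Mimics an Erlang-style process: internally sequential, externally
    communicating only via asynchronous messages.
    """
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):
        # Asynchronous: the sender enqueues and moves on; no reply is awaited.
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:   # poison pill: conventional shutdown signal
                break
            self.handler(msg)

# Usage: an actor that uppercases whatever it receives.
results = []
worker = Actor(lambda msg: results.append(msg.upper()))
worker.send("hello")
worker.send("world")
worker.send(None)          # ask the actor to stop
worker.thread.join()       # wait for it to finish draining its mailbox
# results == ["HELLO", "WORLD"]
```

Note how the actor's body is an ordinary sequential loop, which is the whole point: concurrency lives between actors, not inside them.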
 I agree with Walter -- I don't think it's got much to do with programmer 
 training. It's a problem that hasn't been solved in the real world in 
 the general case.

I agree. But we still need something better than the traditional approach now :-)
 The analogy with the real world suggests to me that there are three 
 cases that work well:
 * massively parallel;
 * _completely_ independent tasks; and
 * very small teams.
 
 Large teams are a management nightmare, and I see no reason to believe 
 that wouldn't hold true for a large number of cores as well.

Back when the Java OS was announced I envisioned a modular system backed by a database of objects serving different functions. Kind of like the old OpenDoc model, but at an OS level. It clearly didn't work out this way, but I'd be interested to see something along these lines. I honestly couldn't say whether apps would turn out to be easier or more difficult to create in such an environment though.
Nov 13 2010
parent sybrandy <sybrandy gmail.com> writes:
 True enough.  But it's certainly more natural to think about than mutex-based
concurrency, automatic parallelization, etc.  In the long term there may turn
out to be better models, but I don't know of one today.

 Also, there are other goals for such a design than increasing computation
speed: decreased maintenance cost, system reliability, etc.  Erlang processes
are equivalent to objects in C++ or Java with the added benefit of asynchronous
execution in instances where an immediate response (i.e. RPC) is not required. 
Performance gain is a direct function of how often this is true.  But even
where it's not, the other benefits exist.

I like that description! Casey
Nov 13 2010