
digitalmars.D - reddit discussion about Go turns to D again

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
http://www.reddit.com/r/programming/comments/hb6m8/google_io_2011_writing_web_apps_in_go/

Andrei
May 14 2011
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On 14.05.2011 20:10, Andrei Alexandrescu wrote:
 http://www.reddit.com/r/programming/comments/hb6m8/google_io_2011_writing_web_apps_in_go/


 Andrei
I have been playing lately with Go, and I must say that the language might be a good replacement for C usage, with more up-to-date language features (GC + safer type system + modules + reflection + concurrency).

But I doubt that Go can be a good replacement for systems that are nowadays programmed in the large in Java/.NET/C++. Maybe D can eventually belong to this area.

Regardless of the merits of both languages, Google's backing certainly plays a role. I doubt any of us would have looked at it if the main developers weren't working at Google. The App Engine support will make more people curious to look up the language.

Plus they already have a few success stories:
- Heroku
- Atlassian

http://www.youtube.com/watch?v=7QDVRowyUQA (around 00:30m)

--
Paulo
May 14 2011
prev sibling next sibling parent reply Adam Ruppe <destructionator gmail.com> writes:
Ugh, it annoys me so much that they do those long videos instead
of some plain text!

But, I watched parts of it, and I really wasn't impressed. They didn't
use any interesting techniques - it was just a straight forward app
using some uninteresting libraries. Even the HTTP server was incredibly
plain; there's nothing remarkable about that code.

There was one thing I'd remark on though. They talked about the
importance of error handling in Go... but their solution was lame. We've
talked about it before here, but blargh, Go's error handling sucks. Very
ugly code, and looks easy to get wrong.

Then they went into some appengine stuff. Again, unremarkable aside
from the ugliness. Poo.


After watching it, out of curiosity, I looked at Go's documentation
for the http package. Of course, they immediately attack CGI on its
page. Blargh.

But, one thing that is OK: your client code looks the same across
a variety of methods. Good.

What's weak is the poor offering of the library. I haven't used it,
of course, but the documentation and that video were both very
unimpressive.


Go's library has a wide breadth... but very little depth. Much of
what it offers is trivial, and it doesn't go far beyond that. It's
a very thin wrapper... and the abstractions it does offer seem to
be leaky.


I wouldn't use it for real work, even if the syntax wasn't so ugly.


Looking at the Reddit thread too, I notice nobody is actually talking
about the video. I imagine the reason why is just how utterly
uninteresting it was.  And odds are the stupid video presentation
means half the commentators didn't even watch it!
May 14 2011
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
You are to a certain extent right, but Go is appealing in a few ways.

Many Go users are coming from C or scripting languages, so Go is an
evolution for them, even if the language is a downgrade feature-wise
from the major programming languages.

Then many of the developers who are impressed by Go's multicore
features are not aware of the nice libraries available for C++, the JVM, or
.NET.

There is the possibility that Go will make it into Android.

The web site is always up to date with the latest language specification and
they have weekly and stable releases.

There's not much to say about Go, other than that the language looks like a new
version of Alef from Plan 9 with a bit of Oberon. But Google's backing, plus
the way they deal with the community, is increasing its use.

I wish D would evolve the same way.

--
Paulo

"Adam Ruppe" <destructionator gmail.com> wrote in message 
news:iqn484$2fkm$1 digitalmars.com...
 Ugh, it annoys me so much that they do those long videos instead
 of some plain text!

 But, I watched parts of it, and I really wasn't impressed. They didn't
 use any interesting techniques - it was just a straight forward app
 using some uninteresting libraries. Even the HTTP server was incredibly
 plain; there's nothing remarkable about that code.

 There was one thing I'd remark on though. They talked about the
 importance of error handling in Go... but their solution was lame. We've
 talked about it before here, but blargh, Go's error handling sucks. Very
 ugly code, and looks easy to get wrong.

 Then they went into some appengine stuff. Again, unremarkable aside
 from the ugliness. Poo.


 After watching it, out of curiosity, I looked at Go's documentation
 for the http package. Of course, they immediately attack CGI on its
 page. Blargh.

 But, one thing that is ok is your client code looks the same with
 a variety of methods. Good.

 What's weak is the poor offering of the library. I haven't used it,
 of course, the documentation and that video were both very
 unimpressive.


 Go's library has a wide breadth... but very little depth. Much of
 what it offers is trivial, and it doesn't go far beyond that. It's
 a very thin wrapper... and the abstractions it does offer seem to
 be leaky.


 I wouldn't use it for real work, even if the syntax wasn't so ugly.


 Looking at the Reddit thread too, I notice nobody is actually talking
 about the video. I imagine the reason why is just how utterly
 uninteresting it was.  And odds are the stupid video presentation
 means half the commentators didn't even watch it! 
May 14 2011
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
 You are to a certain extent right, but Go is appealing in a few ways.

 Many Go users are coming from C or scripting languages, so Go is an
 evolution for them, even if the language is a downgrade from major
 programming language features.

 Then many of the developers that are impressed by Go's multicore
 features, are not aware of the nice libraries available for C++, JVM or
 .Net.

 There is the possibility that Go will make it into Android.

 The web site is always up to date with the latest language specification and
 they have weekly and stable releases.

 There's not much to say about Go, other than that the language looks like a new
 version of Alef from Plan9 with a bit of Oberon. But Google's backing, plus
 the way they deal with the community is increasing its use.

 I wish D would evolve the same way.

 --
 Paulo
I think D has difficulties getting new users, although it is superior to any programming language I know in almost every way. Probably the main 'show stoppers' for D are:

1. Lack of documentation. The documentation we have on digitalmars.com/d/2.0 is sufficient for me, but it is not up to date and it is too complicated for a newcomer to get started with. I think many will be turned off by the fact that there is no tutorial for newcomers on the main site, even though you can get all the details about some old version of the D grammar. It also stops D from becoming a teaching language at institutions. Apart from that, the website does not look half as professional as D is well designed. It is not structured at all: if you want to learn what D is about, you have to read the whole website. Also, the documentation comments in some Phobos modules should improve, regardless of their formatting.

2. Someone who is curious about D will google 'd', which takes them straight to http://www.digitalmars.com/d/. They will then press the back button on their browser, because it does not look appealing. Then they will find the link to Wikipedia. If they are really curious, they will read the whole thing, to learn that they really can get the official compiler from digitalmars.com; they will also see very little of D but read everything about "Problems and Controversies". The Wikipedia article, in my eyes, fails to give sufficient information about what D is about. It only lists features and gives code samples.

They will then go back to http://www.digitalmars.com/d/ , where the most important link is not only the most important, but, well, the smallest in size as well: 2.0. After clicking it, they have to scroll down to find a link to the download site, where they need to read the whole table, because there are no OS symbols leading them to a one-click install. Many will just download the first thing and end up with the source and some binaries. (I have seen it happen multiple times!)

A better process would be: google 'd', get to a totally beautiful website, have some display of D philosophy, a big section DOWNLOAD DMD D COMPILER that cannot be missed; beneath it there are symbols representing the different OSes that can be clicked to get the appropriate installer, and done. The next thing on the site should be a big link D TUTORIAL, linking to a very well written tutorial.

This needs fixing, badly. But it is much work...

3. The reference compiler is somewhat buggy. But after seeing the changelog for 2.053 I am optimistic this will change very soon.

Timon
May 15 2011
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/15/2011 2:56 AM, Timon Gehr wrote:
 This needs fixing, badly. But it is much work...
Some great feedback and suggestions. Thanks!
May 15 2011
prev sibling parent reply Matthew Ong <ongbp yahoo.com> writes:
On 5/15/2011 5:56 PM, Timon Gehr wrote:

 You are to a certain extent right, but Go is appealing in a few ways.

 Many Go users are coming from C or scripting languages, so Go is an
 evolution for them, even if the language is a downgrade from major
 programming language features.

 Then many of the developers that are impressed by Go's multicore
 features, are not aware of the nice libraries available for C++, JVM or
 .Net.
That is because of the goroutine and channel syntax. I can emulate some of the channel syntax using my own wrapper class built from NIO pipes. But the goroutine part is more like Java's Kilim (but without the nasty bytecode postprocessor, a "weaver"): http://www.malhar.net/sriram/kilim/

Perhaps D can approach this person to make things as interesting, but keep the dmd process simple like javac.
 There is the possibility that Go will make it into Android.

 The web site is always up to date with the latest language specification and
 they have weekly and stable releases.

 There's not much to say about Go, other than that the language looks like a new
 version of Alef from Plan9 with a bit of Oberon. But Google's backing, plus
 the way they deal with the community is increasing its use.

 I wish D would evolve the same way.

 --
 Paulo
I think D has difficulties getting new users, although it is superior to any programming language I know in almost every way.
 Probably the main 'show stoppers' for D are:

 1. Lack of documentation. The documentation we have on digitalmars.com/d/2.0 is
 sufficient for me, but it is not up-to-date and it is too complicated for a
 newcomer to get started with. I think many will be turned off by the fact that
 there is no tutorial for newcomers on the main site, but you can get all details
 about some old version of the D grammar. It also stops D from becoming a teaching
 language at institutions. Apart from that, the website does not look half as
 professional as D is well designed. It is not structured at all. If you want to
 learn about what D is about, you have to read the whole website. Also, the
 documentation comments in some Phobos modules should improve, regardless of
 their formatting.
Yes, I agree with this part. As a newcomer, it seems to me that I need to go all over the place within the wiki to figure things out.

Go built a tool to do that automatically: http://golang.org/cmd/godoc/

With the -http flag, it runs as a web server and presents the documentation as a web page:

godoc -http=:6060

From the browser, you can view the entire built-in API. D can also do that without a built-in server, but the navigation is not as organized as the javadoc API format.
 2. Someone who is curious about D will google 'd', which takes them straight to
 http://www.digitalmars.com/d/. They will then press the back button on their
 browser, because it does not look appealing. Then they will find the link to
 Wikipedia. If they are really curious, they will read the whole thing, to learn
 that they really can get the official compiler from digitalmars.com; they will
 also see very little of D but read everything about "Problems and
 Controversies". The Wikipedia article, in my eyes, fails to give sufficient
 information about what D is about. It only lists features and gives code
 samples.

 They will then go back to http://www.digitalmars.com/d/ , where the most
 important link is not only the most important, but well, the smallest in size
 as well: 2.0. After clicking it, they have to scroll down to find a link to the
 download site, where they need to read the whole table, because there are no
 OS-symbols leading them to the one-click-install. Many will just download the
 first thing and end up with the source and some binaries.

 (I have seen it happen multiple times!)

 A better process would be: google 'd', get to a totally beautiful website, have
 some display of D philosophy, a big section DOWNLOAD DMD D COMPILER that cannot
 be missed, beneath it there are symbols representing different OS's that can be
 clicked to get the appropriate installer and done. The next thing on the site
 should be a big link D TUTORIAL, linking to a very well written tutorial.

 This needs fixing, badly. But it is much work...

 3. The reference compiler is somewhat buggy. But after seeing the changelog for
 2.053 I am optimistic this will change very soon.


 Timon
--
Matthew Ong
email: ongbp yahoo.com
May 16 2011
next sibling parent reply Matthew Ong <ongbp yahoo.com> writes:
Hi,

Oh, a few more things that got my interest: how Go models its data. It is not
entirely like how other conventional OO languages do it. They do NOT
have object inheritance, but use interfaces to somehow 'bypass' that.


http://golang.org/doc/effective_go.html

Conversions
...
// This method is now available on the type Sequence.
func (s Sequence) MyFunction() string {
	// MyFunction has access to all the public/internal data of
	// the Sequence here.
	return ""
}
...
var s Sequence
s.MyFunction() // now this can be used

That is somewhat like JRuby's ability to 'add' methods to Java's final String
class without really touching that API.


Generality
It also *avoids* the need to repeat the documentation on every instance of a
common method.

Interfaces and methods

Since almost anything can have methods attached, almost anything can satisfy an
interface. One illustrative example is in the http package, which defines the
Handler interface. Any object that implements Handler can serve HTTP requests.


It is not entirely like Java interfaces; Java has to code the
API in a very strange pattern to support something like this.

Does D currently support such an ability?

Matthew Ong
ongbp yahoo.com
May 16 2011
parent Adam D. Ruppe <destructionator gmail.com> writes:
Matthew Ong  wrote:
 Does D currently support such an ability? [go interface]
You can do it with templates. Most of Phobos' range functions are written in a similar style.
May 16 2011
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Matthew Ong" <ongbp yahoo.com> wrote in message 
news:iqr858$1eqa$1 digitalmars.com...
 On 5/15/2011 5:56 PM, Timon Gehr wrote:
 I think D has difficulties getting new users, although it is superior to 
 any
 programming language I know in almost every way.
T add(T)(T a, T b) { return a+b; }
May 16 2011
prev sibling parent dsimcha <dsimcha yahoo.com> writes:
On 5/15/2011 2:08 AM, Paulo Pinto wrote:
 Then many of the developers that are impressed by Go's multicore
 features, are not aware of the nice libraries available for C++, JVM or
 ..Net.
...or D. http://digitalmars.com/d/2.0/phobos/std_parallelism.html
May 15 2011
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:iqmgq7$1fgn$1 digitalmars.com...
 http://www.reddit.com/r/programming/comments/hb6m8/google_io_2011_writing_web_apps_in_go/
Heh, this is one of the best things I've read in quite some time:

"This supports my guess that Google bought up a bunch of "big names" (Pike, van Rossum, etc.), then provided them with a sort of "early retirement community" where, instead of playing bridge or watching "Wheel of Fortune," they play around with computers all day. Occasionally they get trotted out for an internal "tech talk," but their productive careers are over."

Classic. Every time I look at Go^H^HIssue 9, I can't help wondering why there's people out there who apparently assume that just because someone did something significant 40 years ago somehow implies they have the Midas touch.
May 14 2011
parent reply Russel Winder <russel russel.org.uk> writes:
On Sun, 2011-05-15 at 01:12 -0400, Nick Sabalausky wrote:
[ . . . ]
 Every time I look at Go^H^HIssue 9, I can't help wondering why there's
 people out there who apparently assume that just because someone did
 something significant 40 years ago somehow implies they have the Midas
 touch.
Indeed. Having said that, whatever may be wrong with Go (and actually I think there is a lot), the Channels/Goroutines system is a significant improvement in programming language technology. Hopefully soon C++ will have something not dissimilar (cf. Anthony Williams's work on Just::Thread Pro), and D will add dataflow and CSP to the actor and data parallelism stuff it already has.

Actors, dataflow, CSP and data parallelism are all subtly different and serve different purposes in different applications and systems. Having just one model of concurrency and parallelism stunts usage. This lesson is rapidly being learned in Scala.

GPars rocks (cf. http://gpars.codehaus.org). But then I would say that, wouldn't I.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
May 14 2011
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
Well, C++ already kind of has, thanks to Intel's TBB and Microsoft's PPL and 
Agents libraries.

Intel's Cilk also provides interesting extensions to C and C++, and they
look pretty much like Go's ideas.

--
Paulo


"Russel Winder" <russel russel.org.uk> wrote in message 
news:mailman.180.1305441718.14074.digitalmars-d puremagic.com...
On Sun, 2011-05-15 at 01:12 -0400, Nick Sabalausky wrote:
[ . . . ]
 Every time I look at Go^H^HIssue 9, I can't help wondering why there's
 people out there who apparently assume that just because someone did
 something significant 40 years ago somehow implies they have the Midas
 touch.
Indeed. Having said that, whatever may be wrong with Go (and actually I think there is a lot), the Channels/Goroutines system is a significant improvement in programming language technology. Hopefully soon C++ will have something not dissimilar (cf. Anthony Williams's work on Just::Thread Pro), and D will add dataflow and CSP to the actor and data parallelism stuff it already has.

Actors, dataflow, CSP and data parallelism are all subtly different and serve different purposes in different applications and systems. Having just one model of concurrency and parallelism stunts usage. This lesson is rapidly being learned in Scala.

GPars rocks (cf. http://gpars.codehaus.org). But then I would say that, wouldn't I.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
May 15 2011
parent reply Russel Winder <russel russel.org.uk> writes:
On Sun, 2011-05-15 at 09:46 +0200, Paulo Pinto wrote:
 Well, C++ already kind of has, thanks to Intel's TBB and Microsoft's PPL
 and Agents libraries.
TBB is very good in terms of performance but it can be rather awkward to use. It is though a great step forward for data parallelism in C++. I have no experience of Microsoft stuff as I don't use their compilers/libraries.

 Intel's Cilk also provides interesting extensions to C and C++, and they
 look pretty much like Go's ideas.
Cilk per se has lost its way a bit recently, and anyway was C focused. Cilk++ is a commercial enterprise. Intel have licensed it (Intel Cilk Plus) as part of their "pay for" C++ development suite which includes TBB and ABB. I have downloaded the 1.3GB file but have yet to unpack it.

The idea of using asynchronous function call as the initiator of concurrency/parallelism is fairly standard across the board these days. Of course Cilk is still focusing on shared-memory systems.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
May 15 2011
next sibling parent Gilbert Dawson <why needed.com> writes:
Russel Winder Wrote:

 On Sun, 2011-05-15 at 09:46 +0200, Paulo Pinto wrote:
 Well, C++ already kind of has, thanks to Intel's TBB and Microsoft's PPL and 
 Agents libraries.
 TBB is very good in terms of performance but it can be rather awkward to use. It is though a great step forward for data parallelism in C++. I have no experience of Microsoft stuff as I don't use their compilers/libraries.
 Intel's Cilk also provides interesting extensions to C and C++, and they
 look pretty much like Go's ideas.
Cilk per se has lost its way a bit recently, and anyway was C focused. Cilk++ is a commercial enterprise. Intel have licenced it (Intel Cilk Plus) as part of their "pay for" C++ development suite which includes TBB and ABB. I have downloaded the 1.3GB file but have yet to unpack it.
Lost its way? Like... in which market segment? I come from the application programming community, and we are still transitioning from single-threaded sequential programs to SIMD and servers with multiple backend processes + DB locking. Is Google or Facebook using Cilk? I haven't even heard about it.
 
 The idea of using asynchronous function call as the initiator of
 concurrency/parallelism is fairly standard across the board these days.
 Of course Cilk is still focusing on shared-memory systems. 
 
 -- 
 Russel.
 =============================================================================
 Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
 41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
 London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
 
May 15 2011
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/15/11 6:52 AM, Russel Winder wrote:
 On Sun, 2011-05-15 at 09:46 +0200, Paulo Pinto wrote:
 Well, C++ already kind of has, thanks to Intel's TBB and Microsoft's PPL and
 Agents libraries.
 TBB is very good in terms of performance but it can be rather awkward to use. It is though a great step forward for data parallelism in C++.
I wonder how TBB compares to the recent GNU parallel library: http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt12ch31s03.html

Now that we have std.parallelism, we should kick off a std.parallel_algorithm project building on top of it, and make it high priority. Lambdas make parallel algorithms infinitely more powerful.

Andrei
May 15 2011
parent reply dsimcha <dsimcha yahoo.com> writes:
On 5/15/2011 10:00 AM, Andrei Alexandrescu wrote:
 On 5/15/11 6:52 AM, Russel Winder wrote:
 On Sun, 2011-05-15 at 09:46 +0200, Paulo Pinto wrote:
 Well, C++ already kind of has, thanks to Intel's TBB and Microsoft's
 PPL and
 Agents libraries.
 TBB is very good in terms of performance but it can be rather awkward to use. It is though a great step forward for data parallelism in C++.
I wonder how TBB compares to the recent Gnu parallel library http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt12ch31s03.html. Now that we have std.parallelism, we should kick-off a std.parallel_algorithm project building on top of it, and make it high priority. Lambdas make parallel algorithms infinitely more powerful.
Agreed. Let's start gathering a list of what said primitives should be. I don't have time to do a comprehensive std.parallel_algorithm but I could contribute some stuff, as well as help out with any issues people run into with std.parallelism while trying to do this.

We already have map and reduce, since in parallelism they're more like fundamental primitives than "algorithms". I've been prototyping parallel sorting. The obvious way to do it is to divide the input into units, fire off a task to sort each of these, then merge the results. We could even template it on the base sorting algorithm. One thing I'm waiting on before I start implementing this is getting TempAlloc into Phobos so that all the temporary buffers will be easy to manage efficiently.

Other than that, I really don't see any obvious candidates for parallelization in std.algorithm.
May 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/15/11 9:29 AM, dsimcha wrote:
 On 5/15/2011 10:00 AM, Andrei Alexandrescu wrote:
 On 5/15/11 6:52 AM, Russel Winder wrote:
 On Sun, 2011-05-15 at 09:46 +0200, Paulo Pinto wrote:
 Well, C++ already kind of has, thanks to Intel's TBB and Microsoft's
 PPL and
 Agents libraries.
 TBB is very good in terms of performance but it can be rather awkward to use. It is though a great step forward for data parallelism in C++.
I wonder how TBB compares to the recent Gnu parallel library http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt12ch31s03.html. Now that we have std.parallelism, we should kick-off a std.parallel_algorithm project building on top of it, and make it high priority. Lambdas make parallel algorithms infinitely more powerful.
Agreed. Let's start gathering a list of what said primitives should be. I don't have time to do a comprehensive std.parallel_algorithm but I could contribute some stuff, as well as help out with any issues people run into with std.parallelism while trying to do this. We already have map and reduce since in parallelism they're more like fundamental primitives than "algorithms". I've been prototyping parallel sorting. The obvious way to do it is to units, fire off a task to sort each of these, then merge the results. We could even template it on the base sorting algorithm. One thing I'm waiting on before I start implementing this is getting TempAlloc into Phobos so that all the temporary buffers efficiently will be easy to manage. Other than that, I really don't see any obvious candidates for parallelization in std.algorithm.
Whoa. Did you skim the list at http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt12ch31s03.html? A _ton_ of algorithms in std.algorithm are parallelizable, and many in trivial ways too. Just take std.algorithm and go down the list thinking of what algorithms can be parallelized. I wouldn't be surprised if two thirds are.

What we need is a simple design to set up the minimum problem size and other parameters for each algorithm, and then just implement them one by one. Example:

import std.parallel_algorithm;

void main() {
    double[] data;
    ...
    // Use parallel count for more than 10K objects
    algorithmConfig!(count).minProblemSize = 10_000;
    // Count all negative numbers
    auto negatives = count!"a < 0"(data);
    ...
}

A user-level program could import std.parallel_algorithm and std.algorithm, and choose which version to use by simply qualifying function calls with the same signature. (That's also why we should think of a shorter name instead of parallel_algorithm... and with this parenthesis I instantly commanded the attention of the entire community.)

Andrei
May 15 2011
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei:

 (That's also why we should think 
 of a shorter name instead of parallel_algorithm...
The idea of adding parallel algorithms to Phobos is good; people may use them more than std.algorithm.

Regarding the module name, std.palgorithm? :-)

Bye,
bearophile
May 15 2011
parent reply Sean Kelly <sean invisibleduck.org> writes:
std.paralellogrithm ;-)

Sent from my iPhone

On May 15, 2011, at 8:01 AM, bearophile <bearophileHUGS lycos.com> wrote:

 Andrei:

  (That's also why we should think
  of a shorter name instead of parallel_algorithm...

 The idea of adding parallel algorithms to Phobos is good, people may use them more than std.algorithm.
 Regarding the module name, std.palgorithm? :-)

 Bye,
 bearophile
May 16 2011
parent reply bearophile <bearophileHUGS lycos.com> writes:
Sean Kelly:

 std.paralellogrithm ;-)
The module name I like more so far is the simple "parallel_algorithm". But I don't mind the "p" prefix for the parallel function names.

Bye,
bearophile
May 16 2011
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
bearophile wrote:
 Sean Kelly:

 std.paralellogrithm ;-)
 The module name I like more so far is the simple "parallel_algorithm".
 But I don't mind the "p" prefix for the parallel function names.

 Bye,
 bearophile
It is also possible to have both a "p" prefix and identical names: the latter would be mere aliases and would be activated by a version declaration or similar. I think both possibilities have their merits, so letting the user choose would be an option.

parallel_algorithm is to the point, but not very concise. I do not think we can do very much better.

Timon
May 16 2011
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
 bearophile wrote:
 Sean Kelly:
 std.paralellogrithm ;-)
 The module name I like more so far is the simple "parallel_algorithm".
 But I don't mind the "p" prefix for the parallel function names.

 Bye,
 bearophile

 It is also possible to have both "p" prefix and identical names. The latter
 would be mere aliases and would be activated by a version declaration or
 similar. I think both possibilities have their merits, so letting the user
 choose would be an option.

 parallel_algorithm is to the point, but not very concise. I do not think we
 can do very much better.
That doesn't work. As soon as the non-p versions exist, you have name clashes. So, if the point of starting the function names with p is to avoid name clashes, then you've gained nothing. Besides, we generally try to avoid aliases like that in Phobos. Such a scheme would never be accepted.

- Jonathan M Davis
May 16 2011
parent Timon Gehr <timon.gehr gmx.ch> writes:
Jonathan M Davis wrote:
 bearophile wrote:
 Sean Kelly:
 std.paralellogrithm ;-)
The module name I like more so far is the simple "parallel_algorithm". But I don't mind the "p" prefix for the parallel function names. Bye, bearophile
It is also possible to have both "p" prefix and identical names. The latter would be mere aliases and would be activated by a version declaration or similar. I think both possibilities have their merits, so letting the user choose would be an option. parallel_algorithm is to the point, but not very concise. I do not think we can do very much better.
That doesn't work. As soon as the non-p versions exist, you have name clashes. So, if the point of starting the function names with p is to avoid name clashes, then you've gained nothing. Besides, we generally try to avoid aliases like that in Phobos. Such a scheme would never be accepted. - Jonathan M Davis
I think the point of having the same names is that you can easily parallelize code that was previously sequential (or even, for merely cosmetic reasons, by substituting the import - so no name clashes). But if there is only one option, then the "p" prefix is, as far as I can see, to be preferred; users can always do the aliases manually. Timon
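For illustration, the manual aliasing mentioned here might look like this (std.parallel_algorithm and its p-prefixed names are hypothetical at this point in the discussion):

```d
// Hypothetical: a parallel module exporting only p-prefixed names.
import std.parallel_algorithm;  // assumed to provide pcount, pmap, ...

// Users who prefer the plain names can alias them in manually:
alias pcount count;
alias pmap   map;

void main()
{
    int[] data = [1, -2, 3, -4];
    auto negatives = count!"a < 0"(data);  // resolves to pcount
}
```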
May 16 2011
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
On May 16, 2011, at 10:32 AM, Jonathan M Davis wrote:

 bearophile wrote:
 Sean Kelly:
 std.paralellogrithm ;-)
The module name I like more so far is the simple "parallel_algorithm". But I don't mind the "p" prefix for the parallel function names. Bye, bearophile
It is also possible to have both "p" prefix and identical names. The latter would be mere aliases and would be activated by a version declaration or similar. I think both possibilities have their merits, so letting the user choose would be an option. parallel_algorithm is to the point, but not very concise. I do not think we can do very much better.
That doesn't work. As soon as the non-p versions exist, you have name clashes. So, if the point of starting the function names with p is to avoid name clashes, then you've gained nothing. Besides, we generally try to avoid aliases like that in Phobos. Such a scheme would never be accepted.
I don't foresee a simple solution to the 'p' vs. non-'p' problem. It's a complete mess.
May 16 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/16/11 1:00 PM, Sean Kelly wrote:
 On May 16, 2011, at 10:32 AM, Jonathan M Davis wrote:

 bearophile wrote:
 Sean Kelly:
 std.paralellogrithm ;-)
The module name I like more so far is the simple "parallel_algorithm". But I don't mind the "p" prefix for the parallel function names. Bye, bearophile
It is also possible to have both "p" prefix and identical names. The latter would be mere aliases and would be activated by a version declaration or similar. I think both possibilities have their merits, so letting the user choose would be an option. parallel_algorithm is to the point, but not very concise. I do not think we can do very much better.
That doesn't work. As soon as the non-p versions exist, you have name clashes. So, if the point of starting the function names with p is to avoid name clashes, then you've gained nothing. Besides, we generally try to avoid aliases like that in Phobos. Such a scheme would never be accepted.
I don't foresee a simple solution to the 'p' vs. non-'p' problem. It's a complete mess.
Indeed it is widely conjectured that p!=np... :o) Andrei
May 16 2011
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Could templates in std.algorithm be expanded to have an optional
thread-count compile-time argument? Maybe they'd be used like so:

import std.stdio;
import std.array;
import std.range;
import std.functional;

enum Threads
{
    x1 = 1,
    x2 = 2,
    x3 = 3,
    x4 = 4,  // etc..
}

void main()
{
    int[] data = [1, 2, 3];
    count!("a < 0", Threads.x4)(data);
}

size_t count(alias pred = "true", Threads th = Threads.x1, Range)(Range r)
    if (isInputRange!(Range))
{
    static if (th == Threads.x1)
    {
        // call normal count
    }
    else
    {
        // call parallel count
    }
    return 0;  // placeholder; a real implementation would return the count
}
May 16 2011
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
*actually a compile-time argument would be a bad idea, it's much more
useful to know the core count at runtime, doh.
May 16 2011
prev sibling next sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2011-05-15 10:43:17 -0400, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 (That's also why we should think of a shorter name instead of 
 parallel_algorithm... and with this parenthesis I instantly commanded 
 the attention of the entire community.)
Actually, why not put those algorithms as standalone functions in std.parallelism directly? -- Michel Fortin michel.fortin michelf.com http://michelf.com/
May 15 2011
prev sibling next sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 15/05/2011 15:43, Andrei Alexandrescu wrote:
 Whoa. Did you skim the list at
 http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt12ch31s03.html? A
 _ton_ of algorithms in std.algorithms are parallelizable, and many in
 trivial ways too.
I should find an excuse to use more algorithms in my apps :>
 Just take std.algorithm and go down the list thinking of what algorithms
 can be parallelized. I wouldn't be surprised if two thirds are.

 What we need is a simple design to set up the minimum problem size and
 other parameters for each algorithm, and then just implement them one by
 one. Example:

 import std.parallel_algorithm;

 void main()
 {
 double[] data;
 ...
 // Use parallel count for more than 10K objects
 algorithmConfig!(count).minProblemSize = 10_000;
 // Count all negative numbers
 auto negatives = count!"a < 0"(data);
 ...
 }
Automatically using a parallel algorithm if it's likely to improve speed? Awesome. I assume that std.parallelism sets up a thread pool upon program start so that you don't have the overhead of spawning threads when you use a parallel algorithm for the first time?
 A user-level program could import std.parallel_algorithm and
 std.algorithm, and choose which version to use by simply qualifying
 function calls with the same signature. (That's also why we should think
 of a shorter name instead of parallel_algorithm... and with this
 parenthesis I instantly commanded the attention of the entire community.)
I'll give my +1 to bearophile's palgorithm unless something better appears.
 Andrei
-- Robert http://octarineparrot.com/
May 15 2011
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
On 5/15/2011 11:41 AM, Robert Clipsham wrote:
 Automatically using a parallel algorithm if it's likely to improve
 speed? Awesome. I assume that std.parallelism sets up a thread pool upon
 program start so that you don't have the overhead of spawning threads
 when you use a parallel algorithm for the first time?
No, it does so lazily. It seemed silly to me to do this eagerly when it might never be used. If you want to make it eager all you have to do is reference the taskPool property in the first line of main().
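In other words, forcing eager pool creation is a one-liner at the top of main():

```d
import std.parallelism : taskPool;

void main()
{
    // Referencing taskPool forces the lazily-created default worker
    // pool into existence before any parallel algorithm needs it.
    auto pool = taskPool;

    // ... rest of the program ...
}
```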
 A user-level program could import std.parallel_algorithm and
 std.algorithm, and choose which version to use by simply qualifying
 function calls with the same signature. (That's also why we should think
 of a shorter name instead of parallel_algorithm... and with this
 parenthesis I instantly commanded the attention of the entire community.)
I'll give my +1 to bearophile's palgorithm unless something better appears.
 Andrei
May 15 2011
next sibling parent reply Sean Cavanaugh <WorksOnMyMachine gmail.com> writes:
On 5/15/2011 11:04 AM, dsimcha wrote:
 On 5/15/2011 11:41 AM, Robert Clipsham wrote:
 Automatically using a parallel algorithm if it's likely to improve
 speed? Awesome. I assume that std.parallelism sets up a thread pool upon
 program start so that you don't have the overhead of spawning threads
 when you use a parallel algorithm for the first time?
No, it does so lazily. It seemed silly to me to do this eagerly when it might never be used. If you want to make it eager all you have to do is reference the taskPool property in the first line of main().
I haven't looked at the library in depth, but after taking a peek I'm left wondering how to configure the stack size. My concern is what to do if the parallel tasks are running out of stack, or (more likely) are given way too much stack because they need to 'handle anything'.
May 15 2011
parent reply dsimcha <dsimcha yahoo.com> writes:
On 5/15/2011 12:21 PM, Sean Cavanaugh wrote:
 On 5/15/2011 11:04 AM, dsimcha wrote:
 On 5/15/2011 11:41 AM, Robert Clipsham wrote:
 Automatically using a parallel algorithm if it's likely to improve
 speed? Awesome. I assume that std.parallelism sets up a thread pool upon
 program start so that you don't have the overhead of spawning threads
 when you use a parallel algorithm for the first time?
No, it does so lazily. It seemed silly to me to do this eagerly when it might never be used. If you want to make it eager all you have to do is reference the taskPool property in the first line of main().
I haven't looked at the library in depth, but after taking a peek I'm left wondering how to configure the stack size. My concern is what to do if the parallel tasks are running out of stack, or (more likely) are given way too much stack because they need to 'handle anything'.
I never thought to make this configurable because I've personally never needed to configure it. You mean the stack sizes of the worker threads? I just use whatever the default in core.thread is. This probably errs on the side of too big, but usually you're only going to have as many worker threads as you have cores, so it's not much waste in practice. If you really need this to be configurable, I'll add it for the next release.
May 15 2011
parent reply Sean Cavanaugh <WorksOnMyMachine gmail.com> writes:
On 5/15/2011 11:49 AM, dsimcha wrote:
 On 5/15/2011 12:21 PM, Sean Cavanaugh wrote:
 I haven't looked at the library in depth, but after taking a peek I'm
 left wondering how to configure the stack size. My concern is what to do
 if the parallel tasks are running out of stack, or (more likely) are
 given way too much stack because they need to 'handle anything'.
I never thought to make this configurable because I've personally never needed to configure it. You mean the stack sizes of the worker threads? I just use whatever the default in core.thread is. This probably errs on the side of too big, but usually you're only going to have as many worker threads as you have cores, so it's not much waste in practice. If you really need this to be configurable, I'll add it for the next release.
I'm used to working with embedded environments, so it's just something I notice right away when looking at threaded libraries. We generally have to tune all the stacks to the bare minimum to get memory back, since it's extremely noticeable when running on a system without virtual memory. A surprising number of tasks can be made to run safely on 1 or 2 pages of stack. Looking into core.thread, the default behavior (at least on Windows) is to use the same size as the main thread's stack, so basically whatever is linked in as the startup stack is used. It is a safe default, as threads can handle anything the main thread can and vice versa, but it is usually pretty wasteful for real-world work tasks. A single thread pool by itself is never really the problem. Pretty much every set of middleware that does threading makes its own threads and thread pools, and it can add up pretty fast. Even a relatively simple application can end up with something like 20 to 50 threads if any of its libraries are threaded (audio, physics, networking, background I/O, etc.). Anyway, if you have lots of threads for various reasons, you can quickly have 50-100MB or more of your address space eaten up by stack. If this happens to you on, say, an XBOX 360, that's 10-20% of the RAM on the system, and tuning the stacks is definitely not a waste of time, but it has to be possible to do it :) As an unrelated note, I got the magic VC exception that names threads in the Visual Studio debugger to work in D pretty easily; if anyone wants it, I've linked it :) http://snipt.org/xHok
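For reference, core.thread already allows requesting a per-thread stack size when you create threads yourself. A sketch (actual granularity and rounding are platform-dependent):

```d
import core.thread;

void worker()
{
    // Minimal work; a small stack suffices here.
}

void main()
{
    // Thread's constructor takes an optional stack size in bytes
    // (0 means "use the platform default").
    auto t = new Thread(&worker, 64 * 1024);  // request a 64 KiB stack
    t.start();
    t.join();
}
```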
May 15 2011
parent reply dsimcha <dsimcha yahoo.com> writes:
On 5/15/2011 1:38 PM, Sean Cavanaugh wrote:
 On 5/15/2011 11:49 AM, dsimcha wrote:
 On 5/15/2011 12:21 PM, Sean Cavanaugh wrote:
 I haven't looked at the library in depth, but after taking a peek I'm
 left wondering how to configure the stack size. My concern is what to do
 if the parallel tasks are running out of stack, or (more likely) are
 given way too much stack because they need to 'handle anything'.
I never thought to make this configurable because I've personally never needed to configure it. You mean the stack sizes of the worker threads? I just use whatever the default in core.thread is. This probably errs on the side of too big, but usually you're only going to have as many worker threads as you have cores, so it's not much waste in practice. If you really need this to be configurable, I'll add it for the next release.
I'm used to working with embedded environments, so it's just something I notice right away when looking at threaded libraries. We generally have to tune all the stacks to the bare minimum to get memory back, since it's extremely noticeable when running on a system without virtual memory. A surprising number of tasks can be made to run safely on 1 or 2 pages of stack. Looking into core.thread, the default behavior (at least on Windows) is to use the same size as the main thread's stack, so basically whatever is linked in as the startup stack is used. It is a safe default, as threads can handle anything the main thread can and vice versa, but it is usually pretty wasteful for real-world work tasks. A single thread pool by itself is never really the problem. Pretty much every set of middleware that does threading makes its own threads and thread pools, and it can add up pretty fast. Even a relatively simple application can end up with something like 20 to 50 threads if any of its libraries are threaded (audio, physics, networking, background I/O, etc.). Anyway, if you have lots of threads for various reasons, you can quickly have 50-100MB or more of your address space eaten up by stack. If this happens to you on, say, an XBOX 360, that's 10-20% of the RAM on the system, and tuning the stacks is definitely not a waste of time, but it has to be possible to do it :)
Fair enough. So I guess stackSize should just be a c'tor parameter and there should be a global for the default pool, kind of like defaultPoolThreads? Task.executeInNewThread() would also take a stack size. Definitely do-able, but I'm leery of cluttering the API with a feature that's probably going to be used so infrequently.
May 15 2011
parent Sean Cavanaugh <WorksOnMyMachine gmail.com> writes:
On 5/15/2011 12:45 PM, dsimcha wrote:
 Fair enough. So I guess stackSize should just be a c'tor parameter and
 there should be a global for the default pool, kind of like
 defaultPoolThreads? Task.executeInNewThread() would also take a stack
 size. Definitely do-able, but I'm leery of cluttering the API with a
 feature that's probably going to be used so infrequently.
I would say sleep on it at least. At least with source libraries we have the option of hacking the library or forking it (and ripping it out of std if need be for nefarious purposes :)
May 15 2011
prev sibling parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 15/05/2011 17:04, dsimcha wrote:
 On 5/15/2011 11:41 AM, Robert Clipsham wrote:
 Automatically using a parallel algorithm if it's likely to improve
 speed? Awesome. I assume that std.parallelism sets up a thread pool upon
 program start so that you don't have the overhead of spawning threads
 when you use a parallel algorithm for the first time?
No, it does so lazily. It seemed silly to me to do this eagerly when it might never be used. If you want to make it eager all you have to do is reference the taskPool property in the first line of main().
Fair enough, I guess that makes more sense. But surely if you import std.parallelism then you plan on using it? In which case initialization in a static this() would probably be a good idea? (Although I guess that means you can't customize the number of worker threads, in the case where it can't automatically be detected for whatever reason?) -- Robert http://octarineparrot.com/
May 15 2011
parent dsimcha <dsimcha yahoo.com> writes:
On 5/15/2011 1:06 PM, Robert Clipsham wrote:
 Fair enough, I guess that makes more sense. But surely if you import
 std.parallelism then you plan on using it? In which case initialization
 in a static this() would probably be a good idea? (Although I guess that
 means you can't customize the number of worker threads, in the case
 where it can't automatically be detected for whatever reason?)
Customizing the number of worker threads is another good reason. Besides, some people (myself included) maintain a module in their personal libs that publicly imports all their most commonly used Phobos modules to save on import statement boilerplate. Therefore, I don't think that just importing something without using it should have substantial overhead.
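For the worker-count case specifically, std.parallelism's defaultPoolThreads property covers it, as long as it's set before taskPool is first touched:

```d
import std.parallelism : defaultPoolThreads, taskPool;

void main()
{
    // Must run before the first reference to taskPool, because the
    // default pool is created lazily on first use.
    defaultPoolThreads = 2;

    auto pool = taskPool;  // the pool now has 2 worker threads
}
```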
May 15 2011
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/15/11 10:41 AM, Robert Clipsham wrote:
 On 15/05/2011 15:43, Andrei Alexandrescu wrote:
 Whoa. Did you skim the list at
 http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt12ch31s03.html? A
 _ton_ of algorithms in std.algorithms are parallelizable, and many in
 trivial ways too.
I should find an excuse to use more algorithms in my apps :>
 Just take std.algorithm and go down the list thinking of what algorithms
 can be parallelized. I wouldn't be surprised if two thirds are.

 What we need is a simple design to set up the minimum problem size and
 other parameters for each algorithm, and then just implement them one by
 one. Example:

 import std.parallel_algorithm;

 void main()
 {
 double[] data;
 ...
 // Use parallel count for more than 10K objects
 algorithmConfig!(count).minProblemSize = 10_000;
 // Count all negative numbers
 auto negatives = count!"a < 0"(data);
 ...
 }
Automatically using a parallel algorithm if it's likely to improve speed? Awesome. I assume that std.parallelism sets up a thread pool upon program start so that you don't have the overhead of spawning threads when you use a parallel algorithm for the first time?
No need. In all likelihood startup overheads are negligible for an application that's serious about parallel use, and should be nonexistent for an application that's not. Creating a pool upon a need basis should be perfect. Andrei
May 15 2011
prev sibling next sibling parent dsimcha <dsimcha yahoo.com> writes:
On 5/15/2011 10:43 AM, Andrei Alexandrescu wrote:
 Whoa. Did you skim the list at
 http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt12ch31s03.html? A
 _ton_ of algorithms in std.algorithms are parallelizable, and many in
 trivial ways too.
(Smacks self on forehead.) Yeah, here's an (untested, quick and dirty) implementation of parallelCount().

size_t parallelCount(alias pred, R)(R range)
{
    // Use the fact that bools are implicitly convertible to ints.
    return taskPool.reduce!"a + b"(
        cast(size_t) 0, std.algorithm.map!pred(range)
    );
}
May 15 2011
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/15/2011 7:43 AM, Andrei Alexandrescu wrote:
 (That's also why we should think of a shorter name instead of
 parallel_algorithm... and with this parenthesis I instantly commanded the
 attention of the entire community.)
Leave it as std.parallel_algorithm: 1. people instantly know what it is 2. google will index it as "parallel algorithm", exactly what we want. URL names carry a lot of weight with google page rank Calling it "palgorithm" will get us nowhere in terms of visibility.
May 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/15/2011 02:35 PM, Walter Bright wrote:
 On 5/15/2011 7:43 AM, Andrei Alexandrescu wrote:
 (That's also why we should think of a shorter name instead of
 parallel_algorithm... and with this parenthesis I instantly commanded the
 attention of the entire community.)
Leave it as std.parallel_algorithm: 1. people instantly know what it is 2. google will index it as "parallel algorithm", exactly what we want. URL names carry a lot of weight with google page rank Calling it "palgorithm" will get us nowhere in terms of visibility.
Sounds like a good argument. Andrei
May 15 2011
parent reply Robert Clipsham <robert octarineparrot.com> writes:
On 15/05/2011 23:39, Andrei Alexandrescu wrote:
 On 05/15/2011 02:35 PM, Walter Bright wrote:
 On 5/15/2011 7:43 AM, Andrei Alexandrescu wrote:
 (That's also why we should think of a shorter name instead of
 parallel_algorithm... and with this parenthesis I instantly commanded
 the
 attention of the entire community.)
Leave it as std.parallel_algorithm: 1. people instantly know what it is 2. google will index it as "parallel algorithm", exactly what we want. URL names carry a lot of weight with google page rank Calling it "palgorithm" will get us nowhere in terms of visibility.
Sounds like a good argument. Andrei
Unfortunately, I'm inclined to agree. While palgorithm saves on typing and avoids the hideous underscore, parallel_algorithm is far easier to find, which can only help increase D's usage (not that many people are looking for it - http://www.google.com/trends?q=parallel+algorithm - although that's probably not indicative, considering most of those searching aren't programmers). Besides which, it's not like you'll be typing it all the time; if you're doing things that are likely to need parallel computation, then that one line of import isn't going to make much difference to the amount of typing you have to do. -- Robert http://octarineparrot.com/
May 15 2011
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On 2011-05-15 15:48, Robert Clipsham wrote:
 On 15/05/2011 23:39, Andrei Alexandrescu wrote:
 On 05/15/2011 02:35 PM, Walter Bright wrote:
 On 5/15/2011 7:43 AM, Andrei Alexandrescu wrote:
 (That's also why we should think of a shorter name instead of
 parallel_algorithm... and with this parenthesis I instantly commanded
 the
 attention of the entire community.)
Leave it as std.parallel_algorithm: 1. people instantly know what it is 2. google will index it as "parallel algorithm", exactly what we want. URL names carry a lot of weight with google page rank Calling it "palgorithm" will get us nowhere in terms of visibility.
Sounds like a good argument. Andrei
Unfortunately, I'm inclined to agree. While palgorithm saves on typing and avoids the hideous underscore, parallel_algorithm is far easier to find, which can only help increase D's usage (not that many people are looking for it - http://www.google.com/trends?q=parallel+algorithm - although that's probably not indicative, considering most of those searching aren't programmers). Besides which, it's not like you'll be typing it all the time; if you're doing things that are likely to need parallel computation, then that one line of import isn't going to make much difference to the amount of typing you have to do.
If it's a big problem, then that's where alias comes in. However, if parallel_algorithm's function names are the same as algorithm's, that could result in a lot of name clashing and force you to either use alias or fully qualify the package name frequently. If the function signatures are different enough though, that won't actually end up being a problem. - Jonathan M Davis
May 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/15/2011 06:10 PM, Jonathan M Davis wrote:
 On 2011-05-15 15:48, Robert Clipsham wrote:
 On 15/05/2011 23:39, Andrei Alexandrescu wrote:
 On 05/15/2011 02:35 PM, Walter Bright wrote:
 On 5/15/2011 7:43 AM, Andrei Alexandrescu wrote:
 (That's also why we should think of a shorter name instead of
 parallel_algorithm... and with this parenthesis I instantly commanded
 the
 attention of the entire community.)
Leave it as std.parallel_algorithm: 1. people instantly know what it is 2. google will index it as "parallel algorithm", exactly what we want. URL names carry a lot of weight with google page rank Calling it "palgorithm" will get us nowhere in terms of visibility.
Sounds like a good argument. Andrei
Unfortunately, I'm inclined to agree. While palgorithm saves on typing and avoids the hideous underscore, parallel_algorithm is far easier to find, which can only help increase D's usage (not that many people are looking for it - http://www.google.com/trends?q=parallel+algorithm - although that's probably not indicative, considering most of those searching aren't programmers). Besides which, it's not like you'll be typing it all the time; if you're doing things that are likely to need parallel computation, then that one line of import isn't going to make much difference to the amount of typing you have to do.
If it's a big problem, then that's where alias comes in. However, if parallel_algorithm's function names are the same as algorithm's, that could result in a lot of name clashing and force you to either use alias or fully qualify the package name frequently. If the function signatures are different enough though, that won't actually end up being a problem. - Jonathan M Davis
The function signatures would be identical, underlining the fact that their actual semantics are identical. Andrei
May 15 2011
parent reply dsimcha <dsimcha yahoo.com> writes:
On 5/15/2011 8:06 PM, Andrei Alexandrescu wrote:
 The function signatures would be identical, underlining the fact that
 their actual semantics are identical.

 Andrei
Not so sure. For parallel computation, you'd probably want to have some additional, though optional, configurability for things like work unit size.
May 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/15/2011 07:11 PM, dsimcha wrote:
 On 5/15/2011 8:06 PM, Andrei Alexandrescu wrote:
 The function signatures would be identical, underlining the fact that
 their actual semantics are identical.

 Andrei
Not so sure. For parallel computation, you'd probably want to have some additional, though optional, configurability for things like work unit size.
Sure. Those are not per-call though. std.algorithm offers a ton of functions. The cleanest way to expose parallel equivalents is as functions with identical signatures in a different module. It's very much in the spirit of D. We should take a look at GNU's parallel mode, http://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode.html, which does the same (and I think they made the right decision). Andrei
May 15 2011
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On 2011-05-15 17:20, Andrei Alexandrescu wrote:
 On 05/15/2011 07:11 PM, dsimcha wrote:
 On 5/15/2011 8:06 PM, Andrei Alexandrescu wrote:
 The function signatures would be identical, underlining the fact that
 their actual semantics are identical.
 
 Andrei
Not so sure. For parallel computation, you'd probably want to have some additional, though optional, configurability for things like work unit size.
Sure. Those are not per-call though. std.algorithm offers a ton of functions. The cleanest way to expose parallel equivalents is as functions with identical signatures in a different module. It's very much in the spirit of D. We should take a look at GNU's parallel mode, http://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode.html, which does the same (and I think they made the right decision).
The problem is that then you have name clashes galore. If you ever import std.algorithm and std.parallel_algorithm in the same module (which is very likely to happen, I expect), then you're either going to have to use aliases all over the place, or give the whole module name for std.algorithm and std.parallel_algorithm with most function calls. That would _not_ be pleasant. Now, it may be worth that pain simply because then it's incredibly obvious that the parallel versions are the same (except parallel) and because it means that you could possibly just swap out std.algorithm for std.parallel_algorithm in some cases, but I think that we should seriously consider whether we want a whole module's worth of name clashes. - Jonathan M Davis
May 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/15/2011 09:17 PM, Jonathan M Davis wrote:
 On 2011-05-15 17:20, Andrei Alexandrescu wrote:
 On 05/15/2011 07:11 PM, dsimcha wrote:
 On 5/15/2011 8:06 PM, Andrei Alexandrescu wrote:
 The function signatures would be identical, underlining the fact that
 their actual semantics are identical.

 Andrei
Not so sure. For parallel computation, you'd probably want to have some additional, though optional, configurability for things like work unit size.
Sure. Those are not per-call though. std.algorithm offers a ton of functions. The cleanest way to expose parallel equivalents is as functions with identical signatures in a different module. It's very much in the spirit of D. We should take a look at GNU's parallel mode, http://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode.html, which does the same (and I think they made the right decision).
The problem is that then you have name clashes galore. If you ever import std.algorithm and std.parallel_algorithm in the same module (which is very likely to happen, I expect), then you're either going to have to use aliases all over the place, or give the whole module name for std.algorithm and std.parallel_algorithm with most function calls.
import std.algorithm;
static import std.parallel_algorithm;

That uses stuff in std.algorithm by default, and stuff in std.parallel_algorithm on demand. Perfect. Andrei
May 15 2011
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 16.05.2011 04:59, schrieb Andrei Alexandrescu:
 On 05/15/2011 09:17 PM, Jonathan M Davis wrote:
 On 2011-05-15 17:20, Andrei Alexandrescu wrote:
 On 05/15/2011 07:11 PM, dsimcha wrote:
 On 5/15/2011 8:06 PM, Andrei Alexandrescu wrote:
 The function signatures would be identical, underlining the fact that
 their actual semantics are identical.

 Andrei
Not so sure. For parallel computation, you'd probably want to have some additional, though optional, configurability for things like work unit size.
Sure. Those are not per-call though. std.algorithm offers a ton of functions. The cleanest way to expose parallel equivalents is as functions with identical signatures in a different module. It's very much in the spirit of D. We should take a look at GNU's parallel mode, http://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode.html, which does the same (and I think they made the right decision).
The problem is that then you have name clashes galore. If you ever import std.algorithm and std.parallel_algorithm in the same module (which is very likely to happen, I expect), then you're either going to have to use aliases all over the place, or give the whole module name for std.algorithm and std.parallel_algorithm with most function calls.
import std.algorithm; static import std.parallel_algorithm; That uses stuff in std.algorithm by default, and stuff in std.parallel_algorithm on demand. Perfect. Andrei
So you have to write std.parallel_algorithm.map() instead of map() all the time? IMHO that's ugly, especially with the long parallel_algorithm name. Why not just prefix all parallel versions with "p_"? Then you can easily change the used function (just add/remove "p_"), the name is kind of descriptive and it's still short. Cheers, - Daniel
May 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/15/2011 10:04 PM, Daniel Gibson wrote:
 Am 16.05.2011 04:59, schrieb Andrei Alexandrescu:
 On 05/15/2011 09:17 PM, Jonathan M Davis wrote:
 On 2011-05-15 17:20, Andrei Alexandrescu wrote:
 On 05/15/2011 07:11 PM, dsimcha wrote:
 On 5/15/2011 8:06 PM, Andrei Alexandrescu wrote:
 The function signatures would be identical, underlining the fact that
 their actual semantics are identical.

 Andrei
Not so sure. For parallel computation, you'd probably want to have some additional, though optional, configurability for things like work unit size.
Sure. Those are not per-call though. std.algorithm offers a ton of functions. The cleanest way to expose parallel equivalents is as functions with identical signatures in a different module. It's very much in the spirit of D. We should take a look at GNU's parallel mode, http://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode.html, which does the same (and I think they made the right decision).
The problem is that then you have name clashes galore. If you ever import std.algorithm and std.parallel_algorithm in the same module (which is very likely to happen, I expect), then you're either going to have to use aliases all over the place, or spell out the full module name for std.algorithm or std.parallel_algorithm with most function calls.
import std.algorithm; static import std.parallel_algorithm; That uses stuff in std.algorithm by default, and stuff in std.parallel_algorithm on demand. Perfect. Andrei
So you have to write std.parallel_algorithm.map() instead of map() all the time?
alias std.parallel_algorithm p; Andrei
May 15 2011
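Putting the suggestions in this subthread together, the idea can be sketched as follows. Note this is a sketch only: std.parallel_algorithm is a proposal made in this thread, not an actual Phobos module.

```d
// Sketch; std.parallel_algorithm is hypothetical (proposed in this thread).
import std.algorithm;                 // unqualified names: map, filter, ...
static import std.parallel_algorithm; // names usable only fully qualified

void example(int[] data)
{
    // Serial version by default:
    auto doubled = map!"a * 2"(data);

    // Parallel version on demand, spelled out in full:
    auto doubled2 = std.parallel_algorithm.map!"a * 2"(data);

    // Or shorten the qualified form with an alias, as suggested above:
    alias std.parallel_algorithm p;
    auto doubled3 = p.map!"a * 2"(data);
}
```

The point of the static import is that the parallel names never clash with the serial ones; they simply don't exist as unqualified symbols.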
next sibling parent reply Daniel Gibson <metalcaedes gmail.com> writes:
Am 16.05.2011 05:06, schrieb Andrei Alexandrescu:
 On 05/15/2011 10:04 PM, Daniel Gibson wrote:
 Am 16.05.2011 04:59, schrieb Andrei Alexandrescu:
 On 05/15/2011 09:17 PM, Jonathan M Davis wrote:
 On 2011-05-15 17:20, Andrei Alexandrescu wrote:
 On 05/15/2011 07:11 PM, dsimcha wrote:
 On 5/15/2011 8:06 PM, Andrei Alexandrescu wrote:
 The function signatures would be identical, underlining the fact that
 their actual semantics are identical.

 Andrei
Not so sure. For parallel computation, you'd probably want to have some additional, though optional, configurability for things like work unit size.
Sure. Those are not per-call, though. std.algorithm offers a ton of functions. The cleanest way to expose parallel equivalents is as functions with identical signatures in a different module. It's very much in the spirit of D. We should take a look at GNU parallel mode (http://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode.html), which does the same (and I think they made the right decision).
The problem is that then you have name clashes galore. If you ever import std.algorithm and std.parallel_algorithm in the same module (which is very likely to happen, I expect), then you're either going to have to use aliases all over the place, or spell out the full module name for std.algorithm or std.parallel_algorithm with most function calls.
import std.algorithm; static import std.parallel_algorithm; That uses stuff in std.algorithm by default, and stuff in std.parallel_algorithm on demand. Perfect. Andrei
So you have to write std.parallel_algorithm.map() instead of map() all the time?
alias std.parallel_algorithm p; Andrei
Right, I haven't thought about using alias. Cheers, - Daniel
May 15 2011
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/15/2011 8:18 PM, Daniel Gibson wrote:
 Right, I haven't thought about using alias.
alias has been such a huge win, I often wonder why other languages don't adopt it.
May 15 2011
parent Matthew Ong <ongbp yahoo.com> writes:
Hi Walter,

alias is indeed a very, very useful feature in D. It makes things easy and well
defined. We just need some auto-documentation, searchable tool to help new
developers find these aliases. For now, I am using grepWin to help figure things out.

Matthew Ong
May 16 2011
prev sibling next sibling parent reply "Alex_Dovhal" <alex_dovhal yahoo.com> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:iqq4jv$2eh3$2 digitalmars.com...
 So you have to write
 std.parallel_algorithm.map() instead of map() all the time?
alias std.parallel_algorithm p;
Not so easy: 1. This alias would break the possibility of UFCS. 2. p is a very short name, too good to use up - names like i, j, p, q are wanted for local variables. 3. One always has to remember to static import std.parallel_algorithm when also importing std.algorithm; but when std.algorithm isn't imported, he/she gets the same function names unqualified - and if he/she later adds import std.algorithm, all the previous code breaks!!! 4. While the two modules have the same function signatures, one can use std.parallel_algorithm everywhere, because it looks like a more general version of std.algorithm. Daniel's proposal IMHO looks good: p_map, pMap??
May 16 2011
parent bearophile <bearophileHUGS lycos.com> writes:
Alex_Dovhal:
 Daniel's proposal IMHO looks good: p_map, pMap?? 
pmap sounds good :-) Bye, bearophile
May 16 2011
prev sibling parent reply Mike Parker <aldacron gmail.com> writes:
On 5/16/2011 12:06 PM, Andrei Alexandrescu wrote:
 On 05/15/2011 10:04 PM, Daniel Gibson wrote:
 So you have to write
 std.parallel_algorithm.map() instead of map() all the time?
alias std.parallel_algorithm p; Andrei
Or this, which I prefer to alias: import p = std.parallel_algorithm;
May 16 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Mike Parker" <aldacron gmail.com> wrote in message 
news:iqrbht$1k1o$1 digitalmars.com...
 On 5/16/2011 12:06 PM, Andrei Alexandrescu wrote:
 On 05/15/2011 10:04 PM, Daniel Gibson wrote:
 So you have to write
 std.parallel_algorithm.map() instead of map() all the time?
alias std.parallel_algorithm p; Andrei
Or this, which I prefer to alias: import p = std.parallel_algorithm;
What would be the difference between... alias std.parallel_algorithm p; ...and... import p = std.parallel_algorithm; ..?
May 16 2011
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/16/11 5:03 PM, Nick Sabalausky wrote:
 "Mike Parker"<aldacron gmail.com>  wrote in message
 news:iqrbht$1k1o$1 digitalmars.com...
 On 5/16/2011 12:06 PM, Andrei Alexandrescu wrote:
 On 05/15/2011 10:04 PM, Daniel Gibson wrote:
 So you have to write
 std.parallel_algorithm.map() instead of map() all the time?
alias std.parallel_algorithm p; Andrei
Or this, which I prefer to alias: import p = std.parallel_algorithm;
What would be the difference between... alias std.parallel_algorithm p; ...and... import p = std.parallel_algorithm; ..?
One line. Andrei
May 16 2011
prev sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
 "Mike Parker" <aldacron gmail.com> wrote in message
 news:iqrbht$1k1o$1 digitalmars.com...
 
 On 5/16/2011 12:06 PM, Andrei Alexandrescu wrote:
 On 05/15/2011 10:04 PM, Daniel Gibson wrote:
 So you have to write
 std.parallel_algorithm.map() instead of map() all the time?
alias std.parallel_algorithm p; Andrei
Or this, which I prefer to alias: import p = std.parallel_algorithm;
What would be the difference between... alias std.parallel_algorithm p; ...and... import p = std.parallel_algorithm; ..?
Without private on the alias, it'll affect any module which imports the module that you created the alias on, and thanks to http://d.puremagic.com/issues/show_bug.cgi?id=6013 it'll happen anyway. I would also expect that using std.parallel_algorithm.func would still work with the alias whereas it wouldn't with the import p. - Jonathan M Davis
May 16 2011
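The difference Jonathan describes can be sketched like this, again with the hypothetical std.parallel_algorithm module standing in:

```d
// --- In module a.d (sketch; std.parallel_algorithm is hypothetical) ---
static import std.parallel_algorithm;
alias std.parallel_algorithm p;      // public alias: visible to importers of a
// std.parallel_algorithm.map!...   // full name still usable alongside p

// --- In module b.d ---
import a;
// p.map!... works here too, because the alias above is not private.

// With a renamed import instead, the shorthand stays local to one module:
//   import p = std.parallel_algorithm;  // p.map!... only inside this module,
//                                       // and the full name is hidden by the rename
```

So the alias leaks across module boundaries unless marked private, while the renamed import is purely local; that is the trade-off being discussed.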
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On 2011-05-15 19:59, Andrei Alexandrescu wrote:
 On 05/15/2011 09:17 PM, Jonathan M Davis wrote:
 On 2011-05-15 17:20, Andrei Alexandrescu wrote:
 On 05/15/2011 07:11 PM, dsimcha wrote:
 On 5/15/2011 8:06 PM, Andrei Alexandrescu wrote:
 The function signatures would be identical, underlining the fact that
 their actual semantics are identical.
 
 Andrei
Not so sure. For parallel computation, you'd probably want to have some additional, though optional, configurability for things like work unit size.
Sure. Those are not per-call, though. std.algorithm offers a ton of functions. The cleanest way to expose parallel equivalents is as functions with identical signatures in a different module. It's very much in the spirit of D. We should take a look at GNU parallel mode (http://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode.html), which does the same (and I think they made the right decision).
The problem is that then you have name clashes galore. If you ever import std.algorithm and std.parallel_algorithm in the same module (which is very likely to happen, I expect), then you're either going to have to use aliases all over the place, or spell out the full module name for std.algorithm or std.parallel_algorithm with most function calls.
import std.algorithm; static import std.parallel_algorithm; That uses stuff in std.algorithm by default, and stuff in std.parallel_algorithm on demand. Perfect.
I don't know about perfect. You still have the problem of conflicting names and being forced to fully specify them in many cases. It may work well enough though to justify giving them the exact same names, particularly given the benefits of possibly automatically replacing calls to std.algorithm with calls to std.parallel_algorithm (and vice versa) by changing the imports as well as making it obvious that the functions do essentially the same thing by having exactly the same names.

However, you then have the problem of it being harder to know whether you're dealing with functions from std.algorithm or std.parallel_algorithm when reading code. A look at the imports will tell you, but it would still be easier with separate names. So, I'm a bit divided on the matter.

However, I think the primary issue here is that we be aware of what the pros and cons are of naming the parallel functions with exactly the same names as their serial counterparts, and name clashes are generally a major con. But if the pros outweigh the cons, then it makes sense to give them the same names.

We already have several name clashes in Phobos (primarily between std.string and std.algorithm), and they're always annoying to deal with, so I'm generally biased against name clashes, but the module system is definitely designed to allow them and to give us the tools to get around the issues caused by them. I had forgotten about static imports though, so thanks for the reminder about them.

- Jonathan M Davis
May 15 2011
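For readers less familiar with D's module system, the clash-handling tools Jonathan alludes to look roughly like this. moda, modb, and f are placeholder names for illustration, not real Phobos modules:

```d
// Sketch of the tools D's module system offers for name clashes.
// moda and modb are hypothetical modules that both export a function f.

import moda;              // moda's f usable as plain f
static import modb;       // modb's names usable only fully qualified: modb.f

// Alternatives, each resolving the ambiguity a different way:
//   import modb : g = f;   // selective import with a rename: call it as g
//   import b = modb;       // renamed import: call it as b.f
//   alias modb.f pf;       // alias a single symbol under a new local name
```

Any of these keeps both modules usable in one file; the disagreement in the thread is only about which spelling is the least annoying at scale.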
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
It'd be nice if std.parallelism could somehow wrap std.algorithm
functions with a template instead of creating special names like
parallelCount, parallelMap, etc.. I'm thinking of something like:
static import std.algorithm;
alias Parallel!(std.algorithm.count, 10) count;  // 10 threads
auto negatives = count!"a < 0"(data);

Parallel would be a template which would know about std.algorithm
functions, and would statically disallow instantiating a template with
some algorithm which doesn't have a parallel implementation yet. Of
course someone would have to write these algorithms.

Then when you have a problem and want to eliminate that the cause is a
race condition you might do:

version(MultiCore)
{
     alias Parallel!(std.algorithm.count, 10) count;
}
else
{
    alias std.algorithm.count count;
}

I dunno, maybe that's just overkill and fantasizing (and probably
wouldn't work..).

Either way having parallel algorithms at the reach of a single import
sounds great, it could be one of D's top library features.
May 15 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/15/2011 3:48 PM, Robert Clipsham wrote:
 2. google will index it as "parallel algorithm", exactly what we want.
 URL names carry a lot of weight with google page rank
increase D's usage (not that many people are looking for it - http://www.google.com/trends?q=parallel+algorithm although that's probably not indicative, considering it contains the majority, who aren't programmers).
Yeah, but I'd like anyone actually searching for "parallel algorithm" to find the D library at the top or near the top of the list. If we make up a unique name, like "palgorithm", nobody is going to find it. Making up unique names is the right thing to do for branding and establishing a trademark. Otherwise, search engine friendly terms are far and away the better option.
May 15 2011
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/15/2011 06:39 PM, Walter Bright wrote:
 On 5/15/2011 3:48 PM, Robert Clipsham wrote:
 2. google will index it as "parallel algorithm", exactly what we want.
 URL names carry a lot of weight with google page rank
increase D's usage (not that many people are looking for it - http://www.google.com/trends?q=parallel+algorithm although that's probably not indicative, considering it contains the majority, who aren't programmers).
Yeah, but I'd like anyone actually searching for "parallel algorithm" to find the D library at the top or near the top of the list. If we make up a unique name, like "palgorithm", nobody is going to find it. Making up unique names is the right thing to do for branding and establishing a trademark. Otherwise, search engine friendly terms are far and away the better option.
That was much more the case for AltaVista than Google, which uses quite different approaches to ranking. Andrei
May 15 2011
next sibling parent Adam Richardson <simpleshot gmail.com> writes:
On Sun, May 15, 2011 at 8:22 PM, Andrei Alexandrescu <
SeeWebsiteForEmail erdani.org> wrote:

 On 05/15/2011 06:39 PM, Walter Bright wrote:

 On 5/15/2011 3:48 PM, Robert Clipsham wrote:

 2. google will index it as "parallel algorithm", exactly what we want.
 URL names carry a lot of weight with google page rank
increase D's usage (not that many people are looking for it - http://www.google.com/trends?q=parallel+algorithm although that's probably not indicative, considering it contains the majority, who aren't programmers).
Yeah, but I'd like anyone actually searching for "parallel algorithm" to find the D library at the top or near the top of the list. If we make up a unique name, like "palgorithm", nobody is going to find it. Making up unique names is the right thing to do for branding and establishing a trademark. Otherwise, search engine friendly terms are far and away the better option.
That was much more the case for AltaVista than Google, which uses quite different approaches to ranking. Andrei
Agreed. Name the library, module, function, etc. according to what you honestly believe leads to the best user experience. We can take steps to improve SEO if needed. Now, that's not to say that the suggested name is bad (std.parallel_algorithm), or that SEO considerations don't often overlap user-friendly labeling schemes. The point is that the web pages can be crafted to enhance findability if needed. Adam
May 15 2011
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/15/2011 5:22 PM, Andrei Alexandrescu wrote:
 On 05/15/2011 06:39 PM, Walter Bright wrote:
 Making up unique names is the right thing to do for branding and
 establishing a trademark. Otherwise, search engine friendly terms are
 far and away the better option.
That was much more the case for AltaVista than Google, which uses quite different approaches to ranking.
For ranking, perhaps. But for figuring out what the page is about (relevance), I think the url has a large influence.
May 15 2011
parent Kagamin <spam here.lot> writes:
Walter Bright Wrote:

 On 5/15/2011 5:22 PM, Andrei Alexandrescu wrote:
 On 05/15/2011 06:39 PM, Walter Bright wrote:
 Making up unique names is the right thing to do for branding and
 establishing a trademark. Otherwise, search engine friendly terms are
 far and away the better option.
That was much more the case for AltaVista than Google, which uses quite different approaches to ranking.
For ranking, perhaps. But for figuring out what the page is about (relevance), I think the url has a large influence.
Like this? http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt12ch31s03.html
May 16 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:iqop1p$2e5r$1 digitalmars.com...
 A user-level program could import std.parallel_algorithm and 
 std.algorithm, and choose which version to use by simply qualifying 
 function calls with the same signature.
I'd be *very* cautious about that sort of thing. Actually, no, I'd be against it. Using fully-qualified function calls destroys the ability to use member call syntax which is one of my absolute favorite D features (in large part because it reduces excess parenthesis-nesting and reduces the amount of code that's executed in the opposite order it's written). Unfortunately, member call syntax seems to have become completely forgotten since about a few years ago. And instead of gaining the ability to use it on more types as was supposed to happen, the number of places it can be used has been *decreasing*.
May 15 2011
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 05/15/2011 03:13 PM, Nick Sabalausky wrote:
 "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org>  wrote in message
 news:iqop1p$2e5r$1 digitalmars.com...
 A user-level program could import std.parallel_algorithm and
 std.algorithm, and choose which version to use by simply qualifying
 function calls with the same signature.
I'd be *very* cautious about that sort of thing. Actually, no, I'd be against it. Using fully-qualified function calls destroys the ability to use member call syntax which is one of my absolute favorite D features (in large part because it reduces excess parenthesis-nesting and reduces the amount of code that's executed in the opposite order it's written). Unfortunately, member call syntax seems to have become completely forgotten since about a few years ago. And instead of gaining the ability to use it on more types as was supposed to happen, the number of places it can be used has been *decreasing*.
Taken care of. You do static import on the rarely-used lib. Andrei
May 15 2011
prev sibling next sibling parent reply Gilbert Dawson <why needed.com> writes:
Russel Winder Wrote:

Actors, dataflow, CSP and data parallelism are all
subtly different and serve different purposes in different applications
 and systems.  Having just one model of concurrency and parallelism stunts
 usage.  This lesson is rapidly being learned in Scala.
You could shed some light on this if you're an expert. I don't know what's so different between actors and CSP. How does D support dataflow concurrency?
May 15 2011
parent reply Russel Winder <russel russel.org.uk> writes:
On Sun, 2011-05-15 at 05:53 -0400, Gilbert Dawson wrote:
[ . . . ]
 You could shed some light on this if you're an expert. I don't know what's so different between actors and CSP. How does D support dataflow concurrency?
I like to think of myself as an expert on this . . . you'll have to ask others if I actually am :-)

Actors -- An actor is a self-contained process that communicates only by sending and receiving messages. Each actor has a single message queue that it processes messages from in its own time. Message sending is asynchronous.

Dataflow -- a program is a collection of operators that are connected by channels. Operators can have many output and many input channels. Operators are event listeners; a computation is triggered by a certain state of the input. Message sending on channels is asynchronous.

CSP -- a program is a collection of single-threaded processes. Each process can have many input and many output channels. Channels have no buffering; all message passing is synchronous (rendezvous).

Data Parallelism -- a computation is (effectively) the synchronous evolution of an array where each element evolves in parallel.

There is also transactional memory, but unless it is supported in hardware, there are some doubts about the ability of this technique to scale -- though I have yet to find any real data dealing with this in either positive or negative light. Although software transactional memory (STM) is getting some airplay in the functional programming community (along with data parallelism), the HPC community is not taking it up -- though this may be inertia. Personally I am not a fan of STM; it tries to make shared-memory multi-threading work when it would be better to use a message passing architecture in the first place.

Scala (with or without Akka) spearheaded the resurgence of the actor model, though Erlang has been successfully using it for many years. D has picked up on it as well; it is part of std.concurrency and std.parallelism. As far as I know D has no support for Dataflow and CSP.

It is possible to implement actors with dataflow and vice versa, but it is much better to treat the two as needing two implementations founded on a shared set of tools to handle message passing and locking. GPars is bringing all of the above models to the JVM. Its actor model and dataflow support is its own, CSP is provided via an adaptor to JCSP (Kent University -- the one in the UK, that is), and STM via an adapter to Multiverse.

Why do all this? To get rid of shared memory. No shared memory, no need for locking, semaphores, monitors, etc. Message passing doesn't guarantee no deadlock or livelock, but it is much, much, much better than fighting shared-memory multi-threading.

I hope this helps.
--
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
May 15 2011
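Of the four models above, the actor-style one is what D's standard library exposes. A minimal message-passing sketch with std.concurrency (the spawn/send/receive API is real Phobos; the particular program is illustrative):

```d
import std.concurrency;
import std.stdio;

// A tiny actor: receives ints, replies with their doubles, stops on "done".
void worker()
{
    bool running = true;
    while (running)
    {
        receive(
            (int n, Tid sender) { send(sender, n * 2); },
            (string s) { if (s == "done") running = false; }
        );
    }
}

void main()
{
    auto tid = spawn(&worker);       // each actor runs in its own thread
    send(tid, 21, thisTid);          // asynchronous send; no shared state
    auto answer = receiveOnly!int(); // blocks until the worker replies
    writeln(answer);                 // 42
    send(tid, "done");
}
```

Note there is no shared memory here at all; the two threads communicate only through their message queues, which is exactly the property Russel argues for.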
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/15/2011 5:24 AM, Russel Winder wrote:
 On Sun, 2011-05-15 at 05:53 -0400, Gilbert Dawson wrote:
 [ . . . ]
 You could shed some light on this if you're an expert. I don't know what's so
different between actors and CSP. How does D support dataflow concurrency?
I like to think of myself as an expert on this . . . you'll have to ask others if I actually am :-)
I asked google, and they said 9,750 hits on Russel Winder Data Flow Concurrency !!
May 15 2011
parent reply Russel Winder <russel russel.org.uk> writes:
On Sun, 2011-05-15 at 13:41 -0700, Walter Bright wrote:
 On 5/15/2011 5:24 AM, Russel Winder wrote:
 On Sun, 2011-05-15 at 05:53 -0400, Gilbert Dawson wrote:
 [ . . . ]
 You could shed some light on this if you're an expert. I don't know what's so different between actors and CSP. How does D support dataflow concurrency?
 I like to think of myself as an expert on this . . . you'll have to ask
 others if I actually am :-)
 I asked google, and they said 9,750 hits on Russel Winder Data Flow Concurrency !!
Is that good or bad ;-)
--
Russel.
May 16 2011
parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/16/2011 11:51 AM, Russel Winder wrote:
 On Sun, 2011-05-15 at 13:41 -0700, Walter Bright wrote:
 I like to think of myself as an expert on this . . . you'll have to ask
 others if I actually am :-)
I asked google, and they said 9,750 hits on Russel Winder Data Flow Concurrency !!
Is that good or bad ;-)
I'd say good, since Russel Winder Talentless Hack only turned up a handful of hits :-)
May 16 2011
prev sibling parent reply =?UTF-8?B?IkrDqXLDtG1lIE0uIEJlcmdlciI=?= <jeberger free.fr> writes:
Russel Winder wrote:
 On Sun, 2011-05-15 at 01:12 -0400, Nick Sabalausky wrote:
 [ . . . ]
 Every time I look at Go^H^HIssue 9, I can't help wondering why there's
 people out there who apparently assume that just because someone did
 something significant 40 years ago somehow implies they have the Midas
 touch.
Indeed. Having said that, whatever may be wrong with Go (and actually I
think there is a lot), the Channels/Goroutines system is a significant
improvement in programming language technology.
Sure is, however it was not invented by Go. For example, the Felix programming language has had something similar since at least 2005.

	Jerome
--
mailto:jeberger free.fr          http://jeberger.free.fr
Jabber: jeberger jabber.fr
May 15 2011
parent Russel Winder <russel russel.org.uk> writes:
On Sun, 2011-05-15 at 18:28 +0200, "Jérôme M. Berger" wrote:
 Russel Winder wrote:
 On Sun, 2011-05-15 at 01:12 -0400, Nick Sabalausky wrote:
 [ . . . ]
 Every time I look at Go^H^HIssue 9, I can't help wondering why there's
 people out there who apparently assume that just because someone did
 something significant 40 years ago somehow implies they have the Midas
 touch.
 Indeed. Having said that, whatever may be wrong with Go (and actually I
 think there is a lot), the Channels/Goroutines system is a significant
 improvement in programming language technology.
 Sure is, however it was not invented by Go. For example, the Felix
 programming language has had something similar since at least 2005.
This risks turning into the Monty Python "Yorkshireman Sketch": I and my team invented and implemented a fully parallel, object-oriented, message-passing, active-object with transactional state language in 1987. Sadly, by 1990, funding was only available for C++, as funding authorities knew that C++ had won the language wars. So to stay funded we had to invent UC++. Another language no-one has ever heard of except the inventors.

My serious point here, though, is that it doesn't actually matter who was first to the model; what Go has done is raise it in the consciousness of the masses. There may be elements of fashionism, even fanboiism, in the fact that Go has managed to achieve this position in the mass consciousness, but it has done that. Having a CSP implementation other than JCSP or C++CSP2 (which very few have ever heard of) is good for the evolution of the practice of concurrency and parallelism. C++ will undoubtedly gain these ways of structuring code, as the JVM-based languages already have. At which point, will D have gained enough market consciousness to stop Go and C++ becoming the de facto standard languages -- to the detriment of quality development?
--
Russel.
May 15 2011
prev sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
 On May 16, 2011, at 10:32 AM, Jonathan M Davis wrote:
 bearophile wrote:
 Sean Kelly:
 std.paralellogrithm ;-)
The module name I like more so far is the simple "parallel_algorithm". But I don't mind the "p" prefix for the parallel function names. Bye, bearophile
It is also possible to have both "p" prefix and identical names. The latter would be mere aliases and would be activated by a version declaration or similar. I think both possibilities have their merits, so letting the user choose would be an option. parallel_algorithm is to the point, but not very concise. I do not think we can do very much better.
That doesn't work. As soon as the non-p versions exist, you have name clashes. So, if the point of starting the function names with p is to avoid name clashes, then you've gained nothing. Besides, we generally try to avoid aliases like that in Phobos. Such a scheme would never be accepted.
I don't foresee a simple solution to the 'p' vs. non-'p' problem. It's a complete mess.
At least it's not p vs np. ;) - Jonathan M Davis
May 16 2011