digitalmars.D - concurrency
- Denton Cockburn (4/4) Feb 02 2008 Ok, Walter's said previously (I think) that he's going to wait to see wh...
- Craig Black (13/17) Feb 03 2008 Walter also has said recently that he wants to implement automatic
- Daniel Lewis (6/19) Feb 03 2008 Craig, I'm not sure if you noticed that AMD and Intel had "HT" for a lon...
- Craig Black (14/42) Feb 03 2008 Yes everything is going multi-threaded and multi-core. Any feature that...
- Christopher Wright (8/51) Feb 04 2008 I'm curious how automatic parallelization might work with delegates. It
- Craig Black (10/63) Feb 04 2008 Good question. Yes, it would seem necessary that delegates be pure or
- Christopher Wright (11/16) Feb 04 2008 A static if or two in the event broker would solve it. There would be a
- Craig Black (6/22) Feb 05 2008 It might not be as fancy as using static if, but it might be simpler to ...
- Robert Fraser (17/22) Feb 03 2008 There were two solutions for concurrent programming proposed at the D
- Sean Kelly (14/39) Feb 03 2008 STM actually offers worse performance than lock-based programming, but
- Bedros Hanounik (15/60) Feb 03 2008 I think the best way to tackle concurrency is to have two types of funct...
- Sean Kelly (3/10) Feb 03 2008 This is basically how futures work. It's a pretty useful approach.
- interessted (3/16) Feb 04 2008 hi,
- Daniel Lewis (5/6) Feb 04 2008 Agreed. Steve Dekorte has been working with them for a long time and in...
- Sean Kelly (8/14) Feb 04 2008 Actually, it's entirely possible to do lock-free allocation and
- Bedros Hanounik (5/22) Feb 04 2008 Guys,
- Sean Kelly (4/33) Feb 05 2008 There's also a presentation about how it might apply to D here:
- Jason House (2/15) Feb 04 2008 I've never heard of that. Does anyone have a good link for extra detail...
- downs (11/29) Feb 04 2008 Basically, it comes down to a function that takes a delegate dg, and run...
- Joel C. Salomon (18/25) Feb 04 2008 -----BEGIN PGP SIGNED MESSAGE-----
- downs (14/29) Feb 04 2008 Heh.
- Sean Kelly (5/19) Feb 04 2008 Futures are basically Herb Sutter's rehashing of Hoare's CSP model.
- Mike Koehmstedt (3/8) Feb 09 2008 How does garbage collection currently work in a multi-processor environm...
- Robert Fraser (2/15) Feb 09 2008 It pauses all threads on all processors.
Ok, Walter's said previously (I think) that he's going to wait to see what C++ does in regards to multicore concurrency. Ignoring this for now, for fun, what ideas do you guys have regarding multicore concurrency?
Feb 02 2008
"Denton Cockburn" <diboss hotmail.com> wrote in message news:pan.2008.02.03.02.33.36.603288 hotmail.com...Ok, Walter's said previously (I think) that he's going to wait to see what C++ does in regards to multicore concurrency. Ignoring this for now, for fun, what ideas do you guys have regarding multicore concurrency?Walter also has said recently that he wants to implement automatic parallelization, and is working on features that will support this (const, invariant, pure). I think Andrei is pushing this. I have my doubts that this will be useful for most programs. I think that to leverage this automatic parallelization, you will have to code in a functional style, or build your application using pure functions. Granularity will also probably be an issue. Because of these drawbacks, automatic parallelization may not be so automatic, but may require careful programming, just like manual parallelization. But maybe I'm wrong and it will be the greatest thing ever. -Craig
Feb 03 2008
Craig Black Wrote:Walter also has said recently that he wants to implement automatic parallelization, and is working on features to will support this (const, invariant, pure). I think Andrei is pushing this. I have my doubts that this will be useful for most programs. I think that to leverage this automatic parallelization, you will have to code in a functional style, or build your application using pure functions. Granularity will also probably be an issue. Because of these drawbacks, automatic parallelization may not be so automatic, but may require careful programming, just like manual parallelization. But maybe I'm wrong and it will be the greatest thing ever. -CraigCraig, I'm not sure if you noticed that AMD and Intel had "HT" for a long time and are now pushing multicore on desktop users now, as well as servers. Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging. D is moving towards supporting some assertions that data isn't changed by an algorithm, and/or that it must not be changed. That doesn't require any more work than deciding whether something should be constant, and then making it compile. I really have no idea what the approach will be for parallelization, but if Walter's waiting for C++ to figure it out then it'll be better than what they have. Regards, Dan
Feb 03 2008
"Daniel Lewis" <murpsoft hotmail.com> wrote in message news:fo5vdf$2q2e$1 digitalmars.com...Craig Black Wrote:Yes everything is going multi-threaded and multi-core. Any feature that aids programmers in writing multi-threaded software is a plus. However, I'm skeptical that a compiler will be able to take code that is written without any consideration for threading, and parallelize it.Walter also has said recently that he wants to implement automatic parallelization, and is working on features to will support this (const, invariant, pure). I think Andrei is pushing this. I have my doubts that this will be useful for most programs. I think that to leverage this automatic parallelization, you will have to code in a functional style, or build your application using pure functions. Granularity will also probably be an issue. Because of these drawbacks, automatic parallelization may not be so automatic, but may require careful programming, just like manual parallelization. But maybe I'm wrong and it will be the greatest thing ever. -CraigCraig, I'm not sure if you noticed that AMD and Intel had "HT" for a long time and are now pushing multicore on desktop users now, as well as servers. Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging.D is moving towards supporting some assertions that data isn't changed by an algorithm, and/or that it must not be changed. That doesn't require any more work than deciding whether something should be constant, and then making it compile.Consider that the compiler is relying on pure functions for parallelization. If (1) the programmer doesn't write any pure functions, or (2) the granularity of the pure function does not justify the overhead of parallelization, then there's no benefit. 
Thus, careful consideration will be required to leverage automatic parallelization.I really have no idea what the approach will be for parallelization, but if Walter's waiting for C++ to figure it out then it'll be better than what they have.I guess we can wait and see what happens. It just seems that everyone is anticipating a silver bullet that may never arrive. -Craig
Feb 03 2008
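Craig's point about pure functions is easier to see with a concrete sketch. The following is a hypothetical D example (at the time of this thread `pure` was still only a proposed feature, so the semantics shown here are an assumption, and the function and variable names are invented):

```d
// Hypothetical sketch: a pure function touches no global state,
// so independent calls could safely run on separate cores.
pure int sumSquares(const(int)[] data)
{
    int total = 0;
    foreach (x; data)
        total += x * x;
    return total;
}

void main()
{
    auto a = [1, 2, 3];
    auto b = [4, 5, 6];
    // No data dependency between the two calls below, so an
    // auto-parallelizing compiler would be free to overlap them.
    int r = sumSquares(a) + sumSquares(b);
    assert(r == 14 + 77);
}
```

Craig's granularity caveat applies directly: unless `sumSquares` does enough work per call, the overhead of dispatching it to another core outweighs any gain.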
Craig Black wrote:"Daniel Lewis" <murpsoft hotmail.com> wrote in message news:fo5vdf$2q2e$1 digitalmars.com...I'm curious how automatic parallelization might work with delegates. It probably won't, unless you put the 'pure' keyword in the signature of the delegates. In that case, I hope that pure delegates are implicitly convertible to non-pure delegates. I was wondering because I work with a highly event-driven application in subscribers probably modify data that they don't own.Craig Black Wrote:Yes everything is going multi-threaded and multi-core. Any feature that aids programmers in writing multi-threaded software is a plus. However, I'm skeptical that a compiler will be able to take code that is written without any consideration for threading, and parallelize it.Walter also has said recently that he wants to implement automatic parallelization, and is working on features to will support this (const, invariant, pure). I think Andrei is pushing this. I have my doubts that this will be useful for most programs. I think that to leverage this automatic parallelization, you will have to code in a functional style, or build your application using pure functions. Granularity will also probably be an issue. Because of these drawbacks, automatic parallelization may not be so automatic, but may require careful programming, just like manual parallelization. But maybe I'm wrong and it will be the greatest thing ever. -CraigCraig, I'm not sure if you noticed that AMD and Intel had "HT" for a long time and are now pushing multicore on desktop users now, as well as servers. Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging.D is moving towards supporting some assertions that data isn't changed by an algorithm, and/or that it must not be changed. 
That doesn't require any more work than deciding whether something should be constant, and then making it compile.Consider that the compiler is relying on pure functions for parallelization. If (1) the programmer doesn't write any pure functions, or (2) the granularity of the pure function does not justify the overhead of parallelization, then there's no benefit. Thus, careful consideration will be required to leverage automatic parallelization.
Feb 04 2008
"Christopher Wright" <dhasenan gmail.com> wrote in message news:fo74ij$2asd$1 digitalmars.com...Craig Black wrote:Good question. Yes, it would seem necessary that delegates be pure or non-pure. And I agree, pure should convert easily to non-pure, but not vice-versa."Daniel Lewis" <murpsoft hotmail.com> wrote in message news:fo5vdf$2q2e$1 digitalmars.com...I'm curious how automatic parallelization might work with delegates. It probably won't, unless you put the 'pure' keyword in the signature of the delegates. In that case, I hope that pure delegates are implicitly convertible to non-pure delegates.Craig Black Wrote:Yes everything is going multi-threaded and multi-core. Any feature that aids programmers in writing multi-threaded software is a plus. However, I'm skeptical that a compiler will be able to take code that is written without any consideration for threading, and parallelize it.Walter also has said recently that he wants to implement automatic parallelization, and is working on features to will support this (const, invariant, pure). I think Andrei is pushing this. I have my doubts that this will be useful for most programs. I think that to leverage this automatic parallelization, you will have to code in a functional style, or build your application using pure functions. Granularity will also probably be an issue. Because of these drawbacks, automatic parallelization may not be so automatic, but may require careful programming, just like manual parallelization. But maybe I'm wrong and it will be the greatest thing ever. -CraigCraig, I'm not sure if you noticed that AMD and Intel had "HT" for a long time and are now pushing multicore on desktop users now, as well as servers. Const and pure are also relevant to live application migration, embedded application interfacing, optimization, and debugging.D is moving towards supporting some assertions that data isn't changed by an algorithm, and/or that it must not be changed. 
That doesn't require any more work than deciding whether something should be constant, and then making it compile.Consider that the compiler is relying on pure functions for parallelization. If (1) the programmer doesn't write any pure functions, or (2) the granularity of the pure function does not justify the overhead of parallelization, then there's no benefit. Thus, careful consideration will be required to leverage automatic parallelization.I was wondering because I work with a highly event-driven application in subscribers probably modify data that they don't own.In that case, it may be beneficial to somehow separate parallel and sequential events, perhaps with separate event queues. However, it would require that each event knows whether it is "pure" or not, so that it is placed on the appropriate queue. -Craig
Feb 04 2008
Craig Black wrote:In that case, it may be beneficial to somehow separate parallel and sequential events, perhaps with separate event queues. However, it would require that each event knows whether it is "pure" or not, so that it is placed on the appropriate queue.A static if or two in the event broker would solve it. There would be a method: void subscribe (T)(EventTopic topic, T delegate) { static assert (is (T == delegate)); static if (is (T == pure)) { // add to the pure event subscribers for auto parallelization } else { // add to the impure ones } }-Craig
Feb 04 2008
"Christopher Wright" <dhasenan gmail.com> wrote in message news:fo8o62$2m1t$1 digitalmars.com...Craig Black wrote:It might not be as fancy as using static if, but it might be simpler to use overloading (if the syntax will support it). void subscribe(EventTopic topic, void delegate() del) { ... } void subscribe(EventTopic topic, pure void delegate() del) { ... }In that case, it may be beneficial to somehow separate parallel and sequential events, perhaps with separate event queues. However, it would require that each event knows whether it is "pure" or not, so that it is placed on the appropriate queue.A static if or two in the event broker would solve it. There would be a method: void subscribe (T)(EventTopic topic, T delegate) { static assert (is (T == delegate)); static if (is (T == pure)) { // add to the pure event subscribers for auto parallelization } else { // add to the impure ones } }-Craig
Feb 05 2008
Denton Cockburn wrote:Ok, Walter's said previously (I think) that he's going to wait to see what C++ does in regards to multicore concurrency. Ignoring this for now, for fun, what ideas do you guys have regarding multicore concurrency?There were two solutions for concurrent programming proposed at the D conference. Walter talked about automatic parallelization made available by functional programming styles, which Craig & Daniel are discussing. The other solution presented, which I have seen comparatively little discussion in the NG about, was software transactional memory. I don't think that STM necessarily leads to simpler or more readable code than lock-based concurrency, however I think STM has two distinct advantages over these traditional methods: 1. possibly better performance 2. better reliability (i.e. no need to worry about deadlocks, etc.) I think an ideal solution is to combine the two techniques. If functional-style programming is emphasized, and STM is used where state-based programming makes more sense, it frees the programmer to write code without worrying about the complexities of synchronization. That said, I never found traditional concurrency that hard, especially within frameworks like SEDA, etc.
Feb 03 2008
Robert Fraser wrote:Denton Cockburn wrote:STM actually offers worse performance than lock-based programming, but in exchange gains a guarantee that the app won't deadlock (though I believe it could theoretically livelock, at least with some STM strategies). Also it's simply easier for most people to think in terms of transactions. For the average application, I think it's a preferable option to lock-based programming. However, I think even STM will only get us so far, and eventually we're going to need to move to more naturally parallelizable methods of programming. The 'pure' functions and such in D are an attempt to get some of this without losing the imperative syntax that is so popular today.Ok, Walter's said previously (I think) that he's going to wait to see what C++ does in regards to multicore concurrency. Ignoring this for now, for fun, what ideas do you guys have regarding multicore concurrency?There were two solutions for concurrent programming proposed at the D conference. Walter talked about automatic parallelization made available functional programming styles, which Craig & Daniel are discussing. The other solution presented, which I have seen comparatively little discussion in the NG about, was software transactional memory. I don't think that STM necessarily leads to simpler or more readable code than lock-based concurrency, however I think STM has two distinct advantages over these traditional methods: 1. possibly better performance 2. better reliability (i.e. no need to worry about deadlocks, etc.)I think an ideal solution is two combine the two techniques. 
If functional-style programming is emphasized, and STM is used where state-based programming makes more sense, it frees the programmer to write code without worrying about the complexities of synchronization.If we're talking about D, then I agree.That said, I never found traditional concurrency that hard, especially within frameworks like SEDA, etc.Me either, but from what I've heard, this is not typical.
Feb 03 2008
I think the best way to tackle concurrency is to have two types of functions blocking functions (like in the old sequential code execution) and non-blocking functions (the new parallel code execution) for non-blocking functions, the function returns additional type which is true when function execution is completed for example a = foo(); // para1_foo and para2_foo are completely independent and executed in parallel b = para1_foo(); c = para2_foo(); // wait here for both functions to finish // another syntax could be used also if (b.done and c.done) continue: I'm not sure about supporting non-pure functions (or allowing accessing global vars); it's just too ugly for no good reason. Sean Kelly Wrote:Robert Fraser wrote:Denton Cockburn wrote:STM actually offers worse performance than lock-based programming, but in exchange gains a guarantee that the app won't deadlock (though I believe it could theoretically livelock, at least with some STM strategies). Also it's simply easier for most people to think in terms of transactions. For the average application, I think it's a preferable option to lock-based programming. However, I think even STM will only get us so far, and eventually we're going to need to move to more naturally parallelizable methods of programming. The 'pure' functions and such in D are an attempt to get some of this without losing the imperative syntax that is so popular today.Ok, Walter's said previously (I think) that he's going to wait to see what C++ does in regards to multicore concurrency. Ignoring this for now, for fun, what ideas do you guys have regarding multicore concurrency?There were two solutions for concurrent programming proposed at the D conference. Walter talked about automatic parallelization made available functional programming styles, which Craig & Daniel are discussing. The other solution presented, which I have seen comparatively little discussion in the NG about, was software transactional memory. 
I don't think that STM necessarily leads to simpler or more readable code than lock-based concurrency, however I think STM has two distinct advantages over these traditional methods: 1. possibly better performance 2. better reliability (i.e. no need to worry about deadlocks, etc.)I think an ideal solution is two combine the two techniques. If functional-style programming is emphasized, and STM is used where state-based programming makes more sense, it frees the programmer to write code without worrying about the complexities of synchronization.If we're talking about D, then I agree.That said, I never found traditional concurrency that hard, especially within frameworks like SEDA, etc.Me either, but from what I've heard, this is not typical.
Feb 03 2008
Bedros Hanounik wrote:I think the best way to tackle concurrency is to have two types of functions blocking functions (like in the old sequential code execution) and non-blocking functions (the new parallel code execution) for non-blocking functions, the function returns additional type which is true when function execution is completedThis is basically how futures work. It's a pretty useful approach. Sean
Feb 03 2008
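For readers unfamiliar with the pattern: Bedros's "non-blocking function plus a done flag" is essentially a future. As a hedged illustration using D's later std.parallelism module (which did not exist when this thread was written; `slowSum` is an invented stand-in):

```d
import std.parallelism;

// Stand-in for an expensive, independent computation.
int slowSum(int n)
{
    int total = 0;
    foreach (i; 0 .. n)
        total += i;
    return total;
}

void main()
{
    // Start both computations without blocking the caller...
    auto b = task!slowSum(1_000);
    auto c = task!slowSum(2_000);
    taskPool.put(b);
    taskPool.put(c);

    // ...and rendezvous only when the results are needed,
    // mirroring the `b.done and c.done` wait in the proposal.
    int result = b.yieldForce + c.yieldForce;
    assert(result == slowSum(1_000) + slowSum(2_000));
}
```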
hi, wouldn't it be okay to do it like in 'Active Oberon' (http://bluebottle.ethz.ch/languagereport/ActiveReport.html) or 'Zennon' (http://www.oberon.ethz.ch/oberon.net/)? Sean Kelly Wrote:Bedros Hanounik wrote:I think the best way to tackle concurrency is to have two types of functions blocking functions (like in the old sequential code execution) and non-blocking functions (the new parallel code execution) for non-blocking functions, the function returns additional type which is true when function execution is completedThis is basically how futures work. It's a pretty useful approach. Sean
Feb 04 2008
Sean Kelly Wrote:This is basically how futures work. It's a pretty useful approach.Agreed. Steve Dekorte has been working with them for a long time and integrated them into his iolanguage. He found he could regularly get comparable performance to Apache even in a pure OO framework (even Number!?) just 'cause his parallelization was better. I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory. Once memory is allocated, it's "yours" to do with as you please. I haven't looked at this for a few months but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever. Regards, Dan
Feb 04 2008
Daniel Lewis wrote:Sean Kelly Wrote:Actually, it's entirely possible to do lock-free allocation and deletion. HOARD does lock-free allocation, for example, and lock-free deletion would be a matter of appending the block to a lock-free slist on the appropriate heap. A GC could do basically the same thing, but collections would be a bit more complex. I've considered writing such a GC, but it's an involved project and I simply don't have the time. SeanThis is basically how futures work. It's a pretty useful approach.Agreed. Steve Dekorte has been working with them for a long time and integrated them into his iolanguage. He found he could regularly get comparable performance to Apache even in a pure OO framework (even Number!?) just 'cause his parallelization was better. I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory. Once memory is allocated, it's "yours" to do with as you please. I haven't looked at this for a few months but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever.
Feb 04 2008
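Sean's "append the block to a lock-free slist" step can be sketched with a compare-and-swap loop. This is a schematic example only (the names are invented, and D's `shared` qualifier rules are glossed over with casts):

```d
import core.atomic;

// A freed block doubles as a list node.
struct FreeNode
{
    FreeNode* next;
}

// Head of the per-heap free list.
shared FreeNode* freeList;

// Lock-free push: retry the CAS until no other thread has
// moved the head between our read and our write.
void pushFree(FreeNode* node)
{
    FreeNode* old;
    do
    {
        old = cast(FreeNode*) atomicLoad(freeList);
        node.next = old;  // link ahead of the current head
    } while (!cas(&freeList,
                  cast(shared FreeNode*) old,
                  cast(shared FreeNode*) node));
}
```

Deletion becomes a single push with no mutex held, which is what makes the approach attractive for a concurrent allocator or GC.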
Guys, take a look at transactional memory concept; very interesting type of locking (or should I say sharing) of memory allocations. http://en.wikipedia.org/wiki/Software_transactional_memory -Bedros Sean Kelly Wrote:Daniel Lewis wrote:Sean Kelly Wrote:Actually, it's entirely possible to do lock-free allocation and deletion. HOARD does lock-free allocation, for example, and lock-free deletion would be a matter of appending the block to a lock-free slist on the appropriate heap. A GC could do basically the same thing, but collections would be a bit more complex. I've considered writing such a GC, but it's an involved project and I simply don't have the time. SeanThis is basically how futures work. It's a pretty useful approach.Agreed. Steve Dekorte has been working with them for a long time and integrated them into his iolanguage. He found he could regularly get comparable performance to Apache even in a pure OO framework (even Number!?) just 'cause his parallelization was better. I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory. Once memory is allocated, it's "yours" to do with as you please. I haven't looked at this for a few months but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever.
Feb 04 2008
There's also a presentation about how it might apply to D here: http://s3.amazonaws.com/dconf2007/DSTM.ppt http://www.relisoft.com/D/STM_pptx_files/v3_document.htm Bedros Hanounik wrote:Guys, take a look at transactional memory concept; very interesting type of locking (or should I say sharing) of memory allocations. http://en.wikipedia.org/wiki/Software_transactional_memory -Bedros Sean Kelly Wrote:Daniel Lewis wrote:Sean Kelly Wrote:Actually, it's entirely possible to do lock-free allocation and deletion. HOARD does lock-free allocation, for example, and lock-free deletion would be a matter of appending the block to a lock-free slist on the appropriate heap. A GC could do basically the same thing, but collections would be a bit more complex. I've considered writing such a GC, but it's an involved project and I simply don't have the time. SeanThis is basically how futures work. It's a pretty useful approach.Agreed. Steve Dekorte has been working with them for a long time and integrated them into his iolanguage. He found he could regularly get comparable performance to Apache even in a pure OO framework (even Number!?) just 'cause his parallelization was better. I personally believe the best way though is to take advantage of lock instructions for *allocation* of memory. Once memory is allocated, it's "yours" to do with as you please. I haven't looked at this for a few months but I remember seeing an algorithm that did first-through concurrency loop-locks for malloc and free and had practically no overhead ever.
Feb 05 2008
Sean Kelly Wrote:Bedros Hanounik wrote:I've never heard of that. Does anyone have a good link for extra detail on futures?I think the best way to tackle concurrency is to have two types of functions blocking functions (like in the old sequential code execution) and non-blocking functions (the new parallel code execution) for non-blocking functions, the function returns additional type which is true when function execution is completedThis is basically how futures work. It's a pretty useful approach. Sean
Feb 04 2008
Jason House wrote:Sean Kelly Wrote:Basically, it comes down to a function that takes a delegate dg, and runs it on a threadpool, returning a wrapper object. The wrapper object can be evaluated, in which case it blocks until the original dg has returned a value. This value is then returned by the wrapper, as well as cached. The idea is that you create a future for a value that you know you'll need soon, then do some other task and query it later. :) scrapple.tools' ThreadPool class has a futures implementation. Here's an example: auto t = new Threadpool(2); auto f = t.future(&do_complicated_calculation); auto g = t.future(&do_complicated_calculation2); return f() + g(); --downsBedros Hanounik wrote:I've never heard of that. Does anyone have a good link for extra detail on futures?I think the best way to tackle concurrency is to have two types of functions blocking functions (like in the old sequential code execution) and non-blocking functions (the new parallel code execution) for non-blocking functions, the function returns additional type which is true when function execution is completedThis is basically how futures work. It's a pretty useful approach. Sean
Feb 04 2008
downs wrote:Jason House wrote:… while Sean Kelly wrote:I've never heard of that. Does anyone have a good link for extra detail on futures?Basically, it comes down to a function that takes a delegate dg, and runs it on a threadpool, returning a wrapper object. The wrapper object can be evaluated, in which case it blocks until the original dg has returned a value. This value is then returned by the wrapper, as well as cached. The idea is that you create a future for a value that you know you'll need soon, then do some other task and query it later. :)Futures are basically Herb Sutter's rehashing of Hoare's CSP model.More specifically, this sounds like a special case of a CSP-like channel where only one datum is ever transmitted. (Generally, channels are comparable to UNIX pipes and can transmit many data.) Russ Cox has a nice introduction to channel/thread programming at <http://swtch.com/~rsc/talks/threads07> and an overview of the field at <http://swtch.com/~rsc/thread>. --Joel
Feb 04 2008
Joel C. Salomon wrote:downs wrote:Heh. Funny coincidence. Let's take a look at the implementation of Future(T): class Future(T) { T res; bool done; MessageChannel!(T) channel; this() { New(channel); } T eval() { if (!done) { res=channel.get(); done=true; } return res; } alias eval opCall; bool finished() { return channel.canGet; } } :) --downsJason House wrote:& while Sean Kelly wrote:I've never heard of that. Does anyone have a good link for extra detail on futures?Basically, it comes down to a function that takes a delegate dg, and runs it on a threadpool, returning a wrapper object. The wrapper object can be evaluated, in which case it blocks until the original dg has returned a value. This value is then returned by the wrapper, as well as cached. The idea is that you create a future for a value that you know you'll need soon, then do some other task and query it later. :)Futures are basically Herb Sutter's rehashing of Hoare's CSP model.More specifically, this sounds like a special case of a CSP-like channel where only one datum is ever transmitted. (Generally, channels are comparable to UNIX pipes and can transmit many data.)
Feb 04 2008
Jason House wrote:Sean Kelly Wrote:Futures are basically Herb Sutter's rehashing of Hoare's CSP model. Here's a presentation of his where he talks about it: http://irbseminars.intel-research.net/HerbSutter.pdf SeanBedros Hanounik wrote:I've never heard of that. Does anyone have a good link for extra detail on futures?I think the best way to tackle concurrency is to have two types of functions blocking functions (like in the old sequential code execution) and non-blocking functions (the new parallel code execution) for non-blocking functions, the function returns additional type which is true when function execution is completedThis is basically how futures work. It's a pretty useful approach.
Feb 04 2008
How does garbage collection currently work in a multi-processor environment? My plan is to only have one thread per processor in addition to the main thread. When GC runs, does it pause all threads on all processors or does it only pause threads on a per-processor basis? Denton Cockburn Wrote:Ok, Walter's said previously (I think) that he's going to wait to see what C++ does in regards to multicore concurrency. Ignoring this for now, for fun, what ideas do you guys have regarding multicore concurrency?
Feb 09 2008
Mike Koehmstedt wrote:How does garbage collection currently work in a multi-processor environment? My plan is to only have one thread per processor in addition to the main thread. When GC runs, does it pause all threads on all processors or does it only pause threads on a per-processor basis? Denton Cockburn Wrote:It pauses all threads on all processors.Ok, Walter's said previously (I think) that he's going to wait to see what C++ does in regards to multicore concurrency. Ignoring this for now, for fun, what ideas do you guys have regarding multicore concurrency?
Feb 09 2008