digitalmars.D - Why does std.concurrency.Mailbox use lists ?
- badlink (19/19) Sep 08 2014 Hello,
- Marc Schütz <schuetzm gmx.net> (5/24) Sep 08 2014 AFAICS, the documentation doesn't write about any ordering
- Sean Kelly (10/14) Sep 09 2014 The actor model imposes no requirement on the order of received
- Marc Schütz <schuetzm gmx.net> (10/28) Sep 10 2014 This example from Andrei's TDPL certainly depends on it:
- Sean Kelly (6/10) Sep 10 2014 Agreed. I absolutely don't want to get into a situation where
- Andrei Alexandrescu (3/20) Sep 08 2014 Yah, that does look like a perf bug. Please bugzillize and assign
- badlink (3/5) Sep 09 2014 Done. https://issues.dlang.org/show_bug.cgi?id=13444
- Sean Kelly (17/38) Sep 09 2014 That's strange. As you can see, spawn() just creates a kernel
- Sean Kelly (5/14) Sep 09 2014 Oops... I'd forgotten that Message is constructed in-place within
- Andrei Alexandrescu (2/7) Sep 10 2014 I think freelists are what the doctor prescribed. -- Andrei
- badlink (2/5) Sep 10 2014 I made a mistake in the post, the function is _send_ not spawn !
Hello, I'm creating a real-time 3D app that uses std.concurrency to exchange messages between the renderer and a few mesher threads. The app runs fine, but once in a while I get a consistent FPS drop. Already aware that the cause is the GC (a call to GC.disable eliminates the slowdowns), I timed all functions and found that spawn() sometimes takes more than 50 ms to return.

I did a quick search through Phobos and found that the Mailbox implementation in std.concurrency uses a private List struct where every call to .put() needs to allocate a new node :( Why is that? I would have used, for example, a ring buffer, which can do everything the Mailbox needs and is faster in every way. The growing cost can be compensated for by an appropriate call to setMaxMailboxSize(), and any random removal can be done by swapping the last element with the one being removed.

If I haven't overlooked something, I'd like to file a performance bug on Bugzilla.
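The ring-buffer-with-swap-removal idea described above could be sketched roughly as follows. This is a hypothetical illustration, not Phobos code; the names (`RingMailbox`, `removeAt`) are invented for the example, and note that swap-removal gives up FIFO ordering for the moved element.

```d
// Hypothetical sketch of a ring-buffer mailbox: append at the tail,
// pop from the head, and remove from the middle by overwriting with
// the last pending element (O(1), but not order-preserving).
struct RingMailbox(T)
{
    private T[] buf;
    private size_t head;   // index of the oldest pending message
    private size_t count;  // number of pending messages

    this(size_t capacity) { buf = new T[](capacity); }

    // Append at the tail; returns false when full (a caller could then
    // block or grow, as a setMaxMailboxSize-style policy would decide).
    bool put(T msg)
    {
        if (count == buf.length) return false;
        buf[(head + count) % buf.length] = msg;
        ++count;
        return true;
    }

    // Take the oldest pending message.
    T take()
    {
        assert(count > 0);
        auto msg = buf[head];
        head = (head + 1) % buf.length;
        --count;
        return msg;
    }

    // Remove the i-th pending message by overwriting it with the last
    // one -- constant time, but the moved message loses its position.
    void removeAt(size_t i)
    {
        assert(i < count);
        buf[(head + i) % buf.length] = buf[(head + count - 1) % buf.length];
        --count;
    }

    size_t length() const { return count; }
}
```

As the replies below discuss, the swap step is exactly what makes this scheme hard to reconcile with per-sender ordering guarantees.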
Sep 08 2014
On Monday, 8 September 2014 at 17:06:34 UTC, badlink wrote: Hello, I'm creating a real-time 3D app that uses std.concurrency to exchange messages between the renderer and a few mesher threads. The app runs fine, but once in a while I get a consistent FPS drop. Already aware that the cause is the GC (a call to GC.disable eliminates the slowdowns), I timed all functions and found that spawn() sometimes takes more than 50 ms to return. I did a quick search through Phobos and found that the Mailbox implementation in std.concurrency uses a private List struct where every call to .put() needs to allocate a new node :( Why is that? I would have used, for example, a ring buffer, which can do everything the Mailbox needs and is faster in every way. The growing cost can be compensated for by an appropriate call to setMaxMailboxSize(), and any random removal can be done by swapping the last element with the one being removed.

AFAICS, the documentation doesn't say anything about ordering guarantees for messages, but I can imagine that there are implicit expectations. In particular, prioritySend() could break if the implementation isn't done carefully.
Sep 08 2014
On Monday, 8 September 2014 at 17:24:53 UTC, Marc Schütz wrote: AFAICS, the documentation doesn't say anything about ordering guarantees for messages, but I can imagine that there are implicit expectations. In particular, prioritySend() could break if the implementation isn't done carefully.

The actor model imposes no requirement on the order of received messages. However, I want to try to guarantee that messages will be received from a given sender in the order that they were sent. I think this is a much easier model to code to, and it's what people expect. It may be that at some point in the future, for some desirable communication protocol, it will not be possible to make this guarantee, but in that case the people who choose to use that protocol can make an informed decision about the tradeoffs.
Sep 09 2014
On Tuesday, 9 September 2014 at 22:34:09 UTC, Sean Kelly wrote:

On Monday, 8 September 2014 at 17:24:53 UTC, Marc Schütz wrote: AFAICS, the documentation doesn't say anything about ordering guarantees for messages, but I can imagine that there are implicit expectations. In particular, prioritySend() could break if the implementation isn't done carefully.

The actor model imposes no requirement on the order of received messages. However, I want to try to guarantee that messages will be received from a given sender in the order that they were sent. I think this is a much easier model to code to, and it's what people expect.

This example from Andrei's TDPL certainly depends on it: http://www.informit.com/articles/article.aspx?p=1609144&seqNum=7 It would be much more difficult to implement without an ordering guarantee.

It may be that at some point in the future, for some desirable communication protocol, it will not be possible to make this guarantee, but in that case the people who choose to use that protocol can make an informed decision about the tradeoffs.

UDP comes to mind; it is neither reliable nor guarantees ordering. I think there should be an explicit guarantee for those protocols that can provide it. Otherwise it would just be an implementation detail that cannot be relied upon, and people would always have to assume the worst.
Sep 10 2014
On Wednesday, 10 September 2014 at 11:11:08 UTC, Marc Schütz wrote: I think there should be an explicit guarantee for those protocols that can provide it. Otherwise it would just be an implementation detail that cannot be relied upon, and people would always have to assume the worst.

Agreed. I absolutely don't want to get into a situation where people are trying to create an ordered protocol on top of an unordered one, like how people occasionally end up trying to implement TCP over UDP.
Sep 10 2014
On 9/8/14, 10:06 AM, badlink wrote: Hello, I'm creating a real-time 3D app that uses std.concurrency to exchange messages between the renderer and a few mesher threads. The app runs fine, but once in a while I get a consistent FPS drop. Already aware that the cause is the GC (a call to GC.disable eliminates the slowdowns), I timed all functions and found that spawn() sometimes takes more than 50 ms to return. I did a quick search through Phobos and found that the Mailbox implementation in std.concurrency uses a private List struct where every call to .put() needs to allocate a new node :( Why is that? I would have used, for example, a ring buffer, which can do everything the Mailbox needs and is faster in every way. The growing cost can be compensated for by an appropriate call to setMaxMailboxSize(), and any random removal can be done by swapping the last element with the one being removed. If I haven't overlooked something, I'd like to file a performance bug on Bugzilla.

Yah, that does look like a perf bug. Please bugzillize and assign provisionally to Sean. -- Andrei
Sep 08 2014
On Tuesday, 9 September 2014 at 00:36:04 UTC, Andrei Alexandrescu wrote: Yah, that does look like a perf bug. Please bugzillize and assign provisionally to Sean. -- Andrei

Done. https://issues.dlang.org/show_bug.cgi?id=13444
Sep 09 2014
On Monday, 8 September 2014 at 17:06:34 UTC, badlink wrote: Hello, I'm creating a real-time 3D app that uses std.concurrency to exchange messages between the renderer and a few mesher threads. The app runs fine, but once in a while I get a consistent FPS drop. Already aware that the cause is the GC (a call to GC.disable eliminates the slowdowns), I timed all functions and found that spawn() sometimes takes more than 50 ms to return.

That's strange. As you can see, spawn() just creates a kernel thread. There's an allocation for the context of a closure, but this has nothing to do with the message queue.

I did a quick search through Phobos and found that the Mailbox implementation in std.concurrency uses a private List struct where every call to .put() needs to allocate a new node :( Why is that?

Incoming messages are appended to the list and removed from the middle during receive, so a list seemed like a natural representation. This could be optimized by putting a "next" pointer inside the Message object itself, making the list literally a list of messages instead of a list of nodes referencing messages. That would eliminate half of the allocations without any of the problems a ring buffer brings with removals from the middle or growing the list size.

I would have used, for example, a ring buffer, which can do everything the Mailbox needs and is faster in every way. The growing cost can be compensated for by an appropriate call to setMaxMailboxSize(), and any random removal can be done by swapping the last element with the one being removed. If I haven't overlooked something, I'd like to file a performance bug on Bugzilla.

See above. I think the optimization of the list is a good idea regardless. I've also been considering adding free lists for discarded messages to avoid GC allocations. I did this when originally writing std.concurrency, but it didn't actually seem to speed things up when profiling. This probably deserves a revisit.
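The intrusive-list optimization Sean describes, where the link pointer lives inside the message itself so the queue needs no separate node allocation per entry, might look roughly like this. It's a hedged sketch: the `Message` and `MessageList` names only loosely mirror std.concurrency's internals, and the `next` field is the assumed addition.

```d
// Simplified stand-in for std.concurrency's Message (assumed shape).
struct Message
{
    int payload;     // placeholder for the real message contents
    Message* next;   // the intrusive link being proposed
}

// Singly linked FIFO queue that threads the messages themselves
// together, so put() performs no Node allocation of its own.
struct MessageList
{
    Message* head;
    Message* tail;

    void put(Message* m)
    {
        m.next = null;
        if (tail !is null) tail.next = m;
        else head = m;
        tail = m;
    }

    // Unlink and return the oldest message, or null when empty.
    Message* removeFront()
    {
        auto m = head;
        if (m !is null)
        {
            head = m.next;
            if (head is null) tail = null;
        }
        return m;
    }
}
```

Because links are followed rather than indexed, removal from the middle during receive stays cheap and, unlike the swap trick, preserves the order of the remaining messages.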
Sep 09 2014
On Tuesday, 9 September 2014 at 20:47:24 UTC, Sean Kelly wrote: Incoming messages are appended to the list and removed from the middle during receive, so a list seemed like a natural representation. This could be optimized by putting a "next" pointer inside the Message object itself, making the list literally a list of messages instead of a list of nodes referencing messages. That would eliminate half of the allocations without any of the problems a ring buffer brings with removals from the middle or growing the list size.

Oops... I'd forgotten that Message is constructed in-place within a Node, and so this wouldn't save any allocations. It still might be a bit of an optimization, but only to save a bit of copying.
Sep 09 2014
On 9/9/14, 1:47 PM, Sean Kelly wrote: See above. I think the optimization of the list is a good idea regardless. I've also been considering adding free lists for discarded messages to avoid GC allocations. I did this when originally writing std.concurrency, but it didn't actually seem to speed things up when profiling. This probably deserves a revisit.

I think freelists are what the doctor prescribed. -- Andrei
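A freelist along the lines being discussed could be sketched as follows: discarded nodes are kept on a recycling chain so that most enqueues reuse memory instead of asking the GC for more. This is an illustrative sketch, not the std.concurrency implementation; it assumes the node type exposes a `next` pointer, as the mailbox's list nodes do.

```d
// Hypothetical freelist: any node type with a "next" pointer can be
// recycled (an assumption of this sketch).
struct FreeList(Node)
{
    private Node* head;

    // Reuse a recycled node when one is available; otherwise fall
    // back to a fresh GC allocation.
    Node* acquire()
    {
        if (head is null) return new Node;
        auto n = head;
        head = n.next;
        n.next = null;
        return n;
    }

    // Return a discarded node to the recycling chain instead of
    // leaving it for the collector.
    void release(Node* n)
    {
        n.next = head;
        head = n;
    }
}
```

In a mailbox, receive would call release() on each consumed node and put() would call acquire(), so a steady-state producer/consumer pair allocates only while the queue is still growing.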
Sep 10 2014
On Tuesday, 9 September 2014 at 20:47:24 UTC, Sean Kelly wrote: That's strange. As you can see, spawn() just creates a kernel thread. There's an allocation for the context of a closure, but this has nothing to do with the message queue.

I made a mistake in the post: the function is _send_, not spawn!
Sep 10 2014