
digitalmars.D - Async-await on stable Rust!

reply Heromyth <bitworld qq.com> writes:
See https://blog.rust-lang.org/2019/11/07/Async-await-stable.html.
Nov 07 2019
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 08/11/2019 3:04 PM, Heromyth wrote:
 See https://blog.rust-lang.org/2019/11/07/Async-await-stable.html.
https://rust-lang.github.io/async-book/02_execution/04_executor.html
https://rust-lang.github.io/async-book/03_async_await/01_chapter.html

So from what I can tell, there are two aspects to this.

1. Futures, which have custom executors that you must explicitly call.
2. async/await, which ties into the borrow checker and requires await to be called, so it is pretty much stack only.

From previous discussion this isn't what we want in D. The previous design discussed is event-loop poll based and heap allocates the closure.

It does mean that we need an event loop in druntime, but since I am expecting to write up an event loop soon for the graphics workgroup I'll add that to its list of requirements (along with -betterC compatibility).
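The poll-based model being described can be sketched in a few lines. This is Python rather than D, and every name here (Future, Executor, spawn) is invented for illustration only: a future pairs a readiness test with a heap-allocated closure, and the event loop repeatedly polls the pending set, running whichever closures have become ready.

```python
class Future:
    """A task plus a readiness test; nothing runs until an executor polls it."""
    def __init__(self, ready, closure):
        self.ready = ready      # () -> bool: is the result available yet?
        self.closure = closure  # the heap-allocated continuation

class Executor:
    """The event loop: poll pending futures, run whichever are ready."""
    def __init__(self):
        self.pending = []

    def spawn(self, ready, closure):
        self.pending.append(Future(ready, closure))

    def run(self):
        results = []
        while self.pending:
            still_waiting = []
            for fut in self.pending:
                if fut.ready():
                    results.append(fut.closure())
                else:
                    still_waiting.append(fut)
            self.pending = still_waiting
        return results

ex = Executor()
countdown = [2]
def slow_ready():
    countdown[0] -= 1           # becomes ready only after a couple of polls
    return countdown[0] <= 0
ex.spawn(slow_ready, lambda: "slow")
ex.spawn(lambda: True, lambda: "fast")
out = ex.run()
print(out)  # ['fast', 'slow']
```

The key property the sketch shows is that the closure never runs until the loop observes readiness, which is the opposite of Rust's explicit-executor design where you drive each future yourself.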
Nov 07 2019
parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Friday, 8 November 2019 at 02:13:28 UTC, rikki cattermole 
wrote:
 So from what I can tell, there are two aspects to this.

 1. Futures, which have custom executors that you must 
 explicitly call.
 2. async/await, which ties into the borrow checker and requires 
 await to be called, so it is pretty much stack only.

 From previous discussion this isn't what we want in D.
 The previous design discussed is event-loop poll based and 
 heap allocates the closure.

 It does mean that we need an event loop in druntime, but since 
 I am expecting to write up an event loop soon for the graphics 
 workgroup I'll add that to its list of requirements (along with 
 -betterC compatibility).
Please have a look at the approach taken by structured concurrency, recently mentioned on this forum by John Belmonte: https://forum.dlang.org/post/rnqbswwwhdwkvvqvodlb forum.dlang.org

Kotlin has taken that route as well, and I have found working with its concurrency very pleasant.

The central idea of structured concurrency (a term taken from structured programming) is to provide 'blocks' or nurseries where tasks (threads/fibers/coroutines) run concurrently, and to only exit those blocks when all of their concurrent tasks are finished. It is a simple restriction, but it solves a lot of problems.

You can follow the links provided in John Belmonte's posts for some more explanation.
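The nursery idea can be sketched with Python's asyncio. The real implementations are Trio's nurseries and asyncio.TaskGroup (Python 3.11+); the hand-rolled Nursery class below is an invented illustration of just the core rule: tasks are started inside the block, and the block refuses to exit until every one of them has finished.

```python
import asyncio

class Nursery:
    """A block tasks are started in; exiting the block waits for all of them."""
    def __init__(self):
        self.tasks = []

    async def __aenter__(self):
        return self

    def start(self, coro):
        self.tasks.append(asyncio.ensure_future(coro))

    async def __aexit__(self, *exc):
        # The defining rule of structured concurrency:
        # the block cannot be left until every task is done.
        if self.tasks:
            await asyncio.gather(*self.tasks)
        return False

async def worker(name, delay, log):
    await asyncio.sleep(delay)
    log.append(name)

async def main():
    log = []
    async with Nursery() as n:
        n.start(worker("a", 0.05, log))
        n.start(worker("b", 0.01, log))
    return log  # only reached after both workers have finished

out = asyncio.run(main())
print(out)  # ['b', 'a'] (b has the shorter sleep)
```

Because a task can never outlive its block, the caller of `main` never has to reason about stray background work, which is exactly the restriction the articles argue for.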
Nov 08 2019
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 08/11/2019 10:57 PM, Sebastiaan Koppe wrote:
 On Friday, 8 November 2019 at 02:13:28 UTC, rikki cattermole wrote:
 So from what I can tell, there are two aspects to this.

 1. Futures, which have custom executors that you must explicitly call.
 2. async/await, which ties into the borrow checker and requires await 
 to be called, so it is pretty much stack only.

 From previous discussion this isn't what we want in D.
 The previous design discussed is event-loop poll based and heap 
 allocates the closure.

 It does mean that we need an event loop in druntime, but since I am 
 expecting to write up an event loop soon for the graphics workgroup 
 I'll add that to its list of requirements (along with -betterC compatibility).
Please have a look at the approach taken by structured concurrency, recently mentioned on this forum by John Belmonte: https://forum.dlang.org/post/rnqbswwwhdwkvvqvodlb forum.dlang.org

Kotlin has taken that route as well, and I have found working with its concurrency very pleasant.

The central idea of structured concurrency (a term taken from structured programming) is to provide 'blocks' or nurseries where tasks (threads/fibers/coroutines) run concurrently, and to only exit those blocks when all of their concurrent tasks are finished. It is a simple restriction, but it solves a lot of problems.

You can follow the links provided in John Belmonte's posts for some more explanation.
This is a better article from one of the sites you linked: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/

After reading this, I can confidently say that this is not the problem I am trying to solve. To me async/await is synchronously executed when a closure is ready to execute.

However, that doesn't mean it can't work with my existing idea ;)

Nursery nursery;

nursery.async {
    ....
};

nursery.async {
    ....
};

return; // nursery.__dtor == run
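The `nursery.__dtor == run` idea above, where the closures are merely collected during the scope and the whole batch only runs when the nursery is destroyed, can be mimicked with a scope-exit hook. This is a Python sketch with invented names, using `__exit__` to play the role of D's destructor:

```python
class Nursery:
    """Collect closures during the block; run them all when the scope exits."""
    def __init__(self):
        self.closures = []
        self.results = []

    def async_(self, fn):            # just queue it, do not run yet
        self.closures.append(fn)

    def __enter__(self):
        return self

    def __exit__(self, *exc):        # plays the role of D's __dtor
        self.results = [fn() for fn in self.closures]
        return False

with Nursery() as nursery:
    nursery.async_(lambda: "first")
    nursery.async_(lambda: "second")
    assert nursery.results == []     # nothing has run inside the block

print(nursery.results)  # ['first', 'second'], produced at scope exit
```

Note this is the deferred-run variant; the structured-concurrency articles instead start tasks immediately and make the scope exit wait for them.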
Nov 08 2019
parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Friday, 8 November 2019 at 10:36:05 UTC, rikki cattermole 
wrote:
 This is a better article from one of the sites you linked: 
 https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
Yes, that is a good one.
 After reading this, I can confidently say that this is not the 
 problem I am trying to solve.
I understand. I just wanted you to be aware of it.
 To me async/await is synchronously executed when a closure is 
 ready to execute.
What does 'ready to execute' mean? And why synchronously? Isn't the idea of async to run concurrently?
 However, that doesn't mean it can't work with my existing idea 
 ;)

 Nursery nursery;

 nursery.async {
 	....
 };

 nursery.async {
 	....
 };

 return; // nursery.__dtor == run
Except that you would want to run immediately, and then joinAll on the __dtor. For it is perfectly possible to have a long-living nursery; the one at the root of the program, for instance. E.g.:

---
with (Nursery()) {
  async { }
Nov 08 2019
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 09/11/2019 2:24 AM, Sebastiaan Koppe wrote:
 On Friday, 8 November 2019 at 10:36:05 UTC, rikki cattermole wrote:
 This is a better article from one of the sites you linked: 
 https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement
considered-harmful/ 
Yes, that is a good one.
 After reading this, I can confidently say that this is not the problem 
 I am trying to solve.
I understand. I just wanted you to be aware of it.
I am, but thanks for the reminder.
 To me async/await is synchronously executed when a closure is ready to 
 execute.
What does 'ready to execute' mean? And why synchronously? isn't the idea of async to run concurrently?
So ready to execute could mean that a socket has data in the buffer ready to be read.

So I'm treating async as if it is synchronous from a language design point of view, but if you want it to be asynchronous it can be. It should depend upon the library implementation it is calling into.

In other words, I don't want this behavior baked into the language. That seems like a good way to have regrets that we can't fix easily.
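The socket readiness case can be shown concretely with Python's stdlib selectors module: the closure is registered against the socket but is only invoked once the selector reports data in the buffer. (The pairing of a closure with the registration is my illustration, not part of any proposed D API.)

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()           # a writes; b will become 'ready'

# Register the read end together with the closure to run once it is readable.
sel.register(b, selectors.EVENT_READ, data=lambda s: s.recv(64).decode())

a.sendall(b"hello")                  # now b has data in its buffer

received = []
for key, _events in sel.select(timeout=1):
    closure = key.data               # run the closure only when ready
    received.append(closure(key.fileobj))

print(received)  # ['hello']

sel.close()
a.close()
b.close()
```

From the closure's point of view the call is synchronous; whether the surrounding loop is an async runtime or a plain blocking select is the library's choice, which matches the point being made here.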
 However, that doesn't mean it can't work with my existing idea ;)

 Nursery nursery;

 nursery.async {
     ....
 };

 nursery.async {
     ....
 };

 return; // nursery.__dtor == run
Except that you would want to run immediately, and then joinAll on the __dtor. For it is perfectly possible to have a long-living nursery; the one at the root of the program, for instance. E.g.:

---
with (Nursery()) {
  async { }
Indeed, that was a very simple example. But throw in a copy constructor + destructor pair and you should be able to build up a nice little tree on the Nursery implementation instance, which from what I read could be quite useful for executing the closures.
Nov 08 2019
parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Friday, 8 November 2019 at 14:28:36 UTC, rikki cattermole 
wrote:
 On 09/11/2019 2:24 AM, Sebastiaan Koppe wrote:
 What does 'ready to execute' mean? And why synchronously? 
 isn't the idea of async to run concurrently?
So ready to execute could mean that a socket has data in the buffer ready to be read.
Ah, you mean after it first yielded.
 In other words, I don't want this behavior baked into the 
 language. That seems like a good way to have regrets that we 
 can't fix easily.
I am not so sure; good concurrency seems to require a little help from the compiler.
 Indeed that was a very simple example. But throw in a copy 
 constructor + destructor pair, you should be able to build up a 
 nice little tree on the Nursery implementation instance. Which 
 from what I read could be quite useful with executing the 
 closures.
Perfect.
Nov 08 2019
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 09/11/2019 4:11 AM, Sebastiaan Koppe wrote:
 On Friday, 8 November 2019 at 14:28:36 UTC, rikki cattermole wrote:
 On 09/11/2019 2:24 AM, Sebastiaan Koppe wrote:
 What does 'ready to execute' mean? And why synchronously? isn't the 
 idea of async to run concurrently?
So ready to execute could mean that a socket has data in the buffer ready to be read.
Ah, you mean after it first yielded.
That's one way to do it, yes. You may not want to wrap it in a fiber though; that can be a bit costly if you don't need it. But luckily that is a decision the language does not need to make. It can be made by the compiler hook implementation and whichever library it is hooking the closure creation into.
 In other words, I don't want this behavior baked into the language. 
 That seems like a good way to have regrets that we can't fix easily.
I am not so sure, good concurrency seems to require a little help from the compiler.
From what I've read over the years, nobody seems to have any compelling solution to concurrency. It's a hard problem to solve, at least in the general case. That is why I don't like the idea of baking one model into the language: it probably won't work for a lot of people, assuming it works as advertised.
Nov 08 2019
prev sibling next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Friday, 8 November 2019 at 09:57:59 UTC, Sebastiaan Koppe 
wrote:
 [snip]

 Please have a look at the approach taken by structured 
 concurrency. Recently mentioned on this forum by John Belmonte: 
 https://forum.dlang.org/post/rnqbswwwhdwkvvqvodlb forum.dlang.org

 Kotlin has taken that route as well and I have found working 
 with it's concurrency very pleasant.
 [snip]
The nurseries idea looks the same as Chapel's cobegin blocks: https://chapel-lang.org/docs/primers/taskParallel.html
Nov 08 2019
parent reply Russel Winder <russel winder.org.uk> writes:
On Fri, 2019-11-08 at 15:24 +0000, jmh530 via Digitalmars-d wrote:
 […]

 The nurseries idea looks the same as Chapel's cobegin blocks
 https://chapel-lang.org/docs/primers/taskParallel.html
Chapel has many things to teach most other programming languages about parallelism, especially on a truly multi-processor computer. Not least of which is partitioned global address space (PGAS).
-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Nov 08 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 15:42:40 UTC, Russel Winder wrote:
 Chapel has many things to teach most other programming 
 languages about parallelism, especially on a truly 
 multi-processor computer. Not least of which is partitioned 
 global address space (PGAS).
Yeah, but it seems geared towards HPC scenarios, and I wonder how their model will hold up when "home computers" move towards many cores with local memory.

I've got a feeling that some model reminiscent of actor-based languages will take over at some point. E.g. something closer to Go and Pony, but with local memory baked in as a design premise.

Still, it is interesting that we now see pragmatic languages that are designed with parallel computing as a premise. So we now have at least 3 young ones that try to claim parts of this space: Chapel, Go and Pony. And they are all quite different! Which I can't really say about the non-concurrent languages; C++, D and Rust are semantically much closer than Chapel, Go and Pony are.
Nov 08 2019
parent reply Russel Winder <russel winder.org.uk> writes:
On Fri, 2019-11-08 at 15:57 +0000, Ola Fosheim Grøstad via Digitalmars-d wrote:
[…]

 Yeah, but it seems geared towards HPC scenarios and I wonder how 
 their model will hold up when "home computers" move towards many 
 cores with local memory.
Chapel and its parallelism structures work just fine on a laptop.
 I've got a feeling that some model reminiscent of actor based 
 languages will take over at some point. E.g. something closer to 
 Go and Pony, but with local memory baked in as a design premise.
We were saying that in 1988; my research team and I even created a programming language, Solve – admittedly active objects rather than actors, but in some ways the two are indistinguishable to most programmers. We even did a couple of versions of the model based on C++ in the early 1990s: UC++ and KC++. I am still waiting for people to catch up. I am not holding my breath, obviously.
 Still, it is interesting that we now see pragmatic languages that 
 are designed with parallel computing as a premise. So we now 
 have at least 3 young ones that try to claim parts of this space: 
 Chapel, Go and Pony. And they are all quite different! Which I 
 can't really say about the non-concurrent languages; C++, D and 
 Rust are semantically much closer than Chapel, Go and Pony are.
Chapel and Pony are the interesting ones here. Chapel I believe can get traction since it is about using declarative abstractions to harness parallelism on a PGAS model. Pony I fear may be a bit too niche to get traction, but it proves (existence proof) an important point about actors that previously only Erlang was pushing as an idea.

Go is not uninteresting, exactly the opposite, since it is based on processes and channels and effectively implements CSP. However, far too many people using Go are failing to harness goroutines properly, since they have failed to learn the lesson that shared-memory multi-threading is not the right model for harnessing parallelism.
Nov 10 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Sunday, 10 November 2019 at 10:57:19 UTC, Russel Winder wrote:
 Chapel and its parallelism structures work just fine on a 
 laptop.
Ok, maybe I've read the Chapel spec through an SGI lens that has made me a bit biased there. I will have to give Chapel a spin to figure it out, but quite frankly, right now the future of C++ seems more interesting than Chapel from a pragmatic application-programming concurrency viewpoint, with the upcoming concurrency-related extensions, stackless coroutines etc. I cannot really see myself using Chapel to build a desktop application. Maybe unjustified bias on my part, though.
 I've got a feeling that some model reminiscent of actor based 
 languages will take over at some point. E.g. something closer 
 to Go and Pony, but with local memory baked in as a design 
 premise.
We were saying that in 1988, my research team and I even created a programming language, Solve – admittedly active objects rather than actors but in some ways the two are indistinguishable to most programmers. We even did a couple of versions of the model based on C++ in the early 1990s: UC++ and KC++. I am still waiting for people to catch up. I am not holding my breath, obviously.
Cool, I think really the hardware is the main issue, and "installed base" issues with existing applications requiring the current hardware model. So, lately speed has primarily come from special processors, GPUs, first as SIMD-style VLIW, now as many-core RISC to be more flexible.

So it might take time before we see "clusters" of simple CPUs with local memory. Maybe it will come through embedded. Maybe with automation/robotics, where you don't want the whole machine to fail just because a small part of it failed. But culture is a strong force... so the "not holding my breath" makes sense... :-/
 Chapel and Pony are the interesting ones here. Chapel I believe 
 can get traction since it is about using declarative 
 abstractions to harness parallelism on a PGAS model. Pony I 
 fear may be a bit too niche to get traction but it proves 
 (existence proof) an important point about actors that 
 previously only Erlang was pushing as an idea.
Outside research languages, I agree. For supported languages Chapel and Pony are very interesting and worth keeping an eye on, even if they have very low adoption. You can also take their core ideas with you when programming in other languages.
 Go is not uninteresting, exactly the opposite since it is based 
 on processes and channels, and effectively implements CSP. 
 However far too many people using Go are failing to harness 
 goroutines properly since they have failed to learn the lesson 
 that shared memory multi-threading is not the right model for 
 harnessing parallelism.
That is probably true. I've only used Go routines in the most limited trivial way in my own programs (basically like a future). There are probably other patterns that I could consider, but don't really think of. Although, I don't really feel the abstraction mechanisms in Go encourage you to write things that could become complex... I haven't written enough Go code to know this for sure, but I tend to get the feeling that "I better keep this really simple and transparent" when writing Go code. It is a bit too C-ish in some ways (despite being fairly high level).
Nov 10 2019
parent reply Russel Winder <russel winder.org.uk> writes:
On Sun, 2019-11-10 at 14:42 +0000, Ola Fosheim Grøstad via Digitalmars-d wrote:
[…]

 Ok, maybe I've read the Chapel spec through an SGI lens that has 
 made me a bit biased there. I will have to give Chapel a spin to 
 figure it out, but quite frankly, right now the future of C++ 
 seems more interesting than Chapel from a pragmatic 
 application-programming concurrency viewpoint, with the upcoming 
 concurrency-related extensions, stackless coroutines etc. I 
 cannot really see myself using Chapel to build a desktop 
 application. Maybe unjustified bias on my part, though.
Chapel is very definitely a language for computationally intensive code, a replacement for Fortran (and C++). It has no pretensions to be a general purpose language. The intention has been to integrate well with Python so that Python is the language of the frontend and Chapel is the language of the computational backend – cf. CERN's view of C++ and Python. The first attempts at integration of Python and Chapel didn't work as well as hoped and were dropped. Now there are new ways of inter-working that show some serious promise. One of these is Arkouda https://github.com/mhmerrill/arkouda – I haven't tried this yet, but I will have to if I decide to go to PyConUK 2020.

[…]

 Cool, I think really the hardware is the main issue, and 
 "installed base" issues with existing applications requiring the 
 current hardware model. So, lately speed has primarily come from 
 special processors, GPUs, first as SIMD-style VLIW, now as 
 many-core RISC to be more flexible.
In hindsight, what we were trying to do with programming languages in the late 1980s and early 1990s was at least a decade too early – the processors were not up to what we wanted to do. A decade or a decade and a half later and we would have had no problem. The issue was not processor cycles, it was functionality to support multi-threading at the kernel level, and fibres at the process level. If we had the money and the team today, I'd hope we would beat Pony, C++, D, Go, Rust, etc. at their own game. Ain't going to happen, but that's life.

 So it might take time before we see "clusters" of simple CPUs 
 with local memory. Maybe it will come through embedded. Maybe 
 with automation/robotics, where you don't want the whole machine 
 to fail just because a small part of it failed. But culture is a 
 strong force... so the "not holding my breath" makes sense... :-/
Intel had the chips in 2008, e.g. the Polaris Chip – cf. SuperComputer 2008 proceedings. But the experiment failed to be picked up for reasons that I have no idea of – possibly the chips were available a decade or more ahead of software developers' ability to deal with the concepts.

[…]

 Although, I don't really feel the abstraction mechanisms in Go 
 encourage you to write things that could become complex... I 
 haven't written enough Go code to know this for sure, but I tend 
 to get the feeling that "I better keep this really simple and 
 transparent" when writing Go code. It is a bit too C-ish in some 
 ways (despite being fairly high level).
Go's implementation of CSP is not infallible; it is still possible to create livelock and deadlock – but you do have to try very hard, or be completely unaware of how message passing between processes over a kernel thread pool works. NB CSP doesn't stop you creating livelock or deadlock, but it does tell you when and why it happens.

Go was intended to be a replacement for C, so if it feels C-ish the design has achieved success!
Nov 10 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Sunday, 10 November 2019 at 21:49:45 UTC, Russel Winder wrote:
 pretensions to be a general purpose language. The intention has 
 been to integrate well with Python so that Python is the 
 language of the frontend and Chapel is the language of the 
 computational backend – cf. CERN's view of C++ and Python.
Yes, that would make Chapel much more interesting. I think that being able to write libraries/engines in a new language and call into them from an established high-level language is a good general strategy.

Often, language authors see their language as the host language and other languages as "subordinate" library languages. That is probably a strategic mistake. It is kind of sad that so much open source library functionality has to be reimplemented in various languages.
 If we had the money and the team today, I'd hope we would beat 
 Pony, C++, D, Go, Rust, etc. at their own game. Ain't going to 
 happen, but that's life.
You could write up your ideas and create a blog post that describes it. I believe there is a reddit for people creating their own languages, could inspire someone?
 Intel had the chips in 2008, e.g. The Polaris Chip – cf. 
 SuperComputer 2008 proceedings. But the experiment failed to be 
 picked up for reasons that I have no idea of – possibly the 
 chips were available a decade or more ahead of software 
 developers ability to deal with the concepts.
Ah, interesting: Polaris used a network-on-a-chip. According to some VLSI websites, Polaris led to Intel's many-micro-cores-on-a-chip architecture, which led to Larrabee and Xeon Phi. The last version of Phi was released in 2017.

Another type of many-micro-core processor with some local memory is the kind geared towards audio/video that uses some form of multiplexed, grid-like crossbar databusses for internal communication between cores. But those are more in the DSP tradition, so not really suitable for actor-style languages, I think.
 between processes over a kernel thread pool works. NB CSP 
 doesn't stop you creating livelock or deadlock but it does tell 
 you when and why it happens.
Pony claims to be deadlock free: https://www.ponylang.io/discover/

I assume that you could still experience starvation-like scenarios or run out of memory as unprocessed events pile up, but I don't know exactly how they schedule actors.
 Go was intended to be a replacement for C, so if it feels C-ish 
 the design has achieved success!
Yes… just like Php…
Nov 11 2019
prev sibling parent reply Russel Winder <russel winder.org.uk> writes:
On Fri, 2019-11-08 at 09:57 +0000, Sebastiaan Koppe via Digitalmars-d wrote:
[…]

 Please have a look at the approach taken by structured 
 concurrency. Recently mentioned on this forum by John Belmonte: 
 https://forum.dlang.org/post/rnqbswwwhdwkvvqvodlb forum.dlang.org
[…]

It is also worth remembering Reactive Programming: https://en.wikipedia.org/wiki/Reactive_programming

There was a lot of exaggerated hype when it first came out, but over time it all settled down, leading to a nice way of composing event streams and handling futures in a structured way.

gtk-rs has built-in support for this that makes programming GTK+ UIs nice. It is an extra over GTK+ but should be seen as essential. It would be nice if GtkD could provide support for it.

And yes, it is all about event loops.
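The "composing event streams" idea can be shown with a toy observable. This is not the ReactiveX API; Stream, emit, and the operator names below are invented for illustration. Subscribers are notified as events arrive, and map/filter build derived streams out of existing ones:

```python
class Stream:
    """A toy observable: subscribers are notified as events are emitted."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        self.subscribers.append(fn)

    def emit(self, value):
        for fn in self.subscribers:
            fn(value)

    def map(self, f):
        out = Stream()
        self.subscribe(lambda v: out.emit(f(v)))
        return out

    def filter(self, pred):
        out = Stream()
        self.subscribe(lambda v: out.emit(v) if pred(v) else None)
        return out

clicks = Stream()
seen = []
clicks.map(lambda v: v * 2).filter(lambda v: v > 4).subscribe(seen.append)
for v in (1, 2, 3):
    clicks.emit(v)

print(seen)  # [6]
```

The appeal for UI work is visible even at this scale: the pipeline is declared once, up front, and every subsequent event flows through it without any per-event dispatch code at the call site.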
Nov 10 2019
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 11/11/2019 12:56 AM, Russel Winder wrote:
 On Fri, 2019-11-08 at 09:57 +0000, Sebastiaan Koppe via Digitalmars-d wrote:
 […]
 Please have a look at the approach taken by structured
 concurrency. Recently mentioned on this forum by John Belmonte:
 https://forum.dlang.org/post/rnqbswwwhdwkvvqvodlb forum.dlang.org
[…] It is also worth remembering Reactive Programming: https://en.wikipedia.org/wiki/Reactive_programming

There was a lot of exaggerated hype when it first came out, but over time it all settled down, leading to a nice way of composing event streams and handling futures in a structured way.

gtk-rs has built-in support for this that makes programming GTK+ UIs nice. It is an extra over GTK+ but should be seen as essential. It would be nice if GtkD could provide support for it.

And yes, it is all about event loops.
Okay, now this is a concept that interests me. It hits a lot closer to what I would consider a good event loop implementation, even if my existing designs are not complete enough for it. Any more resources I should take a look at?
Nov 10 2019
parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Sunday, 10 November 2019 at 12:10:00 UTC, rikki cattermole 
wrote:
 On 11/11/2019 12:56 AM, Russel Winder wrote:
 [...]
Okay, now this is a concept that interests me. It hits a lot closer to what I would consider is a good event loop implementation, even if my existing designs are not complete enough for it. Any more resources I should take a look at?
Take a look here: http://reactivex.io

There's also a D library inspired by that somewhere ...
Nov 10 2019
parent Russel Winder <russel winder.org.uk> writes:
On Sun, 2019-11-10 at 13:48 +0000, Paolo Invernizzi via Digitalmars-d wrote:
 On Sunday, 10 November 2019 at 12:10:00 UTC, rikki cattermole 
 wrote:
 On 11/11/2019 12:56 AM, Russel Winder wrote:
 [...]

 Okay, now this is a concept that interests me.

 It hits a lot closer to what I would consider a good event 
 loop implementation, even if my existing designs are not 
 complete enough for it.

 Any more resources I should take a look at?

 Take a look here: http://reactivex.io

 There's also a D library inspired by that somewhere ...
There are lots of implementations of the official ReactiveX API managed by this GitHub organisation: https://github.com/ReactiveX

The implementation of the reactive idea in gtk-rs is a specialised one, since the manager of the futures stream must integrate with the GTK event loop – there is no separate event loop for the futures, it is fully integrated into the GTK+ event loop.

A real-world example. In D, to receive events from other threads and process them in the GTK+ thread I have to do:

new Timeout(500, delegate bool() {
    receiveTimeout(0.msecs,
        (FrontendAppeared message) { addFrontend(message.fei); },
        (FrontendDisappeared message) { removeFrontend(message.fei); },
    );
    return true;
});

which is not very event driven and is messy – unless someone knows how to do this better. Don't ask how to do this in C++ with gtkmm, you really do not want to know. With Rust:

message_channel.attach(None, move |message| {
    match message {
        Message::FrontendAppeared{fei} => add_frontend(&c_w, &fei),
        Message::FrontendDisappeared{fei} => remove_frontend(&c_w, &fei),
        Message::TargettedKeystrokeReceived{tk} => process_targetted_keystroke(&c_w, &tk),
    }
    Continue(true)
});

which abstracts things far better, and in a way that is comprehensible and yet hides the details.

The Rust implementation is handling more events, since the D implementation is now archived and all the work is happening on the Rust implementation.
Nov 10 2019
prev sibling parent Heromyth <bitworld qq.com> writes:
On Friday, 8 November 2019 at 02:04:07 UTC, Heromyth wrote:
 See 
 https://blog.rust-lang.org/2019/11/07/Async-await-stable.html.
Here are two projects about this:

https://github.com/evenex/future
http://code.dlang.org/packages/dpromise

I made some improvements based on them. Here is an example:

void test05_03() {
    auto ex5a = tuple(3, 2)[].async!((int x, int y) {
        Promise!void p = delayAsync(5.seconds);
        await(p);
        return to!string(x * y);
    });

    assert(ex5a.isPending); // true

    auto r = await(ex5a);
    assert(r == "6");
}
Nov 08 2019