digitalmars.dip.development - Second Draft: Coroutines
- Richard (Rikki) Andrew Cattermole (21/21) Dec 12 2024 Stackless coroutines, is a way to enable asynchronous
- Atila Neves (12/33) Jan 13 I had trouble understanding the proposal. If I didn't already
- Richard (Rikki) Andrew Cattermole (37/42) Jan 13 what a coroutine is, I wouldn't have found out by reading the abstract.
- Atila Neves (19/47) Jan 15 I haven't read it before; I'm also not an LLM.
- Richard (Rikki) Andrew Cattermole (96/143) Jan 15 Before reading all this, I have something I want to make clear about
- Jin (16/18) Jan 15 I note that with the advent of async/await in JS, development for
- Richard (Rikki) Andrew Cattermole (24/46) Jan 16 Well yes, this is generating data using a very simple algorithm.
- Jin (7/12) Jan 19 You are very inattentive to the arguments of the interlocutor. I
- Mai Lapyst (76/87) Jan 23 I think you have a misunderstanding (or mutliple here). Nobody
- Jin (37/52) Jan 28 This is not the first day in programming and, unfortunately, I
- Dennis (4/7) Jan 28 Hello, I don't want to delete your post because it has relevant
- Mai Lapyst (76/91) Feb 03 Sure, but you forget a curcial detail in your analysis: Humans
- Jin (4/5) Feb 10 FYI
- Quirin Schroll (9/12) Jan 23 Agreed. My two cents: C# has `yield return` and `yield break`.
- Sebastiaan Koppe (16/37) Jan 23 The design as presented isn't focused enough IMO. It lacks detail
- Richard (Rikki) Andrew Cattermole (12/57) Jan 23 That article by Lewis Baker is pretty good at laying out how C++ does
- Sebastiaan Koppe (6/10) Jan 23 I wouldn't dismiss it so easily. For one it explains the
- Richard (Rikki) Andrew Cattermole (10/23) Jan 23 Ahhh ok, you are looking for a statement to the effect of: "A coroutine
- Sebastiaan Koppe (10/34) Jan 23 No, that is not what I mean.
- Richard (Rikki) Andrew Cattermole (7/46) Jan 23 Right, I handle this as part of my scheduler and worker pool.
- Sebastiaan Koppe (8/19) Jan 23 Without having a notion on how this might work I can't reasonably
- Richard (Rikki) Andrew Cattermole (36/61) Jan 23 Are you wanting this snippet?
- Mai Lapyst (61/138) Jan 23 First off: nice work on the proposal here; I really like it.
- Richard (Rikki) Andrew Cattermole (58/209) Jan 23 Atila had a problem with this also. I haven't been able to change it as
- Mai Lapyst (77/124) Jan 24 With error you mean an exception? As there are compiler errors
- Richard (Rikki) Andrew Cattermole (109/251) Jan 24 I mean a compiler error. Not a runtime exception.
- Sebastiaan Koppe (14/23) Jan 24 No, not specifically. I am requesting the DIP to clarify the
- Richard (Rikki) Andrew Cattermole (72/99) Jan 24 It should be scheduled:
- Sebastiaan Koppe (52/95) Jan 25 Well, then the DIP needs to be more explicit that the compiler is
- Richard (Rikki) Andrew Cattermole (35/141) Jan 25 It is in there.
- Mai Lapyst (157/185) Jan 25 So "preventing breaking" is only reserved for phobos then, and
- Richard (Rikki) Andrew Cattermole (130/319) Jan 25 The ``await`` statement only works in a coroutine, it should not break
- Quirin Schroll (11/12) Jan 23 I might want to say, the term confused me quite a while. That’s
- Richard (Rikki) Andrew Cattermole (12/25) Jan 23 The term is correct.
- Richard (Rikki) Andrew Cattermole (7/7) Jan 31 Perma:
- Mai Lapyst (55/65) Feb 03 There's still no writing about `opConstructCo`. Maybe a bit of
- Richard (Rikki) Andrew Cattermole (11/52) Feb 03 "In the following example, a new operator overload ``opConstructCo``
- Richard (Rikki) Andrew Cattermole (5/20) Feb 03 Okay I changed my mind.
- Mai Lapyst (11/16) Feb 04 I hadn't added UAX31 since I've used DLang's grammar
Stackless coroutines are a way to enable asynchronous programming for less skilled and less knowledgeable people, whilst offering efficient, safe processing of events. This version of the proposal has been rewritten to account for a lack of understanding of the separation between library code and what the language offers, along with a few changes related to yielding. Yielding is no longer guaranteed to be implicit. You may explicitly yield using an ``await`` statement should you wish to. The library type must support implicit yielding if you wish to use it. Both may be used on the same type; it is entirely dependent upon the called method's attributes. Lastly, changes have been made to simplify the descriptor, to make the implementation within the compiler a little easier. It does mean that you as a library author have no way to know about the functions in the state machine (not that you could have done much with them). Current: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4/649a5a6cc68c4bfe9f5a62f746a3a90f6b4beaf4 Latest: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4
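As a rough illustration of the yielding rules just described, here is a minimal hypothetical sketch. ``Future`` is the library type discussed later in the thread; the attribute spelling, the ``Client`` type, ``readLine`` and ``result`` are illustrative assumptions, not the DIP's final surface syntax, and this does not compile with today's compiler.

```d
// Hypothetical sketch only: Client, readLine and result are made-up names;
// Future is the library type referred to in the DIP discussion.
@async Future!string greet(Client client)
{
    auto pending = client.readLine(); // returns a library coroutine type

    await pending;                    // explicit yield via the await statement

    // With implicit yielding the suspension would instead be inserted for you,
    // but only if the library type and the called method's attributes opt in.
    string name = pending.result;     // hypothetical accessor

    return "hello " ~ name;
}
```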
Dec 12 2024
On Thursday, 12 December 2024 at 10:36:50 UTC, Richard (Rikki) Andrew Cattermole wrote: Stackless coroutines are a way to enable asynchronous programming for less skilled and less knowledgeable people, whilst offering efficient, safe processing of events. […] Latest: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4

I had trouble understanding the proposal. If I didn't already know what a coroutine is, I wouldn't have found out by reading the abstract. There are a few sentences I didn't understand in their entirety either, such as "If it causes an error, this error is guaranteed to be wrong in a multi-threaded application of it.". My main issue is that I don't think the DIP justifies the need for stackless coroutines (which I think are a good idea). It also seems more complicated than what other languages have, and I'm not sure why that is. Why ``@async return`` instead of ``yield``? Why does ``@async`` have to be added to the grammar if it looks like an attribute?
Jan 13
On 14/01/2025 6:59 AM, Atila Neves wrote: I had trouble understanding the proposal. If I didn't already know what a coroutine is, I wouldn't have found out by reading the abstract. There are a few sentences I didn't understand in their entirety either, such as "If it causes an error, this error is guaranteed to be wrong in a multi-threaded application of it.".

I do not understand why you are having trouble with it. It has not come up before, and Gemini understood it. It is short, precise and complete to what I mean. If you want me to change it, I need a lot more feedback, including: - How you are interpreting it - What questions you had after reading it - What you expected it to contain. As it currently stands this is not constructive feedback; there is nothing I can do with it. I do not understand what problems you are having with it.

It also seems more complicated than what other languages have, and I'm not sure why that is.

It's not more complicated, but I can understand that it may appear that way. Other languages can tie the feature to a specific library, which will not work for us. Consider why we cannot: a coroutine language feature is tied to its representation in a library, which is tied to its event loop, which is tied to sockets, windowing, pipes, processes, the thread pool, etc. None of which can be in druntime; it has to be Phobos. But we cannot tie a language feature to Phobos, and if we do that I cannot experiment prior to PhobosV3 to ensure it both works as expected and to learn whether any further expansion is needed. Also, coroutines are used on both a generative and an event-handling basis; they are not the same library-wise. Tying it to just one is going to be hell for someone. Most likely me, as I'm responsible for the user experience.

Why ``@async return`` instead of ``yield``?

Then ``yield`` would be a keyword, which in turn breaks code that is known to exist. There is no benefit to doing this. But we _could_ do it. However, there is a good question here: why not combine the ``await`` statement with ``@async return``? Well, the answer is that you may want to return a coroutine, which couldn't be differentiated by the compiler.

Why does ``@async`` have to be added to the grammar if it looks like an attribute?

All language attributes are in the grammar; there is nothing special going on there. https://dlang.org/spec/grammar.html#attributes
Jan 13
On Monday, 13 January 2025 at 18:51:27 UTC, Richard (Rikki) Andrew Cattermole wrote: On 14/01/2025 6:59 AM, Atila Neves wrote: I had trouble understanding the proposal. If I didn't already […]

I do not understand why you are having trouble with it. It has not come up before, and Gemini understood it.

I haven't read it before; I'm also not an LLM.

It is short, precise and complete to what I mean.

I don't think that's the case.

If you want me to change it, I need a lot more feedback, including: - How you are interpreting it - What questions you had after reading it - What you expected it to contain

Sure. I think the feedback would be quite long, though. I wonder if it would be better to have a coroutine library first; I know that it would be a lot more clumsy to use than it would be with language support. But maybe having a library prove itself useful first would be the way forward.

Other languages can tie the feature to a specific library, which will not work for us.

Why is that?

Consider why we cannot: a coroutine language feature is tied to its representation in a library, which is tied to its event loop, which is tied to sockets, windowing, pipes, processes, the thread pool, etc.

How is this different in other languages?

None of which can be in druntime; it has to be Phobos.

Why is that?

Also, coroutines are used on both a generative and an event-handling basis; they are not the same library-wise. Tying it to just one is going to be hell for someone. Most likely me, as I'm responsible for the user experience.

Again, how is this different in other languages? C++ got around that with `co_yield`.

Why ``@async return`` instead of ``yield``?

Then ``yield`` would be a keyword, which in turn breaks code that is known to exist. There is no benefit to doing this. But we _could_ do it.

Familiarity would be a benefit.

All language attributes are in the grammar; there is nothing special going on there. https://dlang.org/spec/grammar.html#attributes

For historical reasons, yes. I'm aware one can't attach an attribute to `return` otherwise right now, but wherever they already work I would argue that `core.attributes` is the way to go.
Jan 15
Before reading all this, I have something I want to make clear about coroutines that probably should have been said earlier, for context. There will be people who are not happy with our library design and implementation. It will NOT matter what choices we make; we cannot make everyone happy if we limit the language feature to one solution. This group includes me, due to -betterC (and some other misc concerns). Alternatively, which is what I've gone with, we can just not do that. We can make it work for any library. Then people can do their own thing or pick someone else's. This is a strength of D, not a weakness. And the best part? It is not more complicated. It is not more work to implement; if anything it is a subset of what you would need to have instead. Nor does it give a worse user experience. It is a different design with better tradeoffs for us, that is all.

On 15/01/2025 10:10 PM, Atila Neves wrote:

I cannot see a problem with it, and I've given evidence that I have good reason not to, so a statement like "I don't understand it" is not helpful if the goal is to see changes. So yes please, give me more information that I can take action on! It may be a good idea to ask Mike for help; this kind of feedback is something he is good at (considering his job).

If you want me to change it, I need a lot more feedback, including: - How you are interpreting it - What questions you had after reading it - What you expected it to contain

Sure. I think the feedback would be quite long, though. I wonder if it would be better to have a coroutine library first; I know that it would be a lot more clumsy to use than it would be with language support. But maybe having a library prove itself useful first would be the way forward.

Been there, done that. https://github.com/Project-Sidero/eventloop/tree/master/source/sidero/eventloop/coroutine It is absolutely hell to write them by hand. async/await, yes, they wrote the library before the language feature and were stuck with some less than desirable choices (at least in terms of D they are, anyway). This also happens to be why my library is so much hell to write coroutines for currently: because you are effectively replacing the language, and that is intentional.

Well, for one thing, I am not putting experimental code into Phobos, let alone druntime, for an event loop. This needs to work outside of it. I need to be able to modify my existing event loop, which is designed for coroutines in -betterC, to use it. Otherwise I will not be able to find any problems it may have, or other tunings that will give a better user experience once we turn it on. Plus I see no reason to start tying this language feature to a specific library. We do not box. But we do use templates. We love templates. We love generating types and symbols to do this kind of thing. It is both a well-loved aspect of D, and a very well understood one. Lean into it, not against it.

Other languages can tie the feature to a specific library, which will not work for us.

Why is that?

As far as I'm aware it is not, but you do have to acknowledge it to understand the decision on this front.

Consider why we cannot: a coroutine language feature is tied to its representation in a library, which is tied to its event loop, which is tied to sockets, windowing, pipes, processes, the thread pool, etc.

How is this different in other languages?

It is an absolutely massive project, with a ton of platform- and runtime-specific things. Trust me, it does not belong in druntime. You cannot convince me that it is the right place.

None of which can be in druntime; it has to be Phobos.

Why is that?

C++ has a massive proposal to handle the generative data handling side of things. It includes scheduler support (note that this proposal does not need the language to be aware of such things). I cannot find the paper in question, otherwise I would link it. […] data (multiple value returns); the focus is upon event handling. They are sadly different use cases and are going to result in different libraries. Rust literally ties the language to POSIX-specific event loop function calls, which end up requiring them to use undocumented APIs on Windows to make it work. At some point you gotta admit, having the compiler produce a state struct with a handle method, with everything a library needs to work with the language feature, looks quite simple in comparison ;) Building up the state machine and extracting information such as what is returned and how it completes with what types (including exceptions etc.) happens in all languages. But they tend to go a step further and start messing around with library code; this doesn't, nor would it be to our advantage.

Also, coroutines are used on both a generative and an event-handling basis; they are not the same library-wise. Tying it to just one is going to be hell for someone. Most likely me, as I'm responsible for the user experience.

Again, how is this different in other languages?

To C++, which has had them for only a couple of years. From my perspective, C++ has an ugly solution to the problem that need not exist in terms of syntax. Now compare it to what I proposed: - Uses an attribute that exists for the same concept, but in a different place in the grammar. - It would still be read in a way that is understood control-flow-wise, even if you did not understand coroutines. - Does not risk breaking code. To me this is a much better solution that fits D, rather than blindly copying another language with very different needs in terms of syntax than we have. We don't need to copy C++, nor do we have the same baggage as C++, so our choices can be different on this; so why should we?

C++ got around that with `co_yield`.

Why ``@async return`` instead of ``yield``?

Then ``yield`` would be a keyword, which in turn breaks code that is known to exist. There is no benefit to doing this. But we _could_ do it.

Familiarity would be a benefit.

Which has to be imported. I argue similarly, but there are target-audience and language-awareness considerations in what I recommend. ``@async`` is special; it is used to trigger a headline language feature with a very large target audience. Therefore it goes in the language. All of the library attributes in the DIP that the average developer doesn't need to know exist are in ``core.attributes``. Plus, coroutines really need to have support for slicing and dicing at the parser level. I worked really hard to make that possible for Walter, due to his issues with ``opApply``. It took weeks of back and forth with Adam for me to come up with the second draft. Just so I could make it easier on Walter, but at the same time prevent any of the very large teams, they can get nasty. In a DIP this size, there is a lot of contextual information that shouldn't be in it. This is a great example of it.

All language attributes are in the grammar; there is nothing special going on there. https://dlang.org/spec/grammar.html#attributes

For historical reasons, yes. I'm aware one can't attach an attribute to `return` otherwise right now, but wherever they already work I would argue that `core.attributes` is the way to go.
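To make the earlier point about "a state struct with a handle method" concrete, here is a hand-written sketch of the kind of state machine being described. It compiles with today's D; the names and layout are illustrative only, not the descriptor the DIP would actually generate.

```d
// Illustrative only: a hand-written two-step state machine of the shape the
// DIP would have the compiler generate. Field names and layout are assumptions.
struct TwoStepState
{
    int tag;     // current state; becomes negative once complete
    int input;   // captured "parameter"
    int result;  // value produced on completion

    void execute()
    {
        switch (tag)
        {
            case 0:          // code up to the first suspension point
                input *= 2;
                tag = 1;
                break;
            case 1:          // code after being resumed
                result = input + 1;
                tag = -1;    // complete
                break;
            default:
                assert(0, "executed a completed coroutine");
        }
    }
}

unittest
{
    auto s = TwoStepState(0, 20);
    while (s.tag >= 0)
        s.execute();         // a scheduler/executor would drive this instead
    assert(s.result == 41);
}
```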
Jan 15
On Wednesday, 15 January 2025 at 16:19:37 UTC, Richard (Rikki) Andrew Cattermole wrote: added async/await

I note that with the advent of async/await in JS, development for the browser turned into hell. And when node-fibers (a native nodejs extension that adds support for coroutines with a stack at runtime) was broken, all hell broke loose on the servers. Briefly about async/await problems: - [Low performance due to the inability to properly optimize the code.](https://page.hyoo.ru/#!=btunlj_fp1tum/View'btunlj_fp1tum'.Details=%D0%90%D1%81%D0%B8%D0%BD%D1%85%D1%80%D0%BE%D0%BD%D0%BD%D1%8B%D0%B9%20%D0%BA%D0%B5%D0%B9%D1%81) - [Different colors of functions that virally affect the call stack.](https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/) - [Inability to abort deep subtasks without manually passing a CancellationToken, since await is not owning.](https://hackernoon.com/why-do-you-need-a-cancellation-token-in-c-for-tasks) - [The need to reinvent the stack as an AsyncContext.](https://github.com/tc39/proposal-async-context)
Jan 15
On 16/01/2025 11:08 AM, Jin wrote: On Wednesday, 15 January 2025 at 16:19:37 UTC, Richard (Rikki) Andrew Cattermole wrote: async/await

I note that with the advent of async/await in JS, development for the browser turned into hell. And when node-fibers (a native nodejs extension that adds support for coroutines with a stack at runtime) was broken, all hell broke loose on the servers. Briefly about async/await problems: - [Low performance due to the inability to properly optimize the code.](https://page.hyoo.ru/#!=btunlj_fp1tum/View'btunlj_fp1tum'.Details=%D0%90%D1%81%D0%B8%D0%BD%D1%85%D1%80%D0%BE%D0%BD%D0%BD%D1%8B%D0%B9%20%D0%BA%D0%B5%D0%B9%D1%81)

Well yes, this is generating data using a very simple algorithm. The overhead of a coroutine is always going to be higher than some basic integral instructions. This is to be expected; this is not what it is good at.

- [Different colors of functions that virally affect the call stack.](https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/)

I have this same link in the DIP. This problem exists because you can yield at an abstraction on top of a thread. It exists in a stackful coroutine such as a Fiber in druntime just as much as it does for a stackless coroutine. The difference between the two coroutine types is that a stackless coroutine will scream it at you. You are forced to handle it. This is a feature to aid in thread safety. It is not a bug or undesirable behavior. If you do not have thread safety modelled by the compiler, you will mess it up. It is too easy to do this. It doesn't matter how much anyone argues "oh just do X"; people have been making that argument for C wrt. pointers without a length forever, and this is no different. And look at how the CVEs keep being created due to it.

- [Inability to abort deep subtasks without manually passing a CancellationToken, since await is not owning.](https://hackernoon.com/why-do-you-need-a-cancellation-token-in-c-for-tasks)

This is a good tool if it is what you need. I don't see the problem here. If you need a way to break cycles due to the use of reference counting, an extra type like this can be a great way to handle cancellation that lifetimes or error handling alone cannot do.

- [The need to reinvent the stack as an AsyncContext.](https://github.com/tc39/proposal-async-context)

This is just weirdo behavior of JavaScript for globals. It does not apply to D.
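Since the cancellation-token idea comes up a few times in this thread, here is a minimal, runnable sketch of such an "extra type" in today's D. The names are made up for illustration and are not from the DIP or any existing library.

```d
import core.atomic : atomicLoad, atomicStore;

// Minimal cooperative cancellation token; illustrative only.
struct CancellationToken
{
    shared(bool)* cancelled;

    void cancel() { atomicStore(*cancelled, true); }
    bool isCancelled() { return atomicLoad(*cancelled); }
}

void deepSubTask(CancellationToken token)
{
    foreach (step; 0 .. 1_000)
    {
        if (token.isCancelled)
            return;              // cooperative: the sub-task checks the shared flag
        // ... one unit of work, or a suspension point in a coroutine ...
    }
}

unittest
{
    auto token = CancellationToken(new shared(bool));
    token.cancel();
    deepSubTask(token);          // returns immediately once cancelled
}
```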
Jan 16
On Thursday, 16 January 2025 at 22:16:10 UTC, Richard (Rikki) Andrew Cattermole wrote:This is to be expected, this is not what it is good at. It exists in a stackful coroutine such as a Fiber in druntime, just as much as it does for a stackless coroutine. I don't see the problem here. This is just weirdo behavior of JavaScript for globals.You are very inattentive to the arguments of the interlocutor. I see that you prefer not to notice the problems instead of solving them. Good luck to you with this undertaking - you will need it. But if this cancer of "modern" programming languages creeps into D, I’ll finally switch to some Go.
Jan 19
On Sunday, 19 January 2025 at 18:46:23 UTC, Jin wrote: I see that you prefer not to notice the problems instead of solving them. Good luck to you with this undertaking - you will need it. But if this cancer of "modern" programming languages creeps into D, I'll finally switch to some Go.

I think you have a misunderstanding (or multiple, here). Nobody here wants to take away threads or fibers from the language. And also: even threads and fibers have many of the same problems as stackless coroutines; the only real difference is the implementation and somewhat their usage.

- [Low performance due to the inability to properly optimize the code.](https://page.hyoo.ru/#!=btunlj_fp1tum/View'btunlj_fp1tum'.Details=%D0%90%D1%81%D0%B8%D0%BD%D1%85%D1%80%D0%BE%D0%BD%D0%BD%D1%8B%D0%B9%20%D0%BA%D0%B5%D0%B9%D1%81)

Benchmarking is only as good and useful as the environment it is used in. I can easily create benchmarks that show how "slow" fibers are and how "fast" async is, as well as the other way around. Hell, anyone could claim that just spawning more OS threads is "somehow" faster than any green thread or async continuation if they just tweak their workload enough; because at the end of the day it's exactly that which is the key to benchmarks: workload. Any form of concurrency only really excels at what it is doing when used in a workload where it is key to do things concurrently / in parallel, which is mainly IO-bound applications such as web servers. Any linear job, such as calculating a Fibonacci number, will always be slower when bloated with **ANY** form of concurrency. Just go ahead and try re-implementing it with fibers or OS threads where every call to `fib(n)` spawns a new thread and joins it. I think anyone would agree that that is just an insane waste of performance, which it rightfully is! Nobody in their right mind would try to calculate it in parallel, because it is still only a "simple" calculation. Another thing is when you have to deal with millions (!) of concurrent requests on a web server where there's no guarantee that any of the requests resolve in linear time, or in other words, without waiting on another thing in some form; which is a stark contrast to a Fibonacci calculation, which will always be resolvable without any further waiting once it's started. This is due to the purity of these two workloads: Fibonacci is pure, as it only ever requires the inputs you give it directly. But 99.99% of any web request deals with some form of waiting: be it because you have a database you need to wait for, a cache adapter like redis, or a file you need to read: IO is a large portion of time spent waiting. That's why we invented fibers or async in the first place: spending the precious time we would otherwise wait doing actual work.

- [The need to reinvent the stack as an AsyncContext.](https://github.com/tc39/proposal-async-context)

This need only arises from poorly used global variables / "impure" code, as the example you reference demonstrates very well; the async code captures all explicitly passed values to functions correctly. Only in the example where a "shared" variable (a global for all that matters here) is introduced do problems start to creep in. These problems also arise if one uses fibers, btw, as globals are **always** a source of errors if not managed correctly. That's one of the reasons D supports writing "pure" code: if you eliminate any implicit outside truth and only consider values explicitly passed via parameters or return values, your code magically gets much safer and also easier for a compiler to optimize. And btw, even threads and fibers have this context problem: because of that, we invented thread-locals, or in the case of fibers, fiber-locals. Just look at vibe.d; they build on top of fibers and added a fiber-local storage, because globals are inherently a problem in practically **all** concurrent code, not only async/await stackless coroutines.

But if this cancer of "modern" programming languages creeps into D, I'll finally switch to some Go.

It's funny that you mention Go, as it has some of the very flaws you yourself mentioned; it has the same context problem with globals; it expects you (like many other languages) to use a mutex to protect them or use a type **literally** named 'Context'. Sure, it additionally has some race detection, but that gets you only so far. And your point about how you need an extra CancellationToken type: that's also true for **any** threading and/or fibers, and in Go it's literally one of the first things you learn: waitgroups and context (again). And I would ask you to keep this negativity out of these sorts of discussions. Again, nobody will take away threads or fibers; all that's proposed here is that we get another tool in our toolbox. If you want to continue using fibers, you're free to. I would also mention that I wouldn't want fibers to be removed once stackless coroutines land in D; D is a language for everyone, and as such should give people as many tools as they need. There will always be some tool that's not used by everyone, but I see that as a win. Better to have one tool too many than to lack it and resort to weird hacks to get stuff working.
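The thread-local point is easy to demonstrate in today's D, where module-level variables are thread-local by default and ``__gshared``/``shared`` are the explicit opt-outs. A small runnable illustration (the variable names are made up):

```d
import core.thread : Thread;

int perThreadCounter;            // module-level variables are thread-local in D
__gshared int processCounter;    // explicitly one instance for the whole process

void work()
{
    perThreadCounter += 1;       // each thread mutates its own copy
    processCounter += 1;         // visible to every thread; needs care/synchronisation
}

unittest
{
    auto t = new Thread(&work);
    t.start();
    t.join();                    // join gives us a happens-before edge here

    assert(perThreadCounter == 0);  // untouched in this thread
    assert(processCounter == 1);    // mutated by the worker thread
}
```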
Jan 23
On Friday, 24 January 2025 at 04:32:21 UTC, Mai Lapyst wrote: I think you have a misunderstanding (or multiple, here). Nobody here wants to take away threads or fibers from the language.

This is not my first day in programming and, unfortunately, I know perfectly well how the Overton window works: - No one will suffer from another feature X. Don't want it - don't use it. - Here you have a new API / library, but it is available only through X. Please rewrite part of your code through X. - There are too many crutches for X in the code. In order to get rid of them, you need to rewrite your entire code on X. - X is actively supported, while the "outdated" mode of operation drowns in problems that go unfixed for years, some of which appeared during the implementation of X. - At some point, the code without X simply stops working and you have no choice left at all. See the example: https://www.npmjs.com/package/fibers

Benchmarking is only as good and useful as the environment it is used in. I can easily create benchmarks that show how "slow" fibers are and how "fast" async is, as well as the other way around.

When the wise man points at the problem, the fool looks at the finger. An asynchronous function cannot be inlined at the call site, nor can it use faster allocation on the stack. The benchmark only shows that even the coolest modern JIT compilers are not able to optimize them.

Just go ahead and try re-implementing it with fibers or OS threads where every call to `fib(n)` spawns a new thread and joins it. I think anyone would agree that that is just an insane waste of performance, which it rightfully is! Nobody in their right mind would try to calculate it in parallel, because it is still only a "simple" calculation.

You either decisively do not understand what you are talking about, or do intentional demagogy. Both options do not honor you. Where a single fiber will have a dozen fast calls and optionally one yield somewhere in the depths, with async functions you will have a dozen slow asynchronous calls, even if no asynchrony (for example, because of caching) is required.

This need only arises from poorly used global variables / "impure" code, as the example you reference demonstrates very well;

Any useful code is not pure. You either did not understand the problem that AsyncContext solves, or did not try to understand it. Global (or rather Thread/Fiber Local) variables are used for special programming techniques that allow you to write simpler, more reliable and more effective code. I will not give a lecture here on reactive programming, logging in exceptional situations, and tracking user actions. See this series for example: https://dev.to/ninjin/perfect-reactive-dependency-tracking-85e Concurrent access to variables with multi-threading has nothing to do with it.
Jan 28
On Tuesday, 28 January 2025 at 16:31:35 UTC, Jin wrote:You either decisively do not understand what you are talking about, or do intentional demagogy. Both options do not honor you.Hello, I don't want to delete your post because it has relevant insights, but personal attacks are not allowed in this forum. Please phrase your criticism in a less hostile manner. Thank you.
Jan 28
On Tuesday, 28 January 2025 at 16:31:35 UTC, Jin wrote: This is not my first day in programming and, unfortunately, I know perfectly well how the Overton window works:

Sure, but you forget a crucial detail in your analysis: humans and intention. Other projects (maybe with corporate funding behind them) will indeed throw usability under the bus for some sweet, sweet money (i.e. Blockchain / AI), but Dlang is a community effort, entirely driven and held up by humans who put their heart into it. Throwing both into the same bin and drawing conclusions about them isn't fair game. While the same could possibly happen to Dlang, it would only be because there's no one currently working on these features, and that's only because there are generally too few contributors, which in turn is an effect of there being hardly any help for onboarding / mentoring a new contributor onto the project, which also comes from the lack of people wanting to put effort into the language. But that's all a management and reputation issue of the project, not ill intent or a general lack of empathy towards people who prefer to work at a lower level (i.e. golang, which doesn't even give you access to its green-threads implementation for you to tweak!).

An asynchronous function cannot be inlined at the call site, nor can it use faster allocation on the stack.

First: they are just **functions**; of course they will use stack allocation whenever possible, just like normal functions. The only difference is the state machine that's wrapped around it. Any value that needs to survive into another state will be put outside of the stack. But that still doesn't say **how** this memory gets allocated, as this is the responsibility of the executor driving the state machine! So you can perfectly well allocate it on the stack too, removing any "performance issue" that might arise. The only downside then is that your state machine is somewhat useless, but that would equally be the case if you did it by hand, so it's then more a design problem of the executor than of the technique.

The benchmark only shows that even the coolest modern JIT compilers are not able to optimize them.

Because you once again try to compare apples with oranges. Of course linear code with "normal" functions will be way more performant if you just look at the asm generated by them and draw your conclusion. But once again: async is for handling waiting for states which you **don't** know when they will be ready, such as damn IO! You just can't predict when your harddrive / kernel will answer the request for more data; you can just wait until it says so! That's why blocking IO fell out of favor: it just stalls your program and you can't do anything else. How did we solve that? Right, by introducing **parallelism** via threads, which is just running code asynchronously to each other!!! But it was slow because of kernel context switches; the solution? Move it to userspace, aka fibers / green threading / lightweight threads! Same technique, other place; still the same idea of parallelism by executing code **seemingly** asynchronously to each other. Async functions / state machines are just the next evolution of that, just like we thought one day that `goto` for simple branching would be too cumbersome to write + too much can go wrong with it, so we created `if X ... else ...`, `for X ...`, `while X ...` and so forth!

Where a single fiber will have a dozen fast calls and optionally one yield somewhere in the depths, with async functions you will have a dozen slow asynchronous calls, even if no asynchrony (for example, because of caching) is required.

Sure, but again you don't compare them fairly. An async call (with use of `await`) is just like calling `Fiber.yield()`! So to compare them on the same level, you would need to yield in every call, making my analogy of a thread per `fib(n)` call understandable. So yes, of course async functions for a **non-async task** will be wasteful, but so will using threads for the same thing! At the end of the day, it's not the technique's fault if a programmer just uses it wrong; `fib(n)` shouldn't be parallel (in any form!) to begin with.

Any useful code is not pure.

Fib is pure. Addition is pure. Any arithmetic is pure. Modifying an object via a method that only changes fields is pure. I would argue they are useful, unless of course you think that **any code anywhere** is a waste of time, in which case we can stop the whole thing right here. I'm not saying that we should all do only functional programming (I also don't like overuse of it!), but considering what it **teaches** you isn't a bad thing; like pureness (which dlang has itself!) and effects. Just go ahead: grab any code and tell dlang it should output you the processed dlang code (i.e. on run.dlang.io the "AST" button); you'll quickly see that a ton of functions are actually marked pure by the compiler while containing sensible and useful code!

Global (or rather Thread/Fiber Local) variables are used for special programming techniques that allow you to write simpler, more reliable and more effective code.

It's optimization. Just like making a feature that automatically creates & optimizes state machines is. Once again: if you're fine with writing the functions by hand, no one is stopping you. Fibers in dlang are an entirely **library**-driven construct. You can just rip them out of Phobos and maintain your own version. Nothing prevents you from that!
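To make the thread-per-call comparison concrete, here is a runnable (and deliberately wasteful) D sketch of exactly that misuse; it is an illustration of the point above, not a pattern anyone should copy.

```d
import core.thread : Thread;

// Plain recursive version for comparison.
ulong fib(uint n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Deliberately wasteful version: every recursive call gets its own OS thread,
// purely to show how badly misapplied concurrency fits a linear workload.
ulong fibThreaded(uint n)
{
    if (n < 2)
        return n;

    ulong a, b;
    auto t1 = new Thread({ a = fibThreaded(n - 1); });
    auto t2 = new Thread({ b = fibThreaded(n - 2); });
    t1.start(); t2.start();
    t1.join(); t2.join();
    return a + b;
}

unittest
{
    assert(fib(8) == 21);
    assert(fibThreaded(8) == 21);  // same answer, vastly more overhead
}
```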
Feb 03
On Monday, 3 February 2025 at 21:43:09 UTC, Mai Lapyst wrote:On Tuesday, 28 January 2025 at 16:31:35 UTC, Jin wrote:FYI https://reductor.dev/cpp/2023/08/10/the-downsides-of-coroutines.html https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md
Feb 10
"Thank you for the detailed explanation about opConstructCo. As someone in the process of writing my own frontend for the dlang language, I find this information very valuable. The clarity you provided about the necessity of opConstructCo in the language documentation is much appreciated. It's indeed crucial to include it clearly as a new operator in the relevant sections to avoid confusion for implementers like myself. I also wanted to highlight a related concern about hardware compatibility. For instance, when working with advanced features like coroutines, the overall system performance is a significant factor. This brings to mind the importance of having fast and reliable storage solutions. Recently, I've been exploring Hard Disk Drives with SATA 12GBPS https://serverorbit.com/hard-disk-drives/sata-12gbps interfaces. These drives offer a substantial boost in data transfer rates compared to older models, which can be particularly beneficial for high-performance computing tasks, including those involving complex coroutine implementations. Overall, ensuring that both the software and hardware aspects are well-documented and compatible is essential for seamless development and execution. Thanks again for shedding light on opConstructCo and helping us navigate these technical details more effectively.
Feb 12
On Monday, 13 January 2025 at 17:59:35 UTC, Atila Neves wrote: […] Why ``@async return`` instead of ``yield``? Why does ``@async`` have to be added to the grammar if it looks like an attribute?

Agreed. My two cents: C# has `yield return` and `yield break`. The funny thing is, if D were open to contextual keywords, we could do the same. Then, `yield` wouldn't even have to become a keyword. Alternatively, use `yield_return` and `yield_break`, which, yes, are valid identifiers, but have a near-zero probability of being present in existing D code. If anything, the proper way to make `yield` into a keyword is `__yield`, not `@yield`.
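For comparison, D can already express C#-style ``yield return`` in library form today via the fiber-backed ``std.concurrency.Generator``; the stackless design in this DIP would reach a similar call-site shape without a stack per generator. A small runnable example:

```d
import std.concurrency : Generator, yield;

// Fiber-based generator available in today's Phobos.
auto squares(int n)
{
    return new Generator!int({
        foreach (i; 0 .. n)
        {
            auto v = i * i;
            yield(v);          // roughly C#'s `yield return i * i;`
        }
    });
}

unittest
{
    import std.algorithm : equal;
    assert(squares(4).equal([0, 1, 4, 9]));
}
```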
Jan 23
On Thursday, 12 December 2024 at 10:36:50 UTC, Richard (Rikki) Andrew Cattermole wrote: Stackless coroutines are a way to enable asynchronous programming for less skilled and less knowledgeable people, whilst offering efficient, safe processing of events. […] Latest: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4

The design as presented isn't focused enough IMO. It lacks detail in core areas and adds in too many (questionable) nice-to-have or day-two features. I believe the challenge with coroutines is twofold: 1) the code transformations and 2) how/when to resume suspended coroutines. Then there are secondary goals like extensibility/flexibility and eliding allocations. Additionally, the DIP has no reference to C++'s coroutine feature. I am sure there is a ton written on the subject by users, implementors, designers, etc. A good starting point would be the official reference https://en.cppreference.com/w/cpp/language/coroutines as well as Lewis Baker's blog: https://lewissbaker.github.io/2022/08/27/understanding-the-compiler-transform (part 5 of 5), who is knee-deep in this stuff.
Jan 23
On 23/01/2025 10:00 PM, Sebastiaan Koppe wrote: On Thursday, 12 December 2024 at 10:36:50 UTC, Richard (Rikki) Andrew Cattermole wrote: Stackless coroutines are a way to enable asynchronous programming […]

The design as presented isn't focused enough IMO. It lacks detail in core areas and adds in too many (questionable) nice-to-have or day-two features. I believe the challenge with coroutines is twofold: 1) the code transformations and 2) how/when to resume suspended coroutines. Then there are secondary goals like extensibility/flexibility and eliding allocations. Additionally, the DIP has no reference to C++'s coroutine feature. I am sure there is a ton written on the subject by users, implementors, designers, etc. A good starting point would be the official reference https://en.cppreference.com/w/cpp/language/coroutines as well as Lewis Baker's blog: https://lewissbaker.github.io/2022/08/27/understanding-the-compiler-transform (part 5 of 5), who is knee-deep in this stuff.

I didn't see anything worth adding. So I haven't.

That article by Lewis Baker is pretty good at laying out how C++ does it, although it does hand-wave away some details that I would be interested in. It also confirms a couple of things for me. 1. Tying the language to a specific library does not inherently make it simpler. 2. C++ has a more complex design that does not offer anything above that of my proposal. I'll add it to the references section, but not the prior work. This is not a copy-C++ situation.
Jan 23
On Thursday, 23 January 2025 at 09:49:54 UTC, Richard (Rikki) Andrew Cattermole wrote: On 23/01/2025 10:00 PM, Sebastiaan Koppe wrote: A good starting point would be the official reference https://en.cppreference.com/w/cpp/language/coroutines

I didn't see anything worth adding. So I haven't.

I wouldn't dismiss it so easily. For one, it explains the mechanism by which suspended coroutines can get scheduled again, whereas your DIP only mentions the `WaitingOn` but doesn't go into detail on how it actually works.
Jan 23
On 24/01/2025 4:55 AM, Sebastiaan Koppe wrote:On Thursday, 23 January 2025 at 09:49:54 UTC, Richard (Rikki) Andrew Cattermole wrote:Ahhh ok, you are looking for a statement to the effect of: "A coroutine may only be executed if it is not complete and if it has a dependency for that to be complete or have a value." The reason it is not in the DIP is because this a library behavior. On the language side there is no such guarantee, you should be free to execute them repeatedly without error. There could be logic bugs, but the compiler cannot know that this is the case. About the only time the compiler should prevent you from calling it is if there is no transition to execute (such as it is now complete).On 23/01/2025 10:00 PM, Sebastiaan Koppe wrote:I wouldn't dismiss it so easily. For one it explains the mechanism by which suspended coroutines can get scheduled again, whereas your DIP only mentioned the `WaitingOn` but doesn't go into detail how it actually works.A good starting point would be the official reference https:// en.cppreference.com/w/cpp/language/coroutinesI didn't seen anything worth adding. So I haven't.
Jan 23
On Thursday, 23 January 2025 at 17:14:50 UTC, Richard (Rikki) Andrew Cattermole wrote:On 24/01/2025 4:55 AM, Sebastiaan Koppe wrote:No, that is not what I mean. Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine. The execution context in this case could be the main thread, a pool, etc. From that above mentioned C++ link:I wouldn't dismiss it so easily. For one it explains the mechanism by which suspended coroutines can get scheduled again, whereas your DIP only mentioned the `WaitingOn` but doesn't go into detail how it actually works.Ahhh ok, you are looking for a statement to the effect of: "A coroutine may only be executed if it is not complete and if it has a dependency for that to be complete or have a value." The reason it is not in the DIP is because this a library behavior. On the language side there is no such guarantee, you should be free to execute them repeatedly without error. There could be logic bugs, but the compiler cannot know that this is the case. About the only time the compiler should prevent you from calling it is if there is no transition to execute (such as it is now complete).The coroutine is suspended (its coroutine state is populated with local variables and current suspension point). awaiter.await_suspend(handle) is called, where handle is the coroutine handle representing the current coroutine. Inside that function, the suspended coroutine state is observable via that handle, and __it's this function's responsibility to schedule it to resume on some executor__, or to be destroyed (returning false counts as scheduling)
Jan 23
On 24/01/2025 9:12 AM, Sebastiaan Koppe wrote:On Thursday, 23 January 2025 at 17:14:50 UTC, Richard (Rikki) Andrew Cattermole wrote:Right, I handle this as part of my scheduler and worker pool. The language has no knowledge, nor need to know any of this which is why it is not in the DIP. How scheduling works, can only lead to confusion if it is described in a language only proposal (I've had Walter attach on to such descriptions in the past and was not helpful).On 24/01/2025 4:55 AM, Sebastiaan Koppe wrote:No, that is not what I mean. Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine. The execution context in this case could be the main thread, a pool, etc. From that above mentioned C++ link:I wouldn't dismiss it so easily. For one it explains the mechanism by which suspended coroutines can get scheduled again, whereas your DIP only mentioned the `WaitingOn` but doesn't go into detail how it actually works.Ahhh ok, you are looking for a statement to the effect of: "A coroutine may only be executed if it is not complete and if it has a dependency for that to be complete or have a value." The reason it is not in the DIP is because this a library behavior. On the language side there is no such guarantee, you should be free to execute them repeatedly without error. There could be logic bugs, but the compiler cannot know that this is the case. About the only time the compiler should prevent you from calling it is if there is no transition to execute (such as it is now complete).The coroutine is suspended (its coroutine state is populated with local variables and current suspension point). awaiter.await_suspend(handle) is called, where handle is the coroutine handle representing the current coroutine. Inside that function, the suspended coroutine state is observable via that handle, and __it's this function's responsibility to schedule it to resume on some executor__, or to be destroyed (returning false counts as scheduling)
Jan 23
On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:On 24/01/2025 9:12 AM, Sebastiaan Koppe wrote:Without having a notion on how this might work I can't reasonably comment on this DIP.Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine.Right, I handle this as part of my scheduler and worker pool. The language has no knowledge, nor need to know any of this which is why it is not in the DIP.How scheduling works, can only lead to confusion if it is described in a language only proposal (I've had Walter attach on to such descriptions in the past and was not helpful).You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption. Rust has a Waker, C++ has the await_suspend function, etc.
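For readers unfamiliar with what is being requested here, a minimal sketch of such a notification hook in plain D; ``Waker`` and ``AsyncOp`` are made-up names for illustration, not part of the DIP or of any existing library.

```d
// Illustrative waker-style protocol: the scheduler hands a wake callback to
// the operation a coroutine suspended on; the operation invokes it on completion.
struct Waker
{
    void delegate() wake;   // supplied by the scheduler; re-queues the coroutine
}

interface AsyncOp
{
    // Called when a coroutine suspends on this operation. The implementation
    // stores the waker and invokes it once the operation (e.g. a socket read)
    // has completed, telling the scheduler the coroutine can be resumed.
    void onSuspend(Waker waker);
}

final class CompletedOp : AsyncOp
{
    void onSuspend(Waker waker) { waker.wake(); }  // already done: wake at once
}

unittest
{
    bool resumed;
    AsyncOp op = new CompletedOp;
    op.onSuspend(Waker(() { resumed = true; }));
    assert(resumed);
}
```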
Jan 23
On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote: On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote: On 24/01/2025 9:12 AM, Sebastiaan Koppe wrote: Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine.

Right, I handle this as part of my scheduler and worker pool. The language has no knowledge, nor need to know any of this which is why it is not in the DIP. How scheduling works, can only lead to confusion if it is described in a language only proposal (I've had Walter attach on to such descriptions in the past and was not helpful).

You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption. Rust has a Waker, C++ has the await_suspend function, etc.

Are you wanting this snippet?

```d
// if any dependents unblock them and schedule their execution.
void onComplete(GenericCoroutine);

// Depender depends upon dependency, when dependency has value or completes unblock depender.
// May need to handle dependency for scheduling.
void seeDependency(GenericCoroutine dependency, GenericCoroutine depender);

// Reschedule coroutine for execution
void reschedule(GenericCoroutine);

void execute(COState)(GenericCoroutine us, COState* coState) {
    if (coState.tag >= 0) {
        coState.execute();

        coState.waitingOnCoroutine.match{
            (:None) {};
            (GenericCoroutine dependency) {
                seeDependency(dependency, us);
            };
            // Others? Future's ext.
        };
    }

    if (coState.tag < 0)
        onComplete(us);
    else
        reschedule(us);
}
```

Where ``COState`` is the generated struct as per Description -> State heading. Where ``GenericCoroutine`` is the parent struct to ``Future`` as described by the DIP, that is not templated. Due to this depending on sumtypes I can't put it in as-is. Every library will do this a bit differently, but it does give the general idea of it. For example you could return the dependency and have it immediately executed rather than let the scheduler handle it.
Jan 23
On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote: Are you wanting this snippet? […] Every library will do this a bit differently, but it does give the general idea of it. For example you could return the dependency and have it immediately executed rather than let the scheduler handle it.

First off: nice work on the proposal here; I really like it. Would love to try it once it's in a beta stage, as it's quite promising. As an individual who implemented their own userspace eventloop via fibers, I would love to have another utility in my belt to use in the implementation. The only thing I had a hard time figuring out was what you meant by "If it causes an error, this error is guaranteed to be wrong in a multi-threaded application of it."; what I think you mean is that any exception created / captured by the coroutine is guaranteed to be indeed an exception and should be treated as such. Correct me if I'm wrong. Another thing is the visibility of the members of the created struct; shouldn't some of them be read-only (aka const for anyone outside) or be completely private? Like `tag`: there should be no situation where an outside entity should control the state of the coroutine, not even as part of a library, or do I miss something?

Then ``yield`` would be a keyword, which in turn breaks code that is known to exist.

Which is the same with `await`; I honestly like the way Rust solved it: any Future (Rust's equivalent to a coroutine type) implicitly has the `.await` method, so instead of writing `await X`, you have `X.await`. This doesn't break existing code, as `.await` is still a perfectly fine method invocation. If we're here to reduce breaking code as much as possible, I would strongly go with the `.await` way instead of adding a new keyword. For yield, the only thing I can think of is to introduce a construct like `Fiber.yield`, maybe `Coro.yield`, that gets picked up by any dlang edition that understands coroutines and gets rewritten into a proper yield, while older versions would see a reference to a function / field, which can be provided to those editions as a symbol with `static assert(false, "...")` to inform them about the improper usage; but that would have the same problems, as there could well already be such a construct... But if we're using an attribute, I like the ``@yield`` from Quirin's post a lot more (and `__yield` seems very clumsy to me).

Rust has a Waker, […]

```
coState.waitingOnCoroutine.match{
    (:None) {};
    (GenericCoroutine dependency) {
        seeDependency(dependency, us);
    };
    // Others? Future's ext.
};
```

The waker design seems much more flexible than a dependency system. For example, with wakers one could implement asynchronous IO by using epoll and invoking the waker when there's data available. I'm a bit confused about how that would look in your proposal. Sure, your executor uses a match on a sumtype to determine what it's waiting on, but how does one "register" a custom dependency type? Granted, the compiler can scan the code and pick up any type that's been waited on as a dependency, but how does an executor know how to handle it? Currently, the type must be known beforehand by the executor, meaning that the executor and the IO library must be developed as one, instead of being two separate things that only share a common protocol between them. And even with compiler support for sumtypes, when the sumtype is dynamically created, there will be times where the sumtype does not contain all the types the executor can process, ending up with unreachable branches, which could lead to compiler warnings or even errors that are cryptic. While I agree that we should have a notion of how coroutines can be put to sleep until a certain event has taken place, I think dependencies aren't a great solution to that. As mentioned, a waker API would be better suited for this task, as it lets the executor and IO be their own thing instead of trying to forcefully combine them into one.
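As a tiny illustration of the coupling concern raised above (not code from the DIP; the dependency kinds here are made up), an executor written against a fixed sumtype cannot react to a dependency kind it was not compiled to know about:

```d
import std.sumtype : SumType, match;

struct None {}
struct CoroutineDep {}   // stand-in for the DIP's coroutine dependency
struct TimerDep {}       // hypothetical kind a separate IO library might want

alias WaitingOn = SumType!(None, CoroutineDep);

// An executor written against the closed set above must be modified and
// recompiled together with the IO library before TimerDep can be handled.
void dispatch(WaitingOn w)
{
    w.match!(
        (None _)         {},
        (CoroutineDep _) { /* register the dependency with the scheduler */ }
    );
}

unittest
{
    dispatch(WaitingOn(CoroutineDep()));
    // WaitingOn(TimerDep()) would not even compile: the full set of dependency
    // kinds has to be agreed on up front by executor and IO library alike.
}
```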
Jan 23
On 24/01/2025 5:33 PM, Mai Lapyst wrote:On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:Atila had a problem with this also. I haven't been able to change it as he didn't give me anything to work from, which you did, thank you. "If the compiler generates an error that a normal function would not have, the error is guaranteed to not be a false positive when considering a multithreaded context of a coroutine."On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:First off: nice work on the proposal here; I really like it. Would love to try it once it's in an beta stage as it's quite promising. As an individual that implemented their own userspace eventloop via fibers, I would love to have another utility in my belt to use in the implementation. The only thing I had a hard time figuring out what you ment by "If it causes an error, this error is guaranteed to be wrong in a multi- threaded application of it."; What I think is that you mean that any exception created / captured by the coroutine is guranteed to be indeed an execption and should be threaded as such. Correct me if I'm wrong.On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote:Are you wanting this snippet? ```d // if any dependents unblock them and schedule their execution. void onComplete(GenericCoroutine); // Depender depends upon dependency, when dependency has value or completes unblock depender. // May need to handle dependency for scheduling. void seeDependency(GenericCoroutine dependency, GenericCoroutine depender); // Reschedule coroutine for execution void reschedule(GenericCoroutine); void execute(COState)(GenericCoroutine us, COState* coState) { if (coState.tag >= 0) { coState.execute(); coState.waitingOnCoroutine.match{ (:None) {}; (GenericCoroutine dependency) { seeDependency(dependency, us); }; // Others? Future's ext. }; } if (coState.tag < 0) onComplete(us); else reschedule(us); } ``` Where ``COState`` is the generated struct as per Description -> State heading. Where ``GenericCoroutine`` is the parent struct to ``Future`` as described by the DIP, that is not templated. Due to this depending on sumtypes I can't put it in as-is. Every library will do this a bit differently, but it does give the general idea of it. For example you could return the dependency and have it immediately executed rather than let the scheduler handle it.On 24/01/2025 9:12 AM, Sebastiaan Koppe wrote:Without having a notion on how this might work I can't reasonably comment on this DIP.Upon yielding a coroutine, say a socket read, you'll want to park the coroutine until the socket read has completed. This requires a signal on completion of the async operation to the execution context to resume the coroutine.Right, I handle this as part of my scheduler and worker pool. The language has no knowledge, nor need to know any of this which is why it is not in the DIP.How scheduling works, can only lead to confusion if it is described in a language only proposal (I've had Walter attach on to such descriptions in the past and was not helpful).You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption. Rust has a Waker, C++ has the await_suspend function, etc.Another thing is the visibility of the members of the created struct; shouldn't some of them be read-only (aka const for anyone outside) or completly be private?I don't see a reason to do so (we can change this later if it is shown to be a problem). 
Its meant for library authors to have full control over lifetimes, and inspect general lifecycle stuff. End users should never see it. If they can see it without explicit opting into it that is something we should probably close a hole on.Like `tag`: there should be no situation where an outside entitiy should control the state of the coroutine, not even in as a part of a library or do I miss something?You may wish to complete a coroutine early. Nothing bad should happen if you do this. If it does, that is likely a compiler bug, or the user did something nasty.I don't expect code breakage. Its a new declaration so I'd be calling for this to only be available in a new edition. Worse case scenario we simply won't parse it in a function that isn't a coroutine. We have multiple tools for dealing with this :)Then `yield` would be a keyword, which in turn breaks code which is known to exist.Which is the same with `await`; I honestly like the way rust solved it: any Future (rust's equivalent to a coroutine type), has implicitly the `.await` method, so instead of writing `await X`, you have `X.await`. This dosn't break exisiting code as `.await` is still perfectly fine an method invocation. When we're here to reduce breaking code as much as possible, I strongly would go with the `.await` way instead of adding a new keyword.For yield the only thing I can think of is to introduce a way like `Fiber.yield`, maybe `Coro.yield` that gets picked up by any dlang edition that understands coroutine and gets rewritten into a proper yield while older versions would see a reference to an function / field, which can be provided to these editions as a symbol with `static assert(false, "...")` to inform them about the inproper usage; but that would have the same problems as there could well be already such a construct... But if we're using an attribute, I like the ` yield` from Quirin's post a lot more (and `__yield` seems very clumpsy to me).If this is needed I'm sure we can figure something out. I'm hopeful that we'll have stuff like this figured out if changes are needed prior to it being turned on. Although I am currently doubtful of it.Currently the DIP has no filtering on this. It chucks the type into the sumtype (i.e. when it sees the ``await``) and its good to go. The library would then be responsible for going "hey I don't know what this type is ERROR". We may need to filter things out, which we could do once we have some experience with it. Of course it could be possible that library code can handle this just fine (what I expect).Rust has a Waker, ... ... ``` coState.waitingOnCoroutine.match{ (:None) {}; (GenericCoroutine dependency) { seeDependency(dependency, us); }; // Others? Future's ext. }; ```The waker design seems much more flexible than a dependency system. For example, with wakers one could implement asyncronous IO by using epoll and invoking the waker when there's date available. I'm a bit confused on how that would look in your proposal. Sure your executor uses a match on a sumtype to determine what's it waiting on, but how does one "register" a custom dependency type?Granted, the compiler can scan the code and pickup any type thats been waited on as a dependency, but how does a executor know how to handle it? Currently, the type must be known beforehand from the executor, thus meaning that the executor and the IO library must be developed as one, instead of being two seperate things that only share a common protocol between them. 
And even when having compiler support for sumtypes, when the sumtype is dynamically created, there will be times where the sumtypes does not contain all types the executor can process, ending up with unreachable branches which could lead to compiler warnings or even errors that are cryptic.Yes, my implementation is all in one. Eventloop + coroutine library. This will likely need some further design work to see if we can split them without exposing any nasty details of the coroutine library to people who should never see it. I don't see an issue with the sumtypes as far as usage is concerned. ```d static if (is(Dependency : Future!ReturnType, ReturnType)) { } else static if (is(Dependency : GenericCoroutine)) { } else { static assert(0, "what type is this?"); } ```While I agree that we should have a notion on how coroutines can be put to sleep until an certain event took place, I think dependencies aren't a great solution to that. As mentioned would a waker API be better suited for this task as it lets executor and IO be their own thing instead of trying to forcefully combine it into one.They are not necessarily the same thing, although there are benefits in doing so (like sharing the same thread pool). In my library I have something called a future completion. This is the backbone of my eventloop library for when events take place and you want to get notification into the hands of the user like reading from a socket (with the value that was read). https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/coroutine/future_completion.d#L216 Essentially it allows you to use the coroutine abstraction to return a specific value out and it works with the scheduler as if it was user defined. Except it will never be completed by the scheduler, it is done by some other code. I am struggling to see how the waker/poll API from Rust is not a more complicated mechanism for describing a dependency for when to continue.
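As a rough illustration of the "future completion" idea described above, a value that the scheduler itself never completes and that only external IO code fills in, a hand-rolled sketch might look like this. The names are made up for illustration and this is not the sidero-eventloop API:

```d
// Sketch of a future-completion: the scheduler only reads haveValue when
// deciding whether dependents may resume; complete() is called by IO code.
struct FutureCompletion(T)
{
    private T value;
    private bool haveValue;

    // Called by whatever produces the result, e.g. the event loop's
    // socket-read handler, never by the scheduler.
    void complete(T result)
    {
        value = result;
        haveValue = true;
    }

    bool isCompleteOrHaveValue()
    {
        return haveValue;
    }

    T result()
    {
        assert(haveValue, "not completed yet");
        return value;
    }
}

unittest
{
    FutureCompletion!int bytesRead;
    assert(!bytesRead.isCompleteOrHaveValue);
    bytesRead.complete(1024);                // e.g. done by the read handler
    assert(bytesRead.isCompleteOrHaveValue); // dependents may now be resumed
}
```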
Jan 23
On Friday, 24 January 2025 at 06:16:27 UTC, Richard (Rikki) Andrew Cattermole wrote:"If the compiler generates an error that a normal function would not have, the error is guaranteed to not be a false positive when considering a multithreaded context of a coroutine."With error you mean an exception? As there are compiler errors (as in the compiler refuses to compile something), and execptions (i.e. `throw X`). Just makeing sure we're on the same page. If so, then I get what you are meaning and should ofc be the case, as is not really different as non-multithreaded non-coroutine code: any exception thrown shouldn't be a false-positive as long as the logic guarding it is not flawed in any form.Hmmm, thats indeed a reason for changing `tag`; you wouldn't need a cancelation token as the tag is this cancelation token to some extend. On that note, we could add a third negative value to indicate an coroutine was canceled from an external source or one could generally specify that any negative value means canceled and libraries can "encode" their own errorcodes into this...Like `tag`: there should be no situation where an outside entitiy should control the state of the coroutine, not even in as a part of a library or do I miss something?You may wish to complete a coroutine early. Nothing bad should happen if you do this. If it does, that is likely a compiler bug, or the user did something nasty.Sadly it will; take for example my own little attempt to build a somewhat async framework ontop of fibers: https://github.com/Bithero-Agency/ninox.d-async/blob/f5e94af440d09df33f1d0f19557628735b04cf43/source/ninox/asy c/futures.d#L42-L44 it declares a function `await` for futures; if `await` will become a general keyword, it will have the same problems as if `yield` becomes one: all places where `await` was an identifier before become invalid.I don't expect code breakage. Its a new declaration so I'd be calling for this to only be available in a new edition.Then `yield` would be a keyword, which in turn breaks code which is known to exist.Which is the same with `await`; I honestly like the way rust solved it: any Future (rust's equivalent to a coroutine type), has implicitly the `.await` method, so instead of writing `await X`, you have `X.await`. This dosn't break exisiting code as `.await` is still perfectly fine an method invocation. When we're here to reduce breaking code as much as possible, I strongly would go with the `.await` way instead of adding a new keyword.Worse case scenario we simply won't parse it in a function that isn't a coroutine.Which could be done also with `yield` tbh. I dont see why `await` is allowed to break code and `yield` is not. We could easily make both only available in coroutines / ` async` functions.I am struggling to see how the waker/poll API from Rust is not a more complicated mechanism for describing a dependency for when to continue.It's easier, as it describes how an coroutine should be woken up by the executor, a dependency system is IMO more complicated because you need to differentiate between dependencies whereas Wakers serve only one purpose: wakeup a coroutine / Future that was pending before to be re-polled / executed. --- I've read a second time through your DIP and also took a look at your implementation and have some more questions:opConstructCoYou use this in the DIP to showcase how an coroutine would be created, but it's left unclear if this is part of the DIP or not. 
Which is weird, because without it the translation of

```d
ListenSocket ls = ListenSocket.create((Socket socket) {
    ...
});
```

to

```d
ListenSocket ls = ListenSocket.create(
    InstantiableCoroutine!(__generatedName.ReturnType, __generatedName.Parameters)
        .opConstructCo!__generatedName
);
```

would not be possible, as the compiler would not know that opConstructCo should be invoked here.

Which also has another problem: how does one differentiate between asynchronous closures and non-asynchronous closures? Because you clearly intend here to use the closure passed to `ListenSocket.create` as a coroutine, but it lacks any indicator that it is one. IMHO it should be written like this:

```d
ListenSocket ls = ListenSocket.create((Socket socket) @async {
    ...
});
```

GenericCoroutine

What's this type anyway? I understand that `COState` is the state of the coroutine, aka the `__generatedName` struct which is passed in as a generic parameter, and I think the `execute(COState)(...)` function is meant to be called through a type-erased version of it that is somehow generated from each COState encountered. But what is `GenericCoroutine` itself? Is it your "Task" object that holds not only the state but also the type-erased version of the execute function for the executor?

Function calls

I also find no information in the DIP on how function calls themselves are transformed. What the transformation of a function looks like is clear, but what about calling them in a non-async function? I would argue that this should be possible, with a type that reflects that they're a coroutine as well as the return type, similar to Rust's `Future<T>`. This would also prove that coroutines are zero-overhead, which I would really like them to be in D.
Jan 24
On 25/01/2025 9:49 AM, Mai Lapyst wrote:On Friday, 24 January 2025 at 06:16:27 UTC, Richard (Rikki) Andrew Cattermole wrote:I mean a compiler error. Not a runtime exception. I listed it as a requirement just to make sure we tune any additional errors that can be generated towards being 100% correct. Its more for me than anyone else. I.e. preventing TLS memory from crossing yield points."If the compiler generates an error that a normal function would not have, the error is guaranteed to not be a false positive when considering a multithreaded context of a coroutine."With error you mean an exception? As there are compiler errors (as in the compiler refuses to compile something), and execptions (i.e. `throw X`). Just makeing sure we're on the same page. If so, then I get what you are meaning and should ofc be the case, as is not really different as non-multithreaded non-coroutine code: any exception thrown shouldn't be a false-positive as long as the logic guarding it is not flawed in any form.I don't think that we need to. The language only has to know about -1, -2 and >= 0. At least currently, anything below -64k you can probably set safely. The >= 0 ones are used for the branch table, and you really want those values for that use case as its an optimization. Just in case we had more tags in the language, they'll be more like -10 not -100k.Hmmm, thats indeed a reason for changing `tag`; you wouldn't need a cancelation token as the tag is this cancelation token to some extend. On that note, we could add a third negative value to indicate an coroutine was canceled from an external source or one could generally specify that any negative value means canceled and libraries can "encode" their own errorcodes into this...Like `tag`: there should be no situation where an outside entitiy should control the state of the coroutine, not even in as a part of a library or do I miss something?You may wish to complete a coroutine early. Nothing bad should happen if you do this. If it does, that is likely a compiler bug, or the user did something nasty.The ``await`` keyword has been used for multithreading longer than I've been alive. To mean what it does. Its also very uncommon and does not see usage in druntime/phobos. As it has no meaning outside of a coroutine, it'll be easy to handle I think.Sadly it will; take for example my own little attempt to build a somewhat async framework ontop of fibers: https://github.com/Bithero- Agency/ninox.d-async/blob/f5e94af440d09df33f1d0f19557628735b04cf43/ source/ninox/async/futures.d#L42-L44 it declares a function `await` for futures; if `await` will become a general keyword, it will have the same problems as if `yield` becomes one: all places where `await` was an identifier before become invalid.I don't expect code breakage. Its a new declaration so I'd be calling for this to only be available in a new edition.Then `yield` would be a keyword, which in turn breaks code which is known to exist.Which is the same with `await`; I honestly like the way rust solved it: any Future (rust's equivalent to a coroutine type), has implicitly the `.await` method, so instead of writing `await X`, you have `X.await`. This dosn't break exisiting code as `.await` is still perfectly fine an method invocation. When we're here to reduce breaking code as much as possible, I strongly would go with the `.await` way instead of adding a new keyword.Worse case scenario we simply won't parse it in a function that isn't a coroutine.Which could be done also with `yield` tbh. 
I dont see why `await` is allowed to break code and `yield` is not. We could easily make both only available in coroutines / ` async` functions.If you want to do this you can. I did spend some time last night thinking about this. ```d sumtype PollResult(T) = :NotReady | T; PollResult!(int[]) co(Socket socket) async { if (!socket.ready) { return :NotReady; } async return socket.read(1024); } ``` The rest is all on the library side, register in the waker, against the socket. Or have the socket reschedule as you please. Note: the socket would typically be the one to instantiate the coroutine, so it can do the registration with all the appropriate object references. Stuff like this is why I added the multiple returns support, even though I do not believe it is needed. Its also a good example of why the language does not define the library, so you have the freedom to do this stuff!I am struggling to see how the waker/poll API from Rust is not a more complicated mechanism for describing a dependency for when to continue.It's easier, as it describes how an coroutine should be woken up by the executor, a dependency system is IMO more complicated because you need to differentiate between dependencies whereas Wakers serve only one purpose: wakeup a coroutine / Future that was pending before to be re- polled / executed.--- I've read a second time through your DIP and also took a look at your implementation and have some more questions:It is not part of the DIP. Without the operator overload example, it wouldn't be understood.opConstructCoYou use this in the DIP to showcase how an coroutine would be created, but it's left unclear if this is part of the DIP or not. Which is weird because without it the translation```d ListenSocket ls = ListenSocket.create((Socket socket) { ... }); ``` to ```d ListenSocket ls = ListenSocket.create( InstantiableCoroutine!(__generatedName.ReturnType, __generatedName.Parameters) .opConstructCo!__generatedName); ); ``` would not be possible as the compiler would not know that opConstructCo should be invoked here.Let's break it down a bit. The compiler using just the parse tree can see the function ``opConstructCo`` on the library type ``InstantiableCoroutine``. Allowing it to flag the type as a instantiable coroutine. It can see that the parameter in ``ListenSocket.create`` is of type ``InstantiableCoroutine`` via a little special casing (if it hasn't been template instantiated explicitly). The argument to parameter matching only needs to verify that the parameter has the flag that it is a instantiable coroutine, and the argument is some kind of function, it does not need to instantiate any template. Once matched, then it'll do the conversion and instantiations as required. I've played with this area of dmd, it should work. Although if the parameter is templated, then we may have trouble, but I am not expecting it for things like sockets especially with partial arguments support. https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/coroutine/instanceable.d#L72Which also has another problem: how do one differentiate between asyncronous closures and non-asyncronous closures? Because you clearly intend here to use the closure passed to `ListenSocket.create` as an coroutine, but it lacks any indicator that it is one. Imho it should be written like this: ```d ListenSocket ls = ListenSocket.create((Socket socket) async { ... }); ```See above, it can see that it is a coroutine by the parameter, rather than on the argument. 
Even with the explicit `` async`` it is likely that the error message would have to do something similar to detect that case. Otherwise people are going to get confused. You don't win a whole lot by requiring it. Especially when they are templates and they look like they should "just work".I didn't define ``GenericCoroutine`` in the DIP, as it wasn't needed. Indeed, this is my task abstraction with the type erased executor for execution. Think of the hierarchy as this, it is what I have implemented (more or less), and you could do it differently if it doesn't suit you: ```d struct GenericCoroutine { bool isComplete(); CoroutineCondition condition(); void unsafeResume(); void blockUntilCompleteOrHaveValue(); } struct Future(ReturnType) : GenericCoroutine { ReturnType result(); } struct InstantiableCoroutine(ReturnType, Parameters...) { Future!ReturnType makeInstance(Parameters); InstantiableCoroutine!(ReturnType, ughhhhh) partial(Args...)(Args); // removes N from start of Parameters static InstantiableCoroutine opConstrucCo(CoroutineDescriptor : __descriptorco)(); } ``` https://github.com/Project-Sidero/eventloop/tree/master/source/sidero/eventloop/coroutine Consider why ``GenericCoroutine`` exists, internals, the scheduler ext. cannot deal with a typed coroutine object, it must have an untyped one. Here is how I do it: https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/coroutine/builder.d#L47GenericCoroutineWhats this type anyway? I understand that `COState` is the state of the coroutine, aka the `__generatedName` struct which is passed in as a generic parameter and I think the `execute(COState)(...)` function is ment to be called through a type erased version of it that is somehow generated from each COState encountered. But what is `GenericCoroutine` itself? Is it your "Task" object that holds not only the state but also the type erased version of the execute function for the executor?Currently they cannot be. It was heavily discussed, and I did support it originally. It was decided that the amount of code that will actually use this is minimal enough, and there are problems/confusion possible that it wasn't worth it for the time being. See the ``Prime Sieve`` example for one way you can do this. I can confirm that it does work in practice :) https://github.com/Project-Sidero/eventloop/blob/master/examples/networking/source/app.d#L398Function callsI also find no information in the DIP on how function calls itself are transformed. What the transformation of a function looks like is clear, but what about calling them in a non-async function?I would argue that this should be possible and have an type that reflects that they're a coroutine as well as the returntype, similar to rust's `Future<T>`. This would also proof that coroutines are zero-overhead, which I would really like them to be in D.Nothing in ``AnotherCo`` would be transformed. The ``await`` statement does two things. 1. It assigns the expression's value into the state variable for waiting on. 2. It yields. It doesn't know, nor care what the type of the expression resolves to. The expression has no reason to be transformed in any way. 
Also, struct/classes are inherently defined there as supporting methods that are ``@async``; what happens is that the ``this`` pointer for that type goes after the state struct pointer post-transformation, and you have to explicitly pass it in (via partial, perhaps?).

```d
struct AnotherCo {
    int result() @safe @waitrequired {
        return 2;
    }
}

int myCo() @async {
    AnotherCo co = ...;
    // await co;
    int v = co.result;
    return 0;
}
```

How is `AnotherCo` here a coroutine that can be `await`ed on? With my current understanding of your proposal, only functions and methods are transformed, which means that `AnotherCo.result` would be the coroutine, not its whole parent struct.
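To make the two steps of the ``await`` statement concrete (store the dependency into the state, then yield), here is a hand-written approximation of the kind of state struct the DIP's transformation describes, for a tiny coroutine that awaits one read. The stand-in types `FutureString` and `FakeSocket`, the struct name, and the exact layout are illustrative assumptions, not what the compiler would actually emit:

```d
// Stand-ins for library types, purely for illustration.
struct FutureString { string value; bool have; string result() { return value; } }
struct FakeSocket   { FutureString readLine() { return FutureString("hi", true); } }

// Hand-written approximation of the generated state for something like:
//     int readAndCount(FakeSocket socket) @async
//     {
//         auto line = socket.readLine();
//         await line;                    // (1) record dependency, (2) yield
//         return cast(int)line.result.length;
//     }
struct ExampleCoState
{
    int tag;                 // >= 0: next block to run; < 0: completed
    FakeSocket socket;       // captured parameter
    FutureString line;       // local that lives across the yield point
    FutureString waitingOn;  // "if we yield on a coroutine, it'll be stored here"
    int returnValue;

    void execute()
    {
        switch (tag)
        {
        case 0:
            line = socket.readLine();
            waitingOn = line;            // step 1 of `await`: record the dependency
            tag = 1;                     // resume at the next block when rescheduled
            return;                      // step 2 of `await`: yield back to the library
        case 1:
            returnValue = cast(int) line.result.length;
            tag = -1;                    // completed normally
            return;
        default:
            assert(0);
        }
    }
}

unittest
{
    ExampleCoState co;
    // A real library would only resume once waitingOn has a value.
    while (co.tag >= 0)
        co.execute();
    assert(co.returnValue == 2);
}
```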
Jan 24
On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:No, not specifically. I am requesting the DIP to clarify the mechanism by which a scheduler is notified when a coroutine is ready for resumption, not the specific scheduling itself. The snippet you posted raises more questions than it answers to be honest. First of all I still don't know what a GenericCoroutine or what a Future is. It seems that in your design coroutines are only able to wait for other coroutines. This means you need to model async operations as coroutines in order to suspend on them. Why was this done? C++'s approach of having an awaiter seems simpler. For one it allows the object you are awaiting on to control the continuation directly.On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote: You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption. Rust has a Waker, C++ has the await_suspend function, etc.Are you wanting this snippet?
Jan 24
On 25/01/2025 9:56 AM, Sebastiaan Koppe wrote:

On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:

It should be scheduled:

If: tag >= 0
And: waitingOnCoroutine == None || (waitingOnCoroutine != None && waitingOnCoroutine.isCompleteOrHaveValue)

Where isCompleteOrHaveValue is: tag < 0 || haveValue

The DIP does not require you to do any of this (if things are written correctly it should not segfault and hopefully won't corrupt anything), but this would be good practice. And yes, it is library code. The compiler does not help you to do any of this. You, the library author, are responsible for it. If you want to do something different, like a waker style where these rules do not apply, you are free to. The language only requires the tag to be >= 0 due to the branch table stuff.

On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:

No, not specifically. I am requesting the DIP to clarify the mechanism by which a scheduler is notified when a coroutine is ready for resumption, not the specific scheduling itself.

On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote: You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption. Rust has a Waker, C++ has the await_suspend function, etc.

Are you wanting this snippet?

The snippet you posted raises more questions than it answers, to be honest. First of all, I still don't know what a GenericCoroutine or a Future is.

I wrote it out for someone else here: https://forum.dlang.org/post/vn14i8$1g46$1@digitalmars.com

```d
struct GenericCoroutine {
    bool isComplete();
    CoroutineCondition condition();
    void unsafeResume();
    void blockUntilCompleteOrHaveValue();
}

struct Future(ReturnType) : GenericCoroutine {
    ReturnType result();
}

struct InstantiableCoroutine(ReturnType, Parameters...) {
    Future!ReturnType makeInstance(Parameters);

    InstantiableCoroutine!(ReturnType, ughhhhh) partial(Args...)(Args); // removes N from start of Parameters

    static InstantiableCoroutine opConstructCo(CoroutineDescriptor : __descriptorco)();
}
```

https://github.com/Project-Sidero/eventloop/tree/master/source/sidero/eventloop/coroutine

If it doesn't work for you, do it a different way. The language has no inbuilt knowledge of any of these types. It determines everything that it needs from the operator overload and the core.attributes attributes. If it turns out those attributes are not enough (I am not expecting any to be needed), we can add some to allow your library to communicate to the compiler how it needs to do the slicing and dicing of the function into the state object that you can consume and call.

It seems that in your design coroutines are only able to wait for other coroutines. This means you need to model async operations as coroutines in order to suspend on them.

It should be coroutines, but I left out the filtering for the type that the ``await`` statement will accept. It'll chuck whatever you want into the sumtype value. It's your job to filter it. If you want to support other types and behaviors, go for it!

Remember, the ``await`` statement does two things: assign to ``waitingOn``, then yield (aka return) (and set the tag appropriately).

Why was this done? C++'s approach of having an awaiter seems simpler.

https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/expressions#1298-await-expressions

A significantly more mature solution, where we have Adam, who has experience working with it in teams since it was created.
He has dealt with all the problems that come with that. I don't have a stakeholder who fits the bill for other styles.

In saying all that, I find the dependency approach to be very intuitive, and I was able to implement it purely off of first principles. Whereas the other approaches, including C++'s, are still, after much reading, not in my mental model.

For one it allows the object you are awaiting on to control the continuation directly.

If you want that for your library, look at the ``waitingOn`` variable for control over scheduling; go for it! Nothing in the DIP currently should stop you from doing that. You could even add support for it as part of instantiation of the coroutine! It's your library code; you can do whatever you want on this front.

You control execution of the coroutine itself, and you can see that this value was set. You can inspect it, and you can call whatever you like. That is what the last example with ``void execute(COState)(GenericCoroutine us, COState* coState) {`` shows. You are fully in control over the coroutine's execution; the language is focused solely on the slicing and dicing of the function into something that a library can then call. The language defines none of this _on purpose_.
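The resumption rule stated above can be written down as a small predicate. The member names used here (`tag`, `haveValue`, and a nullable `waitingOnCoroutine` pointer standing in for the "None" case) are placeholders for whatever the library's task type actually exposes; this is a sketch of the rule, not an API:

```d
// Sketch of the scheduling rule: resume only when not finished and not
// blocked on an unfinished dependency. All member names are placeholders.
bool isCompleteOrHaveValue(Co)(ref Co c)
{
    return c.tag < 0 || c.haveValue;
}

bool readyToResume(Co)(ref Co c)
{
    if (c.tag < 0)
        return false;                    // already completed
    if (c.waitingOnCoroutine is null)
        return true;                     // the "None" case: nothing awaited
    return isCompleteOrHaveValue(*c.waitingOnCoroutine);
}
```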
Jan 24
On Friday, 24 January 2025 at 23:22:19 UTC, Richard (Rikki) Andrew Cattermole wrote:On 25/01/2025 9:56 AM, Sebastiaan Koppe wrote:Well, then the DIP needs to be more explicit that the compiler is merely doing the code transformation, that the created a coroutine frame needs to be driven completely by library code, and that the types that are awaited on are opaque to the compiler and simply passed along to library code.On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:If it doesn't work for you, do it a different way. The language has no inbuilt knowledge of any of these types. It determines everything that it needs from the operator overload and the core.attributes attributes.On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:No, not specifically. I am requesting the DIP to clarify the mechanism by which a scheduler is notified when a coroutine is ready for resumption, not the specific scheduling itself.On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote: You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption. Rust has a Waker, C++ has the await_suspend function, etc.Are you wanting this snippet?The name `GenericCoroutine` suggests there is type erasure, but if the library driving the coroutine can work with the direct types that are awaited on, that would work. As an optimisation possibility it would be good if the coroutine frame could have some storage space for async operations, which would allow us to eliminate some heap allocations. The easiest way to support that is by having the compiler call a predefined function on the object in the await expression (say `getAwaiter`), whose returned object would be stored in the coroutine frame. This offers quite a bit of flexibility for library authors without putting any burden on the user. In the Fiber support in my Sender/Receiver library there is only one single allocation per yield point. Would be good if we can get at least as few allocations.It seems that in your design coroutines are only able to wait for other coroutines. This means you need to model async operations as coroutines in order to suspend on them.It should be coroutines, but I left out the filtering for the type that the ``await`` statement will accept. It'll chuck whatever you want into the sumtype value. Its your job to filter it. If you want to support other types and behaviors go for it!https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/expressions#1298-await-expressionsFrom that link: "The operand of an await_expression is called the task. It represents an asynchronous operation that may or may not be complete at the time the await_expression is evaluated. The purpose of the await operator is to suspend execution of the enclosing async function until the awaited task is complete, and then obtain its outcome." Note that `task` is a way better name than `Future`. And: "The task of an await_expression is required to be awaitable. An expression t is awaitable if one of the following holds: [...] - t has an accessible instance or extension method called GetAwaiter [...] The purpose of the GetAwaiter method is to obtain an awaiter for the task. [...] The purpose of the INotifyCompletion.OnCompleted method is to sign up a “continuation” to the task; i.e., a delegate (of type System.Action) that will be invoked once the task is complete." You see? 
It defines the mechanism by which to resume an awaitable.

In saying all that, I find the dependency approach to be very intuitive, and I was able to implement it purely off of first principles. Whereas the other approaches, including C++'s, are still, after much reading, not in my mental model.

It is hard for me to see if there are any shortcomings at this point. Is there a dmd implementation I could try to integrate with?

As mentioned above, this needs to be made clearer in the DIP. One possible challenge with this flexibility is whether it isn't too flexible. It is not uncommon to have multiple event loops in a program, potentially coming from distinct libraries. Without a common mechanism to resume awaitables from each, it might result in incompatibility galore.

For one it allows the object you are awaiting on to control the continuation directly.

If you want that for your library, look at the ``waitingOn`` variable for control over scheduling; go for it! Nothing in the DIP currently should stop you from doing that. [...] The language defines none of this _on purpose_.
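To show what the suggestion above could look like as ordinary library code, here is a small sketch of a C#-style awaiter hook. Nothing in it is part of the DIP; `getAwaiter`, `isReady`, and `onCompleted` are hypothetical names mirroring C#'s GetAwaiter/OnCompleted shape, and the timer is only hinted at in a comment:

```d
// Hypothetical awaiter protocol, as suggested: the object being awaited
// hands back an awaiter that could live in the coroutine frame.
struct SleepAwaiter
{
    bool isReady() { return false; }             // always suspends once

    // The compiler/runtime would hand the continuation in here; storing it
    // lets the timer subsystem resume the coroutine later.
    void onCompleted(void delegate() continuation)
    {
        // e.g. registerTimer(duration, continuation);  (hypothetical)
        continuation();                           // resume immediately in this toy
    }
}

struct Sleep
{
    // The hook the compiler would call on the operand of `await`; keeping the
    // returned awaiter in the coroutine frame avoids a heap allocation per
    // yield point.
    SleepAwaiter getAwaiter() { return SleepAwaiter(); }
}
```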
Jan 25
On 25/01/2025 11:38 PM, Sebastiaan Koppe wrote:On Friday, 24 January 2025 at 23:22:19 UTC, Richard (Rikki) Andrew Cattermole wrote:It is in there. "The language feature must not require a specific library to be used with it." But, you want it to be stated again some place, will do. I am very happy that we have got this resolved.On 25/01/2025 9:56 AM, Sebastiaan Koppe wrote:Well, then the DIP needs to be more explicit that the compiler is merely doing the code transformation, that the created a coroutine frame needs to be driven completely by library code, and that the types that are awaited on are opaque to the compiler and simply passed along to library code.On Thursday, 23 January 2025 at 23:09:42 UTC, Richard (Rikki) Andrew Cattermole wrote:If it doesn't work for you, do it a different way. The language has no inbuilt knowledge of any of these types. It determines everything that it needs from the operator overload and the core.attributes attributes.On 24/01/2025 10:17 AM, Sebastiaan Koppe wrote:No, not specifically. I am requesting the DIP to clarify the mechanism by which a scheduler is notified when a coroutine is ready for resumption, not the specific scheduling itself.On Thursday, 23 January 2025 at 20:37:59 UTC, Richard (Rikki) Andrew Cattermole wrote: You don't need to describe how scheduling works, just the mechanism by which a scheduler gets notified when a coroutine is ready for resumption. Rust has a Waker, C++ has the await_suspend function, etc.Are you wanting this snippet?My main concern is it'll result in stack memory escaping. We may want to limit that with an attribute, but that is an open problem that isn't going to limit us for the time being.The name `GenericCoroutine` suggests there is type erasure, but if the library driving the coroutine can work with the direct types that are awaited on, that would work. As an optimisation possibility it would be good if the coroutine frame could have some storage space for async operations, which would allow us to eliminate some heap allocations. The easiest way to support that is by having the compiler call a predefined function on the object in the await expression (say `getAwaiter`), whose returned object would be stored in the coroutine frame. This offers quite a bit of flexibility for library authors without putting any burden on the user.It seems that in your design coroutines are only able to wait for other coroutines. This means you need to model async operations as coroutines in order to suspend on them.It should be coroutines, but I left out the filtering for the type that the ``await`` statement will accept. It'll chuck whatever you want into the sumtype value. Its your job to filter it. If you want to support other types and behaviors go for it!In the Fiber support in my Sender/Receiver library there is only one single allocation per yield point. Would be good if we can get at least as few allocations.The best way to handle that is one allocation (at CT) for the descriptor that you can instantiate coroutines form. Then one big allocation for all the different structs involved. Could use a free list to optimize that a bit. Some interesting possibilities here for someone that cares.It can be, I intentionally tried to conflate a promise and a coroutine into a single object. There is a bunch of fairly standard names for this stuff, whatever I picked people would have opinions on and since its my stuff, I can disregard them. 
PhobosV3 would need to be argued about.

https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/

From that link: "The operand of an await_expression is called the task. It represents an asynchronous operation that may or may not be complete at the time the await_expression is evaluated. The purpose of the await operator is to suspend execution of the enclosing async function until the awaited task is complete, and then obtain its outcome." Note that `task` is a way better name than `Future`.

And: "The task of an await_expression is required to be awaitable. An expression t is awaitable if one of the following holds: [...] - t has an accessible instance or extension method called GetAwaiter [...] The purpose of the GetAwaiter method is to obtain an awaiter for the task. [...] The purpose of the INotifyCompletion.OnCompleted method is to sign up a “continuation” to the task; i.e., a delegate (of type System.Action) that will be invoked once the task is complete." You see? It defines the mechanism by which to resume an awaitable.

There is no language implementation currently, only my library (which hasn't made it to branch tables just yet, and I'll wait for language support beforehand). Sadly it is not a priority to implement this year even if it is accepted; stuff like escape analysis is up this year.

In saying all that, I find the dependency approach to be very intuitive, and I was able to implement it purely off of first principles. Whereas the other approaches, including C++'s, are still, after much reading, not in my mental model.

It is hard for me to see if there are any shortcomings at this point. Is there a dmd implementation I could try to integrate with?

I fear the opposite: that any attempted typed vtable to merge implementations is going to have minimal use, and for all intents and purposes they will each be too specialized for their use case to make it worth adding. Consider vibe.d: a lot of projects are derived from it, and they use its abstractions. PhobosV3 is meant to take on the role of a correct event-based library, so the hope is it'll be a root. Most likely there would also be one focused on speed. Do you really want the correct-but-slower design to be used as part of the fast-with-assumptions implementation?

As mentioned above, this needs to be made clearer in the DIP. One possible challenge with this flexibility is whether it isn't too flexible. It is not uncommon to have multiple event loops in a program, potentially coming from distinct libraries. Without a common mechanism to resume awaitables from each, it might result in incompatibility galore.

For one it allows the object you are awaiting on to control the continuation directly.

If you want that for your library, look at the ``waitingOn`` variable for control over scheduling; go for it! Nothing in the DIP currently should stop you from doing that. [...] The language defines none of this _on purpose_.
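On the allocation point, the free list mentioned above (one block per coroutine instance, recycled rather than freed) could be as simple as the sketch below. It is illustrative only, single-threaded, and alignment-naive; the name and shape are not from any existing library:

```d
import core.stdc.stdlib : free, malloc;

// A minimal free list for recycling fixed-size coroutine state allocations.
struct FrameFreeList
{
    private void*[] freed;
    private size_t blockSize;

    this(size_t blockSize) { this.blockSize = blockSize; }

    void* acquire()
    {
        if (freed.length)
        {
            auto p = freed[$ - 1];
            freed = freed[0 .. $ - 1];
            return p;                    // reuse a previously released frame
        }
        return malloc(blockSize);        // otherwise fall back to the allocator
    }

    void release(void* frame) { freed ~= frame; }

    ~this()
    {
        foreach (p; freed)
            free(p);
    }
}
```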
Jan 25
On Saturday, 25 January 2025 at 13:41:24 UTC, Richard (Rikki) Andrew Cattermole wrote:The ``await`` keyword has been used for multithreading longer than I've been alive. To mean what it does. Its also very uncommon and does not see usage in druntime/phobos.So "preventing breaking" is only reserved for phobos then, and any user-written code is fine to break at every moment. I find that a very problematic way when implementing / enhancing a language. "Dont break userspace" comes to mind; we should first and foremost be concerned with users interacting with the feature (which you seem to be concerned with as well), and as such I would'nt want to break all existing asyncronous libraries out there when the new edition rolls around. This makes dlang seem even more broken and "too niche" for people to use as any async library up to this point used in examples, tutorials etc will horrobly break.As it has no meaning outside of a coroutine, it'll be easy to handle I think.Then the DIP should specify it. Either the tokens `await` becomes an hard-keyword, disallowing any identifier usage of it, or it becomes a soft one, where it only acts as a keyword in ` async` contexts and like an normal identifier outside of it. You even needed for it: ``` Inside an async function, await shall not be used as an available_identifier although the verbatim identifier await may be used. There is therefore no syntactic ambiguity between await_expressions and various expressions involving identifiers. Outside of async functions, await acts as a normal identifier. ```Stuff like this is why I added the multiple returns support, even though I do not believe it is needed.Which multiple return support? The DIP states clearly that it is **NOT** supported.Its also a good example of why the language does not define the library, so you have the freedom to do this stuff!Yes, but honestly you do the same: your dependency system define how libraries need to interact with coroutines, the same way waker does. I dont want to argue that wakers dont define a library usage as well, but dependencies to so as well.It is not part of the DIP. Without the operator overload example, it wouldn't be understood.Then do not put it into the DIP. It should **only** contain your design and whats possible with it, without having to rely on possible future DIP's to add some operators to make your DIP actually work.The compiler using just the parse tree can see the function ``opConstructCo`` on the library type ``InstantiableCoroutine``. Allowing it to flag the type as a instantiable coroutine.Again: this description says that the compiler treats `opConstructCo` differently as other functions. What would happen if I want to use another name? What will happen if I have multiple functions with the same signature but different names?See above, it can see that it is a coroutine by the parameter, rather than on the argument.So the argument (lambda) would not be a coroutine and could not use `await` or ` async return`? This seems counter-intuitive, as I clearly can see that code as this will exist: ```d ListenSocket ls = ListenSocket.create((Socket socket) async { auto line = await socket.readLine(); // ... }); ``` therefore the function should be anotated to be `async`; espc. bc you say time and time again it should be useable by users without prior knowlage of the insides of the system. 
Makeing it that functions can only have `await` if they're ` async` but lambdas are whatever they want to be seems like a hughe boobytrap.You don't win a whole lot by requiring it. Especially when they are templates and they look like they should "just work".It makes things clearer for the writer (and future readers), and by extend the compiler as it now certainly knows to slice the lambda as well as this is the intention of the developer.It was heavily discussedWhere exactly? Haven't seen it yet sorry. And even then: these should be part of the DIP under a section "non-goals" or "discarded ideas" so people know that a) they where considered and b) what where the considerations that lead to the decision.See the ``Prime Sieve`` example for one way you can do this.I've seen it, but again: it uses undeclared things that aren't as clear as day if your'e **not** the writer of the DIP. ```d InstantiableCoroutine!(int) ico = \&generate; Future!int ch = ico.makeInstance(); ``` Why does this work? `generate` is an coroutine, but why can it be "just" assigned to an library shell? Does it "just work"? Thats not how programming works or how standards should be written. I **could** see that you ment that an constructor that takes an template parameter with the `__descriptorco` should be used, but again: it is not stated in the DIP and as such should not be taken as "granted" just bc you expect people to come to the conclusion themself. Look at C++ papers, they are **hughe** for a reason: EVERYTHING gets written down so no confusion can happen.The ``await`` statement does two things. 1. It assigns the expression's value into the state variable for waiting on. 2. It yields.Then please for the love of good put it into the DIP! I'm sorry that im so picky about this, but a **specification** (what your DIP is), should contain **every detail of your idea** not only the bits gemini deemed as important. We're humans, and as such we should be espc carefull to give us each other as much information as possible.Whereas the other approaches including C++ is still after much reading not in my mental model.I somewhat start to get a graps of yours, while in your model, you try to just "throw" the awaited-on back to anyone interested in it and use an sumtype to do it, other languages define an stricter interface that need to be followed: c++ with awaiters and rust with it`s `Future<>`s and `Waker`s. Both ways prevent splits in the ecosystem or that only one library gets on top while everything else just dies. Thats what I tbh fear with the current approach: there will be one way to use dependencies and thats it. The problems it have will extend to all async code and an outside viewer will declare async in dlang broken without anyone realising thats just the library thats broken. Take dlang's std.regex for example: it's very slow in comparison with others and you easily could roll your own, but nobody does so everybody just assumes it's a "dlang" problem and moves on. While this has only minimal impact bc it's just regex, with an entire language feature that will be presented through the lens of the most used or most "present" library (not popular! big difference), this will make people say "Hey dlangs async is so bad bc. that and that". I want to prevent such a thing. With an more strict protocol on how things are awaited (c++) or a coroutine can be "retried" / woken up (rust) these problems go away. 
Any executor can rely on the fact that any IO / waiting structure **will** follow the protocol, and as such they're interchangeable, which is a **big** benefit for user and application code, as no one needs to re-invent the whole wheel. Another benefit is also that it (somewhat) helps in ensuring that the coroutine is actually in a good state without the executor needing to know about that state itself.

To help understanding of the two models a bit more, let's take a look at a "typical" flow of a coroutine:

- starts the coroutine
- initiates a `read_all()` of a file
- `await`s the `read_all()` and pauses the coroutine
- gets re-called since the waited-on part is now resolved
- processes the data

In your proposal this works by setting a dependency on the `read_all()`'s return type. If the executor now simply ignores the dependency, it recalls the coroutine and the coroutine is in a bad state, as it does not validate whether the dependency is actually resolved (how would it?). As a result, you would need to put it inside a loop:

```d
ReadDependency r = ...;
while (!r.isReady) {
    await r;
}
```

Which is boilerplate best avoided.

Secondly, the read_all itself: it and the executor would need to agree on an out-of-language protocol for how to actually handle the dependency; this will most likely mean that a library would expose an interface like `Awaitable` that any dependency would need to implement, but with the downside that any dependent now has an explicit dependency on said library. Sure, maybe over time a standard set of interfaces would arise that the community would adopt, but then we have Java's API-dependency hell just re-invented.

In C++ the `co_await` dictates that the coroutine is blocked for as long as the `Awaiter` protocol says it is, since any user **expects** that the `await`ed thing is actually resolved after it's `await`ed. It doesn't matter whether successfully or not; the key point is that it's **not pending** anymore.

In Rust it's even simpler: polling is a concept that even kids understand: when you want your parents to give you something, you "poll" until they give it to you or tell you no in a way that keeps you from continuing what you originally wanted to do. Same thing in Rust: a coroutine is "polled" by the executor and can either resolve with the data you expected, or tell you that it's still waiting and to come back later. The compiler ensures that only ever a ready state is allowed to continue the coroutine. If you want to be more performant and not spin-lock in the executor in the hope that someday the future will resolve, you can give it a waker and say: "hey, if you say you are still not done, I will do other things; if you think you're ready for me to try again, just call this and I will come to you!".
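For comparison, the Rust-style protocol described here, written out as plain D. All of these names (`Poll`, `PollState`, `Pollable`, `Waker`, `blockOn`) are hypothetical; the point is only that the executor and the awaited thing agree on a single `poll` entry point plus a wake callback:

```d
// Hypothetical poll/waker protocol, mirroring the Rust model described above.
enum PollState { pending, ready }

struct Poll(T)
{
    PollState state;
    T value;                    // only meaningful when state == ready
}

struct Waker
{
    void delegate() wake;       // "I may be able to make progress, poll me again"
}

// Anything awaitable implements exactly one entry point.
interface Pollable(T)
{
    Poll!T poll(Waker waker);
}

// A toy executor that drives a single pollable to completion. A real one
// would park the task and rely on the waker instead of spinning.
T blockOn(T)(Pollable!T f)
{
    bool woken = true;
    auto waker = Waker(() { woken = true; });
    for (;;)
    {
        if (!woken)
            continue;           // a real executor would sleep/park here
        woken = false;
        auto r = f.poll(waker);
        if (r.state == PollState.ready)
            return r.value;
    }
}
```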
Jan 25
On 26/01/2025 5:44 AM, Mai Lapyst wrote:On Saturday, 25 January 2025 at 13:41:24 UTC, Richard (Rikki) Andrew Cattermole wrote:The ``await`` statement only works in a coroutine, it should not break anything. Its entirely new code that it applies to. Old code that uses that identifier won't be compatible with the new eventloop anyway, and probably won't be desirable to call (i.e. blocking where you don't want it to block). We have strict rules these days on breaking code, which is to not do it. The breaking changes and deprecations section reflects this. I have no intention on breaking anything in this proposal as it isn't needed.The ``await`` keyword has been used for multithreading longer than I've been alive. To mean what it does. Its also very uncommon and does not see usage in druntime/phobos.So "preventing breaking" is only reserved for phobos then, and any user- written code is fine to break at every moment. I find that a very problematic way when implementing / enhancing a language. "Dont break userspace" comes to mind; we should first and foremost be concerned with users interacting with the feature (which you seem to be concerned with as well), and as such I would'nt want to break all existing asyncronous libraries out there when the new edition rolls around. This makes dlang seem even more broken and "too niche" for people to use as any async library up to this point used in examples, tutorials etc will horrobly break.It depends. If we get editions, then it can be a keyword in a new edition, but not in an old one. If we don't get it, it can be a soft keyword where it only applies in context of a coroutine. Whatever is picked, it will be tuned towards "non-breaking".As it has no meaning outside of a coroutine, it'll be easy to handle I think.Then the DIP should specify it. Either the tokens `await` becomes an hard-keyword, disallowing any identifier usage of it, or it becomes a soft one, where it only acts as a keyword in ` async` contexts and like that has the (somewhat) exact wording needed for it: ``` Inside an async function, await shall not be used as an available_identifier although the verbatim identifier await may be used. There is therefore no syntactic ambiguity between await_expressions and various expressions involving identifiers. Outside of async functions, await acts as a normal identifier. ```Adding "This is a return that does not complete the coroutine, to enable multiple value returns." to make it very explicit that this is what it is offering.Stuff like this is why I added the multiple returns support, even though I do not believe it is needed.Which multiple return support? The DIP states clearly that it is **NOT** supported.This isn't what I am meaning. The DIP only defines the language transformation, you are responsible for how it gets called, and what can be waited upon ext. I.e. if you don't support ``await`` statements, you can static assert out if they are used. ```d __generatedName generatedFromCompilerStateStruct = ...; ... static assert(co.WaitingON.__tags.length == 1); ``` Or something akin to it. It could return a waker, socket or anything else. You control what can be waited upon. The language isn't filtering it.Its also a good example of why the language does not define the library, so you have the freedom to do this stuff!Yes, but honestly you do the same: your dependency system define how libraries need to interact with coroutines, the same way waker does. 
I dont want to argue that wakers dont define a library usage as well, but dependencies to so as well.The operator overload ``opConstructCo`` is part of the DIP. Therefore there are examples for it. But the library types such as ``GenericCoroutine``, ``InstantiableCoroutine``, and ``Future`` are what isn't part of the DIP and they are needed to show how the language feature can be used.It is not part of the DIP. Without the operator overload example, it wouldn't be understood.Then do not put it into the DIP. It should **only** contain your design and whats possible with it, without having to rely on possible future DIP's to add some operators to make your DIP actually work.It is an operator overload, like any other. You use what the language specifies end of. It has the ``op`` prefix, which is established for use by operator overload methods.The compiler using just the parse tree can see the function ``opConstructCo`` on the library type ``InstantiableCoroutine``. Allowing it to flag the type as a instantiable coroutine.Again: this description says that the compiler treats `opConstructCo` differently as other functions. What would happen if I want to use another name? What will happen if I have multiple functions with the same signature but different names?Almost got to a good example on this, the ``await`` is a statement not an expression. It'll be easier to transform into the state machine. ```d ListenSocket ls = ListenSocket.create((Socket socket) async { auto line = socket.readLine(); await line; // ... }); ``` Lambdas if you do not specify types in the parameter lists, are actually templates. It is explicitly required in this case that it'll take the `` async`` attribute from the parameter on ``create`` based upon the parameter type. Which does imply that we cannot limit ``await`` statements and `` async`` returns during parsing. Which shouldn't be a problem due to the whitespace. ``await ...;`` not ``await;`` and there are no attributes on statements currently (but there are for declarations).See above, it can see that it is a coroutine by the parameter, rather than on the argument.So the argument (lambda) would not be a coroutine and could not use `await` or ` async return`? This seems counter-intuitive, as I clearly can see that code as this will exist: ```d ListenSocket ls = ListenSocket.create((Socket socket) async { auto line = await socket.readLine(); // ... }); ```therefore the function should be anotated to be `async`; espc. bc you say time and time again it should be useable by users without prior knowlage of the insides of the system. Makeing it that functions can only have `await` if they're ` async` but lambdas are whatever they want to be seems like a hughe boobytrap.We infer attributes on templates. I see no difference here. Not doing it here, seems like it would create more surprises then not.You don't win a whole lot by requiring it. Especially when they are templates and they look like they should "just work".It makes things clearer for the writer (and future readers), and by extend the compiler as it now certainly knows to slice the lambda as well as this is the intention of the developer.This is a trust me, adding such a section is non-helpful. It ends up derailing things for the D community.It was heavily discussedWhere exactly? Haven't seen it yet sorry. 
And even then: these should be part of the DIP under a section "non-goals" or "discarded ideas", so people know that a) they were considered and b) what the considerations were that led to the decision.

"Given the following _potential_ shell of a library struct that is used for the purpose of examples only:" Added the clarification at the end that it is only used for example, but it was stated as part of ``Constructing Library Representation``. See the ``Prime Sieve`` example for one way you can do this.

I've seen it, but again: it uses undeclared things that aren't as clear as day if you're **not** the writer of the DIP.

```d
InstantiableCoroutine!(int) ico = &generate;
Future!int ch = ico.makeInstance();
```

Why does this work? `generate` is a coroutine, but why can it be "just" assigned to a library shell? Does it "just work"? That's not how programming works or how standards should be written. I **could** see that you meant that a constructor that takes a template parameter with the `__descriptorco` should be used, but again: it is not stated in the DIP and as such should not be taken for granted just because you expect people to come to the conclusion themselves. Look at C++ papers, they are **huge** for a reason: EVERYTHING gets written down so no confusion can happen.

This is described in ``Constructing Library Representation``. The relevant lowering is:

```d
// The location of this struct is irrelevant, as long as compile time accessible things remain available
struct __generatedName {
}

InstantiableCoroutine!(int, int) co = InstantiableCoroutine!(int, int)
    .opConstructCo!__generatedName;
```

Gemini is a test to see how well it could be understood prior to humans having to review it. If it cannot pass that, it cannot pass a human. Hmm, ``Yielding`` does cover the tag side of things, but not the variable assignment in the state. ``// If we yield on a coroutine, it'll be stored here`` It was indeed added to the generated state struct, just not at the yielding side of it. Also added to exceptions too.

The ``await`` statement does two things. 1. It assigns the expression's value into the state variable for waiting on. 2. It yields.

Then please, for the love of god, put it into the DIP! I'm sorry that I'm so picky about this, but a **specification** (which is what your DIP is) should contain **every detail of your idea**, not only the bits Gemini deemed important. We're humans, and as such we should be especially careful to give each other as much information as possible.

Talking about regex engines... guess what I've been writing over the last two months :) And no, I cannot confirm that it is easy, especially with the Unicode stuff.

Other languages define the library stuff and directly tie it into the language lowering. This proposal does not do that. It is purely the transformation. How you design the library is on the library author, not the language! One of the lessons we have learned about tying the language to a specific library is that it tends to err on the side of not working for everyone. D classes are a great example of this, forcing you to use the root class ``Object``, and hitting issues with attributes, the monitor, etc.
I don't intend for us to make the same mistake here, especially on a subject where people have such different views on how it should work.

Whereas the other approaches, including C++'s, are still, after much reading, not in my mental model.

I somewhat start to get a grasp of yours; while in your model you try to just "throw" the awaited-on back to anyone interested in it and use a sumtype to do it, other languages define a stricter interface that needs to be followed: C++ with awaiters and Rust with its `Future<>`s and `Waker`s. Both ways prevent splits in the ecosystem, or that only one library gets on top while everything else just dies. That's what I honestly fear with the current approach: there will be one way to use dependencies and that's it. The problems it has will extend to all async code, and an outside viewer will declare async in dlang broken without anyone realising that it's just the library that's broken. Take dlang's std.regex for example: it's very slow in comparison with others and you could easily roll your own, but nobody does, so everybody just assumes it's a "dlang" problem and moves on. While this has only minimal impact because it's just regex, with an entire language feature that will be presented through the lens of the most used or most "present" library (not popular! big difference), this will make people say "Hey, dlang's async is so bad because of this and that". I want to prevent such a thing.

With a stricter protocol on how things are awaited (C++) or how a coroutine can be "retried" / woken up (Rust), these problems go away. Any executor can rely on the fact that any IO / waiting structure **will** follow the protocol, and as such they're interchangeable, which is a **big** benefit for user and application code, as no one needs to reinvent the whole wheel.

So do it that way. Neither I nor the language will stop you!

Another benefit is also that it (somewhat) helps in ensuring that the coroutine is actually in a good state without the executor needing to know about that state itself. To help understand the two models a bit more, let's take a look at a "typical" flow of a coroutine:

- start the coroutine
- initiate a `read_all()` of a file
- `await` the `read_all()` and pause the coroutine
- get re-called since the waited-on part is now resolved
- process the data

In your proposal this works by setting a dependency on `read_all()`'s return type. If the executor now simply ignores the dependency and re-calls the coroutine, the coroutine is in a bad state, as it does not validate whether the dependency is actually resolved (how would it?). As a result, you would need to put it inside a loop:

Sounds like a bug, if it allows you to ``await`` and not actually respect it.

```d
ReadDependency r = ...;

while (!r.isReady) {
    await r;
}
```

Which is boilerplate best avoided.

Agreed. I do not like this waker design. It seems highly inefficient. I prefer the dependency design, as you will only be executed if you have what you need to make progress. But if you, the library author, want to do it differently, all I can say is go for it!

Secondly, the read_all itself: it and the executor would need to agree on an out-of-language protocol on how to actually handle the dependency; this will most likely mean that a library exposes an interface like `Awaitable` that any dependency would need to implement, but with the downside that any dependent now has an explicit dependency on said library.
Sure, maybe over time a standard set of interfaces would arise that the community would adopt, but then we have the API-dependency hell of Java just reinvented.

That is correct; the language-level transformation that this DIP proposes does not deal with this library stuff. The usage in examples is just that: example code to show it can be utilized. If I were to propose a specific approach to this, I would have people complaining that it doesn't work the way that they want it to, and for good reason. My library uses ``GenericCoroutine`` and ``Future`` to do all of this, with the help of what I call future completion, which is a ``Future`` in API but isn't actually a coroutine. That is how my socket reads return. https://github.com/Project-Sidero/eventloop/blob/master/source/sidero/eventloop/coroutine/future_completion.d#L216

In C++, `co_await` dictates that the coroutine is blocked for as long as the `Awaiter` protocol says it is, since any user **expects** that the `await`ed thing is actually resolved after it's `await`ed. It doesn't matter whether successfully or not; the key point is that it's **not pending** anymore.

Yeah, the way I view it is that a coroutine has to be complete (error, have a value, etc.), or have a value before continuation occurs (multiple return). But the language transformation isn't responsible for guaranteeing it, although I would recommend it.

In Rust it's even simpler: polling is a concept that even kids understand: when you want your parents to give you something, you "poll" until they give it to you or tell you no in a way that keeps you from continuing what you originally wanted to do. Same thing in Rust: a coroutine is "polled" by the executor and can either resolve with the data you expected, or tell you that it's still waiting and to come back later. The compiler ensures that only ever a ready state is allowed to continue the coroutine. If you want to be more performant and not spin-lock in the executor in the hope that someday the future will resolve, you can give it a waker and say: "hey, if you say you are still not done, I will do other things; if you think you're ready for me to try again, just call this and I will come to you!".

Yes, that is a kind of dependency approach, but it is done by means other than how I do it. The DIP, as far as I know (and I've done some minimal exploration in this thread), should work for this, since the language knows nothing about how your scheduler works.
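To make the dependency model being discussed here concrete, below is a minimal sketch, under assumptions, of an executor that only resumes a coroutine once the thing it awaited on reports completion. None of these names (`Dependency`, `Task`, `Scheduler`) come from the DIP or from sidero/eventloop; they are invented for illustration only.

```d
// Hypothetical dependency interface: anything a coroutine can wait on.
interface Dependency
{
    bool isComplete();
}

// Hypothetical handle to a suspended coroutine.
struct Task
{
    void delegate() resume; // resumes the coroutine to its next yield point
    Dependency waitingOn;   // null when the task is immediately runnable
}

struct Scheduler
{
    Task[] blocked;
    Task[] ready;

    void tick()
    {
        Task[] stillBlocked;

        foreach (task; blocked)
        {
            // Respect the recorded dependency: a task only becomes eligible
            // to run again once what it awaited has actually resolved.
            if (task.waitingOn is null || task.waitingOn.isComplete())
                ready ~= task;
            else
                stillBlocked ~= task;
        }

        blocked = stillBlocked;

        foreach (task; ready)
            task.resume();
        ready.length = 0;
    }
}
```

With an executor shaped like this, the `while (!r.isReady) { await r; }` loop shown earlier becomes unnecessary, because a coroutine is never resumed while its dependency is still unresolved.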
Jan 25
On Thursday, 12 December 2024 at 10:36:50 UTC, Richard (Rikki) Andrew Cattermole wrote:Stackless coroutinesI might want to say, the term confused me quite a while. That’s because the coroutine does have a stack (its own stack). I thought it would somehow not have one, since it’s called “stackless,” but it just means its stack isn’t the caller’s stack. That fact was kind of obvious to me, since that’s what “coroutine” meant to me already. In my head I don’t see how a coroutine could even work otherwise. Maybe it’s a good idea to call the proposal “Coroutines” and omit “stackless.”
Jan 23
On 24/01/2025 5:33 AM, Quirin Schroll wrote:

On Thursday, 12 December 2024 at 10:36:50 UTC, Richard (Rikki) Andrew Cattermole wrote:

The term is correct. A stackless coroutine uses the thread's stack, except for variables that cross a yield point in its function body; these get extracted onto the heap. A stackful coroutine uses its own stack, not the thread's. The latter is otherwise known in D as a fiber.

Over the last 20 years stackful coroutines have seen limited use, but stackless has only grown in implementations, if for no other reason than thread safety. Hence the association. But the word itself could mean either, which is why the DIP has to clarify which it is, although the spec may not add it.

Stackless coroutinesI might want to say, the term confused me quite a while. That’s because the coroutine does have a stack (its own stack). I thought it would somehow not have one, since it’s called “stackless,” but it just means its stack isn’t the caller’s stack. That fact was kind of obvious to me, since that’s what “coroutine” meant to me already. In my head I don’t see how a coroutine could even work otherwise. Maybe it’s a good idea to call the proposal “Coroutines” and omit “stackless.”
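As a minimal hand-written illustration of the stackless description above (this is not the DIP's actual lowering; `CountUpState` and `resume` are made-up names), only the variable that crosses the yield point lives in the state object, while everything else can stay on the running thread's stack:

```d
struct CountUpState
{
    int i;      // crosses the yield point, so it lives in the state object
    int stage;  // which resume point to continue from
}

// Hand-written equivalent of a coroutine body like: foreach (i; 0 .. 3) yield i;
bool resume(ref CountUpState state, out int value)
{
    switch (state.stage)
    {
        case 0:
            state.i = 0;
            state.stage = 1;
            goto case;
        case 1:
            if (state.i >= 3)
                return false;   // coroutine is complete
            value = state.i++;  // the yielded value
            return true;        // suspended; call resume again for the next value
        default:
            assert(0);
    }
}
```

The state object can be heap allocated and resumed from any thread, which is where the thread-safety advantage over a fiber's dedicated stack comes from.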
Jan 23
Perma: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4/7bba547fb6ea09deb2f0cfda2d852c409ace0142

I won't do another round. The functionality hasn't changed, but there have been clarifications to how I describe some things, as people have requested, along with a new abstract. I intend to put it into the queue before the next monthly meeting, so roughly a week away.
Jan 31
On Friday, 31 January 2025 at 16:12:43 UTC, Richard (Rikki) Andrew Cattermole wrote:

Perma: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4/7bba547fb6ea09deb2f0cfda2d852c409ace0142

There's still no writing about `opConstructCo`. Maybe a bit of background will help you. I'm in the process of writing my own frontend for the dlang language, and as a sudden implementor of the language I find myself in a unique spot for interacting with such feature requests.

Like I said: you explained that `opConstructCo` is the only method / function that should be called to construct coroutines by the AST lowering, and even called it a "new operator", but you still failed to include it in the section for changes to the language documentation. While it's correct that it's not needed for the grammar section, it is described in a text which starts with "a *potential* shell", telling me as an implementor that this is **not** required while in fact it is; just like you said: it's a new operator, so it should be clearly marked as such. Maybe in a new section after the grammar changes, or wherever, as long as it's made more clear that it is required, as you yourself said:

It is not part of the DIP. Without the operator overload example, it wouldn't be understood. .... The operator overload ``opConstructCo`` is part of the DIP. Therefore there are examples for it.

So it is part of the DIP! Please state it so without adding "potential" or "purpose of examples only" before the **only occurrence** of it inside the whole document. Maybe a change like this:

```
...
Implementors also need to be aware of the new `opConstructCo` operator, which is used as a way to morph coroutine objects into library-understandable types.
```

Or something similar.

D classes are a great example of this, forcing you to use the root class ``Object``, and hitting issues with attributes, the monitor, etc.

I understand the sentiment behind it, and I agree that forcing the wrong things can be unproductive. But then again, it's a language; even requiring the spelling of an attribute is forcing the hand of programmers, so one or more types in `core` will not be much of a difference. I only fear that introducing such a complicated technique will lead to either more fragmentation of the community and/or vendor lock-in for the only library that will arise out of this, leading to more problems like the root class ``Object``, which everyone finds unfortunate but nobody is willing enough to go head-to-head with said library to make things better.

It's also a concern for people new to the language; it's hard enough as it is to get into dlang with its many pitfalls, not only that classes are GC'd, but also things like postblitting, which works completely differently than in any other language and makes seemingly locally created instances suddenly globally shared between the parent instances. My fear is that introducing a hard-to-understand way of using asynchronous functions will lead to incompatible libraries that throw errors nobody quite understands, especially for beginners, and will drive them out of the room before they're even halfway in.

But it remains to be seen what the future holds. Like you said, it's at least able to morph into the other provided solutions, so for a start any knowledgeable enough person can write their abstraction on top of it to get started using dlang's coroutines, and we'll see how it all plays out.
Feb 03
On 04/02/2025 10:43 AM, Mai Lapyst wrote:

On Friday, 31 January 2025 at 16:12:43 UTC, Richard (Rikki) Andrew Cattermole wrote:

"In the following example, a new operator overload, the ``opConstructCo`` static method, is used in an example definition of a library type that represents a coroutine. It is later used in the construction of the library type from the language representation of it."

Is that better? A link to your frontend would be appreciated, I'd like to see if you've done UAX31/C23 identifiers (yet).

Perma: https://gist.github.com/rikkimax/fe2578e1dfbf66346201fd191db4bdd4/7bba547fb6ea09deb2f0cfda2d852c409ace0142

There's still no writing about `opConstructCo`. Maybe a bit of background will help you. I'm in the process of writing my own frontend for the dlang language, and as a sudden implementor of the language I find myself in a unique spot for interacting with such feature requests.

Like I said: you explained that `opConstructCo` is the only method / function that should be called to construct coroutines by the AST lowering, and even called it a "new operator", but you still failed to include it in the section for changes to the language documentation. While it's correct that it's not needed for the grammar section, it is described in a text which starts with "a *potential* shell", telling me as an implementor that this is **not** required while in fact it is; just like you said: it's a new operator, so it should be clearly marked as such. Maybe in a new section after the grammar changes, or wherever, as long as it's made more clear that it is required, as you yourself said:

It's not in the grammar section because operator overloads are not here: https://dlang.org/spec/grammar.html

See above.

It is not part of the DIP. Without the operator overload example, it wouldn't be understood. .... The operator overload ``opConstructCo`` is part of the DIP. Therefore there are examples for it.

So it is part of the DIP! Please state it so without adding "potential" or "purpose of examples only" before the **only occurrence** of it inside the whole document. Maybe a change like this:

```
...
Implementors also need to be aware of the new `opConstructCo` operator, which is used as a way to morph coroutine objects into library-understandable types.
```

Or something similar.
Feb 03
On 04/02/2025 12:42 PM, Richard (Rikki) Andrew Cattermole wrote:

Like I said: you explained that `opConstructCo` is the only method / function that should be called to construct coroutines by the AST lowering, and even called it a "new operator", but you still failed to include it in the section for changes to the language documentation. While it's correct that it's not needed for the grammar section, it is described in a text which starts with "a *potential* shell", telling me as an implementor that this is *not* required while in fact it is; just like you said: it's a new operator, so it should be clearly marked as such. Maybe in a new section after the grammar changes, or wherever, as long as it's made more clear that it is required, as you yourself said:

It's not in the grammar section because operator overloads are not here: https://dlang.org/spec/grammar.html

Okay, I changed my mind.

"In addition to the syntax changes there is a new operator overload ``opConstructCo``, which is a static method. This will flag the type it is within as an instantiable library coroutine type."
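For readers wanting to see the rough shape of that operator, here is a short sketch; the type name follows the DIP's earlier examples, but the signature and body are assumptions for illustration, not normative wording from the DIP.

```d
// Sketch only: a library coroutine type opts in by declaring the static
// operator overload opConstructCo, which the lowering instantiates with the
// compiler-generated state struct (``__generatedName`` in the DIP's example).
struct InstantiableCoroutine(Result, Args...)
{
    static InstantiableCoroutine opConstructCo(CoroutineDescriptor)()
    {
        InstantiableCoroutine ret;
        // ... record state size, resume entry points, etc. from CoroutineDescriptor ...
        return ret;
    }
}
```

The lowering quoted earlier in the thread would then read as ``InstantiableCoroutine!(int, int).opConstructCo!__generatedName``, with the presence of ``opConstructCo`` being what flags the type as an instantiable library coroutine type.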
Feb 03
On Tuesday, 4 February 2025 at 02:30:49 UTC, Richard (Rikki) Andrew Cattermole wrote:

A link to your frontend would be appreciated, I'd like to see if you've done UAX31/C23 identifiers (yet).

I hadn't added UAX31 since I used DLang's grammar specification to implement the lexer and it didn't contain them at the time of writing (and, as it seems at a quick glance, still doesn't). But it's not that big of a deal, although I have to thank you because it revealed a slight problem in creating code position data when using UTF-8 code points. Also, I didn't send a link because I didn't know if anyone was interested, but here ya [go](https://codearq.net/mdc/mdc).

"In addition to the syntax changes there is a new operator overload ``opConstructCo``, which is a static method. This will flag the type it is within as an instantiable library coroutine type."

That sounds awesome! Thank you!
Feb 04