digitalmars.D.announce - D Language Foundation September 2024 Monthly Meeting Summary
- Mike Parker, Jan 05
The D Language Foundation's monthly meeting is normally held on the second Friday of each month, but with some of us traveling before DConf, we agreed to have the September meeting on the 6th. It lasted about an hour and a half.

The following people attended:

* Walter Bright
* Rikki Cattermole
* Jonathan M. Davis
* Timon Gehr
* Martin Kinkelin
* Dennis Korpel
* Mathias Lang
* Átila Neves
* Razvan Nitu
* Mike Parker
* Robert Schadek
* Steven Schveighoffer
* Adam Wilson

Walter said nobody had wanted to merge the PR for the DMD AArch64 backend. As such, he'd continued to add changes and it had grown enormous and unreviewable, something he'd wanted to avoid. The size just discouraged people from reviewing it, and no one had shown any interest in doing so. He asked if we could just get it merged. He thought merging wouldn't be a problem except for some heisenbugs in the test suite that baffled him.

Rikki said that as long as it didn't affect the other target triples, we shouldn't be blocking it.

Dennis said we had agreed to merge it in a previous meeting, but he had understood that it wouldn't be in the changelog. He had asked about that but didn't get an answer. That's where it stalled.

Walter didn't see how we could have an 8,000-line PR without a changelog entry. People would see it and wonder what the hell it was.

Dennis said people wouldn't see the source code difference when upgrading DMD.

Walter countered that developers looking at the PRs would see this massive one with no changelog.

Dennis said the changelog entry could be in the PR description. It didn't have to be in the public changelog. They were developer comments, not user comments.

Walter said those were the same thing.

Mathias said he'd looked at the PR last week. In the history, Dennis had been ready to merge it if the changelog entry was removed. Mathias thought that was a simple thing to do. Then when it was released, we could add the full documentation and the changelog entry.
If Walter's interest was just in moving forward, removing the changelog entry was the way.

Walter didn't see how that made any sense. We had a procedure: change the code, and add a changelog entry. How could we merge an 8,000-line PR without one? The changelog entry explicitly said that it was a feature in development. He didn't understand the issue with that. There was no documentation anywhere else.

Jonathan said the changelog was on the website to show people what had changed in a release. It had nothing to do with what was currently in development. Anyone looking at it from a development standpoint could look at the commit message. The changelog was for end users.

Dennis showed us a comment from the PR thread in which someone had complained they didn't want to see the changelog littered with unfinished features; that's what a TODO list or issue tracker was for.

I said I'd always seen the changelog as an end-user document, not a developer document.

Walter said our policy had always been that if there was a significant change, it should have a changelog entry.

Jonathan said that was for things end users would see. In this case, they wouldn't see it at all. It would be behind an undocumented switch.

Walter said he was clearly alone in his position, so he'd remove the changelog entry.

(__UPDATE:__ Walter made the requested change and the PR was merged.)

Rikki said that Hipreme needed to get information from the compiler to optimize builds in redub. We had the `-deps` switch, but it gave more information than was needed and wasn't in an easily consumable format like JSON. Then there was the question of what kind of metadata we wanted in it. For example, `pragma(lib)` didn't necessarily survive multiple invocations of the compiler and the linker. Although this was mostly solved, and we could do PRs for `pragma(lib)`, JSON, and whatever we wanted, we needed to also think about telling build managers about the compiler configuration. Where was the config file?
What triples were supported? Where were the static and shared libraries? What were the default linker flags? And so on. He thought it would be nice to come up with the requirements for the extra information and add it to `-deps`.

Átila said that, having written a build system for D, he wondered why any of that was required.

Rikki asked what would happen if you wanted to link with GCC instead of LDC. In that case, you wouldn't know where the static and shared libraries were. Were any linker flags used?

Átila didn't know why we needed that information from the compiler given that we could do that right now.

Rikki said it was because we had to make assumptions. The libraries, for example, could be anywhere on the system.

Martin said LDC had that problem. They could link LDC itself via the C++ compiler. They figured out the location of DRuntime and Phobos for all of the supported host compilers by running a dummy link process with `-v` and parsing the linker command-line flags. It was ugly, but as far as he knew this was the status quo for all of the existing compilers, at least for systems programming languages, where the build system had to come up with profiles. Like CMake, for example. Dummy compiles made it very slow to check whether the compiler supported a specific command-line flag. Maybe some things could be extended there, but he was very skeptical that we needed it. His main use case for a build system was using one that wasn't tied to a specific language. If his build system had to support C and C++ anyway, because he wanted to link his D code to C and C++ code, it needed to care about all the C and C++ compiler ugliness, too. He thought implementing it would be a low return on investment.

Átila agreed.

Rikki said that right now it printed out a subset of the AST. If you wanted to look at, e.g., imports, it didn't see function bodies or go into stuff like that. It wasn't a complete solution.
We had subsets of the problem solved, but not a nice user experience, or a good story to tell people.

Martin said that was build-system internals. He wouldn't say users were interested in such information, only build systems. Existing build systems already had hacks for it from dealing with C++ compilers.

Rikki agreed, but said you called the C++ compiler to do the link, and you couldn't necessarily do that with D.

Martin said, to give an idea, they couldn't use the ld linker directly. They had to use the C compiler because it knew where the C runtime libraries were located: the start object files, termination object files, all of that. It was easy on Windows, not on POSIX. We already couldn't call the linker directly, so Rikki's comparison in D with DRuntime and Phobos didn't hold, because we needed those additionally on top of all the C stuff.

Rikki said that still left finding runtime sources, IDE installations, or just having it auto-configure. He added that DUB's platform probe wasn't the fastest thing in the world either.

Átila said we'd be better off optimizing that than going the other way. We already had IDEs getting DUB information. He'd written this stuff already, and most people were using DUB. If he opened a file in a DUB project, things worked. The import path was set, and he didn't have to do anything.

Rikki said there were a lot of assumptions in place with the IntelliJ plugin. It had hardcoded source paths for different distros.

Átila said that wasn't good.

Rikki said it could be anywhere on the file system, and it changed based on the distro and installation method. It would be nice to have a reliable method for it.

Martin agreed, but again questioned the return on investment. He thought it would be extremely, extremely, extremely low.

Dennis said this was a tooling feature, not a language feature. In that light, we had discussed before that the JSON output was designed to be useful.
If there were uses for more information, and it would make toolmakers happy getting it through the compiler even if there were other ways to get it, then he didn't see a problem with it. The worst-case scenario was that they ended up not needing it.

Átila said that as one of those people, he'd gotten everything he wanted when `make deps` was fixed.

Martin said as an example that the compiler had no idea where the DRuntime and Phobos files were. The only thing done on that front was an implicit default `-i` in the LDC config file. But the user could override that with a custom subset of the DRuntime and Phobos files and the compiler wouldn't know. Not even the directory where the libraries were taken from. Depending on the command, you could specify multiple libraries, 32-bit and 64-bit, and the linker would use the correct directory, all stuff the compiler didn't know about.

Dennis pointed out that Rikki had mentioned `pragma(lib)`. So if someone had `pragma(lib, "gdi32.lib")` to make their code compile on Windows, it would only work if the compiler invoked the linker.

Martin said that was not correct. He wasn't sure about DMD, but he thought it was the same as LDC. Linker comments could be embedded in object files. So in this case, it was baked into the object file. As soon as the linker saw the linker directive, `gdi32.lib` would be pulled into the link. The compiler didn't need to do anything. That's how it worked on Windows, but on POSIX it only worked with LD. It wasn't standardized.

Rikki said it sounded like it wasn't something we wanted to spend time on, but it could be left as an enhancement request that anyone willing to work on could pick up. No one objected.

I reported that the SAOC judges had accepted three projects this year and that I had notified the successful applicants.
We had one project to replace libdparse in dscanner with DMD-as-a-library, one for Razvan's ongoing project to separate semantic routines from AST nodes in the compiler, and one to improve D error messages.

Adam said that a discussion about some technical details of the Phobos 3 design had come up on Discord yesterday. Someone had suggested getting rid of `@nogc`. That brought up a broader question about what Phobos 3 should be. Some people wanted what we might call a systems standard library: a few building blocks that they might use to implement emitting stuff to a console or a file and not much else, where we'd want `@nogc` and `nothrow`. Other people wanted more of an applications programming library where we used the GC and allowed throwing. He said this had been the perennial debate about Phobos, and he was trying to figure out what to do.

Exceptions were one case that had been discussed. He said Paul Backus had an idea for a layered approach and had used the example of `std.conv.to`. We could make a `@nogc nothrow` version that did no allocation and returned an optional type, then we'd put a throwing, allocating wrapper around it. It would read the optional type and then allocate and throw in the case of failure. That was all that `std.conv.to` did, so that was fine here. But there would be places where Adam didn't think we could have two layers like that. We needed to answer the big-picture question: were we going to be a systems library or an application programming library? That had been vexing him since he started on this project.

I pointed out that this was Tango vs. Phobos all over again, so we definitely needed a solution for this.

Átila verified with Adam and Jonathan that exceptions were not the only allocations in Phobos. He said we should handle other allocations differently. We should try to do what Walter did when he eliminated some of the allocations from Phobos.
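The layered approach Paul Backus described might look something like this sketch. The names `tryParseInt` and `parseInt` are hypothetical stand-ins, not the actual Phobos 3 API: a `@nogc nothrow` core reports failure through an optional, and a thin wrapper converts that into an allocated exception.

```d
import std.typecons : Nullable, nullable;

// Low-level layer: @nogc nothrow, reports failure via an optional.
Nullable!int tryParseInt(const(char)[] s) @nogc nothrow @safe
{
    if (s.length == 0) return Nullable!int.init;
    int result;
    foreach (c; s)
    {
        if (c < '0' || c > '9') return Nullable!int.init;
        result = result * 10 + (c - '0');
    }
    return nullable(result);
}

// High-level wrapper: allocates and throws only on failure.
int parseInt(const(char)[] s) @safe
{
    auto r = tryParseInt(s);
    if (r.isNull) throw new Exception("not an integer: " ~ s.idup);
    return r.get;
}

unittest
{
    assert(parseInt("42") == 42);
    assert(tryParseInt("4x").isNull);
}
```

The wrapper is the "stupid simple" surface; callers who care about allocations reach for the `try*` layer directly.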
There was no reason to allocate unless it was absolutely needed, especially when the user could do it, like when using ranges and adding `.array` at the end. He thought the guiding philosophy for allocations should be "don't". Let the user choose when to do it. He gave the example of lists in Python. People allocated those a lot, but often they didn't really need to because they weren't storing them anywhere, just using them somewhere else in the pipeline.

Walter said that Phobos shouldn't be allocating memory where it didn't have to, a point he'd made in his DConf '23 talk. In the cases where allocation couldn't be avoided, we should just accept a user allocator. Then the perennial debate of GC or no GC, throw or don't throw, would be a user decision. We were never going to resolve the question of the correct way to allocate memory. The only solution was to let the user decide. There were several options for how a user could handle an error. Did they want to print an error message? Did they want to throw? That should be their decision, not the library's. Letting the user decide how to allocate and how to handle errors would make Phobos a more flexible library. That might not be possible in all cases, but he thought that between all of us we could come up with a way to allow the user to do it in an attractive manner. He'd switched to this model in his own programming and found it very nice. For example, his ARM disassembler didn't call `printf` internally for output. It put all the output in an output range and the caller decided what to do with it. Generic functions in Phobos should be doing the same. The compiler was now doing it, running error messages through an abstract interface. That had turned out to be a very positive change, because you could do whatever you wanted with the error messages, including ignore them. That fit right in with DMD-as-a-library.
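The output-range style Walter described can be sketched like this. The function name and the single-opcode logic are invented for illustration (this is not his disassembler's API); the point is that the callee writes into whatever sink the caller supplies instead of calling `printf` itself.

```d
import std.array : appender;
import std.range.primitives : put;

// Instead of printing internally, write into any output range the
// caller supplies; the caller decides whether the text goes to a
// file, a buffer, or nowhere at all. (Hypothetical API.)
void disassembleOne(Sink)(uint opcode, ref Sink sink)
{
    if (opcode == 0xD503201F)   // AArch64 NOP encoding
        put(sink, "nop");
    else
        put(sink, "<unknown>");
}

unittest
{
    auto buf = appender!string();
    disassembleOne(0xD503201F, buf);
    assert(buf.data == "nop");
}
```

Because the sink is a template parameter, the same code serves a GC-backed `appender`, a fixed stack buffer, or a sink that discards everything.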
He said it might be difficult to make it work in all cases, but with the brain power of our collective community, we could figure out ways to do it.

Dennis said he considered console output and integer-to-string conversion so elementary and ubiquitous that they should probably even be in the runtime. At the very least there should be `@nogc nothrow` versions. But most of the batteries-included stuff used in application code, like a JSON parser or big integer type, could use whatever was convenient, like the GC. Anyone needing specialized versions of those would use a specialized library anyway. For example, Phobos had a transform function, but audio programmers used a specialized library like dplug.

Jonathan agreed that range-based stuff generally didn't need to allocate. There were cases with exceptions where it made more sense to return something instead of throwing. But for most of this stuff, he thought it should be handled on a case-by-case basis. For something like, e.g., an XML parser, he would never want to write one that did anything other than throw. It was just cleaner at that point to allocate. He cited dxml as an example. For most stuff, it just returned slices of whatever you gave it, so it didn't allocate except when it needed to throw an exception. Even when it was necessary to return a string, we were typically able to overload an allocating function with one that took an output buffer to avoid the allocations. We had tools to avoid allocating and should be using them heavily where it made sense. But it had to be done on a case-by-case basis.

Robert agreed. If we had an XML parser that required passing in an allocator, he was just going to write his own XML parser. Another thing to consider was that safe-by-default would be a very hard sell if we were requiring people to pass allocators around. Users would wonder why their program in this safe-by-default language was no longer `@safe` just because they passed an allocator to a function.
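The buffer-overload pattern Jonathan mentioned can be sketched with a pair of hypothetical `toChars` functions (not dxml's actual API): a non-allocating core writes into a caller-supplied buffer, and a GC-allocating convenience overload sits on top.

```d
// Core: formats into a caller-supplied buffer, no allocation.
char[] toChars(uint value, char[] buf) @nogc nothrow
{
    size_t i = buf.length;
    do
    {
        buf[--i] = cast(char)('0' + value % 10);
        value /= 10;
    } while (value != 0);
    return buf[i .. $];
}

// Convenience overload: allocates the result with the GC.
string toChars(uint value)
{
    char[10] buf; // uint.max fits in 10 digits
    return toChars(value, buf[]).idup;
}

unittest
{
    char[16] buf;
    assert(toChars(1234u, buf[]) == "1234");
    assert(toChars(0u) == "0");
}
```

Most callers use the one-argument form; the buffer form exists for the hot path.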
On a more abstract level, the default for a Python programmer dipping their toes into D and Phobos should be stupidly simple and `@safe`. If you were two years into your D journey and knew about chaining ranges, then there should be options for you, and maybe those options would be to build the thing that allocated and threw. But the surface that people saw should be stupidly simple. That was one of the reasons he fell in love with D. Requiring users to pass in allocators made it feel like C++.

Walter agreed and suggested that the stupid-simple interface could be a layer on top of Phobos that handled the allocator for you and passed it to the engine underneath.

Robert agreed that could be possible for some things, but for others, like parsing XML and JSON, it might be extremely difficult. At some point, it had to be a judgment call.

Rikki said that we sometimes had that nice split where we could have a buffer or allocator at the low level and a wrapper that allocated above it. Other times, like with something using system handles, it should always be reference counted because cleanup needed to be deterministic. You could run out of handles and crash your program. As for throwing, with libraries you really wanted to catch exceptions close to the call. You didn't want them to escape the thread. But for a framework, you *did* want that. There were different mechanisms there: value-type exceptions and the runtime unwinding mechanism. They weren't the same thing.

Martin said that Robert's point was very important to him as well. And it wasn't just about making the interface easy, but also about documentation. He really hated all the implementation details in the C++ docs, like template parameters. If he was looking at something for the first time, he was just interested in the functionality. He didn't care about an allocator template parameter. Moreover, it was bad for compile times if you needed to templatize everything just for the allocator.
He then went back to the point about using wrappers that returned some kind of optional or result type. In his experience, that kind of thing propagated like cancer. The LLVM code base had stuff like that to avoid throwing exceptions. It was extremely ugly. As an example, he postulated a wrapper function called `tryUpdateFile` that read a file, did some work, and wrote to a file. It would be `nothrow`. Internally, it would need to use `tryReadFile` rather than `readFile`, which might throw, and the equivalent `tryWriteFile`. Then the cancer problem came in dealing with the ugly result types those `try*` functions returned. There was probably no easy way to deal with the multiple different exceptions that were possible, but they'd need to be taken care of somehow, and that was going to be extremely ugly. He preferred exceptions for this stuff. They were perfect for exceptional cases. Having to use an allocator just to allocate exceptions would be ugly, too. DIP 1008 had tried to tackle this problem. We could have other options, like reformulating the semantics to allow GC invocations for `throw` expressions. He was just very skeptical about the wrapper functions.

Razvan said there had been comments from some D users that D didn't know what it wanted to be. He was getting that feeling listening to this discussion. We had GC and we had exceptions, but now we wanted to jump through all these hoops to offer 100% flexibility. He found that weird. Even when he'd first started contributing to Phobos, there were some functions with several parameters to configure whether an exception should be thrown and other stuff. It was super confusing. He just wanted to use the function to do something. He didn't care about exceptions and allocators. From his perspective, if we had GC and exceptions, we should be using them. He believed other use cases were niche. He knew there were people out there who wanted to avoid them, and they would make a fuss about it.
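To make Martin's propagation point concrete, here is a hedged sketch. The `Result` type and all three `try*` functions are hypothetical (the file I/O is faked): once one callee returns a result type, every `nothrow` caller has to thread failures through by hand instead of relying on a single `try`/`catch`.

```d
// Hypothetical result type -- none of this exists in Phobos.
struct Result(T)
{
    T value;
    string error; // empty means success
    bool ok() const { return error.length == 0; }
}

Result!string tryReadFile(string path) nothrow
{
    // Stand-in for real I/O.
    return path == "good.txt"
        ? Result!string("contents", "")
        : Result!string("", "cannot read " ~ path);
}

Result!bool tryWriteFile(string path, string data) nothrow
{
    return Result!bool(true, ""); // stand-in: always succeeds
}

// The nothrow caller must check and forward every failure manually --
// the "cancer" compared with throwing code under one try/catch.
Result!bool tryUpdateFile(string inPath, string outPath) nothrow
{
    auto r = tryReadFile(inPath);
    if (!r.ok) return Result!bool(false, r.error);
    auto w = tryWriteFile(outPath, r.value ~ " updated");
    if (!w.ok) return Result!bool(false, w.error);
    return Result!bool(true, "");
}

unittest
{
    assert(tryUpdateFile("good.txt", "out.txt").ok);
    assert(!tryUpdateFile("missing.txt", "out.txt").ok);
}
```

Every new fallible callee adds another `if (!x.ok) return ...` line, which is exactly the pattern Martin was objecting to.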
In that case, maybe we should have a third-party `nothrow @nogc` library they could use. GC was embedded in the language. Razvan felt if we tried to solve all the use cases, we'd end up with a terrible standard library.

Rikki pointed out that the big problem people had with exceptions was that stack unwinding forced allocations. With value-type exceptions, it would all be purely on the stack. It could be as cheap as a single tag value, an integer. It would force you to catch it close to where it was called, but it solved all the problems we had with exceptions. The only thing holding us up on that was sumtypes.

Steve circled back to the Discord discussion Adam mentioned. The question that had brought this up was why string-to-integer conversion couldn't be `@nogc`. He thought that was a good question, as it was an operation that shouldn't need to allocate. The answer, of course, was that it might need to allocate an exception if the string didn't contain an integer. We could respond by making a version that didn't allocate, but we'd still need a mechanism to deal with failure. He didn't see a problem with having two versions of string-to-integer, one that threw an exception and one that otherwise indicated an error and let the user handle it in a different way. That didn't seem like it would cause huge problems with most code bases. He understood Martin's point that composing everything into different versions could become viral, but Steve didn't see how we couldn't have a string-to-integer function that didn't throw.

Regarding allocators, he'd never liked the way we handled them. There'd always been the problem that the GC flavor of an allocator didn't need to deallocate. With all other allocators, you needed to deallocate explicitly. That changed the way you wrote the code. He didn't want to poison the whole Phobos API with allocators because it meant more things to worry about in the implementation, whereas when using the GC you just allocated and that was it.
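The value-type exceptions Rikki described don't exist in D today, but the shape of the idea can be approximated with `std.sumtype`: the "exception" is just a stack value, as cheap as an integer tag, and the caller handles it right at the call site by matching. All names here are hypothetical.

```d
import std.sumtype : SumType, match;

enum MathError { divByZero }

alias DivResult = SumType!(int, MathError);

// No unwinding, no heap: the error travels as a plain stack value.
DivResult checkedDiv(int a, int b) @nogc nothrow @safe
{
    return b == 0 ? DivResult(MathError.divByZero) : DivResult(a / b);
}

unittest
{
    // The caller "catches" at the call site by matching on the result.
    auto text = checkedDiv(10, 2).match!(
        (int n) => "quotient",
        (MathError e) => "error");
    assert(text == "quotient");
}
```

The trade-off is exactly the one Rikki noted: the error cannot propagate implicitly up the stack, so it must be dealt with close to where it arose.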
In his view, we should be using the GC for allocations. If someone didn't want allocations, they could provide a buffer. Finished.

Timon thought it would be weird to change the interface from one that was exception-based to one based on explicit sumtypes just because exceptions allocated. He thought the right idea was to put the blame for exception allocation on any caller that caught the exception and then leaked it. In any other case, there was no reason to keep the exception on the GC heap: it was allocated, the stack was unwound, the exception was parsed once to take an action, and then it was thrown away. It didn't make sense to him that we would have a `nothrow` version just because throwing an exception allocated with the GC. There should be a different solution for this. He added that if you wanted implicitly unwinding error handling, then exceptions were the way to go. If you didn't like the unwinding mechanism itself, then maybe DMD could have an option to implicitly unwind the stack without actually unwinding it, by just returning some error flags. It might be useful for porting to new platforms before stack unwinding was implemented. In general, Timon thought that if you wanted a `nothrow` interface, then it should be about wanting explicit control flow and not about the GC.

Martin said that he'd been thinking about a kind of weak `@nogc` that allowed GC allocations only in throw expressions. That might go a long way. We could give `@nogc` the new semantics in a new edition, and then add something like `nothrowpure` that really made sure nothing escaped and nothing threw. As for performance concerns, he'd heard this way too often. Why did anyone care about unwinding performance? Exceptions were meant to be used in exceptional circumstances. He didn't care at all about the performance of an unwind. He wasn't going to be throwing a million exceptions per second.
And when it came to value-type exceptions, you could escape exceptions as soon as you caught them, so it wasn't like all of them were thrown away immediately after or during the unwind. Performance concerns, in his view, were completely invalid.

Walter agreed that exception performance was not an issue.

Jonathan said he didn't care about the implementation details, whether GC was used or not, as long as exceptions worked the way they currently did. If someone could come up with a better approach, he didn't care. But there were cases when you had to store a caught exception somewhere and couldn't assume it would be deallocated. It was also common to allocate stuff to put into the exception. Then there was the case of needing to catch an exception and pass it to another thread. He believed with our thread stuff, when a thread was killed by an exception, it would throw it on another thread when the join happened. So trying to free the exception at the catch site wouldn't work at that point. This made him nervous about wanting to avoid GC-allocating exceptions by ref-counting them or something like that. He felt that the people freaking out over it were people who refused to use the GC anyway. They wouldn't be happy no matter what we did unless we didn't use the GC at all, and that would be miserable. Though he was all for avoiding allocations where reasonable, it would be cleaner to allocate and throw. If you didn't like that, then you could avoid that part of the library and do your own thing for your more restrictive use cases. We should be smarter about allocations, but it should be on a case-by-case basis. Avoiding GC just to avoid GC would make for a miserable user experience.

Walter cited `std.path` as an example. He had rewritten it to avoid allocations, but to the user the functionality and interface hadn't changed.

Jonathan said a lot of range-based stuff could do that.
Yes, there were some issues with delegates, but for a lot of range-based stuff there was no need for allocations. We had some really good tools to reduce allocations and we should use them, but we wanted something that was usable, and that sometimes meant allocating.

Rikki said that not all exceptions had the same pattern. Some were caught really close to where they were thrown; others killed the process. They weren't the same. A different mechanism was appropriate for each. We should recognize that relying on one solution wasn't scaling to the full scope. Given how often people derived from the `Exception` class, he thought that a lot of the time what they really wanted was just to throw a unique identifier. We needed a completely different mechanism for that. He emphasized that we should be using the right tool for the right job, whatever that might be. For example, system handles ought to be reference counted, not GC-allocated, but business logic absolutely should use the GC so that the logic didn't have to worry about memory management. We needed to pick scalable solutions based on the use cases rather than picking one for everyone.

Timon said that an issue with letting the exceptional path be slow was that the library function didn't know the user's use case. The use case might be, "If this data is valid according to this rule set, then parse it." In that case, validating and parsing at the same time was more efficient. Then you were kind of left in a position where you did need to provide two versions, and he didn't know if that was a good place to be in.

Mathias agreed that was true in some cases. He gave the example of a file-removing function. You wanted to ensure that the file wasn't there, in which case you didn't want an exception thrown if the file didn't exist. So in cases like that, it made sense to wrap a non-throwing function with a throwing one. But he also agreed with Razvan and Jonathan. We had to have an identity.
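Mathias's file-removal example can be sketched on top of `std.file`. The name `ensureRemoved` is made up for illustration: this non-throwing flavor treats "already gone" as success, while callers who do want an exception keep using `std.file.remove` directly.

```d
import std.file : exists, remove;

// Non-throwing "make sure it's gone" wrapper around std.file.remove.
// A missing file is not an error here: the goal is absence, not removal.
bool ensureRemoved(string path) nothrow
{
    try
    {
        if (path.exists)
            path.remove();
        return true;
    }
    catch (Exception)
        return false; // e.g. permission denied
}

unittest
{
    import std.file : write;
    write("demo_tmp.txt", "x");
    assert(ensureRemoved("demo_tmp.txt"));
    assert(!exists("demo_tmp.txt"));
    assert(ensureRemoved("demo_tmp.txt")); // already absent: still success
}
```

This is the inverse of the layering elsewhere in the discussion: here the throwing function is the primitive and the non-throwing one is the wrapper.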
We were a very versatile language, but Phobos couldn't cover every use case. Every choice we made in the interface would bubble up when we tried to compose. That wasn't scalable. So people who wanted to use `@nogc` on their `main` wouldn't use Phobos. They'd have their own runtime. Supporting `@nogc` on `main` shouldn't be in scope for us. The main use for `@nogc` was to ensure a function didn't allocate. For him, it was about avoiding allocations on the hot path.

He went back to Steve's motivation for bringing this up, that people were complaining about string-to-integer conversion allocating. It couldn't be `@nogc` because it might allocate, not because it always did. And the complaints probably came because people were using it in a `@nogc` context. He said it felt like a self-inflicted thing. We'd added attributes that weren't really that good, that didn't really match our identity, and now we had a tempest.

Steve said it was true that string-to-integer conversion didn't allocate. The problem was you didn't always know if a string had an integer in it. The only way to be sure was to try the conversion and see if it failed. But failure meant catching an exception when we could instead be handing back an error code. For something that small, he could understand not wanting to deal with exceptions. When it was something big like XML, then yeah, maybe you had to do exceptions. But for small stuff like string-to-integer, it made no sense to have that level of complexity. He brought up Walter's earlier point about calling an error handler so the user could decide what to do. What would you do at that point in the parsing if an exception wasn't thrown by the handler?

Dennis jokingly suggested returning `int.min`.

Steve said you had to do something, but inside the parser, it wasn't going to work.

Mathias said that was a good example of where we should expose a function that didn't error out.
The big question for him was, how would we provide rich error information for that kind of function if we didn't have a `try` wrapper for it? He disagreed with Rikki's point that most of the time you really wanted a unique identifier. For him, a really useful thing about exceptions was that you could provide information as rich as you wanted. You could say, for example, which field of a struct couldn't be read and what its value was. That usually meant allocating.

Robert asked what would happen if the value he wanted to parse was `int.min`. What return code would you use then? What about `float.nan` for floats? He didn't want to have to pass a `bool` as an out parameter, or a sumtype with both a `bool` and the value, and have to check if it was `true` after the function call. Although it seemed simple on the surface, the closer you looked, the more you realized it wasn't so simple. He said the TL;DR here was, "It depends." As Átila liked to say, "You have to think." Even if it was painful. We had to think for each and every function, "Can this thing throw?" If the answer was "yes", then it couldn't be `nothrow`. Or we had to provide an interface through which you could pass something where the exception information could be stored.

Átila suggested we take Martin's idea of having the `throw` expression be exempt from `@nogc` and add a compiler flag to make it non-exempt for those who wanted it.

Steve pointed out that throwing didn't use the GC. It used to, but it no longer did. It was `new Exception` that was the problem.

Átila understood, but the idea was that anything following `throw` in the expression could still allocate, e.g., `throw new Exception`.

Martin agreed and said it wasn't just about the exception itself, but mostly about the string messages, many of which needed to format stuff.
Timon interjected that he thought it was a bad idea to have a string message in the base `Exception` class, because when did you really need to eagerly format an exception string?

Adam said he had been taking notes and had some comments. Regarding error sink parameters, this got at the heart of a design issue that was always vexing. Did we want the library interface to be simple and easily accessible to newcomers? Error sinks were kind of antithetical to that. Another question: how easy were they to ignore? A new person might come in and just throw something in to shut the compiler up and ignore the actual error. Then we'd get all kinds of questions about why their code was erroring out. He thought we could make something that was super flexible, but it would end up being for the kind of people who were currently on his screen, people who knew D so well they dreamed in it. We shouldn't do that.

He said this went back to what Razvan said about identity. What was Phobos trying to be to people? We should of course try to avoid allocating where possible, but if you were writing `@nogc nothrow` code and needed that level of performance, you were probably going to be doing stuff on your own anyway. Was that really something we wanted in a standard library trying to cover the broadest possible use cases? That was kind of a specialized demand.

He said the `@nogc` stuff seemed to mostly revolve around exceptions anyway. He thought Martin and Átila had really hit on something interesting in asking how much we really cared about performance and GC inside `throw`. If we went the "weak `@nogc`" route, he thought we'd find a lot of complaints about GC in Phobos would disappear. Then we'd get a much better read on how much people actually cared. He had a feeling we'd get to the point where we would say that we just weren't going to provide a lot of flexibility for `@nogc` code.

He thought Steve's point about GC being incompatible with allocators was a great one.
Paul Backus had said in the Discord that he'd rather see us get rid of allocators and just go with the GC in Phobos. If somebody needed to do funky allocations, they'd need to write their own stuff anyway.

Steve said he took Adam's point. Advanced users who cared about low-level things probably weren't going to be using the exception-throwing stuff. But it was probably pretty annoying to use a language in which you couldn't convert a string to an integer without a catch block unless you wrote your own.

Adam said he agreed with that. There were some cases where it made sense to have a fundamental function that we then wrapped, e.g., `to` and `tryTo`. We could have, e.g., a `phobos.sys` module for the low-level stuff. Like Martin said, that could get interesting, so we'd need to be careful with it. But then there were cases where that wouldn't be feasible, like `phobos.data`, where we would have all the parsers. In that case, there would be GC and exceptions.

Mathias said that all the companies he had worked for using D had cared about performance and still used the GC. At Sociomantic they had used and thrown exceptions. They were simply careful about it, and they never had a problem. There were a few tricks they used, like pre-allocated exceptions or exceptions that had an internal buffer. That was why they had introduced the `message` function in `Exception`. They couldn't use the `msg` property. So no one could tell him that exceptions weren't performant enough or that the GC couldn't be used in a high-performance context. It wasn't true.

Walter said that there needed to be one low-level string-to-integer conversion that didn't allocate and was focused on high speed, and then we could write a user-friendly one that called it. That should be fine.

Martin said Adam's idea about hiding the implementation details in a separate module or package was a good one on a case-by-case basis.
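The pre-allocated, internally buffered exception trick Mathias described might look roughly like this. It assumes a druntime recent enough that `Throwable.message()` is overridable (the hook he mentioned); the class name and buffer size are hypothetical:

```d
// Sketch of a pre-allocated exception with an internal buffer.
// Allocated once up front and reused, so throwing it never touches
// the GC. message() reads the internal buffer, which is why the
// immutable msg property alone wasn't enough.
class ReusableException : Exception
{
    private char[256] buf;
    private size_t len;

    this() @nogc nothrow @safe
    {
        super(null); // Exception's constructor is @nogc nothrow
    }

    ReusableException set(scope const(char)[] text) @nogc nothrow
    {
        len = text.length < buf.length ? text.length : buf.length;
        buf[0 .. len] = text[0 .. len];
        return this;
    }

    override const(char)[] message() const nothrow
    {
        return buf[0 .. len];
    }
}
```

Usage would be to allocate one instance at startup (or per fiber), then `throw exc.set("connection refused");` on the hot path. Code that prints `e.msg` sees nothing; code that calls `e.message()` gets the buffered text.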
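Walter's two-layer conversion could be sketched as follows, with hypothetical names: a non-allocating `@nogc nothrow` building block underneath, and a user-friendly throwing wrapper on top.

```d
// Low-level building block: no allocation, no throwing, high speed.
// Overflow handling elided for brevity in this sketch.
bool tryParse(const(char)[] s, out int value) @nogc nothrow @safe
{
    if (s.length == 0) return false;
    int acc = 0;
    foreach (c; s)
    {
        if (c < '0' || c > '9') return false;
        acc = acc * 10 + (c - '0');
    }
    value = acc;
    return true;
}

// User-friendly layer: wraps the building block; may use the GC.
int parse(const(char)[] s) @safe
{
    int value;
    if (!tryParse(s, value))
        throw new Exception("could not parse an integer");
    return value;
}

unittest
{
    assert(parse("42") == 42);
    int v;
    assert(!tryParse("4x", v));
}
```

The division of labor matches what Walter described: users who need speed call `tryParse` directly and never pay for the exception machinery, while everyone else gets the simple one-liner.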
We could have building blocks that didn't pollute the main modules and would allow us to keep the documentation and the interface simple. Then for advanced use cases, people could use the building blocks directly. But string-to-integer conversion was something for which it wouldn't be nice to have to import the building-block stuff. In that case, it made sense to have two functions in the main interface.

Adam said that was a longer discussion that had started back when he had asked about what should be in DRuntime and what should be in Phobos. He thought we were heading to a place where we would have a separate layer where these lower-level functions existed. We might even consider making it a separate library. We could tell the `@nogc` folks, "Okay, here's your walled garden over here." And then Phobos could build on top of that. This came back to determining the split between Phobos and DRuntime, because some of this stuff would have to be in the runtime. Maybe this new layer would be a library between Phobos and DRuntime.

Walter said he was fine with the building-block approach. The bottom level should be `@nogc nothrow`, and all the nice, user-friendly stuff should be on top of that, probably in Phobos. Because if you didn't have a low-level, high-speed function, users would always write their own. We shouldn't be making them do that. Put the low-level stuff in the runtime. If you wanted the simple, uncomplicated things, Phobos was where you should be looking.

Rikki wanted to know why the non-allocating, non-throwing, high-speed stuff should be in the runtime. Why couldn't it be below that, like at the BetterC level? Walter said that BetterC was designed to depend only on the C runtime library. That wasn't part of this. Martin clarified that Rikki was suggesting a kind of stripped-down runtime library, apart from DRuntime, that was usable in BetterC. Átila reminded us that he'd said many times that BetterC should be implicit. Martin said that was what he was getting at.
Ideally, we'd have a runtime that was properly structured in terms of object granularity so that, e.g., we'd have the string-to-integer parsing in a standalone object file without any further dependencies, so that it could then be linked into BetterC. If that were the only thing we needed, then we could just link in that object file and be done with it. Currently, we had `ModuleInfo` and all that stuff, but ideally, this would be taken care of by a properly structured runtime.

Átila said that should come out as a result of not using features to begin with. It should be pay-as-you-go: if you didn't use something, you didn't pay for it. It shouldn't be anything the user had to select. It should just work. And then BetterC would be implicit.

I suggested that we should just document that BetterC was created to facilitate porting C code to D and integrating D code into C, which was the motivating use case for it. We didn't have to support anything beyond that. If you wanted to use BetterC to write your game or whatever, that was on you. We shouldn't have anything to do with that. We should just document it and leave it there.

Rikki said his point about the extra library wasn't just about BetterC. The idea was that if it didn't depend on anything platform-specific or anything in DRuntime, then why not separate it out? It would be BetterC as a consequence and wouldn't need the switch. It would be pay-as-you-go.

Martin said they disagreed only on the idea that it needed to be split out. He emphasized that if object file granularity were taken into account, we wouldn't have these issues. We wouldn't need to split the runtime. It would be implicit and pay-as-you-go. Someone would need to do the work of going through each module and determining whether the object file granularity was fine or whether decoupling was needed. That would be a tedious task.

Walter said another way to do that would be to take an ordinary function and convert it to a template.
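Walter's template suggestion can be sketched like so (the function here is a hypothetical stand-in): a template is emitted into the object file of whichever module instantiates it, rather than living pre-compiled in a runtime library.

```d
// Hypothetical sketch: because this is a template, no code exists for
// it until some module instantiates it. The instantiation is emitted
// into that module's own compilation unit, so nothing extra needs to
// be linked in - pay-as-you-go by construction.
T clampTo(T)(long v) @nogc nothrow @safe
{
    if (v < T.min) return T.min;
    if (v > T.max) return T.max;
    return cast(T) v;
}

unittest
{
    // clampTo!int and clampTo!ubyte are generated here, in this unit
    assert(clampTo!int(long.max) == int.max);
    assert(clampTo!ubyte(300) == 255);
}
```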
Then you didn't need to link a separate library to use it. Rikki said he'd asked for `core.int128` to be all templated and Walter had turned him down. Walter said he hadn't been thinking about it being used from BetterC at the time.

I asked Adam if he was in a better place than he'd been at the start. He said he was. We still needed to debate some of this stuff, like whether we should have a separate lower-level library in the stack and where it should go. But he was happy with the notes he had now.

Our next meeting was a quarterly meeting on Friday, October 4th. Our next monthly meeting took place on Friday, October 11th.

If you have something you'd like to discuss with us in one of our monthly meetings, feel free to reach out to me and let me know.
Jan 05
On Monday, 6 January 2025 at 06:10:56 UTC, Mike Parker wrote:

> Phobos 3 design, nogc, allocators, conv redesign

all`y`alls need to publish api experiments; you airnt resolving the allocator debate with another year of talking
Jan 07
On Monday, 6 January 2025 at 06:10:56 UTC, Mike Parker wrote:

> [...]

As always, thanks!
Jan 10
On Monday, 6 January 2025 at 06:10:56 UTC, Mike Parker wrote:

> Walter said he was clearly alone in his position, so he'd remove the changelog entry.

As a regular user who still reads the changelogs of new compiler versions, I can confirm: I would expect only implemented features to be presented in that list. As a user, I don't care about not-ready/WIP code being mentioned there.
Jan 10