digitalmars.D.announce - D Language Foundation October Monthly Meeting Summary
- Mike Parker (501/501) Dec 31 2023 The D Language Foundation's monthly meeting for October 2023 took
- Richard (Rikki) Andrew Cattermole (22/29) Dec 31 2023 It's 2024, so lets hunt down what the problem is with ``std.file``.
- zjh (3/6) Dec 31 2023 [chinese
- ryuukk_ (30/34) Dec 31 2023 I just tested, and the issue happens again, i don't know what
- Konstantin (6/11) Jan 01 And what about https://code.dlang.org/packages/dlangide/0.8.18 or
- ryuukk_ (19/30) Jan 01 DCD is the project that empower them all
- Mike Shah (9/23) Jan 01 Some of the important ecosystem projects when I teach D are
The D Language Foundation's monthly meeting for October 2023 took place on Friday the 13th at 15:00 UTC. It lasted around one hour and thirty minutes. I was unable to attend, so thanks to Razvan for running it and to Dennis for recording it. Two attendees were first-timers: Adam Wilson and Luís Ferreira. I invited Adam along after a conversation we had at DConf. Luís was originally supposed to attend the quarterly meeting the week before as a representative of Weka but had been unable to make it, so I invited him to this one.

The following people attended the meeting:

* Andrei Alexandrescu
* Walter Bright
* Luís Ferreira
* Timon Gehr
* Martin Kinkelin
* Dennis Korpel
* Átila Neves
* Razvan Nitu
* Adam D. Ruppe
* Steven Schveighoffer
* Adam Wilson

Razvan got started with an issue he had recently encountered with the inliner that he didn't know how to resolve. Normally, when module A imports module B and calls a function from B, you'll end up with a linker error if B's object file isn't handed off to the linker, e.g., when compiling with `A.d` on the command line and omitting `B.d`. However, when the called function is inlined, no linker error is raised. He suspected this was an optimization. He'd found that this behavior breaks building with BetterC with the inline flag. He explained that the inliner runs after Semantic 3, but since B is not the root module some extra semantic stuff is done, and then you end up with `TypeInfo` errors. His first instinct was that this was a hack and that the inliner shouldn't be doing any semantic analysis. But if we change this, it might cause linker errors elsewhere.

Walter said this was the first he'd heard of this. Why not just link in module B or add it as a root file? And why was it a particular problem for BetterC? To the former, Razvan said that's the way to fix the error. Then he asked if Walter agreed that the inliner shouldn't be doing any semantic analysis. Walter said he didn't know as he hadn't looked into it. He didn't remember that the inliner was doing semantic analysis.

Razvan then asked if everyone agreed that compilation with and without the inline flag should yield the same result. Walter said not necessarily. If a function is inlined, you don't need to link it in. And to link it, you have to add it as a root module. So he didn't see why this was a problem and why it was a particular problem for BetterC.

Steve said that the inliner had to semantically analyze the inlined code. So he asked if Razvan was talking about some other semantic analysis taking place, like on code that wasn't used. Razvan explained he was talking about the analysis the inliner has to do on inlined code when the imported module isn't a root module.

Martin mentioned some LDC linker flags that require extra semantic analysis on things that would otherwise be linked externally. He said if something is a root module, it's going to be analyzed anyway, but if you just compile module A and it decides to inline another function that would otherwise not require semantic analysis, then yes... He wasn't sure he understood what the problem was here, but he could confirm that they'd had a problem a couple of weeks before. They were getting linker errors related to inlining, but it had nothing to do with separate compilation. He said that was a topic for later.

Walter noted that if the inlined function from module B called any other functions from module B, then you'd see linker errors if B wasn't linked. Was that the problem? Martin said it wasn't the problem in their case.
It was a unit test runner that included all of the code, so there shouldn't be any undefined symbols.

Walter asked Razvan to clarify what sort of linker errors he was seeing. Razvan said it had to do with `TypeInfo` generation in BetterC code. The fundamental issue was that semantic analysis was being done when it shouldn't and `TypeInfo` was being generated. If you compile with `-betterC` but without `-inline`, you don't see the errors, but if you compile with both, you see them.

Walter said that, okay, it was a `TypeInfo` issue. To figure that out, he'd have to trace through the code to understand why a `TypeInfo` was being generated in module B in that case. He couldn't think of a reason off the top of his head. Razvan suggested an alternative fix would be to only inline functions from root modules. Walter said that would never work. You wouldn't be able to have header-only libraries very well. He said the correct fix was to determine why a `TypeInfo` was being generated when it shouldn't be.

Martin said LDC didn't have these kinds of issues with `TypeInfo` because it had a different emission strategy for them. He suggested DMD take the same route. For classes, the `TypeInfo` is always generated in the module which contains the class declaration. That was the same as DMD as far as he knew. Where they diverge is with structs. The special `TypeInfo` members which are added to the struct are all emitted into the object file that contains the struct, but the actual `TypeInfo` is emitted lazily whenever it's accessed, from the codegen layer. So they avoid all the trouble with `TypeInfo` being needed at CTFE and such because it's generated lazily in each module that needs it, which accesses it in actual code that's executed at runtime. All of the speculative instantiation stuff could be untangled by just doing it in the codegen layer rather than as part of semantic analysis.

Steve was trying to understand: if you're calling a function with BetterC that needs `TypeInfo`, and it links without `-inline`, then that must mean the function being called isn't really BetterC, because it needs the `TypeInfo`. Dennis thought it came up when you want to call a CTFE-only function in a library, e.g., a Phobos function that uses GC only for CTFE, and with inlining it's inserting the Phobos code into your BetterC code. He said that basically what it comes down to is that BetterC is a bunch of hacks and the front-end inliner is a bit of a hack, and in the long term they should both go. Steve summed it up as "BetterC can't use CTFE when a CTFE function uses runtime features, which is a long-standing problem."

Walter asked Razvan if there was a Bugzilla issue for this. Razvan said there was and there was also a PR. Walter asked if the PR worked, and Razvan said it didn't.

(__UPDATE__: Both [the Bugzilla issue](https://issues.dlang.org/show_bug.cgi?id=24153) and [the pull request](https://github.com/dlang/dmd/pull/15627) have since been closed, as the issue is no longer reproducible.)

Dennis wanted to know what the future of the error interface is going to be in DMD. They'd been working to use an error sink so that it wasn't just printing directly to the console, but the interface is a thin wrapper around `printf`. Some issues with it had come up, and attempts to fix them had fallen short because the interface is too limiting. Walter wanted a simple interface and had rejected proposals that would replace it with one that's complex.
So Dennis was wondering if there was a place to meet in the middle: make it slightly more complex to tackle the issues they'd encountered. He noted that one of our goals was to improve error messages.

Walter said he agreed with the goal. With his recent pull requests, he'd been trying not to have multiple interfaces to the error message handler. He cited the diagnostic error message printer as another thing they were trying to simplify. He recalled agreeing with Iain to remove the complexity that had been added to the error sink, but nothing had happened yet. The purpose for the error sink was twofold: to simplify the interface, and to make it usable with DMD-as-a-library. Then the library can provide its own interface to do what it wants. So he wanted to keep that simple. DMD-as-a-library was going to be simple so we could do the LSP implementation simply.

This was followed by a discussion about issues with the current implementation, the use of `toChars` and `toPrettyChars`, decisions about truncation and formatting, etc. A big point here was that Walter said all the custom formatting should be upfront before the message gets to the error sink. Syntax highlighting and things like that should be done by whatever the error sink calls. But the error sink itself shouldn't be making any decisions about formatting and highlighting. The outcome was that Dennis said he would experiment with a new `toChars` method and see how far he could get.

Timon had nothing to bring up this time.

Adam started by saying he had spoken with me at DConf, and I had invited him to the meeting to talk about how MS handles their release stuff for .NET editions. He summarized how it works:

* It's a one-year release cycle that ends in November.
* They have a three-month planning phase with the first preview coming in February.
* They do seven total previews.
* Those are followed by two release candidates.

He said that during the three-month planning phase, they still push out library fixes even for features they've decided to cut. They do feature requests and such right on GitHub and also list their focus areas there. And they have pretty strict rules about what can and cannot be done and when. So in the preview phase features can be added at any time even if it's running late. The language is finalized before RC1, and then RC1 is bug fixes only, and RC2 is polish and critical bug fixes only.

They're very focused on forward compatibility. If a feature is in there now, they have to support it even if they remove it from later editions. In terms of new language features, the compiler and the library are tied together, so you can't necessarily build the latest version of the library with an older version of the compiler, but you can go the other way.

When they release, they do multiple articles on their blog describing all the new stuff, written by the developers who wrote the features. They push a lot of stuff out on Twitter and get YouTube creators involved. The final release is done on the Monday night before their annual .NET Conf. Then on Tuesday morning at the start of the conference, everyone gets to download the new release.

He had talked with me at one of the BeerConf sessions in London about this and suggested that once we get going with editions, we consider tying each release to DConf.

Walter noted it's always difficult getting people to write blog articles about this stuff. Adam agreed.
He said he'd tell me that I'd wear my fingers out if I tried to write it all, but suggested that I could spend some time interviewing the people who write the features. He said we could pull some of Adam Ruppe's writing into the mainstream. And he said he'd been doing some more writing lately himself, so he'd be willing to contribute on that front.

Walter said articles are a great marketing tool, but the most effective ones are from users. He cited all the articles out there from Rust programmers, like "I wrote TicTacToe in Rust" or "I wrote a text editor in Rust" or "I wrote some boring conventional thing in Rust". Many of them had no merit, but the constant drumbeat of articles about Rust appearing on social media was effective. They don't even have to be very substantive. Just something that gives a constant presence out there.

He said that another effective approach was responding to programming articles you see on social media with articles about how D solves whatever problem the article was solving. He gave an example of a discussion he'd seen on Hacker News about fallthrough in switch statements, so he wrote a little post about how D handles it with `goto case`. It got a lot of upvotes. He avoids criticizing other languages in that kind of writing. He just says, here's our solution and maybe they should adapt their solution to it. So if more D users were doing that kind of thing, or periodically tweeting out three lines about what they're working on with relevant hashtags, maybe throwing some polls out there, that sort of thing drives engagement. This led to some discussion about SEO, hashtags, the ineffectiveness of ads, etc.

To wrap up, Adam said he'd been working on an ImportC article based on the stuff he'd done at DConf.

__CTFE integer overflows__

Luís opened with an issue Weka had run into with constant folding integers at compile time: there's no way to know if an integer is going to overflow. They'd like to have a warning for that. He's working on a linter using DMD-as-a-library and as a plugin for LDC. He'd like to have a warning for that in the linter, but there's no way to hook the constant folder to do what they want.

Walter said that the problem with doing integer overflow checks is that sometimes you want integer overflow. Luís agreed and said that we could have a way to ignore the checks when they're really wanted, but most of the time they don't want them. He said for the compiler, at least for compile-time stuff, we could do what Clang and GCC do for sanitizing signed integer overflow. At run time, we can use whatever sanitizers GCC and LLVM support.

He said he'd opened a PR about this and Dennis told him this was defined behavior. He agreed with that, but there were some use cases where it wasn't wanted. He understood that Walter didn't like warnings. Even if this was something that wasn't going to be upstreamed in the compiler, he'd like a way to query if a constant has been poisoned somehow and then it can be linted afterward. It fitted in with the work Razvan was doing on DMD-as-a-library.

Walter said that first of all, sometimes you do want integer overflow. Second, integer adds happen in a bunch of places that aren't in the source, for example, adding on the offset of a struct member. Should those places be checked for integer overflow? He didn't think that was clear. There were a lot of issues around integer overflow that he hadn't resolved. Another problem was that it didn't fit in DMD's backend.
Having different behavior at compile time and run time was not ideal. Luís said that if DMD could do it at compile time, they could check it at run time. Ideally, they wanted both.

Martin noted that Weka already had their own fork of the compiler. If Luís already had the PR, then they could just implement it on their side. (Dennis lost his Jitsi connection in the middle of this, so whatever else Martin said was lost with it. Dennis got back in fairly quickly, but Walter was speaking then.)

Walter said that they'd have different behavior running through constant folding vs. run time. He thought that was not a nice thing to have, but if Weka were okay with that, then they could implement it in their fork of the compiler. And he said Steven had pointed out that they could use `checkedint` explicitly. Some of those checks were done in the DMD source: places that were vulnerable to overflow have an explicit check.

Timon noted that floating-point behavior at CTFE is different from run time. Walter said that was a known issue that was difficult to fix. He'd made a PR to fix it and it broke a lot of stuff, so he backed off of it. This led to a bit of a side discussion on floating point differences, how `real` is explicitly specced as implementation dependent, use doubles if you want portability... Walter said we could replace all the floating point calculations with our own emulator, but then it runs like a pig. At some point, you just have to live with the differences.

The discussion got back to integer overflow, scenarios that are susceptible to it, how you should explicitly check in those cases, the performance cost of having it always enabled, and so on. It went on until Luís noted that LDC has a flag to enable overflow checks at runtime. Walter suggested that they should also then be able to have a flag to enable them at compile time, and thought that would be a good idea. He suggested Luís talk to Martin about it (Martin had to leave the meeting just a few minutes earlier). Luís agreed.

__Attribute inference bug__

Luís's second issue was a bug with attribute inference that manifests during separate compilation. There were some cases where it wasn't happening correctly. When the issue showed up with something in Phobos, he didn't have a way to fix it. It didn't affect their main codebase too badly because they compiled it with one compiler invocation. But some of their projects weren't compiled with the same build system, and that was where it became an issue. When working on their laptops, compiling with one compiler invocation used too much memory. They needed to be able to compile with multiple invocations, but this bug was blocking them.

Walter said that was usually caused by forward reference issues. Luís agreed and said he'd tried to fix it but had been unable to. Walter said Dennis had some ideas about that. He'd wanted to swap the default. Dennis said he'd given it a try but had run into issues. When you queried the type of a function, it eagerly needed to know the attributes. In the interest of time, further discussion of this issue was pushed off to later.

__Semantic analysis in AST nodes__

Next, Luís had a topic related to DMD-as-a-library. He said he'd been working on some linting rules for an LDC lint. He'd found that a lot of AST methods did semantics under the hood. He thought it would be cool if we could have a project to split those out. This was part of the big refactor of the compiler.
Walter said he'd slowly been working toward minimizing the AST functions, pulling out non-virtual functions that didn't need to be there. It was a time-consuming process, but that was the direction he was headed. Luís said the main issue from LDC lint's perspective was that they wanted to query the AST, but not mutate it. Currently, some queries, like when testing for the presence of `@nogc`, would run semantics if the attribute was not yet found on that call. He didn't want to run semantics in those query functions. If it was a forward reference, just let him know. He went on to explain that LDC lint was using LDC's AST, and if, between semantics and codegen, he mutated the AST and something was relying on it not being mutated, then he was going to have undefined behavior on his side.

Átila said that const should help with that. Luís agreed. But from a refactoring perspective, the semantics should be separated from the queries. You need to know that if you're querying something, it isn't going to mutate. Átila noted that was exactly what const was for. Some people complain that it's too strict, but this was the point of it. Walter said the idea about separating things so that only const functions are available in the AST was a pretty good one.

Steve started by noting that in D1, you could cast an int array literal to ubyte to set the type, e.g., `cast(ubyte)[1,2,3,4]`. He said someone in Discord had shown that `cast(ubyte)[10000,2,3,4]` did the same thing, but that the 10,000 ended up as whatever the truncated ubyte value of 10,000 was. He remembered this being just a means to set the type, not so you could cast away information. He thought this was weird and wondered if it was something we should address. Walter asked him to file a Bugzilla issue, and Steve said there should already be one. (No one posted a link in the chat, and I was unable to find it with a few different search terms.)

Next, he said he'd found a significant flaw in his compile-time associative array code, which Dennis had merged into DMD for him. The `hashOf` function was doing things differently than the `toHash` on `TypeInfo`. This could cause the hashes coming out of them to be different sometimes, and that would result in an incorrect representation at run time. He said he thought he had a solution for it.

Next, he reported that he had a bunch of new students in his homeschool coding class. He'd started rewriting [his website that talks about it](https://codingcat.club). He said this kind of thing was useful for bringing in people who had never programmed, and that D was a really good first language. He was hoping to get that more completely filled out and maybe publish some videos to go along with it.

Finally, he brought up code-d, [the Visual Studio Code extension for D](https://github.com/Pure-D/code-d) maintained by Jan Jurzitza (Webfreak). Steve said that it was great when it worked, but there were a lot of weird things that caused it to break. He thought it would be important to have the DLF sponsor Jan to add some stuff to it. Átila wondered what that would look like since Jan was already working on it anyway. Steve wondered if there was anything we could do to help with debugger support or anything like that. Walter suggested that as a start, we could bring it under the DLF GitHub umbrella and encourage people to help out with it. Steve said he would bring it up with Jan. Luís said Weka was sponsoring work on serve-d, a core component of code-d, and he talked about some of the work he'd done on it.
Steve had experienced some serve-d crashes, so there was a bit of discussion about that.

(__NOTE__: Had I attended this meeting, I would have noted that we raised something like $3000 for Jan back in 2018. It was to be paid out for specific milestones. Once we hit the goal, Jan asked me to delay the payments for a while, and ultimately told me they were motivated just by working on D and not by the money. I recall we used some of the money to get Jan to DConf 2019, and Jan talked about putting some bounties on bugs.

I also would have reminded everyone that one of our major goals right now is to strengthen the ecosystem. We're absolutely willing to throw some money at code-d and any other important projects in our ecosystem where that money can help get something done. We have over $11,000 sitting [in our OpenCollective account](https://opencollective.com/dlang) that can be used for this sort of thing. Jan or anyone working on a key D project is welcome to reach out to me to discuss possibilities: bug bounties, contract work for specific tasks, etc.

What is an important, or key, D project other than code-d? That's one of the things we need to sort out. Until then, if you think your D project is important to the D ecosystem and you have a specific need for some financial support, please get in touch with me at social@dlang.org and we can talk about it. For now, if you'd like to contribute to the D community in some way, helping improve code-d is a high-impact way to do it.)

__Mac issues__

First, Adam brought up the state of D on Mac. He said it was okay, but not great. It felt unfinished. With LDC, you had all the architectures, but not the latest language features. With DMD, you had the language features, but not the architectures. And lately, regular users had been encountering linker issues, and he'd found some codegen bugs. It was good enough for him, and he made it work, but it wasn't as good as it could be. Luís said that Weka was going to support AArch64 and would probably have some changes for upstream.

Walter asked Adam if the codegen problems were with DMD, LDC, GDC, or all of them. Adam said that DMD had the codegen bug. Walter asked for a link. Adam said he didn't have it right now. Walter asked if Adam could email it to him. Adam said he would. He said to reproduce it, you had to do a GUI application. He said he could try to make a smaller one. Walter said that if Adam could isolate it to one function that he could compile and look at the generated code, that would be even better. He said it was often relocations or the fixups that were wrong on the Mac because they did weird things in it.

__String interpolation__

Next, he brought up string interpolation, saying we'd been working on this for years. He said John and Andrei had written [a pretty good proposal](https://github.com/John-Colvin/YAIDIP) (YAIDIP) that hit real-world issues they were having at Symmetry, and it was a pity that the D language was so stagnant. Átila said the issue was that John and Andrei had never finished the proposal. Adam said the implementation worked and we shouldn't put up so much useless red tape when we could have just moved on and been productive. Átila said he hadn't been aware there was an implementation of it. The last he'd heard, they were still working on it. Adam said he'd written an implementation for the other DIP he'd worked on and had withdrawn it in favor of YAIDIP, but the core of it was essentially the same. Átila asked where this implementation was. Adam said it was in a DMD PR somewhere.
(__NOTE__: Had I attended, I would have noted as a reminder that we've got a pause on new features at the moment. One of our major goals is stabilizing the language and the library. We plan to start looking at new features again, and launch a new streamlined DIP process, once we've finalized the editions proposal.)

Átila started by saying he had done some work on editions the week before but had been too busy to make more progress that week. Instead, he had put together and given a talk about D at CERN (he used to work there), hoping to capitalize on the fact that they might build a new accelerator and are thinking of switching languages.

Next, he said he'd discovered that a one-line file with `import std.file` takes 200ms to compile, and that was nuts. He needed to figure out at some point exactly what the problem was. It was just the semantic analysis from the import; he wasn't even generating the object file. On the same machine, he also tried a C++ compile with just `#include <iostream>` and that took 400ms. He said that twice as fast as C++ was nowhere near good enough. Walter agreed.

Átila wasn't sure what the next steps were. He thought we needed to solve the problem of circular imports in Phobos anyway, but he wasn't 100% convinced that would make a big dent in these build times. Steve wondered if it had to do with CTFE running things when it didn't have to. Átila said sure. It was one thing if you had a static foreach over a million items. That was going to take a while, and there was nothing you could do about it besides having a better CTFE engine. But other than doing stupid things, it shouldn't be taking so long. He didn't even use any of the symbols from the imported file.

Walter noted that templates weren't semantically analyzed just by importing them. They had to be expanded. So something in `std.file` was expanding the templates. Átila suspected it had something to do with `std.uni`. He remembered seeing something like this in something else he was working on that imported it.

Walter said he'd been refactoring the front end with the goals of simplifying it and making it easier to understand and easier to implement DMD-as-a-library. He said he would defer discussion of that to the upcoming meeting that Razvan was in charge of. He thought Luís's idea of having const AST functions was a good one. He'd managed to minimize the dependencies of several modules. That made the compiler easier to work with. He planned to continue with that. Luís noted that the expression module was one of the biggest files in DMD. Walter said that one was on his list. He'd been looking at splitting it into two files. Aside from that, he was still doing the constant work of bug fixing and trying to stabilize the language.

(__NOTE__: The meeting Walter referred to was a kind of focused workgroup we'd scheduled to sort out some decisions about the implementation of DMD-as-a-library. There ended up being two of them in October, and they were held in place of our normal planning sessions. I sent invitations to Jan Jurzitza, Prajawal SN, and Luís Ferreira. I'll include some info about those meetings in a combined October/November planning update.)

Both Martin and Andrei had to leave the meeting before they had a turn.

As the meeting wrapped up and Razvan asked if anyone had anything to add, Timon said he'd recently participated in a couple of programming contests. One thing he'd noticed was that his D solutions were usually the most succinct among all the contestants.
Walter encouraged him to write up a paragraph or two about it in the forums and publicize it elsewhere. Our next monthly meeting took place on November 10 at 16:00 UTC.
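
(__NOTE__: If you want to reproduce Átila's measurement, the whole test is a one-line file. He didn't say exactly how he invoked the compiler, so treat the commands in the comments below as an assumption; `-o-` skips object file generation, matching his "not even generating the object file" setup.)

```d
// import_only.d: the entire test file. The point is to measure the cost of
// the semantic analysis triggered purely by the import, with no symbols used.
// Assumed invocations (not quoted from the meeting):
//   time dmd -o- import_only.d
//   time ldc2 -o- import_only.d
import std.file;
```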
Dec 31 2023
On 01/01/2024 12:12 AM, Mike Parker wrote:Next, he said he'd discovered that a one-line file with `import std.file` takes 200ms to compile, and that was nuts. He needed to figure out at some point exactly what the problem was. It was just the semantic analysis from the import; he wasn't even generating the object file. On the same machine, he also tried a C++ compile with just `#include <iostream>` and that took 400ms. He said that twice as fast as C++ was nowhere near good enough. Walter agreed.

It's 2024, so let's hunt down what the problem is with ``std.file``. On my machine, compiling it using ldc2 1.35.0 took ~500ms (frontend only). That is quite a long time. So let's go hunting!

I've found a bunch of cost associated with ``std.uni``, specifically from ``std.windows.charset``, which imports ``std.uni`` via ``std.string`` and ends up importing the Unicode tables, which take 117ms to sema2 (no surprises there). Why does it import ``std.string``? To call toStringz. What did I replace it with in my test code? ``return cast(typeof(return))(s ~ "\0").ptr;``. Bye bye 117ms.

Next up is ``std.datetime.timezone``: 40ms of that is from ``std.string``, but alas, that actually is needed. Now on to sema2 for ``std.datetime.systime``, again into ``std.datetime.timezone``; nothing we can do there, as above, for all 111ms of it.

All in all, I can't find anything to really prune for this. There won't be any easy wins here. The Unicode tables would need to be completely redone to improve it, and even then it may only reduce the ~100ms of sema2 time.
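
A minimal sketch of the kind of substitution described above, shaped like the std.string.toStringz signature it stands in for (the function name is mine, and this is illustrative rather than an actual Phobos change):

```d
// Appending a '\0' and taking .ptr yields a C-style string without importing
// std.string, which is what was dragging std.uni and its tables into the build.
// The freshly allocated copy is never aliased elsewhere, so the cast to the
// immutable return type is safe here.
immutable(char)* toCString(const(char)[] s)
{
    return cast(typeof(return))(s ~ "\0").ptr;
}
```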
Dec 31 2023
On Sunday, 31 December 2023 at 11:12:23 UTC, Mike Parker wrote:The D Language Foundation's monthly meeting for October 2023 took place on Friday the 13th at 15:00 UTC. It lasted around one hour and thirty minutes.

[Chinese version](https://fqbqrr.blog.csdn.net/article/details/135319694)
Dec 31 2023
On Sunday, 31 December 2023 at 11:12:23 UTC, Mike Parker wrote:(__UPDATE__: Both [the Bugzilla issue](https://issues.dlang.org/show_bug.cgi?id=24153) and [the pull request](https://github.com/dlang/dmd/pull/15627) have since been closed, as the issue is no longer reproducible.)

I just tested, and the issue happens again. I don't know what caused it to disappear previously; maybe a mistake on my end (I probably forgot `-inline`).

Anyways, here is the code that reproduces the issue:

```D
struct InvBoneBindInfo
{
}

struct Test(Value)
{
    void test()
    {
        auto t = Value.init; // <--- it's because of this
    }
}

extern(C) void main()
{
    Test!(InvBoneBindInfo[32]) test;
    test.test();
}
```

Compile it with: ``dmd -betterC -inline -run test.d``

You will get:

```
test.d(1): Error: `TypeInfo` cannot be used with -betterC
```

The issue remains because of the ``Value.init``, which is a static array; DMD for some reason requires the `TypeInfo`.
Dec 31 2023
On Sunday, 31 December 2023 at 11:12:23 UTC, Mike Parker wrote:Finally, he brought up code-d, [the Visual Studio Code extension for D](https://github.com/Pure-D/code-d) maintained by Jan Jurzitza (Webfreak). Steve said that it was great when it worked, but there were a lot of weird things that caused it to break.

And what about https://code.dlang.org/packages/dlangide/0.8.18 or https://gitlab.com/basile.b/dexed? They are both not extensions but full IDEs. I tried to install both of them: dlangide does not compile with dmd 2.097, while dexed has working executables and looks good.
Jan 01
On Monday, 1 January 2024 at 10:50:22 UTC, Konstantin wrote:On Sunday, 31 December 2023 at 11:12:23 UTC, Mike Parker wrote:Finally, he brought up code-d, [the Visual Studio Code extension for D](https://github.com/Pure-D/code-d) maintained by Jan Jurzitza (Webfreak). Steve said that it was great when it worked, but there were a lot of weird things that caused it to break.And what about https://code.dlang.org/packages/dlangide/0.8.18 or https://gitlab.com/basile.b/dexed? They are both not extensions but full IDEs. I tried to install both of them: dlangide does not compile with dmd 2.097, while dexed has working executables and looks good.

DCD is the project that empowers them all. Improvements to DCD = improvements to serve-d, dlangide, and dexed. But it's a waste of effort if the DMD-as-a-library project becomes usable for a language server, so whoever is working on DMD as a library should get the funding to speed it up.

Funding for fixing serve-d crashing is useless if it still can't work with D's features (mixin/template). Funding should go towards the features that are missing:

- good mixin support
- good template support
- good debugger support

Anything else is just a distraction.

I tried, but it didn't get any steam, and GitHub fucked up by deleting the branch. I still have it locally, so whoever wants to pursue this work, let me know: https://github.com/dlang-community/DCD/pull/714
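
To make the mixin point concrete, here is the kind of code a purely syntactic tool struggles with (my own illustration, not taken from DCD or the post above):

```d
// Symbols introduced by a mixin template only exist after semantic analysis,
// so a syntax-level tool can't offer `x` or `y` when completing on `obj.`.
mixin template Fields()
{
    int x;
    int y;
}

struct S
{
    mixin Fields!();
}

void main()
{
    S obj;
    obj.x = 1; // completion here needs the mixin expanded to know about x and y
    obj.y = 2;
}
```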
Jan 01
On Sunday, 31 December 2023 at 11:12:23 UTC, Mike Parker wrote:The D Language Foundation's monthly meeting for October 2023 took place on Friday the 13th at 15:00 UTC. It lasted around one hour and thirty minutes. I was unable to attend, so thanks to Razvan for running it and to Dennis for recording it. [...]I also would have reminded everyone that one of our major goals right now is to strengthen the ecosystem. We're absolutely willing to throw some money at code-d and any other important projects in our ecosystem where that money can help get something done. We have over $11,000 sitting in our OpenCollective account that can be used for this sort of thing. Jan or anyone working on a key D project is welcome to reach out to me to discuss possibilities: bug bounties, contract work for specific tasks, etc.

Some of the important ecosystem projects when I teach D are code-d, IntelliJ support, and fixing issues for macOS (e.g., having to type out 'export MACOSX_DEVELOPMENT=13.0' or something similar for the tools is tricky for students). The current Symmetry Autumn of Code projects (Dfmt, C++ Interop, etc.) and d-scanner are also very valuable tools for adoption in my opinion -- appreciate the efforts of those contributing to the tooling ecosystem!
Jan 01