
digitalmars.D.announce - D Language Foundation February 2024 Monthly Meeting Summary

Mike Parker <aldacron gmail.com> writes:
The D Language Foundation's monthly meeting for February 2024 
took place on Friday the 9th. It lasted around an hour.

Razvan was the only member who sent in any agenda items before 
the meeting.



The following people attended:

* Paul Backus
* Walter Bright
* Iain Buclaw
* Ali Çehreli
* Jonathan M. Davis
* Martin Kinkelin
* Dennis Korpel
* Mathias Lang
* Átila Neves
* Razvan Nitu
* Mike Parker
* Robert Schadek
* Steven Schveighoffer
* Adam Wilson



Before getting to the first agenda item, I updated everyone on 
DConf planning. Symmetry had recently finalized the contract with 
Brightspace, our event planner. I was waiting for confirmation 
that Brightspace had signed the venue contract before making the 
initial announcement.

Next, I told everyone that [the first video in the revival of the 
Community Conversations series](https://youtu.be/XpPV5OBJEvg) had 
gone pretty well. I thought Martin had done an excellent job. 
Since Razvan and I had already discussed his participation, I 
asked if he was willing to do the next one. [He 
accepted](https://youtu.be/Wndz2hLpbdM).

I then asked for a volunteer for the March episode. No one 
stepped forward, so Walter suggested I just pick somebody. I said 
I'd allow time for a volunteer to email me, but I'd pick if no 
one stepped forward. (In the end, I asked Dennis, [and he 
accepted](https://youtu.be/KxlY2ZQpiuI).)



Razvan summarized an issue that Teodor Dutu had encountered in 
his project to replace DRuntime hooks with templates. 
Specifically, the approach he'd been taking to lowering 
(handling it during expression semantic and storing the lowering 
in an expression node) wasn't possible with `ArrayLiteralExp` because
of the way the compiler handled it. (Rather than summarize 
Razvan's summary here, I'll point you to [Teodor's forum 
post](https://forum.dlang.org/thread/ykojheyrmrmpxgjfcsyy@forum.dlang.org),
which Razvan linked in the meeting chat, if you want the details).

Other hooks had a similar issue. The solution Razvan and Teodor 
had discussed was to save pointers to the expressions that needed 
lowering in an array or a list, and then just do the lowering 
after semantic. This would also allow them to get rid of the 
fields they'd been using in the AST nodes to store the lowerings.

Martin noted that Iain had proposed that approach in past 
discussions about lowering. The main reason they'd gone with 
fields in the AST nodes was that post-semantic lowering caused a 
performance problem with the CTFE engine. Using the fields was 
just simpler. That was why these AST nodes with the lowering 
fields still existed.

Razvan said that what he was proposing wouldn't affect CTFE. The 
main problem with Iain's proposed approach was that it required 
another pass on the AST. But what he and Teodor were proposing 
would avoid that, since they were going to globally store 
pointers to the expressions that needed lowering.
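The pattern Razvan and Teodor described might be sketched roughly like this. All names here are illustrative stand-ins, not actual DMD symbols:

```d
// Hypothetical sketch: instead of a lowering field on each AST node,
// record pointers to the expression slots that need lowering during
// expression semantic, then rewrite them in one pass after semantic.
module loweringqueue;

class Expression { /* stand-in for dmd.expression.Expression */ }
class ArrayLiteralExp : Expression { }

// Global list populated during expression semantic. The pointed-to
// slots must live in stable AST storage so they outlive semantic.
Expression*[] pendingLowerings;

void markForLowering(Expression* slot)
{
    pendingLowerings ~= slot;   // remember where the rewrite must happen
}

// Separate pass, run after semantic (and inlining), before IR generation.
void runLoweringPass(Expression function(Expression) lower)
{
    foreach (slot; pendingLowerings)
        *slot = lower(*slot);   // replace the expression in place
    pendingLowerings = null;
}
```

Because the rewrite happens only in the late pass, CTFE would still see the original, unlowered expression, which is the property Razvan said kept his proposal from affecting CTFE.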

Martin said he supposed it was an optimization thing, then. We'd 
have to see if the cache would pay off or if there would still be 
a performance hit. He then recalled another issue with Iain's 
proposal, which was that in some cases when outputting the AST 
nodes, you didn't want to see the lowering, for example, when 
generating C and C++ headers or DI files. He had no idea how to 
proceed, but it seemed we were reaching the limitations of the 
lowering field.

Razvan said they did have workarounds for this, but they made the 
code uglier. The new approach would be much cleaner.

Walter noted that he'd recently run into a problem where DRuntime 
was calling an array constructor, and that was removed and 
replaced with a template in the semantic pass. Then the inliner 
tried to generate more cases that needed to be converted from an 
array constructor to the template, and the back end would just 
fail because it couldn't call the array constructor anymore. So 
he'd added code to the inliner to prevent it from inlining code 
that would cause another runtime hook to be generated.

So he thought re-engineering when the runtime hooks get lowered 
was a good idea. Right now, it was ad hoc and causing problems.

Razvan said that the new hooks were being lowered during 
semantic, but the old hooks were lowered after semantic during 
the IR generation. Since they'd been implementing the new hooks 
incrementally, they currently had both old and new hooks in the 
code. When they finished, they wanted them all in one place, and 
Razvan thought doing it after semantic was best.

Walter said another problem was that CTFE would be faced with a 
lowered array constructor, so he'd had to put in code to unwind 
the constructor in CTFE. So when to do the lowerings was a 
problem. Doing them as a separate pass might be the only 
solution. He asked what some of the new lowerings should be.

Razvan said that they'd just been picking the ones the IR 
generator was using and pushing them up to the expression 
semantic. The goal was to do them all. That would be good for GDC 
and LDC, as they'd no longer have to support the code. It would 
all be handled in the front end. If Walter was asking how many 
were left, Razvan wasn't sure. Teodor had said he'd be done by 
summer, so most of them were already implemented.

Walter said okay, and suggested the best place for them was 
before `e2ir`, the expression-to-IR pass, but after semantic and 
inlining were done. Razvan
said that was what he was advocating for. If he could get a green 
light for it, he'd talk with Teodor about implementing it. Walter 
said he didn't see any other practical place to put it, so they 
could go with it.

Martin said he thought the lowering should happen before the 
inlining because, on some of them, the overhead might be 
noticeable. The hooks were probably going to be quite tiny. So if 
just at least some of them were forwarding to another function, 
which might not be a template, those could be pre-compiled into 
the runtime.

Razvan asked if he was suggesting the inliner be put into the 
back end or if it was enough to do the lowerings before the 
front-end inliner kicked in. Martin said he didn't know how it 
was done before. So if it were just the front-end inliner, then 
it was a DMD-only problem, as LDC and GDC didn't use it anyway. 
He had no opinions on that.

Walter said he'd implemented another inliner that operated on the 
intermediate code. It was incomplete, which was why the front-end 
inliner was still there. Ideally, that would just go away. It 
should be done at the intermediate code level like GDC and LDC do 
it. It was a design mistake on his part to put it in the front 
end. He hadn't put in the work yet to finish the intermediate 
inliner, but it was in there and running.

He said that its problem at the moment was that it only worked on 
single-expression functions. It didn't work yet on functions with 
multiple statements, and that was why it wasn't good enough. That 
was on him. He said it would be fair to work under the assumption 
that the front-end inliner would be removed. He'd like to get it 
completed as it would allow the removal of a lot of code from the 
front end, which had been the wrong way to do things.

Razvan said that made sense, and it would be easier to implement 
the hooks that way. Walter said the new inliner was done as a 
separate pass between semantic and codegen, so it fit right in 
with Razvan's idea of where to put the lowerings.

Steve noted that the inliner wasn't needed for code to compile. 
It was just an extra optimization. Since people using D who cared 
about performance were going to be using LDC or GDC rather than 
DMD anyway, he thought it might be fine to just go ahead and get 
rid of the front-end inliner even if the new one wasn't yet 
complete.

Walter said that was a good idea. He also said he needed to look 
into how much work it would be to implement multiple statements.



Next, Razvan wanted to talk about Google Summer of Code projects. 
Some projects in [the issues list in the project ideas 
repository](https://github.com/dlang/project-ideas) were 
outdated. He said it would be nice if we could add some new 
projects more in line with the priorities we'd been discussing. 
If we were accepted, it would be a good opportunity to have 
people working on projects important to us. He didn't know if the 
priority list had been ironed out yet, but suggested we could all 
propose projects we thought were important.

Steve said the stakeholders involved should look at the project 
list and ensure there were still things we wanted done. There had 
been a couple of cases in the past of students getting excited 
about a particular project and finding a mentor for it, only to 
learn it was outdated, or whatever.

As an example, he'd seen [a project listing for new Phobos 
support for text-based file formats like XML and 
YAML](https://github.com/dlang/project-ideas/pull/100). He didn't 
think that would be the right move, given the existing packages 
we had out there like [D-YAML in the DLang Community 
Hub](https://github.com/dlang-community/D-YAML). We had 
in-depth XML projects, too.

Walter said the fact that we had so many YAML and JSON 
implementations out there was kind of disturbing. We should have 
one for each that's good enough, and it should be in the standard 
library. Then people could improve them as needed. We should be 
looking at what's there already, like `std.json`, and figuring 
out if we wanted to fix it or scrap it. Given that Adam was 
working on the next iteration of Phobos, that would be something 
he should look into.

Razvan said there were multiple ideas in the repo and we didn't 
have time to discuss all of them here. He suggested we might have 
a separate planning meeting for it, but he just wanted to make 
sure we had an up-to-date list. People had been creating project 
ideas and we hadn't been curating the list in any way, mostly 
because we hadn't given anyone the authority to say "we don't 
want to pursue that" when someone submitted an idea. He thought 
the list looked abandoned, and that might have played a role in 
why we were rejected from GSoC the last couple of times.

I said that the point was that we were supposed to be putting the 
"blessed" projects in the root folder. Everything in the issue 
tracker was just whatever.

I also warned that we shouldn't speculate on why we were 
rejected. Google doesn't give reasons. It could have been flaws 
with our application, with our project list, just the luck of the 
draw, or anything. This time, I'd taken extra steps to make the 
application as rock solid as I could, but I had noticed on the 
application form a recommendation not to use unsorted issue 
trackers.

Our issue tracker had always been sorted via the labels, so I 
didn't know if that had played a role or not, but this time on 
the application form I'd pointed them to the source folder with 
specific selected projects rather than the tracker.

I'd also noticed that the document we'd been using to describe 
how to write an application wasn't even from our organization. 
Someone had linked to it in the past and we'd just been using it 
ever since. So I'd written up a new one specific to what we're 
looking for in an application and used that instead.

I linked in the chat [the list we'd come up 
with](https://gist.github.com/mdparker/db7e9dafd14d4b9632b6d5056f50d236) in a
recent planning session. I didn't know if there was anything in there that
would make a good GSOC project. I asked Mathias if there was anything he could
use some help on with dub.

Mathias said something for the registry would be great, as it 
needed a lot of love, like implementing OAuth or better 
interaction with GitHub.

Paul noted that the search feature on the dub registry had been 
in a bad state for a long time. It had gotten to the point that 
he recommended people go to Google or DuckDuckGo and do a site 
search instead of using the search box on the registry. People 
had proposed potential ways to improve it, but he thought that 
was a real low-hanging fruit.

I asked Mathias and Paul to submit issues to the ideas 
repository. Then I asked Iain if he needed any help with GDC. He 
said "always", but couldn't think of anything too specific.

Adam said a bigger project to think about that he and Walter had 
discussed a little would be a partially compacting GC. He said 
Vladimir Panteleev had been hitting some memory exhaustion issues 
on long-running projects. Adam had mentored a GC project for GSoC 
in the past, so maybe this kind of thing could work for GSoC 
again.



Razvan said a student was working on integrating DMD-as-a-library 
into dfmt as part of SAOC. He was almost finished with it. There 
was a problem, though, in that DMD was discarding non-Ddoc 
comments. Sometimes with dfmt, you didn't want to discard the 
comments.

There was another project to use DMD-as-a-library in D-Scanner. 
There, they'd worked on the replacement of libdparse 
incrementally. Then sometime last summer, there'd been a major 
refactoring of the D-Scanner upstream code. That made rebasing a 
nightmare. But they got it done and then wanted to use the latest 
version of DMD-as-a-library. It was then that Razvan noticed that 
`ASTBase` was untested anywhere in the compiler code base. Some 
fields had been removed and some functions moved, and so now they 
had all sorts of errors because of it.

He'd made a pull request to add a test to the test suite that 
makes sure the parser compiles with `ASTBase`. So it was good 
now. But all of this had led him to think about the future. dfmt 
and D-Scanner were going to use DMD-as-a-library. That was a done 
deal. But now we had to worry about not breaking those tools. So 
he'd been wondering what our policy was going to be regarding the 
interface the compiler offers.

Right now, we didn't have any projects using the semantic 
routines. They just used the file parser and `ASTBase`. That was 
fine, as those were easy to fix when some code was moved around 
in the compiler. But once people started using semantic routines 
and code in DMD that was often modified, then we'd end up with 
this interface problem.

He cited a specific example in D-Scanner that came up due to a 
change in DMD where a `bool` field in the expression AST node had 
been removed. This field had been used to indicate, for example, 
when logical operators were used in an `if` expression without 
parentheses so that D-Scanner could warn about it. But now with 
that gone, they had to re-lex the code to see if the parentheses 
were balanced.

He predicted that this kind of thing could generally turn out to 
be a problem. If fields were being deleted from AST nodes, maybe 
we could use a version for DMD-as-a-library to keep them outside 
of DMD proper. That might not be a good solution, though, as then 
we'd end up having an AST node with some fields that were 
generated for any kind of build and some only for 
DMD-as-a-library.

Walter said he didn't know why the parens field had been removed 
and asked if he'd been the one to do it. Razvan said yes, and 
thought Walter had done it to save space.

Walter said there were a couple of ways to do it. The parens 
field should probably be a bit flag rather than a separate bool 
to save space. But that wouldn't solve the problem generally.

He said to solve it generally, it might be a good idea for the 
AST nodes to have a pointer to a hash table. The hash table would 
store what we might call "optional" fields. That may be a more 
general solution. That way, DMD-as-a-library could add its own 
fields to the hash table and it wouldn't interfere with what the 
compiler was trying to do.
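Walter's "optional fields" idea could look something like the following sketch. The node and method names are hypothetical, not part of DMD; a D associative array is itself a single pointer, so an empty table costs the node only one word:

```d
// Hypothetical sketch: each AST node carries one side table for
// "optional" fields, so tools built on DMD-as-a-library can attach
// their own data without changing the node layout the compiler uses.
module optionalfields;

class ASTNode
{
    // Null until some tool actually stores side data on this node.
    private void*[string] optional;

    void setOptional(string key, void* value)
    {
        optional[key] = value;
    }

    void* getOptional(string key)
    {
        if (auto p = key in optional)
            return *p;
        return null;
    }
}

unittest
{
    auto n = new ASTNode;
    bool parens = true;
    // e.g. the removed bool field D-Scanner relied on could live here
    n.setOptional("parens", &parens);
    assert(*cast(bool*) n.getOptional("parens"));
}
```

As Razvan noted, this only addresses storage; any compiler logic that was deleted along with a field would still have to be reproduced on the library side.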

He said that was just a thought. Another might be just to resort 
to derived classes, but then what about other derived classes? 
That would lead to a branching of the AST tree, and Razvan had 
already been through that problem.

Razvan said the thing is that it wasn't just about the field. 
When you remove a field, then you probably also would remove 
whatever logic was associated with it. Then reproducing that in 
DMD-as-a-library was going to be more complicated.

He said this wasn't a big problem now, but it was something we 
needed to think about. People weren't using DMD-as-a-library much 
yet, but once they started depending on it, then we'd end up 
having problems with breaking people's tools from compiler 
changes.

He recalled that Martin had once suggested maintaining 
DMD-as-a-library as a fork of DMD, but Razvan thought that was 
going to be more difficult to maintain.

Walter said it would be impossible to maintain. It was bad enough 
that we already had three versions of the AST: two D versions 
and the C++ version. He'd love to get rid of the C++ version, but 
he knew that Martin and Iain used it.

Robert said there was something he'd spoken about with Walter and 
Iain at DConf last year and had been persistently pushing for at 
pretty much every meeting we had last year. He hoped that at some 
point all of these tools would be folded into DMD, and then DMD 
would become a compiler daemon where you send jobs to it, it 
compiles them, and whatever can be cached is cached.

He thought this year we needed to make the first step in that 
direction. We needed to decide that this was actually something 
we wanted. And if that were the case, then maybe these tools 
could be the first ones to get merged in, ultimately, or a subset 
of them. He thought it was decision time.

I said I thought we'd already decided to go in that direction and 
that the LSP server was now at the top of our list as one of the 
steps toward it.

Razvan said the way he envisioned it was that we'd still have the 
release compiler with, say, a design close to what we had now. 
Then you'd also have DMD-as-a-library and a separate project that 
used it to implement the LSP logic. He didn't know if the LSP 
logic should be integrated into DMD and didn't think we'd 
discussed it before.

Robert said we hadn't discussed the details about it yet. He felt 
that if users had to do anything else beyond installing DMD to 
also get the LSP server, dfmt, and D-Scanner, then we'd failed. 
He thought it all had to be bundled together as a release, 
whether it was built into the compiler binary or not. He strongly 
believed that the ultimate goal should be that anyone using D as 
a work tool should have the LSP there as soon as they open their 
editor if they've installed DMD. How that happened didn't matter 
much.

Walter agreed that dfmt, D-Scanner, and the LSP server should all 
be a matched set and should be included in the release. He'd 
recently had to use Dustmite and was thinking he'd have to get it 
from dub and rebuild it and all that, then he realized it was 
part of the compiler release. What a big difference that makes. 
It should be part of the release.

I reminded them that at the recent planning session, we'd agreed 
that Walter, Robert, Razvan, and hopefully Jan Jurzitza 
(Webfreak) would have a meeting to talk about the LSP server. I 
thought this would be a good discussion for that meeting. I was 
just waiting for Jan to get back to me saying if he was willing 
and able to join. I said I'd email everyone to set that up as 
soon as I heard from Jan. (UPDATE: That meeting happened the 
following Friday. I didn't participate, so I don't know what they 
discussed or decided.)

Martin said this all sounded like a discussion we'd had a few 
months before. He reiterated something he'd said then: he didn't 
want to see DMD's front end as it was right now, being augmented 
by every little field or whatever every little project using 
DMD-as-a-library needed. That was his main worry. So no talking 
about fields that were removed in one release and should be added 
back in because some little linter needed it.

What he could live with if we wanted to go the 
`version(DMD-as-a-library)` route was something like a generic 
field, like a `void*` pointer which could be used as an 
associative array for extra fields dynamically, but only 
something generic like that. If we were going to have 20 tools in 
the end depending on DMD-as-a-library and every tool had its own 
needs and extra fields and state, he didn't want to see the DMD 
repo full of little special cases. If we were to go the fork 
route, then we could do whatever we wanted.

Razvan said the point was that it was fine to modify the 
interface if you wanted to add fields or functions, but if you 
were deleting them the case could be made that you could just put 
them in `version(DMD-as-a-library)`. But he agreed that we 
shouldn't open the door for everyone using DMD-as-a-library to 
add a new field or something.

Martin said that sometimes when he did a bump to merge a new 
front end, there might be some things that had vanished in the 
meantime and which he just restored. He was pretty sure that Iain 
had this problem, too, from time to time. It would presumably be 
a similar case for tools that depend on DMD-as-a-library in the 
future. If they were syncing from upstream, they were going to 
notice some regressions or things that were still needed.

As an example, he said there were a couple of cases where some 
`extern(C++)` stuff had been converted to `extern(D)` under the 
assumption that it wouldn't be used in the C++ interface, then in 
some dark corner of LDC, there was a usage. So he just restored 
it to `extern(C++)`.

He said that in the case of fields that get removed, if that's a 
valid use case and you can argue for that use case, it should be 
pretty easy to restore them downstream. After noticing the 
problem, of course, which was its own problem. He wasn't sure if 
we were going to be able to CI test these dependent projects, for 
example.

Razvan said that we could, and D-Scanner was already tested in 
Buildkite. It was just that they were running a separate fork and 
not yet merged upstream, so that was why it wasn't caught.

Jonathan said it sounded like the core problem we were dealing 
with here was data structures that historically had been private 
to DMD, and it had been able to do what it wanted with them. But 
as soon as you made it a library, it was all in public. So 
whatever the process for handling it turned out to be, we had to 
take into account that DMD couldn't just treat that as completely 
private anymore. He couldn't say what the best approach to 
handling it would be, but it was core.

Walter said he didn't know which functions were used by GDC and 
LDC and which ones weren't. He requested that all the functions 
they used be marked `extern(C++)` so that he could tell them 
apart.

Razvan said that was the case right now. But some functions 
marked `extern(C++)` weren't actually used by either GDC or LDC. 
Walter said those should be removed, then, but he didn't know 
which ones they were.

Iain said he was pretty sure that every member function had an 
explicit `extern(D)` or `extern(C++)` already. Regarding changes, 
he said pretty much every week he gets the changelog entries and 
ends up updating no fewer than three files for changes in the 
front-end interface.

As for determining what's used and unused, he said that he was a 
bit unnerved by all the member functions Razvan had been moving 
into the global namespace. He was pretty sure that Martin was, 
too. So he'd had a look into how feasible it would be to move 
them into a namespace.

He'd found that the first version supporting C++ namespaces was 
2.083. Our current baseline was 2.079. No problem. We could bump 
it to 2.083. However, the first version that supported importing 
`extern(C++)` namespaces was 2.087 or 2.089 because of a compiler 
bug.

So Iain was thinking that we couldn't just put C++ namespaces 
inline where the function was defined, but maybe Razvan could 
make all the moved functions `extern(D)` and then have a leaf 
module, e.g., `dmd.cppapi`, containing `extern(C++)` forwarding 
functions for the functions that were moved. Then everything that 
was `extern(C++)` would be in one place instead of just scattered 
all over.
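Iain's suggestion might be sketched like this. The module name `dmd.cppapi` comes from the meeting; the function names and the split below are purely illustrative:

```d
// Hypothetical sketch of the forwarding-module idea: keep the moved
// free functions extern(D), and collect the extern(C++) shims in one
// leaf module so the whole C++ surface lives in a single place.

// --- dmd/somepass.d ---------------------------------------------------
module dmd.somepass;

// Moved out of a class into the global namespace; plain extern(D),
// so it never appears in the C++ mangling surface.
int computeSomething(int x)
{
    return x * 2;
}

// --- dmd/cppapi.d -----------------------------------------------------
module dmd.cppapi;

import dmd.somepass;

// The only extern(C++) symbols; GDC, LDC, and any C++ code call these
// thin forwarders instead of the extern(D) implementations directly.
extern (C++) int cpp_computeSomething(int x)
{
    return computeSomething(x);
}
```

One consequence Walter and Razvan liked: anything not forwarded from the leaf module is demonstrably unused by the C++ interface, which answers Walter's question about which functions could safely be changed.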

Walter said that sounded like a good idea. Razvan said it sounded 
great even for DMD-as-a-library. Walter said to make it happen.

As an aside, Iain said we should do more testing of older 
versions. We were currently testing the current baseline, 2.079, 
and then the latest. But he had found that though you could build 
DMD with 2.079, you couldn't build it with 2.080, 2.081, or 
2.082. You couldn't build it with 2.083 because there was some 
code accepted by 2.079 that was rejected by 2.083. So he was 
having to rewrite that code to make it compatible with 2.083.

He wasn't suggesting that we have a pipeline that tests 26 
different versions of DMD for every pull request, but we should 
test some of the versions that hit production. At least do some 
scattergun testing to make sure that we're still okay.



At this point, we'd covered all the prearranged agenda items. 
Before asking if anyone had anything else to cover, I took the 
first turn.

__Containers__

I told Paul and Steve that their names had come up in our 
planning session the week before. We had prioritized a list of 
tasks and projects that we'd put together sometime before, and 
containers had surprised everyone by bubbling to the top ([the 
Number 2 
spot](https://gist.github.com/mdparker/db7e9dafd14d4b9632b6d5056f50d236)).

Steve's name had come up because of his experience working on a 
container library. Robert had taken point on the new project. He 
was doing a DConf Online talk about containers and was going to 
get started on it sometime after that.

Robert said he had more of an academic approach to it at the 
moment. He was thinking of the cross-product of all possible 
attributes we'd want and exploring that. He was planning to reach 
out to Steve and Paul.

I told Paul that the reason he'd come into it was because of 
allocators. Allocators had ended up further down our priority 
list, but because they were closely tied to containers, we'd 
decided we should push them up. We'd heard he had been doing some 
work on or thinking about allocators.

Paul said he was working on a proof of concept for safe 
allocators using the system variables preview and the DIP1000 
preview features to handle the safety part. He had a design and 
most of a working proof of concept that he thought demonstrated 
that this could be done in a more or less reasonable way.

He said it wasn't yet at the point where it was ready to present 
to the world, but once it was, he'd be posting it on the 
newsgroup. He'd be happy at any point to discuss it with anyone 
interested.

Átila thanked Paul for the writeup he'd done about it. He was 
going to make the time to read it with care soon.

Steve said he'd be willing to provide input on the container 
project, but noted he hadn't written any container code since 
2012 or something. But he did have experience with it.

__Editions__

Átila said he'd like some more feedback on the email he'd sent 
about his draft proposal for editions. He hadn't seen many 
comments about it yet.

I asked if there was anyone who hadn't seen the email yet. Paul, 
Jonathan, and Adam raised their hands. I said I'd forward it to 
them.

Martin said he hadn't looked at it yet in detail, but one thing 
that had stood out to him was the example of removing the monitor 
field from D classes. He said that this showed some of the 
difficulties with such an approach. He was pretty sure the 
runtime would need to augment the type info of the classes with a 
bit or something, as we presumably needed that information, maybe 
for different functions, doing allocations, or locking.

Then there were template mixins. He said that with the edition 
applying to the scope of the declaring module rather than the 
instantiating module, he thought things might get funny or 
interesting when we think about template mixins. For example, 
what happens when you're mixing a static nested class into some 
other aggregate?
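Martin's template-mixin question can be made concrete with a small sketch. There is no edition syntax in D yet; the edition labels in the comments below are purely hypothetical:

```d
// Hypothetical illustration: a mixin template declared in a module
// under one edition, instantiated in a module under another.

// --- a.d (imagine: Edition 1) ------------------------------------------
module a;

mixin template Payload()
{
    // A static nested class declared here...
    static class Inner
    {
        int x;
    }
}

// --- b.d (imagine: Edition 2) ------------------------------------------
module b;

import a;

struct Aggregate
{
    // ...is mixed into an aggregate in a module with a different
    // edition. Which edition's rules govern Inner's semantics: the
    // declaring module's (a) or the instantiating module's (b)?
    mixin Payload!();
}
```

Átila's position in the meeting was that the declaring module's edition should win, since that is presumably what the template's author intended.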

He was just thinking that there were some major difficulties we 
probably were only going to see once we started implementing 
stuff like this and tried to support multiple behaviors in 
parallel. Thinking about it before was probably going to be 
extremely difficult.

Átila agreed. He said there may be things we wouldn't be able to 
do because they'd get too complicated. With templates, he wasn't 
sure, but he didn't see any other way of it working aside from it 
being the scope of the declaring module, because that would 
probably be what the author of the template intended.

Razvan thought there were some situations where there would be a 
clash and it wasn't really obvious. As an example, think of a 
function in file A from Edition 1, and a function in file B from 
Edition 2. Both functions have the same name, and in file C you 
want to be able to merge them into the same overload set. If 
Edition 2 has modified the overloading rules a bit, then which 
one are you going to choose?
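Razvan's clash can be written out concretely using D's existing alias-based overload-set merging. Again, the edition labels in the comments are hypothetical:

```d
// Hypothetical illustration: two modules from different editions
// contribute to one merged overload set in a third module.

// --- a.d (imagine: Edition 1) ---
module a;
void process(int x) { }

// --- b.d (imagine: Edition 2, with changed overloading rules) ---
module b;
void process(long x) { }

// --- c.d ---
module c;
import a, b;

// Module-scope alias declarations with the same name merge the
// two functions into a single overload set, as D allows today.
alias process = a.process;
alias process = b.process;

void demo()
{
    // If Edition 2 changed the overload-resolution rules, which
    // edition's rules decide this call?
    process(42);
}
```

This is the case Walter said would never work between editions, and why he expected editions to be restricted from touching rules like overload resolution.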

Átila said that was a good question, but he wasn't sure it would 
be a good idea to change the overloading rules.

Walter said it was a can of worms, but some people wanted to 
change them. He said it would never work between editions. We 
were going to be restricted with what we could do with editions 
because of things like that being incompatible.

Átila didn't think we could do everything, but it shouldn't stop 
us from doing what we could.

Jonathan said it should allow us to do more than we could 
currently do. Past a certain point, if you were changing way too 
much, you needed D3 regardless; that was really where you wanted 
to go. But being able to change more than we currently could 
would certainly be beneficial.

Átila said that as soon as this was finalized in the community, 
with the dialogue over and the final version merged, he wanted to 
immediately write another DIP for `@safe` by default for the next 
edition.

__Seattle D meetups__

Walter reminded us that he'd set up a local D club. It had turned 
out to garner more interest than he'd expected. Non-D users had 
been turning up as well, as it wasn't exclusively about D but 
also programming in general. They'd had seven people at the last 
meeting, and more had told him they were coming to the next one.

He encouraged the rest of us to do something like that if there 
was a community for it where we were. It didn't have to be a 
serious thing. He said that at one meeting, they'd met up at a 
movie theater and just watched a movie. Then they'd ended up 
hanging out and talking about programming until the manager 
kicked them out. Then they'd continued out in the parking lot. He 
said he couldn't have asked for a more fun evening.

Some people with startups had started turning up looking to 
recruit people. He was really happy with the way it had turned 
out and encouraged us again to give it a try.

Átila said he'd be there in April. Walter said he could plan the 
April meeting around Átila's visit. When people in the D 
community visited Seattle, he always tried to meet with them and 
do a walk and talk or something. But he'd love to be able to 
incorporate us into the meetups when possible.

He said it could all just eventually blow up and peter out, but 
he was going to enjoy it while it was running.




Our next monthly meeting was held on March 8, 2024.

And now the usual reminder: if you have anything you'd like to 
bring to us for discussion, please let me know. I'll get you into 
the earliest monthly meeting or planning session we can arrange. 
It's always better if you can attend yourself to participate in 
the discussion of your topic, but if that's not possible, I can 
still put the item on the agenda without your presence with a bit 
of preparation.