digitalmars.D - Use of IA for PR - my POV
- user1234 (14/14) Feb 09 For some reasons I monitor a full hand of programming languages
- Dejan Lekic (9/18) Feb 10 False claims from people who hide behind random nicknames are
- Serg Gini (13/15) Feb 10 We can only hope this is the case, but I would say we still need
- user1234 (5/20) Feb 10 Coincidence but I think there's another one since today:
- monkyyy (3/8) Feb 10 Is there a real example of this? I cant imagine how a template
- Serg Gini (4/6) Feb 10 Literally next PR )))
- Dejan Lekic (2/8) Feb 10 He will understand it, have faith.
- Vladimir Panteleev (141/144) Feb 10 I guess I could post a few thoughts about AI / LLMs here if
- user1234 (3/10) Feb 10 thanks much for that reply.
- matheus (24/30) Feb 10 Interesting, since I'm not using AI I'd like to know, in this
- Vladimir Panteleev (43/51) Feb 10 The main way I use LLMs is with Claude Code. Here's how it works:
- matheus (14/21) Feb 10 Well first of all thanks for sharing this info, It's interesting
- user1234 (8/60) Feb 10 I feel so old-fashioned when I read this. The worst is that I've
- monkyyy (7/13) Feb 10 I dont think ai will ever replace writing new abstractions. Its
- H. S. Teoh (18/33) Feb 10 To pull out a Walter quote:
- FeepingCreature (12/18) Feb 11 hi! yep, I use AI heavily. I'm a doomer as well fwiw, so I'm by
- Julian Fondren (23/26) Feb 10 Some of the very worst tech ever made has as its primary feature
- Lance Bachmeier (16/22) Feb 10 You *can* run LLMs locally, but it depends what you want it to do
- Paolo Invernizzi (15/20) Feb 11 I'm on hurry now, but same feeling here with Claude-code: Opus
- Kapendev (8/23) Feb 10 My code is 100% vibe coded, from start to finish.
- Guillaume Piolat (17/17) Feb 11 Personally I like using Cursor for tooling but only use them one
- monkyyy (8/10) Feb 11 That was fake and a marketing psyop. Gpt 2 level of text
For some reason I monitor a handful of programming language repos. This is interesting because I can compare D to the others. One tendency I have noticed recently in the D world is one guy who is very good with AI: CyberShadow. Already 5 or 6 PRs; he masters the tools. One problem I'd like to mention: even if AI helps for this or that change, you run the risk of not understanding the codebase anymore. After merging, say, 100 such PRs, the knowledge will be lost or costly to re-acquire. The paradox, however, is that human-written PRs are often just patches too. The compiler is a huge patchwork, even if there are many sub-programs (visit this, visit that). So you see, it's a bit the same. People come and go, and newcomers have to understand what the people who left did. This is what matters.
Feb 09
On Monday, 9 February 2026 at 21:25:02 UTC, user1234 wrote:
> For some reason I monitor a handful of programming language repos. [...] One problem I'd like to mention: even if AI helps for this or that change, you run the risk of not understanding the codebase anymore. After merging, say, 100 such PRs, the knowledge will be lost or costly to re-acquire.

False claims from people who hide behind random nicknames are typically completely ignored by the community, and rightfully so. CyberShadow is a well-respected member of this community who has been contributing to the D ecosystem for probably close to 20 years, if I remember correctly. He also happens to be the guy who wrote the very software you used to post your message, in D of course. Rest assured that IF he used AI for his PRs, he _DOES_ understand that code.
Feb 10
On Tuesday, 10 February 2026 at 12:43:49 UTC, Dejan Lekic wrote:
> Rest assured that IF he used AI for his PRs, he _DOES_ understand that code.

We can only hope that is the case, but I would say we still need to be careful. As a minimum policy, we should ask everyone (even long-time contributors) to state explicitly and clearly in the PR description that they used AI for the PR. The same should apply to Dub packages: I think it would be a responsible gesture towards the community for package authors to let their users know this. I have already identified some packages in the Dub registry that are 99% fully AI-generated. And as the DMD story with ai.d showed, nobody is able to hold the line against AI code...
Feb 10
On Tuesday, 10 February 2026 at 12:43:49 UTC, Dejan Lekic wrote:
> [...] False claims from people who hide behind random nicknames are typically completely ignored by the community.

Coincidence, but I think there's another one since today: https://github.com/dlang/dmd/pull/22550 I have seen a CyberShadow PR based on Claude getting closed. Curious about what will happen with this one.
Feb 10
On Tuesday, 10 February 2026 at 14:14:00 UTC, user1234 wrote:
> https://github.com/dlang/dmd/pull/22550 (ai slop)
> - Ambiguous template instantiation without parens crashes compiler

Is there a real example of this? I can't imagine how a template instantiation can be ambiguous.
Feb 10
On Tuesday, 10 February 2026 at 12:43:49 UTC, Dejan Lekic wrote:
> Rest assured that IF he used AI for his PRs, he _DOES_ understand that code.

Literally the next PR ))) "What do you think about this one? I don't fully understand it, but it's green and the diff is small 😛"
Feb 10
On Tuesday, 10 February 2026 at 14:28:46 UTC, Serg Gini wrote:
> Literally the next PR ))) "What do you think about this one? I don't fully understand it, but it's green and the diff is small 😛"

He will understand it, have faith.
Feb 10
On Monday, 9 February 2026 at 21:25:02 UTC, user1234 wrote:
> One tendency I have noticed recently in the D world is one guy who is very good with AI: CyberShadow. Already 5 or 6 PRs; he masters the tools.

I guess I could post a few thoughts about AI / LLMs here if people are interested.

My interest in offloading menial work onto machines is not new. DustMite was my first big project in that vein - you define an oracle, then drop your xMLOC codebase on it and go enjoy your weekend. Then came Digger, to help automate bisecting regressions with our multi-repo setup. LLMs are kind of in that vein, if applied properly.

The LLMs themselves were pretty much useless toys for a long time when it came to writing code, and the vast majority of them still are. I think even what you get today on e.g. chatgpt.com is going to be underwhelming from many perspectives. However, it does seem like there was a huge jump last year with Opus 4.5.

I've been experimenting with LLMs generating D and other code throughout the last year. Just last August I was playing around with the best model at the time - the results were, frankly, depressing. I think I spent $200 in tokens for a development process that I could have done myself much faster, prettier, more correctly, etc. At that point, it was clear that there was nothing to be gained from agentic coding, at least for what I was doing.

Then, Opus 4.5 came out. I'm not sure if it really was an objectively major breakthrough in capabilities, or if it merely crossed some threshold that would qualify it as such, or if that was just my perception, but for me it was the first model that seemed actually... useful.

- It could write non-trivial code - entire multi-module programs - that actually worked on the first try.
- It no longer regularly made stupid mistakes that no human would ever make.
- It wrote correct D! Without hallucinating features!

But what shocked me the most:

- It knew about my personal D libraries! Somehow, my personal ae D library was in its training data set, with sufficient coverage that it could even use some parts of it blind!
- Sometimes, it even wrote better D code than me! It would use patterns that I was not aware of or had not thought of!

To me, this was mind-blowing, and it turned my whole programming life upside down. Since then, I decided to do an experiment: could I just use it for everything? Just stop writing code, and have it do the code writing? Would this make me more productive or less? Would I eventually forget how to code? Would I eventually get buried under the big pile of slop and broken code that I don't understand? I didn't know the answers, but I was really fascinated to try and find out. So I got a $200/mo Claude Code Max subscription and set out with that self-imposed constraint.

Here's what I can tell you so far:

- I'm probably not as good at hammering out code with a keyboard as I was last year. But I do feel like I'm now much better at code review, multitasking, and task switching. Since the whole idea of AI is to make the bot work on your behalf, you can multi-track several projects (or multiple aspects of a project), or simply enjoy your hobby while checking in on the bot every half hour. Like with DustMite, if you use AI but then stare at the screen while it's working, you're Doing It Wrong.
- The bot obviously still makes mistakes. But the mistakes it makes are different from the kind of mistakes a human would make: no typos, no copy-paste errors, no "I forgot to add this one line", no "I used the variable `foo` from the argument list instead of the variable `foo` from the local scope".
- On the other hand, the bot is terrible at designing. The APIs are bad, the patterns are bad, the structure is bad. You still need to think ahead about what you want to build and what shape each part should be in. However, you now have a lot more cognitive bandwidth to focus on this exclusively.
- Obviously you do need to read and understand the code it writes...
- ...unless it's for one-off throwaway scripts, which are now really, really easy to produce! You can script anything easily with zero investment, which is a big help sometimes.
- Other things it's good at:
  - Bug hunting - drop a test case on it, come back half an hour later, and it will likely have found the root cause (and maybe even an initial patch for it).
  - Code research - have a technical question about a project and want a precise answer? `git clone` the GitHub repo, run the agent, and ask your question - you'll get an answer with citations to exact line numbers.
  - Speculative refactoring - have a complicated code base, but don't want to invest the time in a refactoring that may make the code simpler or may make it an even bigger mess? The bots are very good at mechanical code transformations, so you can give it 10 refactoring ideas and just leave it overnight.
  - Writing test cases, but everyone knows this one already.
- In terms of getting things done, I do find myself to be a lot more productive! Certainly not in terms of time, but definitely in terms of creative energy. I've picked up and even wrapped up a lot of projects from my backlog. I do miss some bugs in review (or sometimes am just too lazy to review the code), so the output quality is maybe not the same as what I would have churned out by hand, but I'll definitely take a 20% quality hit for a 500% productivity gain.

On contributing to D: so far I think I've used Claude to write patches for Phobos, Druntime, and DMD. In order:

- Phobos: for me these are very easy to review. I'm confident in their quality, so it's just a time saver.
- Druntime: these have been mainly translations of C headers to D. LLMs are good at these, so the main thing to watch out for is that the translation follows our conventions.

And then there's the compiler, DMD. So, here's the thing. Maybe my perspective is off, but my point of view is: in order to understand and be able to meaningfully review patches to the compiler, you must be a compiler developer. And you do not simply become a compiler developer. As much as I wish I could understand and help out with all parts of D, I need to pick my battles. In my mind, D compiler hackers are the most elite of the elite D developers. I bow to them and plead for their mercy as they consider my bug reports and patches.

This puts me in a difficult situation every time I run into a blocking compiler bug. I could:

1. Reduce the bug to a test case, file an issue, and watch as likely absolutely nothing happens for years (blocker or not, regression or not). Understandable, since compiler bugs are hard, compiler development is hard, and nothing in life is free.
2. Give up on some of my personal projects and invest in becoming a D compiler developer instead.
3. Tuck my tail and try to work around it in my code base, giving up on my perfect envisioned design.
4. [NEW!] Ask the bot to draft a patch, which it often ultimately succeeds at doing (at least to the point of getting the test suite to pass). Now, instead of filing a bug report, I can file a bug report with a machine-generated patch attached (in the form of a pull request), which might be total garbage - but at least it starts a discussion! I'm not sure how the compiler hackers feel about this, though. I've always tried to be up-front about the provenance of the patches, and so far I have not been asked to stop.

What should we do about this? Should we use it more? Should we use it less? I don't know. There are definitely valid reasons - ethical, practical, legal, financial - to avoid using it, as have been mentioned here and elsewhere. But it also seems genuinely useful in at least some situations, and no one knows what the future holds.

Anyway, one point I wanted to make is that if we're talking about the quality/implications/etc. of AI, we should definitely be clear about which specific model we're talking about. Things have been improving very rapidly, and there's a lot of variation in what you might have experienced recently or even today.
Feb 10
On Tuesday, 10 February 2026 at 16:14:03 UTC, Vladimir Panteleev wrote:
> I guess I could post a few thoughts about AI / LLMs here if people are interested. [...]

Thanks much for that reply.
Feb 10
On Tuesday, 10 February 2026 at 16:14:03 UTC, Vladimir Panteleev wrote:
> I guess I could post a few thoughts about AI / LLMs here if people are interested. [...]

Interesting. Since I'm not using AI, I'd like to know: in this case, do you have an LLM running locally, point it at the D source folder, and it learns from that code base and does everything from there?

I think this would be a nice topic/video to make to attract people, since D has been short on content lately. It could also show how you're doing PRs at the moment; maybe even guys like me, who would have to dig deeper to help, would try it.

*** Now on the subject ***

I remember a DConf talk with Scott Meyers about the language that would kill "C", and I'm starting to think that AI could kill any language. I read a piece about this in IEEE Spectrum, and just as an example: where I work (one of the biggest care providers in my country), they showed us a port of a module written over the past 20 years into a new language, done with AI in just a couple of hours; it was modernized and everything else, all through AI.

So I wonder: programming languages usually have restrictions to ensure bad code doesn't mess things up, but if AI keeps getting better and learns how to avoid bad code, what's the point of having all these languages? Or, in fact, could AI write a better programming language by itself?

Matheus.
Feb 10
On Tuesday, 10 February 2026 at 17:38:40 UTC, matheus wrote:
> Interesting. Since I'm not using AI, I'd like to know: in this case, do you have an LLM running locally, point it at the D source folder, and it learns from that code base and does everything from there?

The main way I use LLMs is with Claude Code. Here's how it works:

1. You open the directory with your project in a terminal.
2. You run `claude`.
3. This opens a TUI that looks like a chat interface. You type your question or a request for what you want the bot to do.
4. The bot looks at your code. If it's too big to fit into its context (a limited window of how much it can see at a time), it will search for just the relevant bits.
5. If the problem is big, it will first write a plan for how it aims to accomplish its goal, for you to read and approve.
6. It does the thing. It can edit files and run commands in order to run your test suite (or at least check that the code compiles). By default it will ask before every edit or command. Many people run it in a sandbox and disable the prompts, so that it can work by itself but still doesn't accidentally delete your entire computer.
7. Sometimes the bot will automatically write down what it has learned in a memory file. It will read this file automatically the next time you ask it to do something in that project.

There isn't really a lot of "learning" other than something like this.

Before and aside from that, I also have a spare GPU which I use to run an autocomplete model. It's nice when writing code by hand. For that I use https://github.com/CyberShadow/company-llama + llama.cpp.

> So I wonder: programming languages usually have restrictions to ensure bad code doesn't mess things up, but if AI keeps getting better and learns how to avoid bad code, what's the point of having all these languages? Or, in fact, could AI write a better programming language by itself?

Agentic coding actually works better the stricter the language! This is because the compiler can then check whether the code is correct immediately, and if it isn't, the agent sees the error right away and can fix it before stopping. So I think we will see strictly typed languages, and languages with built-in theorem proving, become more popular. These are often too frustrating or time-consuming for humans to use for everyday programming, but that doesn't matter when the code is being written by AI.

I am seeing this too with testing and Nix. Writing integration tests with Nix is usually a lot of work, but once they are written, they are rock-solid proof that your thing works, and everyone can verify that it works. You can even script entire VMs that run any software for integration tests, and these VM tests run without any problems on any Linux machine, including GitHub Actions. So I've since been adding Nix-based integration tests to all my projects (including this forum, which now has Nix/Playwright-based end-to-end tests).
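As a small illustration of that compile-and-check loop (the function and its tests below are invented for the example, not code from the thread), D's built-in unittest blocks give an agent a single command that both compiles and verifies a change:

```d
// Hypothetical example of the feedback loop an agent relies on:
//   dmd -unittest -main -run even.d
// compiles and runs the checks in one step, so a wrong edit surfaces as a
// compile error or a failed assert immediately.
module even;

size_t countEven(const int[] xs) pure nothrow @safe
{
    size_t n;
    foreach (x; xs)
        if (x % 2 == 0)
            ++n;
    return n;
}

unittest
{
    int[] empty;
    assert(countEven([1, 2, 3, 4]) == 2);
    assert(countEven(empty) == 0);
}
```

The stricter the language, the more of an agent's mistakes this single step catches before a human ever looks at the diff.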
Feb 10
On Tuesday, 10 February 2026 at 17:54:29 UTC, Vladimir Panteleev wrote:
> The main way I use LLMs is with Claude Code. Here's how it works: [...]

Well, first of all, thanks for sharing this info. It's interesting, and some of it is the way I had imagined. I read somewhere that Stack Exchange programming questions are down to only 22% of what they were 2 years ago [1], so I think development with LLMs will be the norm now. I tried a bit of AI some time ago and it wasn't a very pleasant experience; in fact, it would have been faster to write the code myself than to fix the AI's terrible code. But from what people are saying these days, it has evolved a lot. Finally, and again, thanks for sharing how you are using it.

Matheus.

[1] https://spectrum.ieee.org/top-programming-languages-2025
Feb 10
On Tuesday, 10 February 2026 at 17:54:29 UTC, Vladimir Panteleev wrote:
> The main way I use LLMs is with Claude Code. Here's how it works: [...]

I feel so old-fashioned when I read this. The worst part is that I was warned, back in pre-Covid times (so 2019), while chatting, that this would be the next big thing. You seem to have the method down. I've also heard recently that another notable D user, "Feep", is heavily using AI agents. I'm so done.
Feb 10
On Tuesday, 10 February 2026 at 20:16:06 UTC, user1234 wrote:
> I feel so old-fashioned when I read this. [...] I'm so done.

I don't think AI will ever replace writing new abstractions. It's like an OO IDE writing 30 lines of getters and setters instantly. That's great, such a marvel of engineering... except it shouldn't have been a problem to solve in the first place. 99% of lines of code will be AI. But what does that actually mean?
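For what it's worth, D itself can already make that particular kind of boilerplate vanish with a few lines of compile-time code generation. A rough sketch, with invented names, just to illustrate the "it shouldn't have been a problem to solve" point:

```d
// Rough sketch: generate trivial getters/setters at compile time with a mixin
// template, so the 30 lines an IDE (or an LLM) would emit never exist at all.
// (The Accessors/Point names are invented for this example.)
// Check with: dmd -unittest -main -run accessors.d
mixin template Accessors(T, string name)
{
    mixin("private T _" ~ name ~ ";" ~
          " @property T " ~ name ~ "() const { return _" ~ name ~ "; }" ~
          " @property void " ~ name ~ "(T v) { _" ~ name ~ " = v; }");
}

struct Point
{
    mixin Accessors!(int, "x");
    mixin Accessors!(int, "y");
}

unittest
{
    Point p;
    p.x = 3; // generated setter
    p.y = 4;
    assert(p.x == 3 && p.y == 4); // generated getters
}
```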
Feb 10
On Tue, Feb 10, 2026 at 09:06:13PM +0000, monkyyy via Digitalmars-d wrote:
> I don't think AI will ever replace writing new abstractions. [...] 99% of lines of code will be AI. But what does that actually mean?

To pull out a Walter quote:

> I've been around long enough to have seen an endless parade of magic new techniques du jour, most of which purport to remove the necessity of thought about your programming problem. In the end they wind up contributing one or two pieces to the collective wisdom, and fade away in the rearview mirror. -- Walter Bright

IMNSHO, if something can be automatically generated by AI, then it's not worth writing and shouldn't have been necessary to write in the first place. What's worth writing is what requires actual thought: actually solving a problem, not just regurgitating past solutions to previously-solved problems. If AI can automate those parts away, so much the better, I say. Spare me the tedium; give me more time and energy to focus on what actually needs solving.

T

--
What's at the bottom of the Bermuda Triangle? A wreck tangle.
Feb 10
On Tuesday, 10 February 2026 at 20:16:06 UTC, user1234 wrote:
> I've also heard recently that another notable D user, "Feep", is heavily using AI agents. I'm so done.

Hi! Yep, I use AI heavily. I'm a doomer as well, FWIW, so I'm by no means uncritical about AI. But programming has *always* been the art of getting as much done as possible with the least possible effort. AI simply continues the trend.

FWIW, despite using AI very heavily in private, my D code at work is still largely written manually. The more local context and insight are required, the more current AI struggles. Maybe next year we'll have local online learning... :)

(Also, yes, it's a trip that Claude knows boilerplate/serialized. The case for open-sourcing internal corporate code has never been stronger.)
Feb 11
On Tuesday, 10 February 2026 at 17:38:40 UTC, matheus wrote:
> I remember a DConf talk with Scott Meyers about the language that would kill "C", and I'm starting to think that AI could kill any language.

Some of the very worst tech ever made has as its primary feature that it doesn't require a programmer, and invariably the result is so difficult that you still need a programmer - but now your programmer doesn't have any of the tooling or standards or documentation that programmers expect. Google has a SOAR solution, for example, with a log-parsing configuration language that is a Lovecraftian, sanity-threatening horror to contemplate, and everywhere in admin IT there are enormous "configurations" with inscrutable mutating state.

With even the worst desktop AI, like a two-year-old model running in ollama, there's no longer any plausible reason for any of this garbage to exist. You can't tell a fable anymore about how domain experts can "just" edit some JSON instead of touching code, because AI would let anyone work with a configuration library in a standard programming language, and it can do so more easily, faster, and more reliably than it can help with these NIH DSLs.

For this reason I'm mostly optimistic about AI killing languages. Like a fever, or a mild poison, it'll kill this stuff first. That it makes legacy code easier to escape is a good thing for better languages, and it doesn't seem to have been bad enough at less-popular languages to provoke a Python rewrite of the dlang forums.
Feb 10
On Tuesday, 10 February 2026 at 17:38:40 UTC, matheus wrote:
> Interesting. Since I'm not using AI, I'd like to know: in this case, do you have an LLM running locally, point it at the D source folder, and it learns from that code base and does everything from there?

You *can* run LLMs locally, but it depends what you want them to do for you. I use llama with qwen3-coder-next on my desktop CPU. It works fine for asking what a function does, what some code does, for writing short functions, or for documenting short functions. You no longer need a GPU for anything like that - just a free model and a computer with enough RAM. If you want to do anything that requires thinking or a lot of context, AI services like Claude or Gemini are still your best bet, or you'd better have an expensive computer sitting around doing nothing.

> They showed us a port of a module written over the past 20 years into a new language, done with AI in just a couple of hours; it was modernized and everything else, all through AI.

If you want to port C code to D, that's to a large extent already solved. I can convert C header files quickly using ImportC, plus Gemini for cleaning up the edge cases. I'd be very cautious about "modernizing" C code with an LLM, though. You'd better write an extensive test suite before you try that.
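For readers who haven't tried it: ImportC means dmd can compile a C file directly and let D import its declarations, so header conversion often needs little or no hand translation. A minimal sketch, where mylib.h and the mylib_* functions are invented for illustration:

```d
// app.d - hypothetical ImportC usage.
// mylib.c contains only:  #include "mylib.h"
// Build with:             dmd app.d mylib.c
// dmd compiles mylib.c itself and exposes its declarations to D as the
// module `mylib`, so no hand-written bindings are needed.
import mylib;

void main()
{
    mylib_init();                     // C functions are callable directly
    auto h = mylib_open("data.bin");  // string literals convert to const(char)*
    scope (exit) mylib_close(h);
}
```

An LLM pass afterwards mainly helps with the edge cases ImportC can't express cleanly (macro-heavy headers, for example).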
Feb 10
On Tuesday, 10 February 2026 at 16:14:03 UTC, Vladimir Panteleev wrote:
> I guess I could post a few thoughts about AI / LLMs here if people are interested. [...]

I'm in a hurry now, but same feeling here with Claude Code: Opus 4.5 changed everything, and it's really good at writing D code. We've curated its configuration a lot (for example, pointing it towards design-by-introspection, compile-time features, and so on), and it's doing great! I think compile-time checking is a big plus, as the agent can iterate fast, catching errors directly by trying to compile (that's why it also works very well in Elm!). It also improved a lot when we added something like: after having coded a library implementation, spawn an agent and tell it to build something with the library, then look at how it performed and improve the points that were confusing for it.

/P
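For anyone unfamiliar with the term: design-by-introspection means having generic code inspect a type's capabilities at compile time and adapt to what it finds, which also gives an agent immediate compile errors when it guesses wrong. A minimal sketch with invented types:

```d
// Minimal design-by-introspection sketch: save() adapts at compile time to
// whatever the store type actually offers. (The Store types and save() are
// invented for this example.)
import std.stdio : writeln;

void save(Store)(ref Store s)
{
    static if (__traits(hasMember, Store, "flush"))
        s.flush();                              // fast path when available
    else
        writeln("no flush(); nothing to do");   // graceful fallback otherwise
}

struct BufferedStore
{
    void flush() { writeln("flushed"); }
}

struct PlainStore {}

void main()
{
    BufferedStore b;
    PlainStore p;
    save(b); // prints "flushed"
    save(p); // prints the fallback message
}
```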
Feb 11
On Monday, 9 February 2026 at 21:25:02 UTC, user1234 wrote:For some reasons I monitor a full hand of programming languages repos.This is interesting because I can compare to others. One tendency I have noticed recently in the D world is one guy that is very good with AI. Cybershadow. Already 5 or 6 PR, he masters the tools. One problem I'd like to mention is that if for "this" or "that" AI may help, you have the risk of not understanding the codebase anymore, after merging 100 PR, let's say, the knowledge will be lost or costly to re-acquire. The paradox is however that often human-generated PR are just patches. The compiler is a huge patchwork, even if there's many sub programs ( visit this or that ). So you see it's a bit the same. People come and go, newcomers have to understand what people who left did. This is what matters.My code is 100% vibe coded, from start to finish. It's amazing really. I can watch an episode of The Walking Dead on one of my 4K screens and have Gemini CLI code for me on one of my 1080p screens. I think nobody noticed here that I was doing that. Well, I am sharing it now with you people. I am enjoying the gift that is AI for just $2.99!?!!
Feb 10
Personally I like using Cursor for tooling, but I only use it one month out of two, just in case.

Some non-trivial downsides with AI, from what I've heard:

- It gives you a good feeling of going faster; however, 96% of people don't trust the code, yet only 48% check it, which means the tools encourage you not to read too much of what they generate. At least when you write some code yourself, there is a baseline level of caring about it.
- As you can go faster, you also create more stuff to keep busy with, and it can be more intense.
- Working without it starts to feel a bit raw and "new", as if you had forgotten how to walk.
- If LLMs are left to their own devices, as happens in autonomous agent communities, they end up doing a whole lot of nothing, exchanging a-ha moments and manifestos rather than taking action. They also don't tend to test what they do, since they don't care about reputation. LLMs are very gullible.
Feb 11
On Wednesday, 11 February 2026 at 21:47:16 UTC, Guillaume Piolat wrote:
> If LLMs are left to their own devices, as happens in autonomous agent communities, [...]

That was fake and a marketing psyop. GPT-2 levels of text processing, with prompts like "pretend to be a sci-fi AI gaining sentience" and "pretend to be planning to kill humanity", and then the crypto scam bot farms did a quick rewrite of their tools to use the API and it was flooded with that. Everyone got what they wanted to see.
Feb 11








