* no longer any pressure to contribute upstream
* no longer any need to use a library at all
* Verbose, LLM-generated PRs that amount to resume padding
* Bogus issues filed by unsophisticated users on the strength of LLM-detection tools
Overall, we've lost the open-source library as the single meeting place where everyone gathers to build a better commons. That part is true. It will be interesting to see what follows from this.
I know that for very many small tools, I much prefer to just "write my own" (read: have Claude Code write me something). A friend showed me a worktree manager project on GitHub and instead of learning to use it, I just had Claude Code create one that was highly idiosyncratic to my needs. Iterative fuzzy search, single keybinding nav, and so on. These kinds of things have low ongoing maintenance, and when I want a change I don't need to consult anyone or anything like that.
But we're not at the point where I'd like to run my own Linux-compatible kernel or where I'd even think of writing a Ghostty. So perhaps what's happened is that the baseline for an open-source project being worthwhile to others has increased.
For the moment, for a lot of small ones, I much prefer their feature list and README to their code. Amusing inversion.
AI coding is kind of similar. You tell it what you want and it just sort of pukes it out. You run it then forget about it for the most part.
I think AI coding is kind of going to hit a ceiling, maybe idk, but it'll become an essential part of "getting stuff done quickly".
As someone who works on medical device software, I see this as a huge plus (maybe a con for FOSS specifically, but a net win overall).
I'm a big proponent of the Go-ism "A little copying is better than a little dependency". Maybe we need a new proverb: "A little generated code is better than a little dependency". Fewer dependencies = smaller cybersecurity burden, smaller regulatory burden, and more.
Now, obviously forgoing libsodium or something for generated code is a bad idea, but 90%+ of npm packages could probably go.
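To make the proverb concrete, here's a sketch of what "a little generated code" looks like for the kind of one-function npm package the comment has in mind. `leftPad` is a hypothetical local stand-in, not the real npm package; modern JS has had `String.prototype.padStart` built in since ES2017, so the dependency collapses to a one-line wrapper you own outright.

```javascript
// Hypothetical local replacement for a one-function npm dependency
// (left-pad-style). No install, no supply chain, no update churn.
function leftPad(value, length, padChar = " ") {
  return String(value).padStart(length, padChar);
}

console.log(leftPad(7, 3, "0")); // "007"
```

Whether generated or hand-written, a few lines like this carry no registry risk at all, which is the point of the proverb.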
I feel npm gets held to an unreasonable standard. The fact is that tons of beginners across the world publish packages to it. Some projects publish lots of packages that only make sense for those projects but are public anyway, and then there is the core set of packages that most orgs actually use.
It seems unfair to me that it's always held up as the "problematic registry". When you have a single registry for the most popular, and arguably most used, language in the world, you're going to see a massive volume of all kinds of packages. It doesn't mean 90% of npm is useless.
FWIW I find most PyPI packages worthless and fairly low quality, but no one seems to want to bring that up all the time.
Compare this to the Java ecosystem, where a typical project will pull in an order of magnitude fewer packages, from vendors you can mostly trust.
So in many senses AI is democratising open-source.
Many projects require a great deal of bureaucracy, hoop-jumping, and sheer dogged persistence to get changes merged. It shouldn't be surprising if some are finding it easier to just vibe-customize their own private forks as they see fit, both skipping that whole mess and allowing for modifications that would never have been approved in mainline anyway.
The AI-forgery attacks are highly polished, complete with forged user photos and fake social networking pages.
The legitimate code contributions are from people who have near-zero followers and no obvious track record.
This is topsy-turvy yet good news for open source because it focuses the work on the actual code, and many more people can learn how to contribute.
So long as code is good enough to get in the right ballpark for a PR, then I'm fine cleaning the work up a bit by hand then merging. IMHO this is a great leap forward for delivering better projects.
I was fine with my work being a gift to all of humanity equally, but I did not consent to it being a gift to a for-profit company that I'm not personally benefiting from, one that won't even follow the spirit of the open-source license.
If AI doesn't have to follow the GPL, then I'm not going to create GPL code.
Knowing how to write a database could once make one fabulously rich. Now the person who knows how to make and promote a simple CRUD app backed by MySQL becomes the rich one, while the DB people beg for donations.
Linux killed Sun/Solaris and SGI Irix
Developers have voluntarily moved further down the value chain - now describing themselves primarily as business liaisons who can translate requirements into code. All the computer whispering necessary to do this is freely available and digestible.
LLMs are just the expected endpoint of this.
I’m convinced that GitHub and GitLab will eventually stop offering their services for free if the flood of low-quality, "vibe-coded" projects—complete with lengthy but shallow documentation—continues to grow at the current rate.
The trend of rewriting existing programs ("vibe-coding" a rewrite of $PROG in Rust, for example) threatens to undermine important, battle-tested projects like SQLite. As I described in this comment: https://news.ycombinator.com/item?id=46821246.
I’m quite sure developers will increasingly close-source their work and black-box everything they possibly can. After all, source code that cannot be seen cannot be so easily "rewritten" by vibe-coders.
I'm also not particularly fond of the other extreme of toxic positivity, where any problem is just a challenge that everybody is excited to take on.
One seems to understate the level of agency people have, and the other seems to overstate it.
The world is changing. Adapting does seem to be the rational approach.
I don't think Open Source is being killed but it does need to manage the current situation in a way that provides the best outcome.
I have been thinking that there may be merit in AI branches or forks. Open source projects direct any AI produced PRs to the AI branch. Maintainers of that branch curate the changes to send upstream. The maintainers of the original branch need not take an active involvement in the AI branch. If the AI branch is inadequately maintained or curated, then upstream simply receives no patches. In a sense it creates an opportunity for people who want to contribute. It produces a new area where people can compartmentalise their involvement without disrupting the wider project. This would lower the barrier of entry to productively supporting an open source project.
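A minimal sketch of what that flow could look like in plain git, assuming hypothetical branch names `main` and `ai-staging` (nothing here comes from any existing project):

```shell
set -e
git init -q -b main demo && cd demo
git config user.email "curator@example.com"
git config user.name "Curator"
git commit -q --allow-empty -m "initial"

# AI-produced PRs land on a dedicated branch, never on main.
git branch ai-staging
git checkout -q ai-staging
echo "ai change" > feature.txt
git add feature.txt && git commit -q -m "AI-produced change"

# A curator of the AI branch forwards only vetted commits upstream;
# main's maintainers never have to look at ai-staging directly.
git checkout -q main
git cherry-pick -x ai-staging
```

The point is the compartmentalisation described above: if `ai-staging` goes uncurated, `main` simply never cherry-picks from it and receives no patches.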
I doubt the benefit of resume padding will persist long in an AI world. By the very nature of their act, they are showing that what they claim to do is unremarkable.
I do think that SDKs and utility-focused libraries are going to mostly go away, though, and that's less flashy but does have interesting implications imo.
Another article written by someone who doesn't actually use AI. Claude will literally search "XYZ library 2025" to find libraries. That is essentially equivalent to how it's always worked. It's not just what is in the dataset.
I'm fairly sure you made a typo, but considering the context, it's a pretty funny typo and would kind of demonstrate the point parent was trying to make :)
I agree with you overall though; the CLI agents of today don't really suffer from that issue. How good the model is at using tools and understanding what it's doing is much more important than what specific APIs it remembers from the training data.
> The LLM will not interact with the developers of a library or tool, nor submit usable bug reports, or be aware of any potential issues no matter how well-documented.
Aren't these interactions responsible for the claimed burnout suffered by open-source maintainers? If you want interaction then, I don't know, go to a conference? Again, I don't get the issue. Seems like a good thing! Users are able to find answers and solutions to their questions more efficiently, all the while still using the open-source library. The usage chart is still seeing tremendous growth! Developers are still using the library to solve their problems. It seems like exactly what open source was intended for.
The issue to me is that the incentives for investing in open source have changed for some maintainers, in such a way that they're no longer aligned with the return on their investment. Maybe there are fewer people interacting with them, and so fewer people to discover how "great" they are. Maybe fewer eyeballs on their resume. The point is, open source was a means to an end. And, so, frankly, I don't give a shit.
LLMs are making open-source technology accessible to more people and that's a good thing.
The small libraries will be eliminated as a viable option for production use, but that's a good thing. They are a supply-chain risk, one significantly amplified in the LLM age.
It may happen, and it would be great if it did: open training datasets could replace those libraries, recalibrating LLM output and shifting it from legacy to more modern approaches, as well as teaching how to achieve certain things.
Just because some things suck, for now, doesn't mean open source is being killed. It means software development is changing. It'll be harder to distinguish a good-faith, quality effort that meets all the expectations of quality control from the rest without sifting through more contributions.
Anonymous participation will decrease, communities will have to create a minimal hierarchy of curation, and the web of trust built up in these communities will have to become more pragmatic. The relationships and the tools already exist, it's just the shape of the culture that results in good FOSS that will have to update and adapt to the technology.
I work a lot with quants (who can program but are more focused on making money than on clean code), and Opus 4.5 and Kimi 2.5 are extremely good at giving them architecture guidance. They tend to overcomplicate some things, but the result is usually miles better than what they produced without LLMs.
their LLM "assisted" work seems to be roughly the same quality (i.e. bad), but now there's much more of it
not an improvement
And while banning AI outright is certainly an option at a private company, it also feels like throwing out the baby with the bath water. So we’re all searching for a solution together, I think.
There was a time (decades ago) when projects didn’t need to use pull requests. As the pool of contributors grew, new tools were discovered and applied and made FOSS (and private dev) a better experience overall. This feels like a similar situation.
The internet is worse off.
The sports I participate in got cheaper to start with and are worse. The culture's worse.
What has gotten better because the barrier to entry is lower?
There are also a lot of open-source projects that are simply one-man shows. LLMs should be massively helping those, and I really don't see that so far.
I would say they should be a massive gain to the open-source community because, let's face it, the people who do open source are simply going to be different from the people who just feed on it.
LLMs should be a massive enabler for open source. They should permit easy porting between architectures, programming languages, and interfaces to a degree that simply wasn't possible before.
Again, I'm not really seeing that.
But the general-purpose machinery, the substrate we work on? That's hugely open source today, and will gladly accept and make use of the platform innovation you can offer up.
The authors talk about it being harder to get traction. And that's both true because of LLMs and has also been the case for a while now. There's so much open source already, so many great tools, that it takes real effort and distinction to stand out and call attention to yourself.